\section{Introduction}
Members of our group have been involved in long-term studies of abundances in
Galactic halo stars.
These studies have been designed to address a number of important
issues, including: the synthesis mechanisms of
the heavy,
specifically, neutron capture ($n$-capture)
elements, early in the history of the
Galaxy; the identities of the earliest stellar generations,
the progenitors of the
halo stars; the site or sites for the synthesis of the
rapid $n$-capture ({\it i.e.}, $r$-process) material
throughout the Galaxy; the Galactic Chemical Evolution (GCE) of
the elements; and by employing the abundances of the
radioactive elements (Th and U) as
chronometers, the ages of the oldest stars, and hence the lower limit
on the age of the Galaxy and the Universe. (See \citealt{truran02},
\citealt{snedencowan03}, \citealt{cowan04}, and \citealt{sneden08}
for discussions of these related and significant topics.)
In this paper we review some of the results of our studies,
starting with new stellar abundance determinations arising from
more accurate laboratory atomic data
in \S II, followed by abundance comparisons
of the
lighter and heavier $n$-capture elements in the $r$-process-rich stars
in \S III, with new species detections in the star BD+17{$^{\circ}$} 3248 \
and the ubiquitous nature of the $r$-process throughout the
Galaxy described in \S IV
and \S V, respectively. We end with our Conclusions in \S VI.
\section{Atomic Data Improvements and Abundance Determinations}
Stellar abundance determinations of the $n$-capture elements in
Galactic halo stars have become increasingly accurate
over the last decade, with typical errors now
of less than 10\% \citep{sneden08}.
Much of that improvement in the precision of the stellar abundances
has been due to more accurate laboratory atomic data.
New measurements of the transition probabilities
have been published for the rare earth elements (REE) and several
others, including:
La~II \citep{lawler01a};
Ce~II (\citealt{palmeri00};
and recently transition probabilities for 921 lines of Ce~II,
\citealt{lawler09});
Pr~II \citep{ivarsson01};
Nd~II (transition probabilities for more than 700
Nd~II lines, \citealt{denhartog03});
Sm~II (\citealt{xu03};
and recently transition probabilities for more
than 900 Sm~II lines, \citealt{lawler06});
Eu~I, II, and III (\citealt{lawler01c}; \citealt{denhartog02});
Gd~II \citep{denhartog06};
Tb~II (\citealt{denhartog01}; \citealt{lawler01b});
Dy~I and II \citep{wickliffe00};
Ho~II \citep{lawler04};
Er~II (transition probabilities for 418
lines of Er II, \citealt{lawler08});
Tm~I and II (\citealt{anderson96}; \citealt{wickliffe97});
Lu~I, II, and III (\citealt{denhartog98}; \citealt{quinet99},
\citealt{fedchak00});
Hf~II \citep{lawler07};
Os~I and II (\citealt{ivarsson03,ivarsson04}; \citealt{quinet06});
Ir~I and II (\citealt{ivarsson03,ivarsson04}; \citealt{xu07});
Pt~I \citep{denhartog05};
Au~I and II (\citealt{fivet06}; \citealt{biemont07});
Pb~I \citep{biemont00};
Th~II \citep{nilsson02a};
and U~II \citep{nilsson02b}.
These new atomic data have been employed to redetermine the solar
and stellar abundances, including more precise values of
Pr, Dy, Tm, Yb, and Lu \citep{sneden09}.
We show in Figure~\ref{f8} (from \citealt{sneden09})
the relative REE, and Hf,
abundances in five $r$-process rich stars: BD+17{$^{\circ}$} 3248, \mbox{CS~22892-052}, \mbox{CS~31082-001},
HD~115444 and HD~221170,
where
the abundance distributions have been scaled to the element Eu for
these comparisons. Also shown in Figure~\ref{f8}
are two Solar System $r$-process-only
abundance predictions from \citet{arlandini99} (based upon a
stellar model calculation) and
\citet{simmerer04} (based upon the ``classical'' $r$-process residual
method) that are also matched to the Eu abundances.
What is clear from the figure is that all of the
REE abundances---as well as Hf, which is a heavier interpeak element---are
in the same relative proportions from
star to star and with respect to the solar $r$-process abundances.
This agreement between the heavier $n$-capture elements and the
Solar System $r$-process abundance distribution
has been noted in the past (see, {\it e.g.}, \citealt{sneden03}), but
the overall agreement has become much more
precise, and convincing, as a result of the new atomic laboratory data.
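The Eu normalization used in these comparisons amounts to a single
vertical shift of each abundance pattern on the $\log\epsilon$ scale.
The following minimal sketch (in Python, with hypothetical abundance
values chosen so that the patterns agree exactly) illustrates the
procedure:
\begin{verbatim}
# Sketch of the Eu normalization (hypothetical log eps values,
# not data from the figure); log eps(X) = log10(N_X/N_H) + 12.
solar_r = {"La": 0.60, "Eu": 0.50, "Hf": 0.70}    # assumed SS r-process
star    = {"La": -0.90, "Eu": -1.00, "Hf": -0.80} # assumed stellar values

offset = star["Eu"] - solar_r["Eu"]               # match the curves at Eu
scaled = {el: v + offset for el, v in solar_r.items()}

# Residuals near zero (here exactly zero by construction) mean the star
# shares the relative proportions of the solar r-process distribution.
for el in star:
    print(el, round(star[el] - scaled[el], 2))
\end{verbatim}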
\begin{figure*}
\plotone{f8.eps}
\caption{Recent
abundance determinations in five $r$-process rich
stars, based upon
new atomic lab data, compared with two solar system $r$-process only
predictions. The abundances in each star have been normalized to the
element Eu. After \citet{sneden09}.
Reproduced by permission of the AAS.
}
\label{f8}
\end{figure*}
\section{Abundance Comparisons}
We can also compare more comprehensive---not just the
REE---elemental abundance determinations for the
$r$-process-rich halo stars. This is potentially
a more rewarding enterprise, as it can illuminate the complex
nucleosynthetic
origin of the lightest $n$-capture elements, and can provide new ways of
looking at the age of the Galactic halo.
\subsection{Heavy $n$-capture Elements}
We show in Figure~\ref{compnew4} abundance comparisons with extensive
elemental data for
10 $r$-process-rich stars
(from the top):
filled (red) circles, CS~22892-052 \citep{sneden03,sneden09};
filled (green) squares, HD~115444 \citep{westin00,sneden09,hansen11};
filled (purple) diamonds, BD+17{$^{\circ}$} 3248\ \citep{cowan02,roederer10b};
(black) stars, CS~31082-001 \citep{hill02,plez04};
solid (turquoise) left-pointing triangles, HD~221170 \citep{ivans06,sneden09};
solid (orange) right-pointing triangles, HE~1523-0901 \citep{frebel07};
(green) crosses, CS~22953-003 \citep{francois07};
open (maroon) squares, HE~2327-5642 \citep{mashonkina10};
open (brown) circles, CS~29491-069 \citep{hayek09}; and
open (magenta) triangles, HE~1219-0312 \citep{hayek09}.
The abundances of all the stars except
\mbox{CS~22892-052}\ have been vertically displaced
downwards for display purposes.
In each case the
solid lines are (scaled) solar system $r$-process only predictions from
\citet{simmerer04} that have been matched to the Eu abundances.
The figure indicates that for the ten stars plotted, the
abundances of {\it all} of the heavier stable $n$-capture elements
({\it i.e.}, Ba and above) are
consistent with the relative solar system $r$-process abundance distribution
(see also \citealt{sneden09}).
Earlier work had demonstrated this agreement for
several $r$-process rich stars (where [Eu/Fe] $\simeq$ 1), including \mbox{CS~22892-052},
and the addition of still more such $r$-process-rich stars supports that
conclusion.
\begin{figure*}
\centering
\includegraphics[angle=-90,width=7.00in]{compnew4.ps}
\caption{
Abundance comparisons between 10 $r$-process rich stars and the Solar
System $r$-process values.
See text for references.
Adapted from \citet{sneden11}.
}
\label{compnew4}
\end{figure*}
\subsection{Light $n$-capture Elements}
While the heavier $n$-capture elements appear to be consistent with
the scaled solar system $r$-process curve, the lighter $n$-capture
elements (Z $<$ 56) seem to fall below that same solar curve.
One problem in analyzing this region of interest
is that there have been relatively few stellar observations of these
lighter $n$-capture elements until now.
With the limited amount of data it is not yet clear if the pattern
is the same from star to star for the lighter $n$-capture elements in
these $r$-process rich stars.
There has been extensive work on trying to understand the
synthesis of these elements.
Observations of 4 metal-poor $r$-enriched stars
by \citet{crawford98}
suggested that Ag (Z = 47) was produced in rough proportion
to the heavy elements in stars with $-$2.2~$<$~[Fe/H]~$<$~$-$1.2.
\citet{wasserburg96} and \citet{mcwilliam98} pointed out
that multiple sources of heavy elements (other than the $s$-process)
were required to account for the observed abundances
in the solar system and extremely metal-poor stars, respectively.
\citet{travaglio04} quantified this effect, noting that Sr-Zr
Solar System abundances
could not be totally accounted for from traditional sources, such as
the $r$-process, the (main) $s$-process and the weak $s$-process.
They suggested that
the remaining (missing) abundances---8\% for Sr to 18\%
for Y and Zr---came from
a light element primary process (LEPP).
Travaglio et al.\ also noted,
``The discrepancy in the $r$-fraction of Sr-Y-Zr between the
$r$-residuals method and the \mbox{CS~22892-052}\ abundances
becomes even larger for elements from Ru to Cd: the weak
$s$-process does not contribute to elements from Ru to Cd. As
noted [previously], this discrepancy suggests an even
more complex multisource nucleosynthetic origin for elements
like Ru, Rh, Pd, Ag, and Cd.''
\citet{montes07} extended studies of the LEPP and suggested
that a range of $n$-capture elements, perhaps even including heavier
elements such as Ba, might have a contribution
from this primary process. (Since, however, Ba in $r$-process
rich stars is consistent with
the solar $r$-process abundances, such contributions
for these heavier
elements must be quite small.)
They noted, in particular, that this LEPP might
have been important in synthesizing the abundances in the $r$-process poor
star HD~122563.
Further insight into the (complicated)
origin of the lighter $n$-capture elements is
provided by the detections of Ge (Z = 32) in a few stars.
\citet{cowan05} noted a correlation of Ge with the
iron abundances in the halo
stars with $-$3.0~$\lesssim$~[Fe/H]~$\lesssim -$1.5,
suggesting that the Ge is being produced along with the Fe-group elements
at these low metallicities.
To produce the protons needed to account for
such a correlation, a new neutrino-driven ({\it i.e.}, $\nu$-p) process that might
occur in supernovae was suggested \citep{frohlich06}.
We note that for higher ({\it i.e.}, at solar)
metallicities, Ge is considered a neutron-capture element,
synthesized in the $r$-process (52\%) and the $s$-process (48\%)
(\citealt{simmerer04}; \citealt{sneden08}). Thus, there should be
a change in the slope of the Ge abundances from low metallicities to
higher metallicities, a behavior that has not yet been observed.
\begin{figure*}
\centering
\plotone{bdp17.eps}
\caption{$r$-process abundance predictions for light $n$-capture elements
compared with observations of BD+17{$^{\circ}$} 3248 \ from \citet{roederer10b}.
See text for further details.}
\label{bdp17}
\end{figure*}
We show in
Figure~\ref{bdp17} several $r$-process predictions for the
lighter $n$-capture element abundances compared with observations of
those elements in BD+17{$^{\circ}$} 3248 \ from \citet{roederer10b}.
The two Solar System $r$-process models (``classical'' and ``stellar model'')
reproduce some of these elements but begin to diverge from the
observed abundances at Rh (Z = 45).
Also shown in
Figure~\ref{bdp17} are predictions from
a High Entropy Wind (HEW) model, which might be
typical of a core-collapse (or Type II) supernova (\citealt{farouqi09};
K.-L.~Kratz, private communication).
This model gives a better fit to the abundances, but does not reproduce
the observed odd-even effects in Ag (Z = 47) and Cd (Z = 48) in this star
(resembling a trend discovered in other
$r$-enriched stars by \citealt{johnson02}).
Recent work by \citet{hansen11} to study Pd (Z = 46) and Ag
abundances in stars with $-$3.2~$\lesssim$~[Fe/H]~$\lesssim -$0.6
confirms the divergence between observations and simulation predictions.
These comparisons between calculations and observations do in fact
argue for a combination of processes to reproduce
the observed stellar abundances of some of these light $n$-capture elements.
This combination of processes might include (contributions from)
the main $r$-process, the LEPP, the $\nu$-p process,
charged-particle reactions
accompanied by $\beta$-delayed fission
and the weak $r$-process ({\it e.g.}, \citealt{kratz07}).
(See, {\it e.g.},
\citealt{farouqi09,farouqi10}, \citealt{roederer10a,roederer10b},
and \citealt{arcones11}
for further discussion.)
It may also be that during the synthesis
the main $r$-process and the LEPP are separate processes,
and that the abundance patterns in all metal-poor stars could be
reproduced by mixing their yields \citep{montes07}.
Alternatively, it may be
that the $r$-process and the LEPP
can be produced in the same events, but sometimes
only the lower neutron density components are present
\citep{kratz07,farouqi09}.
It has also been suggested that the
heavier and lighter
$n$-capture elements are synthesized in separate sites (see {\it e.g.},
\citealt{qian08}).
New observations of heavy elements in metal-poor globular cluster
stars reaffirm the abundance patterns seen in field stars.
In the globular cluster M92, \citet{roederer11b} found that the
observed star-to-star dispersion in Y (Z = 39) and Zr (Z = 40)
is the same as for the Fe-group elements ({\it i.e.}, consistent
with observational uncertainty only).
Yet, the Ba \citep{sneden00}, La, Eu, and Ho abundances exhibit
significantly larger star-to-star dispersion that cannot be
attributed to observational uncertainty alone.
Furthermore, the Ba and heavier elements were produced by $r$-process
nucleosynthesis without any $s$-process contributions.
This indicates that, as in the field stars,
these two groups of elements could not have
formed entirely in the same nucleosynthetic process in M92.
\section{New Species Detections}
\citet{roederer10b} reanalyzed near-UV spectra obtained with
HST/STIS of the
star BD+17{$^{\circ}$} 3248.
(See also \citealt{cowan02,cowan05} for earlier HST observations of BD+17{$^{\circ}$} 3248.)
We show in
Figure~\ref{f1} (from \citealt{roederer10b})
spectral regions around Os~II and Cd~I lines in the
stars BD+17{$^{\circ}$} 3248, HD~122563 and HD~115444. There is a clear detection of Os~II
in both BD+17{$^{\circ}$} 3248\ and HD~115444 but not in HD~122563. The star HD~115444 is
similar in metallicity and atmospheric parameters to HD~122563
(see \citealt{westin00}), but much more
$r$-process rich: [Eu/Fe] = 0.7 versus $-$0.5, respectively.
In the lower panel of
Figure~\ref{f1} we see the presence of Cd~I in BD+17{$^{\circ}$} 3248 \ and HD~115444, as well
as a weak detection in HD~122563.
Synthetic fits to these spectra
indicate the presence of
Cd~I and Lu~II lines in both BD+17{$^{\circ}$} 3248\ and HD~122563,
as well as a detection of Os~II in the former and an upper limit
in the latter.
This work was
the first to detect Cd~I, Lu~II, and Os~II
in metal-poor halo stars.
\begin{figure}
\centering
\plotone{f1.eps}
\vskip0pt
\caption{
HST (near-UV)
spectral regions containing Os~II and Cd~I lines in BD+17{$^{\circ}$} 3248, HD~122563,
and HD~115444 from \citet{roederer10b}.
Reproduced by permission of the AAS.
}
\label{f1}
\end{figure}
In addition to these new detections,
\citet{roederer10b} employed Keck/HIRES spectra
to derive new abundances of Mo I, Ru I and Rh I in this star.
Combining these abundance determinations led to the detection of a total of
32 $n$-capture species---the most of any metal-poor halo star.
(Previously, CS~22892-052 had the most such detections.)
Further, we note that
this total for BD+17{$^{\circ}$} 3248 \ does not include the element Ge.
While Ge may be
synthesized in proton-rich processes early in the history of the
Galaxy,
it is classified as an $n$-capture element in Solar System material
(see \citealt{simmerer04} and \citealt{cowan05}).
We illustrate this total abundance distribution
in Figure~\ref{bdfourthb} compared with the
two Solar System $r$-process curves from \citet{simmerer04} and
\citet{arlandini99}. We again see the close agreement between the heavier
$n$-capture elements and (both of) the predictions for
the Solar System $r$-process curve, as well as
the deviation between the abundances of the
lighter $n$-capture elements and that
same curve.
\begin{figure*}
\centering
\vskip 0.55in
\includegraphics[width=5.00in]{bdfourthb.eps}
\vskip 0.35in
\caption{
The total observed abundance distribution in BD+17{$^{\circ}$} 3248.
There are a total of 32---not including Ge---detections of $n$-capture elements,
the most in any metal-poor halo star. This distribution is
compared with the
two Solar System $r$-process curves from \citet{simmerer04} and
\citet{arlandini99}.
}
\label{bdfourthb}
\end{figure*}
\section{The $r$-process Throughout the Galaxy}
The results of \citet{roederer10b} also confirm earlier work indicating
significant differences in the abundances between $r$-process rich stars, such
as BD+17{$^{\circ}$} 3248, and $r$-process poor stars, such as HD~122563.
This difference is shown clearly in Figure~\ref{f4}. The abundance
distribution for BD+17{$^{\circ}$} 3248 \ (shown in the top panel) is relatively flat---compare
the abundance of Sr with Ba---and
is consistent with the scaled Solar System $r$-process abundances for
the heavy $n$-capture elements.
In contrast the lower panel of this figure indicates that the abundances
in the $r$-process poor HD~122563 fall off dramatically with increasing
atomic number---again compare the abundance of Sr with Ba.
\begin{figure*}
\centering
\includegraphics[width=5.45in]{f4.eps}
\caption{
Abundance distributions in BD+17{$^{\circ}$} 3248\ and HD~122563 with detections indicated by
filled symbols and upper limits by downward-pointing open triangles.
The new measurements of Os, Cd, and Lu illustrated in Figure~\ref{f1}
are labeled.
In the top panel (BD+17{$^{\circ}$} 3248)
the bold curve is an HEW calculation from \citet{farouqi09}
normalized to Sr, while the solid line is the Solar System $r$-process
curve \citep{sneden08} normalized to Eu.
In the bottom panel (HD~122563) the solar curve is normalized both to Eu
(solid line) and
Sr (dotted line).
Abundances were obtained from \citet{cowan02,cowan05}, \citet{honda06},
\citet{roederer09,roederer10b}, and \citet{sneden09}.
Figure from \citet{roederer10b}.
Reproduced by permission of the AAS.
}
\label{f4}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[angle=-90,width=6.00in]{f11b.eps}
\caption{
Differences between Solar System $r$-process abundances and stellar abundances
for 16 metal-poor stars, normalized to Sr.
The stars are listed in order of descending [Eu/Fe], and that value and
[Sr/Fe] are listed in the box to the right in the figure.
A value for a typical uncertainty is illustrated in the lower left.
Note the difference in the abundance pattern between the $r$-process
rich star \mbox{CS~22892-052}\ and that of HD~122563, with the other stars falling
between those extremes.
Abundance references are as follows:
S.S. $r$-process abundances \citep{sneden08};
\mbox{HE~1523-0901} (\citealt{frebel07} and A.\ Frebel,
2009, private communication);
\mbox{CS~31082-001} \citep{hill02,plez04,sneden09};
\mbox{CS~22892-052} \citep{sneden03,sneden09};
\mbox{HE~1219-0312} \citep{hayek09,roederer09};
\mbox{UMi-COS~82} \citep{aoki07};
\mbox{CS~31078-018} \citep{lai08};
\mbox{CS~30306-132} \citep{honda04};
\mbox{BD$+$17~3248} \citep{cowan02,sneden09,roederer10b};
\mbox{HD~221170} \citep{ivans06,sneden09};
\mbox{HD~115444} \citep{westin00,roederer09,sneden09};
\mbox{HD~175305} \citep{roederer10c};
\mbox{BD$+$29~2356} \citep{roederer10c};
\mbox{BD$+$10~2495} \citep{roederer10c};
\mbox{CS~22891-209} \citep{francois07};
\mbox{HD~128279} \citep{roederer10c};
\mbox{HD~13979} (I.\ Roederer et al., in preparation);
\mbox{CS~29518-051} \citep{francois07};
\mbox{CS~22873-166} \citep{francois07};
\mbox{HD~88609} \citep{honda07};
\mbox{CS~29491-053} \citep{francois07};
\mbox{HD~122563} \citep{honda06,roederer10b}; and
\mbox{CS~22949-037} \citep{depagne02}.
Figure from \citet{roederer10a}.
Reproduced by permission of the AAS.
}
\label{f11b}
\end{figure*}
It is clear from much work
({\it e.g.}, \citealt{honda06,honda07}) that the abundances even in a
star such as HD~122563 do come from the
$r$-process---the sources of the $s$-process,
low- or intermediate-mass stars on the AGB with longer evolutionary
timescales, had not had sufficient time to evolve prior to the formation
of this metal-poor halo star (cf.\ \citealt{truran81}).
Instead, one can think of the abundance distribution
in HD~122563, illustrated in Figure~\ref{f4}, as the result of
an ``incomplete $r$-process''---there were not sufficient numbers of
neutrons to form all of the heavier $n$-capture elements,
particularly the ``third-peak'' elements of Os, Ir, and Pt.
In the classical ``waiting point approximation'' the
lighter $n$-capture elements are synthesized from lower neutron number
density (n$_n$)
fluxes, typically 10$^{20}$--10$^{24}$~cm$^{-3}$, with the heavier $n$-capture elements
(and the total $r$-process abundance distribution) requiring
values of n$_n$ = 10$^{23}$--10$^{28}$ cm$^{-3}$
(see Figures 5 and 6 of \citealt{kratz07}).
Physically, in this ``incomplete'' or ``weak $r$-process,''
the neutron flux was too low to push the $r$-process ``path''
far enough away from the valley of $\beta$-stability
to reach the higher mass numbers after
$\alpha$- and $\beta$-decays back to stable nuclides.
Instead the lower neutron number densities result in the
$r$-process path being too close to the valley of stability
leading to a diminution in the
abundances of the heavier $n$-capture elements.
The lighter $n$-capture elements, such as Sr, in this star
may have formed as a result of this incomplete or
weak $r$-process, or the LEPP, or combinations as described previously for the
$r$-process rich stars.
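The dependence of the path location on n$_n$ invoked above can be made
explicit: in $(n,\gamma) \rightleftharpoons (\gamma,n)$ equilibrium, the
abundance ratio along an isotopic chain obeys a Saha-type relation of
the form
\begin{equation}
\frac{N(Z,A+1)}{N(Z,A)} \propto n_n \, T^{-3/2}
\exp\left[\frac{S_n(Z,A+1)}{kT}\right],
\end{equation}
so that, at fixed temperature, a lower n$_n$ must be compensated by a
larger neutron separation energy $S_n$, which places the equilibrium
path closer to the valley of $\beta$-stability, as described above.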
This analysis was extended to a larger sample by \citet{roederer10a}
and is illustrated in Figure~\ref{f11b}.
We show the differences between the abundance distributions of 16
metal-poor stars, normalized to Sr,
compared with the Solar System $r$-process distribution \citep{sneden08}.
The stars are plotted in order of descending values of
[Eu/Fe], a measure of their $r$-process richness. Thus, we see near the
top \mbox{CS~22892-052}\ with a value of [Eu/Fe] = 1.6 and near the bottom, HD~122563 with
[Eu/Fe] = $-$0.5. The figure illustrates
the relative flatness of the distributions of
the most $r$-process-rich stars ([Eu/Fe] $\simeq$ 1)
with respect to the solar curves, while the
$r$-process poor stars have abundances that fall off sharply with
increasing atomic number.
It is also clear from Figure~\ref{f11b}
that there is a range of abundance distributions
falling between
these two extreme examples.
(We note that Figure~\ref{f11b} should not be taken as an unbiased
distribution of stars at low metallicity.)
We emphasize four important points here.
First, not all of the metal-poor stars have the same abundance pattern
as \mbox{CS~22892-052}, only those that are $r$-process rich.
Second, while the distributions
are different between the $r$-process rich and poor stars, there
is no indication of $s$-process synthesis for these elements.
Thus, all of the
elements in these stars were synthesized in the $r$-process, at least for
the heavier $n$-capture elements, and $r$-process material was common in the
early Galaxy.
Third, the approximate downward displacement from the top to the bottom
of Figure~\ref{f11b} (a measure of the decreasing [Eu/Sr] ratio)
roughly scales as the [Eu/Fe] ratio, listed
in the right-hand panel.
This can be understood as follows: since the abundance patterns are
normalized to Sr, and if Sr is roughly proportional to Fe in these stars
(with a moderate degree of scatter---cf.\ Figure~7 of \citealt{roederer10a}),
then {\it of course} the [Eu/Sr] ratio roughly follows [Eu/Fe].
(See also \citealt{aoki05}.)
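Written out in the standard bracket notation, this is simply the
identity
\begin{equation}
\mathrm{[Eu/Sr]} = \mathrm{[Eu/H]} - \mathrm{[Sr/H]}
= \mathrm{[Eu/Fe]} - \mathrm{[Sr/Fe]},
\end{equation}
so that for an approximately constant [Sr/Fe] the downward displacement
tracks [Eu/Fe] directly.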
Finally, we note that Ba has been detected in all of these stars
and the vast majority of low-metallicity field and globular cluster stars
studied to date.
Only in a few Local Group dwarf galaxies do Ba upper limits
hint that Ba (and, by inference, all heavier elements) may be
extremely deficient or absent
\citep{fulbright04,koch08,frebel10}.
\section{Conclusions}
Extensive studies have demonstrated the presence of
$n$-capture elements in the atmospheres of
metal-poor halo and globular cluster stars.
New detections of the $n$-capture elements Cd~I (Z = 48), Lu~II
(Z = 71) and Os~II (Z = 76),
derived from HST/STIS spectra,
have been made in several metal-poor halo stars.
These were the first detections of these species in such stars.
Supplementing these observations with Keck data
and new measurements of Mo I, Ru I and Rh I,
we reported the detections of 32 $n$-capture
elements in BD+17{$^{\circ}$} 3248. This is currently the largest number of detections of these
elements in any metal-poor halo star, supplanting the previous ``champion''
\mbox{CS~22892-052}.
Comparisons among the most
$r$-process-rich stars ([Eu/Fe] $\simeq$ 1) demonstrate that the heavier
stable elements (Ba and above) are remarkably consistent from star to star
and consistent with the (scaled) solar system $r$-process distribution.
Detailed comparisons of the REE (along with Hf) among
a well-studied group of $r$-process-rich stars, employing new
experimental atomic data,
strongly support this finding.
The newly determined, lab-based stellar abundances are more
precise and show very little scatter from star to star and with
respect to the Solar System $r$-process abundances.
This suggests that the $r$-process produced these elements
early in the history of the Galaxy and that the same type(s) of process
was responsible for the synthesis of the $r$-process elements at the
time of the formation of the Solar System.
While the heavier elements appear to have formed from the main
$r$-process and are
apparently consistent with the solar $r$-process abundances,
the lighter $n$-capture element abundances in
these stars do
not conform to the solar pattern.
Until recently there were few data for these elements in such stars,
but with the new detections of Cd
and the increasing number of Pd and Ag detections, some patterns are becoming clear.
First, the main $r$-process alone is not responsible for the synthesis of
these lighter $n$-capture elements. Instead, other processes, alone or in
combination, may have been responsible for their formation.
These processes include a so-called
``weak'' $r$-process (with lower values of n$_n$), the LEPP,
the $\nu$-p process, or charged particle
reactions in the HEW of a core-collapse supernova.
It is also not clear whether different processes are responsible for
different mass regions, with one for Ge, a different one for Sr-Zr, and
still another for Pd, Ag, and Cd.
It is also not clear whether these processes operate separately from
each other or in the same site, or whether different mass ranges of
the $n$-capture elements are synthesized in different sites.
Clearly, much more work needs to be undertaken to understand the
formation of these lighter $n$-capture elements.
The stellar abundance signatures of the heaviest of these elements,
i.e., Ba and above, are consistent with the rapid neutron
capture process, $r$-process, but not the $s$-process in these
old stars. Similar conclusions are found for stars in the ancient globular
clusters
with comparable abundance spreads in the $r$-process elements
(see, {\it e.g.}, \citealt{gratton04}, \citealt{sobeck11},
\citealt{roederer11a}, and \citealt{roederer11b}).
There is also a clear distinction between the abundance patterns of the
$r$-process rich stars such as \mbox{CS~22892-052}\ and the $r$-process poor stars like
HD~122563. The latter seem to have an element pattern that was
formed as a result of a ``weak'' or ``incomplete'' $r$-process.
Most of the old, metal-poor halo stars have abundance distributions
that fall between the extremes of \mbox{CS~22892-052}\ and HD~122563.
However, the very presence of
$n$-capture elements in the spectra of these stars
argues for $r$-process events being a common occurrence
early in the
history of
the Galaxy.
Finally, we note the need for additional stellar observations,
particularly of the UV regions of the spectra {\it only} accessible
using STIS or COS aboard HST.
These observations require
high signal-to-noise ratios and high resolution to identify faint lines
in crowded spectral regions. We will also require more laboratory
atomic data for
elements that have not been well studied to improve the precision of
the stellar and solar abundances.
Additional experimental nuclear data,
not yet available,
for the heaviest neutron-rich nuclei
that participate in the $r$-process,
will be critical to these studies. Until that time
new, more physically based,
theoretical prescriptions for nuclear masses, half-lives, etc.\
for these $r$-process nuclei will be necessary.
New theoretical models of supernova explosions and detailed synthesis
scenarios, such as might occur in the HEW, will be very important to help to
identify the site or sites
for the $r$-process, a search that has been ongoing since 1957.
\section{Acknowledgments}
We thank our colleagues for all of their contributions and helpful
discussions. We particularly are grateful for all of the
contributions from George W.\ Preston, as he celebrates his
80th birthday. Partial scientific support for this research was
provided by the NSF (grants AST~07-07447 to J.J.C., AST~09-08978 to C.S.,
and AST~09-07732 to J.E.L.).
I.U.R.\ is supported by the Carnegie Institution of Washington
through the Carnegie Observatories Fellowship.
\section{Introduction}
General relativity (GR) is currently the most established theory of gravitation. It correctly describes a number of observations, such as planetary orbits in the Solar System, the motion of masses in the Earth's gravitational field \cite{BETA.Will:2014kxa}, the recently discovered gravitational waves \cite{BETA.Ligo} or the $\Lambda$CDM model in cosmology \cite{BETA.Planck}. However successful on these scales, GR itself does not provide sufficient answers to fundamental open questions such as the reason for the accelerated expansion of the universe, the phase of inflation or the nature of dark matter. Further tension arises from the fact that so far no attempt to extend GR to a full quantum theory has succeeded.
GR is expected to be challenged by different upcoming experiments on ground and in space, such as high precision clocks \cite{BETA.cacciapuoti2011atomic} and atom interferometers in Earth orbits, pulsar timing experiments \cite{BETA.Pulsar} and direct observations of black hole shadows \cite{BETA.Goddi,BETA.Broderick}. This plethora of existing and expected experimental data, together with the tension with cosmological observations, motivates studying alternative theories of gravitation \cite{BETA.Nojiri}. In particular, the upcoming experiments are expected to give more stringent constraints on the parameter spaces of such theories or even find violations of GR's predictions.
One class of alternative theories consists of scalar-tensor theories of gravity (STG), extensions of GR that contain a scalar degree of freedom in addition to the metric tensor.
The detection of the Higgs boson proved that scalar particles exist in nature \cite{BETA.higgs}, and scalar fields are a popular explanation for inflation \cite{BETA.inflation.guth} and dark energy \cite{BETA.Quintessence.and.the.Rest.of.the.World}.
Further, effective scalar fields can arise, e.g., from compactified extra dimensions \cite{BETA.compactified.extra.dimensions} or string theory \cite{BETA.Damour}.
While the motivation for such alternative theories of gravitation is often related to cosmology, of course any such theory must also pass Solar System tests. The most prominent class of such tests is based on the post-Newtonian limit of the theory under consideration, which is usually discussed in the parametrized post-Newtonian (PPN) framework \cite{BETA.will.book,BETA.Will:2014kxa}.
It characterizes theories of gravitation in the weak field limit in terms of a number of parameters that can be calculated from the field equations of the theory and will, in general, deviate from the values predicted by general relativity. These parameters can be constrained using observational data and experiments \cite{BETA.Fomalont:2009zg,BETA.Bertotti:2003rm,BETA.Hofmann:2010,BETA.Verma:2013ata,BETA.Devi:2011zz}.
In this work we are interested in the parameters $\gamma$ and $\beta$ only, as these are the only parameters that may deviate from their GR values in fully conservative gravity theories, a class to which STG belongs~\cite{BETA.will.book}.
The most thoroughly studied standard example of a scalar-tensor theory is Brans-Dicke theory \cite{BETA.Brans-Dicke.1961}, which contains a massless scalar field whose non-minimal coupling to gravity is determined by a single parameter $\omega$. This theory predicts the PPN parameter $\gamma = (1+\omega)/(2+\omega)$, in contrast to $\gamma=1$ in GR. Both theories predict $\beta = 1$. Adding a scalar potential gives the scalar field a mass, which means that its linearized field equation assumes the form of a Klein-Gordon equation, which is solved by a Yukawa potential $\sim e^{-m r}/r$ in the case of a point-like source. In this massive case, the PPN parameter $\gamma$ becomes a function of the radial coordinate $r$ \cite{BETA.Olmo1,*BETA.Olmo2,BETA.Perivolaropoulos}.
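For reference, in the case of a massive Brans-Dicke field of mass $m$ this radial dependence takes the form (see, e.g., \cite{BETA.Perivolaropoulos})
\ba
\gamma(r) = \frac{2\omega + 3 - e^{-mr}}{2\omega + 3 + e^{-mr}} \,,
\ea
which reduces to $(1+\omega)/(2+\omega)$ for $mr \ll 1$ and approaches the GR value $\gamma = 1$ for $mr \gg 1$, where the Yukawa suppression effectively hides the scalar field.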
Scalar-tensor theories can be expressed in different but equivalent conformal frames. This means that the form of the general scalar-tensor action is invariant under conformal transformations of the metric, which depend on the value of the scalar field. There are two such frames that are most often considered:
In the Jordan frame, test particles move along geodesics of the frame metric while in the Einstein frame, the scalar field is minimally coupled to curvature.
The PPN parameters $\gamma$ and $\beta$ for scalar-tensor theories with a non-constant coupling have been calculated in the Jordan \cite{BETA.HohmannPPN2013,*BETA.HohmannPPN2013E} and in the Einstein frame \cite{BETA.SchaererPPN2014}.
These works consider a spacetime consisting of a point source surrounded by vacuum.
As will be elucidated below, this assumption leads to problems when it comes to the definition and calculation of the PPN parameter $\beta$.
Applying conformal transformations and scalar field redefinitions makes it possible to transform STG actions, field equations and observable quantities between different frames. It is important to note that these different frames are physically equivalent, as they yield the same observable quantities~\cite{BETA.Postma:2014vaa,BETA.Flanagan}. Hence, STG actions which differ only by conformal transformations and field redefinitions should be regarded not as different theories, but as different descriptions of the same underlying theory.
This observation motivates the definition of quantities which are invariant under the aforementioned transformations, and the expression of observable quantities such as the PPN parameters or the slow roll parameters characteristic of models of inflation fully in terms of these invariants~\cite{BETA.JarvInvariants2015,BETA.KuuskInvariantsMSTG2016,BETA.Jarv:2016sow,BETA.Karam:2017zno}.
The PPN parameters $\gamma$ and $\beta$ were calculated for a point source \cite{BETA.HohmannPPN2013,*BETA.HohmannPPN2013E,BETA.SchaererPPN2014}, and later expressed in terms of invariants~\cite{BETA.JarvInvariants2015,BETA.KuuskInvariantsMSTG2016}. However, the assumption of a point source leads to a number of conceptual problems. The most important of these problems is the fact that, in terms of post-Newtonian potentials, the Newtonian gravitational potential becomes infinite at the location of the source, so that its gravitational self-energy diverges. It is therefore impossible to account for possible observable effects caused by a modified gravitational self-energy of the source in a theory that differs from GR. We therefore conclude that the assumption of a point source is not appropriate for a full application of the PPN formalism to STG. This has been realized earlier in the particular case of STG with screening mechanisms~\cite{BETA.SchaererPPN2014,BETA.Zhang:2016njn}.
The goal of this article is to improve on the previously obtained results for the PPN parameters $\gamma$ and $\beta$ for a general class of scalar-tensor theories, in which the divergent gravitational self-energy has been neglected. Instead of a point mass, the gravitating source we consider in this article is a sphere with homogeneous density, pressure and internal energy, surrounded by vacuum. In this case the gravitational self-energy remains finite, and can therefore be taken into account. During our calculation we do not fix a particular frame, but instead make use of the formalism of invariants mentioned above from the beginning in order to calculate the effective gravitational constant as well as the PPN parameters $\gamma$ and $\beta$.
The article is structured as follows. In Sec. \ref{sec:theory} we discuss the scalar-tensor theory action, the field equations and the invariants.
The perturbative expansion of relevant terms is outlined in Sec. \ref{sec PPN Expansion} and the expanded field equations are provided in Sec. \ref{sec Expanded field equations}.
Next, in Sec. \ref{sec Massive field and spherical source}, these are solved explicitly for a non-rotating homogeneous sphere and the PPN parameters are derived.
Sec. \ref{sec Comparison to observations} applies our results to observations.
Finally, we conclude with a discussion and outlook in Sec.~\ref{sec Conclusion}.
The main part of our article is supplemented by Appendix~\ref{app coefficients}, in which we list the coefficients appearing in the post-Newtonian field equations and their solutions.
\section{Theory}\label{sec:theory}
We start our discussion with a brief review of the class of scalar-tensor theories we consider. The most general form of the action, in a general frame, is displayed in section~\ref{ssec:action}. We then show the metric and scalar field equations derived from this action in section~\ref{ssec:feqns}. Finally, we provide the definition of the relevant invariant quantities, and express the field equations in terms of these, in section~\ref{ssec:invariants}.
\subsection{Action}\label{ssec:action}
We consider the class of scalar-tensor gravity theories with a single scalar field \(\Phi\) besides the metric tensor \(g_{\mu\nu}\), and no derivative couplings. Its action in a general conformal frame is given by~\cite{BETA.Flanagan}
\ba
\label{BETA.equ: action}
S = \frac{1}{2\kappa^2} \int d^4x \sqrt{-g}
\left\{ \mathcal{A}(\Phi) R - \mathcal{B}(\Phi) g^{\mu\nu} \partial_\mu \Phi \partial_\nu \Phi
- 2 \kappa^2 \mathcal{U}(\Phi)\right\}
+ S_m [e^{2\alpha(\Phi)} g_{\mu\nu},\chi] \,.
\ea
Any particular theory in this class is determined by a choice of the four free functions $\mathcal{A}, \mathcal{B}, \mathcal{U}$ and $\alpha$, each of which depends on the scalar field $\Phi$.
The function $\mathcal{B}$ determines the kinetic energy part of the action. The scalar potential is given by $\mathcal{U}$; a non-vanishing potential may be used to model inflation or a cosmological constant, or to give the scalar field a mass.
The last part \(S_m\) is the matter part of the action. The matter fields, which we collectively denote by $\chi$, couple to the so-called Jordan frame metric $e^{2\alpha(\Phi)} g_{\mu\nu}$. It is conformally related to the general frame metric $g_{\mu\nu}$. The latter is used to raise and lower indices and determines the spacetime geometry in terms of its Christoffel symbols, Riemann tensor and further derived quantities.
In general, the scalar field is non-minimally coupled to curvature. This coupling is determined by the function $\mathcal{A}(\Phi)$.
There are different common choices of the conformal frame; see~\cite{BETA.JarvInvariants2015} for an overview. In the Jordan frame, one has \(\alpha = 0\) and the matter fields couple directly to the metric \(g_{\mu\nu}\). By a redefinition of the scalar field one may further set $\mathcal{A} \equiv \Phi$. Typically, one considers the coupling function $\omega(\Phi) \equiv \mathcal{B}(\Phi) \Phi$. This particular choice of the parametrization is also known as Brans-Dicke-Bergmann-Wagoner parametrization.
Another possible choice for the conformal frame is the Einstein frame, in which the field couples minimally to curvature, $\mathcal{A} \equiv 1$. However, in this case the matter fields in general do not couple to the frame metric directly, but through a non-vanishing coupling function $\alpha \neq 0$. In this case one may also choose the canonical parametrization $\mathcal{B} \equiv 2$.
We call the scalar field minimally coupled if the Jordan and Einstein frames coincide, i.e., if one can achieve $\mathcal{A} \equiv 1$ and $\alpha \equiv 0$ simultaneously through a conformal transformation of the metric.
\subsection{Field equations}\label{ssec:feqns}
The metric field equations are obtained by varying the action \eqref{BETA.equ: action} with respect to the metric. Written in the trace-reversed form they are
\ba \bs
\label{BETA.equ: tensor field equation trace reversed long}
R_{\mu\nu}
&- \frac{\mathcal{A}'}{\mathcal{A}} \left( \nabla_\mu \nabla_\nu \Phi + \f12 g_{\mu\nu} \square \Phi \right)
- \left( \frac{\mathcal{A}''}{\mathcal{A}} + 2\mathcal{F} - \frac{3 {\mathcal{A}'}^2 }{2 \mathcal{A}^2} \right) \partial_\mu \Phi \partial_\nu \Phi
\\
&- \f12 g_{\mu\nu} \frac{\mathcal{A}''}{\mathcal{A}} g^{\rho\sigma} \partial_\rho \Phi \partial_\sigma \Phi
- \frac{1}{\mathcal{A}} g_{\mu\nu} \kappa^2 \mathcal{U}
= \frac{\kappa^2}{\mathcal{A}} \left( T_{\mu\nu} - \f12 g_{\mu\nu} T \right) \,,
\es \ea
where we use the d'Alembertian $\square X \equiv \nabla^2 X = g^{\mu\nu} \nabla_\mu \nabla_\nu X$ and the notation $X' \equiv \frac{\partial X}{\partial\Phi}$.
Taking the variation with respect to the scalar field gives the scalar field equation
\ba \bs
\label{BETA.equ: scalar field equation}
\mathcal{F} \, \square \Phi
&+ \f12 \left( \mathcal{F}' + 2 \mathcal{F} \frac{\mathcal{A}'}{\mathcal{A}} \right) g^{\mu\nu} \partial_\mu \Phi \partial_\nu \Phi
+ \frac{\mathcal{A}'}{\mathcal{A}^2} \kappa^2 \mathcal{U}
- \frac{1}{2 \mathcal{A}} \kappa^2 \mathcal{U}'
= \kappa^2 \frac{\mathcal{A}' - 2 \mathcal{A} \alpha' }{4 \mathcal{A}^2} T \,.
\es \ea
The function $\mathcal{F}$ introduced on the left hand side is defined by
\ba
\label{BETA.F}
\mathcal{F} \equiv \frac{2 \mathcal{A} \mathcal{B} + 3 {\mathcal{A}'}^2}{4 \mathcal{A}^2} \,.
\ea
Note that these equations simplify significantly in the Einstein frame $\mathcal{A} \equiv 1$ and $\alpha \equiv 0$. We will make use of this fact in the following, when we express the field equations in terms of invariant quantities.
Further, note that the functions $\mathcal{A}$ and $\mathcal{B}$ should be chosen such that $\mathcal{F} > 0$. A negative $\mathcal{F}$ would lead to a wrong-sign kinetic term in the Einstein frame, i.e., a ghost scalar field, which should be avoided.
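As a concrete illustration, for Brans-Dicke theory in the Jordan frame parametrization of section~\ref{ssec:action}, with $\mathcal{A} = \Phi$, $\mathcal{B} = \omega/\Phi$ and $\alpha = 0$, the definition \eqref{BETA.F} gives
\ba
\mathcal{F} = \frac{2 \Phi \, (\omega/\Phi) + 3}{4 \Phi^2} = \frac{2\omega + 3}{4 \Phi^2} \,,
\ea
so the condition $\mathcal{F} > 0$ reproduces the familiar no-ghost bound $\omega > -3/2$.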
\subsection{Invariants}\label{ssec:invariants}
Given a scalar-tensor theory in a particular frame, it can equivalently be expressed in a different frame by applying a Weyl transformation of the metric tensor $g_{\mu\nu} \rightarrow \bar{g}_{\mu\nu}$
and a reparametrization of the scalar field $\Phi \rightarrow \bar{\Phi}$
\bsub
\label{BETA.equ: transformations}
\ba
\label{BETA.equ: Weyl reparametrization}
g_{\mu\nu} &= e^{2\bar{\gamma}(\bar{\Phi})} \bar{g}_{\mu\nu} \,,
\\
\label{BETA.equ: scalar field redefinition}
\Phi &= \bar{f} (\bar{\Phi}) \,.
\ea
\esub
We defined $\mathcal{F}$ in \eqref{BETA.F} since it transforms as a tensor under scalar field redefinition and is invariant under Weyl transformation,
\ba \bs
\mathcal{F} &= \left( \frac{\partial \bar{\Phi}}{\partial \Phi} \right)^2 \bar{\mathcal{F}} \,.
\es \ea
In order to have a frame independent description, we want to express everything in terms of invariants, i.e., quantities that are invariant under the transformations given above. The matter coupling and the scalar potential can be written in an invariant form by introducing the two invariants~\cite{BETA.JarvInvariants2015}
\bsub
\ba
\mathcal{I}_1(\Phi) = \frac{e^{2\alpha(\Phi)}}{\mathcal{A}(\Phi)} \,,
\\
\mathcal{I}_2(\Phi) = \frac{\mathcal{U}(\Phi)}{\mathcal{A}^2(\Phi)} \,.
\ea
\esub
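The invariance of these quantities can be checked directly: under the transformations \eqref{BETA.equ: transformations} the action \eqref{BETA.equ: action} retains its form with the transformed functions $\bar{\mathcal{A}} = e^{2\bar{\gamma}} (\mathcal{A} \circ \bar{f})$, $\bar{\alpha} = (\alpha \circ \bar{f}) + \bar{\gamma}$ and $\bar{\mathcal{U}} = e^{4\bar{\gamma}} (\mathcal{U} \circ \bar{f})$, so that, for example,
\ba
\bar{\mathcal{I}}_1 = \frac{e^{2\bar{\alpha}}}{\bar{\mathcal{A}}}
= \frac{e^{2 (\alpha \circ \bar{f})} e^{2\bar{\gamma}}}{e^{2\bar{\gamma}} (\mathcal{A} \circ \bar{f})}
= \mathcal{I}_1 \circ \bar{f} \,.
\ea
Hence $\mathcal{I}_1$ is unaffected by the Weyl transformation and behaves as a scalar function under field redefinitions; the analogous computation with $\bar{\mathcal{U}}$ shows the same for $\mathcal{I}_2$.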
Given the action in a general frame, we can define the invariant Einstein and Jordan frame metrics by
\bsub
\label{BETA equ: Einstein and Jordan frame metric}
\ba
\label{BETA equ: Einstein frame metric}
g^{\mathfrak{E}}_{\mu\nu} := \mathcal{A}(\Phi) g_{\mu\nu} \,,
\\
\label{BETA equ: Jordan frame metric}
g^{\mathfrak{J}}_{\mu\nu} := e^{2\alpha(\Phi)} g_{\mu\nu}\,,
\ea
\esub
which are related by
\ba
\label{BETA equ: Einstein Jordan frame metric relation}
g^{\mathfrak{J}}_{\mu\nu} = \mathcal{I}_1 g^{\mathfrak{E}}_{\mu\nu} \,.
\ea
Note that if the action is already given in the Einstein frame, the metric coincides with the Einstein frame metric defined above, $g_{\mu\nu} = g^{\mathfrak{E}}_{\mu\nu}$, and the same holds for the Jordan frame.
We define the Einstein frame metric \eqref{BETA equ: Einstein frame metric} as it significantly simplifies the field equations.
The metric field equations reduce to
\ba
\label{equ: full metric field equation E-frame}
R^{\mathfrak{E}}_{\mu\nu} - 2 \mathcal{F} \, \partial_{\mu}\Phi \partial_{\nu} \Phi - \kappa^{2}g^{\mathfrak{E}}_{\mu\nu}\mathcal{I}_2 = \kappa^2 \bar{T}^{\mathfrak{E}}_{\mu\nu}\,,
\ea
where
\ba
\bar{T}^{\mathfrak{E}}_{\mu\nu} = T^{\mathfrak{E}}_{\mu\nu} - \frac{1}{2}g^{\mathfrak{E}}_{\mu\nu}T^{\mathfrak{E}}\,, \quad T^{\mathfrak{E}} = g^{\mathfrak{E}\,\mu\nu}T^{\mathfrak{E}}_{\mu\nu} = \frac{T}{\mathcal{A}^2}\,, \quad T^{\mathfrak{E}}_{\mu\nu} = \frac{T_{\mu\nu}}{\mathcal{A}}
\ea
is the trace-reversed energy-momentum tensor in the Einstein frame. It is invariant under conformal transformations and field redefinitions, since the left-hand side of the field equations~\eqref{equ: full metric field equation E-frame} is also invariant. Note that we use the invariant Einstein metric \(g^{\mathfrak{E}}_{\mu\nu}\) for taking the trace and moving indices here, in order to retain the invariance of this tensor. For later use, we also define the invariant Jordan frame energy-momentum tensor
\ba
\bar{T}^{\mathfrak{J}}_{\mu\nu} = T^{\mathfrak{J}}_{\mu\nu} - \frac{1}{2}g^{\mathfrak{J}}_{\mu\nu}T^{\mathfrak{J}}\,, \quad T^{\mathfrak{J}} = g^{\mathfrak{J}\,\mu\nu}T^{\mathfrak{J}}_{\mu\nu} = \frac{T}{e^{4\alpha(\Phi)}}\,, \quad T^{\mathfrak{J}}_{\mu\nu} = \frac{T_{\mu\nu}}{e^{2\alpha(\Phi)}}\,.
\ea
Similarly, expressing the scalar field equation \eqref{BETA.equ: scalar field equation} in terms of invariants yields
\ba
\label{equ: full scalar field equation}
\mathcal{F} g^{\mathfrak{E}\,\mu\nu} \partial_{\mu}\partial_{\nu}\Phi
- \mathcal{F} g^{\mathfrak{E}\,\mu\nu} \Gamma^{\mathfrak{E}\,\rho}{}_{\nu\mu}\partial_{\rho}\Phi
+ \frac{\mathcal{F}'}{2} g^{\mathfrak{E}\,\mu\nu} \partial_{\mu}\Phi \partial_{\nu}\Phi
- \frac{\kappa^2}{2}{\mathcal{I}_{2}}'
= -\f14 \kappa^2 {(\ln\mathcal{I}_1)}' T^{\mathfrak{E}}\,.
\ea
These are the field equations we will be working with. In order to solve them in a post-Newtonian approximation, we will perform a perturbative expansion of the dynamical fields around a flat background solution. This will be done in the following section.
\section{PPN formalism and expansion of terms}
\label{sec PPN Expansion}
In the preceding section we have expressed the field equations of scalar-tensor gravity completely in terms of invariant quantities. In order to solve these field equations in a post-Newtonian approximation, we make use of the well known PPN formalism. Since we are dealing with different invariant metrics and their corresponding conformal frames, we briefly review the relevant parts of the PPN formalism for this situation. We start by introducing velocity orders in section~\ref{ssec:velorder}. These are used to define the PPN expansions of the scalar field in section~\ref{ssec:ppnscalar}, the invariant metrics in section~\ref{ssec:ppnmetric}, the energy-momentum tensor in section~\ref{ssec:ppnenmom} and the Ricci tensor in section~\ref{ssec:ppnricci}.
\subsection{Slow-moving source matter and velocity orders}
\label{ssec:velorder}
The starting point of the PPN formalism is the assumption of perfect fluid matter, for which the (Jordan frame) energy-momentum tensor is given by
\ba
T^{\mathfrak{J}\,\mu\nu} = \left( \rho + \rho \Pi + p \right) u^\mu u^\nu + p g^{\mathfrak{J}\,\mu\nu} \,.
\ea
Since test particles fall on geodesics of the Jordan frame metric, we consider this as the `physical metric' and we define mass density $\rho$, pressure $p$ and specific internal energy $\Pi$ in this frame.
By $u^\mu$ we denote the four-velocity, normalized such that $u^\mu u_\mu = -1$, where indices are raised and lowered using the Jordan frame metric $g^{\mathfrak{J}}_{\mu\nu}$.
We now consider the PPN framework to expand and solve the field equations up to the first post-Newtonian order. For this purpose we assume that the source matter is slow-moving, $v^i = u^i/u^0 \ll 1$. We use this assumption to expand all dynamical quantities in velocity orders $\mathcal{O}(n) \sim |\vec{v}|^n$.
Note that $\rho$ and $\Pi$ each contribute at order $\mathcal{O}(2)$, while $p$ contributes at $\mathcal{O}(4)$. The velocity terms $v^i$ are, obviously, of order $\mathcal{O}(1)$. We finally assume a quasi-static solution, where any time evolution is caused by the motion of the source matter. Hence, each time derivative $\partial_0 \sim \mathcal{O}(1)$ increases the velocity order of a term by one.
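As a rough numerical orientation (the values below are standard Solar System numbers, not taken from this paper), the Newtonian potential and the squared orbital velocity of a planet are indeed of the same, second velocity order:
\begin{verbatim}
# Orders of magnitude for the PPN bookkeeping (illustrative values):
# for a bound orbit the virial theorem gives U ~ v^2, i.e. both O(2).
G = 6.674e-11      # m^3 kg^-1 s^-2
M_sun = 1.989e30   # kg
c = 2.998e8        # m/s
r = 1.496e11       # 1 au in m
v = 2.98e4         # Earth's orbital speed in m/s

U = G * M_sun / (r * c**2)          # dimensionless Newtonian potential
print("U       = %.1e" % U)         # ~ 1e-8
print("(v/c)^2 = %.1e" % ((v / c) ** 2))  # ~ 1e-8, same velocity order
\end{verbatim}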
\subsection{PPN expansion of the scalar field}
\label{ssec:ppnscalar}
We now expand the scalar field around its cosmological background value $\Phi_0$ in terms of velocity orders,
\ba
\Phi = \Phi_0 + \phi
= \Phi_0 + \order{\phi}{2} + \order{\phi}{4} + \mathcal{O}{(6)}\,,
\ea
where $\order{\phi}{2}$ is of order $\mathcal{O}{(2)}$ and $\order{\phi}{4}$ is of order $\mathcal{O}{(4)}$. Other velocity orders either vanish due to conservation laws or are not relevant for the PPN calculation.
Any function of the scalar field $\mathcal{X}(\Phi)$ can then be expanded in a Taylor series as
\ba
\bs
\mathcal{X}(\Phi)
&= \mathcal{X}(\Phi_0) + \mathcal{X}'(\Phi_0) \phi + \f12 \mathcal{X}''(\Phi_0) \phi^2 + \mathcal{O}(6)
\\
&= \mathcal{X}(\Phi_0) + \mathcal{X}'(\Phi_0) \order{\phi}{2}
+ \left[ \mathcal{X}'(\Phi_0) \order{\phi}{4} + \f12 \mathcal{X}''(\Phi_0) \order{\phi}{2} \,\, \order{\phi}{2} \right]
+ \mathcal{O}{(6)} \,.
\es
\ea
For convenience, we denote the Taylor expansion coefficients, which are given by the values of the functions and their derivatives evaluated at the background value, in the form
$F \equiv \mathcal{F}(\Phi_0)$,
$F' \equiv \mathcal{F}'(\Phi_0)$,
$I_1 \equiv \mathcal{I}_1(\Phi_0)$,
$I_1' \equiv \mathcal{I}_1'(\Phi_0)$,
$I_1'' \equiv \mathcal{I}_1''(\Phi_0)$,
and similarly for all functions of the scalar field.
\subsection{PPN expansion of the metric tensors}
\label{ssec:ppnmetric}
In the next step, we assume that the Jordan frame metric, which governs the geodesic motion of test masses, is asymptotically flat, and can be expanded around a Minkowski vacuum solution in suitably chosen Cartesian coordinates. The expansion of the Jordan frame metric components up to the first post-Newtonian order is then given by
\begin{subequations}\label{eqn:metricjppn}
\begin{align}
g^{\mathfrak{J}}_{00} &= -1 + \order{h}{2}^{\mathfrak{J}}_{00} + \order{h}{4}^{\mathfrak{J}}_{00} + \mathcal{O}(6)\,,\\
g^{\mathfrak{J}}_{0i} &= \order{h}{3}^{\mathfrak{J}}_{0i} + \mathcal{O}(5)\,,\\
g^{\mathfrak{J}}_{ij} &= \delta_{ij} + \order{h}{2}^{\mathfrak{J}}_{ij} + \mathcal{O}(4)\,.
\end{align}
\end{subequations}
It can be shown that these are all the relevant and non-vanishing components.
A similar expansion of the Einstein frame metric \(g^{\mathfrak{E}}_{\mu\nu}\) can be defined as
\begin{subequations}\label{eqn:metriceppn}
\begin{align}
I_1g^{\mathfrak{E}}_{00} &= -1 + \order{h}{2}^{\mathfrak{E}}_{00} + \order{h}{4}^{\mathfrak{E}}_{00} + \mathcal{O}(6)\,,\\
I_1g^{\mathfrak{E}}_{0i} &= \order{h}{3}^{\mathfrak{E}}_{0i} + \mathcal{O}(5)\,,\\
I_1g^{\mathfrak{E}}_{ij} &= \delta_{ij} + \order{h}{2}^{\mathfrak{E}}_{ij} + \mathcal{O}(4)\,.
\end{align}
\end{subequations}
The factors of $I_1$ on the left-hand sides are required in order to satisfy \eqref{BETA equ: Einstein Jordan frame metric relation}.
The expansion coefficients in the two frames are then related by
\begin{subequations}
\begin{align}
\order{h}{2}^{\mathfrak{E}}_{00} &= \order{h}{2}^{\mathfrak{J}}_{00}
+ \frac{I_{1}'}{I_1}\order{\phi}{2}\,,\\
\order{h}{2}^{\mathfrak{E}}_{ij} &= \order{h}{2}^{\mathfrak{J}}_{ij}
- \frac{I_{1}'}{I_1}\order{\phi}{2} \delta_{ij}\,,\\
\order{h}{3}^{\mathfrak{E}}_{0i} &= \order{h}{3}^{\mathfrak{J}}_{0i}\,,\\
\order{h}{4}^{\mathfrak{E}}_{00} &= \order{h}{4}^{\mathfrak{J}}_{00}
+ \frac{I_{1}'}{I_1}\order{\phi}{4} + \frac{I_1I_{1}'' - 2I_{1}' I_{1}'}{2I_1^2}\order{\phi}{2} \, \order{\phi}{2}
- \frac{I_{1}'}{I_1}\order{\phi}{2} \,\order{h}{2}^{\mathfrak{J}}_{00}\,,
\end{align}
\end{subequations}
as one easily checks.
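For instance, expanding $\mathcal{I}_1(\Phi) = I_1 \left[ 1 + (I_{1}'/I_1) \order{\phi}{2} + \mathcal{O}(4) \right]$ in the relation \eqref{BETA equ: Einstein Jordan frame metric relation} yields, at the second velocity order,
\ba
g^{\mathfrak{J}}_{00} = \frac{\mathcal{I}_1(\Phi)}{I_1} \left( I_1 g^{\mathfrak{E}}_{00} \right)
= \left( 1 + \frac{I_{1}'}{I_1} \order{\phi}{2} \right) \left( -1 + \order{h}{2}^{\mathfrak{E}}_{00} \right) + \mathcal{O}(4)
= -1 + \order{h}{2}^{\mathfrak{E}}_{00} - \frac{I_{1}'}{I_1} \order{\phi}{2} + \mathcal{O}(4) \,,
\ea
which reproduces the first of these relations.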
Conversely, one finds the inverse relations
\begin{subequations}
\label{BETA.equ:metric E to J frame}
\begin{align}
\order{h}{2}^{\mathfrak{J}}_{00} &= \order{h}{2}^{\mathfrak{E}}_{00}
- \frac{I_{1}'}{I_1}\order{\phi}{2} \,,\\
\order{h}{2}^{\mathfrak{J}}_{ij} &= \order{h}{2}^{\mathfrak{E}}_{ij}
+ \frac{I_{1}'}{I_1}\order{\phi}{2} \delta_{ij}\,,\\
\order{h}{3}^{\mathfrak{J}}_{0i} &= \order{h}{3}^{\mathfrak{E}}_{0i}\,,\\
\order{h}{4}^{\mathfrak{J}}_{00} &= \order{h}{4}^{\mathfrak{E}}_{00}
- \frac{I_{1}'}{I_1}\order{\phi}{4} - \frac{I_{1}''}{2I_1}\order{\phi}{2} \, \order{\phi}{2}
+ \frac{I_{1}'}{I_1}\order{\phi}{2} \, \order{h}{2}^{\mathfrak{E}}_{00}\,.
\end{align}
\end{subequations}
\subsection{PPN expansion of the energy-momentum tensors}
\label{ssec:ppnenmom}
We now come to the PPN expansion of the energy-momentum tensors. Here we restrict ourselves to displaying the expansion of the invariant energy-momentum tensor in the Einstein frame, since this is the frame we will be using for solving the field equations. It is related to the invariant Jordan frame energy-momentum tensor by
\(T^{\mathfrak{E}}_{\mu\nu} = \mathcal{I}_1T^{\mathfrak{J}}_{\mu\nu}\).
Its PPN expansion follows from the standard PPN expansion of the energy-momentum tensor in the Jordan frame~\cite{BETA.will.book} and is given by
\begin{subequations}
\begin{align}
T^{\mathfrak{E}}_{00} &= I_1\rho\left(1 + \frac{2 I_{1}'}{I_1}\order{\phi}{2} - \order{h}{2}^{\mathfrak{E}}_{00} + v^2 + \Pi\right) + \mathcal{O}(6)\,,\\
T^{\mathfrak{E}}_{0i} &= - I_1\rho v_i + \mathcal{O}(5)\,,\\
T^{\mathfrak{E}}_{ij} &= I_1(\rho v_iv_j + p\delta_{ij}) + \mathcal{O}(6)\,.
\end{align}
\end{subequations}
Its trace, taken using the Einstein frame metric, has the PPN expansion
\begin{equation}
T^{\mathfrak{E}} = I_1^2\left(-\rho + 3p - \Pi\rho - 2 \frac{I_{1}'}{I_1}\rho\order{\phi}{2}\right) \,.
\end{equation}
Consequently, the trace-reversed energy-momentum tensor is given by
\begin{subequations}
\begin{align}
\bar{T}^{\mathfrak{E}}_{00} &= I_1\rho\left(\frac{1}{2} + \frac{I_{1}'}{I_1}\order{\phi}{2} - \frac{\order{h}{2}^{\mathfrak{E}}_{00}}{2} + v^2 + \frac{\Pi}{2} + \frac{3p}{2\rho}\right) + \mathcal{O}(6)\,,\\
\bar{T}^{\mathfrak{E}}_{0i} &= - I_1\rho v_i + \mathcal{O}(5)\,,\\
\bar{T}^{\mathfrak{E}}_{ij} &= I_1\rho\left[v_iv_j + \frac{\order{h}{2}^{\mathfrak{E}}_{ij}}{2} + \left(\frac{1}{2} + \frac{I_{1}'}{I_1}\order{\phi}{2} + \frac{\Pi}{2} - \frac{p}{2\rho}\right)\delta_{ij}\right] + \mathcal{O}(6)\,.
\end{align}
\end{subequations}
\subsection{Invariant Ricci tensor}
\label{ssec:ppnricci}
Finally, we come to the PPN expansion of the Ricci tensor of the invariant Einstein metric. We will do this in a particular gauge, which is determined by the gauge conditions
\bsub
\ba
h^{\mathfrak{E}}_{ij,j} - h^{\mathfrak{E}}_{0i,0} - \frac{1}{2}h^{\mathfrak{E}}_{jj,i} + \frac{1}{2}h^{\mathfrak{E}}_{00,i} = 0 \,,
\\
h^{\mathfrak{E}}_{ii,0} = 2h^{\mathfrak{E}}_{0i,i} \,,
\ea
\esub
which will simplify the calculation. In this gauge, the components of the Ricci tensor to the orders that will be required are given by
\bsub
\ba
\order{R}{2}^{\mathfrak{E}}_{00} &= -\frac{1}{2}\triangle\order{h}{2}^{\mathfrak{E}}_{00}\,,
\\
\order{R}{2}^{\mathfrak{E}}_{ij} &= -\frac{1}{2}\triangle\order{h}{2}^{\mathfrak{E}}_{ij}\,,
\\
\order{R}{3}^{\mathfrak{E}}_{0i} &= -\frac{1}{2}\left(\triangle\order{h}{3}^{\mathfrak{E}}_{0i} + \frac{1}{2}\order{h}{2}^{\mathfrak{E}}_{jj,0i} - \order{h}{2}^{\mathfrak{E}}_{ij,0j}\right)\,,
\\
\order{R}{4}^{\mathfrak{E}}_{00} &= -\frac{1}{2}\triangle\order{h}{4}^{\mathfrak{E}}_{00} + \order{h}{3}^{\mathfrak{E}}_{0i,0i} - \frac{1}{2}\order{h}{2}^{\mathfrak{E}}_{ii,00} + \frac{1}{2}\order{h}{2}^{\mathfrak{E}}_{00,i}\left(\order{h}{2}^{\mathfrak{E}}_{ij,j} - \frac{1}{2}\order{h}{2}^{\mathfrak{E}}_{jj,i} - \frac{1}{2}\order{h}{2}^{\mathfrak{E}}_{00,i}\right) + \frac{1}{2}\order{h}{2}^{\mathfrak{E}}_{ij}\order{h}{2}^{\mathfrak{E}}_{00,ij}\,.
\ea
\esub
We have now expanded all dynamical quantities appearing in the field equations into velocity orders. By inserting these expansions into the field equations, we can perform a similar expansion of the field equations, and decompose them into different velocity orders. This will be done in the next section.
\section{Expanded field equations}
\label{sec Expanded field equations}
We will now make use of the PPN expansions displayed in the previous section and insert them into the field equations. This yields a system of equations expressed in terms of the metric and scalar field perturbations that we aim to solve for. We start with the zeroth order field equations in section~\ref{ssec:eqns0}, which are the equations for the Minkowski background, and will give us conditions on the invariant potential \(\mathcal{I}_2\). We then proceed with the second order metric equation in section~\ref{ssec:eqnsh2}, the second order scalar equation in section~\ref{ssec:eqnsp2}, the third order metric equation in section~\ref{ssec:eqnsh3}, the fourth order metric equation in section~\ref{ssec:eqnsh4} and finally the fourth order scalar equation in section~\ref{ssec:eqnsp4}.
\subsection{Zeroth order metric and scalar equations}
\label{ssec:eqns0}
At the zeroth velocity order, the metric equations \eqref{equ: full metric field equation E-frame} are given by
\begin{equation}\label{eqn:h0mn}
-\kappa^2\frac{I_2}{I_1}\eta_{\mu\nu} = 0\,,
\end{equation}
which is satisfied only for \(I_2 = 0\), and hence restricts the choice of the invariant potential \(\mathcal{I}_2\).
At the same velocity order, the scalar equation reads
\begin{equation}\label{eqn:phi0}
-\frac{\kappa^2}{2}I_{2}' = 0 \,,
\end{equation}
and is solved only by \(I_{2}' = 0\), so that we obtain another restriction on the allowed potential $\mathcal{I}_2$. In the following, we will only consider theories in which these conditions on $\mathcal{I}_2$ are satisfied.
\subsection{Second order metric $h^{\mathfrak{E}}_{00}$ and $h^{\mathfrak{E}}_{ij}$}
\label{ssec:eqnsh2}
At the second velocity order we find the $00$-metric field equation
\begin{equation}
\order{R}{2}^{\mathfrak{E}}_{00}
- \kappa^2\frac{I_2}{I_1}\order{h}{2}^{\mathfrak{E}}_{00}
+ \kappa^2\frac{I_{2}'}{I_1}\order{\phi}{2}^A
= \frac{\kappa^2}{2}I_1\rho \,.
\end{equation}
Inserting the expansion of the Ricci tensor shown in section~\ref{ssec:ppnricci} and using \(I_2 = 0\) and \(I_{2}' = 0\) we solve for \(\order{h}{2}^{\mathfrak{E}}_{00}\) and find the Poisson equation
\begin{equation}
\label{eqn:h200}
\triangle \order{h}{2}^{\mathfrak{E}}_{00} = -\kappa^2I_1\rho = -8\pi G\rho\,,
\end{equation}
where we introduced the Newtonian gravitational constant
\ba
\label{equ: Newtonian gravitational constant}
G = \frac{\kappa^2I_1}{8\pi}\,.
\ea
The $ij$-equations at the same order are given by
\ba
\order{R}{2}^{\mathfrak{E}}_{ij}
- \kappa^2\frac{I_2}{I_1}\order{h}{2}^{\mathfrak{E}}_{ij}
- \kappa^2\frac{I_{2}'}{I_1}\order{\phi}{2}^A\delta_{ij}
= \frac{\kappa^2}{2}I_1\rho\delta_{ij} \,,
\ea
which similarly reduces to
\ba
\label{eqn:h2ij}
\triangle\order{h}{2}^{\mathfrak{E}}_{ij} = -\kappa^2I_1\rho\delta_{ij} = -8\pi G\rho\delta_{ij}\,.
\ea
Note that the diagonal components $i=j$ satisfy the same equation~\eqref{eqn:h200} as \(\order{h}{2}^{\mathfrak{E}}_{00}\).
\subsection{Second order scalar field $\phi^A$}
\label{ssec:eqnsp2}
The second order scalar field equation is given by
\ba
I_1 F \triangle\order{\phi}{2}
- \frac{\kappa^2}{2}I_{2}''\order{\phi}{2}
= \frac{\kappa^2}{4}I_1I_{1}'\rho\,.
\ea
It is convenient to introduce the scalar field mass $m$ by
\ba
\label{equ: scalar mass}
m^2 &\equiv \frac{\kappa^2}{2} \frac{1}{I_1 F} I_{2}''
\ea
and
\ba
k &= \frac{\kappa^2}{4} \frac{1}{F} I_{1}' \,.
\ea
We assume that $m^2 > 0$, since otherwise the scalar field would be a tachyon.
Then, the second order scalar field equation takes the form of a screened Poisson equation,
\ba
\label{eqn:phi2}
\triangle\order{\phi}{2} - m^2 \order{\phi}{2} = k \rho\,.
\ea
We will see that $m$ can be interpreted as the mass of the scalar field, while $k$ is a measure of the non-minimal coupling of the scalar field at the linear level. We finally remark that \(m\) is an invariant, while \(k\) transforms as a tangent vector to the real line of scalar field values~\cite{BETA.JarvInvariants2015}.
\subsection{Third order metric $h^{\mathfrak{E}}_{0i}$}
\label{ssec:eqnsh3}
The third order metric equation reads
\begin{equation}
\order{R}{3}^{\mathfrak{E}}_{0i} - \kappa^2\frac{I_2}{I_1}\order{h}{3}^{\mathfrak{E}}_{0i} = -\kappa^2I_1\rho v_i \,.
\end{equation}
Thus we can solve for the third order metric perturbation and obtain another Poisson equation,
\begin{equation}\label{eqn:h30i}
\triangle\order{h}{3}^{\mathfrak{E}}_{0i} = \order{h}{2}^{\mathfrak{E}}_{ij,0j} - \frac{1}{2}\order{h}{2}^{\mathfrak{E}}_{jj,0i} + 2\kappa^2I_1\rho v_i\,.
\end{equation}
Note that the source terms on the right hand side of this equation are given by time derivatives of other metric components and moving source matter, and hence vanish for static solutions and non-moving sources.
\subsection{Fourth order metric $h^{\mathfrak{E}}_{00}$}
\label{ssec:eqnsh4}
The fourth order metric field equation reads
\begin{equation}
\bs
\order{R}{4}^{\mathfrak{E}}_{00}
- \kappa^2\frac{I_2}{I_1}\order{h}{4}^{\mathfrak{E}}_{00}
+ \kappa^2\frac{I_{2}'}{I_1}\order{\phi}{4}
- \kappa^2\frac{I_{2}'}{I_1}\order{\phi}{2} \; \order{h}{2}^{\mathfrak{E}}_{00}
+ \frac{\kappa^2}{2}\frac{I_{2}''}{I_1}\order{\phi}{2} \; \order{\phi}{2}
\\
= \frac{\kappa^2}{2}I_1\rho\left(2\frac{I_{1}'}{I_1}\order{\phi}{2}
- \order{h}{2}^{\mathfrak{E}}_{00}
+ 2v^2 + \Pi + 3\frac{p}{\rho}\right)\,.
\es
\end{equation}
Solving for the fourth order metric perturbation then yields
\begin{equation}\label{eqn:h400}
\begin{split}
\triangle\order{h}{4}^{\mathfrak{E}}_{00}
&= 2\order{h}{3}^{\mathfrak{E}}_{0i,0i}
- \order{h}{2}^{\mathfrak{E}}_{ii,00}
+ \order{h}{2}^{\mathfrak{E}}_{00,i}\left(\order{h}{2}^{\mathfrak{E}}_{ij,j}
- \frac{1}{2}\order{h}{2}^{\mathfrak{E}}_{jj,i}
- \frac{1}{2}\order{h}{2}^{\mathfrak{E}}_{00,i}\right)
+ \order{h}{2}^{\mathfrak{E}}_{ij}\order{h}{2}^{\mathfrak{E}}_{00,ij}\\
&\phantom{=}+ \kappa^2\left(\frac{I_{2}''}{I_1}\order{\phi}{2} \; \order{\phi}{2}
- 2I_{1}'\order{\phi}{2} \rho
+ I_1\order{h}{2}^{\mathfrak{E}}_{00}\rho
- 2 I_1 v^2 \rho - I_1 \Pi \rho - 3 I_1 p \right)\,.
\end{split}
\end{equation}
This equation also has the form of a Poisson equation.
\subsection{Fourth order scalar field $\phi^A$}
\label{ssec:eqnsp4}
Finally, for the scalar field we have the fourth order equation
\begin{multline}
I_1 F \triangle\order{\phi}{4}
- I_1 F \order{\phi}{2}_{,00}
- \frac{\kappa^2}{2}I_{2}''\order{\phi}{4}
- I_1 F \order{\phi}{2}_{,ij}\order{h}{2}^{\mathfrak{E}}_{ij}
+ I_1 F' \triangle\order{\phi}{2} \; \order{\phi}{2}
\\
+ \frac{I_1}{2} F' \order{\phi}{2}_{,i}\order{\phi}{2}_{,i}
+ \frac{I_1}{2} F \order{\phi}{2}_{,i}\left(2\order{h}{2}^{\mathfrak{E}}_{ij,j}
- \order{h}{2}^{\mathfrak{E}}_{jj,i}
+ \order{h}{2}^{\mathfrak{E}}_{00,i}\right)
- \frac{\kappa^2}{4}I_{2}'''\order{\phi}{2} \; \order{\phi}{2} \\
= -\frac{\kappa^2}{4}\left[3I_1I_{1}'p
- I_1 I_1' \Pi\rho
- (I_{1}' I_{1}'
+ I_1 I_1'')\order{\phi}{2} \rho \right] \,.
\end{multline}
Solving for the fourth order scalar perturbation then yields
\begin{equation}\label{eqn:phi4}
\begin{split}
\triangle\order{\phi}{4}
- m^2 \order{\phi}{4}
&= \order{\phi}{2}_{,00}
+ \order{\phi}{2}_{,ij} \order{h}{2}^{\mathfrak{E}}_{ij}
- \frac{1}{2}\order{\phi}{2}_{,i}\left(2\order{h}{2}^{\mathfrak{E}}_{ij,j}
- \order{h}{2}^{\mathfrak{E}}_{jj,i}
+ \order{h}{2}^{\mathfrak{E}}_{00,i}\right)
- \frac{F'}{F} \left[ \triangle\order{\phi}{2} \; \order{\phi}{2}
+ \frac{1}{2} \order{\phi}{2}_{,i} \order{\phi}{2}_{,i}\right]\\
&\phantom{=}+ \frac{\kappa^2}{4} \f1F \left[\frac{I_{2}'''}{I_1}
\order{\phi}{2} \; \order{\phi}{2}
- 3 I_{1}' p + I_{1}' \Pi \rho + \left(\frac{({I_1}')^2}{I_1}
+ {I_1}'' \right) \order{\phi}{2} \rho\right]\,.
\end{split}
\end{equation}
This is again a screened Poisson equation, which contains the same mass parameter \(m\) as the second order scalar field equation~\eqref{eqn:phi2}.
These are all necessary equations in order to determine the relevant perturbations of the invariant Einstein frame metric and the scalar field. We will solve them in the next section, under the assumption of a massive scalar field, \(m > 0\), and a static, homogeneous, spherically symmetric source mass.
\section{Massive field and spherical source}
\label{sec Massive field and spherical source}
In the previous section we derived the gravitational field equations up to the required post-Newtonian order. We will now solve these field equations for the special case of a homogeneous, non-rotating spherical mass distribution. This mass distribution, as well as the corresponding ansatz for the PPN metric perturbation and the PPN parameters, are defined in section~\ref{ssec:homosphere}. We then solve the field equations by increasing order. The second order equations for the invariant Einstein frame metric and the scalar field are solved in sections~\ref{ssec:solh2} and~\ref{ssec:solp2}, while the corresponding fourth order equations are solved in sections~\ref{ssec:solh4} and~\ref{ssec:solp4}. From these solutions we read off the effective gravitational constant as well as the PPN parameters \(\gamma\) and \(\beta\) in section~\ref{sec PPN parameters}. A few limiting cases of this result are discussed in section~\ref{ssec:limits}.
\subsection{Ansatz for homogeneous, spherical mass source}
\label{ssec:homosphere}
In the following we consider a static sphere of radius $R$ with homogeneous rest mass density, pressure and specific internal energy, surrounded by vacuum. Its density \(\rho\), pressure \(p\) and specific internal energy \(\Pi\) are then given by
\ba\label{eqn:homosource}
\rho(r) =
\begin{cases}
\rho_0 & \text{if } r \leq R\\
0 & \text{if } r > R\\
\end{cases} \,,
\quad
p(r) =
\begin{cases}
p_0 & \text{if } r \leq R\\
0 & \text{if } r > R\\
\end{cases} \,,
\quad
\Pi(r) =
\begin{cases}
\Pi_0 & \text{if } r \leq R\\
0 & \text{if } r > R\\
\end{cases} \,,
\ea
where \(r\) is the radial coordinate and we use isotropic spherical coordinates. We further assume that the mass source is non-rotating and at rest with respect to our chosen coordinate system, so that the velocity \(v^i\) vanishes.
For the metric perturbation corresponding to this matter distribution, which is likewise spherically symmetric, we now use the ansatz
\begin{subequations}
\label{BETA.equ:PPN metric ansatz}
\begin{align}
\label{BETA.equ:PPN metric ansatz h200}
\order{h}{2}^{\mathfrak{J}}_{00} &= 2 G_\text{eff} U
\,,\\
\label{BETA.equ:PPN metric ansatz h2ij}
\order{h}{2}^{\mathfrak{J}}_{ij} &= 2 \gamma G_\text{eff} U \delta_{ij}
\,,\\
\label{BETA.equ:PPN metric ansatz h30i}
\order{h}{3}^{\mathfrak{J}}_{0i} &= 0
\,,\\
\label{BETA.equ:PPN metric ansatz h400}
\order{h}{4}^{\mathfrak{J}}_{00} &= -2 \beta G_\text{eff}^2 U^2
+ 2 G_\text{eff}^2 (1+3 \gamma-2 \beta) \Phi_2
+ G_\text{eff}(2\Phi_3 +6 \gamma \Phi_4)
\,.
\end{align}
\end{subequations}
Here \(U, \Phi_2, \Phi_3, \Phi_4\) denote the standard PPN potentials, which satisfy the Poisson equations~\cite{BETA.will.book}
\bsub
\label{BETA.equ: Poisson equ potentials}
\ba
\label{BETA.equ: Poisson equ U}
\triangle U &= - 4 \pi \rho \,,
\\
\label{BETA.equ: Poisson equ Phi_2}
\triangle \Phi_2 &= - 4 \pi U \rho \,,
\\
\label{BETA.equ: Poisson equ Phi_3}
\triangle \Phi_3 &= - 4 \pi \rho \Pi \,,
\\
\label{BETA.equ: Poisson equ Phi_4}
\triangle \Phi_4 &= - 4 \pi p \,.
\ea
\esub
For the homogeneous, spherically symmetric mass source considered here, they are given by
\bsub
\ba
U(r)
&= \begin{cases}
- \frac{M}{2 R^3}(r^2 - 3 R^2)
& \text{if } r \leq R
\\
\frac{M}{r}
& \text{if } r > R
\\
\end{cases} \,,
\\
\Phi_2 &=
\begin{cases}
\frac{3 M^2}{40 R^6}(r^2 - 5 R^2)^2 & \text{if } r \leq R
\\
\frac{6 M^2}{5 R r} & \text{if } r > R
\\
\end{cases} \,,
\\
\Phi_3 &=
\begin{cases}
-\frac{M \Pi_0}{2 R^3} (r^2 - 3 R^2) & \text{if } r \leq R
\\
\frac{M \Pi_0}{r} & \text{if } r > R
\\
\end{cases} \,,
\\
\Phi_4 &=
\begin{cases}
-\frac{2\pi p_0}{3} (r^2 - 3 R^2) & \text{if } r \leq R
\\
\frac{4 \pi p_0 R^3}{3 r} & \text{if } r > R
\\
\end{cases} \,,
\ea
\esub
where \(M = \frac{4\pi}{3}\rho_0R^3\) is the total mass. The metric ansatz~\eqref{BETA.equ:PPN metric ansatz} further depends on the effective gravitational constant \(G_{\text{eff}}\) and the PPN parameters \(\gamma\) and \(\beta\). These quantities, which are sufficient to describe the post-Newtonian limit of a fully conservative theory, i.e., a theory without preferred location or preferred frame effects, are determined by the particular theory under consideration. Note that these parameters are, in general, not constant, if one considers a massive scalar field, as we will do in the following.
We finally remark that in the ansatz~\eqref{BETA.equ:PPN metric ansatz} we have used the perturbations of the invariant Jordan frame metric \(g^{\mathfrak{J}}_{\mu\nu}\) defined in~\eqref{BETA equ: Jordan frame metric}. This choice is related to the fact that the matter coupling, and hence the geodesic motion of test particles from which the PPN parameters are determined, is given by \(g^{\mathfrak{J}}_{\mu\nu}\).
\subsection{Second order metric}
\label{ssec:solh2}
We start by solving the metric field equations at the second velocity order. Its temporal component~\eqref{eqn:h200} takes the form
\ba
\triangle \order{h}{2}^{\mathfrak{E}}_{00}
= \begin{cases}
-\frac{3 I_1 \kappa^2 M}{4 \pi R^3}
& \text{if } r \leq R
\\
0
& \text{if } r > R
\\
\end{cases}\,.
\ea
The solution is given by
\ba
\order{h}{2}^{\mathfrak{E}}_{00}
= 2GU
= \begin{cases}
- \frac{I_1 \kappa^2 M}{8 \pi R^3}(r^2 - 3 R^2)
& \text{if } r \leq R
\\
\frac{I_1 \kappa^2 M}{4 \pi r}
& \text{if } r > R
\\
\end{cases} \,.
\ea
Since the spatial metric equations~\eqref{eqn:h2ij} at the same order are identical to the temporal equation, except for a Kronecker symbol, their solution immediately follows as
\ba
\order{h}{2}^{\mathfrak{E}}_{ij} = \order{h}{2}^{\mathfrak{E}}_{00} \delta_{ij} \,.
\ea
\subsection{Second order scalar}
\label{ssec:solp2}
We then continue with the scalar field equation~\eqref{eqn:phi2} at the second velocity order, which reads
\ba
\left(\triangle - m^2 \right) \order{\phi}{2}
= \begin{cases}
\frac{3 I_1' \kappa^2 M}{16 \pi F R^3}
& \text{if } r \leq R
\\
0
& \text{if } r > R
\\
\end{cases} \,.
\ea
The solution is then given by
\ba
\order{\phi}{2}
= \begin{cases}
-\frac{3 I_1' \kappa^2 M}{16 \pi F m^2 R^3}+\frac{3 e^{-m R} I_1' \kappa^2 M (1+m R)}{16 \pi F m^3 R^3} \frac{\sinh (m r)}{r}
& \text{if } r \leq R
\\
-\frac{3 \kappa^2 M I_1' \left( e^{-m R}(1+m R) + e^{m R} (-1+m R)\right)}{32 \pi F m^3 R^3} \frac{e^{-m r}}{r}
& \text{if } r > R
\\
\end{cases} \,.
\ea
Note that outside the source, the field is proportional to $\frac{e^{-m r}}{r}$, i.e., it has the form of a Yukawa potential. Therefore, the parameter $m$ can be interpreted as the mass of the scalar field.
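As a simple consistency check of this solution, one may solve the radial screened Poisson equation numerically and compare with the displayed closed form. The following Python sketch is illustrative only; the unit choices \(\kappa = I_1' = F = M = R = 1\) are arbitrary, and we use the standard substitution \(w = r\,\order{\phi}{2}\) together with a finite-difference solve:
\begin{verbatim}
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Unit choices (assumption, for illustration): kappa = I_1' = F = M = R = 1.
m, R = 1.0, 1.0
k3 = 3.0 / (16.0 * np.pi)               # 3 kappa^2 I_1' M / (16 pi F)

N, rmax = 4000, 30.0
r = np.linspace(rmax / N, rmax, N)
h = r[1] - r[0]

# With w = r*phi, (Laplacian - m^2) phi = S(r) becomes w'' - m^2 w = r S(r),
# with w(0) = 0 (regularity at the origin) and w -> 0 at large r (Yukawa decay).
S = np.where(r <= R, k3 / R**3, 0.0)
Aop = diags([np.full(N - 1, 1.0 / h**2),
             -2.0 / h**2 - m**2 * np.ones(N),
             np.full(N - 1, 1.0 / h**2)], offsets=[-1, 0, 1], format='csc')
phi = spsolve(Aop, r * S) / r

# Exterior coefficient of the analytic solution displayed above.
C = -k3 * (np.exp(-m*R)*(1 + m*R) + np.exp(m*R)*(m*R - 1)) / (2 * m**3 * R**3)
mask = r > 2 * R
print(np.max(np.abs(phi[mask] - C * np.exp(-m*r[mask]) / r[mask])))  # ~ 1e-4
\end{verbatim}
The residual is set by the grid spacing, confirming the Yukawa form of the exterior solution.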
\subsection{Fourth order metric}
\label{ssec:solh4}
Since the only third order equations are trivially solved by \(\order{h}{3}^{\mathfrak{E}}_{0i} = 0\) in the case of a static, non-moving matter source, we continue directly with the metric field equation at the fourth velocity order. As it is rather lengthy, we give here only its generic form, while all coefficients that appear are stated explicitly in the appendix. This generic form reads
\ba
\label{BETA equ hE004 equation}
\triangle \order{h}{4}^{\mathfrak{E}}_{00}
= \begin{cases}
A_{h400}^{I1}
+\frac{A_{h400}^{I2}}{r^2}
+A_{h400}^{I3} r^2
+\frac{A_{h400}^{I4} e^{-m r}}{r}
+\frac{A_{h400}^{I5} e^{-2 m r}}{r^2}
+\frac{A_{h400}^{I6} e^{m r}}{r}
+\frac{A_{h400}^{I7} e^{2 m r}}{r^2}
& \text{if } r \leq R
\\
\frac{A_{h400}^{E1}}{r^4}
+\frac{A_{h400}^{E2} e^{-2 m r}}{r^2}
& \text{if } r > R
\\
\end{cases}\,.
\ea
Its solution is likewise lengthy, so we proceed in the same fashion and display only its generic form here, which is given by
\bsub
\ba
\bs
\label{BETA equ hE004 solution int}
\order{h}{4}^{\mathfrak{E}}_{00} (r \leq R)
&= B_{h400}^{I1}
+B_{h400}^{I2} r^2
+B_{h400}^{I3} r^4
+\frac{B_{h400}^{I4} e^{-m r}}{r}
+\frac{B_{h400}^{I5} e^{-2 m r}}{r}
+\frac{B_{h400}^{I6} e^{m r}}{r}
\\
&\phantom{=}+\frac{B_{h400}^{I7} e^{2 m r}}{r}
+B_{h400}^{I8} \mathrm{Ei}(-2 m r)
+B_{h400}^{I9} \mathrm{Ei}(2 m r)
+B_{h400}^{I10} \ln\left(\frac{r}{R}\right) \,,
\es
\\
\label{BETA equ hE004 solution ext}
\order{h}{4}^{\mathfrak{E}}_{00} (r > R)
&=\frac{B_{h400}^{E1}}{r}
+\frac{B_{h400}^{E2}}{r^2}
+\frac{B_{h400}^{E3} e^{-2 m r}}{r}
+B_{h400}^{E4} \mathrm{Ei}(-2 m r) \,,
\ea
\esub
where $\mathrm{Ei}$ is the exponential integral defined as
\ba
\mathrm{Ei}(x) = -\fint_{-x}^{\infty}\frac{e^{-t}}{t}dt\,,
\ea
with $\fint$ denoting the Cauchy principal value of the integral. The values of the coefficients can be found in the appendix \ref{app coefficients fourth order metric}.
Note that $B_{h400}^{I8}=B_{h400}^{I9}$ and thus the exponential integral terms can be written more compactly as
\ba
B_{h400}^{I8} \mathrm{Ei}(-2 m r)+B_{h400}^{I9} \mathrm{Ei}(2 m r)
= 2 B_{h400}^{I8} \mathrm{Chi}(2 m r) \,,
\ea
where we used $\mathrm{Chi}$ for the hyperbolic cosine integral
\ba
\mathrm{Chi}(x) = \frac{\mathrm{Ei}(x) + \mathrm{Ei}(-x)}{2} = \upgamma + \ln x + \int_0^x\frac{\cosh t - 1}{t}dt\,,
\ea
and $\upgamma$ is Euler's constant.
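For numerical evaluation, both special functions are available in standard libraries; for instance, in Python the identity above can be checked directly with SciPy (a minimal sketch):
\begin{verbatim}
import numpy as np
from scipy.special import expi, shichi   # expi = Ei; shichi returns (Shi, Chi)

x = np.linspace(0.1, 5.0, 50)
chi_from_ei = 0.5 * (expi(x) + expi(-x))
shi, chi = shichi(x)
print(np.max(np.abs(chi_from_ei - chi)))  # ~ 1e-16, machine precision
\end{verbatim}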
\subsection{Fourth order scalar}
\label{ssec:solp4}
The final equation we must solve is the scalar field equation at the fourth velocity order, since the fourth order scalar field \(\order{\phi}{4}\) enters the Jordan frame metric perturbation \(\order{h}{4}^{\mathfrak{J}}_{00}\), from which the PPN parameter \(\beta\) is read off. This equation is similarly lengthy, and so also here we restrict ourselves to displaying only the generic form, which reads
\bsub
\ba
\bs
\label{BETA equ phi4 equation int}
\left( \triangle - m^2 \right) \order{\phi}{4}(r \leq R) &=
A_{\phi 4}^{I1}
+\frac{A_{\phi 4}^{I2}}{r^2}
+\frac{A_{\phi 4}^{I3}}{r^4}
+\frac{A_{\phi 4}^{I4} e^{-m r}}{r}
+\frac{A_{\phi 4}^{I5} e^{-2 m r}}{r^2}
+\frac{A_{\phi 4}^{I6} e^{-2 m r}}{r^3}\\
&\phantom{=}+\frac{A_{\phi 4}^{I7} e^{-2 m r}}{r^4}
+\frac{A_{\phi 4}^{I8} e^{m r}}{r}
+\frac{A_{\phi 4}^{I9} e^{2 m r}}{r^2}
+\frac{A_{\phi 4}^{I10} e^{2 m r}}{r^3}
+\frac{A_{\phi 4}^{I11} e^{2 m r}}{r^4}\\
&\phantom{=}+A_{\phi 4}^{I12} e^{-m r} r
+A_{\phi 4}^{I13} e^{m r} r \,,
\es
\\
\bs
\label{BETA equ phi4 equation ext}
\left( \triangle - m^2 \right) \order{\phi}{4}(r > R) &=
\frac{A_{\phi 4}^{E1} e^{-m r}}{r^2}
+\frac{A_{\phi 4}^{E2} e^{-2 m r}}{r^2}
+\frac{A_{\phi 4}^{E3} e^{-2 m r}}{r^3}
+\frac{A_{\phi 4}^{E4} e^{-2 m r}}{r^4}\,.
\es
\ea
\esub
The generic form of the solution then follows as
\bsub
\ba
\bs
\label{BETA equ phi4 solution int}
\phi_4(r \leq R)
&= B_{\phi 4}^{I1}
+\frac{B_{\phi 4}^{I2}}{r^2}
+\frac{B_{\phi 4}^{I3} e^{-m r}}{r}
+\frac{B_{\phi 4}^{I4} e^{-2 m r}}{r^2}
+\frac{B_{\phi 4}^{I5} e^{m r}}{r}
+\frac{B_{\phi 4}^{I6} e^{2 m r}}{r^2}
+B_{\phi 4}^{I7} e^{-m r}\\
&\phantom{=}+B_{\phi 4}^{I8} e^{-m r} r
+B_{\phi 4}^{I9} e^{-m r} r^2
+B_{\phi 4}^{I10} e^{m r}
+B_{\phi 4}^{I11} e^{m r} r
+B_{\phi 4}^{I12} e^{m r} r^2\\
&\phantom{=}+\frac{B_{\phi 4}^{I13} e^{-m r} \mathrm{Ei}(-m r)}{r}
+\frac{B_{\phi 4}^{I14} e^{m r} \mathrm{Ei}(-m r)}{r}
+\frac{B_{\phi 4}^{I15} e^{m r} \mathrm{Ei}(-3 m r)}{r}\\
&\phantom{=}+\frac{B_{\phi 4}^{I16} e^{m r} \mathrm{Ei}(m r)}{r}
+\frac{B_{\phi 4}^{I17} e^{-m r} \mathrm{Ei}(m r)}{r}
+\frac{B_{\phi 4}^{I18} e^{-m r} \mathrm{Ei}(3 m r)}{r} \,,
\es
\\
\bs
\label{BETA equ phi4 solution ext}
\phi_4(r > R) &=
\frac{B_{\phi 4}^{E1} e^{-m r}}{r}
+\frac{B_{\phi 4}^{E2} e^{-2 m r}}{r^2}
+\frac{B_{\phi 4}^{E3} e^{-m r} \mathrm{Ei}(-m r)}{r}
+\frac{B_{\phi 4}^{E4} e^{m r} \mathrm{Ei}(-2 m r)}{r}\\
&\phantom{=}+\frac{B_{\phi 4}^{E5} e^{m r} \mathrm{Ei}(-3 m r)}{r}
+\frac{B_{\phi 4}^{E6} e^{-m r} \ln\left(\frac{r}{R}\right)}{r} \,.
\es
\ea
\esub
The coefficients can be found in the appendix \ref{app coefficients fourth order scalar}.
\subsection{PPN parameters}
\label{sec PPN parameters}
We now have solved the field equations which determine all terms that enter the Jordan frame metric, and hence contribute to the PPN parameters \(\gamma\) and \(\beta\). The Jordan frame metric is then obtained by inserting the solutions obtained before into the relation~\eqref{BETA.equ:metric E to J frame} between the different invariant metrics.
Using the metric ansatz~\eqref{BETA.equ:PPN metric ansatz h200} we find the effective gravitational `constant'
\ba
G_\text{eff}(r)
= \frac{\order{h}{2}^{\mathfrak{J}}_{00}}{2 U}
= \begin{cases}
G \left[ 1 + 3
\frac{\sinh(mr)(1+m R)e^{-mR} - 2 m r }{(2\omega + 3) m^3 r (r^2 - 3 R^2)} \right]
& \text{if } r \leq R
\\
G \left[ 1 + 3
\frac{ mR\cosh(mR) - \sinh(mR) }{(2\omega + 3) m^3 R^3}e^{-mr} \right]
& \text{if } r > R
\\
\end{cases}\,.
\ea
Here we have introduced the abbreviation
\ba
\omega = 2F\frac{I_1^2}{I_1'^2} - \frac{3}{2}\,,
\ea
which is invariant under reparametrizations of the scalar field and chosen such that it agrees with the parameter $\omega$ in the case of Jordan-Brans-Dicke theory~\cite{BETA.Jordan:1959eg,BETA.Brans:1961sx}. In the next step, the PPN parameter $\gamma$ is obtained from the metric ansatz~\eqref{BETA.equ:PPN metric ansatz h2ij} giving
\ba
\gamma(r) = \frac{\order{h}{2}^{\mathfrak{J}}_{ii}}{2 G_\text{eff} U} =
\begin{cases}
1+\frac{12 \left[2 e^{m (r+R)} m r-\left(-1+e^{2 m r}\right) (1+m R)\right]}{6 \left(-1+e^{2 m r}\right) (1+m R)+2 e^{m (r+R)} m r \left[-6 + (2\omega + 3) m^2 \left(r^2-3 R^2\right)\right]}
& \text{if } r \leq R
\\
1-\left(\frac{1}{2}+\frac{(2\omega + 3) m^3 R^3 e^{m r}}{6 [m R \cosh (m R)-\sinh (m R)]}\right)^{-1}
& \text{if } r > R
\\
\end{cases} \,.
\ea
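Both closed-form expressions are straightforward to evaluate numerically. The following Python sketch (the parameter values are arbitrary illustrations, not fits to data) implements the exterior branches of \(G_\text{eff}(r)\) and \(\gamma(r)\):
\begin{verbatim}
import numpy as np

def geff_ext(r, R, m, omega, G=1.0):
    """Exterior effective gravitational 'constant', in units of G."""
    return G * (1.0 + 3.0 * (m*R*np.cosh(m*R) - np.sinh(m*R))
                / ((2*omega + 3) * m**3 * R**3) * np.exp(-m*r))

def gamma_ext(r, R, m, omega):
    """Exterior PPN parameter gamma(r)."""
    denom = 0.5 + (2*omega + 3) * m**3 * R**3 * np.exp(m*r) \
            / (6.0 * (m*R*np.cosh(m*R) - np.sinh(m*R)))
    return 1.0 - 1.0 / denom

# Illustrative values (assumption): omega = 1, R = 1, r = 5.
for m in (0.01, 1.0, 10.0):
    print(m, geff_ext(5.0, 1.0, m, 1.0), gamma_ext(5.0, 1.0, m, 1.0))
\end{verbatim}
For small \(m\) the output approaches \(\gamma \to (\omega+1)/(\omega+2) = 2/3\), while for large \(m\) both quantities approach their GR values, in agreement with the limits discussed below.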
Finally, the PPN parameter $\beta$ is obtained from the ansatz~\eqref{BETA.equ:PPN metric ansatz h400}, and hence can be obtained from
\ba
\beta(r)
= -\frac{\order{h}{4}^{\mathfrak{J}}_{00}
- 2 G_\text{eff}[ (1+3\gamma)G_\text{eff}\Phi_2 + \Phi_3 + 3 \gamma \Phi_4 ]}{2 G_\text{eff}^2 (U^2 + 2 \Phi_2)} \,.
\ea
Since the solution for \(\beta\) inside the source is even lengthier and of little practical relevance, we omit it here. The solution outside the source, $r>R$, takes the generic form
\ba
\bs
\label{BETA.equ: beta}
\beta(r>R)
&= \left[ \left(\frac{1}{r^2} + \f1r \frac{12}{5R} \right) \left(1 + e^{-m r} \frac{C_{\beta}^{E4}}{2} \right)^2 \right]^{-1}
\Bigg[\frac{C_{\beta}^{E1}}{r}
+ \frac{C_{\beta}^{E2}}{r^2}
+ C_{\beta}^{E3} \frac{e^{-m r}}{r}
+ C_{\beta}^{E4} \frac{e^{-m r}}{r^2} \\
&\qquad\qquad+ C_{\beta}^{E5} \frac{e^{-2 m r}}{r}
+ C_{\beta}^{E6} \frac{e^{-2 m r}}{r^2}
+ C_{\beta}^{E7} \frac{e^{- m r}}{r} \mathrm{Ei}{(-m r)}
+ C_{\beta}^{E8} \frac{e^{m r}}{r} \mathrm{Ei}{(-2 m r)}\\
&\qquad\qquad+ C_{\beta}^{E9} \frac{e^{m r}}{r} \mathrm{Ei}{(-3 m r)}
+ C_{\beta}^{E10} \mathrm{Ei}{(-2 m r)}
+ C_{\beta}^{E11} \frac{e^{-m r}}{r} \ln\left(\frac{r}{R}\right)
\Bigg] \,.
\es
\ea
The values of the coefficients can be found in the appendix \ref{app PPN Beta}, where we further introduce the abbreviations
\ba
\sigma = \frac{2F(I_1I_1'' + I_1'^2) - F'I_1I_1'}{2F^2I_1^2}\,, \quad \mu = \kappa^2I_1'\frac{2FI_2''' - 3F'I_2''}{4F^3I_1^2}\,.
\ea
Both $\gamma$ and $\beta$ depend only on the parameters $m, \omega, \mu, \sigma$ of the theory, which are invariant both under conformal transformations and redefinitions of the scalar field, and on the radius of the sphere $R$. As expected, they are independent of $M,\Pi_0,p_0$, which are absorbed into the metric potentials and characterize only the source.
\subsection{Limiting cases}
\label{ssec:limits}
We finally discuss a number of physically relevant limiting cases. We start this discussion with the massless limit, i.e., the case of a vanishing potential $\mathcal{U} \rightarrow 0$, corresponding to $\mathcal{I}_2 \rightarrow 0$. This limit is achieved by successively applying to our result the limits $\mu \rightarrow 0$ and $m \rightarrow 0$. For $\gamma$, which does not depend on $\mu$, we obtain the limit
\ba
\gamma(m \rightarrow 0) = \frac{\omega + 1}{\omega + 2} = \frac{4FI_1^2 - I_1'^2}{4FI_1^2 + I_1'^2}\,.
\ea
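This limit can also be verified symbolically; a short sympy sketch (the symbol names are ours) reproduces it from the exterior expression for \(\gamma\):
\begin{verbatim}
import sympy as sp

m, r, R, w = sp.symbols('m r R omega', positive=True)
gamma = 1 - 1/(sp.Rational(1, 2)
               + (2*w + 3)*m**3*R**3*sp.exp(m*r)
               / (6*(m*R*sp.cosh(m*R) - sp.sinh(m*R))))
print(sp.simplify(sp.limit(gamma, m, 0)))   # (omega + 1)/(omega + 2)
\end{verbatim}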
For $\beta$ we find the limit
\ba
\bs
\beta(\mu \rightarrow 0, m \rightarrow 0) &= \frac{(2\omega + 3)\sigma - 8}{16(\omega + 2)^2}\\
&= 1 - \left( 1 + \frac{1}{4 F} \frac{I_1'^2}{I_1^2} \right)^{-2}
\left( \frac{F'}{32 F^3} \frac{I_1'^3}{I_1^3} + \frac{1}{16 F^2} \frac{I_1'^4}{I_1^4}
- \frac{1}{16 F^2} \frac{I_1'^2 I_1''}{I_1^3} \right) \,.
\es
\ea
These limits correspond to the result found in \cite{BETA.KuuskInvariantsMSTG2016}, if reduced to the case of a single scalar field.
Another interesting case is given by the large interaction distance limit. In the limit $r \rightarrow \infty$ we obtain
\ba
\gamma(r \rightarrow \infty) = 1 \,,
\ea
and
\ba
\bs
\beta(r \rightarrow \infty)
&= \frac{5 C_{\beta}^{E1} R}{12 I_1^2} \\
&= 1 + 5\frac{\left[39+m^2 R^2 (20 m R - 33)\right] -3 (1+m R) [13+m R (13+2 m R)]e^{-2 m R} }{16 (2\omega + 3) m^5 R^5} \,.\label{eqn:betainf}
\es
\ea
Note that it does not take the GR value $1$, as one might have expected. This is due to the fact that the finite self-energy of the extended mass source influences $\beta$. If in addition we take the limit $R\rightarrow \infty$, we find that $\beta$ does indeed go to the GR value $1$,
\ba
\beta(r \rightarrow \infty, R \rightarrow \infty) = 1 \,.
\ea
Note that first we have to take the limit $r \rightarrow \infty$, since the solution we used here is valid only in the exterior region $r > R$, and the limit \(R \to \infty\) would otherwise be invalid. We finally remark that the same limit \(\beta \to 1\) is also obtained for \(m \to \infty\), which becomes clear from the fact that \(m\) always appears multiplied by either \(r\) or \(R\).
This concludes our section on the static spherically symmetric solution we discussed here. We can now use our results for \(\beta\) and \(\gamma\) and compare them to Solar System measurements of these PPN parameters. We will do so in the next section.
\section{Comparison to observations}
\label{sec Comparison to observations}
In the preceding sections we have derived expressions for the PPN parameters \(\beta\) and \(\gamma\). We have seen that they depend on the radius \(R\) of the gravitating source mass, the interaction distance \(r\) and constant parameters \(m, \omega, \mu, \sigma\), which characterize the particular scalar-tensor theory under consideration and are invariant under both conformal transformations of the metric and redefinitions of the scalar field. We now compare our results to observations of the PPN parameters in the Solar System, in order to obtain bounds on the theory parameters.
In the following we will not consider the parameters \(\mu\) and \(\sigma\), and set them to \(0\) in our calculations, as they correspond to higher derivatives of the invariant functions \(\mathcal{I}_1\) and \(\mathcal{I}_2\). Restricting our discussion to the parameters \(m\) and \(\omega\) will further allow us to plot exclusion regions which we can compare to previous results~\cite{BETA.HohmannPPN2013,BETA.HohmannPPN2013E,BETA.SchaererPPN2014}. To be compatible with the plots shown in these articles, we display the rescaled mass \(\tilde{m} = m/\sqrt{2\omega + 3}\) measured in inverse astronomical units \(m_{\mathrm{AU}} = 1\mathrm{AU}^{-1}\) on the horizontal axis. Regions that are excluded at a confidence level of \(2\sigma\) are shown in gray. In particular, we consider the following experiments:
\begin{itemize}
\item
The deflection of pulsar signals by the Sun has been measured using very long baseline interferometry (VLBI)~\cite{BETA.Fomalont:2009zg}. From this \(\gamma\) has been determined to satisfy \(\gamma - 1 = (-2 \pm 3) \cdot 10^{-4}\). The radio signals passed by the Sun at an elongation angle of 3\textdegree, so we assume a gravitational interaction distance of \(r \approx 5.23 \cdot 10^{-2}\mathrm{AU}\). The region excluded by this measurement is shown in Fig.~\ref{fig:vlbi}.
\item
The most precise value for \(\gamma\) has been obtained from the time delay of radar signals sent between Earth and the Cassini spacecraft on its way to Saturn~\cite{BETA.Bertotti:2003rm}. The experiment yielded the value \(\gamma - 1 = (2.1 \pm 2.3) \cdot 10^{-5}\). The radio signals passed by the Sun at a distance of \(1.6\) solar radii, or \(r \approx 7.44 \cdot 10^{-3}\mathrm{AU}\) (see the numerical sketch after this list). The excluded region, shown in Fig.~\ref{fig:cassini}, agrees with our previous findings~\cite{BETA.HohmannPPN2013,BETA.HohmannPPN2013E}.
\item
The classical test of the parameter \(\beta\) is the perihelion precession of Mercury~\cite{BETA.Will:2014kxa}. Its precision is limited by other contributions to the perihelion precession, most importantly the solar quadrupole moment \(J_2\). The current bound is \(\beta - 1 = (-4.1 \pm 7.8) \cdot 10^{-5}\). As the gravitational interaction distance we take the semi-major axis of Mercury, which is \(r \approx 0.387\mathrm{AU}\). We obtain the excluded region shown in Fig.~\ref{fig:mercury}. Note that for small values of \(\omega\) we obtain a tighter bound on the scalar field mass than from the Cassini tracking experiment, despite the larger interaction distance \(r\). This can be explained by the fact that the main contribution to \(\beta\) comes from a modification of the gravitational self-energy of the source mass, which is independent of the interaction distance, and depends only on the radius of the gravitating body.
\item
A combined bound on \(\beta\) and \(\gamma\) has been obtained from lunar laser ranging experiments searching for the Nordtvedt effect, which would cause a different acceleration of the Earth and the Moon in the solar gravitational field~\cite{BETA.Hofmann:2010}. In fully conservative theories with no preferred frame effects, such as scalar-tensor gravity, the Nordtvedt effect depends only on the PPN parameters \(\beta\) and \(\gamma\). The current bound is \(4\beta - \gamma - 3 = (0.6 \pm 5.2) \cdot 10^{-4}\). Since the effect is measured using the solar gravitational field, the interaction distance is \(r = 1\mathrm{AU}\). The excluded region is shown in Fig.~\ref{fig:llr}.
\item
A more recent measurement of both \(\beta\) and \(\gamma\) with higher precision has been obtained using combined ephemeris data and the Mercury flybys of the Messenger spacecraft in the INPOP13a data set~\cite{BETA.Verma:2013ata}. From these observations, combined bounds in the two-dimensional parameter space spanned by \(\beta\) and \(\gamma\) can be obtained, as well as bounds on the individual parameters by fixing one of them to its GR value. Since we have determined both parameters in our calculation, we do not perform such a fixing here, and use the full parameter space instead. From the 25\% residuals one finds a bounding region that can be approximated as
\begin{equation}
\left[(\beta - 1) - 0.2 \cdot 10^{-5}\right]^2 + \left[(\gamma - 1) + 0.3 \cdot 10^{-5}\right]^2 \leq \left(2.5 \cdot 10^{-5}\right)^2\,.
\end{equation}
Note that in this case one cannot easily define an interaction distance \(r\), since ephemeris data from objects across the Solar System have been used. However, we may use the fact that for \(mr \gg 1\) the PPN parameters approach their limiting values \(\gamma \to 1\) and~\eqref{eqn:betainf}, so that the dominant effect is determined by the modified gravitational self-energy of the Sun. The excluded region under this assumption is shown in Fig.~\ref{fig:inpop}. One can see that for small values of \(\omega\) one obtains a bound on the scalar field mass which is approximately twice as large as the bound obtained from Cassini tracking and lunar laser ranging.
\end{itemize}
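To illustrate how such exclusion regions arise, the following Python sketch scans a \((\tilde{m}, \omega)\) grid and marks points violating the Cassini \(2\sigma\) bound. This is a simplified illustration, not the exact procedure used to produce the figures; the grid ranges and the solar radius value \(R_\odot \approx 4.65 \cdot 10^{-3}\mathrm{AU}\) are our own choices.
\begin{verbatim}
import numpy as np

def gamma_ext(r, R, m, omega):
    denom = 0.5 + (2*omega + 3) * m**3 * R**3 * np.exp(m*r) \
            / (6.0 * (m*R*np.cosh(m*R) - np.sinh(m*R)))
    return 1.0 - 1.0 / denom

R_sun = 4.65e-3        # solar radius in AU (assumption)
r_cassini = 7.44e-3    # interaction distance from the text, in AU

m_tilde = np.logspace(-1, 2, 200)        # rescaled mass, in 1/AU
omegas = np.logspace(-2, 2, 200)
MT, OM = np.meshgrid(m_tilde, omegas)
M = MT * np.sqrt(2*OM + 3)               # undo m_tilde = m / sqrt(2 omega + 3)

# Cassini: gamma - 1 = (2.1 +/- 2.3)e-5; the model predicts gamma < 1,
# so a point is excluded when gamma - 1 lies below the 2-sigma lower bound.
excluded = gamma_ext(r_cassini, R_sun, M, OM) - 1 < (2.1 - 2*2.3) * 1e-5
print("fraction of grid excluded:", excluded.mean())
\end{verbatim}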
\begin{figure}[hbtp]
\centering
\includegraphics[width=100mm]{vlbi.png}
\caption{Region excluded by VLBI measurements.}
\label{fig:vlbi}
\end{figure}
\begin{figure}[hbtp]
\centering
\includegraphics[width=100mm]{cassini.png}
\caption{Region excluded by Cassini tracking.}
\label{fig:cassini}
\end{figure}
\begin{figure}[hbtp]
\centering
\includegraphics[width=100mm]{mercury.png}
\caption{Region excluded by the perihelion shift of Mercury.}
\label{fig:mercury}
\end{figure}
\begin{figure}[hbtp]
\centering
\includegraphics[width=100mm]{llr.png}
\caption{Region excluded by lunar laser ranging.}
\label{fig:llr}
\end{figure}
\begin{figure}[hbtp]
\centering
\includegraphics[width=100mm]{inpop.png}
\caption{Region excluded by the ephemeris data set INPOP13a.}
\label{fig:inpop}
\end{figure}
Our results must be taken with care, since they are based on a number of assumptions and simplifications. Most importantly, we have calculated the PPN parameters under the assumption of a homogeneous, non-rotating, spherical gravitational source. This is only a very crude approximation for the Sun, whose density decreases with growing distance from its center. A full treatment of the post-Newtonian limit of a non-homogeneous body would be required to improve on this assumption. However, since a larger amount of matter is located closer to the center of the Sun, hence increasing its gravitational self-energy and decreasing the effective radius \(R\), one might expect that the effect on \(\beta\) will be even larger in such a full treatment.
As another simplification we have assumed that experiments based on electromagnetic waves passing by the Sun can be described by a single effective interaction distance. A rigorous treatment would involve an explicit calculation of the wave trajectory~\cite{BETA.Devi:2011zz,BETA.Deng:2016moh}. However, this affects only the VLBI and Cassini measurements of \(\gamma\), while the measurements of \(\beta\), which are less dependent on the interaction distance, are unaffected.
\section{Conclusion}
\label{sec Conclusion}
We have calculated the PPN parameters \(\gamma\) and \(\beta\) of scalar-tensor gravity with a general potential for a homogeneous, spherical mass source. For our calculation we have used a formalism which is manifestly invariant under both conformal transformations of the metric and redefinitions of the scalar field. The result we have obtained depends on four constant parameters of the theory under consideration, which are derived from the invariant functions that characterize the theory. Further, the result also depends on the radius \(R\) of the gravitating mass source and the interaction distance \(r\) at which the PPN parameters are measured. We have finally compared our results to a number of measurements in the Solar System and derived bounds on two of the four constant theory parameters.
Our results improve on previous work in which we assumed a point-like mass source~\cite{BETA.HohmannPPN2013,BETA.HohmannPPN2013E,BETA.SchaererPPN2014}. We have seen that \(\gamma\) receives a correction which depends on the source mass radius, but retains the large distance limit \(\gamma \to 1\) for \(r \to \infty\). In contrast, \(\beta\) receives a modification also in the large distance limit. This is explained by a modified gravitational self-energy of the source mass, which influences its gravitational effects also at large distances, and which has been neglected for the point mass. As a result, measurements of \(\beta\) at an interaction distance which is large compared to the radius of the source mass, \(r \gg R\), are significantly more sensitive to modifications of GR by a massive scalar field than measurements of \(\gamma\) at the same interaction distance. We have shown this in particular for measurements of \(\beta\) using lunar laser ranging and planetary ephemeris, where the interaction distance is of the order of astronomical units, and which yield bounds on the scalar field mass comparable to or even better than the bound obtained from the Cassini tracking experiment, with an interaction distance of the order of the solar radius. Our work suggests that measurements of \(\beta\) in the gravitational field of smaller, more compact objects could yield even stricter bounds.
Of course, our assumption of a spherically symmetric and homogeneous source mass is also still only an approximation. Further improvement of our results would require a weakening of this assumption, and considering the density profile of the gravitating source mass. Such a calculation would have to be done numerically. While we have provided all necessary equations in this article, we leave performing such a calculation for future work.
Finally, it is also possible to extend our results to more general or related theories. A straightforward generalization is given by considering multiple scalar fields, for which \(\gamma\) has been calculated for a point mass source~\cite{BETA.HohmannGamma2016}, or by allowing derivative coupling as in Horndeski gravity, where it would lead to a similar improvement on previous calculations of \(\gamma\) and \(\beta\)~\cite{BETA.Hohmann:2015kra}. Another possibility is to consider massive bimetric gravity, where GR is augmented by a massive tensor degree of freedom instead of a scalar field, and a similar result on \(\gamma\) for a point mass can be improved and extended to \(\beta\)~\cite{BETA.Hohmann:2017uxe}.
\section*{Acknowledgments}
The authors thank Sofya Labazova for pointing out an error in a previous calculation.
MH gratefully acknowledges the full financial support of the Estonian Research Council through the Startup Research Grant PUT790 and the European Regional Development Fund through the Center of Excellence TK133 ``The Dark Side of the Universe''.
MH and AS acknowledge support from the Swiss National Science Foundation.
This article is based upon work from COST Action CANTATA, supported by COST (European Cooperation in Science and Technology).
\pagebreak
\section{Introduction}
Subgraph detection and graph partitioning are fundamental problems in network analysis, each typically framed in terms of identifying a group or groups of vertices of the graph so that the vertices in a shared group are well connected or ``similar'' to each other in their connection patterns while the vertices in different groups (or the complement group) are ``dissimilar''. The specific notion of connectedness or similarity is a modeling choice, but one often assumes that edges connect similar vertices, so that in general the detected subgraph is dense and the ``communities'' identified in graph partitioning are very often more connected within groups than between groups (``assortative communities'').
The identification of subgraphs with particular properties is a long-standing pursuit of network analysis with various applications. Dense subgraphs as assortative communities might represent coordinating regions of interest in the brain \cite{Meunier_2009,Bassett_2011} or social cliques in a social network \cite{Moody_2001}. In biology, subgraph detection plays a role in discovering DNA motifs and in gene annotation \cite{fratkin2006motifcut}. In cybersecurity, dense subgraphs might represent anomalous patterns to be highlighted and investigated (e.g., \cite{Yan_2021}). See \cite{ma2020efficient} for a recent survey and a discussion of alternative computational methods. As noted there, some of the existing algorithms apply to directed graphs, but most do not.
In the corresponding computer science literature, much of the focus has been on approximation algorithms, since the densest $k$-subgraph problem is NP-hard to solve exactly (a fact easily seen by a reduction from the $k$-clique problem). An algorithm that, on any input $(G, k)$, returns a subgraph of order $k$ (that is, $k$ vertices or ``nodes''; note that we will sometimes take the ``size'' of a graph or subgraph to mean the number of vertices, not the number of edges) with average degree within a factor of at most $n^{1/3-\delta}$ of the optimum, where $n$ is the order of the graph $G$ and $\delta\approx 1/60$, was proposed in \cite{feige2001dense}. This approximation ratio was the best known for almost a decade until a log-density based approach yielded $n^{1/4+\varepsilon}$
for any $\varepsilon > 0$ \cite{bhaskara2010}. This remains the state-of-the-art approximation algorithm. On the negative side it has been shown \cite{manurangsi2017}, assuming the exponential time hypothesis, that there is no polynomial-time algorithm that approximates to within an $n^{1/(\log\log n)^c}$ factor of the optimum. Variations of the problem where the target subgraph has size at most $k$ or at least $k$ have also been considered \cite{andersen2009}.
Depending on the application of interest, one might seek one or more dense subgraphs within the larger network, a collection of subgraphs to partition the network (i.e., assign a community label to each node), or a set of potentially overlapping subgraphs (see, e.g., \cite{Wilson_2014}). While the literature on ``community detection'' is enormous (see, e.g., \cite{fortunato_community_2010,fortunato_community_2016,fortunato_community_2022,porter_communities_2009,shai_case_2017} as reviews), a number of common thematic choices have emerged. Many variants of the graph partitioning problem can be formalized as a (possibly constrained) optimization problem. One popular choice minimizes the total weight of the cut edges while making the components roughly equal in size \cite{shi2000normalized}. Another common choice maximizes the total within-community weight relative to that expected at random in some model \cite{newman_finding_2004}. Other proposed objective functions include ratio cut weight \cite{bresson2013adaptive}, and approximate ``surprise'' (improbability) under a cumulative hypergeometric distribution \cite{Traag_Aldecoa_Delvenne_2015}.
However, most of these objectives are NP-hard to optimize, leading to the development of a variety of heuristic methods for approximate partitioning (see the reviews cited above for many different approaches). Some of the methods that have been studied are based on the Fielder eigenvector \cite{fiedler1973algebraic}, multicommunity flows \cite{leighton1989approximate}, semidefinite programming \cite{arora2004expander,arora2008geometry,arora2009expander}, expander flows \cite{arora2010logn}, single commodity flows \cite{khandekar2009graph}, or Dirichlet partitions \cite{osting2014minimal,osting2017consistency,wang2019diffusion}.
Whichever choice is made for the objective and heuristic, the identified communities can be used to describe the mesoscale structure of the graph and can be important in a variety of applications (see, e.g., the case studies considered in \cite{shai_case_2017}). Subgraphs and communities can also be important inputs to solving problems like graph traversal, finding paths, trees, and flows; while partitioning large networks is often an important sub-problem for complexity reduction or parallel processing in problems such as graph eigenvalue computations \cite{BDR13}, breadth-first search \cite{BM13}, triangle listing \cite{CC11}, PageRank \cite{SW13} and Personalized PageRank~\cite{andersen2007algorithms}.
In the present work, we consider a different formulation of the subgraph detection problem, wherein we aim to identify a subgraph with a long mean exit time---that is, the expected time for a random walker to escape the subgraph and hit its complement. Importantly, this formulation inherently respects the possibly directed nature of the edges. This formulation is distinct from either maximizing the total or average edge weight in a dense subgraph and minimizing the edge cut (as a count or suitably normalized) that is necessary to separate a subgraph from its complement. Furthermore, explicitly optimizing for the mean exit time to identify subgraphs may in some applications be preferred as a more natural quantity of interest. For example, in studying the spread of information or a disease on a network, working in terms of exit times is more immediately dynamically relevant than structural measurements of subgraph densities or cuts. Similarly, the development of respondent-driven sampling in the social survey context (see, e.g., \cite{Mouw_2012,Verdery_2015}) is primarily motivated by there being subpopulations that are difficult to reach (so we expect they often also have high exit times on the directed network with edges reversed). We thus argue that the identification of subgraphs with large exit times is at least as interesting---and typically related to---those subgraphs with large density and or small cut. Indeed, random walker diffusion on a network and assortative communities are directly related in that the modularity quality function used in many community detection algorithms can be recovered as a low-order truncation of a ``Markov stability'' auto-correlation measurement of random walks staying in communities \cite{Lambiotte_Delvenne_Barahona_2014}. However, the directed nature of the edges is fully respected in our escape time formulation of subgraph detection presented here (cf.\ random walkers moving either forward or backward along edges in the Markov stability calculation \cite{Mucha_Richardson_Macon_Porter_Onnela_2010} that rederives modularity for a directed network \cite{Leicht_Newman_2008}).
From an optimization point of view, the method presented here can be viewed as a rearrangement method or a Merriman-Bence-Osher (MBO) scheme \cite{MBO1993} applied to Poisson solves on a graph. Convergence of MBO schemes is an active area of research in a variety of other scenarios: see \cite{chambolle2006convergence,ishii2005optimal} in the case of continuum mean curvature flows, \cite{budd2020graph,van_Gennip_2014} in a graph Allen-Cahn type problem, and \cite{jacobs2018auction} for a volume constrained MBO scheme on undirected networks.
Similarly, proving convergence rates for our algorithm by determining quantitative bounds on the number of interior iterations required for a given $\epsilon$ is an important question for the numerical method and its applications to large data sets. Importantly, the method for subgraph detection that we develop and explore, and then extend to a partitioner, is inherently capable of working on directed graphs without any modification.
Also, searching for related graph problems where this type of rearrangement algorithm for optimization can be applied will be an important endeavor.
\subsection{A New Formulation in Graphs}
Let $G = (V,E)$ be a (strongly) connected graph (undirected or directed; we use the term ``graph'' throughout to include graphs that are possibly directed), with adjacency matrix $A$ with element $A_{ij}$ indicating presence/absence (and possible weight) of an edge from $i$ to $j$. We define the (out-)degree matrix $D$ to be diagonal with values $D_{ii}=\sum_j A_{ij}$. For weighted edges in $A$ this weighted degree is typically referred to as ``strength'' but we will continue to use the word ``degree'' throughout to be this weighted quantity. Consider the discrete time Markov chain $M_n$ for the random walk described by the (row stochastic) \emph{probability transition matrix}, $P := D^{-1} A$. The \emph{exit time from $S\subset V$} is the stopping time $T_S = \inf\{n\geq 0: M_n\in S^c\}$.
The \emph{mean exit time from $S$ of a node $i$} is defined by $\mathbb{E}_i T_S$ (where $\mathbb{E}_i$ is the expectation if the walker starts at node $i$) and is given by $v_i$, where $v$ is the solution to the system of equations
\begin{subequations}
\label{ht_def}
\begin{align}
\label{ht_defa}
(I-P)_{SS} v_S &= 1_S \\
v_{S^c} &= 0\,,
\end{align}
\end{subequations}
where the subscript $S$ represents restriction of a vector or matrix to the indices in $S$.
The \emph{average mean escape time (MET) from $S$} is then
\begin{equation} \label{e:MET}
\tau (S) = \frac{1}{|V|} \sum_{i \in V} v_{i},
\end{equation}
representing the mean exit time from $S$ of a node chosen uniformly at random in the graph (noting that $v_i=0$ for $i\in S^c$).
We are interested in finding vertex sets (of fixed size) having large MET, as these correspond to sets that a random walker would remain in for a long time. Thus, for fixed $k\in \mathbb N$, we consider the \emph{subgraph detection problem},
\begin{equation}
\label{e:subgraphDetection}
\max_{\substack{S\subset V \\ |S| =k}} \tau(S).
\end{equation}
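Since the mean exit time is defined by a single linear solve, \(\tau(S)\) is cheap to evaluate for any candidate set. A minimal Python sketch (the toy graph is our own illustration) computes \(\tau(S)\) directly from \eqref{ht_def} and \eqref{e:MET}:
\begin{verbatim}
import numpy as np

def mean_exit_time(A, S):
    """Average mean exit time tau(S) for the walk with P = D^{-1} A.
    A: adjacency matrix (possibly directed/weighted), strongly connected.
    S: list of vertex indices forming the candidate subgraph."""
    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)          # row-stochastic
    S = np.asarray(S)
    I = np.eye(len(S))
    v_S = np.linalg.solve(I - P[np.ix_(S, S)], np.ones(len(S)))
    return v_S.sum() / n

# Toy example (assumption): two 4-cliques joined by a single edge.
B = np.ones((4, 4)) - np.eye(4)
A = np.block([[B, np.zeros((4, 4))], [np.zeros((4, 4)), B]])
A[3, 4] = A[4, 3] = 1.0
print(mean_exit_time(A, [0, 1, 2, 3]))            # large MET: hard to escape
\end{verbatim}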
Multiplying~\eqref{ht_defa} on the left by $D$, we obtain the equivalent system,
\begin{subequations}
\label{e:Poisson}
\begin{align}
& L v = d
\textrm{ on } S , \\
& v = 0 \textrm{ on } S^c\,,
\end{align}
\end{subequations}
where $L = D-A$ is the (unnormalized, out-degree) graph Laplacian, and $d = D 1$ is the out-degree vector. We denote the solution to \eqref{e:Poisson} by $v = v(S)$.
For $\varepsilon> 0$, we will also consider the approximation to \eqref{e:Poisson},
\begin{equation}
\label{e:relax}
\left[ L + \varepsilon^{-1} (1-\phi) \right] u = d
\end{equation}
where $\phi$ is a vector and action by $(1-\phi)$ on the left is interpreted as multiplication by the diagonal matrix $I - {\rm diag} (\phi)$. We denote the solution $u = u_\varepsilon$.
Formally, for $\phi = \chi_S$, the characteristic function of $S$, as $\varepsilon \to 0$, the vector $u_\varepsilon \to v_S$ where $v_S$ satisfies \eqref{e:Poisson}.
We can also define an associated approximate MET
\begin{equation}
\label{e:l1energy}
E_\varepsilon (\phi) := \frac{1}{|V|} \| u_\varepsilon \|_{
\ell^1(V)} = \frac{1}{|V|} \left\| \left[ L + \varepsilon^{-1} (1-\phi) \right]^{-1} d \right\|_{\ell^1 (V)},
\end{equation}
where as $\varepsilon \to 0$, we have that $E_\varepsilon(\chi_S) \to \frac{1}{|V|} \| v_S \|_{\ell^1(V)} = \tau(S)$. We then arrive at the following \emph{relaxed subgraph detection problem}
\begin{equation}
\label{exp:l1_opt}
\max_{\substack{0 \leq \phi \leq 1 \\ \langle \phi, 1 \rangle = k}} E_\epsilon (\phi),
\end{equation}
which we solve and study in this paper.
For small $\varepsilon>0$, we will study the relationship between the subgraph detection problem \eqref{e:subgraphDetection} and its relaxation
\eqref{exp:l1_opt}.
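The relaxed energy is equally direct to evaluate, and for the indicator of a set the penalized solve recovers \(\tau(S)\) as \(\varepsilon \to 0\). A short sketch (reusing the toy graph above) illustrates this convergence:
\begin{verbatim}
import numpy as np

B = np.ones((4, 4)) - np.eye(4)
A = np.block([[B, np.zeros((4, 4))], [np.zeros((4, 4)), B]])
A[3, 4] = A[4, 3] = 1.0                 # same toy graph as above

def E_eps(A, phi, eps):
    """E_eps(phi) = |V|^{-1} || [L + eps^{-1}(1 - phi)]^{-1} d ||_1."""
    d = A.sum(axis=1)
    L = np.diag(d) - A
    u = np.linalg.solve(L + np.diag((1.0 - phi) / eps), d)
    return np.abs(u).sum() / len(d)

phi = np.zeros(8); phi[:4] = 1.0        # indicator of the first clique
for eps in (1e-1, 1e-3, 1e-6):
    print(eps, E_eps(A, phi, eps))      # approaches tau(S) as eps -> 0
\end{verbatim}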
We are also interested in finding node partitions with high MET in the following sense: Given a vertex subset $S \subset V$, a random walker that starts in $S$ should have difficulty escaping to $S^c$ and a random walker that starts in $S^c$ should have difficulty escaping to $S$. This leads to the problem $\max_{V = S \amalg S^c} \,\tau(S) + \tau(S^c)$.
More generally, for a vertex partition, $V = \amalg_{\ell \in [K]} S_\ell$, we can consider
\begin{equation} \label{e:minEscape}
\max_{V = \amalg_{\ell \in [K]} S_\ell} \ \sum_{\ell \in [K]} \ \tau(S_\ell).
\end{equation}
The solution embodies the idea that in a good partition a random walker will transition between partition components very infrequently.
An approximation to \eqref{e:minEscape} is
\begin{equation}
\label{e:Opt}
\max_{V = \amalg_{\ell \in [K]} S_\ell} \ \sum_{\ell \in [K]} \ E_\varepsilon (\chi_{S_\ell}).
\end{equation}
We can make an additional approximation by relaxing the constraint set.
Define the admissible class
$$
\mathcal A_K = \left\{ \{\phi_\ell\}_{\ell\in [K]} \colon
\phi_\ell \in \mathbb R^{|V|}_{+}
\text{ and }
\sum_{\ell \in [K] } \phi_\ell = 1 \right\}.
$$
Observe that the collection of indicator functions for any $K$-partition of the vertices is a member of $\mathcal A_K$. Furthermore, we can see that $\mathcal A_K \cong (\Delta_K)^{|V|}$, where $\Delta_K$ is the unit simplex in $K$ dimensions. Thus, the extremal points of $\mathcal A_K$ are precisely the collection of indicator functions for a $K$-partition of the vertices.
For $\varepsilon >0$, a modified relaxed version of the graph partitioning problem \eqref{e:minEscape} can be formulated as
\begin{equation}
\label{e:minEscapeRelax_alt}
\min_{ \{\phi_\ell\}_{\ell \in [K]} \in \mathcal A_K } \tilde{E}_{\epsilon} \left( \{\phi_\ell\}_{\ell \in [K]} \right), \quad \textrm{where} \quad \tilde{E}_{\epsilon} \left( \{\phi_\ell\}_{\ell \in [K]} \right) = \sum_{i = 1}^K [1+ \epsilon |V| E_\epsilon(\phi_i)]^{-1}.
\end{equation}
For small $\varepsilon>0$, we will study the relationship between the graph partitioning problem \eqref{e:minEscape} and its relaxation
\eqref{e:minEscapeRelax_alt}. An important feature of \eqref{e:minEscapeRelax_alt} is that it can be optimized using fast rearrangement methods that effectively introduce a volume normalization for the partition sets, whereas direct optimization of \eqref{e:minEscape} tends to favor a single partition component occupying the full volume. We will discuss this further in Section \ref{sec:gp} below.
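As an illustration of the relaxed partition objective, the following sketch (reusing \texttt{A}, \texttt{phi} and \texttt{E\_eps} from the sketch above) evaluates \(\tilde{E}_{\epsilon}\) for the natural two-clique indicator partition of the toy graph:
\begin{verbatim}
# Reusing A, phi and E_eps from the sketch above.
def E_tilde(A, phis, eps):
    """Relaxed partition energy: sum of [1 + eps |V| E_eps(phi_l)]^{-1};
    lower values indicate better partitions."""
    n = A.shape[0]
    return sum(1.0 / (1.0 + eps * n * E_eps(A, p, eps)) for p in phis)

print(E_tilde(A, [phi, 1.0 - phi], 1e-3))   # two-clique indicator partition
\end{verbatim}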
\subsection{Outline of the Paper}
In \Cref{s:Analysis}, we lay the analytic foundation for rearrangement methods for both the subgraph detection and partitioning problems.
We prove the convergence of the methods to local optimizers of our energy functionals in both cases and establish the fact that our fast numerical methods increase the energy.
To begin, we establish properties of the gradient and Hessian of the functionals $E_\epsilon (\phi)$ for vectors $0 \leq \phi \leq 1$. Then, using those properties, we introduce rearrangement methods for finding optimizers and prove that our optimization schemes reduce the energy.
Then, we discuss how to adapt these results to the partitioning problem.
Lastly, we demonstrate how one can easily add a semi-supervised component to our algorithm.
In \Cref{s:NumRes}, we apply our methods to a variety of model graphs, as well as some empirical data sets to assess their performance. In the subgraph setting, we consider how well we do detecting communities in a family of model graphs related to stochastic block models, made up of a number of random Erd\H{o}s-R\'enyi (ER) communities of various sizes and on various scales. The model graphs are designed such that the overall degree distribution is relatively similar throughout. We demonstrate community detectability and algorithm efficacy thresholds by varying a number of parameters in the graph models. We also consider directed graph models of cycles connected to Erd\H{o}s-R{\'e}nyi graphs, on which our methods perform quite well. For the partitioners, we also consider related performance studies over our model graph families, as well as on a large variety of clustering data sets.
We conclude in \Cref{s:disc} with a discussion including possible future directions and applications of these methods.
\section{Analysis of our proposed methods}
\label{s:Analysis}
In this section, we first analyze the relaxed subgraph detection problem \cref{exp:l1_opt} and the relaxed graph partitioning problem \Cref{e:minEscapeRelax_alt}. Then, we propose and analyze computational methods for these problems. As noted above, we assume throughout that the graph is (strongly) connected.
\subsection{Analysis of the relaxed subgraph detection problem and the relaxed graph partitioning problem}
For fixed $\epsilon > 0$ and
$\phi \in [0,1]^{|V|}$, denote the operator on the RHS of \Cref{e:relax} by
$L_\phi := D-A + \frac{1}{\epsilon} (1-\phi)$.
\begin{lemma}[Discrete maximum principle]
\label{lem:max}
Given the regularized operator $L_\phi$ and a vector $f > 0$, we have $(L_{\phi}^{-1} f)_v > 0$ for all $v \in V$. Without strong connectivity, this result still holds (with $>$ replaced by $\ge$) as long as there are no leaf nodes.
\end{lemma}
\begin{proof}
Writing $L_\phi = \left( D + \frac{1}{\epsilon} (1-\phi) \right) - A$, we observe that
\begin{align*}
L_\phi^{-1} & = \left( \left( D + \frac{1}{\epsilon} (1-\phi) \right) \left( I - \left( D + \frac{1}{\epsilon} (1-\phi) \right)^{-1} A \right) \right)^{-1} \\
& = \left( I - \left( D + \frac{1}{\epsilon} (1-\phi) \right)^{-1} A \right)^{-1} \left( D + \frac{1}{\epsilon} (1-\phi) \right)^{-1} \\
& = \sum_{n=0}^\infty \left[ \left( D + \frac{1}{\epsilon} (1-\phi) \right)^{-1} A \right]^n \left( D + \frac{1}{\epsilon} (1-\phi) \right)^{-1}.
\end{align*}
Since all entries in the corresponding matrices are positive (by strong connectivity), the result holds.
\end{proof}
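The positivity asserted by the lemma is easy to observe numerically; a minimal sketch (the random graph and parameter choices are ours) checks it on a strongly connected digraph:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, eps = 8, 0.1
A = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(A, 0)
A[np.arange(n), (np.arange(n) + 1) % n] = 1.0  # cycle: strong connectivity
d = A.sum(axis=1)
L = np.diag(d) - A
phi = rng.random(n)
L_phi = L + np.diag((1.0 - phi) / eps)
f = rng.random(n) + 0.1                        # strictly positive vector
print((np.linalg.solve(L_phi, f) > 0).all())   # True, as the lemma asserts
\end{verbatim}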
For simplicity, in the following we consider simply setting the potential $X : = \epsilon^{-1} ( 1- \phi)$ and we use $X$ and ${\rm diag}\ X$ interchangeably for graph Schr\"odinger operators of the form $L_X := D-A + X$ and solutions of the Poisson equation $L_X u = d$. We can then consider the related energy functional
\begin{equation}
\label{e:l1energy_alt}
E (X) : = \left\| \left[ L + X \right]^{-1} d \right\|_{\ell^1 (V)} = \| u\|_{
\ell^1(V)} .
\end{equation}
\begin{lemma}
\label{diff:lem}
The gradient of $E(X)$ with respect to $X$ is given by
\begin{equation}
\label{Jgrad}
\nabla E = - u \odot v
\end{equation}
where $\odot$ denotes the Hadamard product and
\begin{equation}
\label{uvdef}
u = (L+X)^{-1} d, \ \ v = (L+X)^{-T} e.
\end{equation}
Here $e$ is the all-ones vector. The Hessian of $E(X)$ with respect to $X$ is then given by
\begin{equation}
\label{Jhess}
H = \nabla^2 E = (L + X)^{-1} \odot W + (L + X)^{-T} \odot W^T
\end{equation}
where
\begin{equation*}
W := u \otimes v
\end{equation*}
where $\otimes$ is the Kronecker (or outer) product.
\end{lemma}
\begin{proof}
Write $e_j$ as the indicator vector for the $j$th entry. First, differentiating the defining equation $(L+X)u = d$ in~\cref{uvdef} with respect to $X_j$, we compute
$$
(L + X) \frac{\partial u}{ \partial X_j} = - e_j \odot u
\qquad \implies \qquad
\frac{\partial u}{ \partial X_j} = - \langle e_j, u \rangle (L + X)^{-1} e_j.
$$
Taking the second derivative, we obtain
\begin{align*}
(L + X) \frac{\partial^2 u}{ \partial X_j \partial X_k}
&= - e_j \left\langle e_j, \frac{\partial u}{ \partial X_k} \right\rangle
- e_k \left\langle e_k , \frac{\partial u}{ \partial X_j} \right\rangle \\
&= e_j \langle e_k, u \rangle \left \langle e_j, (L + X)^{-1} e_k \right\rangle
+ e_k \langle e_j, u \rangle \left \langle e_k, (L + X)^{-1} e_j \right\rangle,
\end{align*}
which implies that
$$
\frac{\partial^2 u}{ \partial X_j \partial X_k} =
\left \langle e_j, (L + X)^{-1} e_k \right\rangle \langle e_k, u \rangle (L+X)^{-1}e_j + \left \langle e_k, (L + X)^{-1} e_j \right\rangle
\langle e_j, u \rangle (L+X)^{-1}e_k.
$$
By the maximum principle (Lemma~\ref{lem:max}), $u$ is positive and we can write
$ E(X) = \| u \|_1 = \langle e, u \rangle$. Thus, the gradient is
\begin{align*}
\frac{\partial E}{ \partial X_j}
&= \left \langle e, \frac{\partial u}{ \partial X_j} \right \rangle \\
& = - \langle (L+X)^{-T} e, e_j \rangle \langle u, e_j \rangle,
\end{align*}
or in other words
\[
\nabla_X E = - u \odot v
\]
for $u$ and $v$ as in \eqref{uvdef}.
For the Hessian, we have
\begin{align*}
&\frac{\partial^2 E}{ \partial X_j \partial X_k}
= \left \langle e, \frac{\partial^2 u}{ \partial X_j \partial X_k} \right \rangle \\
& \hspace{.2cm} = \left \langle e_k, (L + X)^{-1} e_j \right\rangle
\langle u, e_j \rangle \left \langle e_k , v\right \rangle + \left \langle e_j, (L + X)^{-1} e_k \right\rangle \langle v, e_j \rangle \left \langle e_k,u \right \rangle.
\end{align*}
Thus, the Hessian can be written
$$
H = \nabla^2 E = (L + X)^{-1} \odot W + (L + X)^{-T} \odot W^T
$$
where
\begin{equation*}
W := u \otimes v.
\end{equation*}
as claimed.
\end{proof}
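As a concrete sanity check on \Cref{diff:lem}, the following Python sketch (our own illustration, not production code) compares the gradient formula \cref{Jgrad} to centered finite differences on a small random directed graph; we assume the sampled graph is strongly connected, which holds with high probability at this edge density.
\begin{verbatim}
import numpy as np

# Finite-difference check of grad E = -u (Hadamard) v.
rng = np.random.default_rng(0)
n = 8
A = (rng.random((n, n)) < 0.5).astype(float)
np.fill_diagonal(A, 0.0)
d = A.sum(axis=1)            # out-degree vector (the right-hand side)
L = np.diag(d) - A           # graph Laplacian L = D - A
X = rng.random(n) + 0.5      # a positive potential

def E(X):
    u = np.linalg.solve(L + np.diag(X), d)
    return u.sum()           # u > 0 by the maximum principle, so sum = l1 norm

u = np.linalg.solve(L + np.diag(X), d)
v = np.linalg.solve((L + np.diag(X)).T, np.ones(n))
grad_exact = -u * v          # the Hadamard-product formula of the lemma

h = 1e-6
grad_fd = np.array([(E(X + h*np.eye(n)[j]) - E(X - h*np.eye(n)[j])) / (2*h)
                    for j in range(n)])
assert np.allclose(grad_exact, grad_fd, atol=1e-5)
\end{verbatim}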
\begin{remark}
If $L$ is symmetric, the above statements can be simplified greatly to give
$$
H = \nabla^2 E = (L + X)^{-1} \odot (W+W^T)
$$
where
\begin{equation*}
W + W^T = u \otimes v + v \otimes u = \frac{1}{2} (u + v) \otimes (u+v) - \frac{1}{2} (u - v) \otimes (u- v) .
\end{equation*}
\end{remark}
\begin{proposition}
\label{p:Covexity}
For $f > 0$ fixed, let $u$ satisfy $
(L+X) u = f$.
The mapping
$X \mapsto E(X) = \| u \|_1$ is strongly convex on $\{X \geq 0, \ X \neq 0 \}$.
\end{proposition}
\begin{proof}
We wish to show that
\[E(X) = e^T (L +X)^{-1} f \]
is convex on $[0,X_{\infty}]^n$ for a fixed constant $X_{\infty}$.
Replacing $D+X$ with $X$, this is equivalent to
\[e^T (X-A)^{-1} f\]
being convex on $\{X : d_i \le X_i \le d_i + X_{\infty}\}$.
Expanding, we have
\[e^T \left( I - X^{-1} A \right)^{-1} X^{-1} f = e^T \sum_{k=0}^{\infty} \left(X^{-1} A \right)^k X^{-1} f.\]
So it is enough to show that
\[e^T \left(X^{-1} A \right)^k X^{-1} f\]
is convex for each $ k \geq 0 $.
This is true as long as
\[ g(X) = \prod_i X_i^{-\alpha_i}\]
is convex for any $\alpha = (\alpha_1,\cdots,\alpha_n)$ with $\alpha_i \geq 0$.
Computing second derivatives gives
\[g_{X_iX_i}(X) = g(X) \alpha_i (\alpha_i + 1) X_i^{-2}\]
and
\[g_{X_i X_j}(X) = g(X) \alpha_i \alpha_j X_i^{-1} X_j^{-1}.\]
So the Hessian of $g$ is
\[g(X) \left[ (\alpha X^{-1})^T (\alpha X^{-1}) + \textrm{diag}(\alpha X^{-2})\right],\] which is positive semi-definite, being the sum of positive semi-definite matrices.
To observe strong convexity, note that the $k = 0$ term $e^T X^{-1} f = \sum_i f_i X_i^{-1}$ contributes the diagonal term $2\,\textrm{diag}(f_i X_i^{-3})$ to the Hessian, which is positive definite on the (bounded) domain in question.
\end{proof}
Proposition \ref{p:Covexity} gives that $\phi \mapsto E_\epsilon(\phi)$ is strongly convex on $\mathbb R^{|V|}_+$, so $\{\phi_\ell\}_{\ell \in [K]} \mapsto \mathcal{E}^\varepsilon\left( \{\phi_\ell\}_{\ell \in [K]} \right) $ is also convex on $\mathcal{A}_K$. The following corollary is then immediate.
\begin{corollary}[Bang-bang solutions] \label{c:BangBang}
Every maximizer of \Cref{exp:l1_opt} is an extreme point of $\{ \phi \in [0,1]^{|V|} \colon \langle \phi,1\rangle =k \}$, {\it i.e.}, an indicator function for some vertex set $S \subset V$ with $|S| =k$.
\end{corollary}
Thus, in the language of control theory,
\Cref{c:BangBang} shows that
\Cref{exp:l1_opt}
is a bang-bang relaxation of
\eqref{e:subgraphDetection}
and that
\eqref{e:minEscapeRelax_alt}
is a bang-bang relaxation of
\eqref{e:minEscape}.
\bigskip
\begin{corollary}
\label{Hess:cor}
Since $E$ is $C^2$ in $X$ and, by Proposition \ref{p:Covexity}, strongly convex on the convex set of potentials $X \in \mathbb{R}^n_+$ with which we are concerned, the resulting Hessian matrix $H$ is positive definite.
\end{corollary}
\begin{remark}
Note that though the Hadamard product of two positive definite matrices is positive definite, Corollary \ref{Hess:cor} is not obvious from the structure of the Hessian, given that the matrix $W$ is indefinite when $u$ and $v$ are linearly independent. As a result, this positive definiteness is strongly related to the structure of the $L+X$ matrix and its eigenvectors.
\end{remark}
\subsection{Optimization scheme}
\subsubsection{Subgraph detector}
We solve~\Cref{exp:l1_opt} using rearrangement ideas as follows. After initializing $S$ (randomly in our experiments), we use the gradient~\Cref{Jgrad} to find the locally optimal next choice of $S$, and then iterate until convergence (typically $<10$ iterations in our experiments). More explicitly, we follow these steps:
\begin{align}
L u + \epsilon^{-1} (1- \chi_{S^0}) u & = d, \label{eq:grad_comp1} \\
L^T v + \epsilon^{-1} (1- \chi_{S^0}) v & = 1.
\label{eq:grad_comp}
\end{align}
The update, $S^1$, then contains those nodes $\ell$ that maximize $u_{\ell}v_{\ell}$.
\begin{algorithm}[t]
\caption{Subgraph detector}
\label{alg:subgraph}
\begin{algorithmic}
\State Input $S^0 \subset V$.
\While{$S^t \ne S^{t-1}$}
\State Solve~\Cref{eq:grad_comp1} and~\Cref{eq:grad_comp} for $u$ and $v$.
\State Form the update $S^1$ from the vertices where the gradient $\nabla_\phi E$ is largest. That is, solve the following sub-problem.
\begin{equation}
\max_{|S| = k} \ \sum_{\ell \in S} u(\ell) \cdot v (\ell) .
\label{subgraph_inner}
\end{equation}
(Note that~\Cref{subgraph_inner} is easily solved by taking the $k$ indices corresponding to the largest values of $u(\ell) \cdot v(\ell)$, breaking ties randomly if needed.)
\State Set $S^0 \leftarrow S^1$, increment $t$, and repeat until $S^t = S^{t-1}$.
\EndWhile
\end{algorithmic}
\end{algorithm}
Pseudocode for this approach is given in~\Cref{alg:subgraph}, which has the following ascent guarantee:
\begin{proposition}
\label{prop:sgascent}
Every nonstationary iteration of~\cref{alg:subgraph} strictly increases the energy $E_\epsilon$. \Cref{alg:subgraph} terminates in a finite number of iterations.
\end{proposition}
\begin{proof}
Let $S^0$ and $S^1$ be the vertex subsets for successive iterations of the method.
Define $W = \chi_{S^1} - \chi_{S^0}$. Assuming $W \neq 0$, by strong convexity (Proposition~\ref{p:Covexity}) and the formula for the gradient \eqref{Jgrad}, we compute
\begin{subequations}
\begin{align}
E_\epsilon (\chi_{S^1}) &> E_\epsilon (\chi_{S^0}) + \frac{1}{\epsilon} \langle W, uv \rangle \\
&= E_\epsilon (\chi_{S^0}) + \frac{1}{\epsilon} \left( \sum_{i \in S^1} u_i v_i - \sum_{i \in S^0} u_i v_i \right) \\
& \geq E_\epsilon (\chi_{S^0}).
\end{align}
\end{subequations}
Thus, the energy is strictly increasing on non-stationary iterates.
Since $V$ is a finite vertex set and the rearrangement method strictly increases the energy, the iteration cannot cycle and hence must terminate in a finite number of iterations.
\end{proof}
To avoid hand-selection of $\epsilon$, we always set $\epsilon = C/\| L \|_{{\rm Fro}}$, where $\| L \|_{{\rm Fro}}$ is the Frobenius norm of the graph Laplacian and $C >1$ is typically set to $C=50$ to make sure $\epsilon$ allows communication between graph vertices. If $C$ is chosen to take a different value below, we will highlight those cases.
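For concreteness, the following Python sketch implements one reading of \Cref{alg:subgraph} with dense linear solves (our own illustration, not the authors' code; an efficient implementation would use sparse or preconditioned solvers, and the defaults reflect the heuristics just described):
\begin{verbatim}
import numpy as np

def subgraph_detector(A, k, C=50.0, max_iter=100, seed=0):
    """Rearrangement iteration for the relaxed subgraph problem."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    d = A.sum(axis=1)
    L = np.diag(d) - A
    eps = C / np.linalg.norm(L, 'fro')        # eps = C / ||L||_Fro heuristic
    S = set(rng.choice(n, size=k, replace=False))
    for _ in range(max_iter):
        chi = np.zeros(n)
        chi[list(S)] = 1.0
        M = L + np.diag((1.0 - chi) / eps)
        u = np.linalg.solve(M, d)             # forward escape-time equation
        v = np.linalg.solve(M.T, np.ones(n))  # adjoint equation
        S_new = set(np.argsort(-(u * v))[:k].tolist())  # top-k of u .* v
        if S_new == S:
            break
        S = S_new
    return S
\end{verbatim}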
\subsubsection{Graph partitioner}
\label{sec:gp}
Given the success of the energy \eqref{e:l1energy}, one might naively consider partitioning the graph by maximizing an energy of the form
\begin{equation}
\label{part:energy_bad}
(S_1, S_2, \dots, S_K) \mapsto \sum_{i = 1}^K [E_\epsilon(\chi_{S_i})].
\end{equation}
However, one can check that this energy does not properly constrain the volumes of the partition classes: the optimizer of this naive problem simply puts all the vertices into a single class.
The partition energy we initially worked to minimize instead is of the form
\begin{equation}
\label{part:energy}
(S_1, S_2, \dots, S_K) \mapsto \sum_{i = 1}^K [ |V| E_\epsilon( \chi_{S_i} )]^{-1},
\end{equation}
since the inverses penalize putting all nodes into the same partition by making the resulting empty classes highly costly. Intuitively, this energy functional provides an effective volume normalization of the relative gradients (similar to a K-means type scheme). However, while in practice this functional appears to work reasonably well on all graph models considered here, we were unable to prove, upon analysis of the Hessian, that rearrangements based on such an algorithm are bang-bang like the subgraph detector.
As an alternative, we instead consider
\begin{equation}
\label{part:energyalt}
\tilde{E}_{\delta, \epsilon} (S_1, S_2, \dots, S_K) = \sum_{i = 1}^K [1+ \delta |V| E_\epsilon( \chi_{S_i})]^{-1}.
\end{equation}
Applied to functions, $0 \leq \phi_j \leq 1$, instead of indicator functions, we consider
\begin{equation}
\label{part:energyalt_phi}
\tilde{E}_{\delta,\epsilon} (\phi_1, \phi_2, \dots, \phi_K) = \sum_{i = 1}^K [1+ \delta |V| E_\epsilon(\phi_i)]^{-1}.
\end{equation}
We then have that
\begin{equation}
\label{Grad:pe}
\nabla_{\phi_j} \tilde{E} = - \frac{\delta}{[1+ \delta |V| E_\epsilon(\phi_j)]^{2}} \nabla_{\phi_j} ( |V| E_\epsilon (\phi_j))
\end{equation}
making the Hessian consist of blocks of the form
\begin{align}
\label{Hess:pe}
\nabla^2_{\phi_j} \tilde{E} & = - \frac{\delta}{[1+ \delta |V| E_\epsilon(\phi_j)]^{2}} \nabla^2_{\phi_j} (|V| E_\epsilon (\phi_j)) \\
& \hspace{.5cm} + 2 \frac{\delta^2}{[1+ \delta |V| E_\epsilon(\phi_j)]^{3}} (\nabla_{\phi_j} ( |V| E_\epsilon (\phi_j)) ) ( \nabla_{\phi_j} ( |V| E_\epsilon (\phi_j)) )^T. \notag
\end{align}
Note, this is the sum of a negative definite operator and a positive semi-definite rank-one matrix, so for $\delta$ sufficiently small the Hessian is negative definite and $\tilde E$ is concave with respect to each component. In practice, we find that taking $\delta=\epsilon$ is sufficient both for having a negative definite Hessian and generating good results with respect to our rearrangement scheme. As such, we will generically take $\delta = \epsilon$ henceforward.
Our approach to the node partitioner is largely analogous to that of the subgraph detector, with the exception that we use class-wise $\ell^1$ normalization when comparing the values of $u \odot v$ at each node. In detail, the algorithm is presented in Algorithm \ref{alg:partitioner}. It is a relatively straightforward exercise, applying the gradient computation of Lemma \ref{diff:lem} and the concavity established above, to prove that the energy functional \eqref{part:energyalt} decreases with each iteration of our algorithm, as in Proposition \ref{prop:sgascent}.
\begin{algorithm}[t]
\caption{Graph Partitioner}
\label{alg:partitioner}
\begin{algorithmic}
\State Input $\vec{S} = \{ S_1^0, \dots, S_K^0 \}$ a $K$ partition of $V$.
\While{${\vec S}^t \ne {\vec S}^{t-1}$}
\State For $j = 1, \dots, K$, solve the equations
\begin{align*}
L u_j + \epsilon^{-1} (1- \chi_{S_j^0}) u_j & = d, \\
L^T v_j + \epsilon^{-1} (1- \chi_{S_j^0}) v_j & = 1.
\end{align*}
\State Normalize $u_j \leftarrow \frac{u_j}{(1 + \epsilon \| u_j \|_{\ell^1})^2}$ and leave $v_j$ unchanged.
\State Assign vertex $v$ to ${\vec S}^{t+1}_j$ where
\[
j=\mathrm{argmax} \{ u_1 \cdot v_1 (v) , \dots, u_K\cdot v_K (v) \}
\]
(that is, optimize $\nabla_\phi E$)
breaking ties randomly if needed.
\State Set $t = t+1$.
\EndWhile
\end{algorithmic}
\end{algorithm}
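A corresponding Python sketch of \Cref{alg:partitioner} is given below (again our own illustration; the random initialization and dense solves are choices made for brevity, not the exact experimental configuration):
\begin{verbatim}
import numpy as np

def graph_partitioner(A, K, C=50.0, max_iter=100, seed=0):
    """Rearrangement iteration for the K-partition problem."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    d = A.sum(axis=1)
    L = np.diag(d) - A
    eps = C / np.linalg.norm(L, 'fro')
    labels = rng.integers(K, size=n)          # random initial K-partition
    for _ in range(max_iter):
        scores = np.empty((K, n))
        for j in range(K):
            chi = (labels == j).astype(float)
            M = L + np.diag((1.0 - chi) / eps)
            u = np.linalg.solve(M, d)
            v = np.linalg.solve(M.T, np.ones(n))
            u = u / (1.0 + eps * np.abs(u).sum())**2  # class-wise normalization
            scores[j] = u * v
        new_labels = scores.argmax(axis=0)    # assign each node to its best class
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels
\end{verbatim}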
\subsubsection{Semi-supervised learning}
In cases where we have a labeled set of nodes $T$ with labels $\hat\phi_v \in \{ 0,1 \}$ indicating whether we want node $v$ to be in the subgraph ($\hat\phi_v = 1$) or its complement ($\hat\phi_v = 0$), we can incorporate this information into our approach as follows.
For the subgraph detector, we use
$E_{\epsilon,\lambda,T}(\phi) = E_{\epsilon}(\phi) + \lambda \sum_{v \in T} \left( \phi_v - (1-\hat\phi_v) \right)^2$. Then the assignment step of the rearrangement algorithm is modified to: form the update $S^1$ by solving
\[
\max_{|S| = k} \ \frac1\epsilon \sum_{\ell \in S} u(\ell) \cdot v (\ell) + 2 \lambda \sum_{v\in T}[\chi_S(v) - (1-\hat\phi_v)],
\]
where $\chi_S$ is the binary-valued indicator function of $S$. This again is solved by picking the $k$ vertices with the largest scores
(we break ties by picking the lowest-index maximizers if needed).
Since the energy is still convex, the energy still increases at each iteration.
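In code, one natural reading of this modified step is the following sketch (ours; \texttt{labels} maps a labeled node to its target value $\hat\phi_v$ and \texttt{lam} is the penalty weight $\lambda$, both illustrative names), with the penalty gradient evaluated at the current iterate $\chi_{S^0}$:
\begin{verbatim}
import numpy as np

def ssl_rearrange(u, v, eps, chi0, labels, lam, k):
    """One semi-supervised rearrangement step for the subgraph detector."""
    score = (u * v) / eps
    for vtx, phi_hat in labels.items():
        # Gradient of lam*(phi_v - (1 - phi_hat))^2 at the current iterate:
        # positive for nodes labeled "in" (phi_hat = 1) already in S,
        # negative for nodes labeled "out" (phi_hat = 0).
        score[vtx] += 2.0 * lam * (chi0[vtx] - (1.0 - phi_hat))
    return set(np.argsort(-score)[:k].tolist())
\end{verbatim}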
For the $K$-partitioner, we have a labeled set of nodes $T_i$ with labels $\hat\phi_{i,v} \in \{ 0,1 \}$, for $i = 1,\dots, K$ indicating whether we want node $v$ to be in partition element $i$, with $\sum_i \hat\phi_{i,v} = 1$ for $v\in \cup_i T_i$. We can incorporate this information into our approach by modifying the energy to be the concave functional
\begin{equation}
\label{eqn:parEssl}
\tilde{E}_{\epsilon,\lambda}(\phi_1,\dots,\phi_K) = \tilde{E}_{\epsilon}(\phi_1,\dots,\phi_K) - \lambda \sum_{v \in T} \sum_{j=1}^K (\phi_{j,v} - (1 - \hat \phi_{j,v}))^2
\end{equation}
with the gradient rearrangement being appropriately modified.
\section{Numerical Results}
\label{s:NumRes}
We test the performance of these algorithms both on synthetic graphs and an assortment of ``real-world'' graphs.
For the synthetic tests, we use a particular set of undirected stochastic block models which we call the MultIsCale $K$-block Escape Ensemble (MICKEE), designed to illustrate some of the data features which our algorithms handle.
A MICKEE graph consists of $N$ nodes partitioned into $K+1$ groups of sizes $N_1$, $\ldots$, $N_K$, and $N_{K+1} = N-\sum_{j=1}^K N_j$, where $N_1<N_2<\ldots<N_K<N_{K+1}$ (see the 2-MICKEE schematic in~\cref{fig:lopsided}).
The nodes in the first $K$ groups induce densely connected Erd\H{o}s--R\'enyi (ER) subgraphs (from which we will study escape times) while the last group forms a sparsely connected ER background graph.
Each of the $K$ dense subgraphs is sparsely connected to the larger background graph.
The goal is to recover one of the planted subgraphs, generally the smallest.
A na\"ive spectral approach will often find one of the planted graphs, but we know of no way to control which subgraph is recovered. Our subgraph detector method, in contrast, can be directed to look at the correct scale to recover a specific subgraph, as we will demonstrate in the 2-MICKEE example (i.e., with two planted subgraphs).
\begin{figure}
\centering
\includegraphics[width=.35\textwidth]{lopsided.pdf}
\caption{Schematic of a 2-MICKEE graph, with three dense subgraphs that are randomly connected to each other. Our subgraph detectors can identify the target subgraph, ignoring other planted subgraphs at different scales. Our partitioner correctly identifies each subgraph as a partition, regardless of the scale.}
\label{fig:lopsided}
\end{figure}
We explore a number of variations on the basic MICKEE theme, including (1) making the large subgraph have a power law degree distribution (with edges drawn using a loopy, multi-edged configuration model), (2) adding more planted subgraphs with sizes ranging across several scales, (3) adding uniformly random noise edges across the entire graph or specifically between subgraphs, and (4) varying the edge weights of the various types of connections. For brevity, we refer to a MICKEE graph with $K$ planted subgraphs (not including the largest one) as a $K$-MICKEE graph.
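As a hedged illustration, a minimal generator in the spirit of the basic construction might look as follows (parameter names and default densities are our own choices, not the exact specification used in the experiments below):
\begin{verbatim}
import numpy as np

def mickee(sizes, n_total, p_in=0.2, p_bg=0.01, p_out=0.01,
           w_out=0.05, seed=0):
    """Weighted adjacency of a K-MICKEE-style graph: dense planted ER
    blocks of the given sizes, a sparse ER background block, and weak
    random connections from each planted block to the background."""
    rng = np.random.default_rng(seed)
    groups = list(sizes) + [n_total - sum(sizes)]
    W = np.zeros((n_total, n_total))
    bounds, start = [], 0
    for g in groups:
        bounds.append((start, start + g))
        start += g
    for i, (a, b) in enumerate(bounds):
        p = p_in if i < len(sizes) else p_bg
        block = (rng.random((b - a, b - a)) < p).astype(float)
        W[a:b, a:b] = np.triu(block, 1)
    a_bg, b_bg = bounds[-1]
    for (a, b) in bounds[:-1]:
        mask = rng.random((b - a, b_bg - a_bg)) < p_out
        W[a:b, a_bg:b_bg] += w_out * mask
    return W + W.T          # undirected, zero diagonal
\end{verbatim}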
\subsection{Subgraph Detection}
We explore the performance of~\Cref{alg:subgraph} using four benchmarks, which emphasize (1) noise tolerance, (2) multiscale detection,
(3) robustness to heavy-tailed degree distributions, and
(4) effective use of directed edges, respectively. In each of these tests, the target subgraph is the smallest planted subgraph.
\subsubsection*{Robustness to noise.} In~\cref{fig:3earnonlocal_sg} we visualize results from~\Cref{alg:subgraph} on $3$-MICKEE graphs, varying the amount and type of noise. While it is possible to get a bad initialization and thus find a bad local optimum, the subgraph detector usually finds the target exactly, except in the noisiest regime (which occurs roughly at the point where the number of noise edges is equal to the number of signal edges).
\begin{figure}
\centering
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{3ear_paperfig_nonlocal_sg_ave-eps-converted-to.pdf}
\caption{Average of 5 runs}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{3ear_paperfig_nonlocal_sg_max_alt-eps-converted-to.pdf}
\caption{Best of 5 runs}
\end{subfigure}
\caption{Accuracy of~\Cref{alg:subgraph} as a function of mean inter-subgraph degree (the mean taken over the nodes of the target subgraph) and mean weight of the inter-component edges (not including non-edges) for $3$-MICKEE graphs with planted subgraphs of sizes $80$, $160$, and $240$ nodes, with a total of $1,000$ nodes in the entire graph. The expected in-subgraph-degree is fixed at $20.8$ (with intra-component edge weights given by $1$). Inter-group edge weights are drawn from a normal distribution with maximum ranging from $.01$ to $.25$. As long as the noise level is not too high, the subgraph detector finds the smallest planted subgraph despite the presence of ``decoy'' subgraphs at larger scales. This may be contrasted with spectral clustering, which is attracted to the larger scales.}
\label{fig:3earnonlocal_sg}
\end{figure}
\subsubsection*{Range of scales.}
We generated $2$-MICKEE graphs with varying sizes of the subgraphs relative to each other and the total mass. We take $1500<N<2500$ for the total size and vary the percentage of the smallest planted subgraph as $.02N\leq N_1 \leq .15N$ with $N_2 = 2 N_1$. Here, the inter-edge density was set to $.01$ (in-subgraph-degree values between $(1-3p)\cdot N \cdot .01$ for $.02<p<.15$) with mean inter-edge weight $.05$ compared to intra-group edge weights of $1$. We used this framework to assess the detectability limits for the size of the smallest component, and numerically we observe that small communities are quite detectable using our algorithm. Using the best result over $5$ initializations, we were able to detect the smallest ear over the entire range, and we did so reliably on average as well. Since the resulting figure would thus not be terribly informative, we forgo including a similar heat plot for this range of parameters.
\subsubsection*{Heavy-tailed degree distributions.} For the results in \cref{fig:3earpowerlaw_sg}, we use a power law degree distribution in the largest component of $3$-MICKEE graphs with $N_1 = 80, N_2 = 160, N_3 = 240$ and $N = 1000$. Surprisingly (at least to us), smaller power-law exponents (corresponding to more skewed degree distributions) actually make the problem much easier (whereas adding noise edges had little effect). We conjecture that this is because, in the presence of very high-degree nodes, it is difficult to have a randomly occurring subgraph with high mean escape time, since connections into and out of the hubs are difficult to avoid.
\begin{figure}
\centering
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{3ear_paperfig_powerlaw_sg-eps-converted-to.pdf}
\caption{Average of 5 runs}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{3ear_paperfig_powerlaw_sg_max_alt1-eps-converted-to.pdf}
\caption{Best of 5 runs}
\end{subfigure}
\caption{Accuracy of~\Cref{alg:subgraph} on a $3$-MICKEE graph with a power law distribution as a function of the power law exponent and inter-cluster edge density. We observe a robustness to both the exponent and density (especially in the right panel) up to a sharp cutoff around 3.4. Note the low exponents (typically considered to be the harder cases) are actually easier in this problem.}
\label{fig:3earpowerlaw_sg}
\end{figure}
\subsubsection*{Directed edge utilization.} In~\cref{fig:ERcycle_sg} we consider the problem of detecting a directed cycle appended to an ER graph. The graph weights have been arranged so that the expected degree of all nodes is roughly equal. There are many edges leading from the ER graph into the cycle, with only one edge leading back into the ER graph. This makes the directed cycle a very salient dynamical feature, but not readily detectable by undirected (e.g.\ spectral) methods. We considered a large number of cycle sizes relative to the ER graph and with a proper choice of $\epsilon$, we were able to detect the cycle in all cases. Thus, this detector finds directed components very robustly due to the nature of the escape time.
\begin{figure}
\centering
\includegraphics[width=0.25\textwidth]{er_plus_cycle.pdf}
\caption{A directed ER graph with a directed cycle appended. Note that there is only one edge (in the upper left) leading from the cycle to the ER graph, with many edges going the other direction from the ER graph to the cycle. The cycle nodes have the same expected degree as the ER nodes, yet a random walker would naturally get stuck in the cycle for a long time. Detecting such a dynamical trap is a challenge for undirected algorithms, but~\Cref{alg:subgraph} detects it consistently over a wide range of cycle lengths and ER graph sizes.}
\label{fig:ERcycle_sg}
\end{figure}
\subsubsection*{Variation over choice of $N_1$.} In Figure \ref{fig:EK}, we consider how the mean exit time, as well as the regularized energy in \eqref{e:l1energy}, behaves as we vary the volume constraint in our algorithm. We considered a $2$-MICKEE graph with $N_1 = 50$, $N_2 = 100$ and $N = 1000$. We took the baseline ER density $.03$, and the inter-edge density was set to $.025$ with mean inter-edge weight $.1$.
\begin{figure}
\centering
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{EvsKsweep_MET-eps-converted-to.pdf}
\caption{True mean exit time}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{EvsKsweep-eps-converted-to.pdf}
\caption{Regularized energy}
\end{subfigure}
\caption{The score of the optimal sub-graph found with~\cref{alg:subgraph}. Both plots have clear shifts near $k = 50$ corresponding to the smallest component and $k = 100$ corresponding to the second smallest component. This suggests that the size of natural subgraphs within a given graph can be detected from breaks in the subgraph scores as the size of the target in~\cref{alg:subgraph} varies.}
\label{fig:EK}
\end{figure}
In summary, we find that the subgraph detector is able to robustly recover planted communities in synthetic graphs and is robust to a range of application-relevant factors.
\subsection{\texorpdfstring{$K$}{K}-partition method}
We will now consider the performance of \cref{alg:partitioner} in a variety of settings. Throughout, we will give heat plots over the variation of the parameters to visualize the purity measure of our detected communities relative to the ground-truth components of the graph, over $5$ runs of the algorithm. The purity measure is
\[
\frac{1}{N} \sum_{k=1}^K \max_{1 \leq l \leq K} N_k^l
\]
for $N_k^l$ the number of data samples in cluster $k$ that are in ground truth class $l$.
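Equivalently, in code (our sketch; \texttt{pred} and \texttt{truth} are integer label arrays of length $N$):
\begin{verbatim}
import numpy as np

def purity(pred, truth, K):
    """Credit each predicted cluster k with its best-matching class."""
    total = 0
    for k in range(K):
        members = truth[pred == k]
        if members.size > 0:
            total += np.bincount(members).max()
    return total / len(truth)
\end{verbatim}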
In \cref{fig:3earnonlocal} we consider a $\rho-\Delta$ heat plot of the purity measure for a 4-partition of a $3$-MICKEE graph using delocalized connections with $N_1 = 80, N_2 = 160, N_3 = 240$ and $N = 1000$, varying the density of the inter-community edge connections ($0 < \rho < .1$) and the mean weight of the inter-component edges ($0<\Delta<.125$). We vary over number and strength of connecting edges between components and consider the purity measure as output.
\begin{figure}
\centering
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{3ear_paperfig_nonlocal_ave-eps-converted-to.pdf}
\caption{Average of 5 runs}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{3ear_paperfig_nonlocal_max-eps-converted-to.pdf}
\caption{Best of 5 runs}
\end{subfigure}
\caption{The purity measure for~\cref{alg:partitioner} on $3$-MICKEE graphs. We vary the density of the inter-region edges and their edge weights. We observe robust (usually perfect) detection over a range of these parameters, with a sharp cutoff (especially in the left panel) when the noise levels grow too high, suggesting that detection is still possible beyond this cutoff, but the energy landscape has more bad local optima beyond this point.}
\label{fig:3earnonlocal}
\end{figure}
In addition, we have tested \cref{alg:partitioner} on MICKEE graphs with varying sizes of the components relative to each other and the total mass where the connections between ER graphs include more random edges with weak connection weights. \Cref{fig:2earnonlocal} shows results from testing the algorithm on $2$-MICKEE graphs with varying sizes of the components relative to each other and the total mass. We take $1500<N<2500$ for the total size and vary the percentage of smallest planted subgraph as $.02N\leq N_1 \leq .15N$ with $N_2 = 2 N_1$. Here, the inter-edge density was set to $.025$ with mean inter-edge weight $.05$. The question addressed in this experiment is how small can we get the components and still detect them. We heat map the average purity measure varying the number of vertices in the graph and the relative size of the smallest sub-graph (i.e., $N_1/N$).
\begin{figure}
\centering
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{2ear_paperfig_N_percentsmallestear_ave-eps-converted-to.pdf}
\caption{Average of 5 runs}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{2ear_paperfig_N_percentsmallestear_max-eps-converted-to.pdf}
\caption{Best of 5 runs}
\end{subfigure}
\caption{The purity measure for the partitioner acting on a $2$-MICKEE graph with the fraction of nodes in the smaller planted subgraph varying, along with the size of the graph. We observe a generally robust partitioning.}
\label{fig:2earnonlocal}
\end{figure}
We similarly consider the partitioning problem on a version of the $3$-MICKEE graph with power-law degree distribution in the largest component, using delocalized connections with $N_1 = 80, N_2 = 160, N_3 = 240$ and $N = 1000$. \Cref{fig:3earpowerlaw} provides a $\rho-q$ plot for results from varying the density ($.001<\rho<.03$) of the edge-density of connections between the components of the graph, using a power law degree distribution for the largest component with exponent ($2.1 \leq q \leq 4$).
\begin{figure}
\centering
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{3ear_paperfig_powerlaw_ave_alt-eps-converted-to.pdf}
\caption{Average of 5 runs}
\end{subfigure}
\begin{subfigure}{.4\textwidth}
\includegraphics[width=\textwidth]{3ear_paperfig_powerlaw_max_alt-eps-converted-to.pdf}
\caption{Best of 5 runs}
\end{subfigure}
\caption{Purity achieved by~\cref{alg:partitioner} on $3$-MICKEE graphs with a power law degree distribution, varying the exponent of the power law and inter-subgraph edge density. We observe generally robust partitioning (especially in the right panel).}
\label{fig:3earpowerlaw}
\end{figure}
\subsubsection{Graph clustering examples}
We consider the family of examples as in \cite{yang2012clustering} and compare the best presented purity measures from that paper to a number of settings using our algorithms. Since some of these examples are by their nature actually directed data sets, we throughout computed both the directed and undirected adjacency matrix representations as appropriate to test against. We ran the $K$-partitioner over a variety of scenarios for both cases. In all these runs, we chose the value of $K$ to agree with the metadata (we avoid the term ``ground truth'', as the node labels themselves may be noisy or not the only good interpretation of the data). However, we note that our algorithm also does a good job in a variety of settings selecting the number of partitions to fill without being provided this correct number \emph{a priori}.
For our study, we consider a number of various options for the algorithm. First, the initial seeding sets were chosen either uniformly at random or using $K$-means on the first $K$-eigenvectors of the graph Laplacian. We consider the best result over $10$ outcomes. In addition, we considered a range of values of $\epsilon$, all of which were a multiplicative factor of the inverse of the Frobenius norm of the graph Laplacian, denoted $\| L \|_{{\rm Fro}}$, which sets a natural scaling for separation in the underlying graph. See for instance the related choice in \cite{osting2014minimal}. We computed a family of partitions for $\epsilon = {50 \nu}/{\| L \|_{{\rm Fro}}}$, where $\nu = e^{.2 \ell}$ with $-50 < \ell < 50$. Finally, we also considered the impact of semi-supervised learning by toggling between $\lambda = 0$ and $\lambda = 10^6$ in \cref{eqn:parEssl} with $10$\% of the nodes being included in the learning set. Clearly, there are many ways we might improve the outcomes, by for instance increasing the number and method of initialization and refining our choices of $\epsilon$ or $\lambda$; nevertheless, we see under our current choices that our fast algorithm performs well over a range of such parameters, as reported in Table \ref{tab:purmeas}.
For each data set in Table \ref{tab:purmeas}, we report the best outcome using directed adjacency matrices to build the Graph Laplacian using both the $K$-means and random initializations but with no semi-supervised learning (Directed); the best outcome using symmetrized adjacency matrices to build the Graph Laplacian using both the $K$-means and random initializations but with no semi-supervised learning (Undirected); the best outcome when Semi-Supervised Learning is turned on over any configuration (Semi-supervision), the $K$-means only outcome ($K$-means only) and the best data from all the experiments reported in \cite{yang2012clustering} (Best from \cite{yang2012clustering}). Our results promisingly demonstrate that our fast algorithm is very successful in many cases in discovering large amounts of community structure that agrees with the metadata in these explicit data sets. Given that our communities are all built around random walks in the graph, it is not clear that all ground-truth designated communities would align well with our methods. For example, we note that our results do not align well with the metadata in the {\rm POLBLOGS} data set. A major takeaway from the table, however, is that in several examples we see that using the directed nature of the data provides better agreement with the metadata (as indicated by the green cells). Perhaps most striking in the table is that the best run of our fast algorithm, even without semi-supervised learning, provides better agreement with the metadata than \cite{yang2012clustering} for many of the data sets.
As a statistical summary of our findings, we had in total
$39$ directed datasets and $6$ undirected data sets that came from a variety of domains (image, social, biological, physical, etc.).
The networks are sized between $35$ nodes and $98,528$ nodes, having $2$--$65$ classes per network. Among directed networks, $21$ data sets achieved their highest purity against the metadata with semi-supervised learning turned on, while $13$ have the best result from \cite{yang2012clustering}, and $3$ are best with $K$-means only. For $9$ total data sets (green in the table), the directed version of our algorithm is better than the symmetrized undirected version, while $5$ are tied (yellow) and for $25$ the undirected method is better (orange). When \cite{yang2012clustering} is best, the median gap from our result with semi-supervised learning is $.05$.
When our algorithm with semi-supervised learning is best, the median gap from \cite{yang2012clustering} is $.05$. There is no clear relationship between data domain and performance, or between node count and performance. However, semi-supervision improved the results most on data sets with a smaller class count (median $3$) versus those where \cite{yang2012clustering} was best (median $20$).
When the directed algorithm is better than the undirected version, the median gap is $0.03$. Interestingly, $5$ of the datasets where directed was better were image or sensor data, with the two largest gaps ($.07$ and $.11$) being digit datasets.
When undirected was better, the median gap was $0.06$, with the largest gap being $.29$, for the 20NEWS dataset. When semi-supervision improves over our method (max of directed and undirected performance), the median improvement is $.06$, and the max improvements were $.22$ and $.20$. There is no obvious relationship between edge density and algorithm performance.
\begin{footnotesize}
\begin{table}
\centering
\sisetup{detect-weight,mode=text}
\renewrobustcmd{\bfseries}{\fontseries{b}\selectfont}
\begin{tabular}{llr>{\raggedleft}p{.3in}p{.3in}p{.3in}p{.3in}p{.3in}p{.3in}p{.3in}}
\toprule
\rot{Network} & \rot{Domain} & \rot{Vertices} & \rot{Density} & \rot{Classes} & \rot{Directed} & \rot{Undirected} & \rot{Semi-supervision} & \rot{$K$-means only} & \rot{Best from \cite{yang2012clustering}} \\
\midrule
\multicolumn{10}{l}{\bf Directed data}\\
MNIST & Digit & 70,000 & 0.00 & 10& \colorbox{green}{0.85} & \colorbox{green}{0.78} & \textbf{0.98} & 0.84 & 0.97\\
VOWEL & Audio & 990 & 0.01 & 11& \colorbox{green}{0.35} & \colorbox{green}{0.32} & \textbf{0.44} & 0.34 & 0.37 \\
FAULTS & Materials & 1,941 & 0.00 & 7 & \colorbox{green}{0.44} & \colorbox{green}{0.42} & \textbf{0.49} & 0.39 & 0.41 \\
SEISMIC & Sensor & 98,528 & 0.00 & 3 & \colorbox{green}{0.60} & \colorbox{green}{0.59} & \textbf{0.66} & 0.58 & 0.59 \\
7Sectors & Text & 4,556 & 0.00 & 7 & \colorbox{green}{0.27} & \colorbox{green}{0.26} & \textbf{0.39} & 0.26 & 0.34 \\
PROTEIN & Protein & 17,766 & 0.00 & 3 & \colorbox{green}{0.47} & \colorbox{green}{0.46} & \textbf{0.51} & 0.46 & 0.50 \\
KHAN & Gene & 83 & 0.06 & 4 & \colorbox{yellow}{0.59} & \colorbox{yellow}{0.59} & \textbf{0.61} & 0.59 & 0.60 \\
ROSETTA & Gene & 300 & 0.02 & 5 & \colorbox{yellow}{0.78} & \colorbox{yellow}{0.78} & \textbf{0.81} & 0.77 & 0.77 \\
WDBC & Medical & 683 & 0.01 & 2 & \colorbox{yellow}{0.65} & \colorbox{yellow}{0.65} & \textbf{0.70} & 0.65 & 0.65 \\
POLBLOGS & Social & 1,224 & 0.01 & 2 & \colorbox{yellow}{0.55} & \colorbox{yellow}{0.55} & \textbf{0.59} & 0.51 & NA \\
CITESEER & Citation & 3,312 & 0.00 & 6 & \colorbox{orange}{0.28} & \colorbox{orange}{0.29} & \textbf{0.49} & 0.25 & 0.44\\
SPECT & Astronomy & 267 & 0.02 & 3 & \colorbox{orange}{0.79} & \colorbox{orange}{0.80} & \textbf{0.84} & 0.79 & 0.79 \\
DIABETES & Medical & 768 & 0.01 & 2 & \colorbox{orange}{0.65} & \colorbox{orange}{0.67} & \textbf{0.74} & 0.65 & 0.65 \\
DUKE & Medical & 44 & 0.11 & 2 & \colorbox{orange}{0.64} & \colorbox{orange}{0.68} & \textbf{0.73} & 0.52 & 0.70 \\
IRIS & Biology & 150 & 0.03 & 3 & \colorbox{orange}{0.87} & \colorbox{orange}{0.90} & \textbf{0.97} & 0.67 & 0.93 \\
RCV1 & Text & 9,625 & 0.00 & 4 & \colorbox{orange}{0.35} & \colorbox{orange}{0.40} & \textbf{0.62} & 0.32 & 0.54 \\
CORA & Citation & 2,708 & 0.00 & 7 & \colorbox{orange}{0.33} & \colorbox{orange}{0.39} & \textbf{0.50} & 0.32 & 0.47 \\
CURETGREY & Image & 5,612 & 0.00 & 61& \colorbox{orange}{0.23} & \colorbox{orange}{0.29} & \textbf{0.33} & 0.22 & 0.28\\
SPAM & Email & 4,601 & 0.00 & 2 & \colorbox{orange}{0.64} & \colorbox{orange}{0.70} & \textbf{0.73} & 0.61 & 0.69 \\
GISETTE & Digit & 7,000 & 0.00 & 2 & \colorbox{orange}{0.87} & \colorbox{orange}{0.94} & \textbf{0.97} & 0.81 & 0.94 \\
WEBKB4 & Text & 4,196 & 0.00 & 4 & \colorbox{orange}{0.42} & \colorbox{orange}{0.53} & \textbf{0.66} & 0.40 & 0.63 \\
CANCER & Medical & 198 & 0.03 & 14& \colorbox{orange}{0.49} & \colorbox{orange}{\textbf{0.55}} & 0.54 & 0.45 & 0.54 \\
YALEB & Image & 1,292 & 0.00 & 38& \colorbox{orange}{0.44} & \colorbox{orange}{\textbf{0.54}} & 0.52 & 0.41 & 0.51 \\
COIL-20 & Image & 1,440 & 0.00 & 20& \colorbox{orange}{0.74} & \colorbox{orange}{\textbf{0.85}} & 0.78 & 0.82 & 0.81 \\
ECOLI & Protein & 327 & 0.02 & 5 & \colorbox{orange}{0.79} & \colorbox{orange}{\textbf{0.83}} & 0.81 & 0.81 & \textbf{0.83} \\
YEAST & Biology & 1,484 & 0.00 & 10& \colorbox{orange}{0.46} & \colorbox{orange}{0.53} & 0.54 & 0.47 & \textbf{0.55} \\
20NEWS & Text & 19,938 & 0.00 & 20& \colorbox{orange}{0.20} & \colorbox{orange}{0.49} & 0.62 & 0.16 & \textbf{0.63} \\
MED & Text & 1,033 & 0.00 & 31& \colorbox{orange}{0.50} & \colorbox{orange}{0.54} & 0.54 & 0.48 & \textbf{0.56} \\
REUTERS & Text & 8,293 & 0.00 & 65& \colorbox{orange}{0.60} & \colorbox{orange}{0.69} & 0.75 & 0.60 & \textbf{0.77} \\
ALPHADIGS & Digit & 1,404 & 0.00 & 6 & \colorbox{orange}{0.42} & \colorbox{orange}{0.48} & 0.48 & 0.46 & \textbf{0.51} \\
ORL & Face & 400 & 0.01 & 40& \colorbox{orange}{0.76} & \colorbox{orange}{0.82} & 0.76 & 0.78 & \textbf{0.83} \\
OPTDIGIT & Digit & 5,620 & 0.00 & 10& \colorbox{orange}{0.90} & \colorbox{orange}{0.93} & 0.91 & 0.90 & \textbf{0.98} \\
PIE & Face & 1,166 & 0.00 & 53& \colorbox{orange}{0.53} & \colorbox{orange}{0.66} & 0.62 & 0.51 & \textbf{0.74}\\
SEG & Image & 2,310 & 0.00 & 7 & \colorbox{orange}{0.54} & \colorbox{orange}{0.64} & 0.59 & 0.51 & \textbf{0.73} \\
UMIST & Face & 575 & 0.01 & 20& \colorbox{green}{0.74} & \colorbox{green}{0.71} & 0.67 & 0.67 & \textbf{0.74} \\
PENDIGITS & Digit & 10,992 & 0.00 & 10& \colorbox{green}{0.82} & \colorbox{green}{0.73} & 0.82 & 0.83 & \textbf{0.87}\\
SEMEION & Digit & 1,593 & 0.00 & 10& \colorbox{green}{0.86} & \colorbox{green}{0.82} & 0.77 & 0.81 & \textbf{0.94} \\
AMLALL & Medical & 38 & 0.13 & 2 & \colorbox{orange}{0.92} & \colorbox{orange}{\textbf{0.95}} & 0.94 & \textbf{0.95} & 0.92 \\
IONOSPHERE & Radar & 351 & 0.01 & 2 & \colorbox{yellow}{0.77} & \colorbox{yellow}{0.77} & \textbf{0.85} & \textbf{0.85} & 0.70 \\
\multicolumn{10}{l}{\bf Undirected data} \\
POLBOOKS & Social & 105 & 0.08 & 3 & 0.83 & \textbf{0.85} & \textbf{0.85} & 0.82 & 0.83 \\
KOREA & Social & 35 & 0.11 & 2 & \textbf{1.00} & \textbf{1.00} & \textbf{1.00} & 0.71 & \textbf{1.00} \\
FOOTBALL & Sports & 115 & 0.09 & 12 & \textbf{0.94} & 0.93 & 0.90 & 0.93 & 0.93 \\
MIREX & Music & 3,090 & 0.00 & 10 & 0.21 & 0.24 & 0.27 & 0.12 & \textbf{0.43} \\
HIGHSCHOOL & Social & 60 & 0.10 & 5 & 0.82 & 0.85 & 0.83 & 0.82 & \textbf{0.95} \\
\bottomrule
\end{tabular}
\caption{Purity measures across the clustering data sets. Green indicates the directed version of our algorithm outperforms the symmetrized undirected version, yellow indicates a tie, and orange indicates the undirected version is better; bold marks the best result(s) in each row.}
\label{tab:purmeas}
\end{table}
\end{footnotesize}
We have discussed the output of a variety of experiments on a large number of data sets, but we also want to discuss their dependence upon the $\epsilon$ parameter and the percentage of nodes that are learned in the energy \eqref{eqn:parEssl}. To that end, we consider the output purity measure for some representative data sets and look at the outputs over a range of epsilon parameters and percentages of learning. In this case, we considered only the $K$-means initialization for consistency and simplicity of comparison. For the $\epsilon$ sweep, we recall that we considered the range $\epsilon = {50 \nu}/{\| L \|_{{\rm Fro}}}$, where $\nu = e^{.2 \ell}$ with $-50 < \ell < 50$. In \cref{fig:epsweep} we show the variation in the purity measure with $\epsilon$ for a small graph ({\rm FOOTBALL}), a medium sized graph ({\rm OPTDIGITS)}, and a large graph ({\rm SEISMIC}). Similarly, in \cref{fig:persweep} we visualize how results vary with the fraction of supervision (nodes with labels provided) under semi-supervised learning, for the same graphs, with $\nu = .6,.8,1.0,1.2,1.4,1.6,1.8$.
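For reference, the sweep grid just described amounts to the following small helper (ours):
\begin{verbatim}
import numpy as np

# eps = 50 * nu / ||L||_Fro, with nu = exp(0.2 * ell) for -50 < ell < 50.
ell = np.arange(-49, 50)
nu = np.exp(0.2 * ell)

def eps_grid(L_fro_norm):
    return 50.0 * nu / L_fro_norm
\end{verbatim}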
\begin{figure}[t!]
\centering
\begin{subfigure}{.25\textwidth}
\includegraphics[width=\textwidth]{Football_EpsSweep-eps-converted-to.pdf}
\caption{Football}
\end{subfigure}
\begin{subfigure}{.25\textwidth}
\includegraphics[width=\textwidth]{Optdigits_EpsSweep-eps-converted-to.pdf}
\caption{Optdigits}
\end{subfigure}
\begin{subfigure}{.25\textwidth}
\includegraphics[width=\textwidth]{Seismic_EpsSweep-eps-converted-to.pdf}
\caption{Seismic}
\end{subfigure}
\caption{Purity measures for three selected data sets as a function of the scale parameter $\nu$. In all three panels, we observe a range (on a log scale) over which the purity is stably nontrivial, and in the left panel there are two such scales.}
\label{fig:epsweep}
\end{figure}
\begin{figure}[t!]
\centering
\begin{subfigure}{.25\textwidth}
\includegraphics[width=\textwidth]{Football_persweep-eps-converted-to.pdf}
\caption{Football}
\end{subfigure}
\begin{subfigure}{.25\textwidth}
\includegraphics[width=\textwidth]{Optdigits_persweep-eps-converted-to.pdf}
\caption{Optdigits}
\end{subfigure}
\begin{subfigure}{.25\textwidth}
\includegraphics[width=\textwidth]{Seismic_persweep-eps-converted-to.pdf}
\caption{Seismic}
\end{subfigure}
\caption{Purity measures on three selected data sets as a function of the fraction of supervision (nodes with labels provided) under semi-supervised learning. We observe that supervision can either consistently help (as in the right panel) or can have inconsistent effects (as in the left and middle panels). One possible explanation for this is that there may be multiple clustering structures present in the data, and it takes a lot of supervision to force the partitioner to switch to a partition aligned with the metadata indicated by the supervision, rather than a different clustering structure that is better from the perspective of the optimizer.}
\label{fig:persweep}
\end{figure}
\section{Discussion}
\label{s:disc}
Throughout our study we emphasize that our methodology operates fundamentally on the possibly directed nature of the underlying graph data. Considering the Index of Complex Networks \cite{ICON} as a representative collection of widely-studied networks, we note that (as of our writing here) 327 of the 698 entries in the Index contain directed data. Whereas there are undoubtedly settings where one can ignore edge direction, there are inevitably others where respecting direction is essential. By formulating a strategy for subgraph detection and graph partitioning inherently built on processes running on the directed graph, we avoid the need for any \emph{post hoc} modifications to try to respect directed edges. In particular, our method nowhere relies on any correspondingly undirected version of the graph, avoiding the possible information loss incurred in symmetrizing.
While we expect that our formulation of escape times can be useful in general, including for undirected graphs, our proper treatment of the directed graph data should prove especially useful. For example, the directed follower v.\ following nature of some online social networks (e.g., Twitter) is undoubtedly important for understanding the processes involved in the viral spread of (mis)information. As shown by \cite{Weng_Menczer_Ahn_2013} (and extended by \cite{li2019infectivity}), the community structure is particularly important for identifying the virality of memes specifically because a meme that ``escapes'' (in our present language) its subgraph of origin is typically more likely to continue to propagate. Another application where directed escape times could be relevant is in detecting the (hidden) circulation of information, currency, and resources that is part of coordinated adversarial activity, as explored for example in~\cite{jin2019noisy,moorman2018filtering,sussman2020matched}.
To close, we highlight two related thematic areas for possible future work that we believe would lead to important extensions on the methods presented here.
\subsection{Connection to distances on directed graphs}
In previous work of the present authors \cite{boyd2020metric}, together with Jonathan Weare, we construct a symmetrized distance function on the vertices of a directed graph. We recall the details briefly here; the construction is based in part on the hitting probability matrix construction used in umbrella sampling (\cite{dinner2017stratification,Thiede_2015}). For a general probability transition matrix $P$, we denote by $\phi$ the (left) Perron eigenvector, satisfying
\[
P^T \phi = \phi.
\]
Let us define a matrix $M$ such that $M_{ij} = \prob_i [\tau_j < \tau_i]$, where $\prob_i [\tau_j < \tau_i]$ is the probability that starting from site $i$ the hitting time of $j$ is less than the time it takes to return to $i$. Let $X(t)$ be the Markov chain with transition matrix $P$. Then, it can be observed that (\cite{dinner2017stratification,Thiede_2015})
\[
\prob_i [\tau_j < \tau_i] \phi_i = \prob_j [\tau_i < \tau_j] \phi_j,
\]
where $\prob_i$ denotes probability for the chain started from $X(0) = i$.
This means that, from hitting probabilities, one can construct a symmetric adjacency matrix,
\begin{equation}
\label{Aht}
A^{(hp)}_{ij} = \frac{ \sqrt{\phi_i} }{ \sqrt{\phi_j} } \prob_i [\tau_j < \tau_i] = A^{(hp)}_{ji}\,.
\end{equation}
This adjacency matrix has built-in edge weights based upon hitting probabilities, and we can then easily partition it using our symmetric algorithms, in particular the mean exit time fast algorithm developed here.
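A brute-force sketch of this construction for a small transition matrix $P$ is given below (ours; it solves one linear system per ordered pair of nodes, so it is practical only for small graphs, and it assumes $P$ is irreducible so that the Perron vector is well defined):
\begin{verbatim}
import numpy as np

def hitting_prob_adjacency(P):
    """Symmetrized adjacency A^(hp) from hitting probabilities."""
    n = P.shape[0]
    # Perron (stationary) vector: left eigenvector of P at eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    phi = np.real(vecs[:, np.argmax(np.real(vals))])
    phi = phi / phi.sum()
    M = np.zeros((n, n))
    for j in range(n):
        for i in range(n):
            if i == j:
                continue
            keep = [k for k in range(n) if k != i and k != j]
            # h[k] = Prob_k[hit j before i], with h[j] = 1 and h[i] = 0.
            full = np.zeros(n)
            full[j] = 1.0
            if keep:
                Q = P[np.ix_(keep, keep)]
                full[keep] = np.linalg.solve(np.eye(len(keep)) - Q,
                                             P[keep, j])
            M[i, j] = P[i] @ full   # one step from i, then hit j before i
    return np.sqrt(np.outer(phi, 1.0 / phi)) * M
\end{verbatim}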
The distance function in \cite{boyd2020metric} is given by $d \colon [n] \times [n] \to \mathbb R$, which we refer to as the \emph{hitting probability pseudo-metric}, defined by
\begin{equation} \label{e:Dist}
d (i,j) = - \log \left( A^{(hp)}_{ij} \right).
\end{equation}
This is generically a pseudo-metric, as distinct nodes can be at distance $0$ from one another; however, there exists a quotient graph on which $d$ is a genuine metric.
Indeed, a family of metrics is given in \cite{boyd2020metric} that has to do with possible choices of the normalization in \cref{Aht} with different powers of the invariant measure.
A natural question to pursue is whether first parsing the directed network with this approach to create the symmetrized $A^{(hp)}$ matrix, and then applying our clustering scheme, can be used to detect graph structures in a more robust manner. In particular, a comparison of our clustering scheme against $K$-means studies of the distance structure should be an important direction for future study.
\subsection{Continuum Limits}
The methods presented here have a clear analog to the continuum problems that motivated them, discussed in the introduction. The primary continuum problem is related to the landscape function, or torsion function, on a sub-domain prescribed with Dirichlet boundary conditions,
\begin{align}
-\Delta u_S = 1_S, \ \ u_S |_{\partial S} = 0.
\end{align}
This is known as the mean exit time from a set $S$ of a standard Brownian motion random walker; see \cite{pavliotis2014stochastic}, Chapter $7$. Correspondingly, for a domain $\Omega$ with Neumann boundary conditions (which better match the graph setting) and some $0 < \alpha < 1$, we propose the following optimization
\begin{align}
\max_{S \subset \Omega, |S| = \alpha |\Omega|} \int_S u_S\, dx\,,
\end{align}
meaning that we wish to maximize the exit time of a random walker from a given sub-domain. Through the Poisson formula for the mean exit time, we have that $\int u_S = (- \Delta u_S, u_S)$, allowing us to frame things similarly via a Ginzburg--Landau like penalty term for being in a set $S$,
$$
\min_{ \substack{0\leq \phi \leq 1 \\ \int \phi = \alpha |\Omega| }} \ \min_{ \int u = 1 } \frac12 (- \Delta u, u) + \frac{1}{2 \epsilon} \langle u, (1-\phi) u \rangle.
$$
Analysis of optimizers for such a continuum problem and its use in finding sub-domains and domain partitions is one important direction for future study.
Related results in a continuum setting have been studied for instance in \cite{briancon2004regularity,buttazzo1993existence}, but the regularization of this problem seems to be new and connects the problem through the inverse of the Laplacian to the full domain and its boundary conditions. Following works such as \cite{osting2017consistency,singer2017spectral,trillos2016continuum,trillos2018variational,trillos2016consistency,YUAN_2021}, an interesting future direction would be to prove consistency of our algorithm to these well-posed continuum optimization problems.
\bibliographystyle{amsplain}
\section{Introduction}
The theory of Mean Field Games (MFG) was introduced independently by Lasry and Lions \cite{bib:LL1,bib:LL2,bib:LL3} and Huang, Malham\'e and Caines \cite{bib:HCM1,bib:HCM2} to study Nash equilibria for games with a very large number of players. Without entering technical details, let us recall that such an approach aims to describe the optimal value $u$ and distribution $m$ of players at a Nash equilibrium by a system of partial differential equations. Stochastic games are associated with a second order PDE system while deterministic games lead to the analysis of the first order system
\begin{equation}\label{eq:omegaMFG}
\begin{cases}
\ -\partial _{t} u^{T} + H(x, Du^{T})=F(x, m^{T}(t)) & \text{in} \quad (0,T)\times\OO,
\\ \ \partial _{t}m^{T}-\text{div}\Big(m^{T}D_{p}H(x, Du^{T}(t,x)\Big)=0 & \text{in} \quad (0,T)\times\OO,
\\ \ m^{T}(0)=m_{0}, \quad u^{T}(T,x)=u^{f}(x), & x\in\OO.
\end{cases}
\end{equation}
where $\OO$ is an open domain in the Euclidean space or on a manifold.
Following the above seminal works, this subject grew very fast producing an enormous literature. Here, for space reasons, we only refer to \cite{bib:NC, bib:DEV, bib:BFY, bib:CD1} and the references therein. However, most of the papers on this subject assumed the configuration space $\OO$ to be the torus $\mathbb{T}^{d}$ or the whole Euclidean space $\mathbb{R}^{d}$.
In this paper we investigate the long time behavior of the solution to \cref{eq:omegaMFG} where $\OO$ is a bounded domain of $\mathbb{R}^{d}$ and the state of the system is constrained in $\OOO$.
The constrained MFG system with finite horizon $T$ was analyzed in \cite{bib:CC, bib:CCC1, bib:CCC2}. In particular, Cannarsa and Capuani in \cite{bib:CC} introduced the notion of constrained equilibria and mild solutions $(u^T,m^T)$ of the constrained MFG system \cref{eq:omegaMFG} with finite horizon on $\OOO$ and proved an existence and uniqueness result for such a system. In \cite{bib:CCC1, bib:CCC2}, Cannarsa, Capuani and Cardaliaguet studied the regularity of mild solutions of the constrained MFG system and used such results to give a precise interpretation of \cref{eq:omegaMFG}.
At this point, it is natural to raise the question of the asymptotic behavior of solutions as $T \to +\infty$. In the absence of state constraints, results describing the asymptotic behavior of solutions of the MFG system were obtained in \cite{bib:CLLP2, bib:CLLP1}, for second order systems on $\mathbb{T}^{d}$, and in \cite{bib:CCMW, bib:CAR}, for first order systems on $\mathbb{T}^{d}$ and $\mathbb{R}^{d}$, respectively. Recently, Cardaliaguet and Porretta studied the long time behavior of solutions for the so-called Master equation associated with a second order MFG system, see \cite{bib:CP2}. As is well known, the introduction of state constraints creates serious obstructions to most techniques which can be used in the unconstrained case. New methods and ideas become necessary.
In order to understand the specific features of constrained problems, it is useful to recall the main available results for constrained Hamilton-Jacobi equations. The dynamic programming approach to constrained optimal control problems has a long history going back to Soner \cite{bib:Soner}, who introduced the notion of constrained viscosity solutions as subsolutions in the interior of the domain and supersolutions up to the boundary. Several results followed the above seminal paper, for which we refer to \cite{bib:BD, bib:Cap-lio} and the references therein.
As for the asymptotic behavior of constrained viscosity solutions of
\begin{equation*}
\partial_{t} u(t,x) + H(x, Du(t,x))=0, \quad (t,x) \in [0,+\infty) \times \OOO
\end{equation*}
we recall the paper \cite{bib:Mit} by Mitake, where the solution $u(t,x)$ is shown to converge as $t \to +\infty$ to a viscosity solution, $\bar{u}$, of the ergodic Hamilton-Jacobi equation
\begin{equation}\label{eq:1-1}
H(x, Du(x))=c, \quad x \in \OOO
\end{equation}
for a unique constant $c \in \mathbb{R}$.
In the absence of state constraints, it is well known that the constant $c$ can be characterized by using Mather measures, that is, invariant measures with respect to the Lagrangian flow which minimize the Lagrangian action, see for instance \cite{bib:FA}.
On the contrary, such an analysis is missing for constrained optimal control problems and the results in \cite{bib:Mit} are obtained by pure PDE methods without constructing a Mather measure.
On the other hand, as proved in \cite{bib:CCMW, bib:CAR}, the role of Mather measures is crucial in the analysis of the asymptotic behavior of solutions to the MFG system on $\mathbb{T}^{d}$ or $\mathbb{R}^{d}$. For instance, on $\mathbb{T}^{d}$ the limit behavior of $u^{T}$ is described by a solution $(\bar{c}, \bar{u}, \bar{m})$ of the ergodic MFG system
\begin{align*}
\begin{cases}
H(x, Du(x))=c+F(x,m), & \text{in}\ \mathbb{T}^{d}
\\
\ddiv\Big(mD_{p}H(x, Du(x)) \Big)=0, & \text{in}\ \mathbb{T}^{d}\\
\int_{\mathbb{T}^{d}}m(dx)=1
\end{cases}
\end{align*}
where $\bar{m}$ is given by a Mather measure. Then, the differentiability of $\bar{u}$ on the support of the Mather measure allows one to give a precise interpretation of the continuity equation in the above system.
Motivated by the above considerations, in this paper we study the ergodic Hamil\-ton-Jacobi equation \cref{eq:1-1} from the point of view of weak KAM theory, aiming at constructing a Mather measure. For this purpose, we need to establish a fractional semiconcavity result for the value function of a constrained optimal control problem, inspired by a similar property derived in \cite{bib:CCC2}. Indeed, such regularity is needed to prove the differentiability of a constrained viscosity solution of \cref{eq:1-1} along calibrated curves and, eventually, to construct the Mather set.
With the above analysis at our disposal, we address the existence and uniqueness of solutions to the ergodic constrained MFG system
\begin{align}\label{eq:eMFG}
\begin{cases}
H(x, Du(x))=c+F(x,m), & \text{in}\ \OOO,
\\
\ddiv\Big(mD_{p}H(x, Du(x)) \Big)=0, & \text{in}\ \OOO,\\
\int_{\OOO}m(dx)=1.
\end{cases}
\end{align}
As for existence, we construct a triple $(\bar{c}, \bar u, \bar m) \in \mathbb{R} \times C(\OOO) \times \mathcal{P}(\OOO)$ such that $\bar u$ is a constrained viscosity solution of the first equation in \cref{eq:eMFG} for $c=\bar{c}$, $D\bar u$ exists for $\bar m$-a.e. $x \in \OOO$, and $\bar m$ satisfies the second equation of system \cref{eq:eMFG} in the sense of distributions (for the precise definition see \Cref{def:ers}). Moreover, under an extra monotonicity assumption for $F$, we show that $\bar{c}$ is the unique constant for which the system \cref{eq:eMFG} has a solution and $F(\cdot, \bar{m})$ is unique.
Then, using energy estimates for the MFG system, we prove our main result concerning the convergence of $u^{T}/T$: there exists a constant $C \geq 0$ such that
\begin{equation*}
\sup_{t \in [0,T]} \Big\|\frac{u^{T}(t, \cdot)}{T} + \bar{c}\left(1-\frac{t}{T}\right) \Big\|_{\infty, \OOO} \leq \frac{C}{T^{\frac{1}{d+2}}}.
\end{equation*}
For the distribution of players $m^{T}$ we also obtain an asymptotic estimate of the form:
\begin{equation*}
\frac{1}{T}\int_{0}^{T}{\big\| F(\cdot, m^{T}(s))- F(\cdot, \bar m) \big\|_{\infty, \OOO}\ ds} \leq \frac{C}{T^{\frac{1}{d+2}}}
\end{equation*}
for some constant $C \geq 0$.
We conclude this introduction by recalling that asymptotic results for second-order MFG systems on $\mathbb{T}^{d}$ have been applied in \cite{bib:CLLP2, bib:CLLP1} to recover a turnpike property with an exponential rate of convergence. A similar property, possibly with a lower rate, may be expected for first-order MFG systems as well. We believe that the results of this paper could be used for this purpose.
The rest of this paper is organized as follows.
In \Cref{sec:preliminaries}, we introduce the notation and some preliminary results. In \Cref{sec:HJstateconstraints}, we provide some weak KAM type results for Hamilton-Jacobi equations with state constraints. \Cref{sec:EMFG} is devoted to the existence of solutions of \cref{eq:MFG1}. We prove the convergence result for \cref{eq:lab1} in \Cref{sec:convergence}.
\section{Preliminaries}
\label{sec:preliminaries}
\subsection{Notation}
We write below a list of symbols used throughout this paper.
\begin{itemize}
\item Denote by $\mathbb{N}$ the set of positive integers, by $\mathbb{R}^d$ the $d$-dimensional real Euclidean space, by $\langle\cdot,\cdot\rangle$ the Euclidean scalar product, by $|\cdot|$ the usual norm in $\mathbb{R}^d$, and by $B_{R}$ the open ball with center $0$ and radius $R$.
\item Let $\OO\subset \mathbb{R}^d$ be a bounded open set with $C^2$ boundary. $\OOO$ stands for its closure, $\partial \OO$ for its boundary and $\OO^c=\mathbb{R}^d\setminus \OO$ for the complement. For $x\in \partial \OO$, denote by $\nu(x)$ the outward unit normal vector to $\partial \OO$ at $x$.
\item The distance function from $\OOO$ is the function $d_{\OO}:\mathbb{R}^d\to[0,+\infty)$ defined by
$
d_{\OO}(x):=\inf_{y\in\OOO}|x-y|.
$
Define the oriented boundary distance from $\partial \OO$ by $b_{\OO}(x):=d_{\OO}(x)-d_{\OO^c}(x)$.
Since the boundary of $\OO$ is of class $C^2$, $b_{\OO}(\cdot)$ is of class $C^2$ in a neighborhood of $\partial \OO$.
\item Denote by $\pi_1:\OOO\times \mathbb{R}^d\to\OOO$ the canonical projection.
\item Let $\Lambda$ be a real $d\times d$ matrix. Define the norm of $\Lambda$ by
\[
\|\Lambda\|=\sup_{|x|=1,\ x\in\mathbb{R}^d}|\Lambda x|.
\]
\item Let $f$ be a real-valued function on $\mathbb{R}^d$. The set
\[
D^+ f(x)=\left\{p\in\mathbb{R}^d: \limsup_{y\to x}\frac{f(y)-f(x) - \langle p, y-x \rangle}{|y-x|}\leq 0\right\},
\]
is called the superdifferential of $f$ at $x$ (see the example right after this list).
\item Let $A$ be a Lebesgue-measurable subset of $\mathbb{R}^{d}$. Let $1\leq p\leq \infty$.
Denote by $L^p(A)$ the space of Lebesgue-measurable functions $f$ with $\|f\|_{p,A}<\infty$, where
\begin{align*}
& \|f\|_{\infty, A}:=\esssup_{x \in A} |f(x)|,
\\& \|f\|_{p,A}:=\left(\int_{A}|f|^{p}\ dx\right)^{\frac{1}{p}}, \quad 1\leq p<\infty.
\end{align*}
Denote $\|f\|_{\infty,\mathbb{R}^d}$ by $\|f\|_{\infty}$ and $\|f\|_{p,\mathbb{R}^d}$ by $\|f\|_{p}$, for brevity.
\item $C_b(\mathbb{R}^d)$ stands for the function space of bounded uniformly continuous functions on $\mathbb{R}^d$. $C^{2}_{b}(\mathbb{R}^{d})$ stands for the space of bounded functions on $\mathbb{R}^d$ with bounded uniformly continuous first and second derivatives.
$C^k(\mathbb{R}^{d})$ ($k\in\mathbb{N}$) stands for the function space of $k$-times continuously differentiable functions on $\mathbb{R}^d$, and $C^\infty(\mathbb{R}^{d}):=\cap_{k=0}^\infty C^k(\mathbb{R}^{d})$.
$C_c^\infty(\OOO)$ stands for the space of functions $f\in C^\infty(\OOO)$ with $\supp(f)\subset \OO$. Let $a$, $b\in\mathbb{R}$ with $a<b$.
$AC([a,b];\mathbb{R}^d)$ denotes the space of absolutely continuous maps $[a,b]\to \mathbb{R}^d$.
\item For $f \in C^{1}(\mathbb{R}^{d})$, the gradient of $f$ is denoted by $Df=(D_{x_{1}}f, \dots, D_{x_{d}}f)$, where $D_{x_{i}}f=\frac{\partial f}{\partial x_{i}}$, $i=1,2,\dots,d$.
Let $k$ be a nonnegative integer and let $\alpha=(\alpha_1,\dots,\alpha_d)$ be a multiindex of order $k$, i.e., $k=|\alpha|=\alpha_1+\dots+\alpha_d$, where each component $\alpha_i$ is a nonnegative integer. For $f \in C^{k}(\mathbb{R}^{d})$,
define $D^{\alpha}f:= D_{x_{1}}^{\alpha_{1}} \cdots D^{\alpha_{d}}_{x_{d}}f$.
\item Denote by $\mathcal{B}(\OOO)$ the Borel $\sigma$-algebra on $\OOO$, by $\mathcal{P}(\OOO)$ the set of Borel probability measures on $\OOO$, by $\mathcal{P}(\OOO\times\mathbb{R}^d)$ the set of Borel probability measures on $\OOO\times\mathbb{R}^d$. $\mathcal{P}(\OOO)$ and $\mathcal{P}(\OOO\times\mathbb{R}^d)$ are endowed with the weak-$\ast$ topology. One can define a metric on $\mathcal{P}(\OOO)$ by \cref{eq:2-100} below, which induces the weak-$\ast$ topology.
\end{itemize}
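As an example of the superdifferential introduced above, consider $f(x)=-|x|$ on $\mathbb{R}^d$: since
\[
\limsup_{y\to 0}\frac{f(y)-f(0)-\langle p, y\rangle}{|y|}=\limsup_{y\to 0}\left(-1-\left\langle p,\tfrac{y}{|y|}\right\rangle\right)=|p|-1,
\]
we have $D^+f(0)=\overline{B}_1$, whereas $D^+f(x)=\{-x/|x|\}$ at any $x\neq 0$, where $f$ is differentiable.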
\subsection{Measure theory and MFG with state constraints}
Denote by $\mathcal{B}(\mathbb{R}^d)$ the Borel $\sigma$-algebra on $\mathbb{R}^d$ and by $\mathcal{P}(\mathbb{R}^d)$ the space of Borel probability measures on $\mathbb{R}^d$.
The support of a measure $\mu \in \mathcal{P}(\mathbb{R}^d)$, denoted by $\supp(\mu)$, is the closed set defined by
\begin{equation*}
\supp (\mu) := \Big \{x \in \mathbb{R}^d: \mu(V_x)>0\ \text{for each open neighborhood $V_x$ of $x$}\Big\}.
\end{equation*}
We say that a sequence $\{\mu_k\}_{k\in\mathbb{N}}\subset \mathcal{P}(\mathbb{R}^d)$ is weakly-$*$ convergent to $\mu \in \mathcal{P}(\mathbb{R}^d)$, denoted by
$\mu_k \stackrel{w^*}{\longrightarrow}\mu$,
if
\begin{equation*}
\lim_{k\rightarrow \infty} \int_{\mathbb{R}^d} f(x)\,d\mu_k(x)=\int_{\mathbb{R}^d} f(x) \,d\mu(x), \quad \forall f \in C_b(\mathbb{R}^d).
\end{equation*}
For $p\in[1,+\infty)$, the Wasserstein space of order $p$ is defined as
\begin{equation*}
\mathcal{P}_p(\mathbb{R}^d):=\left\{m\in\mathcal{P}(\mathbb{R}^d): \int_{\mathbb{R}^d} |x_0-x|^p\,dm(x) <+\infty\right\},
\end{equation*}
where $x_0 \in \mathbb{R}^d$ is arbitrary.
Given any two measures $m$ and $m^{\prime}$ in $\mathcal{P}_p(\mathbb{R}^d)$, define
\[
\Pi(m,m'):=\Big\{\lambda\in\mathcal{P}(\mathbb{R}^d \times \mathbb{R}^d): \lambda(A\times \mathbb{R}^d)=m(A),\ \lambda(\mathbb{R}^d \times A)=m'(A),\ \forall A\in \mathcal{B}(\mathbb{R}^d)\Big\}.
\]
The Wasserstein distance of order $p$ between $m$ and $m'$ is defined by
\begin{equation*}\label{dis1}
d_p(m,m')=\inf_{\lambda \in\Pi(m,m')}\left(\int_{\mathbb{R}^d\times \mathbb{R}^d}|x-y|^p\,d\lambda(x,y) \right)^{1/p}.
\end{equation*}
The distance $d_1$ is also commonly called the Kantorovich-Rubinstein distance and can be characterized by a useful duality formula (see, for instance, \cite{bib:CV}) as follows
\begin{equation}\label{eq:2-100}
d_1(m,m')=\sup\left\{\int_{\mathbb{R}^d} f(x)\,dm(x)-\int_{\mathbb{R}^d} f(x)\,dm'(x) \ |\ f:\mathbb{R}^d\rightarrow\mathbb{R} \ \ \text{is}\ 1\text{-Lipschitz}\right\},
\end{equation}
for all $m$, $m'\in\mathcal{P}_1(\mathbb{R}^d)$.
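For instance, for Dirac masses one has $d_1(\delta_x,\delta_y)=|x-y|$ for all $x$, $y\in\mathbb{R}^d$: the only transport plan between $\delta_x$ and $\delta_y$ is $\lambda=\delta_{(x,y)}$, and, when $x\neq y$, the supremum in \cref{eq:2-100} is attained at the $1$-Lipschitz function $f(z)=\big\langle z, \frac{x-y}{|x-y|}\big\rangle$.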
We recall some definitions and results for the constrained MFG system
\begin{equation}\label{eq:lab1}
\begin{cases}
\ -\partial _{t} u^{T} + H(x, Du^{T})=F(x, m^{T}(t)) & \text{in} \quad (0,T)\times\OOO, \\ \ \partial _{t}m^{T}-\text{div}\Big(m^{T}V(t,x)\Big)=0 & \text{in} \quad (0,T)\times\OOO, \\ \ m^{T}(0)=m_{0}, \quad u^{T}(T,x)=u^{f}(x), & x\in\OOO,
\end{cases}
\end{equation}
where
\begin{align*}
V(t,x)=
\begin{cases}
D_{p}H(x, Du^{T}(t,x)), & (t,x) \in [0,T] \times (\supp(m^{T}(t)) \cap \OO),
\\
D_{p}H(x, D^{\tau}u^{T}(t,x)+\lambda_{+}(t,x)\nu(x)), & (t,x) \in [0,T] \times (\supp(m^{T}(t)) \cap \partial\OO)
\end{cases}
\end{align*}
and $\lambda_{+}$ is defined in \cite[Proposition 2.5]{bib:CCC2}.
Let
\begin{equation*}
\Gamma=\{\gamma \in AC([0,T]; \mathbb{R}^{d}): \gamma(t) \in \OOO\ \text{for all}\ t \in [0,T] \}.
\end{equation*}
For any $x\in\OOO$, define
\begin{equation*}
\Gamma(x)=\{\gamma \in \Gamma: \gamma(0)=x \}.
\end{equation*}
For any $t \in [0,T]$, denote by $e_{t}: \Gamma \to \OOO$ the evaluation map, defined by
\[
e_{t}(\gamma)=\gamma(t).
\]
For any $t \in [0,T]$ and any $\eta \in \mathcal{P}(\Gamma)$, we define
\[
m^{\eta}_{t}=e_{t} \sharp \eta \in \mathcal{P}(\OOO)
\]
where $e_{t} \sharp \eta$ stands for the image measure (or push-forward) of $\eta$ by $e_{t}$. Thus, for any $\varphi \in C(\overline{\Omega})$
\begin{equation*}
\int_{\overline{\Omega}}{\varphi(x)\ m^{\eta}_{t}(dx)}=\int_{\Gamma}{\varphi(\gamma(t))\ \eta(d\gamma)}.
\end{equation*}
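For example, if $\eta=\delta_{\gamma_0}$ is the Dirac mass at a single curve $\gamma_0\in\Gamma$, then $m^{\eta}_{t}=\delta_{\gamma_0(t)}$ for every $t\in[0,T]$: all the mass travels along the trajectory $\gamma_0$.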
For any fixed $m_{0} \in \mathcal{P}(\OOO)$, denote by $\mathcal{P}_{m_{0}}$ the set of all Borel probability measures $\eta \in \mathcal{P}(\Gamma)$ such that $m^{\eta}_{0}=m_{0}$.
For any $\eta \in \mathcal{P}_{m_{0}}$ define the following functional
\begin{equation}\label{eq:evolutioncv}
J_{\eta}[\gamma]=\int_{0}^{T}{\Big(L(\gamma(s), \dot\gamma(s)) +F(\gamma(s), m^{\eta}_{s}) \Big)\ ds} + u^f(\gamma(T)),\quad \forall \gamma\in\Gamma.
\end{equation}
\begin{definition}[Constrained MFG equilibrium]\label{def:equili}
Let $m_{0} \in \mathcal{P}(\OOO)$. We say that $\eta \in \mathcal{P}_{m_{0}}(\Gamma)$ is a constrained MFG equilibrium for $m_{0}$ if
\begin{equation*}
\supp(\eta) \subset \Gamma^*_{\eta}:=\bigcup_{x \in \OOO} \Gamma^{\eta}(x),
\end{equation*}
where
\begin{equation*}
\Gamma^{\eta}(x)=\left\{\gamma^* \in \Gamma(x): J_{\eta}[\gamma^*]=\min_{\gamma\in\Gamma(x)} J_{\eta}[\gamma] \right\}.
\end{equation*}
\end{definition}
Assume that $L\in C^1(\OOO\times\mathbb{R}^d)$ is convex with respect to the second argument and satisfies the following: there are $C_i>0$, $i=1,2,3,4$, such that for all $(x,v)\in\OOO\times\mathbb{R}^d$,
\[
|D_{v}L(x,v)| \leq C_1(1+|v|),\quad |D_{x}L(x,v)| \leq C_2(1+|v|^{2}),\quad C_3|v|^{2}- C_4 \leq L(x,v).
\]
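A model case satisfying these conditions is $L(x,v)=\frac{1}{2}|v|^{2}+f(x)$ with $f\in C^{1}(\OOO)$: one may take $C_1=1$, $C_2=\|Df\|_{\infty,\OOO}$, $C_3=\frac{1}{2}$ and $C_4=\|f\|_{\infty,\OOO}$.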
Under these assumptions on $L$, and assuming that $F:\OOO\times\mathcal{P}(\OOO)\to\mathbb{R}$ and $u^{f}:\OOO\to\mathbb{R}$ are continuous, it has been proved in \cite[Theorem 3.1]{bib:CC} that there exists at least one constrained MFG equilibrium.
\begin{definition}[Mild solutions of constrained MFG system]\label{def:defmild}
We say that $(u^T,m^T)\in C([0,T]\times\OOO)\times C([0,T];\mathcal{P}(\OOO))$ is a mild solution of the constrained MFG problem in $\OOO$, if there is a constrained MFG equilibrium $\eta\in\mathcal{P}_{m_0}(\Gamma)$ such that
\begin{enumerate}
\item [(i)] $m^T(t)=e_t\sharp \eta$ for all $t\in [0,T]$;
\item [(ii)] $u^T$ is given by
\begin{equation}\label{eq:MFGValuefunction}
u^T(t,x)=\inf_{\gamma\in\Gamma, \gamma(t)=x} \Big\{\int_t^T\big(L(\gamma(s),\dot{\gamma}(s))+F(\gamma(s),m^T(s))\big)ds+u^f(\gamma(T))\Big\},
\end{equation}
for all $(t,x)\in[0,T]\times\OOO$.
\end{enumerate}
\end{definition}
The existence of a mild solution $(u^{T}, m^{T})$ of the constrained MFG system on $[0,T]$ is a direct consequence of the existence of a constrained MFG equilibrium.
In addition, assume that $F$ is strictly monotone, i.e.,
\[
\int_{\OOO}(F(x,m_1)-F(x,m_2))d(m_1-m_2)(x)\geq 0,
\]
for all $m_1$, $m_2\in\mathcal{P}(\OOO)$, and $\int_{\OOO}(F(x,m_1)-F(x,m_2))d(m_1-m_2)(x)=0$ if and only if $F(x,m_1)=F(x,m_2)$ for all $x\in\OOO$. Cannarsa and Capuani \cite{bib:CC} proved that if $(u^T_1,m^T_1)$ and $(u^T_2,m^T_2)$ are mild solutions, then $u^T_1=u^T_2$. Moreover, they provided examples of coupling functions $F$ for which the distribution $m^{T}$ is also unique under the monotonicity assumption.
\subsection{Weak KAM theory on Euclidean space}
In this part we recall some definitions and results from weak KAM theory on Euclidean space. Most of the results are due to Fathi and Maderna \cite{bib:FM} and Contreras \cite{bib:GC}.
\medskip
\noindent $\bullet$ \emph{Tonelli Lagrangians and Hamiltonians}.
Let $L: \mathbb{R}^{d} \times \mathbb{R}^{d} \to \mathbb{R}$ be a $C^{2}$ Lagrangian.
\begin{definition}[Strict Tonelli Lagrangians]\label{def:def2}
$L$ is called a {\it strict Tonelli Lagrangian} if there exist positive constants $C_{i}$ ($i=1,2,3$) such that, for all $(x,v) \in \mathbb{R}^{d} \times \mathbb{R}^{d}$ there hold
\begin{itemize}
\item[(a)] $\frac{I}{C_{1}} \leq D_{vv}^{2}L(x,v) \leq C_{1} I$, where $I$ is the identity matrix;
\item[(b)] $\|D^{2}_{vx}L(x,v)\| \leq C_{2}(1+|v|)$;
\item[(c)] $|L(x,0)|+|D_{x}L(x,0)|+ |D_{v}L(x,0)| \leq C_{3}$.
\end{itemize}
\end{definition}
\begin{remarks}\label{rem:re2.1}
Let $L$ be a strict Tonelli Lagrangian. It is easy to check that there are two positive constants $\alpha$, $\beta$ depending only on $C_{i}$ ($i=1,2,3$) in \Cref{def:def2}, such that
\begin{itemize}
\item[($e$)]$|D_{v}L(x,v)| \leq \alpha(1+|v|)$, \quad $\forall (x,v)\in \mathbb{R}^d\times\mathbb{R}^d$;
\item[($f$)] $|D_{x}L(x,v)| \leq \alpha(1+|v|^{2})$, \quad $\forall (x,v)\in \mathbb{R}^d\times\mathbb{R}^d$;
\item[($g$)]$\frac{1}{4\beta}|v|^{2}- \alpha \leq L(x,v) \leq 4\beta |v|^{2} +\alpha$, \quad $\forall (x,v)\in \mathbb{R}^d\times\mathbb{R}^d$;
\item[($h$)]$\sup\big\{L(x,v): x\in \mathbb{R}^d, |v| \leq R \big\} < +\infty$, \quad $\forall R\geq 0$.
\end{itemize}
\end{remarks}
Define the Hamiltonian $H: \mathbb{R}^{d} \times \mathbb{R}^{d} \to \mathbb{R}$ associated with $L$ by
$$H(x,p)=\sup_{v \in \mathbb{R}^{d}} \Big\{ \big\langle p,v \big\rangle -L(x,v) \Big\}, \quad \forall (x,p) \in \mathbb{R}^{d} \times \mathbb{R}^{d}.$$
It is straightforward to check that if $L$ is a strict Tonelli Lagrangian, then $H$ satisfies the analogues of conditions ($a$), ($b$), and ($c$) in \Cref{def:def2}, with $v$ replaced by $p$. Such a function $H$ is called a strict Tonelli Hamiltonian.
If $L$ is a {\em reversible} Lagrangian, i.e., $L(x,v)=L(x,-v)$ for all $(x,v) \in \mathbb{R}^{d} \times \mathbb{R}^{d}$, then $H(x,p)=H(x,-p)$ for all $(x,p) \in \mathbb{R}^{d} \times \mathbb{R}^{d}$.
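For instance, for the reversible Lagrangian $L(x,v)=\frac{1}{2}|v|^{2}+f(x)$ with $f\in C^{2}_{b}(\mathbb{R}^{d})$, the supremum defining $H$ is attained at $v=p$, so that $H(x,p)=\frac{1}{2}|p|^{2}-f(x)$, which is again reversible; this model case will be used below to illustrate the notions of critical value and Mather set.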
\medskip
{\it We always work with Tonelli Lagrangians and Hamiltonians, unless otherwise stated.}
\medskip
\noindent $\bullet$ \emph{Invariant measures and holonomic measures}.
The Euler-Lagrange equation associated with $L$
\begin{equation}\label{eq:EL} \frac{d}{dt}D_{v}L(x, \dot x)=D_{x}L(x, \dot x), \end{equation}
generates a flow of diffeomorphisms $\phi_{t}^{L}: \mathbb{R}^{d} \times \mathbb{R}^{d} \to \mathbb{R}^{d} \times \mathbb{R}^{d}$, with $t \in \mathbb{R}$, defined by
\begin{equation*}\label{lab11} \phi_{t}^{L}(x_{0},v_{0})=( x(t), \dot x(t)), \end{equation*}
where $x: \mathbb{R} \to \mathbb{R}^{d}$ is the maximal solution of \cref{eq:EL} with initial conditions $x(0)=x_{0}, \ \dot x(0)=v_{0}$. It should be noted that, for any Tonelli Lagrangian $L$, the flow $\phi_{t}^{L}$ is complete \cite[Corollary 2.2]{bib:FM}.
We recall that a Borel probability measure $\mu$ on $\mathbb{R}^{d} \times \mathbb{R}^{d}$ is called $\phi_{t}^{L}$-invariant, if $$\mu(B)=\mu(\phi_{t}^{L}(B)), \quad \forall t \in \mathbb{R}, \quad \forall B \in \mathcal{B}(\mathbb{R}^{d} \times \mathbb{R}^{d}),$$ or, equivalently, $$\int_{\mathbb{R}^{d} \times \mathbb{R}^{d}} {f(\phi_{t}^{L}(x,v))\ d\mu(x,v)}=\int_{\mathbb{R}^{d} \times \mathbb{R}^{d}}{f(x,v)\ d\mu(x,v)}, \quad \forall f \in C^{\infty}_{c}(\mathbb{R}^{d} \times \mathbb{R}^{d}).$$ We denote by $\mathfrak{M}_{L}$ the set of all $\phi_{t}^{L}$-invariant Borel probability measures on $\mathbb{R}^{d} \times \mathbb{R}^{d}$.
Let $C^0_{l}$ be the set of all continuous functions $f:\mathbb{R}^d\times \mathbb{R}^d\to \mathbb{R}$ satisfying
\[
\sup_{(x,v)\in \mathbb{R}^d\times \mathbb{R}^d}\frac{|f(x,v)|}{1+|v|}<+\infty
\]
endowed with the topology induced by the uniform convergence on compact subsets.
Denote by $(C^0_{l})'$ the dual of $C^0_{l}$.
Let $\gamma:[0,T]\to \mathbb{R}^d$ be a closed absolutely continuous curve for some $T>0$. Define a probability measure $\mu_{\gamma}$ on the Borel $\sigma$-algebra of $\mathbb{R}^d\times \mathbb{R}^d$ by
\[
\int_{\mathbb{R}^d\times \mathbb{R}^d} fd\mu_{\gamma}=\frac{1}{T}\int_0^Tf(\gamma(t),\dot{\gamma}(t))dt
\]
for all $f\in C^0_{l}$. Let $\mathcal{K}(\mathbb{R}^d)$ denote the set of such $\mu_{\gamma}$'s. We call $\overline{\mathcal{K}(\mathbb{R}^d)}$ the set of holonomic measures, where $\overline{\mathcal{K}(\mathbb{R}^d)}$ denotes the closure of $\mathcal{K}(\mathbb{R}^d)$ with respect to
the topology induced by the weak convergence on $(C^0_{l})'$. By \cite[2-4.1 Theorem]{bib:Con-b}, we have that
\begin{align}\label{eq:3-40}
\mathfrak{M}_{L}\subseteq \overline{\mathcal{K}(\mathbb{R}^d)}.
\end{align}
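For instance, for any $x\in\mathbb{R}^d$ the constant curve $\gamma\equiv x$ is closed and gives $\mu_{\gamma}=\delta_{(x,0)}$; hence every such Dirac measure is holonomic.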
\medskip
\noindent $\bullet$ \emph{Ma\~n\'e's critical value}.
If $[a,b]$ is a finite interval with $a<b$ and $\gamma:[a,b]\to \mathbb{R}^d$ is an absolutely continuous curve,
we define its $L$-action as
\[
A_{L}(\gamma)=\int_a^bL(\gamma(s),\dot{\gamma}(s))ds.
\]
The critical value of the Lagrangian $L$, which was introduced by Ma\~n\'e in \cite{bib:Man97}, is defined as follows:
\begin{equation}\label{eq:manedef1}
c_L:=\sup\{k\in\mathbb{R}: A_{L+k}(\gamma)<0 \ \text{for some closed absolutely continuous curve}\ \gamma\}.
\end{equation}
Since $\mathbb{R}^{d}$ can be seen as a covering of the torus $\mathbb{T}^{d}$, Ma\~n\'e's critical value has the following representation formula~\cite[Theorem A]{bib:CIPP}:
\begin{equation}\label{eq:lab55}
c_L=\inf_{u \in C^{\infty}(\mathbb{R}^{d})} \sup_{ x \in \mathbb{R}^{d}} H(x, Du(x)).
\end{equation}
By \cite[2-5.2 Theorem]{bib:Con-b}, $c_L$ can be also characterized in the following way:
\begin{equation}\label{eq:3-33}
c_L=-\inf\big\{B_L(\nu): \nu\in \overline{\mathcal{K}(\mathbb{R}^d)}\big\}
\end{equation}
where the action $B_{L}$ is defined as
\begin{equation*}
B_{L}(\nu)=\int_{\mathbb{R}^{d} \times \mathbb{R}^{d}}{L(x,v)\ \nu(dx,dv)}
\end{equation*}
for any $\nu \in \overline{\mathcal{K}(\mathbb{R}^{d})}$.
If $L$ is a reversible Tonelli Lagrangian, and
\[
\argmin_{x\in\mathbb{R}^d}L(x,0)\neq\emptyset,
\]
then for any $x\in \argmin_{y\in\mathbb{R}^d}L(y,0)$, the atomic measure $\delta_{(x,0)}$ supported at $(x,0)$ is a $\phi_{t}^{L}$-invariant probability measure, i.e., $\delta_{(x,0)}\in \mathfrak{M}_{L}$. Note that
\[
B_L(\delta_{(x,0)})\leq B_L(\nu),\quad \forall \nu\in \overline{\mathcal{K}(\mathbb{R}^d)},
\]
which, together with \cref{eq:3-40} and \cref{eq:3-33}, implies that
\begin{equation}\label{eq:3-34}
c_L=-\min\big\{B_L(\nu): \nu\in \mathfrak{M}_L\big\}.
\end{equation}
In view of \cref{eq:3-34}, it is straightforward to see that
\begin{equation}\label{eq:3-35}
c_L=-\min_{x\in\mathbb{R}^d}L(x,0).
\end{equation}
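In the model case $L(x,v)=\frac{1}{2}|v|^{2}+f(x)$ with $f\in C^{2}_{b}(\mathbb{R}^{d})$ attaining its minimum, \cref{eq:3-35} gives $c_L=-\min_{x\in\mathbb{R}^{d}}f(x)$, and the minimum in \cref{eq:3-34} is realized by $\delta_{(x_0,0)}$ for any global minimizer $x_0$ of $f$.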
\medskip
\noindent $\bullet$ \emph{Weak KAM theorem}.
Let us recall definitions of weak KAM solutions and viscosity solutions of the Hamilton-Jacobi equation
\begin{align}\label{eq:hj}\tag{HJ}
H(x,Du)=c,
\end{align}
where $c$ is a real number.
\begin{definition}[Weak KAM solutions]\label{def:def3}
A function $u \in C(\mathbb{R}^{d})$ is called a backward weak KAM solution of equation \cref{eq:hj} with $c=c_L$, if it satisfies the following two conditions:
\begin{itemize}
\item[($i$)] for each continuous and piecewise $C^{1}$ curve $\gamma:[t_{1}, t_{2}] \to \mathbb{R}^{d}$, we have that $$u(\gamma(t_{2}))-u(\gamma(t_{1})) \leq \int_{t_{1}}^{t_{2}}{L(\gamma(s), \dot\gamma(s))ds}+c_L(t_{2}-t_{1});$$
\item[($ii$)] for each $x \in \mathbb{R}^{d}$, there exists a $C^{1}$ curve $\gamma:(-\infty,0] \to \mathbb{R}^{d}$ with $\gamma(0)=x$ such that
\[
u(x)-u(\gamma(t))=\int_{t}^{0}{L(\gamma(s), \dot\gamma(s))ds}-c_Lt, \quad \forall t<0.
\]
\end{itemize}
\end{definition}
\begin{remarks}\label{rem:re1}
Let $u$ be a function on $\mathbb{R}^d$.
A $C^1$ curve $\gamma:[a,b]\to \mathbb{R}^d$ with $a<b$ is said to be $(u,L,c_L)$-calibrated, if it satisfies
\[
u(\gamma(t'))-u(\gamma(t))=\int_{t}^{t'}{L(\gamma(s), \dot\gamma(s))ds}+c_L(t'-t), \quad \forall a\leq t<t'\leq b.
\]
It is not difficult to check that if $u$ satisfies condition ($i$) in \Cref{def:def3}, then the curves appearing in condition ($ii$) of \Cref{def:def3} are necessarily $(u,L,c_L)$-calibrated.
\end{remarks}
\begin{definition}[Viscosity solutions]\label{def:visco}
\begin{itemize}
\item [($i$)] A function $u\in C(\mathbb{R}^{d})$ is called a viscosity subsolution of equation \cref{eq:hj} if for every $\varphi\in C^1(\mathbb{R}^{d})$ at any
local maximum point $x_0$ of $u-\varphi$ on $\mathbb{R}^{d}$ the following holds:
\[
H(x_0,D\varphi(x_0))\leq c;
\]
\item [($ii$)] A function $u\in C(\mathbb{R}^{d})$ is called a viscosity supersolution of equation \cref{eq:hj} if for every $\varphi\in C^1(\mathbb{R}^{d})$ at any
local minimum point $x_0$ of $u-\varphi$ on $\mathbb{R}^{d}$ the following holds:
\[
H(x_0,D\varphi(x_0))\geq c;
\]
\item [($iii$)] $u$ is a viscosity solution of equation \cref{eq:hj} on $\mathbb{R}^{d}$ if it is both a viscosity subsolution and a viscosity supersolution on $\mathbb{R}^{d}$.
\end{itemize}
\end{definition}
In \cite{bib:FM}, Fathi and Maderna proved the existence of backward weak KAM solutions (or, equivalently, viscosity solutions) for $c = c_{L}$.
\section{Hamilton-Jacobi equations with state constraints}
\label{sec:HJstateconstraints}
\subsection{Constrained viscosity solutions.}
Let us recall the notion of constrained viscosity solutions of equation \cref{eq:hj} on $\overline{\Omega}$, see for instance \cite{bib:Soner}.
\begin{definition}[Constrained viscosity solutions]\label{def:scv}
$u\in C(\overline{\Omega})$ is said to be a constrained viscosity solution of \cref{eq:hj} on $\overline{\Omega}$ if it is a subsolution on $\Omega$ and a supersolution on $\overline{\Omega}$.
\end{definition}
Consider the state constraint problem for equation \cref{eq:hj} on $\overline{\Omega}$:
\begin{align}
H(x,Du(x))& \leq c \quad\text{in}\ \ \Omega,\label{eq:3-50}\\
H(x,Du(x))& \geq c \quad\text{on}\ \ \overline{\Omega}.\label{eq:3-51}
\end{align}
Mitake \cite{bib:Mit} showed that there exists a unique constant, denoted by $c_H$, such that problem \cref{eq:3-50}-\cref{eq:3-51} admits solutions. Moreover, $c_H$ can be characterized by
\begin{align}\label{eq:3-55}
c_H=\inf\{c\in\mathbb{R}: \cref{eq:3-50}\ \text{has\ a\ solution}\}=\inf_{\varphi \in W^{1,\infty}(\OO)} \esssup_{x \in \OO} H(x, D\varphi(x)).
\end{align}
See \cite[Theorem 3.3, Theorem 3.4, Remark 2]{bib:Mit} for details.
Furthermore, by the standard comparison principle for viscosity solutions, it is easy to prove that the following representation formula for constrained viscosity solutions holds true.
\begin{proposition}[Representation formula for constrained viscosity solutions]\label{prop:for}
$u \in C(\OOO)$ is a constrained viscosity solution of \cref{eq:hj} on $\overline{\Omega}$ for $c=c_H$ if and only if
\begin{equation}\label{eq:LaxOleinik}
u(x)=\inf_{\gamma\in \mathcal{C}(x;t)} \left\{u(\gamma(0))+\int_{0}^{t}{L(\gamma(s), \dot\gamma(s))\ ds} \right\}+c_Ht, \quad \forall x \in \OOO,\ \forall t>0,
\end{equation}
where $\mathcal{C}(x;t)$ denotes the set of all curves $\gamma\in AC([0,t],\OOO)$ with $\gamma(t)=x$.
\end{proposition}
\begin{remarks}
If $u$ is a constrained viscosity solution of \cref{eq:hj} on $\overline{\Omega}$ for $c=c_H$, then by \Cref{prop:for} and \cite[Theorem 5.2]{bib:Mit} one can deduce that $u\in W^{1,\infty}(\Omega)$. Thus, $u$ is Lipschitz in $\Omega$, since $\Omega$ is open and bounded with $\partial \Omega$ of class $C^2$ (see, for instance, \cite[Chapter 5]{bib:Eva}).
\end{remarks}
\begin{proposition}[Equi-Lipschitz continuity of constrained viscosity solutions]\label{prop:equiliplemma}
Let $u$ be a constrained viscosity solution of \cref{eq:hj} on $\overline{\Omega}$ for $c=c_H$. Then, $u$ is Lipschitz continuous on $\OOO$ with a Lipschitz constant $K_1>0$ depending only on $H$ and $\OO$.
\end{proposition}
\begin{proof}
Recall that $\OO$ is a bounded domain with $C^{2}$ boundary. By \Cref{rem:quasi}--(iii) below, $\OO$ is $C$-quasiconvex for some $C>0$. Thus, for any $x$, $y\in\OOO$, there is an absolutely continuous curve $\gamma:[0,\tau(x,y)]\to\OOO$ with $\gamma(0)=y$, $\gamma(\tau(x,y))=x$, $0<\tau(x,y)\leq C|x-y|$, and $|\dot{\gamma}(t)|\leq 1$ a.e. in $[0,\tau(x,y)]$. In view of \cref{eq:LaxOleinik}, applied with $t=\tau(x,y)$, we deduce that
\[
u(x)-u(y)\leq\int_{0}^{\tau(x,y)}L(\gamma(s), \dot{\gamma}(s))\ ds+c_H\,\tau(x,y).
\]
Hence, we get that
\begin{equation*}
u(x)-u(y)\leq C\cdot\Big(\sup_{x\in\OOO,|v|\leq 1}L(x,v)+c_H\Big)\cdot|x-y|.
\end{equation*}
By exchanging the roles of $x$ and $y$, we get that
\begin{equation*}
|u(x)-u(y)| \leq K_1|x-y|,
\end{equation*}
where $K_1:=C\cdot\Big(\sup_{x\in\OOO,|v|\leq 1}L(x,v)+c_H\Big)$ depends only on $H$ and $\OO$.
\end{proof}
\begin{remarks}\label{rem:quasi}
Let $U\subseteq \mathbb{R}^d$ be a connected open set.
(i) For any $x \in \bar U$ and $C >0$, we say that $y \in \bar U$ is $(x, C)$-reachable in $U$ if there exists a curve $\gamma \in \text{AC}([0,\tau(x,y)]; \bar U)$ for some $\tau(x,y) > 0$ such that $|\dot\gamma(t)| \leq 1$ a.e. in $[0, \tau(x,y)]$, $\gamma(0)=x$, $\gamma(\tau(x,y))=y$ and $\tau(x,y) \leq C|x-y|$. We denote by $\mathcal{R}_{C}(x)$ the set of all points that are $(x,C)$-reachable in $U$. We say that $U$ is $C$-quasiconvex if $\mathcal{R}_{C}(x)= \bar U$ for any $x \in \bar U$.
(ii) $U$ is called a Lipschitz domain, if $\partial U$ is locally Lipschitz, i.e., $\partial U$ can be locally represented as the graph of a Lipschitz function defined on some open ball of $\mathbb{R}^{d-1}$.
(iii) $U$ is $C$-quasiconvex for some $C>0$ whenever $U$ is a bounded Lipschitz domain (see, for instance, Sections 2.5.1 and 2.5.2 in \cite{bib:BB1}). Since $\OO$ is a bounded domain with $C^2$ boundary, it is $C$-quasiconvex for some $C>0$.
\end{remarks}
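For example, every bounded convex open set $U$ is $1$-quasiconvex: given $x$, $y\in\bar U$, the segment from $x$ to $y$ parametrized with unit speed stays in $\bar U$ and reaches $y$ at time $\tau(x,y)=|x-y|$.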
Consider the assumption
\medskip
\noindent {\bf (A1)} $\displaystyle{\argmin_{x \in \mathbb{R}^{d}}}\ L(x,0) \cap \OOO \not= \emptyset$.
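In the model case $L(x,v)=\frac{1}{2}|v|^{2}+f(x)$, assumption {\bf (A1)} amounts to requiring that $f$ attains its global minimum at some point of $\OOO$.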
\begin{proposition}\label{prop:criticalvalues}
Let $H$ be a reversible Tonelli Hamiltonian. Assume {\bf (A1)}. Then, $c_H=c_L$.
\end{proposition}
\begin{proof}
By \cite[Theorem 1.1]{bib:FM}, there exists a global viscosity solution $u_{d}$ of equation \cref{eq:hj} with $c=c_L$.
Since $\OO$ is an open subset of $\mathbb{R}^{d}$, by definition $u_{d}\big|_{\OO}$ is a solution of \cref{eq:3-50} for $c=c_L$. Thus, $c_H \leq c_L$.
Recalling the characterization \cref{eq:3-55}, since every $\varphi\in W^{1,\infty}(\OO)$ is differentiable almost everywhere and $H$ is a reversible Tonelli Hamiltonian, so that $H(x,\cdot)$ is convex and even and thus $H(x,p)\geq H(x,0)$ for every $p\in\mathbb{R}^{d}$, we have that
\begin{equation*}
c_{H}=\inf_{\varphi \in W^{1,\infty}(\OO)} \esssup_{x \in \OO} H(x, D\varphi(x)) \geq \sup_{x \in \OO} H(x,0) = \max_{x \in \OOO} H(x,0).
\end{equation*}
Therefore, by {\bf (A1)} and the fact that
\begin{equation*}
H(x,0)= - \inf_{v \in \mathbb{R}^{d}} L(x,v) \geq -L(x,0)
\end{equation*}
we deduce that
\begin{equation*}
c_{H} \geq -\min_{x \in \OOO} L(x,0) = - \inf_{x \in \mathbb{R}^{d}} L(x,0) = c_{L},
\end{equation*}
where the last equality holds by \cref{eq:3-35}.
\end{proof}
{\it From now on, we assume that $L$ is a reversible Tonelli Lagrangian and denote by $c$ the common value of $c_L$ and $c_H$.}
\begin{remarks}
Compared with the classical weak KAM solutions of \Cref{def:def3}, one can call $u: \overline{\Omega} \to \mathbb{R}$ a constrained weak KAM solution if for any $t_{1} < t_{2}$ and any absolutely continuous curve $\gamma: [t_{1}, t_{2}] \to \overline{\Omega}$ we have
\begin{equation*}
u(\gamma(t_{2}))-u(\gamma(t_{1})) \leq \int_{t_{1}}^{t_{2}}{L(\gamma(s), \dot\gamma(s))\ ds} + c(t_{2}-t_{1})
\end{equation*}
and, for any $x \in \overline{\Omega}$ there exists a $C^{1}$ curve $\gamma_{x}: (-\infty, 0] \to \overline{\Omega}$ with $\gamma_{x}(0)=x$ such that
\begin{equation*}
u(x)-u(\gamma_{x}(t)) = \int_{t}^{0}{L(\gamma_{x}(s), \dot\gamma_{x}(s))\ ds}-ct, \quad t <0.
\end{equation*}
One can easily see, by definition and \Cref{prop:for}, that a constrained weak KAM solution must be a constrained viscosity solution. In order to prove the opposite implication, we would need the $C^{1,1}$ regularity of solutions of the Hamiltonian system associated with a state constrained control problem.
This will be the subject of a forthcoming paper.
\end{remarks}
\subsection{Semiconcavity estimates of constrained viscosity solutions}
Here, we give a semiconcavity estimate for constrained viscosity solutions of \cref{eq:1-1}. Note that a similar result was obtained in \cite[Corollary 3.2]{bib:CCC2} for a general calculus of variations problem under state constraints, with a Tonelli Lagrangian and a regular terminal cost. Such regularity of the data allowed the authors to prove the semiconcavity result using the maximum principle, which is not possible in our context: indeed, by the representation formula \cref{eq:LaxOleinik}, i.e.
\begin{equation*}
u(x)=\inf_{\gamma\in\mathcal{C}(x;t)}\left\{u(\gamma(0))+\int^t_0L(\gamma(s),\dot{\gamma}(s))\ ds\right\}+ct,\quad\forall x\in\overline{\Omega},\ \forall t>0
\end{equation*}
one immediately observes that the terminal cost is not regular enough to apply the maximum principle in \cite[Theorem 3.1]{bib:CCC1}. For these reasons, we prove semiconcavity using a dynamical approach based on the properties of calibrated curves.
Let $\Gamma^t_{x,y}(\overline{\Omega})$ be the set of all absolutely continuous curves $\gamma:[0,t]\to\overline{\Omega}$ such that $\gamma(0)=x$ and $\gamma(t)=y$. For each $x,y\in\overline{\Omega}$ and $t>0$, let
\begin{align*}
A_t(x,y)=A^{\overline{\Omega},L}_t(x,y)=\inf_{\gamma\in\Gamma^t_{x,y}(\overline{\Omega})}\int^t_0L(\gamma(s),\dot{\gamma}(s))\ ds.
\end{align*}
We recall that, since the boundary of $\Omega$ is of class $C^2$, there exists $\rho_0>0$ such that
\begin{equation}\label{eq:rho_0}
b_{\Omega}(\cdot)\in C^2_b\ \text{on}\ \Sigma_{\rho_0}=\{y\in B(x,\rho_0): x\in\partial\Omega\}.
\end{equation}
We now recall a result from \cite{bib:CC}.
\begin{lemma}\label{lem:lem_derivative}
Let $\gamma\in AC([0,T],\mathbb{R}^d)$ and suppose $d_{\Omega}(\gamma(t))<\rho_0$ for all $t\in[0,T]$. Then $d_{\Omega}\circ\gamma\in AC([0,T],\mathbb{R})$ and
\begin{align*}
\frac d{dt}(d_{\Omega}\circ\gamma)(t)=\langle Db_{\Omega}(\gamma(t)),\dot{\gamma}(t)\rangle\mathbf{1}_{\Omega^c}(\gamma(t)),\quad \text{a.e.}\ t\in[0,T].
\end{align*}
\end{lemma}
\begin{proposition}\label{prop:fractional_semiconcavity}
For any $x^*\in\overline{\Omega}$, the functions $A^{\overline{\Omega},L}_t(\cdot,x^*)$ and $A^{\overline{\Omega},L}_t(x^*,\cdot)$ are both locally semiconcave with a fractional modulus. More precisely, there exists $C>0$ such that, if $h\in\mathbb{R}^d$ with $x\pm h\in\overline{\Omega}$, then we have
\begin{align*}
A^{\overline{\Omega},L}_t(x+h,x^*)+A^{\overline{\Omega},L}_t(x-h,x^*)-2A^{\overline{\Omega},L}_t(x,x^*)\leq C|h|^{\frac 32},\quad x\in\overline{\Omega}.
\end{align*}
In particular, for such an $h$, we also have
\begin{align*}
u(x+h)+u(x-h)-2u(x)\leq C|h|^{\frac 32},\quad x\in\overline{\Omega},
\end{align*}
where $u$ is any constrained viscosity solution of \cref{eq:1-1}.
\end{proposition}
\begin{proof}
Here we only study the semiconcavity of the function $A^{\overline{\Omega},L}_t(\cdot,x^*)$ for any $x^*\in\overline{\Omega}$, since $A^{\overline{\Omega},L}_t(x^*,\cdot)=A^{\overline{\Omega},\breve{L}}_t(\cdot,x^*)$, where $\breve{L}(x,v)=L(x,-v)$. We divide the proof into several steps.
\noindent\textbf{I. Projection method.} Fix $x,x^*\in\overline{\Omega}$, $h\in\mathbb{R}^d$ such that $x\pm h\in\overline{\Omega}$. Let $\gamma\in\Gamma^t_{x,x^*}(\overline{\Omega})$ be a minimizer for $A_t(x,x^*)$, and $\varepsilon>0$. For $r\in(0,\varepsilon/2]$, define
\begin{align*}
\gamma_{\pm}(s)=\gamma(s)\pm\left(1-\frac sr\right)_+h,\quad s\in[0,t].
\end{align*}
Recalling \eqref{eq:rho_0}, if $|h|\ll1$, then $d_{\Omega}(\gamma_{\pm}(s))\leq\rho_0$ for $s\in[0,r]$, since
\begin{align*}
d_{\Omega}(\gamma_{\pm}(s))\leq|\gamma_{\pm}(s)-\gamma(s)|\leq\left|\left(1-\frac sr\right)_+h\right|\leq|h|.
\end{align*}
This implies $d_{\Omega}(\gamma_{\pm}(s))\leq\rho_0$ for all $s\in[0,t]$, since $\gamma_{\pm}(s)=\gamma(s)\in\OOO$ for $s\in[r,t]$. Denote by $\widehat{\gamma}_{\pm}$ the projection of $\gamma_{\pm}$ onto $\overline{\Omega}$, that is
\begin{align*}
\widehat{\gamma}_{\pm}(s)=\gamma_{\pm}(s)-d_{\Omega}(\gamma_{\pm}(s))Db_{\Omega}(\gamma_{\pm}(s)),\quad s\in[0,t].
\end{align*}
From our construction of $\widehat{\gamma}_{\pm}$, it is easy to see that $\widehat{\gamma}_{\pm}(0)=x{\pm}h$ and for all $s\in[0,t]$,
\begin{equation}\label{eq:diff_projection_1}
\begin{split}
|\widehat{\gamma}_{\pm}(s)-\gamma(s)|=&\,|\gamma_{\pm}(s)-d_{\Omega}(\gamma_{\pm}(s))Db_{\Omega}(\gamma_{\pm}(s))-\gamma(s)|\\
\leq &\,|h|+d_{\Omega}(\gamma_{\pm}(s))\leq 2|h|.
\end{split}
\end{equation}
Moreover, in view of \Cref{lem:lem_derivative}, we conclude that for almost all $s\in[0,r]$
\begin{equation}
\begin{split}
\dot{\widehat{\gamma}}_{\pm}(s)=&\,\dot{\gamma}_{\pm}(s)-\left\langle Db_{\Omega}(\gamma_{\pm}(s)),\dot{\gamma}_{\pm}(s)\right\rangle Db_{\Omega}(\gamma_{\pm}(s))\mathbf{1}_{\Omega^c}(\gamma_{\pm}(s))\\
&\,\quad -d_{\Omega}(\gamma_{\pm}(s))D^2b_{\Omega}(\gamma_{\pm}(s))\dot{\gamma}_{\pm}(s).
\end{split}
\end{equation}
To proceed with the proof we need estimates for $\int^r_0|\widehat{\gamma}_{+}(s)-\widehat{\gamma}_{-}(s)|^2\ ds$ and $\int^r_0|\dot{\widehat{\gamma}}_{+}(s)-\dot{\widehat{\gamma}}_{-}(s)|^2\ ds$. The first one follows easily from \cref{eq:diff_projection_1}:
\begin{equation}
\begin{split}
\int^r_0|\widehat{\gamma}_{+}(s)-\widehat{\gamma}_{-}(s)|^2\ ds\leq &\,\int^r_0(|\widehat{\gamma}_{+}(s)-\gamma(s)|+|\widehat{\gamma}_{-}(s)-\gamma(s)|)^2\ ds\\
\leq &\,16r|h|^2.
\end{split}
\end{equation}
To obtain the second estimate, notice that
\begin{align*}
&\,\int^r_0|\dot{\widehat{\gamma}}_{+}(s)-\dot{\widehat{\gamma}}_{-}(s)|^2\ ds\\
\leq &\,\int^r_0(|\dot{\widehat{\gamma}}_{+}(s)-\dot{\gamma}_+(s)|+|\dot{\gamma}_+(s)-\dot{\gamma}_-(s)|+|\dot{\widehat{\gamma}}_{-}(s)-\dot{\gamma}_-(s)|)^2\ ds,
\end{align*}
and
\begin{align*}
\int^r_0|\dot{\gamma}_+(s)-\dot{\gamma}_-(s)|^2\ ds\leq\frac{4|h|^2}r.
\end{align*}
It is enough to give the estimate of
\begin{align*}
\int^r_0|\dot{\widehat{\gamma}}_{\pm}(s)-\dot{\gamma}_{\pm}(s)|^2\ ds.
\end{align*}
\noindent\textbf{II. Estimate of $\int^r_0|\dot{\widehat{\gamma}}_{\pm}(s)-\dot{\gamma}_{\pm}(s)|^2\ ds$.} We will only give the estimate for $\int^r_0|\dot{\widehat{\gamma}}_{+}(s)-\dot{\gamma}_{+}(s)|^2\ ds$ since the other is similar. Recalling \Cref{lem:lem_derivative} we conclude that for $s\in[0,r]$,
\begin{align*}
\dot{\widehat{\gamma}}_{+}(s)-\dot{\gamma}_{+}(s)=&\,-\frac d{ds}\left\{d_{\Omega}(\gamma_{+}(s))Db_{\Omega}(\gamma_{+}(s))\right\}\\
=&\,-\langle Db_{\Omega}(\gamma_+(s)),\dot{\gamma}_+(s)\rangle Db_{\Omega}(\gamma_{+}(s))\mathbf{1}_{\Omega^c}(\gamma_+(s))\\
&\,-d_{\Omega}(\gamma_{+}(s))D^2b_{\Omega}(\gamma_{+}(s))\dot{\gamma}_+(s).
\end{align*}
In view of the fact that $D^2b_{\Omega}(x)\,Db_{\Omega}(x)=0$ for all $x\in\Sigma_{\rho_0}$ (which follows by differentiating $|Db_{\Omega}|^{2}\equiv 1$), we have that
\begin{align*}
&\,\int^r_0|\dot{\widehat{\gamma}}_{+}(s)-\dot{\gamma}_{+}(s)|^2\ ds\\
\leq &\,\int^r_0\langle Db_{\Omega}(\gamma_+(s)),\dot{\gamma}_+(s)\rangle^2\mathbf{1}_{\Omega^c}(\gamma_+(s))\ ds+\int^r_0[d_{\Omega}(\gamma_{+}(s))D^2b_{\Omega}(\gamma_{+}(s))\dot{\gamma}_+(s)]^2\ ds\\
=&\,I_1+I_2.
\end{align*}
Recall that $\gamma\in C^{1,1}([0,t];\overline{\Omega})$; hence $|\dot{\gamma}(s)|$ is uniformly bounded by a constant independent of $h$ and $r$. It follows that there exists $C_1>0$ such that
\begin{align*}
I_2\leq\int^r_0|h|^2\cdot|D^2b_{\Omega}(\gamma_{+}(s))|^2\cdot\left|\dot{\gamma}(s)-\frac hr\right|^2\ ds\leq C_1r|h|^2\left(1+\frac {|h|^2}{r^2}+\frac{|h|}{r}\right).
\end{align*}
In view of \Cref{lem:lem_derivative}, we obtain that
\begin{align*}
I_1=&\int^r_0\left\{\frac d{ds}(d_{\Omega}(\gamma_+(s)))\langle Db_{\Omega}(\gamma_+(s)),\dot{\gamma}_+(s)\rangle\right\}\mathbf{1}_{\Omega^c}(\gamma_+(s))\ ds.
\end{align*}
We observe that the set $\{s\in[0,r]: \gamma_+(s)\in\overline{\Omega}^c\}$ is open, and hence it is a countable union of disjoint open intervals $(a_i,b_i)$. Thus
\begin{align*}
I_1=\sum_{i=1}^{\infty}\int^{b_i}_{a_i}\left\{\frac d{ds}(d_{\Omega}(\gamma_+(s)))\langle Db_{\Omega}(\gamma_+(s)),\dot{\gamma}_+(s)\rangle\right\}\ ds.
\end{align*}
Integrating by parts we conclude that
\begin{align*}
I_1=\sum_{i=1}^{\infty}\left\{\frac d{ds}(d_{\Omega}(\gamma_+(s)))d_{\Omega}(\gamma_+(s))\bigg\vert_{a_i}^{b_i}-\int^{b_i}_{a_i}\frac {d^2}{ds^2}(d_{\Omega}(\gamma_+(s)))d_{\Omega}(\gamma_+(s))\ ds\right\}.
\end{align*}
Notice that $\gamma_+(a_i),\gamma_+(b_i)\in\partial\Omega$ for all $i\in\mathbb{N}$, so that $d_{\Omega}(\gamma_+(a_i))=d_{\Omega}(\gamma_+(b_i))=0$. Moreover, there exists $C_2>0$ such that
\begin{align*}
\left|\frac {d^2}{ds^2}(d_{\Omega}(\gamma_+(s)))\right|=\left|\frac d{ds}\left\langle Db_{\Omega}(\gamma_+(s)),\dot{\gamma}(s)-\frac hr\right\rangle\right|\leq C_2
\end{align*}
again because $\gamma\in C^{1,1}([0,t];\overline{\Omega})$. Therefore
\begin{align*}
I_1\leq C_2r|h|.
\end{align*}
Combining the two estimates on $I_1$ and $I_2$, we have that
\begin{align*}
\int^r_0|\dot{\widehat{\gamma}}_{+}(s)-\dot{\gamma}_{+}(s)|^2\ ds\leq C_3(r|h|+r|h|^2+\frac{|h|^4}{r}+|h|^3).
\end{align*}
\noindent\textbf{III. Fractional semiconcavity estimate of $A_t(x,x^*)$.} From the previous estimates we have that
\begin{align*}
&\,\int^r_0|\dot{\widehat{\gamma}}_{+}(s)-\dot{\widehat{\gamma}}_{-}(s)|^2\ ds\\
\leq &\,3\int^r_0|\dot{\widehat{\gamma}}_{+}(s)-\dot{\gamma}_+(s)|^2+|\dot{\gamma}_+(s)-\dot{\gamma}_-(s)|^2+|\dot{\widehat{\gamma}}_{-}(s)-\dot{\gamma}_-(s)|^2\ ds\\
\leq &\, C_4(r|h|+r|h|^2+\frac{|h|^4}{r}+|h|^3+\frac{|h|^2}{r}).
\end{align*}
Recall that $\gamma\in\Gamma^t_{x,x^*}(\overline{\Omega})$ is the minimizer for $A_t(x,x^*)$ fixed above. Thus,
\begin{align*}
&\,A_t(x+h,x^*)+A_t(x-h,x^*)-2A_t(x,x^*)\\
\leq &\,\int^r_0\left\{L(\widehat{\gamma}_+(s),\dot{\widehat{\gamma}}_+(s))+L(\widehat{\gamma}_-(s),\dot{\widehat{\gamma}}_-(s))-2L(\gamma(s),\dot{\gamma}(s))\right\}ds\\
\leq &\,C_5\int^r_0(|\widehat{\gamma}_+(s)-\gamma(s)|^2+|\widehat{\gamma}_-(s)-\gamma(s)|^2+|\dot{\widehat{\gamma}}_+(s)-\dot{\gamma}(s)|^2+|\dot{\widehat{\gamma}}_-(s)-\dot{\gamma}(s)|^2)\ ds.
\end{align*}
Owing to \cref{eq:diff_projection_1}, we have
\begin{align*}
\int^r_0|\widehat{\gamma}_{\pm}(s)-\gamma(s)|^2\ ds\leq 4r|h|^2.
\end{align*}
On the other hand
\begin{align*}
&\,\int^r_0|\dot{\widehat{\gamma}}_+(s)-\dot{\gamma}(s)|^2+|\dot{\widehat{\gamma}}_-(s)-\dot{\gamma}(s)|^2\ ds\\
\leq &\,2\int^r_0|\dot{\widehat{\gamma}}_+(s)-\dot{\gamma}_+(s)|^2+|\dot{\widehat{\gamma}}_-(s)-\dot{\gamma}_-(s)|^2\ ds+C_6\frac{|h|^2}{r}\\
\leq &\,C_7(r|h|+r|h|^2+\frac{|h|^4}{r}+|h|^3+\frac{|h|^2}{r}).
\end{align*}
Therefore, taking $r=|h|^{\frac 12}$,
\begin{align*}
&\,A_t(x+h,x^*)+A_t(x-h,x^*)-2A_t(x,x^*)\\
\leq &\,C_8(r|h|+r|h|^2+\frac{|h|^4}{r}+|h|^3+\frac{|h|^2}{r})\\
\leq &\,C_9|h|^{\frac 32}.
\end{align*}
\noindent\textbf{IV. Fractional semiconcavity estimate for $u$ defined in \cref{eq:LaxOleinik}.} Using the fundamental solution $A_{1}$, i.e., taking $t=1$ in \cref{eq:LaxOleinik}, we have
\begin{align*}
u(x)=\inf_{y\in\overline{\Omega}}\{u(y)+A_1(y,x)\}+c,\quad\forall x\in\overline{\Omega}.
\end{align*}
Thus, the required semiconcavity estimate follows from the relation
\begin{align}\label{eq:semiconc}
u(x+h)+u(x-h)-2u(x)\leq A_1(y^*,x+h)+A_1(y^*,x-h)-2A_1(y^*,x),
\end{align}
where $y^{*}$ is a point at which the infimum above is attained.
\end{proof}
\subsection{Differentiability of constrained viscosity solutions}
Let $u$ be a constrained viscosity solution of \cref{eq:hj} on $\overline{\Omega}$.
Recall that a curve $\gamma:[t_1,t_2]\to \overline{\Omega}$ is said to be $(u,L,c)$-calibrated if it satisfies
\begin{equation*}
u(\gamma(t'_{2}))-u(\gamma(t'_{1})) = \int_{t'_{1}}^{t'_{2}}{L(\gamma(s), \dot\gamma(s))\ ds} + c(t'_{2}-t'_{1}),
\end{equation*}
for any $[t'_{1}, t'_{2}] \subset [t_1,t_2]$.
\begin{proposition}[Differentiability property I]\label{prop:diff}
Let $u$ be a constrained viscosity solution of \cref{eq:hj} on $\overline{\Omega}$. Let $x \in \OO$ be such that there exists a $(u,L,c)$-calibrated curve $\gamma:[-\tau,\tau] \to \OO$ such that $\gamma(0)=x$, for some $\tau > 0$. Then, $u$ is differentiable at $x$.
\end{proposition}
\begin{proposition}[Differentiability property II]\label{prop:tangentialdiff}
Let $u$ be a constrained viscosity solution of \cref{eq:hj} on $\overline{\Omega}$. Let $x \in \partial\OO$ be such that there exists a $(u,L,c)$-calibrated curve $\gamma:[-\tau,\tau] \to \OOO$ with $\gamma(0)=x$, for some $\tau > 0$. Then, for any direction $y$ tangential to $\Omega$ at $x$, the directional derivative of $u$ at $x$ in the direction $y$ exists.
\end{proposition}
We omit the proof of \Cref{prop:diff} here, since it follows by standard arguments, as in the case without the constraint $\OO$; for this we refer to \cite[Theorem 4.11.5]{bib:FA}. Moreover, for the reader's convenience, we prove \Cref{prop:tangentialdiff} in \Cref{sec:appendix}, since it is obtained by combining the arguments of \cite[Theorem 4.11.5]{bib:FA} with the so-called projection method.
Let $x \in \partial\OO$ be such that there exists a $(u,L,c)$-calibrated curve $\gamma:[-\tau,\tau]\to \overline{\Omega}$ with $\gamma(0)=x$ for some $\tau>0$. From \Cref{prop:tangentialdiff}, for any $y$ tangential to $\Omega$ at $x$, we have
\[
\langle D_{v}L(x, \dot{\gamma}(0)),y\rangle=\frac{\partial u}{\partial y}(x).
\]
Thus, one can define the tangential gradient of $u$ at $x$ by
\[
D^{\tau}u(x)=D_{v}L(x, \dot{\gamma}(0))-\langle D_{v}L(x, \dot{\gamma}(0)),\nu(x)
\rangle\nu(x).
\]
Given $x \in \partial\OO$, each $p \in D^{+}u(x)$ can be written as
\[
p=p^{\tau}+p^{\nu},
\]
where $p^{\nu}=\langle p, \nu(x) \rangle \nu(x)$, and $p^{\tau}$ is the tangential component of $p$, i.e., $\langle p^{\tau}, \nu(x) \rangle=0$.
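For instance, if near $x$ the boundary is flat and $\nu(x)=e_{d}$, then $p^{\nu}=(0,\dots,0,p_{d})$ and $p^{\tau}=(p_{1},\dots,p_{d-1},0)$.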
By arguments similar to those in \cite[Proposition 2.5, Proposition 4.3 and Theorem 4.3]{bib:CCC2}, together with \Cref{prop:tangentialdiff}, it is easy to prove the following results: \Cref{prop:lambda}, \Cref{cor:DD} and \Cref{prop:hamiltonianlemma}.
\begin{proposition}\label{prop:lambda}
Let $x \in \partial\OO$ and $u:\OOO \to \mathbb{R}$ be a Lipschitz continuous and semiconcave function. Then,
\begin{equation*}
-\partial^{+}_{-\nu}u(x)=\lambda_{+}(x):=\max\{\lambda_{p}(x): p \in D^{+}u(x)\},
\end{equation*}
where
\begin{equation*}
\lambda_{p}(x)=\max\{\lambda: p^{\tau}+\lambda\nu(x) \in D^{+}u(x)\},\quad \forall p\in D^+u(x)
\end{equation*}
and
\begin{equation*}
\partial^{+}_{-\nu}u(x)=\lim_{\substack{h \to 0^{+} \\ \theta \to -\nu \\ x+h\theta \in \overline{\Omega}}} \frac{u(x+h\theta)- u(x)}{h}
\end{equation*}
denotes the one-sided derivative of $u$ at $x$ in direction $-\nu$.
\end{proposition}
\begin{corollary}\label{cor:DD}
Let $u$ be a constrained viscosity solution of \cref{eq:hj} on $\overline{\Omega}$. Let $x \in \partial\OO$ be a point such that there is a $(u,L,c)$-calibrated curve $\gamma:[-\tau,\tau] \to \OOO$ with $\gamma(0)=x$, for some $\tau > 0$. Then, all $p\in D^+u(x)$ have the same tangential component, i.e.,
\begin{equation*}
\{ p^{\tau} \in \mathbb{R}^{d}: p \in D^{+}u(x)\}=\{ D^{\tau}u(x)\},
\end{equation*}
and
\begin{equation*}
D^{+}u(x)=\big\{ p \in \mathbb{R}^{d}: p=D^{\tau}u(x)+\lambda \nu(x),\ \forall\ \lambda \in (-\infty, \lambda_{+}(x)]\big\}.
\end{equation*}
\end{corollary}
\begin{proposition}\label{prop:hamiltonianlemma}
Let $u$ be a constrained viscosity solution of \cref{eq:hj} on $\overline{\Omega}$. Let $x \in \partial\OO$ be a point such that there is a $(u,L,c)$-calibrated curve $\gamma:[-\tau,\tau] \to \OOO$ with $\gamma(0)=x$, for some $\tau > 0$. Then,
\begin{equation*}
H(x, D^{\tau}u(x)+\lambda_{+}(x)\nu(x))=c.
\end{equation*}
\end{proposition}
\begin{theorem}\label{thm:diff3}
Let $u$ be a constrained viscosity solution of \cref{eq:hj} on $\overline{\Omega}$. Let $x \in \OOO$ and $\gamma:[-\tau,\tau]\to \overline{\Omega}$ be a $(u,L,c)$-calibrated curve with $\gamma(0)=x$ for some $\tau >0$. Then,
\begin{itemize}
\item[($i$)] if $x \in \OO$, then
\begin{equation*}
\dot\gamma(0)=D_{p}H(x, Du(x));
\end{equation*}
\item[($ii$)] if $x \in \partial\OO$, then
\begin{equation*}
\dot\gamma(0)=D_{p}H(x, D^{\tau}u(x)+\lambda_{+}(x)\nu(x)).
\end{equation*}
\end{itemize}
\end{theorem}
\begin{proof}
We first prove ($i$). Since $\gamma$ is a calibrated curve, we have that for any $\eps\in(0,\tau]$
\begin{equation*}
u(\gamma(\eps))-u(x)=\int_{0}^{\eps}{L(\gamma(s), \dot\gamma(s))\ ds}+c\eps.
\end{equation*}
Thus, we deduce that
\begin{equation*}
\frac{u(\gamma(\eps))-u(x)}{\eps}-\frac{1}{\eps}\int_{0}^{\eps}{L(\gamma(s), \dot\gamma(s))\ ds}=c,
\end{equation*}
and passing to the limit as $\eps \to 0$, by \Cref{prop:diff} we get
\begin{equation*}
\langle Du(x), \dot\gamma(0) \rangle-L(x, \dot\gamma(0))=c.
\end{equation*}
Since $u$ is a viscosity solution of equation \cref{eq:hj} in $\Omega$, then $c=H(x,Du(x))$. Thus, we deduce that
\begin{equation*}
\langle Du(x), \dot\gamma(0) \rangle-L(x, \dot\gamma(0))=H(x,Du(x)).
\end{equation*}
By the properties of the Legendre transform, the above equality yields
\begin{equation*}
\dot\gamma(0)=D_{p}H(x, Du(x)).
\end{equation*}
In order to prove ($ii$), proceeding as above by \Cref{prop:tangentialdiff} and \Cref{prop:hamiltonianlemma} we get
\begin{equation*}
\langle D^{\tau}u(x), \dot\gamma(0) \rangle-L(x, \dot\gamma(0))=c=H(x, D^{\tau}u(x)+\lambda_{+}(x)\nu(x)).
\end{equation*}
Hence, we obtain that
\begin{equation*}
\dot\gamma(0)=D_{p}H(x, D^{\tau}u(x)+\lambda_{+}(x)\nu(x)).
\end{equation*}
This completes the proof.
\end{proof}
\subsection{Mather set for reversible Tonelli Lagrangians in $\overline{\Omega}$}
Assume {\bf (A1)}. Set
\[
\tilde{\mathcal{M}}_{\OOO}=\{(x,0)\in\OOO\times\mathbb{R}^d\ |\ L(x,0)=\inf_{y\in\mathbb{R}^d}L(y,0)\}.
\]
It is clear that the set $\tilde{\mathcal{M}}_{\OOO}$ is nonempty under assumption {\bf (A1)} and we call $\tilde{\mathcal{M}}_{\OOO}$ the Mather set associated with the Tonelli Lagrangian $L$. Note that
\[
\inf_{x\in\OOO}L(x,0)=\inf_{x\in\mathbb{R}^d}L(x,0)=\inf_{(x,v)\in\mathbb{R}^d\times\mathbb{R}^d}L(x,v),
\]
since $L$ is reversible. Hence, it is straightforward to check that, for every $x\in \mathcal{M}_{\OOO}:=\pi_1(\tilde{\mathcal{M}}_{\OOO})$, the constant curve at $x$ is a minimizing curve for the action $A_L(\cdot)$. We call $\mathcal{M}_{\OOO}$ the projected Mather set.
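In the model case $L(x,v)=\frac{1}{2}|v|^{2}+f(x)$ with {\bf (A1)} in force, one simply has $\tilde{\mathcal{M}}_{\OOO}=\big(\argmin_{\mathbb{R}^{d}}f\cap\OOO\big)\times\{0\}$ and $\mathcal{M}_{\OOO}=\argmin_{\mathbb{R}^{d}}f\cap\OOO$.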
\begin{definition}[Mather measures]\label{def:def4}
Let $\mu\in \mathcal{P}(\OOO\times\mathbb{R}^d)$. We say that $\mu$ is a Mather measure for a reversible Tonelli Lagrangian $L$, if $\bar{\mu}\in\mathfrak{M}_L$ and $\supp(\mu)\subset\tilde{\mathcal{M}}_{\OOO}$, where $\bar{\mu}$ is defined by $\bar{\mu}(B):=\mu\big(B\cap(\OOO\times\mathbb{R}^d)\big)$ for all $B\in \mathcal{B}(\mathbb{R}^d\times\mathbb{R}^d)$.
\end{definition}
\begin{remarks}\label{rem:mini}
Let $x\in\mathcal{M}_{\OOO}$. Let $u$ be a constrained viscosity solution of \cref{eq:hj} on $\overline{\Omega}$.
\begin{itemize}
\item [(i)] Obviously, the atomic measure $\delta_{(x,0)}$, supported at $(x,0)$, is a Mather measure.
\item [(ii)] Let $\gamma(t)\equiv x$, $t\in\mathbb{R}$.
Note that $u(\gamma(t'))-u(\gamma(t))=0$ for all $t\leq t'$ and that
\begin{align*}
\int_t^{t'}L(\gamma(s),\dot{\gamma}(s))ds+c(t'-t)=\int_t^{t'}L(x,0)ds+c(t'-t)=0,
\end{align*}
where the last equality comes from \cref{eq:3-35}. Hence, the curve $\gamma$ is a $(u,L,c)$-calibrated curve.
\item [(iii)] By \Cref{thm:diff3}, we have that
\begin{align}\label{eq:3-80}
\begin{split}
\dot\gamma(0)=D_{p}H(x, Du(x)),\quad &\text{if}\ x\in\Omega,\\
\dot\gamma(0)=D_{p}H(x, D^{\tau}u(x)+\lambda_{+}(x)\nu(x)),\quad &\text{if}\ x \in \partial\OO.
\end{split}
\end{align}
\end{itemize}
\end{remarks}
\begin{proposition}\label{Lip}
Let $u$ be a constrained viscosity solution of \cref{eq:hj} on $\overline{\Omega}$. The function
\[
W: \mathcal{M}_{\OOO}\to \mathbb{R}^d,\quad x\mapsto W(x)
\]
is Lipschitz with a Lipschitz constant depending only on $H$ and $\OO$, where
\begin{align*}
W(x)=
\begin{cases}
Du(x), & \quad \text{if}\ x \in \OO,
\\
D^{\tau}u(x)+\lambda_{+}(x)\nu(x), & \quad \text{if}\ x \in \partial\OO.
\end{cases}
\end{align*}
\end{proposition}
\begin{proof}
In view of \cref{eq:3-80} applied to the constant curve at $x$ (for which $\dot\gamma(0)=0$), the properties of the Legendre transform give $W(x)=D_{v}L(x,0)$. Hence, for any $x$, $y\in \mathcal{M}_{\OOO}$ we have
\[
|W(x)-W(y)|=\left|\frac{\partial L}{\partial v}(x,0)-\frac{\partial L}{\partial v}(y,0)\right|\leq K_2|x-y|,
\]
where $K_2>0$ is a constant depending only on $H$ and $\OOO$.
\end{proof}
Let $\mu\in \mathcal{P}(\OOO\times\mathbb{R}^d)$ be a minimizing measure, i.e., a Mather measure in the sense of \Cref{def:def4}, and let $\mu_1:={\pi_1}\sharp\mu$.
Then $\mu_1$ is a probability measure on $\OOO$ and $\supp(\mu_1)\subset \mathcal{M}_{\OOO}$.
It is then clear that the following holds.
\begin{proposition}\label{prop:1to1}
The map $\pi_1: \supp(\mu)\to\supp(\mu_1)$ is one-to-one and the inverse is given by $x\mapsto \big(x,D_pH(x,W(x))\big)$, where $W(x)$ is as in Proposition \ref{Lip}.
\end{proposition}
\section{Ergodic MFG with state constraints}
\label{sec:EMFG}
By the results proved so far, the natural candidate for the limit system of the MFG system \cref{eq:lab1} is the following:
\begin{equation}\label{eq:MFG1}
\begin{cases}
H(x, Du)=F(x, m) + \lambda & \text{in} \quad \OOO, \\ \ \text{div}\Big(mV(x)\Big)=0 & \text{in} \quad \OOO, \\ \ \int_{\OOO}{m(dx)}=1
\end{cases}
\end{equation}
where
\begin{align*}
V(x)=
\begin{cases}
D_{p}H(x, Du(x)), & x \in \supp(m) \cap \OO,
\\
D_{p}H(x, D^{\tau}u(x)+\lambda_{+}(x)\nu(x)), & x \in \supp(m) \cap \partial\OO
\end{cases}
\end{align*}
and $\lambda_{+}$ is defined in \Cref{prop:lambda}.
\subsection{Assumptions}
From now on, we suppose that $L$ is a reversible strict Tonelli Lagrangian on $\mathbb{R}^d$. Let $F: \mathbb{R}^d \times \mathcal{P}(\OOO) \to \mathbb{R}$ be a function satisfying the following assumptions:
\begin{itemize}
\item[\textbf{(F1)}] for every measure $m \in \mathcal{P}(\OOO)$ the function $x \mapsto F(x,m)$ is of class $C^{2}_{b}(\mathbb{R}^d)$ and
\begin{equation*}
\mathcal{F}:=\sup_{m \in \mathcal{P}(\OOO)} \sum_{|\alpha|\leq 2} \| D^{\alpha}F(\cdot, m)\|_{\infty} < +\infty,
\end{equation*}
where $\alpha=(\alpha_1,\cdots,\alpha_d)$ and $D^{\alpha}=D^{\alpha_1}_{x_1}\cdots D^{\alpha_d}_{x_d}$;
\item[\textbf{(F2)}] for every $x \in \mathbb{R}^d$ the function $m \mapsto F(x,m)$ is Lipschitz continuous and
$${\rm{Lip}}_{2}(F):=\displaystyle{\sup_{\substack{x\in \mathbb{R}^d\\ m_1,\ m_2 \in \mathcal{P}(\OOO) \\ m_1\neq m_2 } }}\frac{|F(x,m_1)-F(x,m_2)|}{d_{1}(m_1, m_2)} < +\infty;$$
\item[\textbf{(F3)}] there is a constant $C_F>0$ such that for every $m_{1}$, $m_{2} \in \mathcal{P}(\OOO)$,
\begin{equation*}
\int_{\OOO}{(F(x,m_{1})-F(x, m_{2}))\ d(m_{1}-m_{2})} \geq C_F \int_{\OOO}{\left(F(x,m_{1})-F(x,m_{2})\right)^{2}\ dx};
\end{equation*}
\item[\textbf{(A2)}] $\displaystyle{\argmin_{x \in \mathbb{R}^{d}}}\big(L(x,0)+F(x,m)\big)\cap \OOO\neq\emptyset$, $\quad \forall m\in \mathcal{P}(\OOO)$.
\end{itemize}
\medskip\noindent
Note that assumption {\bf (A2)} is the MFG counterpart of assumption {\bf (A1)}: it guarantees, as we will see, that for any measure $m$ the Mather set associated with the Lagrangian $L(x,v)+F(x,m)$ is non-empty.
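For instance, one can check that smoothing couplings of convolution type satisfy {\bf (F1)}--{\bf (F3)}: take $F(x,m)=(\rho\ast\rho\ast m)(x)$ with $\rho\in C^{\infty}_{c}(\mathbb{R}^{d})$ even. Then {\bf (F1)} and {\bf (F2)} follow from the smoothness of $\rho\ast\rho$ and the duality formula \cref{eq:2-100}, while, setting $g=m_{1}-m_{2}$, Plancherel's identity and Young's inequality yield
\[
\int_{\OOO}\big(F(x,m_{1})-F(x,m_{2})\big)\,d(m_{1}-m_{2})=\|\rho\ast g\|_{2}^{2}\geq \frac{1}{\|\rho\|_{1}^{2}}\,\|\rho\ast\rho\ast g\|_{2}^{2}\geq \frac{1}{\|\rho\|_{1}^{2}}\int_{\OOO}\big(F(x,m_{1})-F(x,m_{2})\big)^{2}dx,
\]
so that {\bf (F3)} holds with $C_F=\|\rho\|_{1}^{-2}$. Assumption {\bf (A2)}, instead, is a joint condition on $L$ and $F$ and has to be checked for the specific pair at hand.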
\begin{definition}[Solutions of constrained ergodic MFG system]\label{def:ers}
A triple $$(\bar\lambda, \bar u, \bar m) \in \mathbb{R} \times C(\OOO) \times \mathcal{P}(\OOO)$$ is called a solution of constrained ergodic MFG system \cref{eq:MFG1} if
\begin{itemize}
\item[($i$)] $\bar u$ is a constrained viscosity solution of the first equation of system \cref{eq:MFG1};
\item[($ii$)] $D\bar u$ exists for $\bar m$-a.e.\ $x \in \OOO$;
\item [($iii$)] $\bar m$ is a projected minimizing measure, i.e., there is a minimizing measure $\eta_{\bar m}$ for $L_{\bar m}$ such that $\bar m={\pi_{1}}\sharp \eta_{\bar m}$;
\item[($iv$)] $\bar m$ satisfies the second equation of system \cref{eq:MFG1} in the sense of distributions, that is,
$$ \int_{\OOO}{\big\langle Df(x), V(x) \big\rangle\ d\bar m(x)}=0, \quad \forall f \in C_c^{\infty}(\OOO),
$$
where the vector field $V$ is related to $\bar u$ in the following way: if $x\in\OO\cap\supp(\bar m)$, then $D\bar u(x)$ exists and
\[
V(x)=D_pH(x,D\bar u(x));
\]
if $x\in\partial\OO\cap\supp(\bar m)$, then
\[
V(x)=D_pH(x,D^{\tau}\bar{u}(x)+\lambda_{+}(x)\nu(x)).
\]
\end{itemize}
\end{definition}
We denote by $\mathcal{S}$ the set of solutions of system \cref{eq:MFG1}; \Cref{thm:MR1} below guarantees that this set is nonempty.
\begin{definition}[Mean field Lagrangians and Hamiltonians]\label{def:mfgl}
Let $H$ be the reversible strict Tonelli Hamiltonian associated with $L$.
For any $m\in \mathcal{P}(\OOO)$, define the mean field Lagrangian and Hamiltonian associated with $m$
by
\begin{align}
L_{m}(x,v)&:=L(x,v)+F(x,m),\,\quad (x,v)\in\mathbb{R}^d\times\mathbb{R}^d,\label{eq:lm}\\
H_{m}(x,p)&:=H(x,p)-F(x,m),\quad (x,p)\in\mathbb{R}^d\times\mathbb{R}^d\label{eq:hm}.
\end{align}
\end{definition}
By assumption {\bf (F1)}, it is clear that for any given $m\in \mathcal{P}(\OOO)$, $L_m$ (resp. $H_{m}$) is a reversible strict Tonelli Lagrangian (resp. Hamiltonian). So, in view of {\bf (A2)}, all the results recalled and proved in \Cref{sec:HJstateconstraints} still hold for $L_m$ and $H_m$.
In view of \Cref{prop:criticalvalues}, for any given $m\in \mathcal{P}(\OOO)$, we have $c_{H_{m}}=c_{L_m}$. Denote the common value of $c_{H_{m}}$ and $c_{L_m}$ by $\lambda(m)$.
\begin{lemma}[Lipschitz continuity of the critical value]\label{lem:LEM1}
The function $m\mapsto \lambda(m)$ is Lipschitz continuous on $\mathcal{P}(\OOO)$ with respect to the metric $d_{1}$, where the Lipschitz constant depends on $F$ only.
\end{lemma}
Since the characterization \cref{eq:3-55} holds true, the proof of this result is an adaptation of \cite[Lemma 1]{bib:CCMW}.
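In short: since $H_{m}=H-F(\cdot,m)$, the characterization \cref{eq:3-55} gives $\lambda(m)=\inf_{\varphi\in W^{1,\infty}(\OO)}\esssup_{x\in\OO}\big(H(x,D\varphi(x))-F(x,m)\big)$, whence
\[
|\lambda(m_{1})-\lambda(m_{2})|\leq \|F(\cdot,m_{1})-F(\cdot,m_{2})\|_{\infty}\leq {\rm Lip}_{2}(F)\, d_{1}(m_{1},m_{2})
\]
by assumption {\bf (F2)}.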
\bigskip
\subsection{Existence of solutions of constrained ergodic MFG systems}
We are now in a position to prove $\mathcal{S}\neq\emptyset$.
\begin{theorem}[Existence of solutions of \cref{eq:MFG1}]\label{thm:MR1}
Assume {\bf (F1)}, {\bf (F2)}, and {\bf (A2)}.
\begin{itemize}
\item [($i$)] There exists at least one solution $(c_{H_{\bar m}},\bar u,\bar m)$ of system \cref{eq:MFG1}, i.e., $\mathcal{S}\neq\emptyset$.
\item [($ii$)] Assume, in addition, {\bf (F3)}. Let
$(c_{H_{\bar m_{1}}}, \bar u_{1}, \bar m_{1})$, $(c_{H_{\bar m_{2}}}, \bar u_{2}, \bar m_{2})\in \mathcal{S}$. Then,
\[
F(x,\bar m_{1})= F(x,\bar m_{2}),\quad \forall x\in \OOO\quad \text{and}\quad c_{H_{\bar m_{1}}}=c_{H_{\bar m_{2}}}.
\]
\end{itemize}
\end{theorem}
\begin{remarks}
By ($ii$) in \Cref{thm:MR1}, it is clear that each element of $\mathcal{S}$ has the form $(\bar \lambda,\bar u,\bar m)$, where $\bar m$ is a projected minimizing measure and $\bar\lambda$ denotes the common Ma\~n\'e critical value of $H_{\bar m}$.
\end{remarks}
\proof[Proof of \Cref{thm:MR1}]
The existence result ($i$) follows from an application of the Kakutani fixed point theorem. Indeed, by the arguments of \Cref{sec:HJstateconstraints}, for any $m\in\mathcal{P}(\OOO)$ there is a minimizing measure $\eta_{m}$ associated with $L_m$. Thus, we can define a set-valued map as follows:
$$
\Psi: \mathcal{P}(\OOO) \rightarrow \mathcal{P}(\OOO), \quad m \mapsto \Psi(m)
$$
where
\[
\Psi(m):=\left\{{\pi_{1}}\sharp \eta_{m}:\ \eta_{m}\ \text{is a minimizing measure for}\ L_m \right\}.
\]
Then, a fixed point $\bar{m}$ of $\Psi$ solves the stationary continuity equation in the sense of distributions and, by \cite{bib:Mit}, there exists a constrained viscosity solution associated with $H_{\bar{m}}$. For more details see, for instance, \cite[Theorem 3]{bib:CCMW}.
We now prove ($ii$). Let
$(c_{H_{\bar m_{1}}}, \bar u_{1}, \bar m_{1})$, $(c_{H_{\bar m_{2}}}, \bar u_{2}, \bar m_{2})\in \mathcal{S}$. Given any $T>0$,
define the following sets of curves:
\[
\Gamma:=\{\gamma\in AC([0,T];\mathbb{R}^d): \gamma(t)\in\OOO, \ \forall t\in[0,T]\},
\]
and
\[
\tilde{\mathcal{M}}_{\bar{m}_i}:=\big\{\text{constant curves}\ \gamma:[0,T]\to\OOO,\ t\mapsto x:\ x\in \argmin_{y\in\OOO}L_{\bar{m}_i}(y,0)\big\},\quad i=1,2.
\]
One can define Borel probability measures on $\Gamma$ by
\begin{align*}
\mu_i(\tilde{B})=
\begin{cases}
\bar{m}_i(B), & \quad \tilde{B}\cap \tilde{\mathcal{M}}_{\bar{m}_i}\neq\emptyset,
\\
0, & \quad \text{otherwise},
\end{cases}
\end{align*}
where
\[
B=\{x\in\OOO: \text{the constant curve}\ t\mapsto x\ \text{belongs to}\ \tilde{B}\cap \tilde{\mathcal{M}}_{\bar{m}_i}\}.
\]
By definition, it is straightforward to see that $\supp(\mu_i)\subset \tilde{\mathcal{M}}_{\bar{m}_i}$ and
\begin{align}\label{eq:4-200}
\bar{m}_i=e_t\sharp\mu_i,\quad \forall t\in[0,T].
\end{align}
Given any $x_0\in\supp(\bar{m}_1)$, let $\gamma_1$ denote the constant curve $t\mapsto x_0$; then, for any $t>0$, we have that
\begin{align*}
0=\bar{u}_1(x_0)-\bar{u}_1(\gamma_1(0))=\int_0^t\big(L(\gamma_1,\dot{\gamma}_1)+F(\gamma_1,\bar{m}_1)\big)ds+c_{H_{\bar m_{1}}}t,\\
0=\bar{u}_2(x_0)-\bar{u}_2(\gamma_1(0))\leq\int_0^t\big(L(\gamma_1,\dot{\gamma}_1)+F(\gamma_1,\bar{m}_2)\big)ds+c_{H_{\bar m_{2}}}t,
\end{align*}
which imply that
\[
\int_0^t\big(F(\gamma_1,\bar{m}_1)-F(\gamma_1,\bar{m}_2)\big)ds+(c_{H_{\bar m_{1}}}-c_{H_{\bar m_{2}}})t\leq 0.
\]
By integrating the above inequality on $\Gamma$ with respect to $\mu_1$, we get that
\[
\int_\Gamma\int_0^t\big(F(\gamma_1,\bar{m}_1)-F(\gamma_1,\bar{m}_2)\big)dsd\mu_1+(c_{H_{\bar m_{1}}}-c_{H_{\bar m_{2}}})t\leq 0.
\]
In view of Fubini's theorem and \cref{eq:4-200}, we deduce that
\[
\int_0^t\int_{\OOO}\big(F(x,\bar{m}_1)-F(x,\bar{m}_2)\big)d\bar{m}_1ds+(c_{H_{\bar m_{1}}}-c_{H_{\bar m_{2}}})t\leq 0,
\]
implying that
\[
\int_{\OOO}\big(F(x,\bar{m}_1)-F(x,\bar{m}_2)\big)d\bar{m}_1+(c_{H_{\bar m_{1}}}-c_{H_{\bar m_{2}}})\leq 0.
\]
Exchanging the roles of $\bar{m}_1$ and $\bar{m}_2$, we obtain that
\[
\int_{\OOO}\big(F(x,\bar{m}_2)-F(x,\bar{m}_1)\big)d\bar{m}_2+(c_{H_{\bar m_{2}}}-c_{H_{\bar m_{1}}})\leq 0.
\]
Hence, we get that
\[
\int_{\OOO}\big(F(x,\bar{m}_2)-F(x,\bar{m}_1)\big)d(\bar{m}_2-\bar m_{1})\leq 0.
\]
Recalling assumption ({\bf{F3}}), we deduce that $F(x,\bar{m}_2)=F(x,\bar{m}_1)$ for all $x\in\OOO$ and thus $c_{H_{\bar m_{1}}}=c_{H_{\bar m_{2}}}$.
\qed
\begin{remarks}\label{rem:stress1}
Note that even though the uniqueness result is a consequence of the classical Lasry--Lions monotonicity condition for MFG systems, our proof here differs from the ones in \cite{bib:CCMW} and in \cite{bib:CAR}: indeed, in our setting the stationary continuity equation has different vector fields depending on the mass of the measure in $\OO$ and the mass on $\partial \OO$. This is why we addressed the problem by representing the Mather measures associated with the system through measures supported on the set of calibrated curves.
\end{remarks}
\section{Convergence of mild solutions of the constrained MFG problem}
\label{sec:convergence}
This section is devoted to the long-time behavior of first-order constrained MFG system \cref{eq:lab1}.
We will assume {\bf (F1)}, {\bf (F2)}, {\bf (F3)}, {\bf (A2)}, and the following additional conditions:
\begin{itemize}
\item [\textbf{(U)}] $u^f\in C^1_b(U)$, where $U$ is an open subset of $\mathbb{R}^d$ such that $\OOO\subset U$.
\item [\textbf{(A3)}] the set-valued map $(\mathcal{P}(\OOO), d_{1}) \longrightarrow (\mathbb{R}^{d}, |\cdot|)$ given by $$m \mapsto \argmin_{x \in \OOO} \{L(x,0)+F(x,m)\}$$ has a Lipschitz continuous selection $\xi_{*}$, i.e., a Lipschitz map with $\xi_{*}(m) \in \argmin_{x \in \OOO}\{L(x,0) + F(x,m)\}$; moreover, for any $m \in \mathcal{P}(\OOO)$, $$\min_{x \in \OOO} \left\{L(x,0)+F(x,m) \right\}=0.$$
\end{itemize}
\begin{remarks}\label{rem:re5-100}
Assumptions {\bf (A2)} and {\bf (A3)} imply that $L(x,v)+F(x,m)\geq 0$ for all $(x,v)\in \mathbb{R}^d\times\mathbb{R}^d$ and all $m\in\mathcal{P}(\OOO)$. Moreover, if $(u^{T}, m^{\eta})$ is a mild solution of the MFG system \cref{eq:lab1}, then by assumption {\bf (A3)} there exists a Lipschitz continuous curve $\xi_{*}: [0,T] \to \OOO$ such that $$\xi_{*}(t) \in \argmin_{x \in \OOO}\{L(x,0)+F(x, m^{\eta}_{t})\}$$ and $L(\xi_{*}(t), \dot\xi_{*}(t))+F(\xi_{*}(t), m^{\eta}_{t})=0$ for any $t \in [0,T]$.
\end{remarks}
We now provide two examples of mean field Lagrangians that satisfy assumption {\bf (A3)}.
\begin{example}\em
\begin{enumerate}
\item Let $L_{m}(x,v)=\frac{1}{2}|v|^{2}+f(x)g(m)$ for some continuous functions $f: \OOO \to \mathbb{R}$ and $g: \mathcal{P}(\OOO) \to \mathbb{R}_{+}$. Then, we have that $$\argmin_{x \in \OOO}\{L(x,0)+f(x)g(m)\}=\argmin_{x \in \OOO} f(x)g(m), $$ and the minimizer $\bar{x} \in \displaystyle{\argmin_{x \in \OOO}}\ f(x)g(m)$ is unique and does not depend on $m \in \mathcal{P}(\OOO)$ (assuming $f$ attains its minimum at a single point and $g>0$). Thus, the Lipschitz selection of minimizers of the mean field Lagrangian is the constant one, i.e. $\xi_{*}(m) \equiv \bar{x}$.
\item Let $L_{m}(x,v)= \frac{1}{2}|v|^{2} + \big(f(x) + g(m) \big)^{2}$, where $g: \mathcal{P}(\OOO) \to \mathbb{R}$ is Lipschitz continuous with respect to the $d_{1}$ distance and $f: \OOO \to \mathbb{R}$ is such that $f^{-1}$ exists and is Lipschitz continuous. Thus the minimum is attained where $f(x)=-g(m)$ and, by the assumptions on $f$ and $g$, the minimizer depends on $m \in \mathcal{P}(\OOO)$ in a Lipschitz continuous way (see the explicit computation after this example).
\end{enumerate}
\end{example}
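In the second example the Lipschitz selection can be written explicitly: under the stated assumptions, and provided $-g(m)$ belongs to the range of $f$,
\[
\xi_{*}(m)=f^{-1}(-g(m)),\qquad
|\xi_{*}(m_{1})-\xi_{*}(m_{2})|\ \leq\ \mathrm{Lip}(f^{-1})\,\mathrm{Lip}(g)\, d_{1}(m_{1},m_{2}),
\]
and $\min_{x\in\OOO}\{L(x,0)+F(x,m)\}=\big(f(\xi_{*}(m))+g(m)\big)^{2}=0$, so both requirements of {\bf (A3)} are met.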
\subsection{Convergence of mild solutions}
In order to obtain the convergence result for mild solutions of system \cref{eq:lab1}, we first prove two preliminary results.
\begin{lemma}[Energy estimate]\label{lem:energyestimate}
There exists a constant $\bar\kappa \geq 0$ such that for any mild solution $(u^{T}, m^{\eta}_t)$ of constrained MFG system \cref{eq:lab1} associated with a constrained MFG equilibrium $\eta \in \mathcal{P}_{m_{0}}(\Gamma)$, and any solution $(\bar{u}, \bar\lambda, \bar{m})$ of constrained ergodic MFG system \cref{eq:MFG1}, there holds\begin{equation*}
\int_{0}^{T}\int_{\OOO}{\Big(F(x, m^{\eta}_{t})-F(x, \bar{m}) \Big)\ \big(m^{\eta}_{t}(dx)-\bar{m}(dx)\big)dt}\leq \bar\kappa,
\end{equation*}
where $\bar \kappa$ depends only on $L$, $F$ and $\OO$.
\end{lemma}
\begin{proof}
As we did in the proof of \Cref{thm:MR1} (ii), one can define a Borel probability measure on $\Gamma$ by
\begin{align*}
\bar\eta(\tilde{B})=
\begin{cases}
\bar{m}(B), & \quad \tilde{B}\cap \tilde{\mathcal{M}}_{\bar{m}}\neq\emptyset,
\\
0, & \quad \text{otherwise},
\end{cases}
\end{align*}
where
\[
B=\{x\in\OOO: \text{the constant curve}\ t\mapsto x\ \text{belongs to}\ \tilde{B}\cap \tilde{\mathcal{M}}_{\bar{m}}\},
\]
and
\[
\tilde{\mathcal{M}}_{\bar{m}}:=\big\{\text{constant curves}\ \gamma:[0,T]\to\OOO,\ t\mapsto x:\ x\in \argmin_{y\in\OOO}L_{\bar{m}}(y,0)\big\}.
\]
By definition, it is straightforward to check that $\supp(\bar\eta)\subset \tilde{\mathcal{M}}_{\bar{m}}$ and
\begin{align}\label{eq:4-270}
\bar{m}=e_t\sharp\bar\eta,\quad \forall t\in[0,T].
\end{align}
Note that
\begin{align*}
& \int_{0}^{T}\int_{\OOO}{\Big(F(x,m^{\eta}_{t})-F(x, \bar{m}) \Big)\ (m^{\eta}_{t}(dx)-\bar{m}(dx))dt}
\\
=& \underbrace{\int_{0}^{T}\int_{\Gamma^{*}_{\eta}}{\Big(F(\gamma(t), m^{\eta}_{t})-F(\gamma(t), \bar{m})\Big)\ d\eta(\gamma)dt}}_{\bf A}
\\
-& \underbrace{\int_{0}^{T}\int_{\tilde{\mathcal{M}}_{\bar{m}}}{\Big(F(\bar\gamma(t), m^{\eta}_{t})-F(\bar\gamma(t), \bar{m}) \Big)\ d\bar\eta(\bar\gamma)dt}}_{\bf B},
\end{align*}
where $\Gamma^{*}_{\eta}$ is as in \Cref{def:equili}. First, we consider term {\bf A}:
\begin{align*}
\textbf{A}=&\int_{0}^{T}\int_{\Gamma^{*}_{\eta}}{\Big(F(\gamma(t), m^{\eta}_{t})-F(\gamma(t), \bar{m}) \Big)\ d\eta(\gamma)dt}\\
= & \int_{\Gamma^{*}_{\eta}}\int_{0}^{T}{\Big(L(\gamma(t), \dot\gamma(t))+ F(\gamma(t), m^{\eta}_{t})\Big)\ dt d\eta(\gamma)}
\\
- & \int_{\Gamma^{*}_{\eta}}\int_{0}^{T}{\Big(L(\gamma(t), \dot\gamma(t))+F(\gamma(t), \bar{m}) \Big) dt d\eta(\gamma)}.
\end{align*}
Since $\eta$ is a constrained MFG equilibrium associated with $m_0$, any curve $\gamma\in\supp(\eta)$ satisfies the following equality
\[
u^{T}(0,\gamma(0))-u^{f}(\gamma(T))=\int_{0}^{T}\Big(L(\gamma(t), \dot\gamma(t))+ F(\gamma(t), m^{\eta}_{t})\Big)\ dt.
\]
In view of \cref{eq:LaxOleinik} with $L=L_{\bar m}$, one can deduce that
\[
\bar u(\gamma(T))-\bar u(\gamma(0))\leq \int_{0}^{T}\Big(L(\gamma(t), \dot\gamma(t))+F(\gamma(t),\bar{m}) \Big)dt+\lambda(\bar{m})T.
\]
Hence, we have that
\begin{align*}
\textbf{A}\leq \int_{\Gamma^{*}_{\eta}}{\Big(u^{T}(0,\gamma(0))-u^{f}(\gamma(T)) \Big)\ d\eta(\gamma)}
+ \int_{\Gamma^{*}_{\eta}}{\Big(\bar u(\gamma(0))-\bar u(\gamma(T)) \Big)\ d\eta(\gamma)}+\lambda(\bar{m})T.
\end{align*}
By \Cref{prop:equiliplemma} we estimate the second term of the right-hand side of the above inequality as follows:
\begin{equation*}
\int_{\Gamma^{*}_{\eta}}{\Big(\bar{u}(\gamma(0))-\bar{u}(\gamma(T)) \Big)\ d\eta(\gamma)} \leq K_{2}\cdot\int_{\Gamma^{*}_{\eta}}{|\gamma(0)-\gamma(T)|\ d\eta(\gamma)} \leq K_{2}\cdot\text{diam}(\OOO),
\end{equation*}
where $K_{2}:=C\cdot\Big( \sup_{x\in\OOO, |v|\leq 1}L(x,v)+\mathcal{F}+\sup_{m\in \mathcal{P}(\OOO)}\lambda(m)\Big)$ comes from \Cref{prop:equiliplemma}. In view of \Cref{lem:LEM1} and the compactness of $\mathcal{P}(\OOO)$, $K_2$ is well-defined and depends only on $L$, $F$ and $\OO$.
For the first term, since $u^{f}$ is bounded on $\OOO$, we only need to estimate $u^{T}(0, \gamma(0))$ where $\gamma \in \supp(\eta)$.
Take $\xi_{*}$ as in assumption {\bf (A3)} and \Cref{rem:re5-100}.
Since $\Omega$ is $C$-quasiconvex, there is $\beta \in \Gamma$ such that $\beta(0)=\gamma(0)$, $\beta(\tau(\gamma(0), \xi_{*}(0)))=\xi_{*}(0)$ and $|\dot\beta(t)| \leq 1$ a.e. in $t \in [0, \tau(\gamma(0), \xi_{*}(0))]$, where $\tau(\gamma(0), \xi_{*}(0))\leq C|\gamma(0)-\xi_{*}(0)|$.
Define a curve $\xi\in\Gamma$ as follows:
\begin{align*}
&\text{if}\ T<\tau(\gamma(0), \xi_{*}(0)),\quad \xi(t)=\beta(t),\ \ \ \qquad t\in[0,T];\\
&\text{if}\ T\geq \tau(\gamma(0), \xi_{*}(0)),\quad \xi(t)=
\begin{cases}
\beta(t), & \quad t \in [0, \tau(\gamma(0), \xi_{*}(0))],
\\
\xi_{*}(t), & \quad t \in (\tau(\gamma(0), \xi_{*}(0)), T].
\end{cases}
\end{align*}
If $T\geq \tau(\gamma(0),\xi_{*}(0))$, since $(u^T,m^{\eta}_t)$ is a mild solution of \cref{eq:lab1}, we deduce that
\begin{align*}
& u^{T}(0,\gamma(0)) \leq \int_{0}^{\tau(\gamma(0), \xi_{*}(0))}{\Big(L(\beta(t), \dot\beta(t)) + F(\beta(t), m^{\eta}_t) \Big)\ dt}
\\
+ & \int_{\tau(\gamma(0), \xi_{*}(0))}^{T}{\Big(L(\xi_{*}(t), \dot\xi_{*}(t)) + F(\xi_{*}(t), m^{\eta}_t) \Big)\ dt}+u^f(\xi_{*}(T)).
\end{align*}
Thus, by {\bf (A3)} we have that the second integral of the right-hand side of the above inequality is zero. Hence,
\begin{align*}
u^{T}(0,\gamma(0)) &\leq \int_{0}^{\tau(\gamma(0), \xi_{*}(0))}{\Big(L(\beta(t), \dot\beta(t))+ F(\beta(t), m^{\eta}_t) \Big)\ dt}+u^f(\xi_{*}(T))\\
&\leq \Big( \max_{\substack{ y \in \OOO \\ |v| \leq 1}}\big(L(y,v)+\mathcal{F}\big) \Big)\cdot\tau(\gamma(0), \xi_{*}(0))+\|u^f\|_\infty
\\
&\leq \Big( \max_{\substack{ y \in \OOO \\ |v| \leq 1}}\big(L(y,v)+\mathcal{F}\big) \Big)\cdot C\cdot \text{diam}(\OOO)+\|u^f\|_\infty.
\end{align*}
We can conclude that
\begin{equation}\label{eq:ineA}
{\bf A} \leq K_{2}\cdot\text{diam}(\OOO) + \|u^{f} \|_{\infty} + \Big( \max_{\substack{ y \in \OOO \\ |v| \leq 1}}\big(L(y,v)+\mathcal{F} \big)\Big)\cdot C\cdot \text{diam}(\OOO) + \lambda(\bar{m})T.
\end{equation}
If $T<\tau(\gamma(0),\xi_{*}(0))$, in view of \Cref{rem:re5-100}, we get that
\begin{align*}
u^{T}(0,\gamma(0)) &\leq \int_{0}^{T}{\Big(L(\beta(t), \dot\beta(t)) + F(\beta(t), m^{\eta}_t) \Big)\ dt} +u^f(\xi(T))\\
&\leq
\int_{0}^{\tau(\gamma(0), \xi_{*}(0))}{\Big(L(\beta(t), \dot\beta(t)) + F(\beta(t), m^{\eta}_t) \Big)\ dt}+u^f(\xi(T)).
\end{align*}
Thus, we can get \cref{eq:ineA} again.
Now we estimate term {\bf B}. Note that
\begin{align*}
\textbf{B}=& \int_{0}^{T}\int_{\tilde{\mathcal{M}}_{\bar{m}}}{\Big(F(\bar{\gamma}(t), m^{\eta}_{t})-F(\bar{\gamma}(t), \bar{m}) \Big)\ d\bar\eta(\bar{\gamma})dt}
\\
=& \int_{\tilde{\mathcal{M}}_{\bar{m}}}\int_{0}^{T}{\Big(L(\bar\gamma(t), \dot{\bar{\gamma}}(t))+F(\bar\gamma(t), m^{\eta}_t) \Big)\ dtd\bar\eta(\bar\gamma)}
\\
- & \int_{\tilde{\mathcal{M}}_{\bar{m}}}\int_{0}^{T}{\Big(L(\bar\gamma(t), \dot{\bar{\gamma}}(t))+F(\bar\gamma(t), \bar{m}) \Big)\ dtd\bar\eta(\bar\gamma)}
\\
\geq & \int_{\tilde{\mathcal{M}}_{\bar{m}}}{\Big(u^{T}(0, \bar\gamma(0))-u^{f}(\bar\gamma(T)) \Big)\ d\bar\eta(\bar\gamma)}
\\
+ & \int_{\tilde{\mathcal{M}}_{\bar{m}}}{\Big(\bar{u}(\bar\gamma(0))-\bar{u}(\bar\gamma(T)) \Big)\ d\bar\eta(\bar\gamma)}+\lambda(\bar{m})T.
\end{align*}
By \Cref{rem:re5-100}, we obtain that
\begin{equation*}
\int_{\tilde{\mathcal{M}}_{\bar{m}}}{\Big(u^{T}(0, \bar\gamma(0))-u^{f}(\bar\gamma(T)) \Big)\ d\bar\eta(\bar\gamma)} \geq - 2\| u^{f} \|_{\infty},
\end{equation*}
and since $\bar\gamma \in \supp(\bar\eta)$ is a constant curve we deduce that
\begin{equation*}
\int_{\tilde{\mathcal{M}}_{\bar{m}}}{\Big(\bar{u}(\bar\gamma(0))-\bar{u}(\bar\gamma(T)) \Big)\ d\bar\eta(\bar\gamma)}=0.
\end{equation*}
Hence, we have that
\begin{equation}\label{eq:ineB}
-\textbf{B} \leq 2\| u^{f}\|_{\infty}-\lambda(\bar{m})T.
\end{equation}
Therefore, combining \cref{eq:ineA} and \cref{eq:ineB} we conclude that
\begin{align*}
& \int_{0}^{T}\int_{\OOO}{\Big(F(x, m^{\eta}_{t})-F(x, \bar{m}) \Big)\ \big(m^{\eta}_{t}(dx)-\bar{m}(dx)\big)dt}
\\
\leq & 3 \|u^{f} \|_{\infty} + \left( C\cdot\max_{\substack{ y \in \OOO \\ |v| \leq 1}}\big(L(y,v)+\mathcal{F}\big)+ K_{2} \right)\cdot \text{diam}(\OOO).
\end{align*}
\end{proof}
\begin{lemma}\label{lem:leA}
Let $f: \OOO \to \mathbb{R}$ be a Lipschitz continuous function with Lipschitz constant $\text{Lip}[f] \leq M$ for some constant $M \geq 0$. Then, there exists a constant $C(d,M)\geq 0$ such that
\begin{equation*}
\| f \|_{\infty} \leq C(d,M)\|f\|_{2, \Omega}^{\frac{2}{2+d}}.
\end{equation*}
\end{lemma}
\begin{proof}
Fix $x_{0} \in \partial\OO$ and a radius $r \geq 0$. We divide the proof into two parts: first, we assume that $\OOO$ coincides with the half-ball centered at $x_{0}$ with radius $r$ contained in $\{x \in \mathbb{R}^{d}: x_{d} \geq 0\}$, with $x_{0} \in \{x \in \mathbb{R}^{d}: x_{d}=0\}$; then, we remove this constraint and prove the result for a general domain $\OO$.
\underline{Part I}: We denote by $B^{+}$ the set $\overline{B}_{r}(x_{0}) \cap \{x \in \mathbb{R}^{d}: x_{d} \geq 0\}$ and by $B^{-}$ the reflected half-ball $\overline{B}_{r}(x_{0}) \cap \{x \in \mathbb{R}^{d}: x_{d} \leq 0\}$.
Let $\tilde{f}$ denote the following extension of $f$ in $\overline{B}_{r}(x_{0})$:
\begin{align*}
\tilde{f}(x)=
\begin{cases}
f(x), & \quad \text{if}\ x \in B^{+}
\\
f(x_{1}, \dots, x_{d-1}, -x_{d}), & \quad \text{if}\ x \in B^{-}.
\end{cases}
\end{align*}
Let $\chi_{r}$ denote a cut-off function such that $\chi_{r}(x)=1$ for $x \in B_{r}(x_{0})$, $\chi_{r}(x)=0$ for $x \in \mathbb{R}^{d} \backslash B_{2r}(x_{0})$ and $0 \leq \chi_{r}(x) \leq 1$ for $x \in B_{2r}(x_{0}) \backslash B_{r}(x_{0})$, and let $\tilde{f}_{r}$ be the extension of $\tilde{f}$ to $\mathbb{R}^{d}$, i.e. $\tilde{f}_{r}(x):= \tilde{f}(x)\,\chi_{r}(x)$. Moreover, for any $\delta > 0$ we consider a cover of $\overline{B}_{r}(x_{0})$ through cubes of side length $\delta$, denoted by $Q_{\delta}$.
Then, by construction we have that for any cube $Q_{\delta}$
\begin{equation*}
\| \tilde{f}_{r}\|_{2, Q_{\delta}} \leq C(\delta) \| f\|_{2, B^{+}},
\end{equation*}
for some constant $C(\delta) \geq 0$. Therefore, applying Lemma 4 in \cite{bib:CCMW} on the cubes of the cover, we get
\begin{equation}\label{eq:est1}
\| f \|_{\infty, B^{+}} \leq \| \tilde{f}_{r} \|_{\infty, Q_{\delta}} \leq C(\delta, M) \| \tilde{f}_{r}\|_{2, Q_\delta}^{\frac{2}{d+2}} \leq C(d, M) \| \tilde{f}_{r}\|_{2, B^{+}}^{\frac{2}{d+2}}.
\end{equation}
Thus, recalling that by construction $B^{+} \equiv \OOO$ (so that $\tilde{f}_{r}=f$ on $\OOO$), we obtain from \cref{eq:est1}
\begin{equation*}
\| f \|_{\infty, \OOO} \leq C(d, M) \| f\|_{2, \OOO}^{\frac{2}{d+2}}.
\end{equation*}
\underline{Part II}:
Let $x_{0} \in \partial\OO$ be such that $\OO$ is not flat in a neighborhood of $x_{0}$, so that we are no longer in the situation of Part I. Then, we can find a $C^{1}$ mapping $\Phi$, with inverse $\Psi$, such that, changing the coordinate system according to the map $\Phi$, the image $\OO^{\prime}:=\Phi(\OO)$ is flat in a neighborhood of $\Phi(x_{0})$.
Proceeding similarly as in Part I, we define
\begin{equation*}
B^{+}=\overline{B}_{r}(x_{0}) \cap \{ x \in \mathbb{R}^{d}: x_{d} \geq 0\} \subset \overline{\Omega}^{\prime}
\end{equation*}
and
\begin{equation*}
B^{-}=\overline{B}_{r}(x_{0}) \cap \{x \in \mathbb{R}^{d}: x_{d} \leq 0\} \subset \mathbb{R}^{d}\backslash \Omega^{\prime}.
\end{equation*}
Thus, if we set $y=\Phi(x)$, we have that $x=\Psi(y)$, and if we define $f^{\prime}(y)=f(\Psi(y))$ then by Part I we get
\begin{equation*}
\| f^{\prime}\|_{\infty, \Omega^{\prime}} \leq C(d, M) \|f^{\prime} \|_{2, \Omega^{\prime}}^{\frac{2}{2+d}}
\end{equation*}
which implies, returning to the original coordinates, that
\begin{equation}\label{eq:startinginequality}
\| f\|_{\infty, \Omega} \leq C(d, M) \|f\|_{2, \Omega}^{\frac{2}{2+d}}
\end{equation}
for a general domain $\OO$ not necessarily flat in a neighborhood of $x_{0} \in \partial\OO$.
Since $\OO$ is compact, there exist finitely many points $x^{0}_{i} \in \partial\OO$, neighborhoods $W_{i}$ of $x^{0}_{i}$, and functions $f^{\prime}_{i}$ defined as before, for $i =1, \dots, N$, such that, for a suitable open set $W_{0} \subset \OO$ with $f^{\prime}_{0}:=f$ on $W_{0}$, we have $\OOO \subset \bigcup_{i=0}^{N} W_{i}$. Furthermore, let $\{\zeta_{i}\}_{i=0, \dots, N}$ be a partition of unity associated with $\{ W_{i}\}_{i =0, \dots, N}$ and define $\bar{f}(x)=\sum_{i=0}^{N}{\zeta_{i}f^{\prime}_{i}(x)}$. Then, by \cref{eq:startinginequality} applied to $\bar{f}$ we get the conclusion.
\end{proof}
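\begin{remarks}\em
The exponent $\frac{2}{2+d}$ in \Cref{lem:leA} can be read off from the following standard scaling heuristic, which we include only for intuition and under the simplifying assumption that $\OOO$ contains an interior cone at every point (as granted, e.g., by quasiconvexity of $\OO$): if $\|f\|_{\infty}=a$ is attained at $\bar{x}\in\OOO$ and $\mathrm{Lip}[f]\leq M$, then $|f|\geq a/2$ on $\OOO\cap B_{a/(2M)}(\bar{x})$, a set of measure at least $c_{d}\,(a/2M)^{d}$ for $a/(2M)$ small. Hence
\[
\|f\|_{2,\OO}^{2}\ \geq\ \frac{a^{2}}{4}\,c_{d}\Big(\frac{a}{2M}\Big)^{d},
\qquad\text{i.e.}\qquad
\|f\|_{\infty}\ \leq\ C(d,M)\,\|f\|_{2,\OO}^{\frac{2}{2+d}}.
\]
\end{remarks}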
\begin{theorem}[Convergence of mild solutions of \cref{eq:lab1}]\label{thm:MR2}
For each $T>1$, let $(u^{T}, m^{\eta}_t)$ be a mild solution of \cref{eq:lab1}.
Let $(\bar\lambda, \bar u, \bar m)\in\mathcal{S}$. Then, there exists a positive constant $C'$ such that
\begin{equation}\label{eq:lab38}
\sup_{t \in [0,T]} \Big\|\frac{u^{T}(t, \cdot)}{T} + \bar\lambda\left(1-\frac{t}{T}\right) \Big\|_{\infty, \OOO} \leq \frac{C'}{T^{\frac{1}{d+2}}},
\end{equation}
\begin{equation}\label{eq:lab39}
\frac{1}{T}\int_{0}^{T}{\big\| F(\cdot, m^{\eta}_s)- F(\cdot, \bar m) \big\|_{\infty, \OOO} ds} \leq \frac{C'}{T^{\frac{1}{d+2}}},
\end{equation}
where $C'$ depends only on $L$, $F$, $u^{f}$ and $\OO$.
\end{theorem}
\proof
Let $\bar{v}(x)= \bar{u}(x)-\bar{u}(0)$ and define
\begin{equation*}
w(t,x):=\bar{v}(x)-\bar\lambda (T-t), \quad \forall (x,t)\in \OOO\times[0,T].
\end{equation*}
Since $(\bar\lambda, \bar u, \bar m)$ is a solution of \cref{eq:MFG1},
one can deduce that $w$ is a constrained viscosity solution of the Cauchy problem
\begin{align*}
\begin{split}
&\begin{cases}
-\partial_{t} w + H(x, Dw)=F(x, \bar m) & \text{in}\ (0,T)\times \OOO, \\
w(T,x)=\bar v(x) & \text{in}\ \OOO.
\end{cases}
\end{split}
\end{align*}
So, $w(t,x)$ can be represented as the value function of the following minimization problem
\begin{equation}\label{eq:ww}
w(t,x)= \inf_{\gamma \in \Gamma_{t,T}(x)} \left\{\int_{t}^{T}{L_{\bar m}\left(\gamma(s), \dot\gamma(s)\right)\ ds} + \bar v(\gamma(T))\right\}, \quad \forall (x,t) \in \OOO \times [0,T].
\end{equation}
Since $(u^{T}, m^{\eta}_t)$ is a mild solution of \cref{eq:lab1}, in view of \cref{eq:MFGValuefunction} we get that
\begin{equation}\label{eq:www}
u^T(t,x)= \inf_{\gamma \in \Gamma_{t,T}(x)} \left\{\int_{t}^{T}{L_{m^{\eta}_s}\left(\gamma(s), \dot\gamma(s)\right)\ ds} + u^f(\gamma(T))\right\}, \quad \forall (x,t) \in \OOO \times [0,T].
\end{equation}
We prove inequality \cref{eq:lab39} first. By \Cref{lem:leA} and H\"{o}lder's inequality, we get
\begin{align*}
\begin{split}
& \int_{t}^{T}{\| F(\cdot, m^{\eta}_s) -F(\cdot, \bar m)\| _{\infty, \OOO}\ \frac{ds}{T}} \\
\leq & \ C(\|DF\|_\infty) \int_{t}^{T}{\| F(\cdot, m^{\eta}_s)- F(\cdot, \bar m) \|_{2, \OOO}^{\frac{2}{d+2}} \frac{ds}{T}} \\
\leq & \ \frac{C(\|DF\|_\infty)}{T} \left( \int_{t}^{T}{\| F(\cdot, m^{\eta}_s) -F(\cdot, \bar m) \|_{2,\OOO}^{2}\ ds} \right) ^{\frac{1}{d+2}} \left(\int_{t}^{T}{\bf 1}\ ds \right) ^{\frac{d+1}{d+2}} .
\end{split}
\end{align*}
Now, by assumption {\bf (F3)} and \Cref{lem:energyestimate} the term
$$
\left( \int_{t}^{T}{\| F(\cdot, m^{\eta}_s) -F(\cdot, \bar m) \|_{2,\OOO}^{2}\ ds} \right) ^{\frac{1}{d+2}}
$$
is bounded by a constant depending only on $L$, $F$ and $\OO$, while
$$
\left(\int_{t}^{T}{ {\bf 1}\ ds } \right) ^{\frac{d+1}{d+2}}\leq T^{\frac{d+1}{d+2}}.
$$
Inequality \cref{eq:lab39} follows.
Next, we prove \cref{eq:lab38}. For any given $(x,t)\in\OOO\times[0,T]$, let $\gamma^*:[0,T]\to \OOO$ be a minimizer of problem \cref{eq:ww}. By \cref{eq:ww} and \cref{eq:www}, we have that
\begin{align}\label{eq:lab43}
\begin{split}
u^{T}(t,x) - w(t,x) \leq\ & \int_{t}^{T} {L_{m^{\eta}_s}(\gamma^{*}(s), \dot\gamma^{*}(s))\ ds }- \int_{t}^{T} {L_{\bar m}(\gamma^{*}(s), \dot\gamma^{*}(s))\ ds}\\
& + u^{f}(\gamma^{*}(T)) - \bar v(\gamma^{*}(T))\\
=\ & u^{f}(\gamma^{*}(T)) - \bar v(\gamma^{*}(T)) + \int_{t}^{T}{ \left( F(\gamma^{*}(s), m^{\eta}_s) - F(\gamma^{*}(s), \bar m) \right)\ ds}.
\end{split}
\end{align}
By \cref{eq:lab43}, we get
\begin{align*}
\frac{ u^{T}(t,x) - w(t,x)}{T} \leq & \underbrace{\Bigl|\frac{u^{f}(\gamma^{*}(T))-\bar v (\gamma^{*}(T))}{T}\Bigr|}_{A}
\\
+ & \underbrace{\frac{1}{T}\int_{t}^{T}{\bigl|F(\gamma^{*}(s), m^{\eta}_s) -F(\gamma^{*}(s), \bar m)\bigr|\ ds}}_{B}.
\end{align*}
Let us first consider term $B$. Note that
\begin{align}\label{eq:5-500}
\begin{split}
& \int_{t}^{T}{\bigl|F(\gamma^{*}(s), m^{\eta}_s)-F(\gamma^{*}(s), \bar m)\bigr|\ \frac{ds}{T}}
\\
\leq & \int_{t}^{T}{\bigl\|F(\cdot, m^{\eta}_s)-F(\cdot, \bar m)\bigr\|_{\infty,\OOO} \ \frac{ds}{T}}
\leq \frac{C'}{T^{\frac{1}{d+2}}},
\end{split}
\end{align}
where $C'>0$ is a constant depending only on $L$, $F$ and $\OO$.
Since $\bar v$ and $u^f$ are bounded on $\OOO$, we can conclude that $A = O(\frac{1}{T})$, which together with \cref{eq:5-500} implies that
\[
\frac{ u^{T}(t,x) - w(t,x)}{T} \leq\
\frac{C''}{T^{\frac{1}{d+2}}}.
\]
Moreover, for any given $(x,t)\in \OOO\times [0,T]$, let $\xi^{\ast}(\cdot)$ be a minimizer of problem \cref{eq:www}. In view of \cref{eq:ww} and \cref{eq:www}, we deduce that
\begin{align}\label{eq:lab42}
\begin{split}
&w(t,x)-u^{T}(t,x)
\\ \leq & \ \int_{t}^{T} {L_{\bar m}(\xi^{*}(s), \dot\xi^{*}(s))\ ds }+ \bar v(\xi^{*}(T)) - \int_{t}^{T} {L_{m^{\eta}_s}(\xi^{*}(s), \dot\xi^{*}(s))\ ds} - u^{f}(\xi^{*}(T))
\\=& \ \bar{v}(\xi^{*}(T)) - u^{f}(\xi^{*}(T)) + \int_{t}^{T}{ \left( F(\xi^{*}(s), \bar m) - F(\xi^{*}(s), m^{\eta}_s) \right)\ ds}.
\end{split}
\end{align}
So, by almost the same arguments used above, one obtains
\[
\frac{w(t,x)- u^{T}(t,x)}{T} \leq\
\frac{C''}{T^{\frac{1}{d+2}}},
\]
which completes the proof of the theorem.\qed
\section{Introduction}
Feature matching is a key component in many 3D vision applications such as structure from motion (SfM) or simultaneous localization and mapping (SLAM).
Conventional pose estimation is a multi-step process: feature detection finds interest points, for which local descriptors are computed. Based on the descriptors, pairs of keypoints from different images are matched,
which defines constraints in the pose optimization.
A major challenge lies in the ambiguity of matching local descriptors by nearest-neighbor search, which is error-prone, particularly in texture-less areas or in presence of repetitive patterns.
Hand-crafted heuristics or outlier filters become necessary to circumvent this problem to some degree.
Recent learning-based approaches~\cite{Sarlin2020SuperGlueLF,Sun2021LoFTRDL,Jiang2021COTRCT}
instead leverage the greater image context to address the matching difficulty, e.g.,
SuperGlue~\cite{Sarlin2020SuperGlueLF} introduces a graph neural network (GNN) for descriptor matching on an image pair.
Graph edges connect keypoints from arbitrary locations
and enable reasoning in a broad context, leading to globally well-informed
solutions compared to convolutional neural networks (CNNs) with limited receptive fields.
The receptive field in SuperGlue, however, remains limited by the two-view setup, even though more images are typically available in pose estimation tasks.
Our idea is to further facilitate information flow by joining
multiple views into the matching process. This way, we allow multi-view correlation to strengthen geometric reasoning and confidence prediction.
Joint matching of multiple images integrates well into pose estimation pipelines, as they
typically solve for more than two cameras.
Additionally, we note that accurate feature matching, in and of itself, does not necessarily give rise to accurate pose estimation, as the spatial distribution of feature matches is essential for robust pose optimization.
For instance, perfectly precise matches may form a degenerate case (e.g., lying on a line) and thus have no value for pose optimization.
In addition, confidence scores predicted by matching networks do not necessarily reflect the value of matches towards pose optimization.
Feature matching and pose estimation are thus tightly coupled problems, for which we propose a joint solution:
We encode
keypoints and descriptors from multiple images to construct a graph, where self-attention provides context awareness within the same image and cross-attention enables reasoning with respect to all other images. A GNN predicts matches along with confidence weights, which define constraints on the camera poses that we optimize with a differentiable Gauss-Newton solver. The GNN is trained end-to-end using gradients from the pose optimization. From this feedback, the network learns to produce valuable matches for pose estimation and thereby
learns effective outlier rejection.
We evaluate our method on the ScanNet~\cite{Dai2017ScanNetR3}, Matterport3D~\cite{Chang2017Matterport3DLF} and MegaDepth~\cite{Li2018MegaDepthLS} datasets and show that it improves over prior work on learned feature matching.
In summary, we demonstrate that a joint approach to feature matching and pose estimation benefits both matching and pose accuracy, enabled by the following contributions:
\begin{itemize}
\item We propose a multi-view graph attention network to learn feature matches simultaneously across multiple frames.
\item We introduce an end-to-end trainable pose estimation that both guides confidence weights of feature matches in an unsupervised fashion and backpropagates gradients to inform the graph-matching network.
\end{itemize}
\section{Related Work}
\subsubsection{Conventional Feature Matching.}
The classical feature matching pipeline comprises the following steps: 1) interest point detection, 2) feature description, 3) matching through nearest neighbor search in descriptor space, and 4) outlier filtering. In this pipeline, hand-crafted features like SIFT~\cite{LoweDavid2004DistinctiveIF} and ORB~\cite{Rublee2011ORBAE} are very successful and have been widely used for many years. However, they tend to struggle with appearance or viewpoint changes.
Starting with LIFT~\cite{Yi2016LIFTLI}, learning-based descriptors have been developed to tackle these challenges~\cite{Ono2018LFNetLL,Dusmanu2019D2NetAT,Revaud2019R2D2RA,Bhowmik2020ReinforcedFP,Tyszkiewicz2020DISKLL}. They often combine interest point detection and description, such as SuperPoint \cite{DeTone2018SuperPointSI}, which we use for our method.
Nearest neighbor feature matching is prone to outliers, making post-processing methods indispensable. This includes mutual check, ratio test \cite{LoweDavid2004DistinctiveIF}, neighborhood consensus \cite{Tuytelaars2000WideBS,Cech2008EfficientSC,Cavalli2020HandcraftedOD,Bian2017GMSGM,Ma2018LocalityPM} and sampling based outlier rejection~\cite{Fischler1981RandomSC,Barth2019MAGSACMS,Raguram2008ACA}.
Learning-based approaches have also addressed outlier detection as a classification task~\cite{Yi2018LearningTF,Ranftl2018DeepFM,Brachmann2019NeuralGuidedRL,Zhang2019LearningTC}. These methods rely on reasonable matching proposals and lack visual information in their decision process.
\subsubsection{Learning-based Feature Matching.}
Recent approaches employ neural networks for feature matching on image pairs. There are methods that determine dense, pixel-wise correspondences
with confidence estimates for filtering~\cite{Rocco2018NeighbourhoodCN,Rocco2020EfficientNC,Li2020DualResolutionCN}. This effectively combines steps (1)-(3) from the classical matching pipeline. However, it suffers from the limited receptive field of CNNs and fails to distinguish regions of little texture or repetitive structure, due to missing global context.
In contrast, SuperGlue \cite{Sarlin2020SuperGlueLF} represents a sparse matching network that operates on keypoints with descriptors. Using an attention-based GNN~\cite{Vaswani2017AttentionIA}, all keypoints interact, hence the receptive field spans both images, leading to accurate matches in wide-baseline settings. Inspired by the success of GNN-based feature matching, we build upon SuperGlue by further extending its receptive field through multi-view matching and by improving
outlier filtering through end-to-end training with pose optimization.
LoFTR \cite{Sun2021LoFTRDL} recently proposed a detector-free approach that processes CNN features in a coarse-to-fine manner. Combined with attention, it likewise achieves a receptive field that spans the image pair and produces high-quality matches.
COTR \cite{Jiang2021COTRCT}, like LoFTR, operates on images directly
in a coarse-to-fine fashion. It is a transformer network that predicts for a query point in one image the correspondence in a second image. This way, it considers the global context; however, inference for a complete image pair takes tens of seconds.
We show that our multi-view, end-to-end approach performs better than
SuperGlue and the detector-free methods LoFTR and COTR.
\subsubsection{Pose Optimization.}
Once matches between a set of images are found, poses can be optimized using a bundle adjustment formulation~\cite{triggs1999bundle}.
The optimization can be applied to a set of RGB images~\cite{agarwal2011building} or lifted to the RGB-D case, if depth data is available from range sensors~\cite{dai2017bundlefusion}.
The resulting optimization problems typically lead to non-linear least squares formulations which are optimized using non-linear solvers such as Gauss-Newton or Levenberg-Marquardt.
The pipeline in these methods usually performs feature matching as a pre-process; i.e., correspondences are established first and then filtered with a combination of RANSAC and robust optimization techniques~\cite{zach2014robust,Choi_2015_CVPR}.
However, feature matching and pose optimization largely remain separate steps and cannot inform each other.
To this end, differentiable optimization techniques have been proposed for pose estimation, such as DeMoN~\cite{ummenhofer2017demon}, BA-Net~\cite{tang2018ba}, RegNet~\cite{han2018regnet}, or 3DRegNet~\cite{pais20203dregnet}.
The core idea of these methods is to obtain gradients through the pose optimizations that in turn guide the construction of learned feature descriptors.
In comparison to treating feature extraction as a separate step, feature descriptors are now learned with the objective to obtain well-aligned global poses instead of just trying to get good pair-wise matches.
In our work, we go a step further and focus on learning how to match features rather than using a pre-defined matching method.
As a result, we can leverage differentiable pose optimization to provide gradients for our newly-proposed multi-view graph attention network for feature matching, and achieve significantly improved pose estimation results.
\section{Method}
Our method associates keypoints from $N$ images $\{I_n\}^{N}_{n=1}$, such that resulting matches and confidence weights are particularly valuable for estimating the corresponding camera poses $\{\mathbf{p}_n\}^{N}_{n=1}$, $\mathbf{p}_n \in \mathbb{R}^6$.
Keypoints are represented by their image coordinates $\mathbf{x} \in \mathbb{R}^2$, visual descriptors $\mathbf{d} \in \mathbb{R}^D$ and a confidence score $c \in [0, 1]$.
We use the SuperPoint network for feature detection and description, as it has shown to perform well in combination with learned feature matching \cite{DeTone2018SuperPointSI,Sarlin2020SuperGlueLF}. The source of input descriptors, however, is flexible; for instance, the use of conventional descriptors, such as SIFT \cite{LoweDavid2004DistinctiveIF}, is also possible.
Our pipeline, as shown in \cref{fig:pipeline}, ties together feature matching and pose optimization: we employ a GNN to associate keypoints across multiple images (\cref{ssec:multi_view_matching}). The resulting matches and confidence weights define constraints in the subsequent pose optimization (\cref{ssec:pose_optimization}), which is differentiable, thus enabling end-to-end training (\cref{ssec:end2end_training}).
\subsection{Graph Attention Network for Multi-View Matching}
\label{ssec:multi_view_matching}
\subsubsection{Motivation.}
In the multi-view matching problem of $N$ images, each keypoint matches to at most $N - 1$ other keypoints, where each of the matching keypoints has to come from a different input image.
Without knowing the transformations between images, one keypoint can match to any keypoint location in the other images. Hence, all keypoints in the other images need to be considered as matching candidates. Although keypoints from the same image are not matching candidates, they contribute valuable constraints in the assignment problem, e.g., their projection into other images must follow consistent transformations. The matching problem can be represented as a graph, where nodes model keypoints and edges their relationships.
A GNN architecture reflects this structure and enables learning the complex relations between keypoints to determine feature matches. The iterative message passing process enables the search for globally optimal matches as opposed to a greedy local assignment.
On top of that, attention-based message aggregation allows each keypoint to focus on information from the keypoints that provide the most insight for its assignment.
We build upon the SuperGlue architecture, which introduces an attention-based GNN for descriptor matching between image pairs \cite{Sarlin2020SuperGlueLF}. Our extension to multi-image matching is motivated by the following considerations: first, graph-based reasoning can benefit from tracks that are longer than two keypoints---i.e., a match becomes more confident if multiple views agree on the keypoint similarity and its coherent location with respect to the other keypoints in each frame.
In particular, with regards to robust pose optimization, it is crucial to facilitate this information flow and boost the confidence prediction.
Second, pose estimation or SLAM systems generally consider multiple input views.
With the described graph structure, jointly
matching $N$ images is more efficient in terms of GNN messages than matching the corresponding image pairs individually, as detailed in the following paragraph.
\subsubsection{Graph Construction.}
Each keypoint represents a graph node. The initial node embedding ${}^{(1)}\mathbf{f}_i$ of keypoint $i$ is computed from its image coordinates $\mathbf{x}_i$, confidence $c_i$ and descriptor $\mathbf{d}_i$ (\cref{eq:node_embedding}). This allows the GNN to consider spatial location, certainty and visual appearance in the matching process:
\begin{equation}
{}^{(1)}\mathbf{f}_i = \mathbf{d}_i + F_\mathrm{encode}\left(\left[\mathbf{x}_i \mathbin\Vert c_i\right]\right),
\label{eq:node_embedding}
\end{equation}
where $\mathbin\Vert$ denotes concatenation and $F_{\mathrm{encode}}$ is a multilayer perceptron (MLP) that lifts the image point and its confidence into the high-dimensional space of the descriptor. Such positional encoding helps the spatial learning \cite{Sarlin2020SuperGlueLF,Gehring2017ConvolutionalST,Vaswani2017AttentionIA}.
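To make \cref{eq:node_embedding} concrete, a minimal PyTorch-style sketch of the keypoint encoder is given below; the hidden layer widths are illustrative assumptions and not the values of our actual implementation (see supplementary material).
\begin{verbatim}
import torch
import torch.nn as nn

class KeypointEncoder(nn.Module):
    """Sketch of F_encode (cf. eq:node_embedding): lifts image
    coordinates and confidence into descriptor space.
    Hidden widths are assumed; D is the descriptor dimension."""
    def __init__(self, D=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 32), nn.ReLU(),   # input: (x, y, c)
            nn.Linear(32, 64), nn.ReLU(),
            nn.Linear(64, D))

    def forward(self, x, c, d):
        # x: (K, 2) coordinates, c: (K, 1) scores, d: (K, D) descriptors
        return d + self.mlp(torch.cat([x, c], dim=-1))
\end{verbatim}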
The graph nodes are connected by two kinds of edges: self-edges connect keypoints within the same image. Cross-edges connect keypoints from different images (\cref{fig:graph_edges}). The edges are undirected, i.e., information flows in both directions. \cref{tab:gnn_messages} shows that jointly matching $N$ images reduces the number of GNN messages compared to separately matching the corresponding $P=\sum_{n=1}^{N-1}n$ pairs. The savings result from fewer intra-frame messages between keypoints of the same image, e.g., for five images with $K$ keypoints each, pairwise matching involves $20K^2$ messages on a self-layer and $20K^2$ on a cross-layer---joint matching requires only $5K^2$ and $20K^2$, respectively.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\textwidth,trim={4.8cm 9.9cm 4.3cm 5.7cm},clip]{figures/graph_edges.pdf}
\caption{Self- and cross-edges connected to a node $i$.}
\vspace{-0.4cm}
\label{fig:graph_edges}
\end{figure}
\setlength{\tabcolsep}{4pt}
\begin{table}[tb]
\begin{center}
\caption{Number of GNN messages per layer for matching $N$ images, each with $K$ keypoints, as $P$ individual image pairs versus joint matching in a single graph.}
\label{tab:gnn_messages}
\begin{tabular}{lccc}
\toprule
& Messages along self-edges & Messages along cross-edges \\
\midrule
Pairwise matching & $2PK^2$ & $N(N-1)K^2$ \\
Joint matching & $NK^2$ & $N(N-1)K^2$ \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-0.6cm}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\subsubsection{Message Passing.}
Interaction between keypoints---the graph nodes---is realized through message passing \cite{Duvenaud2015ConvolutionalNO,Gilmer2017NeuralMP}. The goal is to achieve a state where node descriptors of matching keypoints are close in descriptor space, whereas unrelated keypoints are far apart. The GNN has $L$ layers, where each layer $\ell$ corresponds to a message exchange between keypoints. The layers alternate between updates along self-edges $\mathcal{E}_{\mathrm{self}}$ and cross-edges $\mathcal{E}_{\mathrm{cross}}$---starting with an exchange along self-edges in layer $\ell=1$ \cite{Sarlin2020SuperGlueLF}. \cref{eq:node_update} describes the iterative node descriptor update, where ${}^{(\ell)}\mathbf{m}_{\mathcal{E}\rightarrow i}$ is the aggregated message from all keypoints that are connected to keypoint $i$ by an edge in $\mathcal{E} \in \{\mathcal{E}_{\mathrm{self}}, \mathcal{E}_{\mathrm{cross}}\}$. ${}^{(\ell)}F_{\mathrm{update}}$ is an MLP, where each GNN layer $\ell$ has a separate set of network weights.
\begin{equation}
{}^{(\ell+1)}\mathbf{f}_i = {}^{(\ell)}\mathbf{f}_i + {}^{(\ell)}F_{\mathrm{update}}\left(\left[{}^{(\ell)}\mathbf{f}_i \mathbin\Vert {}^{(\ell)}\mathbf{m}_{\mathcal{E}\rightarrow i}\right]\right)
\label{eq:node_update}
\end{equation}
Multi-head attention \cite{Vaswani2017AttentionIA} is used to merge all incoming information for keypoint $i$ into a single message ${}^{(\ell)}\mathbf{m}_{\mathcal{E}\rightarrow i}$ \cite{Sarlin2020SuperGlueLF}.
Messages along self-edges are combined by self-attention between the keypoints of the same image, messages along cross-edges by cross-attention between the keypoints from all other images.
Linear projection of node descriptors is used to compute the query ${}^{(\ell)}\mathbf{q}_i$ of query keypoint $i$, as well as the keys ${}^{(\ell)}\mathbf{k}_j$ and values ${}^{(\ell)}\mathbf{v}_j$ of its source keypoints $j$:
\begin{align}
{}^{(\ell)}\mathbf{q}_i &= {}^{(\ell)}\mathbf{W}_1 {}^{(\ell)}\mathbf{f}_i + {}^{(\ell)}\mathbf{b}_1 ,
\label{eq:query} \\
\begin{bmatrix}
{}^{(\ell)}\mathbf{k}_j \\ {}^{(\ell)}\mathbf{v}_j
\end{bmatrix}
&=
\begin{bmatrix}
{}^{(\ell)}\mathbf{W}_2 \\ {}^{(\ell)}\mathbf{W}_3
\end{bmatrix}
{}^{(\ell)}\mathbf{f}_j +
\begin{bmatrix}
{}^{(\ell)}\mathbf{b}_2 \\ {}^{(\ell)}\mathbf{b}_3
\end{bmatrix}.
\label{eq:key_val}
\end{align}
The set of source keypoints $\{j : (i, j) \in \mathcal{E}\}$ comprises all keypoints connected to $i$ by an edge of the type that is relevant to the current layer. $\mathbf{W}$ and $\mathbf{b}$ are per-layer weight matrices and bias vectors, respectively.
For each source keypoint the similarity to the query is computed by the dot product ${}^{(\ell)}\mathbf{q}_i\cdot{}^{(\ell)}\mathbf{k}_j$. The softmax over the similarity scores determines the attention weight $\alpha_{ij}$ of each source keypoint $j$ in the aggregated message to $i$:
\begin{equation}
{}^{(\ell)}\mathbf{m}_{\mathcal{E}\rightarrow i} = \sum_{j : (i, j) \in \mathcal{E}} {}^{(\ell)}\alpha_{ij}{}^{(\ell)}\mathbf{v}_j .
\end{equation}
It is important to note that in layers which update along cross-edges, the source keypoints $j$ to a query keypoint $i$ come from multiple images. The softmax-based weighting is robust to a variable number of input views and, therewith, a variable number of keypoints.
After $L$ message passing iterations the node descriptors for subsequent assignment are retrieved by linear projection:
\begin{equation}
\mathbf{f}_i = \mathbf{W}_4 {}^{(L+1)}\mathbf{f}_i + \mathbf{b}_4 .
\label{eq:final_proj}
\end{equation}
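The message computation of \cref{eq:node_update,eq:query,eq:key_val} can be summarized by the following single-head sketch; our network uses multi-head attention, so the single head and the $1/\sqrt{D}$ attention scaling below are simplifying assumptions of this sketch.
\begin{verbatim}
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One GNN layer: attention-weighted aggregation of messages to
    query keypoints i from source keypoints j (same image for
    self-edges, all other images for cross-edges)."""
    def __init__(self, D=256):
        super().__init__()
        self.q_proj = nn.Linear(D, D)        # W1, b1 (cf. eq:query)
        self.kv_proj = nn.Linear(D, 2 * D)   # [W2; W3], [b2; b3]
        self.update = nn.Sequential(         # F_update (cf. eq:node_update)
            nn.Linear(2 * D, 2 * D), nn.ReLU(), nn.Linear(2 * D, D))

    def forward(self, f_i, f_j):
        # f_i: (Ki, D) descriptors to update; f_j: (Kj, D) sources
        q = self.q_proj(f_i)
        k, v = self.kv_proj(f_j).chunk(2, dim=-1)
        alpha = torch.softmax(q @ k.t() / q.shape[-1] ** 0.5, dim=-1)
        m = alpha @ v                        # aggregated messages
        return f_i + self.update(torch.cat([f_i, m], dim=-1))
\end{verbatim}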
\subsubsection{Partial Assignment.}
SuperGlue \cite{Sarlin2020SuperGlueLF} addresses the partial assignment problem between keypoints of two images, $I_1$ and $I_2$, where each keypoint either obtains a match in the other image or remains unmatched. A score matrix $\mathbf{S} \in \mathbb{R}^{(K_1+1)\times (K_2+1)}$ is defined, where $K_1$ and $K_2$ are the number of keypoints in the images, hence all potential matches and the unmatched option are represented. The elements $\mathbf{S}_{i,j}$ are filled with the dot-product similarity of the final node descriptors $\mathbf{f}_{1,i} \cdot \mathbf{f}_{2,j}$, where $\mathbf{f}_{1,i}$ is from $I_1$ and $\mathbf{f}_{2,j}$ from $I_2$.
The last row and column of $\mathbf{S}$, representing unmatched, are initialized with a trainable parameter $q \in \mathbb{R}$. The differentiable Sinkhorn algorithm \cite{Sinkhorn1967ConcerningNM,Cuturi2013SinkhornDL} optimizes for a soft assignment matrix $\mathbf{P} \in [0, 1]^{(K_1+1)\times (K_2+1)}$ that maximizes the sum of scores $\sum_{r,c}\mathbf{S}_{r,c}\mathbf{P}_{r,c}$ while obeying constraints on the number of matches:
\begin{equation}
\mathbf{P}\mathbf{1}_{K_2+1} =
\begin{bmatrix}
\mathbf{1}_{K_1 + 1}^{\top} & K_2
\end{bmatrix}^{\top} \quad \text{and} \quad
\mathbf{P}^\top\mathbf{1}_{K_1+1} =
\begin{bmatrix}
\mathbf{1}_{K_2 + 1}^{\top} & K_1
\end{bmatrix}^{\top}.
\label{eq:sinkhorn_constraints}
\end{equation}
We adopt this approach and apply it pairwise to the images in the multi-view setting. $\mathcal{P}$ is the set of all possible image pairs from $\{I_n\}_{n=1}^N$, excluding pairs between identical images, as well as pairs that are a permutation of an existing pair. For each pair $(a,b) \in \mathcal{P}$, where $a,b \in \{1,2,\dots,N\}$, a score matrix $\mathbf{S}_{ab}$ is created and the assignment $\mathbf{P}_{ab}$ is computed by means of Sinkhorn algorithm. From $\mathbf{P}_{ab}$ the set of matches $\mathcal{M}_{ab}$ is derived: first, a candidate match for each keypoint in $I_a$ and $I_b$ is determined by the row-wise and column-wise maximal elements of $\mathbf{P}_{ab}$. Second, mutual agreement of matching keypoints is enforced.
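A log-domain sketch of this assignment step is given below; working in the log domain is the usual choice for numerical stability, and the fixed iteration count as well as the match extraction by mutual row/column argmax follow common practice \cite{Cuturi2013SinkhornDL,Sarlin2020SuperGlueLF}. Any confidence thresholding is omitted for brevity, so this is an outline rather than our exact implementation.
\begin{verbatim}
import torch

def sinkhorn(S, q, iters=100):
    """Soft partial assignment (cf. eq:sinkhorn_constraints).
    S: (K1, K2) score matrix, q: learned dustbin score.
    Returns P of shape (K1+1, K2+1)."""
    K1, K2 = S.shape
    Sa = torch.cat([torch.cat([S, torch.full((K1, 1), q)], 1),
                    torch.full((1, K2 + 1), q)], 0)
    # target log-marginals: each keypoint carries unit mass, the
    # dustbin row/column absorbs the unmatched mass (K2 resp. K1)
    log_mu = torch.cat([torch.zeros(K1), torch.log(torch.tensor([float(K2)]))])
    log_nu = torch.cat([torch.zeros(K2), torch.log(torch.tensor([float(K1)]))])
    u, v = torch.zeros(K1 + 1), torch.zeros(K2 + 1)
    for _ in range(iters):  # alternate row/column normalization
        u = log_mu - torch.logsumexp(Sa + v[None, :], dim=1)
        v = log_nu - torch.logsumexp(Sa + u[:, None], dim=0)
    return (Sa + u[:, None] + v[None, :]).exp()

def mutual_matches(P):
    """Keep (i, j) only if i and j are mutually each other's best match."""
    best_j = P[:-1, :-1].argmax(dim=1)  # exclude the dustbins
    best_i = P[:-1, :-1].argmax(dim=0)
    return [(i, j.item()) for i, j in enumerate(best_j)
            if best_i[j].item() == i]
\end{verbatim}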
\subsection{Differentiable Pose Optimization}
\label{ssec:pose_optimization}
We introduce a differentiable optimizer $\Omega$ that jointly estimates all camera poses from the matches determined by the partial assignment:
\begin{equation}
\{\mathbf{p}_n\}_{n=1}^{N} = \Omega(\{\mathcal{M}_{ab}:(a,b) \in \mathcal{P}\}, \{Z_n\}_{n=1}^{N}) .
\label{eq:optimizer}
\end{equation}
To stabilize the optimization, we use the depth maps $\{Z_n\}_{n=1}^{N}$ as additional input. Without depth measurements or a good pose initialization, the optimization of bundle adjustment formulations is prone to falling into local minima.
We define the energy as weighted sum of squared errors between matches in world coordinates (\cref{eq:energy}). A match consists of the image coordinates, $\mathbf{x}_a$ in $I_a$ and $\mathbf{x}_b$ in $I_b$, as well as the matching confidence $w$, i.e., the corresponding element from the assignment $\mathbf{P}_{ab}$. The function $\pi_n^{-1}(\mathbf{x}_n, Z_n)$ unprojects an image point $\mathbf{x}_n$ in $I_n$ to homogeneous camera coordinates using its depth from $Z_n$. ${\mathbf{T}_{\mathbf{p}_n} \in \mathbb{R}^{3\times4}}$ defines the transformation from camera pose $\mathbf{p}_n$ to world coordinates. $\mathbf{p} \in \mathbb{R}^{6N}$ refers to the concatenation of all pose vectors, which are in $\mathfrak{se}(3)$ coordinates, i.e., three translation elements followed by three rotation elements.
\begin{align}
E(\mathbf{p}) = \sum_{(a,b) \in \mathcal{P}} \; \sum_{(\mathbf{x}_a,\mathbf{x}_b,w) \in \mathcal{M}_{ab}} w^2 \left\Vert \mathbf{T}_{\mathbf{p}_a}\mathbf{y}_a - \mathbf{T}_{\mathbf{p}_b}\mathbf{y}_b \right\Vert_2^2&,
\label{eq:energy} \\
\text{where} \quad \mathbf{y}_a=\pi_a^{-1}(\mathbf{x}_a, Z_a) \quad \text{and} \quad \mathbf{y}_b=\pi_b^{-1}(\mathbf{x}_b, Z_b)&.
\end{align}
Gauss-Newton is used to minimize the energy with respect to the camera poses. For this purpose, a residual vector $\mathbf{r} \in \mathbb{R}^{3M}$ is created from the energy terms, where $M$ is the total number of matches between all images. Each match $m$ fills its corresponding subvector $\mathbf{r}_m \in \mathbb{R}^3$:
\begin{equation}
\mathbf{r}_m = w(\mathbf{T}_{\mathbf{p}_a}\mathbf{y}_a - \mathbf{T}_{\mathbf{p}_b}\mathbf{y}_b).
\end{equation}
All poses are initialized to $\mathbf{0}$. We keep one pose fixed, which defines the world frame, and optimize for the remaining poses $\bar{\mathbf{p}}\in \mathbb{R}^{6(N-1)}$. The Jacobian matrix $\mathbf{J}\in \mathbb{R}^{3M\times6(N-1)}$ is initialized to $\mathbf{0}$ and filled with the partial derivatives with respect to the pose parameters: for each match $m$ the corresponding blocks $\mathbf{J}_{ma}, \mathbf{J}_{mb} \in \mathbb{R}^{3\times6}$ are assigned \cite{Blanco}:
\begin{equation}
\mathbf{J}_{ma}=\frac{\partial \mathbf{r}_m}{\partial \mathbf{p}_a}=w\begin{bmatrix}
\mathbf{I}_3 & \: & -\left(\mathbf{T}_{\mathbf{p}_a}\mathbf{y}_a\right)^\wedge
\end{bmatrix} \enspace , \enspace
\mathbf{J}_{mb}=\frac{\partial \mathbf{r}_m}{\partial \mathbf{p}_b}=w\begin{bmatrix}
-\mathbf{I}_3 & \: & \left(\mathbf{T}_{\mathbf{p}_b}\mathbf{y}_b\right)^\wedge
\end{bmatrix}.
\label{eq:jac}
\end{equation}
$\mathbf{I}_3$ is a $3\times3$ identity matrix and $(\cdot)^\wedge$ maps a vector in $\mathbb{R}^3$ to its skew-symmetric matrix:
\begin{equation*}
\begin{bmatrix}
x \\ y \\ z
\end{bmatrix}^{\wedge}
=
\begin{bmatrix}
0&-z&y\\
z&0&-x\\
-y&x&0
\end{bmatrix}.
\end{equation*}
If $a$ or $b$ identify the fixed pose, the corresponding assignment to $\mathbf{J}$ is skipped.
Using the current state of the camera poses, each Gauss-Newton iteration establishes a linear system that is solved for the pose update $\mathrm{\Delta} \bar{\mathbf{p}}$ using LU decomposition:
\begin{equation}
\mathbf{J}^\top\mathbf{J} \mathrm{\Delta} \bar{\mathbf{p}}=-\mathbf{J}^\top\mathbf{r}.
\label{eq:gn_update}
\end{equation}
We update the poses in $T=10$ Gauss-Newton iterations, from which the set of poses with minimal energy is used for end-to-end training in \cref{ssec:end2end_training}.
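A sketch of one such iteration is shown below; the conversion from $\mathfrak{se}(3)$ parameters to a $3\times4$ transformation (exponential map) is abstracted into a callable, damping and autograd bookkeeping are omitted, and all shapes are assumptions of this sketch rather than a verbatim transcript of our implementation.
\begin{verbatim}
import torch

def hat(w):
    """3-vector to its skew-symmetric matrix (the wedge operator)."""
    W = torch.zeros(3, 3)
    W[0, 1], W[0, 2] = -w[2], w[1]
    W[1, 0], W[1, 2] = w[2], -w[0]
    W[2, 0], W[2, 1] = -w[1], w[0]
    return W

def gauss_newton_step(poses, matches, T_of):
    """One iteration of eq:gn_update. poses: (N, 6) se(3) parameters,
    pose 0 kept fixed; matches: list of (a, b, y_a, y_b, w) with y_* in
    homogeneous camera coordinates (unprojected via depth) and w the
    match confidence; T_of: maps a 6-vector to a 3x4 pose matrix."""
    N = poses.shape[0]
    r = torch.zeros(3 * len(matches))
    J = torch.zeros(3 * len(matches), 6 * (N - 1))
    for m, (a, b, y_a, y_b, w) in enumerate(matches):
        pa, pb = T_of(poses[a]) @ y_a, T_of(poses[b]) @ y_b
        r[3*m:3*m+3] = w * (pa - pb)
        if a > 0:  # Jacobian blocks of eq:jac; the fixed pose is skipped
            J[3*m:3*m+3, 6*(a-1):6*a] = w * torch.cat([torch.eye(3), -hat(pa)], 1)
        if b > 0:
            J[3*m:3*m+3, 6*(b-1):6*b] = w * torch.cat([-torch.eye(3), hat(pb)], 1)
    delta = torch.linalg.solve(J.t() @ J, -J.t() @ r)  # via LU
    return delta.reshape(N - 1, 6)  # update for the non-fixed poses
\end{verbatim}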
\subsection{End-to-End Training}
\label{ssec:end2end_training}
The learnable parameters include the GNN parameters and the parameter $q$ of the partial assignment module.
The whole pipeline, from the matching network to the pose optimization, is differentiable, which allows for a pose loss that guides the matching network to produce valuable matches and accurate confidences for robust pose optimization. The training objective $\mathcal{L}$ consists of a matching term $\mathcal{L}_{\mathrm{match}}$ \cite{Sarlin2020SuperGlueLF} and a pose term $\mathcal{L}_{\mathrm{pose}}$, which are balanced by the factor $\lambda$:
\begin{align}
\mathcal{L}&=\sum_{(a,b)\in \mathcal{P}}\mathcal{L}_{\mathrm{match}}(a,b)+\lambda \mathcal{L}_{\mathrm{pose}}(a,b), \quad \text{where}
\label{eq:total_loss} \\
\mathcal{L}_{\mathrm{match}}(a,b)&=-\sum_{(i,j)\in \mathcal{T}_{ab}}\log \mathbf{P}_{ab,i,j}-\sum_{i\in \mathcal{U}_{ab}}\log \mathbf{P}_{ab,i,K_b+1}-\sum_{j\in \mathcal{V}_{ab}}\log \mathbf{P}_{ab,K_a+1,j}, \nonumber \\
\mathcal{L}_{\mathrm{pose}}(a,b)&=\left\Vert\hat{\mathbf{t}}_{a\rightarrow b}-\mathbf{t}_{a\rightarrow b}\right\Vert_2+\lambda_{\mathrm{rot}}\cos^{-1}\left(\frac{\mathrm{tr}(\mathbf{R}_{a\rightarrow b}^\top\hat{\mathbf{R}}_{a\rightarrow b})-1}{2}\right). \nonumber
\end{align}
$\mathcal{L}_{\mathrm{match}}$ computes the negative log-likelihood of the assignment between an image pair. The labels are computed using the ground truth depth maps, camera poses and intrinsic parameters: $\mathcal{T}_{ab}$ is the set of matching keypoints, $\mathcal{U}_{ab}$ and $\mathcal{V}_{ab}$ identify unmatched keypoints from $I_a$ and $I_b$, respectively.
$\mathcal{L}_{\mathrm{pose}}$ computes a transformation error between a pair of camera poses, where the translational and rotational components are balanced by $\lambda_{\mathrm{rot}}$. $\hat{\mathbf{R}}_{a\rightarrow b}$ and $\hat{\mathbf{t}}_{a\rightarrow b}$ are a rotation matrix and translation vector computed from the pose optimization result (\cref{ssec:pose_optimization}). Rodrigues' formula is used to convert from axis-angle representation to rotation matrix. $\mathbf{R}_{a\rightarrow b}$ and $\mathbf{t}_{a\rightarrow b}$ define the ground truth transformation.
We use the Adam optimizer \cite{Kingma2015AdamAM}. Further details on the network architecture and training setup are provided in the supplementary material.
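For completeness, a sketch of the pose term $\mathcal{L}_{\mathrm{pose}}$ of \cref{eq:total_loss} is given below; the clamp guarding the $\cos^{-1}$ against numerical round-off is an implementation-level assumption of this sketch.
\begin{verbatim}
import torch

def pose_loss(R_hat, t_hat, R_gt, t_gt, lambda_rot=1.0):
    """Relative-pose loss: translation L2 error plus the weighted
    geodesic angle between 3x3 rotation matrices."""
    cos = ((R_gt.t() @ R_hat).diagonal().sum() - 1.0) / 2.0  # trace
    angle = torch.arccos(cos.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
    return torch.norm(t_hat - t_gt) + lambda_rot * angle
\end{verbatim}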
\section{Results}
We compare our method to baselines by evaluating indoor and outdoor pose estimation (\cref{ssec:pose_estimation}) and matching accuracy (\cref{ssec:matching}). \cref{ssec:ablation} shows the effectiveness of the added components in an ablation study. Runtime considerations are part of the supplementary material.
\subsection{Datasets}
\label{ssec:datasets}
\subsubsection{ScanNet \cite{Dai2017ScanNetR3}.}
Following the data generation in previous works \cite{Sarlin2020SuperGlueLF,Sun2021LoFTRDL}, we sample images from the video sequence, such that the overlap to the previous image lies in $[0.4, 0.8]$.
Instead of sampling a pair, we append three more images according to this overlap criterion. The resulting 5-tuples enable multi-view evaluation and provide a more realistic pose estimation scenario. The overlap is computed from ground truth poses, depth maps and intrinsic parameters.
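The overlap between two frames can be computed, for instance, by unprojecting the valid depth pixels of one frame, reprojecting them into the other, and counting the points that land inside the image with consistent depth; the sketch below makes the depth-consistency tolerance an explicit, assumed parameter.
\begin{verbatim}
import numpy as np

def overlap(depth_a, depth_b, K, T_b_from_a, tol=0.1):
    """Fraction of valid pixels of frame a that reproject into frame b
    with relative depth error below tol (tol is an assumed parameter)."""
    h, w = depth_a.shape
    v, u = np.nonzero(depth_a > 0)
    z = depth_a[v, u]
    pts = np.linalg.inv(K) @ (np.stack([u, v, np.ones_like(u)]) * z)
    pts_b = T_b_from_a[:3, :3] @ pts + T_b_from_a[:3, 3:4]
    proj = K @ pts_b
    ub = np.round(proj[0] / proj[2]).astype(int)
    vb = np.round(proj[1] / proj[2]).astype(int)
    zb = proj[2]
    ok = (zb > 0) & (ub >= 0) & (ub < w) & (vb >= 0) & (vb < h)
    good = np.abs(depth_b[vb[ok], ub[ok]] - zb[ok]) < tol * zb[ok]
    return good.sum() / max(len(z), 1)
\end{verbatim}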
\subsubsection{Matterport3D \cite{Chang2017Matterport3DLF}.}
Compared to ScanNet, Matterport3D view captures are much more sparse, i.e., neighboring images are $60\degree$ horizontally and $30\degree$ vertically apart. Hence, Matterport3D is a challenging dataset for the matching task.
To obtain a sufficient dataset size, we relax the overlap criterion to $[0.25, 0.8]$.
This challenging dataset serves to measure robustness on the pose estimation task.
\subsubsection{MegaDepth \cite{Li2018MegaDepthLS}.}
As in prior work \cite{Sarlin2020SuperGlueLF,Dusmanu2019D2NetAT}, the overlap between images is the portion of co-visible 3D points of the sparse reconstruction, thus the overlap definition is different from the indoor datasets and not comparable. Overlap ranges $[0.1, 0.7]$ and $[0.1, 0.4]$ are used at train and test time, respectively \cite{Sarlin2020SuperGlueLF}.
\subsection{Pose Estimation}
\label{ssec:pose_estimation}
\setlength{\tabcolsep}{4pt}
\begin{table}[tb]
\centering
\caption{Baseline comparison and ablation study on wide-baseline indoor pose estimation on ScanNet; ``cross-dataset'' indicates that COTR was trained on MegaDepth.}
\label{tab:pose_scannet}
\resizebox{\linewidth}{!}{
\begin{tabular}{l >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm}}
\toprule
& \multicolumn{3}{c}{Rotation error AUC [\%] $\uparrow$} & \multicolumn{3}{c}{Translation error AUC [\%] $\uparrow$} \\
\cmidrule(lr){2-4}
\cmidrule(lr){5-7}
& @5\degree & @10\degree & @20\degree & @5cm & @10cm & @20cm \\
\midrule
Mutual nearest neighbor & 14.5 & 25.9 & 40.7 & 3.7 & 8.9 & 17.9 \\
SuperGlue \cite{Sarlin2020SuperGlueLF} & 63.4 & 78.9 & 88.2 & 28.9 & 49.0 & 67.8 \\
LoFTR \cite{Sun2021LoFTRDL} & 72.2 & 83.9 & 90.4 & 40.2 & 59.7 & 75.4 \\
COTR \cite{Jiang2021COTRCT} cross-dataset & 46.2 & 60.5 & 72.0 & 20.9 & 36.1 & 51.8 \\
Ours w/o multi-view & 68.7 & 81.9 & 89.6 & 35.8 & 56.7 & 73.6 \\
Ours w/o end-to-end & 66.0 & 80.6 & 89.2 & 31.0 & 51.8 & 70.3 \\
Ours & \textbf{72.5} & \textbf{84.6} & \textbf{91.5} & \textbf{41.5} & \textbf{61.8} & \textbf{77.5} \\
\bottomrule
\end{tabular}
}
\vspace{0.4cm}
\caption{Baseline comparison and ablation study on wide-baseline indoor pose estimation on Matterport3D.}
\label{tab:pose_matterport}
\resizebox{\linewidth}{!}{
\begin{tabular}{l >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm}}
\toprule
& \multicolumn{3}{c}{Rotation error AUC [\%] $\uparrow$} & \multicolumn{3}{c}{Translation error AUC [\%] $\uparrow$} \\
\cmidrule(lr){2-4}
\cmidrule(lr){5-7}
& @5\degree & @10\degree & @20\degree & @5cm & @10cm & @20cm \\
\midrule
Mutual nearest neighbor & 0.6 & 2.0 & 5.4 & 0.0 & 0.1 & 0.3 \\
SuperGlue \cite{Sarlin2020SuperGlueLF} & 18.5 & 29.6 & 41.7 & 3.4 & 8.5 & 16.9 \\
Ours w/o multi-view & 27.2 & 38.0 & 49.1 & 6.5 & 14.2 & 24.6 \\
Ours w/o end-to-end & 30.5 & 42.3 & 53.5 & 7.1 & 16.3 & 28.2 \\
Ours & \textbf{42.4} & \textbf{55.4} & \textbf{66.2} & \textbf{12.2} & \textbf{24.9} & \textbf{39.8} \\
\bottomrule
\end{tabular}
}
\vspace{0.4cm}
\caption{Baseline comparison and ablation study on wide-baseline outdoor pose estimation on MegaDepth. For comparison to LoFTR, we retrain and test our model on the LoFTR data split (bottom section of the table).}
\label{tab:pose_megadepth}
\resizebox{\linewidth}{!}{
\begin{tikzpicture}[very thick,squarednode/.style={rectangle, draw=none, fill=white, very thin, minimum size=2mm, text opacity=1,fill opacity=0}]
\node[anchor=south west,inner sep=0] (image) at (0,0)
{\begin{tabular}{p{0.3cm}l >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{1.2cm}}
\toprule
& & \multicolumn{3}{c}{Rotation error AUC [\%] $\uparrow$} & \multicolumn{3}{c}{Translation error AUC [\%] $\uparrow$} \\
\cmidrule(lr){3-5}
\cmidrule(lr){6-8}
& & @5\degree & @10\degree & @20\degree & @5\degree & @10\degree & @20\degree \\
\midrule
& Mutual nearest neighbor & 14.3 & 27.8 & 44.2 & 6.6 & 14.6 & 26.5 \\
& SuperGlue \cite{Sarlin2020SuperGlueLF} & 70.3 & 77.8 & 83.7 & 53.3 & 64.1 & 73.6 \\
& COTR \cite{Jiang2021COTRCT} & 61.4 & 69.7 & 77.5 & 45.7 & 56.7 & 66.9 \\
& Ours w/o multi-view & 74.4 & 80.8 & 86.1 & 58.5 & 68.8 & 77.4 \\
& Ours w/o end-to-end & 74.5 & 81.6 & 87.0 & 57.8 & 68.9 & 77.8 \\
& Ours & \textbf{81.1} & \textbf{86.8} & \textbf{91.2} & \textbf{67.7} & \textbf{76.6} & \textbf{83.6} \\
\cmidrule(l){2-8}
& LoFTR \cite{Sun2021LoFTRDL} & 75.2 & 83.0 & 88.6 & 60.5 & 71.3 & 79.7 \\
& Ours & \textbf{89.6} & \textbf{93.6} & \textbf{95.9} & \textbf{74.1} & \textbf{82.3} & \textbf{88.3} \\
\bottomrule
\end{tabular}};
\begin{scope}[x=(image.south east),y=(image.north west)]
\node[squarednode,rotate=90] at (0.02, 0.5) (a) {\scriptsize Split A};
\draw [decorate, decoration = {calligraphic brace}] (0.045,0.23) -- (0.045,0.74);
\node[squarednode,rotate=90] at (0.02, 0.12) (a) {\scriptsize Split B};
\draw [decorate, decoration = {calligraphic brace}] (0.045,0.02) -- (0.045,0.2);
\end{scope}
\end{tikzpicture}
}
\vspace{-0.4cm}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\begin{figure*}[tb]
\centering
\begin{tikzpicture}[squarednode/.style={rectangle, draw=none, fill=white, very thin, minimum size=2mm, text opacity=1,fill opacity=0.}]
\node[anchor=south west,inner sep=0] (image) at (0,0)
{\includegraphics[width=\linewidth]{figures/scannet_results.jpg}};
\begin{scope}[x=(image.south east),y=(image.north west)]
\node[squarednode] at (0.05, 69/72) (a) {\bf 1};
\node[squarednode] at (0.05, 57/72) (b) {\bf 2};
\node[squarednode] at (0.05, 45/72) (c) {\bf 3};
\node[squarednode] at (0.05, 33/72) (d) {\bf 4};
\node[squarednode] at (0.05, 21/72) (e) {\bf 5};
\node[squarednode] at (0.05, 9/72) (f) {\bf 6};
\node[squarednode] at (0.165, -0.025) (g) {Input 5-tuples};
\node[squarednode] at (0.4215, -0.025) (g) {SuperGlue \cite{Sarlin2020SuperGlueLF}};
\node[squarednode] at (0.59, -0.025) (g) {LoFTR \cite{Sun2021LoFTRDL}};
\node[squarednode] at (0.75, -0.025) (g) {COTR \cite{Jiang2021COTRCT}};
\node[squarednode] at (0.915, -0.025) (g) {Ours};
\end{scope}
\end{tikzpicture}
\vspace{-0.6cm}
\caption{Reconstructions (right) from estimated camera poses on ScanNet 5-tuples (left). With multi-view matching and end-to-end training, our method successfully handles challenging pose estimation scenarios, while baselines show severe camera pose errors.}
\label{fig:scannet_results}
\vspace{-0.4cm}
\end{figure*}
\begin{figure*}[tb]
\centering
\begin{tikzpicture}[squarednode/.style={rectangle, draw=none, fill=white, very thin, minimum size=2mm, text opacity=1,fill opacity=0}]
\node[anchor=south west,inner sep=0] (image) at (0,0)
{\includegraphics[width=\linewidth]{figures/matterport_results_short.jpg}};
\begin{scope}[x=(image.south east),y=(image.north west)]
\node[squarednode] at (0.05, 19/20) (a) {\bf 1};
\node[squarednode] at (0.05, 15/20) (b) {\bf 2};
\node[squarednode] at (0.05, 11/20) (c) {\bf 3};
\node[squarednode] at (0.05, 7/20) (d) {\bf 4};
\node[squarednode] at (0.05, 3/20) (e) {\bf 5};
\node[squarednode] at (0.16, -0.026) (g) {Input 5-tuples};
\node[squarednode] at (0.41, -0.026) (g) {SuperGlue \cite{Sarlin2020SuperGlueLF}};
\node[squarednode] at (0.58, -0.026) (g) {Ours w/o};
\node[squarednode] at (0.58, -0.057) (g) {multi-view};
\node[squarednode] at (0.745, -0.026) (g) {Ours w/o};
\node[squarednode] at (0.745, -0.057) (g) {end-to-end};
\node[squarednode] at (0.915, -0.026) (g) {Ours};
\end{scope}
\end{tikzpicture}
\vspace{-0.6cm}
\caption{Reconstructions (right) from estimated camera poses on Matterport3D 5-tuples (left). Our complete method improves camera alignment over the ablated versions and SuperGlue, showing the importance of multi-view matching and end-to-end training.}
\label{fig:matterport_results}
\vspace{-0.4cm}
\end{figure*}
Prior work, in particular SuperGlue~\cite{Sarlin2020SuperGlueLF}, has extensively demonstrated the superiority of the GNN approach over conventional matching. Hence, we focus on comparisons to recent feature matching networks: SuperGlue~\cite{Sarlin2020SuperGlueLF}, LoFTR~\cite{Sun2021LoFTRDL} and COTR~\cite{Jiang2021COTRCT}. We additionally compare to a non-learning-based matcher, i.e., mutual nearest neighbor search on the SuperPoint \cite{DeTone2018SuperPointSI} descriptors. This serves to confirm the effectiveness of SuperGlue and our method, which both use SuperPoint descriptors.
For each method, the matches and confidences are used to optimize for the camera poses according to \cref{ssec:pose_optimization}. As the baselines are designed for matching image pairs, we run them repeatedly on all 10 possible pairs of the 5-tuples and use all resulting matches in the pose optimization.
The pose accuracy is evaluated based on the area under the curve (AUC) in \% at the thresholds $[5\degree, 10\degree, 20\degree]$ for rotation error and $[5\mathrm{cm}, 10\mathrm{cm}, 20\mathrm{cm}]$ for translation error on ScanNet and Matterport3D. As MegaDepth reconstructions are determined only up to an unknown scale factor, the translation error is measured by the angle between translation vectors using thresholds $[5\degree, 10\degree, 20\degree]$ for the AUC. For qualitative comparison we use the computed poses to fuse the 5 depth maps into a truncated signed distance field (TSDF), which is then converted into a mesh using marching cubes \cite{Lorensen1987MarchingCA}.
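For concreteness, the AUC at a threshold can be obtained by integrating the recall-vs-error curve and normalizing by the threshold. The following Python sketch illustrates this standard computation (function and variable names are ours, not part of any released code):
\begin{verbatim}
import numpy as np

def pose_auc(errors, thresholds):
    # Area under the recall-vs-error curve, normalized per threshold.
    errors = np.sort(np.asarray(errors, dtype=float))
    recall = np.arange(1, len(errors) + 1) / len(errors)
    errors = np.concatenate(([0.0], errors))  # start the curve at the origin
    recall = np.concatenate(([0.0], recall))
    aucs = []
    for t in thresholds:
        mask = errors <= t
        e = np.concatenate((errors[mask], [t]))
        r = np.concatenate((recall[mask], [recall[mask][-1]]))
        aucs.append(np.trapz(r, x=e) / t)
    return aucs
\end{verbatim}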
Quantitative results on ScanNet are shown in \cref{tab:pose_scannet}, demonstrating that our method achieves higher accuracy than baselines.
The misalignments in the reconstructions (\cref{fig:scannet_results}) reveal that the baselines struggle in the presence of repetitive patterns such as the washing machines (sample 1), the pictures on the wall (sample 5) or the patterned couches (sample 6). With multi-view reasoning during matching and learned outlier rejection through end-to-end training, our approach is more robust in these situations.
\cref{tab:pose_matterport} evaluates pose estimation on Matterport3D. The pose accuracy on Matterport3D is overall lower than on ScanNet, owing to the smaller overlap between images, an effect possibly amplified by the smaller training dataset.
In this scenario, our method outperforms SuperGlue by a larger margin than on ScanNet, which shows that our approach copes better with the more challenging setting of Matterport3D.
We show additional analysis in the ablation study.
Quantitative results on MegaDepth demonstrate the gain from multi-view matching and end-to-end training in the outdoor setting, leading to higher accuracy than baselines (\cref{tab:pose_megadepth}). Qualitative results are provided in the supplement.
\paragraphNoSpace{Implementation Details.}
COTR does not predict confidences, hence we use equal weights for all of its matches. For SuperGlue and LoFTR, the predicted confidences are used in the pose optimization, which we empirically found to perform better than thresholding. Further implementation details are available in the supplement.
\subsection{Matching}
\label{ssec:matching}
To avoid manually setting confidence thresholds, we evaluate matching accuracy by computing the weighted mean of the epipolar error $e$ over image pairs:
\begin{equation}
e = \frac{\sum_{m=1}^{M} w_m e_m}{\sum_{m=1}^{M} w_m},
\label{eq:epipolar_error}
\end{equation}
where $M$ is the number of matches between an image pair, $e_m$ the symmetric epipolar error of a match and $w_m$ its confidence.
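As an illustration, the weighted mean of \cref{eq:epipolar_error} can be computed as in the following Python sketch, which assumes matched keypoints in normalized camera coordinates and a known essential matrix, and uses one common definition of the symmetric epipolar distance (the exact variant and units of our protocol may differ):
\begin{verbatim}
import numpy as np

def weighted_epipolar_error(kp_a, kp_b, w, E):
    # kp_a, kp_b: (M, 2) matched keypoints in normalized camera
    # coordinates; w: (M,) confidences; E: (3, 3) essential matrix.
    x_a = np.hstack([kp_a, np.ones((len(kp_a), 1))])
    x_b = np.hstack([kp_b, np.ones((len(kp_b), 1))])
    l_b = x_a @ E.T            # epipolar lines in image b
    l_a = x_b @ E              # epipolar lines in image a
    num = np.sum(x_b * l_b, axis=1) ** 2
    d2 = num * (1.0 / (l_b[:, 0]**2 + l_b[:, 1]**2)
                + 1.0 / (l_a[:, 0]**2 + l_a[:, 1]**2))
    e = np.sqrt(d2)            # symmetric epipolar distance per match
    return np.sum(w * e) / np.sum(w)
\end{verbatim}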
The epipolar error and the average number of detected matches on ScanNet are listed in \cref{tab:match_scannet}.
As SuperGlue explicitly proposes a confidence threshold of 0.2 to determine valid matches, we also report this variant of the baseline.
While our method achieves the lowest epipolar error, LoFTR produces a much higher number of matches.
This shows that the number of matches alone is not a reliable indicator of pose accuracy; accurate matches and well-calibrated confidences matter more.
\setlength{\tabcolsep}{4pt}
\begin{table}[tb]
\centering
\caption{Baseline comparison on wide-baseline matching accuracy on ScanNet.}
\label{tab:match_scannet}
\begin{tabular}{lcc}
\toprule
& Number of matches & Epipolar error [m] $\downarrow$ \\
\midrule
Mutual nearest neighbor & 192 & 0.373 \\
SuperGlue \cite{Sarlin2020SuperGlueLF} & 207 & 0.158 \\
SuperGlue \cite{Sarlin2020SuperGlueLF} w/ threshold 0.2 & 189 & 0.032 \\
LoFTR \cite{Sun2021LoFTRDL} & \textbf{1304} & 0.034 \\
COTR \cite{Jiang2021COTRCT} & 96 & 0.069 \\
Ours & 186 & \textbf{0.020} \\
\bottomrule
\end{tabular}
\vspace{-0.4cm}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\subsection{Ablation Study}
\label{ssec:ablation}
The quantitative results on ScanNet, Matterport3D and MegaDepth (\cref{tab:pose_scannet,tab:pose_matterport,tab:pose_megadepth}), show that the full version of our method achieves the best performance. This is consistent with the qualitative results in \cref{fig:matterport_results}.
\paragraphNoSpace{Without Multi-View.}
Omitting multi-view matching in the GNN causes an average performance drop of 3.9\% on ScanNet and 13.6\% on Matterport3D.
This suggests that the importance of multi-view matching increases as the overlap between images decreases. Intuitively, the multi-view receptive field supports information flow from other views to bridge gaps where the overlap is small. \cref{fig:matterport_results} shows the notably improved camera alignment through multi-view input.
\paragraphNoSpace{Without End-to-End.}
Omitting end-to-end training drops the average performance by 6.8\% on ScanNet and 10.5\% on Matterport3D.
This shows that end-to-end training enables the learning of reliable outlier down-weighting, which is even more beneficial in the difficult Matterport3D scenarios.
Lack of end-to-end training is visible in the reconstructions (\cref{fig:matterport_results}), e.g., the misaligned pattern on the floor (sample 3) or the failure to reconstruct thin objects (sample 5).
\paragraphNoSpace{Variable Number of Input Views.} In \cref{fig:number_images}, we investigate the impact of the number of images used for matching, both in pairwise (ours w/o multi-view) and joint (our full version) fashion.
The experiment is conducted on sequences of 9 images which are generated on ScanNet as described in \cref{ssec:datasets}.
The results show that pose accuracy improves when matching across a larger span of neighboring images. The curves, however, plateau once a larger window size no longer brings additional relevant images into the matching.
Additionally, the results show the benefit of joint matching in a single graph as opposed to matching all possible image pairs individually.
\begin{figure}[tb]
\centering
\begin{tikzpicture}[squarednode/.style={rectangle, draw=white, fill=white, very thin, minimum size=2mm, text opacity=1,fill opacity=0,draw opacity=0}]
\node[anchor=south west,inner sep=0] (image) at (0,0)
{\includegraphics[width=0.85\textwidth,trim={0.2cm 0.3cm -3.cm 0.2cm},clip]{figures/number_images.pdf}};
\begin{scope}[x=(image.south east),y=(image.north west)]
\scriptsize
\node[squarednode] at (0.21, -0.06) (a) {Number of images};
\node[squarednode] at (0.66, -0.06) (a) {Number of images};
\node[squarednode] at (0.97, 0.63) (a) {Pairwise matching};
\node[squarednode] at (0.95, 0.47) (a) {Joint matching};
\node[squarednode] at (0.215, 1.07) (b) {Rotation error AUC @10\degree [\%] $\uparrow$};
\node[squarednode] at (0.665, 1.07) (b) {Translation error AUC @10cm [\%] $\uparrow$};
\end{scope}
\end{tikzpicture}
\vspace{-.2cm}
\caption{Pose error AUC on sequences of 9 images on ScanNet using a variable number of images in pairwise or joint matching.}
\label{fig:number_images}
\vspace{-.4cm}
\end{figure}
\subsection{Limitations}
Our method builds on SuperGlue~\cite{Sarlin2020SuperGlueLF} and improves pose estimation accuracy and robustness to small image overlap.
Here, one of our contributions is the end-to-end differentiability of the pose optimization that guides the matching network.
While this significantly improves matching quality, we currently only backpropagate gradients to the matching network and do not update the keypoint descriptors; i.e., we use the existing SuperPoint~\cite{DeTone2018SuperPointSI} descriptors.
However, we believe that jointly training the feature descriptors is a promising avenue to further improve performance.
\section{Conclusion}
We have presented a method that couples multi-view feature matching and pose optimization into an end-to-end trainable pipeline.
Using a graph neural network, we match features across multiple views in a joint fashion, which increases global awareness of the matching process.
Combined with differentiable pose optimization, gradients inform the matching network, which learns to produce valuable, outlier-free matches for pose estimation.
The experiments show that our method improves both pose and matching accuracy compared to prior work. In particular, we observe increased robustness in challenging settings, such as in the presence of repetitive structure or small image overlap.
Overall, we believe that our end-to-end approach is an important stepping stone towards an end-to-end trained SLAM method.
\section*{Acknowledgements}
This project is funded by a TUM-IAS Rudolf Mößbauer Fellowship, the ERC Starting Grant Scan2CAD (804724), and the German Research Foundation (DFG) Grant Making Machine Learning on Static and Dynamic 3D Data Practical.
We thank Angela Dai for the video voice-over.
\section{Qualitative Results on MegaDepth}
\cref{fig:megadepth_results} shows qualitative results from the ablation study and baseline comparison on the MegaDepth dataset. The full version of our method accurately estimates camera poses even across large viewpoint changes (e.g., sample 4) and strong appearance variations (e.g., samples 1 and 2).
\begin{figure*}[b]
\centering
\begin{tikzpicture}[squarednode/.style={rectangle, draw=none, fill=white, very thin, minimum size=2mm, text opacity=1,fill opacity=0.}]
\node[anchor=south west,inner sep=0] (image) at (0,0)
{\includegraphics[width=\linewidth]{figures/megadepth_results.jpg}};
\begin{scope}[x=(image.south east),y=(image.north west)]
\node[squarednode] at (0.037, 0.935) (a) {\bf 1};
\node[squarednode] at (0.037, 0.685) (b) {\bf 2};
\node[squarednode] at (0.037, 0.435) (c) {\bf 3};
\node[squarednode] at (0.037, 0.185) (d) {\bf 4};
\node[squarednode] at (0.115, -0.035) (g) {\footnotesize Input 5-tuples};
\node[squarednode] at (0.31, -0.035) (g) {\footnotesize SuperGlue \cite{Sarlin2020SuperGlueLF}};
\node[squarednode] at (0.47, -0.035) (g) {\footnotesize COTR \cite{Jiang2021COTRCT}};
\node[squarednode] at (0.615, -0.035) (g) {\footnotesize Ours w/o};
\node[squarednode] at (0.615, -0.075) (g) {\footnotesize multi-view};
\node[squarednode] at (0.77, -0.035) (g) {\footnotesize Ours w/o};
\node[squarednode] at (0.77, -0.075) (g) {\footnotesize end-to-end};
\node[squarednode] at (0.925, -0.035) (g) {\footnotesize Ours};
\end{scope}
\end{tikzpicture}
\vspace{-0.6cm}
\caption{Reconstructions (right) from estimated camera poses on MegaDepth 5-tuples (left). With multi-view matching and end-to-end training, our method successfully estimates camera poses in challenging outdoor scenarios, while baselines show misalignment.}
\label{fig:megadepth_results}
\vspace{-0.4cm}
\end{figure*}
\section{Runtime}
Our method takes on average 207ms to match a 5-tuple, which corresponds to matching 10 image pairs. SuperGlue requires on average 40ms to match one pair, which shows that inference time correlates well with the number of GNN messages (\cref{tab:gnn_messages}). LoFTR takes on average 89ms per pair, and COTR is much slower at 35s.
Although we did not optimize our implementation for speed, these measurements show that it is suitable for real-time applications.
We believe that the coupling of multi-view matching with pose optimization fits particularly well for keyframe alignment in reconstruction or SLAM. An alternative pose optimization module using relative poses, e.g., from inertial sensors, instead of depth measurements can also be realized.
All runtime is measured on a Nvidia GeForce RTX 2080.
\section{Architecture Details}
Our multi-view matching network builds on the SuperGlue \cite{Sarlin2020SuperGlueLF} architecture and uses the following parameters:
\paragraphNoSpace{Keypoint Encoder.}
The input visual descriptors from SuperPoint \cite{DeTone2018SuperPointSI} have size $D = 256$. The graph nodes equally have an embedding size of $D$. Hence, the keypoint encoder $F_{\mathrm{encode}}$ maps a keypoint's image coordinates and confidence score to $D$ dimensions. It is an MLP composed of five layers with 32, 64, 128, 256 and $D$ channels. Each layer, except the last, uses batch normalization and ReLU activation.
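A minimal PyTorch sketch of such an encoder, treating the keypoint set as a length-$N$ sequence via $1\times1$ convolutions (a common implementation choice, assumed here rather than prescribed):
\begin{verbatim}
import torch.nn as nn

def keypoint_encoder(D=256, channels=(32, 64, 128, 256)):
    # Input: (B, 3, N) tensors of (x, y, score) per keypoint.
    layers, in_ch = [], 3
    for out_ch in list(channels) + [D]:
        layers += [nn.Conv1d(in_ch, out_ch, kernel_size=1),
                   nn.BatchNorm1d(out_ch), nn.ReLU()]
        in_ch = out_ch
    # Drop batch norm and activation after the final layer.
    return nn.Sequential(*layers[:-2])
\end{verbatim}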
\paragraphNoSpace{Graph Attention Network.}
The GNN has $L=9$ layers. The layers alternate between message exchange along self-edges and message exchange along cross-edges, such that the first and last layers perform updates along self-edges.
The attentional aggregation of incoming messages from other nodes uses multi-head attention with four heads. The resulting messages have size $D$, like the node embeddings.
The MLP $F_{\mathrm{update}}$, which computes the update to the receiving node, operates on the concatenation of the current node embedding with the incoming message. It has two layers with $2D$ and $D$ channels. Batch normalization and ReLU activation are employed between the two layers.
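The following PyTorch sketch illustrates one such propagation layer; it uses the built-in multi-head attention as a stand-in for the attentional aggregation and should be read as an approximation of the described architecture, not a verbatim reimplementation:
\begin{verbatim}
import torch
import torch.nn as nn

class AttentionalPropagation(nn.Module):
    def __init__(self, D=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(D, heads, batch_first=True)
        self.lin1 = nn.Linear(2 * D, 2 * D)
        self.norm = nn.BatchNorm1d(2 * D)
        self.lin2 = nn.Linear(2 * D, D)

    def forward(self, x, source):
        # x: (B, N, D) receiving nodes; source: (B, M, D) sending
        # nodes reachable via self- or cross-edges.
        message, _ = self.attn(x, source, source)
        y = self.lin1(torch.cat([x, message], dim=-1))
        y = self.norm(y.transpose(1, 2)).transpose(1, 2)
        return x + self.lin2(torch.relu(y))  # residual node update
\end{verbatim}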
\paragraphNoSpace{Partial Assignment.}
We use 100 iterations of the Sinkhorn algorithm to determine the partial assignment matrices.
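For illustration, a log-domain Sinkhorn normalization can be sketched as follows; the dustbin row and column used for the partial assignment are omitted for brevity:
\begin{verbatim}
import torch

def log_sinkhorn(scores, iters=100):
    # scores: (N, M) matching scores; returns the log assignment.
    log_mu = torch.zeros(scores.shape[0])  # uniform row marginals
    log_nu = torch.zeros(scores.shape[1])  # uniform column marginals
    u, v = torch.zeros_like(log_mu), torch.zeros_like(log_nu)
    for _ in range(iters):
        u = log_mu - torch.logsumexp(scores + v[None, :], dim=1)
        v = log_nu - torch.logsumexp(scores + u[:, None], dim=0)
    return scores + u[:, None] + v[None, :]
\end{verbatim}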
\paragraphNoSpace{Pose Optimization.}
The camera poses are optimized by conducting \mbox{$T=10$} Gauss-Newton updates.
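Schematically, each update solves the normal equations of the linearized residuals; the sketch below is generic and omits the pose parameterization (e.g., updates on the Lie algebra) and the confidence weighting:
\begin{verbatim}
import numpy as np

def gauss_newton(residual_fn, jacobian_fn, x0, T=10):
    # T undamped Gauss-Newton steps on a stacked residual vector.
    x = x0.copy()
    for _ in range(T):
        r = residual_fn(x)   # (R,) weighted residuals
        J = jacobian_fn(x)   # (R, P) Jacobian of the residuals
        dx = np.linalg.solve(J.T @ J, -J.T @ r)  # normal equations
        x = x + dx
    return x
\end{verbatim}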
\section{Training Details}
\subsubsection{Two-Stage Training.}
Our end-to-end pipeline is trained in two stages. The first stage uses the loss term on the matching result $\mathcal{L}_{\mathrm{match}}$. The second stage additionally employs the pose loss $\mathcal{L}_{\mathrm{pose}}$. Stage 1 is trained until the validation match loss converges, stage 2 until the validation pose loss converges. On ScanNet~\cite{Dai2017ScanNetR3}/Matterport3D~\cite{Chang2017Matterport3DLF}/MegaDepth~\cite{Li2018MegaDepthLS} the training takes 25/228/71 epochs for stage 1 and 4/7/11 epochs for stage 2. We found that training on MegaDepth benefits from initializing the network with the parameters obtained after the first ScanNet training stage. During stage 2 we linearly increase the weight of $\mathcal{L}_{\mathrm{pose}}$ from 0 to 2000 on the indoor datasets, and from 0 to 685 on MegaDepth, while linearly decreasing the weight of $\mathcal{L}_{\mathrm{match}}$ from 1 to 0, over the course of 40000 iterations. The balancing factor of the rotation term in $\mathcal{L}_{\mathrm{pose}}$ is set to $\lambda_{\mathrm{rot}}=2$ on the indoor datasets and $\lambda_{\mathrm{rot}}=6.75$ on MegaDepth.
We use the Adam optimizer with learning rate 0.0001.
The learning rate is exponentially decayed with a factor of 0.999992 starting after 100k iterations.
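A minimal sketch of the stage-2 weight schedule with the indoor values (the helper name and the exact way $\lambda_{\mathrm{rot}}$ enters the total loss are illustrative assumptions):
\begin{verbatim}
def stage2_loss_weights(step, ramp_steps=40000, w_pose_max=2000.0):
    # Linearly ramp the pose loss up while ramping the match loss down.
    a = min(step / ramp_steps, 1.0)
    return 1.0 - a, a * w_pose_max  # (w_match, w_pose)

# total = w_match * L_match + w_pose * (L_trans + lambda_rot * L_rot)
\end{verbatim}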
\subsubsection{Ground Truth Generation.}
The ground truth matches $\mathcal{T}_{ab}$ and sets of unmatched keypoints $\mathcal{U}_{ab}$, $\mathcal{V}_{ab}$ of an image pair are computed by projecting the detected keypoints from each image to the other, resulting in a reprojection error matrix. Keypoint pairs where the reprojection error is both minimal and smaller than 5 pixels in both directions are considered matches. Unmatched keypoints must have a minimum reprojection error greater than 15 pixels on the indoor datasets and greater than 10 pixels on MegaDepth.
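The following sketch illustrates this procedure, assuming the two reprojection error matrices (one per projection direction) have been precomputed from depth and pose:
\begin{verbatim}
import numpy as np

def ground_truth_matches(err_ab, err_ba, t_match=5.0, t_unmatch=15.0):
    # err_ab[i, j]: reprojection error of keypoint i (image a) against
    # keypoint j (image b); err_ba is the reverse direction.
    matches = []
    for i in range(err_ab.shape[0]):
        j = int(np.argmin(err_ab[i]))
        if (int(np.argmin(err_ba[:, j])) == i
                and err_ab[i, j] < t_match and err_ba[i, j] < t_match):
            matches.append((i, j))
    unmatched_a = [i for i in range(err_ab.shape[0])
                   if err_ab[i].min() > t_unmatch]
    unmatched_b = [j for j in range(err_ab.shape[1])
                   if err_ba[:, j].min() > t_unmatch]
    return matches, unmatched_a, unmatched_b
\end{verbatim}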
\subsubsection{Input Data.}
We train on 5-tuples with a batch size of 24 on indoor data and with a batch size of 4 on outdoor data.
The image size is 480$\times$640 on ScanNet, 512$\times$640 on Matterport3D and 640$\times$640 on MegaDepth.
The SuperPoint network is configured to detect keypoints with a non-maximum suppression radius of 4/3 on indoor/outdoor data.
On the indoor datasets we use 400 keypoints per image during training: first, keypoints above a confidence threshold of 0.001 are sampled; second, if there are fewer than 400, the remainder is filled with random image points assigned confidence 0, as a form of data augmentation. On MegaDepth the same procedure is applied to sample 1024 keypoints using a confidence threshold of 0.005. At test time on indoor/outdoor data, we use up to 1024/2048 keypoints above the mentioned confidence thresholds.
\subsubsection{Dataset Split.}
On ScanNet and Matterport3D we use the official dataset split. On Mega\-Depth we use scenes 0016, 0047, 0058, 0064, 0121, 0129, 0133, 0168, 0175, 0178, 0181, 0185, 0186, 0204, 0205, 0212, 0217, 0223, 0229 for validation, 0271, 0285, 0286, 0294, 0303, 0349, 0387, 0412, 0443, 0482, 0768, 1001, 3346, 5014, 5015, 5016, 5018 for testing and the remaining scenes for training.
In this way, we obtain 240k/20k/15k 5-tuples for training, 62k/2200/1200 for validation, and 1500/1500/1000 for testing on ScanNet/Matterport3D/MegaDepth.
\section{Baseline Comparison}
In the baseline comparison we use the network weights trained by the authors of SuperGlue~\cite{Sarlin2020SuperGlueLF}, LoFTR~\cite{Sun2021LoFTRDL} and COTR~\cite{Jiang2021COTRCT}.
There are SuperGlue and LoFTR models trained on ScanNet and on MegaDepth, as well as a COTR model trained on MegaDepth. We additionally train a SuperGlue model on Matterport3D~\cite{Chang2017Matterport3DLF}.
\titlespacing\section{0pt}{12pt plus 4pt minus 2pt}{8pt plus 2pt minus 2pt}
\usepackage{amsthm,amsmath}
\RequirePackage{hyperref}
\usepackage[utf8]{inputenc}
\usepackage{nameref}
\def{}
\def{}
\usepackage{graphicx}
\graphicspath{{./figures/}}
\usepackage{subcaption}
\captionsetup{compatibility=false}
\usepackage{amsfonts}
\usepackage{dsfont}
\usepackage{xcolor}
\usepackage{lipsum}
\usepackage[nameinlink]{cleveref}
\crefname{equation}{Eq.}{Eqs.}
\Crefname{equation}{Equation}{Equations}
\crefname{paragraph}{Paragraph}{Paragraphs}
\usepackage[multidot]{grffile}
\newcommand{\vv} {{\bm v}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\ee}{\end{equation}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\mathrm{e}}{\mathrm{e}}
\renewcommand{\L}{\mathcal{L}}
\renewcommand{\vec}[1]{#1}
\newcommand{\mbeq}{\stackrel{!}{=}}
\newcommand{\mathbf{s}}{\mathbf{s}}
\newcommand{\grad}{{\nabla}}
\let\vaccent=\v
\renewcommand{\v}[1]{\mathbf{#1}}
\renewcommand{\bf}[1]{\textbf{#1}}
\newcommand{\rightarrow}{\rightarrow}
\newcommand{\f}[2]{\frac{#1}{#2}}
\newcommand{\avg}[1]{\left\langle #1 \right\rangle}
\let\setunion=\ccup
\newcommand{\ccup}[1]{\left\{#1\right\}}
\let\setunion=\bup
\let\setunion=\rup
\newcommand{\bup}[1]{\left(#1\right)}
\newcommand{\rup}[1]{\left[#1\right]}
\let\oldt=\t
\renewcommand{\t}[1]{\tilde{#1}}
\newcommand{\omega}{\omega}
\newcommand{\bb}[1]{\mathbf{#1}}
\newcommand{\sigma}{\sigma}
\newcommand{\times}{\times}
\newcommand{\alpha}{\alpha}
\renewcommand{\b}[1]{\bar{#1}}
\newcommand{\h}[1]{\hat{#1}}
\newcommand{\mathbb{I}}{\mathbb{I}}
\newcommand{\mathbb{E}}{\mathbb{E}}
\newcommand{\mathbf{1}}{\mathbf{1}}
\usepackage[normalem]{ulem}
\newcommand{\CDB}[1]{\textcolor{blue}{#1}}
\newcommand{\CDBcom}[1]{[\textcolor{blue}{CDB: #1}]}
\newcommand{\DBT}[1]{\textcolor{red}{#1}}
\newcommand{\DBTcom}[1]{[\textcolor{red}{DT: #1}]}
\newcommand{\nix}[1]{\sout{#1}}
\newcommand{\mbox{{\small \textit{DMK-Solver}}}}{\mbox{{\small \textit{DMK-Solver}}}}
\newcommand{\mbox{{\small DMK}}}{\mbox{{\small DMK}}}
\newcommand{\mbox{{\small \textit{discrete-DMK-Solver}}}}{\mbox{{\small \textit{discrete-DMK-Solver}}}}
\begin{document}
\title{Convergence properties of optimal transport-based temporal hypernetworks}
\author{Diego Baptista}
\affiliation{ Max Planck Institute for Intelligent Systems, Cyber Valley, Tuebingen, 72076, Germany}
\author{Caterina De Bacco}
\affiliation{ Max Planck Institute for Intelligent Systems, Cyber Valley, Tuebingen, 72076, Germany}
\begin{abstract}
We present a method to extract temporal hypergraphs from sequences of 2-dimensional functions obtained as solutions to Optimal Transport problems. We investigate optimality principles exhibited by these solutions from the point of view of hypergraph structures. Discrete properties follow patterns that differ from those characterizing their continuous counterparts. Analyzing these patterns can bring new insights into the studied transportation principles. We also compare these higher-order structures to their network counterparts in terms of standard graph properties. We give evidence that some transportation schemes might benefit from hypernetwork representations. We demonstrate our method on real data by analyzing the properties of hypernetworks extracted from images of real systems.
\end{abstract}
\maketitle
\section*{Introduction}
Optimal Transport (OT) is a principled theory to compare probability distributions \cite{kantorovich1942transfer, villaniot, santambrogio2015optimal, peyre2019computational}. Although this task is usually framed as an optimization problem, recent studies have recast it in the framework of dynamical partial differential equations \cite{evans1999differential, facca2016towards, facca2019numerics, facca2021branching,tero2007mathematical, tero2010rules}. In this context, solutions to a transportation problem are often found as the convergent state of evolving families of functions.
\\
In some scenarios, the steady states of these evolving families are supported on network-shaped structures \cite{xia2003optimal, xia2014landscape, Xia2015}. Recently, this fact has attracted the attention of network scientists and graph theorists, leading to the development of methods that convert the solutions of OT problems into actual graph structures \cite{baptista2020network,leite2022revealing}. This has broadened the available set of tools to understand and solve these transportation problems. Recent studies have shown that common patterns can be unveiled both in the original mathematical setting and in the converted graph structures \cite{baptista2021temporal}.
Representations of these functions as sets of dyadic relations have proven meaningful in various applications \cite{baptista2020principlednet,facca2021branching}. Nonetheless, traditional dyadic representations may be limited in representing flows of quantities like mass or information as observed in real systems. Many examples of systems where interactions happen between 3 or more individuals are observed in applications such as social contagion \cite{PhysRevResearch.2.023032,chowdhary2021simplicial}, random walks \cite{PhysRevE.101.022308,schaub2020random} or non-linear consensus \cite{PhysRevE.101.032310}. Understanding the relation between structure and dynamics on higher-order structures is an active field of research \cite{taylor2015topological,patania2017topological}. For instance, key elements controlling dynamics are linked to the heterogeneity of hyperedge sizes present in their higher-order representations \cite{patania2017topological}. These systems are hence best described by hypergraphs, generalizations of networks that encode structured relations among any number of individuals. With this in mind, a natural question is: how do OT-based structures behave in terms of higher-order representations?
\\
To help bridge this knowledge gap about higher-order properties of structures derived from OT solutions, we elaborate on the results observed in \cite{baptista2021temporal}. Specifically, we propose a method to convert the families of 2-dimensional functions into temporal hypernetworks. We enrich the existing network structures associated with these functions by encoding the observed interactions into hyperedges. We study classic hypergraph properties and compare them to the predefined cost functional linked to the transportation problems. Finally, we extend this method and the analysis to study systems coming from real data. We build hypergraph representations of \textit{P. polycephalum} \cite{westendorf2016quantitative} and analyze their topological features.
\section*{Methods}\label{section:methods}
\subsection*{The Dynamical Monge-Kantorovich Method}
\paragraph*{The Dynamical Monge-Kantorovich set of equations.} We start by reviewing the basic elements of the mechanism chosen to solve the OT problems. In contrast to standard optimization methods used to solve this problem \cite{cuturi2013sinkhorn}, we use an approach that turns it into a dynamical set of partial differential equations. In this way, initial conditions are updated until a convergent state is reached. The dynamical system of equations proposed by Facca et al. \cite{facca2016towards,facca2019numerics,facca2021branching} is presented as follows. We assume that the OT problem is set on a continuous 2-dimensional space $\Omega \subset \mathbb{R}^{2}$, with no underlying network structure assumed at the outset. This gives us the freedom to explore the whole space when designing an optimal network topology as the solution of the transportation problem. The main quantities that need to be specified as input are the \textit{source} and \textit{target} distributions. We refer to them as sources and sinks, where a certain mass (e.g. passengers in a transportation network, water in a water distribution network) is injected and then extracted. We denote these with a ``forcing'' function $f(x)=f^+(x)-f^-(x)\in \mathbb{R}$, describing the flow-generating sources $f^+(x)$ and sinks $f^-(x)$. Mass balance is enforced by imposing $\int_\Omega f(x)dx = 0$. We assume that the flow is governed by a transient Fick-Poiseuille flux $q=- \mu \grad u$, where $\mu,u$ and $q$ are called \textit{conductivity} (or \textit{transport density}), \textit{transport potential} and \textit{flux}, respectively. Intuitively, mass is injected through the source, moved based on the conductivity across space, and then extracted through the sink. The way mass moves determines a flux that depends on the pressure exerted on the different points in space; this pressure is described by a potential function.
The set of \textit{Dynamical Monge-Kantorovich} (DMK) equations is given by:
\begin{align}
-\nabla \cdot (\mu(t,x)\nabla u(t,x)) &= f^+(x)-f^-(x) \,, \label{eqn:ddmk1}\\
\frac{\partial \mu(t,x)}{\partial t} &= \rup{\mu(t,x)\nabla u(t,x)}^{\beta} - \mu(t,x) \,, \label{eqn:ddmk2}\\
\mu(0,x) &= \mu_0(x) > 0 \label{eqn:ddmk3} \,,
\end{align}
where $\nabla=\nabla_{x}$. \Cref{eqn:ddmk1} states the spatial balance of the Fick-Poiseuille flux and is complemented by no-flow Neumann boundary conditions. \Cref{eqn:ddmk2} enforces the dynamics of this system, and it is controlled by the so-called \textit{traffic rate} $\beta$. It determines the transportation scheme, and it shapes the topology of the solution: for $\beta<1$ we have congested transportation where traffic is minimized, whereas $\beta>1$ induces branched transportation where traffic is consolidated into a smaller amount of space. The case $\beta=1$ recovers shortest path-like structures. Finally, \Cref{eqn:ddmk3} constitutes the initialization of the system and can be thought of as an initial guess of the solution.
Solutions $(\mu^*, u^*)$ of \crefrange{eqn:ddmk1}{eqn:ddmk3} minimize the transportation cost function $\mathcal{L}(\mu,u)$ \cite{facca2016towards,facca2019numerics,facca2021branching}, defined as:
\begin{align}\label{eqn:L}
& \mathcal{L}(\mu,u) := \mathcal{E}(\mu,u)+ \mathcal{M}(\mu,u) \\
& \mathcal{E}(\mu,u) := \dfrac{1}{2}\int_{\Omega} \mu |\grad u|^2 dx, \ \ \mathcal{M}(\mu,u) := \dfrac{1}{2}\int_{\Omega} \dfrac{\mu^{\frac{(2-\beta)}{\beta}}}{2-\beta} dx \quad.
\end{align}
$\mathcal{L}$ can be thought of as a combination of $\mathcal{E}$, the total energy dissipated during transport (or network operating cost), and $\mathcal{M}$, the cost to build the network infrastructure (or infrastructural cost). It is known that this functional's convexity changes as a function of $\beta$. Non-convex cases arise in the branched schemes, inducing fractal-like structures \cite{facca2021branching, santambrogio2007optimal}. This is the case we consider in this work, and it is the only one where meaningful network structures, and thus hypergraphs, can be extracted \cite{baptista2020network}.
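For intuition, a single discrete update of the conductivity in \cref{eqn:ddmk2} can be sketched as an explicit Euler step on a grid; solving \cref{eqn:ddmk1} for $u$ at fixed $\mu$ is assumed to be delegated to a separate elliptic solver:
\begin{verbatim}
import numpy as np

def dmk_step(mu, grad_u, beta, dt=0.1):
    # mu: (K,) conductivity on grid cells; grad_u: (K, 2) gradient of
    # the potential u obtained from the elliptic solve at fixed mu.
    flux_norm = mu * np.linalg.norm(grad_u, axis=-1)  # |mu grad u|
    return mu + dt * (flux_norm**beta - mu)
\end{verbatim}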
\subsection*{Hypergraph sequences}
\paragraph*{Hypergraph construction.} We define a hypergraph (also, hypernetwork) as follows \cite{battiston2020networks}: a \textit{hypergraph} is a tuple $H = (V, E),$ where $V = \{v_1, ... ,v_n\}$ is the set of \textit{vertices} and $E = \{ e_1, e_2, ... , e_m\}$ is the set of \textit{hyperedges}, in which $e_i\subset V, \forall i = 1,...,m,$ and $|e_i|>1$. If $|e_i|=2,\forall i$, then $H$ is simply a graph. We call \textit{edges} those hyperedges $e_i$ with $|e_i|=2$ and \textit{triangles} those with $|e_i|=3$. We refer to the \textit{1-skeleton} of $H$ as the \textit{clique expansion} of $H$: the graph $G=(V,E_{G})$ whose vertices are those of $H$ and whose edges are all pairs of nodes contained in some hyperedge of $E$.
Let $\mu$ be the conductivity found as a solution of \crefrange{eqn:ddmk1}{eqn:ddmk3}. As previously mentioned, $\mu$ at convergence regulates where the mass should travel for optimal transportation. Similar to \cite{baptista2021temporal}, we turn this 2-dimensional function into a different data structure, namely, a hypergraph. This is done as follows: consider $G(\mu) = (V_G,E_G)$, the network extracted using the method proposed in \cite{baptista2020network}. We define $H(\mu)$ as the tuple $(V_H,E_H)$ where $V_H = V_G$ and $E_H = E_G \cup T_G$, with $T_G = \{(u,v,w): (u,v),(v,w),(w,u) \in E_G\}$. In words, $H(\mu)$ is the graph $G(\mu)$ together with all of its triangles.
This choice is motivated by the fact that the graph-extraction method proposed in \cite{baptista2020network} uses triangles to discretize the continuous space $\Omega$, which can have a relevant impact on the extracted graph or hypergraph structures. Hence, triangles are the natural sub-structure for hypergraph constructions. The method proposed in this work is valid for higher-order structures beyond triangles. Exploring how these additional structures impact the properties of the resulting hypergraphs is left for future work.
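A minimal sketch of this construction, assuming the extracted graph is available as a networkx object (the function name is ours):
\begin{verbatim}
from itertools import combinations
import networkx as nx

def hypergraph_from_graph(G: nx.Graph):
    # H(mu): the vertices of G, its edges, plus every triangle of G.
    hyperedges = [frozenset(e) for e in G.edges()]
    for u, v, w in combinations(G.nodes(), 3):
        if G.has_edge(u, v) and G.has_edge(v, w) and G.has_edge(w, u):
            hyperedges.append(frozenset((u, v, w)))
    return set(G.nodes()), hyperedges
\end{verbatim}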
\Cref{fig:image1} shows an example of one of the studied hypergraphs. The red shapes represent the different triangles of $H(\mu)$. Notice that, although we consider here the case where $|e|\leq 3$ for each hyperedge $e$---for the sake of simplicity---higher-order structures are also well represented by the union of these elements, as shown in the right panels of the figure.
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.95\textwidth}
[width=\textwidth]{./fig1_family_of_subgraphs.jpg}
\end{subfigure}
\caption{\textbf{Hypernetwork construction.} Higher order structures are built using edges and triangles as hyperedges. The leftmost panel shows one of the studied graphs together with the triangles (in red) used. The subsequent panels highlight different clusters of triangles that can be seen in the main hypergraph.} \label{fig:image1}
\end{figure}
Since this hypergraph construction method is valid for any 2-dimensional transport density, we can extract a hypergraph not only from the convergent $\mu$ but also at any time step before convergence. This then allows us to represent optimal transport sequences as hypergraphs evolving in time, i.e. temporal hypernetworks.
\paragraph*{Hypergraph sequences.} Formally, let $\mu(x,t)$ be a \textit{transport density} (or \textit{conductivity}) function of both time and space obtained as a solution of the DMK model. We denote it as the sequence $\{\mu_t\}_{t=0}^T$, for some index $T$ (usually taken to be that of the convergent state). Each $\mu_{t}$ is the $t$-th update of our initial guess $\mu_0$, computed by following the rules described in \crefrange{eqn:ddmk1}{eqn:ddmk3}. This determines a sequence of hypernetworks $\{ H(\mu_t)\}_{t=0}^T$ extracted from $\{\mu_t\}_{t=0}^T$ with the extraction method proposed in \cite{baptista2020network}. \Cref{fig:image2} shows three hypergraphs built from one of the studied sequences $\{\mu_t\}$ using this method at different time steps. The corresponding OT problem is that defined by the (filled and empty) circles: mass is injected in the bottom left circle and must be extracted at the highlighted destinations. On the top row, different updates (namely, $t=12, 18, 26$) of the solution are shown. They are defined on a discretization of $[0,1]^2.$ Darkest colors represent their support. Hypergraphs extracted from these functions are displayed at the bottom row. As can be seen, only edges (in gray) and triangles (in red) are considered as part of $H(\mu_t)$. Notice that the larger the $t$ is, the less dense the hypergraphs are, which is expected for a uniform initial distribution $\mu_0$ and branched OT ($\beta>1$) \cite{facca2021branching}.
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.9\textwidth}
[width=\textwidth]{./fig2_hypernet_sequence.jpg}
\end{subfigure}
\caption{\textbf{Temporal hypergraphs.} Top row: different timestamps of the sequence $\{\mu_t\}$; triangles are a discretization of $[0,1]^2$. Bottom row: hypergraphs extracted for $\mu_t$ at the time steps displayed on the top row; triangles are highlighted in red. In both rows, filled and empty circles correspond to the support of $f^+$ and $f^-$, i.e. sources and sinks, respectively. This sequence is obtained for $\beta = 1.5$.} \label{fig:image2}
\end{figure}
\subsection*{Graph and hypergraph properties}
We compare hypergraph sequences to their corresponding network counterparts (defined as described in the previous paragraph). We analyze the following main network and hypergraph properties for the different elements in the sequences and for different sequences. Denote with $G = (V_G,E_G)$ and $H = (V_H, E_H)$ one of the studied graphs and hypergraphs belonging to some sequence $\{ G(\mu_t)\}_{t=0}^T$ and $\{ H(\mu_t)\}_{t=0}^T$, respectively. We consider the following network properties:
\\
\begin{enumerate}
\item $|E_G|$, total number of edges;
\item Average degree $d(G)$, the mean number of neighbors per node;
\item Average closeness centrality $c(G)$: let $v\in V_G$, the closeness centrality of $v$ is defined as $
\sum_{u\in V_G} 1/d(u,v),$ where $d(u,v)$ is the shortest path distance between $u$ and $v$.
\end{enumerate}
Hypernetwork properties can be easily adapted from the previous definitions with the help of generalized adjacency matrices and line graphs \cite{aksoy2020hypernetwork}. Let $H$ be a hypergraph with vertex set $V = \{1,...,n\}$ and edge set $E = \{e_1, ... ,e_m\}$. We define the generalized \textit{node} $s$-\textit{adjacency matrix} $A_s$ of $H$ as the binary $n\times n$ matrix such that $A_s[i][j]=1$ if $i$ and $j$ share at least $s$ hyperedges, and $A_s[i][j]=0$ otherwise. We define the $s$-\textit{line graph} $L_s$ as the graph generated by the adjacency matrix $A_s$. Notice that $A_1$ corresponds to the adjacency matrix of $H$'s skeleton (which is $L_1$). \Cref{fig:image3} shows a family of adjacency matrices together with the line graphs generated from them; a minimal computational sketch of this construction is given after the list below. We can then define hypergraph properties in the following way:
\\
\begin{enumerate}
\item $|E_H|$, total number of hyperedges;
\item $|T| = |\{e \in E_H: |e|= 3\}|,$ total number of triangles;
\item $S = \sum_{t\in T} a(t),$ \textit{covered area}, where $a(t)$ is the area of the triangle $t;$
\item Average degree $d_s(H)$, the mean number of incident hyperedges of size greater or equal than $s$ per node;
\item Average closeness centrality $c_s(H)$: let $v\in V_H$, the closeness centrality of $v$ is defined as its closeness centrality in $L_s$.
\end{enumerate}
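The $s$-adjacency matrix and $s$-line graph referenced above can be computed directly by counting shared hyperedges per node pair, as in the following unoptimized sketch:
\begin{verbatim}
from itertools import combinations
import numpy as np
import networkx as nx

def s_line_graph(nodes, hyperedges, s=2):
    # Count shared hyperedges per node pair, threshold at s.
    idx = {v: k for k, v in enumerate(nodes)}
    counts = np.zeros((len(nodes), len(nodes)), dtype=int)
    for e in hyperedges:
        for u, v in combinations(e, 2):
            counts[idx[u], idx[v]] += 1
            counts[idx[v], idx[u]] += 1
    A_s = (counts >= s).astype(int)
    return A_s, nx.from_numpy_array(A_s)
\end{verbatim}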
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{\textwidth}
[width=0.98\textwidth]{./fig3_adj_matrices.jpg}
\end{subfigure}
\caption{\textbf{Adjacency matrices and line graphs.} Top: generalized node $s$-adjacency matrices for different values of $s$ from a given toy graph $G$. Bottom, from left to right: reference network $G$, and $s$-line graphs for $s=2,3,$ and $4$. } \label{fig:image3}
\end{figure}
$S$ can be defined in terms of any other property of a hyperedge, e.g. a function of its size $|e|$. Here we consider the area covered by a hyperedge to keep a geometrical perspective. On the other hand, this area $S$ can be easily generalized to hyperedges with $|e_{i}|>3$ by suitably changing the set $T$ in the summation, e.g. by considering structures containing four nodes. As for the centrality measures, we focus on comparing the case $s>1$ against $s=1$, as the latter traces back to standard graph properties, whereas we are interested in investigating which properties are inherent to hypergraphs. \Cref{fig:image4} shows values of $d_s(H)$ and $c_s(H)$ for convergent hypergraphs $H$ (obtained for different values of $\beta$), together with the degree and closeness centrality of their corresponding graph versions. The considered hypergraphs are displayed in the top row of the figure. As can be seen, patterns differ considerably for different values of $\beta$. As $s$ controls the minimum number of shared connections between different nodes, the higher this number, the more restrictive this condition becomes, thus leading to more disconnected line graphs. In the case of the $s$-degree centrality, we observe decreasing values for increasing $s$, with the most central nodes having much higher values than less central ones. For both $s=2$ and $s=3$ we observe higher values than for the nodes of $G$. This follows from the fact that once hyperedges are added to $G$, the number of incidences per node can only increase. Centrality distributions strongly depend on $\beta$. For small values---more distributed traffic ($\beta=1.1$)---the number of hyperedges per node remains larger than the number of regular edges connected to it. But if traffic is consolidated on less space ($\beta=1.9$), then very few hyperedges are found. This suggests that the information learned from hypergraphs that is distinct from that contained in the graph skeleton is influenced by the chosen traffic regime.
As for the closeness centrality distribution, it resembles that of $G$ for small values of $\beta$, regardless of $s$. For higher $\beta$ it switches towards an almost binary signal. Thus, nodes tend to become more central as $\beta$ increases, suggesting that adding hyperedges to the networks $G$ leads to shorter distances between nodes. The loss of information seen for the highest values of $s$ is due to the fact that the line graphs $L_s$ become disconnected, with many small connected components. In these cases, the closeness centrality of a node is either 0, if the node is isolated, or proportional to the diameter of the small connected component it belongs to.
\begin{figure}[!ht]
\centering
[width=0.98\textwidth]{./fig4_side-by-side-hnx-deg-close.jpg}
\caption{\textbf{Graph and Hypergraph properties.} Top row: optimal hypernetworks obtained with different traffic rates. Center and bottom rows: degree distributions and closeness distributions for the hypernetworks shown on the top row, and their 1-skeletons. The node labels in the $x$-axis of the center and bottom rows are sorted by their degree of centrality values.} \label{fig:image4}
\end{figure}
\paragraph{Convergence criteria.} Numerical convergence of the DMK \crefrange{eqn:ddmk1}{eqn:ddmk3} is usually defined by fixing a threshold $\tau$: the updates are considered sufficient once the associated cost changes by no more than $\tau$ with respect to the previous time step. When this threshold is very small ($\tau=10^{-12}$ in our experiments), the cost or the network structure may consolidate to a constant value well before algorithmic convergence. Similar to \cite{baptista2021temporal}, to meaningfully establish when hypergraph optimality is reached, we define the convergence time as the first time step at which the transport cost, or a given network property, comes within a factor $p$ of the value attained at algorithmic convergence (in the experiments here we use $p=1.05$). We write $t_\mathcal{L}$ and $t_P$ for the convergence times in terms of the cost function and of a network property, respectively.
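This criterion can be sketched as follows, given the time series of a (decreasing) cost or property:
\begin{verbatim}
def convergence_time(values, p=1.05):
    # First step at which a decreasing sequence is within a factor p
    # of its final (algorithmic-convergence) value.
    final = values[-1]
    for t, v in enumerate(values):
        if v <= p * final:
            return t
    return len(values) - 1
\end{verbatim}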
\section*{Results}
To test the properties presented in the previous section and understand their connection to transportation optimality, we synthetically generate a set of optimal transport problems, determined by the configuration of sources and sinks. As done in \cite{baptista2021temporal}, we fix a source's location and sample several points in the set $[0,1]^2$ to be used as sinks' locations. Let $S = \{s_0,s_1,...,s_M\}$ be the set of locations in the space $[0,1]^2,$ and fix a radius $r>0$. We define the distributions $f^+$ and $f^-$ as
$f^+(x) \propto \mathds{1}_{R_0}(x)$ and $f^-(x) \propto \sum_{i>0} \mathds{1}_{R_i}(x),$ where $\mathds{1}_{R_i}(x) := 1$ if $x\in R_i$ and $\mathds{1}_{R_i}(x) := 0$ otherwise; $R_i = C(s_i,r)$ is the circle of center $s_i$ and radius $r$. The value of $r$ is chosen based on the used discretization, and, as mentioned before, the centers are sampled uniformly at random. The symbol $\propto$ denotes proportionality and is used to ensure that $f^+$ and $f^-$ are both probability distributions. The transportation cost is that of \cref{eqn:L}.
\paragraph{Synthetic OT problems.}\label{sec:synthetic}
The set of transportation problems considered in our experiments consists of 100 source-sink configurations. We place the source at $s_0=(0,0)$ (i.e. the support of $f^+$ is centered at $(0,0)$), and sample 15 points $s_1,s_2,...,s_M$ uniformly at random from a regular grid. By sampling them from the nodes of the grid, we ensure that two different locations are at a safe distance, so that they remain distinct once the space is discretized. We initialize $\mu_0(x)=1, \forall x$, a uniform distribution on $[0,1]^2$. This can be interpreted as a non-informative initial guess for the solution. Starting from $\mu_0,$ we compute a maximum of 300 updates. Depending on the chosen traffic rate $\beta$, more or fewer iterations may be needed. We say that the sequence $\{\mu_t\}_{t=0}^T$ \textit{converges} to a function $\mu^*$ at iteration $T$ if either $|\mu_T-\mu_{T-1} |<\tau,$ for a \textit{tolerance} $\tau\in (0,1],$ or $T$ reaches the mentioned maximum. For the experiments reported in this manuscript, the tolerance $\tau$ is set to $10^{-12}$. Given the dependence of the solution on the traffic rate, a wide range of values of $\beta$ is considered. Namely, we study solutions ranging from low traffic rates ($\beta=1.1$, and thus less traffic penalization) to large ones ($\beta=1.9$), all of them generating branched transportation schemes. Our 100 problems give rise to a total of 900 hypergraph sequences, each of them containing between 50 and 80 higher-order structures.
\begin{figure}[!h]
\centering
[width=\textwidth]{./fig5_surface_decay.jpg}
\caption{\textbf{Covered area and Lyapunov cost.} Mean (markers) and standard deviations (shades around the markers) of the covered area $S$ (top plots) and of the Lyapunov cost, energy dissipation $\mathcal{E}$ and structural cost $\mathcal{M}$ (bottom plots), as functions of time $t$. Means and standard deviations are computed on the set described in Paragraph \textit{Synthetic OT problems}. From left to right: $\beta=1.2, 1.5$ and $1.8$. Red and blue lines denote $t_P$ and $t_\mathcal{L}$.} \label{fig:image5}
\end{figure}
\paragraph{Convergence: transport cost vs hypernetwork properties.}
As presented in \cite{baptista2021temporal}, we show a comparison between hypernetwork properties and the cost function minimized by the dynamics, where convergence times are highlighted (\Cref{fig:image5}). We focus on the property $S$, the area of the surface covered by the triangles in $H$. This quantity is influenced by both the amount of triangles (hence of hyperedges) and their distribution in space. Hence, it is a good proxy for how hypergraph properties change both in terms of iteration time and as we tune $\beta$.
We observe that $t_P>t_\mathcal{L}$ in all cases, i.e. convergence in terms of transportation cost is reached earlier than convergence of the topological property. Similar behaviors are seen for other values of $\beta\in[1.1,1.9]$ and other network properties (see Appendix). As with DMK-based network properties, the covered area's decay is faster for the smallest values of $\beta$. This is expected, given the convexity properties of $\mathcal{L}$ \cite{facca2016towards,facca2019numerics,facca2021branching}. However, the transport cost decays even faster, so that the value of $S$ is still far from convergence in the congested transportation case (small $\beta$).
\\
Notice that $S$ remains stable after the first few iterations, and then it starts decreasing at different rates (depending on $\beta$) until reaching the converged value. This suggests that the dynamics tend to develop thick branches---covering a large area---at the beginning of the evolution, and then slowly compress them until reaching the optimal topologies.
\\
These different convergence rates for $S$ and $\mathcal{L}$ may prevent construction of converged hypernetwork topologies: if the solver is stopped at $t_\mathcal{L}< t_{P}$, the resulting hypergraphs $H(\mu_t), \ t=t_\mathcal{L}$ would mistakenly cover a surface larger than that covered by the convergent counterpart ($H(\mu_t),$ for $t\geq t_P$). This scenario is less impactful for larger values of $\beta$, although in these scenarios $H$ is much more similar to a regular graph, because of the small number of higher-order structures. Topological differences between converged hypernetworks can be seen in \Cref{fig:image4}.
\\
Finally, we observe that both $t_\mathcal{L}(\beta)$ and $t_P(\beta)$ are increasing functions of $\beta$. This is expected since the larger the traffic rate is, the longer the sequences take to converge. This behavior matches what is shown in \cite{baptista2021temporal} in the case of $t_\mathcal{L}$, but not in the case of $t_P(\beta)$: a non-monotonic behavior was observed in the network case.
\paragraph{Convergence behavior of hypernetwork properties.}
\Cref{fig:image6} shows how the various network properties change depending on the traffic rate. Mean values and standard deviations are computed across times, for a fixed value of $\beta$. As shown, the number of hyperedges, number of triangles, covered area, and average 1-degree exhibit decreasing patterns as functions of $t$. As a consequence, transport optimality can be thought of as reaching minimum states of the mentioned hypernetwork properties. Another clear feature of these functions is related to the actual converged values: the larger $\beta$ is, the smaller these metrics become. This is explained by a cost function that increasingly encourages consolidation of paths onto fewer edges. Notice also that the gap between these converged values signals a non-linear dependence on the traffic rate; e.g., a converged hypernetwork obtained for $\beta=1.1$ loses many more hyperedges if the traffic rate is then set to 1.2, whereas this loss would not be as large if $\beta=1.2$ were increased to 1.3. The nature of these gaps is substantially different depending on the property itself. This also shows that certain properties better reveal the distinction between different optimal traffic regimes.
The behavior of the closeness centralities is distinctly different from that of the other properties. While their initial values are the same for all values of $\beta$ (as for the previous properties), no clear trend can be found as time increases. For $s=1$, on average, $\beta=1.1$ generates sequences that tend to recover their initial values after an increasing and then decreasing transient. For the other traffic rates, we observe different patterns. Notice that the $s$-closeness centrality on the hypergraph for $s=1$ is the same as the classic closeness centrality on its skeleton. Thus, these rather noisy patterns are not due to the addition of hyperedges. On the other hand, for $s=2$ the average centrality shows increasing curves. This may be due to $L_s$ becoming increasingly disconnected, with small connected components; hence, the larger $s$ is, the closer nodes appear within their components (see \Cref{fig:image3}). Moreover, in this case small values of $\beta$ lead to more stable closeness centrality values, showing the impact of $\beta$ on building higher-order structures. While different values of $\beta$ lead to different behaviors of the hypergraph properties (e.g. decreasing degrees and number of hyperedges for increasing $\beta$), we remark that the choice of $\beta$ should depend on the application at hand. The analysis performed here showcases how this choice may impact the resulting topologies. This can help practitioners anticipate possible consequences in terms of downstream analysis of the transportation properties of the underlying infrastructure.
\begin{figure}[!h]
\centering
[width=0.8\textwidth]{./fig6_single_fig_dpp.jpg}
\caption{\textbf{Evolution of hypernetwork properties}. Mean (markers) and standard deviations (shades around the markers) of number of hyperedges $|E_H|$ (upper left), number of triangles $|T|$ (upper center), covered area $S(H)$ (upper right), average $2$-degree $d_2(H)$ (lower left), average $1$-closeness centrality $c_1(H)$(lower center) and $2$-closeness centrality $c_2(H)$(lower right), computed for different values of $\beta$ as a function of time.}\label{fig:image6}
\end{figure}
\section*{\textit{P. polycephalum} hypernetworks}
We now analyze hypernetworks extracted from images of real data. We are interested in the evolution of the area covered by triangles in the sequences $ \{ H(\mu_t)\}_{t=0}^T$ extracted from real images of the slime mold \textit{P. polycephalum}. The behavior of this organism inspired the modeling ideas behind the DMK equations described in \nameref{section:methods}. It has been shown that these slime molds follow an optimization strategy similar to that captured by the DMK dynamics while foraging for food on 2D surfaces \cite{nakagaki2000maze,tero2007mathematical,tero2010rules}.
We extract hypernetworks from images using the idea described in \nameref{section:methods}, but instead of applying \cite{baptista2020network} to obtain the networks, we use the method proposed in \cite{baptista2020principlednet}, which takes images as input. This pipeline uses the color intensities of the image pixels to build a graph by connecting adjacent meaningful nodes. We focus on 4 image sequences from the Slime Mold Graph Repository \cite{dirnberger2017introducing}. The sequences describe the evolution of a \textit{P. polycephalum} placed in a rectangular Petri dish. Each image, and thus each hypernetwork, is a snapshot of the movement of this organism over periods of 120 seconds.
We study the covered area for each of the 4 sequences, and plot the results for one of them (namely, image set \textit{motion12}; see Appendix) in \Cref{fig:image7}. We highlight 4 times along the property sequence to display the used images together with the corresponding hypernetworks. The lower leftmost plot shows a subsection of one of the studied snapshots. As can be seen there, this subhypernetwork topology exhibits a significant number of hyperedges of dimension 3, mainly around the thickest parts of the slime mold. In the lower rightmost plot, on the other hand, the evolution of $S$ is overall decreasing in time (similar results are obtained for the other sequences, as shown in the Appendix). This suggests that the thicker body parts tend to get thinner as the \textit{P. polycephalum} evolves into a consolidated state. This pattern resembles what is shown above for the synthetic data, i.e. the covered area tends to decrease as time evolves, similar to the behavior of the DMK-based hypernetwork sequences. This suggests that the DMK model realistically mirrors a consolidation phase towards optimality in real slime molds \cite{dirnberger2017introducing}.
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{1\textwidth}
[width=0.95\textwidth]{./fig7_hnx_surface_phys_insets_for_motion12.jpg}
\end{subfigure}
\caption{\textbf{\textit{P. polycephalum} hypergraphs.} On top: \textit{P. polycephalum} images and hypernetworks extracted from them. Bottom left: a zoomed-in part of the hypergraph shown inside the red rectangle on top. Bottom right: covered area as a function of time. The red shade highlights a tentative consolidation phase towards optimality.} \label{fig:image7}
\end{figure}
\section*{Conclusions}
We proposed a method to build higher-order structures from OT sequences. This method maps every member of the sequence into a hypergraph, outputting a temporal hypernetwork. We analyzed standard hypergraph properties on these temporal families and compared them to their continuous counterparts. We showed that convergence in terms of transportation cost tends to happen faster than that given by the covered area of the hypernetworks. This suggests that the dynamics used to solve the OT problems concentrates the displaced mass into main branches, and once this task is carried out, it slightly reduces the area covered by them. We studied this and other hypergraph properties, and compared them to those of their network versions. In some cases, hypernetworks reveal more information about the topology at convergence. This suggests that hypernetworks could be a better alternative representation to solutions of OT problems for some transportation schemes. The conclusions found in this work may further enhance our comprehension of OT solutions and the links between this field and that of hypergraphs.
\paragraph{Acknowledgements}
The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Diego Baptista.
\bibliographystyle{splncs03}
\section{Introduction}
Because of the broadcast nature of wireless communications, multiple
receiving nodes may overhear the transmission of a transmitting node
in a wireless network. Therefore, a packet can be routed through
different routes to its destination node. This ``multiple-route
diversity'' can be exploited to improve various measure of network
performance including throughput, average delay, and the probability
of transmission failure.
Different routing and scheduling policies have been proposed to
exploit the ``broadcast advantage'' of the wireless medium. Reference
\cite{ExOR} proposes ExOR, a packet-level opportunistic routing
protocol to exploit high loss-rate radio links in wireless
networks. Reference \cite{Rozner_2009} proposes a proactive link state
routing protocol that improves the forward nodes selection in
\cite{ExOR}. Reference \cite{Neely_2008} characterizes the maximum
network capacity region when packet-level opportunistic routing is
exploited, and proposes a routing algorithm named DIVBAR to stabilize
the system. DIVBAR adopts the idea of backpressure algorithms first
proposed in \cite{Tassiulas_1992, Tassiulas_1993}. References
\cite{ExOR, Rozner_2009, Neely_2008} discuss how to select a forwarder
based on the feedback information sent from the receivers that have
successfully received the packets. With a similar maximum weight
scheduling idea, \cite{Yeh_2007} analyzes the optimal information
theoretic based cooperative scheduling in a diamond network. The
system is assumed to be able to adaptively select the optimal
encoding-decoding scheme such that any desirable transmission rate
vector within the corresponding capacity region can be achieved in
every slot. This ``fluid model'' is not quite practical. However,
because it is simple to analyze, it has been widely adopted in the
literature.
The policy space explored in the above papers, in particular those
having to do with wireless~\cite{ExOR, Rozner_2009, Neely_2008,
Yeh_2007}, assumes that packet decoding is carried out within a
single slot. This means that decoding depends only on current control
actions and channel quality. However, there are many physical layer
techniques, such as energy accumulation and mutual information
accumulation (MIA), that do not fit this assumption. Such techniques
do not require decoding to occur within a single slot but rather allow
the receivers to accumulate observations across multiple time slots
before decoding. This allows the network to exploit weak radio links
more fully, thereby increasing system throughput.
In this paper we focus on mutual information accumulation and, while
there is some prior research on resource optimization in networks
using MIA \cite{Draper_2008,Urgaonkar_2010,mia_allerton11}, that work
does not consider questions of network stability and protocol design.
Our objectives in this work are therefore twofold. First, we want to
characterize the maximum network capacity region when mutual
information accumulation is implemented at the physical layer. Second,
we aim to design joint routing and scheduling algorithms that
stabilize any point within the capacity region.
To make the concept of mutual information accumulation more tangible,
it helps to contrast it with energy accumulation. In energy
accumulation multiple transmissions are combined non-coherently by
receiving nodes. This is usually enabled by using space-time or
repetition coding~\cite{Maric_2004, Maric_2005, Chen_2005}. Mutual
information accumulation is more efficient, the difference between the
two being explained in~\cite{Draper_2008}. \textcolor{black}{Consider a pair of senders transmitting information over two independent additive white Gaussian noise channels to the same receiver. Assume transmitter power $P$ and channel noise power $N$. Energy accumulation corresponds to the scenario when each transmitter sends the same codeword. When the decoder uses maximum ratio combining, a throughput of $\frac{1}{2}\log (1+\frac{2P}{N})$ bits per channel use can be achieved. With mutual information accumulation, independent parity symbols are sent, and the system can achieve $2\times\frac{1}{2}\log (1+\frac{P}{N})$ bits per channel use, which outperforms energy accumulation.} It has been noted in
\cite{Draper_2008} that for Gaussian channels at low signal-to-noise
ratio (SNR) energy accumulation is equivalent to mutual information
accumulation because capacity at low SNR is linear in SNR. Mutual
information accumulation can be realized through the use of rateless
(or fountain) codes \cite{Draper_2008, Mitzenmacher_2004, LT_Code}.
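To make the two-sender comparison above concrete, the following minimal numeric sketch (in Python; the function names and the sampled $P/N$ values are ours, chosen for illustration) evaluates both rate expressions. At low SNR the two rates nearly coincide, matching the observation from \cite{Draper_2008}.
\begin{verbatim}
import math

def energy_accumulation_rate(snr):
    # Both senders repeat the same codeword; maximum ratio combining
    # doubles the received SNR: (1/2) log2(1 + 2P/N) bits per channel use.
    return 0.5 * math.log2(1 + 2 * snr)

def mia_rate(snr):
    # Independent parity streams: mutual information adds across the two
    # channels, giving 2 * (1/2) log2(1 + P/N) bits per channel use.
    return math.log2(1 + snr)

for snr in (0.01, 1.0, 10.0):  # snr = P/N
    print(f"P/N={snr:5.2f}: energy acc. {energy_accumulation_rate(snr):.4f},"
          f" MIA {mia_rate(snr):.4f} bits/use")
\end{verbatim}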
Similar to \cite{ExOR, Rozner_2009, Neely_2008}, we assume our system
operates at the packet level, and each active transmitter transmits a
packet in each slot. Different from the probabilistic channel model
discussed in these references, we consider the scenario where the
channel state varies slowly, so that link conditions can be assumed to
be static over a long transmission period. Under this scenario, for a
weak link whose link rate is below the transmission rate, a packet
cannot be successfully delivered across the link in any single slot,
even with repetitive attempts in different slots. Thus, the
corresponding receiver can never decode and become a forwarder under
the schemes discussed in \cite{ExOR, Rozner_2009,
Neely_2008}. However, when rateless codes are used by the
transmitters, although the corresponding receiver of the weak link
cannot decode the packet within a single slot, it can store that
corrupted packet and accumulate more information in later
slots. Eventually, a packet can be successfully delivered across a
weak link after a number of transmissions by accumulating information
across the slots. Thus, weak links can still be utilized in slowly
changing network environments. Compared with opportunistic routing
schemes, mutual information accumulation provides more reliable
throughput over weak links, and doesn't require the feedback
information to determine the forwarder in each slot. Thus it reduces
the required overhead.
Compared to the networks in \cite{Yeh_2007} and references \cite{cohen, Neely05}
where varying rate can be achieved within a slot through adaptive
encoding-decoding schemes, the mutual information accumulation scheme
doesn't require the encoding-decoding to be changed in every
slot. However, on average, it can achieve the same rate by repetitive
transmission and information bits accumulation. Therefore, the system
is more practical to implement without sacrificing throughput. On the
other hand, the information bit accumulation process also brings new
challenges to the design and analysis of routing and scheduling
algorithms.
The contribution of our work is three-fold.
\begin{itemize}
\item We characterize the maximum stability region of a wireless
network enabled with mutual information accumulation under certain
natural assumptions. Compared with networks where information
accumulation is not allowed, the system is able to exploit weak
links and an expanded stability region is thereby achieved.
\item We propose two dynamic routing and scheduling policies to
achieve the maximum stability region. Both policies require simple
coordination and limited overhead.
\item The techniques we develop to cope with the temporally
coupled mutual information accumulation process are novel and
may have use in queuing problems more widely.
\end{itemize}
The rest of the paper is organized as follows. In
Section~\ref{sec:system}, we describe the system model. In
Section~\ref{sec:capacity}, we present the maximum stability region of
the network under our system model. In Section~\ref{sec:Tslot} and
Section~\ref{sec:virtual}, we design two different routing protocols
to achieve the maximum stability region and analyze the
performance. We present simulation evaluation in
Section~\ref{sec:simu}. Finally, we conclude in
Section~\ref{sec:conclusions}. Proofs are deferred to the appendices.
\section{System Model}\label{sec:system}
\subsection{The Basic System Setting}\label{sec:system_para}
We consider a time-slotted system with slots normalized to integral units $t\in\{0, 1, 2, \ldots\}$. There are $N$ network
nodes, and links are labeled according to {\it ordered} node pairs $(i,j)$ for $i,j \in \{1, \ldots , N\}$. We assume that there are $K$ different commodities in the network, $K\leq N$, where each commodity is labeled according to its destination node, e.g., all packets from commodity $c$ should be routed to destination node $c$. Data arrives randomly in packetized units. Let $A^c_i(t)$ denote the number of packets from commodity $c$ that exogenously arrive at network node $i$ during slot $t$.
Arrivals are assumed to be independent and identically distributed (i.i.d.) over timeslots, and we let $\lambda^c_i=E\{A^c_i(t)\}$ represent the arrival rate of commodity $c$ into source node $i$ in units of packets/slot. We assume $A^c_i(t)\leq A_{max}$ for all $c,i,t$.
We assume the channel fading states between any pair of nodes are stationary, and the active transmitters transmit with the same power level. Thus, a fixed reliable communication rate over an active link can be achieved in each slot. We expect that the algorithms developed in this paper can be modified to accommodate more general fading processes.
\textcolor{black}{We use $r_{ij}$ to denote the link rate between nodes $i,j$. We assume the link rate is reciprocal, i.e., $r_{ij}=r_{ji}$.}
We assume that each node can transmit at most one packet during any single time slot, i.e., $r_{ij}\leq 1$ packet/slot. We term the links with rate one packet per slot {\it strong} links; the rest of the links we term {\it weak} links. For weak links, we assume their rates are lower bounded by some constant value $r_{min}$, $0<r_{min}<1$. Define the set of neighbors of node $i$, $\mathcal{N}(i)$, as the set of nodes with $r_{ij}>0$, $j \in \mathcal{N}(i)$. We assume the size of $\mathcal{N}(i)$, denoted as $|\mathcal{N}(i)|$, is upper bounded by a positive integer $d$. We define $\mu_{max}$ as the maximum number of packets a node can successfully decode in any slot, which is equivalent to the maximum number of nodes that can transmit to a single node simultaneously. Therefore, we have $\mu_{max}\leq d$.
We assume the system operates under constraints designed to reduce interference among transmitters. Under the interference model, in any timeslot, only a subset of nodes is allowed to transmit simultaneously. We assume that transmissions from nodes active at the same time are interference free. We denote the set of feasible {\it activation patterns} as $\mathcal{S}$, where an activation pattern $s\in \mathcal{S}$ represents a set of active nodes. With a slight abuse of notation, we interchangeably use $s$ to represent an activation pattern and the set of active nodes of the pattern. \textcolor{black}{For any $s\in \mathcal{S}$, we assume all of its subsets also belong to $\mathcal{S}$. This allows us to assert that all of the nodes in an activation pattern always transmit when that activation pattern is selected.} This interference model can accommodate networks with orthogonal channels, restricted TDMA, etc.
\subsection{Mutual Information Accumulation at Physical Layer}\label{sec:system2}
We assume mutual information accumulation is adopted at physical layer. Specifically, we assume that if a {\it weak} link with $r_{ij}<1$ is activated, rateless codes are deployed at its corresponding transmitter. When a packet transmitted over a weak link cannot be successfully decoded during one slot, instead of discarding the observations of the undecodable message, the receiver stores that partial information, and accumulates more information when the same packet is retransmitted. A packet can be successfully decoded when the accumulated mutual information exceeds the packet size. The assumption $r_{ij}>r_{min}$ implies that for any active link, it takes at most some finite number of time slots, $\lceil 1/r_{min}\rceil$, to decode a packet.
In order to simplify the analysis, we assume that $1/r_{ij}$ is an integer. The reason for this choice will become clear when our algorithm is proposed in Section~\ref{sec:Tslot}. If $r_{ij}$ does not satisfy this assumption, we round it down to $1/\lceil 1/r_{ij}\rceil$ to enforce it.
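As a concrete illustration of this rounding (a minimal sketch; the function names are ours), a rate $r_{ij}=0.3$ is rounded down to $1/\lceil 1/0.3\rceil = 1/4 = 0.25$, so a packet on that link is decoded after exactly 4 slots of listening:
\begin{verbatim}
import math

def enforce_integer_inverse(r):
    """Round r down to 1/ceil(1/r) so that 1/r is an integer."""
    return 1.0 / math.ceil(1.0 / r)

def slots_to_decode(r):
    """Slots a receiver listens on a weak link of rate r before its
    accumulated mutual information reaches one full packet."""
    return round(1.0 / enforce_integer_inverse(r))

print(enforce_integer_inverse(0.3), slots_to_decode(0.3))  # 0.25, 4
\end{verbatim}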
The challenge in extending the backpressure routing framework to systems that use mutual information accumulation is illustrated through the following example. Suppose that a node $i$ has already accumulated half of the bits in packet 1 and half of the bits in packet 2. Since neither of the packets can be decoded, none of these bits can be transferred to the next hop, even though the total number of bits at node $i$ is equal to that of a full packet. This means that we need to handle the undecoded bits in a different manner. We also observe that, if node $i$ never accumulates enough information for packets 1 or 2, then these packets can never be decoded and will be stuck at node $i$. If we assume that the system can smartly drop the undecoded partial packet whenever a fully decoded copy is successfully delivered to its destination, coordination among the nodes is required to perform this action. The overhead of the coordination might offset the benefits brought by mutual information accumulation. Moreover, unlike the opportunistic model studied in \cite{ExOR, Rozner_2009, Neely_2008}, given that a weak link is active, whether or not a successful transmission occurs in that slot is not an i.i.d.\ random variable but rather a deterministic function of the number of already accumulated bits of that packet at the receiving node. This difference makes the analysis even more complicated.
Therefore, in the following sections, we define two different types of queues. One is the traditional {\em full packet} queue which stores fully decoded packets at each node. The other type of queue is a {\em partial packet} queue. It represents the fraction of accumulated information bits from a particular packet. The specific definition of the queues and their evolution depends on the scheduling policy, and will be clarified in Section~\ref{sec:Tslot}, and Section~\ref{sec:virtual}, respectively.
We assume each node has infinite buffer space to store fully decoded packets and partial packets. Overflow therefore does not occur.
\subsection{Reduced Policy Space}
The policy space of the network we consider can be much larger than that of networks without mutual information accumulation. First, because of the broadcast nature of wireless communication, multiple receivers can accumulate information regarding the same packet in any given slot, and a receiver can collect information on a single packet from multiple nodes. Keeping track of the locations of different copies of the same packet requires a lot of overhead. Second, allowing multiple receivers to store copies of the same packet introduces more traffic into the network. Therefore, stabilizing the network requires a sophisticated centralized control strategy. Finally, accumulating information bits of a packet from multiple nodes makes the decoding options of a packet increase exponentially; thus characterizing the network maximum stability region becomes intractable. Therefore, we make the following assumptions:
\begin{itemize}
{\it \item[A1.] For each packet in the system, at any given time, only one node is allowed to keep a fully decoded copy of it.
\item[A2.] In addition to the node with the fully decoded copy, one other node is chosen as the potential forwarder for the packet. Only the potential forwarder is allowed to accumulate information about the packet.}
\end{itemize}
Restricting ourselves to this policy space may sacrifice some part of the stability region that could be achieved under a more general policy space. However, as we will see, these assumptions enable us to explicitly characterize the maximum stability region with the given policy space. Compared with systems that operate without the capability to accumulate mutual information, our system is able to exploit weak links even when one-slot decoding is not possible. The stability region will thereby be greatly enlarged.
If a node directly contributes to the successful decoding of a packet at another node, this node is denoted as a {\it parent} for that packet. Assumptions A1-A2 guarantee that for any packet in the network, there is only one parent at any given time.
We also note that, if we relax assumptions A1-A2, and make the following assumption instead
\begin{itemize}
{\it \item[A3.] Every packet existing in the network has a single parent at any given time. In other words, the accumulated information required to decode a packet at a node is received from a single transmitting node.}
\end{itemize}
then, the maximum stability region under A1-A2 and that under A3 are equivalent. Assumption A3 means that we don't allow a node to accumulate information from different nodes (different copies of the same packet) to decode that packet. However, multiple copies of a packet may still exist in the network.
\section{Network Capacity with Mutual Information Accumulation}\label{sec:capacity}
In this section we characterize the optimal throughput region under all possible routing and scheduling algorithms that conform to the network structure specified in Section~\ref{sec:system_para} and Assumption A3.
At the beginning of each slot, a certain subset $s$ of nodes is selected to transmit in the slot. Any node $i\in s$ can transmit any packet that it has received {\it and} successfully decoded in the past. Note that because packets must be decoded prior to transmission, partial packets cannot be delivered. For node $j\in\mathcal{N}(i)$, if it has already decoded a packet being transmitted, it can simply ignore the transmission; otherwise, it listens to the transmission and aims to decode it at the end of that slot. Receiving nodes connected with strong links can decode the packet in one slot. Nodes connected with weak links cannot decode the packet in one slot. Rather, they need to listen to the same node transmitting the same packet over a number of slots.
A packet is said to be successfully delivered to its destination node when the {\it first copy} of the packet is decoded by its destination node. These assumptions allow for any possible routing and scheduling policy satisfying Assumption A3. \textcolor{black}{We note that packets arriving at a node from a single commodity may be delivered to their destination node in a permuted order, since each packet may take a different route.}
Let $\boldsymbol{\lambda}$ represent the input rate vector to the system, where $\lambda^c_i$ is the input rate of commodity $c$ entering node $i$. Define $Y^c_i(t)$ as the number of packets from commodity $c$ that originated at node $i$ and have been successfully delivered to destination node $c$ over $[0,t)$. According to the definition of network stability \cite{Neely_now}, a policy is defined as {\it stable} if
\begin{align}
\lim_{t\rightarrow \infty} \frac{Y^c_i(t)}{t}=\lambda^c_i,\qquad \forall c.
\end{align}
Stronger definitions of stability can be found in \cite{Tassiulas_1992, Tassiulas_1993, stable94}.
The maximum stability region or {\it network layer capacity region} $\Lambda$ of a wireless network with mutual information accumulation is defined as the closure of all $\boldsymbol{\lambda}$ that can be stabilized by the network according to some policy with the structure described above.
\begin{Theorem}\label{thm1}
For a network with given link rates $\{r_{ij}\}$ and a feasible activation pattern set $\mathcal{S}$,
the network capacity region $\Lambda$ under assumption A3 consists of all rate vectors $\{\lambda^c_n\}$ for which there exists flow variables $\{\mu^c_{ij}\}$ together with a probability $\pi_s$ for each possible activation pattern $s\in \mathcal{S}$ such that
\begin{align}
\mu^c_{ij}&\geq 0,\quad \mu^c_{ci}=0,\quad \mu^c_{ii}=0, \quad \forall i,j,c\label{eqn:cap1}\\
\sum_{l}\mu^c_{li}+\lambda^c_i&\leq \sum_{j}\mu^c_{ij},\quad \forall i\neq c, \forall c\label{eqn:cap2}\\
\sum_c\mu^c_{ij}&\leq \sum_c\sum_{s\in \mathcal{S}}\pi_s\theta^c_{ij}(s)r_{ij},\quad \forall i,j\label{eqn:cap3}\\
\sum_{s\in \mathcal{S}}\pi_s&\leq1,
\end{align}
where the probabilities $\theta_{ij}(s)$ satisfy
\begin{align}
\theta^c_{ij}(s)&=0 \mbox{ \rm{if} }i\notin s, \\
\quad \sum_c\sum_{j}\theta^c_{ij}(s)&= 1,\forall i.\label{eqn:cap5}
\end{align}
\end{Theorem}
The necessity of this theorem can be proved following the same approach as in \cite{Neely_2008} and is provided in Appendix~\ref{apx:thm1}. The sufficiency part is proved in Section~\ref{sec:Tslot} by constructing a stabilizing policy for any rate vector $\boldsymbol{\lambda}$ that is in the interior of the capacity region.
The capacity region is essentially similar to the capacity theorem of \cite{Neely_now,Neely_2008}. The relations in (\ref{eqn:cap1}) represent non-negativity and flow efficiency constraints. Those in (\ref{eqn:cap2}) represent flow conservation constraints. Those in (\ref{eqn:cap3}) represent link constraints for each link $(i,j)$. The variable $\theta^c_{ij}(s)$ can be interpreted as the probability that the transmission over link $(i,j)$ eventually contributes to the delivery of a packet of commodity $c$ at node $c$, given that the system operates in pattern $s$. In other words, link $(i,j)$ is on the routing path for this packet from its origin node to its destination node.
This theorem implies that the network stability region under Assumption A3 can be defined in terms of an optimization over the class of all stationary policies that use only single-copy routing. Thus, for any rate vector $\boldsymbol{\lambda}\in \Lambda$, there exists a stationary algorithm that can support that input rate vector by single-copy routing all data to the destination.
The region $\Lambda$ defined above is in the same form as when a ``fluid model'' is considered. In other words, the extra decoding constraint imposed by mutual information accumulation does not sacrifice any part of the stability region. We can simply ignore the packetization effect when we search for the maximum stability region.
The quantity $\sum_c\mu^c_{ij}$ defining the stability region represents the {\it effective} flow rate over link $(i,j)$. An ``effective'' transmission means that the bits transferred by that transmission eventually get delivered to the destination. If the transferred information becomes a redundant copy, or a discarded partial packet, the transmission is not effective and does not contribute to the effective flow rate. We can always make the inequalities (\ref{eqn:cap2})-(\ref{eqn:cap3}) tight by controlling the value of $\pi_s$.
Solving for the parameters $\{\pi_s\}$ and $\{\theta^c_{ij}(s)\}$ required to satisfy the constraints requires complete knowledge of the set of arrival rates $\{\lambda^c_i\}$, which cannot be accurately measured or estimated in real networks. On the other hand, even when $\boldsymbol{\lambda}$ is given, solving the equations can still be quite difficult. In the following, we overcome this difficulty with online algorithms which stabilize any $\boldsymbol{\lambda}$ within $\Lambda$, but with a possibly increased average delay as $\boldsymbol{\lambda}$ approaches the boundary of $\Lambda$.
\section{$T$-slot Dynamic Control Algorithm}\label{sec:Tslot}
In the following, we construct a policy that fits Assumptions A1-A2. Although these assumptions are more restrictive than Assumption A3, we will see that they do not compromise stability performance, in the sense that they do not reduce the stability region.
To construct a dynamic policy that stabilizes the system anywhere in the interior of $\Lambda$, specified in Theorem~\ref{thm1}, we first define our decision and queue variables.
We assume that each packet entering the system is labeled with a unique index $k$. At time $t$, $0\leq k\leq \sum_{c,i}\sum_{\tau=1}^t A^c_i(\tau)$. If packet $k$ belongs to commodity $c$, we denote it as $k\in \mathcal{T}_c$. Let $\left\{\beta_{ij}^{(k)}(t)\right\}$ represent the binary control action of the system at time $t$. Specifically, $\beta_{ij}^{(k)}(t)=1$ means that at time $t$, node $i$ transmits packet $k$ to node $j$. We restrict the possible actions so that in each slot each node transmits at most one packet, i.e.,
\begin{align}\label{beta_con}
&\sum_{j,k}\beta_{ij}^{(k)}(t)\leq 1,\quad \forall i,
\end{align}
and at most one node is chosen as the forwarder for packet $k$, i.e.,
\begin{align}\label{beta_con2}
\sum_{i,j}\beta_{ij}^{(k)}(t)\leq 1, \quad\forall k.
\end{align}
Because of the mutual information accumulation property, even if packet $k$ is transmitted over link $(i,j)$ in slot $t$, it doesn't necessarily mean that packet $k$ can be decoded at node $j$ at the end of slot $t$. In particular, under the fixed link rate assumption, successful transmission cannot occur over weak links in a single timeslot.
We let $f_{ij}^{(k)}(t)$ be an indicator function where $f_{ij}^{(k)}(t)=1$ indicates that packet $k$ has been successfully delivered from node $i$ to node $j$ in slot $t$. The indicator function is a function of the current control action and partial queue status at the beginning of slot $t$. Clearly, $f_{ij}^{(k)}(t)=1$ implies that $\beta_{ij}^{(k)}(t)=1$.
As discussed in Section~\ref{sec:system2}, we define two types of queues at each node.
One stores the fully received and successfully decoded packets at the node, while the other stores the partially received packets.
We use $Q^c_i(t)$ to denote the length of node $i$'s queue of fully received packets from commodity $c$ at time $t$, and use $P_i^{(k)}(t)$ to represent the total fraction of packet $k$ accumulated by node $i$ up to time $t$. The sum-length of partial queues of commodity $c$ at node $i$ storing partial packets can be represented as $P^c_i(t)=\sum_{k\in \mathcal{T}_c}P_i^{(k)}(t)$. The fraction of packet $k$, $P_i^{(k)}(t)$, can be cleared either when packet $k$ is successfully decoded and enters the full packet queue $Q^c_i$, or when the system controller asks node $i$ to drop packet $k$. \textcolor{black}{With a little abuse of notation, we use $Q_i^c$ and $P_i^c$ to denote the full packet queue and partial packet queue from commodity $c$ at node $i$, respectively.}
Then, according to our Assumptions A1-A2, the queue lengths evolve according to
\begin{align}
Q^c_i(t+1)&=\Big( Q^c_i(t)-\sum_{j,k\in \mathcal{T}_c}\beta_{ij}^{(k)}(t)f^{(k)}_{ij}(t)\Big)^+\nonumber\\
&\quad+\sum_{l,k\in \mathcal{T}_c}\beta_{li}^{(k)}(t)f^{(k)}_{li}(t)+A^c_i(t)\\
P_i^{(k)}(t+1)&=P_i^{(k)}(t)+\sum_{l}\beta_{li}^{(k)}(t)r^{(k)}_{li}(t)\hspace{-0.02in}-\hspace{-0.03in}\sum_{l}\beta_{li}^{(k)}(t)f^{(k)}_{li}(t)\nonumber\\
&\quad-\sum_{l,(m\neq i)}P_i^{(k)}(t)\beta_{lm}^{(k)}(t)\label{eqn:p1}
\end{align}
where
\begin{align}
r^{(k)}_{li}(t)&=\left\{\begin{array}{ll}r_{li}& P_i^{(k)}(t)+r_{li}\leq 1\\
1-P_i^{(k)}(t)& P_i^{(k)}(t)+r_{li}> 1
\end{array}\right.
\end{align}
and $(x)^+=\max\{x,0\}$.
Under the assumption that $1/r_{ij}$ is an integer for every $(i,j)$, we have $r^{(k)}_{li}(t)=r_{li}$.
Since we allow only one forwarder for any given packet at any time, if $\beta_{lm}^{(k)}(t)=1$, any node other than node $m$ which has accumulated partial information of packet $k$ must drop that partial packet $k$. This effect results in the last negative term in (\ref{eqn:p1}). On the other hand, the first negative term in (\ref{eqn:p1}) corresponds to successful decoding of packet $k$, after which it is removed and enters $Q^c_i$ for some $c$.
\subsection{The $T$-slot Algorithm}
Our algorithm works across epochs, each consisting of $T$ consecutive timeslots. Action decisions are made at the start of each epoch and held constant through the epoch. We analyze the effect of the choice of $T$ on the stability region and average backlog. Any rate vector $\boldsymbol{\lambda}$ inside the capacity region $\Lambda$ can be stabilized by a sufficiently large choice of $T$.
\begin{itemize}
\item[1)] \textbf{Check single-link backpressure.} At the beginning of each epoch, i.e., when $t=0,T,2T,\ldots$, node $i$ checks its neighbors and computes the differential backlog weights
$$W_{ij}(t)=\max_c[Q^c_i(t)-Q^c_j(t)]^+r_{ij},\quad j\in\mathcal{N}(i) .$$
Denote the maximizing commodity as $$c^*_{ij}=\arg \max_c [Q^c_i(t)-Q^c_j(t)]^+.$$
\item[2)] \textbf{Select forwarder.} Choose the potential forwarder for the packets in $Q_i$ as the node $j$ with the maximum weight $W_{ij}(t)$. Denote this node as $j^*_i=\arg \max_j W_{ij}(t)$.
\item[3)] \textbf{Choose activation pattern.} Define the activation pattern $s^*$ as the pattern $s\in S$ that maximizes
$$\sum_{i\in s}W_{ij^*_i}.$$
Any node $i\in s^*$ with $W_{ij^*_i}>0$ transmits packets of commodity $c^*_{ij^*_i}$ to $j^*_i$. The pairing of transmitter $i\in s^*$ and receiver $j^*_i$ and the commodity being transmitted $c^*_{ij^*_i}$ is continued for $T$ consecutive timeslots.
\item [4)] \textbf{Clear partial queues.} Release all the accumulated bits in the partial queue $P^c_i$, $\forall i,c$, at the end of each epoch.
\end{itemize}
The $T$-slot algorithm satisfies constraints (\ref{beta_con})-(\ref{beta_con2}). The ``potential forwarder'' in Step 2) refers to the forwarder of node $i$ if node $i$ is active.
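For concreteness, the following sketch implements Steps 1)-3) in Python. The dictionaries \texttt{Q}, \texttt{r}, \texttt{neighbors} and the collection \texttt{patterns} are illustrative stand-ins for the quantities defined above, not part of the original specification.
\begin{verbatim}
def tslot_decisions(Q, r, neighbors, commodities, patterns):
    """One epoch's decisions (Steps 1-3); held fixed for T slots."""
    W, j_star, c_star = {}, {}, {}
    for i in Q:
        W[i], j_star[i], c_star[i] = 0.0, None, None
        for j in neighbors[i]:
            for c in commodities:
                # Step 1) differential backlog weight over link (i, j)
                w = max(Q[i][c] - Q[j][c], 0) * r[i][j]
                if w > W[i]:
                    # Step 2) keep the best (forwarder, commodity) pair
                    W[i], j_star[i], c_star[i] = w, j, c
    # Step 3) activation pattern with the largest total weight; nodes
    # i in s_star with W[i] > 0 transmit commodity c_star[i] to j_star[i]
    s_star = max(patterns, key=lambda s: sum(W[i] for i in s))
    return s_star, j_star, c_star
\end{verbatim}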
We clear all of the partial queues in the system every $T$ slots (in Step 4)) for simplicity of analysis. This is likely not the optimal way to handle the partial queues. Intuitively, the performance should improve if we only release the partial queues when a selected forwarder for a packet differs from the previous one (thus satisfying A3).
\begin{Theorem}\label{thm:Tslot}
The algorithm stabilizes any rate vector satisfying $\boldsymbol{\lambda}+\boldsymbol{\epsilon}(T) \in \Lambda$, where $\boldsymbol{\epsilon}(T)$ is a vector with minimum entry $\epsilon> 1/T$. The average expected queue backlog $\lim_{t\rightarrow \infty}\frac{1}{t}\sum_{\tau=0}^{t-1}\sum_{c,i}\mathds{E}\{Q^c_i(\tau)\}$ in the system is upper bounded by $$\frac{KNT^2(\mu_{max}\hspace{-0.02in}+\hspace{-0.02in}A_{max})^2+NT^2}{2(\epsilon T-1)}+\frac{KN(T\hspace{-0.02in}-\hspace{-0.02in}1)(\mu_{max}\hspace{-0.02in}+\hspace{-0.02in}A_{max})}{2}.$$
\end{Theorem}
The proof of Theorem~\ref{thm:Tslot} is provided in Appendix~\ref{apx:thm_Tslot}. The proof is based on the fact that the $T$-slot algorithm minimizes the $T$-slot Lyapunov drift, which is shown to be negative when $\sum_{c,i}Q^c_i(t)$ is sufficiently large.
The constructed algorithm proves the sufficiency of Theorem~\ref{thm1}. The intuition behind the algorithm is that, by using the weak links consecutively over a long window, the potential loss of effective rate caused by dropped partial packets is kept small; therefore, the effective rates over the weak links can get close to the link rates. The algorithm approaches the boundary of the capacity region as $O(1/T)$.
When $T$ is large enough, the average expected backlog in the system scales as $O(T)$. For this reason, in the next section, we introduce a virtual queue based algorithm which updates its actions every slot. We expect the average backlog under the virtual queue based algorithm to be improved, since its upper bound does not scale as $O(T)$.
Given $T> \frac{1}{\epsilon}$, the upper bound \textcolor{black}{in Theorem~\ref{thm:Tslot}} is a convex function of $T$.
This implies that for any $\boldsymbol{\lambda}+\boldsymbol{\epsilon}(T)\in \Lambda$, there exists an optimal value of $T$ which stabilizes the system and introduces minimal delay bound. However, when the arrival rates are unknown, it may not be practical to search for this optimal value.
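When the statistics are known, however, finding this $T$ is a simple one-dimensional search over a convex function. Below is a minimal numeric sketch of minimizing the bound in Theorem~\ref{thm:Tslot}; the parameter values are illustrative choices of ours, not taken from the paper.
\begin{verbatim}
def backlog_bound(T, K, N, mu_max, A_max, eps):
    """Upper bound of Theorem 2 on the average expected queue backlog."""
    num = K * N * T**2 * (mu_max + A_max)**2 + N * T**2
    return (num / (2 * (eps * T - 1))
            + K * N * (T - 1) * (mu_max + A_max) / 2)

K, N, mu_max, A_max, eps = 2, 10, 3, 2, 0.1       # illustrative values
T_opt = min(range(11, 1000),                      # T must exceed 1/eps
            key=lambda T: backlog_bound(T, K, N, mu_max, A_max, eps))
print(T_opt, backlog_bound(T_opt, K, N, mu_max, A_max, eps))
\end{verbatim}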
Finally, we note that for some special values of $T$, the network can still be stabilized even when $T\leq 1/\epsilon$. For example, when $T$ is chosen as $\prod_{(i,j)}\frac{1}{r_{ij}}$, then, under any possible activation pattern $s\in\mathcal{S}$, all partial packets are decoded at the end of the $T$-slot window. This implies that the policy can stabilize any $\boldsymbol{\lambda}+\boldsymbol{\epsilon}\in\Lambda$. This phenomenon will be illustrated through examples in Section~\ref{sec:simu}. For small networks, such a value of $T$ can be easily computed and may be small; for large networks with many weak links, such a value may still be quite large.
\section{Virtual Queue Based Algorithm}\label{sec:virtual}
In this section, we develop a second algorithm that exhausts the stability region without needing a large $T$, thereby attaining better delay performance. As seen in Theorem~\ref{thm:Tslot}, the delay is caused by the long planning window and the infrequent updates of the control action. Therefore, in order to obtain better delay performance, intuitively, we need to update our policy more frequently. This requires us to design more sophisticated mechanisms to handle the partial packet queues, and brings additional analysis challenges due to the temporally coupled decoding process over weak links.
Our approach is to construct a network that contains ``virtual'' queues, which handle the partial packets and decoding process over weak links. The resulting network has the same maximum stability region as the original network. By stabilizing the constructed network, the original network is also stabilized.
Specifically, in order to handle the partial packet queue in a simple and effective way, we introduce buffers over weak links. We assume there is a buffer at the transmitter side for each weak link. Then, if a node wants to send a packet over a weak link, the packet is pushed into the buffer. The buffer keeps the packet until it is successfully decoded at the corresponding receiver.
The intuition behind the introduction of these buffers is that, since we do not want dropped partial packets to cost much effective rate over weak links, once the system picks a forwarding node for a particular packet, the system never changes this decision.
For transmissions over weak links, a packet can only be decoded and transferred to the next hop when enough information is collected. Under the $T$-slot algorithm, since control actions update every $T$ slots and partial queues are cleared at the end of every epoch, it is relatively simple to track the queue evolution and perform the analysis. When control actions change every slot, queues may evolve in a more complicated way, and are thus more difficult to track and analyze.
In order to overcome the analysis challenges, we introduce a second buffer at the receiver side of each weak link. Under the proposed algorithm, we ensure that the receiver never tries to decode a packet until it accumulates enough information, i.e., queue length in the second buffer only decreases when it is sufficiently long. By doing this, the evolution of the queue lengths can be tracked and analyzed easily.
\textcolor{black}{Essentially, we only need to introduce virtual nodes and virtual buffers over weak links in order to handle partial packets. However, link rates may vary over time and vary in different activation patterns (discussed in Sec.~\ref{sec:vary}). Therefore, for the virtual queue based algorithm, we introduce virtual nodes and buffers over both weak links and strong links, and treat them uniformly.}
\subsection{The Virtual Queue Vector}
We divide $Q^c_i(t)$ into two parts. The first stores the packets that have not yet been transmitted in any previous slots, denoted as $U^c_i(t)$. The second stores packets partially transmitted over some links but not yet decoded, denoted as $V^c_i(t)$. Since each packet in the second part is associated with some link, in order to prevent any loss of effective rate caused by dropped partial packets, we require these packets to be transmitted over the same link until they are decoded.
We use $V^{(k)}_{ij}(t)$ to denote the information of packet $k$ required to be transmitted over link $(i,j)$, and $P^{(k)}_{ij}(t)$ to denote the accumulated information of packet $k$ at node $j$.
We define $V^c_{ij}(t)=\sum_{k\in \mathcal{T}_c} V^{(k)}_{ij}(t)$, and $P^c_{ij}(t)=\sum_{k\in \mathcal{T}_c} P^{(k)}_{ij}(t)$, where we recall that $\mathcal{T}_c$ is the set of packets of commodity $c$. Note that $P^c_{ij}(t)$ is different from $P^c_{j}(t)$ defined in Section~\ref{sec:Tslot}, since the latter is associated with node $j$ and the former is associated with link $(i,j)$.
Associated with virtual queues, we define {\it virtual nodes}, as depicted in Fig.~\ref{fig:virtual}. For the link from node $i$ to node $j$, we associate one virtual node with $\{V^c_{ij}\}_c$ and a second with $\{P^c_{ij}\}_c$. The virtual node associated with $\{V^c_{ij}\}_c$ is denoted as $v_{ij}$, while the virtual node associated with $\{P^c_{ij}\}_c$ is denoted as $p_{ij}$. We have decomposed the weak link $(i,j)$ into three links: $(i,v_{ij}), (v_{ij},p_{ij}),(p_{ij},j)$, with link rates $1,r_{ij},1$, respectively. The virtual nodes and corresponding link rates for link $(j,i)$ can be defined in a symmetric way.
We follow the definition of control actions, where $\left\{\beta_{ij}^{(k)}(t)\right\}$ represent the control action of the system at time $t$. Depending on whether or not packet $k$ has already been transmitted by node $i$, we also divide the decision actions into two types, denoted as $\beta^{1(k)}_{ij}$ and $\beta^{2(k)}_{ij}$, respectively.
In the following algorithm, we only make control decisions for the packets at the head of corresponding queues. Therefore we can replace the superscript packet index $(k)$ by its commodity $c$ without any worry of confusion.
When $\beta^{1c}_{ij}(t)=1$, node $i$ pushes a {\it new} packet from $U^c_i(t)$ into the tail of $V^c_{ij}$ at the beginning of slot $t$. This implies that the system assigns node $j$ to be the next forwarder for that packet. Once the packet is pushed into $V^c_{ij}$, we transmit the packet at the head of $V^c_{ij}$ to node $j$, which is generally a different packet. Thus, an amount $r_{ij}$ of information can be accumulated at the tail of $P^c_{ij}(t)$, and the length of $V^c_{ij}$ is reduced by $r_{ij}$. This mechanism ensures that the packets in the virtual buffer are transmitted and decoded in a FIFO fashion.
When $\beta^{2c}_{ij}(t)=1$, without pushing a new packet into the buffer, we retransmit the packet at the head of $V^c_{ij}(t)$.
We let
\begin{align}
\beta^c_{ij}(t)&=\beta_{ij}^{1c}(t)+\beta^{2c}_{ij}(t).
\end{align}
We require that
\begin{align}\label{eqn:bcon}
\sum_{c,j}\beta^c_{ij}(t)\leq 1,\quad \forall i,t.
\end{align}
Further, we define $f^c_{ij}(t)\in\{0,1\}$ as binary decoding control actions. $f^c_{ij}(t)=1$ indicates that receiver $j$ has accumulated enough information to decode the packet at the head of $P^c_{ij}(t)$. It then moves that packet out of $P^c_{ij}(t)$ and into $U^c_j(t)$. We impose the following constraint
\begin{align}\label{eqn:fcon}
f^c_{ij}(t)&\leq P^c_{ij}(t),\quad\forall c,i,j
\end{align}
which indicates that $f^c_{ij}(t)=1$ only when $P^c_{ij}(t)\geq 1$, i.e., receiver $j$ has accumulated enough information to decode the packet at the head of $P^c_{ij}(t)$. The reason for imposing this constraint on $f^c_{ij}(t)$ is that we cannot simply use $(P^c_{ij}(t)-f^c_{ij}(t))^+$ to represent the queue length of $P^c_{ij}$ after a decoding action is taken at that queue. If $P^c_{ij}(t)<1$, even if $f^c_{ij}(t)=1$, after the decoding action the queue length will still be $P^c_{ij}(t)$, since no packet can be successfully decoded. This is not equal to $(P^c_{ij}(t)-f^c_{ij}(t))^+$, which is zero in this scenario.
Then, according to constraints (\ref{eqn:bcon}) and (\ref{eqn:fcon}), the queue lengths evolve according to
\begin{align}
U^c_i(t+1)&=\Big( U^c_i(t)\hspace{-0.03in}-\hspace{-0.03in}\sum_{j}\beta_{ij}^{1c}(t)\Big)^+\hspace{-0.03in}+\hspace{-0.03in}\sum_{l}f^c_{li}(t)\hspace{-0.03in}+\hspace{-0.03in}A^c_i(t)\label{eqn:u}\\
V^c_{ij}(t+1)&\leq\left(V^c_{ij}(t)+\beta_{ij}^{1c}(t)(1-r_{ij})-\beta^{2c}_{ij}(t)r_{ij}\right)^+\nonumber\\
&= \left(V^c_{ij}(t)+\beta_{ij}^{1c}(t)-\beta^c_{ij}(t)r_{ij}\right)^+\label{eqn:v}\\
P^c_{ij}(t+1)&\leq P^c_{ij}(t)+\beta^c_{ij}(t)r_{ij}-f^c_{ij}(t)\label{eqn:p}
\end{align}
The inequalities in (\ref{eqn:v}) and (\ref{eqn:p}) come from the fact that $\beta^{1c}_{ij}(t)$ and $\beta^{2c}_{ij}(t)$ can be applied to a dummy packet when a queue is empty. When the corresponding queue is not empty, the inequality becomes an equality.
\begin{figure}[t]
\begin{center}
\scalebox{0.45} {\epsffile{virtual_queue2.eps}}
\end{center}
\vspace{-0.15in}
\caption{The constructed virtual system.}
\label{fig:virtual}
\vspace{-0.15in}
\end{figure}
Define
\begin{align}\label{eqn:lya12}
L(\mathbf{U}(t),\mathbf{V}(t),\mathbf{P}(t))&=\sum_{c,i}(U^c_i(t))^2+\sum_{c,(i,j)}\hspace{-0.03in}(V^c_{ij}(t))^2\nonumber\\
&\quad+\sum_{c,(i,j)}\hspace{-0.03in}(P^c_{ij}(t)-\eta)^2
\end{align}
where $\eta$ is a parameter used to control the length of $P^c_{ij}(t)$.
Define $\Delta(t)$ as the one-slot sample path Lyapunov drift:
\begin{align*}
\Delta(t)&:= L(\mathbf{U}(t+1),\hspace{-0.03in}\mathbf{V}(t+1),\hspace{-0.02in}\mathbf{P}(t+1))-L(\mathbf{U}(t),\hspace{-0.03in}\mathbf{V}(t),\hspace{-0.02in}\mathbf{P}(t))
\end{align*}
\begin{Lemma}\label{lemma:1drift}
Under constraints (\ref{eqn:bcon}) and (\ref{eqn:fcon}), the sample path Lyapunov drift satisfies
\begin{align}
\Delta(t)&\leq 2\sum_{c,i}U^c_i(t)A^c_i(t)-2\sum_{c,i,j}[U^c_i(t)-V^c_{ij}(t)]\beta^{1c}_{ij}(t)\nonumber\\
&\quad-2\sum_{c,i,j}[V^c_{ij}(t)-P^c_{ij}(t)]r_{ij}\beta^c_{ij}(t)\nonumber\\
&\quad -2\sum_{c,i,j}[P^c_{ij}(t)-\eta-U^c_j(t)]f^c_{ij}(t)+\alpha_2\label{eqn:delta}
\end{align}
where
\begin{align}
\alpha_2&=KN(d+A_{max})^2+2N+KNd
\end{align}
\end{Lemma}
The proof of this lemma is provided in Appendix~\ref{apx:lemma_1drift}.
\subsection{The Algorithm}\label{sec:virtual_algo}
In contrast to the algorithm of Section~\ref{sec:Tslot}, this algorithm updates every timeslot. The purpose of the algorithm is to minimize the right hand side of (\ref{eqn:delta}) given the current $\mathbf{U},\mathbf{V},\mathbf{P}$.
\begin{itemize}
\item[1)] \textbf{Find per-link backpressure.} At the beginning of a timeslot, node $i$ checks its neighbors and computes the differential backlogs. We compute the weight for the link between node $i$ and the first virtual node, and, separately, the weight for the link between the two virtual nodes. Specifically, the weight for control action $\beta^{1c}_{ij}$ is computed as
$$W^{1c}_{ij}(t)= [U^c_i(t)-V^c_{ij}(t)+(V^c_{ij}(t)-P^c_{ij}(t))r_{ij}]^+$$
and the weight for control action $\beta^{2c}_{ij}$ is computed as
$$W^{2c}_{ij}(t)=[V^c_{ij}(t)-P^c_{ij}(t)]^+r_{ij}$$
The weight of commodity $c$ over link $(i,j)$ is $W^c_{ij}(t)=\max\{W^{1c}_{ij}(t),W^{2c}_{ij}(t)\}$.
The weight for the link $(i,j)$ is $$W_{ij}(t)=\max_c W^{c}_{ij}(t),$$ and the optimal commodity $$c^*_{ij}=\arg\max_c W^{c}_{ij}(t).$$
\item[2)] \textbf{Select forwarder.} Choose the potential forwarder for node $i$ in the current slot as the node $j$ with the maximum weight $W_{ij}(t)$. Denote this node as $j^*_i=\arg \max_j W_{ij}(t)$.
\item[3)] \textbf{Choose activation pattern.} Define the optimal activation pattern $s^*$ as the pattern $s\in S$ that maximizes
$$\sum_{i\in s}W_{ij^*_i}.$$
\item[4)] \textbf{Transmit packets.} For each $i\in s^*$, if $W_{ij^*_i}>0$, let node $i$ transmit a packet of commodity $c^*_{ij^*_i}$ to node $j^*_i$. For strong links, node $i$ transmits a packet from the head of $U^{c^*}_i$. If link $(i,j^*_i)$ is a weak link and $W_{ij^*_i}(t)=W^{1c^*}_{ij^*_i}(t)$, node $i$ pushes a new packet from $U^{c^*}_i$ into $V^{c^*}_{ij^*_i}$ and transmits the packet at the head of $V^{c^*}_{ij^*_i}(t)$; otherwise, node $i$ resends the packet at the head of $V^{c^*}_{ij^*_i}(t)$.
\item[5)] \textbf{Decide on decoding actions.} For each link $(i,j)$ and each commodity $c$, we choose $f^c_{ij}(t)\in\{0,1\}$ to maximize
\begin{align}
[P^c_{ij}(t)-\eta-U^c_{j}(t)]f^c_{ij}(t)
\end{align}\textcolor{black}{where $\eta$ is a parameter greater than or equal to 1. We let $f^c_{ij}(t)=1$ when $P^c_{ij}(t)-\eta-U^c_{j}(t)=0$.}
\end{itemize}
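The per-link weight computation in Step 1) and the decoding rule in Step 5) can be sketched as follows (illustrative Python; the dictionaries \texttt{U}, \texttt{V}, \texttt{P}, \texttt{r} stand in for the queue lengths and link rates defined above, and the function names are ours):
\begin{verbatim}
def link_weights(U, V, P, r, i, j, c):
    """Step 1): weights W^{1c}_{ij} and W^{2c}_{ij} for link (i, j)."""
    v, p = V[(i, j)][c], P[(i, j)][c]
    w1 = max(U[i][c] - v + (v - p) * r[i][j], 0)  # push a new packet
    w2 = max(v - p, 0) * r[i][j]                  # retransmit head of V
    return max(w1, w2), (1 if w1 >= w2 else 2)

def decode_decision(U, P, i, j, c, eta=1):
    """Step 5): decode the head of P_ij only when doing so does not
    hurt the drift; with eta >= 1, Lemma 2 guarantees P >= 1 whenever
    we decode, so a full packet is actually available."""
    return 1 if P[(i, j)][c] - eta - U[j][c] >= 0 else 0
\end{verbatim}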
\begin{Lemma}\label{lemma:Pij}
Under the above virtual queue based algorithm: $(a)$ If $P^c_{ij}(t)<\eta$ for some weak link $(i,j)$ and slot $t$, then $f^c_{ij}(t)=0$. $(b)$ If $P^c_{ij}(t_0)\geq \eta-1$, then, under the proposed algorithm, $P^c_{ij}(t)\geq \eta-1$ for every $t\geq t_0$.
\end{Lemma}
\begin{Proof}
In order to maximize $(P^c_{ij}(t)-\eta-U^c_j(t))f^c_{ij}(t)$, we have $f^c_{ij}(t)=1$ only when $P^c_{ij}(t)-\eta-U^c_j(t)\geq 0$. Therefore, if $P^c_{ij}(t)<\eta$, $f^c_{ij}(t)$ must equal zero, which proves $(a)$.
Now suppose that $P^c_{ij}(t)\geq \eta-1$ for some slot $t$. We show that it also holds for $t+1$. If $P^c_{ij}(t)\geq \eta$, then it can decrease by at most one packet on a single slot, so that $P^c_{ij}(t+1)\geq P^c_{ij}(t)-f^c_{ij}(t)\geq \eta-1$. If $P^c_{ij}(t)<\eta$, we must have $f^c_{ij}(t)=0$, the queue cannot decrease in slot $t$, and we again have $P^c_{ij}(t+1)\geq \eta-1$.
\end{Proof}
With Lemma~\ref{lemma:Pij}, we can see that when setting $\eta=1$ under the proposed algorithm, if $P^c_{ij}(t)<1$ for some weak link $(i,j)$ and slot $t$, then $f^c_{ij}(t)=0$; $f^c_{ij}(t)$ can only equal one when $P^c_{ij}(t)\geq 1$. Thus, constraint (\ref{eqn:fcon}) is satisfied automatically in every slot under the proposed algorithm.
\begin{Theorem}\label{thm:capacity2}
For a network with given link rates $\{r_{ij}\}$ and a feasible activation pattern set $\mathcal{S}$, the network capacity region $\Lambda'$ for the constructed network consists of all rate vectors $\{\lambda^c_n\}$ for which there exist flow variables $\{\mu^{vc}_{ij}\}, v=1,2,3$, together with probabilities $\pi_s$ for all possible activation patterns $s\in \mathcal{S}$ such that
\begin{align}
\mu^{vc}_{ij}&\geq 0,\quad \mu^{vc}_{ci}=0,\quad \mu^{vc}_{ii}=0, \quad \forall i,j,v,c\label{eqn:cap21}\\
\sum_{l}\mu^{3c}_{li}+\lambda^c_i&\leq \sum_{j}\mu^{1c}_{ij},\quad \forall i\neq c\label{eqn:cap22}\\
\mu^{1c}_{ij}&\leq \mu^{2c}_{ij},\quad \mu^{2c}_{ij}\leq \mu^{3c}_{ij}\quad \forall i,j,c\label{eqn:cap24}\\
\sum_c\mu^{2c}_{ij}&\leq \sum_c\sum_{s\in S}\pi_s\theta^c_{ij}(s)r_{ij},\quad \forall i,j\label{eqn:cap23}\\
\sum_{s\in \mathcal{S}}\pi_s&\leq1
\end{align}
where the probabilities $\theta^c_{ij}(s)$ satisfy
\begin{align}
\theta^c_{ij}(s)=0 \mbox{ if }i\notin s, \quad \sum_{c,j}\theta^c_{ij}(s)= 1,\forall i\label{eqn:cap25}
\end{align}
\end{Theorem}
\begin{Proof}
The necessity part can be proved in the same way as Theorem~\ref{thm1}. The sufficiency is proved by constructing an algorithm that stabilizes all rate vectors satisfying the constraints.
\end{Proof}
In this constructed virtual network, $\mu_{ij}^{1c}, \mu_{ij}^{2c}, \mu_{ij}^{3c}$ can be interpreted as the flow over links $(i,v_{ij})$, $(v_{ij},p_{ij})$, $(p_{ij},j)$, respectively.
The constraints (\ref{eqn:cap21}) represent non-negativity and flow efficiency constraints. The constraints in (\ref{eqn:cap22}), (\ref{eqn:cap24}) represent flow conservation constraints, where the exogenous arrival flow rates for nodes $v_{ij}, p_{ij}$ are zero. The constraints in (\ref{eqn:cap23}) represent the physical link constraint for virtual link $(v_{ij},p_{ij})$, which equals the link constraint for the real link $(i,j)$ in the original system. Note that there are no explicit link constraints for $(i,v_{ij})$ and $(p_{ij},j)$, since the transfer of packets over these links happens at the same node, and there is no physical link constraint on them.
\begin{Lemma}
The network capacity region for the virtual network $\Lambda'$ defined in Theorem~\ref{thm:capacity2} is equal to that for the original system $\Lambda$ defined in Theorem~\ref{thm1}.
\end{Lemma}
\begin{Proof}
First, we show that if $\boldsymbol{\lambda}\in \Lambda'$, then it must lie in $\Lambda$ as well. This can be shown directly by letting $\mu^c_{ij}=\mu_{ij}^{2c}$; thus, we have $\mu_{ij}^{1c}\leq \mu^c_{ij}\leq\mu_{ij}^{3c}$. Plugging into (\ref{eqn:cap22}), we obtain (\ref{eqn:cap2}), i.e., if $\boldsymbol{\lambda}$ satisfies the constraints in (\ref{eqn:cap21})-(\ref{eqn:cap25}), it must satisfy (\ref{eqn:cap1})-(\ref{eqn:cap5}) as well. Thus $\boldsymbol{\lambda}\in \Lambda$.
The other direction can be shown in the following way: we prove that for any $\boldsymbol{\lambda}+\boldsymbol{\epsilon}\in \Lambda$, $\boldsymbol{\lambda}+\frac{\boldsymbol{\epsilon}}{2d+1}\in \Lambda'$, where $d$ is the maximum degree of the network.
Since $\boldsymbol{\lambda}+\boldsymbol{\epsilon}\in \Lambda$, we have
\begin{align}
\sum_{l}\mu^c_{li}+\lambda^c_i+\epsilon&\leq \sum_{j}\mu^c_{ij},\quad \forall i\neq c.\label{eqn:cap32}
\end{align}
By letting $\mu_{ij}^{2c}=\mu^c_{ij}$, we have that (\ref{eqn:cap23})-(\ref{eqn:cap25}) satisfied. At the same time, we let $\mu_{ij}^{1c}+\epsilon_1=\mu_{ij}^{3c}-\epsilon_1=\mu_{ij}^{2c}$, and plug them into (\ref{eqn:cap32}), which gives
\begin{align}
\sum_{l}(\mu^{3c}_{li}-\epsilon_1)+\lambda^c_i+\epsilon&\leq \sum_{j}(\mu^{1c}_{ij}+\epsilon_1),\quad \forall i\neq c.
\end{align}
Therefore,
\begin{align}
\sum_{l}\mu^{3c}_{li}+\lambda^c_i+\epsilon-2d\epsilon_1&\leq \sum_{j}\mu^{1c}_{ij},\quad \forall i\neq c.
\end{align}
By letting $\epsilon-2d\epsilon_1=\epsilon_1$, we have
\begin{align}
\sum_{l}\mu^{3c}_{li}+\lambda^c_i+\epsilon_1&\leq \sum_{j}\mu^{1c}_{ij},\quad \forall i\neq c\\
\mu_{ij}^{1c}+\epsilon_1&=\mu_{ij}^{2c},\quad \mu_{ij}^{2c}+\epsilon_1=\mu_{ij}^{3c}.
\end{align}
Thus, we have $\boldsymbol{\lambda}+\boldsymbol{\epsilon}_1\in \Lambda'$. As $\epsilon\rightarrow 0$, $\epsilon_1$ approaches zero as well. Thus, $\Lambda=\Lambda'$.
\end{Proof}
\begin{Theorem}\label{thm:virtual}
\textcolor{black}{For $\eta\geq 1$}, the proposed algorithm stabilizes any rate vector satisfying $\boldsymbol{\lambda}+\boldsymbol{\epsilon} \in \Lambda$. The average expected queue backlog in the system is upper bounded by $$\frac{(2d+1)(KN(d+A_{max})^2+2N+(2\eta+1)KNd)}{\epsilon}.$$\end{Theorem}
The proof of the theorem is given in Appendix~\ref{apx:thm_virtual}. \textcolor{black}{Since the upper bound is monotonically increasing in $\eta$, we can always set $\eta$ to 1 to achieve a better delay performance.}
The algorithm updates every slot. This avoids the delay caused by infrequent policy updating in the $T$-slot algorithm. On the other hand, we introduce virtual queues in the system. Since the differential backlog in Step 1) of the algorithm is not the physical differential backlog between nodes $(i,j)$ in the real system, the inaccuracy of the queue length information can, potentially, increase the average backlog in the system. This is reflected by the $2d+1$ factor in the upper bound. But for arrival rate vectors that are close to the boundary of the network capacity region, the $T$-slot algorithm can only stabilize the system if $T$ is large. In such situations, the virtual queue based algorithm thus attains better delay performance.
The algorithm exhausts the maximum stability region of the network without any pre-specified parameter depending on traffic statistics. Compared to the $T$-slot algorithm where the stabilizing parameter $T$ actually depends on how close a rate vector $\boldsymbol{\lambda}$ gets to the boundary of the network stability region $\Lambda$, this is a big advantage.
\section{Discussions}
\subsection{Enhanced Virtual Queue Based Algorithm}
According to Theorem~\ref{thm:Tslot}, the average expected backlog in the system under the $T$-slot algorithm is $O(T^2)$, which indicates poor delay performance when $T$ is large. The virtual queue based algorithm avoids the long delay caused by the infrequent updating of queue length information. However, because of the introduction of virtual nodes, packets accumulate in virtual queues over weak links, which negatively impacts delay performance, especially when the system is lightly loaded. In order to improve delay performance, we propose to enhance the virtual queue based algorithm by adjusting the term associated with $V_{ij}$ in the Lyapunov function.
Define a modified Lyapunov function
\begin{align}\label{eqn:lya3}
L(\mathbf{U}(t),\mathbf{V}(t),\mathbf{P}(t))&=\sum_{c,i}(U^c_i(t))^2+\sum_{c,(i,j)}(V^c_{ij}(t)+\gamma/ r_{ij})^2\nonumber\\
&\quad+\sum_{c,(i,j)}(P^c_{ij}(t)-\eta)^2.
\end{align}
Compared with (\ref{eqn:lya12}), we have added a term $\gamma/r_{ij}$ to each $V^c_{ij}(t)$ in (\ref{eqn:lya3}). This is equivalent to adding a backlog of $\gamma/r_{ij}$ to each virtual queue.
Following an analysis similar to that of the virtual queue algorithm (cf. Appendix~\ref{apx:lemma_1drift}), we can show that under constraints (\ref{eqn:bcon}) and (\ref{eqn:fcon}), the sample path Lyapunov drift satisfies
\begin{align*}
\Delta(t)&\leq \hspace{-0.02in}2\sum_{c,i}U^c_i(t)A^c_i(t)\hspace{-0.02in}-\hspace{-0.02in}2\hspace{-0.02in}\sum_{c,i,j}[V^c_{ij}(t)+\gamma/r_{ij}-P^c_{ij}(t)]\beta^c_{ij}(t)\nonumber\\
&\quad-2\sum_{c,i,j}[U^c_i(t)-(V^c_{ij}(t)+\gamma/r_{ij})]\beta^{1c}_{ij}(t)\nonumber\\
&\quad -2\sum_{c,i,j}[(P^c_{ij}(t)-\eta)-U^c_j(t)]f^c_{ij}(t)+\alpha_3
\end{align*}
where $\alpha_3$ is a positive constant. In order to minimize the one-step Lyapunov drift, we substitute the following modified step M1) for step 1) of the virtual queue based algorithm:
\begin{itemize}
\item[M1)] \textbf{Find per-link backpressure.} At the beginning of a timeslot, node $i$ checks its neighbors and computes the differential backlogs. We compute the weight for the link between node $i$ and the first virtual node, and, separately, the weight for the link between the two virtual nodes. Specifically, the weight for control action $\beta^{1c}_{ij}$ is computed as
\begin{align*}
W^{1c}_{ij}(t)= &[U^c_i(t)-(V^c_{ij}(t)+\gamma/r_{ij})\\
&+(V^c_{ij}(t)+\gamma/r_{ij}-P^c_{ij}(t))r_{ij}]^+,
\end{align*}
and the weight for control action $\beta^{2c}_{ij}$ is computed as
$$W^{2c}_{ij}(t)=[V^c_{ij}(t)+\gamma/r_{ij}-P^c_{ij}(t)]^+r_{ij}.$$
The weight of commodity $c$ over link $(i,j)$ is $W^c_{ij}(t)=\max\{W^{1c}_{ij}(t),W^{2c}_{ij}(t)\}$.
The weight for link $(i,j)$ is $$W_{ij}(t)=\max_c W^{c}_{ij}(t),$$ and the optimal commodity $$c^*_{ij}=\arg\max_c W^{c}_{ij}(t).$$
\end{itemize}
The rest of the steps remain the same as in Sec.~\ref{sec:virtual_algo}. Following steps similar to those in the proof of Theorem~\ref{thm:virtual}, we can show that the enhanced version also achieves the maximum stability region. The intuition is that in the heavily backlogged regime $V^c_{ij}(t)\gg \gamma/r_{ij}$, so the added queue backlog is negligible and does not impact the stability of the queue.
As in the virtual queue based algorithm, in the enhanced algorithm, we also set $\eta=1$ to guarantee that $f^c_{ij}(t)=1$ only when $P^c_{ij}(t)\geq 1$. We now discuss the effect of the $\gamma$ parameter. We set $\gamma=1/2$. By setting $\gamma$ to be some positive value, the system adds some {\it virtual} backlog to buffer $V_{ij}$, thus preventing packets from entering the empty buffers over the weak links when the system starts from an initial empty state. It also increases the backpressure between $V_{ij}$ and $P_{ij}$. Therefore, packets tend to be pushed through links more quickly, and the decoding time is shortened accordingly. Besides, in the modified Lyapunov function, we select the weights for the {\it virtual} backlogs of the virtual queues as the inverse of the link rates. The reason for this selection is that the number of slots required to deliver a packet through a link equals the inverse of the link rate. We aim to capture different delay effects over different links through this adjustment. The intuition behind the enhanced algorithm is that, when the system is lightly loaded, passing packets only through strong links can support the traffic load while still providing good delay performance. Therefore, using weak links is not necessary, and using strong links is preferable. Setting the virtual backlog length to be $\gamma/r_{ij}$ forces packets to select strong links and improves the delay performance of the virtual queue based algorithm in the low traffic regime. When the system is heavily loaded and strong links cannot support all traffic flows, the differential backlogs over certain strong links eventually \textcolor{black}{decrease}, and weak links start to be used. \textcolor{black}{The enhanced algorithm essentially is a hybrid of the classic backpressure algorithm ($T=1$) and the virtual queue based algorithm. It always stabilizes the system, and automatically adjusts the portion of slots that the system operates under each algorithm to maintain good delay performance.}
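A minimal sketch of the modified weight computation in step M1), assuming the same illustrative data structures as in the earlier sketch and the default $\gamma=1/2$ discussed above:
\begin{verbatim}
def enhanced_link_weights(U, V, P, r, i, j, c, gamma=0.5):
    """Step M1): each V_ij is treated as holding gamma / r_ij extra
    (virtual) backlog, which steers light traffic onto strong links."""
    v_eff = V[(i, j)][c] + gamma / r[i][j]
    w1 = max(U[i][c] - v_eff + (v_eff - P[(i, j)][c]) * r[i][j], 0)
    w2 = max(v_eff - P[(i, j)][c], 0) * r[i][j]
    return max(w1, w2), (1 if w1 >= w2 else 2)
\end{verbatim}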
\subsection{Dependent Link Rates}\label{sec:vary}
For simplicity of analysis, in previous sections we assumed that the link rate for a given transmitter-receiver pair is fixed. We can easily relax this assumption and generalize the system model by assuming that the link rates are not fixed but are rather a function of the chosen activation pattern. Specifically, to better capture the effects of interference, we assume that the rate of link $(i,j)$ under activation pattern $s$ is $r_{ij}(s)$. Then, the network capacity region under the new assumption is characterized by the same inequalities as in Theorem~\ref{thm1}, except that in eqn.~(\ref{eqn:cap3}) $r_{ij}(s)$ is used instead of $r_{ij}$.
The scheduling algorithms should be adjusted accordingly in order to achieve the network capacity region. E.g., for the $T$-slot algorithm, in each updating slot, after determining the maximizing commodity $c^*_{ij}$ for each link $(i,j)$, the system selects the activation pattern, as well as the corresponding forwarder for each active transmitter, to maximize
\begin{align*}
\max_{s}\sum_{i\in s,j\in\mathcal{N}(i)}[Q^{c^*_{ij}}_i(t)-Q^{c^*_{ij}}_j(t)]^+r_{ij}(s).
\end{align*}
The remaining steps remain the same.
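As an illustrative sketch (in our own notation; a brute-force search over $\mathcal{S}$, practical only for small pattern sets), the joint selection of the activation pattern and the forwarders can be written as follows, with each active transmitter contributing the weight of its best forwarder.
\begin{verbatim}
def best_pattern(patterns, Q, c_star, rate, neighbors):
    # patterns:  iterable of activation patterns (sets of transmitters)
    # Q:         dict (node, commodity) -> queue backlog Q^c_i(t)
    # c_star:    dict (i, j) -> maximizing commodity of link (i, j)
    # rate:      function (i, j, s) -> pattern-dependent rate r_ij(s)
    # neighbors: dict node -> list of neighboring nodes
    def weight(s):
        total = 0.0
        for i in s:
            # each active transmitter i forwards to its best neighbor
            total += max(
                max(Q[(i, c_star[(i, j)])] - Q[(j, c_star[(i, j)])], 0.0)
                * rate(i, j, s)
                for j in neighbors[i])
        return total
    return max(patterns, key=weight)
\end{verbatim}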
For the virtual queue based algorithm, the maximizing commodity should be jointly selected with the activation pattern and the corresponding forwarders at the same time.
\subsection{Distributed Implementation}
The $T$-slot algorithm and the virtual queue based algorithm presented in the previous sections involve solving a constrained optimization problem (max-weight matching) in a centralized fashion. Here we consider distributed implementations. We assume that nodes know the rates of the links between themselves and their neighbors, as well as the queue backlogs of their neighbors.
In interference-free networks, where nodes can transmit simultaneously without interfering with each other, minimizing the Lyapunov drift decomposes into local optimization problems: each node individually makes its transmission decision based only on locally available information.
In networks with non-trivial interference constraints, we can use the message passing algorithm proposed in \cite{shah_msg_pass07} to select the activation pattern in a distributed way. First, each node selects its forwarder based on its local information. Then, the system uses the belief propagation based message-passing algorithm to select the optimal activation pattern that minimizes the Lyapunov drift. If the underlying interference graph is bipartite, \cite{shah_msg_pass07} shows that the message-passing algorithm always converges to the optimal solution. \textcolor{black}{Another possible approach is to develop carrier sensing based randomized scheduling algorithms. Carrier sensing based distributed algorithms are discussed in \cite{jiang, shah_random}, among others. The throughput optimality of these algorithms is established under certain assumptions. Some other distributed implementations of backpressure algorithms are discussed in Chapter 4.8 of \cite{Neely_now}.}
\section{Simulation Results}\label{sec:simu}
In this section we present detailed simulation results. These results
illustrate the basic properties of the algorithms.
\subsection{A single-commodity scenario}
First consider the $4$-node wireless network shown in Fig.~\ref{fig:4node}, where the links with nonzero rates are shown. We consider a single-commodity scenario in which new packets destined for node $4$ arrive at node $1$ and node $2$ according to independent Bernoulli processes with rates $\lambda_1$ and $\lambda_2$, respectively. Node $3$ does not have incoming packets; it acts purely as a relay. We assume that the system does not have any activation constraints, i.e., all nodes can transmit simultaneously without causing each other interference.
The maximum stability region $\Lambda$, shown in Fig.~\ref{fig.stability_region}, is the union of the rate pairs ($\lambda_1$, $\lambda_2$) defined according to Theorem~\ref{thm1}. If mutual information accumulation is not allowed, the corresponding maximum stability region is indicated by the dashed triangle inside $\Lambda$. This follows because when weak links are not utilized, the only route from node 1 to node 4 is through node 2, so the sum of the arrival rates at nodes 1 and 2 cannot exceed the link rate of 1. When mutual information accumulation is exploited, the weak link from node 1 to node 4 with rate $1/9$ can be utilized, expanding the stability region.
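To see where the symmetric corner point $B=(9/17,9/17)$ comes from, suppose that links $(1,2)$ and $(2,4)$ both have rate 1 (our reading of Fig.~\ref{fig:4node}), ignore routes through the relay node 3 for this back-of-the-envelope check, and let node 1 spend a fraction $\alpha$ of its slots on the weak link $(1,4)$. With $\lambda_1=\lambda_2=\lambda$, stability requires
\begin{align*}
\lambda-\frac{\alpha}{9}&\leq 1-\alpha &&\text{(node 1's residual traffic fits on link $(1,2)$)},\\
2\lambda-\frac{\alpha}{9}&\leq 1 &&\text{(link $(2,4)$ carries all traffic except the $\alpha/9$ sent over $(1,4)$)},
\end{align*}
and making both constraints tight yields $\alpha=9/17$ and $\lambda=9/17$.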
\begin{figure}[t]
\begin{center}
\scalebox{0.45} {\epsffile{4node.eps}}
\end{center}
\vspace{-0.15in}
\caption{The 4-node network used to compare the $T$-slot algorithm and the virtual queue based algorithm. The number labeling each link is the rate of that link.}
\label{fig:4node}
\end{figure}
\begin{figure}[t]
\begin{center}
\scalebox{0.5} {\epsffile{stability.eps}}
\end{center}
\vspace{-0.15in}
\caption{The maximum stability region of the 4-node network. The inside triangular region is the network capacity region when MIA is not exploited.}
\label{fig.stability_region}
\end{figure}
We first compare the performance of the $T$-slot algorithm for different values of $T$. For each $T$, we conduct the simulation for arrival rates $\lambda_1=\lambda_2=\lambda$ ranging from 0 to 0.55. The resulting average backlog curves are shown in Fig.~\ref{fig:avgbacklog}. When $T=1$, the weak links cannot be utilized, and the algorithm can only stabilize arrival rates up to $\lambda=1/2$, which is point $A$ in Fig.~\ref{fig.stability_region}. When $T=9$, the reciprocal of the weak-link rate $1/9$, the algorithm can stabilize arrival rates up to $\lambda=9/17$, corresponding to point $B$ in Fig.~\ref{fig.stability_region}. In this case, all of the partial packets transferred over the weak link $(1,4)$ are eventually decoded, and that weak link is fully utilized. This is a special scenario in which the value of $T$ perfectly matches the rate of the weak link. For larger networks containing many weak links, selecting $T$ to match all of the weak links may be prohibitive, since such a value can be very large. Apart from such perfect matching scenarios, for more general values of $T$ the weak link is only partially utilized; therefore, the maximum $\lambda$ the algorithm can stabilize is some value between 1/2 and 9/17. In general, a larger $T$ stabilizes larger arrival rates, but results in an increased average backlog in the system. This is illustrated by the curves with $T=15,60$ in Fig.~\ref{fig:avgbacklog}.
Fig.~\ref{fig:avgbacklog} also plots the performance of the virtual queue based algorithm. The system can be stabilized up to $\lambda=9/17$ under the virtual queue based algorithm. Compared with the $T$-slot algorithm, the virtual queue based algorithm attains a much better delay performance for large values of $\lambda$, i.e., when the system is in the heavy traffic regime; it dominates even the curve with $T=9$ at high rates. For small values of $\lambda$, the virtual queue based algorithm has worse delay performance. This is because the algorithm requires the virtual queues to build up to certain lengths in order to push packets through the weak links. The virtual queue based algorithm has relatively constant delay performance for $\lambda\in [0,1/2]$, while under the $T$-slot algorithm the average backlog increases monotonically with $\lambda$.
\begin{figure}[t]
\begin{center}
\scalebox{0.3} {\epsffile{evolve_along_lambda_ra.eps}}
\end{center}
\vspace{-0.15in}
\caption{Comparison of average backlog in the system under the algorithms.}
\label{fig:avgbacklog}
\end{figure}
\subsection{A multi-commodity scenario}
Next we consider the $10$-node network shown in Fig.~\ref{fig:10node}. We consider a multi-commodity scenario in which packets destined for node $10$ arrive at node $1$ and packets destined for node $9$ arrive at node $2$. Arrivals are distributed according to two independent Bernoulli processes with rates $\lambda_1$ and $\lambda_2$, respectively. We assume that the system does not have any activation constraints, so that all nodes can transmit simultaneously without causing interference.
The maximum stability region $\Lambda$ is shown in Fig.~\ref{fig.stability_region2}. If mutual information accumulation is not allowed, the corresponding maximum stability region is the dashed triangle inside $\Lambda$. This follows because when weak links are not used, the routes from node 1 to node 10 and the routes from node 2 to node 9 must all pass through link $(4,7)$, so the sum of the arrival rates at nodes 1 and 2 cannot exceed that link's rate. When mutual information accumulation is exploited, weak links can be utilized to form the additional routes $1\rightarrow 5\rightarrow 6\rightarrow 10$ and $2\rightarrow 3\rightarrow 8\rightarrow 9$; thus an expanded stability region can be achieved.
\begin{figure}[t]
\begin{center}
\scalebox{0.45} {\epsffile{10node.eps}}
\end{center}
\caption{The 10-node network used to compare the $T$-slot algorithm and the virtual queue based algorithm. The number labeling each link is the rate of that link.}
\label{fig:10node}
\end{figure}
\begin{figure}[t]
\begin{center}
\scalebox{0.5} {\epsffile{stability2.eps}}
\end{center}
\caption{The maximum stability region of the 10-node network, where $B=(11/15,11/15)$. The dashed triangular region is the network capacity region when MIA is not exploited.}
\label{fig.stability_region2}
\end{figure}
We first compare the performance of the $T$-slot algorithm for different values of $T$. For each $T$, we conduct the simulation for arrival rates $\lambda_1=\lambda_2=\lambda$ ranging from 0 to 0.75. The resulting average backlog curves are shown in Fig.~\ref{fig:avgbacklog2}. When $T=1$, the weak links are not utilized, so the algorithm can only stabilize arrival rates up to $\lambda=1/2$, which is point $A$ in Fig.~\ref{fig.stability_region2}. When $T=30$, which is the reciprocal of the link rate product $\frac{1}{2}\cdot\frac{1}{3}\cdot\frac{1}{5}$ and thus perfectly matches the rates of the weak links, the algorithm can stabilize arrival rates up to $\lambda=11/15$, corresponding to point $B$ in Fig.~\ref{fig.stability_region2}. For more general values of $T$, the weak links are only partially utilized; therefore, the maximum $\lambda$ the algorithm can stabilize is some value between 1/2 and 11/15. In general, a larger $T$ stabilizes larger arrival rates, but results in an increased average backlog in the system. This is illustrated by the curves with $T=5,17$ in Fig.~\ref{fig:avgbacklog2}. In order to approach the boundary point, a large $T$ is required in general.
\begin{figure}[t]
\begin{center}
\scalebox{0.5} {\epsffile{revised_plot.eps}}
\end{center}
\vspace{-0.15in}
\caption{Comparison of average backlog in the system under the algorithms.}
\label{fig:avgbacklog2}
\vspace{-0.15in}
\end{figure}
Fig.~\ref{fig:avgbacklog2} also presents simulation results for the virtual queue based algorithm. As expected, the system can be stabilized up to the edge of the stability region, $\lambda=11/15$. Compared with the $T$-slot algorithms, the virtual queue based algorithm attains a much better delay performance in the heavy traffic regime; it dominates the curve with $T=30$ over the entire displayed rate region. Similar to the single-commodity scenario, for small values of $\lambda$ the virtual queue based algorithm does not show much advantage in terms of delay. Finally, we also provide simulation results for the enhanced virtual queue based algorithm in Fig.~\ref{fig:avgbacklog2}. The enhanced algorithm stabilizes the full spectrum of arrival rates, i.e., every input rate vector up to $\lambda=11/15$. The delay performance in the light traffic regime ($\lambda<1/2$) is improved under the enhanced version, at the cost of a small delay penalty in the heavy traffic regime. The delay performance transition around $\lambda=1/2$ can be explained by the \textcolor{black}{hybrid nature of, and automatic adjustment between, the classic backpressure algorithm ($T=1$) and the virtual queue based algorithm. In the simulation, we set $\gamma=1/2$. The value of $\gamma$ determines the tradeoff between the delay performance in the light traffic regime and the delay performance in the heavy traffic regime.}
\section{Conclusions}\label{sec:conclusions}
In this paper, we analyzed optimal routing and scheduling policies for wireless networks in which mutual information accumulation is exploited at the physical layer. We first characterized the maximum stability region under a natural assumption on the policy space, and showed that it surpasses the network capacity region when mutual information accumulation is not allowed. Two scheduling policies were proposed to cope with the decoding process introduced by mutual information accumulation: the $T$-slot algorithm and the virtual queue based algorithm both achieve the maximum stability region, but the latter has significantly reduced delay in the heavy traffic regime. We also compared the performance of these two policies analytically and numerically.
\appendices
\section{Proof of the Necessity Part of Theorem~\ref{thm1}}\label{apx:thm1}
Suppose that a stabilizing control strategy exists. It is possible that the strategy may use redundant packet transfers, i.e., allowing multiple copies of a packet to exist at the same time in the network. However, under assumption A3, each packet can only have a single parent, i.e., once a node starts to accumulate information for a packet, it can only decode the packet if the total received information from a single transmitter exceeds the amount of information contained in that packet.
Define $X^c_i(t)$ as the total number of packets of commodity $c$ that have arrived at node $i$ up to slot $t$. Define $\mathcal{D}^c(t)$ as the set of distinct packets of commodity $c$ delivered to the destination node $c$ over $[0,t)$, and let $D^c(t)=|\mathcal{D}^c(t)|$ be the total number of such distinct packets. Then, we have $D^c(t)=\sum_{i=1}^n Y^c_i(t)$, where $Y^c_i(t)$ is defined in Section~\ref{sec:capacity}. If multiple copies of a packet depart from the system, we only count the first delivered copy. In a stable system, for any traffic flow of commodity $c$ entering source node $i$, the average number of distinct packets delivered to destination node $c$ in each slot must equal its arrival rate; thus, we have
\begin{align}\label{eqn:consv}
\lim_{t\rightarrow \infty} \frac{Y^c_i(t)}{t}= \lim_{t\rightarrow \infty} \frac{X^c_i(t)}{t}=\lambda^c_i.
\end{align}
For each distinct delivered packet $k$, there is a single routing path that the packet took from its origin node to its destination node. Denote by $D^c_{ij}(t)$ the total number of distinct packets in $\mathcal{D}^c(t)$ transmitted from node $i$ to node $j$ over $[0,t)$. We have
\begin{align}\label{eqn:flow}
\sum_{l=1,l\neq i}^n D^c_{li}(t)+Y^c_i(t)&= \sum_{j=1,j\neq i}^n D^c_{ij}(t).
\end{align}
Define $T_s(t)$ to be the number of slots within which the system operates with activation pattern $s\in \mathcal{S}$ up to time $t$, and let $T^c_{ij}(s,t)$ be the number of slots that link $(i,j)$ is active and transmitting a packet \textcolor{black}{in $\mathcal{D}^c(t)$} under activation pattern $s$ up to time $t$. Therefore, we have
\begin{align}
D^c_{ij}(t)&= \sum_{s\in \mathcal{S}}T^c_{ij}(s,t)r_{ij}.
\end{align}
Thus,
\begin{align}
\frac{ D^c_{ij}(t)}{t}&= \sum_{s\in \mathcal{S}}\frac{T_s(t)}{t}\frac{T^c_{ij}(s,t)}{T_s(t)}r_{ij}.
\end{align}
Define $\mu^c_{ij}(t)=\frac{D^c_{ij}(t)}{t}$, $\pi_s(t)=\frac{T_s(t)}{t}$ and $\theta^c_{ij}(s,t)=\frac{T^c_{ij}(s,t)}{T_s(t)}$. We note that since we can deliver at most one packet from $i$ to $j$ in each time slot,
\begin{align}\label{eqn:con1}
0\leq\mu^c_{ij}(t)\leq 1, \quad \mu^c_{ii}(t)=0,\quad \mu^c_{c,i}(t)=0.
\end{align}
Because only one activation pattern is allowed per slot, we have
\begin{align}
\sum_{s\in \mathcal{S}}\pi_s(t)&=1.
\end{align}
On the other hand, a node can only transmit a single packet in any slot, and we only count distinct copies of packets; hence, if node $i$ is transmitting a packet of commodity $c$ at time $t$, at most one of the copies received at its neighbors can be counted as a distinct packet in $\mathcal{D}^c(t)$. Thus, we have
\begin{align}
\sum_{c,j}\theta^c_{ij}(s,t)&\leq 1\quad \textrm{if }i\in s.\label{eqn:con2}
\end{align}
We can always make inequality (\ref{eqn:con2}) tight by restricting attention to policies in which a node only transmits when necessary, i.e., if node $i$'s transmission at time $t$ does not contribute to the delivery of any distinct packet to its destination, it keeps silent and is removed from the activation pattern at time $t$. The remaining active nodes form another valid activation pattern, which gives $T_s(t)=\sum_{c,j}T^c_{ij}(s,t)$ for every $i\in s$.
These constraints define a closed and bounded region of finite dimension; thus there must exist an infinite subsequence $\tilde{t}$ over which the individual terms converge to points $\mu^c_{ij}$, $\pi_s$ and $\theta^c_{ij}(s)$ that also satisfy the inequalities (\ref{eqn:con1})-(\ref{eqn:con2}):
\begin{align}
\lim_{\tilde{t}\rightarrow \infty} \mu^c_{ij}({\tilde{t}})=\mu^c_{ij},\\
\lim_{\tilde{t}\rightarrow \infty} \pi_s(\tilde{t})=\pi_s,\\
\lim_{\tilde{t}\rightarrow \infty} \theta^c_{ij}(s,\tilde{t})=\theta^c_{ij}(s).
\end{align}
Furthermore, using (\ref{eqn:consv}) in (\ref{eqn:flow}) and taking $\tilde{t}\rightarrow \infty$ yields
\begin{align}
\sum_{l}\mu^c_{li}+\lambda^c_i&= \sum_{j}\mu^c_{ij},\quad \forall i\neq c.
\end{align}
This proves the result.
\section{Proof of Theorem~\ref{thm:Tslot}}\label{apx:thm_Tslot}
Our algorithm always transmits packets from the head of $Q^c_i(t)$, and there is at most one full copy of each packet in the network. Therefore, without risk of confusion, in the following analysis we drop the packet index $k$ and use the commodity index $c$ instead as the superscript of the control actions $\{\beta^{(k)}_{ij}\}$ and $\{f^{(k)}_{ij}\}$.
First, define the Lyapunov function $$L(\mathbf{Q}(t))=\sum_{c,i}(Q^c_i(t))^2,$$ and the $T$-slot sample path Lyapunov drift as $$\Delta_T(t):=L(\mathbf{Q}(t+T))-L(\mathbf{Q}(t)).$$ Then, we have the following Lemma.
\begin{Lemma}\label{lemma:drift}
Assume the system changes its policy every $T$ slots starting at $t=0$. For all $t=0, T, 2T,\ldots$ and all possible values of $\mathbf{Q}(t)$, under a given policy $\{\beta^c_{ij}(t)\}$, we have
\begin{align}\label{eqn:lya2}
\Delta_T(t)&\leq \sum_{c,i}-2Q^c_i(t)\Big(T\sum_j \beta^c_{ij}(t) r_{ij}-T\sum_{l}\beta^c_{li}(t) r_{li}\nonumber\\&\quad-\sum_{\tau=0}^{T-1}A^c_i(t+\tau)-1\Big)+\alpha_1,
\end{align}
where
\begin{align}
\alpha_1&= KNT^2(\mu_{max}+A_{max})^2+NT^2.
\end{align}
\end{Lemma}
\begin{Proof}
Under any given policy, the queue length evolves according to
\begin{align}
Q^c_i(t+1)&= \Big(Q^c_i(t)-\sum_{j}\beta^c_{ij}(t)f^c_{ij}(t)\Big)^+\nonumber\\
&\quad+\sum_{l}\beta^c_{li}(t)f^c_{li}(t)+A^c_i(t).\end{align}
Considering the policy which updates at $t=0,T,2T,\ldots$, we have
\begin{align}
&Q^c_i(t+T)\nonumber\\
&\leq \Big(Q^c_i(t)-\sum_{\tau=0}^{T-1}\sum_{j}\beta^c_{ij}(t)f^c_{ij}(t+\tau)\Big)^+\nonumber\\
&\quad+\sum_{\tau=0}^{T-1}\sum_{l}\beta^c_{li}(t)f^c_{li}(t+\tau)+\sum_{\tau=0}^{T-1}A^c_i(t+\tau)\label{eqn:sumT}\\
&\leq \Big(Q^c_i(t)-\sum_j \beta^c_{ij}(t)\lfloor r_{ij}T\rfloor\Big)^+\nonumber\\
&\quad+\sum_{l}\beta^c_{li}(t)\lfloor r_{li}T\rfloor+\sum_{\tau=0}^{T-1}A^c_i(t+\tau).\label{eqn:lya10}
\end{align}
In (\ref{eqn:sumT}), we upper bound $Q^c_i(t+T)$ by moving the negative terms into the function $(\cdot)^+$. This follows from the facts that
\begin{align*}
\max[a+b-c,0]&\leq \max[a-c,0]+b\\
\max[\max[a,0]-c,0]&=\max[a-c,0] \textrm{ for }a,b,c\geq 0.
\end{align*}
This is equivalent to letting node $i$ transmit only packets already in $Q^c_i(t)$: even if some packets of commodity $c$ arrive at node $i$ during the epoch after all of the packets in $Q^c_i(t)$ have been cleared, they are not transmitted until the next epoch. Since under the actual policy these packets may be transmitted to the next hop, this yields an upper bound on the actual queue length $Q^c_i(t+T)$. Eqn.~(\ref{eqn:lya10}) follows from the fact that over the $T$-slot window the number of successfully delivered packets (including dummy packets) from node $i$ to node $j$ is $\lfloor T\beta^c_{ij}(t)r_{ij}\rfloor$; recall that $\beta^c_{ij}(t)$ is held constant for the whole epoch. Since both sides of the inequality are positive, it holds for the square of both sides, thus,
\begin{align}
&(Q^c_i(t+T))^2\nonumber\\
&\leq \Big(Q^c_i(t)-\sum_j \beta^c_{ij}(t)\lfloor r_{ij}T\rfloor\Big)^2\nonumber\\
&\quad+2Q^c_i(t)\Big(\sum_{l}\beta_{li}(t)\lfloor r_{li}T\rfloor+\sum_{\tau=0}^{T-1}A^c_i(t+\tau)\Big)\nonumber\\
&\quad+\Big(\sum_{l}\beta^c_{li}(t)\lfloor r_{li}T\rfloor+\sum_{\tau=0}^{T-1}A^c_i(t+\tau)\Big)^2\label{eqn:square}\\
&\leq(Q^c_i(t))^2-2Q^c_i(t)\Big(\sum_j \beta^c_{ij}(t)\lfloor r_{ij}T\rfloor\nonumber\\&\quad-\sum_{l}\beta^c_{li}(t)\lfloor r_{li}T\rfloor-\sum_{\tau=0}^{T-1}A^c_i(t+\tau)\Big)+ C^c_{i},
\end{align}
where
\begin{align*}
C^c_i&=\Big(\sum_{l}\beta^c_{li}(t)\lfloor r_{li}T\rfloor+\sum_{\tau=0}^{T-1}A^c_i(t+\tau)\Big)^2\\
&\quad+\Big(\sum_j \beta^c_{ij}(t)\lfloor r_{ij}T\rfloor\Big)^2.
\end{align*}
We use $Q^c_i(t)$ instead of $\left(Q^c_i(t)-\sum_j \beta^c_{ij}(t)\lfloor r_{ij}T\rfloor\right)^+$ for the cross term in (\ref{eqn:square}). Since the former is never smaller than the latter, inequality (\ref{eqn:square}) holds. Therefore, we have
\begin{align}
\Delta_T(t)&\leq \sum_{c,i} -2Q^c_i(t)\Big(\sum_j \beta^c_{ij}(t)\lfloor r_{ij}T\rfloor-\sum_{l}\beta^c_{li}(t)\lfloor r_{li}T\rfloor\nonumber\\&\quad-\sum_{\tau=0}^{T-1}A^c_i(t+\tau)\Big)+\sum_{c,i} C^c_{i}\\
&\leq \sum_{c,i}-2Q^c_i(t)\Big(\sum_j \beta^c_{ij}(t)( r_{ij}T-1)-\sum_{l}\beta^c_{li}(t) r_{li}T\nonumber\\&\quad-\sum_{\tau=0}^{T-1}A^c_i(t+\tau)\Big)+\sum_{c,i} C^c_{i}\label{eqn:floor}\\
&\leq \sum_{c,i}-2Q^c_i(t)\Big(T\sum_j \beta^c_{ij}(t) r_{ij}-T\sum_{l}\beta^c_{li}(t) r_{li}\nonumber\\&\quad-\sum_{\tau=0}^{T-1}A^c_i(t+\tau)-1\Big)+\sum_{c,i} C^c_{i}\label{eqn:floor2}
\end{align}
where $\Delta_T(t)$ is the $T$-slot Lyapunov drift, (\ref{eqn:floor}) follows from the fact that $x-1<\lfloor x\rfloor\leq x$, and (\ref{eqn:floor2}) follows from the fact that $\sum_j \beta^c_{ij}(t)\leq 1$. Based on the assumptions that $A_i^c(t)\leq A_{max}$, the maximum number of decoded packets at a node is upper bounded by $\mu_{max}$, and the constraint (\ref{beta_con}), we have
\begin{align*}
\sum_{c,i} C^c_i&\leq KNT^2(\mu_{max}+A_{max})^2+NT^2:=\alpha_1.
\end{align*}
The proof is completed.
\end{Proof}
\begin{Lemma}\label{lemma:min}
For a given $\mathbf{Q}(t)$ on slot $t=0,T,2T,\ldots$, under the $T$-slot algorithm, the $T$-slot Lyapunov drift satisfies
\begin{align}
\Delta_T(t)&\leq \sum_{c,i}-2Q^c_i(t)\Big(T\sum_{j}\hat{\beta}^c_{ij}(t)r_{ij}-T\sum_{l}\hat{\beta}^c_{li}(t)r_{li}\nonumber\\
&\quad-\sum_{\tau=0}^{T-1}A^c_i(t+\tau)-1\Big)+\alpha_1,\label{eqn:lya22}
\end{align}
where $\{\hat{\beta}^c_{ij}(t)\}$ are any alternative (possibly randomized) decisions that satisfy (\ref{beta_con})-(\ref{beta_con2}).
Furthermore, we have
\begin{align}
&\mathds{E}\{\Delta_T(t)|\mathbf{Q}(t)\}
\leq \mathds{E}\Big\{\sum_{c,i}-2Q^c_i(t)\Big(T\sum_{j}\hat{\beta}^c_{ij}(t)r_{ij}\nonumber\\&\qquad\qquad-T\sum_{l}\hat{\beta}^c_{li}(t)r_{li}
\left.-T\lambda^c_i-1\Big)\right|\mathbf{Q}(t)\Big\}+\alpha_1\label{eqn:lya4}
\end{align}
\end{Lemma}
\begin{Proof}
Given $\mathbf{Q}(t)$, the $T$-slot algorithm makes decisions to minimize the right hand side of (\ref{eqn:lya2}). Therefore, inequality (\ref{eqn:lya22}) holds for all realizations of the random quantities, and hence also holds when taking conditional expectations of both sides. Thus, we have (\ref{eqn:lya4}).
\end{Proof}
\begin{Corollary}\label{cor1}
A rate matrix $\{\boldsymbol{\lambda}+\boldsymbol{\epsilon}\}$ is in the capacity region $\Lambda$ if and only if there exists a stationary (possibly randomized) algorithm that chooses control decisions subject to constraint (\ref{beta_con})-(\ref{beta_con2}) and independent of current queue backlog to yield
\begin{align}\label{eqn:stable}
\mathds{E}\Big\{\sum_{j}\beta^c_{ij}r_{ij}-\sum_{l}\beta^c_{li}r_{li}-\lambda^c_i\Big\}\geq \epsilon\quad \forall i\neq c.
\end{align}
\end{Corollary}
\begin{Proof}
The result is an immediate consequence of Theorem~\ref{thm1}. The intuition is to think of $\mathds{E}\left\{\beta^c_{ij}\right\}r_{ij}$ as $\mu^c_{ij}$ in (\ref{eqn:cap1})-(\ref{eqn:cap5}). The necessity part is obtained directly; the sufficiency part is shown in what follows.
\end{Proof}
For any $\{\boldsymbol{\lambda}+\boldsymbol{\epsilon}\}\in \Lambda$, Lemma~\ref{lemma:min} shows that the $T$-slot algorithm minimizes the right hand side of (\ref{eqn:lya4}) over all alternative policies satisfying (\ref{beta_con})-(\ref{beta_con2}). On the other hand, Corollary~\ref{cor1} implies that such a policy can be constructed in a randomized way that is independent of the current queue status in the network and satisfies (\ref{eqn:stable}).
Combining Lemma~\ref{lemma:min} and Corollary~\ref{cor1}, we have
\begin{align*}
&\mathds{E}\{\Delta_T(t)|\mathbf{Q}(t)\}\\
&\leq \sum_{c,i}-2Q^c_i(t)\Big(\Big(\sum_{j}\hat{\beta}^c_{ij}(t)r_{ij}-\sum_{l}\hat{\beta}^c_{li}(t)r_{li}-\lambda^c_i\Big)T-1\Big)+\alpha_1\\
&\leq \sum_{c,i}-2Q^c_i(t)\left(\epsilon T-1\right)+\alpha_1
\end{align*}
Taking expectations of the above inequality over the distribution of $\mathbf{Q}(t)$, we have
\begin{align}
\mathds{E}\{\Delta_T(t)\}&\leq \sum_{c,i}-2\mathds{E}\{Q^c_i(t)\}\left(\epsilon T-1\right)+\alpha_1.
\end{align}
Summing terms over $t=0,T,\ldots,(M-1)T$ for positive integer $M$ yields
\begin{align*}
&\frac{\mathds{E}\{L(\mathbf{Q}(MT))-L(\mathbf{Q}(0))\}}{M}\\
&\leq \frac{\sum_{m=0}^{M-1}\sum_{c,i}-2\mathds{E}\{Q^c_i(mT)\}\left(\epsilon T-1\right)}{M}+\alpha_1,
\end{align*}
i.e.,
\begin{align}
\sum_{m=0}^{M-1}\sum_{c,i}\mathds{E}\{Q^c_i(mT)\}&\leq \frac{ L(\mathbf{Q}(0))-L(\mathbf{Q}(MT))+M\alpha_1}{2\left(\epsilon T-1\right)}\nonumber\\
&\leq \frac{ L(\mathbf{Q}(0))+M\alpha_1}{2\left(\epsilon T-1\right)}.\label{eqn:mT}
\end{align}
We drop $L(\mathbf{Q}(MT))$ in (\ref{eqn:mT}) since $L(\mathbf{Q}(MT))\geq 0$ by the definition of the Lyapunov function.
On the other hand, for $t=0,T,2T,\ldots$ and $0<\tau<T$, we have
\begin{align}
Q^c_i(t+\tau)\leq Q^c_i(t)+(\mu_{max}+A_{max})\tau
\end{align}
where $A_{max}$ is the maximum arrival rate and $\mu_{max}$ is the maximum number of decoded packets in a slot at any node. Therefore,
\begin{align*}
\sum_{m=0}^{M-1}\sum_{\tau=1}^{T-1}\sum_{c,i}&\mathds{E}\{Q^c_i(mT+\tau)\}\leq \sum_{m=0}^{M-1}\sum_{c,i}(T-1)\mathds{E}\{Q^c_i(mT)\}\nonumber\\
&\quad+MT(T-1)KN(\mu_{max}+A_{max})/2
\end{align*}
Combining with (\ref{eqn:mT}), we have
\begin{align*}
&\frac{1}{MT}\sum_{m=0}^{M-1}\sum_{\tau=0}^{T-1}\sum_{c,i}\mathds{E}\{Q^c_i(mT+\tau)\}\nonumber\\
&\leq \frac{1}{M}\sum_{m=0}^{M-1}\sum_{c,i}\mathds{E}\{Q^c_i(mT)\}+KN(T-1)(\mu_{max}+A_{max})/2\\
&\leq \frac{ L(\mathbf{Q}(0))+M\alpha_1}{2M\left(\epsilon T-1\right)}+\frac{KN(T-1)(\mu_{max}+A_{max})}{2}
\end{align*}
Letting $M\rightarrow \infty$, we have
\begin{align*}
&\lim_{t\rightarrow \infty}\frac{1}{t}\sum_{\tau=0}^{t-1}\sum_{c,i}\mathds{E}\{Q^c_i(\tau)\}\nonumber\\
&\leq \frac{\alpha_1}{2(\epsilon T-1)}+\frac{KN(T-1)(\mu_{max}+A_{max})}{2}
\end{align*}
Therefore, when $T> \frac{1}{\epsilon}$, the average expected backlog in the network is bounded. Thus, the system is stable.
Since our algorithm operates under assumptions A1-A2, the total number of partial packets in the system, $\sum_{c,i}P^c_i(t)$, is always upper bounded by $\sum_{c,i}{Q^c_i(t)}$. Therefore, if the $\{Q^c_i(t)\}$ are stable, the $\{P^c_i(t)\}$ must be stable, and the overall system is stable.
\section{Proof of Lemma~\ref{lemma:1drift}}\label{apx:lemma_1drift}
Based on (\ref{eqn:u})-(\ref{eqn:p}), we have
\begin{align}
(U^c_i(t+1))^2&\leq (U_i^c(t))^2+\Big(\sum_{j}\beta_{ij}^{1c}(t)\Big)^2\nonumber\\
&-2U^c_i(t)\Big(\sum_{j}\beta_{ij}^{1c}(t)-\sum_{l}f^c_{li}(t)-A^c_i(t)\Big)\nonumber\\
&\quad +\Big(\sum_{l}f^c_{li}(t)+A^c_i(t)\Big)^2\label{eqn:u2}\\
(V_{ij}^c(t+1))^2&\leq (V^c_{ij}(t))^2+2V^c_{ij}(t)\left(\beta_{ij}^{1c}(t)-\beta_{ij}^{c}(t)r_{ij}\right)\nonumber\\
&\quad+\left(\beta_{ij}^{1c}(t)-\beta_{ij}^{c}(t)r_{ij}\right)^2\label{eqn:v2}\\
(P_{ij}^c(t+1))^2&\leq (P^c_{ij}(t))^2+2P^c_{ij}(t)(\beta^c_{ij}(t)r_{ij}(t)-f^c_{ij}(t))\nonumber\\
&\quad+(\beta^c_{ij}(t)r_{ij}(t)-f^c_{ij}(t))^2\label{eqn:p2}\\
P^c_{ij}(t+1)&\geq P^c_{ij}(t)-f^c_{ij}(t)
\end{align}
Thus,
\begin{align*}
\Delta(t)&\leq \sum_{c,i}-2U^c_i(t)\Big( \sum_{j}\beta_{ij}^{1c}(t)-\sum_{l}f^c_{li}(t)-A^c_i(t)\Big)\nonumber\\
&\quad+\sum_{c,i,j}2V^c_{ij}(t)\left(\beta_{ij}^{1c}(t)-\beta_{ij}^{c}(t)r_{ij}\right)\nonumber\\
&\quad+\sum_{c,i,j}[2P^c_{ij}(t)(\beta^c_{ij}(t)r_{ij}(t)-f^c_{ij}(t))+2\eta f^c_{ij}(t)]+C
\end{align*}
where
\begin{align*}
C&=\sum_{c,i}\Big( \sum_{j}\beta_{ij}^{1c}(t)\Big)^2+\sum_{c,i}\Big(\sum_{l}f^c_{li}(t)+A^c_i(t)\Big)^2\\
&\quad+\sum_{c,i,j}\Big[\left(\beta_{ij}^{1c}(t)-\beta_{ij}^{c}(t)r_{ij}\right)^2+(\beta^c_{ij}(t)r_{ij}(t)-f^c_{ij}(t))^2\Big]
\end{align*}
Because of constraints (\ref{eqn:bcon})-(\ref{eqn:fcon}), we have
\begin{align}
C&\leq KN(d+A_{max})^2+2N+KNd:=\alpha_2.
\end{align}
Combining the terms associated with each link $(i,j)$, we obtain (\ref{eqn:delta}).
\section{Proof of Theorem~\ref{thm:virtual}}\label{apx:thm_virtual}
\begin{Corollary}\label{cor:f}
A rate vector $\boldsymbol{\lambda}+\boldsymbol{\epsilon}$ is in the capacity region $\Lambda'$ if and only if there exists a stationary (possibly randomized) algorithm that chooses control decisions (independent of current queue backlog) subject to constraints (\ref{eqn:bcon}), to yield
\begin{align}
\mathds{E}\{\beta^c_{ij}\}r_{ij}&=\mathds{E}\{\beta^{1c}_{ij}\}+\epsilon\\
\mathds{E}\{f^c_{ij}\}&=\mathds{E}\{\beta^c_{ij}\}r_{ij}+\epsilon\\
\mathds{E}\Big\{\sum_{j}f^c_{ij}-\sum_{l}\beta^{1c}_{li}-\lambda^c_i\Big\}&\geq \epsilon\quad \forall i\neq c
\end{align}
\end{Corollary}
\begin{Proof}
The result is an immediate consequence of Theorem~\ref{thm:capacity2}. The intuition is to let $\mathds{E}\{\beta^c_{ij}\}r_{ij}=\mu_{ij}^{2c}$, $\mathds{E}\{\beta^{1c}_{ij}\}=\mu_{ij}^{1c}$ and $\mathds{E}\{f^c_{ij}\}=\mu_{ij}^{3c}$. The necessity part is obtained directly; the sufficiency part is shown in the proof below.
\end{Proof}
\begin{Lemma}\label{lemma:virtual_drift}
Under the virtual queue based algorithm,
\begin{align}
&\sum_{c,i,j}\Big\{[U^c_i(t)-V^c_{ij}(t)]\beta^{1c}_{ij}(t)+[V^c_{ij}(t)-P^c_{ij}(t)]r_{ij}\beta^c_{ij}
(t)\nonumber\\
&\quad +[P^c_{ij}(t)-\eta-U^c_j(t)]f^c_{ij}(t)\Big\}\nonumber\\
&\geq \sum_{c,i,j}\Big\{[U^c_i(t)-V^c_{ij}(t)]\hat{\beta}^{1c}_{ij}(t)+[V^c_{ij}(t)-P^c_{ij}(t)]r_{ij}\hat{\beta}^c_{ij}(t)\nonumber\\
&\quad +[P^c_{ij}(t)-\eta-U^c_j(t)]\hat{f}^c_{ij}(t)\Big\}\label{eqn:minVirtual}
\end{align}
for any other binary control policy $\{\hat{\beta}^{1c}_{ij},\hat{\beta}^c_{ij}, \hat{f}^c_{ij}\}$ satisfying (\ref{eqn:bcon}).
\end{Lemma}
\begin{Proof}
This lemma is an immediate consequence of the fact that the virtual queue based algorithm maximizes the left hand side of (\ref{eqn:minVirtual}) while satisfying (\ref{eqn:bcon}). The constraint (\ref{eqn:fcon}) is satisfied automatically.
\end{Proof}
Based on Lemma~\ref{lemma:1drift} (\ref{eqn:delta}) and Lemma~\ref{lemma:virtual_drift}, we have
\begin{align*}
&\mathds{E}\{\Delta(t)|\mathbf{U}(t),\mathbf{V}(t),\mathbf{P}(t)\}\\&\leq -2\sum_{c,i,j}\mathds{E}\left\{[U^c_i(t)-V^c_{ij}(t)]\hat{\beta}^{1c}_{ij}+[V^c_{ij}(t)-P^c_{ij}(t)]r_{ij}\hat{\beta}^c_{ij}\right.\\%\end{align*}
&\left.\left.\quad +[P^c_{ij}(t)-\eta-U^c_j(t)]\hat{f}^c_{ij}(t)\right|\mathbf{U}(t),\mathbf{V}(t),\mathbf{P}(t)\right\}\\
&\quad+2\sum_{c,i}U^c_i(t)\lambda^c_i+\alpha_2\\
&\leq \mathds{E}\Big\{\sum_{c,i}-2U^c_i(t)\Big( \sum_{j}\hat{\beta}_{ij}^{1c}(t)-\sum_{l}\hat{f}^c_{li}(t)-\lambda^c_i(t)\Big)\Big\}\\
&\quad+\mathds{E}\Big\{\sum_{c,i,j}2V^c_{ij}(t)\left(\hat{\beta}_{ij}^{1c}(t)-\hat{\beta}_{ij}^{c}(t)r_{ij}\right)\Big\}\nonumber\\
&\quad+\mathds{E}\Big\{\sum_{c,i,j}2P^c_{ij}(t)(\hat{\beta}^c_{ij}(t)r_{ij}(t)-\hat{f}^c_{ij}(t))+2\eta \hat{f}^c_{ij}(t)\Big\}+\alpha_2
\end{align*}
Since $\boldsymbol{\lambda}+\boldsymbol{\epsilon} \in \Lambda$ implies $\boldsymbol{\lambda}+\boldsymbol{\epsilon}/(2d+1) \in \Lambda'$, we have
\begin{align*}
&\mathds{E}\{\Delta(t)|\mathbf{U}(t),\mathbf{V}(t),\mathbf{P}(t)\}\\
&\leq -2\Big(\sum_{c,i}U^c_i(t)+\sum_{c,i,j}V^c_{ij}(t)+\sum_{c,i,j}P^c_{ij}(t)\Big)\frac{\epsilon}{2d+1}\\
&\quad+2\eta KNd+KN(d+A_{max})^2+2N+KNd.
\end{align*}
Therefore, the system is stable, and the average backlog is upper bounded by
\begin{align*}
&\lim_{t\rightarrow\infty}\frac{1}{t}\sum_{\tau=0}^{t-1}\Big(\sum_{c,i}U^c_i(\tau)+\sum_{c,i,j}V^c_{ij}(\tau)+\sum_{c,i,j}P^c_{ij}(\tau)\Big)\\
&\leq \frac{(2d+1)(KN(d+A_{max})^2+2N+(2\eta+1)KNd)}{\epsilon}
\end{align*}
The proof is completed.
\bibliographystyle{IEEEtran}
\section{Introduction}
In this paper we consider undirected graphs. The node set of a graph
$G=(V,E)$ is sometimes also denoted by $V(G)$, and similarly, the edge
set is sometimes denoted by $E(G)$. A \textbf{subgraph} of a graph
$G=(V,E)$ is a pair $(V', E')$ where $V'\subseteq V$ and $E'\subseteq
E\cap (V' \times V')$.
A graph is called \textbf{subcubic} if every node is
incident to at most 3 edges,
and it is called \textbf{subquadratic} if every node is incident to at most 4 edges.
By a \textbf{cut} in a graph we mean the set
of edges leaving a nonempty proper subset $V'$ of the nodes (note that
we do not require that $V'$ and $V-V'$ induce connected subgraphs). We use standard terminology and
refer the reader to \cite{frankbook} for what is not defined here.
We consider 3 types of decision problems with 7 types of
objects. The three types of problems are: packing, covering and
partitioning, and the seven types of objects are the following: paths
(denoted by a $\ensuremath{\mathrm{P}}$), paths with specified endvertices (denoted by
$\ensuremath{\mathrm{P}}_{st}$, where $s$ and $t$ are the prescribed endvertices), (simple)
circuits (denoted by $\ensuremath{\mathrm{C}}$: by that we mean a closed walk of length
at least 2, without edge- and node-repetition), forests ($\ensuremath{\mathrm{F}}$), spanning trees
($\ensuremath{SpT}$), (not necessarily spanning) trees ($\ensuremath{\mathrm{T}}$), and cuts (denoted
by $\ensuremath{Cut}$).
Let $G=(V,E)$ be a \textbf{connected} undirected graph (we
assume connectedness in order to avoid trivial case checking) and
$\ensuremath{\mathrm{A}}$ and $\ensuremath{\mathrm{B}}$ two (not necessarily different) object types from the 7
possibilities above. The general questions we ask are the following:
\begin{itemize}
\item \textbf{Packing problem} (denoted by $\ensuremath{\mathrm{A}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{B}}$): can we \textbf{find two edge-disjoint subgraphs} in $G$, one of type $\ensuremath{\mathrm{A}}$ and the other of type $\ensuremath{\mathrm{B}}$?
\item \textbf{Covering problem} (denoted by $\ensuremath{\mathrm{A}} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{B}}$): can we \textbf{cover the edge set} of $G$ with an object of type $\ensuremath{\mathrm{A}}$ and an object of type $\ensuremath{\mathrm{B}}$?
\item \textbf{Partitioning problem} (denoted by $\ensuremath{\mathrm{A}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{B}}$): can we \textbf{partition the edge set} of $G$ into an object of type $\ensuremath{\mathrm{A}}$ and an object of type $\ensuremath{\mathrm{B}}$?
\end{itemize}
Let us give one example of each type.
A typical partitioning problem is the following: decide whether the
edge set of $G$ can be partitioned into a spanning tree and a
forest. Using our notations this is Problem $\ensuremath{SpT} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$. This problem
is in \textbf{NP $\cap$ co-NP} by the results of Nash-Williams \cite{nw},
polynomial algorithms for deciding the problem were given by Kishi and
Kajitani \cite{kishi3}, and Kameda and Toida \cite{kameda}.
A typical packing problem is the following: given four (not
necessarily distinct) vertices $s,t,s',t'\in V$, decide whether there
exists an $s$-$t$ path $P$ and an $s'$-$t'$-path $P'$ in $G$, such that
$P$ and $P'$ do not share any edge. With our notations this is Problem
$\ensuremath{\mathrm{P}}_{st} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{P}}_{s't'} $. This
problem is still
solvable in polynomial time, as was shown by Thomassen
\cite{thomassen} and Seymour \cite{seymour}.
A typical covering problem is the following: decide whether the edge
set of $G$ can be covered by a path and a
circuit. In our notations this is Problem $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$. Interestingly, we found
that this simple-looking problem is NP-complete.
Let us introduce the following short formulation for the partitioning
and covering problems. If the edge set of a graph $G$ can be
partitioned into a type $A$ subgraph and a type $B$ subgraph then we
will also say that \textbf{\boldmath the edge set of $G$ is $A\textbf{\boldmath{\ensuremath{\, + \,}}}
B$}. Similarly, if there is a solution of Problem $A\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} B$ for a
graph $G$ then we say that \textbf{\boldmath the edge set of $G$ is
$A\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} B$}.
\begin{table}[!ht]
\begin{center}
\caption{25 PARTITIONING PROBLEMS}
\label{tab:part}
\medskip
\begin{tabular}{|c|c|l|l|}
\hline
\textbf{Problem}&\textbf{Status}&\textbf{Reference}&\textbf{Remark}\\ \hline\hline
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{P}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part} & \textbf{NPC\xspace} for subquadratic planar\\\hline
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{P_{st}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part}& \textbf{NPC\xspace} for subquadratic planar\\\hline
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part}& \textbf{NPC\xspace} for subquadratic planar\\ \hline
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$&\textbf{NPC\xspace}& Theorem \ref{thm:part}
& \textbf{NPC\xspace} for subquadratic planar \\ \hline
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$&\textbf{NPC\xspace} & Theorem \ref{thm:part}
&\textbf{NPC\xspace} for subquadratic planar \\ \hline
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$&\textbf{NPC\xspace} &Theorem \ref{thm:cut} (and Theorem \ref{thm:part}) &\textbf{NPC\xspace} for subcubic planar \\ \hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{P}}_{s't'}$&\textbf{NPC\xspace}&Theorem \ref{thm:part}& \textbf{NPC\xspace} for subquadratic planar\\\hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part}& \textbf{NPC\xspace} for subquadratic planar\\\hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$&\textbf{NPC\xspace} & Theorem \ref{thm:part}
& \textbf{NPC\xspace} for subquadratic planar\\ \hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$&\textbf{NPC\xspace}& Theorem \ref{thm:part}
& \textbf{NPC\xspace} for subquadratic planar\\ \hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$&\textbf{NPC\xspace} &Theorem \ref{thm:cut} (and Theorem \ref{thm:part})& \textbf{NPC\xspace} for subcubic planar\\ \hline
$\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part}& \textbf{NPC\xspace} for subquadratic planar \\ \hline
$\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$&\textbf{NPC\xspace} & Theorem \ref{thm:part}
&\textbf{NPC\xspace} for subquadratic planar \\ \hline
$\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$&\textbf{NPC\xspace} & Theorem \ref{thm:part}
& \textbf{NPC\xspace} for subquadratic planar\\ \hline
$\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$&\textbf{NPC\xspace} &Theorem \ref{thm:cut} (and Theorem \ref{thm:part}) &
\textbf{NPC\xspace} for subcubic planar\\ \hline
$\ensuremath{\mathrm{T}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$&\textbf{NPC\xspace} & P\'alv\"olgyi \cite{Dome} & planar graphs? \\ \hline
$\ensuremath{\mathrm{T}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$&\textbf{NPC\xspace} & Theorem \ref{thm:3}
& planar graphs? \\ \hline
$\ensuremath{\mathrm{F}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$&\textbf{P\xspace} & Kishi and Kajitani \cite{kishi3}, & in \textbf{P\xspace} for matroids:\\
& & Kameda and Toida \cite{kameda} & Edmonds \cite{edmonds}\\
&& (Nash-Williams \cite{nw}) & \\ \hline
$\ensuremath{SpT} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$&\textbf{P\xspace} &Kishi and Kajitani \cite{kishi3}, & in \textbf{P\xspace} for matroids:\\
& & Kameda and Toida \cite{kameda}, & Edmonds \cite{edmonds}\\
& & (Nash-Williams \cite{nw61}, &\\
& & Tutte \cite{tutte}) & \\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{Cut}$&\textbf{P\xspace}& if and only if bipartite &\\
& &(and $|V|\ge 3$) & \\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$&\textbf{NPC\xspace}& Theorem \ref{thm:cut+F} & planar graphs? \\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$&\textbf{NPC\xspace}&Theorem \ref{thm:cut}& \textbf{NPC\xspace} for subcubic planar \\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$&\textbf{NPC\xspace}&Theorem \ref{thm:cut}& \textbf{NPC\xspace} for subcubic planar \\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{P}}$&\textbf{NPC\xspace}&Theorem \ref{thm:cut}& \textbf{NPC\xspace} for subcubic planar \\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{P_{st}}$&\textbf{NPC\xspace}&Theorem \ref{thm:cut}& \textbf{NPC\xspace} for subcubic planar \\ \hline
\end{tabular}
\end{center}
\end{table}
The setting outlined above gives us 84 problems. Note however that
some of these can be omitted. For example, $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{A}}$ is trivial for each
possible type $\ensuremath{\mathrm{A}}$ in question, because $\ensuremath{\mathrm{P}}$ may consist of only one vertex. For the same reason, $\ensuremath{\mathrm{T}}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{A}}$ and $\ensuremath{\mathrm{F}}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{A}}$ type problems are also
trivial. Furthermore, observe that the edge-set $E(G)$ of a graph $G$
is $\ensuremath{\mathrm{F}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{A}} $ $\Leftrightarrow$ $E(G)$ is $ \ensuremath{\mathrm{F}}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{A}}$ $ \Leftrightarrow$
$E(G)$ is $ \ensuremath{\mathrm{T}}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{A}} \Leftrightarrow$ $E(G)$ is $ \ensuremath{SpT} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{A}}$ (in a cover the shared edges may be deleted from the forest, since a subgraph of a forest is a forest, and in a connected graph any forest extends to a spanning tree):
therefore we will only consider the problems of form $\ensuremath{\mathrm{F}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{A}}$ among
these for any $\ensuremath{\mathrm{A}}$. Similarly, the edge set $E(G)$ is $\ensuremath{\mathrm{F}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}} $
$\Leftrightarrow $ $E(G)$ is $\ensuremath{\mathrm{T}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}} $ $\Leftrightarrow$ $E(G)$ is $
\ensuremath{SpT} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$:
again we choose to deal with $\ensuremath{\mathrm{F}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$.
We can also omit the problems $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$ and $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ because a
cut and a spanning tree can never be disjoint.
A careful count shows that we are left with 44 problems. We
have investigated the status of these. Interestingly, many of these
problems turn out to be NP-complete. Our results are summarized in
Tables \ref{tab:part}-\ref{tab:cov}. We
note that in our NP-completeness proofs we always show that the
considered problem is NP-complete even if the input graph is simple.
On the other hand, the polynomial algorithms given here always work
also for multigraphs (we allow parallel edges, but we forbid loops).
Some of the results shown in the tables were already proved in the
preliminary version \cite{quickpf} of this paper: namely we have
already shown the NP-completeness of
Problems $\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$,
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$,
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$,
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$,
$\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$,
$\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$,
$\ensuremath{\mathrm{T}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$,
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$,
and $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ there.
\begin{table}[!ht]
\begin{center}
\caption{9 PACKING PROBLEMS}
\label{tab:pack}
\medskip
\begin{tabular}{|c|c|l|l|}
\hline
\textbf{Problem}&\textbf{Status}&\textbf{Reference}&\textbf{Remark}\\ \hline\hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{P}}_{s't'}$&\textbf{P\xspace} &Seymour \cite{seymour}, Thomassen \cite{thomassen} & \\ \hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{C}}$&\textbf{P\xspace} & see Section \ref{sec:alg1} & \\ \hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$&\textbf{NPC\xspace} &Theorem \ref{thm:3}
& planar graphs?\\ \hline
$\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{C}}$&\textbf{P\xspace} & Bodlaender \cite{bodl} & \textbf{NPC\xspace} in linear matroids\\
&&(see also Section \ref{sec:alg2})& (Theorem \ref{thm:CdC}) \\ \hline
$\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$&\textbf{NPC\xspace} &Theorem \ref{thm:3}
& polynomial in \\
&&& planar graphs, \cite{marcin}\\ \hline
$\ensuremath{SpT} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$&\textbf{P\xspace} &Imai \cite{imai}, (Nash-Williams \cite{nw61},& in \textbf{P\xspace} for matroids: \\
& & Tutte \cite{tutte}) & Edmonds \cite{edmonds}\\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{Cut}$&\textbf{P\xspace}&always, if $G$ has two & \textbf{NPC\xspace} in linear matroids\\
& &non-adjacent vertices & (Corollary \ref{cor:CutdCut})\\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{P_{st}}$&\textbf{P\xspace}&always, except if the graph is an & \\
& & $s$-$t$ path (with multiple copies & \\
& & for some edges)&\\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{C}}$&\textbf{P\xspace}&always, except if the graph is & \textbf{NPC\xspace} in linear matroids \\
& & a tree, a circuit, or a bunch of & ($\Leftrightarrow$ the matroid
is not\\
&¶llel edges & uniform, Theorem \ref{thm:CutdC})\\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!ht]
\begin{center}
\caption{10 COVERING PROBLEMS}
\label{tab:cov}
\medskip
\begin{tabular}{|c|c|l|l|}
\hline
\textbf{Problem}&\textbf{Status}&\textbf{Reference}&\textbf{Remark}\\ \hline\hline
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{P}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part} & \textbf{NPC\xspace} for subquadratic planar\\ \hline
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{P_{st}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part} & \textbf{NPC\xspace} for subquadratic planar\\ \hline
$\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part} & \textbf{NPC\xspace} for subquadratic planar\\ \hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{P}}_{s't'}$&\textbf{NPC\xspace}&Theorem \ref{thm:part} & \textbf{NPC\xspace} for subquadratic planar\\ \hline
$\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part}&\textbf{NPC\xspace} for subquadratic planar \\ \hline
$\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$&\textbf{NPC\xspace}&Theorem \ref{thm:part} & \textbf{NPC\xspace} for subquadratic planar\\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{Cut}$&\textbf{NPC\xspace}& if and only if 4-colourable & always in planar \\
& & & Appel et al. \cite{4szin}, \cite{4szin2} \\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$&\textbf{NPC\xspace}& Theorem \ref{thm:cut} & \textbf{NPC\xspace} for subcubic planar \\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{P}}$&\textbf{NPC\xspace}& Theorem \ref{thm:cut}& \textbf{NPC\xspace} for subcubic planar\\ \hline
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{P_{st}}$&\textbf{NPC\xspace}& Theorem \ref{thm:cut}& \textbf{NPC\xspace} for subcubic planar\\ \hline
\end{tabular}
\end{center}
\end{table}
Problems $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$ and $\ensuremath{\mathrm{T}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$ were posed in the open
problem portal called ``EGRES Open'' \cite{egresopen} of the Egerv\'ary
Research Group. Most of the NP-complete problems remain NP-complete
for planar graphs, though we do not yet know the status of Problems $\ensuremath{\mathrm{T}}
\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$, $\ensuremath{\mathrm{T}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$, $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$, $\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$, and $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ for
planar graphs, as indicated in the table.
We point out an interesting phenomenon: planar duality and the
NP-completeness of Problem $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$ give that deciding whether the
edge set of a planar graph is the disjoint union of two \emph{simple}
cuts is NP-complete (a \textbf{simple cut}, or \textbf{bond} of a
graph is an inclusionwise minimal cut). In contrast, the edge set of a
graph is $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{Cut}$ if and only if \ the graph is bipartite on at least 3
nodes\footnote{It is easy to see that the edge set of a connected
bipartite graph on at least 3 nodes is $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{Cut}$. On the other
hand, the intersection of a cut and a circuit contains an even
number of edges, therefore the edge set of a non-bipartite graph
cannot be $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{Cut}$.}, that is, $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{Cut}$ is polynomially
solvable even for non-planar graphs.
Some of the problems can be formulated as a problem in the graphic
matroid and therefore also have a natural matroidal generalization.
For example the matroidal generalization of $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{C}}$ is the following:
can we find two disjoint circuits in a matroid (given with an independence
oracle, say)?
Of course, such a matroidal question is only interesting here if it
can be solved for graphic matroids in polynomial time. Some of these
matroidal questions are known to be solvable (e.g., the matroidal
version of $\ensuremath{SpT} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$), while the status of others was unknown (at least to
us): the best example is the (above mentioned) matroidal version of
$\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{C}}$.
In the tables above we also indicate these matroidal generalizations
wherever the problem is meaningful. The matroidal
generalization of spanning trees, forests, circuits is
straightforward. We do not attempt to define trees, paths, or
$s$-$t$ paths in matroids. On the other hand, cuts deserve some
explanation. In matroid theory, a \textbf{cut} (also called
\textbf{bond} in the literature) of a matroid is defined as an
inclusionwise minimal subset of elements that intersects every
base. In the graphic matroid this corresponds to a simple cut of
the graph defined above.
So we will only consider
packing problems for cuts in matroids: for example the problem of type
$\ensuremath{\mathrm{A}}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{Cut}$ in graphs is equivalent to the problem of packing $A$ and a
simple cut in the graph, therefore the matroidal generalization is
understandable. We will discuss these matroidal generalizations in
Section \ref{sec:matroid}.
\section{NP-completeness proofs}\label{sect:npc}
\newcommand{{\sc Planar3\-Reg\-Ham}}{{\sc Planar3\-Reg\-Ham}}
\newcommand{{\sc Planar3\-Reg\-Ham}$-e$}{{\sc Planar3\-Reg\-Ham}$-e$}
A graph $G=(V,E)$ is said to be \textbf{subcubic} if $d_G(v)\le 3$ for
every $v\in V$. In many proofs below we will use Problem
{\sc Planar3\-Reg\-Ham}\ and Problem {\sc Planar3\-Reg\-Ham}$-e$\ given below.
\begin{prob}[{\sc Planar3\-Reg\-Ham}]
Given a $3$-regular planar graph $G=(V,E)$,
decide whether there is a Hamiltonian circuit in $G$.
\end{prob}
\begin{prob}[{\sc Planar3\-Reg\-Ham}$-e$]
Given a $3$-regular planar graph $G=(V,E)$ and an edge $e\in E$,
decide whether there is a Hamiltonian circuit in $G$ through edge $e$.
\end{prob}
It is well-known that Problems {\sc Planar3\-Reg\-Ham}\ and {\sc Planar3\-Reg\-Ham}$-e$\ are
NP-complete (see Problem [GT37] in \cite{gj}).
\subsection{NP-completeness proofs for subcubic planar graphs
}\label{sec:cut+c}
\begin{figure}[!ht]
\begin{center}
\input{k4min.tex}
\caption{An illustration for the proof of Theorem \ref{thm:cut}.}
\label{fig:k4min}
\end{center}
\end{figure}
\begin{thm}\label{thm:cut}
The following problems are NP-complete, even if restricted to subcubic
planar graphs: $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$, $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$, $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$, $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{P}}$,
$\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{P_{st}}$, $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{P}}$, $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{P_{st}}$, $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$, $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$,
$\ensuremath{P_{st}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$.
\end{thm}
\begin{proof}
All the problems are clearly in NP. First we prove the completeness
of $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$, $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$ and $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$ using a reduction from Problem
{\sc Planar3\-Reg\-Ham}.
Given an instance of the Problem {\sc Planar3\-Reg\-Ham}\ with the 3-regular
planar graph $G$, construct the following graph $G'$. First subdivide
each edge $e=v_1v_2\in E(G)$ with 3 new nodes $x_e^{v_1},x_e,x_e^{v_2}$ such that
they form a path in the order $v_1,x_e^{v_1},x_e,x_e^{v_2},v_2$. Now for any node
$u\in V(G)$ and any pair of edges $e,f\in E(G)$ incident to $u$
connect $x_e^u$ and $x_f^u$ with a new edge. Finally, delete all the
original nodes $v\in V(G)$ to get $G'$.
Informally speaking, $G'$ is obtained from $G$ by blowing a triangle
into every node of $G$ and subdividing each original edge with a new
node: see Figure \ref{fig:k4min} \textit{a,b,} for an illustration. Note that
contracting these triangles in $G'$ and undoing the subdivision
vertices of the form $x_e$ gives back $G$.
Clearly, the resulting graph $G'$ is still planar and has maximum
degree 3 (we mention that the subdivision nodes of form $x_e$ are only needed for
the Problem $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$).
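The construction is purely local and easy to automate; the following Python sketch (an illustration only, with our own naming conventions, not part of the proof) builds the edge list of $G'$ from the edge list of $G$.
\begin{verbatim}
# Sketch: build G' from G by subdividing every edge e = v1v2 with the
# three nodes x_e^{v1}, x_e, x_e^{v2} and blowing a triangle into
# every node of a 3-regular G (original nodes are dropped).
def blow_up(edges):
    new_edges, attach = [], {}
    for (v1, v2) in edges:
        e = (v1, v2)
        x_e, x_e1, x_e2 = ("x", e), ("x", e, v1), ("x", e, v2)
        new_edges += [(x_e1, x_e), (x_e, x_e2)]
        attach.setdefault(v1, []).append(x_e1)
        attach.setdefault(v2, []).append(x_e2)
    for u, xs in attach.items():            # connect x_e^u and x_f^u for
        for i in range(len(xs)):            # all pairs e, f incident to u
            for j in range(i + 1, len(xs)):
                new_edges.append((xs[i], xs[j]))
    return new_edges

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(len(blow_up(K4)))   # 2*6 + 3*4 = 24 edges, cf. Figure a,b
\end{verbatim}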
We will prove that $G$ contains a Hamiltonian circuit
if and only if $G'$ contains a circuit covering odd circuits (i.e., the edge-set of
$G'$ is $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{Cut}$)
if and only if the edge-set of $G'$ is $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{Cut}$
if and only if $G'$ contains a circuit covering all the circuits (i.e., the edge
set of $G'$ is $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$). First let $C$ be a Hamiltonian circuit in
$G$. We define a circuit $C'$ in $G'$ as follows. For any $v\in V(G)$,
if $C$ uses the edges $e, f$ incident to $v$ then let $C'$ use the 3
edges $x_ex_e^v, x_e^vx_f^v, x_f^vx_f$ (see Figure \ref{fig:k4min} \textit{a,b,} for an
illustration). Observe that $G'-C'$ is a forest, proving that the
edge-set of $G'$ is $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$. Similarly, the edge set of $G'-C'$ is a
cut of $G'$, proving that the edge-set of $G'$ is $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{Cut}$. Finally
we show that if $G'$ contains a circuit intersecting every odd circuit of $G'$
(a condition implied by each of the three statements above) then $G$ contains a
Hamiltonian circuit: this proves the sequence of equivalences stated
above (the remaining implications being trivial). So assume that $G'$ has
a circuit $C'$ that intersects the edge set of every odd circuit of
$G'$. Contract the triangles of $G'$ and undo the subdivision nodes of
the form $x_e$, and observe that $C'$ becomes a Hamiltonian circuit of $G$.
For the rest of the problems we use {\sc Planar3\-Reg\-Ham}$-e$. Given
the 3-regular planar graph $G$ and an edge
$e=v_1v_2\in E(G)$, first construct the graph $G'$ as above. Next modify $G'$
the following way: if $x_e^{v_1},x_e,x_e^{v_2}$ are the nodes of $G'$ arising from the subdivision of the original edge $e\in E(G)$
then let $G''=(G'-x_e)+\{x_e^{v_i}a_i, a_ib_i, b_ic_i, c_ia_i: i=1,2\}$,
where $a_i,b_i,c_i (i=1,2)$ are 6 new nodes (informally, ``cut off'' the path
$x_e^{v_1},x_e,x_e^{v_2}$ at $x_e$ and substitute
the arising two vertices of degree 1 with two triangles).
An illustration can be seen in Figure \ref{fig:k4min} \textit{a,c}.
Let $s=c_1$ and $t=c_2$.
The following chain of equivalences settles the NP-completeness of the
rest of the problems promised in the theorem. The proof is similar to
the one above and is left to the reader.
There exists a Hamiltonian circuit in $G$ using the edge $e$ $\Leftrightarrow$ the
edge set of $G''$ is $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{P_{st}}$ $\Leftrightarrow$ the edge set of
$G''$ is $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{P}}$ $\Leftrightarrow$ the edge set of $G''$ is $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}}
T$ $\Leftrightarrow$ the edge set of $G''$ is $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{P_{st}}$
$\Leftrightarrow$ the edge set of $G''$ is $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{P}}$
$\Leftrightarrow$ the edge set of $G''$ is $\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$
$\Leftrightarrow$ the edge set of $G''$ is $\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$.
\end{proof}
\subsection{NP-completeness proofs based on Kotzig's theorem}
Now we prove the NP-completeness of many other problems in our
collection using
the following elegant result
proved by Kotzig \cite{kotzig}.
\begin{thm}\label{thm:kotzig}
A 3-regular graph contains a Hamiltonian circuit if and only if the
edge set of its line graph can be decomposed into two
Hamiltonian circuits.
\end{thm}
This theorem was used to prove NP-completeness results by Pike in
\cite{pike}.
Another useful and well-known observation is the following:
the line graph of a planar 3-regular graph is 4-regular and planar.
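For a concrete sanity check of Theorem \ref{thm:kotzig} (an illustration only, not needed for the proofs), the following Python snippet brute-forces the smallest case $G=K_4$: its line graph is the octahedron, and the search confirms that its edge set decomposes into two Hamiltonian circuits.
\begin{verbatim}
from itertools import permutations, combinations

nodes = list(combinations(range(4), 2))     # nodes of L(K4): edges of K4
adj = {e: {f for f in nodes if f != e and set(e) & set(f)} for e in nodes}
L_edges = {frozenset((e, f)) for e in nodes for f in adj[e]}   # 12 edges

def ham_cycles():
    for perm in permutations(nodes[1:]):    # fix nodes[0] to cut symmetry
        cyc = (nodes[0],) + perm
        if all(cyc[(i + 1) % 6] in adj[cyc[i]] for i in range(6)):
            yield {frozenset((cyc[i], cyc[(i + 1) % 6])) for i in range(6)}

def connected(edge_set):                    # complement is 2-regular, so
    reach, todo = {nodes[0]}, [nodes[0]]    # connected <=> single circuit
    while todo:
        v = todo.pop()
        for e in edge_set:
            if v in e:
                (w,) = e - {v}
                if w not in reach:
                    reach.add(w); todo.append(w)
    return reach == set(nodes)

print(any(connected(L_edges - c) for c in ham_cycles()))   # True
\end{verbatim}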
\begin{thm}\label{thm:part}
The following problems are NP-complete, even if restricted to subquadratic planar graphs: $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{P}}$, $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{P}}_{st}$, $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$, $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$,
$\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$, $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}},$ $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{P}}_{s't'}, \; \ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$, $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, + \,}}}
\ensuremath{\mathrm{F}}$, $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$, $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$, $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$, $\; \ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$, $\; \ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}}
\ensuremath{SpT}$, $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$, $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{P}}$, $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{P}}_{st}$, $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$, $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{P}}_{s't'}$, $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$,
$\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{C}}$.
\end{thm}
\begin{proof}
The problems are clearly in NP. Let $G$ be a planar 3-regular
graph. Since $L(G)$ is 4-regular, it is decomposable into two circuits
if and only if it is decomposable into two Hamiltonian circuits. This, together with Kotzig's
theorem, shows that $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{C}}$ is NP-complete. For every other problem of
type $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{A}}$ use $L=L(G)\!-\!st$ with an arbitrary edge $st$ of $L(G)$.
Let $C$ be a circuit of $L$ and observe that (by the number of edges
of $L$ and the degree conditions) $L\!-\!C$ is circuit-free if and only if
$C$ is a Hamiltonian circuit and $L\!-\!C$ is a Hamiltonian path connecting $s$ and $t$.
For the rest of the partitioning type problems we need one more trick. Let us be given
a $3$-regular planar graph $G=(V,E)$ and an edge $e=xy\in E$. We
construct another $3$-regular planar graph $G'=(V',E')$ as
follows. Delete edge $xy$, add vertices $x', y'$, and add edges $xx',
yy'$ and add two parallel edges between $x'$ and $y'$, namely $e_{xy}$
and $f_{xy}$ (note that $G'$ is planar, too). Clearly $G$ has a Hamiltonian circuit
through edge $xy$ if and only if $G'$ has a Hamiltonian circuit. Now consider $L(G')$,
the line graph of $G'$, which is a $4$-regular planar graph.
Note that in $L(G')$ there are two parallel edges
between nodes $s=e_{xy}$ and $t=f_{xy}$, call these edges $g_1$ and
$g_2$. Clearly, $L(G')$ can be decomposed into two Hamiltonian circuits if and only if
$L'=L(G')\!-\!g_1\!-\!g_2$ can be decomposed into two Hamiltonian paths. Let $P$ be a path
in $L'$ and notice again that the number of edges of $L'$ and the
degrees of the nodes in $L'$ imply that $L'\!-\!P$ is circuit-free
if and only if $P$ and $L'\!-\!P$ are two Hamiltonian paths in $L'$.
Finally, the NP-completeness of the problems of type $\ensuremath{\mathrm{A}}\textbf{\boldmath{\ensuremath{\, \mathrm{\cup} \,}}} \ensuremath{\mathrm{B}}$ is an
easy consequence of the NP-completeness of the corresponding
partitioning problem $\ensuremath{\mathrm{A}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{B}}$: use the same construction and observe
that the number of edges forces the two objects in the cover to be
disjoint.
\end{proof}
We remark that the above theorem gives a new proof of the
NP-completeness of Problems $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$, $\ensuremath{\mathrm{P}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$ and $\ensuremath{P_{st}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$, already
proved in Theorem \ref{thm:cut}.
\subsection{NP-completeness of Problems $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$, $\ensuremath{\mathrm{T}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$, $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$, and $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$}
First we show the NP-completeness of Problems $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$, $\ensuremath{\mathrm{T}}\textbf{\boldmath{\ensuremath{\, + \,}}}
\ensuremath{SpT}$, and $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$. Problem $\ensuremath{\mathrm{T}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$ was proved to be NP-complete
by P\'alv\"olgyi in \cite{Dome}
(the NP-completeness of this problem with the additional requirement that
the two trees have to be of equal size was proved by Pferschy,
Woeginger and Yao \cite{woeg}). Our reductions here are similar to the
one used by P\'alv\"olgyi in \cite{Dome}. We remark that our first
proof for the NP-completeness of Problems $\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$, $\ensuremath{\mathrm{P}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$, $\ensuremath{P_{st}}
\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$, $\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$, $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$ and $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$ used a variant of the
construction below (this can be found in \cite{quickpf}), but later we
found that using Kotzig's result (Theorem \ref{thm:kotzig}) a simpler
proof can be given for these.
For a subset of edges $E'\subseteq E$ in a graph $G=(V,E)$, let
$V(E')$ denote the subset of nodes incident to the edges of $E'$,
i.e., $V(E')=\{v\in V: $ there exists an $f\in E' $ with $v\in f\}$.
\begin{thm}\label{thm:3}
Problems $\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$, $\ensuremath{\mathrm{T}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$ and $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ are NP-complete
even for graphs with maximum degree 3.
\end{thm}
\begin{proof}
It is clear that the problems are in NP. Their completeness will be
shown by a reduction from the well known NP-complete problems
\textsc{3SAT}\ or the problem \textsc{One-In-Three 3SAT}\ (Problems LO2 and LO4 in
\cite{gj}). Let $\varphi$ be a 3-CNF formula with variable set
$\{x_1,x_2,\dots,x_n\}$ and clause set $\ensuremath{\mathcal{C}}=\{C_1,C_2,\dots,C_m\}$
(where each clause contains exactly 3 literals). Assume that literal
$x_j$ appears in $k_j$ clauses
$C_{a^j_{1}},C_{a^j_{2}},\dots,C_{a^j_{k_j}}$, and literal $\ov{x_j}$
occurs in $l_j$ clauses $C_{b^j_{1}},C_{b^j_{2}},\dots,C_{b^j_{l_j}}$.
Construct the following graph $G_\varphi=(V,E)$.
For an arbitrary clause $C\in \ensuremath{\mathcal{C}}$ we will introduce a new node $u_C$,
and for every literal $y$ in $C$ we introduce two more nodes $v(y,C), w(y,C)$.
Introduce the edges $u_Cw(y,C), w(y,C)v(y,C)$
for every clause $C$ and every literal $y$ in $C$
(the nodes $w(y,C)$ will have degree 2).
For every variable $x_j$ introduce 8 new nodes $z^j_1, z^j_2,$ $ w^j_1,
\ov{w^j_1},$ $w^j_2,$ $ \ov{w^j_2},$ $w^j_3, \ov{w^j_3}$. For every
variable $x_j$, let $G_\varphi$ contain a circuit on the $k_j+l_j+4$
nodes $z^{j}_1,$ $ v(x_j,C_{a^j_{1}}),$ $v(x_j,C_{a^j_{2}}),$ $\dots,$ $
v(x_j,C_{a^j_{k_j}}),$ $ w^j_1, $ $z^j_2,$ $ \ov{w^j_1},$ $
v(\ov{x_j},C_{b^j_{l_j}}),$ $v(\ov{x_j},C_{b^j_{l_j-1}}), $ $\dots,$ $
v(\ov{x_j},C_{b^j_{1}})$ in this order.
We say that this circuit is \textbf{ associated to variable
$x_j$}. Connect the nodes $z^j_2$ and $z^{j+1}_1$ with an edge for
every $j=1,2,\dots,n\!-\!1$. Introduce furthermore a path on nodes
$w^1_3,\ov{w^1_3},w^2_3,\ov{w^2_3}, \dots, w^n_3,\ov{w^n_3}$ in this
order and add the edges $w^j_1w^j_2, w^j_2w^j_3, \ov{w^j_1}\ov{w^j_2},
\ov{w^j_2}\ov{w^j_3}$ for every $j=1,2,\dots, n$. Let $s=z^1_1$ and
$t=z^n_2$.
\begin{figure}[!ht]
\begin{center}
\input{gfi4.tex}
\caption{Part of the construction of graph $G_\varphi$ for clause
$C=(\ov{x_1}\vee {x_2}\vee \ov{x_3})$.}\label{fig:G_phi}
\end{center}
\end{figure}
The construction of the graph $G_\varphi$ is
finished. An illustration can be found in Figure \ref{fig:G_phi}.
Clearly, $G_\varphi$ is simple and has maximum degree three.
If $\tau$ is a truth assignment to the variables $x_1,x_2,\dots,x_n$
then we define an $s$-$t$ path $P_\tau$ as follows: for every
$j=1,2,\dots,n$, if $x_j$ is set to TRUE then let $P_\tau$ go through
the nodes $z^j_1, v(\ov{x_j},C_{b^j_{1}}),
v(\ov{x_j},$ $C_{b^j_{2}}),\dots ,$ $v(\ov{x_j},$ $C_{b^j_{l_j}}),$ $\ov{w^j_1},z^j_2$,
otherwise (i.e., if $x_j$ is set to FALSE) let $P_{\tau}$ go through $z^j_1,
v({x_j},$ $C_{a^j_{1}}), $ $v({x_j},$ $C_{a^j_{2}}),\dots
,$ $v({x_j},$ $C_{a^j_{k_j}}),{w^j_1}, z^j_2$.
We need one more concept. An $s$-$t$ path $P$ is called an
\emph{assignment-defining path} if every node $v\in V(P)$ with $d_G(v)=2$
satisfies $v\in \{s,t\}$.
For such a path $P$ we define the truth assignment $\tau_P$ by the property that
$P_{\tau_P}=P$.
\begin{cl}
There is an $s$-$t$ path $P\subseteq E$ such that $(V,\; E\!-\!P)$ is
connected if and only if $\varphi\in 3SAT$. Consequently,
Problem $\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ is NP-complete.
\end{cl}
\begin{proof}
If $\tau$ is a truth assignment showing that $\varphi\in 3SAT$ then
$P_\tau$ is a path satisfying the requirements, as one can check. On
the other hand, if $P$ is an $s$-$t$ path such that $(V,\; E\!-\!P)$ is connected
then $P$ cannot go through nodes of degree 2, therefore $P$ is
assignment-defining,
and $\tau_P$ shows
$\varphi\in 3SAT$.
\end{proof}
To show the NP-completeness of Problem
$\ensuremath{\mathrm{T}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$, modify $G_{\varphi}$ the following way: subdivide the two
edges incident to $s$ with two new nodes $s'$ and $s''$ and connect
these two nodes with an edge. Repeat this with $t$: subdivide the two
edges incident to $t$ with two new nodes $t'$ and $t''$ and connect
$t'$ and $t''$. Let the graph obtained this way be $G=(V,E)$.
Clearly, $G$ is subcubic and simple. Note that the definition of an
assignment defining path and that of $P_\tau$ for a truth assignment
$\tau $ can be obviously modified for the graph $G$.
\begin{cl}
There exists a truth assignment $\tau$ such that every clause in $\varphi$
contains exactly one true literal if and only if there exists a set
$T\subseteq E$ such that $(V(T),\, T)$ is a tree and $(V,\; E\!-\!T)$ is a
spanning tree. Consequently,
Problem $\ensuremath{\mathrm{T}} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$ is NP-complete.
\end{cl}
\begin{proof}
If $\tau$ is a truth assignment as above then one can see that
$T=P_\tau$ is an edge set satisfying the requirements.
On the other hand, assume that $T\subseteq E$ is such that $(V(T),\,
T)$ is a tree and $T^*=(V,\; E\!-\!T)$ is a spanning tree. Since $T^*$
cannot contain circuits, $T$ must contain at least one of the 3 edges
$ss',s's'',s''s$ (call it $e$), as well as at least one of the 3 edges
$tt',t't'',t''t$ (say $f$). Since $(V(T),\, T)$ is connected, $T$
contains a path $P\subseteq T$ connecting $e$ and $f$ (note that since
$(V, E\!-\!T)$ is connected, $|T\cap \{ss',s's'',s''s\}|=|T\cap
\{tt',t't'',t''t\}|=1$).
Since $(V,\; E\!-\!P)$ is connected, $P$ cannot go through nodes of
degree 2 (except for the endnodes of $P$), and the edges $e$ and $f$
must be the last edges of $P$ (otherwise $P$ would disconnect $s$ or
$t$ from the rest). Thus, without loss of generality we can assume
that $P$ connects $s$ and $t$ (by locally changing $P$ at its ends),
and we get that $P$ is assignment defining. Observe that in fact $T$
must be equal to $P$, since $G$ is subcubic (therefore $T$ cannot
contain nodes of degree 3). Consider the truth assignment $\tau_P$
associated to $P$, we claim that $\tau_P$ satisfies our requirements.
Clearly, if a clause $C$ of $\varphi$ does not contain a true literal
then $u_C$ is not reachable from $s$ in $G\!-\!T$, therefore every clause
of $\varphi$ contains at least one true literal. On the other hand
assume that a clause $C$ contains at least 2 true literals (say $x_j$
and $\ov{x_k}$ for some $j\ne k$), then one can see that there exists
a circuit in $G\!-\!T$ (because $v(x_j,C)$ is still reachable from
$v(\ov{x_k},C)$ in $G\!-\!T\!-\!u_C$ via the nodes $w^j_1, w^j_2, w^j_3$ and
$\ov{w^k_1}, \ov{w^k_2}, \ov{w^k_3}$).
\end{proof}
Finally we prove the NP-completeness of Problem $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$. For the
3CNF formula $\varphi$ with variables $x_1,x_2,\dots,x_n$ and clauses
$C_1,C_2,\dots,C_m$, let us associate the 3CNF formula $\varphi'$ with
the same variable set and clauses $(x_1 \vee x_1 \vee \ov{x_1}),\;
(x_2 \vee x_2 \vee \ov{x_2}), \dots,\; (x_n \vee x_n \vee \ov{x_n}),\;
C_1,\; C_2,\dots, C_m$. Clearly, $\varphi$ is satisfiable if and only if \ $\varphi'$
is satisfiable.
Construct the graph $G_{\varphi'}=(V,E)$ as
above (the construction is clear even if some clauses contain only 2 distinct
literals), and let $G=(V,E)$ be obtained from $G_{\varphi'}$ by adding
the edge $st$.
\begin{cl}
The formula $\varphi'$ is satisfiable if and only if there exists a
set $K\subseteq E$ such that $(V(K),\, K)$ is a circuit and $G\!-\!K=(V,\;
E\!-\!K)$ is connected. Consequently, Problem $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ is NP-complete.
\end{cl}
\begin{proof}
First observe that if $\tau$ is a truth assignment satisfying
$\varphi'$ then $K=P_\tau\cup\{st\}$ is an edge set satisfying the
requirements. On the other hand, if $K$ is an edge set satisfying the
requirements then $K$ cannot contain nodes of degree 2, since $G\!-\!K$ is
connected. We claim that $K$ cannot be a circuit associated to a
variable $x_i$ either, because in this case the node $u_C$ associated to
clause $C=(x_i \vee x_i \vee \ov{x_i})$ would not be reachable in
$G\!-\!K$ from $s$. Therefore $K$ consists of the edge $st$ and an
assignment defining path $P$. It is easy to check (analogously to the
previous arguments) that $\tau_P$ is a truth assignment satisfying
$\varphi'$.
\end{proof}
\noindent As we have proved the NP-completeness of all three problems,
the theorem is proved.
\end{proof}
We note that the construction given in our original proof of the above
theorem (see \cite{quickpf})
was used by Bang-Jensen and Yeo in \cite{bjyeo}. They
settled an open problem raised by Thomass\'e in 2005. They proved
that it is NP-complete to decide $\mathrm{SpA}\wedge \ensuremath{SpT}$ in
digraphs, where $\mathrm{SpA}$ denotes a spanning arborescence and
$\ensuremath{SpT}$ denotes a spanning tree in the underlying undirected graph.
We also point out that the graphs constructed in the above proofs
cannot be assumed to be planar. We do not know the status of any of the Problems
$\ensuremath{\mathrm{P}}_{st}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$, $\ensuremath{\mathrm{T}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{SpT}$, and $\ensuremath{\mathrm{T}}\textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{T}}$ in planar graphs. It was
shown in \cite{marcin} that Problem $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ is polynomially
solvable in planar graphs. We also mention that planar duality gives
that Problem $\ensuremath{\mathrm{C}}\textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ in a planar graph is equivalent to finding a
cut in a planar graph that contains no circuit: by the results of
\cite{marcin}, this problem is also polynomially solvable. However,
van den Heuvel \cite{heuvel} has shown that this problem is
NP-complete for general (i.e., not necessarily planar) graphs.
We point out an interesting connection to the Graphic TSP
Problem. This problem can be formulated as follows. Given a connected
graph $G=(V,E)$, find a connected Eulerian subgraph of $2G$ spanning
$V$ with minimum number of edges (where $2G=(V,2E)$ is the graph
obtained from $G$ by doubling its edges). The connection is the
following. Assume that $F\subseteq 2E$ is a feasible solution to the
problem. A greedy way of improving $F$ is to delete edges from it while
maintaining feasibility. It is easy to observe that this
greedy improvement is possible if and only if the graph $(V,F)$
contains an edge-disjoint circuit and a spanning tree (which is
Problem $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ in our notations). However, slightly modifying
the proof above it can be shown that Problem $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{SpT}$ is also
NP-complete in Eulerian graphs (details can be found in
\cite{marcin}).
\begin{thm}\label{thm:cut+F}
Problem $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$ is NP-complete.
\end{thm}
\begin{proof}
The problem is clearly in NP. In order to show its completeness let
us first rephrase the problem. Given a graph, Problem $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}} \ensuremath{\mathrm{F}}$ asks
whether we can colour the nodes of this graph with two colours such
that no monochromatic circuit exists.
Consider the NP-complete Problem
2-COLOURABILITY OF A 3-UNIFORM HYPERGRAPH.
This problem is the following: given a 3-uniform hypergraph $H=(V,
\ensuremath{\mathcal E})$, can we colour the set $V$ with two colours (say red and blue)
such that there is no monochromatic hyperedge in $\ensuremath{\mathcal E}$ (the problem is
indeed NP-complete, since Problem GT6 in \cite{gj} is a special case
of this problem). Given the instance $H=(V,\ensuremath{\mathcal E})$ of this problem,
construct the following graph $G$. The node set
of $G$ is $V\cup V_{\ensuremath{\mathcal E}}$, where $V_{\ensuremath{\mathcal E}}$ is disjoint from $V$ and it
contains 6 nodes for every hyperedge in $\ensuremath{\mathcal E}$: for an arbitrary
hyperedge $e=\{v_1,v_2,v_3\}\in \ensuremath{\mathcal E}$, the 6 new nodes associated to it
are $x_{v_1,e}, y_{v_1,e}, x_{v_2,e}, y_{v_2,e}, x_{v_3,e}, y_{v_3,e}$.
The edge set of $G$ contains the following edges: for the hyperedge
$e=\{v_1,v_2,v_3\}\in \ensuremath{\mathcal E}$, $v_i$ is connected with $x_{v_i,e}$ and
$y_{v_i,e}$ for every $i=1,2,3$, and among the 6 nodes associated to $e$
every two are connected by an edge,
except for the 3 pairs of the form $x_{v_i,e},y_{v_i,e}$ for $i=1,2,3$
(i.e., $|E(G)|=18|\ensuremath{\mathcal E}|$).
The construction of $G$ is finished. An illustration can be found in
Figure \ref{fig:cutplF}. Note that in any two-colouring of $V\cup
V_{\ensuremath{\mathcal E}}$ the 6 nodes associated to the hyperedge $e=\{v_1,v_2,v_3\}\in
\ensuremath{\mathcal E}$ do not induce a monochromatic circuit if and only if there exists
a permutation $i,j,k$ of $1,2,3$ so that they are coloured the
following way: $x_{v_i,e},y_{v_i,e}$ are blue, $x_{v_j,e},y_{v_j,e}$ are
red, and $x_{v_k,e},y_{v_k,e}$ have different colours.
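The gadget can again be generated mechanically; the following Python sketch (an illustration only, with our own identifiers) builds the edge list of $G$ from a list of hyperedges.
\begin{verbatim}
# Sketch: the graph G for the 2-colourability reduction; every
# hyperedge e = {v1,v2,v3} contributes 6 private nodes and 18 edges.
def build_gadget(hyperedges):
    E = []
    for k, (v1, v2, v3) in enumerate(hyperedges):
        pairs = [(("x", v, k), ("y", v, k)) for v in (v1, v2, v3)]
        for v, (x, y) in zip((v1, v2, v3), pairs):
            E += [(v, x), (v, y)]
        six = [n for p in pairs for n in p]
        skip = {frozenset(p) for p in pairs}  # the 3 pairs x_{v,e}, y_{v,e}
        E += [(a, b) for i, a in enumerate(six) for b in six[i + 1:]
              if frozenset((a, b)) not in skip]
    return E

G = build_gadget([("u", "v", "w"), ("u", "v", "z")])
print(len(G))   # 18 edges per hyperedge: 36
\end{verbatim}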
\begin{figure}[!ht]
\begin{center}
\input{cutplF.tex}
\caption{Part of the construction of the graph $G$ in the proof of
Theorem \ref{thm:cut+F}.}\label{fig:cutplF}
\end{center}
\end{figure}
One can check that $V$ can be coloured with 2 colours such that there
is no monochromatic hyperedge in $\ensuremath{\mathcal E}$ if and only if \ $V\cup V_{\ensuremath{\mathcal E}}$ can be
coloured with 2 colours such that there is no monochromatic circuit in
$G$.
\end{proof}
We note that we do not know the status of Problem $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, + \,}}}
\ensuremath{\mathrm{F}}$ in planar graphs.
\section{Algorithms} \label{sec:alg1}\label{sec:alg2}
\paragraph{Algorithm for $\ensuremath{P_{st}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{C}}$.}
Assume we are given a connected multigraph $G=(V,E)$ and two nodes
$s,t\in V$, and we want to decide whether an $s$-$t$-path
$P\subseteq E$ and a circuit $C\subseteq E$ exists with $P\cap
C=\emptyset$. We may even assume that both $s$ and $t$ have degree
at least two. If $v\in V\!-\!\{s,t\}$ has degree at most two then we
can eliminate it. If there is a cut-vertex $v\in V$ then we can
decompose the problem into smaller subproblems by checking whether
$s$ and $t$ fall in the same component of $G\!-\!v$, or not. If they do
then $P$ should lie in that component, otherwise $P$ has to go
through $v$.
If there is a non-trivial two-edge $s$-$t$-cut (i.e., a set $X$ with
$\{s\}\subsetneq X\subsetneq V\!-\!t$, and $d_G(X)=2$), then we can again
reduce the problem in a similar way:
the circuit to be found cannot use both edges entering $X$ and we have
to solve two smaller problems obtained by contracting $X$ for the
first one, and contracting $V\!-\!X$ for the second one.
So we can assume that $G$ is $2$-connected, that $G$ has no
non-trivial two-edge $s$-$t$-cuts, and that $|E|\ge n+\lceil n/2 \rceil-1$
(every node other than $s$ and $t$ has degree at least $3$,
hence $2|E|\ge 3(n-2)+4$).
Run a BFS
from $s$ and associate levels to vertices ($s$ gets $0$). If $t$ has
level at most $\lceil n/2 \rceil -1$ then we have a path of length at
most $\lceil n/2 \rceil -1$ from $s$ to $t$; after deleting its edges,
at least $n$ edges remain, so we are left with a circuit
(a graph on $n$ nodes with at least $n$ edges contains one).
So we may assume that the level of $t$ is at least $\lceil n/2
\rceil$. As $G$ is $2$-connected, we must have at least two vertices
on each intermediate level. Consequently, $n$ is even, $t$ is on level
$n/2$, we have exactly two vertices on each intermediate level,
and each vertex $v\in V\!-\!\{s,t\}$ has degree $3$; otherwise, for a
minimum $s$-$t$ path $P$ we have that $G\!-\!P$ has at least $n$ edges,
i.e., it contains a circuit. Since we have no non-trivial two-edge $s$-$t$-cuts,
only two cases remain: either $G$ equals $K_4$ with the
edge $st$ deleted, or $G$ arises from $K_4$ by subdividing two opposite
edges (these subdivision nodes being $s$ and $t$). In
either case we have no solution.
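A minimal Python sketch of the final step (assuming all reductions above have already been carried out, so the input is $2$-connected, every node other than $s,t$ has degree at least $3$, and there is no non-trivial two-edge $s$-$t$-cut; the function name and graph representation are our own):
\begin{verbatim}
from collections import deque
from math import ceil

def pst_and_c_reduced(adj, s, t):
    # BFS levels from s
    level, queue = {s: 0}, deque([s])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in level:
                level[w] = level[v] + 1
                queue.append(w)
    n = len(adj)
    # short s-t path  =>  at least n edges remain  =>  circuit exists
    if level[t] <= ceil(n / 2) - 1:
        return True
    # otherwise G is K4 minus st, or K4 with two opposite edges
    # subdivided by s and t: no solution in either case
    return False

K4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
print(pst_and_c_reduced(K4, 0, 1))  # True: path 0-1, circuit 0-2-3
\end{verbatim}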
\medskip
\paragraph{Algorithm for $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{C}}$.}
We give a simple polynomial time algorithm for deciding whether two
edge-disjoint circuits can be found in a given connected multigraph
$G=(V,E)$. We note that a polynomial (but less elegant) algorithm for
this problem was also given in \cite{bodl}.
If any vertex has degree at most two, we can eliminate it, so we
may assume that the minimum degree is at least $3$. If $G$ has at
least $16$ vertices, then it has a circuit of length at most $n/2$
(simply run a BFS from any node and observe that there must be a
non-tree edge between some nodes of depth at most $\log(n)$, giving us
a circuit of length at most $2\log(n)\le n/2$), and after deleting the
edges of this circuit, at least $n$ edges remain, so we are left with
another circuit. For smaller graphs we can check the problem in constant
time.
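In code, the algorithm might look as follows (a Python sketch under the assumption that degree-$\le 2$ nodes have already been eliminated and the graph is simple; the brute-force branch is exponential and is only meant for the constant-size remainder):
\begin{verbatim}
from itertools import combinations

def contains_circuit(vertices, edge_set):   # union-find cycle test
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for e in edge_set:
        v, w = tuple(e)
        rv, rw = find(v), find(w)
        if rv == rw:
            return True
        parent[rv] = rw
    return False

def two_disjoint_circuits(adj):             # min degree >= 3 assumed
    if len(adj) >= 16:
        return True                         # BFS circuit of length <= n/2
    edges = {frozenset((v, w)) for v in adj for w in adj[v]}
    return any(contains_circuit(adj, set(c)) and
               contains_circuit(adj, edges - set(c))
               for k in range(3, len(edges) - 2)
               for c in combinations(edges, k))

K4 = {v: set(range(4)) - {v} for v in range(4)}
K5 = {v: set(range(5)) - {v} for v in range(5)}
print(two_disjoint_circuits(K4), two_disjoint_circuits(K5))  # False True
\end{verbatim}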
\section{Matroidal generalizations}\label{sec:matroid}
In this section we will consider the matroidal generalizations for the
problems that were shown to be polynomially solvable in the graphic
matroid. In fact we will only need linear matroids, since it turns out
that the problems we consider are already NP-complete in them. We
will use the following result of Khachyan.
\begin{thm}[Khachyan \cite{khachyan}]\label{thm:khac}
Given a $D\times N$ matrix over the rationals, it is NP-complete to
decide whether there exist $D$ linearly dependent columns.
\end{thm}
First we consider the matroidal generalization of Problem $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{C}}$.
\begin{thm}\label{thm:CutdC}
It is NP-complete to decide whether an (explicitly given) linear matroid
contains a cut and a circuit that are disjoint.
\end{thm}
\begin{proof}
Observe that there is no disjoint cut and circuit in a matroid if and
only if every circuit contains a base, which is equivalent to the
matroid being uniform. Deciding the existence of $D$ linearly dependent
columns in Khachyan's Theorem \ref{thm:khac} thus amounts to deciding
whether the linear matroid determined by the columns of the matrix in
question fails to be uniform, proving our theorem.
\end{proof}
Finally we consider the matroidal generalization of Problem $\ensuremath{\mathrm{C}} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{\mathrm{C}}$
and $\ensuremath{Cut} \textbf{\boldmath{\ensuremath{\, \wedge \,}}} \ensuremath{Cut}$.
\begin{thm}\label{thm:CdC}
The problem of deciding whether an (explicitly given) linear matroid
contains two disjoint circuits is NP-complete.
\end{thm}
\begin{proof}
We will prove here that Khachyan's Theorem \ref{thm:khac} is
true even if $N=2D+1$, which implies our theorem, since there are two
disjoint circuits in the linear matroid represented by this $D\times
(2D+1)$ matrix if and only if there are $D$ linearly dependent columns
in it.
Khachyan's proof of Theorem \ref{thm:khac} was simplified by Vardy
\cite{vardy}; we will follow his line of proof. Consider the following
problem.
\begin{prob}\label{prob:sum}
Given different positive integers $a_1,a_2,\dots,a_n,b$ and a positive
integer $d$, decide whether there exist $d$ indices $1\le
i_1<i_2<\dots<i_d\le n$ such that $b=a_{i_1}+a_{i_2}+\dots+a_{i_d}$.
\end{prob}
\newcommand{{\sc Subset-Sum}}{{\sc Subset-Sum}}
Note that Problem \ref{prob:sum} is very similar to the
{\sc Subset-Sum}\ Problem (Problem SP13 in \cite{gj}), the only difference
being that in the {\sc Subset-Sum}\ problem we do not specify $d$, and the
numbers $a_1,a_2,\dots,a_n$ need not be different. On the other hand,
here we will strongly need that the numbers $a_1,a_2,\dots,a_n$ are
all different. Vardy has shown the following claim (we include a
proof for the sake of completeness).
\begin{cl}\label{cl:vardy}
There is a solution to Problem \ref{prob:sum} if and only if there are
$d+1$ linearly dependent columns (over the rationals) in the
$(d+1)\times (n+1)$ matrix
\[
\begin{pmatrix}
1 & 1 & \cdots & 1 & 0 \\
a_{1} & a_{2} & \cdots & a_{n}& 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_{1}^{d-2} & a_{2}^{d-2} & \cdots & a_{n}^{d-2} & 0 \\
a_{1}^{d-1} & a_{2}^{d-1} & \cdots & a_{n} ^{d-1} & 1 \\
a_{1}^{d} & a_{2}^{d} & \cdots & a_{n} ^{d} & b
\end{pmatrix}.\]
\end{cl}
\begin{proof}
We use the following facts about determinants. Given real numbers
$x_1,x_2,\dots,x_k$,
we have the following well-known relation for the Vandermonde
determinant:
\[
\det\begin{pmatrix}
1 & 1 & \cdots & 1 \\
x_{1} & x_{2} & \cdots & x_{k}\\
\vdots & \vdots & \ddots & \vdots \\
x_{1}^{k-1} & x_{2}^{k-1} & \cdots & x_{k} ^{k-1}
\end{pmatrix}=\prod_{i<j}(x_j-x_i).
\]
Therefore the Vandermonde determinant is not zero, if the numbers
$x_1,x_2,\dots,x_k$ are different. Furthermore, we have the following
relation for an alternant of the Vandermonde determinant (see
Chapter V in \cite{muir}, for example):
\[
\det\begin{pmatrix}
1 & 1 & \cdots & 1 \\
x_{1} & x_{2} & \cdots & x_{k}\\
\vdots & \vdots & \ddots & \vdots \\
x_{1}^{k-2} & x_{2}^{k-2} & \cdots & x_{k} ^{k-2} \\
x_{1}^{k} & x_{2}^{k} & \cdots & x_{k} ^{k}
\end{pmatrix}=(x_1+x_2+\dots+x_k)\prod_{i<j}(x_j-x_i).
\]
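Both determinant facts are easy to verify symbolically; for instance, the following sympy check (an illustration only, for $k=3$) confirms the alternant identity:
\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
M = sp.Matrix([[1,     1,     1    ],
               [x1,    x2,    x3   ],
               [x1**3, x2**3, x3**3]])     # row of squares skipped
lhs = M.det()
rhs = (x1 + x2 + x3)*(x2 - x1)*(x3 - x1)*(x3 - x2)
print(sp.simplify(lhs - rhs) == 0)         # True
\end{verbatim}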
We include a proof of this last fact: given an arbitrary $k\times k$
matrix $X=((x_{ij}))$ and numbers $u_1,\dots,u_k$, observe (by checking the coefficients of the $u_i$ on each side) that
\begin{eqnarray*}
\det\begin{pmatrix}
u_1 x_{11} & u_2x_{12} & \cdots & u_kx_{1k}\\
x_{21} & x_{22} & \cdots & x_{2k}\\
\vdots & \vdots & \ddots & \vdots \\
x_{k1} & x_{k2} & \cdots & x_{kk} \\
\end{pmatrix}+
\det\begin{pmatrix}
x_{11} & x_{12} & \cdots & x_{1k}\\
u_1 x_{21} & u_2 x_{22} & \cdots & u_k x_{2k}\\
\vdots & \vdots & \ddots & \vdots \\
x_{k1} & x_{k2} & \cdots & x_{kk} \\
\end{pmatrix}
+\dots +\\
\det\begin{pmatrix}
x_{11} & x_{12} & \cdots & x_{1k}\\
x_{21} & x_{22} & \cdots & x_{2k}\\
\vdots & \vdots & \ddots & \vdots \\
u_1 x_{k1} & u_2x_{k2} & \cdots & u_k x_{kk} \\
\end{pmatrix}
=
(u_1+u_2+\dots+u_k)\det\begin{pmatrix}
x_{11} & x_{12} & \cdots & x_{1k}\\
x_{21} & x_{22} & \cdots & x_{2k}\\
\vdots & \vdots & \ddots & \vdots \\
x_{k1} & x_{k2} & \cdots & x_{kk} \\
\end{pmatrix}
\end{eqnarray*}
Now apply this to the Vandermonde matrix $X=\begin{pmatrix}
1 & 1 & \cdots & 1 \\
x_{1} & x_{2} & \cdots & x_{k}\\
\vdots & \vdots & \ddots & \vdots \\
x_{1}^{k-1} & x_{2}^{k-1} & \cdots & x_{k} ^{k-1}
\end{pmatrix}$ and numbers $u_i=x_i$ for every $i=1,2,\dots,k$.
We will use these two facts. The first one implies that if $d+1$
columns of our matrix are dependent then they have to include the last
column. By the second fact, if $1\le
i_1<i_2<\dots<i_d\le n$ are arbitrary indices then
\[
\det\begin{pmatrix}
1 & 1 & \cdots & 1 & 0 \\
a_{i_1} & a_{i_2} & \cdots & a_{i_d}& 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
a_{i_1}^{d-2} & a_{i_2}^{d-2} & \cdots & a_{i_d}^{d-2} & 0 \\
a_{i_1}^{d-1} & a_{i_2}^{d-1} & \cdots & a_{i_d} ^{d-1} & 1 \\
a_{i_1}^{d} & a_{i_2}^{d} & \cdots & a_{i_d} ^{d} & b
\end{pmatrix}=(b-a_{i_1}-a_{i_2}-\dots-a_{i_d})\prod_{k<l}(a_{i_l}-a_{i_k}).\]
This implies the claim.
\end{proof}
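To make the claim concrete, the following Python sketch (a toy illustration with hypothetical instance data; the helper name is ours) assembles the matrix from Claim \ref{cl:vardy} and confirms that $d+1$ linearly dependent columns exist exactly when the instance of Problem \ref{prob:sum} is solvable:
\begin{verbatim}
import sympy as sp
from itertools import combinations

def vardy_matrix(a, b, d):
    cols = [[ai**i for i in range(d + 1)] for ai in a]
    cols.append([0]*(d - 1) + [1, b])          # last column (0,..,0,1,b)
    return sp.Matrix(cols).T                   # (d+1) x (n+1)

a, b, d = [1, 2, 4, 8, 11], 13, 3              # 13 = 1 + 4 + 8 with d = 3
M = vardy_matrix(a, b, d)
dependent = any(M[:, list(c)].rank() <= d
                for c in combinations(range(M.cols), d + 1))
print(dependent)                               # True
\end{verbatim}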
Vardy also claimed that Problem \ref{prob:sum} is NP-complete: our
proof will be completed if we show that this is indeed the case even
if $n=2d+2$. Since we have not found a formal proof of this claim of
Vardy, we will give a full proof of the following claim. For a set $V$
let ${V \choose 3}=\{X\subseteq V: |X|=3\}$.
\begin{cl}\label{cl:2dpl2}
Problem \ref{prob:sum} is NP-complete even if $n=2d+2$.
\end{cl}
\begin{proof}
\newcommand{\threeDM}{{\sc Exact-Cover-by-3-Sets}} We will reduce the
well-known NP-complete problem \threeDM\ (Problem SP2 in \cite{gj}) to
this problem. Problem \threeDM\ is the following: given a 3-uniform
family $\ensuremath{\mathcal E}\subseteq {V \choose 3}$, decide whether
there exists a subfamily $\ensuremath{\mathcal E}'\subseteq \ensuremath{\mathcal E}$ such that every
element of $V$ is contained in exactly one member of $\ensuremath{\mathcal E}'$. We assume
that 3 divides $|V|$, and let $d=|V|/3$, so Problem \threeDM\ asks
whether there exist $d$ disjoint members in $\ensuremath{\mathcal E}$. First we show
that this problem remains NP-complete even if $|\ensuremath{\mathcal E}|=2d+2$. Indeed, if
$|\ensuremath{\mathcal E}|\ne 2d+2$ then let us introduce $3k$ new nodes $\{u_i,v_i,w_i:
i=1,2,\dots,k\}$ where
\begin{itemize}
\item $k$ is such that ${3k \choose 3}-2k\ge 2d+2-|\ensuremath{\mathcal E}|$ if $|\ensuremath{\mathcal E}| < 2d+2$, and
\item $k=|\ensuremath{\mathcal E}|-(2d+2)$, if $|\ensuremath{\mathcal E}| > 2d+2$.
\end{itemize}
Let $V^*=V\cup\{u_i,v_i,w_i: i=1,2,\dots,k\}$ and let $\ensuremath{\mathcal E}^*=\ensuremath{\mathcal E}\cup
\{\{u_i,v_i,w_i\}: i=1,2,\dots,k\}$ (note that $ |V^*|=3(d+k)$). If
$|\ensuremath{\mathcal E}| < 2d+2$ then include furthermore $ 2(d+k) + 2 - (|\ensuremath{\mathcal E}|+k)$ arbitrary new
sets of size 3 to $\ensuremath{\mathcal E}^*$ from $ {V^*-V \choose 3}$, but so that $\ensuremath{\mathcal E}^*$
does not contain a set twice (this can be done by the choice of
$k$). It is easy to see that $|\ensuremath{\mathcal E}^*|= 2|V^*|/3+2$, and $V$ can be
covered by disjoint members of $\ensuremath{\mathcal E}$ if and only if $V^*$ can be
covered by disjoint members of $\ensuremath{\mathcal E}^*$.
Finally we show that \threeDM\ is a special case of Problem
\ref{prob:sum} in disguise. Given an instance of \threeDM\ by a
3-uniform family $\ensuremath{\mathcal E}\subseteq {V \choose 3}$, consider the
characteristic vectors of these 3-sets as different positive
integers written in base 2 (that is, assume a fixed ordering of the
set $V$, then the characteristic vectors of the members in $\ensuremath{\mathcal E}$
are 0-1 vectors corresponding to different binary integers, conatining
3 ones in their representation). These will be the numbers
$a_1,a_2,\dots,a_{|\ensuremath{\mathcal E}|}$. Let $b=2^{|V|}-1$ be the number
corresponding to the all 1 characteristic vector, and let
$d=|V|/3$. Observe that there exist $d$ disjoint members in $\ensuremath{\mathcal E}$
if and only if there are indices $1\le i_1<i_2<\dots<i_d\le |\ensuremath{\mathcal E}|$
such that $b=a_{i_1}+a_{i_2}+\dots+a_{i_d}$.
(One needs a small claim here about the maximal number of ones
in the binary representation of a sum of positive integers: the digit sum
of a sum is at most the sum of the digit sums, with equality if and only
if no carries occur.) This
together with the previous observation proves our claim.
\end{proof}
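For concreteness, the base-2 encoding used in the last step of the proof can be illustrated in a few lines of Python (toy instance data of our own; the small claim about carries is implicit in the final test):
\begin{verbatim}
from itertools import combinations

V = ['p', 'q', 'r', 's', 't', 'u']                  # |V| = 6, d = 2
E3 = [('p','q','r'), ('q','r','s'), ('s','t','u'), ('p','r','t')]

pos = {v: i for i, v in enumerate(V)}
a = [sum(1 << pos[v] for v in e) for e in E3]       # characteristic vectors
b = (1 << len(V)) - 1                               # all-ones vector
d = len(V) // 3

print(any(sum(c) == b for c in combinations(a, d))) # True: {p,q,r},{s,t,u}
\end{verbatim}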
By combining Claims \ref{cl:vardy} and \ref{cl:2dpl2} we obtain the
proof of Theorem \ref{thm:CdC} as follows. Consider an instance of
Problem \ref{prob:sum} with $n=2d+2$ and let $D=d+1$. Claim
\ref{cl:vardy} states that this instance has a solution if and only if
the $(d+1)\times (n+1)=D\times (2D+1)$ matrix defined in the claim has
$D$ linearly dependent columns, and by
Claim \ref{cl:2dpl2} this is NP-hard to decide.
\end{proof}
\begin{cor}\label{cor:CutdCut}
The problem of deciding whether an (explicitly given) linear matroid
contains two disjoint cuts is NP-complete.
\end{cor}
\begin{proof}
Since the dual matroid of the linear matroid is also linear, and we
can construct a representation of this dual matroid from the
representation of the original matroid, this problem is equivalent
to the problem of deciding whether a linear matroid contains two
disjoint circuits, which is NP-complete by Theorem \ref{thm:CdC}.
\end{proof}
\bibliographystyle{amsplain}
\section{Introduction} \label{sec:intro}
We consider a one-dimensional semilinear hyperbolic system of the form
\begin{align}
\partial_t p(x,t) + \partial_x u(x,t) &= 0, \qquad x \in (0,1), \ t>0, \label{eq:sys1} \\
\partial_t u(x,t) + \partial_x p(x,t) + a(u(x,t)) &= 0, \qquad x \in (0,1), \ t>0, \label{eq:sys2}
\end{align}
which models, for instance, the damped vibration of a string or the propagation of pressure waves in a gas pipeline.
In this latter application, which we consider as our model problem in the sequel,
$p$ denotes the pressure, $u$ the velocity or mass flux, and
the nonlinear damping term $a(u)$ accounts for the friction at the pipe walls.
The two equations describe the conservation of mass and the balance of momentum
and they can be obtained under some simplifying assumptions from the one-dimensional Euler equations with friction; see e.g. \cite{BrouwerGasserHerty11,Guinot08,LandauLifshitz6}.
Similar problems also arise as models for the vibrations of elastic multistructures \cite{LagneseLeugeringSchmidt}
or in the propagation of heat waves on microscopic scales \cite{JosephPreziosi89}.
The system \eqref{eq:sys1}--\eqref{eq:sys2} is complemented by boundary conditions
\begin{align}
u(0,t) = g_0(t), \quad u(1,t) = g_1(t), \qquad t>0, \label{eq:sys3}
\end{align}
and we assume the initial values to be known and given by
\begin{align}
p(x,0) = p_0(x), \quad u(x,0) = u_0(x), \qquad x \in (0,1). \label{eq:sys4}
\end{align}
Motivated by well-known friction laws in fluid dynamics for pipes \cite{LandauLifshitz6},
we will assume here that there exist positive constants $a_1,a_2$ such that
\begin{align} \label{eq:a1}
0 < a_1 \le a'(\xi) \le a_2 \qquad \text{for all } \xi \in \mathbb{R}.
\end{align}
In particular, friction forces are monotonically increasing with velocity.
This condition allows us to establish well-posedness of the system \eqref{eq:sys1}--\eqref{eq:sys4}.
It is also reasonable to assume that $a(-\xi)=-a(\xi)$, i.e.,
the magnitude of the friction force does not depend on the flow direction,
and consequently we will additionally assume that $a(0)=0$.
\medskip
In this paper, we are interested in the inverse problem of determining an unknown
friction law $a(u)$ in \eqref{eq:sys1}--\eqref{eq:sys4} from additional observation
of the pressure drop
\begin{align}
h(t)=\triangle p(t):= p(0,t) - p(1,t) , \qquad t > 0 \label{eq:sys5}
\end{align}
along the pipe. Such data are readily available in applications, e.g., in gas pipeline networks.
\medskip
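Although our interest here is theoretical, the forward map $a \mapsto \triangle p$ is straightforward to simulate. The following Python sketch (our own first-order staggered-grid discretization with toy data, not taken from any reference) illustrates how the pressure drop can be computed for a given friction law:
\begin{verbatim}
import numpy as np

def pressure_drop(a, g0, g1, T=1.0, N=200, p_init=1.0):
    # p at cell centers, u at cell faces; explicit Euler in time
    dx = 1.0 / N
    dt = 0.5 * dx                         # CFL for unit wave speed
    p = np.full(N, p_init)                # system initially at rest
    u = np.zeros(N + 1)
    h = np.zeros(int(T / dt))
    for k in range(len(h)):
        t = k * dt
        u[1:-1] -= dt / dx * (p[1:] - p[:-1]) + dt * a(u[1:-1])
        u[0], u[-1] = g0(t), g1(t)        # boundary fluxes
        p -= dt / dx * (u[1:] - u[:-1])
        h[k] = p[0] - p[-1]               # approximates p(0,t) - p(1,t)
    return h

# toy friction law: a'(u) in [1, 1.1] and a(0) = 0
h = pressure_drop(a=lambda u: u + 0.1 * np.tanh(u),
                  g0=lambda t: 0.1 * np.sin(np.pi * min(t, 0.5))**2,
                  g1=lambda t: 0.0)
print(h[-1])
\end{verbatim}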
Before proceeding, let us comment on previous work for related coefficient inverse problems.
By combination of the two equations \eqref{eq:sys1}--\eqref{eq:sys2}, one obtains
the second order form
\begin{align} \label{eq:second}
\partial_{tt} u - \partial_{xx} u + a'(u) \partial_t u = 0, \qquad x \in (0,1), \ t>0,
\end{align}
of a wave equation with nonlinear damping.
The corresponding linear problem with coefficient $a'(u)$ replaced by $c(x)$ has been considered in \cite{Baudouin13,Bukgheim01,ImanuvilovYamamoto01};
uniqueness and Lipschitz stability for the inverse coefficient problem
have been established in one and multiple space dimensions.
A one-dimensional wave equation with nonlinear source term $c(u)$ instead of $a'(u) \partial_t u$
has been investigated in \cite{CannonDuChateau83};
numerical procedures for the identification and some comments on the identifiability
have been provided there.
In \cite{Kaltenbacher07}, the identification of the parameter function $c(\partial_x u)$
in the quasilinear wave equation $\partial_{tt} u - \partial_x ( c(\partial_x u) \partial_x u) = 0$
has been addressed in the context of piezo-electricity;
uniqueness and stability has been established for this inverse problem.
Several results are available for related coefficient inverse problems for
nonlinear parabolic equations; see e.g.
\cite{CannonDuChateau73,DuChateau81,EggerEnglKlibanov05,EggerPietschmannSchlottbom15,Isakov93,Lorenzi86}.
Let us also refer to \cite{Isakov06,KlibanovTimonov} for an overview of available results and further references.
To the best of our knowledge, the uniqueness and stability for the nonlinear coefficient problem
\eqref{eq:sys1}--\eqref{eq:sys5} considered in this paper are still open.
Following arguments proposed in \cite{EggerPietschmannSchlottbom15} for the analysis of
a nonlinear inverse problem in heat conduction, we will derive \emph{approximate stability
results} for the inverse problem stated above, which can be obtained by comparison with
the linear inverse problem for the corresponding stationary problem.
This allows us to obtain quantitative estimates for the reconstruction errors in dependence of the experimental setup,
and provides a theoretical basis for the hypothesis that uniqueness holds,
if the boundary fluxes $g_i(t)$ are chosen appropriately.
For the stable numerical solution in the presence of measurement errors,
we consider a variational regularization defined by
\begin{align}
J(a;p,u) &= \int_0^T |\triangle p(t) - h^\delta(t)|^2 dt + \alpha \|a-a^*\|^2 \to \min \label{eq:min1}\\
& \text{subject to } \quad \eqref{eq:sys1}-\eqref{eq:sys4} \quad \text{and} \quad \eqref{eq:a1}. \label{eq:min2}
\end{align}
This allows us to appropriately address the ill-posedness of the inverse coefficient problem \eqref{eq:sys1}--\eqref{eq:sys5}.
Here $\alpha>0$ is the regularization parameter,
$a^*\in \mathbb{R}$ is an a-priori guess for the damping law, and $h^\delta$ denotes the measurements of the pressure drop
across the pipe for the time interval $[0,T]$. The precise form of regularization term will be specified below.
\medskip
As a first step, we establish the well-posedness of the system \eqref{eq:sys1}--\eqref{eq:sys4}
and prove uniform a-priori estimates for the solution. Semigroup theory for semilinear evolution
problems will be used for that.
In addition, we also show the continuous dependence and differentiability of the states $(u,p)$ with respect to the parameter $a(\cdot)$.
We then study the optimal control problem \eqref{eq:min1}--\eqref{eq:min2}.
Elimination of $(p,u)$ via solution of \eqref{eq:sys1}--\eqref{eq:sys4}
leads to a reduced minimization problem corresponding to Tikhonov regularization for
the nonlinear inverse problem $F(a)=h^\delta$, where $F$ is the parameter-to-measurement mapping
defined implicitly via the differential equations.
Continuity, compactness, and differentiability of the forward operator $F$ are investigated,
providing a guideline for the appropriate functional analytic setting for the inverse problem.
The existence and stability of minimizers for \eqref{eq:min1}--\eqref{eq:min2} then follows with standard arguments.
In addition, we derive quantitative estimates for the error between the reconstructed
and the true damping parameter using an \emph{approximate source condition},
which is reasonable for the problem under consideration.
Such conditions have been used successfully for the analysis of Tikhonov regularization
and iterative regularization methods in \cite{EggerSchlottbom11,HeinHofmann05}.
As a third step of our analysis, we discuss in detail the meaning and the plausibility of this approximate source condition.
We do this by showing that the nonlinear inverse problem is actually close to a linear inverse
problem, provided that the experimental setup is chosen appropriately.
This allows us to derive an approximate stability estimate for the inverse problem,
and to justify the validity of the approximate source condition.
These results suggest the hypothesis of uniqueness for the inverse problem under investigation,
and they allow us to make predictions about the results that can be expected in practice and that are actually observed in our numerical tests.
\medskip
The remainder of the manuscript is organized as follows:
In Section~\ref{sec:prelim} we fix our notation and briefly discuss the underlying linear wave equation without damping.
The well-posedness of the state system \eqref{eq:sys1}--\eqref{eq:sys4} is established in Section~\ref{sec:state}
via semigroup theory. For convenience of the reader, some auxiliary results are summarized in an appendix.
In Section~\ref{sec:forward}, we then investigate the basic properties of the parameter-to-measurement mapping $F$.
Section~\ref{sec:min} is devoted to the analysis of the regularization method \eqref{eq:min1}--\eqref{eq:min2}
and provides a quantitative estimate for the reconstruction error.
The required approximate source condition and the approximate stability of the inverse problem are discussed in Section~\ref{sec:hyp}
in detail.
Section~\ref{sec:num} presents the setup and the results of our numerical tests.
We close with a short summary of our results and a discussion of possible directions for future research.
\section{Preliminaries} \label{sec:prelim}
Throughout the manuscript, we use standard notation for Lebesgue and Sobolev spaces and for classical function spaces, see e.g. \cite{Evans98}. For the analysis of problem \eqref{eq:sys1}--\eqref{eq:sys4},
we will employ semigroup theory.
The evolution of this semilinear hyperbolic system is driven by the linear wave equation
\begin{align}
\partial_t p(x,t) + \partial_x u(x,t) &= 0, \quad x \in (0,1), \ t>0, \\
\partial_t u(x,t) + \partial_x p(x,t) &= 0, \quad x \in (0,1), \ t>0,
\end{align}
with homogeneous boundary values
\begin{align}
u(0,t)=u(1,t)=0, \quad t>0,
\end{align}
and initial conditions given by $p(\cdot,0)=p_0$ and $u(\cdot,0)=u_0$ on $(0,1)$.
This problem can be written in compact form as an abstract evolution equation
\begin{align} \label{eq:abstract}
y'(t) + A y(t) = 0, \ t>0, \qquad y(0)=y_0,
\end{align}
with state vector $y=(p,u)$, initial value $y_0=(p_0,u_0)$, and operator $A=\begin{pmatrix} 0 & \partial_x \\ \partial_x & 0\end{pmatrix}$. \\[-1ex]
The starting point for our analysis is the following
\begin{lemma}[Generator] \label{lem:generator}
Let $X=L^2(0,1) \times L^2(0,1)$ and $D(A)=H^1(0,1) \times H_0^1(0,1)$. \\
Then the operator $A : D(A) \subset X \to X$ generates a $C^0$-semigroup of contractions on $X$.
\end{lemma}
\begin{proof}
One easily verifies that $A$ is a densely defined and closed linear operator on $X$.
Moreover, $(A y, y)_X = 0$ for all $y \in D(A)$; therefore, $A$ is dissipative.
By direct calculations, one can see that for any $\bar f, \bar g \in L^2(0,1)$, the boundary value problem
\begin{align*}
\bar p(x) + \partial_x \bar u(x) &= \bar f(x), \quad x \in (0,1),\\
\bar u(x) + \partial_x \bar p(x) &= \bar g(x), \quad x \in (0,1),
\end{align*}
with $\bar u(0)=\bar u(1)=0$ is uniquely solvable with solution $(\bar p,\bar u) \in H^1(0,1) \times H_0^1(0,1)$;
indeed, eliminating $\bar p$ leads to the elliptic problem $\bar u - \partial_{xx} \bar u = \bar g - \partial_x \bar f$ with homogeneous Dirichlet conditions.
The assertion hence follows by the Lumer-Phillips theorem \cite[Ch~1, Thm~4.3]{Pazy83}.
\end{proof}
The analysis of the model problem \eqref{eq:sys1}--\eqref{eq:sys4} can now be done in the framework of semigroups.
For convenience, we collect some of the required results in the appendix.
\section{The state system} \label{sec:state}
Let us return to the semilinear wave equation under consideration.
For proving well-posedness of the system \eqref{eq:sys1}--\eqref{eq:sys4},
and in order to establish some additional regularity of the solution, we will assume that
\begin{itemize}\setlength\itemsep{1ex}
\item[(A1)] $a \in W_{loc}^{3,\infty}(\mathbb{R})$ with $a(0)=0$, $a_0 \le a'(\cdot) \le a_1$, $|a''(\cdot)| \le a_2$, and $|a'''(\cdot)| \le a_3$
\end{itemize}
for some positive constants $a_0,a_1,a_2,a_3>0$.
Since the damping law comes from a modelling process involving several approximation steps,
these assumptions are not very restrictive in practice.
In addition, we require the initial and boundary data to satisfy
\begin{itemize}\setlength\itemsep{1ex}
\item[(A2)] $u_0=0$ and $p_0=c$ with $c \in \mathbb{R}$;
\item[(A3)] $g_0,g_1 \in C^4([0,T])$ for some $T>0$, $g_0(0)=g_1(0)=0$, and $g_0'(0)=g_1'(0)=0$.
\end{itemize}
The system thus describes the smooth departure from a system at rest.
As will be clear from our proofs,
the assumptions on the initial conditions and the regularity requirements for the parameter and the initial and boundary data
could be relaxed without much difficulty.
Existence of a unique solution can now be established as follows.
\begin{theorem}[Classical solution] \label{thm:classical} $ $\\
Let (A1)--(A3) hold.
Then there exists a unique classical solution
\begin{align*}
(p,u) \in C^1([0,T];L^2(0,1) \times L^2(0,1)) \cap C([0,T]; H^1(0,1) \times H^1(0,1))
\end{align*}
for the initial boundary value problem \eqref{eq:sys1}--\eqref{eq:sys4} and its norm can be bounded by
\begin{align*}
\|(p,u)\|_{C([0,T];H^1\times H^1)} + \|(p,u)\|_{C^1([0,T];L^2 \times L^2)}
\le C'
\end{align*}
with constant $C'$ only depending on the bounds for the coefficients and the data and the time horizon $T$.
Moreover, $\triangle p := p(0,\cdot)-p(1,\cdot) \in C^\gamma([0,T])$, for any $0 \le \gamma < 1/2$, and
\begin{align*}
\|\triangle p\|_{C^{\gamma}([0,T])} \le C'(\gamma).
\end{align*}
\end{theorem}
\begin{proof}
The proof follows via semigroup theory for semilinear problems \cite{Pazy83}.
For convenience of the reader and to keep track of the constants, we sketch the basic steps:
{\em Step 1:}
We define $\hat u(x,t) = (1-x) g_0(t) + x g_1(t)$ and set
$\hat p(x,t) = \int_0^x \hat p_x(s,t)\, ds$ with $\hat p_x(x,t) = (x-1) (a(g_0(t))+g_0'(t)) - x (a(g_1(t))+g_1'(t))$.
Then we decompose the solution into $(p,u)=(\hat p,\hat u) + (\tilde p,\tilde u)$
and note that $(\hat p,\hat u) \in C^1([0,T];H^1 \times H^1)$ by construction and assumption (A3).
The second part $(\tilde p, \tilde u)$ solves
\begin{align*}
\partial_t \tilde p + \partial_x \tilde u &= f_1, \qquad \tilde p(\cdot,0) = \tilde p_0,\\
\partial_t \tilde u + \partial_x \tilde p &= f_2, \qquad \tilde u(\cdot,0) = \tilde u_0,
\end{align*}
with $f_1(t)=-\partial_t \hat p(t) - \partial_x \hat u(t)$, $f_2(t,\tilde u(t))=-\partial_t \hat u(t)-\partial_x \hat p(t)- a(\hat u(t) + \tilde u(t))$
and initial values $\tilde p_0 = p_0 - \hat p(0)$, $\tilde u_0 = u_0 - \hat u(0)$.
In addition, we have $\tilde u(0,t)=\tilde u(1,t)=0$ for $t>0$.
This problem can be written as an
abstract evolution equation
\begin{align*}
y'(t) + A y(t) = f(t,y(t)), \qquad y(0)=y_0,
\end{align*}
on $X=L^2 \times L^2$ with $y=(\tilde p,\tilde u)$, $f(t,y)=(f_1(t),f_2(t,y_2))$, and $D(A)=H^1 \times H_0^1$.
{\em Step 2:}
We now verify the conditions of Lemma~\ref{lem:classical2} stated in the appendix.
By assumptions (A2) and (A3) one can see that $y_0 = (\tilde p_0,\tilde u_0) \in H^1(0,1) \times H_0^1(0,1)$.
For every $y \in H^1(0,1) \times H_0^1(0,1)$, we further have $f(t,y) = (f_1(t),f_2(t,y_2)) \in H^1(0,1) \times H_0^1(0,1)$ by construction of $\hat u$ and $\hat p$. Moreover, $f$ is continuous w.r.t. time.
Denote by $|u|_{H^1}=\|\partial_x u\|_{L^2}$ the seminorm of $H^1$. Then
\begin{align*}
|f_2(t,v) - f_2(t,w)|_{H^1}
&= |a(\hat u(t) + v) - a(\hat u(t)+w)|_{H^1} \\
&\le \int_0^1 |a'(\hat u(t) + (1-s) v + s w) (v-w)|_{H^1} \, ds \\
&\le a_1 |v-w|_{H^1} + a_2 \max_{0\le s\le 1} |\hat u(t) + (1-s) v + s w|_{H^1} \, |v-w|_{H^1}.
\end{align*}
Here we used the embedding of $H^1(0,1)$ into $L^\infty(0,1)$ and the bounds for the coefficients.
This shows that $f$ is locally Lipschitz continuous with respect to $y$, uniform on $[0,T]$.
By Lemma~\ref{lem:classical2}, we thus obtain local existence and uniqueness of a classical solution.
{\em Step 3:}
To obtain the global existence of the classical solution,
note that
\begin{align*}
\|\tfrac{d}{dt} f(t,y(t))\|_X
&\le \|\tfrac{d}{dt} f_1(t)\|_{L^2} + \|\tfrac{d}{dt} f_2(t,\tilde u(t))\|_{L^2} \\
&\le C_1 + C_2 \big(1 + \|\partial_t \tilde u(t)\|_{L^2}\big),
\end{align*}
where the first term comes from estimating $f_1$ and the second from the estimate for $f_2$.
The constants $C_1,C_2$ here depend only on the bounds for the data.
Global existence of the classical solution and the uniform bound now follow from Lemma~\ref{lem:classical3}.
\end{proof}
Note that not all regularity assumptions for the data and for the parameter were required so far.
The conditions stated in (A1)--(A3) allow us to prove higher regularity of the solution,
which will be used for instance in the proof of Theorem~\ref{thm:lipschitz} later on.
\begin{theorem}[Regularity] \label{thm:regularity}
Under the assumptions of the previous theorem, we have
\begin{align*}
\|(p,u)\|_{C^1([0,T];H^1\times H^1) \cap C^2([0,T];L^2 \times L^2)} \le C''
\end{align*}
with $C''$ only depending on the bounds for the coefficient and data, and the time horizon.
\end{theorem}
\begin{proof}
To keep track of the regularity requirements, we again sketch the main steps:
{\em Step 1:}
Define $(r,w)=(\partial_t p,\partial_t u)$ and $(r,w) = (\hat r,\hat w) + (\tilde r,\tilde w)$ with $(\hat r,\hat w)=(\partial_t \hat p,\partial_t \hat u)$ and $(\tilde r,\tilde w)=(\partial_t \tilde p,\partial_t \tilde u)$ as in the previous proof.
The part $z=(\tilde r,\tilde w)$ can be seen to satisfy
\begin{align} \label{eq:z}
\partial_t z(t) + A z(t) = g(t,z(t)), \qquad z(0)=z_0,
\end{align}
with right hand side $g(t,z)=(-\partial_t \hat r(t) -\partial_x \hat w(t),-\partial_t \hat w(t)-\partial_x \hat r(t)-a'(u(t)) z_2)$
and initial value $z_0=(\partial_t p(0)-\partial_t \hat p(0),\partial_t u(0)-\partial_t \hat u(0)) = (-\partial_x u_0-\partial_t \hat p(0),-\partial_x p_0 - a(u_0) - \partial_t \hat u(0))$.
{\em Step 2:}
Using the assumptions (A1)--(A3) for the coefficient and the data, and the bounds for the solution
of Theorem~\ref{thm:classical}, and the definition of $\hat p$ and $\hat u$,
one can see that $z_0 \in Y=H^1 \times H_0^1$ and that
$g : [0,T] \times H^1(0,1) \times Y \to Y$ satisfies the conditions of Lemma~\ref{lem:classical2}.
Thus $z(t)$ is a local classical solution.
{\em Step 3:}
Similarly as in the previous proof, one can show that
\begin{align*}
\|\tfrac{d}{dt} g(t,z(t))\|_X \le C_1 + C_2 \|z'(t)\|_X + C_3 \|A z(t)\|_X
\end{align*}
for all sufficiently smooth functions $z$.
The global existence and uniform bounds for the classical solution then follow again by Lemma~\ref{lem:classical3}.
\end{proof}
\section{The parameter-to-output mapping} \label{sec:forward}
Let $u_0,p_0,g_0,g_1$ be fixed and satisfy assumptions (A2)--(A3).
Then by Theorem~\ref{thm:classical}, we can associate to any damping parameter $a$
satisfying the conditions (A1) the corresponding solution $(p,u)$ of problem
\eqref{eq:sys1}--\eqref{eq:sys4}.
By the uniform bounds of Theorem~\ref{thm:classical} and the embedding of $H^1(0,1)$ in $C[0,1]$,
we know that
\begin{align} \label{eq:bounds}
\underline{u} \le u(x,t) \le \overline{u}, \qquad x \in \omega, \ 0 \le t \le T,
\end{align}
for some constants $\underline{u}$, $\overline{u}$ independent of the choice of $a$.
Without loss of generality, we may thus restrict the parameter function $a$ to the interval $[\underline{u},\overline{u}]$.
We now define the parameter-to-measurement mapping, in the sequel also called \emph{forward operator}, by
\begin{align} \label{eq:forward}
F : D(F) \subset H^2(\underline{u},\overline{u}) \to L^2(0,T),
\qquad a \mapsto \triangle p
\end{align}
where $\triangle p=p(0,\cdot)-p(1,\cdot)$ is the pressure drop across the pipe and $(p,u)$ is the solution of \eqref{eq:sys1}--\eqref{eq:sys4} for parameter $a$. As domain for the operator $F$, we choose
\begin{align} \label{eq:domain}
D(F)=\{ a \in H^2(\underline{u},\overline{u}) : (A1) \text{ holds}\},
\end{align}
which is a closed and convex subset of $H^2(\underline{u},\overline{u})$.
By Theorem~\ref{thm:classical}, the parameter-to-measurement mapping is well-defined on $D(F)$.
In the following, we establish several further properties of this operator, which will be required for our analysis later on.
\begin{theorem}[Lipschitz continuity] \label{thm:lipschitz}
The operator $F$ is Lipschitz continuous, i.e.,
\begin{align*}
\|F(a) - F(\tilde a)\|_{L^2(0,T)} \le C_L \|a-\tilde a\|_{H^2(\underline{u},\overline{u})}, \qquad \forall a, \tilde a \in D(F)
\end{align*}
with some uniform Lipschitz constant $C_L$ independent of the choice of $a$ and $\tilde a$.
\end{theorem}
\begin{proof}
Let $a,\tilde a \in D(F)$ and let $(p,u)$, $(\tilde p,\tilde u)$ denote the corresponding classical solutions of problem \eqref{eq:sys1}--\eqref{eq:sys4}.
Then the function $(r,w)$ defined by $r=\tilde p-p$, $w=\tilde u-u$ satisfies
\begin{align*}
\partial_t r + \partial_x w &= 0, \\
\partial_t w + \partial_x r &= a(u) - \tilde a (\tilde u) =:f_2,
\end{align*}
with initial and boundary conditions $r(x,0)=w(x,0)=w(0,t)=w(1,t)=0$.
By Theorem~\ref{thm:classical}, we know the existence of a unique classical solution $(r,w)$.
Moreover,
\begin{align*}
\|\tfrac{d}{dt} f_2\|_{L^2}
&\le \|(a'(u)-a'(\tilde u)) \partial_t u\|_{L^2} + \|(a'(\tilde u) - \tilde a'(\tilde u)) \partial_t u\|_{L^2}
+ \|\tilde a'(\tilde u) (\partial_t u - \partial_t \tilde u)\|_{L^2} \\
&\le a_2 \|w\|_{L^2} \|\partial_t u\|_{L^\infty} + \|a'-\tilde a'\|_{L^\infty} \|\partial_t u\|_{L^2} + a_1 \|\partial_t w\|_{L^2}.
\end{align*}
Using the uniform bounds for $u$ provided by Theorems~\ref{thm:classical} and \ref{thm:regularity} and similar estimates as in the proof of Lemma~\ref{lem:classical3}, one obtains
$\|(r,w)\|_{C([0,T];H^1 \times H_0^1)} \le C \|a' - \tilde a'\|_{L^\infty}$ with $C$ only depending on the bounds for the coefficients and the data and on the time horizon. The assertion then follows by noting that $F(\tilde a) -F(a) = r(0,\cdot)-r(1,\cdot)$ and using the continuous embeddings of $H^1(0,1)$ into $L^\infty(0,1)$ and of $H^2(\underline{u},\overline{u})$ into $W^{1,\infty}(\underline{u},\overline{u})$.
\end{proof}
By careful inspection of the proof of Theorem~\ref{thm:lipschitz}, we also obtain
\begin{theorem}[Compactness] \label{thm:compact}
The operator $F$ maps sequences in $D(F)$ weakly converging in $H^2(\underline{u},\overline{u})$ to strongly convergent sequences in $L^2(0,T)$. In particular, $F$ is compact.
\end{theorem}
\begin{proof}
The assertion follows from the estimates of the previous proof by noting that the embedding of $H^2(\underline{u},\overline{u})$ into $W^{1,\infty}(\underline{u},\overline{u})$ is compact. The forward operator is thus a composition of a continuous and a compact operator.
\end{proof}
As a next step, we consider the differentiability of the forward operator.
\begin{theorem}[Differentiability] \label{thm:differentiability} $ $\\
The operator $F$ is Fr\'echet differentiable with Lipschitz continuous derivative, i.e.,
\begin{align*}
\|F'(a) - F'(\tilde a)\|_{H^2(\underline{u},\overline{u}) \to L^2(0,T)} \le L \|a-\tilde a\|_{H^2(\underline{u},\overline{u})}
\qquad \text{for all } a,\tilde a \in D(F).
\end{align*}
\end{theorem}
\begin{proof}
Denote by $(p(a),u(a))$ the solution of \eqref{eq:sys1}--\eqref{eq:sys4}
for parameter $a$ and let $(r,w)$ be the directional derivative of $(p(a),u(a))$ with respect to $a$ in direction $b$, defined by
\begin{align} \label{eq:directional}
r = \lim_{s \to 0} \frac{1}{s} (p(a+sb)-p(a))
\quad \text{and} \quad
w = \lim_{s \to 0} \frac{1}{s} (u(a+sb)-u(a)).
\end{align}
Then $(r,w)$ is characterized by the {\em sensitivity system}
\begin{align}
\partial_t r + \partial_x w &= 0, \label{eq:sen1} \\
\partial_t w + \partial_x r &= -a'(u(a)) w - b(u(a))=:f_2\label{eq:sen2}
\end{align}
with homogeneous initial and boundary values
\begin{align}
r(x,0) = w(x,0) = w(0,t) = w(1,t) &= 0. \label{eq:sen34}
\end{align}
The right hand side $f_2(t,w)=-a'(u(a;t)) w - b(u(a;t))$ can be shown to be continuously differentiable
with respect to time, by using the previous results and (A1)--(A3).
Hence by Lemma~\ref{lem:classical} there exists a unique classical solution $(r,w)$ to \eqref{eq:sen1}--\eqref{eq:sen34}.
Furthermore
\begin{align*}
\|\tfrac{d}{dt} f_2\|_{L^2}
&\le \|a''(u)\|_{L^\infty} \|\partial_t u\|_{L^\infty} \|w\|_{L^2} + \|a'(u)\|_{L^\infty} \|\partial_t w\|_{L^2} + \|b'(u)\|_{L^\infty} \|\partial_t u\|_{L^2}.
\end{align*}
By Lemma~\ref{lem:classical3} we thus obtain uniform bounds for $(r,w)$.
The directional differentiability of $(p(a),u(a))$ follows by verifying \eqref{eq:directional},
which is left to the reader.
The function $(r,w)$ depends linearly and continuously on $b$ and continuously on $a$ which yields the continuous differentiability of $(p(a),u(a))$ with respect to the parameter $a$.
The differentiability of the forward operator $F$ then follows by noting that $F'(a) b = r(0,\cdot)-r(1,\cdot)$.
For the Lipschitz estimate, we repeat the argument of Theorem~\ref{thm:lipschitz}. An additional derivative
of the parameter $a$ is required for this last step.
\end{proof}
\section{The regularized inverse problem} \label{sec:min}
The results of the previous section allow us to rewrite the constrained minimization problem
\eqref{eq:min1}--\eqref{eq:min2} in reduced form as \begin{align} \label{eq:tikhonov}
J_\alpha^\delta(a) := \|F(a) - h^\delta\|_{L^2(0,T)}^2 + \alpha \|a-a^*\|_{H^2(\underline{u},\overline{u})}^2 \to \min_{a \in D(F)},
\end{align}
which amounts to Tikhonov regularization for the nonlinear inverse problem $F(a)=h^\delta$.
As usual, we replaced the exact data $h$ by perturbed data $h^\delta$ to account for measurement errors.
Existence of a minimizer can now be established with standard arguments \cite{EnglHankeNeubauer96,EnglKunischNeubauer89}.
\begin{theorem}[Existence of minimizers]
Let (A2)--(A3) hold.
Then for any $\alpha>0$ and any choice of data $h^\delta \in L^2(0,T)$,
the problem \eqref{eq:tikhonov} has a minimizer $a_\alpha^\delta \in D(F)$.
\end{theorem}
\begin{proof}
The set $D(F)$ is closed, convex, and bounded. In addition, we have shown that $F$ is weakly continuous,
and hence the functional $J_\alpha^\delta$ is weakly lower semi-continuous.
Existence of a solution then follows as in \cite[Thm.~10.1]{EnglHankeNeubauer96}.
\end{proof}
\begin{remark}
Weak continuity and thus existence of a minimizer can be shown without the bounds for the second and third derivative of the parameter in assumption (A1).
\end{remark}
Let us assume that there exists a true parameter $a^\dag \in D(F)$
and denote by $h=F(a^\dag)$ the corresponding exact data.
The perturbed data $h^\delta$ are required to satisfy
\begin{align} \label{eq:noise}
\|h^\delta - h\|_{L^2(0,T)} \le \delta
\end{align}
with $\delta$ being the noise level.
These are the usual assumptions for the inverse problem.
In order to simplify the following statements about convergence, we also assume
for the moment that the solution of the inverse problem is unique, i.e., that
\begin{align} \label{eq:unique}
F(a) \ne F(a^\dag) \quad \text{for all } a \in D(F) \setminus \{a^\dag\}.
\end{align}
This assumption is only made for convenience here,
but we also give some justification for its validity in the following section.
Under this uniqueness assumption, we obtain the following result about the convergence of the regularized solutions; see \cite{EnglHankeNeubauer96,EnglKunischNeubauer89}.
\begin{theorem}[Convergence]
Let \eqref{eq:unique} hold and $h^\delta$ be a sequence of data satisfying \eqref{eq:noise} for $\delta \to 0$.
Further, let $a_\alpha^\delta$ be corresponding minimizers of \eqref{eq:tikhonov} with $\alpha=\alpha(\delta)$ chosen such that $\alpha \to 0$ and $\delta^2/\alpha \to 0$.
Then $\|a_\alpha^\delta - a^\dag\|_{H^2(\underline{u},\overline{u})} \to 0$ as $\delta \to 0$.
\end{theorem}
\begin{remark}
Without assumption \eqref{eq:unique} about uniqueness, convergence holds for subsequences and
towards an $a^*$-minimum norm solution $a^\dag$; see \cite[Sec.~10]{EnglHankeNeubauer96} for details.
\end{remark}
To obtain quantitative estimates for the convergence, some additional conditions on the nonlinearity of the operator $F$ and on the solution $a^\dag$ are required.
Let us assume that
\begin{align} \label{eq:source}
a^\dag - a^* = F'(a^\dag)^* w + e
\end{align}
holds for some $w \in L^2(0,T)$ and $e \in H^2(\underline{u},\overline{u})$.
Note that one can always choose $w=0$ and $e=a^\dag-a^*$, so this condition is no
restriction of generality. However, good bounds for $\|w\|$ and $\|e\|$ are required
in order to really take advantage of this splitting later on.
Assumption \eqref{eq:source} is called an \emph{approximate source condition}; such conditions have been
investigated in the convergence analysis of regularization methods, for instance in \cite{EggerSchlottbom11,HeinHofmann05}.
By a slight modification of the proof of \cite[Thm.~10.4]{EnglHankeNeubauer96}, one can obtain
\begin{theorem}[Convergence rates]
Let \eqref{eq:source} hold and let $L \|w\|_{L^2(0,T)} < 1$. Then
\begin{align}
\|a^\dag - a_\alpha^\delta\|^2_{H^2(\underline{u},\overline{u})}
\le C \big( \delta^2/\alpha + \alpha \|w\|_{L^2(0,T)}^2 + \delta \|w\|_{L^2(0,T)} + \|e\|_{H^2(\underline{u},\overline{u})}^2 \big).
\end{align}
The constant $C$ in the estimate only depends on the size of $L\|w\|_{L^2(0,T)}$.
\end{theorem}
\begin{proof}
Proceeding as in \cite{EnglHankeNeubauer96,EnglKunischNeubauer89}, one can see that
\begin{align*}
\|F(a_\alpha^\delta) - h^\delta\|^2 + \alpha \|a_\alpha^\delta - a^\dag\|^2
\le \delta^2 + 2\alpha (a^\dag-a_\alpha^\delta, a^\dag - a^* ).
\end{align*}
Using the approximate source condition \eqref{eq:source}, the last term can be estimated by
\begin{align*}
(a^\dag - a_\alpha^\delta, a^\dag - a^* )
&= (F'(a^\dag) (a^\dag -a_\alpha^\delta) , w) + (a^\dag-a_\alpha^\delta,e) \\
&\le \|F'(a^\dag) (a^\dag-a_\alpha^\delta)\| \|w\| + \|a^\dag-a_\alpha^\delta\|\|e\|.
\end{align*}
By elementary manipulations and the Lipschitz continuity of the derivative, one obtains
\begin{align*}
\|F'(a^\dag) (a^\dag - a_\alpha^\delta)\|
&\le \|F(a_\alpha^\delta)-h^\delta\| + \delta + \tfrac{L}{2} \|a_\alpha^\delta - a^\dag\|^2.
\end{align*}
Using this in the previous estimates and applying Young's inequality several times leads to
\begin{align*}
&\|F(a_\alpha^\delta) - h^\delta\|^2 + \alpha \|a_\alpha^\delta - a^\dag\|^2 \\
&\le \delta^2 + 2 \alpha^2 \|w\|^2 + 2 \alpha \delta \|w\| + C' \alpha \|e\|^2
+ \frac{1}{2} \|F(a_\alpha^\delta)-h^\delta\|^2 + \alpha \|a_\alpha^\delta-a^\dag\|^2 (L \|w\| + \tfrac{1}{C'}).
\end{align*}
If $L\|w\|<1$, we can choose $C'$ sufficiently large such that $L \|w\| + \tfrac{1}{C'} < 1$ and the last two terms can be absorbed in the left hand side, which yields the assertion.
\end{proof}
\begin{remark}
The bound of the previous theorem yields a quantitative estimate for the error.
If the source condition \eqref{eq:source} holds with $e=0$ and $L \|w\| < 1$,
then for $\alpha \approx \delta$ one obtains $\|a_\alpha^\delta - a^\dag\| = O(\delta^{1/2})$,
which is the usual convergence rate result \cite[Thm.~10.4]{EnglHankeNeubauer96}.
The theorem however also yields estimates and a guideline for the choice of the regularization parameter in the general case.
We refer to \cite{HeinHofmann05} for an extensive discussion of the approximate source condition \eqref{eq:source} and its relation to more standard conditions.
\end{remark}
\begin{remark}
If the deviation from the classical source condition is small, i.e., if \eqref{eq:source} holds with
$\|e\| \approx \delta^{1/2}$ and $L \|w\| < 1$,
then for $\alpha \approx \delta$ one still obtains the usual estimate $\|a_\alpha^\delta - a^\dag\| = O(\delta^{1/2})$.
As we will illustrate in the next section, the assumption that $\|e\|$ is small is realistic in practice,
if the experimental setup is chosen appropriately.
The assumption that $\|e\|$ is sufficiently small in comparison to $\alpha$ also allows one to show
that the Tikhonov functional is locally convex around minimizers and to prove convergence of iterative schemes;
see \cite{EggerSchlottbom15,ItoJin15} for some recent results in this direction.
\end{remark}
Numerical methods for minimizing the Tikhonov functional usually require the application of the adjoint derivative operator.
For later reference, let us therefore briefly give a concrete representation of the adjoint that can be used for the implementation.
\begin{lemma}
Let $\psi \in H^1(0,T)$ with $\psi(T)=0$ and let $(q,v)$ denote the solution of
\begin{align}
\partial_t q + \partial_x v &= 0 \qquad \qquad x \in (0,1), \ t<T, \label{eq:adj1}\\
\partial_t v + \partial_x q &= a'(u) v, \quad \ x \in (0,1), \ t<T, \label{eq:adj2}
\end{align}
with terminal conditions $v(x,T)=q(x,T)=0$ and boundary conditions
\begin{align}
v(0,t)=v(1,t)=\psi(t), \quad t<T. \label{eq:adj3}
\end{align}
Then the action of the adjoint operator $\phi=F'(a)^* \psi$ is given by
\begin{align} \label{eq:adj}
(\phi,b)_{H^2(\underline{u},\overline{u})} = \int_0^T (b(u), v)_{L^2(0,1)} dt , \qquad \forall b \in H^2(\underline{u},\overline{u}).
\end{align}
\end{lemma}
\begin{proof}
By definition of the adjoint operator, we have
\begin{align*}
(b,F'(a)^* \psi)_{H^2(\underline{u},\overline{u})} = (F'(a) b, \psi)_{L^2(0,T)}.
\end{align*}
Using the characterization of the derivative via the solution $(r,w)$ of the sensitivity equation \eqref{eq:sen1}--\eqref{eq:sen2} and the definition of the adjoint state $(q,v)$ via \eqref{eq:adj1}--\eqref{eq:adj2}, we obtain
\begin{align*}
&(F'(a) b, \psi)_{L^2(0,T)}
= \int\nolimits_0^T r(0,t) v(0,t) - r(1,t) v(1,t) dt \\
&= \int\nolimits_0^T -(\partial_x r,v) - (r, \partial_x v) dt
=\int\nolimits_0^T (\partial_t w+a'(u) w + b(u), v) + (r,\partial_t q) dt \\
&= \int\nolimits_0^T -(\partial_t v - a'(u) v,w) + (b(u),v) + (\partial_x w,q) dt
=\int\nolimits_0^T (b(u), v) dt.
\end{align*}
For the individual steps we only used integration by parts and made use of the boundary, initial, and terminal conditions.
This already yields the assertion.
\end{proof}
\begin{remark}
Existence of a unique solution $(q,v)$ of the adjoint system \eqref{eq:adj1}--\eqref{eq:adj2}
with the homogeneous terminal condition $v(x,T)=q(x,T)=0$ and boundary condition $v(0,t)=v(1,t)=\psi(t)$ follows
with the same arguments as used in Theorem~\ref{thm:classical}.
The representation of the adjoint formally holds also for $\psi \in L^2(0,T)$, which can be proved by a limiting process.
The adjoint problem then has to be understood in a generalized sense.
\end{remark}
\section{Remarks about uniqueness and the approximate source condition} \label{sec:hyp}
We now collect some comments about the uniqueness hypothesis \eqref{eq:unique} and the approximate source condition \eqref{eq:source}. Our considerations are based on the fact that the nonlinear inverse problem is actually close to a linear inverse problem provided that the experimental setup is chosen appropriately.
We will only sketch the main arguments here with the aim to illustrate the plausibility of these assumptions and to
explain what results can be expected in the numerical experiments presented later on.
\subsection{Reconstruction for a stationary experiment}
Let the boundary data \eqref{eq:sys3} be chosen such that
$g_0(t)=g_1(t)=\bar g \in \mathbb{R}$ for $t \ge t_0$.
By the energy estimates of \cite{GattiPata06}, which are derived for an equivalent problem in second order form \eqref{eq:second} there, one can show that the solution
$(p(t),u(t))$ of the system \eqref{eq:sys1}--\eqref{eq:sys4} converges exponentially fast to a steady state
$(\bar p,\bar u)$, which is the unique solution of
\begin{align}
\partial_x \bar u &= 0, \quad x \in (0,1), \label{eq:stat1}\\
\partial_x \bar p + a(\bar u) &= 0, \quad x \in (0,1), \label{eq:stat2}
\end{align}
with boundary condition $\bar u(0)=\bar u(1)=\bar g$.
From equation \eqref{eq:stat1}, we deduce that the steady state $\bar u$ is constant,
and upon integration of \eqref{eq:stat2}, we obtain
\begin{align} \label{eq:statsol}
a(\bar g) = a(\bar u) = \int_0^1 a(\bar u) dx = -\int_0^1 \partial_x \bar p(x) dx = \bar p(0)-\bar p(1) = \triangle \bar p.
\end{align}
The value $a(\bar g)$ can thus be determined by a stationary experiment.
As a consequence, the friction law $a(\cdot)$ could in principle be determined from an infinite number of
stationary experiments.
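A minimal Python sketch of this pointwise recovery (our own illustration; the sample fluxes are chosen for demonstration only, and the friction law anticipates the test problem of Section~\ref{sec:num}):
\begin{verbatim}
import numpy as np

# Hypothetical stationary measurements: for each constant boundary flux
# g_j, the measured pressure drop equals a(g_j) by the formula above.
a_true = lambda u: u * np.sqrt(1.0 + u**2)   # friction law of the tests
g_samples = np.linspace(0.1, 2.0, 20)        # stationary flux values
dp_samples = a_true(g_samples)               # noiseless pressure drops

# Pointwise reconstruction: a(g_j) is read off directly; values between
# the samples are filled in by interpolation.
u_grid = np.linspace(0.1, 2.0, 200)
a_rec = np.interp(u_grid, g_samples, dp_samples)
print(np.max(np.abs(a_rec - a_true(u_grid))))  # small interpolation error
\end{verbatim}
With noisy measurements, the interpolation step would be replaced by a regularized fit.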
We will next investigate the inverse problem for these \emph{stationary experiments} in detail.
In a second step, we then use these results for the analysis of the inverse problem for the instationary experiments
that are our main focus.
\subsection{A linear inverse problem for a series of stationary experiments}
Let us fix a smooth and monotonic function $g : [0,T] \to [\underline{u},\overline{u}]$ and denote by $\triangle \bar p(t)=a(g(t))$ the pressure difference obtained from the stationary system \eqref{eq:stat1}--\eqref{eq:stat2} with boundary flux $\bar g=g(t)$.
The forward operator for a sequence of stationary experiments is then given by
\begin{align} \label{eq:statop}
K : H^2(\underline{u},\overline{u}) \to L^2(0,T), \qquad a \mapsto a(g(\cdot)),
\end{align}
and the corresponding inverse problem with exact data reads
\begin{align} \label{eq:statinv}
(K a)(t) = a(g(t)) = \triangle \bar p(t), \qquad t \in [0,T].
\end{align}
This problem is linear and its solution is given by the simple formula \eqref{eq:statsol}
with $\bar g$ and $\triangle \bar p$ replaced by $g(t)$ and $\triangle \bar p(t)$ accordingly.
From this representation, it follows that
\begin{align*}
\|a - \tilde a\|^2_{L^2(\underline{u},\overline{u})}
&= \int_0^T |a(g(t)) - \tilde a(g(t))|^2 |g'(t)| dt
\le C \|K a - K \tilde a\|_{L^2(0,T)}^2,
\end{align*}
where we assumed that $|g'(t)| \le C$ for all $t$.
Using the uniform bounds in assumption (A1),
embedding, and interpolation, one can further deduce that
\begin{align} \label{eq:stathoelder}
\|a - \tilde a\|_{H^2(\underline{u},\overline{u})} \le C_\gamma \|K a-K\tilde a\|_{L^2(0,T)}^\gamma.
\end{align}
This shows the Hölder stability of the inverse problem \eqref{eq:statinv} for stationary experiments.
As a next step, we will now extend these results to the instationary case by a perturbation argument
as proposed in \cite{EggerPietschmannSchlottbom15} for a related inverse heat conduction problem.
\subsection{Approximate stability for the instationary inverse problem}
If the variation of the boundary data $g(t)$ with respect to time is sufficiently small,
then from the exponential stability estimates of \cite{GattiPata06}, one may deduce that
\begin{align} \label{eq:eps}
\|p(t)-\bar p(t)\|_{H^1} + \|u(t)-\bar u(t)\|_{H^1} \le \varepsilon.
\end{align}
Hence the solution $(p(t),u(t))$ is always close to the stationary state $(\bar p(t),\bar u(t))$
with the corresponding boundary data $\bar u(0,t)=\bar u(1,t)=g(t)$.
Using $\triangle p(t) = p(0,t) - p(1,t) = -\int_0^1 \partial_x p(y,t) dy$ and the Cauchy-Schwarz inequality leads to
\begin{align} \label{eq:est}
|\triangle p(t) - a(g(t))| = |\triangle p(t) - \triangle \bar p(t)| \le \|p(t)-\bar p(t)\|_{H^1(0,1)} \le \varepsilon.
\end{align}
From the definition of the nonlinear and the linear forward operators, we deduce that
\begin{align} \label{eq:estOps}
F(a) = K a + O(\varepsilon).
\end{align}
\begin{remark}
As indicated above, the error $\varepsilon$ can be made arbitrarily small by a proper design of the experiment, i.e., by
slow variation of the boundary data $g(t)$.
The term $O(\varepsilon)$ can therefore be considered as an additional measurement error,
and thus the parameter $a$ can be determined approximately with the formula \eqref{eq:statsol}
for the stationary experiments.
As a consequence of the stability of the linear inverse problem, we further obtain
\begin{align} \label{eq:hoelder}
\|a-\tilde a\|_{H^2(\underline{u},\overline{u})} \le C'_\gamma \|F(a)-F(\tilde a)\|_{L^2(0,T)}^\gamma + C''_\gamma \varepsilon^\gamma.
\end{align}
In summary, we may thus expect that the identification from the nonlinear experiments is stable and unique,
provided that the experimental setup is chosen appropriately.
\end{remark}
\subsection{The approximate source condition}
With the aid of the stability results in \cite{GattiPata06} and similar reasoning as above,
one can show that the linearized operator satisfies
\begin{align*}
F'(a) h = K h + O(\varepsilon \|h\|_{H^2(\underline{u},\overline{u})}).
\end{align*}
A similar expansion is then also valid for the adjoint operator, namely
\begin{align*}
F'(a)^* w = K^*w + O(\varepsilon \|w\|_{L^2(0,T)}).
\end{align*}
This follows since $L=F'(a)-K$ is linear and bounded by a multiple of $\varepsilon$,
and so is the adjoint $L^*=F'(a)^*-K^*$.
In order to verify the approximate source condition \eqref{eq:source}, it thus suffices to
consider the condition $a^\dag - a^* = K^* w$ for the linear problem.
From the explicit representation \eqref{eq:statop} of the operator $K$ this can be translated directly to a smoothness condition on $a^\dag - a^*$ in terms of weighted Sobolev spaces and some boundary conditions; we refer to \cite{EggerPietschmannSchlottbom14} for a detailed derivation in a similar context.
\begin{remark}
The observations made in this section can be summarized as follows:
(i) If the true parameter $a$ is sufficiently smooth, and if the boundary data are varied sufficiently slowly ($\varepsilon$ small),
such that the instationary solution at time $t$ is close to the steady state corresponding to the boundary data $g(t)$,
then the parameter can be identified stably with the simple formula for the linear inverse problem.
The same stable reconstructions will also be obtained with Tikhonov regularization \eqref{eq:tikhonov}.
(ii) For increasing $\varepsilon$, the approximation \eqref{eq:estOps} of the nonlinear problem by the linear problem deteriorates.
In this case, the reconstruction by the simple formula \eqref{eq:statsol} will get worse while the solutions obtained
by Tikhonov regularization for the instationary problem can be expected to still yield good and stable reconstructions.
\end{remark}
\begin{remark}
Our reasoning here was based on the \emph{approximate stability estimate} \eqref{eq:hoelder} that is inherited from the
stationary problem by a perturbation argument.
A related analysis of Tikhonov regularization under \emph{exact} conditional stability assumptions
can be found in \cite{ChengYamamoto00,HofmannYamamoto10} together with some applications.
\end{remark}
\section{Numerical tests} \label{sec:num}
To illustrate the theoretical considerations of the previous section,
we present some numerical results which provide additional evidence for the
uniqueness and stability of the inverse problem.
\subsection{Discretization of the state equations}
For the space discretization of the state system \eqref{eq:sys1}--\eqref{eq:sys4},
we utilize a mixed finite element method based on a weak formulation of the problem.
The pressure $p$ and the velocity $u$ are approximated with continuous piecewise linear
and discontinuous piecewise constant finite elements, respectively. For the time discretization, we employ a one-step scheme in which the differential terms are treated implicitly and the nonlinear damping term is integrated explicitly.
A single time step of the resulting numerical scheme then has the form
\begin{align*}
\tfrac{1}{\tau} (p^{n+1}_h,q_h) - (u_h^{n+1},\partial_x q_h) &= \tfrac{1}{\tau} (p_h^n,q_h) + g_0^{n+1} q_h(0) - g_1^{n+1} q_h(1), \\
\tfrac{1}{\tau} (u_h^{n+1},v_h) + (\partial_x p_h^{n+1},v_h) &= \tfrac{1}{\tau} (u_h^n,v_h) - (a(u_h^n),v_h),
\end{align*}
for all test functions $q_h \in P_1(T_h) \cap C[0,1]$ and $v_h \in P_0(T_h)$.
Here $T_h$ is the mesh of the interval $(0,1)$, $P_k(T_h)$ denotes the space of piecewise polynomials on $T_h$, $\tau>0$ is the time-step, and $g_i^n=g_i(t^n)$ are the boundary fluxes at time $t^{n}=n \tau$.
The functions $(p_h^n,u_h^n)$ serve as approximations for the solution $(p(t^n),u(t^n))$
at the discrete time steps.
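For illustration, the following Python sketch (our own reconstruction under the stated discretization choices, assuming a uniform mesh; not the authors' actual implementation) performs a single time step. The velocity is eliminated from the coupled system, which is cheap because the $P_0$ mass matrix is diagonal.
\begin{verbatim}
import numpy as np

def time_step(p, u, tau, h, a, g0, g1):
    """One step of the mixed P1-P0 scheme; damping a(u) treated explicitly.

    p : nodal pressure values, shape (N+1,);  u : element velocities, (N,);
    g0, g1 : boundary fluxes at the new time level t^{n+1}.
    """
    N = u.size
    # P1 mass matrix (tridiagonal, h/6 * [1, 4, 1] stencil)
    Mp = h / 6.0 * (np.diag(np.r_[2.0, 4.0 * np.ones(N - 1), 2.0])
                    + np.diag(np.ones(N), 1) + np.diag(np.ones(N), -1))
    # Difference matrix: (D p)_e = p_{e+1} - p_e on each element e
    D = np.zeros((N, N + 1))
    D[np.arange(N), np.arange(N)] = -1.0
    D[np.arange(N), np.arange(N) + 1] = 1.0

    rhs_p = Mp @ p / tau
    rhs_p[0] += g0                    # boundary terms g0*q(0) - g1*q(1)
    rhs_p[-1] -= g1
    rhs_u = h * u / tau - h * a(u)    # P0 mass matrix is h*I; a(u) explicit

    # Eliminate u^{n+1} = (tau/h)*(rhs_u - D p^{n+1}), solve for p^{n+1}
    S = Mp / tau + (tau / h) * (D.T @ D)
    p_new = np.linalg.solve(S, rhs_p + (tau / h) * (D.T @ rhs_u))
    u_new = (tau / h) * (rhs_u - D @ p_new)
    return p_new, u_new
\end{verbatim}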
Similar schemes are used to approximate the sensitivity system \eqref{eq:sen1}--\eqref{eq:sen34} and the adjoint problem \eqref{eq:adj}--\eqref{eq:adj3} in a consistent manner.
The spatial and temporal mesh sizes were chosen small enough that
approximation errors due to the discretization can be neglected;
this was verified by repeating the tests with different discretization parameters.
\subsection{Approximation of the parameter}
The parameter function $a(\cdot)$ was approximated by cubic interpolating splines over a uniform grid of the interval $[\underline{u},\overline{u}]$.
The splines were parametrized by the interpolation conditions $s(u_i)=s_i$, $i=0,\ldots,m$, and not-a-knot conditions were used to obtain a unique representation. To simplify the implementation, the $L^2$, $H^1$, and $H^2$ norms in the parameter space were approximated by difference operators acting directly on the interpolation points $s_i$, $i=0,\ldots,m$.
To ensure mesh independence, the tests were repeated for different
numbers $m$ of interpolation points.
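A sketch of this parametrization, assuming a SciPy-based implementation; the grid and the concrete weighting of the difference quotients are our own choices for illustration:
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

m = 20
u_nodes = np.linspace(0.0, 2.0, m + 1)      # uniform grid on [u_min, u_max]
s_vals = u_nodes * np.sqrt(1 + u_nodes**2)  # interpolation values s_i

# Cubic spline with not-a-knot end conditions (unique representation)
spline = CubicSpline(u_nodes, s_vals, bc_type='not-a-knot')
a_of_u = spline(np.linspace(0.0, 2.0, 101)) # evaluate between the knots

# Discrete surrogates for the L2, H1, H2 norms, acting on the values s_i
du = u_nodes[1] - u_nodes[0]
d1 = np.diff(s_vals) / du                   # first difference quotients
d2 = np.diff(s_vals, 2) / du**2             # second difference quotients
norm_L2 = np.sqrt(du * np.sum(s_vals**2))
norm_H1 = np.sqrt(norm_L2**2 + du * np.sum(d1**2))
norm_H2 = np.sqrt(norm_H1**2 + du * np.sum(d2**2))
\end{verbatim}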
\subsection{Minimization of the Tikhonov functional}
For minimization of the Tikhonov functional \eqref{eq:tikhonov}, we utilized a projected iteratively regularized Gau\ss-Newton method with regularization parameters $\alpha^n = c q^n$, $q<1$.
The bounds in assumption (A1) for the parameters were satisfied automatically for all iterates in our tests such that the projection step was never carried out.
The iteration was stopped by a discrepancy principle, i.e., when $\|F(a^n) - h^\delta\| \le 1.5 \delta$ was satisfied for the first time.
The regularization parameter $\alpha^n$ of the last step was interpreted as the regularization parameter $\alpha$ of the Tikhonov functional \eqref{eq:tikhonov}.
We refer to \cite{EggerSchlottbom11} for details concerning such a strategy for the iterative minimization of the Tikhonov functional.
The discretizations of the derivative and adjoint operators $F'(a)$ and $F'(a)^*$ were implemented consistently, such that $(F'(a) h, \psi) = (h, F'(a)^* \psi)$ holds exactly also on the discrete level.
The linear systems of the Gau\ss-Newton method were then solved by a preconditioned conjugate gradient algorithm.
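The overall iteration can be summarized by the following sketch. Here \texttt{forward}, \texttt{deriv}, and \texttt{deriv\_adj} are hypothetical callables evaluating $F(a)$, $F'(a)h$, and $F'(a)^*\psi$ with the schemes above; for brevity the $H^2$ inner product is replaced by the Euclidean one, and the projection onto $D(F)$ is omitted since the bounds in (A1) remained inactive in our tests.
\begin{verbatim}
import numpy as np

def cg(apply_A, b, tol=1e-10, max_iter=200):
    """Plain conjugate gradients for the SPD regularized normal equations."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        step = rs / (p @ Ap)
        x = x + step * p
        r = r - step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def irgnm(a0, a_prior, h_delta, delta, forward, deriv, deriv_adj,
          c=1.0, q=0.5, tau_disc=1.5, max_iter=50):
    """Iteratively regularized Gauss-Newton with alpha_n = c*q^n and
    discrepancy stopping; the projection onto D(F) is omitted here."""
    a = a0.copy()
    for n in range(max_iter):
        residual = forward(a) - h_delta
        if np.linalg.norm(residual) <= tau_disc * delta:
            break                      # discrepancy principle
        alpha = c * q**n
        # Solve (F'(a)^* F'(a) + alpha I) da = -F'(a)^* res + alpha (a* - a)
        rhs = -deriv_adj(a, residual) + alpha * (a_prior - a)
        da = cg(lambda x: deriv_adj(a, deriv(a, x)) + alpha * x, rhs)
        a = a + da
    return a
\end{verbatim}
A dot test, i.e., checking that \texttt{deriv(a, h) @ psi} agrees with \texttt{h @ deriv\_adj(a, psi)} up to round-off, verifies the discrete adjoint consistency mentioned above.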
\subsection{Setup of the test problem}
As true damping parameter, we used the function
\begin{align} \label{eq:adag}
a^\dag(u) = u \sqrt{1+u^2} .
\end{align}
The asymptotic behaviour here is
$a(u) \approx u$ for $|u| \ll 1$ and $a(u) \approx u |u|$ for $|u| \gg 1$,
which corresponds to the expected behaviour of the friction forces in pipes \cite{LandauLifshitz6}.
Restricted to any bounded interval $[\underline{u},\overline{u}]$, the function $a^\dag$ satisfies the assumptions (A1).
For our numerical tests, we used the initial data $u_0 \equiv 0$, $p_0 \equiv 1$,
and we chose
\begin{align} \label{eq:g}
g_0(t)=g_1(t)=g(t)=2 \sin(\tfrac{\pi}{2T} t)^2
\end{align}
as boundary fluxes.
A variation of the time horizon $T$ thus allows us to tune the speed of variation in the boundary data,
while keeping the interval $[\underline{u},\overline{u}]$ of fluxes that arise at the boundary fixed.
\subsection{Simulation of measurement data}
The boundary data $g(t;T)$ and the resulting pressure drops $\triangle p(t;T)$ across the pipe are displayed
in Figure~\ref{fig:1} for different choices of $T$.
For comparison, we also display the pressure drop $\triangle \bar p$ obtained with the
linear forward model.
\begin{figure}[ht!]
\medskip
\includegraphics[height=3.2cm]{data1} \hspace*{0.75cm}
\includegraphics[height=3.2cm]{data2} \hspace*{0.75cm}
\includegraphics[height=3.2cm]{data5} \\[0.5cm]
\includegraphics[height=3.2cm]{data10}
\caption{Boundary flux $g(t)$ and pressure drops $\triangle p(t)$ and $\triangle \bar p(t)$ for the instationary and the linearized model for time horizon $T=1,2,5,10$. \label{fig:1}}
\end{figure}
The following observations can be made:
For small values of $T$, the pressure drop $\triangle p$ varies rapidly all over the time interval $[0,T]$ and
therefore deviates strongly from the pressure drop $\triangle \bar p$ of the linearized model corresponding to stationary
experiments.
In contrast, the pressure drop $\triangle p$ is close to that of the linearized model on the whole time interval $[0,T]$, when $T$ is large and therefore the variation in the boundary data $g(t)$ is small.
As expected from \eqref{eq:estOps}, the difference between $\triangle p$ and $\triangle \bar p$ becomes smaller when $T$ is increased.
A proper choice of the parameter $T$ thus allows us to tune our experimental setup
and to verify the conclusions obtained in Section~\ref{sec:hyp}.
\subsection{Convergence to steady state}
We next demonstrate in more detail that the solution
$(p(t),u(t))$ of the instationary problem is close to the steady states $(\bar p(t),\bar u(t))$ for
boundary data $\bar u(0)=\bar u(1)=g(t)$, provided that $g(t)$ varies sufficiently slowly; cf.~\eqref{eq:eps}.
In Table~\ref{tab:1}, we list the errors
\begin{align*}
e(T) := \max_{0 \le t \le T} \|p(t;T)-\bar p(t;T)\|_{L^2} + \|u(t;T)-\bar u(t;T)\|_{L^2}
\end{align*}
between the instationary and the corresponding stationary states
for different values of $T$ in the definition of the boundary data $g(t;T)$.
In addition, we also display the difference
\begin{align*}
d(T)=\max_{0 \le t \le T} |\triangle p(t) - \triangle \bar p(t)|
\end{align*}
in the measurements corresponding to the nonlinear and the linearized model.
\begin{table}[ht!]
\centering
\small
\begin{tabular}{c||c|c|c|c|c|c}
$T$ & $1$ & $2$ & $5$ & $10$ & $20$ & $50$ \\
\hline
$e(T)$ & $1.016$ & $0.647$ & $0.207$ & $0.105$ & $0.054$ & $0.022$ \\
\hline
$d(T)$ & $1.044$ & $1.030$ & $0.479$ & $0.225$ & $0.114$ & $0.045$
\end{tabular}
\medskip
\caption{Error $e(T)$ between instationary and stationary solution and difference $d(T)$ in the corresponding
measurements.\label{tab:1}}
\end{table}
The speed of variation in the boundary data decreases when $T$ becomes larger,
and we thus expect a monotonic decrease of the distance $e(T)$ to steady state
with increasing time horizon. The same can be expected for the error $d(T)$
in the measurements. This is exactly the behaviour that we observe in our numerical tests.
\subsection{Reconstructions for nonlinear and linearized model}
Let us now turn to the inverse problem
and compare the reconstructions for the nonlinear inverse problem obtained by Tikhonov regularization with those computed by the simple formula \eqref{eq:statsol} for the linearized inverse problem corresponding to stationary experiments.
The data for these tests are generated by simulation as explained before, and then perturbed with random noise such that $\delta=0.001$.
Since the noise level is rather small, the data perturbations do not have any visual effect on the reconstructions here;
see also Figure~\ref{fig:3} below.
In Figure~\ref{fig:2}, we display the corresponding results for measurements $h=\triangle p(\cdot;T)$ obtained for different time horizons $T$ in the definition of the boundary data $g(\cdot;T)$.
\begin{figure}[ht!]
\includegraphics[height=5cm]{a1}
\includegraphics[height=5cm]{a10} \\
\includegraphics[height=5cm]{a2}
\includegraphics[height=5cm]{a20} \\
\includegraphics[height=5cm]{a5}
\includegraphics[height=5cm]{a50}
\caption{True parameter $a^\dag$, reconstruction $a_\alpha^\delta$ obtained by Tikhonov regularization with initial guess $a^*$, and result $\bar a$ obtained by formula \eqref{eq:statsol}.
The data $h^\delta$ are perturbed by random noise of size $\delta=0.001$.
The images correspond to time horizons $T=1,2,5$ (left) and $T=10,20,50$ (right).\label{fig:2}}
\end{figure}
As can be seen from the plots, the reconstruction with Tikhonov regularization works well in all test cases.
The results obtained with the simple formula \eqref{eq:statsol}, however, show some systematic deviations due to model errors,
which become smaller when $T$ is increased.
Recall that for large $T$, the speed of variation in the boundary fluxes $g(t;T)$ is small,
so that the system is close to steady state on the whole interval $[0,T]$.
The convergence of the reconstruction $\bar a$ for the linearized problem towards the true solution $a^\dag$
with increasing $T$ is thus in perfect agreement with our considerations in Section~\ref{sec:hyp}.
\subsection{Convergence and convergence rates}
In a last sequence of tests, we investigate the stability and accuracy of the
reconstructions $a_\alpha^\delta$ obtained with Tikhonov regularization in the presence of data noise.
Approximations for the minimizers $a_\alpha^\delta$ are computed numerically via the projected
iteratively regularized Gau\ss-Newton method as outlined above.
The iteration is stopped according to the discrepancy principle.
Table~\ref{tab:2} displays the reconstruction errors for different time horizons $T$ and different noise levels $\delta$.
\begin{table}[ht!]
\centering
\small
\begin{tabular}{c||c|c|c|c|c|c}
$\delta \backslash T$
& $1$ & $2$ & $5$ & $10$ & $20$ & $50$ \\
\hline
\hline
$0.10000$ & $0.8504$ & $0.3712$ & $0.1027$ & $0.0417$ & $0.0324$ & $0.0092$ \\
\hline
$0.05000$ & $0.6243$ & $0.2742$ & $0.0706$ & $0.0239$ & $0.0081$ & $0.0055$ \\
\hline
$0.02500$ & $0.3911$ & $0.1616$ & $0.0496$ & $0.0096$ & $0.0066$ & $0.0032$ \\
\hline
$0.01250$ & $0.2264$ & $0.1050$ & $0.0355$ & $0.0065$ & $0.0024$ & $0.0019$ \\
\hline
$0.00625$ & $0.1505$ & $0.0630$ & $0.0316$ & $0.0030$ & $0.0015$ & $0.0012$
\end{tabular}
\medskip
\caption{Reconstruction error $\|a_\alpha^\delta - a^\dag\|_{L^2(\underline{u},\overline{u})}$ for Tikhonov regularization for different noise levels $\delta$ and various time horizons $T$. \label{tab:2}}
\end{table}
Convergence is observed for all experimental setups, and the absolute errors decrease
monotonically with increasing time horizon $T$, which is partly explained by
our considerations in Section~\ref{sec:hyp}.
The reconstructions for time horizon $T=2$, corresponding to the third column of Table~\ref{tab:2},
are depicted in Figure~\ref{fig:3}; also compare with Figure~\ref{fig:2}.
\begin{figure}[ht!]
\includegraphics[height=5cm]{rec1}
\includegraphics[height=5cm]{rec4} \\
\includegraphics[height=5cm]{rec2}
\includegraphics[height=5cm]{rec5} \\
\includegraphics[height=5cm]{rec3}
\includegraphics[height=5cm]{rec6}
\caption{True parameter $a^\dag$, reconstruction $a_\alpha^\delta$ obtained by Tikhonov regularization, and initial guess $a^*$ for time horizon $T=2$ and noise levels $\delta=0.1,0.05,0.025$ (left) and $\delta=0.0125,0.00625,0.003125$ (right).\label{fig:3}}
\end{figure}
Note that already for a small time horizon $T=2$ and large noise level $\delta$ of several percent, one can obtain good reconstructions of the damping profile. For larger time horizon or smaller noise levels, the reconstruction $a_\alpha^\delta$ visually coincides completely with the true solution $a^\dag$.
This is in good agreement with our considerations in Section~\ref{sec:hyp}.
\section{Discussion} \label{sec:sum}
In this paper, we investigated the identification of a nonlinear damping law in a
semilinear hyperbolic system from additional boundary measurements. Uniqueness and
stability of the reconstructions obtained by Tikhonov regularization was observed
in all numerical tests. This behaviour could be explained theoretically by considering
the nonlinear inverse problem as a perturbation of a nearby linear problem, for which
uniqueness and stability can be proven rigorously.
In the coefficient inverse problem under investigation, the distance to the approximating
linearization could be chosen freely by a proper experimental setup. A similar argument was
already used in \cite{EggerPietschmannSchlottbom15} for the identification of a nonlinear
diffusion coefficient in a quasi-linear heat equation.
The general strategy might however be useful in a more general context and for many other applications.
Based on the uniqueness and stability of the linearized inverse problem, we could obtain
stability results for the nonlinear problem \emph{up to perturbations}; see
Section~\ref{sec:hyp} for details. Such a concept might be useful as well for the convergence
analysis of other regularization methods and more general inverse problems.
In all numerical tests we observed global convergence of an iterative method for the minimization
of the Tikhonov functional. Since the minimizer is unique for the linearized problem, such
a behaviour seems not too surprising. At the moment, however, we cannot give a rigorous
explanation of this fact. Let us note, however, that H\"older stability of the inverse
problem can be employed to prove convergence and convergence rates for Tikhonov regularization \cite{ChengHofmannLu14,ChengYamamoto00} and also global convergence of iterative regularization methods
\cite{deHoopQiuScherzer12} without further assumptions.
An extension of these results to inverse problems satisfying \emph{approximate stability conditions},
like the one considered here, might be possible.
\section*{Acknowledgements}
The authors are grateful for financial support by the German Research Foundation (DFG) via grants IRTG~1529, GSC~233, and TRR~154.
\section{Introduction}
Extending the well-known result of extremal graph theory by Tur\'an, E. Gy\H ori and A.V. Kostochka \cite{ek} and independently F.R.K. Chung \cite{chung} proved the following theorem. For an arbitrary graph $G$, let $p(G)$ denote the minimum of $\sum|V(G_i)|$ over all decompositions of $G$ into edge disjoint cliques $G_1,G_2,\dots$. Then $p(G)\le2t_2(n)$ and equality holds if and only if $G\cong T_2(n)$. Here $T_2(n)$ is the $2$-partite Tur\'an graph on $n$ vertices and $t_2(n)=\lfloor n^2/4\rfloor$ is the number of edges of this graph. P. Erd\H os later suggested to study the weight function $p^*(G)=\min \sum(|V(G_i)|-1)$. The first author \cite{ervinsurvey} started to study this function, aiming to prove the conjecture $p^*(G)\le t_2(n)$ first in the special case when $G$ is $K_4$-free. This 24-year-old conjecture can be worded equivalently as follows.
\begin{conj} \label{mainconj}
Every $K_4$-free graph on $n$ vertices and $t_2(n)+m$ edges contains at least $m$ edge disjoint triangles.
\end{conj}
This was only known if the graph is $3$-colorable i.e. $3$-partite.
Towards proving the conjecture, it was shown in \cite{chinese} that every $K_4$-free graph with $t_2(n)+k$ edges contains at least $32k/35\ge 0.9142k$ edge-disjoint triangles, and at least $k$ edge-disjoint triangles if $k\ge 0.0766 n^2$.
Their main tool is a nice and simple to prove lemma connecting the number of edge-disjoint triangles with the number of all triangles in a graph. In this paper using this lemma and proving new bounds about the number of all triangles in $G$, we settle the above conjecture:
\begin{thm}\label{thm:main}
Every $K_4$-free graph with $n^2/4+k$ edges contains at least $\lceil k\rceil$ edge-disjoint triangles.
\end{thm}
This result is best possible, as equality holds in Theorem \ref{thm:main} for every graph obtained by taking a $2$-partite Tur\'an graph and putting a triangle-free graph into one side of this complete bipartite graph. Note that this construction has roughly at most $n^2/4+n^2/16$ edges, while in general in a $K_4$-free graph $k\le n^2/12$, and so it is possible (and we conjecture so) that an even stronger theorem can be proved if we have more edges; for further details see the section Remarks.
\section{Proof of Theorem \ref{thm:main}}
From now on we are given a graph $G$ on $n$ vertices and having $e=n^2/4+k$ edges.
\begin{defi}
Denote by $\te$ the maximum number of edge disjoint triangles in $G$ and by $\ta$ the number of all triangles of $G$.
\end{defi}
The idea is to bound $\te$ by $\ta$. For that we need to know more about the structure of $G$, the next definitions are aiming towards that.
\begin{defi}
A {\bf good partition} $P$ of $V(G)$ is a partition of $V(G)$ to disjoint sets $C_i$ (the cliques of $P$) such that every $C_i$ induces a complete subgraph in $G$.
The {\bf size} $r(P)$ of a good partition $P$ is the number of cliques in it. The cliques of a good partition $P$ are ordered such that their size is non-decreasing: $|C_0|\le|C_1|\le\dots \le|C_{r(P)-1}|$.
A good partition is a {\bf greedy partition} if for every $l\ge 1$ the union of all the parts of size at most $l$ induces a $K_{l+1}$-free subgraph, that is, for every $i\ge 1$, $C_0\cup C_1\cup\dots\cup C_i$ is $K_{|C_i|+1}$-free.
(See Figure \ref{fig:defgp} for examples.)
\end{defi}
{\it Remark.} In our paper $l$ is at most 3 typically, but in some cases it can be arbitrary.
Note that the last requirement in the definition holds also trivially for $i=0$.
The name greedy comes from the fact that a good partition is a greedy partition if and only if we can build it greedily in backwards order, by taking a maximal size complete subgraph $C\subset V(G)$ of $G$ as the last clique in the partition, and then recursively continuing this process on $V(G)\setminus C$ until we get a complete partition. This also implies that every $G$ has at least one greedy partition. If $G$ is $K_4$-free then a greedy partition is a partition of $V(G)$ to $1$ vertex sets, $2$ vertex sets spanning an edge and $3$ vertex sets spanning a triangle, such that the union of the size 1 cliques of $P$ is an independent set and the union of the size 1 and size 2 cliques of $P$ is triangle-free.
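This backwards construction translates directly into code; the following sketch (our own illustration, using networkx) builds a greedy partition of a small graph:
\begin{verbatim}
import networkx as nx

def greedy_partition(G):
    """Build a greedy partition backwards by removing maximum cliques.

    Returns the cliques ordered by non-decreasing size, as in the definition.
    Maximum clique is NP-hard, so this is practical only for small graphs.
    """
    H = G.copy()
    cliques = []
    while H.number_of_nodes() > 0:
        C = max(nx.find_cliques(H), key=len)  # a maximum clique of H
        cliques.append(C)
        H.remove_nodes_from(C)
    return list(reversed(cliques))            # sizes are non-decreasing
\end{verbatim}
For a $K_4$-free graph the returned parts have size at most $3$, and the number of parts equals $r(P)$.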
\begin{figure}[t]
\centering
\includegraphics[scale=1]{greedypartitiondef.eps}
\caption{A greedy partition of an arbitrary graph and of a complete $3$-partite graph.}
\label{fig:defgp}
\end{figure}
\begin{lem}[\cite{chinese}]
Let $G$ be a $K_4$-free graph and $P$ be a greedy partition of $G$. Then
$$\te\ge \frac{\ta}{r(P)}.$$
\end{lem}
For sake of keeping the paper self-contained, we prove this lemma too.
\begin{proof}
Let $r=r(P)$ and the cliques of the greedy partition be $C_0,C_1,\dots C_{r-1}$. With every vertex $v\in C_i$ we associate the value $h(v)=i$ and with every triangle of $G$ we associate the value $h(T)=\sum _{v\in T} h(v) \mod r$. As there are $r$ possible associated values, by the pigeonhole principle there is a family $\cal T$ of at least $\ta/r$ triangles that have the same associated value. It's easy to check that two triangles sharing an edge cannot have the same associated value if $G$ is $K_4$-free, thus $\cal T$ is a family of at least $\ta/r$ edge-disjoint triangles in $G$, as required.
\end{proof}
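The proof is constructive and easy to implement; a sketch (our own illustration, assuming the graph is given as a networkx graph and the greedy partition as a list of cliques, e.g.\ from the routine above):
\begin{verbatim}
from itertools import combinations

def edge_disjoint_triangles(G, cliques):
    """Return >= t_a/r edge-disjoint triangles of a K_4-free graph G.

    cliques: a greedy partition as a list of vertex lists C_0,...,C_{r-1}.
    Enumerates all O(n^3) vertex triples; fine for an illustration.
    """
    r = len(cliques)
    h = {v: i for i, C in enumerate(cliques) for v in C}  # h(v) = index
    buckets = [[] for _ in range(r)]
    for x, y, z in combinations(G.nodes, 3):
        if G.has_edge(x, y) and G.has_edge(y, z) and G.has_edge(x, z):
            buckets[(h[x] + h[y] + h[z]) % r].append((x, y, z))
    # In a K_4-free graph the triangles within one bucket are pairwise
    # edge-disjoint; the largest bucket contains at least t_a/r of them.
    return max(buckets, key=len)
\end{verbatim}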
The lemma implies that $\te\ge \frac{\ta}{r(P)}$, and this inequality holds for every greedy partition $P$ of $G$.
Note that the next theorem holds for every graph, not only for $K_4$-free graphs.
\begin{thm}\label{thm:tbound}
Let $G$ be a graph and $P$ a greedy partition of $G$. Then $t\ge r(P)\cdot(e-n^2/4)$.
\end{thm}
By choosing an arbitrary greedy partition $P$ of $G$, the above lemma and theorem together imply that for a $K_4$-free $G$ we have $t_e\ge \frac{\ta}{r(P)}\ge e-n^2/4=k$, concluding the proof of Theorem \ref{thm:main}.
\bigskip
Before we prove Theorem \ref{thm:tbound}, we make some preparations.
\begin{lem}\label{lem:twocliques}
Given a $K_{b+1}$-free graph $G$ on vertex set $A\cup B$, $|A|=a\le b=|B|$, $A$ and $B$ both inducing complete graphs, there exists a matching of non-edges between $A$ and $B$ covering $A$. In particular, $G$ has at least $a$ non-edges.
\end{lem}
\begin{proof}
Denote by $\bar G$ the complement of $G$ (the edges of $\bar G$ are the non-edges of $G$).
To be able to apply Hall's theorem, we need that for every subset $A'\subset A$ the neighborhood $N(A')$ of $A'$ in $\bar G$ intersects $B$ in at least $|A'|$ vertices. Suppose there is an $A'\subset A$ for which this does not hold, thus for $B'=B\setminus N(A')$ we have $|B'|= |B|-|B\cap N(A')|\ge b-(|A'|-1)$. Then $A'\cup B'$ is a complete subgraph of $G$ on at least $|A'|+b-(|A'|-1)=b+1$ vertices, contradicting that $G$ is $K_{b+1}$-free.
\end{proof}
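The matching guaranteed by the lemma can be computed with any bipartite matching routine applied to the complement edges between $A$ and $B$; a sketch using networkx (the function name is ours):
\begin{verbatim}
import networkx as nx

def nonedge_matching(G, A, B):
    """Match every a in A to a distinct b in B with ab NOT an edge of G.

    Such a matching exists whenever A and B induce cliques and G is
    K_{|B|+1}-free, by the lemma above.
    """
    H = nx.Graph()
    H.add_nodes_from(A)
    H.add_nodes_from(B)
    H.add_edges_from((a, b) for a in A for b in B if not G.has_edge(a, b))
    M = nx.bipartite.maximum_matching(H, top_nodes=set(A))
    return {a: M[a] for a in A}   # covers A under the lemma's hypothesis
\end{verbatim}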
\begin{obs}\label{obs:Pdoesnotmatter}
If $G$ is complete $l$-partite for some $l$ then it has essentially one greedy partition, i.e., all greedy partitions of $G$ have the same clique sizes and have the same number of cliques, which is the size of the biggest part (biggest independent set) of $G$.
\end{obs}
We regard the following function depending on $G$ and $P$ (we write $r=r(P)$): $$f(G,P)=r(e-n^2/4)-t.$$ We are also interested in the function $$g(G,P)=r(e-r(n-r))-t.$$ Notice that $g(G,P)\ge f(G,P)$ and $f$ is a monotone increasing function of $r$ (but $g$ is not!) provided that $e-n^2/4\ge 0$. Also, using Observation \ref{obs:Pdoesnotmatter} we see that if $G$ is complete multipartite then $r$, $f$ and $g$ do not depend on $P$, thus in this case we may write simply $f(G)$ and $g(G)$.
\begin{lem}\label{lem:completepartite}
If $G$ is a complete $l$-partite graph then $g(G)\le 0$ and if $G$ is complete $3$-partite (some parts can have size $0$) then $g(G)= 0$.
\end{lem}
\begin{proof}
Let $G$ be a complete $l$-partite graph with part sizes $c_1\le \dots \le c_l$. By Observation \ref{obs:Pdoesnotmatter}, $r=r(P)=c_l$ for any greedy partition.
We have $n=\sum_i c_i$, $e=\sum_{i< j} c_ic_j$, $t=\sum_{i< j< m} c_ic_jc_m$ and so
$$g(G)= r(e-r(n-r))-t=c_l(\sum_{i< j} c_ic_j-c_l\sum_{i< l} c_i)-t=$$ $$=c_l\sum_{i<j<l} c_ic_j-\sum_{i<j<m} c_ic_jc_m=-\sum_{i<j<m<l} c_ic_jc_m\le 0.$$
Moreover, if $l\le 3$ then there are no indices $i<j<m<l$, thus the last inequality holds with equality.
\end{proof}
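The computation in this proof is easy to check numerically; a small Python sketch (our own illustration) evaluating $g(G)$ directly from the part sizes:
\begin{verbatim}
from itertools import combinations

def g_value(parts):
    """g(G) = r(e - r(n-r)) - t for the complete multipartite graph with
    the given part sizes; r equals the largest part size."""
    n = sum(parts)
    e = sum(a * b for a, b in combinations(parts, 2))
    t = sum(a * b * c for a, b, c in combinations(parts, 3))
    r = max(parts)
    return r * (e - r * (n - r)) - t

assert g_value([2, 3, 5]) == 0      # complete 3-partite: g(G) = 0
assert g_value([1, 2, 3, 4]) == -6  # complete 4-partite: g(G) < 0
\end{verbatim}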
\begin{figure}[t]
\centering
\includegraphics[scale=1]{ggpdef.eps}
\caption{A generalized greedy partition of an arbitrary graph (heavy edges represent complete bipartite graphs).}
\label{fig:defggp}
\end{figure}
In the proof we need a generalization of a greedy partition, which is similar to a greedy partition, with the only difference that the first part $C_0$ in the partition $P$ is a blow-up of a clique instead of a clique; see Figure \ref{fig:defggp} for an example.
\begin{defi}
A {\bf generalized greedy partition} ({\bf ggp} in short) $P$ of some graph $G$ is a partition of $V(G)$ into the sequence of disjoint sets $C_0,C_1,\dots, C_l$ such that $C_0$ induces a complete $l_0$-partite graph, each $C_i$, $i\ge 1$, induces a clique, and $l_0\le |C_1|\le \dots\le |C_l|$. We require that for every $i\ge 1$, $C_0\cup C_1\cup\dots\cup C_i$ is $K_{|C_i|+1}$-free.
We additionally require that if two vertices are not connected in $C_0$ (i.e., are in the same part of $C_0$) then they have the same neighborhood in $G$, i.e., vertices in the same part of $C_0$ are already symmetric.
The {\bf size} $r(P)$ of a ggp $P$ is defined as the size of the biggest independent set of $C_0$ plus $l$, the number of cliques of $P$ besides $C_0$.
\end{defi}
Note that the last requirement in the definition holds also for $i=0$ in the natural sense that $C_0$ is $K_{l_0+1}$-free.
Observe that the requirements guarantee that in a ggp $P$ if we contract the parts of $C_0$ (which is well-defined because of the required symmetries in $C_0$) then $P$ becomes a normal (non-generalized) greedy partition (of a smaller graph).
Using Observation \ref{obs:Pdoesnotmatter} on $C_0$, we get that the size of a ggp $P$ is equal to the size of any underlying (normal) greedy partition $P'$ of $G$ which we get by taking any greedy partition of $C_0$ and then the cliques of $P\setminus\{C_0\}$. Observe that for the sizes of $P$ and $P'$ we have $r(P)=r(P')$, in fact this is the reason why the size of a ggp is defined in the above way.
Finally, as we defined the size $r(P)$ of a ggp $P$, the definitions of the functions $f(G,P)$ and $g(G,P)$ extend to a ggp $P$ as well. With this notation Lemma \ref{lem:completepartite} is equivalent to the following:
\begin{cor}\label{cor:onepartggp}
If a ggp $P$ of a graph $G$ has only one part $C_0$, which is a complete $l_0$-partite graph, then $g(G,P)\le 0$, and if $l_0\le 3$ then $g(G,P)=0$.
\end{cor}
\begin{proof}[Proof of Theorem \ref{thm:tbound}]
The theorem is equivalent to the fact that for every graph $G_0$ and greedy partition $P_0$ we have $f(G_0,P_0)\le 0$.
Let us first give a brief summary of the proof. We will repeatedly do some symmetrization steps, getting new graphs and partitions, ensuring that during the process $f$ cannot decrease. At the end we will reach a complete $l$-partite graph $G_*$ for some $l$. However by Lemma \ref{lem:completepartite} for such graphs $g(G_*,P_*)\le 0$ independent of $P_*$, which gives $f(G_0,P_0)\le f(G_*)\le g(G_*)\le 0$. This proof method is similar to the proof from the book of Bollob\'as \cite{bollobas} (section VI. Theorem 1.7.) for a (not optimal) lower bound on $t$ by a function of $e,n$. An additional difficulty comes from the fact that our function also depends on $r$, thus during the process we need to maintain a greedy partition whose size is not decreasing either.
\begin{figure}[t]
\centering
\includegraphics[scale=0.8]{proofalg.eps}
\caption{One step of the symmetrization algorithm SymmAlg (dashed lines denote non-edges).}
\label{fig:alg}
\end{figure}
Now we give the details of the symmetrization. The algorithm SymmAlg applies the symmetrization subroutines SymmAlgSubMatch and SymmAlgSubMerge alternately; for an example see Figure \ref{fig:alg}.
{\bf SymmAlg:}
We start the process with the given $G_0$ and $P_0$. $P_0$ is a normal greedy partition which can be regarded also as a ggp in which in the first blown-up clique $C_0$ all parts have size $1$.
In a general step of SymmAlg before running SymmAlgSubMatch we have a $G$ and a ggp $P$ of $G$ such that $f(G_0,P_0)\le f(G,P)$. This trivially holds (with equality) before the first run of SymmAlgSubMatch.
{\bf SymmAlgSubMatch:}
If the actual ggp $P$ contains only one part $C_0$ (which is a blow-up of a clique) then we {\bf STOP} SymmAlg.
Otherwise we do the following.
Let the blown-up clique $C_0$ be complete $l$-partite. Temporarily contract the parts of $C_0$ to get a smaller graph in which $P$ becomes a normal greedy partition $P_{temp}$, let $A$ ($|A|=a$) be the first clique (the contraction of $C_0$) and $B=C_1$ ($a\le b=|B|$) be the second clique of $P_{temp}$. As $P$ is a greedy partition, $A\cup B$ must be $K_{b+1}$-free, so we can apply Lemma \ref{lem:twocliques} on $A$ and $B$ to conclude that there is a matching of non-edges between $A$ and $B$ that covers $A$. In $G$ this gives a matching between the parts of the blown-up clique $C_0$ and the vertices of the clique $C_1$ such that if a part $A_i\subset C_0$ is matched with $b_i\in C_1$ then there are no edges in $G$ between $A_i$ and $b_i$.
For every such pair $(A_i,b_i)$ we do the following symmetrization. Let $v\in A_i$ be an arbitrary representative of $A_i$ and $w=b_i$. Fix $r_0=r(P)$ and let $f_v=r_0d_v-t_v$ where $d_v$ is the degree of $v$ in $G$ and $t_v$ is the number of triangles in $G$ incident to $v$, or equivalently the number of edges spanned by $N(v)$. Similarly $f_w=r_0d_w-t_w$. Clearly, $f(G,P)=r_0(e-n^2/4)-t=|A_i|f_v+f_w+f_0$ where $f_0$ depends only on the graph induced by the vertices of $V(G)\setminus (A_i\cup\{w\})$. Here we used that there are no edges between $A_i$ and $b_i$. If $f_v\ge f_w$ then we replace $w$ by a copy of $v$ to get the new graph $G_1$, otherwise we replace $A_i$ by $|A_i|$ copies of $w$ to get the new graph $G_1$. In both cases
$$r_0(e_1-n^2/4)-t_1=(|A_i|+1)\max(f_{v},f_{w})+f_0\ge$$$$\ge |A_i|f_v+f_w+f_0=r_0(e-n^2/4)-t.$$
Note that after this symmetrization $V(G)\setminus (A_i\cup\{w\})$ spans the same graph, thus we can do this symmetrization for all pairs $(A_i,b_i)$ one-by-one (during these steps for some vertex $v$ we define $f_v$ using the $d_v$ and $t_v$ of the current graph, while $r_0$ remains fixed) to get the graphs $G_2,G_3,\dots$. At the end we get a graph $G'$ for which
$$r_0(e'-n^2/4)-t'\ge r_0(e-n^2/4)-t=f(G,P).$$ Now we proceed with SymmAlgSubMerge, which modifies $G'$ further so that the final graph has a ggp of size at least $r_0$.
{\bf SymmAlgSubMerge:}
In this graph $G'$ for all $i$ all vertices in $A_i\cup \{b_i\}$ have the same neighborhood (and form independent sets). Together with the non-matched vertices of $C_1$ regarded as size-$1$ parts we get that in $G'$ the graph induced by $C_0\cup C_1$ is a blow-up of a (not necessarily complete) graph on $b$ vertices. To make this complete we make another series of symmetrization steps. Take an arbitrary pair of parts $V_1$ and $V_2$ which are not connected (together they span an independent set) and symmetrize them as well: take the representatives $v_1\in V_1$ and $v_2\in V_2$ and then $r_0(e'-n^2/4)-t'=|V_1|f_{v_1}+|V_2|f_{v_2}+f_1$ as before, $f_1$ depending only on the subgraph spanned by $G'\setminus (V_1\cup V_2)$. Again replace the vertices of $V_1$ by copies of $v_2$ if $f_{v_2}\ge f_{v_1}$ and replace the vertices of $V_2$ by copies of $v_1$ otherwise. In the new graph $G'_1$, we have
$$r_0(e_1'-n^2/4)-t_1'=(|V_1|+|V_2|)\max(f_{v_1},f_{v_2})+f_1\ge$$$$\ge |V_1|f_{v_1}+|V_2|f_{v_2}+f_1=r_0(e'-n^2/4)-t'.$$
Now $V_1\cup V_2$ becomes one part and in $G'_1$ $C_0\cup C_1$ spans a blow-up $C_0'$ of a (not necessarily complete) graph with $b-1$ parts. Repeating this process we end up with a graph $G''$ for which
$$r_0(e''-n^2/4)-t''\ge r_0(e'-n^2/4)-t'\ge f(G,P).$$
In $G''$ $C_0\cup C_1$ spans a blow-up $C_0''$ of a complete graph with at most $|C_1|$ parts. Moreover $V\setminus(C_0\cup C_1)$ spans the same graph in $G''$ as in $G$, thus $C_0''$ together with the cliques of $P$ except $C_0$ and $C_1$ have all the requirements to form a ggp $P''$. If the biggest part of $C_0$ was of size $c_l$ then in $C_0'$ this part became one bigger and then it may have been symmetrized during the steps to get $G''$, but in any case the biggest part of $C_0''$ has size at least $c_l+1$. Thus the size of the new ggp $P''$ is $r(P'')\ge c_l+1+(r(P)-c_l-1)\ge r(P)=r_0$.
If $e''-n^2/4< 0$, then we {\bf STOP} SymmAlg and conclude that $f(G_0,P_0)\le f(G,P)\le r_0(e''-n^2/4)-t''\le 0$, finishing the proof.
Otherwise $$f(G'',P'')=r(P'')(e''-n^2/4)-t''\ge r_0(e''-n^2/4)-t''\ge f(G,P)\ge f(G_0,P_0),$$
and so $G'',P''$ is a proper input to SymmAlgSubMatch. We set $G:=G''$ and $P:=P''$ and {\bf GOTO} SymmAlgSubMatch. Note that the number of parts in $P''$ is one less than it was in $P$.
This ends the description of the running of SymmAlg.
As after each run of SymmAlgSubMerge the number of cliques in the ggp strictly decreases, SymmAlg must stop after finitely many steps.
When SymmAlg STOPs, either we can conclude that $f(G_0,P_0)\le 0$, or SymmAlg STOPped because in the current graph $G_*$ the current ggp $P_*$ consisted of a single blow-up of a clique. That is, the final graph $G_*$ is a complete $l_*$-partite graph for some $l_*$ (which has essentially one possible greedy partition). We remark that if the original $G$ was $K_m$-free for some $m$ then $G_*$ is also $K_m$-free, i.e., $l_*\le m-1$. As $f$ never decreased during the process, using Corollary \ref{cor:onepartggp} we get that $f(G_0,P_0)\le f(G_*,P_*)\le g(G_*,P_*)\le 0$, finishing the proof of the theorem.
\end{proof}
\section{Remarks}
In the proof of Theorem \ref{thm:tbound}, we can change $f$ to any function that depends on $r,n,e,t,k_4,k_5,\dots$ (where $k_i(G)$ is the number of complete $i$-partite subgraphs of $G$), is monotone in $r$, and is linear in the rest of the variables (when $r$ is regarded as a constant), to conclude that the maximum of such an $f$ is reached for some complete multipartite graph. Moreover, as the symmetrization steps do not increase the clique number of $G$, if the clique number of $G$ is $m$ then this implies that $f(G,P)$ is upper bounded by the maximum of $f(G_*)$ taken over the family of graphs $G_*$ that are complete $m$-partite (some parts can be empty).
Strengthening Theorem \ref{thm:tbound}, it is possible that $f$ can be changed to $g$, i.e., that the following is also true:
\begin{conj}\label{conj:nice}
If $G$ is a $K_4$-free graph and $r=r(P)$ is the size of an arbitrary greedy partition of $G$, then $t\ge r(e-r(n-r))$ and so $t_e\ge e-r(n-r)$.
\end{conj}
This inequality is nicer than Theorem \ref{thm:tbound} as it holds with equality for all complete $3$-partite graphs. However, we cannot prove it using the same methods, as it is not monotone in $r$. Note that the optimal general bound for $t$ (depending on $e$ and $n$; see \cite{fishersolow} for $K_4$-free graphs and \cite{fisherpaper, razborov} for arbitrary graphs) does not hold with equality for certain complete $3$-partite graphs, thus in a sense this statement would be an improvement on these results for the case of $K_4$-free graphs (by adding a dependence on $r$). More specifically, it is easy to check that there are two different complete $3$-partite graphs with a given $e,n$ (assuming that the required size of the parts is integer), for one of them Fisher's bound holds with equality, but for the other one it does not (while of course Conjecture \ref{conj:nice} holds with equality in both cases).
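As a worked check of the equality case, take the complete $3$-partite graph $K_{a,b,c}$ with $a\le b\le c$, so that $n=a+b+c$, $e=ab+bc+ca$ and $t=abc$. While all three parts are nonempty every maximal clique is a triangle meeting each part, so any greedy partition consists of $a$ triangles, then $b-a$ edges, then $c-b$ single vertices, giving $r=c$. Hence $r(e-r(n-r))=c\,(ab+bc+ca-c(a+b))=abc=t$, so Conjecture \ref{conj:nice} indeed holds with equality.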
As we mentioned in the Introduction, in the examples showing that our theorem is sharp, $k$ is roughly at most $n^2/16$, while in general in a $K_4$-free graph $k\le n^2/12$; thus for bigger $k$ it is possible that one can prove a stronger result. Nevertheless, the conjectured bound $t_e\ge e-r(n-r)$ is exact for every $e$ and $r$, as shown by the graphs we get by taking a complete bipartite graph on $r$ and $n-r$ vertices and putting an arbitrary triangle-free graph on the $(n-r)$-sized side. For a greedy partition of size $r$ we have $e\le r(n-r)+(n-r)^2/4$ (this follows directly from Claim \ref{claim:r2}, see below), thus these examples cover all combinations of $e$ and $r$, except when $e<r(n-r)$, in which case trivially we have at least $0$ triangles, while the lower bound $e-r(n-r)$ on the number of triangles is smaller than $0$.
\begin{claim}\label{claim:r2}
If $G$ is a $K_4$-free graph, $P$ is a greedy partition of $G$, $r=r(P)$ is the size of $P$ and $r_2$ is the number of cliques in $P$ of size at least $2$, then $e\le r(n-r)+r_2(n-r-r_2)$.
\end{claim}
\begin{proof}
Let $s_1,s_2,s_3$ be the number of size-$1,2,3$ (respectively) cliques of $P$. Then $r=s_1+s_2+s_3,n-r=s_2+2s_3,r_2=s_2+s_3,n-r-r_2=s_3$.
Applying Lemma \ref{lem:twocliques} to every pair of cliques in $P$ we get that the number of edges in $G$ is
\begin{align*}
e&\le {s_1\choose 2}(1\cdot 1-1)+s_1s_2(1\cdot 2-1)+s_1s_3(1\cdot 3-1)+{s_2\choose 2}(2\cdot 2-2)\\
&\quad+s_2s_3(2\cdot 3-2)+{s_3\choose 2}(3\cdot 3-3)+s_2+3s_3\\
&=s_1s_2+2s_1s_3+s_2^2+4s_2s_3+3s_3^2\\
&=(s_1+s_2+s_3)(s_2+2s_3)+(s_2+s_3)s_3=r(n-r)+r_2(n-r-r_2).
\end{align*}
\end{proof}
Finally, as additional motivation for Conjecture \ref{conj:nice}, we show that it holds in the very special case when $G$ is triangle-free, that is, when $t=t_e=0$. Note that for a triangle-free graph the size-2 cliques of a greedy partition define a non-augmentable matching of $G$.
\begin{claim}
If $G$ is a triangle-free graph and $r=r(P)$ is the size of an arbitrary greedy partition of $G$, i.e., $G$ has a non-augmentable matching on $n-r$ edges, then $0\ge e-r(n-r)$.
\end{claim}
\begin{proof}
We need to show that $e\le r(n-r)$. By Claim \ref{claim:r2}, $e\le r(n-r)+r_2(n-r-r_2)$ where $r_2$ is the number of cliques in $P$ of size at least $2$. If $G$ is triangle-free, then $r_2=n-r$ and so $e\le r(n-r)$ follows.
Let us give another simple proof, by induction. As $G$ is triangle-free, $P$ is a partition of $V(G)$ into sets inducing points and edges, thus $r\le n$.
We proceed by induction on $n-r$. If $n-r=0$ then $P$ is a partition into points only. As $P$ is greedy, $G$ contains no edges, $e=0$, and we are done. In the inductive step, for some $n-r>0$, take a part of $P$ inducing an edge and delete its two points. We obtain a triangle-free graph $G'$ on $n-2$ points and a greedy partition $P'$ of $G'$ that has $r-1$ cliques, thus we can apply induction to $G'$ (as $n'-r'=n-2-(r-1)=n-r-1<n-r$) to conclude that $G'$ has at most $(r-1)(n-1-r)$ edges. We deleted at most $n-1$ edges: indeed, as the graph is triangle-free, the two deleted vertices had no common neighbors, so altogether they had edges to at most $n-2$ other points, plus the edge between them. Thus $G$ had at most $n-1+(r-1)(n-1-r)=r(n-r)$ edges, finishing the inductive step.
\end{proof}
\section{Introduction}
\setcounter{equation}{0}
Quantum field theory lies at the root of modern physics.
After the success of the standard model in describing particle physics,
one of the most pressing open questions is how to derive an extended version
of field theory which encompasses the quantization of gravity. There are
several attempts for this, among which string theory, loop gravity and noncommutative
geometry are the best known. In each of these attempts one of the key
problems is to relax the constraints that tie quantum field theory
to a particular space-time geometry.
What is certainly more fundamental than geometry is topology and in particular discrete
structures on finite sets such as the species of combinatorists \cite{BLL}. The most prominent
such species in field theory is the species of Feynman graphs.
They were introduced by Feynman to label quantum field perturbation theory and to
automatize the computation of {\it connected} functions. Feynman graphs
also became an essential tool in \emph{renormalization}, the structure
at the heart of quantum field theory.
There are two general canonical operations on graphs,
namely the deletion and contraction of edges. Accordingly, perhaps the most important
quantity to characterize a graph is its Tutte polynomial \cite{Tutte,Crapo}.
This polynomial obeys a simple recursion rule under
these two basic operations. It exists in many different variations, for instance
multivariate versions, with possible decorations at vertices.
These polynomials have many applications, in particular to statistical physics.
For recent reviews see \cite{Sokal1,EM1,EM2}.
In recent years the Tutte polynomial has been generalized to the
category of ribbon graphs, where it goes under the name of the Bollob\'as-Riordan polynomial
\cite{BR1,BR2,EM2}. Around the same time physicists have increasingly turned their attention
to quantum field theory formulated on noncommutative spaces, in particular
flat vector spaces equipped with the Moyal-Weyl product \cite{DouNe}.
This type of quantum field theory is hereafter called NCQFT.
It happens that perturbation theory for such NCQFT's is no longer labeled
by ordinary graphs but by ribbon graphs, suggesting a possible connection to the
work of Bollob\'as-Riordan.
Quantum field perturbation theory can be expressed in several representations.
The momentum representation is the most common in the text books. The direct space
representation is closer to physical intuition.
However it is the parametric representation which is the most elegant and compact one.
In this representation, after the integration of internal position and/or momentum
variables has been performed explicitly, the result is expressed in
terms of the Symanzik polynomials. There is an extensive literature on these polynomials (see e.g. \cite{Naka,IZ} for
classical reviews). These polynomials only depend on the Schwinger parameters.
Space time no longer enters explicitly into that representation except through its
dimension which appears simply as a parameter.
This observation is crucial for several key applications in QFT which rely on dimensional
interpolation. Dimensional regularization and renormalization
was a crucial tool in the proof by 't~Hooft and Veltman that non-Abelian gauge theories are renormalizable \cite{HV}. The Wilson-Fisher $\epsilon$ expansion \cite{Wil} is our
best theoretical tool to understand three dimensional phase transitions.
Dimensional regularization is also used extensively in the works of Kreimer and Connes
\cite{Kreim,CKreim1} which recast the recursive BPHZ forest formula of perturbative renormalization into a Hopf algebra structure and relate it to a new class of Riemann-Hilbert
problems \cite{CKreim2}.
Following these works, renormalizability has further attracted
considerable interest in the recent years as a pure mathematical
structure. The renormalization group ambiguity reminds
mathematicians of the Galois group ambiguity for roots of algebraic
equations \cite{Cartier}. Hence the motivations to study quantum field theory
and renormalization come no longer solely from physics but
also at least partly from number theory.
The fact that the parametric representation is relatively independent of the details
of space time makes it also particularly appealing as a prototype for
the tools we need in order to quantize gravity.
The point of view of loop gravity is essentially based
on the diffeomorphism invariance of general relativity.
In the spin foam or group field theory formalism amplitudes are expressed as discrete sums
associated to combinatoric structures which generalize
Feynman graphs. They are in fact generalizations of ribbon graphs.
To extend the parametric representation and eventually the theory
of renormalization to this context is a major challenge, in which some
preliminary steps have been performed \cite{Markopoulou}.
In this paper we uncover the relationship
between universal polynomials of the Tutte and Bollob\'as-Riordan type
and the parametric representation in quantum field theory. The Symanzik polynomials
that appear in ordinary commutative QFT are particular multivariate versions of Tutte polynomials.
The relation between Bollob\'as-Riordan polynomials and the noncommutative analogs
of the Symanzik polynomials
uncovered in \cite{MinSei,GurauRiv,RivTan} is new.
This establishes a relation between NCQFT, combinatorics and algebraic topology.
Recently the relation between renormalization and topological polynomials
was explored in \cite{KrajMart}, and in \cite{AluMarc}. We intend also to explore in the future
the relation between Feynman amplitudes and knot polynomials.
The plan of this paper is as follows. In the next section we
give a brief introduction to graph theory and to Tutte-like
polynomials. In the third section we derive the
parametric representation of Feynman amplitudes
of QFT and give a new method to compute the
corresponding Symanzik polynomials.
The deletion/contraction property (\ref{delcontrsym1}) of these polynomials
is certainly not entirely new \cite{Kreimer,brown}.
But our method which starts from the phase-space representation
of Feynman amplitudes is inspired
by earlier work on NCQFT \cite{GurauRiv,RivTan} and introduces two main technical improvements.
One is the use of Grassmann variables
to exploit the quasi-Pfaffian structure of Feynman amplitudes. This quasi-Pfaffian structure
was discovered in \cite{GurauRiv} in the context of NCQFT but
to our knowledge was never applied to the simpler
case of ordinary QFT. The second improvement is that
we do not factor out as usual the delta functions expressing
global momentum conservation, because this requires a noncanonical choice
of a root for every connected graph. Instead we introduce an infrared regularization
in the form of a small harmonic potential at each vertex which leads to more
elegant and canonical formulas. The corresponding generalized Symanzik polynomials obey
a transparent deletion/contraction relation which allows
to identify them with particular multivariate Tutte polynomials. These
polynomials are close but not identical to the polynomials of \cite{Sokal1};
we show how they both derive from a more general ``categorified'' polynomial.
The usual Symanzik polynomials are simply recovered as the leading terms
when the small harmonic potentials tend to zero.
For completeness we also include a more standard way to compute the Symanzik polynomials
through $x$ space representation and the tree matrix theorem.
In the fourth section we introduce ribbon graphs and Bollob\'as-Riordan polynomials.
In the fifth and last section we define the first and second Symanzik polynomials of NCQFT
and relate them to the Bollob\'as-Riordan polynomials, using again the Pfaffian variables.
Formulas for such polynomials were first sketched in \cite{MinSei}, but without proofs,
nor relation to the Bollob\'as-Riordan polynomials.
In a companion paper we shall discuss generalizations of the Tutte and Bollob\'as-Riordan
polynomials that occur for non-translation invariant theories with propagators
based on the Mehler rather than the heat kernel. These theories appeared as
the first examples of renormalizable NCQFT's \cite{GrWu1,GrWu2,RVW,GMRV,Riv1}
and they are the most promising candidates for a fully non-perturbative
construction of a field theory in four dimensions \cite{GrWubeta,DR1,DGMR,Riv2}. In this case
the harmonic potentials on the vertices are no longer needed as the Mehler
kernel already contains an harmonic potential for the propagators of the graphs.
\section{Tutte Polynomial}
\setcounter{equation}{0}
\subsection{Graph Theory, Notations}
\label{graphsub}
A graph $G$ is defined as a set of vertices $V$ and of edges $E$ together with
an incidence relation between them. The number of vertices and edges in a graph
will also be denoted by $V$ and $E$ for simplicity, since our context always prevents any confusion.
Graph theorists and field theorists usually have different words for the same objects so
a little dictionary may be in order. We shall mostly use in this review
the graph theorists' language. In subsection \ref{decorsub}
we also introduce an enlarged notion of graphs, with decorations called \emph{flags},
which are attached to the vertices of the graph to treat the external variables of
physicists, plus other decorations, also attached to the vertices and called (harmonic) weights, which regularize infrared divergences. Generalizations to ribbon graphs will be described in section \ref{briordan}.
\begin{figure}
\begin{center}
\includegraphics[scale=1,angle=-90]{f5.pdf}
\caption{Basic building blocks of a graph}
\end{center}
\label{fig:bas}
\end{figure}
Edges in physics are called lines (or propagators).
Edges which start and end at the same vertex are definitely allowed,
and called (self)-loops in graph theory and tadpoles in physics.
A proper graph, i.e. a graph $G$ without such self-loops, together with
an arrow orienting each edge,
can be fully characterized through its incidence
matrix ${\epsilon_{ve}}$. It is the rectangular $E$ by $V$ matrix,
with rows indexed by edges and
columns indexed by vertices, such that
\begin{itemize}
\item
${\epsilon_{ve}}$ is +1
if $e$ starts at $v$,
\item
${\epsilon_{ve}}$ is -1 if $e$ ends at $v$,
\item
${\epsilon_{ve}}$ is 0 otherwise.
\end{itemize}
It is also useful to introduce the absolute value $\eta_{ve} = \vert \epsilon_{ve} \vert$.
These quantities can then be generalized to graphs with self-loops by defining
$\epsilon_{ev} =0$ for any self-loop $e$ and vertex $v$ {\it but} $\eta_{ev} =2$
for a self-loop attached at vertex $v$ and $\eta_{ev} =0$ otherwise.
The number of half-edges at a vertex $v$ is called the degree of $v$
in graph theory, noted $d(v)$. Physicists usually call it the coordination number at $v$.
A self-loop counts for 2 in the degree of its vertex, so that
$d(v) = \sum_{e} \eta_{ev}$.
An edge whose removal increases (by one) the number of connected parts
of the graph is called a bridge in graph theory and a one-particle-reducible line in physics.
A forest is an acyclic graph and a tree is a connected forest.
A cycle in graph theory is a connected subset of $n$ edges and $n$ vertices
which cannot be disconnected by removing any edge. It is called a loop in
field theory.
Physicists understood earlier than graph theorists that half-edges (also called flags
in graph theory \cite{Kauf}) are more fundamental than edges. This is because they correspond to integrated fields through the rule of Gau\ss ian integration, which physicists call Wick's theorem.
Feynman graphs form a category of graphs with external flags decorating the vertices.
They occur with particular weights, in physics called amplitudes. These weights depend on the
detail of the theory, for instance the space-time dimension. A quantum field theory can be viewed as the
generating functional for the species of such weighted Feynman graphs. In this paper we shall reserve the convenient word flag exclusively for the "external fields" decorations
and always use the word half-edge for the "internal half-edges".
An edge which is neither a bridge nor a self-loop is called regular. We shall call \emph{semi-regular}
an edge which is not a self-loop, hence which joins two distinct vertices.
There are two natural operations associated to an edge $e$
of a graph $G$, pictured in Figure \ref{fig:cond}:
\begin{itemize}
\item the deletion, which leads to a graph noted $G-e$,
\item the contraction, which leads to a graph noted $G/e$. If $e$ is not a self-loop, it identifies the two vertices $v_1$ and $v_2$ at the ends of $e$
into a new vertex $v_{12}$, attributing all the
flags (half-edges) attached to $v_1$ and $v_2$ to $v_{12}$, and then it removes $e$.
If $e$ is a self-loop, $G/e$ is by definition the same as $G-e$.
\end{itemize}
\begin{figure}
\begin{center}
\includegraphics[scale=0.9,angle=-90]{f3.pdf}
\caption{The contraction-deletion of a graph}\label{fig:cond}
\end{center}
\end{figure}
A subgraph $G'$ of $G$ is a
subset of edges of $G$, together with the attached vertices.
A \emph{spanning forest} of $G$ is an acyclic subgraph of $G$
that contains all the vertices of $G$. If $G$
is connected a spanning forest is in fact a
tree of $G$, and any such \emph{spanning tree} has $\vert V\vert -1$ edges.
As explained in the introduction
a topological graph polynomial
is an algebraic or combinatoric object associated with a graph that is usually invariant at least under graph homeomorphism. It encodes information about the graph and so enables combinatoric and algebraic methods to deal with graphs.
The Tutte polynomial \cite{Tutte} is one of the most general polynomials
characterizing a graph. It is defined by a simple recursion rule under the
deletion and contraction of edges. It can be generalized to the
larger theory of matroids \cite{Welsh}.
The original Tutte polynomial which is a function of two variables
can be generalized in various ways to multi-variable polynomials
which have many applications, in particular in statistical mechanics
where it evaluates the Potts model on graphs
\cite{Sokal1,EM1,EM2}. These applications shall not be reviewed here.
We present first the two main equivalent definitions of the
Tutte polynomial. One direct way is to specify its linear recursion
form under contraction of regular edges (which are neither loops
nor bridges), together with an evaluation on terminal forms solely made
of bridges and self-loops. Another definition is as a rank-nullity generating
function. By induction these definitions can be proved equivalent.
\subsection{Tutte Polynomial}
The definition through a recursion relation is a reduction rule on edges together
with an evaluation for the terminal forms.
The Tutte polynomial may be defined by such a linear recursion relation under deleting and
contracting regular edges. The terminal forms, i.e.\ the graphs without
regular edges, are forests (i.e.\ graphs made of bridges) decorated with an arbitrary number of additional
self-loops at any vertex. The Tutte polynomial evaluated on these terminal forms simply counts separately the number of bridges and self-loops:
\begin{definition}[Deletion-Contraction]\label{defdelcontr1}
If $G=(V,E)$ is a graph, and $e$ is a regular edge, then
\begin{equation}
T_G (x,y)=T_{G/e} (x,y)+T_{G-e} (x,y).
\end{equation}
For a terminal form $G$ with $m$ bridges and $n$ self-loops the polynomial is defined by
\begin{equation}
T_G(x,y)=x^m y^n .
\end{equation}
\end{definition}
It is not obvious that Definition \ref{defdelcontr1} is a definition at all since the result
might depend on the ordering in which different edges are suppressed through deletion/contraction,
leading to a terminal form. The best proof that $T_G$ is unique and well-defined is in fact through
a second definition of the Tutte polynomial as a global sum over subgraphs. It gives a concrete
solution to the linear deletion/contraction recursion which is clearly independent
of the order in which edges are suppressed:
\begin{definition}[Sum over subsets]
If $G=(V,E)$ is a graph, then the Tutte polynomial of
$G$, $T_G(x,y)$ has the following expansion:
\begin{equation}
T_G (x,y)=\sum_{A\subset E} (x-1)^{r(E)-r(A)} (y-1)^{n(A)},
\end{equation}
where $r(A) = \vert V \vert - k(A) $ is the rank of the subgraph $A$ and
$n(A) =\vert A \vert + k(A) - \vert V \vert $ is its nullity or cyclomatic number.
In physicists' language $n(A)$ is the number of independent loops in $A$.
\end{definition}
Remark that $r(A)$ is the number of edges in any spanning forest of $A$,
and $n(A)$ is the number of remaining edges in $A$ when a spanning forest
is suppressed, so it is the number of \emph{independent cycles} in $A$.
\begin{theorem}
These two definitions are equivalent.
\end{theorem}
One can show that the polynomial defined by the sum over subsets
obeys the deletion-contraction recursion. One can also evaluate it directly
and show that it coincides with the first definition on the terminal forms
with only loops and bridges.
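As an illustration, let $G$ be the triangle $K_3$ and let $e$ be one of its (regular) edges. Then $G-e$ is a path made of two bridges, contributing $x^2$, while $G/e$ consists of two vertices joined by a pair of parallel edges; one more deletion/contraction step on one of these parallel edges (the contraction producing a self-loop, the deletion producing a bridge) evaluates it to $x+y$, so that $T_{K_3}(x,y)=x^2+x+y$. The sum over subsets gives the same answer: with $r(E)=2$, the empty subset contributes $(x-1)^2$, the three single edges contribute $3(x-1)$, the three pairs contribute $3$, and the full edge set contributes $y-1$, which again sums to $x^2+x+y$.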
There is a third definition of the Tutte polynomial through spanning trees
(see e.g.\ \cite{EM1}). This third definition involves ordering the edges of the graph.
We think it may be also relevant in the context of field theory, in particular in relation
with the ordered trees or forests formulas of constructive theory \cite{BK,AR,GMR},
but this point of view will not be developed here.
\subsection{Multivariate Tutte polynomials}
Multivariate Tutte polynomials can also be defined through linear recursion or global formulas.
The ordinary multivariate Tutte polynomial
$Z_G(q,\{\beta\})$ has a different variable $\beta_e$ for each edge $e$,
plus another variable $q$ to count vertices. We also write it
most of the time as $Z_G(q,\beta)$ for simplicity.
It is defined through a completely general linear deletion-contraction relation:
\begin{definition}[Deletion-Contraction]
For any edge $e$ (not necessarily regular)
\begin{equation}\label{multivartut}
Z_G( q,\{ \beta \} )= \beta_e Z_{G/e} (q, \{\beta- \{\beta_e\} \} ) + Z_{G-e} (q, \{ \beta- \{ \beta_e \} \} ) .
\end{equation}
This relation together with the evaluation on terminal forms completely defines $Z_G(q,\beta)$,
since the result is again independent of the order of suppression of edges.
The terminal forms are graphs without edges,
and with $v$ vertices; for such graphs $Z_G(q,\beta)= q^v$.
\end{definition}
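For instance, for the graph made of a single edge $e$ joining two vertices, (\ref{multivartut}) and the terminal form evaluation give $Z_G(q,\beta)=\beta_e\, q+q^2$: contracting $e$ leaves a single vertex and deleting it leaves two isolated vertices. This matches the sum over the two subsets $A=\{e\}$ and $A=\emptyset$ in the equivalent definition below.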
We can also define $Z_G(q,\beta)$ as a sum over subsets of edges:
\begin{definition}[Sum over subsets]
\begin{equation}
Z_G(q,\beta)=\sum_{A\subset E}q^{k(A)}\prod_{e\in A}\beta_e ,
\end{equation}
where $k(A)$ is the number of connected components in the subgraph $(V,A)$.
\end{definition}
One can prove as for the two variables Tutte polynomial that
this definition is equivalent to the first.
In \cite{Sokal1} this multivariate polynomial is discussed in detail.
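Such subset sums are easy to evaluate mechanically on small graphs. The following short Python sketch (our own illustration, not taken from the cited literature; the function name and the brute-force enumeration are ours, and are only practical for small edge sets) computes $Z_G(q,\beta)$ directly from the sum over subsets:
\begin{verbatim}
from itertools import combinations
import sympy as sp

def multivariate_tutte(vertices, edges):
    """Z_G(q, beta): sum over edge subsets A of q^k(A) * prod_{e in A} beta_e."""
    q = sp.Symbol('q')
    betas = [sp.Symbol('beta_%d' % i) for i in range(len(edges))]
    Z = 0
    for size in range(len(edges) + 1):
        for A in combinations(range(len(edges)), size):
            # count the connected components k(A) of the spanning
            # subgraph (V, A) with a small union-find
            parent = {v: v for v in vertices}
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            for i in A:
                u, v = edges[i]
                parent[find(u)] = find(v)
            k = len({find(v) for v in vertices})
            term = q ** k
            for i in A:
                term *= betas[i]
            Z += term
    return sp.expand(Z)

# triangle on vertices 0, 1, 2
print(multivariate_tutte([0, 1, 2], [(0, 1), (1, 2), (0, 2)]))
\end{verbatim}
Run on the triangle it returns $q^3+(\beta_0+\beta_1+\beta_2)q^2+(\beta_0\beta_1+\beta_0\beta_2+\beta_1\beta_2)q+\beta_0\beta_1\beta_2\,q$, in agreement with the deletion-contraction recursion.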
\medskip
To understand the relation between this multivariate polynomial and the ordinary two-variable Tutte polynomial
we multiply $Z_G$ by $q^{- V } $, we set $ \beta_e = y-1$ and $q = (x-1)(y-1)$ and get
\begin{eqnarray}
\big[ q^{- V } Z_G(q,\beta) \big] \vert_{ \beta_e = y-1, q = (x-1)(y-1)} &=&
(x-1)^{k(E) -|V|} T_G(x,y).
\end{eqnarray}
We consider also
\begin{equation}
q^{- k(G) }Z_G (q,\beta).
\end{equation}
Taking the limit $q \to 0$,
that is retaining only the constant term in $q$,
we obtain a sum over maximally spanning subgraphs $A$,
that is subgraphs with $k(A)=k(G)$:
\begin{equation} S_{G} (\beta)=\sum_{A
\mathrm{ \; \; maximally \; \; spanning \; \; } E } \quad
\prod_{e\in A} \beta_e .
\end{equation}
If we now retain only the lowest degree of homogeneity
in $\beta$ we obtain a sum over maximally spanning graphs
with the lowest number of edges, i.e.\ maximally spanning acyclic graphs or
\emph{spanning forests} of $G$.
\begin{equation} F_{G} (\beta)=\sum_{\cF
\mathrm{ \; \; maximally \; \; spanning \; \; forest \; \; of \; \; } G } \quad
\prod_{e\in \cF} \beta_e .
\end{equation}
Finally if we divide
$F_{G} (\beta)$ by $\prod_{e \in E} \beta_e$
and change variables to $\alpha_e = \beta_e^{-1}$
we obtain the ``(Kirchhoff-Tutte)-Symanzik'' polynomial. This polynomial is usually
defined for connected graphs, in which case the sum
runs over spanning trees $\cT$
of $G$.
\begin{equation} \label{kts}
U_G(\alpha)=
\sum_{ \cT \; \; \mathrm{spanning \; \; tree \; \; of \; \; } G }\quad
\prod_{e\not\in \cT} \alpha_e .
\end{equation}
This polynomial satisfies the deletion-contraction recursion
\begin{equation}\label{delcontrsym1}
U_G(\alpha)=U_{G/e}(\alpha)+\alpha_e U_{G-e}(\alpha)
\end{equation}
for any regular edge $e$, together with the terminal form evaluation
\begin{equation} \label{delcontrsym2}
U_G (\alpha) = \prod_{e \; \; \mathrm{self-loop} \; \; } \ \ \ \alpha_e ,
\end{equation}
for any $G$
solely made of self-loops and bridges.
The deletion-contraction (\ref{delcontrsym1}) can be extended
to general edges if we define $U$ for disconnected graphs as
the product over the connected components of the corresponding
$U$'s and put the contraction of any self-loop to 0.
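For instance for the triangle, whose spanning trees are the three pairs of edges, (\ref{kts}) gives $U_{K_3}(\alpha)=\alpha_1+\alpha_2+\alpha_3$. This matches the recursion (\ref{delcontrsym1}): contracting, say, $e_1$ leaves two vertices joined by the parallel edges $e_2$ and $e_3$, for which $U=\alpha_2+\alpha_3$, while deleting $e_1$ leaves a spanning tree, for which $U=1$; hence $U_{K_3}=(\alpha_2+\alpha_3)+\alpha_1\cdot 1$.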
The polynomial $U$ appears
in a key computation of QFT, namely that of
the parametric representation of the Feynman amplitude associated to the graph $G$.
We give a proof of this fact based on a new Pfaffian representation
of Feynman amplitudes together with harmonic weights at vertices so as
to make the deletion/contraction rule (\ref{delcontrsym1})-(\ref{delcontrsym2})
particularly transparent.
But to define the second (Kirchhoff-Tutte)-Symanzik polynomial, as well as to make the computation
of the first Symanzik polynomial more canonical,
we need first to enlarge slightly our category of graphs
to include some decorations at the vertices.
\subsection{Decorated graphs}
\label{decorsub}
Decorations are essential in physics to represent
the concept of {\it external variables}, which are ultimately those connected to
actual experiments and observations.
Graphs with integers attached to each vertex and their corresponding
multivariate polynomials $W_G ( \alpha_e, N_v)$ have been considered
in \cite{Wpolynomial}. But to represent external variables
we need to replace the integer $N_v$ by a set of $N_v$
disjoint objects\footnote{In mathematics such a replacement is called a
categorification of the integers $N_v$.},
hereafter called \emph{flags} (see subsection \ref{graphsub}).
Each flag is attached to a single vertex. A momentum variable $p_f$
in ${\mathbb R}^d$ is associated to each such flag. The incidence matrix
can be extended to the flags, that is we define
$\epsilon_{fv}$ as $+1$ if the flag $f$ is associated to the vertex $v$
and 0 otherwise. The total momentum incident to a subset $S$ of the
graph is then defined as $p_S = \sum_{f} \sum_{v \in S} \epsilon_{fv} p_f$.
Remark that this momentum is defined
for subgraphs $S$ which may contain
connected components reduced to single vertices.
For translation invariant QFT's,
global momentum conservation means that
the condition $p_G =0$ must be fulfilled.
Similarly we attach to each vertex a number $q_v >0$ called the (harmonic) weight of the vertex.
The total weight of a subgraph $S$ is $\sum_{v \in S} q_v$.
The deletion/contraction relation is then extended to this category of graphs.
The deletion is easy but the contraction is a bit nontrivial. For a semi-regular edge joining
vertices $v_1$ and $v_2$ it collapses the two vertices into a single one $v_{12}$,
attaching to $v_{12}$ all half-edges of $v_1$ and $v_2$. But it also attaches to
$v_{12}$ the union of all the flags attached to $v_1$ and $v_2$, so that
the total momentum incoming to $v_{12}$ is the sum of the momenta incoming
to $v_1$ and to $v_2$. Finally the new weight of $v_{12}$ is the sum
$q_{v_1}+ q_{v_2}$ of the weights of $v_1$ and $v_2$.
These decorated graphs are the natural objects on which
to define generalized Symanzik polynomials in field theory.
Remaining for the moment in the context of graph theory we
can define the second (Kirchhoff-Tutte)-Symanzik polynomial
for a \emph{connected} graph as
\begin{definition}
\begin{equation} \label{secondsysy} V_G(\alpha, p)= - \frac{1}{2}\sum_{v \ne v'} p_v \cdot p_{v'}
\sum_{\cT_2 \; \; \mathrm{2-tree \; \; separating} \;\; v \;\; {\rm and}\;\; v'} \quad
\prod_{e\not\in \cT_2} \alpha_e
\end{equation}
where a two-tree $\cT_2$ means a spanning tree minus one edge, hence a spanning forest with
two disjoint connected components $G_1$ and $G_2$; the separation condition
means that $v$ and $v'$ must belong one to $G_1$ the other to $G_2$.
\end{definition}
For any pair of distinct vertices $v$ and $v'$ we can build a canonical graph $G(v,v')$ first by joining vertices $v$ and $v'$
in $G$ with a new edge and then \emph{contracting that edge}. This operation could be called
the contraction of the pair of vertices $v$ and $v'$. The following result goes back to
Kirchhoff \cite{kirchhoff}.
\begin{proposition}\label{propUV} The second Symanzik polynomial is a quadratic form in the total momenta
$p_v$ at each vertex, whose coefficients are the $U_{G(v,v')}$ polynomials:
\begin{equation} V_G(\alpha, p)= - \frac{1}{2}\sum_{v \ne v'} p_v \cdot p_{v'}\,\; U_{G(v,v')} .
\end{equation}
\end{proposition}
\noindent{\bf Proof}\ The graph $G(v,v')$ has $V-1$ vertices, hence its spanning trees have $V-2$ edges. They cannot make
cycles in $G$ because they would make cycles in $G(v,v')$. They are therefore two-trees in $G$,
which must separate $v$ and $v'$, otherwise they would make a cycle in $G(v,v')$.
{\hfill $\Box$}
On the submanifold of flag variables satisfying the \emph{momentum conservation}
condition $p_G = \sum_f p_f = 0$ there is an alternate less symmetric definition of a
similar polynomial:
\begin{definition}
\begin{equation}\label{secondsy} \bar V_G(\alpha, p)=
\sum_{\cT_2 \; \; \mathrm{2-tree }} \; \; p_{G_1}^2 \; \;
\prod_{e\not\in \cT_2} \alpha_e
\end{equation}
where $\cT_2$ is again a two-tree with two disjoint connected components $G_1$ and $G_2$.
\end{definition}
Indeed this is an unambiguous definition. On the submanifold $p_G =0$ we have
$p_{G_1} = - p_{G_2}$, hence equation (\ref{secondsy})
does not depend on the choice of $G_1$ rather than $G_2$.
\begin{proposition}
On the manifold of flag variables satisfying the momentum conservation
condition $p_G = \sum_f p_f = 0$ one has $V_G(\alpha, p) = \bar V_G(\alpha, p)$.
\end{proposition}
\noindent{\bf Proof}\
We simply commute the sums over $v,v'$ and $\cT_2$
in (\ref{secondsysy}). For a given $\cT_2$ the condition that $v$ and $v'$ are separated allows
to separate the $p_v$ with $v \in G_1$ from the $p_{v'}$ with $v' \in G_2$; one gets therefore
$- \frac{1}{2} \; 2 p_{G_1} \cdot p_{G_2} $ which is nothing but $p_{G_1}^2$
or $p_{G_2}^2$ on the manifold $p_G =0$.
{\hfill $\Box$}
We shall give in subsection \ref{symanpoly} a definition of
generalized first and second Symanzik polynomials for any graph, connected or not
from which $U_G$, $V_G$ or $\bar V_G$ can be easily derived in certain limits.
Before actually performing these computations we include a brief interlude
on Grassmann representation of determinants and Pfaffians. The reader familiar with this topic
can jump directly to the next section.
\subsection{Grassmann representations of determinants and Pfaffians}
Independent Grassmann variables $\chi_1, ..., \chi_n$ satisfy
complete anticommutation relations
\begin{equation} \chi_i \chi_j = - \chi_j \chi_i \quad \forall i, j
\end{equation} so that any function of these variables is a polynomial
with highest degree one in each variable.
The rules of Grassmann integrations
are then simply
\begin{equation} \int d\chi = 0, \; \; \; \; \int \chi d\chi = 1 .
\end{equation}
The determinant of any $n$ by $n$ matrix can then be expressed as
a Grassmann Gau\ss ian integral over $2n$ independent
Grassmann variables which it is convenient to name as
$\bar \psi_1, \ldots , \bar \psi_n$, $\psi_1, \ldots , \psi_n$,
although the bars have nothing yet at this stage to do with complex conjugation.
The formula is
\begin{equation} \det M = \int \prod d\bar \psi_i d\psi_i e^{-\sum_{ij} \bar\psi_i M_{ij} \psi_j } .
\end{equation}
The Pfaffian $\mathrm{Pf} (A)$ of an \emph{antisymmetric}
matrix $A$ is defined by
\begin{equation}
\det A=[\mathrm{Pf} (A)]^2 .
\end{equation}
\begin{proposition}
We can express the Pfaffian as:
\begin{eqnarray}
\mathrm{Pf} (A) =\int d\chi_1...d\chi_n
e^{-\sum_{i<j}\chi_i A _{ij}\chi_j}
= \int d\chi_1...d\chi_n e^{-\frac{1}{2}\sum_{i,j}\chi_i A _{ij}\chi_j} .
\label{pfaff}
\end{eqnarray}
\end{proposition}
\noindent{\bf Proof}\ Indeed we write
\begin{equation}
\det A= \int \prod_i d\bar \psi_i d\psi_i e^{-\sum_{ij} \bar\psi_i A_{ij} \psi_j } .
\end{equation}
Performing the change of variables (which a posteriori justifies the complex notation)
\begin{eqnarray} \label{changepfaff}
\bar\psi_i = \frac{1}{ \sqrt{2}}(\chi_i - i\omega_i), \quad
\psi_i = \frac{1}{\sqrt{ 2}}(\chi_i + i\omega_i),
\end{eqnarray}
whose Jacobian is $i^{-n}$, the new
variables $\chi$ and $\omega$ are again independent Grassmann variables.
Now a short computation using $A_{ij}=-A_{ji}$ gives
\begin{eqnarray}
\det A&=& i^{-n} \int \prod_i d\chi_i d\omega_i e^{-\sum_{i<j} \chi_i A_{ij} \chi_j
- \sum_{i<j} \omega_i A_{ij} \omega_j } \nonumber \\
&=& \int \prod_i d\chi_i e^{-\sum_{i<j} \chi_i A_{ij} \chi_j }\prod_i d\omega_i e^{ -\sum_{i<j} \omega_i A_{ij} \omega_j },
\label{pfaffi}\end{eqnarray}
where we used that $n=2p$ has to be even and that a factor $(-1)^p$ is generated
when changing $ \prod_i d\chi_i d\omega_i $ into $ \prod_i d\chi_i \prod_i d\omega_i $.
Equation (\ref{pfaffi}) shows why $\det A$ is a perfect square and proves (\ref{pfaff}). {\hfill $\Box$}
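For instance for a $4\times 4$ antisymmetric matrix, expanding the exponential in (\ref{pfaff}) and keeping the terms that saturate the four Grassmann integrations yields, up to the overall sign fixed by the ordering of the $d\chi_i$, $\mathrm{Pf} (A)=A_{12}A_{34}-A_{13}A_{24}+A_{14}A_{23}$, whose square is indeed $\det A$.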
\begin{lemma}\label{quasipfaff}
The determinant of a matrix $D+A$ where $D$ is
diagonal and $A$ antisymmetric has a "quasi-Pfaffian" representation
\begin{equation} \det (D+A) = \int \prod_i d\chi_i d \omega_i e^{-\sum_i \chi_i D_{ii} \omega_i - \sum_{i <j}
\chi_i A_{ij} \chi_j + \sum_{i <j} \omega_i A_{ij} \omega_j } .
\end{equation}
\end{lemma}
\noindent{\bf Proof}\ The proof consists in performing the change of variables
(\ref{changepfaff}) and canceling carefully the $i$ factors. {\hfill $\Box$}
\subsubsection{Tree-Matrix Theorem}
Let $A$ be an $n\times n$ matrix such that
\begin{equation} \label{sumnulle}
\sum_{i=1}^n A_{ij} = 0 \ \ \forall j \ .
\end{equation} Obviously $\det A=0$. The interesting quantities are e.g.\ the
diagonal minors $\det A^{ii}$ obtained by
deleting the $i$-th row and the $i$-th column in $A$. The ``Kirchhoff-Maxwell''
matrix tree theorem expresses these minors as sums over trees:
\begin{theorem}[Tree-matrix theorem]
\begin{equation}\label{treemat}
\det A^{ii}= \sum_{T\ {\rm spanning\ tree}} \prod_{e \in T} (-A_e ) ,
\end{equation}
where the sum is over spanning trees on $\{1, \dots, n\}$ oriented away from the root $i$, and where $A_e$ stands for $A_{jk}$ if the edge $e$ of $T$ is directed from $j$ to $k$, away from the root.
\end{theorem}
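As a minimal check, take $n=2$ and $A=\begin{pmatrix} a & -b \\ -a & b \end{pmatrix}$, whose columns sum to zero. Then $\det A^{11}=b$, and the unique spanning tree on $\{1,2\}$, directed away from the root $1$, consists of the single edge $e=(1,2)$ with $-A_e=-A_{12}=b$, in agreement with (\ref{treemat}).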
\noindent{\bf Proof}\ We give here a sketch of the Grassmann proof given in \cite{Abd}.
We can assume without loss of generality that $i=1$. For any matrix A we have:
\begin{equation}
\det A^{11}= \int \big[ \prod_{i=1}^n d\bar\psi_i d\psi_i \big] \psi_1 \bar\psi_1 e^{-\sum_{i,j}\bar\psi_i
A_{ij}\psi_j} .
\end{equation}
The trick is to use (\ref{sumnulle}) to write
\begin{equation}
{\bar \psi}A\psi=\sum_{i,j=1}^n
({\bar\psi}_i-{\bar\psi}_j)A_{ij}\psi_j ,
\end{equation}
hence
\begin{eqnarray}
\det A^{11} &=&
\int {\rm d}{\bar\psi} {\rm d}\psi
\ (\psi_1 {\bar \psi}_1)
\exp \lp -\sum_{i,j=1}^n A_{ij}({\bar\psi}_i-{\bar\psi}_j)\psi_j
\rp
\nonumber\\
&=& \int {\rm d}{\bar\psi} {\rm d}\psi
\ (\psi_1 {\bar \psi}_1)
\left[ \prod_{i,j=1}^n \lp 1-A_{ij}({\bar\psi}_i-{\bar\psi}_j)\psi_j \rp
\right]
\end{eqnarray}
by the Grassmann rules. We now expand to get
\begin{equation}
\det A^{11} =
\sum_{\cG}
\lp
\prod_{\ell=(i,j)\in\cG}(-A_{ij})
\rp
{\Omega}_{\cG}
\end{equation}
where $\cG$ is {\em any} subset of $[n]\times[n]$, and we used the notation
\begin{equation}
{\Omega}_{\cG}\equiv
\int {\rm d}{\bar\psi} {\rm d}\psi
\ (\psi_1 {\bar \psi}_1)
\lp
\prod_{(i,j)\in\cG}
\left[ ({\bar\psi}_i-{\bar\psi}_j)\psi_j \right]
\rp .
\end{equation}
Then the theorem follows from the following
\begin{lemma}
${\Omega}_{\cG}=0$
unless the graph $\cG$
is a tree directed away from 1 in which case
${\Omega}_{\cG}=1$.
\end{lemma}
\noindent{\bf Proof}\
Trivially, if $(i,i)$ belongs to $\cG$, then the integrand of
${\Omega}_{\cG}$ contains a factor ${\bar\psi}_i-{\bar\psi}_i=0$ and
therefore ${\Omega}_{\cG}$ vanishes.
But the crucial observation is that if
there is a loop in $\cG$, say with vertices $\ta(1),\dots,\ta(k),\ta(1)$, then again ${\Omega}_{\cG}=0$.
This is because then the integrand of ${\Omega}_{\cG}$ contains the factor
\begin{equation}
{\bar\psi}_{\ta(k)}-{\bar\psi}_{\ta(1)}=
({\bar\psi}_{\ta(k)}-{\bar\psi}_{\ta(k-1)})+\cdots+
({\bar\psi}_{\ta(2)}-{\bar\psi}_{\ta(1)}) .
\end{equation}
Inserting this telescoping expansion of the factor
${\bar\psi}_{\ta(k)}-{\bar\psi}_{\ta(1)}$ into the integrand of
${\Omega}_{\cG}$, the latter breaks into a sum of $(k-1)$ products.
For each of these products, there exists an $\al\in\ZZ/k\ZZ$
such that the factor $({\bar\psi}_{\ta(\al)}-{\bar\psi}_{\ta(\al-1)})$
appears {\em twice} : once with the $+$ sign from the telescopic
expansion of $({\bar\psi}_{\ta(k)}-{\bar\psi}_{\ta(1)})$, and once more
with a $+$ (resp. $-$) sign if $(\ta(\al),\ta(\al-1))$
(resp.\ $(\ta(\al-1),\ta(\al))$) belongs to $\cG$.
Again, the Grassmann rules entail that ${\Omega}_{\cG}=0$. {\hfill $\Box$}
To complete the proof of (\ref{treemat}) every connected component of $\cG$ must contain
1, otherwise there is no way to saturate the $d\psi_1$ integration.
This means that $\cG$ has to be a directed tree on $\{1,... n\}$.
It remains only to see that $\cG$ has to be directed away from 1,
which is not too difficult.
{\hfill $\Box$}
The interlude is over and we now turn to perturbative QFT and to the
parametric representation of Feynman amplitudes.
\section{Parametric Representation of Feynman Amplitudes}
\setcounter{equation}{0}
In this section we will give a brief introduction to the
parametric representation of ordinary QFT on a commutative
vector space ${\mathbb R}^d$. We may take the example of $\phi^4$ bosonic theory but
the formalism is completely general.
\subsection{Green and Schwinger functions in QFT}
In particle physics the most important quantity is the diffusion
matrix S whose elements or cross sections can be measured in
particle experiments. The S matrix can be expressed from the Green
functions through the reduction formulas. Hence they
contain all the relevant information for that QFT.
These Green functions are time ordered vacuum expectation values of the
fields $\phi$, which are operator-valued and act on the Fock space:
\begin{equation}
G_N(z_1,...,z_N)=\langle
\psi_0,T[\phi(z_1)...\phi(z_N)]\psi_0\rangle .
\end{equation}
Here $\psi_0$ is the vacuum state and the $T$-product orders
$\phi(z_1)...\phi(z_N)$ according to increasing times.
In the functional integral formalism the Green functions can be
written as:
\begin{equation}
G_N(z_1,...,z_N)=\frac{\int\prod_{j=1}^N\phi(z_j){e^{i\int\mathcal
{L}(\phi(x))dx}}D\phi}{\int{e^{i\int\mathcal {L}(\phi(x))dx}}D\phi} .
\end{equation}
Here $\mathcal {L}=\mathcal {L}_0+\mathcal {L}_{int}$ is the full
Lagrangian of the theory. The Green functions continued to Euclidean
points are called the Schwinger functions and are given by
the Euclidean Feynman-Kac formula:
\begin{equation}
S_N(z_1,...,z_N)=Z^{-1}\int\prod_{j=1}^N\phi(z_j)e^{-\int\mathcal
{L}(\phi(x))dx}D\phi ,
\end{equation}
\begin{equation}
Z=\int e^{-\int\mathcal {L}(\phi(x))dx}D\phi .
\end{equation}
For instance for the $\phi^4$ theory, $\mathcal {L}_{int}=\frac{\lambda}{4!} {\phi (x)}^4$
and we have
\begin{equation}\label{phi4theory}
\mathcal {L}(\phi)=\frac{1}{2} \partial_\mu \phi(x) \partial^\mu
\phi(x) +\frac{1}{2}m{\phi(x)}^2 + \frac{\lambda}{4!} {\phi (x)}^4
\end{equation}
where
\begin{itemize}
\item $\lambda$ is the (bare) coupling constant, which characterizes the
strength of the interaction, the traditional factor 1/4! is inessential but slightly
simplifies some computations.
\item $m$ is the (bare) mass,
\item $Z$ is the normalization factor,
\item $D\phi$ is an ill-defined "flat" product of Lebesgue measures $\prod_x d\phi(x)$
at each space time point.
\end{itemize}
The coefficient of the Laplacian is set to 1 in (\ref{phi4theory}) for simplicity. Although in four dimensions this coefficient
actually flows under renormalization, it is possible to exchange this flow
for a rescaling of the field $\phi$.
To progress towards mathematical respectability and to prepare for perturbation theory,
we combine the $e^{-\int\mathcal {L_0}(\phi(x))dx}D\phi$ and the free normalization factor
$Z_0 = \int e^{-\int\mathcal {L_0}(\phi(x))dx}D\phi$ into a normalized Gau\ss ian measure $d\mu_C (\phi)$
which is well-defined on some subspace of the Schwartz space of distributions $S'(R^d)$ \cite{GJ}.
The covariance of this measure is the (free) translation invariant propagator
$C(x,y) = \int \phi(x) \phi(y) d\mu_C (\phi)$, which by slight abuse of notation
we also write as $C(x-y)$ and whose Fourier transform is
\begin{equation}
C(p)=\frac{1}{(2\pi)^d}\frac{1}{p^2+m^2}.
\end{equation}
In this way the Schwinger functions are rewritten as
\begin{equation}
S_N(z_1,...,z_N)=Z^{-1}\int_{R^d}\prod_{j=1}^N \phi(z_j) e^{ -\int_{R^d} \mathcal{L}_{int} (\phi) } d\mu_C(\phi ),
\end{equation}
\begin{equation}
Z=\int e^{-\int_{R^d} \mathcal {L}_{int}(\phi(x))dx} d \mu_C(\phi).
\end{equation}
However this expression is still formal for two reasons: for typical fields the interaction factor is
not integrable over $R^d$ so that $\int_{R^d} \mathcal{L}_{int} (\phi)$ is ill-defined (infrared or thermodynamic problem)
and in dimension more than 2 even when the interaction factor is restricted to a finite volume
it is still ill-defined because for typical distributions $\phi$, products such as $\phi^4(x)$
are also ill-defined. This is the famous ultraviolet problem which requires renormalization (see \cite{Riv3}),
but this problem is not addressed here, as we discuss solely the structure of the integrands in Feynman
parametric representations, not the convergence of the integrals.
The reader worried by ill-defined integrals in the rest of this paper
for space-time dimension $d$ larger than $2$ should impose an
ultraviolet regulator. This means replacing $C(p)$
by a better-behaved $C_{\kappa}(p)$ such as
\begin{equation}
C_\kappa(p)=\frac{1}{(2\pi)^d}\frac{e^{-\kappa(p^2+m^2)}}{p^2+m^2}=\int_\kappa^\infty
e^{-\alpha (m^2+p^2)}d\alpha ,
\end{equation}
so that
\begin{equation}
C_\kappa(x,y)=\int_\kappa^\infty e^{-\alpha
m^2-(x-y)^2/{4\alpha }}\frac{d\alpha}{\alpha ^{d/2}} .
\end{equation}
We now turn to perturbation theory in which the factor $e^{-\int_{R^d} \mathcal{L}_{int} (\phi) }$
is expanded as a power series. This solves the thermodynamic problem, at the cost of introducing
another problem, the divergence of that perturbation expansion. This divergence which in the good cases
can be tackled by constructive field theory \cite{GJ,Riv4,constr1,constr2} will not be treated in this paper.
\subsection{Perturbation theory, Feynman Graphs}
Wick's theorem is nothing but the rule of pairing which computes the moments of
a Gau\ss ian measure. It allows one to integrate monomials of fields
\begin{equation}
\int \phi(x_1)...\phi(x_n)d\mu_C(\phi)=\sum_G \prod_{e\in G}C(x_{i_e},x_{j_e})
\end{equation}
where the sum over $G$ is over all contraction schemes (i.e. pairings of the fields) and
$C(x_{i_e},x_{j_e})$ is the propagator kernel joining the arguments of the two fields
$\phi(x_{i_e})$ and $\phi(x_{j_e})$ paired
into the edge $e$ by the contraction scheme $G$.
It was Feynman's master stroke to represent each such contraction scheme by a particular \emph{graph}
in which edges represent pairs of contracted fields and vertices stand for the interaction.
In the case of a $\phi^4$ theory, remark that these interaction vertices have degree 4.
Indeed the Schwinger functions after perturbative expansion are
\begin{equation}
S_N(z_1...z_N)=\frac{1}{Z}\sum_{n=0}^\infty\frac{(-\lambda)^n}{4!^{n}\, n!}
\int\big[\int \prod_{v=1}^n \phi^4(x_v)dx_v \big] \phi(z_1)...\phi(z_N)d\mu(\phi).
\end{equation}
The pairings of Wick's theorem therefore occur between $n$ internal vertices each equipped with four fields and
$N$ external vertices or sources corresponding to the single fields $\phi(z_1)$, ... , $\phi(z_N)$.
Schwinger functions are therefore expressed as sums over Feynman graphs of
associated quantities or weights called the Feynman amplitudes. In this position space
representation the Feynman graphs
have both $n$ \emph{internal vertices} corresponding to the
${\cal L}_{int}$ factors, plus $N$ external vertices of degree 1 corresponding to the fields
$\phi(z_1), ... , \phi(z_N)$. In the case of the $\phi^4$ theory
each internal vertex has degree 4.
\begin{figure}
\begin{center}
\includegraphics[scale=0.9,angle=-90]{f4.pdf}
\caption{ A $\phi^4$ graph}\label{fig:nor}
\end{center}
\end{figure}
The Feynman amplitudes are obtained by integrating over all positions of internal vertices
the product of the propagator kernels for all the edges of the graphs
\begin{equation} \label{amplix}
A_G (z_1, ..., z_N) = \int \prod_{v} dx_v \prod_{e \in G} C(x_{i_e},x_{j_e}) ,
\end{equation}
where the product $\prod_v$ runs over the \emph{internal} vertices $v$.
The quantities that are relevant to physical experiments are the
\emph{connected} Schwinger functions which can be written as:
\begin{equation}
\Gamma_N(z_1,...,z_N)=\sum_{ \phi^4 {\rm \ connected\ graphs\ } G {\rm \ with\ }
N(G)=N } \frac{(-\lambda)^{n(G)}}{S(G)} A(G)(z_1,...,z_N),
\end{equation}
where $S(G)$ is a combinatoric factor (symmetry factor).
The momentum space representation corresponds
to a Fourier transform to momenta variables called $p_1,... , p_N$:
\begin{equation} \Gamma_N(p_1,...,p_N) =\int dz_1...dz_N
e^{2i\sum p_f z_f} \Gamma_N(z_1,...,z_N),
\end{equation}
where the factor 2 is convenient and we forget inessential normalization factors.
This is a distribution, proportional to a global momentum conservation
$\delta (\sum_{f=1}^N p_f)$. From now on we use an index $f$ to label external momenta
to remember that they are associated to corresponding
graph-theoretic \emph{flags}. Usually one factors out this
distribution together with the external propagators, to obtain the
expansion in terms of truncated amputated graphs:
\begin{eqnarray}
\Gamma^T_N(p_1,...,p_N)&=&
\sum_{\phi^4 {\rm \ truncated\ graphs\ } G {\rm \ with\ }
N(G)=N}\frac{(-\lambda)^{n(G)}}{S(G)}
\nonumber \\
&&\delta (\sum_{f=1}^N p_f)
\prod_{f=1}^N \frac{1}{p_f^2 + m^2} A^T_G(p_1,...,p_N) .
\label{globaldelta}
\end{eqnarray}
In this sum we have to describe in more detail
the \emph{truncated graphs} $G$ with $N$ external flags.
Such truncated graphs are connected, but they may contain bridges and self-loops. They no longer have external vertices of degree 1. Instead, they still have $N$ external variables $p_f$, no longer
associated to edges but to flags ($N$ in total), which decorate the
former internal vertices. For instance for the $\phi^4$ theory
the ordinary degree of a truncated graph $G$ is no longer 4 at each
internal vertex; it is the total degree, that is the number of half-edges plus flags, which
remains 4 at every vertex.
\begin{figure}
\begin{center}
\includegraphics[scale=0.9,angle=-90]{f6.pdf}
\caption{ A truncated $\phi^4$ graph}\label{fig:trun}
\end{center}
\end{figure}
Ordinary Schwinger functions can be expressed as sums over partitions of the arguments of products of the corresponding
truncated functions. We now give the explicit form of the corresponding
truncated amplitudes $A^T_G(p_1,...,p_N)$.
\subsection{Parametric representation}\label{paramQFT}
We shall first consider a fixed truncated oriented diagram $G$
and compute the corresponding contribution
or amplitude $A^T_G$ as given by Feynman rules.
We denote again by $E$ and $V$ the
number of edges and vertices respectively, and by $N$ the number
of flags. Since $G$ is connected its incidence matrix has rank $V-1$.
Now consider a Feynman graph $G$ contributing to some truncated
Schwinger function $\Gamma^T(p_1,...,p_N)$. The usual way to take into
account the global $\delta$ function
in (\ref{globaldelta}) is to restrict to configurations such that $\sum_f p_f=0$.
Extraction of this global delta function in (\ref{globaldelta}) for the amplitude
of a particular graph can be done provided we do not integrate
the position of one of the vertices in (\ref{amplix}), but rather fix it
at an arbitrary point, e.g.\ the origin. From now on we suppose
this vertex is $\bar v$, the one with the last index. It provides a \emph{root}
in the graph $G$. However this standard procedure requires the non-canonical choice
of that root vertex, and the final result does not depend on that choice.
Another possibility is to modify the interaction $\lambda \phi^{4}(x)$
into $\lambda e^{-q x^2} \phi^{4}(x)$, in which case there is no longer global momentum conservation.
One can compute modified amplitudes $B^T_G (p_1, ... p_N; q)$
without factoring out the global $\delta (\sum_{f=1}^N p_f)$ factor, so that
\begin{eqnarray}
\Gamma^T_N(p_1,...,p_N; q)&=&
\sum_{\phi^4 {\rm \ truncated\ graphs\ } G {\rm \ with\ }
N(G)=N}\frac{(-\lambda)^{n(G)}}{S(G)}
\nonumber \\
&&
\prod_{f=1}^N \frac{1}{p_f^2 + m^2} B^T_G(p_1,...,p_N; q) .
\label{globaldelta1}
\end{eqnarray}
The momentum conserving usual amplitudes are recovered when $q \to 0$:
\begin{equation}\label{limBA} \lim_{q \to 0} B^T_G(p_1,...,p_N; q) = \delta (\sum_{f=1}^N p_f) A^T_G (p_1, ... ,p_N) .
\end{equation}
This is the procedure we shall follow
in subsection \ref{symanpoly}, because it avoids the choice of a noncanonical root.
But for the moment let us complete the standard presentation of $A^T_G (p_1, ... ,p_N)$.
The momentum representation of $A^T_G$, forgetting from now on
inessential factors of $2\pi$, is:
\begin{eqnarray}
A^T_G (p_1, ... ,p_N)&=& \int\prod_{e=1}^E d^d k_e
\frac{1}{k_e^2+m^2}
\prod_{v=1}^{V-1} \delta (\epsilon_{fv} p_f+\epsilon_{ev}k_e) .
\label{ampli}
\end{eqnarray}
in which we use the convention that repeated indices are summed,
so that $\epsilon_{fv} p_f+\epsilon_{ev}k_e$
stands for the total momentum $\sum_f \epsilon_{fv} p_f+ \sum_e \epsilon_{ev}k_e$
incoming at vertex $v$.
To obtain the parametric representation we have first to
rewrite the propagators as :
\begin{equation} \frac{1}{k^2+m^2}=\int_0^{\infty} d\alpha
e^{-\alpha(k^2+m^2)} . \label{prop1}
\end{equation}
We obtain the momentum parametric representation
\begin{equation}
A^T_G(p_1,...,p_N)=\int \prod_{e =1}^E d \alpha_e d^d
k_e e^{- \alpha_e (k_e^2+m^2)}
\prod_{v=1}^{V-1} \delta (\epsilon_{fv} p_f+\epsilon_{ev}k_e) .
\label{paramoment}
\end{equation}
Fourier transforming the $V-1$ Dirac distributions into oscillating integrals
we obtain, up to some inessential global factors the phase-space parametric representation
\begin{equation}
A^T_G(p_1,...,p_N) = \int \prod_{e =1}^E \big[ d \alpha_e e^{-\alpha_e m^2} d^d k_e \big]
\prod_{v=1}^{V-1} d^d x_v
e^{-\alpha_e k_e^2 + 2 i ( p_f \epsilon_{fv} x_v + k_e\epsilon_{ev} x_v ) } ,
\label{paraphase}
\end{equation}
where again
$ k_e\epsilon_{ev} x_v$ means
$\sum_{e=1}^E \sum_{v=1}^{V-1}
k_e\epsilon_{ev} x_v$ etc, and the factor 2 is convenient.
Finally integrating out the edge momenta whose dependence is Gau\ss ian
leads to the $x$ or direct space parametric representation:
\begin{equation}
A^T_G(p_1,...,p_N)= \int \prod_{e =1}^E d \alpha_e
\frac{e^{-\alpha_e m^2}}{\alpha_e^{d/2}}
\prod_{v=1}^{V-1} d^d x_v e^{2i p_f \epsilon_{fv} x_v - x_v \cdot x_{v'} \epsilon_{ve} \epsilon_{v'e} / \alpha_e } .
\label{paradirect}
\end{equation}
Remember this amplitude is only defined on the submanifold $p_G =0$, because
it is only there that the formula gives a result independent of the choice of the
root not integrated out in (\ref{paradirect}).
The parametric representation consists in integrating out fully
the $x$ or $p$ variables in (\ref{paramoment}), (\ref{paraphase})
or (\ref{paradirect}).
One obtains the parametric representation, which is an integral
on $\alpha$ parameters only:
\begin{equation}
A^T_G(p_1,...,p_N) = \int \prod_{e =1}^E \big[ d \alpha_e e^{- \alpha_e m^2}\big]
\frac{e^{-V_G (p, \alpha)/ U_G (\alpha)}}{U_G (\alpha)^{d/2}} ,
\label{paramet}
\end{equation}
where $U_G$ and $V_G$ are called the first and second
Symanzik's polynomials.
\begin{theorem}\label{theosym}
The first Symanzik polynomial $U_G$ in (\ref{paramet})
is the multivariate Tutte polynomial (\ref{kts}).
On the submanifold $p_G =0$, the only one where it is unambiguously defined,
the second polynomial $V_G$ of (\ref{paramet}) coincides with (\ref{secondsysy}) and (\ref{secondsy}).
\end{theorem}
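As the simplest illustration, consider the ``bubble'' graph made of two vertices joined by two edges, with total flag momentum $p$ entering one vertex and $-p$ the other. Its spanning trees are the two single edges, so (\ref{kts}) gives $U_G=\alpha_1+\alpha_2$, while the only two-tree is the empty edge set, which separates the two vertices, so (\ref{secondsy}) gives $V_G=p^2\,\alpha_1\alpha_2$. Inserted in (\ref{paramet}) this reproduces the familiar one-loop bubble integral.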
We are going to give two proofs of this classic theorem of quantum field theory,
one relying directly on contraction-deletion and on the phase-space representation (\ref{paraphase})
the other more standard and relying on the direct representation (\ref{paradirect}) and on the tree-matrix theorem.
Indeed in order to compute the Symanzik's polynomials, let us remark first that the
momentum representation mostly used in textbooks
is not very convenient. To use (\ref{paramoment})
we should ``solve" the $\delta$ functions, that is rewrite
each edge momentum in terms of independent momenta for cycles. In physics
this is called a momentum routing. But such a momentum routing is linked to the
choice of a particular spanning tree of $G$. The momenta of the edges not in this tree are kept as independent variables and the tree edges momenta are recursively computed in terms of those by progressing from the leaves of the tree towards the root which is the fixed vertex $v_n$. This is not a canonical prescription, as it depends on the choice of the tree.
The representations (\ref{paraphase})
or (\ref{paradirect}) are more convenient to integrate the space or momentum variables
because the dependence in variables $x$ and $k$ is Gau\ss ian so that the result
is a determinant to a certain power times a Gau\ss ian in the external variables.
In fact (\ref{paraphase}) is the best as we shall argue below.
However there is still a small noncanonical choice, the one of the root. This is why
we prefer to compute the regularized amplitudes
\begin{equation}
B^T_G(p_1,...,p_N;q ) = \int \prod_{e =1}^E \big[
d \alpha_e e^{-\alpha_e m^2} d^d k_e \big] \prod_{v=1}^{V} d^d x_v
e^{-\alpha_e k_e^2 - q \sum_{v=1}^V x_v^2
+ 2 i ( p_f \epsilon_{fv} x_v + k_e\epsilon_{ev} x_v )}
\label{paraphasereg}
\end{equation}
and to deduce the ordinary amplitudes from a limit $q \to 0$.
The last modification we perform is to attribute a different weight $q_v$ to each vertex regulator.
This is more natural from the point of view of universal polynomials.
So we define
\begin{equation}
B^T_G(p_1,...,p_N; \{q_v\} ) = \int \prod_{e =1}^E \big[
d \alpha_e e^{-\alpha_e m^2} d^d k_e \big] \prod_{v=1}^{V} d^d x_v
e^{-\alpha_e k_e^2 - q_v x_v^2
+ 2 i ( p_f \epsilon_{fv} x_v + k_e\epsilon_{ev} x_v )}.
\label{paraphaseregs}
\end{equation}
These amplitudes are Gau\ss ian in the external variables $p_f$ and no longer involve
any noncanonical choice. We shall now compute their generalized Symanzik polynomials
and deduce the ordinary Symanzik polynomials from these as
leading terms when all $q_v$'s are sent to 0.
\subsection{Generalized Symanzik Polynomials}
\label{symanpoly}
We consider the phase space representation (\ref{paraphaseregs}).
We have to perform a Gau\ss ian integral in $E+V$ variables (each of which is $d$-dimensional).
We consider these momentum and position variables as a single vector.
We also forget the label $^T$ for truncation as it is no longer needed in this section. The
graph we consider may be connected or not.
We introduce the condensed notations:
\begin{eqnarray} B_G (p_f, q_v) &=& \int \prod_e
d\alpha_e e^{-\alpha_e m^2} d^d k_e \int \prod_v d^d x_v
e^{- Y X_G Y^t} \label{mainformb}
\end{eqnarray}
where $X_G$ is a $d(E+V+N)$ by $d(E+V+N)$ square matrix, namely
\begin{equation} X_G =
\begin{pmatrix}
\alpha_e & - i \epsilon_{ev} & 0 \\
- i \epsilon_{ev} & q_v & - i \epsilon_{fv} \\
0 & - i \epsilon_{fv} & 0\\
\end{pmatrix}
\end{equation}
where $\alpha_e$ and $q_v$ are short notations for diagonal matrices
$\alpha_e \delta_{e,e'}$ and $q_v \delta_{v,v'}$.
$Y$ is a row vector with $d(E+V+N)$ components, namely
$Y = \begin{pmatrix}
k_e & x_v & p_f\\
\end{pmatrix} $.
We can further decompose $X_G$ as
\begin{eqnarray} \label{mainformq}
X_G= \begin{pmatrix} Q_G & - i R_G^{t} \\ - i R_G & 0 \\
\end{pmatrix}\ .
\end{eqnarray}
where $Q_G =
\begin{pmatrix} \alpha_e & - i \epsilon_{ev} \\
- i \epsilon_{ev} & q_v \\ \end{pmatrix} $ is a $d(E + V)$ by $d(E + V)$ square matrix and
$R_G$ is the real rectangular $dN$ by $d(E+V)$ matrix
made of a $dN$ by $dE$ zero block and the $dN$ by $dV$ ``incidence flag'' matrix $ \epsilon^{\mu}_{fv}$.
The dimensional indices $\mu$ being quite trivial, we no longer write them from now on.
We denote by $P$ the row of external momenta $p_f$, hence the last part of the row vector $Y$.
Gau\ss ian integrations can be performed explicitly and the result
is a Gau\ss ian in external variables. Therefore up to inessential constants
\begin{eqnarray} B_G (p_f, q_v) &=& \int \prod_e d\alpha_e e^{-\alpha_e m^2}
\frac{1}{\det Q_G^{d/2}} e^{ - P R_G Q_G^{-1} R_G^{t} P^t }
\nonumber
\\ &=& \int \prod_e d\alpha_e e^{-\alpha_e m^2}
\frac {e^{- {\cal V}_G / {\cal U}_G }}{{\cal U}_G^{d/2}}
\label{defampliB}
\end{eqnarray}
for some polynomial ${\cal U}_G$ in the $\alpha$'s and $q$'s, and
a quadratic form ${\cal V}_G$ in the $p$ variables with polynomial
coefficients in the $\alpha$'s and $q$'s.
\begin{definition}
The generalized Symanzik polynomials with harmonic regulators are the polynomials
appearing in (\ref{defampliB}), namely
\begin{equation}
{\cal U}_G (\alpha_e, q_v) = \det Q_G,
\end{equation}
\begin{equation} \label{secondpol}
{\cal V}_G (\alpha_e, q_v,p_f)/ {\cal U}_G (\alpha_e, q_v) = P R_G Q_G^{-1} R_G^{t} P^t .
\end{equation}
\end{definition}
These polynomials can be computed explicitly:
\begin{theorem}\label{symantheo}
\begin{equation}
{\cal U}_G (\alpha_e, q_v) = \sum_{{\cal F}} \prod_{e \not \in {\cal F}} \alpha_e \prod_{{\cal C}} q_{\cal C} ,
\end{equation}
\begin{equation}
{\cal V}_G (\alpha_e, q_v,p_f) = \sum_{{\cal F}} \prod_{e \not \in {\cal F}} \alpha_e
\sum_{{\cal C}} p_{\cal C}^2 \prod_{{\cal C}' \ne {\cal C}} q_{{\cal C}'} ,
\end{equation}
where the sum over ${\cal F}$ runs over all forests of the graph, and the indices ${\cal C}$ and ${\cal C}'$
run over the connected components of each forest (including isolated vertices if any).
The variables $p_{{\cal C}}$ and $q_{{\cal C}}$ are the natural sums associated
to these connected components, namely $q_{\cal C} = \sum_{v \in {\cal C}} q_v$, while $p_{\cal C}$ is the sum of the $p_f$ over the flags attached to ${\cal C}$.
\end{theorem}
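As a small cross-check of this theorem (an illustrative sketch of ours, in Python, not part of the proof; the one-edge graph and all names are chosen for the example), one can compare $\det Q_G$, in the real form obtained below after factoring out the powers of $i$, with the forest expansion:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
alpha, q_u, q_v = rng.uniform(0.5, 2.0, size=3)

# Graph with two vertices u, v and a single edge e between them:
# epsilon_{eu} = +1, epsilon_{ev} = -1.  Q_G is the diagonal
# (alpha_e, q_u, q_v) plus the antisymmetric incidence block.
Q = np.array([[alpha, -1.0, 1.0],
              [1.0,   q_u,  0.0],
              [-1.0,  0.0,  q_v]])

# The forests are {} (two isolated vertices) and {e} (one component),
# giving U_G = alpha q_u q_v + (q_u + q_v).
forest_sum = alpha * q_u * q_v + (q_u + q_v)
assert np.isclose(np.linalg.det(Q), forest_sum)
\end{verbatim}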
In order to prove this theorem we now introduce the
quasi-Grassmann representations of ${\cal U}_G$
and ${\cal V}_G$ of Lemma \ref{quasipfaff}.
Let us first calculate ${\cal U}$, hence the determinant of $Q_G$.
Factoring out powers of $i$ we get:
\begin{equation}
\det Q_G = \det
\begin{pmatrix}
\alpha_e&- \epsilon_{e v}\\
\epsilon_{e v}&q_v\\
\end{pmatrix}\\
\end{equation}
which can be written as sum of a diagonal matrix $D$, with
coefficients $D_{ee} =\alpha_e$ and $D_{vv} = q_v$
and of an antisymmetric matrix $A$ with elements
$\epsilon_{ev}$, that is, $Q=D+A$.
By Lemma \ref{quasipfaff}
\begin{eqnarray}
{\cal U}_G (\alpha_e, q_v) = \int \prod_{v,e} d\chi_v d\omega_v d\chi_e d\omega_e
e^{ - \alpha_e\chi_e \omega_e}e^{ -q_v\chi_v \omega_v} e^{- \chi_e
\epsilon_{ev} \chi_v + \omega_e
\epsilon_{ev} \omega_v } .
\end{eqnarray}
Similarly ${\cal V}$, which is a minor related to the $Q_G$ matrix, is given by a Grassmann
integral, but with sources:
\begin{eqnarray}
{\cal V}_G (\alpha_e, q_v,p_f) &=& \int \prod_{v,e} d\chi_v d\omega_v d\chi_e d\omega_e
e^{ -\alpha_e\chi_e\omega_e
}e^{- q_v\chi_v\omega_v} e^{-\chi_e
\epsilon_{ev}\chi_v + \omega_e \epsilon_{ev}\omega_v }\nonumber
\\&& p_f \cdot p_{f'} \epsilon_{fv} \epsilon_{f'v'} (\chi_v \omega_{v'} + \chi_{v'} \omega_{v} ) ,
\end{eqnarray}
where we have expanded $ \bar \psi_v \psi_{v'} $ as $\frac{1}{2} [\chi_v \chi_{v'}
+ \omega_v \omega_{v'} + i (\chi_v \omega_{v'} + \chi_{v'} \omega_{v} )]$, canceled the
$\chi_v \chi_{v'} + \omega_v \omega_{v'} $ term, which must vanish by symmetry, and absorbed the $i$
factors.
Now we can prove directly that these polynomials obey a deletion-contraction rule.
\begin{theorem}\label{grasstheo}
For any semi-regular edge $e$
\begin{eqnarray} \label{delcontr1}
{\cal U}_G (\alpha_e, q_v) = \alpha_e \; {\cal U}_{G-e} (\alpha_e, q_v) + {\cal U}_{G/e} (\alpha_e, q_v) ,
\end{eqnarray}
\begin{eqnarray} \label{delcontr2}
{\cal V}_G (\alpha_e, q_v, p_f) = \alpha_e {\cal V}_{G-e} (\alpha_e, q_v, p_f ) + {\cal V}_{G/e} (\alpha_e, q_v,p_f).
\end{eqnarray}
Moreover we have the terminal form evaluation
\begin{equation}\label{firsttermin} {\cal U}_G (\alpha_e, q_v) = \prod_{e} \alpha_e \prod_v q_v ,
\end{equation}
\begin{equation}\label{secondtermin} {\cal V}_G (\alpha_e, q_v, p_f) = \prod_{e} \alpha_e \sum_v p_v^2
\prod_{v'\ne v} q_{v'}
\end{equation}
for $G$ solely made of self-loops attached to isolated vertices.
\end{theorem}
\noindent{\bf Proof}\
If $G$ is not a terminal form we can pick any semi-regular edge $e$ connecting vertices $v_1$ and
$v_2$, with $\epsilon_{ev_1} = +1, \epsilon_{ev_2} = -1$. We expand
\begin{equation} e^{- \alpha_e\chi_e \omega_e} = 1 + \alpha_e\omega_e\chi_e .
\end{equation}
For the first term, since we must saturate the $\chi_e$
and $\omega_e$ integrations, we must keep the $\chi_e (\chi_{v_1} - \chi_{v_{2}}) $ term in $e^{\sum_{v}
\chi_e \epsilon_{ev}\chi_v }$ and the similar $\omega$ term, hence we get a contribution
\begin{eqnarray}
\det Q_{G,e,1}&=& \int \prod_{e'\ne e,v} d \chi_{e'}d\omega_{e'} d\chi_v d\omega_v
(\chi_{v_1} - \chi_{v_2}) (\omega_{v_1} - \omega_{v_2})
\nonumber\\
&&e^{-\sum_{e'\ne e }\alpha_{e'}\chi_{e'}
\omega_{e'}} e^{- q_v\chi_v\omega_v} e^{-\sum_{e' \ne e ,v} \chi_{e'}
\epsilon_{e'v}\chi_v+ \sum_{e' \ne e ,v} \omega_{e'}
\epsilon_{e'v}\omega_v } .
\end{eqnarray}
Performing the trivial triangular change of variables with unit Jacobian:
\begin{equation} \hat \chi_{v_1} = \chi_{v_1} - \chi_{v_2}, \ \ \hat \chi_{v} = \chi_{v}\ \ \mbox{for } \ v \ne v_1,
\end{equation}
and the same change for the $\omega$ variables
we see that the effect of the $(\chi_{v_1} - \chi_{v_2}) (\omega_{v_1} - \omega_{v_2})$ term
is simply to change the $v_1$ label into $v_2$ and to destroy the edge $e$ and the vertex $v_1$.
This is exactly the contraction rule, so $ \det Q_{G,e,1} = \det Q_{G/e}$. The second term
$\det Q_{G,e,2}$ with the $\alpha_e\omega_e\chi_e$
factor is even easier. We must simply put to 0 all terms involving the $e$ label, hence trivially
$ \det Q_{G,e,2} = \alpha_e \det Q_{G-e}$. Remark that during the contraction step the
weight factor $q_{v_1} \chi_{v_1} \omega_{v_1} $ is just changed into $q_{v_1} \chi_{v_2} \omega_{v_2} $.
That is why we get the new weight $q_{v_1} + q_{v_2}$ for the new vertex $v_2$, which represents
the collapse of the former vertices $v_1$ and $v_2$.
Note that the source terms in ${\cal V}$ do not involve $\chi_e$ and $\omega_e$ variables.
Therefore the argument goes through exactly in the same way for the second polynomials.
The only remark to make is that, like the weights, the flag momenta follow the contraction moves.
The evaluation on terminal forms is easy. For a graph with only vertices and self-loops the matrix $Q_G$
is diagonal, because $\epsilon_{ev}$ is always 0.
Hence ${\cal U}_G$ is the product of the diagonal elements $\prod_e \alpha_e \prod_v q_v$.
The second polynomial can be analyzed through the Grassmann representation, but it is simpler to
use directly (\ref{secondpol}) and the fact that $Q_G$ is diagonal to get (\ref{secondtermin}). This completes
the proof of Theorem \ref{grasstheo}, hence also of Theorem \ref{symantheo}.
{\hfill $\Box$}
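The contraction-deletion rule of Theorem \ref{grasstheo} also gives a simple algorithm to compute ${\cal U}_G$. The following Python sketch (our own illustration, using sympy; the encoding of graphs and all names are ours) implements (\ref{delcontr1}) with the terminal form (\ref{firsttermin}):
\begin{verbatim}
import sympy as sp

def U(edges, q):
    # edges: list of (u, v, alpha_e) triples, self-loops have u == v;
    # q: dict of vertex weights q_v.
    for i, (u, v, a) in enumerate(edges):
        if u != v:                        # semi-regular edge found
            rest = edges[:i] + edges[i + 1:]
            deleted = U(rest, q)          # G - e
            qc = dict(q)                  # G / e: merge v into u,
            qc[u] = qc[u] + qc.pop(v)     # the weights add up
            merged = [(u if x == v else x, u if y == v else y, b)
                      for (x, y, b) in rest]
            return a * deleted + U(merged, qc)
    out = sp.Integer(1)                   # terminal form: only
    for (_, _, a) in edges:               # self-loops and isolated
        out *= a                          # vertices remain
    for w in q.values():
        out *= w
    return out

a1, qu, qv = sp.symbols('alpha_1 q_u q_v')
print(sp.expand(U([('u', 'v', a1)], {'u': qu, 'v': qv})))
# -> alpha_1*q_u*q_v + q_u + q_v
\end{verbatim}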
We turn now to the limit of small regulators $q_v$ to show how for a connected graph $G$
the ordinary amplitude $ \delta (\sum_f p_f) A_G$ and the ordinary polynomials $U_G$ and $V_G$ emerge
out of the leading terms of the regularized amplitude $B_G$
and the generalized polynomials ${\cal U}_G$ and ${\cal V}_G$.
When all $q$'s are sent to zero there is no constant term in ${\cal U}_G$ but a constant
term in ${\cal V}_G$. Up to second order in the $q$ variables we have:
\begin{equation}
{\cal U}_G (\alpha_e, q_v) = q_G\sum_{{\cal T}} \prod_{e \not \in {\cal T}} \alpha_e + O(q^2) ,
\end{equation}
\begin{equation}
{\cal V}_G (\alpha_e, q_v,p_f) =p^2_G\sum_{{\cal T}} \prod_{e \not \in {\cal T}} \alpha_e +
\sum_{{\cal T}_2} (p_{G_1}^2 q_{G_2} + p_{G_2}^2 q_{G_1} ) \prod_{e \not \in {\cal T}_2} \alpha_e
+ O(q^2) ,
\end{equation}
where the sum over ${\cal T}$ runs over the spanning trees of the graph and the sum over ${\cal T}_2$ runs over
the two-trees, i.e.\ the spanning forests with exactly two connected components $G_1$ and $G_2$.
Hence we find
\begin{equation} \frac{ e^{- {\cal V} / {\cal U} } }{{\cal U}^{d/2}} = \frac{e^{- p^2_G / q_G }}{q_G^{d/2}} \frac{e^{ -
\sum_{{\cal T}_2}(p_{G_1}^2 q_{G_2} + p_{G_2}^2 q_{G_1} ) \prod_{e \not \in {\cal T}_2} \alpha_e
/ q_G\sum_{{\cal T}} \prod_{e \not \in {\cal T}} \alpha_e + p^2_G O(1)
+ O(q)} }{ [ \sum_{{\cal T}} \prod_{e \not \in {\cal T}} \alpha_e + O(q) ]^{d/2} } .
\end{equation}
Up to inessential normalization factors the first term tends to $\delta(p_G)$ and the second one tends to $e^{-V/U}/U^{d/2}$
if we use the fact that $\delta(p_G) f(p_G) = \delta(p_G) f(0) $, that is if we
use the delta distribution to cancel the $p^2_G O(1)$ term and to
simplify $(p_{G_1}^2 q_{G_2} + p_{G_2}^2 q_{G_1} ) $
into $q_G p^2_{G_1} = q_G p^2_{G_2} $. This proves (\ref{limBA}).
The $U_G$ and $V_G$ polynomials
are in fact easy to recover simply from the ${\cal U}_G$ polynomial alone:
\begin{theorem} For any connected $G$ and any vertex $v$
\begin{equation}
U_G (\alpha_e) =\frac { \partial }{\partial q_v} {\cal U}_G (\alpha_e, q_v) \,\,\; \vert_{q_{v'} = 0 \ \forall v'} .
\end{equation}
On the submanifold $p_G =0$ we further have
\begin{equation}
V_G (\alpha_e, p_f) = - \frac{1}{2} \sum_{v \ne v'} p_v \cdot p_{v'} \,\,\;
\frac{ \partial^2 }{\partial q_v\partial q_{v'} } {\cal U}_G (\alpha_e, q_v) \,\,\; \vert_{q_{v''} = 0 \ \forall v''} .
\end{equation}
\end{theorem}
\noindent{\bf Proof}\ It is an easy consequence of Theorem \ref{symantheo}.
{\hfill $\Box$}
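As a simple illustration of this theorem, consider again the graph with two vertices $u$, $v$ joined by a single edge of parameter $\alpha$, for which ${\cal U}_G = \alpha q_u q_v + q_u + q_v$. Then $\partial_{q_u} {\cal U}_G \vert_{q=0} = 1 = U_G$, since the only spanning tree is the edge itself and the product over its complement is empty; and $-\frac{1}{2}\sum_{v \ne v'} p_v \cdot p_{v'}\, \partial^2_{q_u q_v} {\cal U}_G \vert_{q=0} = -\alpha\, p_u \cdot p_v$, which on the submanifold $p_u + p_v = 0$ equals $\alpha\, p_u^2$, the second Symanzik polynomial of this graph.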
We can also prove an analog of Proposition \ref{propUV} between
${\cal V}_G$ and ${\cal U}_{G(vv')}$ but only on the submanifold $p_G =0$.
\subsection{Relation to discrete Schr\"odinger Operator}
As an aside, it is worthwhile to notice that there is a relation with discrete Schr\"odinger operators on graphs \cite{schrodinger}. Recall that given a graph $G=(V,E)$, the discrete Laplacian is defined as follows. We first introduce the 0-forms $\Omega_{0}(G)={\mathbb R}^{V}$ as the real functions on the set of vertices and the 1-forms $\Omega_{1}(G)={\mathbb R}^{E}$ as functions on the edges. Then, the discrete differential $\mathrm{d}:\,\Omega_{0}(G)\rightarrow\Omega_{1}(G)$ is defined as
\begin{equation}
\mathrm{d}\psi(e)=\sum_{v}\epsilon_{ev}\,\psi_{v},
\end{equation}
where we recall the convention that for a self-loop $\epsilon_{e v}=0$ and an arbitrary orientation is chosen on the edges. Next, given strictly positive weights $\beta_{e}$ associated to the edges, we define $\mathrm{d}^{\ast}:\,\Omega_{1}(G)\rightarrow\Omega_{0}(G)$ by
\begin{equation}
\mathrm{d}^{\ast}\!\phi(v)=\sum_{e}\beta_{e}\epsilon_{ev}\,\phi_{e}.
\end{equation}
Note that $\mathrm{d}^{\ast}$ is precisely the adjoint of $\mathrm{d}$ for the scalar product on ${\mathbb R}^{E}$ defined by the weights $\beta_{e}$ and the Euclidean one on ${\mathbb R}^{V}$. Accordingly, the 0-form Laplacian $\Delta:\,\Omega_{0}(G)\rightarrow\Omega_{0}(G)$ is
\begin{equation}
\Delta=\mathrm{d}^{\ast}\mathrm{d},
\end{equation}
or, in terms of its action on functions $\psi\in{\mathbb R}^{V}$,
\begin{equation}
\Delta\psi(v')=\sum_{e,v}\beta_{e}\epsilon_{ev'}\epsilon_{ev}\,\psi_{v}.
\end{equation}
Note that there is exactly one zero mode per connected component, as follows from the equivalence between $\Delta\psi=0$ and $\mathrm{d}\psi=0$. Finally, the weights $q_{v}$ associated to the vertices\footnote{Strictly speaking, the latter are associated to the flags and $q_{v}$ is the sum of the weights of the flags attached to $v$.} define a function $V$ from the vertices to ${\mathbb R}$ acting multiplicatively on $\Omega_{0}(G)$, so that we define the discrete Schr\"odinger operator (Hamiltonian in the quantum mechanics language) on the graph by
\begin{equation}
H=\Delta+V,
\end{equation}
with the sign convention that the positive operator $\Delta=\mathrm{d}^{\ast}\mathrm{d}$ is the discrete analog of $-\nabla^{2}$.
Turning back to the parametric representation, if we perform the Gau\ss ian integration over the momenta we are left with
\begin{equation}
\frac{\pi^{dE/2}}{(\alpha_{1}\cdots\alpha_{E})^{d/2}}\int {\textstyle \prod_{v}dx_{v}}\,\mathrm{e}^{-\sum_{v,v'}x_{v}H_{v,v'}x_{v'}+2\mathrm{i}\sum_{v}x_{v}\cdot p_{v}},
\end{equation}
with weights $\beta_{e}=\frac{1}{\alpha_{e}}$. In particular, the first Symanzik polynomial with regulators $q_{v}$ is expressed in terms of the determinant of $H$,
\begin{equation}
{\cal U}_{G}(\alpha,q)=\left({\textstyle \prod_{e}\alpha_{e}}\right)\,\det H=
\left({\textstyle \prod_{e}\alpha_{e}}\right)\int {\textstyle \prod_{v}d\,\overline{\psi}_{v}d\psi_{v}}\,
\mathrm{e}^{-\sum_{v,v'}\overline{\psi}_{v}H_{v,v'}\psi_{v'}},
\end{equation}
with $\overline{\psi}_{v},\psi_{v}$ Grassmann variables. By the same token, the ratio appearing in the Feynman amplitude is expressed in terms of its inverse $G$ (Green's function in the quantum mechanics language),
\begin{equation}
\frac{{\cal V}_{G}(\alpha,q,p)}{{\cal U}_{G}(\alpha,q)}=
\sum_{v,v'}G_{v,v'}\,p_{v}\cdot p_{v'},
\end{equation}
where the Green's function can also be expressed using Grassmann integrals. As a byproduct, it turns out that it can also be computed by contraction/deletion.
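As an illustration of this correspondence (a numerical sketch of ours, in Python, on the ``bubble'' graph with two vertices $u,v$ joined by two edges), one can check ${\cal U}_{G}=(\prod_{e}\alpha_{e})\det H$ directly:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
a1, a2, qu, qv = rng.uniform(0.5, 2.0, size=4)
b1, b2 = 1.0 / a1, 1.0 / a2          # edge weights beta_e = 1/alpha_e

# H = Delta + V on the two-vertex bubble graph.
H = np.array([[b1 + b2 + qu, -(b1 + b2)],
              [-(b1 + b2),   b1 + b2 + qv]])

# Forest expansion: {} -> a1 a2 qu qv ; {e1} -> a2 (qu+qv) ;
# {e2} -> a1 (qu+qv).
lhs = a1 * a2 * np.linalg.det(H)
rhs = (a1 + a2) * (qu + qv) + a1 * a2 * qu * qv
assert np.isclose(lhs, rhs)
\end{verbatim}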
\subsection{Categorified Polynomials}
We have up to now considered two seemingly unrelated graph polynomials obeying contraction/deletion rules, the multivariate Tutte polynomial $Z_{G}(\beta_{e},q)$ and ${\cal U}_{G}(\alpha_{e},q_{i})$, from which the Symanzik polynomials can be recovered by various truncations. Therefore, it is natural to wonder whether there is a single graph polynomial, obeying contraction/deletion rules too, from which both $Z_{G}(\beta_{e},q)$ and ${\cal U}_{G}(\alpha_{e},q_{i})$ can be recovered. In this subsection, for simplicity, we shall consider only
the first Symanzik polynomial, and the flags considered here no longer carry external momenta,
but an abstract index.
Such a polynomial is an invariant of graphs with flags, i.e. labeled half-edges attached to the vertices.
In order to make the contraction possible, it is necessary to allow each vertex to have several flags, all carrying distinct labels. The requested polynomial, ${\cal W}_{G}(\beta_{e},q_{I})$ depends on edge variables $\beta_{e}$ as well as on independent variables $q_{I}$ for each non empty subset $I$ of the set of labels of the flags, with the proviso that, for each vertex, the subsets $I$ contain all the flags attached to the vertex or none of them. Thus, for a diagram with $V'$ vertices carrying flags there are $2^{V'}-1$ variables $q_{I}$.
\begin{definition}
For a graph $G$ with flags, ${\cal W}_{G}(\beta_{e},q_{I})$ is defined by the expansion
\begin{equation}
{\cal W}_{G}(\beta_{e},q_{I})=\sum_{A\subset E}\,\Big(\prod_{e\in A}\beta_{e}
\hspace{-0.5cm}\prod_{{\cal C}_{n}\atop
\mbox{\tiny connected components}}\hspace{-0.5cm}q_{I_{n}}\Big),
\end{equation}
where $I_{n}$ are the sets of flags attached to the vertices of the connected component ${\cal C}_{n}$ of the spanning graph $(V,A)$.
\end{definition}
For example, for the bubble graph on two vertices, with
two edges between these vertices, flags $1,2$ attached to one of the
vertices, and flag $3$ attached to the other one, we have
\begin{equation}
{\cal W}_{G}(\beta_{e},q_{I})=(\beta_{1}\beta_{2}+\beta_{1}+\beta_{2})q_{123}
+q_{12}q_{3}.
\end{equation}
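The subset expansion defining ${\cal W}_{G}$ is straightforward to implement. The following Python sketch (our own illustration; it assumes every connected component carries at least one flag, and all names are ours) reproduces the bubble example above:
\begin{verbatim}
from itertools import combinations
import sympy as sp

def W(vertices, edges, flags):
    # vertices: list of names; edges: list of (u, v) pairs;
    # flags: dict vertex -> list of flag labels.
    poly = sp.Integer(0)
    for r in range(len(edges) + 1):
        for A in combinations(range(len(edges)), r):
            parent = {v: v for v in vertices}    # union-find on (V, A)
            def find(x):
                while parent[x] != x:
                    x = parent[x]
                return x
            for i in A:
                u, v = edges[i]
                parent[find(u)] = find(v)
            comps = {}                           # root -> flag labels
            for v in vertices:
                comps.setdefault(find(v), []).extend(flags.get(v, []))
            term = sp.Integer(1)
            for i in A:
                term *= sp.Symbol('beta_%d' % (i + 1))
            for labels in comps.values():
                term *= sp.Symbol('q_' + ''.join(map(str, sorted(labels))))
            poly += term
    return sp.expand(poly)

print(W(['a', 'b'], [('a', 'b'), ('a', 'b')], {'a': [1, 2], 'b': [3]}))
# -> beta_1*beta_2*q_123 + beta_1*q_123 + beta_2*q_123 + q_12*q_3
\end{verbatim}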
Since the variables $q_{I}$ are defined using the flags, the contraction/deletion rule for ${\cal W}_{G}(\beta_{e},q_{I})$ requires us to properly define how the flags follow the contraction/deletion rule for any edge of $G-e$ and $G/e$. Because the vertices and the flags of $G-e$ are left unchanged, the same variables $q_{I}$ appear in $G$ and $G-e$. For $G/e$, we restrict the $q_{I}$ to those associated with subsets that contain either all the flags attached to the two vertices merged by the contraction of $e$, or none of them. This is best formulated using flags: the new vertex simply carries the flags of the two vertices that have been merged. Then, the contraction/deletion identity simply follows from grouping the terms in ${\cal W}_{G}(\beta_{e},q_{I})$ that contain $\beta_{e}$ and those that do not.
\begin{proposition}
The polynomial ${\cal W}_{G}(\beta_{e},q_{I})$ obeys the contraction/deletion rule for any edge
\begin{equation}
{\cal W}_{G}(\beta_{e},q_{I})=\beta_{e}{\cal W}_{G/e}(\beta_{e'\neq e},q_{I}|_{G/e})+{\cal W}_{G-e}(\beta_{e'\neq e},q_{I}).
\end{equation}
\end{proposition}
The multivariate Tutte polynomial is easily recovered by setting $q_{I}=q$ for any $I$,
\begin{equation}
Z_{G}(\beta_{e},q)={\cal W}_{G}(\beta_{e},q_{I}\!=\!q).
\end{equation}
In this case, all the information about the flags is erased, so that the latter may be omitted. To recover ${\cal U}_{G}(\alpha_{e},q_{i})$, it is convenient to introduce as an intermediate step the polynomial
\begin{equation}
{\Upsilon}_{G}(\alpha_{e},q_{i})=\sum_{A\subset E}\,\prod_{e\notin A}\alpha_{e}\prod_{{\cal C}_{n}}\Big(\sum_{i\in I_{n}}q_{i}\Big),
\end{equation}
where as before $I_{n}$ are the flags included in the connected component ${\cal C}_{n}$ of the spanning graph $(V,A)$. By its very definition, ${\Upsilon}_{G}(\alpha_{e},q_{i})$ is related to ${\cal W}_{G}(\beta_{e},q_{I})$ by setting $q_{I}=\sum_{i\in I}q_{i}$,
\begin{equation}
{\Upsilon}_{G}(\alpha_{e},q_{i})=
\Big({\prod_{e}\alpha_{e}}\Big)\,{\cal W}_{G}(\beta_{e}\!=\!1/\alpha_{e},q_{I}\!=\!{\textstyle \sum_{i\in I}}q_{i}).
\end{equation}
Then, the polynomial ${\cal U}_{G}(\alpha_{e},q_{i})$ is obtained from ${\Upsilon}_{G}(\alpha_{e},q_{i})$ by keeping only the highest degree terms in the $\alpha_{e}$'s for each term in $\prod_{{\cal C}_{n}}\sum_{i\in I_{n}}q_{i}$. Indeed, ${\cal U}_{G}(\alpha_{e},q_{i})$ is obtained from ${\Upsilon}_{G}(\alpha_{e},q_{i})$ by truncating its expansion to those subsets $A\subset E$ that are spanning forests, i.e.\ that obey $0=|A|-V+k(A)$. Since the number of connected components $k(A)$ is fixed by the global degree in the $q_{i}$'s, the forests are obtained with $|A|$ minimal, so that the global degree in the $\alpha_{e}$'s must be maximal. Note that a truncation to the spanning forests may also be performed at the level of the multivariate Tutte polynomial by restricting, at fixed degree in $q$, to the terms of minimal degree in the $\beta_{e}$'s. This yields an expansion over spanning forests \cite{Sokal1} (see also \cite{Sokal2}):
\begin{equation}
F_{G}(\beta_{e},q)=\sum_{A\subset E\atop \mbox{\tiny spanning forest}}\Big(\prod_{e\in A}\beta_{e}\Big)\, q^{k(A)}.
\end{equation}
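For the bubble graph of the previous example these operations read concretely as follows. Substituting $q_{123}=q_1+q_2+q_3$, $q_{12}=q_1+q_2$, $q_{3}=q_3$ and multiplying by $\alpha_1\alpha_2$ gives
\begin{equation}
{\Upsilon}_{G}=(1+\alpha_1+\alpha_2)(q_1+q_2+q_3)+\alpha_1\alpha_2 (q_1+q_2)q_3 ,
\end{equation}
and keeping the highest degree in the $\alpha_e$'s at fixed $q$-structure yields
\begin{equation}
{\cal U}_{G}=(\alpha_1+\alpha_2)(q_1+q_2+q_3)+\alpha_1\alpha_2 (q_1+q_2)q_3 ,
\end{equation}
which is indeed the forest expansion of Theorem \ref{symantheo}, the forests being $\emptyset$, $\{e_1\}$ and $\{e_2\}$.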
This, as well as the relation to the Symanzik polynomial, is conveniently summarized by the following diagram.
\begin{proposition}
The previous polynomials may be obtained from ${\cal W}_{G}(\beta_{e},q_{I})$ by the following series of substitutions and truncations,
\begin{equation}
\xymatrix{
&{\Upsilon}_{G}(\alpha_{e},q_{i})\ar[r]^{\mbox{\tiny highest order}\atop\mbox{\tiny in the }\,\alpha_{e}}&{\cal U}_{G}(\alpha_{e},q_{i})\ar[dr]^{\mbox{\tiny\quad term in}\,\sum_{i}q_{i}}&\\
{\cal W}_{G}(\beta_{e},q_{I})\ar[ur]^{q_{I}=\sum_{i\in I}q_{i}\atop \mbox{\tiny \!\!multiplication by}\,\prod_{e}\!\alpha_{e}\quad\quad}\ar[dr]_{q_{I}=q}\quad&&&
U_{G}(\alpha_{e})\\
&
Z_{G}(\beta_{e},q)\ar[r]^{\mbox{\tiny lowest order}\atop\mbox{\tiny in the }\,\beta_{e}}&\ar[ur]_{\mbox{\tiny \quad term in}\,q\atop \mbox{\tiny multiplication by}\,\prod_{e}\!\alpha_{e}}{F}_{G}(\beta_{e},q)&
}
\end{equation}
where $\alpha_{e}=1/\beta_{e}$.
\end{proposition}
Alternatively, the polynomial ${\cal W}_{G}(\beta_{e},q_{I})$ can be seen as an extension of the polynomial ${W}_{G}(\xi_{a},y)$ introduced by Noble and Welsh in \cite{Wpolynomial}.
\begin{definition}
For a graph with weights $\omega_{v}\in{\mathbb N}^{\ast}$ assigned to the vertices, the $W$ polynomial is defined as
\begin{equation}
{W}_{G}(\xi_{a},y)=\sum_{A\subset E}\hspace {0.5cm}(y-1)^{|A|-r(A)}\hspace{-1cm}\prod_{{\cal C}_{1},\dots,{\cal C}_{k(A)}\atop\mbox{\tiny connected components of} \, (V,A)}\hspace{-1cm}\xi_{a_{n}}
\end{equation}
with $a_{n}=\sum_{v\in {\cal C}_{n}}\omega_{v}$ the sum of the weights of the vertices in the connected component ${\cal C}_{n}$.
\end{definition}
This polynomial also obeys the contraction/deletion rule if we add the weights of the two vertices that are merged after the contraction of an edge. Alternatively, weights may be assigned to the flags, with the convention that the weight of a vertex is the sum of the weights of the flags attached to it. Then, ${W}_{G}(\xi_{a},y)$ is naturally extended to diagrams with flags and results from a simple substitution in ${\cal W}_{G}(\beta_{e},q_{I})$.
\begin{proposition}
For a graph with weights $\omega_{i}\in{\mathbb N}^{\ast}$ assigned to the flags,
\begin{equation}
{W}_{G}(\xi_{a},y)=(y-1)^{-|V|}{\cal W}_{G}\big(\beta_{e}\!=\!y\!-\!1,q_{I}\!=\!(y-\!\!1)\xi_{a_{I}}\big),
\end{equation}
with $a_{I}=\sum_{i\in I}\omega_{i}$ the sum of the weights of the flags in $I$.
\end{proposition}
The polynomial $W_{G}(\xi_{a},y)$ only encodes the sum of the weights of the flags in each connected component and erases the information about their labels. In particular, if we weight each flag by $\omega_{i}=1$, then the expansion of $W$ only counts the number of flags per component, whereas that of ${\cal W}_{G}(\beta_{e},q_{I})$ keeps track of the associated set of labels. In a more sophisticated language, the latter may be considered as the simplest categorification of the former: integers, understood as finite sets up to isomorphisms, have been replaced by the category of finite sets.
\subsection{Symanzik Polynomials through the tree matrix theorem in $x$-space}
In this section we provide a sketch of a more standard proof of Theorem \ref{theosym} through the
$x$ space representation and the tree matrix theorem.
The reason we include it here is for completeness and because we have not been able to find it in the
existing literature, in which the same computation is usually performed through the
Binet-Cauchy theorem.
The $V \times V$ matrix $Q_G(\alpha)$ analog in this case of (\ref{mainformq}) is defined as
\begin{equation}
[Q_G(\alpha)]_{v,v'}=\sum_e \epsilon_{ev}\frac{1}{\alpha_e}\epsilon_{ev'}.
\end{equation}
It has vanishing sum over rows (or columns):
\begin{equation} \sum_{v'} [Q_G(\alpha)]_{v,v'}=\sum_{v'} \sum_e\epsilon_{ev}\frac{1}{\alpha_e}\epsilon_{ev'} = 0 .
\end{equation}
Therefore, by the tree matrix theorem, the determinant
of the $(V-1)\times(V-1)$ matrix $Q_G(\alpha)$, defined as its principal minor
with the row and column of the root vertex number $V$ deleted, is:
\begin{equation}\Delta_G(\alpha)=\det[Q_G(\alpha)]=
\sum_{{\cal T}}\prod_{e\in{\cal T}}\frac{1}{\alpha_e}
\end{equation} where the sum is over all spanning trees of $G$. Since every spanning tree of $G$ has
$V-1$ edges, $\Delta_G$ is clearly a homogeneous polynomial in the
$\alpha_e^{-1}$. For $\alpha>0$, $\Delta$ is positive. The remaining
$V-1$ position variables may then be integrated over and the result
is
\begin{equation}
A_{G}(p)=\int_0^\infty\prod_e(d\alpha_e e^{-\alpha_e
m^2})\frac{\exp\{ - p_{v}
[Q^{-1}_G(\alpha)]_{v,v'}p_{v'}\} }{[\alpha_1...\alpha_E \Delta_G(\alpha)]^{d/2}} .
\end{equation}
This formula expresses $A_G(p)$ as a function of the invariant
scalar product of external momenta $p_{v} \cdot p_{v'}$.
The denominator
\begin{equation}
U_G(\alpha)\equiv
\alpha_1...\alpha_E\Delta_G(\alpha)=\sum_{{\cal T}}\prod_{e\not\in{\cal T}}\alpha_e
\end{equation}
is a homogeneous polynomial of degree $E-V+1$. This gives an alternative
proof of (\ref{paramet}). The second Symanzik polynomial can also be obtained
through this method and the corresponding computation is left to the reader.
Of course harmonic regulators can also be included
if one wants to avoid the noncanonical choice of a root, but the Pfaffian structure
of the phase space representation is lost. Also this $x$-space method does not generalize easily
to noncommutative field theory to which we now turn our attention.
\section{Bollob\'as-Riordan Polynomials}
\label{briordan}
\subsection{Ribbon graphs}
A ribbon graph $G=(V,E)$ is an orientable surface with
boundary represented as the union of $V$ closed disks, also called
vertices, and $E$ ribbons also called edges, such that:
\begin{itemize}
\item the disks and the ribbons intersect in disjoint line
segments,
\item each such line segment lies on the boundary of precisely one
disk and one ribbon,
\item every ribbon contains two such line segments.
\end{itemize}
So one can think of a ribbon graph as consisting of
disks (vertices) attached to each other by thin stripes (edges) glued to
their boundaries (see Figures \ref{figpla}-\ref{figdual}). For any such ribbon graph $G$ there is an underlying
ordinary graph $\bar G$ obtained by collapsing the disks to points and the ribbons to edges.
Two ribbon graphs are isomorphic if there is a
homeomorphism from one to the other mapping vertices to vertices and
edges to edges. A ribbon graph is a graph with a fixed cyclic
ordering of the incident half-edges at each of its vertices.
A face of a ribbon graph is a connected component of its boundary as a surface.
If we glue a disk along the boundary of each face we obtain a closed Riemann surface
whose genus is also called the genus of the graph.
The ribbon graph is called planar if that Riemann surface has genus zero.
Generalized ribbon graphs that also incorporate M\"obius strips and correspond to nonorientable surfaces
can be defined, but they will not be considered in this paper.
There is a duality on ribbon graphs which preserves the genus but exchanges
faces and vertices, keeping the number of edges fixed. It simply considers the disks glued along faces
as the vertices of a dual graph and changes the ends of each ribbon into borders of the dual ribbon.
Extended categories of ribbon graphs with flags can be defined. Flags can be represented
as ribbons bordered by dotted lines to distinguish them from ordinary edges (see Figures
\ref{figpla}-\ref{figdual}).
Beware that the cyclic ordering of flags and half-edges at each vertex is very important
and must be respected under isomorphisms. The genus of an extended graph is defined as the genus of the graph obtained by removing the flags and closing the corresponding segments on their vertices. The number of broken faces is the number of faces which do contain at least one flag. It is an important
notion in noncommutative field theory.
We define for any ribbon graph
\begin{itemize}
\item $V(G)$, the number of vertices,
\item $E(G)$, the number of edges,
\item $k(G)$, the number of connected components,
\item $r(G)=V(G)-k(G)$, the rank of $G$,
\item $n(G)=E(G)-r(G)$, the nullity of $G$,
\item $bc(G)= F(G)$, the number of components
of the boundary of $G$\footnote{This is the number of \emph{faces} of $G$ when $G$ is connected.},
\item $g(G)= k - (V - E + bc)/2$, the genus of the graph,
\item $f(G)$, the number of flags of the graph.
\end{itemize}
A graph with a single vertex, hence with $V=1$, is called a \emph{rosette}.
A subgraph $H$ of a ribbon graph $G$ is a subset of the edges of $G$.
The Bollob\'as-Riordan polynomial, which is a generalization of the Tutte
polynomial, is a polynomial invariant of ribbon graphs which incorporates
new topological information specific to them, such as
the genus and the number of ``broken'' or ``external'' faces.
\begin{figure}
\begin{center}
\includegraphics[scale=0.9,angle=-90]{f9r.pdf}
\caption{A planar ribbon graph with $V=E=1$, $bc=2$ and two flags.}\label{figpla}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=1]{a22a.jpg} \hskip2cm \includegraphics[scale=1]{a22a-dual.jpg}
\caption{A non-planar ribbon graph with $V=2$, $E=3$, $bc=1$, $g=1$, $f=2$,
and its dual graph with $V=1$, $E=3$, $bc=2$, $g=1$, $f=2$.}\label{figdual}
\end{center}
\end{figure}
\subsection{Bollob\'as-Riordan Polynomial}
\begin{definition}[Global definition]
The Bollob\'as-Riordan polynomial is defined by:
\begin{eqnarray}
R_G= R_G(x,y,z)
=\sum_{H\subset
G}(x-1)^{r(G)-r(H)}y^{n(H)}z^{k(H)-bc(H)+n(H)}.
\end{eqnarray}
\end{definition}
The relation to the Tutte polynomial for the underlying graph $\bar G$
is $R_{G} (x-1, y-1 ,1) = T_{\bar G} (x,y)$.
Remark also that if $G$ is planar we have $R_G (x-1,y-1,z) = T_{\bar G} (x,y)$.
When $H$ is a spanning graph of $G$, we have
$k(H)-k(G)=r(G)-r(H)$. So we can rewrite the $R$ polynomial as:
\begin{equation}
R_G= (x-1)^{-k(G)}\sum_{H\subset G} M(H),
\end{equation}
where
\begin{equation}\label{defbrior}
M(H)=(x-1)^{k(H)}y^{n(H)}z^{k(H)-bc(H)+n(H)}
\end{equation}
so that $M(H)$ depends only on $H$ but not on $G$.
\subsection{Deletion/contraction}\label{delcontractribbon}
The deletion and contraction of edges in a ribbon graph are defined quite naturally:
the deletion removes the edge and closes the two scars at its ends; the contraction
of a semi-regular edge
creates a new disk out of the two disks at both ends of the ribbon with a new
boundary which is the union of the boundaries of the two disks and of the ribbon (see
Figure \ref{fig:contribbon}).
An interesting property is that deletion and contraction of edges are exchanged in the dual graph.
The deletion of a self-loop is standard. However the natural contraction of a self-loop creates
a surface with a new border. Iterating, we may get surfaces of arbitrary genus
with an arbitrary number of disks removed, a category also called
disk-punctured surfaces.
The ribbons can now join any puncture to any other.
For instance the contraction of the self-loop on the graph $G_1$ of Figure \ref{cyli} leads to a cylinder
i.e.\ to a single vertex which is a sphere with two disks removed.
The contraction of the two self-loops in graph $G_2$ of Figure \ref{torus} corresponds to the
cylinder with a ribbon gluing
the two ends, hence to a torus.
\begin{figure}
\begin{center}
\includegraphics[scale=0.5,angle=-90]{cylinder.jpg}
\caption{Contraction of the single self-loop $G_1$.}\label{cyli}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.5,angle=-90]{torus.pdf}
\caption{Contraction of the two self-loops of the non-planar graph $G_2$.}\label{torus}
\end{center}
\end{figure}
Deletion and contraction defined in this extended category of graphs
can be iterated until the graph has no longer any edge,
i.e.\ is a collection of disk-punctured Riemann surfaces.
These punctured Riemann surfaces are very
natural objects both in the context of string theory
and in NCQFT. However we do not consider them in this paper.
In this paper we remain in the category of ordinary ribbon graphs with
disk-like vertices. The contraction/deletion of semi-regular edges
leads to rosettes as terminal forms. To treat them
we introduce the notion of \emph{double contraction}
on \emph{nice crossings}.
Nice crossings were introduced in \cite{GurauRiv}:
\begin{definition}
A nice crossing pair of edges in a rosette is a pair of crossing edges $e_1$ and $e_2$
which are adjacent on the
cycle of the rosette. Adjacency means that one end of $e_1$ is consecutive with
an end of $e_2$ (see Figure \ref{filk3}).
\end{definition}
It is proved in \cite{GurauRiv} that any rosette ${\cal R}$ of genus $g>0$ contains at least one
nice crossing.
The double contraction of such a nice crossing pair consists
in deleting $e_1$ and $e_2$ and interchanging
the half-edges encompassed by $e_1$ with the ones encompassed by $e_2$,
see Figure \ref{filk3}. This \emph{double contraction}
was defined in \cite{GurauRiv}
under the name of ``3rd Filk move''. It decreases the genus by one and
the number of edges by 2.
\begin{figure}
\begin{center}
\includegraphics[scale=0.85]{figraz-11.pdf}
\caption{When deleting the two edges of a nice crossing pair on some contracted vertex,
one also needs to interchange the half-edges encompassed
by the first edge with those encompassed by the second one. Beware
that the horizontal line in this picture is a part of the rosette cycle.}\label{filk3}
\end{center}
\end{figure}
In the next section iterating this double contraction until we reach
planarity allows us to compute the $U^\star $ Symanzik polynomial
by remaining in the category of ordinary ribbon graphs.
\begin{figure}
\begin{center}
\includegraphics[scale=0.7,angle=0]{ribbon.pdf}
\caption{The contraction-deletion for a ribbon graph.}\label{fig:contribbon}
\end{center}
\end{figure}
\begin{theorem}[Bollob\'as-Riordan polynomial, contraction/deletion]
\label{delcontrbollo}
\begin{equation}
R_G=R_{G/e}+R_{G-e}
\end{equation}
for every ribbon graph $G$ and any regular edge $e$ of $G$ and
\begin{equation}
R_G=x R_{G/e}
\end{equation}
for every bridge of $G$.
\end{theorem}
Therefore the $R$ polynomial satisfies contraction-deletion relations
like the Tutte polynomial. However, to complete its definition
we also need to define the $R$ polynomial for single vertex graphs,
namely the rosettes; it can be read off from (\ref{defbrior}).
For such a rosette ${\cal R}$, $k({\cal R})= V({\cal R}) = k(H) = V(H) =1$, so that the
R polynomial does not depend on $x$ and
\begin{equation}
R_{\cal R} (y,z)= \sum_{H \subset {\cal R}} y^{E(H)} z^{2g(H)}.
\end{equation}
For $z=1$ we recover $R_{\cal R} (y-1,1) = y^{E({\cal R})}$.
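As a simple illustration, consider the genus-one rosette made of a single vertex and two interleaved self-loops $e_1$ and $e_2$. The subsets $H$ contribute $1$ (for $H=\emptyset$), $y$ (for each single self-loop, of genus $0$) and $y^2 z^2$ (for $H=\{e_1,e_2\}$, of genus $1$), so that $R_{\cal R}(y,z) = 1 + 2y + y^2 z^2$, and indeed $R_{\cal R}(y-1,1) = 1 + 2(y-1) + (y-1)^2 = y^2 = y^{E({\cal R})}$.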
\subsection{The multivariate Bollob\'as-Riordan polynomial}
As in the case of the Tutte polynomial, we can generalize the
Bollob\'as-Riordan polynomial to a multivariate case. As before, we
associate to each edge $e$ a variable $\beta_e$.
\begin{definition}
The multivariate Bollob\'as-Riordan polynomial of a
ribbon graph, the analog of the multivariate polynomial (\ref{multivartut}), is:
\begin{equation}
Z_G(x,\{\beta_e\},z)=\sum_{H\subset G} x^{k(H)}(\prod_{e\in H} \beta_{e}) \, z^{bc(H)} .
\end{equation}
\end{definition}
It again obeys a deletion/contraction relation
similar to Theorem \ref{delcontrbollo} for any semi-regular edge.
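For concreteness, this subset expansion can also be implemented directly. The following Python sketch (our own illustration; the encoding is ours: half-edges $2e$ and $2e+1$ belong to edge $e$, and vertex rotations are given as tuples) counts $bc(H)$ by tracing the boundary orbits, and reproduces the genus-one rosette made of two interleaved self-loops:
\begin{verbatim}
from itertools import combinations
import sympy as sp

def Z(rot, E):
    x, z = sp.symbols('x z')
    hv = {h: v for v, cyc in enumerate(rot) for h in cyc}
    poly = sp.Integer(0)
    for r in range(E + 1):
        for A in combinations(range(E), r):
            kept = {h for e in A for h in (2 * e, 2 * e + 1)}
            def nxt(h):                  # next kept half-edge at the
                cyc = rot[hv[h]]         # same vertex, in cyclic order
                i = cyc.index(h)
                while True:
                    i = (i + 1) % len(cyc)
                    if cyc[i] in kept:
                        return cyc[i]
            seen, bc = set(), 0          # boundary components: orbits
            for h in kept:               # of h -> nxt(partner of h)
                if h not in seen:
                    bc += 1
                    cur = h
                    while cur not in seen:
                        seen.add(cur)
                        cur = nxt(cur ^ 1)
            bc += sum(1 for cyc in rot
                      if not any(h in kept for h in cyc))
            parent = list(range(len(rot)))   # k(H) via union-find
            def find(a):
                while parent[a] != a:
                    a = parent[a]
                return a
            for e in A:
                parent[find(hv[2 * e])] = find(hv[2 * e + 1])
            k = len({find(v) for v in range(len(rot))})
            term = x**k * z**bc
            for e in A:
                term *= sp.Symbol('beta_%d' % (e + 1))
            poly += term
    return sp.expand(poly)

print(Z([(0, 2, 1, 3)], 2))
# -> beta_1*beta_2*x*z + beta_1*x*z**2 + beta_2*x*z**2 + x*z
\end{verbatim}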
\section{Translation-invariant NCQFT}
\subsection{Motivation}
Noncommutative quantum field theory, hereafter called NCQFT, has a long history.
Schr\"odinger, Heisenberg \cite{Schro} and Yang \cite{Yang} tried to extend the
noncommutativity of phase space to ordinary space.
Building on their ideas Snyder \cite{Snyder} formulated
quantum field theory on such noncommutative space in the hope that it
might behave better than ordinary QFT in the ultraviolet regime.
Right from the start another motivation to study noncommutative quantum field theory
came from the study of particles in strong magnetic fields. It was recognized early on
that nonzero commutators occur for the coordinates of the centers
of motion of such quantum particles, so that noncommutative geometry
of the Moyal type should be the proper setting for many-body quantum physics in strong external fields.
This includes in condensed matter the quantum Hall effect
(see the contribution of Polychronakos in \cite{QS}), or other strong field situations.
Another motivation comes from particle physics.
After initial work by Dubois-Violette, Kerner and Madore,
Connes, Lott, Chamseddine and others have forcefully advocated
that the \emph{classical} Lagrangian of the current standard
model arises naturally on a simple noncommutative geometry. For a review
see Alain Connes's contribution in \cite{QS} and references therein.
Still another motivation came from the search for new regularizations of non-Abelian gauge theories
that may throw light on their difficult mathematical structure. After
't~Hooft proposed the large $N$ limit of matrix theory, in which planar graphs dominate, as relevant
to the subject \cite{Hoo}, the Eguchi-Kawai model was an important attempt at an explicit solution.
These ideas have been revived in connection with
the ultraviolet behavior of NCQFT on the Moyal-Weyl geometry, which also
leads to the domination of planar graphs. Seiberg and Witten proposed in \cite{Seiberg1999vs} a mapping between ordinary and noncommutative gauge fields which does not preserve the gauge groups but preserves the gauge equivalence classes.
The interest in noncommutative geometry also stems from string theory.
Open string field theory may be recast as a problem of noncommutative
multiplication of string states \cite{Witten}.
It was realized in the late 90's that NCQFT is an effective theory of strings \cite{CDS}.
Roughly this is because in addition to the symmetric tensor $g_{\mu\nu}$ the spectrum
of the closed string also contains an antisymmetric tensor $B_{\mu\nu}$. There is no reason
for this antisymmetric tensor not to freeze at some lower scale into a classical field,
inducing an effective noncommutative geometry of the Moyal type.
There might therefore be some intermediate regime
between QFT and string theory where NCQFT is the relevant formalism. The ribbon graphs
of NCQFT may be interpreted either as ``thicker particle world-lines" or as ``simplified open strings world-sheets" in which only the ends of strings appear but not yet their internal
oscillations.
\subsection{Scalar models on the Moyal space}
The noncommutative Moyal space is defined in even dimension $d$ by
\begin{eqnarray}
[x^\mu, x^\nu]_\star=\imath \Theta^{\mu \nu},
\end{eqnarray}
where $\Theta$
is an antisymmetric $d\times d$ block-diagonal matrix made of $d/2$ blocks:
\begin{eqnarray}
\label{theta}
\begin{pmatrix}
0 &\theta \\
-\theta & 0
\end{pmatrix}
\end{eqnarray}
and we have denoted by $\star$ the Moyal-Weyl product
\begin{eqnarray}
\label{moyal-product}
(f\star g)(x)=\int \frac{d^{d}k}{(2\pi)^{d}}d^{d}y\, f(x+{\textstyle\frac 12}\Theta\cdot
k)g(x+y)e^{\imath k\cdot y} .
\end{eqnarray}
Note that in the limit $\theta\to 0$ this product becomes the ordinary commutative product of functions.
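One also checks from (\ref{moyal-product}) that on plane waves
\begin{equation}
e^{\imath p\cdot x}\star e^{\imath q\cdot x}= e^{-\frac{\imath}{2}\, p \Theta q}\; e^{\imath (p+q)\cdot x},
\end{equation}
which is the origin of the oscillating vertex factors encountered below.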
\subsubsection{The ``naive'' model}
\label{naive}
The simplest field theory on this space consists
in replacing the ordinary commutative local product of fields
by the Moyal-Weyl product
\begin{eqnarray}
\label{act-normala}
S[\phi]=\int d^d x (\frac 12 \partial_\mu \phi \star \partial^\mu \phi +\frac
12 \mu^2 \phi\star \phi + \frac{\lambda}{4} \phi \star \phi \star \phi \star \phi).
\end{eqnarray}
In momentum space the action (\ref{act-normala}) writes
\begin{eqnarray}
\label{act-normala-p}
S[\phi]=\int d^d p (\frac 12 p_\mu \phi p^\mu \phi +\frac
12 \mu^2 \phi \phi + V (\phi,\theta)).
\end{eqnarray}
where $V(\phi,\theta)$ is the corresponding potential.
An important consequence of the use of the non-local product $\star$ is that the interaction part
no longer preserves the invariance under permutation of external fields. This invariance is
restricted to cyclic permutations. Furthermore, there exists a basis - the matrix base - of the Moyal algebra where the Moyal-Weyl product takes the form of an ordinary (infinite) matrix product. For these reasons
the associated Feynman graphs are ribbon graphs, that is propagators should be drawn as ribbons.
In \cite{filk} several contractions on such a Feynman
graph were defined. In particular the ``first Filk move"
is the contraction introduced in subsection \ref{delcontractribbon}.
Repeating this operation for the $V-1$ edges of a spanning tree, one obtains a {\it rosette} (see Figure \ref{roz}).
\begin{figure}
\begin{center}
\includegraphics[scale=0.9]{rozeta.pdf}
\end{center}
\caption{An example of a rosette with two flags.
The crossings of edges $k_1$ and $k_2$ indicate the non trivial genus (here $g=1$).}\label{roz}
\end{figure}
Note that the number of faces or the genus of the graph does not change
under contraction. There is no crossing between edges for a planar rosette. The example of Figure \ref{roz} corresponds thus to a non-planar graph (one has crossings between the edges $k_1$ and $k_2$). This pair is called a {\it nice crossing} pair.
The notions expressed in the previous section (namely the Green and Schwinger functions or the perturbation theory concepts) remain the same as in QFT. Usual Feynman graphs are simply replaced
by ribbon Feynman graphs.
Recall that this ``naive model'' \eqref{act-normala} is not renormalizable
in $d=4$. This is due to a new type of non-local divergence at the level of the $2$-point
function, the UV/IR mixing \cite{MinSei}.
\subsubsection{A translation-invariant renormalizable scalar model}
\label{GMRT}
In order to restore renormalizability in $d=4$, the propagator can be modified in the following way
\cite{noi}
\begin{eqnarray}
\label{revolutie}
S_{GMRT}[\phi]=\int d^d p \; (\frac 12 p_{\mu} \phi p^\mu \phi +\frac
12 m^2 \phi \phi
+ \frac 12 a \frac{1}{\theta^2 p^2} \phi \phi
+ \frac{\lambda }{4} \phi \star \phi \star \phi \star \phi ),
\end{eqnarray}
where $a$ is some dimensionless parameter which is taken in the interval $0<a\le \frac 14 \theta^2 m^4$.
The corresponding propagator writes in momentum space
\begin{eqnarray}
\label{propa-rev}
C_{GMRT}=\frac{1}{p^2+\mu^2+\frac{a}{\theta^2 p^2}} \, .
\end{eqnarray}
In \cite{noi}, this model was proved to be renormalizable at any order in perturbation theory. Furthermore, its renormalization group flows \cite{beta-GMRT}
were calculated; a mechanism for taking the commutative limit has been proposed \cite{limita} (for a review on all these developments, see \cite{review-io}).
\subsection{The NC Parametric representation}
In this subsection we present the implementation of the parametric representations for the noncommutative scalar models introduced in the previous subsection.
To keep track of the cyclic ordering at the vertices it is convenient
to refine the incidence matrix $\varepsilon_{ev}$ into a more precise
incidence tensor $\varepsilon^v_{ei}$, where $i = 1,\ldots,4$ indexes the four
corners of the Moyal vertex. As before it is $+1$ if the edge $e$ enters
at corner $i$ of vertex $v$, $-1$ if it exits at that corner, and $0$ otherwise.
To implement the parametric representation we follow subsection \ref{paramQFT}. The propagator
remains the same as in QFT, but the contribution of a vertex $v$ now corresponds to a Moyal kernel.
In momentum space it writes using again summation over repeated indices
\begin{eqnarray}
\label{v1}
\delta (\sum_{i=1}^4 \varepsilon^v_{ei}k_e )e^{- \frac i2\sum_{1\le
i <j\le 4}\varepsilon^v_{ei}k_e\Theta \varepsilon^v_{e'j}k_{e'}} .
\end{eqnarray}
By $k_i\Theta k_j$ we denote $k_i^\mu \Theta_{\mu\nu} k^\nu_j$.
The $\delta$-function appearing in the vertex contribution \eqref{v1} is
nothing but the usual momentum conservation. It can be
written as an integral over a new variable $\tilde x_v$, called the {\it hyperposition}. One associates such a variable to any Moyal vertex, even though this vertex is non-local:
\begin{eqnarray}
\delta(\sum_{i=1}^4 \varepsilon^v_{ei}k_e ) = \int \frac{d \tilde x_v'}{(2 \pi)^4}
e^{i\tilde x_v'(\sum_{i=1}^4 \varepsilon^v_{ei}k_e )}
=\int \frac{d \tilde x_v}{(2 \pi)^4}
e^{\tilde x_v\sigma(\sum_{i=1}^4 \varepsilon^v_{ei}k_e )}.\label{pbar1}
\end{eqnarray}
where $\sigma$ is a $d\times d$ block-diagonal matrix made of $d/2$ blocks:
\begin{eqnarray}
\label{sigma}
\sigma =
\begin{pmatrix}
0 & -i \\
i & 0
\end{pmatrix}.
\end{eqnarray}
Note that to pass from the first to the second line in \eqref{pbar1}, the change of variables $i \tilde x_v'=\tilde x_v \sigma$ has Jacobian 1.
\subsubsection{The ``naive'' model}
Putting now together the contributions of all the internal momenta and vertices, one has the following parametric representation:
\begin{eqnarray}
\label{param-naiv}
&&{\cal A}_G^T(p_1,\ldots,p_N)=K_G^T\int \prod_{e=1}^E d^d k_e d\alpha_e
e^{-\alpha_e (k_e^2+m^2)}\\
&&\prod_{v=1}^{V-1}\int d^d \tilde x_v
e^{i \tilde x_v (\sum_{i=1}^4 \varepsilon^v_{ei}k_e)}
e^{-\frac i2 \sum_{i <j} \varepsilon_{ei}^v k_e \Theta \varepsilon_{e'j}^v k_{e'}}
\end{eqnarray}
where we have denoted by
$K_G^T$ some inessential normalization constant. Furthermore, note that in the integrand above, to simplify the notations, $k_e$ or $k_{e'}$ denote momenta which can be either internal or external.
\subsubsection{The translation-invariant model}
The parametric representation of the model \eqref{revolutie} was analyzed in \cite{param-GMRT}.
This representation is intimately connected to the one of the model \eqref{act-normala}
(see the previous subsubsection) for the following reason. One can rewrite the
propagator \eqref{propa-rev} as
\begin{eqnarray}
\frac{1}{A+B}=\frac 1A - \frac 1A B \frac{1}{A+B}
\end{eqnarray}
for
\begin{eqnarray}
\label{AB}
A=p^2+m^2,\ \ B=\frac{a}{\theta^2 p^2}.
\end{eqnarray}
Thus, the propagator \eqref{propa-rev} writes
\begin{eqnarray}
\label{propa2}
C_{GMRT}&=&\frac{1}{p^2+m^2}-\frac{1}{p^2+m^2}\frac{a}{\theta^2 p^2 (p^2+m^2)+a},\nonumber\\
&=&
\frac{1}{p^2+m^2}-\frac{1}{p^2+m^2}\frac{a}{\theta^2 (p^2 +m_1^2)(p^2+m^2_2)}
\end{eqnarray}
where $-m_1^2$ and $-m_2^2$ are the roots of the denominator of the second term in the RHS (considered as a second order equation in $p^2$), namely
$\frac{-\theta^2 m^2\pm \sqrt{\theta^4 m^4 - 4 \theta^2 a}}{2\theta^2}<0$.
Note that the form \eqref{propa2} allows us already to write an integral representation of the propagator $C(p,m,\theta)$. Nevertheless, for the second term one would need a triple integration over some set of Schwinger parameters:
\begin{eqnarray}
\label{param}
C_{GMRT}&=&\int_0^\infty d\alpha e^{-\alpha (p^2+m^2)},\\
&-& \frac{a}{\theta^2} \int_0^\infty\!\!\int_0^\infty\!\!\int_0^\infty d\alpha\, d\alpha^{(1)} d\alpha^{(2)} e^{-(\alpha+\alpha^{(1)}+\alpha^{(2)})p^2} e^{-\alpha m^2} e^{-\alpha^{(1)} m_1^2}e^{-\alpha^{(2)} m_2^2}.\nonumber
\end{eqnarray}
Instead of that one can use the following formula:
\begin{eqnarray}
\frac{1}{p^2+m_1^2}\frac{1}{p^2+m_2^2}=\frac{1}{m_2^2-m_1^2}(\frac{1}{p^2+m_1^2}-\frac{1}{p^2+m_2^2}).
\end{eqnarray}
This allows us to write the propagator \eqref{propa2} as
\begin{eqnarray}
C_{GMRT}=\frac{1}{p^2+m^2}-\frac{a}{\theta^2 (m_2^2-m_1^2)}\frac{1}{p^2+m^2}(\frac{1}{p^2+m_1^2}-\frac{1}{p^2+m_2^2}).
\end{eqnarray}
This form finally allows us to write down the following integral representation:
\begin{eqnarray}
\label{propa-int2}
C_{GMRT}&=&\int_0^\infty d\alpha e^{-\alpha (p^2+m^2)}
- \frac{a}{\theta^2 (m_2^2-m_1^2)} \int_0^\infty \int_0^\infty d\alpha d\alpha_1
e^{-(\alpha+\alpha_1)p^2-\alpha m^2 -\alpha_1 m_1^2}\nonumber \\
&+& \frac{a}{\theta^2 (m_2^2-m_1^2)} \int_0^\infty \int_0^\infty d\alpha d\alpha_2 e^{-(\alpha+\alpha_2)p^2} e^{-\alpha m^2} e^{-\alpha_2 m_2^2}.
\end{eqnarray}
Let us also remark that the noncommutative propagator $C_{GMRT}$ is bounded from above by the ``usual'' commutative propagator $C(p,m)$, since the extra term $\frac{a}{\theta^2 p^2}$ in its denominator is non-negative:
\begin{eqnarray}
\label{limita}
C_{GMRT} \le C(p,m).
\end{eqnarray}
\medskip
Using now \eqref{param}, the parametric representation of the model \eqref{revolutie} is thus a sum of $2^E$ terms coming from the expansion of the $E$ internal propagators.
Each of these terms has the same form as the polynomials of the previous subsection. The only differences come from
\begin{itemize}
\item the proper substitution of the set of Schwinger $\alpha$ parameters,
\item the mass part.
\end{itemize}
One has
\begin{eqnarray}
\label{plr}
&&{\cal A}_G^T=K_G^T \left( \int \prod_{i=1}^L d \alpha_i \frac{1}{[U(\alpha)]^{\frac d2}} e^{\frac{-V(\alpha, p)}{U(\alpha)}} e^{-\sum_{i=1}^L \alpha_i m^2 }\right.\\
&& + (-\frac{a}{\theta^2})^{L-1}\sum_{j_1=1}^L \int d\alpha_{j_1} \prod_{i\ne j_1,\, i=1}^L d\alpha_i d\alpha_i^{(1)} d\alpha_i^{(2)} \frac{1}{[U(\alpha_i+\alpha_i^{(1)}+\alpha_i^{(2)}, \alpha_{j_1})]^\frac d2} \nonumber\\
&& \ \ \ e^{-\frac{V(\alpha_i+\alpha_i^{(1)}+\alpha_i^{(2)}, \alpha_{j_1},p)}{U(\alpha_i+\alpha_i^{(1)}+\alpha_i^{(2)}, \alpha_{j_1})}}
e^{-\sum_{i=1}^L \alpha_i m^2 } e^{-\sum_{i\ne j_1,\, i=1}^L \alpha_i^{(1)} m_1^2 }e^{-\sum_{i\ne j_1,\, i=1}^L \alpha_i^{(2)} m_2^2 }\nonumber\\
&& + (-\frac{a}{\theta^2})^{L-2}\sum_{j_1<j_2,\, j_1,j_2=1}^L \int d\alpha_{j_1}d\alpha_{j_2} \prod_{i\ne j_1,j_2,\, i=1}^L d\alpha_i d\alpha_i^{(1)} d\alpha_i^{(2)} \nonumber\\
&& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \frac{1}{[U(\alpha_i+\alpha_i^{(1)}+\alpha_i^{(2)}, \alpha_{j_1}, \alpha_{j_2})]^\frac d2} \nonumber\\
&& \ \ \ \ \ \ \ \ \ \ e^{-\frac{V(\alpha_i+\alpha_i^{(1)}+\alpha_i^{(2)}, \alpha_{j_1},\alpha_{j_2}, p)}{U(\alpha_i+\alpha_i^{(1)}+\alpha_i^{(2)}, \alpha_{j_1}, \alpha_{j_2})}}
e^{-\sum_{i=1}^L \alpha_i m^2 } e^{-\sum_{i\ne j_1,j_2,\, i=1}^L \alpha_i^{(1)} m_1^2 }e^{-\sum_{i\ne j_1,j_2,\, i=1}^L \alpha_i^{(2)} m_2^2 }
\nonumber\\
&& + \ldots +\nonumber\\
&& + (-\frac{a}{\theta^2})^{L}\int \prod_{i=1}^L d\alpha_i d\alpha_i^{(1)} d\alpha_i^{(2)} \frac{1}{[U(\alpha_i+\alpha_i^{(1)}+\alpha_i^{(2)})]^\frac d2} e^{-\frac{V(\alpha_i+\alpha_i^{(1)}+\alpha_i^{(2)},p)}{U(\alpha_i+\alpha_i^{(1)}+\alpha_i^{(2)})}}\nonumber\\
&& \left. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
e^{-\sum_{i=1}^L \alpha_i m^2 }e^{-\sum_{i=1}^L \alpha_i^{(1)} m_1^2 }e^{-\sum_{i=1}^L \alpha_i^{(2)} m_2^2 }
\right).\nonumber
\end{eqnarray}
\subsection{Deletion/contraction for the NC Symanzik polynomials}
In this subsection we give some results relating the Bollob\'as-Riordan polynomial and the parametric representations of the noncommutative scalar models introduced here.
\subsubsection{The ``naive'' model}
As in the commutative case, we have to perform a Gau\ss ian integral in a $d(E+V-1)$ dimensional space.
Consider a ribbon graph $G$ with a root $\bar v$.
We introduce the condensed notations analog to \eqref{mainformb}-\eqref{mainformq}
\begin{eqnarray} A_G (p) = \int \prod_e
d\alpha_e e^{-\alpha_e m^2} \int d^d \tilde x d^d p
e^{- Y X Y^t}
\end{eqnarray}
where
\begin{eqnarray} \label{defMPQ1NC}
Y = \begin{pmatrix}
k_e & \tilde x_v & p_e & \tilde x_{\bar v} \\
\end{pmatrix}
\ \ , \ \ X= \begin{pmatrix} Q & -iR^t \\ -iR & M \\
\end{pmatrix}\ .
\end{eqnarray}
$Q$ is a $d(E + V-1)$-dimensional square matrix. We have denoted by $p_e$ the external momenta and by $\tilde x_{\bar v}$ the hyperposition associated to the root vertex $\bar v$. The matrix $R$ is a $d(N+1)\times d(E+V-1)$ dimensional matrix and $M$ is a $d(N+1)$ dimensional square matrix
representing the Moyal couplings between the external momenta and the root vertex.
Gau\ss ian integration gives, up to inessential constants:
\begin{eqnarray}\label{defMPQ2NC} A_G (p) = \int \prod_e d\alpha_e e^{-\alpha_e m^2}
\frac{1}{\det Q^{d/2}} e^{ - P R Q^{-1} R^{t} P^t }
\end{eqnarray}
where $P$ is a row matrix regrouping the external momenta (and the hyperposition associated to the root vertex).
The determinant of the matrix $Q$
therefore defines the first Symanzik NC-polynomial $U^\star$, and the product of the
matrix $R$ with the inverse of $Q$ defines the quotient of the second
Symanzik polynomial $V^\star$ by $U^\star$, where the star recalls the Moyal product
used to define this NCQFT.
Let us first calculate the determinant of $Q$.
One has
\begin{eqnarray}
\label{important}
Q= D\otimes 1_d + A \otimes \sigma
\end{eqnarray}
where $D$ is a diagonal matrix with
coefficients $D_{ee} =\alpha_e$, for $e=1,\ldots, E$ and $D_{vv} = 0$ for the rest, $v=1,\ldots, V-1$.
$A$ is an antisymmetric matrix. In \cite{GurauRiv} it was noted that, for such a matrix,
\begin{eqnarray}
\det Q = [\det (D+A)]^d.
\end{eqnarray}
This implies, as in the commutative case, that
\begin{eqnarray}
U^\star=\det (D+A).
\end{eqnarray}
Factoring out powers of $i$ one has
\begin{equation}
\label{Qnaiv}
\det (D+A) = \det
\begin{pmatrix}
\alpha_1 & f_{12}& & & -{\sum_{i=1}^4\epsilon^v_{e i}}\\
-f_{12}& \alpha_2 & \\& & \ldots&\\ & & & \ldots& \\
{\sum_{i=1}^4\epsilon_{e i}^v} & & & & 0\\
\end{pmatrix}.
\end{equation}
The difference with the commutative case comes from the
non-trivial antisymmetric coupling between the $E$ edges variables.
It corresponds to an $E$ dimensional square matrix $F$ with matrix elements
\begin{eqnarray}
\label{F}
f_{ee'}=-\frac{\theta}{2} \sum_{v=1}^{V} \sum_{i,j=1}^4 \omega(i,j)\varepsilon_{ei}^v \varepsilon_{e'j}^v, \ \forall e<e',\ e,e'=1,\ldots, E
\end{eqnarray}
where $\omega$ is an antisymmetric matrix such that $\omega (i,j)=1$ if $i<j$. This matrix takes into account the antisymmetric character of $\Theta$ in $k_\mu \Theta^{\mu \nu} p_\nu$.
Using again Lemma \ref{quasipfaff}
\begin{eqnarray}
\det (D+A)&=& \int \prod_{v,e} d\omega_v d\chi_v d\omega_e d\chi_e
\nonumber\\&&e^{- \sum_e \alpha_e\chi_e
\omega_e} e^{-\sum_{e,v}\chi_e
\epsilon_{ev}\chi_v + \chi \leftrightarrow \omega }
e^{-\sum_{e,e'}\chi_e
f_{ee'}\chi_{e'} + \chi \leftrightarrow \omega }.
\end{eqnarray}
Note that the last term above represents the difference with the commutative case.
We have the exact analog of Theorem \ref{grasstheo}
to prove a deletion-contraction rule.
\begin{theorem}\label{grasstheo2}
For any semi-regular edge $e$
\begin{eqnarray} \label{delcontr3}
\det (D+A)_G= \alpha_e \det (D+A)_{G- e} + \det (D+A)_{G/ e}.
\end{eqnarray}
\end{theorem}
\noindent{\bf Proof}\
We pick a semi-regular edge $e$ entering $v_1$ and exiting $v_2$.
Thus there exist some $i$ and $j$ with $\epsilon^{v_1}_{ei} = +1, \epsilon^{v_2}_{ej} = -1$. We expand
\begin{equation} e^{- \alpha_e\chi_e \omega_e} = 1 +\alpha_e\omega_e\chi_e,
\end{equation}
leading to two contributions, which we denote respectively by $\det Q_{G,e,1}$ and
$\det Q_{G,e,2}$.
For the first term, since one must saturate the $\chi_e$
and $\omega_e$ integrations, one has to keep the $\chi_e (\chi_{v_1} - \chi_{v_2} + \sum_{\tilde e} f_{e\tilde e}\chi_{\tilde e}) $ term and the similar $\omega$ term.
Note that the sum runs over all the edges $\tilde e$ hooking to either of the vertices $v_1$ and $v_2$ with which the edge $e$ has a nontrivial Moyal oscillation factor.
One has
\begin{eqnarray}
\det Q_{G,e,1}&=& \int \prod_{e'\ne e,v} d \chi_{e'} d\chi_v d\omega_{e'} d\omega_v \nonumber\\
&&
(\chi_{v_1} - \chi_{v_2}+ \sum_{\tilde e} f_{e\tilde e}\chi_{\tilde e})
(\omega_{v_1} - \omega_{v_2}+ \sum_{\tilde e} f_{e\tilde e}\omega_{\tilde e})\nonumber\\
&&
e^{-\sum_{e'\ne e }\alpha_{e'}\chi_{e'}
\omega_{e'}} e^{-\frac{1}{4}\sum_{e' \ne e ,v} \chi_{e'}
\epsilon_{e'v}\chi_v+ \chi \leftrightarrow \omega }.
\end{eqnarray}
As in the commutative case, we now perform the trivial triangular change of variables with unit Jacobian:
\begin{equation} \hat \chi_{v_1} = \chi_{v_1} - \chi_{v_2}+ \sum_{\tilde e} f_{e\tilde e}\chi_{\tilde e}, \ \ \hat \chi_{v} = \chi_{v}\ \ \mbox{for } \ v \ne v_1,
\end{equation}
and the same change for the $\omega$ variables. What happens now is analogous to the commutative case, with the difference that the last term in the definition of $\hat \chi_{v_1}$ will lead to the reconstruction of the Moyal oscillation factors of the edges hooking to $v_1$ with the edges hooking to $v_2$. This completes the ribbon contraction, thus
$ \det Q_{G,e,1} = \det Q_{G/e}$. The second term
$\det Q_{G,e,2}$ with the $ \alpha_e\omega_e\chi_e$
factor is even easier. We must simply put to 0 all terms involving the $e$ label, hence trivially
$ \det Q_{G,e,2} = \alpha_e \det Q_{G-e}$.
{\hfill $\Box$}
We need now to compute $U^\star$ on terminal forms after contracting/deleting all
semi-regular edges, that is compute $U^\star$ on a rosette graph ${\cal R}$.
This is done by using the double contraction introduced in the previous
section.
Consider a nice crossing of ${\cal R}$ between two edges $e_1$ and $e_2$
with parameters $\alpha_1$ and $\alpha_2$. It leads to a contribution
\begin{eqnarray}\label{terminal}
U_{{\cal R}}^\star = ( \alpha_1\alpha_2+\frac 14 \theta^2 ) U_{{\cal R}/e_1e_2}
\end{eqnarray}
where we recall that the contracted rosette ${\cal R}/e_1e_2$
is obtained by deleting $e_1$ and $e_2$ from ${\cal R}$ and interchanging
the half-edges encompassed by $e_1$ with the ones encompassed by $e_2$,
see Figure \ref{filk3}. The procedure
can be iterated on ${\cal R}/e_1e_2$ until, after $g({\cal R})$
double contractions, a planar rosette with $2E({\cal R}) -2g({\cal R})$
edges is reached, for which $F=0$ and for which the terminal form is
$\prod_e \alpha_e$, as in the commutative case.
Remark that the main difference from the commutative case is the inclusion
of the $\theta^2$ term in the terminal form evaluation
\eqref{terminal}.
This type of \emph{genus-term} has no analog in the commutative case.
\begin{example}
Consider the graph of Figure \ref{graf-NP}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.9]{sunshine-np.pdf}
\caption{An example of a non-planar graph, $g=1$.}\label{graf-NP}
\end{center}
\end{figure}
Its first Symanzik polynomial is \cite{param-GMRT}
\begin{eqnarray}
\label{tot}
\alpha_1\alpha_2+\alpha_1\alpha_3+\alpha_2\alpha_3+\frac 14 \theta^2.
\end{eqnarray}
Choosing $\alpha_3$ as a regular edge leads to a contracted graph
where the pair of edges $\alpha_1$ and $\alpha_2$ realizes a nice crossing. We thus have a contribution to the first polynomial
\begin{eqnarray}
\label{p1}
\alpha_1\alpha_2 +\frac 14 \theta^2 .
\end{eqnarray}
The deleted part then follows as in the commutative case, leading to a contribution
\begin{eqnarray}
\label{p2}
\alpha_3\alpha_1+\alpha_3\alpha_2.
\end{eqnarray}
Putting together \eqref{p1} and \eqref{p2} leads to the expected result \eqref{tot}.
\end{example}
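This bookkeeping can be checked symbolically; the following Python lines (ours, with sympy) merely re-do the arithmetic of the example, verifying that \eqref{p1} and \eqref{p2} indeed sum to \eqref{tot}.
\begin{verbatim}
from sympy import symbols, expand

a1, a2, a3, theta = symbols('alpha_1 alpha_2 alpha_3 theta', positive=True)
contracted = a1*a2 + theta**2/4    # (p1): terminal form of the nice crossing
deleted = a3*(a1 + a2)             # (p2): deleted part, as in the commutative case
total = a1*a2 + a1*a3 + a2*a3 + theta**2/4   # (tot)
assert expand(contracted + deleted - total) == 0
\end{verbatim}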
\medskip
Let us now give the following definition:
\begin{definition}
A $\star$-tree of a connected graph $G$ is a subset of edges whose spanning ribbon subgraph has exactly one boundary.
\end{definition}
This definition allows one to write a $\star$-tree in some graph of genus $g$ as
an ordinary tree plus at most $g$ pairs of ``genus edges'' (where by ``genus edges'' we
understand pairs of edges which make a recursive succession of nice crossings under
double contractions on the rosette obtained after contracting the edges of the tree in the graph).
\begin{example}
For the graph of Figure \ref{graf-NP}, the $\star$-trees are the ordinary trees $\{1\}$, $\{2\}$, $\{3\}$,
and a tree plus one pair of genus edges, namely $\{1,2,3\}$, which is the whole graph.
\end{example}
In \cite{MinSei}, the following general expression for the first
polynomial $U$ of the ``naive'' noncommutative model was given
\begin{eqnarray}
\label{min1}
U^\star(\alpha_1,\ldots,\alpha_E)=\left(\frac{\theta}{2}\right)^{b} \sum_{{\cal T}^\star\; \star-{\mathrm tree}} \prod_{e\notin {\cal T}^\star} 2 \frac{\alpha_e}{\theta},
\end{eqnarray}
where we have denoted by
\begin{eqnarray}
\label{b}
b=F-1+2g
\end{eqnarray}
the number of loops of $G$. Note that the factor $2$ above matches our conventions.
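For concreteness, let us also record a small computational check of \eqref{min1}. The Python script below (ours, with sympy) encodes the ribbon structure of the graph of Figure \ref{graf-NP} by a rotation system, that is, a cyclic order of half-edges at each vertex together with the pairing of half-edges into edges; the half-edge labels and the boundary walk as orbits of $\sigma\circ\alpha$ are our own bookkeeping. Enumerating the subsets of edges with exactly one boundary recovers the list of $\star$-trees of the example above, and evaluating \eqref{min1} recovers the polynomial \eqref{tot}.
\begin{verbatim}
from itertools import combinations
from math import prod
from sympy import symbols, expand

def count_boundaries(rotations, edges, subset):
    # Boundary components of the spanning ribbon subgraph (V, subset).
    half = {h for e in subset for h in edges[e]}
    succ = {}          # sigma: next half-edge in the cyclic order at a vertex
    isolated = 0       # a vertex with no half-edge in 'subset': one boundary
    for rot in rotations:
        present = [h for h in rot if h in half]
        if not present:
            isolated += 1
            continue
        for i, h in enumerate(present):
            succ[h] = present[(i + 1) % len(present)]
    mate = {}          # alpha: the half-edge involution given by the edges
    for e in subset:
        h1, h2 = edges[e]
        mate[h1], mate[h2] = h2, h1
    seen, nb = set(), 0   # boundaries = orbits of sigma composed with alpha
    for h0 in half:
        if h0 in seen:
            continue
        nb, h = nb + 1, h0
        while h not in seen:
            seen.add(h)
            h = succ[mate[h]]
    return nb + isolated

# Figure graf-NP: two vertices, three parallel edges, the SAME cyclic
# order (0,1,2) and (3,4,5) at the two vertices, hence genus 1.
rotations = [[0, 1, 2], [3, 4, 5]]
edges = {1: (0, 3), 2: (1, 4), 3: (2, 5)}

star_trees = [T for r in range(len(edges) + 1)
              for T in combinations(edges, r)
              if count_boundaries(rotations, edges, T) == 1]
print(star_trees)      # [(1,), (2,), (3,), (1, 2, 3)], as in the example

theta = symbols('theta', positive=True)
a = dict(zip(edges, symbols('alpha_1:4', positive=True)))
b = len(edges) - len(rotations) + 1    # number of loops, b = E - V + 1
U = expand((theta / 2)**b * sum(
    prod([2 * a[e] / theta for e in edges if e not in T], start=1)
    for T in star_trees))
assert U == expand(a[1]*a[2] + a[1]*a[3] + a[2]*a[3] + theta**2 / 4)
\end{verbatim}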
Let us now give a proof of the formula \eqref{min1}.
Consider the following lemma:
\begin{lemma}
{\bf (Lemma III.2 of \cite{GurauRiv})}
Let $D=(d_i\delta_{i j})_{i,j\in\{1,\dotsc,N\}}$
be diagonal and $A=(a_{i j})_{i,j\in\{1,\dotsc,N\}}$
be such that $a_{ii}=0$. Then
\begin{eqnarray}
\det(D+A)=\sum_{K\subset \{1,\dotsc ,N\}}\det(A_{\hat{K}}) \prod_{i\in K}d_i
\end{eqnarray}
where $A_{\hat{K}}$ is the matrix obtained from $A$ by deleting the lines and
columns with indices in $K$.
\end{lemma}
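This lemma is elementary to check by computer; the following script (ours, with sympy) verifies it numerically on a random $4\times 4$ example, with the empty matrix assigned unit determinant.
\begin{verbatim}
from itertools import combinations
from math import prod
import random
from sympy import Matrix, diag

random.seed(0)
N = 4
d = [random.randint(1, 9) for _ in range(N)]
A = Matrix(N, N, lambda i, j: 0 if i == j else random.randint(-5, 5))
D = diag(*d)

def minor_det(K):
    keep = [i for i in range(N) if i not in K]
    return A[keep, keep].det() if keep else 1   # empty matrix: det = 1

assert (D + A).det() == sum(
    minor_det(set(K)) * prod((d[i] for i in K), start=1)
    for r in range(N + 1) for K in combinations(range(N), r))
\end{verbatim}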
The particular form \eqref{important} of the matrix $Q$ thus allows us to use this lemma to calculate its determinant ({\it i.e.}, the polynomial $U$). Factoring out $\frac {\theta}{2}$ on the first $E$ lines and then $\frac{2}{\theta}$ on the last $V-1$ columns, one has
\begin{eqnarray}
U^\star (\alpha)=\left(\frac{\theta}{2}\right)^b\sum_{K\subset\{1,\ldots,E\}} \det A_{\hat K} \prod_{e\in K} 2 \frac{\alpha_e}{\theta}
\end{eqnarray}
where we have used that
$$ b - E = - (V-1).$$
Note that the set $K$ on which one sums corresponds to a set of edges of the graph; this comes from the fact that the last $V-1$ entries on the diagonal of the matrix $Q$ are equal to $0$.
In \cite{GurauRiv} (see Lemma III.4) it is proven, using a non-trivial triangular change of Grassmann variables, that a determinant of the type $\det A_{\hat K}$ is non-vanishing if and only if it corresponds to a graph with only one boundary. This means that the complement of the subset of edges $K$ must be a $\star$-tree, $\bar K={\cal T}^\star$. Furthermore, one has
\begin{eqnarray}
\prod_{e\in \bar {\cal T}^\star} \alpha_e = \prod_{e\notin {\cal T}^\star} \alpha_e,
\end{eqnarray}
which completes the proof of \eqref{min1}.
\subsubsection{The translation-invariant model}
The relation of the parametric representation of the model \eqref{revolutie} to the Bollob\'as-Riordan polynomials follows the one of the ``naive'' model \eqref{act-normala} presented above. This is an immediate consequence of the intimate relationship between the parametric representation of these two noncommutative models, a relationship explained in the previous subsection.
\subsection{The second polynomial for NCQFT}
In this section we prove the form of the second polynomial for the model
\eqref{act-normala} (both its real and imaginary part, as we will see in the sequel).
We then relate this second polynomial to the Bollob\'as-Riordan polynomial.
From
\eqref{defMPQ2NC} it follows directly that
\begin{eqnarray}
\frac{V^\star(\alpha,p)}{U^\star(\alpha)}=-P
R Q^{-1} R^{t}
P^t
\end{eqnarray}
where we have left aside the matrix $M$ coupling the external momenta to themselves. Note that the matrix $R$ couples the external momenta (and the hyperposition associated to the root vertex) to the internal momenta and the remaining $V-1$ hyperpositions. This coupling is done in an analogous way to
the coupling of the internal momenta with the respective variables.
We can thus state that the $V$ polynomial is given, as in the commutative case, by the inverse $Q^{-1}$ of the matrix $Q$ giving the $U$ polynomial.
The particular form \eqref{important} of the matrix $Q$ leads to
\begin{eqnarray}
Q^{-1}&=&\frac 12 \left((D+A)^{-1}+(D-A)^{-1}\right)\otimes 1_d \nonumber\\
&+& \frac 12 \left((D+A)^{-1}-(D-A)^{-1}\right)\otimes \sigma.
\end{eqnarray}
Thus, the polynomial $V$ has both a real part $V^R$ and an imaginary part $V^I$.
In the case of commutative theories, the imaginary part above disappears. This is a consequence of the fact that the matrix $F$, which couples the internal momenta through the Moyal oscillations, vanishes for $\theta=0$.
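Since the explicit form \eqref{important} is not reproduced here, let us record the algebra behind the block-inverse formula under our reading of it, namely $Q=D\otimes 1_d+A\otimes\sigma$ with $\sigma^2=1_d$: multiplying out, the $1_d$ part of $Q\,Q^{-1}$ is $\frac12\left((D+A)(D+A)^{-1}+(D-A)(D-A)^{-1}\right)=1$, while the $\sigma$ part cancels. The Python lines below (ours, with sympy) check this on a random example; the block form with $\sigma^2=1_d$ is our assumption.
\begin{verbatim}
import random
from sympy import Matrix, eye, diag

random.seed(1)
n = 3
D = diag(*[random.randint(5, 9) for _ in range(n)])
A = Matrix(n, n, lambda i, j: 0)
for i in range(n):
    for j in range(i + 1, n):
        A[i, j] = random.randint(-4, 4)
        A[j, i] = -A[i, j]              # antisymmetric coupling block
sigma = Matrix([[0, 1], [1, 0]])        # any sigma with sigma**2 == identity

def kron(X, Y):                         # Kronecker product of dense matrices
    return Matrix(X.rows * Y.rows, X.cols * Y.cols,
                  lambda i, j: X[i // Y.rows, j // Y.cols]
                             * Y[i % Y.rows, j % Y.cols])

Q = kron(D, eye(2)) + kron(A, sigma)    # assumed block form of Q
P, M = (D + A).inv(), (D - A).inv()
assert Q.inv() == kron((P + M) / 2, eye(2)) + kron((P - M) / 2, sigma)
\end{verbatim}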
Let us give the following definition:
\begin{definition}
A $2$-$\star$-tree is a subset of edges whose spanning ribbon subgraph has exactly two boundaries.
\end{definition}
Furthermore, let $K$ be a subset of lines of the antisymmetric matrix $A$.
Let ${\mathrm{Pf}}(A_{\hat K\hat\tau})$ be the Pfaffian of the antisymmetric matrix obtained from
$A$ by deleting the lines and columns corresponding to the edges in the set $K\cup\{\tau\}$, for $\tau\notin K$. We also define $\varepsilon_{K,\tau}$ to be the signature of the permutation obtained from $(1,\ldots, E)$ by extracting the positions belonging to $K\cup\{\tau\}$ and placing them at the end in the order:
$$ 1,\ldots, E\to 1,\ldots, \hat i_1,\ldots, \hat i_p ,\ldots, \hat i_\tau,\ldots, E, i_\tau, i_p,\ldots , i_1.$$
We now prove a general form for both the real and the imaginary part of the polynomial $V^\star$,
denoted ${\cal X}^{\star}$ and ${\cal Y}^{\star}$ respectively.
\begin{theorem}
\label{thr}
The real part of the second Symanzik polynomial reads
\begin{eqnarray}
\label{vr}
{\cal X}^{\star}=\left(\frac{\theta}{2}\right)^{b+1}\sum_{{\cal T}^\star_2\; 2-\star \;{\mathrm{tree}}}\prod_{e\notin {\cal T}_2^\star}2\frac{\alpha_e}{\theta}(p_{{\cal T}^\star_2})^2,
\end{eqnarray}
where $p_{{\cal T}^\star_2}$ is the sum of the momenta entering one of the two faces of the $2$-$\star$-tree ${\cal T}_2^\star$.
\end{theorem}
Note that by momentum conservation, the choice of the face in the above theorem is irrelevant. Furthermore, let us emphasize that, since we are on the submanifold $p_G=0$, an equivalent way of writing \eqref{vr} is
\begin{eqnarray}
{\cal X}^{\star}=-\frac12 \left(\frac{\theta}{2}\right)^{b+1}\sum_{{\cal T}^\star_2\; 2-\star \;{\mathrm{tree}}}\prod_{e\notin {\cal T}_2^\star}2\frac{\alpha_e}{\theta}p_v\cdot p_{v'},
\end{eqnarray}
where $p_v$ (resp. $p_{v'}$) is the total momentum entering one of the two faces of the $2$-$\star$-tree.
\medskip
\noindent
{\it Proof.} We base our proof on the following lemma:
\begin{lemma} ({\bf Lemma IV.1 of \cite{GurauRiv}}) \\
The real part of the polynomial $V^\star$ reads
\begin{eqnarray}
\label{lemar}
{\cal X}^{\star}=\sum_{K}\prod_{i\notin K} d_i \left(\sum_{e_1}p_{e_1}\sum_{\tau \notin K} R_{e_1 \tau} \varepsilon_{K\tau}{\mathrm {Pf}}(A_{\hat K\hat \tau})\right)^2
\end{eqnarray}
where $d_i$ are the elements on the diagonal of the matrix $Q$. Furthermore, when $|K|\in\{E-1,E\}$ the matrix with deleted lines is taken to be the empty matrix, with unit Pfaffian.
\end{lemma}
Note that, as before, since the matrix $Q$ has vanishing entries on the diagonal for the last $V-1$ entries, the subsets $K$ are nothing but subsets of edges. The empty matrix obtained by deleting all the first $E$ edges in the graph corresponds to the graph with no internal edges but only disconnected vertices. Each of these disconnected components has one boundary; hence the Pfaffian is non-vanishing.
Note that the Pfaffian in \eqref{lemar} is non-vanishing iff the corresponding graph has $1$ boundary (see above). This means that
$ K\cup \{\tau\}$
is the complement of a $\star-$tree ${\cal T}^\star$:
\begin{eqnarray}
{\overline{K\cup \{\tau\}}}={\cal T}^\star.
\end{eqnarray}
Hence the subset $K$ is the complement of a $\star$-tree plus an edge (just like in the commutative case). Adding an extra edge to a $\star$-tree increases the number of boundaries by one unit. Therefore, the subset of edges $K$ above is the complement of some $2$-$\star$-tree ${\cal T}_2^\star$
\begin{eqnarray}
\bar K= {\cal T}_2^\star.
\end{eqnarray}
As before, one has
$$ \prod_{e\in K} \alpha_e=\prod_{e\notin {\cal T}_2^\star} \alpha_e.$$
The diagonal terms in the matrix $Q$ are again the parameters $\alpha_e$. One factors out $\frac{\theta}{2}$ on the lines of the matrices corresponding to the edges of the graph and then $\frac{2}{\theta}$ on the lines of the matrices corresponding to the vertices. The extra factor $\theta/2$ corresponds to the extra edge $\tau$.
\medskip
Let us now investigate the momentum combination entering the square in \eqref{lemar}. Note that the matrix element $R_{e_1\tau}$ is non-vanishing only for an external momentum $p_{e_1}$ which has a Moyal oscillation with the internal momentum associated with the edge $\tau$. It is this edge $\tau$ which actually creates the extra boundary. Thus the sum over the external momenta in \eqref{lemar} is nothing but the sum of the momenta entering one of the two boundaries. By a direct verification, one can explicitly check the signs of the respective momenta in \eqref{lemar}, which concludes the proof. {\hfill $\Box$}
\begin{example}
For the graph of Figure \ref{graf-NP}, the second polynomial is
\begin{eqnarray}
V^{\star}(\alpha,p)=\alpha_1 \alpha_2 \alpha_3 p^2 + \frac 14 (\alpha_1+\alpha_2+\alpha_3)\theta^2p^2.
\end{eqnarray}
\end{example}
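This example can also be checked mechanically, reusing the definitions from the script given after \eqref{min1}: one enumerates the subsets of edges with exactly two boundaries and applies Theorem \ref{thr}. For this two-point graph the total momentum entering either face of any $2$-$\star$-tree is $\pm p$, so $(p_{{\cal T}^\star_2})^2=p^2$ throughout (our simplification for this graph); the snippet below is ours.
\begin{verbatim}
# Reuses count_boundaries, rotations, edges, a, theta, b, combinations,
# prod, expand and symbols from the script after (min1).
p2 = symbols('p^2', positive=True)      # placeholder for the square p^2
two_star_trees = [T for r in range(len(edges) + 1)
                  for T in combinations(edges, r)
                  if count_boundaries(rotations, edges, T) == 2]
print(two_star_trees)                   # [(), (1, 2), (1, 3), (2, 3)]
X = expand((theta / 2)**(b + 1) * sum(
    prod([2 * a[e] / theta for e in edges if e not in T], start=1) * p2
    for T in two_star_trees))
assert X == expand(a[1]*a[2]*a[3]*p2
                   + (a[1] + a[2] + a[3]) * theta**2 / 4 * p2)
\end{verbatim}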
\bigskip
Let us now investigate the form of the imaginary part ${\cal Y}^{\star}$. One has the following theorem:
\begin{theorem}
\label{thi}
The imaginary part of the second Symanzik polynomial reads
\begin{eqnarray}
\label{vim}
{\cal Y}^{\star}(\alpha,p)=\left(\frac{\theta}{2}\right)^{b}\sum_{{\cal T}^\star\; \star \;{\mathrm{tree}}}\prod_{e\notin {\cal T}^\star}2\frac{\alpha_e}{\theta} \psi(p),
\end{eqnarray}
where $\psi(p)$ is the phase obtained by following the momenta entering the face of the $\star$-tree ${\cal T}^\star$
as if it were a Moyal vertex.
\end{theorem}
{\it Proof.} The proof follows closely the one of Theorem \ref{thr}. Nevertheless, the equivalent of \eqref{lemar} is now (see again \cite{GurauRiv})
\begin{eqnarray}
{\cal Y}^{\star}(\alpha,p)&=&\sum_{K}\prod_{i\notin K} d_i\, \epsilon_K\, {\mathrm{Pf}}(A_{\hat K})\nonumber\\
&&\left(\sum_{e_1,e_2}\left(\sum_{\tau,\tau'}R_{e_1\tau}\epsilon_{K\tau\tau'}{\mathrm{Pf}}(A_{\hat K \hat \tau \hat \tau'})R_{e_2\tau'}\right)p_{e_1}\sigma p_{e_2}\right)
\end{eqnarray}
where $d_i$ are the elements on the diagonal of the matrix $Q$. Since this time we look for sets such that ${\mathrm{Pf}}(A_{\hat K})$ is non-vanishing, this implies as above that $K$ is the complement of some $\star$-tree ${\cal T}^\star$.
Furthermore, one needs to consider the two extra edges $\tau$ and $\tau'$. It is possible to erase these two further edges from the initial $\star$-tree above such that the Pfaffian ${\mathrm{Pf}}(A_{\hat K \hat \tau \hat \tau'})$ is non-vanishing. Indeed, if the $\star$-tree is an ordinary tree, by erasing two more of its edges we obtain a graph with $3$ disconnected components, each of them with a single boundary; the corresponding Pfaffian is then non-vanishing. Summing over all these possibilities leads to the Moyal oscillations of the external momenta (the ones which disappear when truncating the graph). If the $\star$-tree is formed by a tree and some pairs of genus edges, we can always further delete the pairs of genus edges and remain with the ordinary tree. Obviously the corresponding Pfaffian is again non-vanishing (since it corresponds to a graph with only one boundary).
{\hfill $\Box$}
Note that the forms of the real and imaginary parts of the polynomial $V^\star$ are qualitatively different. Indeed, the real part contains the square of a sum of incoming external momenta, while the imaginary one contains a phase involving the external momenta.
\bigskip
Let us end this section by stating that the second noncommutative Symanzik polynomial
also obeys the deletion-contraction rule. The proof is exactly like in the commutative case,
a straightforward rereading of Theorem \ref{grasstheo2}.
\subsection{Relation to multivariate Bollob\'as-Riordan polynomials}
In the previous subsections, we have identified the first Symanzik polynomial of a connected graph in a scalar NCQFT as the first order in $w$ of the multivariate Bollob\'as-Riordan polynomial,
\begin{equation}
U^{\star}_{G}(\alpha,\theta)=(\theta/2)^{E-V+1}\Big(\prod_{e\in E}\frac{2\alpha_{e}}{\theta}\Big)\times
\lim_{w\rightarrow0}w^{-1}Z_{G}\Big({\textstyle\frac{\theta}{2\alpha_{e}}},1,w\Big).
\end{equation}
Recall that the multivariate Bollob\'as-Riordan polynomial (see \cite{expansionbollobas}) is a generalization of the multivariate Tutte polynomial to orientable ribbon graphs defined by the expansion,
\begin{equation}
Z_{G}(\beta,q,w)=\sum_{A\subset E}\Big(\prod_{e\in A}\beta_{e}\Big)q^{k(A)}w^{b(A)},
\end{equation}
with $k(A)$ the number of connected components and $b(A)$ the number of boundaries of the spanning graph $(V,A)$.
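For a graph presented by a rotation system, $Z_G$ can be computed directly from this expansion. The following Python snippet (ours) does so for the graph of Figure \ref{graf-NP}, reusing the helpers defined after \eqref{min1}, and verifies the $w\to0$ limit formula for $U^{\star}_{G}$ quoted at the beginning of this subsection.
\begin{verbatim}
# Reuses rotations, edges, a, theta, count_boundaries, combinations,
# prod, expand and symbols from the script after (min1).
def count_components(n_vertices, vertex_of_half, edges, subset):
    parent = list(range(n_vertices))    # union-find over the vertices
    def root(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for e in subset:
        h1, h2 = edges[e]
        parent[root(vertex_of_half[h1])] = root(vertex_of_half[h2])
    return len({root(v) for v in range(n_vertices)})

vertex_of_half = {h: v for v, rot in enumerate(rotations) for h in rot}
q, w = symbols('q w', positive=True)
beta = {e: theta / (2 * a[e]) for e in edges}
Z = sum(prod([beta[e] for e in A], start=1)
        * q**count_components(len(rotations), vertex_of_half, edges, A)
        * w**count_boundaries(rotations, edges, A)
        for r in range(len(edges) + 1) for A in combinations(edges, r))
U_from_Z = expand((theta / 2)**(len(edges) - len(rotations) + 1)
                  * prod([2 * a[e] / theta for e in edges], start=1)
                  * Z.subs(q, 1).diff(w).subs(w, 0))  # order-w coefficient
assert U_from_Z == expand(a[1]*a[2] + a[1]*a[3] + a[2]*a[3] + theta**2 / 4)
\end{verbatim}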
In order to deal with the second Symanzik polynomial in the noncommutative case, we now introduce an extension of $Z_{G}(\beta,q,w)$ for ribbon graphs with flags at $q=1$. In the case of ribbon graphs, the flags are attached to the vertices and the cyclic order of flags and half-edges at each vertex matters. For each cyclically oriented subset $I$ of the set of labels of the flags, we introduce an independent variable $w_{I}$. Cyclically ordered subsets $I$ are defined as sequences of different labels up to a cyclic permutation. Then, each boundary of a graph with the orientation induced by the graph, defines a cyclically ordered subset of the set of labels of the flags, by listing the flags in the order they appear on the boundary. Accordingly, a variable $w_{I}$ is attached to each boundary.
\begin{definition}
For an orientable ribbon graph $G$ with flags, ${\Xi}_{G}(\alpha_{e},\beta_{e},w_{I})$ is defined by the expansion
\begin{equation}
{\Xi}_{G}(\alpha_{e},\beta_{e},w_{I})=\sum_{A\subset E}\,\Big(\prod_{e\notin A}\alpha_{e}\prod_{e\in A}\beta_{e}\prod_{
\mbox{\tiny boundaries}}w_{I_{n}}\Big),
\end{equation}
where $I_{n}$ are the cyclically ordered sets of flags attached to each of the connected components of the boundary of the spanning graph $(V,A)$.
\end{definition}
We recover $Z_{G}(\beta_{e},1,w)$ by setting $w_{I}=w$ and $\alpha_{e}=1$, but the information pertaining to $q$ is lost except for planar graphs. Indeed, in this case the genus of any subgraph is still 0 so that $|V|-|A|+b(A)=2k(A)$ and thus $Z_{G}(\beta_{e},q,w)=q^{|V|/2}Z_{G}(q^{-\frac{1}{2}}\beta_{e},1,q^{\frac{1}{2}}w)$.
The polynomial ${\Xi}_{G}(\alpha_{e},\beta_{e},w_{I})$ obeys the contraction/deletion rules for any semi-regular edge (i.e. all types of edges except self-loops). The structure of the flags of $G-e$ is left unchanged, but fewer variables $w_{I}$ enter the polynomial, since the number of boundaries decreases. For $G/e$, the flags attached to the vertex resulting from the contraction are merged respecting the cyclic order of flags and half-edges attached to the boundary of the subgraph made of the contracted edge only.
\begin{proposition}
The polynomial ${\Xi}_{G}(\alpha_{e},\beta_{e},w_{I})$ obeys the contraction/deletion rule for a semi-regular edge,
\begin{equation}
{\Xi}_{G}(\alpha_{e},\beta_{e},w_{I})=\alpha_{e}\,{\Xi}_{G-e}(\alpha_{e'\neq e},\beta_{e'\neq e},w_{I})+\beta_{e}\,{\Xi}_{G/e}(\alpha_{e'\neq e},\beta_{e'\neq e},w_{I}).
\end{equation}
\end{proposition}
This follows from gathering in the expansion of $\Xi_{G}(\alpha_{e},\beta_{e},w_{I})$ the terms that contain $e$ and those that do not. The contraction/deletion rule may be extended to any edge provided we introduce vertices that are surfaces with boundaries as in \cite{expansionbollobas}.
The second interesting property of ${\Xi}_{G}(\alpha_{e},\beta_{e},w_{I})$ lies in its invariance under duality. Recall that for a connected ribbon graph $G$ with flags, its dual $G^{\ast}$ is defined by taking as vertices the boundaries of $G$, with flags and half-edges attached in the cyclic order following the orientation of the boundary induced by that of $G$.
\begin{proposition}
For a connected graph $G$ with dual $G^{\ast}$,
\begin{equation}
{\Xi}_{G}(\alpha_{e},\beta_{e},w_{I})=
{\Xi}_{G^{\ast}}(\beta_{e},\alpha_{e},w_{I}).
\end{equation}
\end{proposition}
{\it Proof:}
First, recall that there is a natural bijection between the edges of $G$ and those of $G^{\ast}$. Thus, to a subset $A$ of edges of $G$ we associate a subset $A^{\ast}$ of edges of $G^{\ast}$ which is the image under the previous bijection of the complement $E-A$. Then, the term corresponding to $A$ in ${\Xi}_{G}(\alpha_{e},\beta_{e},w_{I})$ equals that corresponding to $A^{\ast}$ in ${\Xi}_{G^{\ast}}(\beta_{e},\alpha_{e},w_{I})$. The only non-trivial part of the last statement is the equality of the boundary terms in $G$ and $G^{\ast}$, which is best understood by embedding $G$ in a surface $\Sigma$. Then, the spanning graph $(V^{\ast},A^{\ast})$, viewed as discs joined by ribbons, is homeomorphic to $\Sigma-(V,A)$, with the orientation reversed. Accordingly, they have the same boundary.
{\hfill $\Box$}
This relation may also be extended to non-connected graphs at the price of introducing again vertices that are surfaces with holes. For example, the dual of a disjoint union of $n$ vertices is the vertex made of a sphere with $n$ holes. For a regular edge, the duality exchanges contraction (resp. deletion) in $G$ with deletion (resp. contraction) in $G^{\ast}$. In the case of the deletion of a bridge in $G$, we have to contract a self-loop in $G^{\ast}$, thus leading to vertices that are surfaces with holes. Note that this implies a duality for the multivariate Bollob\'as-Riordan polynomial only at the special point $q=1$, in agreement with the fact that the duality for the Bollob\'as-Riordan polynomial only holds when its arguments lie on a hypersurface \cite{chmutov}.
Finally, let us come to the relation with the second Symanzik polynomial in NCQFT. For a given connected graph with momenta $p_{i}$ such that $\sum_{i}p_{i}=0$ attached to the flags, we decompose the latter polynomial into real and imaginary parts,
\begin{equation}
V^{\star}_{G}(\alpha_{e},\theta,p_{i})={\cal X}^{\star}_{G}(\alpha_{e},\theta,p_{i})+\mathrm{i}\,
{\cal Y}^{\star}_{G}(\alpha_{e},\theta,p_{i}).
\end{equation}
Consider real variables $w_{i}$ and define $w_{I}=\sum_{i\in I}w_{i}$ for any cyclically oriented subset of flags. Then, expand $(\theta/2)^{|E|-|V|}\,{\Xi}_{G}(2\alpha_{e}/\theta,1,\theta w_{I}/2)$ to the first two orders at $w_{i}=0$,
\begin{equation}
(\theta/2)^{|E|-|V|}\,{\Xi}_{G}(2\alpha_{e}/\theta,1,\theta w_{I}/2)=A\Big(\sum_{i}w_{i}\Big)+\sum_{i\neq j}B_{ij}w_{i}w_{j}+O\big(w^{3}\big).
\end{equation}
The first order term reproduces the first Symanzik polynomial
\begin{equation}
U^{\star}_{G}(\alpha_{e},\theta)=A,
\end{equation}
whereas the second-order term yields the real part of the second Symanzik polynomial,
\begin{equation}
{\cal X}^{\star}_{G}(\alpha_{e},\theta,p_{i})={\textstyle -\frac{1}{2}}\sum_{i\neq j}B_{ij}\,p_{i}\cdot p_{j}.
\end{equation}
\end{equation}
To obtain the imaginary part, consider the variables
\begin{equation}
w_{I}={\textstyle \frac{1}{2}}\sum_{i<j}p_{i}\cdot\Theta p_{j}
\end{equation}
if $I$ contains all the flags, and $w_{I}=0$ otherwise. The previous definition involves a choice of a total order on $I$ compatible with its cyclic structure, but momentum conservation $\sum_{i}p_{i}=0$ implies that $w_{I}$ does not depend on this choice. Then,
\begin{equation}
{\cal Y}^{\star}_{G}(\alpha_{e},\theta,p_{i})=(\theta/2)^{|E|-|V|}\,{\Xi}_{G}(2\alpha_{e}/\theta,1,w_{I}).
\end{equation}
As a consequence of their expressions in terms of ${\Xi}_{G}(\alpha_{e},\beta_{e},w_{I})$, the noncommutative Symanzik polynomials obey contraction/deletion rules for regular edges and duality relations. For example, the duality for the first Symanzik polynomial reads
\begin{equation}
(\theta/2)^{|V|}\,U^{\star}_{G}(\alpha_{e},\theta)=(\theta/2)^{|V^{\ast}|}\,\Big(\prod_{e\in E}\frac{2\alpha_{e}}{\theta}\Big)\,U^{\star}_{G^{\ast}}\Big({{\theta^{2}}/({4\alpha_{e}})},\theta\Big).
\end{equation}
Beware that $G^{\ast}$ is the dual graph, whereas the star on polynomials
such as $U^{\star}$ and $V^{\star}$ refers to the Moyal product.
Analogous relations, though slightly more cumbersome, can be written for the second Symanzik polynomial.
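As an illustration, the duality formula above can be checked symbolically on the graph of Figure \ref{graf-NP}, reusing the script given after \eqref{min1}. The dual rotation system below is read off from the single boundary walk of $G$, in our half-edge bookkeeping, and the dual graph carries the couplings $\theta^{2}/(4\alpha_{e})$.
\begin{verbatim}
# One dual vertex, since G has a single boundary; its rotation is the
# boundary walk of G in our half-edge labels. Dual edges keep the same
# half-edge pairing. Reuses the definitions from the script after (min1).
dual_rotations = [[0, 4, 2, 3, 1, 5]]

def U_star(rot, edg, coupling):         # evaluates formula (min1)
    loops = len(edg) - len(rot) + 1
    trees = [T for r in range(len(edg) + 1) for T in combinations(edg, r)
             if count_boundaries(rot, edg, T) == 1]
    return (theta / 2)**loops * sum(
        prod([2 * coupling[e] / theta for e in edg if e not in T], start=1)
        for T in trees)

lhs = (theta / 2)**2 * U_star(rotations, edges, a)                  # |V| = 2
rhs = (theta / 2)**1 * prod([2 * a[e] / theta for e in edges], start=1) \
      * U_star(dual_rotations, edges,
               {e: theta**2 / (4 * a[e]) for e in edges})           # |V*| = 1
assert expand(lhs - rhs) == 0
\end{verbatim}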
Still another way to categorify and regularize in the infrared
is to introduce harmonic potentials on the edges rather than the vertices, leading to propagators
based on the Mehler rather than the heat kernel. This is the so-called \emph{vulcanization}. An extensive
study of the corresponding commutative and noncommutative polynomials is
under way as a companion paper \cite{KRTZ}.
\bigskip
\noindent
{\bf Acknowledgments}
We thank J. Ellis-Monaghan for introducing us to Bollob\'as-Riordan polynomials
and R\u{a}zvan Gur\u{a}u and Fabien Vignes-Tourneret
for interesting discussions at an early stage of this work.
\section{Introduction}
In this paper, we study aspects of~$(2,0)$ superconformal field theories (SCFTs) in six dimensions, as well as their circle and torus compactifications, which lead to maximally supersymmetric Yang-Mills theories in five and four dimensions. (We will refer to them as~${\mathcal N}=2$ and~${\mathcal N}=4$ in their respective dimension.) In every case, we analyze the low-energy effective action for the massless fields on the moduli space of vacua. We explain a streamlined approach to the powerful non-renormalization theorems of~\cite{Paban:1998ea,Paban:1998mp,Paban:1998qy,Sethi:1999qv,Maxfield:2012aw}, which follow from maximal supersymmetry.\footnote{~These non-renormalization theorems were originally found by analyzing higher-derivative corrections to BFSS matrix quantum mechanics~\cite{Banks:1996vh}. By contrast with the non-renormalization theorems discussed in~\cite{Dine:1997nq}, which only require eight supercharges but use superconformal symmetry to constrain certain~$D$-terms, the ones discussed in this paper require maximal supersymmetry but also apply in theories that are not conformal.} This allows us to show that the functional form of the effective action at the first several orders in the derivative expansion is completely fixed in terms of a few coefficients. These can then be tracked along the moduli space, and across dimensions. Up to six-derivative order, all such coefficients can be determined from a one-loop calculation in five-dimensional~${\mathcal N}=2$ Yang-Mills theory, using only the standard two-derivative Lagrangian. Although this Yang-Mills Lagrangian is expected to be corrected by irrelevant operators, we show that the only such operators that could contaminate our results are inconsistent with the conformal symmetry of the six-dimensional parent theory. We also explain why it is in general not possible to reproduce the one-loop result in five dimensions from an analogous calculation in a genuinely four-dimensional~${\mathcal N}=4$ Yang-Mills theory.
This understanding leads to a computation of the~$a$-type Weyl anomaly for all~$(2,0)$ SCFTs, in the spirit of~\cite{Maxfield:2012aw}, and a new calculation of their~$R$-symmetry anomaly, along the lines envisioned in~\cite{Intriligator:2000eq}. These papers argued that both anomalies are captured by certain six-derivative terms in the moduli-space effective action, and proposed to fix the coefficients by comparing to~${\mathcal N}=2$ or~${\mathcal N}=4$ Yang-Mills theory in five or four dimensions; they also raised several puzzles that will be addressed below. Using our results, we show that the~$a$-anomaly is strictly decreasing under all renormalization group (RG) flows that preserve~$(2,0)$ supersymmetry, thus verifying the conjectured~$a$-theorem~\cite{Cardy:1988cwa} in six dimensions for this class of flows. We also discuss several field-theoretic arguments for the ADE classification of the~$(2,0)$ theories. One of these arguments only relies on consistency conditions for the moduli-space effective action in six dimensions.
We begin with a brief review of~$(2,0)$ SCFTs, and what is known about their anomalies, before stating our assumptions and summarizing our results in more detail.
\subsection{Review of~$(2,0)$ Theories}
In Lorentzian signature, the~$(2,0)$ superconformal algebra in six dimensions is~$\frak{osp} (8^*|4)$, whose Bosonic subalgebra is the sum of the conformal algebra~$\frak{so}(6,2)$ and the~$R$-symmetry algebra~$\frak {sp}(4)_R = \frak{ so}(5)_R$. All well-defined local operators reside in unitary representations of~$\frak{osp} (8^*|4)$. The only such representation that describes standard free fields is the Abelian tensor multiplet, which contains the following operators:
\begin{itemize}
\item Real scalars~$\Phi^I \; (I = 1,\ldots, 5)$ in the~$\bf 5$ of~$\frak{so}(5)_R$. They satisfy~$\square \Phi^I = 0$ and have scaling dimension~$\Delta = 2$.
\item Weyl Fermions in a~$\bf 4$ of the~$\frak{so}(5,1)$ Lorentz algebra and the~$\bf 4$ of~$\frak{so}(5)_R$, subject to a symplectic Weyl reality condition. They satisfy the free Dirac equation and have scaling dimension~$\Delta = {5\over 2}$.
\item A real, self-dual three-form~$H = *H$, which is the field strength of a two-form gauge field~$B$. Therefore~$H = dB$ is closed and co-closed, $dH = d* H= 0$, and its scaling dimension is~$\Delta = 3$.
\end{itemize}
Since free field theories only admit conventional relevant or marginal interactions in~$d \leq 4$ dimensions, interacting field theories in six dimensions were long thought to be impossible. Surprisingly, decoupling limits of string theory strongly suggest the existence of interacting~$(2,0)$ SCFTs in six dimensions~\cite{Witten:1995zh,Strominger:1995ac,Witten:1995em}. For a review, see~\cite{Seiberg:1997ax, Witten:2009at,Gaiotto:2010be, moorefklect} and references therein. These theories are believed to obey standard axioms of quantum field theory~\cite{Seiberg:1996vs,Seiberg:1996qx}, such as locality and the existence of a stress tensor. However, it is not yet understood how to properly formulate them, and many properties of the known interacting~$(2,0)$ SCFTs, including their existence, have been inferred from their embedding into particular string constructions. Other aspects can be analyzed more generally. For instance, it can be shown that no~$(2,0)$ SCFT possesses relevant or marginal operators that can be used to deform the theory while preserving supersymmetry, because the superconformal algebra~$\frak{osp} (8^*|4)$ does not admit unitary representations that contain such operators~\cite{ckt}.\footnote{~The same statement also applies to~$(1,0)$ SCFTs in six dimensions~\cite{ckt}. However, all six-dimensional SCFTs have relevant deformations that break supersymmetry.}
Every~$(2,0)$ SCFT~${\mathcal T}_{\frak g}$ that can be realized in string theory is locally characterized by a real Lie algebra~$\frak g = \oplus_i \frak g_i$. (Globally, there is additional data; see for instance~\cite{Witten:2009at,moorefklect,Gaiotto:2014kfa} and references therein.) Each~$\frak g_i$ is either~$\frak u(1)$ or a compact, simple Lie algebra of ADE type. A given~$\frak g_i$ gives rise to a theory that is locally, but not necessarily globally, decoupled from the other summands. Moreover, the~$\frak u(1)$ summands are locally described by free Abelian tensor multiplets. As long as we only probe local aspects of the theory, it is therefore sufficient to focus on one~$\frak g_i$ at a time.
Let~$\frak g$ be~$\frak u(1)$ or a compact, simple ADE Lie algebra. In flat Minkowski space~${\mathbb R}^{5,1}$, the theory~${\mathcal T}_{\frak g}$ has a moduli space of vacua,
\begin{equation}
{\mathcal M}_{\frak g} = \left({\mathbb R}^5\right)^{r_{\frak g}} / {\mathcal W}_{\frak g}~.\label{vacua}
\end{equation}
Here~$r_{\frak g}$ and~${\mathcal W}_{\frak g}$ are the rank and Weyl group of~$\frak g$. At a generic point, the low-energy dynamics on the moduli space is described by~$r_{\frak g}$ Abelian tensor multiplets valued in the Cartan subalgebra of~$\frak g$. For this reason, we will also refer to the moduli space as the tensor branch. The vacuum expectation values (vevs) of the five scalars in each tensor multiplet parametrize the~$r_{\frak g}$ different~${\mathbb R}^5$ factors, which are permuted by~${\mathcal W}_{\frak g}$. These vevs spontaneously break both the conformal and the~$\frak{so}(5)_R$ symmetry. The corresponding Nambu-Goldstone (NG) Bosons are supplied by the tensor multiplet scalars. The tensor multiplets, and hence the NG Bosons, weakly interact through higher-derivative, irrelevant operators that are suppressed by powers of the vevs.
At the boundaries of the moduli space, the low-energy dynamics is described by an interacting superconformal theory~${\mathcal T}_{\frak h}$, where~$\frak h \subset \frak g$ is a semisimple subalgebra of lower rank~$r_{\frak h} < r_{\frak g}$, as well as~$r_{\frak g} - r_{\frak h}$ Abelian tensor multiplets. The allowed subalgebras~$\frak h \subset \frak g$ are determined by adjoint Higgsing, so that~$\frak h$ is itself a sum of compact, simple ADE Lie algebras. Therefore, the moduli space~${\mathcal M}_{\frak g}$ has the structure one would intuitively expect from a gauge theory of non-Abelian tensor multiplets in the adjoint representation of~$\frak g$.
This intuition can be sharpened by compactifying~${\mathcal T}_{\frak g}$ on a spatial circle of radius~$R$, with supersymmetric boundary conditions. It follows from arguments in string theory that the five-dimensional effective theory, valid below the Kaluza-Klein (KK) scale~$1 \over R$, is given by maximally supersymmetric Yang-Mills theory with gauge algebra~$\frak g$ and gauge coupling~$g^2$ proportional to~$R$. Therefore its Coulomb branch exactly coincides with~\eqref{vacua}. The fact that the five-dimensional description is a weakly-coupled non-Abelian gauge theory has been essential for exploring the dynamics of~${\mathcal T}_{\frak g}$ using field-theoretic techniques. Since this description is a standard effective theory, we expect an infinite series of irrelevant operators, suppressed by powers of the cutoff set by the KK scale.\footnote{~A different point of view was advocated in~\cite{Douglas:2010iu,Lambert:2010iw,Papageorgakis:2014dma}, where it was argued that the five-dimensional Yang-Mills description could be extended beyond its regime of validity as an effective theory with a cutoff. Here we will work within standard effective field theory, and hence our results are neither in conflict with, nor shed light on, this proposal.} What is known about these corrections will be reviewed below.
\subsection{Anomalies}
Anomalies are robust observables: even in a strongly-coupled theory, they can often be computed by utilizing different effective descriptions, some of which may be weakly coupled. In conformal field theories (CFTs), we can distinguish between 't Hooft anomalies for continuous flavor symmetries (these include gravitational and mixed anomalies) and Weyl anomalies for the conformal symmetry.
The 't Hooft anomalies for the~$\frak{so}(5)_R$ symmetry, as well as gravitational and mixed anomalies, have been computed for all known~$(2,0)$ SCFTs~${\mathcal T}_{\frak g}$. They are summarized by an anomaly eight-form~${\mathcal I}_{\frak g}$, which encodes the anomalous variation of the action in the presence of background gauge and gravity fields via the descent procedure~\cite{AlvarezGaume:1983ig,Bardeen:1984pm}. If~$\frak g$ is~$\frak u(1)$ or a compact, simple ADE Lie algebra, then (in the normalization of~\cite{Intriligator:2000eq}),
\smallskip
\begin{equation}
\mathcal{I}_{\frak{g}}=\frac{k_{\frak g}}{24}p_{2}(F_{\frak{so}(5)_R})+r_{\frak{g}}\mathcal{I}_{\frak{u}(1)}~, \qquad k_{\frak g} = h_{\frak{g}}^{\vee}d_{\frak{g}}~,
\label{anomform}
\end{equation}
\smallskip
where~$F_{\frak{so}(5)_R}$ is the field strength of the background~$R$-symmetry gauge field, while~$h^\vee_{\frak g}$ and~$d_{\frak g}$ are the dual Coxeter number and the dimension of~$\frak g$, respectively. The anomaly polynomial~${\mathcal I}_{\frak u(1)}$ for a free Abelian tensor multiplet encodes all gravitational and mixed anomalies of~${\mathcal T}_{\frak g}$, as well as a contribution to the~$R$-symmetry anomaly. (Its precise form, which can be found in~\cite{Intriligator:2000eq}, will not be needed here.) The anomaly polynomial for~$\frak g = \frak{su}(n)$ was first obtained in \cite{Harvey:1998bx}. The general formula~\eqref{anomform} was conjectured in~\cite{Intriligator:2000eq}. It was verified in~\cite{Yi:2001bz} for~$\frak g = \frak{so}(2n)$, and in~\cite{Ohmori:2014kda} for all ADE Lie algebras.
The conjecture of~\cite{Intriligator:2000eq} was in part motivated by the insight that the constants~$k_{\frak g}$ in~\eqref{anomform} appear in the low-energy effective action for the dynamical fields on the tensor branch of~${\mathcal T}_{\frak g}$, where the theory is weakly coupled. For simplicity, we will focus on rank one adjoint breaking patterns~$\frak g \rightarrow \frak h \oplus \frak u(1)$, which lead to moduli spaces described by a single Abelian tensor multiplet. The difference~$\Delta k = k_{\frak g} - k_{\frak h}$ is the coefficient of a Wess-Zumino (WZ) term, which arises at six-derivative order. This term is needed to match the irreducible 't Hooft anomaly for the spontaneously broken~$\frak{so}(5)_R$ symmetry via a contribution from the~$R$-symmetry NG Bosons. It was suggested in~\cite{Intriligator:2000eq} that~$\Delta k$ could be computed by reducing the theory on a circle and integrating out massive W-Bosons on the Coulomb branch of five-dimensional~${\mathcal N}=2$ Yang-Mills theory. By analogy with~${\mathcal N}=4$ Yang-Mills theory in four dimensions, where a WZ term is generated at one loop on the Coulomb branch~\cite{Tseytlin:1999tp}, one might expect that~$\Delta k$ only depends on~$n_W = d_{\frak g} - (d_{\frak h} + 1)$, the number of~W-Bosons that become massive upon breaking~$\frak g \rightarrow \frak h \oplus \frak u(1)$. However, it was emphasized in~\cite{Intriligator:2000eq} that this is inconsistent with the values of~$\Delta k$ and~$n_W$ for the breaking patterns~$\frak{su}(n+1) \rightarrow \frak{su}(n) \oplus \frak u(1)$ and~$\frak{so}(2n) \rightarrow \frak{su}(n) \oplus \frak u(1)$, even at large~$n$. One of our goals in this paper is to explain why the four-dimensional intuition is misleading.
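The mismatch can be made explicit with a few lines of arithmetic (ours), using~$k_{\frak g} = h^\vee_{\frak g} d_{\frak g}$ from~\eqref{anomform}: one finds~$\Delta k = 3n(n+1) \sim \frac{3}{4} n_W^2$ for~$\frak{su}(n+1) \rightarrow \frak{su}(n) \oplus \frak u(1)$, but~$\Delta k = 3n(n-1)^2 \sim 3\, n_W^{3/2}$ for~$\frak{so}(2n) \rightarrow \frak{su}(n) \oplus \frak u(1)$, so no single function of~$n_W$ fits both families. For instance, $n_W = 20$ corresponds to~$\Delta k = 330$ in the first family ($n=10$) but~$\Delta k = 240$ in the second ($n=5$).
\begin{verbatim}
def k(hv, d):
    return hv * d                       # k_g = (dual Coxeter number) x (dimension)

def su(n):
    return (n, n * n - 1)               # (h^vee, d) for su(n)

def so(m):                              # so(2n): pass m = 2n
    n = m // 2
    return (2 * n - 2, n * (2 * n - 1))

for n in (5, 10, 100):
    dk_su = k(*su(n + 1)) - k(*su(n))   # su(n+1) -> su(n) + u(1)
    nw_su = su(n + 1)[1] - su(n)[1] - 1
    dk_so = k(*so(2 * n)) - k(*su(n))   # so(2n) -> su(n) + u(1)
    nw_so = so(2 * n)[1] - su(n)[1] - 1
    print(n, (nw_su, dk_su), (nw_so, dk_so))
\end{verbatim}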
CFTs in even spacetime dimensions also have Weyl anomalies, which manifest as a violation of conformal invariance in the presence of a background spacetime metric. This is quantified by the anomalous trace of the stress tensor in such a background, whose scheme-independent part takes the following general form~\cite{Deser:1993yx},
\begin{equation}
\langle T^{\mu}_{\mu}\rangle= a E_{d}+\sum_{i} c_i I_i~. \label{tracet}
\end{equation}
Here the spacetime dimension~$d$ is even, $E_d$ is the Euler density, and the~$I_i$ are local Weyl invariants of weight~$d$. The number of independent~$I_i$ depends on~$d$, e.g.~in four dimensions there is one, and in six dimensions there are three.\footnote{~In two dimensions there is no~$c$-type anomaly and the~$a$-anomaly coincides with the Virasoro central charge. As a result, it is typically denoted by~$c$, even though it multiplies the two-dimensional Euler density.}
The dimensionless constants~$a$ and~$c_i$ are well-defined observables of the CFT. They are determined by certain flat-space correlators of the stress tensor at separated points. In two and four dimensions, it was shown~\cite{Zamolodchikov:1986gt,Cardy:1988cwa,Komargodski:2011vj,Komargodski:2011xv} that unitary RG flows interpolating between CFTs in the UV and in the IR respect the following inequality,
\begin{equation}
a_{\rm UV} > a_{\rm IR}~.
\end{equation}
Therefore, $a$ provides a quantitative measure of the number of degrees of freedom in a CFT. An analogous $a$-theorem has been conjectured~\cite{Cardy:1988cwa}, and was recently investigated~\cite{Elvang:2012st}, for RG flows in six dimensions, but a proof is currently lacking. The ability to test this conjecture is limited by the paucity of interacting six-dimensional CFTs for which~$a$ has been computed.
In~$(2,0)$ SCFTs, the three independent~$c$-type anomalies that are present in six dimensions are believed to be proportional to a single constant~$c$~(see~\cite{Bastianelli:2000hi,Beem:2014kka} for some compelling evidence). We can therefore normalize~$c_{1,2,3} = c$ in~\eqref{tracet}. It is convenient to fix the remaining normalizations by demanding that the Weyl anomalies of a free Abelian tensor multiplet~${\mathcal T}_{\frak u(1)}$, which were computed in~\cite{Bastianelli:2000hi}, take the following simple values,
\begin{equation}
a_{\frak u(1)} = c_{\frak u(1)} = 1~.
\end{equation}
Less is known about the Weyl anomalies of interacting~$(2,0)$ theories~${\mathcal T}_{\frak g}$ with~$\frak g$ a compact, simple ADE Lie algebra. At the conformal point, $c_{\frak g}$ can be extracted from a stress tensor two-point function, while~$a_{\frak g}$ requires a four-point function. For~$\frak g = \frak{su}(n)$ and~$\frak{so}(2n)$, the leading large-$n$ behavior of~$a_{\frak g}$ and~$c_{\frak g}$ can be determined from their~$AdS_7 \times S^4$ and~$AdS_7 \times {\mathbb R} \mathbb{P}^4$ duals~\cite{Henningson:1998gx, Intriligator:2000eq}. Subleading corrections for the~$\frak{su}(n)$ case were suggested in~\cite{Tseytlin:2000sf,Beccaria:2014qea}, motivated by aspects of the holographic dual. A recent conjecture~\cite{Beem:2014kka}, which applies to all~$\frak g$ and passes several non-trivial consistency checks, identifies~$c_{\frak g}$ with the central charge of a known chiral algebra in two dimensions,
\smallskip
\begin{equation}
c_{\frak g}=4 h_{\mathfrak{g}}^{\vee} d_{\mathfrak{g}}+r_{\mathfrak{g}}~.\label{cresult}
\end{equation}
\smallskip
A method for determining~$a_{\frak g}$ was proposed in~\cite{Maxfield:2012aw}. Following the work of \cite{Komargodski:2011vj,Komargodski:2011xv} in four dimensions, it was shown in~\cite{Maxfield:2012aw,Elvang:2012st} that~$a_{\frak g}$ appears in the effective action on the tensor branch of~${\mathcal T}_{\frak g}$, where conformal symmetry is spontaneously broken. We again focus on rank one breaking~$\frak g \rightarrow \frak h \oplus \frak u(1)$. Now the difference~$\Delta a = a_{\frak g} - \left(a_{\frak h} + 1\right)$ appears as the coefficient of a six-derivative WZ-like interaction term for the dilaton (the NG Boson of spontaneous conformal symmetry breaking), which is needed to match the~$a$-anomalies of the UV and IR theories. The authors of~\cite{Maxfield:2012aw} argued for a non-renormalization theorem that fixes~$\Delta a \sim b^2$, where~$b$ is the coefficient of a four-derivative term in the tensor-branch effective action. By compactifying the theory on~$T^2$ and tracking this four-derivative term as the torus shrinks to zero size, they argued that~$b$ could be extracted from a one-loop computation in four-dimensional~${\mathcal N}=4$ Yang-Mills theory with gauge algebra~$\frak g$. This leads to~$b \sim n_W$, the number of massive~W-Bosons. However, just as for~$\Delta k$ above, the large-$n$ asymptotics of~$a_{\frak g}$ for~$\frak g = \frak{su}(n)$ and~$\frak g = \frak{so}(2n)$, which are known from holography, imply that~$\Delta a$ cannot just depend on~$n_W$. This puzzle will also be resolved in the course of our investigation.
\bigskip
\subsection{Assumptions}
In this paper, we will analyze the moduli space effective actions of~$(2,0)$ SCFTs and their compactifications on~$S^1$ and~$T^2$. The goal is to learn as much as possible about the interacting theories using field-theoretic arguments and a small number of assumptions. While these assumptions are currently motivated by explicit string constructions, we hope that they can ultimately be justified for all~$(2,0)$ SCFTs -- including putative theories that do not have a known string realization. (This approach to the~$(2,0)$ theories has, for instance, been advocated in~\cite{moorefklect}.) In this spirit, the arguments and conclusions of this paper only rely on the following assumption.
\begin{framed}
\noindent{\bf Assumption:} When a six-dimensional~$(2,0)$ SCFT is supersymmetrically compactified on a spatial~$S^1_R$ of radius~$R$, the effective low-energy description at energies far below the Kaluza-Klein scale~$1 \over R$ is given by a five-dimensional~${\mathcal N}=2$ Yang-Mills theory.
\end{framed}
\noindent As was already stressed above, we expect this low-energy description to be corrected by irrelevant operators, which are suppressed by powers of the KK scale, and we make no a priori assumptions about their coefficients.
Since unitarity requires the gauge algebra~$\frak g$ in five dimensions to be a real Lie algebra comprised of~$\frak u(1)$ and compact simple summands, our assumption implies that every~$(2,0)$ theory is associated with such a Lie algebra. For the remainder of this paper, we will use~${\mathcal T}_{\frak g}$ to collectively denote all~$(2,0)$ theories that give rise to five-dimensional Yang-Mills theories with gauge algebra~$\frak g$. However, we will not assume that all such theories are the same. For instance, their five-dimensional descriptions might differ by irrelevant operators. From this point of view, it is no longer clear that a~$(2,0)$ SCFT whose five-dimensional description is a~$\frak u(1)$ gauge theory must be a free Abelian tensor multiplet, but we will show that this is indeed the case in section~3.4. For ease of presentation, we will focus on one summand of~$\frak g$ at a time, since our results are not affected by this. Throughout the paper, $\frak g$ will therefore denote~$\frak u(1)$ or a compact, simple Lie algebra.
We also do not input the assumption that~$\frak g$ is simply laced, even though we are considering a standard~$S^1$ compactification. (In particular, we do not turn on outer automorphism twists~\cite{Vafa:1997mh}; see~\cite{Witten:2009at,Tachikawa:2011ch} for a recent discussion.) Instead, we will allow arbitrary~$\frak g$ and derive the ADE restriction from consistency conditions in field theory. We will also review standard arguments that show why~${\mathcal T}_{\frak g}$ has a moduli space of vacua given by~\eqref{vacua}, and why the allowed breaking patterns are determined by adjoint Higgsing.
So far we have only mentioned the gauge algebra~$\frak g$. In gauge theory, one should specify a gauge group~$G$, whose Lie algebra is~$\frak g$. This is needed to define global aspects of the theory, such as the spectrum of allowed line operators or partition functions on topologically non-trivial manifolds. In maximally supersymmetric Yang-Mills theories that descend from~$(2,0)$ SCFTs, the choice of~$G$ arises due to subtle properties of the six-dimensional parent theory (see for instance~\cite{Witten:2009at,moorefklect,Tachikawa:2013hya}). Much of our discussion only refers to the Lie algebra~$\frak g$, but we will also encounter some global issues that depend on a choice of gauge group~$G$.
\subsection{Summary of Results}
In section~2 we consider the low-energy effective action on the moduli space of~${\mathcal T}_{\frak g}$ in flat Minkowski space~${\mathbb R}^{5,1}$. We focus on rank one breaking patterns~$\frak g \rightarrow \frak h \oplus \frak u(1)$, so that the moduli space is described by a single Abelian tensor multiplet. We review the WZ-like six-derivative terms that are required for the matching of the~$R$-symmetry anomaly in~\eqref{anomform} and the Weyl~$a$-anomaly in~\eqref{tracet} between the UV and IR theories~\cite{Intriligator:2000eq,Maxfield:2012aw,Elvang:2012st}. We then give a simple proof of the non-renormalization theorem of~\cite{Maxfield:2012aw}, which implies that all terms in the effective action up to six-derivative order are controlled by a single coefficient~$b$ residing at four-derivative order. In particular, the coefficients of the six-derivative WZ terms are quadratically related to~$b$,\footnote{~As we will explain in section~2, this statement only holds if the fields in the Abelian tensor multiplet are canonically normalized.}
\begin{equation}\label{deltaka}
\Delta k = k_{\frak g} -k_{\frak h} \sim b^2~, \qquad \Delta a = a_{\frak g} - (a_{\frak h} +1) \sim b^2~.
\end{equation}
Here~$\sim$ implies equality up to model-independent constants that are fixed by supersymmetry. Our proof of the non-renormalization theorems leading to~\eqref{deltaka} only relies on results from superconformal representation theory~\cite{ckt}. We also present a complementary point of view based on scattering superamplitudes for the fields in the tensor multiplet.
In section~3 we study the~$(2,0)$ theories~${\mathcal T}_{\frak g}$ on~${\mathbb R}^{4,1} \times S^1_R$. By our assumption in section~1.3, the low-energy description is a five-dimensional~${\mathcal N}=2$ Yang-Mills theory deformed by higher-derivative operators. As in six dimensions, we use non-renormalization theorems to track the Coulomb-branch effective action from the origin, where the five-dimensional Yang-Mills description is valid, to large vevs, where it is simply related to the tensor branch effective action of the six-dimensional parent theory~${\mathcal T}_{\frak g}$. This leads to new results about both regimes. The Yang-Mills description allows us to calculate the coefficient~$b$ appearing in~\eqref{deltaka} by integrating out W-Bosons at one loop, and via~\eqref{deltaka} also the coefficients of the six-dimensional WZ terms. (This is possible despite the fact -- emphasized in~\cite{Intriligator:2000eq} -- that the~$R$-symmetry WZ term vanishes when it is reduced to five dimensions.) Conversely, the conformal invariance of~${\mathcal T}_{\frak g}$ forces the leading possible higher-derivative operators at the origin of the five-dimensional Coulomb branch to vanish. Finally, matching the massive~${1 \over 2}$-BPS states on the Coulomb and tensor branches leads to the requirement that~$\frak g$ be simply laced.
In section~4, we combine the computation of~$b$ from section~3 with the relations~\eqref{deltaka} obtained in section~2 to compute the Weyl anomaly~$a_{\frak g}$ and the~$R$-symmetry anomaly~$k_{\frak g}$ for all~$(2,0)$ theories~${\mathcal T}_{\frak g}$. For the~$a$-anomaly, we find
\begin{equation}
a_{\frak g}=\frac{16}{7}h_{\mathfrak{g}}^{\vee} d_{\mathfrak{g}}+r_{\mathfrak{g}}~, \label{aresult}
\end{equation}
in agreement with previous large-$n$ results for~$\frak g = \frak{su}(n)$ and~$\frak g = \frak{so}(2n)$ from holography. We also show that the~$a$-anomaly decreases under all RG flows that preserve~$(2,0)$ supersymmetry, in agreement with the conjectured~$a$-theorem~\cite{Cardy:1988cwa}. As we will see, the positivity of~$\Delta a$ for all such flows essentially follows from~\eqref{deltaka}.
For the~$R$-symmetry anomaly~$k_{\frak g}$ we recover~\eqref{anomform}, which was derived in~\cite{Ohmori:2014kda} by considering a one-loop exact Chern-Simons term in five dimensions that involves dynamical and background fields. By contrast, we access~$\Delta k$ through a six-derivative term for the dynamical fields. The coefficient~$\Delta k$ is quantized, because it multiplies a WZ term in the tensor-branch effective action. We show that this quantization condition can always be violated when~$\frak g$ is not simply laced. This constitutes an alternative argument for the ADE restriction on~$\frak g$.
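Both statements are easy to spot-check numerically. The short script below (ours) runs over the rank one breakings~$\frak{su}(n) \rightarrow \frak{su}(n-1) \oplus \frak u(1)$, using~$k_{\frak g} = h^\vee_{\frak g} d_{\frak g}$ from~\eqref{anomform} and~$a_{\frak g}$ from~\eqref{aresult}, and verifies the quantization condition~$\Delta k \in 6{\mathbb Z}$ of~\eqref{wzwquant} and the positivity~$\Delta a > 0$ in this family.
\begin{verbatim}
from fractions import Fraction

def k_su(n):
    return n * (n * n - 1)             # h^vee = n, d = n^2 - 1; zero for n = 1

def a_su(n):
    return Fraction(16, 7) * n * (n * n - 1) + (n - 1)

for n in range(2, 100):
    dk = k_su(n) - k_su(n - 1)         # = 3n(n-1), always a multiple of 6
    da = a_su(n) - (a_su(n - 1) + 1)   # the +1 is the free tensor multiplet
    assert dk % 6 == 0                 # WZ quantization of Delta-k
    assert da > 0                      # (2,0)-preserving a-theorem
\end{verbatim}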
Both our result~\eqref{aresult} for the~$a$-anomaly, and the conjectured formula~\eqref{cresult} for the~$c$-anomaly are linear combinations of~$h^\vee_{\frak g} d_{\frak g}$ and~$r_{\frak g}$, which determine the anomaly eight-form in~\eqref{anomform}. In fact, once such a relationship between the independent Weyl and 't Hooft anomalies is assumed, \eqref{cresult} and~\eqref{aresult} can be obtained by fitting to the known anomalies of a free Abelian tensor multiplet and one reliable large-$n$ example from holography. As in four dimensions~\cite{Anselmi:1997am}, linear relations between Weyl and 't Hooft anomalies are captured by anomalous stress-tensor supermultiplets, which contain both the anomalous divergence of the~$R$-current and the anomalous trace of the stress tensor. For six-dimensional SCFTs, these anomaly multiplets are currently under investigation~\cite{ctkii}.
In section 5 we consider~$(2,0)$ SCFTs~${\mathcal T}_{\frak g}$ on~${\mathbb R}^{3,1} \times T^2$, where~$T^2 = S^1_R \times S^1_r$ is a rectangular torus of finite area~$A = R r$. We describe their moduli spaces of vacua, which depend on a choice of gauge group~$G$, and the singular points at which there are interacting~${\mathcal N}=4$ theories. In addition to the familiar theory with gauge group~$G$, which resides at the origin, there are typically additional singular points at finite distance~$\sim A^{-{1 \over 2}}$ (sometimes with a different gauge group), which move to infinite distance when the torus shrinks to zero size. We illustrate these phenomena in several examples and explain the underlying mechanism. As before, we use non-renormalization theorems to determine the four-derivative terms in the Coulomb-branch effective action via a one-loop calculation in five-dimensional~${\mathcal N}=2$ Yang-Mills theory, which now includes a sum over KK modes. In general, the result cannot be interpreted as arising from a single~${\mathcal N}=4$ theory in four dimensions. We also determine the leading higher-derivative operators that describe the RG flow into the different interacting~${\mathcal N}=4$ theories on the moduli space.
Appendix~A summarizes aspects of scattering superamplitudes in six and five dimensions, which provide an alternative approach to the non-renormalization theorems discussed in this paper (see especially section~2.3).
\section{The Tensor Branch in Six Dimensions}
In this section we analyze the low-energy effective Lagrangian~${\mathscr L}_{\text{tensor}}$ on the tensor branch of a~$(2,0)$ theory~${\mathcal T}_{\frak g}$ in six-dimensional Minkowski space~${\mathbb R}^{5,1}$. For simplicity, we focus on branches of moduli space described by a single Abelian tensor multiplet, which arise from breaking patterns of the form~$\frak g \rightarrow \frak h \oplus \frak u(1)$. Here~$\frak h$ is a sum of compact simple Lie algebras that is obtained by deleting a single node in the Dynkin diagram of~$\frak g$, i.e.~by adjoint Higgsing. We review what is known about~${\mathscr L}_{\text{tensor}}$ on general grounds, before turning to a systematic discussion of the constraints that follow from supersymmetry.
\subsection{General Properties}
We consider branches of moduli space that are parametrized by the vevs~$\langle \Phi^I \rangle$ of the five scalars in a single Abelian tensor multiplet. At low energies, the fields in the tensor multiplet become free, i.e.~they satisfy the free equations of motion reviewed in section~1.1. Naively, these are summarized by a quadratic Lagrangian,
\begin{equation}
{\mathscr L}_{\text{free}} = -{1 \over 2} \sum_{I = 1}^5 (\partial_\mu\Phi^I)^2 - {1 \over 2} H \wedge * H + (\text{Fermions})~, \label{freetensoract}
\end{equation}
where the signs are due to the fact that we are working in Lorentzian signature~$- + \cdots +$. However, the fact that~$H$ is self-dual implies that~$H\wedge *H = 0$. While this is not a problem classically, where the self-duality constraint can be imposed on the equations of motion, defining the quantum theory of a free self-dual three-form requires some sophistication. Nevertheless, it is well understood (see for instance~\cite{Witten:2007ct,Witten:2009at,moorefklect} and references therein). Below we will deform~${\mathscr L}_{\text{free}}$ by adding higher-derivative operators constructed out of the fields in the tensor multiplet. These cause no additional complications beyond those that are already present at the two-derivative level. With this in mind, we will use~${\mathscr L}_{\text{free}}$ to denote the theory of a free Abelian tensor multiplet.
The vevs~$\langle\Phi^I\rangle$ spontaneously break both the conformal symmetry and the~$R$-symmetry. Since~$\Phi^I$ is a vector of~$\frak{so}(5)_R$, the~$R$-symmetry is broken to~$\frak{so}(4)_R$. It is convenient to introduce radial and transverse variables,
\begin{equation}\label{ngBosons}
\Psi = \left(\sum_{I = 1}^5 \Phi^I \Phi^I\right)^{1 \over 2}~, \qquad \widehat \Phi^I = {\Phi^I \over \Psi}~.
\end{equation}
The field~$\Psi$ has dimension two, while the~$\widehat \Phi^I$ are dimensionless. Therefore~$\langle \Psi \rangle$ is the only dimensionful parameter, which sets the scale of conformal and~$R$-symmetry breaking. The fluctuations of~$\Psi$ describe the dilaton, the NG Boson of conformal symmetry breaking, and the fluctuations of the transverse fields~$\widehat \Phi^I$ describe the four NG Bosons that must arise upon breaking~$\frak{so}(5)_R \rightarrow \frak{so}(4)_R$. Note that~$\sum_{I = 1}^5 \widehat \Phi^I \widehat \Phi^I = 1$, so that the~$\widehat \Phi^I$ describe a unit~$S^4 = SO(5)_R/SO(4)_R$\,.
Upon activating~$\langle \Phi^I \rangle$, some degrees of freedom in~${\mathcal T}_{\frak g}$ acquire masses of order~$\sqrt{\langle \Psi\rangle}$. The remaining massless degrees of freedom are the interacting theory~${\mathcal T}_{\frak h}$ and the Abelian tensor multiplet containing the NG Bosons~\eqref{ngBosons}. It follows from Goldstone's theorem that the theories at the origin and on the tensor branch decouple at very low energies, and moreover that the multiplet of NG Bosons becomes free. We will focus on an effective Lagrangian~${\mathscr L}_{\text{tensor}}$ for the Abelian tensor multiplet. Integrating out the massive degrees of freedom at the scale~$\sqrt{\langle \Psi\rangle}$ induces weak, higher-derivative interactions for the NG Bosons and their superpartners.\footnote{~We follow the standard rules for counting derivatives in supersymmetric theories: $\partial_\mu$ and~$H$ both have weight~$1$, Fermions (including the supercharges) have weight~${1 \over 2}$, and the scalars~$\Phi^I$ have weight~$0$.} Schematically,
\begin{equation}\label{ltensor}
{\mathscr L}_{\text{tensor}} = {\mathscr L}_{\text{free}} + \sum_i f_i(\Phi^I) {\mathcal O}_i~,
\end{equation}
where~${\mathcal O}_i$ is a higher-derivative operator of definite~$R$-charge and scaling dimension that is constructed out of fields in the Abelian tensor multiplet. The higher-derivative interactions are constrained by (non-linearly realized) conformal and~$R$-symmetry, as well as~$(2,0)$ supersymmetry. For instance, every~${\mathcal O}_i$ in~\eqref{ltensor} is multiplied by a scale-invariant coefficient function~$f_i(\Phi^I)$ of the moduli fields, so that their product is marginal and~$\frak{so}(5)_R$ invariant. If we expand the~$\Phi^I$ in fluctuations around their vevs~$\langle \Phi^I\rangle$, then~\eqref{ltensor} reduces to a standard effective Lagrangian with irrelevant local operators suppressed by powers of the cutoff~$\sqrt{\langle\Psi\rangle}$. Integrating out massive fields at the scale~$\sqrt{ \langle \Psi\rangle}$ also leads to irrelevant interactions that couple the NG Bosons and their superpartners to the interacting SCFT~${\mathcal T}_{\frak h}$ at the origin. Below, we will comment on why such couplings will not play a role in our discussion.
The constraints of non-linearly realized conformal symmetry for the self-interactions of the dilaton~$\Psi$ were analyzed in~\cite{Elvang:2012st} (see also
\cite{Maxfield:2012aw,Elvang:2012yc}). The leading dilaton self-interactions arise at four-derivative order and are controlled by a single dimensionless coupling~$b$,\footnote{~Our coupling~$b$ should not be confused with a similar coupling that appears in~\cite{Elvang:2012st}. In particular, our~$b$ is dimensionless, while the one in~\cite{Elvang:2012st} is not. They are related by~$b_{\text{us}} = b_{\text{them}} / 4 \langle \Psi \rangle$.}
\begin{equation}\label{fourd}
b \, {(\partial \Psi)^4 \over \Psi^3} \subset{\mathscr L}_{\text{tensor}}~.
\end{equation}
Note that this term is not invariant under rescaling~$\Psi$, and hence this definition of~$b$ is tied to the canonically normalized kinetic terms in~\eqref{freetensoract}. In this normalization, $b$ controls the on-shell scattering amplitude of four dilatons at tree level~\cite{Elvang:2012st}. It follows from a dispersion relation for this amplitude that~$b \geq 0$, and that~$b = 0$ if and only if the dilaton is completely non-interacting~\cite{Adams:2006sv}, just as in the proof of the four-dimensional~$a$-theorem~\cite{Komargodski:2011vj,Komargodski:2011xv}.
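As a quick consistency check on~\eqref{fourd}: since~$\Psi$ has dimension two, $\partial \Psi$ has dimension three, and the interaction is marginal,
\begin{equation}
\left[ {(\partial \Psi)^4 \over \Psi^3} \right] = 4 \cdot 3 - 3 \cdot 2 = 6~,
\end{equation}
as required by (non-linearly realized) scale invariance, consistent with~$b$ being dimensionless. The same counting applies to the six-derivative term discussed next.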
At six-derivative order, conformal symmetry requires a very particular interaction term for the dilaton. Schematically, it is proportional to
\begin{equation}\label{awzw}
\Delta a \, {(\partial \Psi)^6 \over \Psi^6} \subset {\mathscr L}_{\text{tensor}}~, \qquad \Delta a = a_{\frak g} - \left(a_{\frak h} + 1\right)~.
\end{equation}
Again, it is convenient to define it using tree-level dilaton scattering amplitudes~\cite{Elvang:2012st}. The term in~\eqref{awzw} is required by anomaly matching between the UV theory~${\mathcal T}_{\frak g}$ and the IR theory consisting of~${\mathcal T}_{\frak h}$ and an Abelian tensor multiplet. The~$a$-anomaly is, in a sense, irreducible and non-Abelian. It therefore requires a non-trivial WZ-like term for the dynamical dilaton, even if the background metric is flat~\cite{Komargodski:2011vj,Komargodski:2011xv}.
A similar WZ term for the~NG Bosons~$\widehat \Phi^I$ is required by anomaly matching for the~$\frak{so}(5)_{R}$ symmetry~\cite{Intriligator:2000eq}. As is typical of such a term, it is convenient to write it as an integral over a seven-manifold~$X_7$ that bounds spacetime. Under suitable conditions (explained in~\cite{Intriligator:2000eq}) we can extend the NG fields to a map~$\widehat \Phi: X_{7}\rightarrow S^{4}$ and pull back the unit volume form~$\omega_4$ on~$S^4$ (i.e.~$\int_{S^4} \omega_4 = 1$) to define a three-form~$\Omega_3$ via
\begin{equation}
d \Omega_3 = \widehat \Phi^*(\omega_4)~.
\end{equation}
The WZ term of~\cite{Intriligator:2000eq} can then be written as follows,
\begin{equation}\label{kenwzw}
\frac{\Delta k}{6}\int_{X_{7}}\Omega_{3}\wedge d \Omega_{3} \subset {\mathscr L}_{\text{tensor}}~, \qquad \Delta k = k_{\frak g} - k_{\frak h}~.
\end{equation}
This term is needed to match the irreducible~$\frak{so}(5)_R$ anomaly~$k$ in~\eqref{anomform} between the UV and IR theories. Requiring the six-dimensional action to be well defined leads to a quantization condition~\cite{Intriligator:2000eq},
\begin{equation}\label{wzwquant}
\Delta k \in 6 {\mathbb Z}~.
\end{equation}
The presence of the term~\eqref{kenwzw} and the quantization condition~\eqref{wzwquant} are general requirements, which do not rely on the known answer~\eqref{anomform} for~$k_{\frak g}$ in~ADE-type~$(2,0)$ theories. (This will play an important role in section~4.3.) Since the three-form~$\Omega_3$ only depends on the scalars~$\widehat \Phi^I$, it must contain three derivatives. Therefore the integrand in~\eqref{kenwzw} is a seven-derivative term integrated over~$X_7$, which leads to a conventional six-derivative term in spacetime (albeit one that is not manifestly~$\frak{so}(5)_R$ invariant). This term gives rise to interactions involving at least seven NG Bosons.\footnote{~The same phenomenon occurs in the chiral Lagrangian for low-energy QCD: a WZ term arises at four-derivative order, and it describes an interaction involving two K mesons and three pions~\cite{Witten:1983tw}.}
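As an illustration of~\eqref{wzwquant}, consider the breaking pattern~$\frak{su}(n+1) \rightarrow \frak{su}(n) \oplus \frak u(1)$. If one assumes the normalization~$k_{\frak{su}(m)} = m^3 - m$ (i.e.~$k_{\frak g} \sim h^\vee_{\frak g} d_{\frak g}$; we quote this purely as an illustrative assumption, since the precise value depends on the conventions entering~\eqref{anomform}), then
\begin{equation}
\Delta k = \left[ (n+1)^3 - (n+1) \right] - \left[ n^3 - n \right] = 3 n (n+1)~,
\end{equation}
which is indeed divisible by six, since~$n(n+1)$ is always even.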
In this paper, we will focus on the four- and six-derivative terms in~${\mathscr L}_{\text{tensor}}$ and their relation to anomalies via the WZ terms in~\eqref{awzw} and~\eqref{kenwzw}. As was reviewed above, these terms control the tree-level scattering amplitudes for the Abelian tensor multiplet up to and including~${\mathcal O}(p^6)$ in the momentum expansion. At higher orders in the derivative expansion, the discussion becomes more involved. On the one hand there are additional terms in~${\mathscr L}_{\text{tensor}}$, which are increasingly less constrained. On the other hand, it is no longer legitimate to ignore the interaction between the Abelian tensor multiplet and the interacting SCFT~${\mathcal T}_{\frak h}$ at the origin. Such interactions generally contribute non-analytic terms to the scattering amplitudes of tensor-branch fields that reflect the interacting massless degrees of freedom in~${\mathcal T}_{\frak h}$. However, just as in four dimensions~\cite{Komargodski:2011xv,Luty:2012ww}, these effects do not contaminate the contribution of the WZ terms at~${\mathcal O}(p^6)$, or the terms at lower order.\footnote{~We thank~Z.~Komargodski for a useful discussion about this issue.}
It is natural to ask what massive degrees of freedom should be integrated out at the scale~$\sqrt{\langle \Psi\rangle}$ in order to generate the interaction terms in~\eqref{fourd}, \eqref{awzw}, and~\eqref{kenwzw}, as well as other higher-derivative terms. In maximally supersymmetric Yang-Mills theories these are the W-Bosons and their superpartners, which become massive on the Coulomb branch. String constructions suggest that the analogous objects in~$(2,0)$ theories are dynamical strings~(see for instance~\cite{Gaiotto:2010be,moorefklect} for a review) -- we will refer to them as W-strings. On the tensor branch, the supersymmetry algebra is modified to include a brane charge~$Z_\mu^I$ proportional to~$\langle \Phi^I\rangle$.\footnote{~The supersymmetry algebra also admits another brane charge, which is associated with~${1 \over 2}$-BPS three-branes, i.e.~objects of codimension two. We will not discuss them here. } This brane charge is activated by~${1 \over 2}$-BPS strings, whose tension is therefore proportional to~$\langle \Psi\rangle$. On the tensor branch, such strings arise as dynamical solitons, which can be identified with the W-strings~\cite{Intriligator:2000eq}, and act as sources for the two-form gauge field~$B$ in the tensor multiplet. This in turn explains several features of~${\mathscr L}_{\text{tensor}}$. For instance, \eqref{wzwquant} can be interpreted as a Dirac quantization condition for these strings~\cite{Intriligator:2000eq}. It was also suggested in~\cite{Intriligator:2000eq} that the interaction terms in~${\mathscr L}_{\text{tensor}}$ might arise by integrating out the W-strings. This intuition can be made precise, and even quantitative, by compactifying the theory to five dimensions, as we will see in section~3.
\subsection{Non-Renormalization Theorems}
We will now discuss the constraints on~${\mathscr L}_{\text{tensor}}$ that follow from supersymmetry. In particular, we will derive a strong form of the non-renormalization theorems in~\cite{Maxfield:2012aw}. As in related work on maximally supersymmetric Yang-Mills theories~\cite{Paban:1998ea,Paban:1998mp,Paban:1998qy,Sethi:1999qv}, these non-renormalization theorems were originally obtained by examining special properties of particular multi-Fermion terms in the moduli-space effective action, which imply differential equations for some of the coefficient functions~$f_i(\Phi^I)$ in~\eqref{ltensor}. Here we present a simple, general approach to these non-renormalization theorems, which elucidates their essentially group-theoretic origin. The discussion proceeds in two steps:
\begin{itemize}
\item[1.)] We expand all coefficient functions~$f_i(\Phi^I)$ in~\eqref{ltensor} in fluctuations around a fixed vev,
\begin{equation}\label{fexp}
\Phi^I = \langle \Phi^I \rangle + \delta \Phi^I~, \qquad f_i(\Phi^I) = f_i |_{\langle \Phi\rangle} + \partial_I f_i |_{\langle \Phi\rangle} \, \delta \Phi^I+ {1 \over 2} \partial_I \partial_J f_i |_{\langle \Phi\rangle} \, \delta \Phi^I\delta \Phi^J+ \, \cdots~.
\end{equation}
This reduces every interaction term~$f_i(\Phi^I) {\mathcal O}_i$ in~\eqref{ltensor} to an infinite series of standard local operators, multiplied by suitable powers of~$\langle \Phi^I\rangle$. These local operators are constructed out of fields in the free Abelian tensor multiplet described by~${\mathscr L}_{\text{free}}$, and hence they must organize themselves into conventional (irrelevant) supersymmetric deformations of~${\mathscr L}_{\text{free}}$.
\item[2.)] If certain operators that arise by expanding a certain coefficient function~$f_i(\Phi^I)$ cannot be identified with any supersymmetric deformation of~${\mathscr L}_{\text{free}}$, then~$f_i(\Phi^I)$ satisfies a non-renormalization theorem. This step requires a complete understanding of all supersymmetric local operators that can be constructed in the theory described by~${\mathscr L}_{\text{free}}$.
\end{itemize}
\noindent We will now demonstrate this logic by constraining the higher-derivative terms in~${\mathscr L}_{\text{tensor}}$. However, note that the method is very general. In particular, the theory around which we expand need not be free. The main simplification is that expanding the coefficient functions as in~\eqref{fexp} leads to the problem of classifying conventional supersymmetric deformations, which can be addressed by a variety of methods. Some applications of our approach to moduli-space non-renormalization theorems have already appeared in~\cite{Wang:2015jna,Lin:2015ixa}.
In the present example, we can use the fact that~${\mathscr L}_{\text{free}}$ is a (free) SCFT to invoke a classification of supersymmetric deformations that can be obtained using superconformal representation theory~\cite{ckt}. However, any other method of classifying supersymmetric deformations is equally valid. Below, we will also mention approaches based on scattering superamplitudes, as well as superspace. It follows from results in~\cite{ckt} that all Lorentz-scalar supersymmetric deformations of a single, free Abelian tensor multiplet fall into two classes,\footnote{~Using two or more Abelian tensor multiplets, it is also possible to construct a six-derivative~$1 \over 4$-BPS deformation~\cite{ckt}.} which we will refer to as~$F$-terms and~$D$-terms:
\begin{itemize}
\item $F$-term deformations are schematically given by
\begin{equation}\label{fterm}
{\mathscr L}_F = Q^8 \left(\Phi^{(I_1} \cdots \Phi^{I_n)} - (\text{traces})\right)~, \qquad (n \geq 4)~.
\end{equation}
The operator~$\Phi^{(I_1} \cdots \Phi^{I_n)} - (\text{traces})$ is~${1 \over 2}$-BPS, which is why~${\mathscr L}_F$ only involves eight supercharges. It is therefore a four-derivative term. The~$R$-symmetry indices of the supercharges and the scalars are contracted so that~${\mathscr L}_F$ transforms as a traceless symmetric~$(n-4)$-tensor of~$\frak{so}(5)_R$. The restriction~$n\geq 4$ is due to the fact that~${\mathscr L}_F$ vanishes when~$n \leq 3$.
\item $D$-term deformations take the form
\begin{equation}\label{dterm}
{\mathscr L}_D = Q^{16} {\mathcal O}~,
\end{equation}
where~${\mathcal O}$ is a Lorentz scalar. Such terms always contain at least eight derivatives.
\end{itemize}
\smallskip
As a simple illustration of our method, we use it to demonstrate the well-known non-renormalization of the kinetic terms. (As explained in section~2.1, this result is also required by conformal symmetry.) In the presence of non-trivial coefficient functions, expanding the kinetic terms leads to two-derivative interactions involving three or more fields in the Abelian tensor multiplet. For instance,
\begin{equation}\label{kinnrt}
f_2(\Phi^I) (\partial\Phi)^2 \rightarrow \left(f_2 |_{\langle \Phi\rangle} + \partial_I f_2 |_{\langle \Phi\rangle} \, \delta \Phi^I + \cdots \right) (\partial\Phi)^2~,
\end{equation}
where the ellipsis denotes terms containing additional powers of~$\delta \Phi^I$. The constant~$\partial_I f_2 |_{\langle \Phi\rangle}$ multiplies a two-derivative interaction of three scalars. However, the allowed deformations~\eqref{fterm} and~\eqref{dterm} require at least four derivatives, and hence~$\partial_I f_2 |_{\langle \Phi\rangle} = 0$. Since this holds at every point~$\langle \Phi^I\rangle$, we conclude that~$f_2(\Phi^I)$ cannot depend on the moduli and must in fact be a constant. Analogously, the other fields in the tensor multiplet must also have moduli-independent kinetic terms.
This reasoning straightforwardly extends to the higher derivative terms in~${\mathscr L}_{\text{tensor}}$, which first arise at four-derivative order. (There are no supersymmetric deformations of~${\mathscr L}_{\text{free}}$ with only three derivatives.) Consider, for instance, the following term,
\begin{equation}\label{f4expand}
f_4(\Phi^I) (\partial \Psi)^4 \rightarrow \left(f_4 |_{\langle \Phi\rangle} + \partial_I f_4 |_{\langle \Phi\rangle} \, \delta \Phi^I+ {1 \over 2} \partial_I \partial_J f_4 |_{\langle \Phi\rangle} \, \delta \Phi^I\delta \Phi^J+ \cdots\right) (\partial \Psi)^4~.
\end{equation}
The constants~$f_4 |_{\langle \Phi\rangle}$ and~$\partial_I f_4 |_{\langle \Phi\rangle}$ multiply four-derivative interactions involving four and five fields. The former is~$R$-symmetry invariant, while the latter transforms as a vector of~$\frak{so}(5)_R$. These terms therefore arise as~$F$-terms~\eqref{fterm} built on~$n = 4$ and~$n=5$ scalars, respectively. The same is true for the traceless part of~$ \partial_I \partial_J f_4 |_{\langle \Phi\rangle}$, which multiplies a four-derivative interaction containing six fields and can arise from an~$F$-term with~$n = 6$. However, the trace~$\delta^{IJ} \partial_I \partial_J f_4 |_{\langle \Phi\rangle}$ multiplies an interaction with four derivatives and six fields that is invariant under the~$R$-symmetry. Neither an~$F$-term nor a~$D$-term can give rise to such an interaction. Therefore, it must vanish at every point~$\langle \Phi^I \rangle$, and this implies that~$f_4 (\Phi^I)$ is a harmonic function,
\begin{equation}\label{harmonic}
\delta^{IJ} \partial_I \partial_J f_4(\Phi^K) = 0~.
\end{equation}
Since~$f_4(\Phi^I)$ multiplies~$(\partial \Psi)^4$, it follows from~$\frak{so}(5)_R$ invariance that~$f_4(\Phi^I)$ can only depend on~$\Psi$. Together with~\eqref{harmonic}, this implies that
\begin{equation}
f_4(\Phi^I)=\frac{b}{\Psi^{3}}~.\label{fsolve}
\end{equation}
A dimensionful constant term in~$f_4(\Phi^I)$ is disallowed by scale invariance. This precisely reproduces the dilaton interaction in~\eqref{fourd}. Note that~$f_4(\Phi^I) \sim {1 \over \Psi^3}$ also follows from conformal symmetry, without appealing to~\eqref{harmonic}. However, the argument that led to~\eqref{harmonic} immediately generalizes to all other coefficient functions that arise at four-derivative order, even if they multiply more complicated operators. (For instance, some of them transform in non-trivial~$R$-symmetry representations.) Therefore all of these functions are harmonic. Imposing~$R$-symmetry and scale invariance fixes their functional form up to a multiplicative constant. This was explicitly shown in~\cite{Maxfield:2012aw} for terms involving eight Fermions.
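Indeed, \eqref{fsolve} is easily checked against~\eqref{harmonic}: for a rotationally invariant function~$f(\Psi)$ of the five moduli,
\begin{equation}
\delta^{IJ} \partial_I \partial_J f(\Psi) = f''(\Psi) + {4 \over \Psi} f'(\Psi)~,
\end{equation}
and for~$f = b \, \Psi^{-3}$ this gives~$12 \, b \, \Psi^{-5} - 12 \, b \, \Psi^{-5} = 0$ away from the origin, so that~$\Psi^{-3}$ is (up to normalization) the Green's function of the Laplacian on the five-dimensional moduli space.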
In order to see why all of these constants are in fact determined by a single overall coefficient, we evaluate the coefficient functions at a fixed vev~$\langle \Phi^I\rangle$ and drop the fluctuations~$\delta \Phi^I$. All of these terms involve exactly four fields and four derivatives (at leading order in the fluctuations, scalars without derivatives only contribute vevs), and hence they must all arise from the same~$R$-symmetry invariant~$F$-term~\eqref{fterm} built on~$n = 4$ scalars. Thus, there is a single supersymmetric invariant that governs all four-derivative terms in~${\mathscr L}_{\text{tensor}}$, which are therefore controlled by a single independent coefficient. We choose it to be the constant~$b$ in~\eqref{fourd} and~\eqref{fsolve}.
The preceding discussion of the four-derivative terms in~${\mathscr L}_{\text{tensor}}$, viewed as a deformation of~${\mathscr L}_{\text{free}}$ by local operators, was at leading order in this deformation. There are several ways to understand the effects of the deformation at higher orders:
\begin{itemize}
\item In an on-shell approach (see e.g.~\cite{Maxfield:2012aw}), where the supersymmetry transformations~$\delta_{\text{free}}$ of~${\mathscr L}_{\text{free}}$ only close on its equations of motion, the four-derivative terms~${\mathscr L}_4$ deform the transformations to~$\delta = \delta_{\text{free}} + \delta_4$, where
\begin{equation}\label{ldeformi}
\delta_4 {\mathscr L}_{\text{free}} \sim \delta_{\text{free}} {\mathscr L}_4~.
\end{equation}
Therefore, the derivative-scaling of~$\delta_4$ is~${5 \over 2}$ (the weight bookkeeping is spelled out after this list). Since every term in~${\mathscr L}_4$ is proportional to~$b$, we also have~$\delta_4 \sim b$. At the next order, the action of~$\delta_4$ on~${\mathscr L}_4$ can only be cancelled by adding a new term~${\mathscr L}_6$ to the Lagrangian, so that
\begin{equation}\label{ldeformii}
\delta_4 {\mathscr L}_4 \sim \delta_{\text{free}} {\mathscr L}_6~.
\end{equation}
We conclude that~${\mathscr L}_6$ is a six-derivative term, which is completely determined by~$\delta_4$ and~${\mathscr L}_4$. In particular, every coefficient in~${\mathscr L}_6$ must be proportional to~$b^2$.
\item We can determine the effects of~${\mathscr L}_4$ in conformal perturbation theory around~${\mathscr L}_{\text{free}}$. At second order, the OPE~${\mathscr L}_4(x) {\mathscr L}_4(y)$ between two insertions of the deformation can contain a contact term~${\mathscr L}_6(x)\delta(x-y)$ that is required by the supersymmetry Ward identities. This contact term behaves like a tree-level insertion of~${\mathscr L}_6$, which can therefore be viewed as a term in the classical action.\footnote{~This is analogous to the seagull term~$A^\mu A_\mu |\phi|^2$ in scalar electrodynamics, which is required by current conservation and can be viewed as arising from a contact term in the OPE of two currents.} Since~${\mathscr L}_6$ arises by fusing~${\mathscr L}_4$ with itself, we conclude that~${\mathscr L}_6$ is a six-derivative term proportional to~$b^2$.\footnote{~More explicitly, the only way to generate a~$\delta$-function is by applying the equations of motion to a Wick-contraction of free fields in~${\mathscr L}_4(x)$ and~${\mathscr L}_4(y)$. This reduces the derivative order by two, which implies that~${\mathscr L}_6$ must contain six derivatives. }
\item In an approach based on scattering amplitudes, the relation of~${\mathscr L}_6$ and~${\mathscr L}_4$ can be understood through factorization. This will be discussed in section~2.3.
\end{itemize}
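To spell out the weight bookkeeping behind~\eqref{ldeformi} and~\eqref{ldeformii} (using the derivative-counting conventions of the footnote in section~2.1, under which~${\mathscr L}_{\text{free}}$ has weight two, ${\mathscr L}_4$ weight four, and~$\delta_{\text{free}}$ weight~${1 \over 2}$, and denoting the weight of an object by~$w(\cdot)$):
\begin{equation}
w(\delta_4) + 2 = {1 \over 2} + 4 \quad \Longrightarrow \quad w(\delta_4) = {5 \over 2}~, \qquad w({\mathscr L}_6) = w(\delta_4) + w({\mathscr L}_4) - {1 \over 2} = 6~,
\end{equation}
confirming that~${\mathscr L}_6$ is a six-derivative term.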
\noindent All three points of view show that the four-derivative terms induce terms at six-derivative order, whose coefficients are fixed by supersymmetry and proportional to~$b^2$. In general, there could also be new supersymmetric invariants that arise at six-derivative order, with independent coefficients. However, neither the~$F$-terms~\eqref{fterm} nor the~$D$-terms~\eqref{dterm} contribute at that order, so that the six-derivative terms are completely determined by the four-derivative ones, and in particular by the coefficient~$b$.
Since both~$\Delta a$ in~\eqref{awzw} and~$\Delta k$ in~\eqref{kenwzw} are coefficients of six-derivative terms, we conclude that they are both proportional to~$b^2$, with model-independent proportionality factors that are determined by supersymmetry. In principle these constants can be fixed by carefully working out the supersymmetry relations between the four- and six-derivative terms in~${\mathscr L}_{\text{tensor}}$. Instead, we will determine them using a reliable example. For instance, at large~$n$ the dilaton effective action for the breaking pattern~$\frak{su}(n+1) \rightarrow \frak{su}(n) \oplus \frak u(1)$ is given by the DBI action on a probe brane in~$AdS_7$. By studying this action, the authors of~\cite{Elvang:2012st} found the following relationship,\footnote{~Our normalization of the~$a$-anomaly differs from that in~\cite{Elvang:2012st}. They are related by~$a_{\text{us}} = (9216 \pi^3 /7) a_{\text{them}}$.}
\begin{equation}\label{absq}
\Delta a = \frac{98304 \, \pi^{3}}{7} \,b ^{2}~.
\end{equation}
Having fixed the constant of proportionality in this example, we conclude that~\eqref{absq} holds for all~$(2,0)$ theories and breaking patterns, because of supersymmetry. We can similarly use the known large-$n$ behavior of~$k_{\frak{su}(n)}$ to fix
\begin{equation}\label{kbsq}
\Delta k = 6144 \pi^3 \, b^2~.
\end{equation}
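Since~\eqref{absq} and~\eqref{kbsq} both express six-derivative coefficients in terms of~$b^2$, the coupling~$b$ can be eliminated,
\begin{equation}
\Delta a = {98304 \, \pi^3 \over 7 \cdot 6144 \, \pi^3} \, \Delta k = {16 \over 7} \, \Delta k~.
\end{equation}
Combined with the quantization condition~\eqref{wzwquant} and the positivity of~$b^2$, this shows that~$\Delta a$ takes values in~${96 \over 7} \, {\mathbb Z}_{\geq 0}$.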
\subsection{The Amplitude Point of View}
The non-renormalization theorems discussed above can also be understood by considering on-shell scattering amplitudes in the Abelian theory on the tensor branch. Since this theory is free at low energies, it is sufficient to consider tree-level amplitudes, which are meromorphic functions of the external momenta. One advantage of this approach is that supersymmetry acts linearly on one-particle asymptotic states, so that the supersymmetry Ward identities constitute a set of linear relations on the set of~$n$-point amplitudes for any given~$n$. By contrast, in an on-shell Lagrangian approach, the supersymmetry transformations of the fields are typically deformed, as discussed around~\eqref{ldeformi} and~\eqref{ldeformii}. Moreover, on-shell scattering amplitudes do not suffer from ambiguities due to field redefinitions.
In the amplitude picture, the role of supersymmetric local operators that can be used to deform~${\mathscr L}_{\text{free}}$ is played by supervertices. These are local superamplitudes without poles that satisfy the supersymmetry Ward identities. Every supervertex corresponds to a possible first-order deformation of~${\mathscr L}_{\text{free}}$ that preserves supersymmetry. The construction of supervertices is particularly simple in the spinor helicity formalism. For the cases of interest in this paper, this formalism is reviewed in appendix A. The basic strategy is to split the 16 Poincar\'e supercharges into 8 supermomenta $Q$ and 8 superderivatives $\overline{Q}$. For instance, the four-point, four-derivative supervertex describing the supersymmetric completion of~$H^4 + (\partial\Phi)^4+\cdots$ can be written as a Grassmannian delta function~$\delta^8(Q)$. In this subsection, we will schematically denote four- and six-derivative interactions by~$H^4$ and~$H^6$, respectively.
At any point on the tensor branch, the four-derivative supervertex~$\delta^8(Q)$ is multiplied by the coefficient function~$f_4(\Phi^I)$ discussed in the previous subsection. Expanding~$f_4(\Phi^I)$ in fluctuations~$\delta\Phi^I$ of the scalar fields, as in~\eqref{fexp}, leads to soft limits of amplitudes with extra scalar emissions. Such amplitudes may or may not admit a local supersymmetric completion, i.e.~a supervertex without poles. If there is no such supervertex, the corresponding soft scalar amplitude must belong to a nonlocal superamplitude, which is completely determined by the residues at its poles. This factorization relation amounts to a differential equation for the coefficient function~$f_4(\Phi^I)$, which leads to a non-renormalization theorem.
As explained in appendix A, the four-derivative, $n$-point supervertex~$\delta^8(Q)$ corresponds to a coupling of the form~$(\delta\Phi^+)^{n-4} H^4$ in~${\mathscr L}_{\text{tensor}}$. Here~$\delta\Phi^+$ is the highest-weight component in the~$\frak{so}(5)_R$ multiplet of the scalar fluctuations. All other supervertices at this derivative order are obtained from $\delta^8(Q)$ by an $\mathfrak{so}(5)_R$ rotation. In particular, the set of $n$-point supervertices at four-derivative order transform as rank $(n-4)$ symmetric traceless tensors of~$\mathfrak{so}(5)_R$. This is the amplitude version of the classification for~$F$-term deformations in~\eqref{fterm}. The absence of a supervertex that contains the~$\mathfrak{so}(5)_R$ singlet~$\delta_{IJ} \delta\Phi^I \delta\Phi^J H^4$ then leads to the requirement~\eqref{harmonic} that the coefficient function~$f_4(\Phi^I)$ be harmonic. Note that there does not exist any amplitude that contains the component vertex~$\delta\Phi^I \delta\Phi^I H^4$, since its supersymmetric completion as a six-point superamplitude would have to factorize through lower-point supervertices. However, the leading supervertices~$\delta^8(Q)$ arise at four-derivative order, so that a four-derivative amplitude cannot factorize through a pair of such vertices.
\begin{figure}[htb]
\centering
\includegraphics[scale=1.5]{sewing1.pdf}
\caption{Factorization of a six-point amplitude through a pair of $H^4$ vertices.}
\label{644fac}
\end{figure}
The statement that the coefficient functions~$f_6(\Phi^I)$ of the six-derivative terms are quadratically related to the coefficient functions~$f_4(\Phi^I)$ that occur at four-derivative order can also be understood through factorization. First, note that there is no four-point, six-derivative supervertex in a theory of a single Abelian tensor multiplet. This is because such a supervertex must be proportional to $\delta^8(Q)(s+t+u)$, but $s+t+u=0$ in a massless four-point amplitude. There is also no local six-point, six-derivative supervertex that is an~$\mathfrak{so}(5)_R$ singlet. Therefore, the six-point coupling~$H^6$ is part of a non-local superamplitude that is completely determined by its factorization through a pair of four-point supervertices of the form $\delta^8(Q)$, and hence it must be proportional to~$f_4^2$. This factorization channel is shown in Figure~\ref{644fac}. As in the discussion around~\eqref{absq} and~\eqref{kbsq}, the coefficients of proportionality between~$f_6$ and~$f_4^2$ are fixed by supersymmetry and can be determined by examining any non-trivial set of superamplitudes that obeys the supersymmetry Ward identities.
\begin{figure}[htb]
\centering
\includegraphics[scale=1.5]{sewing2.pdf}
\caption{Factorization of a seven-point amplitude through a pair of four-derivative vertices.}
\label{facwz}
\end{figure}
Finally, we would like to examine the quadratic relation~\eqref{kbsq} between the coefficient~$\Delta k$ of the WZ term in~\eqref{kenwzw} and the coefficient~$b$ of the four-derivative terms~$H^4$. There is a unique five-point supervertex at four-derivative order, which contains the coupling~$\delta\Phi^I H^4$ and arises by expanding the coefficient function in~$f_4(\Phi^I) H^4$. As explained in section 2.1, the WZ term leads to a six-derivative vertex involving seven scalars. The absence of an~$\mathfrak{so}(5)_R$ singlet supervertex at this derivative order implies that this seven-scalar vertex is part of a non-local seven-point superamplitude, which is completely determined by factorization. The only possible factorization channel is displayed in Figure~\ref{facwz}. It involves the five-point supervertex containing~$\delta\Phi^I H^4$ and the four-point supervertex~$\delta^8(Q)$ containing~$H^4$. This establishes the quadratic relation~$\Delta k \sim b^2$ in~\eqref{kbsq}.
\section{Compactification to Five Dimensions}
In this section we consider~$(2,0)$ superconformal theories~${\mathcal T}_{\frak g}$ on~${\mathbb R}^{4,1} \times S^1_R$, where~$S^1_R$ is a spatial circle with radius~$R$ and periodic boundary conditions for the Fermions.\footnote{~As was stated in the introduction, we do not consider outer automorphism twists around the circle.} By the assumption stated in section 1.3, the five-dimensional description, which is valid at energies far below the KK scale~$1 \over R$, is an~${\mathcal N}=2$ Yang-Mills theory with gauge algebra~$\frak g$ and gauge coupling~$g^2 \sim R$. We will first describe this Yang-Mills theory, including possible higher-derivative operators that are expected to be present at finite radius~$R$. As in six dimensions, we then explore the Coulomb branch of~${\mathcal T}_{\frak g}$ on~${\mathbb R}^{4,1} \times S^1_R$ using non-renormalization theorems. These allow us to interpolate between the five-dimensional Yang-Mills description, which is valid near the origin, and the effective theory far out on the Coulomb branch, which is simply related to the effective action on the six-dimensional tensor branch that was discussed in section~2. In one direction, the five-dimensional Yang-Mills description leads to information about the six-dimensional theory: the structure of its moduli space, the spectrum of dynamical W-strings on the tensor branch, and the coefficient~$b$ in~\eqref{fourd} and~\eqref{fsolve}, which governs the tensor-branch effective action through six-derivative order. Conversely, the parent theory~${\mathcal T}_{\frak g}$ constrains the five-dimensional effective theory: the properties of W-strings in six dimensions restrict~$\frak g$ to be of ADE type, and the conformal invariance of~${\mathcal T}_{\frak g}$ requires the leading higher-derivative operators at the origin of the five-dimensional Coulomb branch to be absent. We also show that the only~$(2,0)$ theories with~$\frak g = \frak u(1)$ are free Abelian tensor multiplets. Some of the arguments in this section are standard, while others offer a different point of view on known results (see for instance~\cite{Seiberg:1997ax, Witten:2009at,Gaiotto:2010be, moorefklect}). We include them here partly to render the discussion self-contained, and partly to emphasize that the conclusions only rely on the assumption stated in section~1.3.
\subsection{The Yang-Mills Description at the Origin}
The five-dimensional Lagrangian~${\mathscr L}^{(5)}_0$ at the origin of the Coulomb branch is a weakly-coupled Yang-Mills theory with gauge algebra~$\frak g$. For now, we will take~$\frak g$ to be the compact real form of a simple Lie algebra; the case~$\frak g = \frak u(1)$ is discussed in section 3.4. More properly, we should choose a gauge group~$G$ whose Lie algebra is~$\frak g$. This choice arises because the six-dimensional theory~${\mathcal T}_{\frak g}$ generally does not possess a conventional partition function, but rather a family of partition functions valued in a finite-dimensional vector space -- sometimes referred to as the space of conformal blocks~\cite{Witten:2009at}.\footnote{~In this sense, many~$(2,0)$ theories~${\mathcal T}_{\frak g}$ require a slight generalization of standard quantum field theory~\cite{Witten:2009at}. It is always possible to obtain theories with a standard partition function by appropriately adding free decoupled tensor multiplets~\cite{Gaiotto:2014kfa}.} The ability to specify the gauge group~$G$ in five dimensions reflects the freedom to choose a partition function in the space of conformal blocks. The ambiguity of the partition function also implies that~${\mathcal T}_{\frak g}$ does not respect standard modular invariance, e.g.~when it is compactified on tori~\cite{Witten:2009at}. Much of the discussion below only depends on the Lie algebra~$\frak g$, but we will occasionally encounter global issues.
We will work in conventions where~$\frak g$ is represented by Hermitian matrices, so that the structure constants are purely imaginary. Since the gauge field~$A = A_\mu dx^\mu$ and the field strength~$F = {1 \over 2} F_{\mu\nu} dx^\mu \wedge dx^\nu = dA -i A \wedge A$ are valued in~$\frak g$, they are also Hermitian. The same is true for the other fields in the~${\mathcal N}=2$ Yang-Mills multiplet: the scalars~$\phi^I$ in the~$\bf 5$ of the~$\frak{so}(5)_R$ symmetry, which is preserved by the circle compactification, and symplectic Majorana Fermions transforming in the fundamental spinor representations of the Lorentz and~$R$-symmetry. As in four dimensions, $A$ and~$\phi^I$ have mass dimension one, while the Fermions have dimension~$3 \over 2$.
In order to give meaning to the gauge coupling~$g^2$, we must specify a normalization for the gauge field~$A$, and hence the Lie algebra~$\frak g$.\footnote{~We will recall aspects of Lie algebras and Lie groups as they arise in our discussion. For a systematic review in the context of gauge theories, see for instance~\cite{Kapustin:2005py,Gaiotto:2010be}.} The Lie algebra~$\frak g$ decomposes into a Cartan subalgebra~$\frak t_{\frak g}$ and root vectors~$e_\alpha$, which diagonalize the adjoint action of the Cartan subalgebra, i.e.~for every~$h \in \frak t_{\frak g}$ and every root vector~$e_\alpha$,
\begin{equation}
[h, e_\alpha] = \alpha(h) e_\alpha~.
\end{equation}
The real functional~$\alpha \in \frak t_{\frak g}^*$ is the root associated with~$e_\alpha$, and the set of all roots comprises the root system~$\Delta_{\frak g} \subset \frak t_{\frak g}^*$ of the Lie algebra~$\frak g$. For every root~$\alpha \in \Delta_{\frak g}$, there is a unique coroot~$h_\alpha \in \frak t_{\frak g}$, which together with~$e_{\pm \alpha}$ satisfies the commutation relations of~$\frak{su}(2)$,\footnote{~In these conventions, the eigenvalues of~$h_\alpha$ are always integers, rather than half-integers. }
\begin{equation}\label{sucomm}
[e_\alpha, e_{-\alpha}] = h_\alpha~, \qquad [h_\alpha, e_{\pm \alpha}] = \pm 2 e_{\pm \alpha}~.
\end{equation}
We define a normalized, positive-definite trace~$\Tr_{\frak g}$,
\begin{equation}\label{trdef}
\Tr_{\frak g} = {1 \over 2 h^\vee_{\frak g}} \Tr_{\text{adj}}~,
\end{equation}
where~$h^\vee_{\frak g}$ is the dual Coxeter number. This induces a positive-definite metric~$\langle \cdot, \cdot\rangle_{\frak g}$ on the Cartan subalgebra,
\begin{equation}
\langle h, h'\rangle_{\frak g} \equiv \Tr_{\frak g} (h h')~, \qquad h, h' \in \frak t_{\frak g}~,
\end{equation}
and hence also its dual~$\frak t^*_{\frak g}$, which contains the root system~$\Delta_{\frak g}$. The definition in~\eqref{trdef} is in accord with the standard convention that a short co-root~$h_\alpha$, and the corresponding long root~$\alpha$, both satisfy~$\langle h_\alpha, h_\alpha\rangle_{\frak g} = \langle \alpha, \alpha\rangle_{\frak g}= 2$.\footnote{~As an example, consider~$\frak g = \frak{su}(2)$, where~$h^\vee_{\frak{su}(2)} = 2$. The commutation relations~$[e_+, e_-] = h$ and~$[h, e_\pm] = \pm 2 e_\pm$ imply that~$h_{\text{adj}} = \text{diag}(2,0,-2)$. Therefore~$\Tr_{\text{adj}}(h^2) = 8$ and~$\Tr_{\frak{su}(2)} \left(h^2\right) = 2$.
}
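For~$\frak g = \frak{su}(n)$, the normalization~\eqref{trdef} reduces to a familiar statement: $h^\vee_{\frak{su}(n)} = n$, and the Dynkin index of the adjoint representation is~$2n$ times that of the fundamental, so that
\begin{equation}
\Tr_{\frak{su}(n)} = {1 \over 2n} \Tr_{\text{adj}} = \Tr_{\text{fund}}~,
\end{equation}
i.e.~$\Tr_{\frak g}$ coincides with the trace in the fundamental representation.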
In these conventions, the instanton number on~$S^4$, which can take all possible integer values, is given by the following expression~\cite{Bernard:1977nr},
\begin{equation}
{1 \over 8 \pi^2} \int_{S^{4}} \mathrm{Tr}_{\mathfrak{g}}(F\wedge F) \in {\mathbb Z}~. \label{minimalinst}
\end{equation}
The two-derivative terms in the five-dimensional low-energy theory are given by the usual Yang-Mills Lagrangian,
\begin{align}
{\mathscr L}_0^{(5)}=& -\frac{1}{2g^{2}}\mathrm{Tr}_{\mathfrak{g}}\left(F\wedge * F +\sum_{I = 1}^ 5 D_{\mu}\phi^{I}D^{\mu}\phi^{I} - {1 \over 8} \sum_{I, J = 1}^5 \left[\phi^I, \phi^J\right]^2 \right) \cr
& +\left(\text{Fermions}\right) + \left(\text{higher-derivative terms}\right)~, \label{5dkin}
\end{align}
where~$D = d - i [A, \cdot]$ is the covariant derivative in the adjoint representation. In five dimensions the gauge coupling~$g^2$ has dimensions of length. It follows from the scale-invariance of the six-dimensional theory that~$g^2$ must be proportional to the compactification radius~$R$. However, our assumptions do not obviously fix the constant of proportionality. We can motivate the answer by appealing to the following intuitive picture, which can be made precise in string constructions: at the origin of the Coulomb branch, ${\mathcal N}=2$ Yang-Mills theories in five dimensions admit particle-like solitons, which are the uplift of four-dimensional instantons. Their mass is proportional to~$n \over g^2$, where~$n$ is the instanton number, and since~$g^2 \sim R$ it is tempting to interpret them as massive KK modes of the six-dimensional theory. We can fix~$g^2$ in terms of~$R$ by demanding that the mass of the minimal~$\frak g$-instanton-soliton in flat space (more precisely on~$S^4$) coincides with the minimal KK mass~${1 \over R}$. This leads to
\begin{equation}
g^{2}=4 \pi^2 R~. \label{instanton}
\end{equation}
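The numerical factor follows from a standard Bogomolny-bound computation in the conventions of~\eqref{minimalinst} and~\eqref{5dkin} (we record it here as a quick check): a static soliton whose gauge field is a self-dual configuration with instanton number~$n$ in the four spatial dimensions has energy
\begin{equation}
M_n = {1 \over 4 g^2} \int d^4x \, \mathrm{Tr}_{\mathfrak{g}} \left( F_{mn} F^{mn} \right) = {1 \over 2 g^2} \int \mathrm{Tr}_{\mathfrak{g}} (F \wedge F) = {4 \pi^2 n \over g^2}~,
\end{equation}
so that equating~$M_1$ with the minimal KK mass~${1 \over R}$ reproduces~$g^2 = 4\pi^2 R$.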
Note that this picture involves particles whose mass is necessarily of the same order as the cutoff of the five-dimensional effective theory. Below we will show that~\eqref{instanton} can be reliably derived in effective field theory, by extrapolating between five and six dimensions along the Coulomb branch.
The higher-derivative terms in~\eqref{5dkin} are constrained by~${\mathcal N}=2$ supersymmetry and~$\frak{so}(5)_R$ symmetry. These constraints were analyzed in \cite{Movshev:2009ba,Bossard:2010pk,Chang:2014kma}, with the following conclusions:
\begin{itemize}
\item The leading irrelevant operators that can appear in~\eqref{5dkin} occur at four-derivative order, as supersymmetric completions of non-Abelian~$F^4$ terms. There are two independent such terms -- a single-trace operator and a double-trace operator.\footnote{~When~$\frak{g}=\frak{su}(2)$, there is only one independent operator, due to trace relations.} Schematically,
\begin{subequations}\label{F4}
\begin{align}
& x\,g^{6}~ t^{\mu_{1}\nu_{1}\mu_{2}\nu_{2}\mu_{3}\nu_{3}\mu_{4}\nu_{4}}~\mathrm{Tr}_{\mathfrak{g}}\left(F_{\mu_{1}\nu_{1}}F_{\mu_{2}\nu_{2}}F_{\mu_{3}\nu_{3}}F_{\mu_{4}\nu_{4}}\right)+\cdots \subset {\mathscr L}_0^{(5)}~, \\
&y\,g^{6}~t^{\mu_{1}\nu_{1}\mu_{2}\nu_{2}\mu_{3}\nu_{3}\mu_{4}\nu_{4}}~\mathrm{Tr}_{\mathfrak{g}}\left(F_{\mu_{1}\nu_{1}}F_{\mu_{2}\nu_{2}}\right) \, \mathrm{Tr}_{\mathfrak{g}}\left(F_{\mu_{3}\nu_{3}}F_{\mu_{4}\nu_{4}}\right)+\cdots \subset {\mathscr L}_0^{(5)}~,
\end{align}
\end{subequations}
where the ellipses denote the supersymmetric completions involving scalars and Fermions, and the powers of the cutoff~$g^2$ are fixed so that~$x, y$ are dimensionless constants. The tensor~$t^{\mu_{1}\nu_{1}\mu_{2}\nu_{2}\mu_{3}\nu_{3}\mu_{4}\nu_{4}}$, which determines how the spacetime indices are contracted, is constructed out of the metric~$\eta_{\mu\nu}$.\footnote{~The tensor~$t^{\mu_{1}\nu_{1}\mu_{2}\nu_{2}\mu_{3}\nu_{3}\mu_{4}\nu_{4}}$ occurs for maximally supersymmetric Yang-Mills theories in all dimensions. For instance, it is discussed in chapter~12 of~\cite{Polchinski:1998rr}.} One particular linear combination of~$x$ and~$y$ appears in the non-Abelian DBI Lagrangian describing coincident~D4-branes~\cite{Bergshoeff:2001dc}.
Both operators in~\eqref{F4} are~${1 \over 2}$-BPS -- they can be written as~$Q^8$ acting on a gauge-invariant local operator constructed out of fields in the Yang-Mills multiplet -- and they are the only such operators that preserve the~$\frak{so}(5)_R$ symmetry. (See~\cite{Chang:2014kma} for a discussion of~${1 \over 2}$-BPS operators that break the~$R$-symmetry.) On the Coulomb branch, where~$F$ is restricted to the Cartan subalgebra, these operators give rise to~${1 \over 2}$-BPS, four-derivative terms that can be tracked along the moduli space. Below we will use this to argue that~$x$ and~$y$ must vanish for all~$(2,0)$ SCFTs compactified on~$S^1_R$.
\item At six-derivative order, there is a single~${1\over 4}$-BPS, double-trace operator of the schematic form~$\Tr_{\frak g}^2 D^2 F^4$ \cite{Bossard:2010pk}. Generically, this operator induces six-derivative terms on the Coulomb branch. However, we will see below that it does not contribute on Coulomb branches described by a single Abelian vector multiplet.
\item Starting at six-derivative order, there are full~$D$-terms, which can be written as~$Q^{16} {\mathcal O}$, where~${\mathcal O}$ is a gauge-invariant local operator. The leading such~${\mathcal O}$ is the Konishi-like single-trace operator~$\sum_{I = 1}^5 \Tr_{\frak g}\left(\phi^I \phi^I\right)$, and the corresponding~$D$-term is schematically given by~$ {\rm Tr}_{\mathfrak{g}} D^2 F^4$. This term is believed to be present in~${\mathscr L}_0^{(5)}$, since it is needed to absorb a six-loop divergence of the two-derivative Yang-Mills theory~\cite{Bern:2012di}. Below, we will show that full~$D$-terms can only affect the Coulomb-branch effective action at eight-derivative order or higher.
\end{itemize}
Having described the theory at the origin, we will now explore its Coulomb branch.
\subsection{Two-Derivative Terms and BPS States on the Coulomb Branch}
The scalar potential in~\eqref{5dkin} restricts the adjoint-valued scalars to a Cartan subalgebra~$\frak t_{\frak g} \subset \frak g$. Therefore, the Coulomb branch is parametrized by~$\langle \phi^I \rangle \in {\mathbb R}^5 \otimes \frak t_{\frak g} / {\mathcal W}_{\frak g}$, where~${\mathbb R}^5$ transforms in the~$\bf 5$ of the~$\frak{so}(5)_R$ symmetry and~${\mathcal W}_{\frak g}$ is the Weyl group of~$\frak g$, which acts on~$\frak t_{\frak g}$. At a generic point on the Coulomb branch, the low-energy theory consists of~$r_{\frak g}$ Abelian vector multiplets, with scalars~$\varphi_i^I$ and field-strengths~$f_i$, which are permuted by the Weyl group. Their embedding into the non-Abelian fields at the origin can be written as follows,
\begin{equation}\label{hexp}
\phi^I = \sum_{i =1}^{r_{\frak g}} \, h_i \, \varphi_i^I ~, \qquad F = \sum_{i = 1}^{r_{\frak g}} \, h_i f_i ~.
\end{equation}
Here we use a basis of simple coroots~$h_i$ for the Cartan subalgebra, which are associated with the~$r_{\frak g}$ simple roots~$\alpha_i$ via~\eqref{sucomm}. Their commutation relations with the root vectors~$e_{\pm i} = e_{\pm \alpha_i}$ are determined by the Cartan matrix~$C_{ij}$,
\begin{equation}
[h_i, h_j] = 0~, \qquad [e_{+i}, e_{-j}] = \delta_{ij} h_j~, \qquad [h_i, e_{\pm j}] = \pm C_{ji} e_{\pm j}~. \label{algconv}
\end{equation}
In these equations, the repeated index~$j$ is not summed. Substituting~\eqref{hexp} into~\eqref{5dkin}, we obtain the leading two-derivative effective action on the Coulomb branch,
\begin{equation}\label{freeu1}
{\mathscr L}^{(5)}_{\text{Coulomb}} = -{1 \over 2 g^2} \Omega_{ij} \left(f_i \wedge * f_j + \sum_{I =1}^5 \partial_\mu \varphi^I_i \partial^\mu \varphi_j^I \right) + \left(\text{Fermions}\right) + \cdots~,
\end{equation}
where the ellipsis denotes a variety of possible corrections that will be discussed below. The kinetic terms are determined by a symmetric, positive-definite matrix,
\begin{equation}\label{omegadef}
\Omega_{ij} = \Tr_{\frak g} \left(h_i h_j\right) = \langle h_i, h_j\rangle_{\frak g}~.
\end{equation}
Note that the normalization of the Abelian gauge fields~$f_i$ is meaningful, since they are embedded in the non-Abelian~$F$ according to~\eqref{hexp}. More precisely, the fluxes of the~$f_i$ are quantized in units dictated by the gauge group~$G$, i.e.~they depend on the global properties of~$G$, not just on its Lie algebra. This will not affect the present discussion, but it will play a role when we discuss the compactification to four dimensions in section~5.
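As a simple example of~\eqref{omegadef}, take~$\frak g = \frak{su}(3)$ with simple coroots~$h_1 = \text{diag}(1,-1,0)$ and~$h_2 = \text{diag}(0,1,-1)$ in the fundamental representation (where~$\Tr_{\frak{su}(3)} = \Tr_{\text{fund}}$, as noted in section~3.1). Then
\begin{equation}
\Omega = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}~,
\end{equation}
which coincides with the Cartan matrix of~$\frak{su}(3)$. As we will see below, the equality~$\Omega_{ij} = C_{ij}$ holds whenever~$\frak g$ is simply laced.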
We obtained~\eqref{freeu1} classically, by restricting the fields in~\eqref{5dkin} to the Cartan subalgebra. There are two possible kinds of corrections: quantum corrections that modify the two-derivative terms in~\eqref{freeu1}, and higher-derivative corrections. The latter are present and will be discussed in section~3.3. The former are known to be absent for maximally supersymmetric Yang-Mills theories in all dimensions.\footnote{~Just as in the discussion around~\eqref{kinnrt}, this can be shown by expanding any moduli-dependent two-derivative terms around a fixed vev~$\langle \varphi_i^I\rangle$ and noting that the free two-derivative theory does not admit supersymmetric deformations containing three fields and two derivatives~\cite{Bossard:2010pk,Chang:2014kma}.} Therefore the geometry of the Coulomb branch is dictated by the classical theory,
\begin{equation}\label{5d6dm}
{\mathcal M}_{\frak g} = {\mathbb R}^5 \otimes \frak t_{\frak g} / {\mathcal W}_{\frak g}~,
\end{equation}
with the flat metric~\eqref{omegadef}. The only singularities are of orbifold type and occur at the boundaries of~${\mathcal M}_{\frak g}$, where part of the gauge symmetry is restored. The allowed patterns of gauge symmetry breaking and restoration are governed by adjoint Higgsing. Therefore, the breaking pattern~$\frak g \rightarrow \frak h \oplus \frak u(1)^n$ with~$\frak h$ semisimple and~$n \leq r_{\frak g}$ is allowed if the Dynkin diagram of~$\frak h$ can be obtained from the Dynkin diagram of~$\frak g$ by deleting~$n$ nodes (see for instance~\cite{Slansky:1981yr}).
Since the geometry of the Coulomb branch is rigid, it can be extrapolated to vevs~$|\langle \phi_i^I \rangle|$ that are much larger than the KK scale~$1 \over R$, i.e.~there are no corrections due to KK modes. Therefore, the moduli spaces in five and six dimensions are identical -- they are both given by~\eqref{5d6dm} -- and the two-derivative effective actions that describe them are simply related. Explicitly, the fields in five dimensions are obtained by reducing the six-dimensional fields to zero modes along~$S^1_R$,
\begin{equation}\label{5d6dfields}
\Phi^I_i \rightarrow {1 \over 2 \pi R}\varphi^I_i ~, \qquad H_i \rightarrow {1 \over 2 \pi R} \left(f_i \wedge dx^5 + *^{(5)} f_i\right)~.
\end{equation}
Here~$x^5 \sim x^5 + 2 \pi R$ parametrizes the circle and~$*^{(5)}$ denotes the five-dimensional Hodge star, so that~$H_i = *H_i$ in six dimensions. Note that the units of quantization for the fluxes of~$H_i$ are dictated by those of~$f_i$, which are in turn determined by the five-dimensional gauge group~$G$. Using~\eqref{5d6dfields}, the two-derivative action~\eqref{freeu1} can be uplifted to six dimensions,
\begin{equation}\label{6dlift}
-{\pi R \over g^2} \, \Omega_{ij} \Big(H_i \wedge * H_j + \sum_{I =1}^5 \partial_\mu \Phi^I_i \partial^\mu \Phi_j^I \Big) + \left(\text{Fermions}\right) \subset {\mathscr L}_{\text{tensor}}~.
\end{equation}
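As a quick check of the relative normalizations in~\eqref{5d6dfields} and~\eqref{6dlift}, the scalar kinetic terms reduce correctly: substituting~$\Phi^I_i = \varphi^I_i / (2\pi R)$ and integrating over the circle,
\begin{equation}
- {\pi R \over g^2} \, \Omega_{ij} \int_0^{2\pi R} dx^5 \, {\partial_\mu \varphi^I_i \, \partial^\mu \varphi^I_j \over (2\pi R)^2} = - {1 \over 2 g^2} \, \Omega_{ij} \, \partial_\mu \varphi^I_i \, \partial^\mu \varphi^I_j~,
\end{equation}
in agreement with~\eqref{freeu1}.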
As in section~2, the quadratic Lagrangian for the self-dual fields~$H_i$ in~\eqref{6dlift} vanishes, since~$\Omega_{ij}$ is symmetric while~$H_i \wedge H_j$ is antisymmetric, and hence their non-canonical kinetic terms may seem meaningless. This is not the case, in part because the fluxes of the~$H_i$ are quantized in definite units. Therefore, global observables, such as partition functions on closed manifolds, are sensitive to~$\Omega_{ij}$ (see~\cite{Witten:2007ct} and references therein). Here we will use the fact that the kinetic terms in~\eqref{6dlift} determine the six-dimensional Dirac pairing for string sources that couple to the~$H_i$. For a string with worldsheet~$\Sigma_2$ and charges~$q_i$,
\begin{equation}\label{6dsc}
dH_i = q_i \delta_{\Sigma_2} \qquad \Longleftrightarrow \qquad q_i = \int_{\Sigma_3} H_i~,
\end{equation}
where~$\delta_{\Sigma_2}$ is a unit delta function localized on~$\Sigma_2$, which is linked by the three-cycle~$\Sigma_3$. The integer-valued Dirac pairing between two strings with charges~$q_i, q'_i$ is then given by~\cite{Deser:1997mz},\footnote{~If the~$H_i$ were not self dual, the pairing in~\eqref{6diracpair} would be valued in~${1 \over 2} {\mathbb Z}$ (as in four-dimensional electrodynamics) rather than in~${\mathbb Z}$. The relative factor of~${1 \over 2}$ arises because the self-dual and the anti-self-dual parts of~$H_i$ contribute equally to the angular momentum, whose quantization leads to the Dirac condition.}
\begin{equation}\label{6diracpair}
{R \over g^2} \, \Omega_{ij} q_i q'_j \in {\mathbb Z}~.
\end{equation}
This gives operational meaning to the non-trivial kinetic terms for the~$H_i$ in~\eqref{6dlift}. The importance of such terms was recently emphasized in~\cite{Intriligator:2014eaa}, and also played a role in~\cite{Ohmori:2014kda}.
A set of candidate string sources for the~$H_i$ is furnished by the dynamical W-strings on the tensor branch, which were mentioned at the end of section~2.1. When the theory is compactified on a circle, it is natural to compare them to the~BPS states of the five-dimensional Yang-Mills theory on the Coulomb branch (see~\cite{Gaiotto:2010be,Tachikawa:2011ch} for a brief summary). We will focus on the electrically charged W-Bosons, which correspond to roots~$\alpha \in \Delta_{\frak g}$, and magnetically charged monopole strings, which correspond to coroots~$h_\alpha$. Both the~W-Bosons and the monopole strings are~${1 \over 2}$-BPS and their masses are proportional to the vevs~$\langle \varphi^I_i\rangle$. As such, they become parametrically light near the origin of the Coulomb branch, where their properties are completely determined by the low-energy Yang-Mills theory. On the other hand, it is believed that such~${1 \over 2}$-BPS states can be reliably extrapolated to large vevs, where the theory is effectively six-dimensional. In that regime, both the W-Bosons and the monopole strings must arise from six-dimensional W-strings. The former correspond to strings that wrap~$S^1_R$, as in~\cite{Witten:1995zh}, while the latter describe strings that lie in the five non-compact dimensions. Therefore, the electric charges of the W-Bosons and the magnetic charges of the monopoles must arise from the same set of W-string charges in six dimensions.
This six-dimensional requirement leads to constraints on the five-dimensional effective theory. Consider the W-Boson corresponding to a fixed simple root~$\alpha_i$. It follows from~\eqref{algconv} that its electric charges~$(e_i)_j$ with respect to the Abelian gauge fields~$f_j$ are given by the entries~$C_{ij}$ in the~$i^{\text{th}}$ row of the Cartan matrix. These charges can be measured by evaluating the~$j^{\text{th}}$ electric flux across a Gaussian surface~$\Sigma^i_3$ that surrounds the W-Boson,
\begin{equation}\label{elch}
(e_i)_j = C_{ij} = {\Omega_{jk} \over g^2} \int_{\Sigma^i_3} * f_k~.
\end{equation}
Similarly, the magnetic charges~$(m_i)_j$ of the monopole-string corresponding to the simple coroot~$h_i$, measured with respect to~$f_j$, are given by
\begin{equation}\label{magch}
(m_i)_j = \delta_{ij} = {1 \over 2 \pi} \int_{\Sigma^i_2} f_j~,
\end{equation}
where~$\Sigma^i_2$ links the monopole string. If we use~\eqref{5d6dfields} to express the integrals in~\eqref{elch} and~\eqref{magch} as integrals of the six-dimensional three-form flux~$H_i$ over~$\Sigma_3^i$ and~$\Sigma_2^i \times S_R^1$, and we demand that these integrals measure the same six-dimensional W-string charge~$(q_i)_j$, as defined in~\eqref{6dsc}, then we obtain
\begin{equation}\label{result}
(q_i)_j = 2 \pi \delta_{ij}~, \qquad C_{ij} = {4 \pi^2 R \over g^2} \,\Omega_{ij}~.
\end{equation}
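In more detail (spelling out the step just described): only the~$f_j \wedge dx^5$ term in~\eqref{5d6dfields} contributes to the flux through~$\Sigma^i_2 \times S^1_R$, so that~\eqref{magch} gives~$\int_{\Sigma^i_2 \times S^1_R} H_j = \int_{\Sigma^i_2} f_j = 2\pi \delta_{ij}$, which is the first equation in~\eqref{result}. Likewise, only the~$*^{(5)} f_k$ term contributes on~$\Sigma^i_3$, so that~\eqref{elch} becomes
\begin{equation}
C_{ij} = {\Omega_{jk} \over g^2} \, 2\pi R \int_{\Sigma^i_3} H_k = {\Omega_{jk} \over g^2} \, 2\pi R \, (q_i)_k = {4 \pi^2 R \over g^2} \, \Omega_{ij}~,
\end{equation}
which is the second.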
Since~\eqref{omegadef} implies that~$\Omega_{ij}$ is symmetric, the same must be true for the Cartan matrix~$C_{ij}$. However, this is only possible if~$\frak g$ is simply laced. Note that this argument crucially relies on properties of the six-dimensional parent theory, specifically its W-strings. It does not imply that all five-dimensional~${\mathcal N}=2$ Yang-Mills theories with non-ADE gauge groups are inconsistent. For instance, such theories arise by activating outer automorphism twists around the compactification circle, as in~\cite{Vafa:1997mh,Witten:2009at,Tachikawa:2011ch}. However, this ruins the symmetry between wrapped and unwrapped W-strings that leads to~\eqref{result}. Previous arguments for the ADE restriction used anomaly cancellation on the W-string worldsheet~\cite{Henningson:2004dh}, or self-duality and modular invariance in~$(2,0)$ theories with standard partition functions~\cite{Seiberg:1997ax,Seiberg:2011dr}. In section~4.3 we will describe another argument for the ADE restriction that only relies on the consistency of the low-energy effective theory on the six-dimensional tensor branch.
We can use~\eqref{result} to derive the relationship between~$g^2$ and~$R$. In general, the Cartan matrix is given by
\begin{equation}
C_{ij} = {2 \langle \alpha_i, \alpha_j\rangle_{\frak g} \over \langle \alpha_j, \alpha_j\rangle_{\frak g}}~.
\end{equation}
If~$\frak g$ is simply laced, then all~$\alpha_j$ have the same length; in our conventions~$ \langle \alpha_j, \alpha_j\rangle_{\frak g} = 2$. Therefore~$C_{ij}$ precisely coincides with~$\Omega_{ij}$ as defined in~\eqref{omegadef}, so that~\eqref{result} reduces to
\begin{equation}
g^{2}=4 \pi^2 R~, \label{gaugeagain}
\end{equation}
in agreement with~\eqref{instanton}. Together with~\eqref{result}, this shows that the Dirac quantization condition~\eqref{6diracpair} for the W-strings amounts to the statement that the entries of the Cartan matrix are integers.
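Concretely, for two W-strings with charges~$(q_i)_k = 2\pi \delta_{ik}$ and~$(q_j)_\ell = 2\pi \delta_{j\ell}$, the pairing~\eqref{6diracpair} evaluates to
\begin{equation}
{R \over g^2} \, \Omega_{k\ell} \, (q_i)_k \, (q_j)_\ell = {4 \pi^2 R \over g^2} \, \Omega_{ij} = C_{ij} \in {\mathbb Z}~,
\end{equation}
by~\eqref{result}.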
\subsection{Four-Derivative Terms on the Coulomb Branch}
The higher-derivative terms in~${\mathscr L}_{\text{Coulomb}}^{(5)}$ can arise in two ways: classically, by restricting higher-derivative terms that are already present in the effective Lagrangian~${\mathscr L}^{(5)}_0$ at the origin to the Coulomb branch; and quantum mechanically, by integrating out W-Bosons. In general, understanding these corrections ultimately requires detailed knowledge of the higher-derivative terms in~${\mathscr L}_0^{(5)}$, including~$D$-terms. However, just as in six dimensions, the first few orders in the derivative expansion of~${\mathscr L}_{\text{Coulomb}}^{(5)}$ are protected by non-renormalization theorems.
We again restrict the discussion to rank one Coulomb branches with a single Abelian vector multiplet that arise by breaking~$\frak g\rightarrow \frak h \oplus \frak u(1)$. The vevs~$\langle \varphi^I\rangle$ of the scalars in this vector multiplet still break the~$\frak{so}(5)_R$ symmetry to~$\frak{so}(4)_R$, which leads to four NG Bosons, but since the five-dimensional theory is not conformal, there is no dilaton. Nevertheless, it is useful to introduce the radial variable
\begin{equation}
\psi = \left(\sum_{I = 1}^5 \varphi^I \varphi^I \right)^{1 \over 2}~.
\end{equation}
As in section~3.2, the kinetic terms in~${\mathscr L}^{(5)}_{\text{Coulomb}}$ are determined by the embedding of~$\varphi^I, f$ into the non-Abelian fields~$\phi^I, F$ at the origin,
\begin{equation}\label{nonabeemb}
\phi^I = t \varphi^I~, \qquad F = t f~, \qquad t \in \frak t_{\frak g}~.
\end{equation}
Here~$t \in \frak t_{\frak g}$ is a Cartan generator whose commutant in~$\frak g$ is~$\frak h \oplus \frak u(1)$,\footnote{~We denote the Cartan generator by~$t$ rather than~$h$, in order to avoid confusion with the subalgebra~$\frak h \subset \frak g$.} so that the kinetic terms on the Coulomb branch are given by
\begin{equation}\label{kt}
-{1 \over 2 g^2} \Tr_{\frak g}\left(t^2\right) \left(f \wedge * f + \sum_{I = 1}^5 \partial_\mu \varphi^I \partial^\mu \varphi^I\right) + \left(\text{Fermions}\right) \; \subset \; {\mathscr L}^{(5)}_{\text{Coulomb}}~.
\end{equation}
As in maximally supersymmetric Yang-Mills theories in other dimensions~\cite{Paban:1998ea,Paban:1998mp,Paban:1998qy,Sethi:1999qv}, the dependence of the first several higher-derivative terms in~${\mathscr L}^{(5)}_{\text{Coulomb}}$ on the scalars~$\varphi^I$ is tightly constrained. Here we will follow the logic of section~2.2: first expand the moduli-dependent coefficient functions in the effective Lagrangian around a fixed vev, and then impose the constraints of supersymmetry on the resulting local operators. It follows from the analysis in~\cite{Movshev:2009ba,Bossard:2010pk,Chang:2014kma} that the possible supersymmetric deformations of a single free Abelian vector multiplet take exactly the same form as the~$F$- and~$D$-term deformations~\eqref{fterm} and~\eqref{dterm} of a free Abelian tensor multiplet in six dimensions. The former are four-derivative terms of the form~$Q^8 \left(\varphi^{(I_1} \cdots \varphi^{I_n)} - \left(\text{traces}\right)\right)$ with~$n \geq 4$, which transform as symmetric, traceless~$(n-4)$-tensors of~$\frak{so}(5)_R$. The latter first arise at eight-derivative order, and there are no independent six-derivative deformations. Therefore the conclusions of section~2.2 still apply, with minimal modifications. In particular, the four-derivative terms in~${\mathscr L}^{(5)}_{\text{Coulomb}}$ are now controlled by two dimensionless coefficients~$b^{(5)}$ and~$c^{(5)}$, which we define as follows,
\begin{equation}\label{b5def}
\left({b^{(5)} \over \psi^3} + c^{(5)} g^6\right) \left(\partial\psi\right)^4 \subset {\mathscr L}^{(5)}_{\text{Coulomb}}~.
\end{equation}
As in section~2.2, the~$\psi$-dependence of the coefficient function in parentheses follows from the fact that it must be harmonic and~$\frak{so}(5)_R$ invariant. Since the theory is not conformal, the term proportional to~$c^{(5)}$ is not a priori forbidden. As in six dimensions, all six-derivative terms in~${\mathscr L}^{(5)}_{\text{Coulomb}}$ are determined by the lower-order terms. Here we will focus on the four-derivative terms, and in particular on~\eqref{b5def}. As in section~2.3, these conclusions also follow from considerations involving superamplitudes (see also appendix A).
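The powers of~$\psi$ and~$g$ in~\eqref{b5def} also follow from simple dimension counting: in five dimensions~$\psi$ has mass dimension one and~$g^2$ has dimension~$-1$, so that
\begin{equation}
\left[ {(\partial \psi)^4 \over \psi^3} \right] = 4 \cdot 2 - 3 = 5~, \qquad \left[ g^6 \, (\partial \psi)^4 \right] = -3 + 8 = 5~,
\end{equation}
matching the dimension of a five-dimensional Lagrangian density, with~$b^{(5)}$ and~$c^{(5)}$ dimensionless.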
Since the dependence of~\eqref{b5def} on the dimensionful gauge coupling~$g$ is completely determined by the non-renormalization theorem, we can fix the coefficients~$b^{(5)}$ and~$c^{(5)}$ at parametrically weak coupling. In this limit, $c^{(5)}$ can only arise from a classical contribution due to four-derivative terms in the Lagrangian~${\mathscr L}_0^{(5)}$ at the origin. The only such terms are the two~${1 \over 2}$-BPS terms described around~\eqref{F4}, with independent dimensionless coefficients~$x, y$. Restricting the non-Abelian gauge fields at the origin to the Cartan direction~$t$ shows that
\begin{equation}\label{chh}
c^{(5)} \sim x \, \mathrm{Tr}_{\mathfrak{g}}(t^{4})+ \, y\left(\mathrm{Tr}_{\mathfrak{g}}(t^{2})\right)^2 ~.
\end{equation}
We can change~$t$ by considering different rank one adjoint breaking patterns, which samples different linear combinations of~$x$ and~$y$.\footnote{~The only exception is~$\frak g = \frak{su}(2)$, but in that case~$x$ and~$y$ are linearly dependent due to trace relations.}
The fixed dependence of~\eqref{b5def} on~$\psi$ enables us to extrapolate to large vevs~$|\langle \psi\rangle|$ and compare with the six-dimensional effective Lagrangian~${\mathscr L}_{\text{tensor}}$, as we did for the kinetic terms in section~3.2. However, in that regime, the scale-invariance of the six-dimensional theory forbids the constant~$c^{(5)}$ in~\eqref{b5def}. By comparing with~\eqref{chh} for different choices of~$t$, we conclude that both~$x$ and~$y$ must vanish. Therefore, the leading possible higher-derivative terms~\eqref{F4} at the origin of the five-dimensional Coulomb branch are absent. For~$\frak g = \frak{su}(2)$, this was argued in~\cite{Lin:2015zea} via a comparison with little string theory. Here we see that it is a simple and general consequence of the fact that the six-dimensional~$(2,0)$ theory is scale invariant.
\bigskip
\bigskip
\begin{figure}[h]
\qquad\xymatrix @R=1pc {
*=0{\phantom{\bullet}}\ar@{-}[ddr]&&&& *=0{\phantom{\bullet}}\ar@{-}[ddl]&&&&\\
\\
&*=0{\bullet} \ar@{-}[rr] \ar@{-}[ddd]&& *=0{\bullet}\ar@{-}[ddd]&&&&&\\
\\
&&&&&&&&*=0{ \qquad \qquad \quad\sim \; \left(\alpha(t)\right)^{4}\bigintsss {d^{d}p \over \left(p^{2}+m^2_W(\alpha)\right)^4} \; \sim \; \left(\alpha(t)\right)^{4}m^{d-8}_{W}(\alpha)\; \sim \; |\alpha(t)|^{d-4} \, |\langle \psi\rangle|^{d-8}}
\\
& *=0{\bullet} \ar@{-}[rr]&& *=0{\bullet}&&&&&\\
\\
*=0{\phantom{\bullet}}\ar@{-}[uur]&&&& *=0{\phantom{\bullet}}\ar@{-}[uul]&&&&}
\caption{One-loop box diagram in~$d$ spacetime dimensions. The external lines are associated with the Cartan generator~$t$ and the W-Bosons running in the loop correspond to a root~$\alpha$.}
\label{loopfig}
\end{figure}
\medskip
We now apply the same logic to the coefficient~$b^{(5)}$ in~\eqref{b5def}. Since it does not depend on the gauge coupling~$g$, it can only arise from the two-derivative Yang-Mills Lagrangian in~\eqref{5dkin} by integrating out massive W-Bosons at one loop. Upon turning on a vev along the Cartan element~$t$, the gluons in the adjoint representation of~$\frak g$ decompose into the massless gluons of~$\frak h$ and the~$\frak u(1)$ photon, as well as massive W-Bosons. The latter are labeled by roots~$\alpha \in \Delta_{\frak g}$ that do not reside in the root system of~$\frak h$, i.e.~$\alpha \in \Delta_{\frak g} \backslash \Delta_{\frak h}$. The~$\frak u(1)$ charge of a W-Boson labeled by~$\alpha$ is given by~$\alpha(t)$. Carrying out the one-loop computation expresses~$b^{(5)}$ as a sum over W-Bosons, weighted by their~$\frak u(1)$ charges,
\begin{equation}\label{b5loop}
b^{(5)} = {1 \over 128 \pi^2} \sum_ {\alpha \in \Delta_{\frak g} \backslash \Delta_{\frak h}} |\alpha(t)|~.
\end{equation}
Up to an overall constant, this result can be understood using a simple scaling argument: the coefficient~$b^{(5)}$ is determined by the one-loop scalar box integral in Figure~\ref{loopfig}, with W-Bosons labeled by roots~$\alpha$ running in the loop (see for instance~\cite{Elvang:2013cua}). It is instructive to examine this integral as a function of the spacetime dimension~$d$. There are four powers of~$|\alpha(t)|$ that arise from the vertices. If the integral over loop momenta is finite, as is the case here, it scales like~$d-8$ powers of the W-Boson mass~$m_W(\alpha)$. Therefore, the diagram in Figure~\ref{loopfig} is proportional to~$|\alpha(t)|^{d-4} |\langle \psi\rangle|^{d-8}$. For~$d = 5$ this is consistent with~\eqref{b5loop}, since~$b^{(5)}$ multiplies~$\psi^{-3}$ in~\eqref{b5def}. In the special case~$d = 4$, the charges~$|\alpha(t)|$ cancel and the diagram is simply proportional to~$n_W$, the total number of W-Bosons.\footnote{~This cancellation can be understood in terms of the conformal symmetry of four-dimensional~${\mathcal N}=4$ Yang-Mills theory. The four-derivative terms generated by the loop integral in Figure~\ref{loopfig} are proportional to~$\Delta a^{(4)}$, the difference between the four-dimensional $a$-anomaly of the UV and IR theories~\cite{Komargodski:2011vj}. Since the~$a$-anomaly of an~${\mathcal N}=4$ theory with gauge algebra~$\frak g$ is proportional to~$d_{\frak g}$, it follows that~$\Delta a^{(4)} \sim n_W$.} Therefore, it is misleading to extrapolate the four-dimensional answer to other dimensions, since it does not correctly capture the sum over charges. We will encounter a similar fallacy in section~5.
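For concreteness, the charge sum in~\eqref{b5loop} is easy to evaluate directly from the root system. The following minimal Python sketch (illustrative only; it assumes the breaking pattern~$\frak{su}(n) \rightarrow \frak{su}(k) \oplus \frak{su}(n-k) \oplus \frak u(1)$ with the Cartan element~$t = \mathrm{diag}\left((n-k)\,{\mathds 1}_k\,, -k\,{\mathds 1}_{n-k}\right)$ that is also used in section~5.1, and the sample values of~$n$ and~$k$ are arbitrary) enumerates the W-Boson charges and evaluates~$b^{(5)}$:
\begin{verbatim}
# Evaluate the one-loop coefficient b^(5) of eq. (b5loop) for
# su(n) -> su(k) + su(n-k) + u(1), with t = diag((n-k) 1_k, -k 1_{n-k}).
import numpy as np

def b5(n, k):
    t = np.concatenate([(n - k) * np.ones(k), -k * np.ones(n - k)])
    total = 0.0
    for i in range(n):
        for j in range(n):
            # roots e_i - e_j with t_i != t_j lie outside Delta_h
            if i != j and t[i] != t[j]:
                total += abs(t[i] - t[j])
    return total / (128 * np.pi**2)

# There are 2k(n-k) W-Bosons, each of u(1) charge |alpha(t)| = n:
n, k = 5, 2
assert abs(b5(n, k) - 2*k*(n - k)*n / (128*np.pi**2)) < 1e-12
\end{verbatim}
Each of the~$2k(n-k)$ W-Bosons carries charge~$|\alpha(t)| = n$, so the sum evaluates to~$2k(n-k)n$.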
Having determined the coefficient~$b^{(5)}$ in~\eqref{b5def}, we can use the fact that we know the exact~$\psi$-dependence of this term to extrapolate it to large vevs. It can then be compared to the term~\eqref{fourd} in the six-dimensional effective Lagrangian on the tensor branch, which we repeat here for convenience
\begin{equation}\label{fourdii}
b \, {(\partial \Psi)^4 \over \Psi^3} \subset{\mathscr L}_{\text{tensor}}~.
\end{equation}
The coefficient~$b$ was defined in a normalization where the six-dimensional dilaton field~$\Psi$ has canonical kinetic terms. (Note that~\eqref{fourdii} is not invariant under rescalings of~$\Psi$.) We must therefore appropriately renormalize the five-dimensional field~$\psi$ to eliminate the non-canonical kinetic terms in~\eqref{kt}. Also taking into account factors of~$2 \pi R$ that arise in the transition from six to five dimensions (see the discussion around~\eqref{5d6dfields} and~\eqref{6dlift}) gives
\begin{equation}
b = \left({g^2 \over 2 \pi R \Tr_{\frak g}(t^2)} \right)^{1 \over 2} b^{(5)}~.
\end{equation}
Substituting~$g^2 = 4 \pi^2 R$ from~\eqref{gaugeagain} and~$b^{(5)}$ from~\eqref{b5loop} then leads to
\begin{equation}\label{bfinal}
b = \left({1 \over 8192 \pi^3 \Tr_{\frak g}(t^2)} \right)^{1 \over 2} \sum_ {\alpha \in \Delta_{\frak g} \backslash \Delta_{\frak h}} |\alpha(t)|~.
\end{equation}
Note that rescaling~$t$ does not change the right-hand side of this formula.
Having determined the coefficient~$b$ that governs the four-derivative terms on the tensor branch, we can use the relations~\eqref{absq} and~\eqref{kbsq} to determine the coefficients of the six-dimensional WZ terms~\eqref{awzw} and~\eqref{kenwzw}. As was emphasized in~\cite{Intriligator:2000eq}, the~$R$-symmetry WZ term~\eqref{kenwzw} vanishes when it is reduced to zero modes along~$S^1_R$, because it contains a derivative along the circle. Nevertheless, we can determine its coefficient by examining the four-derivative terms, whose reduction to five dimensions is non-trivial.
\subsection{The Abelian Case}
When~$\frak g = \frak u(1)$, the description at the origin of the five-dimensional Coulomb branch involves a single Abelian~${\mathcal N}=2$ vector multiplet, possibly deformed by higher derivative terms. One difference from the non-Abelian case is that the relation between the gauge coupling~$g^2$ and the compactification radius~$R$ is no longer fixed by considerations involving BPS states on the Coulomb branch, as in section~3.2. However, the non-renormalization theorems discussed in section~3.3 still apply. Since the Abelian theory does not give rise to any W-Bosons, the constant~$b$ computed in~\eqref{bfinal} vanishes. As was discussed around~\eqref{fourd}, this can only happen if the six-dimensional theory on the tensor branch is locally free. Therefore, the only~$(2,0)$ SCFTs that give rise to~$\frak u(1)$ gauge theories in five dimensions are locally described by free Abelian tensor multiplets.
\section{Applications}
In this section we will combine the results of sections~2 and~3 to compute the Weyl anomaly~$a_{\frak g}$ and the~$R$-symmetry anomaly~$k_{\frak g}$ for all~$(2,0)$ SCFTs~${\mathcal T}_{\frak g}$. We also prove that~$a_{\frak g}$ strictly decreases under every RG flow that preserves~$(2,0)$ supersymmetry. Finally, we use our computation of~$k_{\frak g}$ to offer another argument for the ADE restriction on the Lie algebra~$\frak g$.
\subsection{Computing the Anomalies~$a_{\frak g}$ and~$k_{\frak g}$}
Upon breaking~$\frak g\rightarrow \frak h \oplus \frak u(1)$ in six dimensions, the anomaly mismatch between the UV theory~${\mathcal T}_{\frak g}$ and the IR theory, which consists of the interacting SCFT~${\mathcal T}_{\frak h}$ and an Abelian tensor multiplet, is given by~\eqref{absq} and~\eqref{kbsq}, which we repeat here,
\begin{equation}\label{deltaakii}
\Delta a = a_{\frak g} - \left(a_{\frak h} + 1\right) = \frac{98304 \, \pi^{3}}{7} \,b ^{2}~, \qquad \Delta k = k_{\frak g} - k_{\frak h} = 6144 \pi^3 \, b^2~.
\end{equation}
The constant~$b$ was determined in~\eqref{bfinal} via a one-loop computation in five dimensions. Substituting~\eqref{bfinal} into~\eqref{deltaakii}, we find
\begin{equation}\label{akx}
\Delta a = {12 \over 7} X ~, \qquad \Delta k = {3\over 4} X~, \qquad X = {1 \over \Tr_{\frak g}(t^2)} \left(\sum_ {\alpha \in \Delta_{\frak g} \backslash \Delta_{\frak h}} |\alpha(t)|\right)^2~.
\end{equation}
Note that the denominator of~$X$ can also be expressed as a sum over~W-Bosons using~\eqref{trdef},
\begin{equation}\label{trwsum}
\Tr_{\frak g} (t^2) = {1 \over 2 h^\vee_{\frak g}} \Tr_{\text{adj}} (t^2) ={1 \over 2 h^\vee_{\frak g}} \sum_{{\alpha \in \Delta_{\frak g} \backslash \Delta_{\frak h}}} \left(\alpha(t)\right)^2~.
\end{equation}
Here the roots in~$\Delta_{\frak h}$ do not contribute, because~$t$ commutes with all elements of~$\frak h$.
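As a quick sanity check of~\eqref{trwsum} (a sketch only; the breaking pattern and sample values are the same illustrative ones as above), one can compare the fundamental-representation trace of~$t^2$ for~$\frak{su}(n)$, whose invariant metric already satisfies the length-squared-two normalization, with the root-charge sum divided by~$2 h^\vee_{\frak g} = 2n$:
\begin{verbatim}
# Check eq. (trwsum) for su(n) -> su(k) + su(n-k) + u(1):
# Tr_fund(t^2) should equal the root-charge sum divided by 2n.
import numpy as np

n, k = 6, 2
t = np.concatenate([(n - k) * np.ones(k), -k * np.ones(n - k)])
lhs = np.sum(t**2)                       # Tr_fund(t^2) = k(n-k)n
rhs = sum((t[i] - t[j])**2 for i in range(n)
          for j in range(n) if i != j) / (2 * n)
assert abs(lhs - rhs) < 1e-9             # roots in Delta_h contribute 0
\end{verbatim}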
In order to express~$X$ in~\eqref{akx} in terms of more familiar Lie-algebraic data, it is convenient to introduce the vector~$\omega_t \in \frak t^*_{\frak g}$ that is dual to the Cartan element~$t \in \frak t_{\frak g}$,
\begin{equation}\label{omegadefii}
\alpha(t) = \langle \omega_t, \alpha\rangle_{\frak g}~, \qquad \forall \alpha \in \Delta_{\frak g}~,
\end{equation}
and satisfies
\begin{equation}\label{hsq}
\Tr_{\frak g} (t^2) = \langle t,t\rangle_{\frak g} = \langle \omega_t, \omega_t\rangle_{\frak g}~.
\end{equation}
Note that~$\omega_t$ is orthogonal to the hyperplane~$\frak t^*_{\frak h} \subset \frak t^*_{\frak g}$, which is spanned by the roots of~$\frak h$. Recall that the metric~$\langle \cdot, \cdot \rangle_{\frak g}$ is normalized so that the long roots of~$\frak g$ have length-squared~$2$.
Next, we partition the root system of~$\frak g$ into positive and negative roots~$\Delta^\pm_{\frak g}$. We use~$\Delta_{\frak h}^\pm$ to denote the roots of~$\frak h$ that lie in~$\Delta^\pm_{\frak g}$. The roots~$\alpha \in \Delta^+_{\frak g} \backslash \Delta^+_{\frak h}$ correspond to W-Bosons of positive~$\frak u(1)$ charge~$\alpha(t) > 0$ and transform in a representation of~$\frak h$; the roots~$-\alpha \in \Delta^-_{\frak g} \backslash \Delta^-_{\frak h}$ correspond to W-Bosons of charge~$-\alpha(t)$. Since the Weyl vectors of~$\frak g$ and~$\frak h$ are given by
\begin{equation}\label{rhodef}
\rho_{\frak g} = {1 \over 2} \sum_{\alpha \in \Delta^+_{\frak g}} \alpha~, \qquad \rho_{\frak h} = {1 \over 2} \sum_{\alpha \in \Delta^+_{\frak h}} \alpha~,
\end{equation}
we can rewrite
\begin{equation}\label{srew}
\sum_ {\alpha \in \Delta_{\frak g} \backslash \Delta_{\frak h}} |\alpha(t)| = 2 \sum_ {\alpha \in \Delta^+_{\frak g} \backslash \Delta^+_{\frak h}} \alpha(t) = 4 \langle \omega_t, \rho_{\frak g} - \rho_{\frak h}\rangle_{\frak g}~.
\end{equation}
Substituting~\eqref{hsq} and~\eqref{srew} into~\eqref{akx}, we find that
\begin{equation}\label{xsimp}
X = {16 \langle \omega_t, \rho_{\frak g} - \rho_{\frak h}\rangle_{\frak g}^2 \over \langle \omega_t, \omega_t\rangle_{\frak g}}~.
\end{equation}
Finally, we use the fact that~$\rho_{\frak g} - \rho_{\frak h}$ is orthogonal to the hyperplane~$\frak t^*_{\frak h} \subset \frak t^*_{\frak g}$ spanned by the roots of~$\frak h$. To see this, recall that the roots in~$\Delta_{\frak g}^+ \backslash \Delta_{\frak h}^+$ are the positively charged W-Bosons, which transform in a representation of~$\frak h$. It follows that~$\Delta_{\frak g}^+ \backslash \Delta_{\frak h}^+$ is invariant under the Weyl group of~$\frak h$. Hence, the same is true for~$\rho_{\frak g} - \rho_{\frak h}$, which must therefore be orthogonal to~$\frak t^*_{\frak h}$. Since~$\omega_t$ is orthogonal to the same hyperplane, we conclude that~$\rho_{\frak g} - \rho_{\frak h}$ and~$\omega_t$ are parallel. These properties allow us to reduce~\eqref{xsimp} to
\begin{equation}
X = {16} \, \langle \rho_{\frak g} - \rho_{\frak h}, \rho_{\frak g} - \rho_{\frak h}\rangle_{\frak g} = {16}\Big(\langle \rho_{\frak g},\rho_{\frak g}\rangle_{\frak g} - \langle \rho_{\frak h}, \rho_{\frak h}\rangle_{\frak g}\Big)~.
\end{equation}
We would now like to apply the Freudenthal-de\,Vries strange formula
\begin{equation}\label{strangefmla}
\langle \rho_{\frak g}, \rho_{\frak g}\rangle_{\frak g} = {1 \over 12} h^\vee_{\frak g} d_{\frak g}~,
\end{equation}
and similarly for~$\frak h$. This formula only applies if we use the particular metric for which long roots have length-squared~$2$. However, the long roots of~$\frak h$ in general do not have this property with respect to the metric~$\langle \cdot, \cdot\rangle_{\frak g}$ adapted to~$\frak g$. For a general rank one breaking~$\frak g \rightarrow \frak h \oplus \frak u(1)$, we have~$\frak h = \oplus_i \frak h_i$, where the~$\frak h_i$ are compact, simple Lie algebras. Then the metrics~$\langle \cdot, \cdot\rangle_{\frak h_i}$ adapted to~$\frak h_i$ are related to the metric adapted to~$\frak g$ as follows,
\begin{equation}\label{nfactordef}
\langle \cdot,\cdot \rangle_{\frak g} = N^{(i)}_{\frak h \subset \frak g} \, \langle \cdot, \cdot\rangle_{\frak h_i}~.
\end{equation}
The normalization factors~$N^{(i)}_{\frak h \subset \frak g}$ depend on the particular embedding of~$\frak h$ in~$\frak g$. They are determined by the lengths of the long roots~$\ell_i$ of~$\frak h_i$ with respect to the metric on~$\frak g$,
\begin{equation}\label{nembeddef}
N^{(i)}_{\frak h \subset \frak g} = {1 \over 2} \langle \ell_i, \ell_i\rangle_{\frak g}~, \qquad \ell_i~\text{a long root of}~\frak h_i \subset \frak g~.
\end{equation}
If~$\frak g$ is simply laced, then all roots have length-squared~$2$, so that~$N^{(i)}_{\frak h \subset \frak g} = 1$, but in general this is not the case.
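Both the strange formula and this normalization are straightforward to verify numerically. The following sketch checks~\eqref{strangefmla} for~$\frak g = \frak{su}(n)$ (assuming the standard embedding of the~$\frak{su}(n)$ root system in~${\mathbb R}^n$ with roots~$e_i - e_j$ of length-squared~$2$, and with~$h^\vee_{\frak g} = n$ and~$d_{\frak g} = n^2 - 1$):
\begin{verbatim}
# Numerical check of the Freudenthal-de Vries strange formula for su(n).
import numpy as np

n = 7
rho = np.zeros(n)
for i in range(n):
    for j in range(i + 1, n):
        alpha = np.zeros(n)
        alpha[i], alpha[j] = 1.0, -1.0   # positive root e_i - e_j
        rho += alpha / 2                 # Weyl vector, eq. (rhodef)
h_vee, dim = n, n**2 - 1
assert abs(np.dot(rho, rho) - h_vee * dim / 12) < 1e-9
\end{verbatim}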
The Weyl vector~$\rho_{\frak h}$ of~$\frak h$ defined in~\eqref{rhodef} is the sum of the mutually orthogonal Weyl vectors~$\rho_{\frak h_i}$ of the~$\frak h_i$. We can therefore use~\eqref{nfactordef} and the strange formula~\eqref{strangefmla} to evaluate
\begin{equation}
\langle \rho_{\frak h}, \rho_{\frak h}\rangle_{\frak g} = \sum_i N^{(i)}_{\frak h \subset \frak g} \; \langle \rho_{\frak h_i}, \rho_{\frak h_i}\rangle_{\frak h_i} = {1 \over 12} \sum_i N^{(i)}_{\frak h \subset \frak g} h^\vee_{\frak h_i} d_{\frak h_i}~.
\end{equation}
Therefore,
\begin{equation}
X = {4 \over 3 } \bigg(h^\vee_{\frak g}d_{\frak g}-\sum_i N^{(i)}_{\frak h \subset \frak g} h^\vee_{\frak h_i} d_{\frak h_i}\bigg)~.
\end{equation}
Substituting into~\eqref{deltaakii}, we conclude that
\begin{equation}\label{deltakwithn}
\Delta a = {16 \over 7 } \bigg(h^\vee_{\frak g}d_{\frak g}-\sum_i N^{(i)}_{\frak h \subset \frak g} h^\vee_{\frak h_i} d_{\frak h_i}\bigg)~, \qquad \Delta k = h^\vee_{\frak g}d_{\frak g}-\sum_i N^{(i)}_{\frak h \subset \frak g} h^\vee_{\frak h_i} d_{\frak h_i}~.
\end{equation}
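As a consistency check (illustrative; the breaking pattern~$\frak{su}(n) \rightarrow \frak{su}(k) \oplus \frak{su}(n-k) \oplus \frak u(1)$ and the sample values of~$n, k$ are arbitrary, and all normalization factors equal one in this case), the root-sum formula~\eqref{akx} and the Lie-algebra-data formula~\eqref{deltakwithn} can be compared directly:
\begin{verbatim}
# Compare Delta k from eq. (akx) with Delta k from eq. (deltakwithn)
# for su(n) -> su(k) + su(n-k) + u(1).
import numpy as np

n, k = 5, 2
t = np.concatenate([(n - k) * np.ones(k), -k * np.ones(n - k)])
charges = [t[i] - t[j] for i in range(n) for j in range(n)
           if i != j and t[i] != t[j]]
X = sum(abs(q) for q in charges)**2 / np.sum(t**2)
dk_roots = 0.75 * X                      # eq. (akx)
hd = lambda m: m * (m**2 - 1)            # h^vee d for su(m)
dk_lie = hd(n) - hd(k) - hd(n - k)       # eq. (deltakwithn), all N = 1
assert abs(dk_roots - dk_lie) < 1e-9     # both give 90 for n = 5, k = 2
\end{verbatim}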
\smallskip
\begin{table}[h]
\centering
\begin{tabular}{!{\VRule[1pt]}c!{\VRule[1pt]}c!{\VRule[1pt]}c!{\VRule[1pt]}c!{\VRule[1pt]}}
\specialrule{1.2pt}{0pt}{0pt}
$\mathfrak{g}$ & $a_{\frak g}$ & $c_{\frak g}$ & $k_{\frak g}$ \rule{0pt}{2.6ex}\rule[-1.4ex]{0pt}{0pt} \\
\specialrule{1.2pt}{0pt}{0pt}
\multirow{2}{*}{$\mathfrak{u}(1)$}& \multirow{2}{*}{$1$} & \multirow{2}{*}{$1$} & \multirow{2}{*}{$0$}\\
& & & \\
\hline
\multirow{2}{*}{$\mathfrak{su}(n)$} & \multirow{2}{*}{$\frac{16}{7}n^{3}-\frac{9}{7}n-1$} & \multirow{2}{*}{$4n^{3}-3n-1$} & \multirow{2}{*}{$n^3-n$}\\
& & & \\
\hline
\multirow{2}{*}{$\mathfrak{so}(2n)$}& \multirow{2}{*}{$\frac{64}{7}n^{3}-\frac{96}{7}n^{2}+\frac{39}{7}n$} & \multirow{2}{*}{$16n^{3}-24n^{2}+9n$}& \multirow{2}{*}{$4n^3-6n^2+2n$}\\
& & & \\
\hline
\multirow{2}{*}{$\mathfrak{e}_{6}$}& \multirow{2}{*}{$\frac{15018}{7}$} & \multirow{2}{*}{$3750$}& \multirow{2}{*}{$936$} \\
& & & \\
\hline
\multirow{2}{*}{$\mathfrak{e}_{7}$}& \multirow{2}{*}{$5479$} & \multirow{2}{*}{$9583$}& \multirow{2}{*}{$2394$}\\
& & &\\
\hline
\multirow{2}{*}{$\mathfrak{e}_{8}$}& \multirow{2}{*}{$\frac{119096}{7}$} & \multirow{2}{*}{$29768$} & \multirow{2}{*}{$7440$}\\
& & &\\
\specialrule{1.2pt}{0pt}{0pt}
\end{tabular}
\caption{Weyl and~$\frak{so}(5)_R$ anomalies for all~$(2,0)$ SCFTs~${\mathcal T}_{\frak g}$.}
\label{gensp}
\end{table}
In order to obtain formulas for~$a_{\frak g}$ and~$k_{\frak g}$, we can break~$\frak g \rightarrow \frak u(1)^{r_{\frak g}}$ by a sequence of rank one breakings, i.e.~by successively removing nodes from the Dynkin diagram of~$\frak g$, and apply~\eqref{deltakwithn} at every step. If~$\frak g$ is an ADE Lie algebra then all~$N^{(i)}_{\frak h \subset \frak g} = 1$, so that
\begin{equation}\label{finalak}
a_{\frak g} = {16 \over 7} h^\vee_{\frak g} d_{\frak g} + r_{\frak g}~, \qquad k_{\frak g} = h^\vee_{\frak g} d_{\frak g}~, \qquad \frak g \in \left\{A_n,D_n,E_n\right\}~.
\end{equation}
The second term in~$a_{\frak g}$ arises from the~$r_{\frak g}$ tensor multiplets that remain when ~$\frak g$ is completely broken to the Cartan subalgebra. The answer for~$k_{\frak g}$ is in agreement with the known answer discussed around~\eqref{anomform}, and the one for~$a_{\frak g}$ agrees with expectations from holography~\cite{Henningson:1998gx,Tseytlin:2000sf,Beccaria:2014qea}. In Table~\ref{gensp} we display the values of~$a_{\frak g}$ and~$k_{\frak g}$ computed in~\eqref{finalak}, as well as the conjectured value of the~$c$-anomaly~$c_{\frak g}$ in~\eqref{cresult}, for all ADE Lie algebras~$\frak g$.
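The entries of Table~\ref{gensp} follow from~\eqref{finalak} and standard Lie-algebra data. The short script below (a sketch using exact rational arithmetic; the tuples~$(h^\vee_{\frak g}, d_{\frak g}, r_{\frak g})$ are the standard values, with~$\frak{su}(4)$ and~$\frak{so}(8)$ as sample members of the infinite families) reproduces, for instance, $a_{\frak e_8} = {119096 \over 7}$ and~$k_{\frak e_8} = 7440$:
\begin{verbatim}
# Reproduce the a- and k-anomalies of Table (gensp) from eq. (finalak).
from fractions import Fraction

data = {                                 # g : (h_vee, dim, rank)
    'su(4)':  (4, 15, 3),
    'so(8)':  (6, 28, 4),
    'e6':     (12, 78, 6),
    'e7':     (18, 133, 7),
    'e8':     (30, 248, 8),
}
for g, (h, d, r) in data.items():
    a = Fraction(16, 7) * h * d + r
    k = h * d
    print(f'{g:7s} a = {a},  k = {k}')   # e8: a = 119096/7, k = 7440
\end{verbatim}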
When~$\frak g$ is not simply laced, it is generally not possible to find functions~$a_{\frak g}, k_{\frak g}$ that only depend on~$\frak g$ and satisfy~\eqref{deltakwithn} for all rank one adjoint breaking patterns. We will discuss examples in section~4.3, where we revisit the ADE restriction on~$\frak g$.
\subsection{The~$a$-Theorem for RG Flows with~$(2,0)$ Supersymmetry}
In six dimensions, the conjectured~$a$-theorem~\cite{Cardy:1988cwa} (see also~\cite{Elvang:2012st}) states that the~$a$-anomaly strictly decreases under any unitary RG flow that interpolates between a~$\text{CFT}_{\text{UV}}$ at short distances and a~$\text{CFT}_{\text{IR}}$ at long distances,
\begin{equation}
a_{\text{UV}} > a_{\text{IR}}~.
\end{equation}
Broadly speaking, RG flows fall into two categories: those initiated by deforming the~$\text{CFT}_{\text{UV}}$ using a relevant operator, which breaks conformal invariance explicitly, and those initiated by activating a vev and (partially) moving onto a moduli space of vacua, where conformal invariance is spontaneously broken.
As was stated in section~1.1, the~$(2,0)$ SCFTs in six dimensions do not possess relevant (or marginal) operators that can be used to deform the Lagrangian while preserving~$(2,0)$ supersymmetry~\cite{ckt}. Therefore, the only possible RG flows that preserve that amount of supersymmetry are the moduli-space flows we have analyzed above. They are induced by adjoint Higgsing~$\frak g \rightarrow \frak h \oplus \frak u(1)^n$ with~$\frak h$ semisimple and~$n \leq r_{\frak g}$. If~$\frak g$ is an ADE Lie algebra, we can use~\eqref{finalak} to evaluate~$a_{\text{UV}} = a_{\frak g}$ and~$a_{\text{IR}} = a_{\frak h} + n$, and hence verify that their difference is positive. Since~$r_{\frak g} = r_{\frak h} + n$, this amounts to the statement that
\begin{equation}\label{deltahd}
h^\vee_{\frak g} d_{\frak g} -
h^\vee_{\frak h} d_{\frak h} > 0~.
\end{equation}
Using the formula for~$k_{\frak g}$ in~\eqref{finalak}, we see that the same combination governs the change~$\Delta k$ in the~$R$-symmetry anomaly. It is therefore also monotonic under RG flow~\cite{Intriligator:2000eq}. Similarly, using the conjectured formula~\eqref{cresult} for the~$c$-anomaly shows that~$\Delta c$ is also proportional to the left-hand side of~\eqref{deltahd}, and hence positive. Therefore, the class of RG flows that preserve~$(2,0)$ supersymmetry is not sufficient to single out the~$a$-anomaly as the only monotonically decreasing quantity.\footnote{~An analogous situation occurs for RG flows between four-dimensional~${\mathcal N}=4$ SCFTs, since their Weyl anomalies~$a$ and~$c$ are always equal.}
The statement that~$\Delta a > 0$ for any flow can also be understood without using the explicit formula for~$a_{\frak g}$ in~\eqref{finalak}, or assuming that~$\frak g$ is simply laced. For rank one breaking patterns~$\frak g \rightarrow \frak h \oplus \frak u(1)$, it follows from~\eqref{deltaakii} that~$\Delta a \sim b^2$, with a positive, model-independent proportionality factor. As was shown in~\cite{Maxfield:2012aw} and reviewed in section~2.2, this relationship is dictated by supersymmetry. Since~$b$ can only vanish in free theories (see the discussion around~\eqref{fourd} and in section~3.4), it follows that~$\Delta a > 0$, which establishes the~$a$-theorem for rank one flows. Any adjoint breaking pattern can be obtained as a sequence of rank one breakings, by sequentially removing nodes from the Dynkin diagram of~$\frak g$. Therefore, the conclusion~$\Delta a > 0$ applies to all flows induced by adjoint Higgsing. Similarly, it also follows from~\eqref{deltaakii} that~$\Delta k \sim b^2$, and hence the same argument shows that~$\Delta k > 0$ for all such flows.
\subsection{The ADE Classification Revisited}
In section~3.2 we used properties of the dynamical W-strings that exist on the tensor branch of~$(2,0)$ theories~${\mathcal T}_{\frak g}$, and their relation to BPS states in five dimensions, to argue that~$\frak g$ must be a simply-laced Lie algebra. Here we will use the results of section~4.1 to offer an alternative perspective on the ADE restriction that only relies on properties of the massless fields on the six-dimensional tensor branch. Specifically, we will use the fact that the difference between the~$R$-symmetry anomalies of the UV and IR theories satisfies the quantization condition~\eqref{wzwquant}, because it multiplies a WZ term in six dimensions~\cite{Intriligator:2000eq},
\begin{equation}\label{dkquantii}
\Delta k = k_{\frak g} - k_{\frak h} \in 6 {\mathbb Z}~.
\end{equation}
In~\cite{Intriligator:2000eq}, this was interpreted as Dirac quantization for the dynamical W-strings on the tensor branch. The same quantization condition can be obtained by considering a non-dynamical instanton-string for an~$\frak{so}(5)_R$ background gauge field. It was argued in~\cite{Ohmori:2014kda,Intriligator:2014eaa} that such a string acts as a source for the dynamical, self-dual three-form fields on the tensor branch, with charges that are related to the~$R$-symmetry anomaly via a Green-Schwarz mechanism. Requiring these charges to satisfy Dirac quantization -- with appropriate adjustments for self-dual fields, as in~\eqref{6diracpair} -- also leads to~\eqref{dkquantii}.
In section~4.1 we computed~$\Delta k$ for the rank one breaking patterns~$\frak g \rightarrow \frak h \oplus \frak u(1)$, where~$\frak g$ is an arbitrary compact, simple Lie algebra. When~$\frak g$ is of ADE type, this formula can be integrated to~\eqref{finalak}. The quantization condition~\eqref{dkquantii} then amounts to the statement that~$h^\vee_{\frak g} d_{\frak g} \in 6 {\mathbb Z}$ for any ADE Lie algebra, which is indeed the case, as emphasized in~\cite{Intriligator:2000eq}. When~$\frak g$ is not simply laced, the formula for~$\Delta k$ (and also~$\Delta a$) in~\eqref{deltakwithn} involves the normalization factors~$N^{(i)}_{\frak h \subset \frak g}$, which generally depend on the way that~$\frak h$ is embedded into~$\frak g$.
As an example, consider~$\frak g = \frak g_{2}$, which has a maximal subalgebra~$\frak{su}(2)_{\circ} \oplus \frak{su}(2)_{\bullet}$.
Here the subscripts~$\circ$ and~$\bullet$ emphasize the fact that the two~$\frak{su}(2)$ summands are associated with the long and short roots in the~$\frak g_2$ Dynkin diagram~$\circ \hskip-2pt \equiv\hskip-2pt\bullet\,$. It is possible to break~$\frak g_2 \rightarrow \frak{su}(2) \oplus \frak u(1)$ in two inequivalent ways by adjoint Higgsing: we can either delete the long root~$\circ$ from the Dynkin diagram by choosing~$t = t_\circ$ to be the Cartan generator of~$\frak{su}(2)_\circ$, so that~$\frak h = \frak{su}(2)_\bullet$ is unbroken; or we can delete the short root~$\bullet$ by setting~$t = t_\bullet$ and preserve~$\frak h = \frak{su}(2)_\circ$\,. According to~\eqref{nembeddef}, the normalization factors for the two embeddings are determined by the long roots of the subgroup~$\frak h \subset \frak g_2$,
\begin{equation}
N_{\frak{su}(2)_\bullet \subset \frak g_2} = {1 \over 3}~, \qquad N_{\frak{su}(2)_\circ \subset \frak g_2} = 1~.
\end{equation}
Substituting into~\eqref{deltakwithn}, we find
\begin{equation}\label{g2answers}
\Delta k_{\frak{su}(2)_\bullet \subset \frak g_2} = 4 \cdot 14 - {1 \over 3} \cdot 2 \cdot 3 = 54~, \qquad \Delta k_{\frak{su}(2)_\circ \subset \frak g_2}= 4 \cdot 14 - 2 \cdot 3 = 50~.
\end{equation}
This result can also be obtained by directly evaluating the sums over W-Bosons in~\eqref{akx} and~\eqref{trwsum}, using the fact that the adjoint~$\bf 14$ of~$\frak g_2$ decomposes as follows under~$\frak{su}(2)_\circ \oplus \frak{su}(2)_\bullet$,
\begin{equation}\label{g2dec}
{\bf 14} \rightarrow \left({\bf 1}, {\bf 3}\right) \oplus \left({\bf 3}, {\bf 1}\right) \oplus \left({\bf 2}, {\bf 4}\right)~.
\end{equation}
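Concretely, this direct evaluation takes the following form (a sketch; the integer charge normalizations are read off from the weights of the~$\frak{su}(2)$ representations in~\eqref{g2dec}, and~$h^\vee_{\frak g_2} = 4$):
\begin{verbatim}
# Recover eq. (g2answers) from the W-Boson charges in eq. (g2dec).
def delta_k(charges, h_vee=4):           # charges of roots outside h
    tr_t2 = sum(q**2 for q in charges) / (2 * h_vee)   # eq. (trwsum)
    X = sum(abs(q) for q in charges)**2 / tr_t2        # eq. (akx)
    return 0.75 * X

# t = t_circ, h = su(2)_bullet: W-Bosons from (3,1) and (2,4);
# under t_circ the doublet has charges +-1, the triplet +-2, 0.
w_circ = [2, -2] + [1]*4 + [-1]*4
# t = t_bullet, h = su(2)_circ: W-Bosons from (1,3) and (2,4);
# the 4 of su(2)_bullet has weights +-3, +-1, doubled by the 2.
w_bullet = [2, -2] + [3, 1, -1, -3]*2
assert abs(delta_k(w_circ) - 54) < 1e-9
assert abs(delta_k(w_bullet) - 50) < 1e-9
\end{verbatim}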
Note that~$\Delta k_{\frak{su}(2)_\circ \subset \frak g_2}$ in~\eqref{g2answers} is not divisible by six, i.e.~it does not satisfy the quantization condition~\eqref{dkquantii}. This rules out~$(2,0)$ theories~${\mathcal T}_{\frak g}$ with~$\frak g = \frak g_2$.
Similar phenomena occur for all non-simply-laced Lie algebras, since they contain roots of different lengths. In general, the subgroup~$\frak h$ decomposes into several compact, simple summands, which give rise to different normalization factors~\eqref{nembeddef} that must be added according to~\eqref{deltakwithn}.\footnote{~For example, we can break~$\frak{so}(7)\rightarrow \frak{su}(2)_\circ \oplus \frak{su}(2)_\bullet \oplus \frak u(1)$ by deleting the middle node from the~$\frak{so}(7)$ Dynkin diagram~$\circ\hskip-2pt-\hskip-2pt\circ\hskip-2.5pt=\hskip-2.5pt\bullet$\,. According to~\eqref{nembeddef}, the normalization factors for~$\frak{su}(2)_{\circ}$ and~$\frak{su}(2)_\bullet$ are~$N_{\circ} = 1$ and~$N_{\bullet} = {1 \over 2}$, respectively. Therefore~\eqref{deltakwithn} gives~$\Delta k = 5 \cdot 21 - 1 \cdot 2 \cdot 3 - {1 \over 2} \cdot 2 \cdot 3 = 96$.} The quantization condition~\eqref{dkquantii} also rules out all other non-ADE Lie algebras~$\frak g$. In order to show this, it suffices to rule out~$\frak g = \frak{so}(5) = \frak{sp}(4)$, since it can be reached from all non-simply-laced Lie algebras other than~$\frak g_2$ by adjoint Higgsing. We can break~$\frak{so}(5) \rightarrow \frak{su}(2)_\bullet \oplus \frak u(1)$ by deleting the long root~$\circ$ from the~$\frak{so}(5)$ Dynkin diagram~$\circ \hskip-3pt =\hskip-3pt\bullet\,$. Substituting the normalization factor~$N_{\frak{su}(2)_\bullet \subset \frak{so}(5)} = {1 \over 2}$ from~\eqref{nembeddef} into~\eqref{deltakwithn} then gives~$\Delta k_{\frak{su}(2)_\bullet \subset \frak{so}(5)} = 3 \cdot 10 - {1 \over 2} \cdot 2 \cdot 3 = 27$, which does not satisfy~\eqref{dkquantii}.
In summary, the fact that there are no~$(2,0)$ SCFTs~${\mathcal T}_{\frak g}$ unless~$\frak g$ is a simply-laced Lie algebra is required by the consistency of the low-energy effective theory on the six-dimensional tensor branch, due to the quantization condition~\eqref{dkquantii}.
\section{Compactification to Four Dimensions}
In this section we will consider~$(2,0)$ SCFTs~${\mathcal T}_{\frak g}$ on~${\mathbb R}^{3,1} \times T^2$. Here~$T^2 = S^1_R \times S^1_r$ is a rectangular torus of area~$A = R r$ and modular parameter~$\tau = i \tau_2 = i\left({r \over R}\right)$. We describe their moduli spaces of vacua, which depend on a choice of gauge group~$G$, and the singular points at which interacting~${\mathcal N}=4$ theories reside. In addition to the familiar theory with gauge group~$G$ at the origin there are typically additional singular points at finite distance~$\sim A^{-{1 \over 2}}$ (sometimes with a different gauge group), which recede to infinite distance when the torus shrinks to zero size, $A\rightarrow 0$. We use non-renormalization theorems to determine the four-derivative terms in the Coulomb-branch effective action via a one-loop calculation in five-dimensional~${\mathcal N}=2$ Yang-Mills theory, which now includes a sum over KK modes, and interpret the result. Many statements in this section have five-dimensional analogues, which were discussed in section~3. We will therefore be brief, focusing on those aspects that are particular to four dimensions.
\subsection{Two-Derivative Terms and Singular Points on the Coulomb Branch}
As in five dimensions, the two-derivative theory on the Coulomb branch of the toroidally compactified theory is completely rigid, due to the constraints of maximal supersymmetry. The geometry of the moduli space can therefore be understood by first compactifying to five-dimensional~${\mathcal N}=2$ Yang-Mills theory on~$S^1_R$, and then analyzing the classical vacua of this Yang-Mills theory on~${\mathbb R}^{3,1} \times S^1_r$. (Note that the order in which we compactify on the two circles selects an S-duality frame in four dimensions.) Many other aspects of toroidally compactified~$(2,0)$ theories can also be understood by studying the five-dimensional Yang-Mills theory on~${\mathbb R}^{3,1} \times S^1_r$ (see for instance~\cite{Tachikawa:2011ch} and references therein). In section~5.2 we will follow this logic to determine the four-derivative terms on the Coulomb branch.
As in previous sections, we will focus on rank one Coulomb branches described by a single Abelian vector multiplet, which is associated with a Cartan generator~$t \in \frak t_{\frak g}$ of the gauge algebra~$\frak g$ in five dimensions. In addition to the fields that are already present in five dimensions, the four-dimensional effective theory contains an additional real scalar~$\sigma$, which is the Wilson line of the five-dimensional Abelian gauge field~$a_\mu dx^\mu$ around~$S^1_r$,
\begin{equation}\label{sigmadef}
\sigma = {1 \over 2 \pi r} \int_{S^1_r} a_\mu dx^\mu~.
\end{equation}
With this normalization~$\sigma$ has dimension one, and its kinetic terms agree with those of the other five scalars~$\varphi^I$. Explicitly, we can reduce the five-dimensional kinetic terms in~\eqref{kt} to zero modes along~$S^1_r$ (taking into account factors of~$2 \pi r$) and use the relation~$g^2 = 4 \pi^2 R$ from~\eqref{gaugeagain} to obtain the kinetic terms on the four-dimensional Coulomb branch,
\begin{equation}
- {\tau_2 \over 4 \pi} \Tr_{\frak g} (t^2) \left(f \wedge * f + \partial_\mu \sigma \partial^\mu \sigma + \sum_{I = 1}^5 \partial_\mu \varphi^I \partial^\mu \varphi^I\right) + \left(\text{Fermions}\right) \subset {\mathscr L}^{(4)}_{\text{Coulomb}}~,\;\;\tau_2 = {r\over R}~.
\end{equation}
The scalar~$\sigma$ is periodic, because it is the holonomy of the Abelian gauge field~$a_\mu dx^\mu$ in five dimensions, which is in turn embedded into a non-Abelian gauge field, as in~\eqref{nonabeemb}. We will parametrize the periodicity of~$\sigma$ as follows,
\begin{equation}\label{sigmaperdef}
\sigma \sim \sigma + {p \over r}~, \qquad (p >0)~.
\end{equation}
The dimensionless constant~$p$ depends on the choice of gauge group~$G$ in five dimensions, specifically its maximal torus. It is defined to be the smallest positive number that satisfies
\begin{equation}\label{pfind}
\exp\left(2 \pi i \, p\, t\right) = {\mathds 1}_G~,
\end{equation}
where~${\mathds 1}_G$ denotes the identity element of the Lie group~$G$. Since~$\sigma$ is periodic while the~$\varphi^I$ are not, the~$R$-symmetry is typically~$\frak{so}(5)_R$, as in five dimensions. If~$r\rightarrow 0$, so that the area~$A = R r$ of the torus vanishes, the periodicity of~$\sigma$ in~\eqref{sigmaperdef} disappears. In this limit we obtain a genuinely four-dimensional~${\mathcal N}=4$ theory with an accidental~$\frak{so}(6)_R$ symmetry under which the six scalars~$\sigma, \varphi^I$ transform as a vector.
We will now explore the structure of the Coulomb branch parametrized by the scalars~$\sigma, \varphi^I$. As in higher dimensions, it is convenient to use the radial variable~$\psi = \left(\sum_I \varphi^I\varphi^I\right)^{1 \over 2}$. For generic~$\langle\sigma\rangle, \langle\psi\rangle$, the gauge symmetry is broken to the commutant~$\frak h \oplus \frak u(1)$ of the Cartan generator~$t$ in~$\frak g$. At the origin~$\langle \sigma\rangle = \langle \psi \rangle = 0$, the gauge symmetry is restored to~$\frak g$. Interestingly, the circle of vacua parametrized by~$\langle \sigma \rangle \neq 0$ and~$\langle \psi\rangle = 0$ typically contains other points at which the gauge symmetry is enhanced beyond~$\frak h \oplus \frak u(1)$. These can be found by examining the commutant of the Wilson line parametrized by~$\langle \sigma\rangle$ inside the gauge group~$G$,
\begin{equation}
\exp\left(2 \pi i \langle \sigma \rangle\, r \, t\right) \in G~.
\end{equation}
A complementary approach to identifying singular points on the Coulomb branch is to track the masses of~${1 \over 2}$-BPS W-Bosons and their KK modes on~$S^1_r$ as functions of the vevs~$\langle \sigma\rangle, \langle \psi\rangle$. The BPS mass formula for a W-Boson corresponding to a root~$\alpha \in \Delta_{\frak g} \backslash \Delta_{\frak h}$ with KK momentum~$\ell \over r$ takes the following form (see e.g.~\cite{Tachikawa:2011ch}),
\begin{equation}
m^2_W(\alpha, \ell) = \alpha(t)^2 \langle\psi\rangle^2 + \left({\ell \over r} - \langle \sigma\rangle \alpha(t)\right)^2~, \qquad \ell \in {\mathbb Z}~. \label{wkk}
\end{equation}
Just as for the two-derivative terms, maximal supersymmetry ensures that this classical formula is quantum-mechanically exact. At the origin of the Coulomb branch, the~$\ell = 0$ KK modes of all W-Bosons become massless, so that the gauge symmetry is enhanced to~$\frak g$. However, if~$\langle \psi \rangle = 0$, there can be points~$\langle \sigma \rangle \neq 0$ at which certain W-Bosons with~$\ell \neq 0$ become massless, which also enhances the gauge symmetry. Rather than describing these phenomena in full generality, we will illustrate them in several representative examples.
\bigskip
\noindent {\it The Moduli Space for~$\frak g = \frak{su}(n)$}
\medskip
The simply connected Lie group with Lie algebra~$\frak{su}(n)$ is~$SU(n)$, whose center is~${\mathbb Z}_n$. The possible choices of gauge group are then given by
\begin{equation}
G_j=SU(n)/\mathbb{Z}_{j}~.
\end{equation}
Here~$j$ is a divisor of~$n$, so that the center of~$G_j$ is~${\mathbb Z}_n/{\mathbb Z}_j$. We will consider the rank one breaking pattern~$\frak{su}(n)\rightarrow \frak{su}(k) \oplus \frak{su}(n-k)\oplus \frak{u}(1)$. In the fundamental representation, a Cartan element~$t$ that leads to this breaking pattern is given by
\begin{equation}
t=\left(\begin{array}{c|c}(n-k) \, {\mathds 1}_{k} & 0 \\ \hline0 & -k \,{\mathds 1}_{n-k} \end{array}\right)~.
\end{equation}
We can now use~\eqref{pfind} to find the periodicity~$p$ of the compact scalar~$\sigma$,
\begin{equation}\label{sunkp}
p= {1 \over j \cdot \mathrm{gcd}(n-k,k)}~.
\end{equation}
Whenever the Wilson line~$\exp(2 \pi i \langle \sigma \rangle \, r \, t)$ is in the center~${\mathbb Z}_n/{\mathbb Z}_j$ of the gauge group, the full~$G_j$ gauge symmetry is restored. This occurs precisely when
\begin{equation}\label{siglsolve}
\langle \sigma \rangle = {\ell \over n r}~, \qquad \ell \in {\mathbb Z}~, \qquad 0 \leq \ell < {n \over j \cdot \mathrm{gcd}(n-k,k)}~.
\end{equation}
Here the restriction on the integer~$\ell$ is due to the periodicity of~$\sigma$ dictated by~\eqref{sunkp}.
We can compare these results with the W-Boson mass formula in~\eqref{wkk}. Under the subalgebra~$\frak{su}(k) \oplus \frak{su}(n-k)\oplus \frak{u}(1)$, the W-Bosons transform as follows,
\begin{equation}
\Big(\,\yng(1)~,\overline{\yng(1)}\,\Big)_{n} \oplus \left(\text{complex conjugate}\right)~,
\end{equation}
where the subscript indicates that the~W-Bosons have~$\frak{u}(1)$ charges~$\pm n$. At the values of~$\langle \sigma \rangle$ in~\eqref{siglsolve} (and for~$\langle \psi \rangle = 0$) the mass formula~\eqref{wkk} predicts that the~$\ell^{\text{th}}$ KK modes of all W-Bosons become massless and can therefore restore the full gauge symmetry~$G_j$.
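This masslessness condition follows immediately from~\eqref{wkk}; the minimal sketch below (with arbitrary sample values of~$n$, $r$, and~$\ell$) verifies that at~$\langle\psi\rangle = 0$ and~$\langle \sigma \rangle = \ell/(nr)$ the~$\ell^{\text{th}}$ KK modes of the charge-$\pm n$ W-Bosons are exactly massless:
\begin{verbatim}
# Check eq. (wkk) at psi = 0, sigma = l/(n r) for charges +-n.
n, r, ell = 5, 2.0, 3
sigma = ell / (n * r)
for charge in (n, -n):
    kk = ell if charge > 0 else -ell     # massless KK level
    m2 = (kk / r - sigma * charge)**2    # eq. (wkk) with psi = 0
    assert abs(m2) < 1e-12
\end{verbatim}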
The preceding discussion illustrates the fact that there are in general multiple singular points on the~$\langle \sigma\rangle$ circle, which correspond to~${\mathcal N}=4$ Yang-Mills theories with enhanced gauge symmetry. This phenomenon even occurs on the Coulomb branch of the simplest rank one theory with gauge algebra~$\frak g = \frak{su}(2)$, where~$\frak{su}(2) \rightarrow \frak u(1)$. In the notation above, this corresponds to~$n = 2$ and~$k=1$. If the global form of the gauge group is~$SU(2)$, the periodicity of~$\sigma$ in~\eqref{sunkp} is~$p =1$ and according to~\eqref{siglsolve} there are two distinct points, $\langle \sigma\rangle = 0$ and~$\langle \sigma\rangle = {1 \over 2 r}$, where the~$SU(2)$ gauge symmetry is restored. By contrast, if the gauge group is~$SU(2)/{\mathbb Z}_2 = SO(3)$, then~$p = {1 \over 2}$ and only the point~$\langle \sigma\rangle = 0$ has an enhanced~$SO(3)$ gauge symmetry.
In the language of~\cite{Kapustin:2014gua, Gaiotto:2014kfa}, the~$SU(2)$ gauge theory in five dimensions has a~${\mathbb Z}_2$ one-form global symmetry, which shifts the gauge field by a flat~${\mathbb Z}_2$ connection. This acts as a global~${\mathbb Z}_2$ symmetry on the scalar~$\sigma$ defined in~\eqref{sigmadef}, which interchanges the two vacua at~$\langle \sigma\rangle = 0$ and~$\langle \sigma\rangle = {1 \over 2 r}$. When the gauge group is~$SO(3)$, this~${\mathbb Z}_2$ global symmetry is gauged and the two vacua are identified. In the limit~$r\rightarrow 0$, the $\sigma$-circle decompactifies and the points with enhanced gauge symmetry are separated by an infinite distance in moduli space. The importance of such global issues for gauge theories on a circle was recently emphasized in~\cite{Aharony:2013dha,Aharony:2013kma}. Subtleties of the zero-area limit were discussed in~\cite{Gaiotto:2011xs}.
\bigskip
\noindent {\it The Moduli Space for~$\frak g = \frak{so}(2n)$}
\medskip
As our second example, we consider~$\frak{g}=\frak{so}(2n)$, which manifests new phenomena. Here we limit the choice of gauge group to~$G = SO(2n)$, i.e.~the standard group of special orthogonal matrices with~$\mathbb{Z}_{2}$ fundamental group. We consider the breaking pattern~$\frak{so}(2n)\rightarrow \frak{su}(k)\oplus\frak{so}(2(n-k))\oplus \frak{u}(1)$. In the fundamental representation, a Cartan generator~$t$ that gives rise to this breaking is given by
\begin{equation}
t= \left(\begin{array}{c|c}\begin{array}{c|c} 0 & i {\mathds 1}_{k} \\ \hline -i {\mathds 1}_{k} & 0 \end{array} & 0 \\ \hline 0 & 0_{2n-2k}\end{array}\right)~.
\end{equation}
We can then evaluate the Wilson line,
\begin{equation}\label{wilsonlineso}
\exp\left(2 \pi i \langle \sigma \rangle \, r \, t \right) = \left(\begin{array}{c|c}\begin{array}{c|c} \cos\left(2 \pi \langle \sigma \rangle r\right) {\mathds 1}_k & - \sin\left(2 \pi \langle \sigma \rangle r\right) {\mathds 1}_{k} \\ \hline \sin\left(2 \pi \langle \sigma \rangle r\right) {\mathds 1}_{k} & \cos\left(2 \pi \langle \sigma \rangle r\right) {\mathds 1}_k \end{array} & 0 \\ \hline 0 & {\mathds 1}_{2n-2k}\end{array}\right)~.
\end{equation}
This shows that the periodicity of~$\sigma$ is~$p = 1$. At~$\langle \sigma \rangle = 0$, the full~$SO(2n)$ gauge symmetry is restored. However, the vacuum at~$\langle \sigma \rangle = {1 \over 2 r}$ is also special. At this point, the Wilson line in~\eqref{wilsonlineso} reduces to~$\text{diag}(-{\mathds 1}_{2k}, {\mathds 1}_{2n - 2k})$, so that the gauge symmetry is enhanced to~$\frak{so}(2k) \oplus \frak{so}(2(n-k))$. Note that this gauge algebra cannot be reached from~$\frak{so}(2n)$ by standard adjoint Higgsing using the five non-compact scalars~$\varphi^I$.
These conclusions are again reflected in the spectrum of W-Bosons. They transform as follows under~$\frak{su}(k) \oplus \frak{so}(2(n-k))\oplus \frak{u}(1)$,
\begin{equation}
\Bigg(\, \yng(1,1)~, \mathbf{1} \,\Bigg)_{2} \oplus \Big(\,\yng(1)~, \yng(1)\, \Big)_{1} \oplus \left(\text{complex conjugate}\right)~, \label{wspecso}
\end{equation}
where the subscripts denote the~$\frak{u}(1)$ charges with respect to~$t$. The mass formula~\eqref{wkk} shows that the~$\ell = 0$ KK modes of all W-Bosons are massless at the origin, where the gauge symmetry is enhanced to~$SO(2n)$. At~$\langle \sigma \rangle = {1\over 2r}$ and~$\langle \psi \rangle = 0$, the~$\ell=1$ KK mode from the~$\bigg(\,\yng(1,1)~, \mathbf{1} \,\bigg)_{2}$ representation and the~$\ell = -1$ KK mode from its complex conjugate representation are massless, while all other W-Boson modes are massive. Together with the massless gauge Bosons of~$\frak{su}(k) \oplus \frak{so}(2(n-k))\oplus \frak{u}(1)$, they precisely fill out the adjoint representation of~$\frak{so}(2k) \oplus \frak{so}(2(n-k))$, which is the unbroken gauge algebra at that point. Therefore we can obtain an~${\mathcal N}=4$ theory with this gauge symmetry by taking the~$r \rightarrow 0$ limit while tuning~$\langle \sigma \rangle = {1 \over 2 r}$. In this limit, the conventional vacuum at~$\langle \sigma \rangle = 0$ moves off to infinite distance.
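As a counting check on this symmetry enhancement (illustrative; the sample values of~$n$ and~$k$ are arbitrary), one can verify that the massless states at~$\langle \sigma \rangle = {1 \over 2r}$ assemble into the adjoint of~$\frak{so}(2k) \oplus \frak{so}(2(n-k))$:
\begin{verbatim}
# Count massless states at sigma = 1/(2r) for so(2n), eq. (wspecso).
n, k = 6, 2
dim_su = k**2 - 1                        # unbroken su(k)
dim_so = lambda m: m * (m - 1) // 2      # dim so(m)
dim_antisym = k * (k - 1) // 2           # antisymmetric of su(k)
massless = dim_su + 1 + dim_so(2*(n - k)) + 2 * dim_antisym
assert massless == dim_so(2*k) + dim_so(2*(n - k))
\end{verbatim}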
\subsection{Four-Derivative Terms in Four Dimensions}
We will now use non-renormalization theorems, and our understanding of the five-dimensional theory from section~3, to determine the moduli dependence of the leading higher-derivative interactions on the Coulomb branch of toroidally compactified~$(2,0)$ theories. As in previous sections, we consider rank one Coulomb branches associated with a particular Cartan element~$t$, which breaks~$\frak g \rightarrow \frak h \oplus \frak u(1)$. We focus on the coefficient function~$f_4(\varphi^I, \sigma)$ of the four-derivative term~$(\partial \psi)^4$ in the effective action,
\begin{equation}\label{f4def4d}
A^{2}f_4(\varphi^I, \sigma)(\partial \psi)^{4} \subset {\mathscr L}^{(4)}_{\text{Coulomb}}~, \qquad A = R r~,
\end{equation}
but as before, a similar discussion applies to all four-derivative terms in~${\mathscr L}^{(4)}_{\text{Coulomb}}$. We will fix the function~$f_4(\varphi^I, \sigma)$ by imposing the following constraints:
\begin{itemize}
\item It is invariant under~$\frak{so}(5)_R$ rotations of the~$\varphi^I$, i.e.~it only depends on the~$\varphi^I$ through the radial variable~$\psi$.
\item It is periodic in~$\sigma$ with period~\eqref{sigmaperdef}, i.e.~it is invariant under~$\sigma \sim \sigma + {p \over r}$.
\item It is dimensionless. Since the six-dimensional theory is scale invariant, this fixes
\begin{equation}
f_4=f_4\left(r \psi, r \sigma, \tau_2\right)~, \qquad \tau_2 = {r \over R}~.
\end{equation}
\item As in sections~2 and~3, it is harmonic, due to the non-renormalization theorems of~\cite{Paban:1998ea,Paban:1998mp,Paban:1998qy,Sethi:1999qv,Maxfield:2012aw}.
\item In appropriate regimes, it matches onto the six- and five-dimensional results discussed in sections~2 and~3. In particular, for values of~$\psi$ that are much larger than any other length scale, it must decay to zero.
\end{itemize}
The periodicity of~$\sigma$ is conveniently taken into account by expanding~$f_4$ in Fourier modes,
\begin{equation}\label{sigmaft}
f_4\left(r \psi, r \sigma, \tau_2 \right)=\sum_{n \in \mathbb{Z}}f^{(n)}_4\left(r \psi, \tau_2 \right)\exp\left(\frac{2\pi i n r \sigma}{p}\right)~.
\end{equation}
Since the non-renormalization theorem states that~$f_4$ is harmonic, each mode function satisfies
\begin{equation}
\frac{d^{2}}{d(r\psi)^{2}}f^{(n)}_4\left(r\psi,\tau_2\right)+\left(\frac{4}{r\psi}\right)\frac{d}{d(r\psi)}f^{(n)}_4\left(r\psi,\tau_2\right)=\frac{4\pi^{2}n^{2}}{p^{2}}f^{(n)}_4\left(r\psi,\tau_2\right)~. \label{modesols}
\end{equation}
For each value of~$n$, this second-order differential equation has two linearly independent solutions, only one of which decays to zero at large~$\psi$,
\begin{equation}\label{modefn}
f^{(n)}_4(r\psi,\tau_2)=b_n(\tau_2)\left(\frac{1}{(r\psi)^{3}}+\frac{2\pi |n| }{p(r\psi)^{2}}\right)\exp\left(-\frac{2\pi |n| r \psi}{p}\right)~.
\end{equation}
Together with~\eqref{sigmaft}, this completely fixes the coefficient function~$f_4$ in terms of an infinite set of coefficients~$b_n(\tau_2)$ that only depend on the four-dimensional gauge coupling~$\tau_2 = {r \over R}$\,.
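It is straightforward to check symbolically that~\eqref{modefn} indeed solves~\eqref{modesols}; the following sketch does so with the computer algebra package sympy, writing~$u = r\psi$ and treating~$p$ and~$|n|$ as positive symbols:
\begin{verbatim}
# Symbolic check that eq. (modefn) solves eq. (modesols).
import sympy as sp

u, p, n = sp.symbols('u p n', positive=True)   # u = r*psi, n = |n|
f = (1/u**3 + 2*sp.pi*n/(p*u**2)) * sp.exp(-2*sp.pi*n*u/p)
ode = (sp.diff(f, u, 2) + (4/u)*sp.diff(f, u)
       - (4*sp.pi**2*n**2/p**2)*f)
assert sp.simplify(ode) == 0
\end{verbatim}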
In order to fix~$b_n(\tau_2)$, we consider the limit~$R \rightarrow 0$ with~$\tau_2$ fixed, in which the five-dimensional~${\mathcal N}=2$ Yang-Mills theory becomes arbitrarily weakly coupled. Therefore, the function~$f_4$ can be computed exactly by integrating out W-Bosons at one loop. This is similar to the five-dimensional computation discussed around~\eqref{b5loop}, except that one of the momentum integrals is replaced by a sum over KK momenta~$\ell \over r$ along~$S^1_r$. Thus,
\begin{equation}
f_4\left(r \psi, r \sigma, \tau_2\right)=\frac{\tau^2_2}{32 \pi^2} \sum_{\alpha \in \Delta_{\frak g} \backslash \Delta_{\frak h}}\sum_{\ell \in \mathbb{Z}}\left((r \psi )^{2}+\left(\frac{\ell}{|\alpha(t)|}+r\sigma \right)^{2}\,\right)^{-2}~. \label{4doneloop}
\end{equation}
The overall normalization can be fixed by matching onto the five-dimensional result~\eqref{b5loop} in the limit~$r\rightarrow \infty$.
It is instructive to rewrite the expression for~$f_4$ in~\eqref{4doneloop} as a Fourier series~\eqref{sigmaft}. This can be done using the Poisson summation formula,
\begin{equation}
\sum_{\ell \in \mathbb{Z}}\frac{1}{\left(y^{2}+(\ell+x)^{2}\right)^{2}}= \frac{\pi}{2}\sum_{n\in \mathbb{Z}}\left(\frac{1}{|y|^{3}}+\frac{2\pi |n|}{y^{2}}\right)\exp\Big(2\pi i nx-2\pi|n| |y|\Big)~.
\end{equation}
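This identity is easy to test numerically. The sketch below truncates both sides at large~$|\ell|$ and~$|n|$ (the cutoff and the sample values of~$x$ and~$y$ are arbitrary) and checks that they agree:
\begin{verbatim}
# Numerical check of the Poisson resummation identity above.
import numpy as np

x, y, L = 0.3, 0.7, 2000
lhs = sum(1/(y**2 + (l + x)**2)**2 for l in range(-L, L + 1))
rhs = (np.pi/2) * sum((1/abs(y)**3 + 2*np.pi*abs(n)/y**2)
                      * np.exp(-2*np.pi*abs(n)*abs(y))
                      * np.cos(2*np.pi*n*x)  # real part of exp(2 pi i n x)
                      for n in range(-L, L + 1))
assert abs(lhs - rhs) < 1e-8
\end{verbatim}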
If we introduce the function
\begin{equation}
\delta_{\mathbb{Z}}(x) = \begin{cases} 1 & \mathrm{if} \ x \in \mathbb{Z}~, \\ 0 & \mathrm{if} \ x \notin \mathbb{Z}~, \end{cases}
\end{equation}
then the Fourier coefficients in~\eqref{modefn} can be succinctly written as
\begin{equation}
b_n(\tau_2)= \frac{\tau_2^2}{64 \pi}\sum_{\alpha\in \Delta_{\frak g} \backslash \Delta_{\frak h}}|\alpha(t)|\delta_{\mathbb{Z}}\left(\frac{n}{p|\alpha(t)|}\right)~.
\end{equation}
This shows that many Fourier modes that are consistent with the periodicity of~$\sigma$ are absent. For instance, we found above that choosing the gauge group to be~$G = SU(2)$ leads to~$p|\alpha(t)| = 2$. Therefore, only Fourier modes with even~$n$ contribute. This example illustrates why it is not in general possible to obtain the coefficient function~$f_4$, and in particular its five-dimensional limit, by computing the lowest Fourier coefficient~$b_0(\tau_2)$ in a genuinely four-dimensional~${\mathcal N}=4$ theory and then restoring the periodicity of~$\sigma$ by summing over all possible values of~$n$ with equal weight. This is a fortiori the case if the moduli space of the theory on a torus of finite area contains interacting~${\mathcal N}=4$ theories with different gauge groups, as was the case for the~$\frak{so}(2n)$ examples discussed in the previous subsection.
We can now expand the exact coefficient function~\eqref{4doneloop} in the four-dimensional limit of vanishing area, $A\rightarrow 0$. This enables us to determine the leading area-suppressed irrelevant operators that describe the RG flow into a genuinely four-dimensional~${\mathcal N}=4$ theory at low energies. (The case~$\frak g = \frak{su}(2)$ was recently examined in~\cite{Lin:2015ixa}.) As was discussed in section~5.1, the moduli space may in general contain several points at which interacting~${\mathcal N}=4$ theories reside. In order to describe the RG flow into one of these theories, we must appropriately tune the vevs to its vicinity as we take~$A\rightarrow 0$, while the other singular points on the moduli space recede to infinite distance. For simplicity, we will only carry out this procedure for the familiar interacting vacuum at the origin of moduli space.
We take the zero-area limit by letting~$r \rightarrow 0$ at fixed~$\tau_2$. The leading term in~\eqref{4doneloop} comes from the~$\ell = 0$ KK mode, while the first subleading correction is a sum over KK modes with~$\ell \neq 0$, which can be evaluated at~$\psi = \sigma = 0$. Therefore,
\begin{equation}\label{expansion}
A^2 f_4 \rightarrow {n_W \over 32 \pi^2} {1 \over \left(\psi^2 + \sigma^2\right)^2} + {\pi^2 A^2 \tau_2^2 \over 1440} \sum_{\alpha \in \Delta_{\frak g} \backslash \Delta_{\frak h}} \left(\alpha(t)\right)^4 + {\mathcal O}(A^4) \qquad \text{as}~~ A \rightarrow 0~.
\end{equation}
Since it follows from~\eqref{trdef} that the sum over W-Bosons can be written as~$2 h^\vee_{\frak g} \Tr_{\frak g}\left(t^4\right)$, the terms in the Coulomb-branch effective Lagrangian~\eqref{f4def4d} that follow from~\eqref{expansion} are given by
\begin{equation}\label{coulexp}
{n_W \over 32 \pi^2} {(\partial\psi)^4 \over \left(\psi^2 + \sigma^2\right)^2} + {\pi^2 h^\vee_{\frak g} A^2 \tau_2^2 \over 720} \Tr_{\frak g} \left(t^4\right) \left(\partial\psi\right)^4 + \cdots \subset {\mathscr L}^{(4)}_{\text{Coulomb}}~.
\end{equation}
The term proportional to~$n_W$ is the expected one-loop contribution due to integrating out W-Bosons in the four-dimensional~${\mathcal N}=4$ Yang-Mills theory at the origin of moduli space, as discussed around~\eqref{b5loop} and in Figure~\ref{loopfig}. The second term is non-singular and can be extrapolated to the origin, where it arises from a~${1 \over 2}$-BPS irrelevant operator analogous to the five-dimensional~$F^4$ operators discussed in~\eqref{F4}. Unlike in five dimensions, where these operators were shown to be absent, the compactification of~$(2,0)$ SCFTs on finite-area tori generates these interactions with a definite, non-zero coefficient, which can be extracted from~\eqref{coulexp}. When written in terms of~$\Tr_{\frak g}$, they are single-trace terms, which amounts to a definite linear combination of single- and double-trace terms in the fundamental representation. Note that both operators in~\eqref{coulexp} are invariant under the accidental~$\frak{so}(6)_R$ symmetry that emerges in the zero-area limit. However, subleading~${\mathcal O}(A^4)$ corrections break this symmetry to~$\frak{so}(5)_R$. All of them are single-trace operators when written in terms of~$\Tr_{\frak g}$, and it is straightforward to extract their coefficients by expanding~\eqref{4doneloop} to higher order.
\section*{Acknowledgements}\noindent We are grateful to K.~Intriligator, Z.~Komargodski, and~N.~Seiberg for helpful exchanges and comments on the manuscript. CC and TD would like to thank K.~Intriligator for many valuable conversations, and for collaboration on related topics. TD would like to thank members of the Rutgers high-energy theory group for hospitality and discussions. CC is supported by a Junior Fellowship at the Harvard Society of Fellows. TD is supported by the Fundamental Laws Initiative of the Center for the Fundamental Laws of Nature at Harvard University, as well as DOE grant DE-SC0007870 and NSF grants PHY-0847457, PHY-1067976, and PHY-1205550. XY is supported by a Sloan Fellowship and a Simons Investigator Award from the Simons Foundation.
\section{Introduction}\label{intro}
The interaction between a quantum system and a quantum measurement apparatus, with only unitary evolution, would entangle the two initially uncorrelated systems so that information about the system is recorded in a set of apparatus states \cite{N32}. Because an entangled state exhibits correlations regardless of the system basis in which it is written, this seems to leave an ambiguity about which system observable the apparatus has actually measured. To get around this problem, Zurek noted that a macroscopic apparatus will be continuously interacting with its environment, and introduced the idea of a `pointer basis' for the quantum apparatus \cite{Zur81}. For an ideally engineered apparatus, this can be defined as the set of pure apparatus states which do not evolve and never enter into a superposition \cite{Zur81, ZHP93}. More realistically, the environmental interaction will cause \emph{decoherence}, which turns a quantum superposition of pointer states into a classical mixture, on a time scale faster than that on which any pointer state evolves. In such a context, the original notion has been modified to define the pointer states as the least unstable pure states \cite{ZHP93}, i.e. the pure states that have the slowest rate of entropy increase for a given coupling to the environment.
After an apparatus (or, more generally, any quantum system) has undergone decoherence its state will be, in general, mixed. It is represented by a state matrix $\rho$. Mathematically, there are infinitely many ways to write a mixed state as a convex combination of pure states $\{\pi_k\}_k$ (a basis) with corresponding weights $\{\wp_k\}_k$. We shall refer to the set of ordered pairs $\{ (\wp_k,\pi_k) \}_k$ as a pure-state ensemble. Each ensemble suggests an \emph{ignorance interpretation} for the mixed state: the system is in one of the pure states $\pi_k$, but with incomplete information, one cannot tell which one it is. However, Wiseman and Vaccaro have shown that not all such ensembles are physically equivalent \cite{WV01} --- only some ensembles are `physically realizable' (PR). A PR ensemble $\{ (\wp_k,\pi_k) \}_k$ is one such that an experimenter can find out which pure state out of $\{\pi_k\}_k$ the system is in at all times (in the long-time limit), by monitoring the environment to which the system is coupled. Such ensembles exist for all environmental couplings that can be described by a Markovian master equation~\cite{WM10}, and different monitorings result in different `unravellings'~\cite{Car93} of the master equation into stochastic pure-state dynamics. PR ensembles thus make the ignorance interpretation meaningful at all times in the evolution of a single system, as a sufficiently skilled observer could know which state the system is in at any time, without affecting the system evolution.
Zurek's `pointer basis' concept is supposed to explain why we can regard the apparatus as `really' being in one of the pointer states, like a classical object. In other words, it appeals to an ignorance interpretation of a particular decomposition of a mixed state $\rho$ because of the interaction with the environment. But as explained above, the ignorance interpretation does not work for all ensembles; it works only for PR ensembles. It is for this reason that it was proposed in~\cite{ABJW05} that the set of candidate pointer bases should be restricted to the set of PR ensembles. Furthermore, it was shown in~\cite{ABJW05} that different PR ensembles, induced by different unravellings, differ according to the extent in which they possess certain features of classicality. One measure of classicality, which is closely related to that used by Zurek and Paz~\cite{ZHP93}, is the robustness of an unravelling-induced basis against environmental noise. This is the ability of an unravelling to generate a set of pure states $\{ \pi_k \}_k$ with the longest mixing time~\cite{ABJW05}. This is the time it takes for the mixedness (or entropy or impurity) of the initial pure state to increase to some level, on average, when the system evolves unconditionally (i.e.~according to the master equation). Thus it is this set of states that should be regarded as the pointer basis for the system.
In this paper we are concerned with applying these ideas to quantum feedback control~\cite{WM10}. This field has gained tremendous interest recently and has already been applied successfully in many experiments~\cite{SCZ11,VMS12,YHN12}. As in classical control, one needs to gain information about the system in order to design a suitable control protocol for driving the system towards a desired state. However, measurements on a quantum system will in general perturb its state while information is being extracted. This back-action of quantum measurements is a key element that sets quantum feedback protocols apart from classical ones, and means that one should take additional care in the design of the in-loop measurement.
A class of open systems of special interest comprises those with linear Heisenberg equations of motion in phase space driven by Gaussian noise. We will refer to these as linear Gaussian (LG) systems. Such systems have received a lot of attention because of their mathematical simplicity and because a great deal of classical linear systems theory can be re-adapted to describe quantum systems~\cite{DHJ+00,WD05}. LG systems arise naturally in quantum optics, describing modes of the electromagnetic field, nanomechanical systems, and weakly excited ensembles of atoms~\cite{WM10}.
In this paper, we consider using measurement (an unravelling) and linear feedback control to stabilize the state of an LG system to one of the states in the unravelling-induced basis. In particular we show that when the control is strong compared to the decoherence rate (the reciprocal of the mixing time) of the unravelling-induced basis, the system state can be stabilized with a fidelity close to one. We will show also that choosing the unravelling which induces the pointer basis (as defined above) maximizes the fidelity between the actual controlled state and the target state, for strong control. Furthermore, we find that even if the feedback control strength is only comparable to the decoherence rate, the optimal unravelling for this purpose still induces a basis very close to the pointer basis. However, if the feedback control is weak, this is not the case.
The rest of this paper is organized as follows. In \sref{PRPB} we formalize the idea of PR ensembles in the context of Markovian evolution by presenting the necessary and sufficient conditions for an ensemble to be PR which were originally derived in~\cite{WV01}. Here we will also define the mixing time which in turn is used to define the pointer basis. In \sref{LGQS} we review LG systems for both unconditional and conditional dynamics. An expression for the mixing time of LG systems will be derived. In \sref{CLGsys}, we add a control input to the LG system and show that it is effective for producing a pointer state. We will take the infidelity of the controlled state as the cost function for our control problem, and show that this can be approximated by a quadratic cost, thus putting our control problem into the class of linear-quadratic-Gaussian (LQG) control problems. Finally in \sref{ExampleQBM} we illustrate our theory for the example of a particle in one dimension undergoing quantum Brownian motion.
\section{Physically realizable ensembles and the pointer basis}\label{PRPB}
\subsection{Steady state dynamics and conditions for physically realizable ensembles}
In this paper we restrict our attention to master equations that describe valid quantum Markovian evolution, so that the time derivative of the system state, denoted by $\dot{\rho}$, has the Lindblad form. This means that there is a Hermitian operator $\hat{H}$ and a vector operator $\hat{\bi c}$ such that
\begin{eqnarray}
\label{Lindblad}
\dot{\rho} \equiv {\mathcal L} \rho
= -i\big[\hat{H},\rho\big] + \hat{\bi c}^\top \rho \hat{\bi c}^\ddag - {1\over 2} \, \hat{\bi c}^\dagger \hat{\bi c} \;\! \rho - \frac{1}{2} \, \rho \;\! \hat{\bi c}^\dagger \hat{\bi c} \;.
\end{eqnarray}
Note that $\hat{H}$ invariably turns out to be a Hamiltonian or can be interpreted as one. We have defined $\hat{\bi c}^\ddag$ to be the column vector operator
\begin{eqnarray}
\label{TransposeDagger}
\hat{\bi c}^\ddag \equiv \big( \hat{\bi c}^\dag \big)^\top \;,
\end{eqnarray}
where $\hat{\bi c}^\dag$ is defined by transposing $\hat{\bi c}$ and then taking the Hermitian conjugate of each element~\cite{CW11a}:
\begin{eqnarray}
\hat{\bi c}^\dag \equiv \big( \hat{c}^\dag_1, \hat{c}^\dag_2, \ldots, \hat{c}^\dag_l \big) \;.
\end{eqnarray}
We have assumed $\hat{\bi c}$ to be $l \times 1$. This is equivalent to saying that the system has $l$ dissipative channels. For $l=1$ one usually refers to $\hat{c}$ as a Lindblad operator. Similarly we will call $\hat{\bi c}$ a Lindblad vector operator. We will follow the notation in appendix~A of~\cite{CW11a}, and also use the terms environment and bath interchangeably.
Lindblad evolution is, in general, entropy-increasing and will thus lead to a mixed state for the system~\cite{BP02}. Assuming, then, the existence of a steady state $\rho_{\rm ss}$, defined by
\begin{eqnarray}
\label{sss}
{\cal L}\rho_{\rm ss}= 0 \;,
\end{eqnarray}
we may write
\begin{eqnarray}
\label{SteadyStateEns}
\rho_{\rm ss} = \sum_{k} \wp_k \, \pi_{k} \;,
\end{eqnarray}
for some ensemble $\{(\wp_k,\pi_k)\}_k$ where each $\pi_k$ is a projector (i.e.~a pure state) and $\wp_k$ is the corresponding probability of finding the system in state $\pi_k$.
As explained earlier in \sref{intro}, physical realizability for an ensemble means justifying the ignorance interpretation of it for all times after the system has reached the steady state. That is, an ensemble is PR if and only if there exists an unravelling ${\sf U}$ (an environmental monitoring scheme which an experimenter can perform) that reveals the system to be in state $\pi^{\sf U}_k$ with probability $\wp_k^{\sf U}$; we denote the PR ensemble by $\{(\wp_k^{\sf U}, \pi_{k}^{\sf U})\}$ since it depends on the continuous measurement represented by ${\sf U}$. Note that any ensemble used to represent the system state once it has reached steady state will remain a valid representation thereafter. Thus if the PR ensemble $\{(\wp_k^{\sf U},\pi^{\sf U}_k)\}$ is to represent $\rho_{\rm ss}$ with time-independent probabilities $\{\wp_k^{\sf U}\}$, then each $\wp_k^{\sf U}$ must reflect the proportion of time that the system spends in the state $\pi^{\sf U}_k$. This gives a graphical depiction of the system dynamics in which the system randomly jumps between the states $\pi^{\sf U}_k$ over some observation interval $\Delta t$: the probability of finding the system in state $\pi^{\sf U}_k$ is given by the fraction of time it spends in $\pi^{\sf U}_k$ in the limit $\Delta t \to \infty$. This is illustrated in~\fref{PRE}. Note that this makes the system state a stationary ergodic process. Surprisingly, we can determine whether an ensemble is PR purely algebraically, without ever determining the unravelling $\sf U$ that induces it~\cite{WV01}. Such a method will be employed in \sref{LGQS}.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{figure1.pdf}
\caption{A PR ensemble $\{(\wp_k^{\sf U},\pi^{\sf U}_k)\}$ makes the system state a stationary ergodic process. That is to say, the ensemble average over $k$ in \eref{SteadyStateEns} can be obtained by counting, for each value of $k$, the fraction of time the system spends in the $k^{\rm th}$ state for a \emph{single} run of the monitoring $\sf U$ over a sufficiently long period $\Delta t$. The probability of finding the system in a particular state, say the state with $k=\nu$, is then $\wp_\nu = \sum_{m=1}^{\lambda} t^{(\nu)}_m / \Delta t$, where $t^{(\nu)}_m$ is the duration of the $m^{\rm th}$ sojourn of the system in state $\pi^{\sf U}_\nu$ before it jumps to a different state, and $\lambda$ is the number of such sojourns during $\Delta t$, as illustrated.}
\label{PRE}
\end{figure}
\subsection{Mixing time and the pointer basis}
The pointer states as defined in \cite{ABJW05} are states which constitute a PR ensemble and, roughly, decohere the slowest. Specifically, Atkins \etal proposed the mixing time $\tau_{\rm mix}$ as the quantity which attains its maximum for the pointer states. This is defined as follows. We assume that an experimenter has been monitoring the environment with some unit-efficiency unravelling ${\sf U}$ for a long (effectively infinite) time, so that the conditioned system state is some pure state $\pi^{\sf U}_k$. We label this time as the initial time and designate it by $t=0$ (see~\fref{Tmix}). Note that the state so obtained belongs to some PR ensemble. The mixing time is defined as the time required on average for the purity to drop from its initial value of 1 to $1-\epsilon$ if the system were now allowed to evolve unconditionally under the master equation. Thus $\tau_{\rm mix}$ is given by the smallest solution to the equation
\begin{equation}
\label{MixingTimeDefn}
{\rm E}\Big\{ {\rm Tr}\Big[ \big\{ \exp( {\cal L} \, \tau_{\rm mix}) \, \pi^{\sf U}_{k} \, \big\}^2 \Big]\Big\}
= 1 - \epsilon \;,
\end{equation}
where ${\rm E}\{X\}$ denotes the ensemble average of $X$. Note that \eref{MixingTimeDefn} is a slightly more general definition for the mixing time than the one used in \cite{ABJW05} as $\epsilon$ in \eref{MixingTimeDefn} can be any positive number between 0 and 1. In the next section we will consider the limit of small $\epsilon$.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{figure2.pdf}
\caption{ Illustration of the mixing time for a particular $\pi^{\sf U}_k$ (hence the label $\tau_{\rm mix}^{(k)}$ on the time axis). The purity of the system state is denoted by $P(t)$ and we have marked $1-\epsilon$ at a finite distance away from 1 for clarity.}
\label{Tmix}
\end{figure}
\section{Linear Gaussian quantum systems}
\label{LGQS}
\subsection{Unconditional dynamics}
An LG system is defined by linear quantum stochastic differential equations driven by Gaussian quantum noise in the Heisenberg picture for (i) the system configuration $\hat{\bi x}$ in phase space, and (ii) the measurement output $\hat{\bi y}$ (also referred to as a current):
\begin{eqnarray}
\label{LinSys1}
d \hat{\bi x} & = & A \, \hat{\bi x} \, dt + E \, d \hat{\bi v}_{\rm p} \;, \\
\label{LinSys2}
\hat{\bi y}\,\!dt & = & {\sf C} \, \hat{\bi x} \, dt + d \hat{\bi v}_{\rm m} \;.
\end{eqnarray}
Here the phase-space configuration is defined as the $2n$-dimensional vector operator
\begin{equation}
\label{SysConfig}
\hat{\bi x} \equiv (\hat{q}_1,\hat{p}_1, \hat{q}_2,\hat{p}_2, \ldots,\hat{q}_n,\hat{p}_n)^\top \;.
\end{equation}
Here $\hat{\bi q} = (\hat{q}_1,\hat{q}_2, \ldots, \hat{q}_n)^\top$ and $\hat{\bi p} = (\hat{p}_1,\hat{p}_2, \ldots, \hat{p}_n)^\top$ represent the canonical position and momentum of the system, defined by
\begin{equation}
\lfloor \hat{\bi q}, \hat{\bi p} \rceil \equiv \hat{\bi q} \hat{\bi p}^\top - \big( \hat{\bi p} \hat{\bi q}^\top \big)^\top = i \hat{\rm I}_n \;,
\end{equation}
where $\hbar \equiv 1$ and $\hat{\rm I}_n$ is an $n \times n$ diagonal matrix whose diagonal entries are identity operators. All vector operators in \eref{LinSys1} and \eref{LinSys2} are time dependent, but we have suppressed the time argument, as we will do except when we need to consider quantities at two or more different times. We take \eref{LinSys1} and \eref{LinSys2} to be It\^{o} stochastic differential equations with constant coefficients \cite{Jac10a}, i.e.~$A$, $E$, and ${\sf C}$ are real matrices independent of $\hat{\bi x}$ and time $t$.
The non-commutative nature of $\hat{\bi q}$ and $\hat{\bi p}$ gives rise to the Schr\"{o}dinger-Heisenberg uncertainty relation \cite{Hol11}
\begin{equation}
\label{HeiUncert}
V + \frac{i}{2} \, Z \ge 0 \;,
\end{equation}
where
\begin{equation}
\label{DefnOfZ}
Z \equiv \bigoplus^{n}_{1} \bigg(\begin{array}{cc}
0 & 1 \\
-1 & 0
\end{array} \bigg) \;,
\end{equation}
and $V$ is the covariance matrix of the system configuration, defined by
\begin{equation}
\label{CovarianceDefn}
V = {\rm Re}\big[ \;\! \big\langle (\hat{\bi x} - \langle \hat{\bi x} \rangle) (\hat{\bi x} - \langle \hat{\bi x} \rangle)^\top \big\rangle \;\! \big] \;.
\end{equation}
Here the real part of any complex matrix $B$ is defined by ${\rm Re}[B]= (B+B^*)/2$.
The process noise term $E \, d\hat{\bi v}_{\rm p}$ is the unavoidable back-action from coupling the system to the environment. Here $d\hat{\bi v}_{\rm p}$ is a vector operator of Hermitian quantum Wiener increments with a mean and covariance satisfying, for all time,
\begin{eqnarray}
\eqalign
\langle E \, d \hat{\bi v}_{\rm p} \rangle = {\bf 0} \;, \\
\label{ItoProcess}
{\rm Re} \big[ E \, d\hat{\bi v}_{\rm p} \, d\hat{\bi v}_{\rm p}^\top E^\top \big] \equiv D \, dt \;,
\end{eqnarray}
where for any matrix operator $\hat{\rm A}$ we have defined ${\rm Re} [ \hat{\rm A} ]=(\hat{\rm A}+\hat{\rm A}^\ddagger)/2$ and $\hat{\rm A}^\ddagger$ is defined similarly to \eref{TransposeDagger}. The quantum average is taken with respect to the initial state $\rho(0)$, i.e.~$\langle \hat{\rm A}(t) \rangle = \Tr [\hat{\rm A}(t) \rho(0)]$, since we are in the Heisenberg picture. Note that \eref{ItoProcess} involves the process noise at only one time; second-order moments of $E \;\! d \hat{\bi v}_{\rm p}$ at different times vanish, as do all other higher-order moments. Similarly the measurement noise $d\hat{\bi v}_{\rm m}$ is a vector operator of Hermitian quantum Wiener increments satisfying
\begin{eqnarray}
\eqalign
\langle d\hat{\bi v}_{\rm m} \rangle = {\bf 0} \;, \\
\label{ItoMeasurement}
d\hat{\bi v}_{\rm m} \, d\hat{\bi v}_{\rm m}^\top = \hat{\rm I}_R \, dt \;,
\end{eqnarray}
where we have assumed $d\hat{\bi v}_{\rm m}$ (and also $\hat{\bi y}$) to have $R$ components. As with $E\;\! d\hat{\bi v}_{\rm p}$, \eref{ItoMeasurement} is the only non-vanishing moment for $d\hat{\bi v}_{\rm m}$. The noise $d\hat{\bi v}_{\rm m}$ describes the intrinsic uncertainty in the measurement represented by $\hat{\bi y}$ and in general will be correlated with $E\;\! d\hat{\bi v}_{\rm p}$. We define their correlation by a constant matrix $\Gamma^\top$, i.e.
\begin{equation}
\label{Gamma}
{\rm Re}\big[ E \, d\hat{\bi v}_{\rm p} \, d\hat{\bi v}_{\rm m}^\top \big] = \Gamma^\top dt \;.
\end{equation}
For the above to describe valid quantum evolution, various inequalities relating $A$, $E$, ${\sf C}$ and $Z$ must be satisfied~\cite{WM10}.
Just as a classical Langevin equation corresponds to a Fokker-Planck equation, the quantum Langevin equation \eref{LinSys1} also corresponds to a Fokker-Planck equation for the Wigner function \cite{Sch01} of the system state. Such an evolution equation for the Wigner function can also be derived from the master equation \eref{Lindblad} \cite{Car02}:
\begin{equation}
\label{OUE_Wigner}
\dot{W}( \breve{\bi x} ) = \{- \nabla^{\top}A \breve{\bi x} +\frac{1}{2}\nabla^{\top}D\nabla \} W( \breve{\bi x} ) \;.
\end{equation}
This equation has a Gaussian function as its solution, with mean and covariance matrix obeying
%
\begin{eqnarray}
\label{x dynamics}
d{\langle \hat{\bi x} \rangle}/dt = A\langle \hat{\bi x} \rangle \\
\label{V dynamics}
d{V}/dt = A \;\! V + V \;\! A^\top + D \;.
\end{eqnarray}
We restrict to the case that $A$ is Hurwitz; that is, where the real part of each eigenvalue is negative. Then the steady state Wigner function will be a zero-mean Gaussian \cite{Ris89}
\begin{equation}
\label{Wss}
W_{\rm ss}( \breve{\bi x } ) = g( \breve{\bi x }; {\bf 0},V_{\rm ss}) \;.
\end{equation}
The notation $\breve{\bi x}$ denotes the realization of the random vector ${\bi x}$ and $g( \breve{\bi x }; {\bi \mu},V)$ denotes a Gaussian with mean ${\bi \mu}$ and covariance $V$ for ${\bi x}$. In this case $V_{\rm ss}$ is the steady-state solution to \eref{V dynamics}; that is, the unique solution of
\begin{equation}
\label{V steady}
A \;\! V + V \;\! A^\top + D = 0 \;.
\end{equation}
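For readers wishing to reproduce such calculations, the Lyapunov equation \eref{V steady} is readily solved with standard numerical libraries. The following minimal Python sketch uses SciPy's \texttt{solve\_continuous\_lyapunov}; the matrices $A$ and $D$ shown are illustrative placeholders (any Hurwitz $A$ will do), not taken from a particular system in this paper.
\begin{verbatim}
# Numerical sketch (Python/SciPy); A and D below are illustrative.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-0.5, 1.0],
              [-1.0, -0.5]])   # drift matrix, must be Hurwitz
D = np.diag([0.25, 2.0])       # diffusion matrix

# solve_continuous_lyapunov(A, Q) solves A X + X A^T = Q,
# so Q = -D recovers A V + V A^T + D = 0.
V_ss = solve_continuous_lyapunov(A, -D)
assert np.allclose(A @ V_ss + V_ss @ A.T + D, 0.0)
\end{verbatim}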
We saw above that an LG system is defined by the It\^{o} equation \eref{LinSys1} for $\hat{\bi x}$, the statistics of which are characterized by the matrices $A$ and $D$. However, our theory of PR ensembles in \sref{PRPB} was in the Schr\"{o}dinger picture, for which the system evolution is given by the master equation \eref{Lindblad}. To apply the idea of PR ensembles to an LG system we thus need to relate $A$ and $D$ to the dynamics specified in the Schr\"{o}dinger picture by ${\cal L}$, which is in turn specified by $\hat{H}$ and $\hat{\bi c}$. One can in fact show that \eref{LinSys1} results from choosing an $\hat{H}$ and $\hat{\bi c}$ that are (respectively) quadratic and linear in $\hat{\bi x}$ \cite{WM10}, i.e.
\begin{equation}
\label{LinSysHamiltonian}
\hat{H} = \frac{1}{2} \, \hat{\bi x}^\top G \;\! \hat{\bi x} \;,
\end{equation}
for any $2n \times 2n$ real and symmetric matrix $G$, and
\begin{equation}
\label{LinSysLindbladOp}
\hat{\bi c} = \tilde{C} \, \hat{\bi x} \;,
\end{equation}
where $\tilde{C}$ is $l \times 2n$ and complex. It can then be shown that \eref{LinSysHamiltonian} and \eref{LinSysLindbladOp} lead to
\begin{eqnarray}
A & = & Z \big( G + \bar{C}^\top S \bar{C} \big) \label{feeda}; \\
D & = & Z \bar{C}^\top \bar{C} Z^\top \label{feedd},
\end{eqnarray}
where we have defined
\begin{eqnarray}
\label{SandCbar}
S = \left( \begin{array}{cc}
0 & {\rm I}_l \\
-{\rm I}_l & 0
\end{array} \right)\;, \quad \bar{C} = \left( \begin{array}{c}
{\rm Re}[\tilde{C}] \\ {\rm Im}[\tilde{C}]
\end{array} \right) \;.
\end{eqnarray}
The matrix $S$ has dimensions $2l \times 2l$, formed from $l \times l$ blocks, while $\bar{C}$ has dimensions $2l \times 2n$. These definitions will turn out to be useful later, especially in \sref{ExampleQBM}.
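As a computational aside, the maps \eref{feeda} and \eref{feedd} from $(G,\tilde{C})$ to $(A,D)$ are straightforward to implement. The following Python sketch is a direct transcription of \eref{feeda}--\eref{SandCbar}; the function name is ours.
\begin{verbatim}
import numpy as np

def drift_diffusion(G, Ctilde):
    # Transcription of A = Z (G + Cbar^T S Cbar) and
    # D = Z Cbar^T Cbar Z^T; G is 2n x 2n, Ctilde is l x 2n.
    n = G.shape[0] // 2
    l = Ctilde.shape[0]
    Z = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    S = np.block([[np.zeros((l, l)), np.eye(l)],
                  [-np.eye(l), np.zeros((l, l))]])
    Cbar = np.vstack([Ctilde.real, Ctilde.imag])   # 2l x 2n
    A = Z @ (G + Cbar.T @ S @ Cbar)
    D = Z @ Cbar.T @ Cbar @ Z.T
    return A, D
\end{verbatim}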
\subsection{Conditional dynamics in the long-time limit}
\Eref{LinSys1} describes only the dynamics of the system due to its interaction with the environment, while \eref{LinSys2} describes the dynamics of some bath observable $\hat{\bi y}$ being measured. Our goal in the end is to drive the system to a particular quantum state, and this is achieved most effectively if one uses the information obtained from measuring $\hat{\bi y}$. In a continuous measurement of $\hat{\bi y}$ the measurement device will output a continuous stream of numbers over a measurement time $t$. This is typically called a measurement record \cite{JS06} and is defined by
\begin{equation}
\label{MmtRecord}
{\bi y}_{[0,t)} \equiv \{ {\bi y}(\tau) \, | \, 0 \le \tau < t \} \;,
\end{equation}
where $\bi{y}(\tau)$ is the result of a measurement of $\hat{\bi y}$ at time $\tau$. In this paper we adopt feedback control in which the controlling signal depends on ${\bi y}_{[0,t)}$ in~\eref{MmtRecord}. Here we will first explain the system evolution conditioned on knowledge of ${\bi y}_{[0,t)}$ and then from this derive the mixing time using definition \eref{MixingTimeDefn}. The inclusion of a control input in the system dynamics will be covered in~\sref{CLGsys}.
The measured current is first fed into an estimator that uses this information to estimate the system configuration continuously in time. This is often referred to as filtering and the continuous-time estimator is called a filter (see~\fref{Filter}). The performance of the filter may be measured by the mean-square error and it is well known from estimation theory that the optimal estimate is the conditional mean of $\hat{\bi x}$ \cite{KS99}, given by
\begin{equation}
\langle \hat{\bi x} \rangle_{\rm c} = \Tr \big[ \hat{\bi x} \;\! \rho_{\rm c}(t) \big] \;,
\end{equation}
where $\rho_{\rm c}(t)$ is the system state conditioned on ${\bi y}_{[0,t)}$. Such states obey stochastic differential equations that are referred to as quantum trajectories \cite{Car93,Car08} in quantum optics. For control purposes only the evolution of $\langle \hat{\bi x} \rangle_{\rm c}$ matters, and its evolution equation in this case is known as the Kalman-Bucy filter \cite{KB61}. We are ultimately interested in stabilizing the system to some quantum state which, without loss of generality, we can take to have $\langle \hat{\bi x} \rangle_{\rm c} = {\bf 0}$. That is, once the system has reached $\langle \hat{\bi x} \rangle_{\rm c} = {\bf 0}$ we would like to keep it there, ideally for as long as the feedback loop is running. Thus it is the behaviour of the system in the long-time limit that is of interest to us, and it can be shown \cite{WM10} that the Kalman-Bucy filter in this limit is given by
\begin{equation}
\label{KB1}
d\langle \hat{\bi x} \rangle_{\rm c} = A \, \langle \hat{\bi x} \rangle_{\rm c} \, dt + {\rm F}^\top \, d{\bi w} \;.
\end{equation}
Here $d{\bi w}$ is a vector of Wiener increments known as the innovation \cite{HJS08}, while ${\rm F} \equiv {\sf C} \,\Omega_{\sf U} + \Gamma$, where $\Omega_{\sf U}$ is the solution of the matrix Riccati equation
\begin{equation}
\label{KB2}
A \, \Omega_{\sf U} + \Omega_{\sf U} \, A^\top + D = {\rm F}^\top {\rm F} \;.
\end{equation}
The matrix $\Omega_{\sf U}$ is the steady-state value of $V_{\rm c}$ [given by \eref{CovarianceDefn} with the averages taken with respect to $\rho_{\rm c}$] and depends on the measurement as indicated by its subscript. It is well known in control theory that when $A$, ${\sf C}$, $E$, and $\Gamma$ [recall \eref{LinSys1},~\eref{LinSys2}, and \eref{Gamma}] have certain properties, $\Omega_{\sf U}$ is a unique solution to \eref{KB2} and is known as a stabilizing solution~\cite{WM10}. We will assume this to be the case in the following theory. As in unconditioned evolution, the conditioned state $\rho_{\rm c}$ also has a Gaussian Wigner function. This is given by
\begin{equation}
\label{Wc}
W^{\Omega_{\sf U}}_{\bar{\bi x}}(\breve{\bi x}) = g(\breve{\bi x};\bar{\bi x},\Omega_{\sf U}) \;,
\end{equation}
where we have defined the short-hand $\bar{\bi x}=\langle \hat{\bi x} \rangle_{\rm c}$. The uniqueness of $\Omega_{\sf U}$ means that the conditional states obtained in the long-time limit will all have the same covariance but with different means evolving according to \eref{KB1}. That is, the index $k$ which labels different members of an ensemble representing $\rho_{\rm ss}$ in \eref{SteadyStateEns} is now the vector $\bar{\bi x}$, which changes (continuously) as the system makes `transitions' between different members of the ensemble. Different ensembles are labelled by different values of $\Omega_{\sf U}$. Such an ensemble is referred to as a uniform Gaussian ensemble.
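Numerically, the stabilizing solution $\Omega_{\sf U}$ of \eref{KB2}, which fixes the covariance of every member of such an ensemble, can be obtained by casting \eref{KB2} as a standard continuous-time algebraic Riccati equation. The Python sketch below relies on SciPy's \texttt{solve\_continuous\_are}; the argument mapping is our own, obtained by expanding ${\rm F}^\top {\rm F}$ with ${\rm F} = {\sf C}\,\Omega_{\sf U} + \Gamma$, and it presumes the stabilizability conditions mentioned above hold.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_are

def conditional_covariance(A, D, C, Gamma):
    # Solve A X + X A^T + D = (X C^T + Gamma^T)(C X + Gamma),
    # i.e. (KB2) with F = C X + Gamma.  SciPy's CARE reads
    # a^T X + X a - (X b + s) r^{-1} (b^T X + s^T) + q = 0,
    # so take a = A^T, b = C^T, q = D, r = I, s = Gamma^T.
    R = C.shape[0]
    return solve_continuous_are(A.T, C.T, D, np.eye(R), s=Gamma.T)
\end{verbatim}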
From \eref{Wss} and \eref{Wc} the ensemble representing the steady state $\rho_{\rm ss}$ of an LG system can be described in terms of Wigner functions as
\begin{equation}
\label{UniformGaussian}
W_{\rm ss}(\breve{\bi x}) = \int d\bar{\bi x} \; \wp(\bar{\bi x}) \; W^{\Omega_{\sf U}}_{\bar{\bi x}}(\breve{\bi x})
\end{equation}
where the distribution of conditional means is another Gaussian, given by
\begin{equation}
\label{P(x)}
\wp(\bar{\bi x}) = g(\bar{\bi x}; {\bf 0}, V_{\rm ss}-\Omega_{\sf U}) \;.
\end{equation}
This can be derived by using \eref{UniformGaussian} to calculate the characteristic function of $\wp(\bar{\bi x})$.
Since ${\rm F}^\top {\rm F}$ is positive semidefinite by definition, \eref{KB2} implies the linear-matrix inequality for $\Omega_{\sf U}$:
\begin{equation}
\label{PRconstraint}
A \;\! \Omega_{\sf U} + \Omega_{\sf U} \;\! A^\top + D \ge 0 \;.
\end{equation}
This constraint together with the Schr\"{o}dinger-Heisenberg relation for the conditional state [i.e.~\eref{HeiUncert} with $V$ replaced by $\Omega_{\sf U}$]
\begin{equation}
\label{QuantumConstraint}
\Omega_{\sf U} + \frac{i}{2} \, Z \ge {} 0 \;,
\end{equation}
are necessary and sufficient conditions for the uniform Gaussian ensemble \eref{UniformGaussian}~\footnote{Such an ensemble can be considered as a set of \emph{generalized coherent states}~(GCS) for the Heisenberg-Weyl group. See~\cite{KK08}.} to be PR \cite{WV01,WM10}. This is the algebraic test for whether an ensemble is PR mentioned in~\sref{PRPB}.
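This algebraic test is easily automated: one checks the two linear matrix inequalities by inspecting eigenvalues. A minimal Python sketch (the function name and tolerance are ours):
\begin{verbatim}
import numpy as np

def is_physically_realizable(Omega, A, D, tol=1e-9):
    # Check (PRconstraint) and (QuantumConstraint) for n modes.
    n = Omega.shape[0] // 2
    Z = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    lmi1 = A @ Omega + Omega @ A.T + D      # must be >= 0
    lmi2 = Omega + 0.5j * Z                 # must be >= 0 (Hermitian)
    return (np.linalg.eigvalsh(lmi1).min() >= -tol and
            np.linalg.eigvalsh(lmi2).min() >= -tol)
\end{verbatim}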
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{figure3.pdf}
\caption{ A filter is a continuous-time estimator which accepts ${\bi y}_{[0,t)}$ as input and produces an estimate of the system configuration as its output. If the mean-square error is used as a performance measure for the filter estimate then the conditional average of $\hat{\bi x}$ is optimal and the filter is characterized by \eref{KB1} and \eref{KB2} in the long-time limit.}
\label{Filter}
\end{figure}
\subsection{Mixing time and the pointer basis}
\label{UforPointerBasis}
As mentioned above, conditioned evolution leads to a Gaussian state with mean $\langle \hat{\bi x} \rangle_{\rm c}$ and covariance matrix $\Omega_{\sf U}$ satisfying \eref{KB1} and \eref{KB2} in the long-time limit. The purity of any Gaussian state with a $2n$-component configuration $\hat{\bi x}$ and covariance $V$ at time $t$ is given by \cite{Oli12}
\begin{equation}
\label{purity formula}
P(t) = \frac{1}{\sqrt{ {\rm det}[2V(t)]}} \;,
\end{equation}
where ${\rm det}[\,\cdot\,]$ denotes the matrix determinant. The mixing time [recall \eref{MixingTimeDefn}] is thus defined by
\begin{equation}
\label{DetVtmix1}
{\rm det}\big[ 2 V(\tau_{\rm mix}) \big] = \frac{1}{(1-\epsilon)^2} \;,
\end{equation}
where $V(\tau_{\rm mix})$ is the covariance matrix of the state evolved under unconditional evolution from the initial state $\pi^{\sf U}_k$, which has covariance $V(0) = \Omega_{\sf U}\,$. We have noted in \eref{DetVtmix1} that the ensemble average in \eref{MixingTimeDefn} plays no role since (i) the purity depends only on the covariance; (ii) the different initial states obtained at $t=0$ all have the same covariance $\Omega_{\sf U}$; and (iii) the evolution of the covariance is independent of the configuration $\langle \hat{\bi x} \rangle_{\rm c}$ at all times (not just in steady-state as per \eref{KB2}).
An expression for $\tau_{\rm mix}$ can be obtained in the limit $\epsilon \to 0$ by noting that in this limit $\tau_{\rm mix}$ will be small, so we may Taylor expand $V(t)$ about $t=0$ to first order:
\begin{eqnarray}
\eqalign
V(\tau_{\rm mix}) & = & {} V(0) + \left.\frac{dV}{dt} \right|_{t=0} \tau_{\rm mix} \\
\label{Vtmix}
& = & {} \Omega_{\sf U} + ( A \, \Omega_{\sf U} + \Omega_{\sf U} \, A^\top + D ) \, \tau_{\rm mix} \;.
\end{eqnarray}
Note that we have used~\eref{V dynamics} in \eref{Vtmix}. Multiplying \eref{Vtmix} by $\Omega_{\sf U}^{-1}$ and taking the determinant gives
\begin{equation}
\label{DetV}
{\rm det}\big[2 V(\tau_{\rm mix})\big] = {\rm det}\big[ \, {\rm I}_{2n} + (A \, \Omega_{\sf U} + \Omega_{\sf U} \, A^\top + D) \, \Omega_{\sf U}^{-1} \;\! \tau_{\rm mix} \big] \;,
\end{equation}
where we have noted that the initial state is pure so ${\rm det}\big[2\Omega_{\sf U}\big] = 1$. For any $n \times n$ matrix $X$ and scalar $\varepsilon$ one can show that
\begin{equation}
\label{DetId}
{\rm det}\big[ \,{\rm I}_{n} + \varepsilon X \big] \approx 1 + \tr [\,\varepsilon X] \;,
\end{equation}
for $\varepsilon \to 0\,$. Therefore \eref{DetV} becomes
\begin{equation}
\label{DetVtmix2}
{\rm det}\big[2V(\tau_{\rm mix})\big] = 1 + \omega \, \tau_{\rm mix} \;,
\end{equation}
where we have defined for ease of writing
\begin{equation}
\label{omegaDefn}
\omega (\Omega_{\sf U}) \equiv 2 \, {\rm tr}\big[A\big] + {\rm tr}\big[D\,\Omega_{\sf U}^{-1}\big] \;.
\end{equation}
Substituting \eref{DetVtmix2} back into \eref{DetVtmix1} and solving for $\tau_{\rm mix}$ we arrive at
\begin{equation}
\label{MixingTime}
\tau_{\rm mix} \approx \frac{2 \epsilon}{\omega} \;.
\end{equation}
From this expression we see that to maximize the mixing time one should minimize $\omega$. From the definition \eref{omegaDefn} this means that (given $A$ and $D$) $\Omega_{\sf U}$ should be chosen to minimize $\tr \big[D\,\Omega_{\sf U}^{-1}\big]$ subject to the constraints \eref{PRconstraint} and \eref{QuantumConstraint}. Since $\Omega_{\sf U}$ depends on the unravelling $\sf U$, once the $\omega$-minimizing $\Omega_{\sf U}$ is found, call it $\Omega_{\sf U}^\star$, we can then find the unravelling that generates $\Omega_{\sf U}^\star$ by a simple relation \cite{WD05}. The set of pure states that can be obtained by such a measurement therefore forms the pointer basis. In the following we will denote the longest mixing time attainable by a PR ensemble by $\tau_{\rm mix}^\star$, formally defined by
\begin{equation}
\label{tmixStar}
\tau_{\rm mix}^\star \equiv \frac{2\epsilon}
{\underset{\Omega_{\sf U}}{\rm min} \; \omega(\Omega_{\sf U})}
\end{equation}
subject to
\begin{eqnarray}
\label{PRCons}
\eqalign
A \, \Omega_{\sf U} + \Omega_{\sf U} \, A^\top + D \ge 0 \;, \\
\label{QuantumCons}
\Omega_{\sf U} + \frac{i}{2} \, Z \ge 0 \;,
\end{eqnarray}
where we have repeated \eref{PRconstraint} and \eref{QuantumConstraint} for convenience. We will denote other quantities associated with $\tau_{\rm mix}^\star$ also with a star superscript; in particular,
\begin{equation}
\label{PointerBasis}
\Omega_{\sf U}^\star \equiv \underset{\Omega_{\sf U}}{\rm arg\;min} \; \omega(\Omega_{\sf U}) \;,
\end{equation}
(still subject to \eref{PRCons} and \eref{QuantumCons}, of course) and ${\sf U}^\star$ for the unravelling that realizes the pointer basis. \Eref{PointerBasis} now defines the pointer basis of the system under continuous observation, which has a decoherence rate characterized by $1/\tau_{\rm mix}^\star\,$. We will illustrate the use of \eref{tmixStar}--\eref{PointerBasis} in~\sref{ExampleQBM} with the example of quantum Brownian motion.
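In practice, then, finding the pointer basis amounts to a small constrained optimization. The Python sketch below simply evaluates $\omega(\Omega_{\sf U})$ and the small-$\epsilon$ mixing time \eref{MixingTime}; a search for $\Omega_{\sf U}^\star$ minimizes this $\omega$ over covariances passing the PR test given earlier (see \sref{ExampleQBM} for a concrete search).
\begin{verbatim}
import numpy as np

def omega(Omega, A, D):
    # omega(Omega) = 2 tr A + tr(D Omega^{-1}), eq. (omegaDefn).
    return 2.0 * np.trace(A) + np.trace(D @ np.linalg.inv(Omega))

def mixing_time(Omega, A, D, eps=0.1):
    # Small-eps estimate tau_mix ~ 2 eps / omega, eq. (MixingTime).
    return 2.0 * eps / omega(Omega, A, D)
\end{verbatim}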
\section{Controlled linear Gaussian quantum systems}
\label{CLGsys}
We have said above that for LG systems the unconditioned steady state in phase space is a uniform Gaussian ensemble, where uniformity refers to the fact that each member of the ensemble has the same covariance matrix given by $\Omega_{\sf U}$. Of the different ensembles the one with $\Omega_{\sf U}^\star$ identifies the pointer basis and the unravelling ${\sf U}^\star$ that induces it. All that is left to do to put the system into a specific pointer state is to steer the mean of the system configuration $\langle \hat{\bi x} \rangle_{\rm c}$ (or, in other words, the centroid of the Wigner distribution in phase space) towards a particular point, say $\langle \hat{\bi x} \rangle_{\rm c} = {\bi a}$. This requires feedback control, described by adding a control input ${\bi u}(t)$ that depends on the measurement record ${\bi y}_{[0,t)}$ as shown in~\fref{FeedbackLoop}.
For simplicity we will define our target state to be at the origin of phase space, i.e.~${\bi a} = {\bf 0}$. Choosing the phase-space origin simplifies our analysis for a system whose uncontrolled Wigner function does not have a systematic drift away from the origin: the uncontrolled drift then does not act against a feedback designed to drive the system towards ${\bi a}={\bf{0}}$. In this case one only has to mitigate the effects of diffusion, a process which leads to greater uncertainty about the system configuration. As this increase in uncertainty can be quantified by the mixing time, the effect of the feedback can be characterized by comparing the control strength to $\tau_{\rm mix}^\star$. This is illustrated in~\sref{ExampleQBM} using the example of quantum Brownian motion.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{figure4.pdf}
\caption{ Feedback loop.}
\label{FeedbackLoop}
\end{figure}
\subsection{Adding feedback}
To steer the system towards the origin in phase space we apply a classical input proportional to $\langle \hat{\bi x} \rangle_{\rm c}\,$ (effected by the actuator in~\fref{FeedbackLoop})
\begin{equation}
\label{ControlInput}
{\bi u}(t) = - K \, \langle \hat{\bi x} \rangle_{\rm c}(t) \;,
\end{equation}
where $K$ is a constant matrix, which we take to be
\begin{equation}
\label{FeedbackLG}
K = \frac{k \epsilon}{\tmix^\star} \: {\rm I}_{2n} \;.
\end{equation}
Here $k \ge 0$ is a dimensionless parameter which measures the feedback strength relative to the decoherence. This feedback scheme is similar to a special class of optimal control known as linear-quadratic-Gaussian (LQG) control. In fact, we show in appendix C that our feedback scheme is equivalent to a limiting case of LQG control.
The long-time conditional dynamics of the system can thus be written as
\begin{equation}
\label{KBcontrol}
d\langle \hat{\bi x} \rangle_{\rm c} = N \langle \hat{\bi x} \rangle_{\rm c} \, dt + {\rm F}^\top \, d{\bi w}
\end{equation}
where
\begin{equation}
\label{DefnN}
N \equiv A - K = A -\frac{k \epsilon}{\tmix^\star} \; {\rm I}_{2n} \;,
\end{equation}
while the equation for the covariance remains unchanged, still given by \eref{KB2}. The control input thus changes only the mean of $\hat{\bi x}$. One can derive from \eref{KBcontrol} the identity
\begin{equation}
\label{RelationM1}
N \, M + M N^\top + {\rm F}^\top {\rm F} = 0 \;,
\end{equation}
where, as long as $N$ is Hurwitz (i.e.~all its eigenvalues have negative real parts),
\begin{equation}
M \equiv {\rm E_{ss}}\big[ \langle \hat{\bi x} \rangle_{\rm c} \langle \hat{\bi x} \rangle_{\rm c}^\top \big] \;,
\end{equation}
with ${\rm E}_{\rm ss}[X]$ denoting the ensemble average of $X$ in the long-time limit (or ``steady state'' \footnote{When referring to $\langle \hat{\bi x} \rangle_{\rm c}$ we prefer the term long-time limit as opposed to steady state for the $t \to \infty$ limit since in this limit $\langle \hat{\bi x} \rangle_{\rm c}$ still follows a jiggly motion and is not constant as steady state would imply.}). It thus follows that the unconditioned steady state variance matrix in the presence of the feedback is given by
\begin{equation}
\label{RelationM2}
V_{\rm ss} = \Omega_{\sf U} + M \;.
\end{equation}
Relations \eref{RelationM1} and \eref{RelationM2} are useful for calculating the fidelity \cite{NC10,Uhl76} between the controlled state and the target state in the long-time limit.
\subsection{Performance of feedback}
\label{s42}
We take the fidelity between the target state and the state under control to be our performance measure for the feedback loop. The target state has the Wigner function
\begin{equation}
\label{TargetWigner}
W_\odot(\breve{\bi x}) = g(\breve{\bi x};{\bf 0},\Omega_{\sf U}) \;,
\end{equation}
while the controlled state is given by
\begin{equation}
\label{ControlledWigner}
W_{\rm ss}(\breve{\bi x}) = g(\breve{\bi x};{\bf 0},V_{\rm ss}) \;.
\end{equation}
The fidelity between states defined by \eref{TargetWigner} and \eref{ControlledWigner} can be shown to be
\begin{equation}
\label{Fidelity}
F = \frac{1}{\sqrt{{\rm det}\big[ V_{\rm ss} + \Omega_{\sf U} \big]}} \;.
\end{equation}
To calculate the determinant in the denominator we note that \eref{RelationM2} gives
\begin{equation}
\label{FidelityDenom}
\big( V_{\rm ss} + \Omega_{\sf U} \big) \, (2\Omega_{\sf U})^{-1} = {\rm I}_{2n} + M \, (2\Omega_{\sf U})^{-1}.
\end{equation}
Noting also that $\det[2\Omega_{\sf U}] = 1$, we thus have
\begin{equation}
\label{det[Vss+qss]}
{\rm det}\big[V_{\rm ss} + \Omega_{\sf U} \big] = {\rm det}\big[ \,{\rm I}_{2n} + M \Omega_{\sf U}^{-1}/2 \big] \;.
\end{equation}
To simplify this further we need an expression for $M$, which can be derived by using \eref{RelationM1}. Substituting \eref{KB2} and \eref{DefnN} into \eref{RelationM1} we arrive at
\begin{equation}
\label{EqnForM}
M = \frac{\tau_{\rm mix}^\star}{2k \epsilon} \; \big( A \, \Omega_{\sf U} + \Omega_{\sf U} \, A^\top + D \big) + \mathcal{O}((\tau_{\rm mix}^\star)^2) \;.
\end{equation}
Because $\tau_{\rm mix}^\star \sim \epsilon$ for $\epsilon \ll 1$, we may discard second-order terms in $\tau_{\rm mix}^\star$ in \eref{EqnForM} to get
\begin{equation}
\label{M(tmix)}
M \approx \frac{\tau_{\rm mix}^\star}{2k\epsilon} \: \big( A \, \Omega_{\sf U} + \Omega_{\sf U} \, A^\top + D \big) \;.
\end{equation}
For strong control ($k \gg 1$), the determinant in \eref{det[Vss+qss]} can then be approximated by an expansion in $M$ to first order. Using \eref{DetId} this gives
\begin{equation}
\label{FidelityDet2}
\det \big[V_{\rm ss} + \Omega_{\sf U} \big] \approx 1 + \frac{1}{2} \, \tr\big[ M \Omega_{\sf U}^{-1} \big] \;.
\end{equation}
Multiplying \eref{M(tmix)} by $\Omega_{\sf U}^{-1}$ on the right and taking the trace we get
\begin{equation}
\label{TraceMOmegainv}
\tr \big[ M \Omega_{\sf U}^{-1} \big] \approx \frac{\omega(\Omega_{\sf U})}{2k \epsilon} \: \tau_{\rm mix}^\star = \frac{\tau_{\rm mix}^\star}{k\tau_{\rm mix}} \;.
\end{equation}
Substituting \eref{TraceMOmegainv} into \eref{FidelityDet2} and the resulting expression into the fidelity \eref{Fidelity} we find
\begin{equation}
F \approx 1 - \frac{1}{4k}\frac{\tau_{\rm mix}^\star}{\tau_{\rm mix}} \;.
\end{equation}
That is, the fidelity is close to one for $k$ large (i.e.~strong control), as expected. One can also calculate the purity of the feedback-controlled steady state. An expression for this can be obtained from \eref{Fidelity} by replacing $\Omega_{\sf U}$ by $V_{\rm ss}$. Then, following essentially the same method as for the fidelity calculation, we find that it is given by
\begin{equation}
P \approx 1 - \frac{1}{2k}\frac{\tau_{\rm mix}^\star}{\tau_{\rm mix}} \;.
\end{equation}
In both cases we see that the best performance is achieved when $\tau_{\rm mix} = \tau_{\rm mix}^\star$, that is, when the unravelling generating the most robust ensemble is used. This demonstrates the link between the pointer basis and feedback control for LG quantum systems.
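To make the above concrete, the following Python sketch assembles the strong-control estimates of this section: given an unravelling-induced covariance $\Omega_{\sf U}$ and gain parameter $k$, it returns the approximate fidelity and purity via \eref{M(tmix)} and \eref{RelationM2}. It assumes the small-$\epsilon$, large-$k$ regime in which those expressions were derived.
\begin{verbatim}
import numpy as np

def control_performance(Omega, A, D, k, tau_star, eps=0.1):
    # M from (M(tmix)), V_ss from (RelationM2), then
    # F = 1/sqrt(det(V_ss + Omega)), P = 1/sqrt(det(2 V_ss)).
    M = (tau_star / (2.0 * k * eps)) * (A @ Omega + Omega @ A.T + D)
    V_ss = Omega + M
    F = 1.0 / np.sqrt(np.linalg.det(V_ss + Omega))
    P = 1.0 / np.sqrt(np.linalg.det(2.0 * V_ss))
    return F, P
\end{verbatim}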
\section{Example: quantum Brownian motion}
\label{ExampleQBM}
We now illustrate the theory of \sref{LGQS} and \sref{CLGsys} with the example of a particle in an environment with temperature $T$ undergoing quantum Brownian motion in one dimension in the high temperature limit.
This limit means $k_{\rm B} T \gg \hbar \gamma$ where $k_{\rm B}$ is Boltzmann's constant
and $\gamma$ is the momentum damping rate. In this limit we can use a Lindblad-form master equation as per \eref{Lindblad} to describe the Brownian motion~\cite{DIO93,AB03},
with one dissipative channel (i.e.~$l=1$):
\begin{equation}
\label{L2}
\dot{\rho} = {\cal L} \rho
= - i [\hat{H},\rho] + \hat{c} \rho \hat{c}^\dagger
- \frac{1}{2} \hat{c}^\dagger \hat{c} \rho - \frac{1}{2} \rho \hat{c}^\dagger \hat{c} \;,
\end{equation}
where
\begin{equation}
\hat{H} = \frac{\hat{p}^2}{2} + \frac{1}{4} \big( \hat{q} \hat{p} + \hat{p} \hat{q} \big) \;,
\label{LindbladOperatorQBM}
\quad \hat{c} = \sqrt{2T} \hat{q} + \frac{i}{\sqrt{8T}} \; \hat{p} \;.
\end{equation}
We are using scaled units such that the damping rate, particle mass, Boltzmann constant, and $\hbar$ are all unity.
The above master equation could also describe a driven and damped single-mode field in an optical cavity with a particular type of optical nonlinearity. In this case $\hat{c}$ is the effective annihilation operator for the field fluctuations (about some mean coherent amplitude). That is, the position $\hat{q}$ and momentum $\hat{p}$ operators of the particle translate into the quadratures of the field mode, with suitable scaling (which depends on the model temperature $T$). This interpretation of the master equation allows the unravellings we discuss below to be easily interpreted: they correspond to homodyne measurement of the cavity output with different local oscillator phases.
Comparing \eref{LindbladOperatorQBM} with \eref{LinSysHamiltonian} and \eref{SandCbar}, we see that $\hat{H}$ and $\hat{c}$ can be written, respectively, as a quadratic and linear function of a two-dimensional configuration defined by
\begin{equation}
\label{2by2Z}
\hat{\bi x} = \left( \begin{array}{c}
\hat{q} \\ \hat{p}
\end{array} \right) \;, \quad Z = \left( \begin{array}{cc}
0 & 1 \\ -1 & 0
\end{array} \right) \;.
\end{equation}
The matrices $G$ and $\tilde{C}$ in this case are given by
\begin{equation}
\label{2by2c}
G = \left( \begin{array}{cc}
0 & 1/2 \\ 1/2 & 1
\end{array} \right) \;, \quad \tilde{C} = \left( \begin{array}{cc}
\sqrt{2T} & i/\sqrt{8T}
\end{array} \right) \;.
\end{equation}
These can then be used to characterize the unconditional dynamics in terms of the drift and diffusion matrices given in \eref{feeda} and \eref{feedd} which are easily shown to be
\begin{equation}
\label{2by2ad}
A = \left( \begin{array}{cc}
0 & 1 \\ 0 & -1
\end{array} \right) \;, \quad D = \left( \begin{array}{cc}
1/8T & 0 \\ 0 & 2T
\end{array} \right) \;.
\end{equation}
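As a consistency check, feeding the above $G$ and $\tilde{C}$ through \eref{feeda} and \eref{feedd} (e.g.~with the \texttt{drift\_diffusion} sketch of \sref{LGQS}) reproduces \eref{2by2ad}; the temperature below is an illustrative value only.
\begin{verbatim}
import numpy as np

T = 100.0                       # illustrative temperature
G = np.array([[0.0, 0.5],
              [0.5, 1.0]])
Ctilde = np.array([[np.sqrt(2.0 * T), 1j / np.sqrt(8.0 * T)]])
Z = np.array([[0.0, 1.0], [-1.0, 0.0]])
S = Z.copy()                    # for n = l = 1 these coincide
Cbar = np.vstack([Ctilde.real, Ctilde.imag])

A = Z @ (G + Cbar.T @ S @ Cbar)   # -> [[0, 1], [0, -1]]
D = Z @ Cbar.T @ Cbar @ Z.T       # -> diag(1/(8T), 2T)
\end{verbatim}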
\subsection{Measurement}
\label{ExampleQBM1}
The theory of PR ensembles, and in particular the realization of a pointer basis by continuous measurement as explained in~\sref{UforPointerBasis} can be applied to the above quantum Brownian motion master equation.
Recall that for LG systems with an efficient fixed measurement, the PR ensembles are uniform Gaussian ensembles of pure states, uniform in the sense that every member of the ensemble is characterized by the same covariance matrix $\Omega_{\sf U}$. We showed that for such an ensemble to be a pointer basis, $\Omega_{\sf U}^\star$ must be the solution to the constrained optimization problem defined by \eref{PRCons}--\eref{PointerBasis}. To find $\Omega^\star_{\sf U}$ let us first write $\Omega_{\sf U}$ as
\begin{equation}
\label{icm}
\Omega_{\sf U} = \frac{1}{4} \left(
\begin{array}{cc}
\alpha & \beta \\ \beta & \gamma
\end{array}
\right)\;,
\end{equation}
which should satisfy the two linear matrix inequalities \eref{PRCons} and \eref{QuantumCons}. The second of these (the Schr\"{o}dinger-Heisenberg uncertainty relation) is saturated for pure states, and this allows us to write $\alpha$ in terms of $\beta$ and $\gamma$: $\alpha = (\beta^2 +4)/\gamma$. Then from the constraint \eref{PRCons}, we have:
\begin{equation}
\label{QBMPRa}
\left( \frac{1}{8T}+\frac{\beta}{2} \right) \left( 2T-\frac{\gamma}{2} \right) - \frac{(\gamma-\beta)^2}{16} \geq 0 \;.
\end{equation}
In the case of $T \gg 1$, a simple calculation from \eref{QBMPRa} shows that the allowed solutions are restricted to $\gamma \in [0,4T)$ and $\beta \in [0,16T]$ (the maximum range of $\beta$ being attained when $\gamma = 0$). The PR region is a convex shape in $\beta$-$\gamma$ space, as plotted in~\fref{pr}.
\begin{figure}[htbp]
\centering
\subfloat[]{
\label{pr}
\includegraphics[width=0.5\linewidth]{figure5a.pdf}
}
\hspace{1pt}
\subfloat[]{
\label{mix}
\includegraphics[width=0.4\linewidth]{figure5b.pdf}
}
\caption{ Physically realizable region and mixing time for $T$=100. (a) The PR region defined by \eref{QBMPRa} (shaded area). For $T \gg 1$ we find that $0 \leq \gamma \leq 4T$ and $0 \leq \beta \leq 16T$ as can be seen from the plot. (b) The mixing time over the PR region when $\epsilon = 0.1$. It can be seen that the longest mixing time is for ensembles on the boundary of the PR region. That is, the pointer basis lies on the boundary. These plots remain qualitatively the same for all large $T$.}
\end{figure}
Now the definition \eref{DetVtmix1} of the mixing time $\tau_{\rm mix}$ is connected with $\beta$ and $\gamma$ by an implicit function (see appendix A):
\begin{equation}
\det [2V(\tau_{\rm mix}, \beta, \gamma)] = 1/(1-\epsilon)^2 \;.
\end{equation}
Searching over the PR region, we can find the longest mixing time $\tau_{\rm mix}^\star$, attained at the point $(\beta^\star,\gamma^\star)$. This point corresponds to $\Omega_{\sf U}^\star$, from which we can derive the optimal unravelling matrix ${\sf U}^\star$. It can be shown analytically (appendix A) that $(\beta^\star, \gamma^\star)$ always lies on the boundary of the PR region. Such conditioned states are generated by extremal unravellings ${\sf U}$; physically (in the language of quantum optics) these correspond to homodyne detection. Although we will not do so here, a relation between ${\sf C}$ in \eref{LinSys2} and ${\sf U}^\star$ may be used to show that ${\sf U}^\star$ does indeed always correspond to a homodyne measurement.
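A brute-force version of this search is simple to implement. The Python sketch below (step sizes and grid are illustrative) computes the exact mixing time by Euler-integrating \eref{V dynamics} from $V(0)=\Omega_{\sf U}$ until \eref{DetVtmix1} is met, and scans $\beta$ along the boundary of the PR region, where we have rearranged the saturated \eref{QBMPRa} as a quadratic in $\gamma$.
\begin{verbatim}
import numpy as np

T, eps, dt = 100.0, 0.1, 1e-5
A = np.array([[0.0, 1.0], [0.0, -1.0]])
D = np.diag([1.0 / (8.0 * T), 2.0 * T])
target = 1.0 / (1.0 - eps) ** 2

def tau_mix(beta, gamma):
    alpha = (beta ** 2 + 4.0) / gamma        # purity constraint
    V = 0.25 * np.array([[alpha, beta], [beta, gamma]])
    t = 0.0
    while np.linalg.det(2.0 * V) < target:   # eq. (DetVtmix1)
        V = V + (A @ V + V @ A.T + D) * dt   # eq. (V dynamics)
        t += dt
    return t

def gamma_boundary(beta):
    # Saturated (QBMPRa) rearranged as
    # gamma^2 + (1/T + 2 beta) gamma + (beta^2 - 4 - 16 T beta) = 0.
    b = 1.0 / T + 2.0 * beta
    c = beta ** 2 - 4.0 - 16.0 * T * beta
    return 0.5 * (-b + np.sqrt(b * b - 4.0 * c))

betas = np.linspace(0.2, 4.0, 20)
taus = [tau_mix(b, gamma_boundary(b)) for b in betas]
print(betas[int(np.argmax(taus))])           # estimate of beta*
\end{verbatim}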
A similar conclusion was reached in \cite{ABJW05}, but for measurements that maximize the survival time $\tau_{\rm sur}$, and only on the basis of numerics. Note that the survival time (see appendix B) is more general than the mixing time in the sense that it captures any deviation of an unconditionally evolved state from the initially conditioned pure state, not just its decrease in purity. This means that typically $\tau_{\rm sur} \le \tau_{\rm mix}$. We show analytically in appendix B that $\tau_{\rm sur}$ is always maximized by PR ensembles that lie on the boundary of the PR region. This result thus rigorously justifies the claim of~\cite{ABJW05}, and it is not surprising that such ensembles maximize $\tau_{\rm mix}$ as well (appendix A).
We can see from \fref{btt} (b) and (d) that $\tau_{\rm mix}^\star$ decreases monotonically as a function of temperature. Physically this is because a finite-temperature environment tends to introduce thermal fluctuations into the system, making it more mixed. By considering $T$ in the range $10^2$ to $10^4$ we derive numerically a power law for $\tau_{\rm mix}^\star$; see~\fref{btt}. The fits are given in~\tref{t1}, and to a good approximation we have $\tau_{\rm mix}^\star \sim T^{-1/2}$. Of course this power law will not hold for small $T$, but in that regime the high-temperature approximation made in deriving \eref{L2} breaks down. \Fref{btt} also shows that $\beta^\star \approx 1$, independent of $T$. From the equation for the boundary, it follows that $\gamma^\star \approx 4\sqrt{T}$.
\begin{figure}[htbp]
\includegraphics[width=1.\linewidth]{figure6.pdf}
\caption{ (a), (c): $\beta^\star$ as a function of $T$ for $\epsilon$=0.1 and $\epsilon$=0.2 respectively. (b), (d): log-log plot of $\tau_{\rm mix}^\star$ as a function of $T$ for $\epsilon$=0.1 and $\epsilon$=0.2 respectively.}
\label{btt}
\end{figure}
\begin{table}
\centering
\begin{tabular}{c|cc|cl}
\hline
&\multicolumn{2}{c}{Value} & \multicolumn{2}{c}{Standard Error} \\
\cline{2-3}
\cline{4-5}
$\epsilon$ & $a$ & $b$ & $a$ & \multicolumn{1}{c}{$b$} \\
\hline
0.1 & -1.02297 & -0.50913 & 0.00325 & 9.63577$\times 10^{-4}$ \\
0.2 & -0.68442 & -0.50959 & 0.00321 & 9.53113$\times 10^{-4}$ \\
\hline
\end{tabular}
\caption{Fitting results for~\fref{btt} (b) and (d). The fit is given by $\log \tau_{\rm mix}^\star = b \log T + a$.}
\label{t1}
\end{table}
\subsection{Measurement and feedback}
Having fixed our measurement scheme we are now in a position to stabilize the system to a state in phase space prescribed by the Wigner function $W_{\odot}(\breve{\bi x})=g(\breve{\bi x}; {\bf 0}, \Omega_{\sf U})$. To do so we simply close the feedback loop by adding a control signal in the form of \eref{ControlInput} and \eref{FeedbackLG}:
\begin{equation}
{\bi u}(t)=-\frac{k \epsilon}{\tau_{\rm mix}^{\star}}\langle \hat{\bi x}(t)\rangle_{\rm c} \; ,
\end{equation}
where $k \geq 0$ is a dimensionless parameter determining the strength of control. Under controlled dynamics the drift matrix thus changes from $A$ [specified in \eref{2by2ad}] to $N$ [recall \eref{DefnN}] given by
\begin{equation}
N = \bigg(
\begin{array}{cc}
-k \epsilon/\tau_{\rm mix}^\star & 1 \\
0 & -(1+k \epsilon/\tau_{\rm mix}^\star)
\end{array}
\bigg) \;,
\end{equation}
This is an upper-triangular matrix so its eigenvalues $\lambda(N)$ may be read off from the diagonal entries:
\begin{equation}
\lambda(N) = \big\{ -k\epsilon/\tau_{\rm mix}^\star , \, -(1+k\epsilon/\tau_{\rm mix}^\star) \big\} \;.
\end{equation}
Since $k\epsilon$ and $\tau_{\rm mix}^\star$ are both greater than zero, all eigenvalues of $N$ are negative, i.e.~$N$ is Hurwitz (or, in the language of control theory, `strictly stable'), and the conditional steady-state dynamics described by \eref{KBcontrol} will indeed be stabilized to a state with zero mean in the phase-space variables. Note that the uncontrolled dynamics has a drift matrix with eigenvalues given by
\begin{equation}
\lambda(A) = \big\{ 0, \, -1\big\} \;,
\end{equation}
showing that quantum Brownian motion by itself is only ``marginally stable'' (i.e.~the system configuration will not converge unconditionally to zero, owing to the zero eigenvalue). Physically this is because nothing prevents the position of the Brownian particle from diffusing away to infinity. This illustrates a ``stabilizing effect'' of the feedback loop that the uncontrolled dynamics cannot provide.
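This stabilizing effect is easy to verify numerically; a short Python check, with an illustrative value for $k\epsilon/\tau_{\rm mix}^\star$:
\begin{verbatim}
import numpy as np

A = np.array([[0.0, 1.0], [0.0, -1.0]])
N = A - 10.0 * np.eye(2)          # k*eps/tau_mix* = 10, illustrative
print(np.linalg.eigvals(A))       # {0, -1}: marginally stable
print(np.linalg.eigvals(N))       # both negative: Hurwitz
\end{verbatim}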
One may expect that the state of the quantum Brownian particle can be stabilized to the target pointer state \eref{TargetWigner} when the strength of feedback is much greater than the decoherence rate $1/\tau_{\rm mix}^\star$. However, here we show that the system state can be stabilized to \eref{TargetWigner} very well even when the feedback strength is only comparable to the decoherence rate. This, and the effects of varying $\epsilon$, the environment temperature $T$, and $k$ on the performance of control, are depicted in~\fref{fb1}, which we now explain.
In~\fref{fb1} we plot the infidelity and the mixing time for $(\beta,\gamma)$ points that saturate the PR constraint \eref{QBMPRa}, as functions of $\beta$. We do not consider values of $\beta$ and $\gamma$ interior to the PR region as we have already shown that the $\Omega_{\sf U}^\star$ which generates the pointer basis will lie on the boundary.
In~\fref{fb1} (a) we set the feedback strength to be comparable to the decoherence rate (corresponding to $k=10$) at a fixed temperature ($T=1000$). We see from the blue curve in~\fref{fb1}~(a) that the infidelity achieves a minimum close to zero. We also see that the pointer-basis-inducing measurement determined above is indeed optimal for our control objective, since the mixing time and the infidelity reach their maximum and minimum, respectively, at the same value of $\beta$, namely $\beta^\star$.
To see the effect of the environment temperature we increase $T$ from $1000$ to $5000$ but keep everything else constant. This is shown in~\fref{fb1} (b). As explained previously in~\sref{ExampleQBM1}, an environment at a larger temperature will have a stronger decohering effect on the system, and this is seen as the decrease in the mixing time for all values of $\beta$. However, the infidelity, and in particular its minimum value corresponding to $\beta^\star$, has not changed much. This is as expected, since the strength of the feedback is defined relative to the decoherence rate.
Using again~\fref{fb1} (a) for reference, we show in~\fref{fb1} (c) the effect of having a larger $\epsilon$ and a smaller $k$ (with $k\epsilon$ fixed). Quantitatively the curves for infidelity and mixing time change, as expected, but qualitatively they are very similar. In particular, the optimal ensemble is at almost the same point for the minimal infidelity, and the value of $\beta^\star$ is little different from that in~\fref{fb1} (a). This is the case even though $\epsilon = 0.2$ barely qualifies as small, and so we would expect some deviations from the small-$\epsilon$ results obtained in~\sref{s42}.
Finally, in~\fref{fb1} (d) we show the effect of increasing the feedback strength by keeping $\epsilon$ and $T$ the same as in~\fref{fb1} (c) but changing $k$ from 5 back up to 10. As expected this improves the infidelity (i.e.~making it lower for all $\beta$), while the mixing time remains unchanged compared to that in (c), since it depends only on $\epsilon$ and $T$. Comparing~\fref{fb1} (d) to (a) shows that the infidelity curve in (d) is restored to one similar to that in (a), as expected because they use the same feedback strength $k$.
In~\fref{fb2}, we push even further into the regimes where $\epsilon$ is not small, and $k$ is not large. In~\fref{fb2} (a), we choose $\epsilon = 0.5$, and find that the ensemble ($\beta^\star, \gamma^\star$) with the longest mixing time for this threshold of impurity---recall \eref{MixingTimeDefn}---is significantly different from that found with $\epsilon$ small. In the same figure we plot the infidelity of the controlled state with the target state, with $k=10$ and $k=2$. The former (green) gives a minimum infidelity comparable with those with $k=10$ in~\fref{fb1}, and at a similar value of $\beta$. This value of $\beta$ thus differs from the $\beta^\star$ found via maximizing the mixing time.
This is not surprising as we expect them to be the same only for $\epsilon$ small. The two are closer together, however, for $k=2$ (blue), for which the performance of the feedback is quite poor, as expected. Keeping $k=2$ but restoring $\epsilon$ to a small value of $0.1$ gives somewhat better performance by the feedback control, as shown in~\fref{fb2} (b).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\linewidth]{figure7.pdf}
\caption{ Feedback simulation results. Mixing time (red dashed curve) and infidelity (blue curve) for one-dimensional quantum Brownian motion as a function of $\beta$ for (a) $\epsilon = 0.1$, $T=1000$, $k =10$; (b) $\epsilon = 0.1$, $T=5000$, $k=10$; (c) $\epsilon = 0.2$, $T=1000$, $k =5$; and (d) $\epsilon = 0.2$, $T=1000$, $k = 10$. In summary, the effects of changing $T$, $\epsilon$, and $k$ are respectively illustrated in passing from (a) to (b); (a) to (c); and (c) to (d). The left axis corresponds to the infidelity and the right axis to the mixing time. The red dot marks the maximum mixing time (i.e.~the point $\beta^\star$) and the blue dot the minimal infidelity. See the main text for an explanation of these plots.}
\label{fb1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\linewidth]{figure8.pdf}
\caption{ Feedback simulation results. Mixing time (red dashed curve) and infidelity (blue and green curves) for one-dimensional quantum Brownian motion as a function of $\beta$ for (a) $\epsilon = 0.5$, $T=1000$, with $k = 2$ for the blue curve and $k = 10$ for the green curve; (b) $\epsilon = 0.1$, $T=1000$, $k=2$. The left axis corresponds to the infidelity and the right axis to the mixing time. The red dot marks the maximum mixing time (i.e.~the point $\beta^\star$) and the blue (green) dot the minimal infidelity.}
\label{fb2}
\end{figure}
\section{Conclusion}
We have shown a connection between two hitherto unrelated topics: pointer states and quantum feedback control. While pointer states appeared in the quantum foundations literature in the early 1980s, the advent of quantum information has since extended this interest in pointer states, and more generally in decoherence, into the realm of practical quantum computing \cite{HK97,CMdMFD01,KDV11}. Some of these studies have used pointer-state engineering as a means of resisting decoherence, e.g.~\cite{CMdMFD01,KDV11}, but neither work uses feedback \footnote{Note that feedback has been used to protect quantum systems from decoherence, as in~\cite{GTV96,HK97}, but not specifically to produce pointer states.}.
Here we have shown that the pointer states, as defined in a rigorous way by us, are those states which are most easily attainable, with high fidelity, as target states in quantum linear Gaussian systems. By ``most easily attainable'' we mean with the minimum feedback strength. While we obtained general analytical results in certain limits, our numerical results for a particular system (quantum Brownian motion) show that our conclusions still hold approximately in a much wider parameter regime. Our work shows how the concept of pointer states has applications outside the realm of quantum foundations, and could aid in the design of feedback loops for quantum LG systems by suggesting the optimal monitoring scheme.
\ack
This research is supported by the ARC Centre of Excellence grant CE110001027. AC acknowledges support from the National Research Foundation and Ministry of Education in Singapore.
| {'timestamp': '2014-11-20T02:09:00', 'yymm': '1407', 'arxiv_id': '1407.5007', 'language': 'en', 'url': 'https://arxiv.org/abs/1407.5007'} |
\section*{\refname}\small\renewcommand\bibnumfmt[1]{##1.}}
\usepackage[T1]{fontenc}
\usepackage{enumitem}
\usepackage{booktabs}
\usepackage{hyperref}
\usepackage{tikz}
\usetikzlibrary{myautomata}
\usetikzlibrary{decorations.pathreplacing,calc}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{multicol}
\usepackage{accsupp}
\usepackage{amssymb}
\usepackage{mathtools}
\usepackage[vlined]{algorithm2e}
\providecommand*{\napprox}{%
\BeginAccSupp{method=hex,unicode,ActualText=2249}%
\not\approx
\EndAccSupp{}%
}
\newcommand{\mathbin{\approx}}{\mathbin{\approx}}
\newcommand{\mathbin{\napprox}}{\mathbin{\napprox}}
\newcommand{\mathbin{\sqsubseteq}}{\mathbin{\sqsubseteq}}
\newcommand{\tikzmark}[1]{\tikz[overlay,remember picture] \node (#1) {};}
\definecolor{lime}{HTML}{A6CE39}
\DeclareRobustCommand{\orcidicon}{
\hspace{-2mm}
\begin{tikzpicture}
\draw[lime, fill=lime] (0,0)
circle [radius=0.16]
node[white] (ID) {{\fontfamily{qag}\selectfont \tiny ID}};
\draw[white, fill=white] (-0.0625,0.095)
circle [radius=0.007];
\end{tikzpicture}
\hspace{-2.5mm}
}
\def\orcidID#1{\href{https://orcid.org/#1}{\smash{\orcidicon}}}
\title{Effective Reductions of Mealy Machines}
\author{Florian Renkin\orcidID{0000-0002-5066-1726} \and Philipp Schlehuber-Caissier\orcidID{0000-0002-6611-9659} \and Alexandre Duret-Lutz\orcidID{0000-0002-6623-2512} \and Adrien Pommellet\orcidID{0000-0001-5530-152X}}
\institute{
LRDE, EPITA, Kremlin-Bicêtre, France \email{\{frenkin,philipp,adl,adrien\}@lrde.epita.fr}\\
}
\authorrunning{F. Renkin \and P. Schlehuber-Caissier \and A. Duret-Lutz \and A. Pommellet}
\def\todo#1{\textcolor{red}{#1}}
\usetikzlibrary{automata}
\usetikzlibrary{arrows.meta}
\usetikzlibrary{bending}
\usetikzlibrary{quotes}
\usetikzlibrary{positioning}
\usetikzlibrary{calc}
\tikzset{
automaton/.style={
semithick,shorten >=1pt,>={Stealth[round,bend]},
node distance=1cm,
initial text=,
every initial by arrow/.style={every node/.style={inner sep=0pt}},
every state/.style={minimum size=7.5mm,fill=white}
},
smallautomaton/.style={
automaton,
node distance=5mm,
every state/.style={minimum size=4mm,fill=white,inner sep=1pt}
},
mediumautomaton/.style={
automaton,
node distance=1.5cm,
every state/.style={minimum size=6mm,fill=white,inner sep=1pt}
},
initial overlay/.style={every initial by arrow/.style={overlay}},
accset/.style={
fill=blue!50!black,draw=white,text=white,thin,
circle,inner sep=1pt,anchor=center,font=\bfseries\sffamily\tiny
},
color acc0/.style={fill=magenta},
color acc1/.style={fill=cyan},
color acc2/.style={fill=orange},
color acc3/.style={fill=green!70!black},
color acc4/.style={fill=blue!50!black},
}
\makeatletter
\tikzoption{initial angle}{\tikzaddafternodepathoption{\def\tikz@initial@angle{#1}}}
\makeatother
\tikzstyle{initial overlay}=[every initial by arrow/.style={overlay}]
\tikzstyle{state-labels}=[state/.style=state with output,inner sep=2pt]
\def\slabel#1{\nodepart{lower} #1}
\tikzstyle{statename}=[
below,label distance=2pt,
fill=yellow!30!white,
rounded corners=1mm,inner sep=2pt
]
\tikzstyle{accset}=[
fill=blue!50!black,draw=white,text=white,thin,
circle,inner sep=.9pt,anchor=center,font=\bfseries\sffamily\tiny
]
\tikzset{
ks/.style={},
collacc0/.style={fill=blue!50!cyan},
collacc1/.style={fill=magenta},
collacc2/.style={fill=orange!90!black},
collacc3/.style={fill=green!70!black},
collacc4/.style={fill=blue!50!black},
fs/.style={font=\bfseries\sffamily\small},
acc/.pic={\node[text width={},text height={},minimum size={0pt},accset,collacc#1,ks]{#1};},
accs/.pic={\node[text width={},text height={},minimum size={0pt},accset,collacc#1,fs,ks]{#1};},
starnew/.pic={\node[text width={},text height={},minimum size={0pt},text=magenta]{$\filledlargestar$};},
starimpr/.pic={\node[text width={},text height={},minimum size={0pt},text=blue!50!cyan]{$\filledlargestar$};},
balldigit/.style={text=white,circle,minimum size={12pt},shade,ball color=structure.fg,inner sep=0pt,font={\footnotesize\bf},align=center}
}
\def\balldigit#1#2{\tikz[baseline=(X.base)]\node(X)[balldigit,#2]{#1};}
\def\acc#1{\tikz[smallautomaton,baseline=-4pt] \pic{accs=#1};}
\def\acct#1{\tikz[smallautomaton,baseline=-4pt] \pic{acc=#1};}
\makeatletter
\def\currentopacity{1}
\tikzset{
opacity/.append code={
\pgfmathsetmacro\currentopacity{#1*\currentopacity}
},
opacity aux/.code={
\tikz@addoption{\pgfsetstrokeopacity{#1}\pgfsetfillopacity{#1}}
},
every shadow/.style={opacity aux=\currentopacity},
covered/.style={opacity=0},
uncover on/.style={alt={#1{}{covered}}},
alt/.code args={<#1>#2#3}{%
\alt<#1>{\pgfkeysalso{#2}}{\pgfkeysalso{#3}}
},
explains/.style={rectangle callout,callout absolute pointer={#1},fill=structure.fg!10,drop shadow={fill=black!70!structure.fg!30},align=center}
}
\makeatother
\newcommand{\F}{\mathsf{F}}
\newcommand{\G}{\mathsf{G}}
\newcommand{\X}{\mathsf{X}}
\newcommand{\ensuremath{\mathbb{B}}}{\ensuremath{\mathbb{B}}}
\newcommand{\ensuremath{\mathbb{K}}}{\ensuremath{\mathbb{K}}}
\newcommand{\ensuremath{\mathrm{Succ}}}{\ensuremath{\mathrm{Succ}}}
\newcommand{\ensuremath{\mathrm{Out}}}{\ensuremath{\mathrm{Out}}}
\newcommand{\tup}[1]{{\ensuremath{\left(#1\right)}}}
\newcommand{\set}[1]{{\ensuremath{\left\lbrace#1\right\rbrace}}}
\newcommand{\bfm}[1]{{\ensuremath{\mathbf{#1}}}}
\tikzset{
automaton/.style={
semithick,shorten >=1pt,
node distance=1.5cm,
initial text=,
every initial by arrow/.style={every node/.style={inner sep=0pt}},
every state/.style={
align=center,
fill=white,
minimum size=7.5mm,
inner sep=0pt,
execute at begin node=\strut,
}},
smallautomaton/.style={automaton,
node distance=7mm,
every state/.style={minimum size=4mm,
fill=white,
inner sep=1.5pt}},
>={Stealth[round,bend]},
}
\begin{document}
\maketitle
\begin{abstract}
  We revisit the problem of reducing incompletely specified Mealy
  machines with reactive synthesis in mind. We propose two
  techniques: the former is inspired by the tool
  \textsc{MeMin}~\cite{abel.15.iccad} and solves the minimization
  problem; the latter is a novel approach derived from
  simulation-based reductions that does not guarantee minimality.
  However, we argue that it offers a good compromise between the size
  of the resulting Mealy machine and the runtime. The proposed
  methods are benchmarked against \textsc{MeMin} on a large
  collection of test cases made of well-known instances as well as
  new ones.
\end{abstract}
\section{Introduction}
\begin{figure}[b]
\begin{subfigure}[t]{0.28\textwidth}
{\centering
\begin{tikzpicture}[mediumautomaton,node distance=1.1cm and 2.1cm]
\node[draw,minimum width=1.4cm,minimum height=1.4cm] (C) {};
\draw[->] (C.160) +(-5mm,0) node[left]{$a$} -- (C.160);
\draw[->] (C.-160) +(-5mm,0) node[left]{$b$} -- (C.-160);
\draw[->] (C.20) -- ++(5mm,0) node[right]{$x$};
\draw[->] (C.-20) -- ++(5mm,0) node[right]{$y$};
\end{tikzpicture}
\caption{A reactive controller}
\label{controler}}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.39\textwidth}
{\centering
\begin{tikzpicture}[mediumautomaton,node distance=1cm and 1.33cm]
\node[initial,initial angle=90, lstate] (v0) {$0$};
\node[lstate,right=of v0] (v1) {$1$};
\node[lstate,left=of v0] (v2) {$2$};
\path[->]
(v0) edge[bend left=14] node[above,align=center] {$ab/\{x\bar{y},\mathrlap{xy\}}$\\$a\bar{b}/\set{\bar{x}y}$} (v1)
(v0) edge node[above] {$\bar{a}\bar{b}/\set{\bar{x}\bar{y}}$} (v2)
(v1) edge[bend left=14] node[below,align=center] {$\bar{a}\bar{b}/\set{x\bar{y},\bar{x}\bar{y}}$\\$ab/\set{x\bar{y}}$} (v0)
(v2) edge[loop below,looseness=10] node[right=2pt] {$\bar{a}\bar{b}/\set{\bar{x}\bar{y}}$} (v2)
;
\end{tikzpicture}
\caption{Original machine}
\label{autEx1}}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.24\textwidth}
{\centering
\begin{tikzpicture}[mediumautomaton,node distance=1.7cm and 2.1cm]
\begin{scope}[local bounding box=aut]
\node[initial,lstate] (v0) {$0$};
\end{scope}
\path[->]
(v0) edge[loop above,looseness=10] node[right=2pt] {$ab/\set{x\bar{y}}$} (v0)
(v0) edge[loop right,looseness=10] node {$a\bar{b}/\set{\bar{x}y}$} (v0)
(v0) edge[loop below,looseness=10] node[right=2pt] {$\bar{a}\bar{b}/\set{\bar{x}\bar{y}}$} (v0)
;
\end{tikzpicture}
\caption{Minimal machine}
\label{autEx1_Min}}
\end{subfigure}
\caption{Minimizing a Mealy machine that models a reactive controller}
\label{autEx1_all}
\end{figure}
Program synthesis is a well-established formal method: given a logical
specification of a system, it allows one to automatically generate a
provably correct implementation. It can be applied to reactive
controllers (Fig.~\ref{controler}): circuits that produce for an input
stream of Boolean valuations (here, over Boolean variables $a$ and
$b$) a matching output stream (here, over $x$ and $y$).
The techniques used to translate a specification (say, a Linear Time
Logic formula that relates input and output Boolean variables) into a
circuit often rely on automata-theoretic intermediate models such as
Mealy machines. These transducers are labeled graphs whose edges
associate input valuations to a choice of one or more output
valuations, as shown in Fig.~\ref{autEx1}.
Since Mealy machines with fewer states result in smaller circuits,
reducing and minimizing the size of Mealy machines are
well-studied problems~\cite{alberto.09.latw,paull.59.tec}.
However, vague specifications may cause incompletely specified
machines: for some states (i.e., nodes of the graph) and inputs,
there may not exist a unique, explicitly defined output, but rather
a set of valid outputs. Resolving those choices to a single output
(among those allowed) produces a fully specified machine that
satisfies the initial specification; however, these different choices
may have an impact on the minimization of the machine. While
minimizing fully specified machines is efficiently
solvable~\cite{hopcroft.71.tmc}, the problem is NP-complete for
incompletely specified machines~\cite{pfleeger.73.tc}. Hence, it may
also be worth exploring faster algorithms that seek to reduce the
number of states without achieving the optimal result.
Consider Fig.~\ref{autEx1}: this machine is incompletely specified, as
for instance state $0$ allows multiple outputs for input $ab$
(i.e., when both input variables $a$ and $b$ are true) and implicitly
allows any output for input $\bar a b$ (i.e., only $b$ is true), as
this input is not constrained in any way by the specification. We
can take advantage of this flexibility in unspecified outputs to
reduce the machine. For instance, if we constrain state 2 to behave
exactly as state 0 for inputs $ab$ and $a \bar b$, then these two
states can be merged. Adding further constraints can lead to the
single-state machine shown in Fig.~\ref{autEx1_Min}. These smaller
machines are not \emph{equivalent}, but they are \emph{compatible}: for
any input stream, they can only produce output streams that could also
have been produced by the original machine.
We properly define \emph{Incompletely specified Generalized Mealy
Machines} (IGMMs) in Section~\ref{secDef} and provide a SAT-based
minimization algorithm in Section~\ref{secMin}.
Since the minimization of incompletely specified Mealy machines is desirable but
not crucial for reactive synthesis, we also propose a faster reduction
technique yielding ``small enough'' machines in Section~\ref{secBisim}.
Finally, in Section~\ref{secBench} we benchmark these
techniques against the state-of-the-art tool
\textsc{MeMin}~\cite{abel.15.iccad}.
\section{Definitions}\label{secDef}
Given a set of propositions (i.e., Boolean variables) $X$,
let $\ensuremath{\mathbb{B}}^X$ be the set of all possible valuations on $X$, and
let $2^{\ensuremath{\mathbb{B}}^X}$ be its set of subsets.
Any element of $2^{\ensuremath{\mathbb{B}}^X}$ can be expressed as a Boolean formula over $X$.
The negation of proposition $p$ is denoted $\bar{p}$.
We use $\top$ to denote the Boolean formula that is always true, or
equivalently the set $\ensuremath{\mathbb{B}}^X$, and assume that $X$ is clear from the context.
A \emph{cube} is a conjunction of propositions or their negations (i.e., literals).
As an example, given three propositions $a$, $b$ and $c$,
the cube $a \land \bar{b}$, written $a\bar{b}$,
stands for the set of all valuations such that $a$ is true and $b$ is false,
i.e. $\{a\bar{b}c, a\bar{b}\bar{c}\}$.
Let $\ensuremath{\mathbb{K}}^X$ stand for the set of all cubes over $X$.
$\ensuremath{\mathbb{K}}^X$ contains the cube $\top$, which stands
for the set of all possible valuations over $X$.
Note that any set of valuations can be represented
as a disjunction of disjoint cubes (i.e., not sharing a common valuation).
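To make these notations concrete, here is a small Python sketch of ours (an illustration, not part of any tool discussed in this paper) that represents valuations as frozensets of (proposition, polarity) pairs and cubes as partial assignments; it reproduces the $a\bar{b}$ example above.
\begin{verbatim}
# Illustrative encoding of valuations and cubes (assumes explicit
# enumeration is affordable, i.e., few propositions).
from itertools import product

def valuations(props):
    """All valuations over props, as frozensets of (prop, bool)."""
    return [frozenset(zip(props, bits))
            for bits in product([False, True], repeat=len(props))]

def cube_to_set(cube, props):
    """Expand a cube, given as a partial assignment {prop: bool},
    into its set of valuations."""
    return {v for v in valuations(props)
            if all((p, b) in v for p, b in cube.items())}

def cubes_disjoint(c1, c2):
    """Two cubes are disjoint iff they force some proposition to
    opposite values."""
    return any(p in c2 and c2[p] != b for p, b in c1.items())

# The cube a & ~b over {a, b, c} denotes exactly two valuations:
assert len(cube_to_set({'a': True, 'b': False}, ['a', 'b', 'c'])) == 2
\end{verbatim}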
\begin{definition}
An \emph{Incompletely specified Generalized Mealy Machine} (IGMM) is
a tuple $M=\tup{I, O, Q, q_{\mathit{init}}, \delta, \lambda}$, where
$I$ is a set of \emph{input propositions},
$O$ a set of \emph{output propositions},
$Q$ a finite set of \emph{states}, $q_{\mathit{init}}$ an \emph{initial state},
$\delta \colon \left(Q, \ensuremath{\mathbb{B}}^{I}\right) \rightarrow Q$
a partial \emph{transition function}, and
$\lambda \colon \left(Q, \ensuremath{\mathbb{B}}^{I}\right) \rightarrow 2^{\ensuremath{\mathbb{B}}^{O}}\setminus \{\emptyset\}$
an \emph{output function} such that $\lambda(q,i)=\top$ when $\delta(q,i)$ is undefined.
If $\delta$ is a total function, we then say that $M$ is \emph{input-complete}.
\end{definition}
It is worth noting that the transition function is input-deterministic
but not necessarily complete, as $\delta(q,i)$ may be
undefined. Furthermore, the output function may return several valuations
for a given input valuation and state. This is not an unexpected
definition from a reactive synthesis point of view, as a given
specification may yield multiple compatible output valuations for a
given input.
\begin{definition}[Semantics of IGMMs]
Let $M=\tup{I, O, Q, q_{\mathit{init}}, \delta, \lambda}$ be an
IGMM. For all $u \in \ensuremath{\mathbb{B}}^{I}$ and $q \in Q$, if $\delta(q, u)$ is defined,
we write that $q \xrightarrow{u / v} \delta(q, u)$ for all
$v \in \lambda(q, u)$. Given two infinite sequences of valuations
$\iota=i_0\cdot i_1\cdot i_2\cdots\in (\ensuremath{\mathbb{B}}^{I})^\omega$ and
$o=o_0\cdot o_1\cdot o_2\cdots\in (\ensuremath{\mathbb{B}}^{O})^\omega$,
$(\iota,o)\models M_q$ if and only if:
\begin{itemize}
\item either there is an infinite sequence of states
$(q_j)_{j \ge 0} \in Q^\omega$ such that $q = q_0$ and
$q_0 \xrightarrow{i_0 / o_0} q_1 \xrightarrow{i_1 / o_1} q_2
\xrightarrow{i_2 / o_2} \cdots$;
\item or there is a finite sequence of states
$(q_j)_{0 \le j \le k} \in Q^{k+1}$ such that $q = q_0$, $\delta(q_k, i_k)$ is
undefined, and
$q_0 \xrightarrow{i_0 / o_0} q_1 \xrightarrow{i_1 / o_1} \cdots q_k$.
\end{itemize}
We then say that starting from state $q$, $M$ produces output $o$
given the input $\iota$.
\end{definition}
Note that if $\delta(q_k,i_k)$ is undefined, the machine is allowed to
produce an arbitrary output from then on. Furthermore, given an input
word $\iota$, there may be several output words $o$ such that
$(\iota,o) \models M_q$ (in accordance with a lax specification).
As an example, consider the input sequence
$\iota = ab\cdot \bar a\bar b\cdot ab\cdot \bar a\bar b\cdots$
applied to the initial state $0$ of the machine shown in Figure~\ref{autEx1}.
We have $(\iota,o)\models M_0$ if and only if for all $j \in \mathbb{N}$,
$o_{2j}\in x$ and $o_{2j+1}\in \bar y$, where $x$ and $\bar y$ are
cubes that respectively represent $\{xy,x\bar y\}$ and $\{x\bar y,\bar x\bar y\}$.
\begin{definition}[Variation and specialization]
Let $M=\tup{I, O, Q, q_{\mathit{init}}, \delta, \lambda}$ and
$M'=\tup{I, O, Q', q'_{\mathit{init}}, \delta', \lambda'}$ be two IGMMs.
Given two states $q \in Q$, $q' \in Q'$, we say that $q'$ is a:
\begin{itemize}[noitemsep,topsep=2pt]
\item \emph{variation} of $q$ if $\forall \iota \in
({\ensuremath{\mathbb{B}}^I})^\omega ,
\set{o \mid (\iota,o) \models M'_{q'}} \cap \set{o \mid (\iota,o)\models M_{q}}
\neq \emptyset$;
\item \emph{specialization} of $q$ if $\forall \iota \in
({\ensuremath{\mathbb{B}}^I})^\omega ,
\set{o \mid (\iota,o) \models M'_{q'}} \subseteq \set{o \mid (\iota,o)\models M_{q}}$.
\end{itemize}
We say that $M'$ is a variation (resp.\ specialization) of $M$
if $q_{\mathit{init}}'$ is a variation (resp.\ specialization)
of $q_{\mathit{init}}$.
\end{definition}
Intuitively, all the input-output pairs accepted by
a specialization $q'$ in $M'$ are also accepted by $q$ in $M$.
Therefore, if all the outputs produced by state $q$ in $M$
comply with the original specification, then so do the outputs produced
by state $q'$ in $M'$.
In order for two states to be variations of one another,
they must be able to agree on a common output behaviour for all possible inputs.
We write $q'\mathbin{\approx}{} q$ (resp. $q' \mathbin{\sqsubseteq}{} q$) if $q'$ is a
variation (resp. specialization) of $q$. Note that $\mathbin{\approx}{}$ is a
symmetric but non-transitive relation, while $\mathbin{\sqsubseteq}$ is transitive
($\mathbin{\sqsubseteq}$ is a preorder).
\medskip
Our goal in this article is to solve the following problems:
\begin{description}[noitemsep,topsep=2pt]
\item[Reducing an IGMM $M$:] finding a specialization of $M$ having at most
the same number of states, preferably fewer.
\item[Minimizing an IGMM $M$:] finding a specialization of $M$ having the
least number of states.
\end{description}
Consider again the IGMM shown in Figure~\ref{autEx1}.
The IGMM shown in Figure~\ref{autEx1_Min} is a specialization of this machine
and has a minimal number of states.
\subsubsection*{Generalizing inputs and outputs.}
\label{secCompToReg}
Note that the output function of an IGMM returns a set of valuations,
but it can be rewritten equivalently to output a set of cubes
as $\lambda \colon \left(Q, \ensuremath{\mathbb{B}}^I\right) \rightarrow 2^{\ensuremath{\mathbb{K}}^{O}}$.
As an example, consider $I = \{a\}$ and $O = \{x, y, z\}$; the set of valuations
$v = \{\bar{x}yz, \bar{x}y\bar{z}, x\bar{y}z, x\bar{y}\bar{z}\}\in 2^{\ensuremath{\mathbb{B}}^O}$ is equivalent to the
set of cubes $v_c = \{\bar{x}y, x\bar{y}\}\in 2^{\ensuremath{\mathbb{K}}^O}$.
In the literature, a Mealy machine commonly maps a single input
valuation to a single output valuation: its output function is therefore of the
form $\lambda \colon \left(Q, \ensuremath{\mathbb{B}}^I\right) \rightarrow \ensuremath{\mathbb{B}}^{O}$. The
tool \textsc{MeMin}~\cite{abel.15.iccad} uses a slight generalization by allowing a
single output cube, hence $\lambda \colon \left(Q, \ensuremath{\mathbb{B}}^{I}\right) \rightarrow \ensuremath{\mathbb{K}}^{O}$. Thus,
unlike our model, neither the common definition nor the tool \textsc{MeMin} can
feature an edge outputting the aforementioned set $v$ (or equivalently $v_c$),
as it cannot be represented by a single cube or valuation.
Our model is therefore \emph{strictly more expressive}, although
it comes at a price for minimization.
Note that, in practice, edges with identical source state,
output valuations, and destination state can be merged into a single transition
labeled by the set of allowed inputs. Both our tool and \textsc{MeMin} feature
this optimization. While it does not change the
expressiveness of the underlying model, this more succinct representation
of the machines does improve the efficiency of the algorithms
detailed in the next section, as they depend on the total number of transitions.
\section{SAT-Based Minimization of IGMM}
\label{secMin}
This section builds upon the approach presented
by~\citet{abel.15.iccad} for machines with outputs constrained to
cubes, and generalizes it to the IGMM model (with more
expressive outputs).
\subsection{General approach}
\begin{definition}
Given an IGMM $M=\tup{I, O, Q, q_{\mathit{init}}, \delta, \lambda}$,
\emph{a variation class} $C \subseteq Q$ is a set of states such that all
elements are pairwise variations, i.e. $\forall q,q' \in C$, $q'\mathbin{\approx}{} q$.
For any input $i \in \ensuremath{\mathbb{B}}^I$, we define:
\begin{itemize}[noitemsep,topsep=2pt]
\item the \emph{successor function}
$\ensuremath{\mathrm{Succ}}(C, i) =
\bigcup_{q\in C} \set{\delta(q,i) \mid
\delta(q,i) \text{~is defined}} $;
\item the \emph{output function}
$\ensuremath{\mathrm{Out}}(C, i) = \bigcap_{q\in C} \lambda(q,i)$.
\end{itemize}
\end{definition}
Intuitively, the successor function returns the set of all states
reachable from a given class under a given input symbol.
The output function returns the set of all shared output valuations
between the various states in the class.
In the remainder of this section we will call a variation class simply a class,
as there is no ambiguity. We consider three important notions concerning
classes, or rather sets thereof, of the form $S = \set{C_0,\ldots,C_{n-1}}$.
\begin{definition}[Cover condition]\label{cond_cover}
We say that a set of classes $S$ \emph{covers} the machine $M$ if every
state of $M$ appears in at least one of the classes.
\end{definition}
\begin{definition}[Closure condition]\label{cond_closure}
We say that a set of classes $S$ is \emph{closed} if for all
$C_j\in S$ and for all inputs $i \in \ensuremath{\mathbb{B}}^I$ there exists a $C_k\in S$
such that $\ensuremath{\mathrm{Succ}}(C_j, i) \subseteq C_k$.
\end{definition}
\begin{definition}[Nonemptiness condition]\label{cond_nonempt}
We say that a class $C$ has a \break\emph{nonempty output} if $\ensuremath{\mathrm{Out}}(C,i)\ne\emptyset$ for
all inputs $i\in \ensuremath{\mathbb{B}}^I$.
\end{definition}
The astute reader might have observed that the nonempty output condition
is strictly stronger than the condition that all elements in a class
have to be pairwise variations of one another.
This distinction is important, however, as it gives rise to a
different set of clauses in the SAT problem, reducing the total runtime.
Combining these conditions yields the main theorem for this approach.
This extends a similar theorem by~\citet[][Thm~1]{abel.15.iccad} by
adding the nonemptiness condition to support the more expressive IGMM
model.
\begin{theorem}
Let $M=\tup{I, O, Q, q_{\mathit{init}}, \delta, \lambda}$ be an IGMM and
$S = \set{C_0,\ldots,C_{n-1}}$ be a \emph{minimal} (in terms of size)
set of classes such that \emph{(1)} $S$ is \emph{closed},
\emph{(2)} $S$ \emph{covers} every state of the machine $M$ and
\emph{(3)} each of the classes $C_j$ has a \emph{nonempty output}.
Then the IGMM $M'=\tup{I, O, S, q_{\mathit{init}}', \delta', \lambda'}$ where:
\begin{itemize}[noitemsep,topsep=2pt]
\item $q_{\mathit{init}}' = C$ for some $C \in S$ such that
$q_{\mathit{init}} \in C$;
\item $\delta'(C_j, i) = \begin{cases}C_k \text{ for some $k$ s.t. } \ensuremath{\mathrm{Succ}}(C_j, i) \subseteq C_k &\text{if~} \ensuremath{\mathrm{Succ}}(C_j, i) \neq \emptyset \\
\text{undefined} &\text{otherwise;}
\end{cases}$
\item $\lambda'(C_j, i) = \begin{cases} \ensuremath{\mathrm{Out}}(C_j, i) &\text{if~} \ensuremath{\mathrm{Succ}}(C_j, i) \neq \emptyset\\
\top &\text{otherwise.}
\end{cases}$
\end{itemize}
is a \emph{specialization} of minimal size (in terms of states) of $M$.
\label{theoremSATMIN}
\end{theorem}
Figure~\ref{autExSatBase} illustrates this construction on an example
with a single input proposition $I=\set{a}$ (hence two input
valuations $\ensuremath{\mathbb{B}}^I = \set{a, \bar{a}}$), and three output propositions
$O=\set{x, y, z}$. To simplify notations, elements of $2^{\ensuremath{\mathbb{B}}^O}$ are
represented as Boolean functions (happening to be cubes in this
example) rather than sets.
States have been colored to
indicate their possible membership in one of the three variation classes.
The SAT solver needs to associate each state to at least one
of them in order to satisfy the cover condition~\eqref{cond_cover},
while simultaneously respecting conditions~\eqref{cond_closure}--\eqref{cond_nonempt}.
A possible choice would be:
\textcolor{violet}{$C_0 = \{0\}$},
\textcolor{orange}{$C_1 = \{1, 3, 6\}$}, and
\textcolor{green}{$C_2 = \{2, 4, 5\}$}.
For this choice, the \textit{\textcolor{violet}{violet}} class \textcolor{violet}{$C_0$}
has only a single state, so the closure condition~\eqref{cond_closure} is trivially satisfied.
All transitions of the states in the \textit{\textcolor{orange}{orange}}
class \textcolor{orange}{$C_1$} go to states in
\textcolor{orange}{$C_1$}, also satisfying the condition. The same
can be said of the \textit{\textcolor{green}{green}} class
\textcolor{green}{$C_2$}.
Finally, we need to check the nonempty output condition~\eqref{cond_nonempt}.
Once again, it is trivially satisfied for the
\textit{\textcolor{violet}{violet}} class \textcolor{violet}{$C_0$}.
For the \textit{\textcolor{orange}{orange}} and \textit{\textcolor{green}{green}} classes,
we need to compute their respective output.
We get $\ensuremath{\mathrm{Out}}(\textcolor{orange}{C_1}, a) = \bar{z}$,
$\ensuremath{\mathrm{Out}}(\textcolor{orange}{C_1}, \bar{a}) = z$,
$\ensuremath{\mathrm{Out}}(\textcolor{green}{C_2}, a) = \bar{z}$ and
$\ensuremath{\mathrm{Out}}(\textcolor{green}{C_2}, \bar{a}) = z$.
None of the output sets is empty, thus condition~\eqref{cond_nonempt}
is satisfied as well.
Note that, since the outgoing transitions of states 4 and 6
are self-loops compatible with all possible output valuations,
another valid choice is:
\textcolor{violet}{$C_0 = \{0, 4, 6\}$},
\textcolor{orange}{$C_1 = \{1, 3, 4, 6\}$}, and
\textcolor{green}{$C_2 = \{2, 4, 5, 6\}$}.
The corresponding specialization, constructed as described in
Theorem~\ref{theoremSATMIN}, is shown in Figure~\ref{autExSatBaseMin}.
Note that this machine is input-complete, so the incompleteness of the
specification only stems from the possible choices in the outputs.
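The checks performed in this example are mechanical. As an illustration, here is a small Python sketch of ours (not the implementation benchmarked in Section~\ref{secBench}), assuming a hypothetical dictionary-based encoding where \texttt{delta} and \texttt{lam} are keyed by (state, input) pairs and \texttt{lam} maps to sets of output valuations:
\begin{verbatim}
# Succ, Out, and the three conditions of this section (sketch).
def succ(C, i, delta):
    return {delta[q, i] for q in C if (q, i) in delta}

def out(C, i, lam, all_outputs):
    res = set(all_outputs)
    for q in C:
        res &= lam.get((q, i), all_outputs)  # undefined edge -> top
    return res

def covers(S, Q):
    return set().union(*S) >= set(Q)

def closed(S, inputs, delta):
    return all(any(succ(C, i, delta) <= D for D in S)
               for C in S for i in inputs)

def nonempty_outputs(S, inputs, lam, all_outputs):
    return all(out(C, i, lam, all_outputs)
               for C in S for i in inputs)
\end{verbatim}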
\begin{figure}[t]
\begin{subfigure}[t]{0.59\textwidth}
\centering
\begin{tikzpicture}[mediumautomaton,node distance=1cm and 1.186cm]
\begin{scope}[local bounding box=aut]
\node[initial,lstate, fill = violet!50] (v0) {$0$};
\node[lstate,above=of v0, fill = orange!50] (v1) {$1$};
\node[lstate,right=of v0, fill = green!50] (v2) {$2$};
\node[lstate,right=of v1, fill = orange!50] (v3) {$3$};
\node[lstate,right=of v2, fill = green!50] (v5) {$5$};
\node[lstate,right=of v5, fill = orange!50] (v4) {$4$};
\fill[fill=green!50] (v4.center) -- (v4.east) arc (0:120:2.99mm) -- cycle;
\fill[fill=violet!50] (v4.center) -- (v4.east) arc (0:-120:2.99mm) -- cycle;
\node[lstate,fill=none] at (v4) {$4$};
\node[lstate,right=of v3, fill = orange!50] (v6) {};
\fill[fill=green!50] (v6.center) -- (v6.east) arc (0:120:2.99mm) -- cycle;
\fill[fill=violet!50] (v6.center) -- (v6.east) arc (0:-120:2.99mm) -- cycle;
\node[lstate,fill=none] at (v6) {$6$};
\end{scope}
\path[->]
(v0) edge node[left] {$a/{\bar{z}}$} (v1)
(v0) edge node[above] {$\bar{a}/\bar{x}\bar{y}\bar{z}$} (v2)
(v1) edge[loop left] node {$a/{\bar{z}}$} (v1)
(v1) edge[bend left=10, above] node {$\bar{a}/{z}$} (v3)
(v2) edge[bend right=20] node[below] {$a/\top$} (v4)
(v2) edge[above] node {$\bar{a}/{z}$} (v5)
(v3) edge[bend left=10, below] node {$a/{\bar{z}}$} (v1)
(v3) edge[above] node {$\bar{a}/\top$} (v6)
(v5) edge[above] node {$\bar{a}/\top$} (v4)
(v4) edge[loop above] node[align=center] {$a/\top$\\$\bar{a}/\top$} (v4)
(v5) edge[loop above] node {$a/{z}$} (v5)
(v6) edge[loop right] node[align=center] {$a/\top$\\$\bar{a}/\top$} (v6)
;
\end{tikzpicture}
\subcaption{Original IGMM $M$}
\label{autExSatBase}
\end{subfigure}
\begin{subfigure}[t]{0.4\textwidth}
\centering
\begin{tikzpicture}[mediumautomaton,node distance=1.cm and 1.4cm]
\begin{scope}[local bounding box=aut]
\node[initial,lstate, fill = violet!50] (v0) {$0$};
\node[lstate,above= of v0, fill = orange!50] (v1) {$1$};
\node[lstate,right= of v0, fill = green!50] (v2) {$2$};
\end{scope}
\path[->]
(v0) edge[above] node {$\bar{a}/{\bar{x}\bar{y}\bar{z}}$} (v2)
(v2) edge[loop above] node[align=center] {$a/{z}$\\$\bar{a}/{z}$} (v2)
(v0) edge[left] node {$a/{\bar{z}}$} (v1)
(v1) edge[loop right] node[align=center] {$a/{\bar{z}}$\\$\bar{a}/{z}$} (v1)
(v2) edge[bend left=35,transparent] node[below] {$a/\top$} (v0)
;
\end{tikzpicture}
\subcaption{Minimal specialization of $M$}
\label{autExSatBaseMin}
\end{subfigure}
\caption{Minimization example}
\label{autExSatBaseGen}
\end{figure}
\subsection{Proposed SAT Encoding}
We want to design an algorithm that finds a minimal specialization of a given
IGMM $M$. To do so, we will use the following approach, starting from $n = 1$:
\begin{itemize}[noitemsep,topsep=2pt]
\item Posit that there are $n$ classes, hence
$n$ states in the candidate minimal machine.
\item Design SAT clauses ensuring cover, closure and nonempty outputs.
\item Check if the resulting SAT problem is satisfiable.
\item If so, construct the minimal machine described
in Theorem~\ref{theoremSATMIN}.
\item If not, increment $n$ by one and apply the whole process again,
unless $n = \left|Q\right| - 1$, in which case the original
machine is proven to be already minimal.
\end{itemize}
\subsubsection*{Encoding the cover and closure conditions.}
In order to guarantee that the set of classes
$S = \set{C_0, \ldots, C_{n-1}}$ satisfies both the cover and closure conditions
and that each class $C_j$ is a variation class,
we need two types of literals:
\begin{itemize}[noitemsep,topsep=2pt]
\item $s_{q,j}$ should be true if and only if
state $q$ belongs to the class $C_j$;
\item $z_{i,k,j}$ can only be true if
$\ensuremath{\mathrm{Succ}}(C_k, i) \subseteq C_j$, for $i \in \ensuremath{\mathbb{B}}^I$.
\end{itemize}
The cover condition, encoded by Equation~\eqref{eq_SatPart}, guarantees that each state belongs to at least one class.\\
\begin{minipage}[t]{0.49\linewidth}
\begin{equation}
\bigwedge_{q\in Q}\;\bigvee_{0\le j< n} s_{q,j}
\label{eq_SatPart}
\end{equation}
\end{minipage}
\begin{minipage}[t]{0.5\linewidth}
\begin{equation}
\bigwedge_{0\le j< n}\;\bigwedge_{\substack{q,q'\in Q\\q\mathbin{\napprox} q'}} \overline{s_{q,j}} \lor \overline{s_{q',j}}
\label{eq_SatVar}
\end{equation}
\end{minipage}
Equation~\eqref{eq_SatVar} ensures that each class is a variation class:
two states $q$ and $q'$ that are not variations of each other cannot
belong to the same class.
The closure condition must ensure that for every class $C_k$ and every
input $i \in \ensuremath{\mathbb{B}}^I$, there exists at least one class that contains all the
successor states: $\forall k, \forall i, \exists j,\ \ensuremath{\mathrm{Succ}}(C_k, i) \subseteq C_j$.
This is expressed by the constraints~\eqref{eq_SatClos1} and~\eqref{eq_SatClos2}.
\begin{minipage}[t]{0.39\linewidth}
\begin{equation}
\bigwedge_{0\le k< n}\,\bigwedge_{\substack{i \in \ensuremath{\mathbb{B}}^I \\ \phantom{q' = \delta(q,i)}}}\,\bigvee_{0\le j < n} z_{i,k,j}
\label{eq_SatClos1}
\end{equation}
\end{minipage}
\begin{minipage}[t]{0.6\linewidth}
\begin{equation}
\bigwedge_{0\le j, k< n}\;\bigwedge_{\substack{q, q' \in Q, i \in \ensuremath{\mathbb{B}}^I \\ q' = \delta(q,i)}} (z_{i,k,j} \land s_{q,k}) \rightarrow s_{q',j}
\label{eq_SatClos2}
\end{equation}
\end{minipage}
The constraint~\eqref{eq_SatClos1} ensures that at least one $C_j$ contains
$\ensuremath{\mathrm{Succ}}(C_k, i)$, while~\eqref{eq_SatClos2} ensures this mapping of classes
matches the transitions of $M$.
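For concreteness, these clauses can be emitted as follows (a sketch of ours on top of the python-sat library, not the encoder benchmarked in Section~\ref{secBench}; \texttt{not\_variation} is the test $q \mathbin{\napprox} q'$, computed from the variation matrix described later):
\begin{verbatim}
# Clause generation for conditions (1)-(4) (sketch).
from pysat.formula import IDPool

def cover_closure_clauses(Q, inputs, delta, not_variation, n):
    pool, clauses = IDPool(), []
    s = lambda q, j: pool.id(('s', q, j))
    z = lambda i, k, j: pool.id(('z', i, k, j))
    for q in Q:                               # (1) cover
        clauses.append([s(q, j) for j in range(n)])
    for j in range(n):                        # (2) variation classes
        for q in Q:
            for qp in Q:
                if q < qp and not_variation(q, qp):
                    clauses.append([-s(q, j), -s(qp, j)])
    for k in range(n):                        # (3) successor class exists
        for i in inputs:
            clauses.append([z(i, k, j) for j in range(n)])
    for (q, i), qp in delta.items():          # (4) (z and s) imply s'
        for k in range(n):
            for j in range(n):
                clauses.append([-z(i, k, j), -s(q, k), s(qp, j)])
    return clauses, pool
\end{verbatim}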
\subsubsection*{Encoding the nonempty output condition.}
\label{encNonEmpty}
Each class in $S$ being a variation class is necessary
but not sufficient to satisfy the nonempty output condition.
We indeed want to guarantee that for any input $i$,
all states in a given class can agree on at least one common output valuation.
However, it is possible to have three or more states (like
\begin{tikzpicture}[smallautomaton,baseline=(s.base)]
\node[state] (s) {\small $0$};
\path[->] (s) edge[loop right] node[inner sep=0pt](x){$\,a/\{xy,x\bar y\}$} (s);
\path[use as bounding box] (s.north west) rectangle(x.south east);
\end{tikzpicture},
\begin{tikzpicture}[smallautomaton,baseline=(s.base)]
\node[state] (s) {\small $1$};
\path[->] (s) edge[loop right] node[inner sep=0pt](x){$\,a/\{\bar xy,x\bar y\}$} (s);
\path[use as bounding box] (s.north west) rectangle(x.south east);
\end{tikzpicture}, and
\begin{tikzpicture}[smallautomaton,baseline=(s.base)]
\node[state] (s) {\small $2$};
\path[->] (s) edge[loop right] node[inner sep=0pt](x){$\,a/\{xy,\bar xy\}$} (s);
\path[use as bounding box] (s.north west) rectangle(x.south east);
\end{tikzpicture})
that are all variations of one another, but still cannot
agree on a common output.
This situation cannot occur in \textsc{MeMin} since their model uses
\emph{cubes} as outputs rather than arbitrary sets of valuations
as in our model.
A useful property of cubes is that if the
pairwise intersections of all cubes in a set are nonempty, then
the intersection of all cubes in the set is necessarily nonempty as well.
Since \emph{cubes} are not expressive enough for our model,
we generalize the output
as discussed earlier in Section~\ref{secCompToReg}:
we represent the arbitrary set of valuations produced by the output
function $\lambda$ as a set of cubes whose disjunction yields the original set.
For $q \in Q$ and $i \in \ensuremath{\mathbb{B}}^I$, we partition the set of valuations
$\lambda(q, i)$ into cubes using the algorithm of~\citet{minato.92.sasimi},
and denote the obtained set of cubes by $\mathrm{CS}(\lambda(q, i))$.
Our approach for ensuring that there exists a common output is to
search for disjoint cubes and exclude them from the possible outputs
by selectively deactivating them if necessary; an active cube is a set
in which we will be looking for an output valuation that the whole
class can agree on. To express this, we need two new types of
literals:
\begin{itemize}[noitemsep,topsep=2pt]
\item $a_{c,q,i}$ should be true iff
the particular instance of the cube $c\in \mathrm{CS}(\lambda(q,i))$ used
in the output of state $q$ when reading $i$ is \emph{active};
\item $\mathit{sc}_{q,q'}$ should be true if
there exists $C_j \in S$ such that $q\in C_j$ and $q'\in C_j$.
\end{itemize}
The selective deactivation of a cube can then be expressed by the following:
\begin{minipage}[t]{.46\textwidth}
\begin{equation}
\bigwedge_{\substack{q, q' \in Q \\0 \le j < n}} (s_{q, j} \land s_{q', j})
\rightarrow \mathit{sc}_{q,q'}
\label{eq_SatSameClass}
\end{equation}
\end{minipage}
\hfill
\begin{minipage}[t]{.46\textwidth}
\begin{equation}
\bigwedge_{\substack{q \in Q,\, i \in \ensuremath{\mathbb{B}}^I \\\delta(q, i) \text{~is defined} }}\!\!\bigvee_{{c}\in \mathrm{CS}(\lambda(q, i))} a_{c,q,i}\\
\label{eq_SatNEPart}
\end{equation}
\end{minipage}
\begin{equation}
\bigwedge_{\substack{q, q' \in Q,\, i \in \ensuremath{\mathbb{B}}^I \\
\delta(q, i) \text{~is defined}\\\delta(q', i) \text{~is defined} }}
\;
\bigwedge_{\substack{c\in \mathrm{CS}(\lambda(q, i)) \\
c'\in \mathrm{CS}(\lambda(q', i)) \\
c \cap c' = \emptyset}}
(a_{c,q,i} \land a_{c',q',i}) \rightarrow \overline{\mathit{sc}_{q,q'}}.
\label{eq_SatDeact}
\end{equation}
Constraint~\eqref{eq_SatSameClass} ensures that $\mathit{sc}_{q,q'}$ is true if there exists
a class containing both $q$ and $q'$, in accordance with the expected definition.
Constraint~\eqref{eq_SatNEPart} guarantees that at least one of the cubes
in the output $\lambda(q, i)$ is active,
causing the restricted output to be nonempty.
Constraint~\eqref{eq_SatDeact} expresses selective deactivation and only needs to be
added for a given $q, q' \in Q$ and $i \in \ensuremath{\mathbb{B}}^I$ if
$\delta(q, i)$ and $\delta(q', i)$ are properly defined.
This formula guarantees that if there exists a class to which $q$ and $q'$
both belong (i.e., $\mathit{sc}_{q,q'}$ is true) but there also exist disjoint cubes
in the partition of their respective outputs, then
at least one of these cubes is deactivated:
only cubes that intersect can both be active.
Thus, this constraint guarantees the nonempty output condition.
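The same style of sketch extends to constraints \eqref{eq_SatSameClass}--\eqref{eq_SatDeact} (again illustrative, not the benchmarked encoder): \texttt{CS} maps each defined $(q,i)$ pair to the list of cubes partitioning $\lambda(q,i)$, \texttt{cubes\_disjoint} is the simple test shown in Section~\ref{secDef}, and \texttt{pool} is the \texttt{IDPool} of the previous snippet.
\begin{verbatim}
# Clause generation for conditions (5)-(7) (sketch; cubes are
# identified by their position in CS[q, i]).
def nonempty_clauses(Q, inputs, delta, CS, pool, n):
    clauses = []
    s  = lambda q, j: pool.id(('s', q, j))
    sc = lambda q, qp: pool.id(('sc', min(q, qp), max(q, qp)))
    a  = lambda c, q, i: pool.id(('a', c, q, i))
    for q in Q:                               # (5) same-class literals
        for qp in Q:
            if q < qp:
                for j in range(n):
                    clauses.append([-s(q, j), -s(qp, j), sc(q, qp)])
    for (q, i) in delta:                      # (6) one cube stays active
        clauses.append([a(ci, q, i)
                        for ci in range(len(CS[q, i]))])
    for (q, i) in delta:                      # (7) deactivate conflicts
        for (qp, ip) in delta:
            if q < qp and i == ip:
                for ci, c in enumerate(CS[q, i]):
                    for cj, cp in enumerate(CS[qp, i]):
                        if cubes_disjoint(c, cp):
                            clauses.append([-a(ci, q, i),
                                            -a(cj, qp, i),
                                            -sc(q, qp)])
    return clauses
\end{verbatim}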
Since encoding an output set may require a number of cubes exponential
in $|O|$, the above encoding uses
$\mathrm{O}(|Q|(2^{|I|+|O|}+|Q|)+n^2 \cdot 2^{|I|})$ variables as well
as
$\mathrm{O}(|Q|^2(n+2^{2|O|})+n^2 \cdot 2^{|I|}+|\delta|(2^{|O|}+n^2))$
clauses. We use additional optimizations to limit the number of
clauses and make the algorithm practical despite this frightening
theoretical worst case. In particular, the CEGAR approach of
Section~\ref{sec:cegar} strives to avoid introducing constraints
\eqref{eq_SatSameClass}--\eqref{eq_SatDeact} altogether.
\subsection{Adjustment of Prior Optimizations}
Constructing the SAT problem iteratively starting from $n = 1$
would be grossly inefficient.
We can instead notice that two states that are not variations of each
other can never be in the same class.
Thus, assuming we can find $k$ states that are not pairwise variations
of one another, we can infer that we need at least as many classes
as there are states in this set, providing a lower bound for $n$.
This idea was first introduced in~\citet{abel.15.iccad};
however, performing a more careful inspection of the constraints
with respect to this ``partial solution'' allows us
to reduce the number of constraints and literals needed.
The nonemptiness condition involves the creation of many literals and clauses
and necessitates an expensive preprocessing step to decompose the
arbitrary output sets returned by the output function
($\lambda \colon \left(Q, \ensuremath{\mathbb{B}}^{I}\right) \rightarrow 2^{\ensuremath{\mathbb{B}}^{O}}\setminus \{\emptyset\}$)
into disjunctions of cubes ($\lambda \colon \left(Q, \ensuremath{\mathbb{B}}^{I}\right) \rightarrow 2^{\ensuremath{\mathbb{K}}^{O}}\setminus \{\emptyset\}$).
We therefore avoid adding unnecessary nonempty output clauses in a
counter-example guided fashion: violations of these conditions can
easily be detected before constructing the minimized machine, and if
one is detected, a small set of constraints excluding this particular
violation is added to the SAT problem.
In many cases, this optimization greatly reduces the number of
literals and constraints needed, to the extent that we can often
avoid their use altogether.
From now on, we consider an IGMM with $N$ states
$Q=\set{q_0, q_1, \ldots, q_{N-1}}$.
\subsubsection*{Variation matrix.}
We first need to determine which states are not pairwise
variations of one another in order to extract a partial solution
and perform simplifications on the constraints.
We will compute a square matrix of size $N\times N$ called $\mathrm{mat}$
such that $\mathrm{mat}[k][\ell] = 1$ if and only if $q_k\mathbin{\napprox} q_\ell$
in the following fashion:
\begin{enumerate}
\item Initialize all entries of $\mathrm{mat}$ to $0$.
\item Iterate over all pairs $\tup{k, \ell}$ with $0 \le k < \ell < N$.
If the entry $\mathrm{mat}[k][\ell]$ is $0$, check whether there exists
$i \in \ensuremath{\mathbb{B}}^I$
such that $\lambda(q_k, i) \cap \lambda(q_\ell, i) = \emptyset$. If so, set
$\mathrm{mat}[k][\ell] \gets 1$.
\item For all pairs $\tup{k, \ell}$ whose associated value $\mathrm{mat}[k][\ell]$
changed from $0$ to $1$, also set to $1$ all predecessor pairs
$\tup{m,n}$ with $m < n$ under the same input, that is, all pairs such that
$\exists i \in \ensuremath{\mathbb{B}}^I$ with $\delta(q_m, i) = q_k$ and
$\delta(q_n, i) = q_\ell$. Note that we may need to propagate these changes
to the predecessors of $\tup{m, n}$ in turn.
\end{enumerate}
As ``being a variation of'' is a symmetric, reflexive relation, we only
compute the elements above the main diagonal of the matrix.
The intuition behind this algorithm is that two states $q$ and $q'$ are
not variations of one another if either:
\begin{itemize}
\item There exists an input symbol for which the output sets are disjoint.
\item There exists a pair of states which are not variations of one another
and that can be reached from $q$ and $q'$ under the same input
sequence.
\end{itemize}
The complexity of this algorithm is
$\mathrm{O}(|Q|^2 \cdot 2^{|I|})$
if we assume that the disjointness of the output sets can be checked in
constant time; see~\citet{abel.15.iccad}.
This assumption does not hold in general: testing disjointness
for cubes has a complexity linear in the number of output propositions.
On the other hand, testing disjointness for generalized Mealy machines
that use arbitrary sets of valuations has a complexity exponential in the
number of output propositions. This increased complexity is however
counterbalanced by the succinctness that the use of arbitrary sets allows.
As an example, given $2m$ output propositions $o_0, \ldots, o_{2m-1}$,
consider the set of output valuations expressed as a disjunction of cubes
$\bigvee_{0 \le k < m} o_{2k}\,\overline{o_{2k+1}} \lor
\overline{o_{2k}}\, o_{2k+1}$. Exponentially many \emph{disjoint} cubes are
needed to represent this set. Thus, a non-deterministic Mealy machine
labeled by output cubes will incur an exponential number of computations
performed in linear time, whereas a generalized Mealy machine
will only perform a single test with
exponential runtime.
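This matrix computation amounts to a backward propagation to a fixpoint; a possible Python rendering (a sketch of ours, where the hypothetical helper \texttt{outputs\_disjoint(k, l, i)} hides the disjointness test whose cost was just discussed) is:
\begin{verbatim}
# Variation-matrix fixpoint (states are numbered 0..N-1; sketch).
from collections import defaultdict, deque

def variation_matrix(N, inputs, delta, outputs_disjoint):
    mat = [[False] * N for _ in range(N)]
    preds = defaultdict(set)   # (k, l) -> predecessor pairs (m, n)
    for i in inputs:
        for m in range(N):
            for n in range(m + 1, N):
                if (m, i) in delta and (n, i) in delta:
                    k, l = sorted((delta[m, i], delta[n, i]))
                    if k != l:
                        preds[k, l].add((m, n))
    todo = deque()
    for k in range(N):         # step 2: direct output conflicts
        for l in range(k + 1, N):
            if any(outputs_disjoint(k, l, i) for i in inputs):
                mat[k][l] = True
                todo.append((k, l))
    while todo:                # step 3: propagate to predecessors
        k, l = todo.popleft()
        for m, n in preds[k, l]:
            if not mat[m][n]:
                mat[m][n] = True
                todo.append((m, n))
    return mat
\end{verbatim}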
\subsubsection*{Computing a partial solution.}
The partial solution corresponds to a set of states such that none of them is
a variation of any other state in the set.
Thus, none of these states can belong to the same (variation) class.
The size of this set is therefore a lower bound for the number of states in
the minimal machine.
Finding the largest partial solution is an NP-hard problem; we therefore
use the greedy heuristic described in~\citet{abel.15.iccad}.
For each state $q$ of $M$, we count the number of states $q'$ such that
$q$ is not a variation of $q'$; call this number $\mathit{nvc}_q$.
We then successively add to the partial solution the states
that have the highest $\mathit{nvc}_q$ but are not variations
of any state already inserted.
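A sketch of this heuristic (ours), reusing the matrix computed above:
\begin{verbatim}
# Greedy extraction of a partial solution (sketch).
def partial_solution(N, mat):
    nv = lambda a, b: a != b and mat[min(a, b)][max(a, b)]
    nvc = [sum(nv(q, qp) for qp in range(N)) for q in range(N)]
    chosen = []
    for q in sorted(range(N), key=lambda q: -nvc[q]):
        if all(nv(q, qp) for qp in chosen):
            chosen.append(q)
    return chosen
\end{verbatim}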
\subsubsection*{CEGAR approach to ensure the nonempty output condition.}\label{sec:cegar}
Assuming a solution satisfying the cover and closure constraints has already
been found, we then need to check if
said solution satisfies the nonempty output condition.
If this is indeed the case, we can then construct and return a minimal machine.
If the condition is not satisfied, we look for one or
more combinations of classes and input symbols such that
$\ensuremath{\mathrm{Out}}(C_k, i) = \emptyset$.
We then add the constraints described in Section~\ref{encNonEmpty}
for the states in $C_k$ and the input symbol $i$, and for these states
and input symbols only. Then we check whether the problem is still satisfiable.
If it is not, then we need to increase the number of classes to find
a valid solution. If it is, the solution either respects
condition~\eqref{cond_nonempt} and we can return a minimal machine, or
it does not and the process of selectively adding constraints is
repeated. Either way, this \emph{counter-example guided abstraction
refinement} (CEGAR) scheme ensures termination, as the problem is
either shown to be unsatisfiable or solved through iterative exclusion
of all violations of condition~\eqref{cond_nonempt}.
\subsection{Algorithm}
The optimizations described previously yield Algorithm~\ref{algoSAT1}.
\begin{algorithm}[t]
\KwData{a machine $M=\tup{I,O,Q,q_{\mathit{init}},\delta,\lambda}$}
\KwResult{a minimal specialization $M'$}
\tcc{Computing the variation matrix}
bool[][] $\mathrm{mat}$ $\gets$ isNotVariationOf($M$)\;
\tcc{Looking for a partial solution P}
set $P \gets$ extractPartialSol($\mathrm{mat}$)\;
clauses $\gets$ empty list\;
\tcc{Using the lower bound inferred from P}
\For{$n\gets \left|P\right| \KwTo \left|Q\right|-1$}{
addCoverCondition(clauses, $M$, $P$, $\mathrm{mat}$, $n$)\;
addClosureCondition(clauses, $M$, $P$, $\mathrm{mat}$, $n$)\;
\tcc{Solving the cover and closure conditions}
(sat, solution) $\gets$ satSolver(clauses)\;
\While{sat}{
\If{verifyNonEmpty($M$, solution)}{
\KwRet buildMachine($M$, solution)\;
}
\tcc{Adding the relevant nonemptiness clauses}
addNonemptinessCondition(clauses, $M$, solution)\;
(sat, solution) $\gets$ satSolver(clauses)\;
}
}
\tcc{If no solution has been found, return M}
\KwRet copyMachine($M$)\;
\caption{SAT-based minimization}
\label{algoSAT1}
\end{algorithm}
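An executable rendering of this loop can be sketched on top of the python-sat library (ours, not the actual Spot implementation; \texttt{decode}, \texttt{violations}, \texttt{build\_machine} and \texttt{nonempty\_clauses\_for} are hypothetical helpers corresponding to the previous snippets and to the construction of Theorem~\ref{theoremSATMIN}):
\begin{verbatim}
# Sketch of the minimization loop of Algorithm 1.
from pysat.solvers import Glucose3

def minimize(M, mat):
    not_var = lambda q, qp: mat[min(q, qp)][max(q, qp)]
    P = partial_solution(len(M.Q), mat)      # lower bound for n
    for n in range(len(P), len(M.Q)):
        clauses, pool = cover_closure_clauses(M.Q, M.inputs,
                                              M.delta, not_var, n)
        with Glucose3(bootstrap_with=clauses) as sat:
            while sat.solve():
                sol = decode(sat.get_model(), pool, n)
                bad = violations(M, sol)     # classes with empty Out
                if not bad:
                    return build_machine(M, sol)  # Theorem 1
                for cl in nonempty_clauses_for(M, bad, pool, n):
                    sat.add_clause(cl)       # CEGAR refinement
    return M                                 # M was already minimal
\end{verbatim}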
\subsubsection*{Further optimizations and comparison to \textsc{MeMin}.}
The proposed algorithm relies on the general approach outlined
in~\citet{abel.15.iccad}, as well as on its SAT encoding for the cover
and closure conditions.
We find a partial solution by using a similar heuristic and adapt some
optimizations found in their source code, which are neither detailed
in their paper nor here due to a lack of space.
The main difference lies in the increased expressiveness of the input and output
symbols that causes some significant changes.
In particular, we added the nonemptiness condition to guarantee
correctness, as well as a CEGAR-based implementation to maintain performance.
Other improvements mainly stem from a better usage of the partial solution.
For instance, each state $q$ of the partial solution is associated to
``its own'' class $C_j$. Since the matching literal $s_{q,j}$ is trivially true,
it can be omitted by replacing all its occurrences by true.
States belonging to the partial solution have other peculiarities that
can be leveraged to reduce the number of possible successor classes,
further reducing the amount of literals and clauses needed.
We therefore require fewer literals and clauses, trading
a more complex construction of the SAT problem
for a reduced memory footprint.
The impact of these improvements is detailed in Section~\ref{secBench}.
The Mealy machines described by~\citet{abel.15.iccad} come in two flavors:
one with an explicit initial state, and one where all states are
considered to be possible initial states.
While our approach does require an explicit initial state, this has no
influence on the resulting minimal machine as long as all original
states are reachable.
\section{Bisimulation with Output Assignment}
\label{secBisim}
We introduce in this section another approach tailored to our
primary use case, that is, efficient reduction of control strategies in the
context of reactive synthesis. This technique, based on the $\mathbin{\sqsubseteq}$
specialization relation, yields non-minimal but ``relatively small'' machines at
significantly reduced runtimes.
Given two states $q$ and $q'$ such that $q'\mathbin{\sqsubseteq} q$, one idea is to
restrict the possible outputs of $q$ to match those of $q'$.
Concretely, for all inputs $i\in \ensuremath{\mathbb{B}}^I$, we restrict $\lambda(q,i)$ to
its subset $\lambda(q',i)$; $q$ and $q'$ thus become
bisimilar, allowing us to merge them. In practice, rather than restricting
the output first then reducing bisimilar states to their quotient,
we instead directly build a machine that is minimal
with respect to $\mathbin{\sqsubseteq}$ where all transitions going to $q$
are redirected to $q'$.
Note that if two states $q$ and $q'$ are bisimilar,
then necessarily $q'\mathbin{\sqsubseteq} q$ and $q\mathbin{\sqsubseteq} q'$: therefore, both states will be
merged by our approach. As a consequence, the resulting machine is always
smaller than the bisimulation quotient of the original machine
(as shown in Section~\ref{secBench}).
\subsection{Reducing Machines with $\mathbin{\sqsubseteq}$}
Our algorithm builds upon the following theorem:
\begin{theorem}
\label{theoremSpecReduc}
Let $M = \tup{I, O, Q, q_{\mathit{init}}, \delta, \lambda}$ be
an IGMM, and $r\colon Q\to Q$ be a mapping satisfying
$r(q)\mathbin{\sqsubseteq} q$. Define
$M' = \tup{I, O, Q', q_{\mathit{init}}', \delta', \lambda}$ as
an IGMM where $Q' = r(Q)$,
$q'_{\mathit{init}}=r(q_{\mathit{init}})$, and
$\delta' (q, i) = r(\delta (q, i))$ for all states $q \in Q'$ and all inputs $i$.
Then $M'$ is a specialization of $M$.
\end{theorem}
Intuitively, if a state $q$ is remapped to a
state $q'\mathbin{\sqsubseteq} q$, then the set of words $w$ that can be output for an
input $i$ is simply reduced to a subset of the original output.
The smaller the image $r(Q)$, the more significant the reduction performed on
the machine. Thus, to find a suitable function $r$,
we map each state $q$ to one of the
\emph{minimal elements} of the $\mathbin{\sqsubseteq}$ preorder, also called
the \emph{representative states}.
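With the dictionary-based machine representation used in our previous sketches (hypothetical, not Spot's data structures), applying Theorem~\ref{theoremSpecReduc} is direct:
\begin{verbatim}
# Reduce M through a representative mapping r (sketch).
from collections import namedtuple
Machine = namedtuple('Machine', 'inputs outputs Q q_init delta lam')

def reduce_with(M, r):
    Qp = {r[q] for q in M.Q}
    deltap = {(q, i): r[qp]
              for (q, i), qp in M.delta.items() if q in Qp}
    lamp = {(q, i): v for (q, i), v in M.lam.items() if q in Qp}
    return Machine(M.inputs, M.outputs, Qp, r[M.q_init], deltap, lamp)
\end{verbatim}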
\begin{figure}[bt]
\begin{minipage}{.35\textwidth}
\centering
\begin{tikzpicture}[mediumautomaton, yscale=1.164
]
\node at (0,2) (n46) {$\{4, 6\}$};
\node at (0,1) (n3) {$\{3\}$};
\node at (-1.5,0) (n2) {$\{2\}$};
\node at (-.5,0) (n0) {$\{0\}$};
\node at (.5,0) (n1) {$\{1\}$};
\node at (1.5,0) (n5) {$\{5\}$};
\draw [->] (n46) edge[bend right] (n2);
\draw [->] (n46) edge[bend right=15] (n0);
\draw [->] (n46) -- (n3);
\draw [->] (n46) edge[bend left=15] (n1);
\draw [->] (n46) edge[bend left] (n5);
\draw [->] (n3) edge[bend right=5] (n0);
\draw [->] (n3) edge[bend left=5] (n1);
\draw[dashed,thin,rounded corners=1mm] ($(n2.north west)+(-2mm,2mm)$) rectangle ($(n5.south east)+ (2mm,-2mm)$);
\node[below=-1.5mm] at (n2.south -| n3) {leaves};
\end{tikzpicture}
\caption{Specialization graph of the IGMM of Fig.~\ref{autExSatBase}}
\label{fig:graph_ex}
\end{minipage}
\hfill
\begin{minipage}{.3\textwidth}
\centering
\begin{tabular}{lcl}
$q$ && $\mathllap{r(}q\mathrlap{)}$ \\
\midrule
0 &$\to$ & 0 \\
1 &$\to$ & 1 \\
2 &$\to$ & 2 \\
3 &$\to$ & 1 \\
4 &$\to$ & 1 \\
5 &$\to$ & 5 \\
6 &$\to$ & 1 \\
\end{tabular}
\vspace*{-1mm}
\caption{Chosen representative mapping.\label{fig:autMapBisim}}
\end{minipage}
\hfill
\begin{minipage}{.33\textwidth}
\centering
\begin{tikzpicture}[mediumautomaton,node distance=.8cm and 1cm]
\begin{scope}[local bounding box=aut]
\node[initial,lstate, fill=violet!50] (v0) {$0$};
\node[lstate,above=of v0, fill=orange!50] (v1) {$1$};
\node[lstate,right=of v0, fill=green!50] (v2) {$2$};
\node[lstate,above=of v2, fill=green!50] (v5) {$5$};
\end{scope}
\path[->]
(v0) edge[left] node {${a}/{\bar{z}}$} (v1)
(v0) edge[below] node {${\bar{a}}/{\bar{x}\bar{y}\bar{z}}$} (v2)
(v1) edge[loop left] node {${a}/{\bar{z}}$} (v1)
(v1) edge[loop above] node {${\bar{a}}/{z}$} (v1)
(v2) edge node[above right=-3pt] {${a}/\top$} (v1)
(v2) edge[right] node {${\bar{a}}/{z}$} (v5)
(v5) edge[above] node {${\bar{a}}/\top$} (v1)
(v5) edge[loop above] node {${a}/{z}$} (v5)
;
\end{tikzpicture}
\caption{IGMM obtained by reducing that of Fig.~\ref{autExSatBase}\label{autExBisim}}
\end{minipage}
\end{figure}
\begin{definition}[Specialization graph]
A \emph{specialization graph} of an IGMM
$M = \tup{I, O, Q, q_{\mathit{init}}, \delta, \lambda}$ is the
\emph{condensation graph} of the directed graph representing the
relation $\mathbin{\sqsubseteq}$: the vertices of the specialization graph
are sets that form a partition of $Q$ such that two states $q$ and
$q'$ belong to the same vertex if $q \mathbin{\sqsubseteq} q'$ and $q' \mathbin{\sqsubseteq} q$;
there is an edge
$\{q_1,q_2,...\} \longrightarrow \{q'_1,q'_2,...\}$ if and only if
$q'_i \mathbin{\sqsubseteq} q_j$ for some (or equivalently all) $i,j$. Note that
this graph is necessarily acyclic.
\end{definition}
Fig.~\ref{fig:graph_ex} shows the specialization graph associated with
the machine of Fig.~\ref{autExSatBase}.
\begin{definition}[Representative of a state]
Given two states $q$ and $q'$ of an IGMM, $q'$ is
a \emph{representative} of $q$ if, in the specialization graph of $M$, $q'$
belongs to a leaf that can be reached from the vertex containing $q$.
In other words, $q'$ is a representative of $q$ if $q' \mathbin{\sqsubseteq} q$ and $q'$ is
a minimal element of the $\mathbin{\sqsubseteq}$ preorder.
\end{definition}
Note that any state has at least one representative. In
Fig.~\ref{fig:graph_ex} we see that $0$ represents $0$,
$3$, $4$, and $6$. States $3$, $4$, and $6$ can be represented by
$0$ or $1$.
By picking one state in each leaf, we obtain a set of
representative states that cover all states of the IGMM. We then
apply Theorem~\ref{theoremSpecReduc} to a function $r$ that maps
each state to its representative in this cover. In Fig.~\ref{fig:graph_ex},
all leaves are singletons, so the set
$\{0,1,2,5\}$ contains representatives for all states. Applying
Th.~\ref{theoremSpecReduc} using $r$ from Fig.~\ref{fig:autMapBisim}
yields the machine shown in Fig.~\ref{autExBisim}. Note that while this
machine is smaller than the original, it is still bigger than the
minimal machine of Fig.~\ref{autExSatBaseMin}, as this approach does not
exploit the variation relation $\mathbin{\approx}$.
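The representative mapping can be computed from the condensation graph directly; here is a sketch of ours using the networkx library, where \texttt{specializes(a, b)} decides $a \mathbin{\sqsubseteq} b$ and is provided by the decision procedure of the next subsection:
\begin{verbatim}
# Build the specialization graph and a representative mapping r.
import networkx as nx

def representative_map(Q, specializes):
    G = nx.DiGraph()
    G.add_nodes_from(Q)
    G.add_edges_from((q, qp) for q in Q for qp in Q
                     if q != qp and specializes(qp, q))
    C = nx.condensation(G)          # the specialization graph
    scc_of = C.graph['mapping']     # state -> vertex of C
    leaves = {c for c in C if C.out_degree(c) == 0}
    rep = {c: min(C.nodes[c]['members']) for c in leaves}
    r = {}
    for q in Q:
        reachable = nx.descendants(C, scc_of[q]) | {scc_of[q]}
        r[q] = rep[next(iter(reachable & leaves))]
    return r
\end{verbatim}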
\subsection{Implementing $\mathbin{\sqsubseteq}$}
We now introduce an effective decision procedure for $q \mathbin{\sqsubseteq} q'$.
Note that $\mathbin{\sqsubseteq}$ can be defined recursively like a simulation
relation. Assuming, without loss of generality, that the IGMM is
input-complete, $\mathbin{\sqsubseteq}$ is the coarsest relation satisfying:
\[
q'\mathbin{\sqsubseteq} q \Longrightarrow \forall i\in \ensuremath{\mathbb{B}}^I, \begin{cases}
\lambda(q',i) \subseteq \lambda(q,i) \\
\delta(q',i) \mathbin{\sqsubseteq} \delta(q,i) \\
\end{cases}
\]
As a consequence, $\mathbin{\sqsubseteq}$ can be decided using any technique that is suitable
for computing simulation
relations~\cite{henzinger.95.focs,etessami.00.concur}.
Our implementation relies on a straightforward adaptation of the technique
of signatures described by~\citet[][Sec.~4.2]{babiak.13.spin}: for
each state $q$, we compute its \emph{signature} $\mathrm{sig}(q)$, that
is, a Boolean formula (represented as a BDD) encoding the outgoing
transitions of that state such that
$\mathrm{sig}(q) \Rightarrow \mathrm{sig}(q')$ if and only if $q\mathbin{\sqsubseteq} q'$.
Using these signatures, it becomes easy to build the
\emph{specialization graph} and derive a remapping function $r$.
Note that, even if $\mathbin{\sqsubseteq}$ can be computed like a simulation, we do
not use it to build a bisimulation quotient. The remapping applied in
Th.~\ref{theoremSpecReduc} does not correspond to the quotient of $M$ by
the equivalence relation induced by $\mathbin{\sqsubseteq}$.
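As an illustration of the signature idea (a sketch of ours based on the dd BDD package, not Spot's actual implementation), one refinement step computes a BDD per state and compares them by implication; iterating until the induced preorder stabilizes yields $\mathbin{\sqsubseteq}$:
\begin{verbatim}
# One refinement step of the signature computation (sketch).
# Assumptions: each input i is stored as a Boolean expression
# (string), M.lam_bdd[q, i] is a BDD over the output propositions,
# and cls[q] names a declared BDD variable for the current
# equivalence class of state q.
from dd.autoref import BDD

def signatures(M, cls, bdd):
    sig = {}
    for q in M.Q:
        s = bdd.false
        for (qq, i), qp in M.delta.items():
            if qq == q:
                s |= (bdd.add_expr(i) & M.lam_bdd[q, i]
                      & bdd.var(cls[qp]))
        sig[q] = s
    return sig

def specializes(q, qp, sig, bdd):
    # q is a specialization of q' iff sig(q) implies sig(q').
    return (sig[q] & ~sig[qp]) == bdd.false
\end{verbatim}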
\section{Benchmarks}\label{secBench}
The two approaches described in Sections \ref{secMin} and \ref{secBisim}
have been implemented within Spot~2.10~\cite{duret.16.atva2}, a toolbox
for $\omega$-automata
manipulation, and used in our SyntComp'21
submission~\cite{renkin.21.synt}. The following benchmarks
are based on a development version of Spot\footnote{For instructions
to reproduce, see \url{https://www.lrde.epita.fr/~philipp/forte22/}}
that features efficient variation checks (verifying whether $q\mathbin{\approx} q'$)
thanks to an improved representation of cubes.
We benchmark the two proposed approaches against \textsc{MeMin},
against a simple bisimulation-based approach, and against one another.
The \textsc{MeMin} tool has already been shown~\cite{abel.15.iccad} to
be superior to existing tools like
\textsc{Bica}~\cite{pena.99.cadics},
\textsc{Stamina}~\cite{rho.94.cadics}, and
\textsc{Cosme}~\cite{alberto.13.ocs}; we are not aware of more recent
contenders. For this reason, we only compare our approaches to
\textsc{MeMin}.
In a similar manner to~\citet{abel.15.iccad}, we use the ISM
benchmarks~\cite{kam1994fully} as well as the MCNC benchmark
suite~\cite{yang1991logic}. These benchmarks share a severe drawback: they
only feature very small instances, each of which \textsc{MeMin} solves
in less than a second. We therefore extend the set of
benchmarks with our main use case: Mealy machines corresponding to control
strategies obtained from SYNTCOMP LTL specifications~\cite{jacobs20205th}.
As mentioned in Section~\ref{secCompToReg}, \textsc{MeMin} processes Mealy
machines, encoded using the KISS2 input format~\cite{yang1991logic},
whose outputs are restricted to single cubes. However, the IGMM formalism we promote
allows an arbitrary set of output valuations instead.
This is particularly relevant for the SYNTCOMP benchmark, as the LTL
specifications from which the sample's Mealy machines are derived often fail to fully
specify the output. In order to (1) show the benefits of the generalized
formalism while (2) still allowing comparisons with \textsc{MeMin}, we
prepared two versions of each SYNTCOMP input: the ``full'' version features
arbitrary sets of output valuations that cannot be processed by \textsc{MeMin},
while in the ``cube'' version said sets have been
replaced by the first cube produced by the Minato algorithm~\cite{minato.92.sasimi}
on the original output set. The ISM and MCNC benchmarks, on the other hand,
already use a single output cube in the first place.
\begin{figure}[t]
\begin{minipage}[t]{0.48\textwidth}
\centering
\resizebox {1.\textwidth} {!}
{
\input{tot_time.tex}
}
\caption{Log-log plot of runtimes. The legend $a/b$ stands for
$a$ cases above the diagonal and $b$ below it.}
\label{fig_tottime}
\end{minipage}
\hfill
\begin{minipage}[t]{0.48\linewidth}
\centering
\resizebox {1.\textwidth} {!}
{
\input{n_lit_clause.tex}
}
\caption{Comparison of the number of literals and clauses in the encodings.}
\label{fig_nclauses}
\end{minipage}
\end{figure}
\begin{table}[t]
\begin{center}
\begin{tabular}{lccccc|ccc@{~}ccc|ccc@{~}cc}
& & & & & & & \multicolumn{2}{c}{$\frac{\mathit{size}}{\mathit{orig}}$} & \multicolumn{2}{c}{$\frac{\mathit{size}}{\mathit{min}}$} & & & \multicolumn{2}{c}{$\frac{\mathit{size}}{\mathit{orig}}$} & \multicolumn{2}{c}{$\frac{\mathit{size}}{\mathit{min}}$} \\
& >(1) & >(2) & >(3) & >(4) & & & avg. & md. & avg. & md. & & & avg. & md. & avg. & md. \\
\cmidrule{1-5} \cmidrule{8-11} \cmidrule{14-17}
original & 114 & 304 & 271 & 314 & & & 1.00 & 1.0 & 6.56 & 1.0 & & & 1.00 & 1.00 & 12.23 & 1.77 \\
(1) bisim (full) & \phantom{000} & 249 & 214 & 275 & & & 0.94 & 1.0 & 1.85 & 1.0 & & & 0.88 & 1.00 & \phantom{0}2.72 & 1.50 \\
(2) bisi\rlap{m w/ o.a. (full)} & \phantom{000} & \phantom{00}0 & \phantom{0}68 & \phantom{0}84 & & & 0.83 & 1.0 & 1.55 & 1.0 & & & 0.66 & 0.67 & \phantom{0}2.10 & 1.00 \\
(3) \textsc{MeMin} (minima\rlap{l cube)} & \phantom{000} & \phantom{0}74 & \phantom{00}0 & \phantom{0}77 & & & 0.81 & 1.0 & 1.13 & 1.0 & & & 0.63 & 0.69 & \phantom{0}1.27 & 1.00 \\
(4) SAT (full) & \tikzmark{b1}\phantom{000} & \phantom{00}0 & \phantom{00}0 & \phantom{00}0 & & & 0.77 & 1.0 & 1.00 & 1.0\tikzmark{b2} & & & \tikzmark{b3}0.54 & 0.56 & \phantom{0}1.00 & 1.00\tikzmark{b4} \\
\\[1em]
\end{tabular}
\end{center}
\begin{tikzpicture}[overlay,remember picture]
\draw [decoration={brace},decorate,thick]
($(b2)+(0,-2mm)$) -- node [below=2pt,align=center] {all 634 instances\\without timeout} ($(b1)+(0,-2mm)$) ;
\draw [decoration={brace},decorate,thick]
($(b4)+(0,-2mm)$) -- node [below=2pt,align=center] {314 non-minimal\\\llap{instan}ces without timeout} ($(b3)+(0,-2mm)$) ;
\end{tikzpicture}%
\caption{Statistics about our three reduction algorithms. The leftmost pane
counts, in column $>$($y$), the number of instances where the row algorithm
yields a strictly larger (i.e., worse) result than algorithm ($y$); as an example,
bisimulation with output assignment (2) outperforms
standard bisimulation (1) in 249 cases. The middle pane presents mean
(avg.) and median (md.) size ratios relative to the
original size and the minimal size of the sample machines.
The rightmost pane presents similar statistics while ignoring
all instances that were already minimal in the first place.\label{tab:stats}}
\end{table}
Figure~\ref{fig_tottime} displays a log-log plot comparing our different methods to
\textsc{MeMin}, using only the ``cube'' instances.\footnote{A 30-minute
timeout was enforced for all instances. The benchmarks were run on an
Asus G14 with a Ryzen 4800HS CPU, 16GB of RAM, and no swap.} The
label ``\emph{bisim. w/ o.a.}'' refers to the approach outlined in
Section~\ref{secBisim}, ``\emph{bisim.}'', to a simple bisimulation
quotient, and ``\emph{SAT}'', to the approach of Section~\ref{secMin}.
Points on the black diagonal stand for cases where \textsc{MeMin} and the
method being tested had equal runtime; cases above this line favor
\textsc{MeMin}, while cases below favor the aforementioned methods.
Points on the dotted line at the edges of the figure represent timeouts.
Only \textsc{MeMin} fails this way, on 10 instances.
Figure~\ref{fig_nclauses} compares the maximal number of literals and clauses
used to perform the SAT-based minimization by \textsc{MeMin} or by our
implementation. These two figures only describe ``cube'' instances, as
\textsc{MeMin} needs to be able to process the sample machines.
To study the benefits of our IGMM model's generic outputs, Table~\ref{tab:stats}
compares the reduction ratios achieved by the various methods, both relative
to one another and relative to the original and minimal sizes of the sample
machines. We use the ``full'' inputs everywhere with the exception of
\textsc{MeMin}.
\subsubsection{Interpretation.}
Reduction via bisimulation solves all instances and is by far the
fastest method (Fig.~\ref{fig_tottime}), but also the coarsest, with
a mere $0.94$ reduction ratio (Table~\ref{tab:stats}).
Bisimulation with output assignment achieves a better reduction ratio of $0.83$, very
close to \textsc{MeMin}'s $0.81$.
In most cases, the proposed SAT-based approaches remain significantly slower than
approaches based on bisimulation (Fig.~\ref{fig_tottime}). Our SAT-based
algorithm is sometimes slower than \textsc{MeMin}'s, as the model's increased
expressiveness requires a more complex method. However,
improving the use of partial solutions and increasing the expressiveness
of the input symbols significantly reduce the size of the encoding of
the intermediate SAT problems featured in our method (Fig.~\ref{fig_nclauses}),
and hence achieve a lower memory footprint.
Points on the horizontal line at the bottom of Figure~\ref{fig_nclauses}
correspond to instances that have already been proven minimal,
since the partial solution is equal to the entire set of states:
in these cases, no further reduction is required.
Finally, the increased expressiveness of our model results in
significantly smaller minimal machines, as shown by the $1.27$
reduction ratio of \textsc{MeMin}'s cube-based machines compared to
the minimization of generic IGMMs derived from the same specification.
There are also 74 cases where this superior expressiveness allows the
bisimulation with output assignment to beat \textsc{MeMin}.
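To make the fastest baseline concrete, the plain bisimulation quotient can be
sketched via na\"ive partition refinement in a few lines of Python. This is
an illustration only, for completely specified machines with a single output
valuation per transition, not the IGMM-aware implementation shipped in Spot;
the state, input, and transition names below are hypothetical.
\begin{verbatim}
def bisim_quotient(states, inputs, delta):
    # delta[(q, a)] = (successor, output); returns state -> block id.
    # Initial partition: states are together iff their output rows agree.
    rows = {q: tuple(delta[(q, a)][1] for a in inputs) for q in states}
    ids = {r: i for i, r in enumerate(sorted(set(rows.values())))}
    block = {q: ids[rows[q]] for q in states}
    while True:
        # Refine: states stay together iff their successors stay together.
        sig = {q: (block[q],
                   tuple(block[delta[(q, a)][0]] for a in inputs))
               for q in states}
        ids = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {q: ids[sig[q]] for q in states}
        if len(set(new.values())) == len(set(block.values())):
            return new
        block = new

states, inputs = ["s0", "s1", "s2"], ["a"]
delta = {("s0", "a"): ("s1", 0),
         ("s1", "a"): ("s0", 0),
         ("s2", "a"): ("s2", 1)}
print(bisim_quotient(states, inputs, delta))  # s0 and s1 merge
\end{verbatim}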
\section{Conclusion}\label{secConcl}
We introduced a generalized model for incompletely specified Mealy
machines, whose output is an arbitrary choice between multiple
possible valuations.
We have presented two reduction techniques on this model,
and compared them against the state-of-the-art minimization tool
\textsc{MeMin} (where the output choices are restricted to a cube).
The first technique is a SAT-based approach inspired by {\sc
MeMin}~\citep{abel.15.iccad} that yields a minimal machine. Thanks to
this generalized model and an improved use of the partial solution, we
use substantially fewer clauses and literals.
The second technique yields a reduced yet not necessarily minimal
machine by relying on the notion of state specialization. Compared
to the SAT-based approach, this technique offers a good compromise
between the time spent performing the reduction, and the actual
state-space reduction, especially for the cases derived from SYNTCOMP
from which our initial motivation originated.
Both techniques are implemented in Spot 2.10. They have been used in
our entry to the 2021 Synthesis Competition~\citep{renkin.21.synt}.
Spot comes with Python bindings that make it possible to experiment
with these techniques and compare their respective effects\footnote{See: \url{https://spot.lrde.epita.fr/ipynb/synthesis.html}.}.
\bibliographystyle{abbrvnat}
\section{Introduction}
The supersymmetric version of the seesaw mechanism is an attractive candidate for physics beyond the Standard Model. On the one hand, it includes the seesaw mechanism, which postulates the existence of right-handed neutrino fields and has become the most popular framework to account for neutrino masses. The seesaw is able to accommodate the experimental data on neutrino masses and mixing \cite{Yao:2006px}, explaining naturally the small neutrino mass scale. On the other hand, it embraces low energy supersymmetry, with its rich phenomenology and its well known virtues. In fact, the minimal supersymmetric Standard Model solves the hierarchy problem, achieves the unification of the gauge couplings, and contains a dark matter candidate: the lightest supersymmetric particle.
The lightest sneutrino is a new dark matter candidate present in the supersymmetric seesaw. Being a mixture of left-handed and right-handed sneutrino fields, the lightest sneutrino will have different properties depending on its composition in terms of interaction eigenstates. In general, three different kinds of sneutrinos can be envisioned: a dominantly left-handed one, a mixed sneutrino, or a dominantly right-handed one. A dominantly left-handed sneutrino is not a good dark matter candidate: such sneutrinos are ruled out by experimental searches \cite{Ahlen:1987mn} and tend to have a too small relic density \cite{Falk:1994es}. A mixed sneutrino can be compatible with the observed dark matter density as well as with present bounds from direct searches \cite{ArkaniHamed:2000bq,Arina:2007tm}. The required mixing is obtained at the expense of a large neutrino trilinear coupling, which is not allowed in typical models of supersymmetry breaking. A dominantly right-handed sneutrino is the final possibility, the one we will be concerned with throughout this paper. A right-handed sneutrino, being essentially sterile, interacts with other particles mainly through the neutrino Yukawa coupling. Could such a sterile sneutrino account for the observed dark matter density?
Gopalakrishna, de Gouvea, and Porod, in \cite{Gopalakrishna:2006kr}, studied that possibility within the same scenario we are considering here. They showed that self-annihilations of right-handed sneutrinos as well as co-annihilations with other particles are too weak to keep the sneutrinos in equilibrium with the thermal plasma in the early Universe. They also found that the production of sneutrinos in the decay of other supersymmetric particles gives a too large contribution to the relic density. They concluded, therefore, that in the standard cosmological model right-handed sneutrinos cannot explain the dark matter of the Universe.
Even though generally valid, that conclusion is not guaranteed if the mass difference between the Higgsino and the sneutrino is small. In that case, inverse decays, such as $\tilde N+L\to \tilde H$, contribute to the annihilation of sneutrinos and therefore to the reduction of the sneutrino relic density. Such a possibility was not taken into account in \cite{Gopalakrishna:2006kr}. In this paper, we will focus on models with a Higgsino NLSP and show that inverse processes cannot be neglected, for they suppress the sneutrino relic density by several orders of magnitude. Then, we will reexamine whether the sterile sneutrino can explain the dark matter of the Universe in the standard cosmological model.
In the next section we briefly review the supersymmetric seesaw model and show that sterile sneutrinos arise naturally in common scenarios of supersymmetry breaking. Then, in section \ref{sec:3}, we will include inverse decays into the Boltzmann equation that determines the sneutrino abundance. It is then shown that inverse decays are indeed relevant; they cause a significant reduction of the relic density. In section \ref{sec:4}, we study the relic density as a function of the neutrino Yukawa coupling, the sneutrino mass, and the Higgsino-sneutrino mass difference. There, we will obtain our main result: the suppression effect of inverse decays, though important, is not enough to bring the sneutrino relic density down within the observed range. In the final section we will review our study and present our conclusions.
\section{The model}\label{sec:2}
We work within the supersymmetric version of the seesaw mechanism, where the field content of the MSSM is supplemented with a right-handed neutrino superfield $N$ per generation. The superpotential then reads
\begin{equation}
W=W_{MSSM}+\frac12M_N^{IJ}N^IN^J+Y_\nu^{IJ} H_uL^IN^J
\label{superp}
\end{equation}
where, as usual, we have assumed R-parity conservation and renormalizability. $M_N$ is the Majorana mass matrix of right-handed neutrinos and $Y_\nu$ is the matrix of neutrino Yukawa couplings. Without loss of generality $M_N$ can be chosen to be real and diagonal. $Y_\nu$ is in general complex but we will assume, for simplicity, that it is real. $M_N$ and $Y_\nu$ are new free parameters of the model; they are to be determined or constrained from experimental data.
After electroweak symmetry breaking, the above superpotential generates the following neutrino mass terms
\begin{equation}
\mathcal{L}_{\nu\,mass}=-v_uY_\nu \nu N-\frac12M_NNN+h.c.
\end{equation}
If $M_N\gg v_u Y_\nu$, the light neutrino mass matrix, $m_\nu$, is then given by the seesaw formula
\begin{equation}
m_\nu=-m_DM_N^{-1}m_D^T,
\label{seesaw}
\end{equation}
with $m_D=v_uY_\nu$ being the Dirac mass. Since $m_\nu$ is partially known from neutrino oscillation data, equation (\ref{seesaw}) is actually a constraint on the possible values of $Y_\nu$ and $M_N$. It is a weak constraint, though, as it allows $M_N$ to vary over many different scales. In this paper we consider what is usually known as a seesaw mechanism at the electroweak scale. That is, we assume that $M_N\sim 100$ GeV. Thus, since the neutrino mass scale is around $m_\nu\sim 0.1$ eV, the typical neutrino Yukawa coupling is
\begin{equation}
Y_\nu\sim 10^{-6}\,,
\end{equation}
or around the same order of magnitude as the electron Yukawa coupling. Notice that this value of $Y_\nu$ is a consequence of the seesaw mechanism at the electroweak scale. In other frameworks, such as Dirac neutrinos or seesaw at much higher energies, $Y_\nu$ takes different values. We will not consider such possibilities here.
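As a quick numerical sanity check of this estimate, a few lines of Python suffice; the up-type Higgs vev $v_u\simeq 174$ GeV (i.e., large $\tan\beta$) is an illustrative input, not a parameter fixed by the model.
\begin{verbatim}
import math

m_nu = 0.1e-9   # neutrino mass scale in GeV (0.1 eV)
M_N  = 100.0    # right-handed neutrino mass in GeV
v_u  = 174.0    # up-type Higgs vev in GeV (illustrative, large tan(beta))

# seesaw: m_nu ~ (Y_nu v_u)^2 / M_N  =>  Y_nu ~ sqrt(m_nu M_N) / v_u
print(math.sqrt(m_nu * M_N) / v_u)   # ~6e-7, i.e. Y_nu ~ 10^-6
\end{verbatim}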
The new soft-breaking terms of the supersymmetric seesaw model are given by
\begin{equation}
\mathcal{L}_{soft}=-(m_N^2)^{IJ}\tilde N_R^{*I}\tilde N_R^J+\left[(m_B^2)^{IJ}\tilde N_R^I\tilde N_R^J-A_\nu^{IJ}h_u\tilde L^I\tilde N_R^J+h.c.\right]\,.
\label{lsoft}
\end{equation}
They include sneutrino mass terms as well as a trilinear interaction term. For simplicity, we will assume that $m_N^2$, $m_B^2$, and $A_\nu$ are real.
To study the sneutrino mass terms resulting from (\ref{superp}) and (\ref{lsoft}) it is convenient to suppress the generation structure; that is, to work with one fermion generation only. It is also useful to introduce the real fields $\tilde\nu_1$, $\tilde\nu_2$, $\tilde N_1$ and $\tilde N_2$ according to the relations
\begin{eqnarray}
\tilde\nu_L&=&\frac{1}{\sqrt2}\left(\tilde \nu_1+i\tilde \nu_2\right)\,,\nonumber\\
\tilde N_R&=&\frac{1}{\sqrt2}\left(\tilde N_1+ i \tilde N_2\right).
\end{eqnarray}
Indeed, in the basis $(\tilde \nu_1,\tilde N_1,\tilde \nu_2,\tilde N_2)$ the sneutrino mass matrix takes a block diagonal form
\begin{equation}
\mathcal{M}_{\tilde\nu}=\left(\begin{array}{cccc} m_{LL}^2 & m_{RL}^{2}+m_DM_N & 0 &0\\
m_{RL}^2+m_DM_N & m_{RR}^2-m_B^2 & 0 &0 \\
0 & 0& m_{LL}^2 & m_{RL}^{2}-m_DM_N\\
0 & 0& m_{RL}^2-m_DM_N & m_{RR}^2+m_B^2\end{array}\right)
\label{eq:mv}
\end{equation}
where $m_{LL}^2=m_{\tilde L}^2+m_D^2+0.5m_Z^2\cos2\beta$, $m_{RR}^2=M_N^2+m_N^2+m_D^2$, and $m_{RL}^2=-\mu v_dY_\nu+v_uA_\nu$.
This matrix can be diagonalized by a unitary rotation with a mixing angle given by
\begin{equation}
\tan 2\theta_{1,2}^{\tilde \nu}=\frac{2(m_{RL}^2\pm m_DM_N)}{m_{LL}^2-(m_{RR}^2\mp m_B^2)},
\label{eq:mix}
\end{equation}
where the top sign corresponds to $\theta_1$ --to the mixing between $\tilde \nu_1$ and $\tilde N_1$-- whereas the bottom sign corresponds to $\theta_2$.
Since $\mathcal{M}_{\tilde\nu}$ is independent of gaugino masses, there is a region in the supersymmetric parameter space where the lightest sneutrino, obtained from (\ref{eq:mv}), is the lightest supersymmetric particle (LSP) and consequently the dark matter candidate. That is the only region we will consider in this paper.
The lightest sneutrino is a mixture of left-handed and right-handed sneutrino fields. Depending on its gauge composition, three kinds of sneutrinos can be distinguished: a dominantly left-handed sneutrino, a mixed sneutrino, and a dominantly right-handed sneutrino. A dominantly left-handed sneutrino is not a good dark matter candidate for it is already ruled out by direct dark matter searches. These sneutrinos also have large interactions cross sections and tend to annihilate efficiently in the early universe, typically yielding a too small relic density. A mixed sneutrino may be a good dark matter candidate. By adjusting the sneutrino mixing angle, one can simultaneously suppress its annihilation cross section, so as to obtain the right relic density, and the sneutrino-nucleon cross section, so as to evade present constraints from direct searches. A detailed study of models with mixed sneutrino dark matter was presented recently in \cite{Arina:2007tm}. A major drawback of these models is that the required mixing may be incompatible with certain scenarios of supersymmetry breaking, such as gravity mediation. The third possibility, the one we consider, is a lightest sneutrino which is predominantly right-handed. That is, a \emph{sterile sneutrino}.
A sterile sneutrino is actually unavoidable in supersymmetry breaking scenarios where the trilinear couplings are proportional to the corresponding Yukawa matrices, such as the constrained Minimal Supersymmetric Standard Model (CMSSM)~\cite{Yao:2006px}. In these models
\begin{equation}
A_\nu=a_\nu Y_\nu m_{soft}
\end{equation}
where $m_{soft}\sim 100$ GeV is a typical supersymmetry breaking mass and $a_\nu$ is an order one parameter. Because $Y_\nu$ is small, $A_\nu$ is much smaller than the electroweak scale,
\begin{equation}
A_\nu\sim 100 \mathrm{keV}\,.
\end{equation}
Hence, from equation (\ref{eq:mix}), the mixing angle between $\tilde \nu_i$ and $\tilde N_i$ is also very small
\begin{equation}
\sin\theta_i\sim 10^{-6}\,.
\end{equation}
Thus, we see how in these models the small $Y_\nu$ translates into a small trilinear coupling $A_\nu$ that in turn leads to a small mixing angle --to a sterile sneutrino. Sterile sneutrinos are also expected in other supersymmetry breaking mechanisms that yield a small $A_\nu$ at the electroweak scale.
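This chain of estimates can be checked numerically from equation (\ref{eq:mix}); the sketch below uses illustrative inputs (all masses in GeV) and assumes a typical soft-mass-squared splitting of order $(100~\mathrm{GeV})^2$ in the denominator.
\begin{verbatim}
import math

Y_nu, v_u, M_N = 1e-6, 174.0, 100.0
A_nu  = 1e-4                  # ~100 keV trilinear coupling
m_RL2 = v_u * A_nu            # neglecting the mu*v_d*Y_nu piece
m_D   = Y_nu * v_u
denom = 100.0**2              # typical soft-mass-squared splitting

theta = 0.5 * math.atan(2 * (m_RL2 + m_D * M_N) / denom)
print(theta)                  # ~3e-6: a sterile sneutrino
\end{verbatim}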
Since the mixing angle is small, we can extract the sterile sneutrino mass directly from (\ref{eq:mv}). It is given by
\begin{equation}
m_{\tilde N}^2=m_{RR}^2-m_{B}^2\approx M_N^2+m_N^2-m_B^2
\end{equation}
where we have neglected the Dirac mass term in the last expression. $m_{\tilde N}$ is thus expected to be at the electroweak scale. In the following, we will consider $m_{\tilde N}=m_{LSP}$ as a free parameter of the model.
To summarize, the models we study consist of the MSSM plus an electroweak-scale seesaw mechanism that accounts for neutrino masses. Such models include a new dark matter candidate: the lightest sneutrino. In common scenarios of supersymmetry breaking, the lightest sneutrino, which we assume to be the dark matter candidate, turns out to be a dominantly right-handed sneutrino, or a sterile sneutrino. In the following, we will examine whether such a \emph{sterile} sneutrino may account for the dark matter of the Universe.
\section{The $\tilde N$ relic density}\label{sec:3}
To determine whether the sterile sneutrino can explain the dark matter of the universe we must compute its relic density $\Omega_{\tilde N}h^2$ and compare it with the observed value $\Omega_{DM}h^2=0.11$~\cite{Dunkley:2008ie}. This question was already addressed in \cite{Gopalakrishna:2006kr}. They showed that, due to their weak interactions, sneutrinos are unable to reach thermal equilibrium in the early Universe. In fact, both the self-annihilation and the co-annihilation cross section are very suppressed. They also noticed that sneutrinos could be produced in the decays of other supersymmetric particles and found that such decay contributions lead to a relic density several orders of magnitude larger than observed. Thus, they concluded, sterile sneutrinos can only be non-thermal dark matter candidates.
That conclusion was drawn, however, without taking into account inverse decay processes. We now show that if the Higgsino-sneutrino mass difference is small\footnote{If it is large the results in \cite{Gopalakrishna:2006kr} would follow.}, inverse decays may suppress the sneutrino relic density by several orders of magnitude. To isolate this effect, only models with a Higgsino NLSP are considered in the following. We then reexamine the possibility of having a sterile sneutrino as a thermal dark matter candidate within the standard cosmological model.
In the early Universe, sterile sneutrinos are mainly created through the decay $\tilde H\to \tilde N+L$, where $\tilde H$ is the Higgsino and $L$ is the lepton doublet. Alternatively, using the mass-eigenstate language, one may say that sneutrinos are created in the decay of neutralinos ($\chi^0\to \tilde N +\nu$) and charginos ($\chi^\pm\to \ell^\pm +\tilde N$). These decays are all controlled by the neutrino Yukawa coupling $Y_\nu$. Other decays, such as $\tilde\ell\to \tilde N f f'$ via $W^\pm$, also occur but the Higgsino channel dominates. Regarding annihilation processes, the most important one is the inverse decay $\tilde N+L\to\tilde H$. In fact, the sneutrino-sneutrino annihilation cross section is so small that such process never reaches equilibrium. And a similar result holds for the sneutrino coannihilation cross section. We can therefore safely neglect annihilations and coannihilations in the following. Only decays and inverse decays contribute to the sneutrino relic density.
The Boltzmann equation for the sneutrino distribution function $f_{\tilde N}$ then reads:
\begin{align}
\label{boltzmann}
\frac{\partial f_{\tilde N}}{\partial t}-H\frac{|\mathbf{p}|^2}{E}\frac{\partial f_{\tilde N}}{\partial E}=\frac{1}{2 E_{\tilde N}}\int & \frac{d^3p_L}{(2\pi)^3 2E_L}\frac{d^3p_{\tilde H}}{(2\pi)^3 2E_{\tilde H}}|\mathcal{M}_{\tilde H\to L\tilde N}|^2\\ \nonumber
& (2\pi)^4 \delta^4(p_{\tilde H}-p_L-p_{\tilde N})\left[f_{\tilde H}-f_L f_{\tilde N}\right]
\end{align}
where $H$ is the Hubble parameter and $f_{\tilde H}$, $f_{L}$ respectively denote the $\tilde H$ and $L$ distribution functions. Other dark matter candidates, including the neutralino, have large elastic scattering cross sections with the thermal plasma that keep them in \emph{kinetic} equilibrium during the freeze-out process. Their distribution functions are then proportional to those in \emph{chemical} equilibrium and the Boltzmann equation can be written as an equation for the number density instead of the distribution function \cite{Gondolo:1990dk}. For sterile sneutrinos, on the contrary, the elastic scattering is a slow process --being suppressed by the Yukawa coupling-- and kinetic equilibrium is not guaranteed. Hence, we cannot write (\ref{boltzmann}) as an equation for the sneutrino number density $n_{\tilde N}$ and must instead solve it for $f_{\tilde N}$.
If the condition $f_{\tilde N}\ll 1$ were satisfied, inverse processes could be neglected and a simple equation relating the sneutrino number density to the Higgsino number density could be obtained. That is the case, for instance, in supersymmetric scenarios with Dirac mass terms only \cite{Asaka:2005cn}. In such models, the neutrino Yukawa coupling is very small, $Y_\nu\sim 10^{-13}$, and sneutrinos never reach chemical equilibrium. But for the range of parameters we consider, $Y_\nu\sim 10^{-6}$, the condition $f_{\tilde N}\ll 1$ is not satisfied.
Since equation (\ref{boltzmann}) depends also on the Higgsino distribution function, one may think that it is necessary to write the Boltzmann equation for $f_{\tilde H}$ and then solve the resulting system for $f_{\tilde N}$ and $f_{\tilde H}$. Not so. Higgsinos, due to their gauge interactions, are kept in thermal equilibrium --by self-annihilation processes-- until low temperatures, when they decay into $\tilde N+L$ through the $Y_\nu$ suppressed interaction. It is thus useful to define a \emph{freeze-out} temperature, $T_{f.o.}$, as the temperature at which these two reaction rates become equal. That is,
\begin{equation}
n_{\tilde H}\langle\sigma_{\tilde H\tilde H}v\rangle|_{T_{f.o.}}=\Gamma(\tilde H\to \tilde N +L)|_{T_{f.o.}}\,,
\label{eq:fo}
\end{equation}
where $n_{\tilde H}$ is the Higgsino number density and $\langle\sigma_{\tilde H\tilde H}v\rangle$ is the thermal average of the Higgsino-Higgsino annihilation rate into light particles. $T_{f.o.}$ marks the boundary between two different regimes. For $T>T_{f.o.}$ Higgsinos are in equilibrium and annihilate efficiently. The Higgsinos produced in the inverse decays, in particular, easily annihilate with thermal Higgsinos into light particles. The inverse process is thus effective. In contrast, for $T<T_{f.o.}$ Higgsinos mostly decay into the LSP and inverse decays cannot deplete the sneutrino abundance. The final state Higgsinos simply decay back into sneutrinos: $\tilde N+L\to\tilde H\to \tilde N+L$.
Below $T_{f.o.}$, therefore, the total number of sneutrinos plus Higgsinos remains constant. Thus, we only need to integrate equation (\ref{boltzmann2}) down to $T_{f.o.}$, a regime in which Higgsinos are in equilibrium.
Assuming a Maxwell-Boltzmann distribution, $f(E)\propto \exp(-E/T)$, for Higgsinos and leptons and neglecting lepton masses, the integrals in (\ref{boltzmann}) can be evaluated analytically to find
\begin{equation}
\frac{\partial f_{\tilde N}}{\partial t}-H\frac{|\mathbf{p}|^2}{E}\frac{\partial f_{\tilde N}}{\partial E}=\frac{|\mathcal{M}_{\tilde H\to L\tilde N}|^2 T}{16\pi E_{\tilde N}|\mathbf{p}_{\tilde N}|}\left(e^{-E_{\tilde N}/T} -f_{\tilde N}\right) \left[e^{-E_{-}/T}-e^{-E_{+}/T}\right]
\label{boltzmann2}
\end{equation}
where
\begin{align}
E_\pm&=\frac{m_{\tilde H}^2-m_{\tilde N}^2}{2m_{\tilde N}^2}(E_{\tilde N}\pm|\mathbf{p}_{\tilde N}|).
\end{align}
In the following we will solve equation (\ref{boltzmann2}) to obtain the sneutrino abundance, $Y_{\tilde N}=n_{\tilde N}/s$, and the sneutrino relic density, $\Omega_{\tilde N}h^2$. The sneutrino abundance today will be given by
\begin{equation}
Y_{\tilde N}|_{T_0}=Y_{\tilde N}|_{T_{f.o.}}+Y_{\tilde H}|_{T_{f.o.}},
\end{equation}
where the second term takes into account that the Higgsinos present at freeze-out will decay into sneutrinos. The sneutrino relic density today is then obtained as
\begin{equation}
\Omega_{\tilde N}h^2=2.8\times10^{10} Y_{\tilde N} \frac{m_{\tilde N}}{100\mathrm{GeV}}.
\end{equation}
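For orientation, this conversion is trivial to evaluate numerically; e.g., reproducing the observed $\Omega_{DM}h^2\simeq 0.11$ for $m_{\tilde N}=100$ GeV requires an abundance $Y_{\tilde N}\sim 4\times 10^{-12}$. A minimal sketch:
\begin{verbatim}
def omega_h2(Y, m_GeV):
    # Omega h^2 = 2.8e10 * Y * (m / 100 GeV)
    return 2.8e10 * Y * (m_GeV / 100.0)

print(omega_h2(4e-12, 100.0))   # ~0.11, the observed dark matter density
\end{verbatim}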
The only parameters that enter directly in the computation of the sneutrino relic density are the Yukawa coupling, the sneutrino mass, and the Higgsino mass, which we take to be given by the $\mu$ parameter --$m_{\tilde H}=\mu$. All other supersymmetric particles besides $\tilde N$ and $\tilde H$ are assumed to be heavier, with $m_{susy}\sim 1$ TeV. To determine the freeze-out temperature, equation (\ref{eq:fo}), we also need to know the Higgsino annihilation rate into Standard Model particles. We use the DarkSUSY package \cite{Gondolo:2004sc} to extract that value. Regarding the initial conditions, we assume that at high temperatures ($T\gg m_{\tilde H}$) the sneutrino distribution function is negligible $f_{\tilde N}\sim 0$. Finally, we assume that the early Universe is described by the standard cosmological model.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4,angle=-90]{evolyuk.ps}
\caption{\small The role of the neutrino Yukawa coupling on the sterile sneutrino abundance. The figure shows $Y$ as a function of the temperature for different values of $Y_\nu$. The sneutrino mass is $100$~GeV while $\mu=120$~GeV.}
\label{figure1}
\end{center}
\end{figure}
Once decays and inverse decays are included in the $\tilde N$ Boltzmann equation, two questions naturally come to mind. First, for what values of $Y_\nu$ are inverse decays relevant? Second, can decays and inverse decays bring the sneutrinos into equilibrium? To answer these questions we show in figure \ref{figure1} the sneutrino abundance as a function of the temperature for $m_{\tilde N}=100$ GeV, $m_{\tilde H}=120$ GeV, and different values of $Y_\nu$. Notice that for $Y_\nu=10^{-8}$ inverse processes are negligible and the sneutrino abundance simply grows monotonically as the temperature decreases. In that region, for $Y_\nu\lesssim 10^{-8}$, the sneutrino relic density is proportional to $Y_\nu^2$. From the figure we see that for $Y_\nu=10^{-7}$ the inverse process leads to a reduction of the sneutrino abundance around $T=20$ GeV. The Yukawa interaction is not yet strong enough to bring the sneutrinos into equilibrium. For $Y_\nu=10^{-6}$ sneutrinos do reach equilibrium and then decouple at lower temperatures. For even larger Yukawa couplings, $Y_\nu=10^{-5},10^{-4}$, equilibrium is also reached but the decoupling occurs at higher temperatures. In that region, the relic density also increases with the Yukawa coupling. Thus, for $Y_\nu\sim 10^{-6}$ inverse decays not only are relevant, they are strong enough to thermalize the sneutrinos.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4,angle=-90]{compall.ps}
\end{center}
\caption{\small The effect of the inverse process on the sneutrino relic density. The panels show the resulting sneutrino abundance $Y=n/s$ as a function of the temperature for $m_{\tilde N}=100$~GeV and different values of $\mu$. The full line is the result obtained including inverse processes whereas the dashed line is the result without them. The dash-dotted line shows the sneutrino equilibrium abundance.}
\label{figure2}
\end{figure}
Figure \ref{figure2} directly compares the resulting sneutrino abundance with and without including the inverse process. The full line corresponds to the correct result, taking into account the direct and the inverse process. The dashed line, instead, shows the result for the direct process only, that is, the sneutrino abundance according to \cite{Gopalakrishna:2006kr}. The sneutrino mass was taken to be $100$~GeV and $Y_\nu$ was set to $10^{-6}$. The Higgsino mass is different in each panel and includes values leading to strong and mild degeneracy as well as no degeneracy at all between the sneutrino and the Higgsino. Notice that the correct final abundance, and consequently the resulting relic density, is always several orders of magnitude below the value predicted in \cite{Gopalakrishna:2006kr}. Even for the case of a large mass difference, we find a suppression of 3 orders of magnitude in the relic density. And as the mass difference shrinks the suppression becomes larger, reaching about $6$ orders of magnitude for $\mu=150$~GeV and about $7$ orders of magnitude for $\mu=120$~GeV. We thus see that over the whole parameter space the inverse process has a large suppression effect on the sneutrino relic density.
\section{Results}
\label{sec:4}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4,angle=-90]{newsneuyuk.ps}
\caption{\small The sneutrino relic density as a function of the neutrino Yukawa coupling for different values of $m_{\tilde N}$ and $\Delta m=20$~GeV.}
\label{figure3}
\end{center}
\end{figure}
So far we have found that the inverse decay process $\tilde N+L\to\tilde H$ leads to a suppression of the sneutrino relic density. It remains to be seen whether such suppression is strong enough to bring the relic density down to the observed value. That is, we will now study the dependence of the relic density on the sneutrino mass, the Higgsino-sneutrino mass difference, and the neutrino Yukawa coupling to find the region of the parameter space that satisfies the condition $\Omega_{\tilde N}h^2=\Omega_{DM}h^2$.
Figure \ref{figure3} shows the sneutrino relic density as a function of the neutrino Yukawa coupling for different values of the sneutrino mass. The Higgsino-sneutrino mass difference ($\Delta m=m_{\tilde H}-m_{\tilde N}$) was set to $20$ GeV. Larger values would only increase the relic density --see figure \ref{figure2}. Notice that, for a given sneutrino mass, the relic density initially decreases rather steeply reaching a minimum value at $Y_\nu\lesssim 10^{-6}$ and then increases again. From the figure we also observe that the smallest value of the relic density is obtained for $m_{\tilde N}=400$ GeV, that is, when the percentage mass difference is smaller. In any case, the relic density is always larger than $1$, too large to be compatible with the observations.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4,angle=-90]{newsneumsn.ps}
\caption{\small The sneutrino relic density as a function of the sneutrino mass for $Y_\nu=10^{-6}$ and different values of $\Delta m/m_{\tilde N}$. As expected, the smaller the mass difference, the smaller the relic density.}
\end{center}
\label{fig:msn}
\end{figure}
This result is confirmed in figure \ref{fig:msn}, where we display the relic density as a function of the sneutrino mass for $Y_\nu=10^{-6}$ and different values of $\Delta m/m_{\tilde N}$. In agreement with the previous figure, we see that the smaller the percentage mass difference, the smaller the relic density is. Yet, $\Omega_{\tilde N}h^2$ is always larger than $1$. We have verified that this conclusion is robust. Neither larger sneutrino masses nor different Yukawa couplings lead to the correct value of the relic density.
\section{Conclusions}
We studied the possibility of explaining the dark matter with a sterile sneutrino in a supersymmetric model consisting of the MSSM supplemented with a seesaw mechanism at the weak scale. We showed that, if the Higgsino is the NLSP, inverse decays play a crucial role in the computation of the sneutrino relic density, suppressing it by several orders of magnitude. We wrote down and numerically solved the correct Boltzmann equation that determines the sneutrino abundance and studied the resulting relic density as a function of the sneutrino mass, the neutrino Yukawa coupling, and the Higgsino-sneutrino mass difference. We found that the sterile sneutrino relic density, even though much smaller than previously believed, is still larger than the observed dark matter density. In this scenario, therefore, the sterile sneutrino is not a thermal dark matter candidate.
\section*{Acknowledgments}
I am supported by the \emph{Juan de la Cierva} program of the Ministerio de Educacion y Ciencia of Spain, by Proyecto Nacional FPA2006-01105, and by the Comunidad de Madrid under Proyecto HEPHACOS S-0505/ESP-0346. I would like to thank W. Porod and Ki-Young Choi for comments and suggestions.
\begin{thebibliography}{99}
\bibitem{Yao:2006px}
C.~Amsler {\it et al.} [Particle Data Group],
Phys.\ Lett.\ B {\bf 667}, 1 (2008). W.~M.~Yao {\it et al.} [Particle Data Group],
J.\ Phys.\ G {\bf 33} (2006) 1.
\bibitem{Ahlen:1987mn}
S.~P.~Ahlen, F.~T.~Avignone, R.~L.~Brodzinski, A.~K.~Drukier, G.~Gelmini and D.~N.~Spergel,
Phys.\ Lett.\ B {\bf 195} (1987) 603.
D.~O.~Caldwell, R.~M.~Eisberg, D.~M.~Grumm, M.~S.~Witherell, B.~Sadoulet, F.~S.~Goulding and A.~R.~Smith,
Phys.\ Rev.\ Lett.\ {\bf 61} (1988) 510.
M.~Beck {\it et al.},
Phys.\ Lett.\ B {\bf 336} (1994) 141.
\bibitem{Falk:1994es}
T.~Falk, K.~A.~Olive and M.~Srednicki,
Phys.\ Lett.\ B {\bf 339} (1994) 248
[arXiv:hep-ph/9409270].
\bibitem{ArkaniHamed:2000bq}
N.~Arkani-Hamed, L.~J.~Hall, H.~Murayama, D.~Tucker-Smith and N.~Weiner,
Phys.\ Rev.\ D {\bf 64} (2001) 115011
[arXiv:hep-ph/0006312].
F.~Borzumati and Y.~Nomura,
Phys.\ Rev.\ D {\bf 64} (2001) 053005
[arXiv:hep-ph/0007018].
\bibitem{Arina:2007tm}
C.~Arina and N.~Fornengo,
JHEP {\bf 0711} (2007) 029
[arXiv:0709.4477 [hep-ph]].
\bibitem{Gopalakrishna:2006kr}
S.~Gopalakrishna, A.~de Gouvea and W.~Porod,
JCAP {\bf 0605} (2006) 005
[arXiv:hep-ph/0602027].
\bibitem{Asaka:2005cn}
T.~Asaka, K.~Ishiwata and T.~Moroi,
Phys.\ Rev.\ D {\bf 73} (2006) 051301
[arXiv:hep-ph/0512118].
T.~Asaka, K.~Ishiwata and T.~Moroi,
Phys.\ Rev.\ D {\bf 75} (2007) 065001
[arXiv:hep-ph/0612211].
\bibitem{Dunkley:2008ie}
J.~Dunkley {\it et al.} [WMAP Collaboration],
arXiv:0803.0586 [astro-ph].
\bibitem{Gondolo:1990dk}
P.~Gondolo and G.~Gelmini,
Nucl.\ Phys.\ B {\bf 360} (1991) 145.
\bibitem{Gondolo:2004sc}
P.~Gondolo, J.~Edsjo, P.~Ullio, L.~Bergstrom, M.~Schelke and E.~A.~Baltz,
JCAP {\bf 0407} (2004) 008
[arXiv:astro-ph/0406204].
\end{thebibliography}
\end{document}
\section{Introduction}\label{sec1}
The anomalous Hall effect (AHE)~\cite{Nagaosa2010} and the magneto-optical effect (MOE)~\cite{Ebert1996,Oppeneer2001,Antonov2004} are fundamental phenomena in condensed matter physics and they have become appealing techniques to detect and measure magnetism by electric and optical means, respectively. Usually occurring in ferromagnetic metals, the AHE is characterized by a transverse voltage drop resulting from a longitudinal charge current in the absence of applied magnetic fields. There are two distinct contributions to the AHE, that is, the extrinsic one~\cite{Smit1955,Smit1958,Berger1970} depending on scattering of electrons off impurities or due to disorder, and the intrinsic one~\cite{Sundaram1999,YG-Yao2004} solely determined by the Berry phase effect~\cite{D-Xiao2010} in a pristine crystal. Both of these mechanisms originate from time-reversal ($T$) symmetry breaking in combination with spin-orbit coupling (SOC)~\cite{Nagaosa2010}. The intrinsic AHE can be accurately calculated from electronic-structure theory on the \textit{ab initio} level, and examples include studies of Fe~\cite{YG-Yao2004,XJ-Wang2006}, Co~\cite{XJ-Wang2007,Roman2009}, SrRuO$_{3}$~\cite{Fang2003,Mathieu2004}, Mn$_{5}$Ge$_{3}$~\cite{CG-Zeng2006}, and CuCr$_{2}$Se$_{4-x}$Br$_{x}$~\cite{YG-Yao2007}. Referring to the Kubo formula~\cite{Kubo1957,CS-Wang1974}, the intrinsic anomalous Hall conductivity (IAHC) can be straightforwardly extended to the ac case (as given by the optical Hall conductivity), which is intimately related to the magneto-optical Kerr and Faraday effects (MOKE and MOFE) [see Eqs.~\eqref{eq:kerr} and~\eqref{eq:faraday} below]. Phenomenologically, the MOKE and MOFE refer to the rotation of the polarization plane when a linearly polarized light is reflected from, or transmitted through a magnetic material, respectively. Owing to their similar physical nature, the intrinsic AHE is often studied together with MOKE and MOFE.
\begin{figure*}
\includegraphics[width=2\columnwidth]{Fig1}
\caption{(Color online) (a) Right-handed ($\kappa=+1$) and (b) left-handed ($\kappa=-1$) vector spin chiralities in coplanar noncollinear spin systems. The open arrows indicate the clockwise rotation of spin with a uniform angle, which results in a different spin configuration with the same spin chirality. (c) The crystal and magnetic structures of Mn$_{3}X$N ($X$ = Ga, Zn, Ag, and Ni). The purple, green, and blue balls represent Mn, $X$, and N atoms, respectively. The spin magnetic moments originate mainly from Mn atoms, while the spin polarization of $X$ and N atoms is negligible. The spins on three Mn-sublattices (Mn$_{1}$, Mn$_{2}$, and Mn$_{3}$) are indicated by red arrows that are aligned within the (111) plane (here, the right-handed spin chirality is shown as an example). The angles between neighboring spins are always 120$^{\circ}$, while the spins can simultaneously rotate within the (111) plane that is characterized by an azimuthal angle $\theta$ away from the diagonals of the face. (d) The (111) plane of Mn$_{3}X$N, which can be regarded as a kagome lattice of Mn atoms. The dotted lines mark the two-dimensional unit cell. (e)-(g) The R1, R2, and R3 phases with the right-handed spin chirality. There are one three-fold rotation axis ($C_{3}$, which is along the $[111]$ direction ($z$ axis)), three two-fold rotation axes ($C_{2}^{(1)}$, $C_{2}^{(2)}$, and $C_{2}^{(3)}$), and three mirror planes ($M^{(1)}$, $M^{(2)}$, and $M^{(3)}$) in the R1 phase; only $C_{3}$ axis is preserved in the R2 phase; the time-reversal symmetry $T$ has to be combined with a two-fold rotation and mirror symmetries in the R3 phase. (h)-(j) The L1, L2, and L3 phases with the left-handed spin chirality. There are one two-fold rotation axis ($C_{2}$) and one mirror plane ($M$) in the L1 phase; the time-reversal symmetry $T$ is combined with two-fold rotation and mirror symmetries in both the L2 and the L3 phases.}
\label{fig1}
\end{figure*}
As the AHE and the MOE are commonly considered to be proportional to the magnetization, most of the materials studied to date with respect to these phenomena are ferromagnets (FMs) and ferrimagnets (FiMs), while antiferromagnets (AFMs) are naively expected to have neither AHE nor MOE due to their vanishing net magnetization. Although $T$ symmetry is broken in AFMs, its combination $TS$ with other spatial symmetries $S$ (e.g., fractional translations or inversion) can reinstate Kramers theorem such that AHE and MOE vanish. A simple example is the one-dimensional collinear bipartite antiferromagnet~\cite{Herring1966}, where $S$ is the fractional translation by half of the vector connecting the two sublattices. Another example is the two-dimensional honeycomb lattice with collinear N{\'e}el order (as realized, e.g., in the bilayer MnPSe$_{3}$)~\cite{Sivadas2016}, which natively has the combined symmetry $TI$ although time-reversal symmetry $T$ and spatial inversion symmetry $I$ are both broken individually. The application of an electric field perpendicular to the film plane will result in broken $TI$ symmetry and a band exchange splitting that generates the MOKE~\cite{Sivadas2016}. Such electrically driven MOKE has been realized, e.g., in multiferroic Cr-based metallorganic perovskites~\cite{FR-Fan2017}. Therefore, the AHE and the MOE, as the most fundamental fingerprints of $T$ symmetry breaking in matter, can in principle exist in AFMs if certain crystal symmetries are absent, even though the net magnetization vanishes. Notably, the cluster multipole theory proposed by Suzuki \textit{et al.}~\cite{Suzuki2017,Suzuki2018} has been recently applied to interpret the origin of AHE in AFMs.
Leaving aside collinear AFMs, recent works~\cite{Ohgushi2000,Shindou2001,Hanke2017,Shiomi2018,J-Zhou2016,WX-Feng2018,H-Chen2014,Kubler2014,GY-Guo2017,Y-Zhang2017,Nakatsuji2015,Nayak2016,Kiyohara2016,Ikhlas2017,WX-Feng2015,Higo2018} revealed that noncollinear AFMs can also host nonvanishing AHE and MOE. Two types of noncollinear AFMs can be considered: noncoplanar and coplanar, which are characterized by scalar and vector spin chiralities, respectively~\cite{Kawamura2001}. On the one hand, the nonzero scalar spin chirality $\chi=\boldsymbol{S}_{i}\cdot(\boldsymbol{S}_{j}\times\boldsymbol{S}_{k})$ (where $\boldsymbol{S}_{i}$, $\boldsymbol{S}_{j}$, and $\boldsymbol{S}_{k}$ denote three neighboring noncoplanar spins) will generate a fictitious magnetic field that makes the electrons feel a real-space Berry phase while hopping in the spin lattice~\cite{Ohgushi2000,Shindou2001}. Consequently, the AHE can emerge in noncoplanar AFMs without SOC, which is referred to as the topological Hall effect and has been theoretically predicted~\cite{Shindou2001,Hanke2017} and experimentally observed~\cite{Shiomi2018}, for instance, in disordered $\gamma$-Fe$_{x}$Mn$_{1-x}$ alloys. Moreover, the quantized version of the topological Hall effect was reported in the layered noncoplanar noncollinear K$_{0.5}$RhO$_{2}$ AFM insulator~\cite{J-Zhou2016}. Extending these findings, Feng \textit{et al.}~\cite{WX-Feng2018} proposed that topological MOE and quantum topological MOE exist in $\gamma$-Fe$_{x}$Mn$_{1-x}$ and K$_{0.5}$RhO$_{2}$, respectively.
Instead of the scalar spin chirality (which vanishes for coplanar spin configurations), the finite vector spin chirality~\cite{Kawamura2001},
\begin{equation}\label{eq:kappa}
\kappa=\frac{2}{3\sqrt{3}}\sum_{\langle ij\rangle}\left[\boldsymbol{S}_{i}\times\boldsymbol{S}_{j}\right]_{z},
\end{equation}
where $\langle ij\rangle$ runs over the nearest neighboring spins, is an important quantity in coplanar noncollinear AFMs such as cubic Mn$_{3}X$ ($X$ = Rh, Ir, Pt) and hexagonal Mn$_{3}Y$ ($Y$ = Ge, Sn, Ga). The Mn atoms in the (111) plane of Mn$_{3}X$ and in the (0001) plane of Mn$_{3}Y$ are arranged into a kagome lattice, while Mn$_{3}X$ and Mn$_{3}Y$ have opposite vector spin chiralities~\cite{Y-Zhang2017} with $\kappa=+1$ (right-handed state) and $\kappa=-1$ (left-handed state) [see Figs.~\ref{fig1}(a) and~\ref{fig1}(b)], respectively. The concept of right- and left-handed states adopted here follows the convention of Ref.~\onlinecite{Kawamura2001}. For both right- and left-handed spin chiralities, the spins can be simultaneously rotated within the plane, further resulting in different spin configurations [see Figs.~\ref{fig1}(a) and~\ref{fig1}(b)], e.g., the T1 and the T2 phases in Mn$_{3}X$~\cite{WX-Feng2015} as well as the type-A and the type-B phases in Mn$_{3}Y$~\cite{GY-Guo2017}. The vector spin chirality and the spin rotation discussed here allow us to characterize coplanar AFMs that have a 120$^\circ$ noncollinear magnetic ordering. For the AHE, Chen \textit{et al.}~\cite{H-Chen2014} discovered theoretically that Mn$_{3}$Ir has unexpectedly large IAHC and several other groups predicted the IAHC in Mn$_{3}Y$ with comparable magnitudes~\cite{Kubler2014,GY-Guo2017,Y-Zhang2017}. At the same time, the AHE in Mn$_{3}Y$ has been experimentally confirmed~\cite{Nakatsuji2015,Nayak2016,Kiyohara2016,Ikhlas2017}. Because of the close relationship to AHE, Feng \textit{et al.}~\cite{WX-Feng2015} first predicted that large MOKE can emerge in Mn$_{3}X$ even though the net magnetization is zero. Eventually, Higo \textit{et al.}~\cite{Higo2018} successfully measured large zero-field Kerr rotation angles in Mn$_{3}$Sn at room temperature.
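The sign convention of equation~\eqref{eq:kappa} is easily verified numerically. The short Python sketch below is an illustration only; it traverses the three nearest-neighbor bonds of one kagome triangle counterclockwise, returns $\kappa=+1$ for spins successively rotated by $+120^{\circ}$ and $\kappa=-1$ for $-120^{\circ}$, and confirms that a uniform in-plane rotation $\theta$ leaves $\kappa$ unchanged.
\begin{verbatim}
import numpy as np

def spin(phi):
    # unit spin lying in the plane, at in-plane angle phi (radians)
    return np.array([np.cos(phi), np.sin(phi), 0.0])

def vector_chirality(phis):
    # kappa = 2/(3 sqrt(3)) * sum of (S_i x S_j)_z over the three
    # oriented nearest-neighbor bonds of one kagome triangle
    S = [spin(p) for p in phis]
    bonds = [(0, 1), (1, 2), (2, 0)]
    total = sum(np.cross(S[i], S[j])[2] for i, j in bonds)
    return 2.0 / (3.0 * np.sqrt(3.0)) * total

t = 2 * np.pi / 3
print(vector_chirality([0.0,  t,  2 * t]))          # ~ +1: right-handed
print(vector_chirality([0.0, -t, -2 * t]))          # ~ -1: left-handed
print(vector_chirality([0.3, 0.3 + t, 0.3 + 2*t]))  # ~ +1: theta-independent
\end{verbatim}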
In addition to Mn$_3X$ and Mn$_3Y$, the antiperovskite Mn$_3X$N ($X$ = Ga, Zn, Ag, Ni, etc.) is another important class of coplanar noncollinear AFMs~\cite{Singh2018}, which has been known since the 1970s~\cite{Bertaut1968,Fruchart1978}. As in Mn$_3X$, the $X$ atoms in Mn$_3X$N occupy the corners of the cube [see Fig.~\ref{fig1}(c)] and the face-centered Mn atoms are arranged into a kagome lattice in the (111) plane [see Fig.~\ref{fig1}(d)], while there is an additional N atom located in the center of the cube [see Fig.~\ref{fig1}(c)]. Despite the structural similarity, some unique physical properties have been found in Mn$_3X$N, such as magnetovolume effects~\cite{Gomonaj1989,Gomonaj1992,WS-Kim2003,Lukashev2008,Lukashev2010,Takenaka2014,SH-Deng2015,Zemen2017a} and magnetocaloric effects~\cite{Y-Sun2012,Matsunami2014,KW-Shi2016,Zemen2017} that stem from a strong coupling between spin, lattice, and heat. The most interesting discovery in Mn$_3X$N may be the giant negative thermal expansion that was observed in the first-order phase transition from a paramagnetic state to a noncollinear antiferromagnetic state with decreasing temperature $\mathtt{T}$. Below the N{\'e}el temperature ($\mathtt{T_{N}}$), a second-order phase transition between two different noncollinear antiferromagnetic states, which are characterized by a nearly constant volume but a change of spin configuration, possibly occurs.
Taking Mn$_3$NiN as an example~\cite{Fruchart1978}, all the spins point along the diagonals of the face if $\mathtt{T}<$163 K (the so-called $\Gamma^{5g}$ configuration), while in the temperature range of 163 K $<\mathtt{T}<$ 266 K the spins can point to the center of the triangle formed by three nearest-neighboring Mn atoms (the so-called $\Gamma^{4g}$ configuration). The $\Gamma^{5g}$ and the $\Gamma^{4g}$ spin configurations are named as R1 ($\theta=0^{\circ}$) and R3 ($\theta=90^{\circ}$) phases in this work [see Figs.~\ref{fig1}(e) and~\ref{fig1}(g), where the azimuthal angle $\theta$ measures the rotation of the spins starting from the diagonals of the face], respectively. An intermediate state ($0^{\circ}<\theta<90^{\circ}$) between the R1 and R3 phases, referred to as the R2 phase (see Fig.~\ref{fig1}(f) with $\theta=30^{\circ}$ as an example), was proposed to exist~\cite{Gomonaj1989,Gomonaj1992}. Such nontrivial magnetic orders are also believed to occur in other Mn$_3X$N compounds~\cite{Bertaut1968,Fruchart1978,Gomonaj1989,Gomonaj1992}, as recently clarified by Mochizuki \textit{et al.}~\cite{Mochizuki2018} using a classical spin model together with the replica-exchange Monte Carlo simulation. However, the details of the changes in spin configurations from the R1 phase, through the R2 phase, to the R3 phase, and how they affect the relevant physical properties (e.g., AHE and MOE), are still unclear. Moreover, although only the right-handed spin chirality was reported in the previous literature, its counterpart, the left-handed spin chirality [Figs.~\ref{fig1}(h)-(j)], could also exist, e.g., in Mn$_{3}$NiN, because of the favorable total energy for a particular $\theta$ [see Fig.~\ref{fig4}(a)].
In this work, using first-principles density functional theory together with group-theory analysis and tight-binding modelling, we systematically investigate the effect of \textit{spin order} on the intrinsic AHE as well as the MOKE and the MOFE in coplanar noncollinear AFMs Mn$_{3}X$N ($X$ = Ga, Zn, Ag, and Ni). The \textit{spin order} considered here has dual implications, i.e., spin chiralities (right- and left-handed states) and spin configurations [referring to the different spin orientations obtained by simultaneously rotating the spins within the (111) plane]. In Sec.~\ref{sec2}, we first identify the antisymmetric shape of the IAHC tensor (i.e., zero and nonzero elements) for different spin orders by a group theoretical analysis. For the right-handed spin chirality, only $\sigma_{xy}$ is nonzero (except for two particular spin configurations: $\theta=0^{\circ}$ and $180^{\circ}$); for the left-handed spin chirality, all three off-diagonal elements ($\sigma_{xy}$, $\sigma_{yz}$, and $\sigma_{zx}$) can be nonzero (except for some particular spin configurations, e.g., $\theta=0^{\circ}$ and $60^{\circ}$ for $\sigma_{xy}$, $\theta=30^{\circ}$ and $210^{\circ}$ for $\sigma_{yz}$, $\theta=120^{\circ}$ and $300^{\circ}$ for $\sigma_{zx}$). The results of the group-theory analysis are further confirmed by both tight-binding modelling (Sec.~\ref{sec3}) and first-principles calculations (Sec.~\ref{sec4-1}). In addition to the IAHC, the magnetic anisotropy energy (MAE) has also been assessed and the in-plane easy spin orientation is determined (Sec.~\ref{sec4-1}).
Considering Mn$_{3}$NiN as a prototype, we extend the study of IAHC to the optical Hall conductivity [$\sigma_{xy}(\omega)$, $\sigma_{yz}(\omega)$, $\sigma_{zx}(\omega)$] as well as the corresponding diagonal elements [$\sigma_{xx}(\omega)$, $\sigma_{yy}(\omega)$, and $\sigma_{zz}(\omega)$] (Sec.~\ref{sec4-2}). The spin order hardly affects the diagonal elements, whereas a significant dependence on the spin order is observed in the off-diagonal elements akin to the IAHC. Subsequently, in Sec.~\ref{sec4-3}, the MOKE and the MOFE are computed from the optical conductivity for all Mn$_{3}X$N ($X$ = Ga, Zn, Ag, and Ni). Kerr and Faraday spectra exhibit a distinct dependence on the spin order, which they inherit from the optical Hall conductivity. The computed Kerr and Faraday rotation angles in Mn$_{3}X$N are comparable to the ones in Mn$_{3}X$ studied in our previous work~\cite{WX-Feng2015}. The magneto-optical anisotropy, originating from the nonequivalent off-diagonal elements of optical conductivity, is explored for both right- and left-handed spin chiralities. Finally, a summary is given in Sec.~\ref{sec5}. Our work reveals that the AHE and the MOE depend strongly on the spin order in noncollinear AFMs Mn$_{3}X$N, which suggests that complex noncollinear spin structures can be uniquely classified in experiments by measuring AHE and MOE.
\begin{table*}[htpb]
\caption{The magnetic space and point groups as well as the nonzero elements of IAHC for Mn$_{3}X$N for different spin orders characterized by the azimuthal angle $\theta$ and the vector spin chirality $\kappa$. The magnetic space and point groups exhibit a period of $\pi$ ($\pi/3$) in $\theta$ for right-handed (left-handed) spin chirality. The IAHC is considered as a pseudovector, i.e., $\boldsymbol{\sigma}=[\sigma^{x},\sigma^{y},\sigma^{z}]=[\sigma_{yz},\sigma_{zx},\sigma_{xy}]$, which is expressed in the Cartesian coordinate system defined in Fig.~\ref{fig1}. The nonzero elements of IAHC are in complete accord with the tight-binding and first-principles calculations, shown in Figs.~\ref{fig2}(c), ~\ref{fig4}(b), and~\ref{fig4}(c), respectively.}
\label{tab1}
\begin{ruledtabular}
\begingroup
\setlength{\tabcolsep}{4.5pt}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{lccccccccccccccc}
\multicolumn{2}{c}{} &
\multicolumn{13}{c}{azimuthal angle $\theta$} & \\
\cline{3-15}
&$\kappa$&$0^{\circ}$&$15^{\circ}$&$30^{\circ}$&$45^{\circ}$&$60^{\circ}$&$75^{\circ}$&$90^{\circ}$&$105^{\circ}$&$120^{\circ}$&$135^{\circ}$&$150^{\circ}$&$165^{\circ}$&$180^{\circ}$&\\
\hline
magnetic space group & $+1$ & $R\bar{3}m$ & $R\bar{3}$ & $R\bar{3}$ & $R\bar{3}$ & $R\bar{3}$ & $R\bar{3}$ & $R\bar{3}m^{\prime}$ & $R\bar{3}$ & $R\bar{3}$ & $R\bar{3}$ & $R\bar{3}$ & $R\bar{3}$ & $R\bar{3}m$ \\
& $-1$ & $C2/m$ & $P\bar{1}$ & $C2^{\prime}/m^{\prime}$ & $P\bar{1}$ & $C2/m$ & $P\bar{1}$ & $C2^{\prime}/m^{\prime}$ & $P\bar{1}$ & $C2/m$ & $P\bar{1}$ & $C2^{\prime}/m^{\prime}$ & $P\bar{1}$ & $C2/m$ \\
\hline
magnetic point group & $+1$ & $\bar{3}1m$ & $\bar{3}$ & $\bar{3}$ & $\bar{3}$ & $\bar{3}$ & $\bar{3}$ & $\bar{3}1m^{\prime}$ & $\bar{3}$ & $\bar{3}$ & $\bar{3}$ & $\bar{3}$ & $\bar{3}$ & $\bar{3}1m$ \\
& $-1$ & $2/m$ & $\bar{1}$ & $2^{\prime}/m^{\prime}$ & $\bar{1}$ & $2/m$ & $\bar{1}$ & $2^{\prime}/m^{\prime}$ & $\bar{1}$ & $2/m$ & $\bar{1}$ & $2^{\prime}/m^{\prime}$ & $\bar{1}$& $2/m$ \\
\hline
nonzero elements & $+1$ & -- & $\sigma_{xy}$ & $\sigma_{xy}$ & $\sigma_{xy}$ & $\sigma_{xy}$ & $\sigma_{xy}$ & $\sigma_{xy}$ & $\sigma_{xy}$ & $\sigma_{xy}$ & $\sigma_{xy}$ & $\sigma_{xy}$ & $\sigma_{xy}$ & -- \\
of IAHC & $-1$ & \vtop{\hbox{\strut $\;\:$--}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\sigma_{xy}$}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\sigma_{xy}$}\hbox{\strut $\;\:$--}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\sigma_{xy}$}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\;\:$--}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\sigma_{xy}$}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\sigma_{xy}$}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\sigma_{xy}$}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\;\:$--}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\;\:$--}} & \vtop{\hbox{\strut $\sigma_{xy}$}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\sigma_{xy}$}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\sigma_{xy}$}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}} & \vtop{\hbox{\strut $\;\:$--}\hbox{\strut $\sigma_{yz}$}\hbox{\strut $\sigma_{zx}$}}
\end{tabular}
\endgroup
\end{ruledtabular}
\end{table*}
\section{Group theory analysis}\label{sec2}
In this section, we determine the magnetic space and point groups of Mn$_{3}X$N for given spin orders, and then identify the nonzero elements of IAHC from group theory. The magnetic groups computed with the \textsc{isotropy} code~\cite{Stokes,Stokes2005} are listed in Table~\ref{tab1}, from which one can observe that the magnetic groups vary with the azimuthal angle $\theta$ with a period of $\pi$ for the right-handed spin chirality, but with a period of $\pi/3$ for the left-handed one. This indicates that only a finite number of magnetic groups needs to be analyzed. Furthermore, it is sufficient to restrict the analysis to magnetic point groups since the IAHC~\cite{Kubo1957,CS-Wang1974,YG-Yao2004},
\begin{equation}\label{eq:IAHC}
\sigma_{\alpha\beta} = -\dfrac{e^{2}}{\hbar}\int_{BZ}\frac{d^{3}k}{(2\pi)^{3}}\Omega_{\alpha\beta}(\bm{k}),
\end{equation}
is translationally invariant. In the above expression $\Omega_{\alpha\beta}(\bm{k})=\sum_{n}f_{n}(\bm{k})\Omega_{n,\alpha\beta}(\bm{k})$ is the momentum-space Berry curvature, with the Fermi-Dirac distribution function $f_{n}(\bm{k})$ and the band-resolved Berry curvature
\begin{equation}\label{eq:BerryCur}
\Omega_{n,\alpha\beta}\left(\bm{k}\right) = -2 \mathrm{Im}\sum_{n^{\prime} \neq n}\frac{\left\langle \psi_{n\bm{k}}\right|\hat{v}_{\alpha}\left| \psi_{n^{\prime}\bm{k}} \right\rangle \left\langle \psi_{n^{\prime}\bm{k}}\right|\hat{v}_{\beta}\left|\psi_{n\bm{k}} \right\rangle}{\left(\omega_{n^{\prime}\bm{k}}-\omega_{n\bm{k}}\right)^{2}}.
\end{equation}
Here $\hat{v}_{\alpha}$ is the velocity operator along the $\alpha$th Cartesian direction, and $\psi_{n\bm{k}}$ ($\hbar\omega_{n\bm{k}}=\epsilon_{n\bm{k}}$) is the eigenvector (eigenvalue) to the band index $n$ and the momentum $\bm{k}$. Since the IAHC and the Berry curvature can be regarded as pseudovectors, just like spin, their vector-form notations $\boldsymbol{\sigma}=[\sigma^{x},\sigma^{y},\sigma^{z}]=[\sigma_{yz},\sigma_{zx},\sigma_{xy}]$ and $\boldsymbol{\Omega}_{n}=[\Omega_{n}^{x},\Omega_{n}^{y},\Omega_{n}^{z}]=[\Omega_{n,yz},\Omega_{n,zx},\Omega_{n,xy}]$ are used here for convenience.
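For illustration, equation~\eqref{eq:BerryCur} can be evaluated numerically for any model Bloch Hamiltonian by diagonalization, with the velocity operators obtained as finite-difference derivatives $\hat{v}_{\alpha}=\hbar^{-1}\partial\hat{H}/\partial k_{\alpha}$. The sketch below (setting $\hbar=1$, and emphatically not the first-principles machinery used in this work) demonstrates this for a gapped two-band Dirac model.
\begin{verbatim}
import numpy as np

def berry_curvature_xy(Hk, kx, ky, n, dk=1e-5):
    # Band-resolved Omega_{n,xy}(k) from the Kubo formula, hbar = 1.
    E, U = np.linalg.eigh(Hk(kx, ky))
    vx = (Hk(kx + dk, ky) - Hk(kx - dk, ky)) / (2 * dk)
    vy = (Hk(kx, ky + dk) - Hk(kx, ky - dk)) / (2 * dk)
    vx, vy = U.conj().T @ vx @ U, U.conj().T @ vy @ U  # eigenbasis
    return sum(-2.0 * np.imag(vx[n, m] * vy[m, n]) / (E[m] - E[n])**2
               for m in range(len(E)) if m != n)

# Toy check: gapped Dirac model H = kx*sx + ky*sy + m*sz.
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.diag([1.0, -1.0]).astype(complex)
H = lambda kx, ky: kx * sx + ky * sy + 0.5 * sz
print(berry_curvature_xy(H, 0.1, -0.2, n=0))  # lower band
\end{verbatim}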
Let us start with the right-handed spin chirality by considering the three non-repetitive magnetic point groups: $\bar{3}1m$ [$\theta=n\pi$], $\bar{3}1m^{\prime}$ [$\theta=(n +\frac{1}{2})\pi$], and $\bar{3}$ [$\theta \neq n\pi \text{ and }\theta \neq (n +\frac{1}{2})\pi$] with $n\in\mathbb{N}$ (see Tab.~\ref{tab1}). First, $\bar{3}1m$ belongs to the type-I magnetic point group, i.e., it is identical to the crystallographic point group $D_{3d}$. As seen from Fig.~\ref{fig1}(e), it has one three-fold rotation axis ($C_{3}$), three two-fold rotation axes ($C_{2}^{(1)}$, $C_{2}^{(2)}$, and $C_{2}^{(3)}$) and three mirror planes ($M^{(1)}$, $M^{(2)}$, and $M^{(3)}$). As mentioned before, $\boldsymbol{\Omega}_{n}$ is a pseudovector, and the mirror operation $M^{(1)}$ (parallel to the $yz$ plane) changes the sign of $\Omega_{n}^{y}$ and $\Omega_{n}^{z}$, but preserves $\Omega_{n}^{x}$. This indicates that $\Omega_{n}^{y}$ and $\Omega_{n}^{z}$ are odd functions along the $k_{x}$ direction in momentum space, while $\Omega_{n}^{x}$ is an even function. Correspondingly, integrating the Berry curvature over the entire Brillouin zone should give $\boldsymbol{\sigma}=[\sigma^{x},0,0]$. The role of $C_{2}^{(1)}$ is the same as that of $M^{(1)}$. The other two mirror (two-fold rotation) symmetries are related to $M^{(1)}$ ($C_{2}^{(1)}$) by the $C_{3}$ rotation, which transforms $[\sigma^{x},0,0]$ into $[-\frac{1}{2}\sigma^{x},-\frac{\sqrt{3}}{2}\sigma^{x},0]$ and $[-\frac{1}{2}\sigma^{x},\frac{\sqrt{3}}{2}\sigma^{x},0]$. Therefore, all components of IAHC are zero, i.e., $\boldsymbol{\sigma}=[0,0,0]$, owing to the symmetries of the group $\bar{3}1m$. Second, $\bar{3}$ is also a type-I magnetic point group, which is identical to the crystallographic point group $C_{3i}$. Compared to $D_{3d}$, all $C_{2}$ and $M$ operations are absent whereas only the $C_{3}$ operation is left [see Fig.~\ref{fig1}(f)]. In this situation, the components of $\boldsymbol{\sigma}$ that are normal to the $C_{3}$ axis disappear due to the cancellations of $\Omega_{n}^{x}$ and $\Omega_{n}^{y}$ in the $k_{x}$--$k_{y}$ plane. This gives rise to $\boldsymbol{\sigma}=[0,0,\sigma^{z}]$. Finally, $\bar{3}1m^{\prime}=C_{3i} \oplus T(D_{3d}-C_{3i})$ is a type-III magnetic point group as it contains operations combining time and space symmetries. Here, $T(D_{3d}-C_{3i})$ is the set of three $TM$ and three $TC_{2}$ operations depicted in Fig.~\ref{fig1}(g). With respect to the mirror symmetry $M^{(1)}$, $\Omega_{n}^{x}$ is even but $\Omega_{n}^{y}$ and $\Omega_{n}^{z}$ are odd; with respect to the time-reversal symmetry $T$, all of $\Omega_{n}^{x}$, $\Omega_{n}^{y}$, and $\Omega_{n}^{z}$ are odd; hence, with respect to the $TM^{(1)}$ symmetry, $\Omega_{n}^{x}$ is odd but $\Omega_{n}^{y}$ and $\Omega_{n}^{z}$ are even, resulting in $\boldsymbol{\sigma}=[0,\sigma^{y},\sigma^{z}]$. $TC_{2}^{(1)}$ plays the same role, just like $TM^{(1)}$ does. The other two $TM$ ($TC_{2}$) symmetries are related to $TM^{(1)}$ ($TC_{2}^{(1)}$) by the $C_{3}$ rotation in the subgroup $C_{3i}$, which forces $\sigma^{y}$ to be zero but allows finite $\sigma^{z}$. Thus, the IAHC tensor shape is $\boldsymbol{\sigma}=[0,0,\sigma^{z}]$ in the magnetic point group $\bar{3}1m^{\prime}$. To summarize, for the right-handed spin chirality only $\sigma^{z}$ can be nonzero, except for $\theta=n\pi$ where all components of the IAHC vanish.
Next, we turn to the left-handed spin chirality, which also has three non-repetitive magnetic point groups: $2/m$ [$\theta=n\frac{\pi}{3}$], $2^{\prime}/m^{\prime}$ [$\theta=(n +\frac{1}{2})\frac{\pi}{3}$], and $\bar{1}$ [$\theta \neq n\frac{\pi}{3} \text{ and }\theta \neq (n +\frac{1}{2})\frac{\pi}{3}$] with $n\in\mathbb{N}$ (see Tab.~\ref{tab1}). First, $2/m$ is a type-I magnetic point group, identical to the crystallographic point group $C_{2h}$ that contains one two-fold rotation axis ($C_{2}$) and one mirror plane ($M$) [see Fig.~\ref{fig1}(h)]. As mentioned before, the $M$ symmetry allows only those components of the IAHC that are perpendicular to the mirror plane (i.e., along the present $C_{2}$ axis); therefore, $\sigma^{z}$ is always zero, whereas $\sigma^{x}$ and $\sigma^{y}$ are generally finite for $\theta= 0^{\circ}$ (in the present Cartesian coordinates). If $\theta= \frac{2\pi}{3}$ or $\frac{5\pi}{3}$, the mirror plane is parallel to the $yz$ plane and renders only $\sigma^{x}$ potentially nonzero. Similarly, $\bar{1}$ is also a type-I magnetic point group that is identical to the crystallographic group $C_{i}$. Since all components $\Omega_{n}^{x}$, $\Omega_{n}^{y}$, and $\Omega_{n}^{z}$ are even with respect to the spatial inversion symmetry $I$, the group $C_{i}$ imposes no restrictions on the shape of $\boldsymbol{\sigma}$, allowing all components to be finite. Finally, $2^{\prime}/m^{\prime}=C_{i} \oplus T(C_{2h}-C_{i})$ is a type-III magnetic point group containing one $TM$ and one $TC_{2}$ operation [see Figs.~\ref{fig1}(i) and~\ref{fig1}(j)]. There are two scenarios: if $\theta= \frac{\pi}{6}$ [Fig.~\ref{fig1}(i)], the $TM$ (or $TC_{2}$) symmetry forces $\sigma^{x}$ to vanish but allows nonzero $\sigma^{y}$ and $\sigma^{z}$; if $\theta= \frac{\pi}{2}$ [Fig.~\ref{fig1}(j)], the principal axis of both symmetry operations changes ($M$ is parallel to neither the $yz$ nor the $zx$ plane) such that all entries $\sigma^{x}$, $\sigma^{y}$, and $\sigma^{z}$ are finite. The other cases of $\theta= \frac{7\pi}{6}$ and $\frac{5\pi}{6}$ are identical to $\theta= \frac{\pi}{6}$ and $\frac{\pi}{2}$, respectively. In summary, all tensor components of $\boldsymbol{\sigma}$ are allowed (except for some particular $\theta$) for the left-handed spin chirality, owing to the reduced symmetry as compared to systems with right-handed spin chirality.
In the above discussion, all zero and potentially nonzero elements of the IAHC tensor are identified based on the underlying magnetic point groups. Alternatively, these results can also be obtained by following the Neumann principle, i.e., by applying all symmetry operations of the corresponding point group to the conductivity tensor~\cite{Seemann2015}. This method has been implemented in a computer program~\cite{Zelezny2017a,Zelezny2018a}, which generates the shape of linear response tensors (IAHC or intrinsic spin Hall conductivity) in a given coordinate system. Another useful analysis tool is the so-called cluster multipole theory~\cite{Suzuki2017,Suzuki2018}, which is capable of uncovering the hidden AHE in AFMs by evaluating the cluster multipole moment that behaves as a macroscopic magnetic order. For instance, although the cluster dipole moments (i.e., the net magnetization from the conventional understanding) vanish in noncollinear AFMs (e.g., Mn$_{3}X$ and Mn$_{3}Y$), the emerging cluster octupole moments lead to a finite AHE.
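To make the preceding symmetry arguments concrete, the following minimal sketch (our own illustration, not the program of Refs.~\onlinecite{Zelezny2017a,Zelezny2018a}) applies the Neumann principle directly to the Hall pseudovector: under a magnetic point-group element $(R,T)$, the time-odd pseudovector transforms as $\boldsymbol{\sigma}\to(-1)^{T}\det(R)\,R\boldsymbol{\sigma}$, so averaging this representation over the group yields a projector onto the symmetry-allowed components.
\begin{verbatim}
# Minimal sketch: Neumann-principle projector for the time-odd Hall
# pseudovector, sigma -> (-1)^T det(R) R sigma under a magnetic
# point-group element (R, T).
import numpy as np

def rot_z(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def close_group(gens):
    # brute-force closure of a finite matrix group
    G, frontier = [np.eye(3)], list(gens)
    while frontier:
        g = frontier.pop()
        if not any(np.allclose(g, m) for m in G):
            G.append(g)
            frontier += [g @ m for m in G] + [m @ g for m in G]
    return G

def projector(ops):
    # ops: list of (R, T); the group average of (-1)^T det(R) R projects
    # onto the symmetry-allowed subspace of the Hall pseudovector
    P = sum((-1.0 if T else 1.0) * np.linalg.det(R) * R for R, T in ops)
    return P / len(ops)

C3  = rot_z(2 * np.pi / 3)
C2x = np.diag([1.0, -1.0, -1.0])    # two-fold rotation about x
inv = -np.eye(3)                    # spatial inversion

D3d = close_group([C3, C2x, inv])   # point group of -31m (order 12)
C3i = close_group([C3, inv])        # point group of -3   (order 6)

print(np.round(projector([(R, False) for R in D3d]), 3))  # -31m : zero
print(np.round(projector([(R, False) for R in C3i]), 3))  # -3   : only s^z
antiunitary = [(R, True) for R in D3d
               if not any(np.allclose(R, m) for m in C3i)]
print(np.round(projector([(R, False) for R in C3i] + antiunitary), 3))
# -31m': only s^z survives
\end{verbatim}
Evaluated for the groups of Tab.~\ref{tab1} with the appropriate orientation of the symmetry axes, the same projector reproduces the tensor shapes derived above for both spin chiralities.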
\begin{figure*}
\includegraphics[width=2\columnwidth]{Fig2}
\caption{(Color online) (a)~Band structures of the kagome lattice with the spin orders $\kappa=+1$ and $\theta=0^{\circ}$, $30^{\circ}$, $90^{\circ}$. (b)~Band structures of the kagome lattice with the spin orders $\kappa=-1$ and $\theta=0^{\circ}$, $15^{\circ}$, $30^{\circ}$. (c)~IAHC of the kagome lattice as a function of $\theta$ for $\kappa=\pm1$ states for the three positions of the Fermi energy $E_{F}$ at 1.8~eV (top panel), 0~eV (middle panel), and $-$1.8~eV (bottom panel). The curves of the $\kappa=-1$ state (green lines) are scaled by a factor of 10. (d)-(g)~Berry curvature $\Omega_{xy}(\bm{k})$ with $\kappa=+1$ and $\theta=0^{\circ}$, $30^{\circ}$, $90^{\circ}$, $270^{\circ}$ at $E_{F}=-1.8$~eV. (h)-(k)~Berry curvature $\Omega_{xy}(\bm{k})$ with $\kappa=-1$ and $\theta=0^{\circ}$, $15^{\circ}$, $30^{\circ}$, $90^{\circ}$ at $E_{F}=-1.8$~eV. Dotted lines in panels (d)-(k) indicate the first Brillouin zone.}
\label{fig2}
\end{figure*}
\section{Tight-binding model}\label{sec3}
Group theory is particularly powerful for identifying the tensor shape of the IAHC, but it provides no insight into the magnitude of the allowed elements, which depends strongly on details of the electronic structure. In this light, tight-binding models and first-principles calculations are valuable tools for arriving at quantitative predictions. In this section, we consider a double-exchange $s$-$d$ model that describes itinerant $s$ electrons interacting with local $d$ magnetic moments on the kagome lattice, which corresponds to the (111) plane of cubic Mn$_{3}X$N. Following Ref.~\onlinecite{H-Chen2014}, the Hamiltonian is written as
\begin{eqnarray}\label{eq:Hamiltonian}
H & = & t\sum_{\left<ij\right>\alpha}c_{i\alpha}^{\dagger}c_{j\alpha}-J\sum_{i\alpha\beta}\left(\boldsymbol{\tau}_{\alpha\beta}\cdot\boldsymbol{S}_{i}\right)c_{i\alpha}^{\dagger}c_{i\beta} \nonumber \\
& & + it_{\text{SO}}\sum_{\left<ij\right>\alpha\beta}\nu_{ij}\left(\boldsymbol{\tau}_{\alpha\beta}\cdot\boldsymbol{n}_{ij}\right)c_{i\alpha}^{\dagger}c_{i\beta},
\end{eqnarray}
where $c_{i\alpha}^{\dagger}$ ($c_{i\alpha}$) is the electron creation (annihilation) operator on site $i$ with spin $\alpha$, $\boldsymbol{\tau}$ is the vector of Pauli matrices, and $\left\langle ij\right\rangle$ restricts the summation to nearest-neighbor sites. The first term is the nearest-neighbor hopping with the transfer integral $t$. The second term is the on-site exchange coupling between the conduction electron and the localized spin moment $\boldsymbol{S}_{i}$, with $J$ the Hund's coupling strength. The third term describes the SOC with coupling strength $t_{\text{SO}}$, where $\nu_{ij}$ is the antisymmetric 2D Levi-Civita symbol (with $\nu_{12}=\nu_{23}=\nu_{31}=1$) and $\boldsymbol{n}_{ij}$ is an in-plane vector perpendicular to the line from site $j$ to site $i$~\cite{H-Chen2014}. In the following calculations, we set $J=1.7t$ and $t_{\text{SO}}=0.2t$.
We first discuss the band structure, the IAHC, and the Berry curvature of the system with right-handed spin chirality ($\kappa=+1$), plotted in Figs.~\ref{fig2}(a),~\ref{fig2}(c), and~\ref{fig2}(d-g), respectively. The band structure changes significantly from $\theta=0^{\circ}$ (R1 phase, $\bar{3}1m$), to $30^{\circ}$ (R2 phase, $\bar{3}$), and to $90^{\circ}$ (R3 phase, $\bar{3}1m^{\prime}$). If $\theta=0^{\circ}$, two band crossings around 1.8 eV appear at the $K$ point and along the $M$-$K$ path, respectively. This band structure is identical to the one without SOC~\cite{H-Chen2014}, because the SOC term in Eq.~\eqref{eq:Hamiltonian} has no effect for the spin configuration with $\theta=0^{\circ}$, in the sense that the left-handed and right-handed environments of an electron hopping between nearest neighbors are equivalent. Accordingly, the Berry curvature $\Omega_{xy}(\bm{k})$ vanishes everywhere in the Brillouin zone [Fig.~\ref{fig2}(d)]. The band degeneracy is lifted when $\theta\neq0^{\circ}$, and with increasing $\theta$ the band gap at the $K$ point widens significantly, while the one at the $M$ point shrinks slightly. In order to disentangle the dependence of the IAHC on the band structure, the IAHC is calculated at different Fermi energies ($E_{F}$), namely 1.8~eV, 0~eV, and $-$1.8~eV, as shown in Fig.~\ref{fig2}(c). In all three cases, the IAHC exhibits a period of $2\pi$ in $\theta$, and the values for $E_{F}=\pm1.8$~eV are two orders of magnitude larger than the ones at $E_{F}=0$~eV. The large IAHC originates from the small band gap at the $M$ point, since the Berry curvature shows sharp peaks there [see Figs.~\ref{fig2}(e-g)]. For $E_{F}=-1.8$~eV and $0$~eV, the largest IAHC occurs for $\theta=90^{\circ}$ and $270^{\circ}$. The case of $E_{F}=1.8$ eV is special since the IAHC is quantized to $\pm2e^{2}/\hbar$ in a broad range of $\theta$, revealing the presence of a quantum anomalous Hall state in coplanar noncollinear AFMs.
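For reference, the quantities entering this discussion can be evaluated numerically for any Bloch Hamiltonian $H(\bm{k})$. The sketch below is our own illustration (with $\hbar=e=1$; the sign and prefactor conventions may differ from Eqs.~\eqref{eq:IAHC} and~\eqref{eq:BerryCur}), obtaining the velocity operators from finite differences of $H(\bm{k})$:
\begin{verbatim}
# Minimal sketch: band energies, Berry curvature Omega_{n,xy}(k), and the
# IAHC for a generic Bloch Hamiltonian Hk(kx, ky); hbar = e = 1.
import numpy as np

def bands_and_curvature(Hk, kx, ky, dk=1e-5):
    # v_alpha = dH/dk_alpha from central finite differences,
    # rotated into the band basis
    E, U = np.linalg.eigh(Hk(kx, ky))
    vx = U.conj().T @ ((Hk(kx + dk, ky) - Hk(kx - dk, ky)) / (2 * dk)) @ U
    vy = U.conj().T @ ((Hk(kx, ky + dk) - Hk(kx, ky - dk)) / (2 * dk)) @ U
    dE = E[:, None] - E[None, :]
    np.fill_diagonal(dE, np.inf)        # exclude the n = n' terms
    Omega = -2.0 * np.imag((vx * vy.T) / dE**2).sum(axis=1)
    return E, Omega

def iahc_xy(Hk, E_F, b1, b2, N=200):
    # sigma_xy (in units of e^2/hbar) as a Brillouin-zone sum of the
    # Berry curvature over occupied bands, N x N grid spanned by b1, b2
    total = 0.0
    for i in range(N):
        for j in range(N):
            kx, ky = (i / N) * b1 + (j / N) * b2
            E, Omega = bands_and_curvature(Hk, kx, ky)
            total += Omega[E < E_F].sum()
    cell = abs(b1[0] * b2[1] - b1[1] * b2[0]) / N**2
    return -total * cell / (2 * np.pi)**2
\end{verbatim}
Feeding in the Bloch Hamiltonian corresponding to Eq.~\eqref{eq:Hamiltonian} on a dense $k$-grid should reproduce the qualitative behavior shown in Fig.~\ref{fig2}(c).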
For the left-handed spin chirality ($\kappa=-1$), the band structure, the IAHC, and the Berry curvature are plotted in Figs.~\ref{fig2}(b),~\ref{fig2}(c), and~\ref{fig2}(h-k), respectively. The band structure hardly changes from $\theta=0^{\circ}$ ($2/m$), to $15^{\circ}$ ($\bar{1}$), and to $30^{\circ}$ ($2^{\prime}/m^{\prime}$). If $\theta=0^{\circ}$, the Berry curvature $\Omega_{xy}(\bm{k})$ is odd for the group $2/m$ [Fig.~\ref{fig2}(h)], such that the IAHC $\sigma_{xy}$ is zero when integrating $\Omega_{xy}(\bm{k})$ over the entire Brillouin zone. With increasing $\theta$, the IAHC reaches its maximum at $\theta=30^{\circ}$ and exhibits a period of $\frac{2\pi}{3}$ [Fig.~\ref{fig2}(c)]. Similarly to the $\kappa=+1$ state, the IAHC at $E_{F}=\pm1.8$~eV is two orders of magnitude larger than at $E_{F}=0$~eV. However, the IAHC of the $\kappa=-1$ state is much smaller than that of the $\kappa=+1$ state [Fig.~\ref{fig2}(c)]. This is understood based on the Berry curvature shown in Figs.~\ref{fig2}(i-k), which reveals that $\Omega_{xy}(\bm{k})$ at the three $M$ points has different signs (two negative and one positive, or two positive and one negative) due to the reduced symmetry in the $\kappa=-1$ state, in contrast to the same sign in the $\kappa=+1$ state [Figs.~\ref{fig2}(e-g)].
The tight-binding model used here is constructed on a two-dimensional kagome lattice, for which the $\sigma_{yz}$ and $\sigma_{zx}$ components vanish. Although the model is rather simple, the following qualitative results are useful: (1) the IAHC turns out to be large if the Fermi energy lies in a small band gap, as outlined in previous theoretical work~\cite{YG-Yao2004}; (2) $\sigma_{xy}$ has a period of $2\pi$ ($\frac{2\pi}{3}$) in $\theta$ for right-handed (left-handed) spin chirality; (3) for structures with right-handed spin chirality, $\sigma_{xy}$ is much larger than for the left-handed case.
\section{First-principles calculations}\label{sec4}
In this section, by computing explicitly the electronic structure of the Mn$_3X$N compounds with different spin orders, we first demonstrate that key properties of these systems follow the qualitative conclusions drawn from the discussed tight-binding model. Then, we present the values of the computed magnetic anisotropy energy (MAE) and the IAHC of the Mn$_{3}X$N compounds. The obtained in-plane easy spin orientations are consistent with previous reports~\cite{Mochizuki2018}, while the IAHC is found to depend strongly on the spin order, in agreement with the above tight-binding results. Taking Mn$_{3}$NiN as an example, we further discuss the longitudinal and transverse optical conductivity, which are key to evaluating the MOE. Finally, the spin-order dependent MOKE and MOFE as well as their anisotropy are explored. Computational details of the first-principles calculations are given in Appendix~\ref{appendix}.
\begin{table}[b!]
\caption{Magnetic anisotropy constant ($K_\text{eff}$) and the maximum of IAHC for Mn$_{3}X$N ($X$ = Ga, Zn, Ag, and Ni). The IAHC is listed in the order of $\sigma_{yz}$, $\sigma_{zx}$, and $\sigma_{xy}$. For the $\kappa=+1$ state, $\sigma_{xy}$ reaches its maximum at $\theta=90^{\circ}$. For the $\kappa=-1$ state, $\sigma_{yz}$, $\sigma_{zx}$, and $\sigma_{xy}$ reach their maxima at $\theta=120^{\circ}$, $30^{\circ}$, and $30^{\circ}$, respectively.}
\label{tab2}
\begin{ruledtabular}
\begingroup
\setlength{\tabcolsep}{4.5pt}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{ccccc}
\multicolumn{1}{c}{} &
\multicolumn{2}{c}{$K_\text{eff}$ (meV/cell)} &
\multicolumn{2}{c}{IAHC (S/cm)} \\
\cline{2-3}
\cline{4-5}
System & $\kappa=+1$ & $\kappa=-1$ & $\kappa=+1$ & $\kappa=-1$ \\
\hline
Mn$_{3}$GaN & 0.52 & 0.26 & 0, 0, $-$99 & 59, $-$67, $-$5 \\
Mn$_{3}$ZnN & 0.43 & 0.21 & 0, 0, $-$232 & 156, $-$174, 23 \\
Mn$_{3}$AgN & 0.15 & 0.08 & 0, 0, $-$359 & 344, $-$314, 72 \\
Mn$_{3}$NiN & $-$0.18 & $-$0.09 & 0, 0, $-$301 & 149, $-$134, 5 \\
\end{tabular}
\endgroup
\end{ruledtabular}
\end{table}
\begin{figure}
\includegraphics[width=\columnwidth]{Fig3}
\caption{(Color online) The first-principles band structures of (a,b)~Mn$_{3}$ZnN and (c,d)~Mn$_{3}$NiN for different spin orders ($\theta=0^{\circ}$, $30^{\circ}$, and $90^{\circ}$ for the right-handed state with $\kappa=+1$, and $\theta=0^{\circ}$, $15^{\circ}$, and $30^{\circ}$ for the left-handed state of opposite spin chirality). The $k$-path lies within the (111) plane.}
\label{fig3}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{Fig4}
\caption{(Color online) (a) Magnetic anisotropy energy of Mn$_{3}X$N ($X$ = Ga, Zn, Ag, and Ni) as a function of the azimuthal angle $\theta$. The results for left-handed spin chirality ($\kappa=-1$) are shown only for Mn$_{3}$ZnN and Mn$_{3}$NiN as representatives. The solid and dotted lines represent $\text{MAE}(\theta)=K_{\text{eff}}\sin^{2}(\theta)$ and $\text{MAE}=K_{\text{eff}}/2$, respectively. (b), (c) The IAHC of Mn$_{3}$ZnN and Mn$_{3}$NiN as a function of the azimuthal angle $\theta$ for both right-handed ($\kappa=+1$) and left-handed ($\kappa=-1$) spin chiralities. The solid and dotted lines are polynomial fits to the data.}
\label{fig4}
\end{figure}
\subsection{Electronic structure}\label{sec4-0}
Figure~\ref{fig3} illustrates the first-principles band structures of the Mn$_{3}X$N systems, taking Mn$_{3}$ZnN and Mn$_{3}$NiN as two prototypical examples. While the electronic structure of the left-handed state with $\kappa=-1$ hardly changes as the spin-rotation angle $\theta$ is tuned, the right-handed state of opposite vector spin chirality is rather sensitive to details of the noncollinear spin configuration. Specifically, the calculated electronic structure for the $\kappa=+1$ state reveals that the band degeneracy (e.g., at the $\Gamma$ point) is lifted for $\theta\neq0^{\circ}$, and the magnitude of the band splitting increases with the spin-rotation angle. These features are in very good qualitative agreement with the tight-binding results [see Figs.~\ref{fig2}(a) and~\ref{fig2}(b)], which is rooted in the fact that the (111) planes of Mn$_{3}X$N compounds and the 2D kagome lattice considered in the previous sections have common symmetries.
\subsection{Intrinsic anomalous Hall conductivity and magnetic anisotropy energy}\label{sec4-1}
The MAE is one of the most important parameters characterizing a magnetic material. In FMs, the MAE refers to the total energy difference between easy- and hard-axis magnetization directions. In the noncollinear AFMs that we consider here, we define the MAE as the total energy difference between different spin orders, given by
\begin{equation}\label{eq:MAE}
\text{MAE}(\theta)=E_{\kappa=\pm1,\theta\neq0^{\circ}}-E_{\kappa=+1,\theta=0^{\circ}},
\end{equation}
where the spin order with $\kappa=+1$ and $\theta=0^{\circ}$ is set as the reference state. The calculated MAE of Mn$_{3}X$N is plotted in Fig.~\ref{fig4}(a). For the $\kappa=+1$ state, the MAE can be fitted well to the uniaxial anisotropy $K_{\text{eff}}\sin^{2}(\theta)$, where $K_{\text{eff}}$ is the magnetic anisotropy constant listed in Tab.~\ref{tab2}. Compared to traditional Mn-based alloys, the value of $K_{\text{eff}}$ in Mn$_{3}X$N is comparable in magnitude to that of MnPt (0.51 meV/cell)~\cite{Umetsu2006}, MnPd ($-$0.57 meV/cell)~\cite{Umetsu2006}, MnNi ($-$0.29 meV/cell)~\cite{Umetsu2006}, and MnRh ($-$0.63 meV/cell)~\cite{Umetsu2006}, but is one order of magnitude smaller than in MnIr ($-$7.05 meV/cell)~\cite{Umetsu2006}, Mn$_3$Pt (2.8 meV/cell)~\cite{Kota2008}, and Mn$_3$Ir (10.42 meV/cell)~\cite{Szunyogh2009}. For the $\kappa=-1$ state, the MAE is approximately constant with a value of $K_{\text{eff}}/2$, indicating a vanishing in-plane anisotropy energy, which allows a relatively easy rotation of the spins within the (111) plane. This feature has also been found in other noncollinear AFMs such as Mn$_{3}$Ir~\cite{Szunyogh2009}, Mn$_{3}$Ge~\cite{Nagamiya1982}, and Mn$_{3}$Sn~\cite{Nagamiya1982,Tomiyoshi1982,Nakatsuji2015}.
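The uniaxial fit itself is a one-parameter least-squares problem with a closed-form solution; a brief sketch (hypothetical helper, with $\theta$ in radians):
\begin{verbatim}
import numpy as np

def fit_keff(theta, mae):
    # least-squares K_eff for MAE(theta) = K_eff * sin^2(theta)
    s2 = np.sin(theta) ** 2
    return float(s2 @ mae) / float(s2 @ s2)
\end{verbatim}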
\begin{figure*}
\includegraphics[width=0.95\textwidth]{Fig5}
\caption{(Color online) Energy dependence of the optical conductivity in Mn$_{3}$NiN. (a-b)~Real and imaginary parts of $\sigma_{xx}$ for the $\kappa=+1$ state. Characteristic peaks and valleys are marked by black arrows. (c-d)~Real and imaginary parts of $\sigma_{xy}$ for the $\kappa=+1$ state. (e-f)~The real and imaginary parts of $\sigma_{xx}$ for the $\kappa=-1$ state. (g-h), (i-j), (k-l)~The real and imaginary parts of $\sigma_{xy}$, $\sigma_{yz}$, and $\sigma_{zx}$ for the $\kappa=-1$ state.}
\label{fig5}
\end{figure*}
Figure~\ref{fig4}(a) reveals that $\text{MAE}(\theta)=\text{MAE}(\theta+\pi)$, implying that the ground state of the $120^{\circ}$ triangular spin order has a discrete two-fold degeneracy~\cite{H-Chen2014}. For the $\kappa=+1$ state, Mn$_{3}$GaN and Mn$_{3}$ZnN clearly prefer the R1 phase ($\theta=0^{\circ}$ or $180^{\circ}$), which is in full accordance with the $\Gamma^{5g}$ spin configuration identified in Ref.~\onlinecite{Mochizuki2018} using a classical spin model with frustrated exchange interactions and magnetic anisotropy. As the spin configuration is closely related to the number of valence electrons $n_{\nu}$ in the $X$ ion, Mochizuki \textit{et al.}~\cite{Mochizuki2018} proposed a mixture of the $\Gamma^{5g}$ and the $\Gamma^{4g}$ spin patterns in Mn$_{3}$AgN and Mn$_{3}$NiN due to the smaller $n_{\nu}$ (weaker $X$-ion crystal field) as compared to that of Mn$_{3}$GaN and Mn$_{3}$ZnN. In the present calculations, Mn$_{3}$AgN still hosts the $\Gamma^{5g}$ spin configuration but has a much smaller MAE compared to Mn$_{3}$GaN and Mn$_{3}$ZnN, while Mn$_{3}$NiN favors the $\Gamma^{4g}$ spin configuration (R3 phase, $\theta=90^{\circ}$ or $270^{\circ}$). Our calculated MAE is a monotonic function of $n_{\nu}$, i.e., Ni $(n_{\nu}=0)<$ Ag $(n_{\nu}=1)<$ Zn $(n_{\nu}=2)<$ Ga $(n_{\nu}=3)$, which provides a clear interpretation for the systematic evolution of the magnetic orders in Mn$_{3}X$N. On the other hand, the $\kappa=-1$ state of Mn$_{3}X$N has not been considered in previous works, while we find that it could exist for particular values of $\theta$. For example, the $\kappa=-1$ state in Mn$_{3}$NiN is energetically favorable in three ranges of $\theta$: $[0^{\circ},45^{\circ})$, $(135^{\circ},225^{\circ})$, and $(315^{\circ},360^{\circ}]$. In light of recent experiments on Mn$_{3}$Sn~\cite{Nakatsuji2015,Ikhlas2017} and Mn$_{3}$Ge~\cite{Nayak2016,Kiyohara2016}, an external magnetic field may be used to tune the spin orientation by coupling to the weak in-plane magnetic moment. This finding enriches the spectrum of possible magnetic orders in Mn$_{3}X$N compounds.
The IAHC of Mn$_{3}$ZnN and Mn$_{3}$NiN with different spin orders is illustrated in Figs.~\ref{fig4}(b) and~\ref{fig4}(c), respectively. The component $\sigma_{xy}$ displays a period of $2\pi$ ($\frac{2\pi}{3}$) in $\theta$ for the $\kappa=+1$ ($\kappa=-1$) state, and its magnitude in the $\kappa=+1$ state is much larger than in the $\kappa=-1$ state, in excellent agreement with the tight-binding results. From the group theoretical analysis we showed that $\sigma_{yz}$ and $\sigma_{zx}$ are allowed in the $\kappa=-1$ state, which is confirmed by our first-principles results. Moreover, we observe that both $\sigma_{yz}$ and $\sigma_{zx}$ display a period of $2\pi$ in $\theta$ and that their magnitudes are much larger than that of $\sigma_{xy}$. The maxima of the IAHC for the $\kappa=\pm1$ states are summarized in Tab.~\ref{tab2}. Overall, the obtained magnitude of the IAHC in the studied family of compounds is comparable to or even larger than that in other noncollinear AFMs like Mn$_3X$~\cite{H-Chen2014,Y-Zhang2017} and Mn$_{3}Y$~\cite{Kubler2014,GY-Guo2017,Y-Zhang2017,Nakatsuji2015,Nayak2016,Kiyohara2016,Ikhlas2017}. In contrast to the MAE, the IAHC follows the relation $\boldsymbol{\sigma}(\theta)=-\boldsymbol{\sigma}(\theta+\pi)$, which reflects the fact that the spin state at $\theta+\pi$ is the time-reversed counterpart of the order at $\theta$ and that the IAHC is odd under time-reversal symmetry.
\begin{figure*}
\includegraphics[width=\textwidth]{Fig6}
\caption{(Color online) Magneto-optical spectra of Mn$_{3}X$N for $X=$ Ga~(a), Zn~(b), Ag~(c), and Ni~(d) in the $\kappa=+1$ spin configuration. The panels from left to right show Kerr rotation angle $\theta^{z}_{K}$, Kerr ellipticity $\varepsilon^{z}_{K}$, Faraday rotation angle $\theta^{z}_{F}$, and Faraday ellipticity $\varepsilon^{z}_{F}$, respectively.}
\label{fig6}
\end{figure*}
\subsection{Optical conductivity}\label{sec4-2}
Before proceeding to the MOE, we evaluate the optical conductivity, which is the key quantity entering the MOE. Extending the expressions for the IAHC [Eqs.~\eqref{eq:IAHC} and~\eqref{eq:BerryCur}] to finite frequency, the optical conductivity can be written as
\begin{eqnarray}\label{eq:optical}
\sigma_{\alpha\beta}(\omega) & = & \sigma^{\prime}_{\alpha\beta}(\omega) + i \sigma^{\prime\prime}_{\alpha\beta}(\omega) \nonumber \\
& = & \hbar e^{2}\int\frac{d^{3}k}{(2\pi)^{3}}\sum_{n\neq n^{\prime}}\left[f_{n}(\bm{k})-f_{n^{\prime}}(\bm{k})\right] \nonumber \\
& & \times\frac{\textrm{Im}\left[\left\langle \psi_{n\bm{k}}|v_{\alpha}|\psi_{n^{\prime}\bm{k}}\right\rangle \left\langle \psi_{n^{\prime}\bm{k}}|v_{\beta}|\psi_{n\bm{k}}\right\rangle\right] }{(\hbar\omega_{n\bm{k}}-\hbar\omega_{n^{\prime}\bm{k}})^{2}-(\hbar\omega+i\eta)^{2}},
\end{eqnarray}
where the superscript $^{\prime}$ ($^{\prime\prime}$) of $\sigma_{\alpha\beta}$ denotes the real (imaginary) part, $\eta$ is an adjustable smearing parameter in units of energy, and $\hbar\omega$ is the photon energy. Owing to the similarity of the results among all studied systems, we take Mn$_{3}$NiN as a representative example for discussing the optical conductivity (Fig.~\ref{fig5}). The real part of the diagonal element, $\sigma^{\prime}_{xx}$ [see Figs.~\ref{fig5}(a) and~\ref{fig5}(e)], measures the average absorption of left- and right-circularly polarized light. The spectrum exhibits one absorptive peak at 1.8~eV with a shoulder at 1.1~eV and another absorptive peak at 3.9~eV. The imaginary part of the diagonal element, $\sigma^{\prime\prime}_{xx}$ [see Figs.~\ref{fig5}(b) and~\ref{fig5}(f)], is the dispersive part of the optical conductivity, revealing two distinct valleys at 0.6 eV and 3.4 eV. Evidently, $\sigma_{xx}$ is not affected by the spin order (neither the spin chirality $\kappa$ nor the azimuthal angle $\theta$). A similar behavior has been found in Mn$_{3}X$~\cite{WX-Feng2015}, where $\sigma_{xx}$ is identical for the T1 and T2 spin structures. From the symmetry analysis~\cite{Seemann2015}, it should hold that $\sigma_{xx}=\sigma_{yy}\neq\sigma_{zz}$ for the magnetic point groups $\bar{3}1m$, $\bar{3}$, and $\bar{3}1m^{\prime}$ in the $\kappa=+1$ state, whereas $\sigma_{xx}\neq\sigma_{yy}\neq\sigma_{zz}$ for the magnetic point groups $2/m$, $\bar{1}$, and $2^{\prime}/m^{\prime}$ in the $\kappa=-1$ state. However, all diagonal elements are approximately equal in our calculations, i.e., we observe that $\sigma_{xx}\approx\sigma_{yy}\approx\sigma_{zz}$. This indicates optical isotropy in the Mn$_{3}X$N family.
In contrast to the diagonal entries, the off-diagonal elements displayed in Figs.~\ref{fig5}(c,d) and~\ref{fig5}(g--l) depend significantly on the spin order. For the $\kappa=+1$ state [Figs.~\ref{fig5}(c,d)], $\sigma_{xy}(\omega)$ vanishes if $\theta=0^{\circ}$, but it increases with increasing $\theta$ and reaches its maximum at $\theta=90^{\circ}$. For the $\kappa=-1$ state [Figs.~\ref{fig5}(g--l)], all three off-diagonal elements, $\sigma_{xy}(\omega)$, $\sigma_{yz}(\omega)$, and $\sigma_{zx}(\omega)$, can be nonzero, and they peak at $\theta=30^{\circ}$, $120^{\circ}$, and $30^{\circ}$, respectively. Furthermore, $\sigma_{xy}(\omega)$ is at least two orders of magnitude smaller than $\sigma_{yz}(\omega)$ and $\sigma_{zx}(\omega)$. The overall trend of $\sigma_{xy}(\omega)$ with the spin order is very similar to that of the IAHC in Fig.~\ref{fig4}(c).
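For the off-diagonal components, Eq.~\eqref{eq:optical} can be evaluated directly on a $k$-grid; the sketch below (our own illustration, with $\hbar=e=1$, zero temperature, and a 2D grid for brevity) mirrors the Berry-curvature routine above:
\begin{verbatim}
import numpy as np

def sigma_xy_omega(Hk, omegas, E_F, b1, b2, N=120, eta=0.05, dk=1e-5):
    # off-diagonal optical conductivity on an N x N k-grid;
    # eta is the smearing, omegas the photon energies
    sig = np.zeros(len(omegas), dtype=complex)
    for i in range(N):
        for j in range(N):
            kx, ky = (i / N) * b1 + (j / N) * b2
            E, U = np.linalg.eigh(Hk(kx, ky))
            vx = U.conj().T @ ((Hk(kx + dk, ky)
                                - Hk(kx - dk, ky)) / (2 * dk)) @ U
            vy = U.conj().T @ ((Hk(kx, ky + dk)
                                - Hk(kx, ky - dk)) / (2 * dk)) @ U
            f = (E < E_F).astype(float)     # zero-temperature occupations
            df = f[:, None] - f[None, :]    # f_n - f_n'
            dE = E[:, None] - E[None, :]
            num = df * np.imag(vx * vy.T)   # [f_n - f_n'] Im(v^x v^y)
            for w, omega in enumerate(omegas):
                sig[w] += (num / (dE**2 - (omega + 1j * eta)**2)).sum()
    cell = abs(b1[0] * b2[1] - b1[1] * b2[0]) / N**2
    return sig * cell / (2 * np.pi)**2
\end{verbatim}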
\begin{figure*}
\includegraphics[width=\textwidth]{Fig7}
\caption{(Color online) The magneto-optical spectra of Mn$_{3}$NiN for $\kappa=-1$ state: $\phi^{x}_{K,F}$ (a), $\phi^{y}_{K,F}$ (b), and $\phi^{z}_{K,F}$ (c). The panels from left to right are Kerr rotation angle, Kerr ellipticity, Faraday rotation angle, and Faraday ellipticity, respectively.}
\label{fig7}
\end{figure*}
\subsection{Magneto-optical Kerr and Faraday effects}\label{sec4-3}
We now turn to the magneto-optical Kerr and Faraday effects (MOKE and MOFE). The characteristic Kerr rotation angle $\theta_{K}$ and the ellipticity $\varepsilon_{K}$ are typically combined into the complex Kerr angle given by~\cite{Kahn1969,GY-Guo1994,GY-Guo1995}
\begin{eqnarray}\label{eq:kerr}
\phi^{\gamma}_{K} & = & \theta^{\gamma}_{K}+i\varepsilon^{\gamma}_{K} \nonumber \\
& = & \frac{-\nu_{\alpha\beta\gamma}\sigma_{\alpha\beta}}{\sigma_{0}\sqrt{1+i\left(4\pi/\omega\right)\sigma_{0}}},
\end{eqnarray}
where $\nu_{\alpha\beta\gamma}$ is the 3D Levi-Civita symbol with the Cartesian coordinates $\alpha,\beta,\gamma\in\{x,y,z\}$ and $\sigma_{0}=\frac{1}{2}(\sigma_{\alpha\alpha}+\sigma_{\beta\beta})\approx\sigma_{\alpha\alpha}$. The complex Kerr angle expressed here, similarly to the IAHC, takes a pseudovector form, i.e., $\boldsymbol{\phi}_{K}=[\phi^{x}_{K},\phi^{y}_{K},\phi^{z}_{K}]$, which distinguishes the Kerr angles for incident light propagating along different crystallographic axes. One can read from Eq.~\eqref{eq:kerr} that the longitudinal optical conductivity $\sigma_{\alpha\alpha}$ modulates the magnitude of the Kerr spectrum, while the transverse optical conductivity $\sigma_{\alpha\beta}$ determines key features of the Kerr spectrum. For example, only the component $\phi^{z}_{K}$ is finite for the $\kappa=+1$ state, whereas all components $\phi^{x}_{K}$, $\phi^{y}_{K}$, and $\phi^{z}_{K}$ are nonzero in the $\kappa=-1$ configuration. More importantly, $\phi^{x}_{K}\neq\phi^{y}_{K}\neq\phi^{z}_{K}$ implies the presence of magneto-optical anisotropy for incident light propagating along the $x$ ($01\bar{1}$), $y$ ($\bar{2}11$), or $z$ ($111$) axis (see Fig.~\ref{fig1}). Similarly, the complex Faraday angle can be expressed as~\cite{Reim1990}
\begin{eqnarray}\label{eq:faraday}
\phi^{\gamma}_{F} & = & \theta^{\gamma}_{F}+i\varepsilon^{\gamma}_{F} \nonumber \\
& = & \nu_{\alpha\beta\gamma}(n_{+}-n_{-})\frac{\omega l}{2c},
\end{eqnarray}
where $n_{\pm}=[1+\frac{4\pi i}{\omega}(\sigma_{\alpha\alpha}\pm i\sigma_{\alpha\beta})]^{1/2}$ are the complex refractive indices and $l$ is the thickness of the thin film. Since $\sigma_{\alpha\alpha}$ is generally much larger than $\sigma_{\alpha\beta}$ (see Fig.~\ref{fig5}), $n_{\pm}\approx[1+\frac{4\pi i}{\omega}\sigma_{\alpha\alpha}]^{1/2}\mp\frac{2\pi}{\omega}\sigma_{\alpha\beta}[1+\frac{4\pi i}{\omega}\sigma_{\alpha\alpha}]^{-1/2}$ (Ref.~\onlinecite{YM-Fang2018}) and consequently, the complex Faraday angle can be approximated as $\theta^{\gamma}_{F}+i\varepsilon^{\gamma}_{F} = -\nu_{\alpha\beta\gamma}\frac{2\pi l}{c}\sigma_{\alpha\beta}[1+\frac{4\pi i}{\omega}\sigma_{\alpha\alpha}]^{-1/2}$. Therefore, the Faraday spectrum is also determined by $\sigma_{\alpha\beta}$.
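Given complex arrays $\sigma_{\alpha\alpha}(\omega)$ and $\sigma_{\alpha\beta}(\omega)$, Eqs.~\eqref{eq:kerr} and~\eqref{eq:faraday} can be evaluated directly; a minimal sketch in Gaussian units (conductivities and $\omega$ in s$^{-1}$, film thickness $l$ in cm, angles returned in radians):
\begin{verbatim}
import numpy as np

def kerr_angle(sxy, sxx, omega):
    # complex Kerr angle phi_K = theta_K + i * eps_K for light along z
    return -sxy / (sxx * np.sqrt(1.0 + 4j * np.pi * sxx / omega))

def faraday_angle(sxy, sxx, omega, l, c=2.998e10):
    # complex Faraday angle for a film of thickness l
    n_plus  = np.sqrt(1.0 + 4j * np.pi / omega * (sxx + 1j * sxy))
    n_minus = np.sqrt(1.0 + 4j * np.pi / omega * (sxx - 1j * sxy))
    return (n_plus - n_minus) * omega * l / (2.0 * c)
\end{verbatim}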
The magneto-optical Kerr and Faraday spectra for the spin order with $\kappa=+1$ in Mn$_{3}X$N are plotted in Fig.~\ref{fig6}, where only $\phi^{z}_{K,F}$ are shown since all other components vanish. Taking Mn$_{3}$NiN as an example [Fig.~\ref{fig6}(d)], one can observe that the Kerr and Faraday spectra indeed inherit the behavior of the optical conductivity $\sigma_{xy}(\omega)$ [Figs.~\ref{fig5}(c) and~\ref{fig5}(d)]. For example, the Kerr and Faraday angles are zero when $\theta=0^{\circ}$, increase with increasing $\theta$, and reach their maxima at $\theta=90^{\circ}$. This indicates that the symmetry requirements for the MOKE and MOFE are the same as those for the optical Hall conductivity. In addition, all Mn$_{3}X$N compounds considered here have similar Kerr and Faraday spectra, primarily due to their isostructural nature. The Kerr rotation angles in Mn$_{3}X$N are comparable to the theoretical values in Mn$_{3}X$ ($0.2\sim0.6$ deg)~\cite{WX-Feng2015} and are larger than the experimental value in Mn$_{3}$Sn (0.02 deg)~\cite{Higo2018}. The largest Kerr and Faraday rotation angles, 0.42 deg and $4\times10^{5}$ deg/cm, respectively, emerge in Mn$_{3}$AgN. This potentially stems from the stronger SOC of the Ag atom as compared to the lighter $X$ atoms.
Figure~\ref{fig7} shows the magneto-optical Kerr and Faraday spectra for the $\kappa=-1$ state of Mn$_{3}$NiN. Since all off-diagonal elements $\sigma_{yz}(\omega)$, $\sigma_{zx}(\omega)$, and $\sigma_{xy}(\omega)$ of the optical conductivity are nonzero for the $\kappa=-1$ state, the Kerr and Faraday effects appear if the incident light propagates along any of the Cartesian axes. This is in contrast to the case of the $\kappa=+1$ configuration, for which only incident light along the $z$ axis generates finite $\phi^{z}_{K,F}$. In Fig.~\ref{fig7}(a), $\phi^{x}_{K,F}$ are zero at $\theta=30^{\circ}$ but have the largest values at $\theta=120^{\circ}$, owing to the features in $\sigma_{yz}$ [Figs.~\ref{fig5}(i) and~\ref{fig5}(j)]. Moreover, the Kerr and Faraday rotation angles ($\theta^{x}_{K}$ and $\theta^{x}_{F}$) and the ellipticities ($\varepsilon^{x}_{K}$ and $\varepsilon^{x}_{F}$) resemble, respectively, the real part ($\sigma^{\prime}_{yz}$) and imaginary part ($\sigma^{\prime\prime}_{yz}$) of the corresponding off-diagonal conductivity element. Compared to $\phi^{x}_{K,F}$, the angles $\phi^{y}_{K,F}$ in Fig.~\ref{fig7}(b) display the opposite behavior in the sense that they have the largest values at $\theta=30^{\circ}$ but vanish at $\theta=120^{\circ}$. This is not surprising, as the phases of $\sigma_{yz}$ and $\sigma_{zx}$ as functions of $\theta$ differ by $\frac{\pi}{2}$, which can be read from Fig.~\ref{fig4}(c) and Figs.~\ref{fig5}(i-l). The angles $\phi^{z}_{K,F}$ shown in Fig.~\ref{fig7}(c) are two orders of magnitude smaller than $\phi^{x}_{K,F}$ and $\phi^{y}_{K,F}$, implying that only very weak Kerr and Faraday effects are expected for incident light along the $z$ axis. From Figs.~\ref{fig6} and~\ref{fig7}, we conclude that the MOKE and MOFE depend strongly on the spin order, as in the case of the IAHC.
\section{Summary}\label{sec5}
In summary, using a group theoretical analysis, tight-binding modelling, and first-principles calculations, we have systematically investigated the spin-order dependent intrinsic anomalous Hall effect and magneto-optical Kerr and Faraday effects in Mn$_3X$N ($X$ = Ga, Zn, Ag, and Ni) compounds, which are considered to be an important class of noncollinear antiferromagnets. The symmetry-imposed shape of the anomalous Hall conductivity tensor is determined via the analysis of magnetic point groups, that is, only $\sigma_{xy}$ can be nonzero for the right-handed spin chirality ($\kappa=+1$), while finite $\sigma_{xy}$, $\sigma_{yz}$, and $\sigma_{zx}$ exist for the left-handed spin chirality ($\kappa=-1$). Our tight-binding modelling confirms these results and further reveals that $\sigma_{xy}$ is a \textit{sine}-like function of the azimuthal angle $\theta$ with a period of $2\pi$ ($\frac{2\pi}{3}$) for the $\kappa=+1$ ($\kappa=-1$) state. By examining the $\bm{k}$-resolved Berry curvature, we uncovered that the intrinsic anomalous Hall conductivity is generally large if the Fermi energy falls into a region with small band gaps formed at band anticrossings. The first-principles calculations reproduce all features of $\sigma_{xy}$ and further verify that $\sigma_{yz}$ and $\sigma_{zx}$ have a period of $2\pi$ for the $\kappa=-1$ state. The intrinsic anomalous Hall conductivity obeys the relation $\boldsymbol{\sigma}(\theta)=-\boldsymbol{\sigma}(\theta+\pi)$ due to its odd nature under time-reversal symmetry. In addition, we have calculated the magnetic anisotropy energy, which behaves as $K_{\textrm{eff}}\sin^{2}(\theta)$ for the $\kappa=+1$ state but remains nearly constant at $K_{\textrm{eff}}/2$ for the $\kappa=-1$ state. A discrete two-fold energy degeneracy, i.e., $\text{MAE}(\theta)=\text{MAE}(\theta+\pi)$, is found in the noncollinear antiferromagnetic Mn$_3X$N. Strikingly, our first-principles calculations reveal that the $\kappa=-1$ state could exist in Mn$_3X$N for certain values of $\theta$.
The optical conductivities for the $\kappa=\pm1$ states were explored, considering Mn$_3$NiN as a prototypical example. We find that the spin order hardly affects the diagonal elements, whereas it influences strongly the off-diagonal entries. Optical isotropy is established since $\sigma_{xx}(\omega)\approx\sigma_{yy}(\omega)\approx\sigma_{zz}(\omega)$, while magneto-optical anisotropy occurs inevitably as $\sigma_{xy}(\omega)\neq\sigma_{yz}(\omega)\neq\sigma_{zx}(\omega)$. Finally, the magneto-optical Kerr and Faraday effects were evaluated based on the optical conductivity. The largest Kerr rotation angles in Mn$_3X$N amount to 0.4 deg, which is comparable to those of other noncollinear antiferromagnets, e.g., Mn$_{3}X$~\cite{WX-Feng2015} and Mn$_{3}$Sn~\cite{Higo2018}. Since the optical Hall conductivity plays a major role for magneto-optical effects, the Kerr and Faraday spectra also display a spin-order dependent behavior. Our work illustrates that complex noncollinear spin structures could be probed via measurements of the anomalous Hall and magneto-optical effects.
\begin{acknowledgments}
W. F. and Y. Y. acknowledge the support from the National Natural Science Foundation of China (Nos. 11874085 and 11734003) and the National Key R\&D Program of China (No. 2016YFA0300600). W. F. also acknowledges the funding through an Alexander von Humboldt Fellowship. Y.M., J.-P. H. and S.B. acknowledge funding under SPP 2137 ``Skyrmionics'' (project MO 1731/7-1), Collaborative Research Center SFB 1238, and Y.M. acknowledges funding from project MO 1731/5-1 of Deutsche Forschungsgemeinschaft (DFG). G.-Y. G. is supported by the Ministry of Science and Technology and the Academia Sinica as well as NCTS and Kenda Foundation in Taiwan. We acknowledge computing time on the supercomputers JUQUEEN and JURECA at J\"ulich Supercomputing Centre and JARA-HPC of RWTH Aachen University.
\end{acknowledgments}
\section{Introduction}
Among the most promising approaches to address the issue of global optimization
of an unknown function under reasonable smoothness assumptions
comes from extensions of the multi-armed bandit setup.
\cite{Bubeck2009} highlighted the connection between cumulative regret
and simple regret which facilitates fair comparison between methods
and \cite{Bubeck2011} proposed bandit algorithms on metric space $\cX$, called $\cX$-armed bandits.
In this context, theory and algorithms have been developed
in the case where the expected reward is a function $f:\cX\to\bR$
which satisfies certain smoothness conditions such as Lipschitz or H\"older continuity
\citep{Kleinberg2004, Kocsis2006, Auer2007, Kleinberg2008, Munos2011}.
Another line of work is the Bayesian optimization framework \citep{Jones1998, Bull2011, Mockus2012}
for which the unknown function $f$ is assumed to be the realization of a prior stochastic process distribution,
typically a Gaussian process.
An efficient algorithm that can be derived in this framework is the popular GP-UCB algorithm
due to \cite{Srinivas2012}.
However, an important limitation of upper confidence bound (UCB) strategies
without a smoothness condition
is that the search space has to be {\em finite} with bounded cardinality,
a fact which is well known but, to our knowledge,
has not been discussed so far in the related literature.
In this paper, we propose an approach which improves both lines of work with respect to their present limitations.
Our purpose is to: (i) relax smoothness assumptions that limit the relevance
of $\cX$-armed bandits in practical situations where target functions may only display random smoothness,
(ii) extend the UCB strategy for arbitrary sets $\cX$.
Here we will assume that $f$, being the realization of a given stochastic process distribution,
fulfills a \emph{probabilistic smoothness} condition.
We will consider the stochastic process bandit setup and we develop a UCB algorithm
based on {\em generic chaining} \citep{Bogachev1998,Adler2009,Talagrand2014,Gine2015}.
Using the generic chaining construction,
we compute hierarchical discretizations of $\cX$ under the form of chaining trees
in a way that permits to control precisely the discretization error.
The UCB algorithm then applies on these successive discrete subspaces
and chooses the accuracy of the discretization at each iteration
so that the cumulative regret it incurs matches state-of-the-art bounds on finite $\cX$.
In the paper, we propose an algorithm which computes a generic chaining tree
for arbitrary stochastic process in quadratic time.
We show that this tree is optimal for classes like Gaussian processes with high probability.
Our theoretical contributions have an impact in the two contexts mentioned above.
From the bandit and global optimization point of view,
we provide a generic algorithm that incurs state-of-the-art regret on stochastic process objectives
including non-trivial functionals of Gaussian processes such as
the sum of squares of Gaussian processes (in the spirit of mean-square-error minimization),
or nonparametric Gaussian processes on ellipsoids (RKHS classes),
or the Ornstein-Uhlenbeck process, which was conjectured impossible by \cite{Srinivas2010} and \cite{Srinivas2012}.
From the point of view of Gaussian process theory,
the generic chaining algorithm leads to tight bounds on the supremum of the process
in probability and not only in expectation.
The remainder of the paper is organized as follows.
In Section~\ref{sec:framework}, we present the stochastic process bandit framework over continuous spaces.
Section~\ref{sec:chaining} is devoted to the construction of generic chaining trees for search space discretization.
Regret bounds are derived in Section~\ref{sec:regret} after choosing adequate discretization depth.
Finally, lower bounds are established in Section~\ref{sec:lower_bound}.
\section{Stochastic Process Bandits Framework}
\label{sec:framework}
We consider the optimization of an unknown function $f:\cX\to\bR$
which is assumed to be sampled from a given separable stochastic process distribution.
The input space $\cX$ is an arbitrary space not restricted to subsets of $\bR^D$,
and we will see in the next section how the geometry of $\cX$ for a particular metric
is related to the hardness of the optimization.
An algorithm iterates the following:
\begin{itemize}
\item it queries $f$ at a point $x_i$ chosen with the previously acquired information,
\item it receives a noisy observation $y_i=f(x_i)+\epsilon_i$,
\end{itemize}
where the $(\epsilon_i)_{1\le i \le t}$ are independent centered Gaussian $\cN(0,\eta^2)$ of known variance.
We evaluate the performances of such an algorithm using $R_t$ the cumulative regret:
\[R_t = t\sup_{x\in\cX}f(x) - \sum_{i=1}^t f(x_i)\,.\]
This objective is not observable in practice,
and our aim is to give theoretical upper bounds that hold with arbitrary high probability
in the form:
\[\Pr\big[R_t \leq g(t,u)\big] \geq 1-e^{-u}\,.\]
Since the stochastic process is separable, the supremum over $\cX$ can be replaced by
the supremum over all finite subsets of $\cX$ \citep{Boucheron2013}.
Therefore we can assume without loss of generality that $\cX$ is finite with arbitrary cardinality.
We discuss practical approaches to handling continuous spaces in Appendix~\ref{sec:greedy_cover}.
Note that the probabilities are taken under the product space of both the stochastic process $f$ itself
and the independent Gaussian noises $(\epsilon_i)_{1\le i\le t}$.
The algorithm faces the exploration-exploitation tradeoff.
It has to decide between reducing the uncertainty on $f$
and maximizing the rewards.
In some applications one may be interested in finding the maximum of $f$ only,
that is minimizing $S_t$ the simple regret:
\[S_t = \sup_{x\in\cX}f(x) - \max_{i\leq t}f(x_i)\,.\]
We will reduce our analysis to this case by simply observing that $S_t\leq \frac{R_t}{t}$.
\paragraph{Confidence Bound Algorithms and Discretization.}
To deal with the uncertainty,
we adopt the \emph{optimistic optimization} paradigm
and compute high confidence intervals where the values $f(x)$ lie with high probability,
and then query the point maximizing the upper confidence bound \citep{Auer2002}.
A naive approach would use a union bound over all $\cX$
to get high confidence intervals at every point $x\in\cX$.
This would work for a search space with fixed cardinality $\abs{\cX}$,
resulting in a factor $\sqrt{\log\abs{\cX}}$ in the Gaussian case,
but this fails when $\abs{\cX}$ is unbounded,
typically for a grid of high density approximating a continuous space.
In the next section,
we tackle this challenge by employing {\em generic chaining} to build hierarchical discretizations of $\cX$.
\section{Discretizing the Search Space via Generic Chaining}
\label{sec:chaining}
\subsection{The Stochastic Smoothness of the Process}
Let $\ell_u(x,y)$ for $x,y\in\cX$ and $u\geq 0$ be the following confidence bound on the increments of $f$:
\[\ell_u(x,y) = \inf\Big\{s\in\bR: \Pr[f(x)-f(y) > s] < e^{-u}\Big\}\,.\]
In short, $\ell_u(x,y)$ is the best bound satisfying $\Pr\big[f(x)-f(y) \geq \ell_u(x,y)\big] < e^{-u}$.
For particular distributions of $f$, it is possible to obtain closed formulae for $\ell_u$.
However, in the present work we will consider upper bounds on $\ell_u$.
Typically, if $f$ is distributed as a centered Gaussian process of covariance $k$,
which we denote $f\sim\cGP(0,k)$, we know that $\ell_u(x,y) \leq \sqrt{2u}d(x,y)$,
where $d(x,y)=\big(\E(f(x)-f(y))^2\big)^{\frac 1 2}$ is the canonical pseudo-metric of the process.
More generally, if it exists a pseudo-metric $d(\cdot,\cdot)$ and a function $\psi(\cdot,\cdot)$
bounding the logarithm of the moment-generating function of the increments, that is,
\[\log \E e^{\lambda(f(x)-f(y))} \leq \psi(\lambda,d(x,y))\,,\]
for $x,y\in\cX$ and $\lambda\in I \subseteq \bR$,
then using the Chernoff bounding method \citep{Boucheron2013},
\[\ell_u(x,y) \leq \psi^{*-1}(u,d(x,y))\,,\]
where $\psi^*(s,\delta)=\sup_{\lambda\in I}\big\{\lambda s - \psi(\lambda,\delta)\big\}$
is the Fenchel-Legendre dual of $\psi$
and $\psi^{*-1}(u,\delta)=\inf\big\{s\in\bR: \psi^*(s,\delta)>u\big\}$
denotes its generalized inverse.
In that case, we say that $f$ is a $(d,\psi)$-process.
For example if $f$ is sub-Gamma, that is:
\begin{equation}
\label{eq:sub_gamma}
\psi(\lambda,\delta)\leq \frac{\nu \lambda^2 \delta^2}{2(1-c\lambda \delta)}\,,
\end{equation}
we obtain,
\begin{equation}
\label{eq:sub_gamma_tail}
\ell_u(x,y) \leq \big(c u + \sqrt{2\nu u}\big) d(x,y)\,.
\end{equation}
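Indeed, for $\psi$ as in Eq.~\ref{eq:sub_gamma}, the Fenchel-Legendre dual satisfies $\psi^{*}(s,\delta)\geq \frac{s^{2}}{2(\nu\delta^{2}+c\delta s)}$, whose generalized inverse yields $\psi^{*-1}(u,\delta)\leq \delta\big(cu+\sqrt{2\nu u}\big)$, following the standard sub-Gamma inversion \citep{Boucheron2013}; this is exactly Eq.~\ref{eq:sub_gamma_tail}.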
The generality of Eq.~\ref{eq:sub_gamma} makes it convenient to derive bounds
for a wide variety of processes beyond Gaussian processes,
as we see for example in Section~\ref{sec:gp2}.
\subsection{A Tree of Successive Discretizations}
As stated in the introduction, our strategy to obtain confidence intervals
for stochastic processes is by successive discretization of $\cX$.
We define a notion of tree that will be used for this purpose.
A set $\cT=\big(\cT_h\big)_{h\geq 0}$, where $\cT_h\subset\cX$ for $h\geq 0$, is a tree
with parent relation $p:\cX\to\cX$ when, for all $x\in \cT_{h+1}$, its parent satisfies
$p(x)\in \cT_h$.
We denote by $\cT_{\leq h}$ the set of the nodes of $\cT$ at depth at most $h$:
$\cT_{\leq h} = \bigcup_{h'\leq h} \cT_{h'}$.
For $h\geq 0$ and a node $x\in \cT_{h'}$ with $h\leq h'$,
we also denote by $p_h(x)$ its parent at depth $h$,
that is $p_h(x) = p^{h'-h}(x)$
and we note $x\succ s$ when $s$ is an ancestor of $x$.
To simplify the notations in the sequel,
we extend the relation $p_h$ to $p_h(x)=x$ when $x\in\cT_{\leq h}$.
We now introduce a powerful inequality bounding the supremum of the difference of $f$
between a node and any of its descendants in $\cT$,
provided that $\abs{\cT_h}$ is not excessively large.
\begin{theorem}[Generic Chaining Upper Bound]
\label{thm:chaining}
Fix any $u>0$, $a>1$ and $\big(n_h\big)_{h\in\bN}$ an increasing sequence of integers.
Set $u_i=u+n_i+\log\big(i^a\zeta(a)\big)$
where $\zeta$ is the Riemann zeta function.
Then for any tree $\cT$ such that $\abs{\cT_h}\leq e^{n_h}$,
\[\forall h\geq 0, \forall s\in\cT_h,~ \sup_{x\succ s} f(x)-f(s) \leq \omega_h\,,\]
holds with probability at least $1-e^{-u}$,
where,
\[\omega_h = \sup_{x\in\cX} \sum_{i> h} \ell_{u_i}\big(p_i(x), p_{i-1}(x)\big)\,.\]
\end{theorem}
The full proof of the theorem can be found in Appendix~\ref{sec:proof_chaining}.
It relies on repeated application of the union bound over the $e^{n_i}$ pairs $\big(p_i(x),p_{i-1}(x)\big)$.
Now, if we look at $\cT_h$ as a discretization of $\cX$
where a point $x\in\cX$ is approximated by $p_h(x)\in\cT_h$,
this result can be read in terms of discretization error,
as stated in the following corollary.
\begin{corollary}[Discretization error of $\cT_h$]
\label{cor:chaining}
Under the assumptions of Theorem~\ref{thm:chaining}
with $\cX=\cT_{\leq h_0}$ for $h_0$ large enough, we have that,
\[\forall h, \forall x\in\cX,~ f(x)-f(p_h(x)) \leq \omega_h\,,\]
holds with probability at least $1-e^{-u}$.
\end{corollary}
\subsection{Geometric Interpretation for $(d,\psi)$-processes}
\label{sec:psi_process}
The previous inequality suggests that to obtain a good upper bound on the discretization error,
one should take $\cT$ such that $\ell_{u_i}(p_i(x),p_{i-1}(x))$
is as small as possible for every $i>0$ and $x\in\cX$.
We specify what it implies for $(d,\psi)$-processes.
In that case, we have:
\[\omega_h \leq \sup_{x\in\cX} \sum_{i>h} \psi^{*-1}\Big(u_i,d\big(p_i(x),p_{i-1}(x)\big)\Big)\,.\]
Writing $\Delta_i(x)=\sup_{x'\succ p_i(x)}d(x',p_i(x))$
the $d$-radius of the ``cell'' at depth $i$ containing $x$,
we remark that $d(p_i(x),p_{i-1}(x))\leq \Delta_{i-1}(x)$,
that is:
\[
\omega_h \leq \sup_{x\in\cX} \sum_{i>h} \psi^{*-1}\big(u_i,\Delta_{i-1}(x)\big)\,.
\]
In order to make this bound as small as possible,
one should spread the points of $\cT_h$ in $\cX$
so that $\Delta_h(x)$ is evenly small,
while satisfying the requirement $\abs{\cT_h}\leq e^{n_h}$.
Let $\Delta = \sup_{x,y\in\cX}d(x,y)$ and $\epsilon_h=\Delta 2^{-h}$,
and define an $\epsilon$-net as a set $T\subseteq \cX$ for which $\cX$ is covered by $d$-balls
of radius $\epsilon$ with center in $T$.
Then if one takes $n_h=2\log N(\cX,d,\epsilon_h)$, twice the metric entropy of $\cX$,
that is, the logarithm of the cardinality of a minimal $\epsilon_h$-net,
we obtain with probability at least $1-e^{-u}$ that
$\forall h\geq 0, \forall s\in\cT_h$\,:
\begin{equation}
\label{eq:classical_chaining}
\sup_{x\succ s}f(x)-f(s) \leq \sum_{i>h} \psi^{*-1}(u_i, \epsilon_i)\,,
\end{equation}
where $u_i= u+2\log N(\cX,d,\epsilon_i)+\log(i^a\zeta(a))$.
The tree $\cT$ achieving this bound consists in computing a minimal $\epsilon$-net at each depth,
which can be done efficiently by Algorithm~\ref{alg:greedy_cover}
if one is satisfied with an almost optimal heuristic
which exhibits an approximation ratio of $\max_{x\in\cX} \sqrt{\log \log \abs{\cB(x,\epsilon)}}$,
as discussed in Appendix~\ref{sec:greedy_cover}.
This technique is often called \emph{classical chaining} \citep{Dudley1967}
and we note that an implementation appears in \cite{Contal2015} on real data.
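For concreteness, a minimal sketch of such a construction (a hypothetical variant of Algorithm~\ref{alg:greedy_cover}, for a finite set of points and a pseudo-metric $d$; the greedy net is almost optimal in the sense discussed above):
\begin{verbatim}
def greedy_net(points, d, eps):
    # greedy eps-net: centers whose d-balls of radius eps cover `points`
    centers, uncovered = [], list(range(len(points)))
    while uncovered:
        c = uncovered[0]
        centers.append(c)
        uncovered = [i for i in uncovered
                     if d(points[c], points[i]) > eps]
    return centers

def chaining_tree(points, d, depth):
    # level h is a (Delta 2^-h)-net; the parent of a node is its
    # nearest center one level up
    n = len(points)
    Delta = max(d(points[i], points[j])
                for i in range(n) for j in range(n))
    levels, parents = [[0]], []
    for h in range(1, depth + 1):
        net = greedy_net(points, d, Delta * 2.0 ** (-h))
        parents.append({i: min(levels[-1],
                               key=lambda p: d(points[i], points[p]))
                        for i in net})
        levels.append(net)
    return levels, parents
\end{verbatim}
With $n_h=\log\abs{\cT_h}$, the resulting levels can be plugged directly into Theorem~\ref{thm:chaining}.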
However the upper bound in Eq.~\ref{eq:classical_chaining}
is not tight as for instance with a Gaussian process indexed by an ellipsoid,
as discussed in Section~\ref{sec:gp}.
We will present later in Section~\ref{sec:lower_bound} an algorithm to compute a tree $\cT$
in quadratic time leading to both a lower and upper bound on $\sup_{x\succ s}f(x)-f(s)$
when $f$ is a Gaussian process.
The previous inequality is particularly convenient when we know a bound
on the growth of the metric entropy of $(\cX,d)$, as stated in the following corollary.
\begin{corollary}[Sub-Gamma process with metric entropy bound]
\label{cor:subgamma_bigoh}
If $f$ is sub-Gamma and there exists $R,D\in\bR$ such that for all $\epsilon>0$,
$N(\cX,d,\epsilon) \leq (\frac R \epsilon)^D$, then with probability at least $1-e^{-u}$\,:
\[\forall h\geq 0,\forall s\in\cT_h,~ \sup_{x\succ s}f(x)-f(s) =\cO\Big(\big(c(u + D h)+\sqrt{\nu(u+Dh)}\big) 2^{-h}\Big)\,.\]
\end{corollary}
\begin{proof}
With the condition on the growth of the metric entropy,
we obtain $u_i = \cO\big(u+D\log R + D i\big)$.
With Eq.~\ref{eq:classical_chaining} for a sub-Gamma process we get,
knowing that $\sum_{i=h}^\infty i 2^{-i} =\cO\big(h 2^{-h}\big)$
and $\sum_{i=h}^\infty \sqrt{i}2^{-i}=\cO\big(\sqrt{h}2^{-h}\big)$,
that $\omega_h = \cO\Big(\big(c (u+D h) + \sqrt{\nu(u+D h)}\big)2^{-h}\Big)$.
\end{proof}
Note that the conditions of Corollary~\ref{cor:subgamma_bigoh} are fulfilled
when $\cX\subset [0,R]^D$ and there is $c\in\bR$ such that for all $x,y\in\cX,~d(x,y) \leq c\norm{x-y}_2$,
by simply cutting $\cX$ in hyper-cubes of side length $\epsilon$.
We also remark that this condition is very close to the near-optimality dimension of the metric space $(\cX,d)$
defined in \cite{Bubeck2011}.
However our condition constrains the entire search space $\cX$
instead of the near-optimal set $\cX_\epsilon = \big\{ x\in\cX: f(x)\geq \sup_{x^\star\in\cX}f(x^\star)-\epsilon\big\}$.
Controlling the dimension of $\cX_\epsilon$ may allow one to obtain an exponential decay of the regret,
in particular for deterministic functions $f$ with a quadratic behavior near their maximum.
However, to our knowledge no progress has been made in this direction for stochastic processes
without constraining its behavior around the maximum.
A reader interested in this subject may look at
the recent work by \cite{Grill2015} on smooth and noisy functions with unknown smoothness,
and the works by \cite{Freitas2012} or \cite{Wang2014b}
on Gaussian processes without noise and a quadratic local behavior.
\section{Regret Bounds for Bandit Algorithms}
\label{sec:regret}
Now we have a tool to discretize $\cX$ at a certain accuracy,
we show here how to derive an optimization strategy on $\cX$.
\subsection{High Confidence Intervals}
Assume that given $i-1$ observations $Y_{i-1}=(y_1,\dots,y_{i-1})$ at queried locations $X_{i-1}$,
we can compute $L_i(x,u)$ and $U_i(x,u)$ for all $u>0$ and $x\in\cX$, such that:
\[ \Pr\Big[ f(x) \in \big(L_i(x,u), U_i(x,u)\big) \Big] \geq 1-e^{-u}\,.\]
Then for any $h(i)>0$ that we will carefully choose later,
we obtain by a union bound on $\cT_{h(i)}$ that:
\[ \Pr\Big[ \forall x\in\cT_{h(i)},~ f(x) \in \big(L_i(x,u+n_{h(i)}), U_i(x,u+n_{h(i)})\big) \Big] \geq 1-e^{-u}\,.\]
And by an additional union bound on $\bN$ that:
\begin{equation}
\label{eq:ucb}
\Pr\Big[ \forall i\geq 1, \forall x\in\cT_{h(i)},~ f(x) \in \big(L_i(x,u_i), U_i(x,u_i)\big) \Big] \geq 1-e^{-u}\,,
\end{equation}
where $u_i=u+n_{h(i)}+\log\big(i^a\zeta(a)\big)$ for any $a>1$ and $\zeta$ is the Riemann zeta function.
Our \emph{optimistic} decision rule for the next query is thus:
\begin{equation}
\label{eq:argmax}
x_i \in \argmax_{x\in\cT_{h(i)}} U_i(x,u_i)\,.
\end{equation}
Combining this with Corollary~\ref{cor:chaining}, we are able to prove the following bound
linking the regret with $\omega_{h(i)}$ and the width of the confidence interval.
\begin{theorem}[Generic Regret Bound]
\label{thm:regret_bound}
When for all $i\geq 1$, $x_i \in \argmax_{x\in \cT_{h(i)}} U_i(x,u_i)$
we have with probability at least $1- 2 e^{-u}$:
\[ R_t = t \sup_{x\in\cX} f(x)-\sum_{i=1}^t f(x_i) \leq \sum_{i=1}^t\Big\{ \omega_{h(i)} + U_i(x_i,u_i)-L_i(x_i,u_i)\Big\}\,.\]
\end{theorem}
\begin{proof}
Using Theorem~\ref{thm:chaining} we have that,
\[\forall h\geq 0,\,\sup_{x\in\cX}f(x) \leq \omega_h+\sup_{x\in\cX}f(p_h(x))\,,\]
holds with probability at least $1-e^{-u}$.
Since $p_{h(i)}(x) \in \cT_{h(i)}$ for all $x\in\cX$,
we can invoke Eq.~\ref{eq:ucb}\,:
\[\forall i\geq 1,~ \sup_{x\in\cX} f(x)-f(x_i) \leq \omega_{h(i)}+\sup_{x\in\cT_{h(i)}}U_i(x,u_i)-L_i(x_i,u_i)\,,\]
holds with probability at least $1-2e^{-u}$.
Now by our choice for $x_i$, $\sup_{x\in\cT_{h(i)}}U_i(x,u_i) = U_i(x_i,u_i)$,
proving Theorem~\ref{thm:regret_bound}.
\end{proof}
In order to select the level of discretization $h(i)$ to reduce the bound on the regret,
it is required to have explicit bounds on $\omega_i$ and the confidence intervals.
For example by choosing
\[h(i)=\min\Big\{h\in\bN: \omega_h \leq \sqrt{\frac{\log i}{i}} \Big\}\,,\]
we obtain $\sum_{i=1}^t \omega_{h(i)} \leq 2\sqrt{t\log t}$ as shown later.
The performance of our algorithm is thus linked with the decrease rate of $\omega_i$,
which characterizes the ``size'' of the optimization problem.
We first study the case where $f$ is distributed as a Gaussian process,
and then for a sum of squared Gaussian processes.
\subsection{Results for Gaussian Processes}
\label{sec:gp}
The problem of regret minimization where $f$ is sampled from a Gaussian process
has been introduced by \cite{Srinivas2010} and \cite{grunewalder2010}.
Since then, it has been extensively adapted to various settings of Bayesian optimization
with successful practical applications.
In the first work the authors address the cumulative regret
and assume that either $\cX$ is finite or that the samples of the process are Lipschitz
with high probability, where the distribution of the Lipschitz constant has Gaussian tails.
In the second work the authors address the simple regret without noise and with known horizon,
they assume that the canonical pseudo-metric $d$ is bounded by a given power of the supremum norm.
In both works they require that the input space is a subset of $\bR^D$.
The analysis in our paper permits us to derive similar bounds in a nonparametric fashion
where $(\cX,d)$ is an arbitrary metric space.
Note that if $(\cX,d)$ is not totally bounded, then the supremum of the process is infinite with probability one,
so is the regret of any algorithm.
\paragraph{Confidence intervals and information gain.}
First, $f$ being distributed as a Gaussian process,
it is easy to derive confidence intervals given a set of observations.
Writing $\mat{Y}_i$ for the vector of noisy values at points in $X_i$,
we find by Bayesian inference \citep{Rasmussen2006} that:
\[\Pr\Big[ \abs{f(x)-\mu_i(x)} \geq \sigma_i(x)\sqrt{2u}\Big] < e^{-u}\,,\]
for all $x\in\cX$ and $u>0$, where:
\begin{align}
\label{eq:mu}
\mu_i(x) &= \mat{k}_i(x)^\top \mat{C}_i^{-1}\mat{Y}_i\\
\label{eq:sigma}
\sigma_i^2(x) &= k(x,x) - \mat{k}_i(x)^\top \mat{C}_i^{-1} \mat{k}_i(x)\,,
\end{align}
where $\mat{k}_i(x) = [k(x_j, x)]_{x_j \in X_i}$ is the covariance vector between $x$ and $X_i$,
$\mat{C}_i = \mat{K}_i + \eta^2 \mat{I}$,
and $\mat{K}_i=[k(x,x')]_{x,x' \in X_i}$ the covariance matrix
and $\eta^2$ the variance of the Gaussian noise.
Therefore the width of the confidence interval in Theorem~\ref{thm:regret_bound}
can be bounded in terms of $\sigma_{i-1}$:
\[U_i(x_i,u_i)-L_i(x_i,u_i) \leq 2\sigma_{i-1}(x_i)\sqrt{2u_i}\,.\]
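Eqs.~\eqref{eq:mu} and~\eqref{eq:sigma} translate directly into code; a brief sketch (solving linear systems instead of forming $\mat{C}_i^{-1}$ explicitly):
\begin{verbatim}
import numpy as np

def gp_posterior(X, Y, x, k, eta):
    # posterior mean and variance of f(x) given noisy observations Y at X
    K = np.array([[k(a, b) for b in X] for a in X])
    C = K + eta ** 2 * np.eye(len(X))
    kx = np.array([k(xj, x) for xj in X])
    mu = kx @ np.linalg.solve(C, np.asarray(Y, dtype=float))
    var = k(x, x) - kx @ np.linalg.solve(C, kx)
    return mu, var
\end{verbatim}
so that one may take $U_i(x,u)=\mu_{i-1}(x)+\sigma_{i-1}(x)\sqrt{2u}$ and $L_i(x,u)=\mu_{i-1}(x)-\sigma_{i-1}(x)\sqrt{2u}$.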
Furthermore it is proved in \cite{Srinivas2012} that the sum of the posterior variances
at the queried points $\sigma_{i-1}^2(x_i)$ is bounded in terms of information gain:
\[\sum_{i=1}^t \sigma_{i-1}^2(x_i) \leq c_\eta \gamma_t\,,\]
where $c_\eta=\frac{2}{\log(1+\eta^{-2})}$
and $\gamma_t = \max_{X_t\subseteq\cX:\abs{X_t}=t} I(X_t)$
is the maximum information gain of $f$ obtainable by a set of $t$ points.
Note that for Gaussian processes,
the information gain is simply $I(X_t)=\frac 1 2 \log\det(\mat{I}+\eta^{-2}\mat{K}_t)$.
Finally, using the Cauchy-Schwarz inequality and the fact that $u_t$ is increasing we have
with probability at least $1- 2 e^{-u}$:
\begin{equation}
\label{eq:gp_regret}
R_t \leq 2\sqrt{2 c_\eta t u_t \gamma_t} + \sum_{i=1}^t \omega_{h(i)}\,.
\end{equation}
The quantity $\gamma_t$ heavily depends on the covariance of the process.
At one extreme, if $k(\cdot,\cdot)$ is a Kronecker delta,
$f$ is a Gaussian white noise process and $\gamma_t=\cO(t)$.
At the other extreme, \cite{Srinivas2012} proved the following inequalities for widely used covariance functions
and $\cX\subset \bR^D$:
\begin{itemize}
\item linear covariance $k(x,y)=x^\top y$, $\gamma_t=\cO\big(D \log t\big)$.
\item squared exponential covariance $k(x,y)=e^{-\frac 1 2 \norm{x-y}_2^2}$, $\gamma_t=\cO\big((\log t)^{D+1}\big)$.
\item Mat\'ern covariance, $k(x,y)=\frac{2^{p-1}}{\Gamma(p)}\big(\sqrt{2p}\norm{x-y}_2\big)^p K_p\big(\sqrt{2p}\norm{x-y}_2\big)$,
where $p>0$ and $K_p$ is the modified Bessel function,
$\gamma_t=\cO\big( (\log t) t^a\big)$, with $a=\frac{D(D+1)}{2p+D(D+1)}<1$ for $p>1$.
\end{itemize}
\paragraph{Bounding $\omega_h$ with the metric entropy.}
We now provide a policy to choose $h(i)$ minimizing the right-hand side of Eq.~\ref{eq:gp_regret}.
When an explicit upper bound on the metric entropy of the form
$\log N(\cX,d,\epsilon)\leq \cO(-D \log \epsilon)$ holds,
we can use Corollary~\ref{cor:subgamma_bigoh} which gives:
\[\omega_h\leq\cO\big(\sqrt{u+D h}2^{-h}\big)\,.\]
This upper bound holds true in particular for Gaussian processes with $\cX\subset[0,R]^D$
whenever $d(x,y) \leq \cO\big(\norm{x-y}_2^\alpha\big)$ for some $\alpha>0$ and all $x,y\in\cX$.
For a stationary covariance we have $d^2(x,y)=2\big(k(x,x)-k(x,y)\big)$,
so it suffices that $k(x,x)-k(x,y)\leq \cO\big(\norm{x-y}_2\big)$,
which is satisfied by the usual covariances used in Bayesian optimization such as
the squared exponential covariance
or the Mat\'ern covariance with parameter $p\in\big\{\frac 1 2, \frac 3 2, \frac 5 2\big\}$.
For these values of $p$ it is well known that $k(x,y)=h_p\big(\sqrt{2p}\norm{x-y}_2\big) \exp\big(-\sqrt{2p}\norm{x-y}_2\big)$,
with $h_{\frac 1 2}(\delta)=1$, $h_{\frac 3 2}(\delta)=1+\delta$ and $h_{\frac 5 2}(\delta)=1+\delta+\frac 1 3 \delta^2$.
Then we see that it suffices to choose $h(i)=\ceil{\frac 1 2 \log_2 i}$
to obtain $\omega_{h(i)} \leq \cO\Big( \sqrt{\frac{u+\frac 1 2 D\log i}{i}} \Big)$
and since $\sum_{i=1}^t i^{-\frac 1 2}\leq 2 \sqrt{t}$ and
$\sum_{i=1}^t \big(\frac{\log i}{i}\big)^{\frac 1 2} \leq 2\sqrt{t\log t}$,
\[R_t \leq \cO\Big(\sqrt{t \gamma_t \log t }\Big)\,, \]
holds with high probability.
Such a bound holds true in particular for the Ornstein-Uhlenbeck process,
for which it was conjectured impossible in \cite{Srinivas2010} and \cite{Srinivas2012}.
However, we do not know suitable bounds for $\gamma_t$ in this case
and thus cannot deduce convergence rates.
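In code, the discretization policy above is a one-liner (a sketch; the function name is ours):
\begin{verbatim}
import math

def depth(i):
    # h(i) = ceil(0.5 * log2(i)), so that 2^{-h(i)} <= i^{-1/2}
    return math.ceil(0.5 * math.log2(i))
\end{verbatim}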
\paragraph{Gaussian processes indexed on ellipsoids and RKHS.}
As mentioned in Section~\ref{sec:psi_process}, the previous bound on the discretization error
is not tight for every Gaussian process.
An important example is when the search space is a (possibly infinite dimensional) ellipsoid:
\[\cX=\Big\{ x\in \ell^2: \sum_{i\geq 1}\frac{x_i^2}{a_i^2} \leq 1\Big\}\,,\]
where $a\in\ell^2$,
$f(x) = \sum_{i\geq 1}x_ig_i$ with $g_i\iid \cN(0,1)$,
and the pseudo-metric $d(x,y)$ coincides with the usual $\ell_2$ metric.
The study of the supremum of such processes is connected to learning error bounds
for kernel machines like Support Vector Machines,
as a quantity bounding the learning capacity of a class of functions in an RKHS;
see for example \cite{Mendelson2002}.
It can be shown by geometrical arguments that
$\E \sup_{x: d(x,s)\leq \epsilon} f(x)-f(s) \leq \cO\big(\sqrt{\sum_{i\geq 1}\min(a_i^2,\epsilon^2)}\big)\,,$
and that this supremum exhibits $\chi^2$-tails around its expectation,
see for example \cite{Boucheron2013} and \cite{Talagrand2014}.
This concentration is not captured by Corollary~\ref{cor:subgamma_bigoh};
one needs to leverage the construction of Section~\ref{sec:lower_bound}
to obtain a tight estimate.
Therefore the present work forms a step toward efficient and practical online model selection
in such classes in the spirit of \cite{Rakhlin2014} and \cite{Gaillard2015}.
\subsection{Results for Quadratic Forms of Gaussian Processes}
\label{sec:gp2}
The preeminent model in Bayesian optimization is by far the Gaussian process.
Yet it is very common to face regret minimization for functions which
do not look like Gaussian processes.
Consider the typical cases where $f$ has the form of a mean square error
or a Gaussian likelihood.
In both cases, minimizing $f$ is equivalent to minimizing a sum of squares,
which we cannot assume to be sampled from a Gaussian process.
To alleviate this problem, we show that this objective fits in our generic setting.
Indeed, if we consider that $f$ is a sum of squares of Gaussian processes,
then $f$ is sub-Gamma with respect to a natural pseudo-metric.
Since our framework is phrased in terms of maximization, we simply take the negative of this sum.
In this particular setting we allow the algorithm
to observe directly the noisy values of the \emph{separated} Gaussian processes,
instead of the sum of their squares.
To simplify the forthcoming arguments, we will choose independent and identically distributed processes,
but one can remove the covariances between the processes by Cholesky decomposition of the covariance matrix,
and then our analysis adapts easily to processes with non-identical distributions.
\paragraph{The stochastic smoothness of squared GP.}
Let $f=-\sum_{j=1}^N g_j^2(x)$,
where $\big(g_j\big)_{1\le j\le N}$ are independent centered Gaussian processes $g_j\iid\cGP(0,k)$
with stationary covariance $k$ such that $k(x,x)=\kappa$ for every $x\in\cX$.
We have for $x,y\in\cX$ and $\lambda<(2\kappa)^{-1}$:
\[\log\E e^{\lambda(f(x)-f(y))} = -\frac{N}{2}\log\Big(1-4\lambda^2(\kappa^2-k^2(x,y))\Big)\,. \]
Therefore with $d(x,y)=2\sqrt{\kappa^2-k^2(x,y)}$ and $\psi(\lambda,\delta)=-\frac{N}{2}\log\big(1-\lambda^2\delta^2\big)$,
we conclude that $f$ is a $(d,\psi)$-process.
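This identity is easy to verify numerically; the following Monte Carlo sketch compares the empirical log-moment-generating function with the closed form for an admissible $\lambda$ (all names are ours; it assumes NumPy):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
kappa, k_xy, lam, N = 1.0, 0.6, 0.2, 4   # requires lam < 1/(2*kappa)
cov = np.array([[kappa, k_xy], [k_xy, kappa]])
g = rng.multivariate_normal([0.0, 0.0], cov, size=(500_000, N))
diff = -(g[..., 0] ** 2 - g[..., 1] ** 2).sum(axis=1)   # f(x) - f(y)
lhs = np.log(np.mean(np.exp(lam * diff)))
rhs = -N / 2 * np.log(1 - 4 * lam ** 2 * (kappa ** 2 - k_xy ** 2))
print(lhs, rhs)   # the two values should agree closely
\end{verbatim}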
Since $-\log(1-x^2) \leq \frac{x^2}{1-x}$ for $0\leq x <1$,
which can be proved by series comparison,
we obtain that $f$ is sub-Gamma with parameters $\nu=N$ and $c=1$.
Now with Eq.~\ref{eq:sub_gamma_tail},
\[\ell_u(x,y)\leq (u+\sqrt{2 u N})d(x,y)\,.\]
Furthermore, we also have that $d(x,y)\leq \cO(\norm{x-y}_2)$ for $\cX\subseteq \bR^D$
and standard covariance functions including
the squared exponential covariance or the Mat\'ern covariance with parameter $p=\frac 3 2$ or $p=\frac 5 2$.
Then Corollary~\ref{cor:subgamma_bigoh} leads to:
\begin{equation}
\label{eq:omega_gp2}
\forall i\geq 0,~ \omega_i \leq \cO\Big( \big(u+D i + \sqrt{N(u+D i)}\big)2^{-i}\Big)\,.
\end{equation}
\paragraph{Confidence intervals for squared GP.}
As mentioned above, we consider here that we are given separated noisy observations $\mat{Y}_i^j$
for each of the $N$ processes.
Deriving confidence intervals for $f$ given $\big(\mat{Y}_i^j\big)_{j\leq N}$
is a tedious task since the posterior processes $g_j$ given $\mat{Y}_i^j$
are neither standard nor centered.
We propose here a solution based directly on a careful analysis of Gaussian integrals.
The proof of the following technical lemma can be found in Appendix~\ref{sec:gp2_tail}.
\begin{lemma}[Tails of squared Gaussian]
\label{lem:gp2_tail}
Let $X\sim\cN(\mu,\sigma^2)$ and $s>0$. We have:
\[\Pr\Big[ X^2 \not\in \big(l^2, u^2\big)\Big] < e^{-s^2}\,,\]
for $u=\abs{\mu}+\sqrt{2} \sigma s$
and $l=\max\big(0,\abs{\mu}-\sqrt{2}\sigma s\big)$.
\end{lemma}
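As a quick numerical sanity check of this lemma (a sketch, not part of the proof; the names are ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, s = 1.3, 0.7, 2.0
X = rng.normal(mu, sigma, size=1_000_000)
u = abs(mu) + np.sqrt(2) * sigma * s
l = max(0.0, abs(mu) - np.sqrt(2) * sigma * s)
freq = np.mean((X ** 2 <= l ** 2) | (X ** 2 >= u ** 2))
print(freq, np.exp(-s ** 2))  # empirical mass outside (l^2, u^2) vs. bound
\end{verbatim}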
Using this lemma, we compute the confidence interval for $f(x)$
by a union bound over the $N$ processes.
Denoting by $\mu_i^j$ and $\sigma_i^j$ the posterior expectation and deviation
of $g_j$ given $\mat{Y}_i^j$ (computed as in Eq.~\ref{eq:mu} and \ref{eq:sigma}),
the confidence interval follows for all $x\in\cX$:
\begin{equation}
\label{eq:gp2_ci}
\Pr\Big[ \forall j\leq N,~ g_j^2(x) \in \big( L_i^j(x,u), U_i^j(x,u) \big)\Big] \geq 1- e^{-u}\,,
\end{equation}
where
\begin{align*}
U_i^j(x,u) &= \Big(\abs{\mu_i^j(x)}+\sqrt{2(u+\log N)} \sigma_{i-1}^j(x)\Big)^2\\
\text{ and } L_i^j(x,u) &= \max\Big(0, \abs{\mu_i^j(x)}-\sqrt{2(u+\log N)} \sigma_{i-1}^j(x)\Big)^2\,.
\end{align*}
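In code, the interval of Eq.~\ref{eq:gp2_ci} reads as follows (a sketch with hypothetical names, reusing posterior quantities computed as in Eq.~\ref{eq:mu} and~\ref{eq:sigma}):
\begin{verbatim}
def squared_gp_interval(mu, sigma, u, N):
    # (L_i^j, U_i^j) for g_j^2(x); mu and sigma may be NumPy arrays
    r = np.sqrt(2 * (u + np.log(N))) * sigma
    U = (np.abs(mu) + r) ** 2
    L = np.maximum(0.0, np.abs(mu) - r) ** 2
    return L, U
\end{verbatim}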
We are now ready to use Theorem~\ref{thm:regret_bound} to control $R_t$
by a union bound for all $i\in\bN$ and $x\in\cT_{h(i)}$.
Note that under the event of Theorem~\ref{thm:regret_bound},
we have the following:
\[\forall j\leq N, \forall i\in\bN, \forall x\in\cT_{h(i)},~ g_j^2(x) \in \big(L_i^j(x,u_i), U_i^j(x,u_i)\big)\,.\]
Then we also have:
\[\forall j\leq N, \forall i\in\bN, \forall x\in\cT_{h(i)},~ \abs{\mu_i^j(x)} \leq \abs{g_j(x)}+\sqrt{2(u_i+\log N)}\sigma_{i-1}^j(x)\,.\]
Since $\mu_0^j(x)=0$, $\sigma_0^j(x)=\kappa$ and $u_0\leq u_i$
we obtain $\abs{\mu_i^j(x)} \leq \sqrt{2(u_i+\log N)}\big(\sigma_{i-1}^j(x)+\kappa\big)$.
Therefore Theorem~\ref{thm:regret_bound} says that with probability at least $1-2e^{-u}$:
\[R_t \leq \sum_{i=1}^t\Big\{\omega_{h(i)} + 8\sum_{j\leq N}(u_i+\log N)\big(\sigma_{i-1}^j(x_i)+\kappa\big)\sigma_{i-1}^j(x_i) \Big\}\,.\]
It is now possible to proceed as in Section~\ref{sec:gp} and bound the sum of posterior variances with $\gamma_t$\,:
\[R_t \leq \cO\Big( N u_t \big(\sqrt{t \gamma_t} + \gamma_t\big) + \sum_{i=1}^t \omega_{h(i)} \Big)\,.\]
As before, under the conditions of Eq.~\ref{eq:omega_gp2} and
choosing the discretization level $h(i)=\ceil{\frac 1 2 \log_2 i}$
we obtain $\omega_{h(i)}=\cO\Big(i^{-\frac 1 2} \big(u+\frac 1 2 D\log i\big)\sqrt{N}\Big)$,
and since $\sum_{i=1}^t i^{-\frac 1 2} \log i\leq 2 \sqrt{t}\log t$,
\[R_t \leq \cO\Big(N \big(\sqrt{t\gamma_t \log t}+\gamma_t\big) + \sqrt{Nt}\log t\Big)\,,\]
holds with high probability.
\section{Tightness Results for Gaussian Processes}
\label{sec:lower_bound}
We present in this section a strong result on the tree $\cT$ obtained by Algorithm~\ref{alg:tree_lb}.
Let $f$ be a centered Gaussian process $\cGP(0,k)$ with arbitrary covariance $k$.
We show that a converse of Theorem~\ref{thm:chaining} is true with high probability.
\subsection{A High Probabilistic Lower Bound on the Supremum}
We first recall that for Gaussian processes we have $\psi^{*-1}(u_i,\delta)=\cO\big(\delta \sqrt{u+n_i}\big)$,
that is:
\[\forall h\geq 0, \forall s\in\cT_h,~\sup_{x\succ s}f(x)-f(s) \leq \cO\Big(\sup_{x\succ s}\sum_{i>h}\Delta_i(x) \sqrt{u+n_i}\Big)\,,\]
with probability at least $1-e^{-u}$.
In what follows we fix the geometric sequence $n_i=2^i$ for all $i\geq 1$.
Therefore we have the following upper bound:
\begin{corollary}
Fix any $u>0$ and let $\cT$ be constructed as in Algorithm~\ref{alg:tree_lb}.
Then there exists a constant $c_u>0$ such that, for $f\sim\cGP(0,k)$,
\[\sup_{x\succ s} f(x)-f(s) \leq c_u \sup_{x\succ s} \sum_{i>h} \Delta_i(x)2^{\frac i 2}\,,\]
holds for all $h\geq 0$ and $s\in\cT_h$ with probability at least $1-e^{-u}$.
\end{corollary}
To show the tightness of this result,
we prove the following probabilistic bound:
\begin{theorem}[Generic Chaining Lower Bound]
\label{thm:lower_bound}
Fix any $u>0$ and let $\cT$ be constructed as in Algorithm~\ref{alg:tree_lb}.
Then there exists a constant $c_u>0$ such that, for $f\sim\cGP(0,k)$,
\[\sup_{x\succ s} f(x)-f(s) \geq c_u \sup_{x\succ s}\sum_{i=h}^\infty \Delta_i(x)2^{\frac i 2}\,,\]
holds for all $h\geq 0$ and $s\in\cT_h$ with probability at least $1-e^{-u}$.
\end{theorem}
This lower bound has important theoretical and practical consequences.
It first says that we cannot discretize $\cX$ in a finer way than Algorithm~\ref{alg:tree_lb} does,
up to a constant factor.
This also means that even if the search space $\cX$ is ``smaller''
than what the metric entropy suggests,
as for ellipsoids,
Algorithm~\ref{alg:tree_lb} finds the correct ``size''.
To our knowledge, this result is the first construction of a tree $\cT$
leading to a lower bound at every depth with high probability.
The proof of this theorem shares some similarities with the constructions
used to obtain lower bounds in expectation;
see for example \cite{Talagrand2014}, or \cite{Ding2011} for a tractable algorithm.
\subsection{Analysis of Algorithm~\ref{alg:tree_lb}}
Algorithm~\ref{alg:tree_lb} proceeds as follows.
It first computes a succession $(\cT_h)_{h\geq 0}$ of $\epsilon_h$-nets as in Section~\ref{sec:psi_process},
with $\epsilon_h=\Delta 2^{-h}$ where $\Delta$ is the diameter of $\cX$.
The parent of a node is set to the closest node in the upper level,
\[\forall t\in\cT_h,~ p(t) = \argmin_{s\in\cT_{h-1}} d(t,s)\,.\]
Therefore we have $d(t,p(t))\leq \epsilon_{h-1}$ for all $t\in\cT_h$.
Moreover, by looking at how the $\epsilon_h$-net is computed we also have
$d(t_i,t_j) \geq \epsilon_h$ for all $t_i,t_j\in\cT_h$.
These two properties are crucial for the proof of the lower bound.
Then the algorithm updates the tree to make it well balanced,
that is, such that no node $t\in\cT_h$ has more than $e^{n_{h+1}-n_h}=e^{2^h}$ children.
We note that this condition is already satisfied in every reasonable space,
so that the complex procedure that follows is only required in extreme cases.
To enforce this condition, Algorithm~\ref{alg:tree_lb} starts from the leaves
and ``prunes'' branches when their number exceeds $e^{2^h}$.
We remark that this backward step is not present in the literature on generic chaining,
and is needed for our objective of a lower bound with high probability.
By doing so, it creates a node called a \emph{pruned node} which will take as children
the pruned branches.
For this construction to be tight, the pruning step has to be careful.
Algorithm~\ref{alg:tree_lb} attaches to every pruned node a value,
computed using the values of its children,
hence the backward strategy.
When pruning branches, the algorithm keeps the $e^{2^h}$ nodes with maximum values and displaces the others.
The intuition behind this strategy is to avoid pruning branches that already contain pruned nodes.
Finally, note that this pruning step may create unbalanced pruned nodes
when the number of nodes at depth $h$ is much larger than $e^{2^h}$.
When this is the case, Algorithm~\ref{alg:tree_lb}
restarts the pruning with the updated tree to recompute the values.
Thanks to the doubly exponential growth in the balance condition,
this cannot occur more than $\log \log \abs{\cX}$ times
and the total complexity is $\cO\big(\abs{\cX}^2\big)$.
\subsection{Computing the Pruning Values and Anti-Concentration Inequalities}
We end this section by describing the values used for the pruning step.
We need a function $\varphi(\cdot,\cdot,\cdot,\cdot)$
satisfying the following anti-concentration inequality.
For all $m\in\bN$, let $s\in\cX$ and $t_1,\dots,t_m\in\cX$ be such that
$p(t_i)=s$ and $d(s,t_i)\leq \Delta$ for all $i\leq m$,
and $d(t_i,t_j)\geq \alpha$ for all $i\neq j$.
Then $\varphi$ is such that:
\begin{equation}
\label{eq:varphi}
\Pr\Big[\max_{i\leq m}f(t_i)-f(s) \geq \varphi(\alpha,\Delta,m,u) \Big]>1-e^{-u}\,.
\end{equation}
A function $\varphi$ satisfying this hypothesis is described in Lemma~\ref{lem:max_one_lvl}
in Appendix~\ref{sec:proof_lower_bound}.
Then the value $V_h(s)$ of a node $s\in\cT_h$ is computed with $\Delta_i(s) = \sup_{x\succ s} d(x,s)$ as:
\[V_h(s) = \sup_{x\succ s} \sum_{i>h} \varphi\Big(\frac 1 2 \Delta_i(x),\Delta_i(x),m,u\Big) \one_{p_i(x)\text{ is a pruned node}}\,.\]
The two steps proving Theorem~\ref{thm:lower_bound} are:
first, show that $\sup_{x\succ s}f(x)-f(s) \geq c_u V_h(s)$ for $c_u>0$ with probability at least $1-e^{-u}$,
second, show that $V_h(s) \geq c_u'\sup_{x\succ s}\sum_{i>h}\Delta_i(x)2^{\frac i 2}$
for $c_u'>0$.
The full proof of this theorem can be found in Appendix~\ref{sec:proof_lower_bound}.
\paragraph{Acknowledgements.}
We thank C\'edric Malherbe and Kevin Scaman for fruitful discussions.
\section{Introduction}
\label{sec:introduction-weight-structures}
The aim of this article is to provide and review some foundational
results on homotopy categories, weight structures and weight complex
functors.
Weight structures were independently introduced by M.~Bondarko
in \cite{bondarko-weight-str-vs-t-str}, and D.~Pauksztello in
\cite{pauk-co-t} where they are called co-t-struc\-tures. Their definition is
formally very similar to that of a t-structure. In both cases there is
an axiom demanding that each objects fits into a triangle of a certain
form. In the case of a t-structure these ``truncation
triangles" are functorial whereas in the case of a weight structure
these ``weight decomposition triangles" are not functorial.
This is a technical issue but the theory remains amazingly rich.
Weight structures have applications in various branches of
mathematics, for example in algebraic geometry (theory of motives),
algebraic topology (stable homotopy category) and representation
theory, see e.\,g.\ the work of M.~Bondarko (e.\,g.\
\cite{bondarko-weight-str-vs-t-str}),
J.~Wildeshaus (e.\,g.\ \cite{wildeshaus-chow-arXiv}) and
D.~Pauksztello (\cite{pauk-co-t, pauk-note-co-t-arXiv}) and references
therein; other references are
\cite{achar-treumann-arXiv},
\cite{iyama-yoshino-mutation},
\cite{geordie-ben-HOMFLYPT},
\cite{aihara-iyama-silting-arXiv},
\cite{keller-nicolas-simple-arXiv},
\cite{MSSS-AB-context-co-t-str-arXiv},
\cite{achar-kitchen-koszul-mixed-arXiv}.
A crucial observation due to M.~Bondarko is that, in the presence of a
weight structure
$w=(\mathcal{T}^{w \leq 0},\mathcal{T}^{w \geq 0})$
on a triangulated category $\mathcal{T}$, there is a weak weight
complex functor
${WC}: \mathcal{T} \rightarrow K_{\op{weak}}({\heartsuit})$ where
${\heartsuit}= \mathcal{T}^{w \leq 0} \cap \mathcal{T}^{w \geq 0}$
is the
heart of $w$ and the weak homotopy
category $K_{\op{weak}}({\heartsuit})$ is a certain quotient of the homotopy category
$K({\heartsuit})$.
M.~Bondarko explains
that in various natural settings this functor lifts to a
``strong weight complex functor" $\mathcal{T} \rightarrow K({\heartsuit})^{\op{anti}}$
(the upper index does not appear in
\cite{bondarko-weight-str-vs-t-str}; it will be explained below).
We expect that this strong weight complex functor
will be an important tool.
The basic example of a weight structure is the ``standard weight
structure"
$(K(\mathcal{A})^{w \leq 0}, K(\mathcal{A})^{w \geq 0})$
on the homotopy category $K(\mathcal{A})$ of an additive
category $\mathcal{A}$;
here
$K(\mathcal{A})^{w \leq n}$ (resp.\
$K(\mathcal{A})^{w \geq n}$) is the full subcategory of
$K(\mathcal{A})$ consisting of complexes $X=(X^i, d^i:X^i \rightarrow
X^{i+1})$ that are isomorphic to a complex concentrated in
degrees $\leq n$ (resp.\ $\geq n$) (for fixed $n \in \Bl{Z}$).
The subtle point to confirm this example
(which appears in \cite{bondarko-weight-str-vs-t-str} and
\cite{pauk-co-t})
is to check that $K(\mathcal{A})^{w \leq 0}$
and $K(\mathcal{A})^{w \geq 0}$ are both
closed under retracts in $K(\mathcal{A})$.
This basic example was our motivation for the first part of this article
where we discuss the idempotent completeness of (subcategories of)
homotopy categories. Let $\mathcal{A}$ be an additive category as
above.
Then a natural question is whether
$K(\mathcal{A})$ is idempotent complete.
Since $K(\mathcal{A})$ is a triangulated category this question can be
rephrased as follows:
\begin{question}
\label{q:KA-ic}
Given any idempotent endomorphism $e:X \rightarrow X$ in
$K(\mathcal{A})$, is there an isomorphism $X\cong E \oplus F$ such
that $e$ corresponds to
$\tzmat 1000: E \oplus F \rightarrow E \oplus F$?
\end{question}
We do not know the answer to this question in general.
We can show that certain full subcategories of $K(\mathcal{A})$ are
idempotent complete:
\begin{theorem}
[{see Thm.~\ref{t:one-side-bounded-hot-idempotent-complete}}]
\label{t:one-side-bounded-hot-idempotent-complete-einleitung}
The full subcategories
$K(\mathcal{A})^{w \leq n}$ and
$K(\mathcal{A})^{w \geq n}$ of
$K(\mathcal{A})$ are idempotent complete.
In particular
$K(\mathcal{A})^-:=\bigcup_{n \in \Bl{Z}}K(\mathcal{A})^{w \leq n}$,
$K(\mathcal{A})^+:=\bigcup_{n \in \Bl{Z}}K(\mathcal{A})^{w \geq n}$ and
$K(\mathcal{A})^{bw}:= K(\mathcal{A})^- \cap K(\mathcal{A})^+$
are idempotent complete.
\end{theorem}
Possibly this result is known to the
experts but we could not find a proof in the literature.
Our proof of this result is constructive and based on work of
R.~W.~Thomason \cite{thomason-class}
and ideas of P.~Balmer and M.~Schlichting
\cite{balmer-schlichting}.
Theorem~\ref{t:one-side-bounded-hot-idempotent-complete-einleitung}
implies that
$(K(\mathcal{A})^{w \leq 0}, K(\mathcal{A})^{w \geq 0})$
is a weight structure on $K(\mathcal{A})$ (see
Prop.~\ref{p:ws-hot-additive}).
Another approach to Question~\ref{q:KA-ic} is to impose further
assumptions on $\mathcal{A}$.
We can show (see Thm.~\ref{t:hot-idempotent-complete}):
$K(\mathcal{A})$ is idempotent complete if
$\mathcal{A}$ has countable coproducts (this follows directly
from results of
M.~B{\"o}kstedt and A.~Neeman \cite{neeman-homotopy-limits}
or from a variation of our proof of
Theorem~\ref{t:one-side-bounded-hot-idempotent-complete-einleitung})
or if $\mathcal{A}$ is abelian (this is an application of results from
\cite{beligiannis-reiten-torsion-theories} and
\cite{Karoubianness}).
If $\mathcal{A}$ itself is idempotent complete then
projectivization (see \cite{Krause-krull-schmidt-categories}) shows
that the full
subcategory $K(\mathcal{A})^b$ of $K(\mathcal{A})$ of complexes that
are isomorphic to a bounded complex is idempotent complete.
It may seem natural to assume that
$\mathcal{A}$ is idempotent complete and additive in
Question~\ref{q:KA-ic}. However, if $\mathcal{A}^{\op{ic}}$ is the idempotent
completion of $\mathcal{A}$, then $K(\mathcal{A})$ is idempotent
complete if and only if $K(\mathcal{A}^{{\op{ic}}})$ is idempotent complete
(see Rem.~\ref{rem:KA-idempotent-complete-wlog-ic}).
Results of R.~W.~Thomason \cite{thomason-class} indicate that it might
be useful to consider the Grothendieck group $K_0(K(\mathcal{A}))$ of
the triangulated category $K(\mathcal{A})$ for additive (essentially)
small $\mathcal{A}$. We show that the Grothendieck groups
$K_0(K(\mathcal{A}))$,
$K_0(K(\mathcal{A})^-)$ and
$K_0(K(\mathcal{A})^+)$ all vanish for such $\mathcal{A}$
(see Prop.~\ref{p:grothendieck-homotopy-category}).
The second part of this article concerns weight complex functors.
In the example of the standard weight structure
$(K(\mathcal{A})^{w \leq 0}, K(\mathcal{A})^{w \geq 0})$ on
$K(\mathcal{A})$, the heart ${\heartsuit}$ is the idempotent completion of
$\mathcal{A}$
(see Cor.~\ref{c:heart-standard-wstr})
and the weak weight complex functor ${WC}:
\mathcal{T} \rightarrow K_{\op{weak}}({\heartsuit})$
naturally
and easily
lifts
to a
triangulated functor ${\widetilde{WC}}:K(\mathcal{A}) \rightarrow K({\heartsuit})^{\op{anti}}$
(see Prop.~\ref{p:strong-WC-for-standard-wstr}); here the triangulated
categories $K({\heartsuit})^{\op{anti}}$ and $K({\heartsuit})$ coincide as additive
categories with translation but a candidate triangle
$X \xra{u} Y \xra{v} Z \xra{w} [1]X$
is a triangle in $K({\heartsuit})^{\op{anti}}$ if and only if
$X \xra{-u} Y \xra{-v} Z \xra{-w} [1]X$
is a triangle in $K({\heartsuit})$.
This functor ${\widetilde{WC}}$ is an example of a ``strong weight complex
functor".
Let us return to the general setup of a weight structure $w$ on a
triangulated category $\mathcal{T}$ with heart ${\heartsuit}$.
Assume that
$\tildew{\mathcal{T}}$ is a filtered triangulated category over
$\mathcal{T}$ in the sense of \cite[App.]{Beilinson}.
The main result of the second part of this article is a complete proof
of the following Theorem.
\begin{theorem}
[{see Thm.~\ref{t:strong-weight-cplx-functor} and cf.\
\cite[8.4]{bondarko-weight-str-vs-t-str}}]
\label{t:strong-weight-cplx-functor-einleitung}
Assume that $w$ is bounded and that $\tildew{\mathcal{T}}$
satisfies axiom~\ref{enum:filt-tria-cat-3x3-diag} stated
in Section~\ref{sec:additional-axiom}.
Then there is a strong weight complex functor
\begin{equation*}
{\widetilde{WC}}: \mathcal{T} \rightarrow K^b({\heartsuit})^{\op{anti}}.
\end{equation*}
This means that ${\widetilde{WC}}$ is a triangulated functor whose composition
with $K^b({\heartsuit})^{\op{anti}} \rightarrow K_{\op{weak}}({\heartsuit})$
is isomorphic to the weak weight complex functor
(as a functor of additive categories with translation).
\end{theorem}
Our proof of this theorem relies on the ideas of
A.~Beilinson and M.~Bondarko sketched in
\cite[8.4]{bondarko-weight-str-vs-t-str}. We explain the idea of the proof
in Section~\ref{sec:idea-construction-strong-WC-functor}.
The additional axiom~\ref{enum:filt-tria-cat-3x3-diag}
imposed on $\tildew{\mathcal{T}}$ in
Theorem~\ref{t:strong-weight-cplx-functor-einleitung}
seems to be new. It states that any morphism gives rise to a certain
$3\times 3$-diagram; see Section~\ref{sec:additional-axiom} for the
precise formulation. It is used in the proof of
Theorem~\ref{t:strong-weight-cplx-functor-einleitung} at
two important points; we do not know if this axiom can be removed.
We hope that this axiom is satisfied for reasonable filtered triangulated
categories; it is true in the basic example of a filtered
triangulated category: If $\mathcal{A}$ is an abelian category
its filtered derived category $DF(\mathcal{A})$
is a filtered triangulated category (see
Prop.~\ref{p:basic-ex-f-cat})
that satisfies axiom~\ref{enum:filt-tria-cat-3x3-diag}
(see Lemma~\ref{l:additional-axiom-true-DFA}).
In the short third part of this article we prove a result which
naturally fits into the context of weight structures and filtered
triangulated categories:
Given a filtered triangulated category $\tildew{\mathcal{T}}$ over a
triangulated category $\mathcal{T}$
with a weight structure $w$, there is a unique weight structure on
$\tildew{\mathcal{T}}$ that is compatible with $w$ (see
Prop.~\ref{p:compatible-w-str}).
\subsection*{Plan of the article}
We fix our notation and gather together some results on additive
categories, idempotent completeness, triangulated categories and
homotopy categories in Section~\ref{sec:preliminaries}; we suggest
skimming through this section and coming back as required.
Sections~\ref{sec:hot-cat-idem-complete}
and~\ref{sec:weight-structures} constitute the first part of this
article: they contain the results on idempotent
completeness of homotopy categories and some basic results on weight
structures.
The next two sections lay the groundwork for the proof of
Theorem~\ref{t:strong-weight-cplx-functor-einleitung}.
In Section~\ref{sec:weak-wc-fun} we construct the weak weight complex
functor in detail. We recall the notion of a filtered triangulated category in
Section~\ref{sec:filt-tria-cat} and prove some important results
stated in \cite[App.]{Beilinson} as no proofs are available in the literature.
We prove Theorem~\ref{t:strong-weight-cplx-functor-einleitung} in
Section~\ref{sec:strong-WCfun}.
Section~\ref{sec:weight-structures-and-filtered-triang} contains the
results on compatible weight structures.
\subsection*{Acknowledgements}
This work was motivated by an ongoing joint project with Geordie Williamson.
We thank him for the collaboration, for comments and in particular for
the encouragement to turn our private notes into this article.
We thank Mikhail Bondarko for useful correspondence, discussions and
encouragement.
We thank Catharina Stroppel for her interest and
helpful comments on a preliminary version of this article.
We thank Peter J\o{}rgensen,
Henning Krause, Maurizio Martino, David Pauksztello, Jorge Vit{\'o}ria and Dong Yang
for their interest, discussions and useful references.
Thanks are also due to Apostolos Beligiannis and Idun Reiten for
explanations,
and to
Paul Balmer and Amnon Neeman for useful correspondence.
Our work was partially supported by the priority program SPP 1388 of
the German Science foundation, and by the
Collaborative Research Center SFB Transregio 45 of the German Science foundation.
\section{Preliminaries}
\label{sec:preliminaries}
For the definition of an additive category (with translation
(= shift)), a functor of
additive categories (with translation), and of a triangulated category
see \cite{KS-cat-sh}.
\subsection{(Additive) categories}
\label{sec:additive-categories}
Let $\mathcal{A}$ be a category and $X$ an object of $\mathcal{A}$.
An object $Y \in \mathcal{A}$ is a
\textbf{retract of $X$} if there are morphisms $p:X \rightarrow Y$ and $i:Y
\rightarrow X$ such that $pi=\op{id}_Y$. Then $ip:X \rightarrow X$ is an idempotent
endomorphism.
A subcategory $\mathcal{B} \subset \mathcal{A}$ is
\textbf{closed under retracts} (= Karoubi-closed) if it contains all
retracts in $\mathcal{A}$ of any object of $\mathcal{B}$.
In this case $\mathcal{B}$ is a
\textbf{strict} subcategory of $\mathcal{A}$, i.\,e.\ it is closed under
isomorphisms.
Let $\mathcal{B}$ be a subcategory of an additive
category $\mathcal{A}$. We say that $\mathcal{B}$ is
\textbf{dense in $\mathcal{A}$} if each object of $\mathcal{A}$ is a
summand of an object of $\mathcal{B}$.
We
define full subcategories $\leftidx{^\perp}{\mathcal{B}},
\mathcal{B}^\perp \subset \mathcal{A}$ by
\begin{align*}
\leftidx{^\perp}{\mathcal{B}} & =
\{Z \in \mathcal{A} \mid \mathcal{A}(Z, \mathcal{B})=0\},\\
\mathcal{B}^\perp & =
\{Z \in \mathcal{A} \mid \mathcal{A}(\mathcal{B},Z)=0\}.
\end{align*}
\subsection{Idempotent completeness}
\label{sec:idemp-completeness}
Let $\mathcal{A}$ be a category and $X$ an object of $\mathcal{A}$.
An idempotent
endomorphism $e \in \op{End}(X)$ \textbf{splits} if there is a \textbf{splitting
of $e$}, i.\,e.\ there are an object $Y \in \mathcal{A}$ and morphisms
$p:X \rightarrow Y$
and $i: Y \rightarrow X$ such that $ip=e$ and $pi=\op{id}_Y$.
A splitting of $e$ is unique up to
unique isomorphism.
If every idempotent endomorphism splits we say that $\mathcal{A}$ is
\textbf{idempotent complete} (= Karoubian).
If $\mathcal{A}$ is additive, an idempotent $e:X \rightarrow X$ has a splitting
$(Y,p,i)$ and $1-e$ has a
splitting $(Z, q, j)$, then obviously $(i,j): Y\oplus Z \rightarrow X$ is an
isomorphism with inverse $\svek p q$.
In particular in an idempotent complete additive category
any idempotent endomorphism of an object $X$ induces a
direct sum decomposition of $X$.
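For the reader's convenience, the asserted inverse relation can be checked directly:
since $qj=\op{id}_Z$ and $jq=1-e$ we get $ej = j - jqj = 0$,
and since $pi=\op{id}_Y$ and $ip=e$ we get $(1-e)i = i - ipi = 0$; hence
\begin{equation*}
pj = (pi)(pj) = p(ej) = 0
\quad\text{and}\quad
qi = (qj)(qi) = q((1-e)i) = 0,
\end{equation*}
so that
\begin{equation*}
\svek p q (i,j) = \tzmat {pi}{pj}{qi}{qj} = \tzmat 1001
\quad\text{and}\quad
(i,j) \svek p q = ip+jq = e + (1-e) = \op{id}_X.
\end{equation*}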
Any additive category $\mathcal{A}$ has an \textbf{idempotent
completion} (= Ka\-rou\-bi completion)
$(\mathcal{A}^{\op{ic}},i)$, i.\,e.\ there is an idempotent complete additive
category $\mathcal{A}^{\op{ic}}$ together with an additive functor
$i:\mathcal{A} \rightarrow \mathcal{A}^{\op{ic}}$ such
that any additive functor $F:\mathcal{A} \rightarrow \mathcal{C}$ to an
idempotent complete additive category $\mathcal{C}$ factors as $F=i
\circ F^{\op{ic}}$ for an additive functor $F^{\op{ic}}:\mathcal{A}^{\op{ic}} \rightarrow
\mathcal{C}$ which is unique up to unique isomorphism; see
e.\,g.\ \cite{balmer-schlichting} for an explicit construction. Then
$i$ is fully faithful and we usually view $\mathcal{A}$ as a full
subcategory of $\mathcal{A}^{\op{ic}}$; it is a dense subcategory.
Conversely if
$\mathcal{A}$ is a full dense
additive subcategory of an idempotent complete additive category
$\mathcal{B}$,
then $\mathcal{B}$ together with
the inclusion $\mathcal{A} \hookrightarrow \mathcal{B}$ is an idempotent
completion of $\mathcal{A}$.
\subsection{Triangulated categories}
\label{sec:triang-categ}
Let $\mathcal{T}$ (more precisely $(\mathcal{T}, [1])$ together with a
certain class of candidate triangles) be a
triangulated category (see \cite[Ch.~10]{KS-cat-sh},
\cite{neeman-tricat}, \cite{BBD}). We follow the terminology of
\cite{neeman-tricat} and call candidate triangle (resp.\ triangle)
what is called triangle (resp.\ distinguished triangle) in
\cite{KS-cat-sh}. We say that a subcategory $\mathcal{S} \subset
\mathcal{T}$ is \define{closed under extensions} if for any triangle $X
\rightarrow Y \rightarrow Z \rightarrow [1]X$ in $\mathcal{T}$ with $X$ and $Z$ in
$\mathcal{S}$ we have $Y \in \mathcal{S}$.
\subsubsection{Basic statements about triangles}
\label{sec:basic-triangles}
\begin{lemma}
\label{l:zero-triang-cat}
Let $X \xra{f} Y \xra{g} Z \xra{h} [1]X$ be a triangle in
$\mathcal{T}$. Then
\begin{enumerate}
\item
\label{enum:zero-triang-cat-equiv}
$h=0$ if and only if there is a morphism $\epsilon: Y \rightarrow X$
such that $\epsilon f =\op{id}_X$.
\end{enumerate}
Let $\epsilon: Y \rightarrow X$ be given satisfying $\epsilon f =\op{id}_X$. Then:
\begin{enumerate}[resume]
\item
\label{enum:zero-triang-unique-delta}
There
is a unique morphism $\delta: Z \rightarrow Y$ such that $\epsilon \delta
= 0$ and $g \delta =\op{id}_Z$.
\item
\label{enum:zero-triang-isom}
The morphism
\begin{equation}
\label{eq:split-triangle}
\xymatrix{
X \ar[r]^-f \ar[d]^-{\op{id}_X}
& Y \ar[r]^-g \ar[d]^-{\svek \epsilon g}
& Z \ar[r]^-{h=0} \ar[d]^-{\op{id}_Z}
& {[1]X} \ar[d]^-{\op{id}_{[1]X}} \\
{X} \ar[r]^-{\svek 10}
& {X \oplus Z} \ar[r]^-{\zvek 01}
& Z \ar[r]^-0
& {[1]X}
}
\end{equation}
is an isomorphism of triangles and $\svek \epsilon g$ is invertible with
inverse $ \zvek f \delta$.
Under the isomorphism $\svek \epsilon g:Y \xra{\sim} X \oplus Z$ the
morphism $\epsilon$ corresponds to $\zvek 10: X \oplus Z \rightarrow X$ and the
morphism $\delta$ to $\svek 01:Z \rightarrow X \oplus Z$.
\end{enumerate}
\end{lemma}
\begin{proof}
If $\epsilon f=\op{id}_X$ then $h =\op{id}_{[1]X} \circ h = [1]\epsilon \circ
[1]f \circ h =0$. If $h=0$ then use the cohomological functor
$\op{Hom}(?,X)$ to find $\epsilon$.
Let $\epsilon: Y \rightarrow X$ be given satisfying $\epsilon f=\op{id}_X$.
The morphism \eqref{eq:split-triangle} is a morphism of candidate
triangles and even of triangles since coproducts of triangles are
triangles (e.\,g.\ \cite[Prop.~1.2.1]{neeman-tricat}). Hence $\svek
\epsilon g$ is an isomorphism.
For any $\delta: Z \rightarrow Y$ satisfying $\epsilon \delta
= 0$ and $g \delta =\op{id}_Z$ we have
$\svek \epsilon g \zvek f\delta =\tzmat 1001$. Hence $\delta$ is
unique if it exists.
Let $\zvek ab$ be the inverse of $\svek \epsilon g$. Then $\op{id}_Y =
a\epsilon +bg$ and hence $f= \op{id}_Y f= a\epsilon f+bgf =a$; on
the other hand $\tzmat 1001 =
\svek \epsilon g \zvek ab =\tzmat {\epsilon f}{\epsilon
b}{gf}{gb}=\tzmat 1 {\epsilon b} 0 {g b}$. Hence $b$ satisfies the
conditions imposed on $\delta$.
\end{proof}
We say that a triangle
$X \xra{f} Y \xra{g} Z \xra{h} [1]X$ splits if it is isomorphic (by an
arbitrary isomorphism of triangles) to the
triangle $X \xra{\svek 10} X \oplus Z \xra{\zvek 01} Z \xra{0} [1]X$.
This is the case if and only if $h=0$ as we see from
Lemma~\ref{l:zero-triang-cat}.
\begin{corollary}
[{cf.~\cite[Lemma~2.2]{Karoubianness}}]
\label{c:retract-is-summand-in-tricat}
Let $e: X \rightarrow X$ be an idempotent endomorphism in $\mathcal{T}$. Then $e$
splits if and only if $1-e$ splits.
In particular, an object $Y$ is a retract of an object $X$ if and
only if $Y$ is a summand of $X$.
\end{corollary}
This corollary shows that the question of idempotent completeness of a
triangulated category is equivalent to the analog of
Question~\ref{q:KA-ic}.
\begin{proof}
Let $(Y,p,i)$ be a splitting of $e$. Complete $i:Y \rightarrow X$ into a
triangle $Y \xra{i} X \xra{q} Z \rightarrow [1]Y$.
Lemma~\ref{l:zero-triang-cat} \eqref{enum:zero-triang-unique-delta}
applied
to this triangle and $p:X \rightarrow Y$ yields a morphism $j:Z \rightarrow X$ and then
Lemma~\ref{l:zero-triang-cat} \eqref{enum:zero-triang-isom} shows
that $(Z, q, j)$ is a splitting of $1-e$.
\end{proof}
\begin{proposition}
[{\cite[Prop.~1.1.9]{BBD}}]
\label{p:BBD-1-1-9-copied-for-w-str}
Let $(X,Y,Z)$ and $(X',Y',Z')$ be triangles and let
$g:Y \rightarrow Y'$ be a morphism in $\mathcal{T}$:
\begin{equation}
\label{eq:BBD-1-1-9-copied-for-w-str}
\xymatrix{
X \ar[r]^u \ar@{..>}[d]_f \ar@{}[dr]|{(1)} & Y \ar[r]^v \ar[d]_g
\ar@{}[dr]|{(2)} & Z \ar[r]^d \ar@{..>}[d]_h & {[1]X} \ar@{..>}[d]_{[1]f}\\
X' \ar[r]^{u'} & Y' \ar[r]^{v'} & Z' \ar[r]^{d'} & {[1]X' }
}
\end{equation}
The following conditions are equivalent:
\begin{enumerate}
\item
$v'gu=0$;
\item
\label{enum:fkomm}
there is a morphism $f$ such that (1) commutes;
\item
\label{enum:hkomm}
there is a morphism $h$ such that (2) commutes;
\item
there is a morphism $(f,g,h)$ of triangles as indicated in diagram
\eqref{eq:BBD-1-1-9-copied-for-w-str}.
\end{enumerate}
If these conditions are satisfied and $\op{Hom}([1]X,Z')=0$ then
$f$ in \eqref{enum:fkomm} and $h$ in \eqref{enum:hkomm}
are unique.
\end{proposition}
\begin{proof}
See {\cite[Prop.~1.1.9]{BBD}}.
\end{proof}
\begin{proposition}
[{\cite[1.1.11]{BBD}}]
\label{p:3x3-diagram-copied-for-w-str}
Every commutative square
\begin{equation*}
\xymatrix{
{X} \ar[r] & Y\\
{X'} \ar[u] \ar[r] & {Y'} \ar[u]
}
\end{equation*}
can be completed to a so-called \textbf{$3\times 3$-diagram}
\begin{equation*}
\xymatrix{
{[1]X'} \ar@{..>}[r] &
{[1]Y'} \ar@{..>}[r] &
{[1]Z'} \ar@{..>}[r] \ar@{}[rd]|{\circleddash}&
{[2]X'} \\
{X''} \ar[u] \ar[r] &
{Y''} \ar[u] \ar[r] &
{Z''} \ar[u] \ar[r] &
{[1]X''} \ar@{..>}[u] \\
{X} \ar[u] \ar[r] &
{Y} \ar[u] \ar[r] &
{Z} \ar[u] \ar[r] &
{[1]X} \ar@{..>}[u] \\
{X'} \ar[u] \ar[r] &
{Y'} \ar[u] \ar[r] &
{Z'} \ar[u] \ar[r] &
{[1]X',} \ar@{..>}[u]
}
\end{equation*}
i.\,e.\ a diagram as above having the following properties: The
dotted arrows are obtained
by translation $[1]$, all small squares are commutative except the
upper right square marked with $\circleddash$ which is
anti-commutative, and all three rows and columns with solid arrows
are triangles. (The column/row with the dotted arrows becomes a
triangle if an odd number of its morphisms is multiplied by
$-1$.)
Variation: Any of the eight small commutative squares in the above
diagram can be completed to a $3\times 3$-diagram as above.
It is also possible to complete the anti-commutative square to a
$3\times 3$ diagram.
\end{proposition}
\begin{proof}
See \cite[1.1.11]{BBD}. To obtain the variations remove the column on
the right and add the $[-1]$-shift of the ``$Z$-column" on the
left. Modify the signs suitably. Iterate this and use the diagonal
symmetry.
\end{proof}
\subsubsection{Anti-triangles}
\label{sec:anti-triangles}
Following \cite[10.1.10]{KS-cat-sh} we call a candidate triangle $X
\xra{f} Y \xra{g} Z \xra{h} [1]X$ an
\textbf{anti-triangle} if $X \xra{f} Y \xra{g} Z \xra{-h} [1]X$ is a
triangle. Then $(\mathcal{T}, [1])$ with the class of all
anti-triangles is again a triangulated category that we denote by
$(\mathcal{T}^{\op{anti}}, [1])$ or $\mathcal{T}^{\op{anti}}$.
The triangulated categories $\mathcal{T}$ and $\mathcal{T}^{\op{anti}}$ are
equivalent as triangulated categories
(cf.\ \cite[Exercise~10.10]{KS-cat-sh}): Let
$F=\op{id}_\mathcal{T}:\mathcal{T} \rightarrow \mathcal{T}$ be the identity
functor (of additive
categories) and $\tau: F[1] \xra{\sim} [1]F$ the isomorphism given by
$\tau_X=-\op{id}_{[1]X}: [1]X=F[1]X \xra{\sim} [1]X=[1]FX$. Then
\begin{equation}
\label{eq:T-antiT-triequi}
(\op{id}_\mathcal{T}, \tau):\mathcal{T} \xra{\sim} \mathcal{T}^{\op{anti}}
\end{equation}
is a triangulated equivalence.
\subsubsection{Adjoints are triangulated}
\label{sec:adjoints-triang}
Assume that $(G,\gamma): \mathcal{T} \rightarrow \mathcal{S}$ is a
triangulated functor and that $F: \mathcal{S} \rightarrow \mathcal{T}$ is left
adjoint to $G$.
Let $X \in \mathcal{S}$. The morphism
${[1]X} \rightarrow {[1]GFX} \xsira{\gamma_{FX}^{-1}} {G[1]FX}$
(where the first morphism is the translation of the unit of the adjunction)
corresponds under the adjunction to a morphism
$\phi_X: F[1]X \rightarrow [1]FX$.
This construction is natural in $X$ and defines a morphism $\phi: F[1]
\rightarrow [1]F$ (which in fact is an isomorphism).
We omit the tedious proof of the following Proposition.
\begin{proposition}
[{\cite[Prop.~1.6]{keller-vossieck} (without proof); cf.\
\cite[Exercise~10.3]{KS-cat-sh}}]
\label{p:linksad-triang-for-wstr-art}
Let $F$ be left adjoint to a triangulated functor $(G,\gamma)$.
Then $(F, \phi)$ as defined above is a triangulated functor.
Similarly the right adjoint to a triangulated functor is
triangulated.
\end{proposition}
\subsubsection{Torsion pairs and t-structures}
\label{sec:t-str-and-torsion-theories}
The notion of a t-structure \cite[1.3.1]{BBD}
and that of a torsion pair
\cite{beligiannis-reiten-torsion-theories}
on a triangulated category essentially coincide,
see \cite[Prop.~I.2.13]{beligiannis-reiten-torsion-theories}:
A pair $(\mathcal{X}, \mathcal{Y})$ is a torsion pair if and only if
$(\mathcal{X}, [1]\mathcal{Y})$ is a t-structure.
We will use both terms.
\subsection{Homotopy categories and variants}
\label{sec:homotopy-categories}
Let $\mathcal{A}$ be an additive category and $C(\mathcal{A})$ the
category of (cochain) complexes in $\mathcal{A}$ with cochain maps as
morphisms:
A morphism $f:X \rightarrow Y$ in $C(\mathcal{A})$ is a sequence
$(f^n)_{n \in \Bl{Z}}$ of morphisms $f^n: X^n \rightarrow Y^n$ such that
$d^n_Y f^n =f^{n+1} d^n_X$ for all $n \in \Bl{Z}$
(or in shorthand notation $df=fd$).
Let $f, g: X \rightarrow Y$ be morphisms in $C(\mathcal{A})$. Then $f$ and
$g$ are \define{homotopic} if there is a sequence $h=(h^n)_{n \in
\Bl{Z}}$ of morphisms $h^n: X^n \rightarrow Y^{n-1}$ in $\mathcal{A}$ such that
$f^n-g^n= d^{n-1}_Y h^n + h^{n+1} d^n_X$ for all $n \in \Bl{Z}$ (or in
shorthand
notation $f-g=dh+hd$).
The \define{homotopy category} $K(\mathcal{A})$ has the same objects
as $C(\mathcal{A})$, but
\begin{equation*}
\op{Hom}_{K(\mathcal{A})}(X,Y):=
\frac{\op{Hom}_{C(\mathcal{A})}(X,Y)}{\{\text{morphisms homotopic to zero}\}}.
\end{equation*}
Let $f, g: X \rightarrow Y$ be morphisms in $C(\mathcal{A})$. Then $f$ and
$g$ are \define{weakly homotopic}
(see \cite[3.1]{bondarko-weight-str-vs-t-str})
if there is a pair $(s, t)$ of
sequences $s=(s^n)_{n \in \Bl{Z}}$ and
$t=(t^n)_{n \in \Bl{Z}}$ of morphisms $s^n, t^n: X^n \rightarrow Y^{n-1}$
such that
\begin{equation*}
f^n-g^n= d^{n-1}_Y s^n + t^{n+1} d^n_X
\end{equation*}
for all $n \in \Bl{Z}$
(or in shorthand notation $f-g=ds+td$).
The \define{weak homotopy category} $K_{{\op{weak}}}(\mathcal{A})$ has the
same objects
as $C(\mathcal{A})$, but
\begin{equation*}
\op{Hom}_{K_{{\op{weak}}}(\mathcal{A})}(X,Y):=
\frac{\op{Hom}_{C(\mathcal{A})}(X,Y)}{\{\text{morphisms weakly
homotopic to zero}\}}.
\end{equation*}
\begin{remark}
Let $h$ and $(s,t)$ be as in the above definition (without asking
for $f-g=dh+hd$ or $f-g=ds+td$).
Then $dh+hd:X \rightarrow Y$ is homotopic to zero.
But
$ds+td:X \rightarrow Y$ is
not necessarily weakly homotopic to zero: It need not be a morphism
in $C(\mathcal{A})$.
It is weakly homotopic to zero if and only if $dsd=dtd$ (which is of
course the case if $f-g=ds+td$).
Note that
weakly homotopic maps induce the same map on cohomology.
\end{remark}
All categories $C(\mathcal{A})$, $K(\mathcal{A})$ and
$K_{{\op{weak}}}(\mathcal{A})$ are additive categories.
Let $[1]: C(\mathcal{A}) \rightarrow C(\mathcal{A})$ be the functor that
maps an object $X$ to
$[1]X$ where $([1]X)^n=X^{n+1}$ and
$d_{[1]X}^n=-d_X^{n+1}$ and a morphism $f: X \rightarrow Y$ to $[1]f$ where
$([1]f)^n=f^{n+1}$. This is an automorphism of $C(\mathcal{A})$ and
induces automorphisms $[1]$ of $K(\mathcal{A})$ and
$K_{{\op{weak}}}(\mathcal{A})$. The categories $C(\mathcal{A})$,
$K(\mathcal{A})$ and $K_{{\op{weak}}}(\mathcal{A})$ become additive categories with
translation. Sometimes we write $\Sigma$ instead of $[1]$.
Obviously there are canonical functors
\begin{equation}
\label{eq:can-CKKweak}
C(\mathcal{A}) \rightarrow K(\mathcal{A}) \rightarrow K_{\op{weak}}(\mathcal{A})
\end{equation}
of additive categories with translation.
The category $K(\mathcal{A})$ has a natural structure of triangulated
category:
Given a morphism
$m: M \rightarrow N$ in $C(\mathcal{A})$ we define the mapping cone $\op{Cone}(m)$
of $m$
to be the complex $\op{Cone}(m)=N \oplus [1]M$ with differential
$\tzmat{d_N}{m}{0}{d_{[1] M}}=\tzmat{d_N}{m}{0}{-d_M}$.
It fits into the following mapping cone sequence
\begin{equation}
\label{eq:mapping-cone-triangle}
\xymatrix{
& {M} \ar[r]^-{m}
& {N} \ar[r]^-{\svek {1} 0}
& {\op{Cone}(m)} \ar[r]^-{\zvek 0{1}}
& {[1]M}
}
\end{equation}
in $C(\mathcal{A})$.
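One checks directly that the differential of the cone squares to zero, using that $m$ is a cochain map ($d_N m = m d_M$):
\begin{equation*}
\Big(\tzmat{d_N}{m}{0}{-d_M}\Big)^2
= \tzmat{d_N^2}{d_N m - m d_M}{0}{d_M^2}
= 0.
\end{equation*}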
The triangles of $K(\mathcal{A})$ are precisely the candidate
triangles that are
isomorphic to the
image of a mapping cone sequence
\eqref{eq:mapping-cone-triangle} in $K(\mathcal{A})$;
this image is called the mapping cone triangle of $m$.
We will later use: If we rotate the mapping cone triangle
for $-m$ twice we obtain
the triangle
\begin{equation}
\label{eq:mapcone-minus-m-rot2}
\xymatrix{
{[-1]N} \ar[r]^-{\svek {-1} 0}
& {[-1]\op{Cone}(-m)} \ar[r]^-{\zvek 0{-1}}
& {M} \ar[r]^-{-m}
& {N.}
}
\end{equation}
In this setting
there
is apart from \eqref{eq:T-antiT-triequi}
another triangulated equivalence
between $K(\mathcal{A})$ and $K(\mathcal{A})^{\op{anti}}$:
The functor $S:C(\mathcal{A})\xra{\sim} C(\mathcal{A})$ which sends a complex
$(X^n, d_X^n)$ to $(X^n, -d_X^n)$ and a morphism $f$ to $f$ descends to
a functor $S:K(\mathcal{A}) \xra{\sim} K(\mathcal{A})$. Then
\begin{equation}
\label{eq:KA-antiKA-triequi}
(S,\op{id}):K(\mathcal{A}) \xra{\sim} K(\mathcal{A})^{\op{anti}}
\end{equation}
(where $\op{id}: S[1]\xra{\sim}
[1]S$ is the obvious identification) is a triangulated equivalence:
Observe that $S$ maps
\eqref{eq:mapping-cone-triangle} to
\begin{equation*}
\xymatrix{
& {S(M)} \ar[r]^-{S(m)=m}
& {S(N)} \ar[r]^-{\svek {1} 0}
& {\op{Cone}(-S(m))} \ar[r]^-{\zvek 0{1}}
& {[1]S(M)}
}
\end{equation*}
which becomes an anti-triangle in $K(\mathcal{A})$.
We introduce some notation:
Any functor $F: \mathcal{A} \rightarrow \mathcal{B}$ of additive
categories obviously induces
a functor $F_{C}:C(\mathcal{A}) \rightarrow C(\mathcal{B})$ of additive
categories with translation and a functor
$F_{K}:K(\mathcal{A}) \rightarrow K(\mathcal{B})$ of triangulated categories.
If $F: \mathcal{A} \rightarrow \mathcal{A}$ is an endofunctor we denote these
functors often by $F_{C(\mathcal{A})}$ and $F_{K(\mathcal{A})}$.
\section{Homotopy categories and idempotent completeness}
\label{sec:hot-cat-idem-complete}
Let $K(\mathcal{A})$ be the homotopy category of an additive category
$\mathcal{A}$.
We define full subcategories of $K(\mathcal{A})$ as follows.
Let $K^{w \leq n}(\mathcal{A})$ consist of all
objects that are zero in all degrees $>n$
(where $w$ means ``weights"; the terminology will become clear
from Proposition~\ref{p:ws-hot-additive} below). The union of all $K^{w \leq
n}(\mathcal{A})$ for $n \in \Bl{Z}$ is $K^-(\mathcal{A})$, the category
of all bounded above complexes.
Similarly we define
$K^{w \geq n}(\mathcal{A})$ and $K^+(\mathcal{A})$.
Let $K^b(\mathcal{A})=K^-(\mathcal{A}) \cap K^+(\mathcal{A})$ be the
full subcategory of all bounded complexes.
For $* \in \{+, -, b, w \leq n, w \geq n\}$ let
$K(\mathcal{A})^*$ be the closure under isomorphisms of
$K^*(\mathcal{A})$ in $K(\mathcal{A})$. Then all inclusions $K^*(\mathcal{A}) \subset
K(\mathcal{A})^*$ are equivalences of categories.
We define $K(\mathcal{A})^{bw}:= K(\mathcal{A})^- \cap
K(\mathcal{A})^+$ (where $bw$ means ``bounded weights"). In general
the inclusion $K(\mathcal{A})^b \subset
K(\mathcal{A})^{bw}$ is not an equivalence (see
Rem.~\ref{rem:hot-bd-cplx-add-cat-not-idempotent-complete}).
\begin{theorem}
\label{t:one-side-bounded-hot-idempotent-complete}
Let $\mathcal{A}$ be an additive category and $n \in \Bl{Z}$.
The following categories are idempotent complete:
\begin{enumerate}
\item
\label{enum:one-side-bounded-hot-idempotent-complete-neg}
$K^{w \leq n}(\mathcal{A})$,
$K(\mathcal{A})^{w \leq n}$,
$K^-(\mathcal{A})$,
$K(\mathcal{A})^{-}$;
\item
\label{enum:one-side-bounded-hot-idempotent-complete-pos}
$K^{w \geq n}(\mathcal{A})$,
$K(\mathcal{A})^{w \geq n}$,
$K^+(\mathcal{A})$,
$K(\mathcal{A})^{+}$;
\item
\label{enum:one-side-bounded-hot-idempotent-complete-bounded}
$K(\mathcal{A})^{bw}$.
\end{enumerate}
\end{theorem}
\begin{remark}
\label{rem:hot-bd-cplx-add-cat-not-idempotent-complete}
In general the (equivalent) categories $K^b(\mathcal{A})$ and $K(\mathcal{A})^b$
are not idempotent complete (and hence the
inclusion into the idempotent complete category $K(\mathcal{A})^{bw}$
is not an equivalence):
Let $\op{mod}(k)$ be the category of finite dimensional vector spaces
over a field $k$ and let
$\mathcal{E} \subset \op{mod}(k)$ be the full subcategory of even
dimensional vector spaces. Note that $K(\mathcal{E}) \subset
K(\op{mod}(k))$ is a full triangulated subcategory.
Then $X \mapsto \sum_{i \in \Bl{Z}} (-1)^i \dim H^i(X)$ is well defined
on the objects of $K(\op{mod}(k))^b$. It takes even values on
all objects of $K(\mathcal{E})^b$; hence $\tzmat 1000:k^2
\rightarrow k^2$ cannot split in $K(\mathcal{E})^b$.
\end{remark}
\begin{remark}
\label{rem:thomason-and-balmer-schlichting}
Let us indicate the ideas behind the rather explicit proof of
Theorem~\ref{t:one-side-bounded-hot-idempotent-complete} which might
perhaps be seen as a variation of the Eilenberg swindle.
Let $\mathcal{T}'$ be an essentially small triangulated category.
R.~W.~Thomason shows that taking the Grothendieck group establishes
a bijection between
dense strict full triangulated subcategories of $\mathcal{T}'$
and subgroups of its Grothendieck
group $K_0(\mathcal{T}')$, see
\cite[Section~3]{thomason-class}.
Now let $\mathcal{T}^{\op{ic}}$ be the idempotent completion
of an essentially small triangulated category $\mathcal{T}$,
cf.\ Section~\ref{sec:idemp-completeness}; it carries a natural
triangulated structure \cite[Thm.~1.5]{balmer-schlichting}.
The previous result applied to $\mathcal{T}'=\mathcal{T}^{\op{ic}}$ shows
that the vanishing of $K_0(\mathcal{T}^{\op{ic}})$ implies that
$\mathcal{T}$ is idempotent complete; this was observed by
P.~Balmer and M.~Schlichting \cite[2.2-2.5]{balmer-schlichting}
where they also provide a method that sometimes shows this vanishing
condition.
These results show directly that
$K^-(\mathcal{A})$, $K(\mathcal{A})^{-}$, $K^+(\mathcal{A})$,
$K(\mathcal{A})^{+}$ and $K(\mathcal{A})^{bw}$ are idempotent
complete if $\mathcal{A}$ is an essentially small additive category.
A careful analysis of the proofs of these results essentially gives
our proof of Theorem~\ref{t:one-side-bounded-hot-idempotent-complete} below.
\end{remark}
\begin{proof}
We prove \eqref{enum:one-side-bounded-hot-idempotent-complete-neg} first.
It is obviously sufficient to prove that
$\mathcal{T}^{w \leq n} := K^{w \leq n}(\mathcal{A})$ is
idempotent complete.
Let $\mathcal{T}:= K^-(\mathcal{A})$ and consider the endofunctor
\begin{align*}
S: \mathcal{T} & \rightarrow \mathcal{T},\\
X & \mapsto \bigoplus_{n \in \Bl{N}} [2n]X = X \oplus [2] X \oplus [4] X \oplus \dots
\end{align*}
(it is even triangulated by \cite[Prop.~1.2.1]{neeman-tricat}).
Note that $S$ is well defined: Since $X$ is bounded above, the
countable direct sum has only finitely many nonzero summands in
every degree. There is an obvious
isomorphism of functors $S \xra{\sim} \op{id} \oplus [2] S$.
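This isomorphism is simply the regrouping
\begin{equation*}
SX = \bigoplus_{n \in \Bl{N}} [2n]X
= X \oplus \bigoplus_{n \geq 1} [2n]X
\cong X \oplus [2] \bigoplus_{n \in \Bl{N}} [2n]X
= X \oplus [2] SX.
\end{equation*}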
The functor $S$ extends to a (triangulated) endofunctor
of the idempotent completion $\mathcal{T}^{\op{ic}}$ of $\mathcal{T}$,
denoted by the same symbol, and we still have an
isomorphism $S \xra{\sim} \op{id} \oplus [2] S$.
Now let $M$ be an object of $\mathcal{T}^{w \leq n}$ with an
idempotent endomorphism $e: M \rightarrow M$. In $\mathcal{T}^{\op{ic}}$
we obtain a direct sum
decomposition $M \cong E \oplus F$ such that $e$ corresponds to
$\tzmat 1000: E \oplus F \rightarrow E \oplus F$. We have to show that $E$ is
isomorphic to an object of $\mathcal{T}^{w \leq n}$.
Since $S$ preserves $\mathcal{T}^{w \leq n}$, we obtain that
\begin{equation}
\label{eq:smint}
SM \cong
SE \oplus SF \xra{\sim}
(E \oplus [2] SE) \oplus SF
\in \mathcal{T}^{w \leq n},
\end{equation}
where we use the convention just to write $X \in
\mathcal{T}^{w \leq n}$ if an object $X \in \mathcal{T}^{\op{ic}}$ is
isomorphic to an object of $\mathcal{T}^{w \leq n}$.
The direct sum of the triangles
\begin{align*}
0 \rightarrow & SE \xra{1} SE \rightarrow 0,\\
SE \rightarrow & 0 \rightarrow [1] SE \xra{1} [1] SE,\\
SF \xra{1} & SF \rightarrow 0 \rightarrow [1] SF
\end{align*}
in $\mathcal{T}^{\op{ic}}$ is the triangle
\begin{equation*}
SE \oplus SF \xra{\tzmat 0001}
SE \oplus SF \xra{\tzmat 1000}
SE \oplus [1] SE \xra{\tzmat 0100}
[1] (SE \oplus SF).
\end{equation*}
The first two vertices are isomorphic to
$SM \in \mathcal{T}^{w \leq n}$. The mapping cone of a map between
objects of $\mathcal{T}^{w \leq n}$ is again in $\mathcal{T}^{w \leq
n}$. We obtain that
\begin{equation}
\label{eq:sesigseint}
SE \oplus [1] SE \in \mathcal{T}^{w \leq n}.
\end{equation}
Applying $[1]$ (which preserves $\mathcal{T}^{w \leq n}$) to the
same statement for $F$ yields
\begin{equation}
\label{eq:sigsfsigsigsfint}
[1] SF \oplus [2] SF \in\mathcal{T}^{w \leq n}.
\end{equation}
Taking the direct sum of the objects in \eqref{eq:smint},
\eqref{eq:sesigseint} and \eqref{eq:sigsfsigsigsfint} shows that
\begin{align*}
E \oplus [2] SE \oplus SF
\oplus &
SE \oplus [1] SE
\oplus
[1] SF \oplus [2] SF\\
\cong & E \oplus \big[SM \oplus [1] SM \oplus [2]
SM\big] \in \mathcal{T}^{w \leq n}.
\end{align*}
Define
$R:= SM \oplus [1] SM \oplus [2] SM$ which is obviously in
$\mathcal{T}^{w \leq n}$. Then
the ``direct sum" triangle
\begin{equation*}
R \rightarrow R\oplus E \rightarrow E \xra{0} [1] R
\end{equation*}
shows that $E$ is isomorphic to an object
of $\mathcal{T}^{w \leq n}$.
Now we prove
\eqref{enum:one-side-bounded-hot-idempotent-complete-pos}.
(The proof is essentially the same, but one has to pay attention to the fact
that the mapping cone of a map between objects of
$K^{w\geq n}(\mathcal{A})$ is only in
$K^{w\geq n-1}(\mathcal{A})$.)
Again it is
sufficient to show that
$\mathcal{T}^{w \geq n} := K^{w\geq n}(\mathcal{A})$ is idempotent complete.
Let
$\mathcal{T}:= K^+(\mathcal{A})$ and
consider the (triangulated)
functor
$S: \mathcal{T} \rightarrow \mathcal{T}$,
$X \mapsto \bigoplus_{n \in \Bl{N}} [-2n]X = X \oplus [-2] X \oplus \dots$.
It is well defined, extends to the idempotent completion
$\mathcal{T}^{\op{ic}}$ of $\mathcal{T}$, and we have
an isomorphism $S \xra{\sim} \op{id} \oplus [-2] S$ of functors.
Let $M$ in $\mathcal{T}^{w \geq n}$ with an
idempotent endomorphism $e: M \rightarrow M$. In $\mathcal{T}^{\op{ic}}$
we have a direct sum
decomposition $M \cong E \oplus F$ such that $e$ corresponds to
$\tzmat 1000: E \oplus F \rightarrow E \oplus F$. We have to show that $E$ is
isomorphic to an object of $\mathcal{T}^{w \geq n}$.
Since $S$ preserves $\mathcal{T}^{w \geq n}$, we obtain (with the
analog of the
convention introduced above) that
\begin{equation}
\label{eq:smintplus}
SM \cong
SE \oplus SF \xra{\sim}
(E \oplus [-2] SE) \oplus SF
\in \mathcal{T}^{w \geq n}.
\end{equation}
As above we have a triangle
\begin{equation*}
SE \oplus SF \xra{\tzmat 0001}
SE \oplus SF \xra{\tzmat 1000}
SE \oplus [1] SE \xra{\tzmat 0100}
[1] (SE \oplus SF).
\end{equation*}
The first two vertices are isomorphic to
$SM \in \mathcal{T}^{w \geq n}$; hence we get
$SE \oplus [1] SE \in \mathcal{T}^{w \geq n-1}$ or equivalently
\begin{equation}
\label{eq:sesigseintplus}
[-1] SE \oplus SE \in \mathcal{T}^{w \geq n}.
\end{equation}
Applying $[-1]$ (which preserves $\mathcal{T}^{w \geq n}$) to the
same statement for $F$ yields
\begin{equation}
\label{eq:sigsfsigsigsfintplus}
[-2] SF \oplus [{-1}] SF \in\mathcal{T}^{w \geq n}.
\end{equation}
Taking the direct sum of the objects in \eqref{eq:smintplus},
\eqref{eq:sesigseintplus} and \eqref{eq:sigsfsigsigsfintplus} shows that
\begin{align*}
E \oplus [{-2}] SE \oplus SF
\oplus &
[-1] SE \oplus SE
\oplus
[{-2}] SF \oplus [{-1}] SF\\
\cong & E \oplus \big[SM \oplus [-1] SM \oplus [{-2}]
SM\big] \in \mathcal{T}^{w \geq n}.
\end{align*}
Define
$R:= SM \oplus [-1] SM \oplus [{-2}] SM$ which is obviously in
$\mathcal{T}^{w \geq n}$. Then
the ``direct sum" triangle
\begin{equation*}
E \rightarrow E\oplus R \rightarrow R \xra{0} [1] E
\end{equation*}
shows that $[1] E$ is isomorphic to an object
of $\mathcal{T}^{w \geq n-1}$. Hence $E$ is isomorphic to an
object of $\mathcal{T}^{w \geq n}$.
The statement
\eqref{enum:one-side-bounded-hot-idempotent-complete-bounded}
that $K(\mathcal{A})^{bw}$
is idempotent complete is a consequence
of
\eqref{enum:one-side-bounded-hot-idempotent-complete-neg}
and
\eqref{enum:one-side-bounded-hot-idempotent-complete-pos}.
\end{proof}
\begin{theorem}
\label{t:hot-idempotent-complete}
Let $\mathcal{A}$ be an additive category.
\begin{enumerate}
\item
\label{enum:hot-idempotent-complete-abelian}
If $\mathcal{A}$ is abelian
then $K(\mathcal{A})$ is idempotent complete.
\item
\label{enum:hot-idempotent-complete-count-coprod}
If $\mathcal{A}$ has countable coproducts
then $K(\mathcal{A})$ is idempotent complete.
\item
\label{enum:hot-idempotent-complete-if-idempotent-complete}
If $\mathcal{A}$ is idempotent complete then
$K^b(\mathcal{A})$ and $K(\mathcal{A})^b$ are idempotent complete.
\end{enumerate}
\end{theorem}
\begin{remark}
\label{rem:KA-idempotent-complete-question}
We do not know whether $K(\mathcal{A})$
is idempotent complete for additive $\mathcal{A}$ (cf.\
Question~\ref{q:KA-ic}).
\end{remark}
\begin{remark}
\label{rem:KA-idempotent-complete-wlog-ic}
If $\mathcal{A}^{\op{ic}}$ is the idempotent completion of an additive
category $\mathcal{A}$ then $K(\mathcal{A})$ is idempotent complete
if and only if $K(\mathcal{A}^{\op{ic}})$ is idempotent complete:
This follows from R.~W.~Thomason's results cited in
Remark~\ref{rem:thomason-and-balmer-schlichting} (note that
$K(\mathcal{A}) \subset K(\mathcal{A}^{\op{ic}})$ is dense) and
Proposition~\ref{p:grothendieck-homotopy-category} below.
Hence in Remark~\ref{rem:KA-idempotent-complete-question} one can
assume without loss of generality that $\mathcal{A}$ is idempotent
complete.
\end{remark}
\begin{proof}
We prove \eqref{enum:hot-idempotent-complete-abelian} in
Corollary~\ref{c:hot-idempotent-complete-abelian} below.
Let us show \eqref{enum:hot-idempotent-complete-count-coprod}.
Assume that $\mathcal{A}$ has countable
coproducts. Then $K(\mathcal{A})$ has countable coproducts; hence any
idempotent endomorphism of an object of $K(\mathcal{A})$ splits by
\cite[Prop.~1.6.8]{neeman-tricat}.
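Recall that the proof in loc.\ cit.\ realizes the splitting of an
idempotent $e: M \rightarrow M$ via homotopy colimits: $M$ decomposes as
\begin{equation*}
M \cong
\op{hocolim}\big(M \xra{e} M \xra{e} M \xra{e} \dots\big)
\oplus
\op{hocolim}\big(M \xra{1-e} M \xra{1-e} M \xra{1-e} \dots\big),
\end{equation*}
and $e$ corresponds to the projection onto the first summand.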
Another way to see this is to use the strategy explained in
Remark~\ref{rem:thomason-and-balmer-schlichting}. More concretely,
adapt the proof of
Theorem~\ref{t:one-side-bounded-hot-idempotent-complete}~\eqref{enum:one-side-bounded-hot-idempotent-complete-neg} in the obvious way:
Note that the functor $X \mapsto \bigoplus_{n \in \Bl{N}} [2n]X$ is
well-defined on $K(\mathcal{A})$.
For the proof of
\eqref{enum:hot-idempotent-complete-if-idempotent-complete} assume
now that $\mathcal{A}$ is idempotent complete.
Since the inclusion $K^b(\mathcal{A}) \subset K(\mathcal{A})^b$ is an
equivalence, it is sufficient to show that $K^b(\mathcal{A})$ is
idempotent complete.
Let $C$ be an object of
$K^b(\mathcal{A})$.
Let $X:=\bigoplus_{i \in \Bl{Z}} C^i$ be the finite direct sum over
all nonzero components of $C$.
Let $\op{add} X \subset \mathcal{A}$ be the smallest full subcategory of
$\mathcal{A}$ that contains
$X$ and is closed under finite direct sums and direct summands.
Since $\mathcal{A}$ is idempotent complete,
$\op{Hom}(X,?): \mathcal{A} \rightarrow \op{Mod}(\op{End}(X))$ induces an equivalence
\begin{equation}
\label{eq:project-equiv}
\op{Hom}(X,?): \op{add} X \xra{\sim} \op{proj}(R)
\end{equation}
where $R=\op{End}(X)$ and $\op{proj}(R)$ is the category of all finitely
generated projective right $R$-modules (see
\cite[Prop.~1.3.1]{Krause-krull-schmidt-categories}).
Note that $K^b(\op{add} X)$ is a full triangulated subcategory of
$K^b(\mathcal{A})$ containing $C$. Equivalence~\eqref{eq:project-equiv}
yields an
equivalence
$K^b(\op{add} X) \xra{\sim} K^b(\op{proj}(R))$.
The category $K^b(\op{proj}(R))$ of perfect complexes is well known to
be idempotent complete
(e.\,g.\ \cite[Exercise~I.30]{KS} or
\cite[Prop.~3.4]{neeman-homotopy-limits}).
This implies that any idempotent endomorphism of $C$ splits.
\end{proof}
In the rest of this section we assume that $\mathcal{A}$ is abelian.
We use torsion pairs/t-structures in
order to prove that $K(\mathcal{A})$ is idempotent complete.
Let $\mathcal{K}^{h \geq 0} \subset K(\mathcal{A})$
be the full subcategory of
objects isomorphic to complexes $X$ of the form
\begin{equation*}
\xymatrix{
{\dots} \ar[r] &
{0} \ar[r] &
{X^{-2}} \ar[r]^{d^{-2}} &
{X^{-1}} \ar[r]^{d^{-1}} &
{X^0} \ar[r]^{d^0} &
{X^1} \ar[r] &
{\dots}
}
\end{equation*}
with $X^0$ in degree zero and $d^{-2}$ the kernel of $d^{-1}$.
(Here $\mathcal{K}$ stands for ``kernel" and ``$h \geq 0$" indicates that the
cohomology is concentrated in degrees $\geq 0$.)
Let $\mathcal{C}^{h \leq 0} \subset K(\mathcal{A})$
be the full subcategory of
objects isomorphic to complexes $X$ of the form
\begin{equation*}
\xymatrix{
{\dots} \ar[r] &
{X^{-1}} \ar[r]^{d^{-1}} &
{X^0} \ar[r]^{d^0} &
{X^{1}} \ar[r]^{d^{1}} &
{X^2} \ar[r] &
{0} \ar[r] &
{\dots}
}
\end{equation*}
with $X^0$ in degree zero and $d^{1}$ the cokernel of $d^{0}$.
(Here $\mathcal{C}$ stands for ``cokernel" and ``$h \leq 0$" indicates that the
cohomology is concentrated in degrees $\leq 0$.)
Define
$\mathcal{C}^{h\leq n}:= [-n]\mathcal{C}^{h\leq 0}$
and
$\mathcal{K}^{h \geq n}:= [-n]\mathcal{K}^{h \geq 0}$.
\begin{lemma}
[{cf.~Example after
\cite[Prop.~I.2.15]{beligiannis-reiten-torsion-theories}}]
\label{l:torsion-pair-homotopy-abelian}
Let $\mathcal{A}$ be abelian.
Then
\begin{equation*}
k:=(K(\mathcal{A})^{w \leq 0}, \mathcal{K}^{h \geq 1})
\quad\text{and}\quad
c:=(\mathcal{C}^{h \leq -1}, K(\mathcal{A})^{w \geq 0})
\end{equation*}
are torsion pairs on $K(\mathcal{A})$.
\end{lemma}
\begin{remark}
It is easy to see that the torsion pair $k$ of
Lemma~\ref{l:torsion-pair-homotopy-abelian}
coincides with the torsion pair defined
in the Example after
\cite[Prop.~I.2.15]{beligiannis-reiten-torsion-theories}:
An object $X \in K(\mathcal{A})$ is in
$K(\mathcal{A})^{w \leq n}$
(resp.\ $\mathcal{K}^{h \geq n}$) if and only if the complex
$\op{Hom}(A,X)$ of abelian groups is exact in all degrees $>n$ (resp.\
$<n$) for all $A \in \mathcal{A}$.
The categories
$\mathcal{C}^{h \leq n}$ and $K(\mathcal{A})^{w \geq n}$ can be
characterized similarly.
\end{remark}
\begin{proof}
Let $(\mathcal{X}, \mathcal{Y})$ be the pair $k$ or
the pair $[1]c:=(\mathcal{C}^{h \leq -2}, K(\mathcal{A})^{w \geq -1})$.
Let $f$ represent a morphism $X \rightarrow Y$ with $X \in \mathcal{X}$ and $Y \in
\mathcal{Y}$. Then the diagram
\begin{equation*}
\xymatrix{
{X:} \ar[d]^f &
{\dots} \ar[r] \ar[d] &
{X^{-2}} \ar[r]^{d^{-2}} \ar[d] &
{X^{-1}} \ar[r]^{d^{-1}} \ar[d]^{f^{-1}} &
{X^0} \ar[r] \ar[d]^{f^0} \ar@{..>}[dl]|-{\exists !} &
{0} \ar[r] \ar[d] &
{0} \ar[r] \ar[d] &
{\dots}\\
{Y:} &
{\dots} \ar[r] &
{0} \ar[r] &
{Y^{-1}} \ar[r]^{d^{-1}} &
{Y^0} \ar[r]^{d^0} &
{Y^1} \ar[r]^{d^1} &
{Y^2} \ar[r] &
{\dots}
}
\end{equation*}
shows that $f$ is homotopic to zero; hence
$\op{Hom}_{K(\mathcal{A})}(\mathcal{X}, \mathcal{Y})=0$.
It is obvious that $\mathcal{X}$ is stable under $[1]$ and
$\mathcal{Y}$ is stable under $[-1]$.
We need to show that any object $A=(A^j, d^j)$ of $K(\mathcal{A})$ fits into a
triangle $X \rightarrow A \rightarrow Y \rightarrow [1]X$ with $X \in \mathcal{X}$ and $Y
\in \mathcal{Y}$.
\begin{itemize}
\item
Case $(\mathcal{X}, \mathcal{Y})=k$:
Let $f: M \rightarrow A^0$ be the kernel of $d^0:A^0 \rightarrow A^1$; then
there is a unique morphism $g:A^{-1} \rightarrow M$ such that
$d^{-1}=f g$.
\item
Case $(\mathcal{X}, \mathcal{Y})=[1]c$:
Let
$g: A^{-1} \rightarrow M$ be the cokernel of $d^{-2}:A^{-2} \rightarrow A^{-1}$; then
there is a unique morphism $f:M \rightarrow A^{0}$ such that
$d^{-1}=f g$.
\end{itemize}
The commutative diagram
\begin{equation*}
\xymatrix{
{X:} \ar[d] &
{\dots} \ar[r] &
{A^{-2}} \ar[r]^{d^{-2}} \ar[d]^1 &
{A^{-1}} \ar[r]^{g} \ar[d]^1 &
{M} \ar[r] \ar[d]^{f} &
{0} \ar[r] \ar[d] &
{0} \ar[r] \ar[d] &
{\dots}\\
{A:} \ar[d] &
{\dots} \ar[r] &
{A^{-2}} \ar[r]^{d^{-2}} \ar[d] &
{A^{-1}} \ar[r]^{d^{-1}} \ar[d]^{g} &
{A^0} \ar[r]^{d^0} \ar[d]^1 &
{A^1} \ar[r]^{d^1} \ar[d]^1 &
{A^2} \ar[r] \ar[d]^1 &
{\dots}\\
{Y:} \ar[d] &
{\dots} \ar[r] &
{0} \ar[r] \ar[d] &
{M} \ar[r]^{f} \ar[d]^1 &
{A^0} \ar[r]^{d^0} \ar[d] &
{A^1} \ar[r]^{d^1} \ar[d] &
{A^2} \ar[r] \ar[d] &
{\dots} \\
{[1]X:} &
{\dots} \ar[r] &
{A^{-1}} \ar[r]^{-g} &
{M} \ar[r] &
{0} \ar[r] &
{0} \ar[r] &
{0} \ar[r] &
{\dots}\\
}
\end{equation*}
defines a candidate
triangle $X \rightarrow A \rightarrow Y \rightarrow [1]X$ in $K(\mathcal{A})$.
It is easy to check that it is in fact a triangle.
\end{proof}
\begin{corollary}
\label{c:hot-idempotent-complete-abelian}
Let $\mathcal{A}$ be an abelian category
and $n \in \Bl{Z}$.
Then $K(\mathcal{A})$ is
idempotent complete, and the same is true for
$\mathcal{K}^{h\geq n}$ and $\mathcal{C}^{h \leq n}$.
\end{corollary}
\begin{proof}
Let $(\mathcal{X}, \mathcal{Y})$ be one of the torsion pairs of
Lemma~\ref{l:torsion-pair-homotopy-abelian}.
Let $e: A \rightarrow A$ be an idempotent endomorphism in $K(\mathcal{A})$.
The truncation functors $\tau_{\mathcal{X}}: K(\mathcal{A}) \rightarrow
\mathcal{X}$ and $\tau_{\mathcal{Y}}: K(\mathcal{A}) \rightarrow \mathcal{Y}$
yield a morphism of triangles
\begin{equation*}
\xymatrix{
{\tau_{\mathcal{X}}(A)} \ar[r]^-u \ar[d]^{\tau_{\mathcal{X}}(e)} &
{A} \ar[r]^-v \ar[d]^{e} &
{\tau_{\mathcal{Y}}(A)} \ar[r]^-w \ar[d]^{\tau_{\mathcal{Y}}(e)} &
{[1]\tau_{\mathcal{X}}(A)} \ar[d]^{[1]\tau_{\mathcal{X}}(e)} \\
{\tau_{\mathcal{X}}(A)} \ar[r]^-u &
{A} \ar[r]^-v &
{\tau_{\mathcal{Y}}(A)} \ar[r]^-w &
{[1]\tau_{\mathcal{X}}(A).}
}
\end{equation*}
All morphisms ${\tau_{\mathcal{X}}(e)}$, $e$, ${\tau_{\mathcal{Y}}(e)}$ are
idempotent
and
${\tau_{\mathcal{X}}(e)}$ and ${\tau_{\mathcal{Y}}(e)}$ split by
Theorem~\ref{t:one-side-bounded-hot-idempotent-complete}
(since $\mathcal{K}^{h \geq n} \subset K(\mathcal{A})^{w\geq n-2}$
and
$\mathcal{C}^{h \leq n} \subset K(\mathcal{A})^{w\leq n+2}$).
Hence $e$ splits by
\cite[Prop.~2.3]{Karoubianness}.
This shows that $K(\mathcal{A})$ is idempotent complete.
Since $\mathcal{X}=\leftidx{^\perp}{\mathcal{Y}}$
and $\mathcal{Y}=\mathcal{X}^\perp$ for any torsion pair
this implies that $\mathcal{X}$ and $\mathcal{Y}$ are idempotent
complete.
\end{proof}
\section{Weight structures}
\label{sec:weight-structures}
The following definition of a weight structure is independently due to
M.~Bondarko \cite{bondarko-weight-str-vs-t-str}
and D.~Pauksztello \cite{pauk-co-t} who calls it a co-t-structure.
\begin{definition}
\label{d:ws}
Let $\mathcal{T}$ be a triangulated category.
A \define{weight structure}
(or \define{w-structure})
on $\mathcal{T}$ is a
pair $w=(\mathcal{T}^{w \leq 0},\mathcal{T}^{w \geq 0})$
of two full subcategories such that the following axioms hold, where
we define
$\mathcal{T}^{w \leq n} :=[-n]\mathcal{T}^{w \leq 0}$ and
$\mathcal{T}^{w \geq n} :=[-n]\mathcal{T}^{w \geq 0}$:
\begin{enumerate}[label=(ws{\arabic*})]
\item
\label{enum:ws-i}
$\mathcal{T}^{w \leq 0}$ and $\mathcal{T}^{w \geq 0}$ are additive
categories and
closed under retracts
in $\mathcal{T}$;
\item
\label{enum:ws-ii}
$\mathcal{T}^{w \leq 0} \subset
\mathcal{T}^{w \leq 1}$ and $\mathcal{T}^{w \geq 1} \subset
\mathcal{T}^{w \geq 0}$;
\item
\label{enum:ws-iii}
$\op{Hom}_\mathcal{T}(\mathcal{T}^{w \geq 1}, \mathcal{T}^{w \leq 0})=0$;
\item
\label{enum:ws-iv}
For every $X$ in $\mathcal{T}$ there is a triangle
\begin{equation*}
A \rightarrow X \rightarrow B \rightarrow [1]A
\end{equation*}
in $\mathcal{T}$ with $A$ in $\mathcal{T}^{w \geq 1}$ and $B$ in
$\mathcal{T}^{w \leq 0}$.
\end{enumerate}
A weight structure $w=(\mathcal{T}^{w \leq 0},\mathcal{T}^{w \geq
0})$ is \textbf{bounded above} if $\mathcal{T}=\bigcup_{n \in \Bl{Z}}
\mathcal{T}^{w \leq n}$ and \textbf{bounded below}
if $\mathcal{T}=\bigcup_{n \in \Bl{Z}} \mathcal{T}^{w \geq n}$.
It is \textbf{bounded} if it is bounded above and bounded below.
The \define{heart} of a
weight structure $w=(\mathcal{T}^{w \leq 0},\mathcal{T}^{w \geq 0})$
is
\begin{equation*}
{\heartsuit}(w):= \mathcal{T}^{w=0}:= \mathcal{T}^{w \leq 0}\cap
\mathcal{T}^{w \geq 0}.
\end{equation*}
A \define{weight category} (or \define{w-category}) is a pair
$(\mathcal{T}, w)$ where
$\mathcal{T}$ is a triangulated category and $w$ is a weight
structure on $\mathcal{T}$. Its \define{heart} is the heart of $w$.
\end{definition}
A triangle
$A \rightarrow X \rightarrow B \rightarrow [1]A$ with $A$ in $\mathcal{T}^{w \geq n+1}$ and $B$ in
$\mathcal{T}^{w \leq n}$
(cf.~\ref{enum:ws-iv})
is called a
\define{weight decomposition} of $X$, or more precisely a
\define{$(w\geq n+1, w \leq n)$-weight decomposition}
or a \define{weight decomposition of type $(w\geq n+1, w \leq n)$} of
$X$.
It is convenient to write such a weight decomposition as $w_{\geq
n+1}X \rightarrow X \rightarrow w_{\leq n}X \rightarrow [1]w_{\geq n+1}X$ where $w_{\geq
n+1}X$ and $w_{\leq n}X$ are just names for the objects $A$ and
$B$ from above. If we say
that $w_{\geq n+1}X \rightarrow X \rightarrow w_{\leq n}X \rightarrow [1]w_{\geq n+1}X$ is a
weight decomposition without specifying its type explicitly, this type is usually
obvious from the notation.
The heart ${\heartsuit}(w)$ is a full subcategory of $\mathcal{T}$ and
closed under retracts in $\mathcal{T}$ by
\ref{enum:ws-i}.
We will use the following notation (for $a, b \in \Bl{Z}$):
$\mathcal{T}^{w \in [a,b]} := \mathcal{T}^{w \leq b} \cap
\mathcal{T}^{w \geq a}$, $\mathcal{T}^{w=a}:= \mathcal{T}^{w \in
[a,a]}$.
Note that $\mathcal{T}^{w \in [a,b]}=0$ if $b<a$ by
\ref{enum:ws-iii}: For $X \in \mathcal{T}^{w \in [a,b]}$ we have $\op{id}_X=0$.
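Spelled out: if $b < a$ then $X \in \mathcal{T}^{w \geq a} \subset
\mathcal{T}^{w \geq b+1}$ by \ref{enum:ws-ii}, so
\begin{equation*}
\op{id}_X \in
\op{Hom}_\mathcal{T}\big(\mathcal{T}^{w \geq b+1}, \mathcal{T}^{w \leq b}\big)
=\op{Hom}_\mathcal{T}\big([-b]\mathcal{T}^{w \geq 1}, [-b]\mathcal{T}^{w \leq 0}\big)
=0,
\end{equation*}
the last equality being \ref{enum:ws-iii} transported by the shift
$[-b]$; hence $X \cong 0$.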
\begin{definition}
\label{def:w-exact}
Let $\mathcal{T}$ and $\mathcal{S}$ be weight categories. A triangulated
functor $F: \mathcal{T} \rightarrow \mathcal{S}$ is called
\define{w-exact} (or \define{weight-exact}) if
$F(\mathcal{T}^{w \leq 0}) \subset \mathcal{S}^{w \leq 0}$ and
$F(\mathcal{T}^{w \geq 0}) \subset \mathcal{S}^{w \geq 0}$.
\end{definition}
\subsection{First properties of weight structures}
\label{sec:first-properties-ws}
Let $\mathcal{T}$ be a triangulated category with a weight structure
$w=(\mathcal{T}^{w \leq 0},\mathcal{T}^{w \geq 0})$.
\begin{lemma}
[{cf.\ \cite[Prop.~1.3.3]{bondarko-weight-str-vs-t-str} for
some statements}]
\label{l:weight-str-basic-properties}
\rule{0cm}{1mm}
\begin{enumerate}
\item
\label{enum:triangle-heart-splits}
Let $X \xra{f} Y \xra{g} Z \xra{h} [1]X$ be a triangle in
$\mathcal{T}$.
If $Z \in \mathcal{T}^{w \geq n}$ and $X \in \mathcal{T}^{w \leq n}$
then this triangle splits.
In particular any triangle $X \xra{f} Y \xra{g} Z \xra{h} [1]X$
with all objects $X, Y, Z$ in the heart ${\heartsuit}(w)$ splits.
\end{enumerate}
Let
\begin{equation}
\label{eq:weight-decomp-X}
w_{\geq n+1}X \xra{f} X \xra{g} w_{\leq n}X \xra{h} [1]w_{\geq n+1}X
\end{equation}
be
a
$(w\geq n+1, w \leq n)$-weight decomposition of $X$, for some $n \in \Bl{Z}$.
\begin{enumerate}[resume]
\item
\label{enum:weight-decomp-with-knowlegde-dir-summand-i}
If $X$ is in $\mathcal{T}^{w \leq n}$, then $w_{\leq n}X \cong X \oplus
[1]w_{\geq n+1}X$.
\item
\label{enum:weight-decomp-with-knowlegde-dir-summand-ii}
If $X$ is in $\mathcal{T}^{w \geq n+1}$, then $w_{\geq n+1}X\cong X\oplus
[-1]w_{\leq n}X$.
\item
\label{enum:weight-perp-prop}
For every $n \in \Bl{Z}$ we have
\begin{align}
\label{eq:ws-leq-right-orth-of-geq}
(\mathcal{T}^{w\geq n+1})^\perp & =\mathcal{T}^{w \leq n}, \\
\label{eq:ws-geq-left-orth-of-leq}
\leftidx{^{\perp}}{(\mathcal{T}^{w\leq n})}{} & =\mathcal{T}^{w \geq n+1}.
\end{align}
In particular
$\mathcal{T}^{w \leq n}$ and $\mathcal{T}^{w \geq n+1}$ are closed
under extensions.
\item
\label{enum:weights-bounded}
Assume that $a \leq n < b$ (for $a,b \in \Bl{Z}$) and that $X \in
\mathcal{T}^{w \in [a,b]}$. Then
$w_{\leq n}X \in \mathcal{T}^{w \in [a,n]}$ and
$w_{\geq n+1}X \in \mathcal{T}^{w \in [n+1,b]}$.
More precisely: If $a \leq n$ then $X \in \mathcal{T}^{w \geq a}$
implies $w_{\leq n}X \in
\mathcal{T}^{w \in [a,n]}$ (and obviously $w_{\geq n+1}X \in
\mathcal{T}^{w \geq n+1} \subset \mathcal{T}^{w \geq a}$).
If $n < b$ then
$X \in \mathcal{T}^{w \leq b}$
implies (obviously $w_{\leq n}X \in \mathcal{T}^{w \leq n} \subset
\mathcal{T}^{w \leq b}$ and)
$w_{\geq n+1}X \in \mathcal{T}^{w \in [n+1, b]}$.
\item
\label{enum:bounded-w-decomp}
Let $a,b,n \in \Bl{Z}$.
For $X \in \mathcal{T}^{w \geq a}$
(resp.\ $X \in \mathcal{T}^{w \leq b}$
or $X \in \mathcal{T}^{w \in [a,b]}$)
there is a
$(w\geq n+1, w \leq n)$-weight decomposition
\eqref{eq:weight-decomp-X} of $X$ such that
both $w_{\geq n+1}X$ and $w_{\leq n}X$ are in
$\mathcal{T}^{w \geq a}$
(resp.\ in $\mathcal{T}^{w \leq b}$
or $\mathcal{T}^{w \in [a,b]}$).
\end{enumerate}
\end{lemma}
\begin{proof}
By \ref{enum:ws-iii} we have
$h=0$ in \eqref{enum:triangle-heart-splits},
$f=0$ in
\eqref{enum:weight-decomp-with-knowlegde-dir-summand-i}
and $g=0$ in
\eqref{enum:weight-decomp-with-knowlegde-dir-summand-ii};
use Lemma~\ref{l:zero-triang-cat}.
We prove \eqref{enum:weight-perp-prop}.
Axiom \ref{enum:ws-iii} shows that the inclusions $\supset$ in
\eqref{eq:ws-leq-right-orth-of-geq} and
\eqref{eq:ws-geq-left-orth-of-leq} are true.
Let $X \in \mathcal{T}$ and take a weight decomposition
\eqref{eq:weight-decomp-X} of $X$.
If $X \in (\mathcal{T}^{w\geq n+1})^\perp$ then $f=0$ by
\ref{enum:ws-iii}; hence $X$
is a summand of $w_{\leq n}X \in \mathcal{T}^{w\leq n}$ and hence in
$\mathcal{T}^{w\leq n}$ by \ref{enum:ws-i}.
Similarly $X \in \leftidx{^\perp}{(\mathcal{T}^{w\leq n})}$ implies
$g=0$ so $X$ is a summand of $w_{\geq n+1}X \in \mathcal{T}^{w\geq n+1}$
and hence in $\mathcal{T}^{w\geq n+1}$.
Let us prove \eqref{enum:weights-bounded}:
Since $X \in \mathcal{T}^{w \geq a}$ and $[1]w_{\geq n+1}X \in
\mathcal{T}^{w\geq n} \subset \mathcal{T}^{w \geq a}$ and
$\mathcal{T}^{w \geq a}$ is closed under extensions
by \eqref{enum:weight-perp-prop}
we have $w_{\leq n}X \in \mathcal{T}^{w \in [a,n]}$.
Turning the triangle we see that $w_{\geq n+1}X$ is an extension of
$[-1]w_{\leq n}X \in \mathcal{T}^{w \leq n+1}
\subset \mathcal{T}^{w \leq b}$
and $X \in \mathcal{T}^{w \leq b}$, hence $w_{\geq n+1}X \in \mathcal{T}^{w \in
[n+1,b]}$.
We prove \eqref{enum:bounded-w-decomp}:
Assume $X \in \mathcal{T}^{w\geq a}$. If $a \leq n$ any such weight
decomposition does the job by \eqref{enum:weights-bounded};
if $a > n$ take $X \xra{\op{id}} X \rightarrow 0 \rightarrow [1]X$.
Assume $X \in \mathcal{T}^{w\leq b}$. If $n < b$ use
\eqref{enum:weights-bounded};
if $b \leq n$ take $0 \rightarrow X \xra{\op{id}} X \rightarrow 0$.
Assume $X \in \mathcal{T}^{w\in [a,b]}$.
The case $a > b$ is trivial since then $X=0$.
So assume $a\leq b$.
If $a \leq n < b$ use
\eqref{enum:weights-bounded};
if $a > n$ take $X \xra{\op{id}} X \rightarrow 0 \rightarrow [1]X$;
if $b \leq n$ take $0 \rightarrow X \xra{\op{id}} X \rightarrow 0$.
\end{proof}
\begin{corollary}
\label{c:inclusion-w-str}
Let
$(\mathcal{D}^{w \leq 0}, \mathcal{D}^{w \geq 0})$ and
$(\mathcal{T}^{w \leq 0}, \mathcal{T}^{w \geq 0})$ be two
weight structures on a triangulated category.
If
\begin{equation*}
\mathcal{D}^{w \leq 0} \subset \mathcal{T}^{w \leq 0}
\text{ and }
\mathcal{D}^{w \geq 0} \subset \mathcal{T}^{w \geq 0},
\end{equation*}
then these two weight structures coincide.
\end{corollary}
\begin{proof}
Our assumptions and \eqref{eq:ws-leq-right-orth-of-geq} give
\begin{equation*}
\mathcal{T}^{w \leq 0} = (\mathcal{T}^{w\geq 1})^\perp
\subset
(\mathcal{D}^{w\geq 1})^\perp =\mathcal{D}^{w \leq 0}.
\end{equation*}
Similarly we obtain
$\mathcal{T}^{w \geq 0} \subset \mathcal{D}^{w \geq 0}$.
\end{proof}
The following lemma is the analog of \cite[1.3.19]{BBD}.
\begin{lemma}
\label{l:induced-w-str}
Let $\mathcal{T}'$ be a full triangulated subcategory of a
triangulated category $\mathcal{T}$.
Assume that
$w=(\mathcal{T}^{w \leq 0},\mathcal{T}^{w \geq 0})$ is a weight
structure on $\mathcal{T}$.
Let
$w'=(\mathcal{T}' \cap \mathcal{T}^{w \leq 0}, \mathcal{T}' \cap
\mathcal{T}^{w \geq 0})$.
Then
$w'$ is a weight structure on $\mathcal{T}'$ if and only if for
any object $X \in \mathcal{T}'$ there is a triangle
\begin{equation}
\label{eq:induced-w-str-w-decomp}
\xymatrix{
{w_{\geq 1}X} \ar[r]
& {X} \ar[r]
& {w_{\leq 0}X} \ar[r]
& {[1]w_{\geq 1}X}
}
\end{equation}
in $\mathcal{T}'$ that is a weight
decomposition
of type $(w \geq 1, w \leq 0)$ in $(\mathcal{T},w)$.
If $w'$ is a weight structure on $\mathcal{T}'$ it is called the
\define{induced weight structure}.
\end{lemma}
\begin{proof}
If $w'$ is a weight structure on $\mathcal{T}'$ weight
decompositions in $(\mathcal{T}', w')$ are triangles and yield
weight decompositions in
$(\mathcal{T}, w)$.
Conversely let us show that under the given condition $w'$ is a
weight structure on $\mathcal{T}'$.
This condition obviously says that $w'$ satisfies \ref{enum:ws-iv}.
If $Y$ is a retract in $\mathcal{T}'$ of
$X \in \mathcal{T}'\cap \mathcal{T}^{w \leq 0}$, it is a retract of
$X$ in $\mathcal{T}$ and hence $Y \in \mathcal{T}^{w \leq 0}$.
This proves that $\mathcal{T}'\cap \mathcal{T}^{w \leq 0}$ is
closed under retracts in $\mathcal{T}'$, cf.\ \ref{enum:ws-i}.
The remaining conditions for $w'$ being a weight structure are obvious.
\end{proof}
\subsection{Basic example}
\label{sec:basic-ex}
Let $\mathcal{A}$ be an additive category and $K(\mathcal{A})$ its homotopy category.
We use the notation introduced in Section~\ref{sec:hot-cat-idem-complete}.
\begin{proposition}
[{cf.\ \cite{bondarko-weight-str-vs-t-str}, \cite{pauk-co-t}}]
\label{p:ws-hot-additive}
The pair
\begin{equation*}
(K(\mathcal{A})^{w \leq 0}, K(\mathcal{A})^{w \geq 0})
\end{equation*}
is a weight
structure on $K(\mathcal{A})$.
It induces (see Lemma~\ref{l:induced-w-str}) weight structures on
$K^*(\mathcal{A})$
for $* \in \{+, -, b\}$
and on $K(\mathcal{A})^*$ for $* \in \{+, -, b, bw\}$.
All these weight structures are called the
\define{standard weight structure} on the respective category.
\end{proposition}
\begin{remark}
\label{rem:ws-hot-additive-anti}
The triangulated equivalence \eqref{eq:KA-antiKA-triequi}
between $K(\mathcal{A})$ and $K(\mathcal{A})^{\op{anti}}$
allows us to transfer the weight structure from
Proposition~\ref{p:ws-hot-additive} to
$K(\mathcal{A})^{\op{anti}}$. This defines the
\define{standard weight structure}
\begin{equation*}
(K(\mathcal{A})^{{\op{anti}}, w \leq 0}, K(\mathcal{A})^{{\op{anti}}, w \geq 0})
\end{equation*}
on $K(\mathcal{A})^{\op{anti}}$.
We have
$K(\mathcal{A})^{{\op{anti}}, w \leq 0}=K(\mathcal{A})^{w \leq 0}$
and $K(\mathcal{A})^{{\op{anti}}, w \geq 0}= K(\mathcal{A})^{w \geq 0}$.
Similarly one can transfer the induced weight structures.
\end{remark}
\begin{proof}
Condition \ref{enum:ws-i}: It is obvious that both
$K(\mathcal{A})^{w \leq 0}$ and
$K(\mathcal{A})^{w \geq 0}$ are additive.
Since they are strict subcategories of $K(\mathcal{A})$ and
idempotent complete by
Theorem~\ref{t:one-side-bounded-hot-idempotent-complete},
they are in particular closed under retracts in $K(\mathcal{A})$.
Conditions \ref{enum:ws-ii} and \ref{enum:ws-iii} are obvious.
We verify condition \ref{enum:ws-iv} explicitly.
Let $X=(X^i, d^i:X^i
\rightarrow X^{i+1})$ be a complex.
We give $(w \geq n+1, w \leq n)$-weight decompositions of $X$ for
any $n \in \Bl{Z}$.
The following diagram
defines
complexes $\ul{w}_{\leq n}(X)$, $\ul{w}_{\geq n+1}(X)$ and a mapping
cone sequence
$[-1]\ul{w}_{\leq n}(X) \rightarrow {\ul{w}_{\geq n+1}(X)} \rightarrow X \rightarrow
\ul{w}_{\leq n}(X)$
in $C(\mathcal{A})$:
\begin{equation*}
\hspace{-0.4cm}
\xymatrix
{
{[-1]{\ul{w}_{\leq n}(X)}:} \ar[d] &
{\dots} \ar[r] &
{X^{{n-2}}} \ar[r]^-{-d^{{n-2}}} \ar[d] &
{X^{{n-1}}} \ar[r]^-{-d^{{n-1}}} \ar[d] &
{X^n} \ar[r] \ar[d]^-{d^n} &
{0} \ar[r] \ar[d] &
{\dots} \\
{{\ul{w}_{\geq n+1}(X)}:} \ar[d] &
{\dots} \ar[r] &
{0} \ar[r] \ar[d] &
{0} \ar[r] \ar[d] &
{X^{n+1}} \ar[r]^-{d^{n+1}} \ar[d]^-{1} &
{X^{n+2}} \ar[r] \ar[d]^-1 &
{\dots} \\
{X:} \ar[d] &
{\dots} \ar[r] &
{X^{{n-1}}} \ar[r]^-{d^{{n-1}}} \ar[d]^-1 &
{X^n} \ar[r]^-{d^n} \ar[d]^-1 &
{X^{n+1}} \ar[r]^-{d^{n+1}} \ar[d] &
{X^{n+2}} \ar[r] \ar[d] &
{\dots} \\
{{\ul{w}_{\leq n}(X)}:} &
{\dots} \ar[r] &
{X^{{n-1}}} \ar[r]^-{d^{{n-1}}} &
{X^n} \ar[r] &
{0} \ar[r] &
{0} \ar[r] &
{\dots.}
}
\end{equation*}
Passing to $K(\mathcal{A})$ and rotating the triangle
yields the weight decomposition we need:
\begin{equation}
\label{eq:hot-wdecomp}
\hspace{-0.4cm}
\xymatrix
{
{{\ul{w}_{\geq n+1}(X)}:} \ar[d] &
{\dots} \ar[r] &
{0} \ar[r] \ar[d] &
{0} \ar[r] \ar[d] &
{X^{n+1}} \ar[r]^-{d^{n+1}} \ar[d]^-{1} &
{X^{n+2}} \ar[r] \ar[d]^-1 &
{\dots} \\
{X:} \ar[d] &
{\dots} \ar[r] &
{X^{{n-1}}} \ar[r]^-{d^{{n-1}}} \ar[d]^-1 &
{X^n} \ar[r]^-{d^n} \ar[d]^-1 &
{X^{n+1}} \ar[r]^-{d^{n+1}} \ar[d] &
{X^{n+2}} \ar[r] \ar[d] &
{\dots} \\
{{\ul{w}_{\leq n}(X)}:} \ar[d] &
{\dots} \ar[r] &
{X^{{n-1}}} \ar[r]^-{d^{{n-1}}} \ar[d] &
{X^n} \ar[r] \ar[d]^-{-d^n} &
{0} \ar[r] \ar[d] &
{0} \ar[r] \ar[d] &
{\dots} \\
{[1]{\ul{w}_{\geq n+1}(X)}:} &
{\dots} \ar[r] &
{0} \ar[r] &
{X^{n+1}} \ar[r]^-{-d^{n+1}} &
{X^{n+2}} \ar[r]^-{-d^{n+2}} &
{X^{n+3}} \ar[r] &
{\dots} \\
}
\end{equation}
The statements about the induced weight structures
on $K^*(\mathcal{A})$ and $K(\mathcal{A})^*$
for $* \in \{+,-,b\}$ are obvious
from \eqref{eq:hot-wdecomp}
and Lemma~\ref{l:induced-w-str}.
For $K(\mathcal{A})^{bw}$ use additionally
Lemma~\ref{l:weight-str-basic-properties}, \eqref{enum:bounded-w-decomp}.
\end{proof}
We continue to use the maps $\ul{w}_{\leq n}$ and $\ul{w}_{\geq n+1}$
on complexes
introduced in the above proof.
Define
$\ul{w}_{>n}:=\ul{w}_{\geq n+1}$,
$\ul{w}_{<n}:=\ul{w}_{\leq n-1}$,
$\ul{w}_{[a,b]}:=\ul{w}_{\geq a}\ul{w}_{\leq b}=\ul{w}_{\leq
b}\ul{w}_{\geq a}$, and $\ul{w}_a:=\ul{w}_{[a,a]}$.
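Concretely, for $a \leq b$ these are the naive truncations of $X$:
\begin{equation*}
\ul{w}_{[a,b]}(X) =
(\dots \rightarrow 0 \rightarrow X^{a} \xra{d^{a}} X^{a+1} \rightarrow \dots
\rightarrow X^{b-1} \xra{d^{b-1}} X^{b} \rightarrow 0 \rightarrow \dots)
\end{equation*}
with $X^a$ in degree $a$; in particular $\ul{w}_a(X)=[-a]X^a$ is the
$a$-th term of $X$ placed in degree $a$.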
The triangle \eqref{eq:hot-wdecomp}
will be called the
\define{$\ul{w}$-weight decomposition of $X$}.
\begin{remark}
Note that $\ul{w}_{\geq n}$ and $\ul{w}_{\leq n}$ are functorial
on $C(\mathcal{A})$ but not at all on $K(\mathcal{A})$
(if $\mathcal{A} \not= 0$):
Take an object $M \in \mathcal{A}$ and consider its mapping cone
$\op{Cone}(\op{id}_M)$.
All objects $\op{Cone}(\op{id}_M)$ (for $M \in \mathcal{A}$) are isomorphic
to the zero
object in $K(\mathcal{A})$. If $\ul{w}_{\leq 0}$ were functorial on
$K(\mathcal{A})$, then all the objects
$\ul{w}_{\leq 0}(\op{Cone}(\op{id}_M))=M$ would be isomorphic; hence $\mathcal{A}=0$.
\end{remark}
\begin{remark}
\label{rem:t-structures-adjacent-to-standard-w-structure}
Assume that $\mathcal{A}$ is an abelian category and consider the
following four subcategories of $K(\mathcal{A})$:
\begin{equation*}
(\mathcal{C}^{h \leq 0}, K(\mathcal{A})^{w \geq 1},
K(\mathcal{A})^{w \leq 0}, \mathcal{K}^{h \geq 1}).
\end{equation*}
The two outer pairs are torsion pairs on $K(\mathcal{A})$
by Lemma~\ref{l:torsion-pair-homotopy-abelian}; the pair in the
middle is (up to a swap of the two members and a translation) the
standard w-structure on $K(\mathcal{A})$
from Proposition~\ref{p:ws-hot-additive}.
In any pair of direct neighbors there are no morphisms from left
to right; more precisely the left member is the left
orthogonal of the right member and vice versa.
In the terminology of \cite[Def~4.4.1]{bondarko-weight-str-vs-t-str},
the t-structure
$(K(\mathcal{A})^{w \leq 0}, \mathcal{K}^{h \geq 0})$
and the (standard) w-structure
$(K(\mathcal{A})^{w \leq 0}, K(\mathcal{A})^{w \geq 0})$
on $K(\mathcal{A})$ are left adjacent to each other
(i.\,e.\ their left aisles coincide).
Similarly, the t-structure
$(\mathcal{C}^{h \leq 0}, K(\mathcal{A})^{w \geq 0})$
and the (standard) w-structure
$(K(\mathcal{A})^{w \leq 0}, K(\mathcal{A})^{w \geq 0})$
on $K(\mathcal{A})$ are right adjacent to each other.
\end{remark}
\begin{lemma}
\label{l:bd-complex-dir-summand}
Let $X \in K(\mathcal{A})$ and $a,b \in \Bl{Z}$.
If $X \in K(\mathcal{A})^{w \in [a,b]}$ then
$X$ is a summand of $\ul{w}_{[a,b]}(X)$ in $K(\mathcal{A})$.
\end{lemma}
\begin{proof}
Let $X \in K(\mathcal{A})^{w \in [a,b]}$. If $a>b$ then
$X\cong 0=\ul{w}_{[a,b]}(X)$. Assume $a \leq b$.
Lemma~\ref{l:weight-str-basic-properties}~\eqref{enum:weight-decomp-with-knowlegde-dir-summand-i}
gives $\ul{w}_{\leq b}(X) \cong X \oplus [1]\ul{w}_{> b}(X)$. Observe that
$[1]\ul{w}_{>b}(X) \in K(\mathcal{A})^{w \geq b} \subset
K(\mathcal{A})^{w \geq a}$.
Hence $\ul{w}_{\leq b}(X) \in K(\mathcal{A})^{w \geq a}$.
Now
Lemma~\ref{l:weight-str-basic-properties}~\eqref{enum:weight-decomp-with-knowlegde-dir-summand-ii}
shows that $\ul{w}_{[a,b]}(X)\cong \ul{w}_{\leq b}(X) \oplus [-1]
\ul{w}_{< a}(X)$.
\end{proof}
We view $\mathcal{A}$ as a full subcategory of $K(\mathcal{A})$,
namely given by all complexes concentrated in degree zero.
\begin{corollary}
\label{c:heart-standard-wstr}
The heart ${\heartsuit}$ of the standard weight structure on $K(\mathcal{A})$
is the idempotent completion of $\mathcal{A}$.
The same statement is true for the standard weight structure on
$K(\mathcal{A})^*$ for $* \in \{+,-,bw\}$
(and for that on
$K^*(\mathcal{A})$ for $* \in \{+,-\}$).
The heart of the standard weight structure on $K(\mathcal{A})^b$
(resp.\ $K^b(\mathcal{A})$) is
the closure under retracts of $\mathcal{A}$ in $K(\mathcal{A})^b$
(resp.\ $K^b(\mathcal{A})$).
\end{corollary}
\begin{proof}
Since $K(\mathcal{A})^{bw}$ is
idempotent complete
(Thm.~\ref{t:one-side-bounded-hot-idempotent-complete})
and the heart ${\heartsuit}$ is contained in $K(\mathcal{A})^{bw}$
and closed under retracts in $K(\mathcal{A})$ it follows that
${\heartsuit}$ is idempotent complete.
Furthermore any object $X \in {\heartsuit}$ is a summand of
$X^0=\ul{w}_{0}(X) \in \mathcal{A}$ by Lemma~\ref{l:bd-complex-dir-summand}.
Since $\mathcal{A}$ is a full subcategory of ${\heartsuit}$ the claim
follows from the results at the end of
Section~\ref{sec:idemp-completeness}.
The proof of the second statement is similar.
For the proof of the last statement let ${\heartsuit}$ be the heart of the
standard weight structure on $\mathcal{T}=K(\mathcal{A})^b$
(resp.\ $\mathcal{T}=K^b(\mathcal{A})$).
Since $\mathcal{A} \subset {\heartsuit}$ and the heart ${\heartsuit}$ is
closed under retracts in $\mathcal{T}$,
any retract of an object of $\mathcal{A}$ in $\mathcal{T}$
is in ${\heartsuit}$. Conversely, any object of ${\heartsuit}$ is
by Lemma~\ref{l:bd-complex-dir-summand} a retract of an object of
$\mathcal{A}$.
\end{proof}
\begin{proposition}
\label{p:grothendieck-homotopy-category}
Let $\mathcal{A}$ be a small additive category. Then
the Grothendieck group of its homotopy category $K(\mathcal{A})$ is
trivial:
\begin{equation*}
K_0(K(\mathcal{A}))=0.
\end{equation*}
The Grothendieck groups of $K^-(\mathcal{A})$, $K(\mathcal{A})^{-}$,
$K^+(\mathcal{A})$ and $K(\mathcal{A})^{+}$ vanish as well.
\end{proposition}
This result is presumably well-known to the experts
(cf.~\cite{schlichting-neg-K-theory-dercat} or
\cite{miyachi-grothendieck}).
\begin{proof}
We prove that $K_0(K(\mathcal{A}))=0$.
We write $[{X}]$ for the class of an object $X$ in the Grothendieck group.
Let $X \in K(\mathcal{A})$. Let $A=\ul{w}_{>0}(X)$ and
$B=\ul{w}_{\leq 0}(X)$. The $\ul{w}$-weight decomposition
$A \rightarrow X \rightarrow B \rightarrow [1]A$
gives $[X]=[A]+[B]$ in the
Grothendieck group. Since $B$ is a bounded above complex
\begin{equation*}
T(B):= B \oplus [2] B \oplus [4]B \oplus \dots = \bigoplus_{n \in \Bl{N}}[2n]B
\end{equation*}
is a well-defined object of $K(\mathcal{A})$.
There is an obvious isomorphism $T(B) =B \oplus
[2]T(B)$ which gives
\begin{equation*}
[{T(B)}]=[{B}]+[{[2]T(B)}]=[{B}]+[{T(B)}]
\end{equation*}
implying $[{B}]=0$. Considering
$\bigoplus_{n \in \Bl{N}}[-2n]A$
we similarly find $[A]=0$. Hence $[X]=[A]+[B]=0$.
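The equality $[{[2]T(B)}]=[{T(B)}]$ used above is an instance of the
standard sign rule, which we recall: rotating the triangle
$X \xra{\op{id}} X \rightarrow 0 \rightarrow [1]X$ yields the triangle
$X \rightarrow 0 \rightarrow [1]X \rightarrow [1]X$, whence
\begin{equation*}
[X] + [{[1]X}] = [0] = 0,
\qquad\text{i.\,e.}\qquad
[{[1]X}]=-[X]
\quad\text{and}\quad
[{[2]X}]=[X].
\end{equation*}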
The proof that the other Grothendieck groups mentioned in the
proposition vanish is now obvious.
\end{proof}
\begin{remark}
Assume that we are in the setting of
Proposition~\ref{p:grothendieck-homotopy-category}.
If we can show that the Grothendieck group of the idempotent
completion of $K(\mathcal{A})$ vanishes then $K(\mathcal{A})$ is
idempotent complete by the results cited in
Remark~\ref{rem:thomason-and-balmer-schlichting}.
\end{remark}
\begin{example}
Let $\{0\} \subsetneq \Lambda \subset \Bl{N}$ be a subset
that is
closed under addition, e.\,g.\ $\Lambda= 17 \Bl{N}+9\Bl{N}$.
Let $\op{mod}(k)$ be the category of finite dimensional vector spaces
over a field $k$ and let
$\op{mod}_{\Lambda}(k) \subset \op{mod}(k)$ be the full subcategory of vector
spaces whose dimension is in $\Lambda$.
We claim that $K(\op{mod}_{\Lambda}(k))$ is idempotent complete.
Obviously $K(\op{mod}_{\Lambda}(k)) \subset
K(\op{mod}(k))$ is a full triangulated subcategory.
It is easy to see that any object of $K(\op{mod}(k))$ is isomorphic to a
stalk complex, i.\,e.\ a complex in $\op{mod}(k)$ with all differentials
$d=0$. Hence $K(\op{mod}(k))$ is idempotent complete
(alternatively, this follows from
Theorem~\ref{t:hot-idempotent-complete}~\eqref{enum:hot-idempotent-complete-abelian})
and $K(\op{mod}_{\Lambda}(k))$ is dense in $K(\op{mod}(k))$.
In particular $K(\op{mod}(k))$ is the idempotent completion of
$K(\op{mod}_{\Lambda}(k))$.
The Grothendieck groups of $K(\op{mod}_{\Lambda}(k))$ and $K(\op{mod}(k))$ vanish
by Proposition~\ref{p:grothendieck-homotopy-category}.
Hence results of R.\ W.\ Thomason \cite[Section~3]{thomason-class} (as
explained in Remark~\ref{rem:thomason-and-balmer-schlichting}) show
that the closure of $K(\op{mod}_{\Lambda}(k))$ under isomorphisms in
$K(\op{mod}(k))$ equals $K(\op{mod}(k))$. In particular
$K(\op{mod}_{\Lambda}(k))$ is
idempotent complete.
\end{example}
\section{Weak weight complex functor}
\label{sec:weak-wc-fun}
In this section we construct the weak weight complex
functor essentially following
\cite[Ch.~3]{bondarko-weight-str-vs-t-str} where it is just called
weight complex functor.
We repeat the construction in detail since we need this
for the proof of Theorem~\ref{t:strong-weight-cplx-functor}.
Let $(\mathcal{T}, w=(\mathcal{T}^{w \leq 0}, \mathcal{T}^{w \geq
0}))$ be a weight category.
We fix for every object $X$ in
$\mathcal{T}$ and every $n \in \Bl{Z}$ a weight decomposition
\begin{equation}
\label{eq:choice-weight-decomp}
\xymatrix{
{T^n_X:} &
{w_{\geq n+1}X} \ar[r]^-{g^{n+1}_X} &
{X} \ar[r]^-{k_X^n} &
{w_{\leq n}X} \ar[r]^-{v_X^n} &
{[1]w_{\geq n+1}X;}
}
\end{equation}
as suggested by the notation, we assume
$w_{\geq n+1}X \in \mathcal{T}^{w \geq n+1}$ and $w_{\leq n}X \in
\mathcal{T}^{w \leq n}$.
For any $n$ there is a unique morphism of triangles
\begin{equation}
\label{eq:unique-triang-morph-Tnx}
\xymatrix{
{T^n_X:} \ar[d]|{(h^n_X,\op{id}_X,l^n_X)} &
{w_{\geq n+1}X} \ar[r]^-{g^{n+1}_X} \ar[d]^-{h^n_X}
\ar@{}[rd]|{\triangle} &
{X} \ar[r]^-{k^n_X} \ar[d]^-{\op{id}_X}
\ar@{}[rd]|{\nabla} &
{w_{\leq n}X} \ar[r]^-{v^n_X} \ar[d]^-{l^n_X} &
{[1]w_{\geq n+1}X} \ar[d]^-{[1]h^n_X} \\
{T^{n-1}_X:} &
{w_{\geq n}X} \ar[r]^-{g^n_X} &
{X} \ar[r]^-{k^{n-1}_X} &
{w_{\leq n-1}X} \ar[r]^-{v^{n-1}_X} &
{[1]w_{\geq n}X}
}
\end{equation}
extending $\op{id}_X$
(use Prop.~\ref{p:BBD-1-1-9-copied-for-w-str} and
\ref{enum:ws-iii}).
More precisely, $h^n_X$ (resp.\ $l^n_X$) is the unique morphism making the square
marked $\triangle$ (resp.\ $\nabla$) commutative.
We use the square marked with $\triangle$ as the germ cell for the
octahedral axiom and obtain the following diagram:
\begin{equation}
\label{eq:wc-weak-octaeder}
O_X^n:=
\xymatrix@dr{
&& {[1]{w_{\geq n+1}X}} \ar[r]^-{[1]h^n_X} & {[1]{w_{\geq n}X}}
\ar[r]^-{[1]e^n_X} & {[1]{w_n X}}
\\
{T^{\leq n}_X:} &{w_n X} \ar@(ur,ul)[ru]^-{c^n_X}
\ar@{..>}[r]^-{a^n_X} &
{w_{\leq n} X} \ar@{..>}[r]^-{l^n_X} \ar[u]^-{v^n_X} &
{w_{\leq n-1} X}
\ar@(dr,dl)[ru]^-{b^n_X} \ar[u]_{v^{n-1}_X}\\
{T^{n-1}_X:} &{w_{\geq n}X} \ar[u]^-{e^n_X} \ar[r]^-{g^n_X}
& {X} \ar@(dr,dl)[ru]^-{k^{n-1}_X} \ar[u]_{k^n_X} \\
{T^n_X:} &{w_{\geq n+1}X} \ar[u]^-{h^n_X} \ar@(dr,dl)[ru]^-{g^{n+1}_X}
\ar@{}[ru]|{\triangle} \\
& {T^{\geq n}_X:}
}
\end{equation}
The octahedral axiom says that after fitting $h^n_X$ into
the triangle $T^{\geq n}_X$ the dotted arrows exist such
that $T^{\leq n}_X$ is a triangle and everything commutes. The lower
dotted arrow is in fact $l^n_X$ by the uniqueness statement given
above.
We fix such octahedral diagrams $O^n_X$ for all objects $X$ and all $n
\in \Bl{Z}$.
The triangles
$T^{\leq n}_X$ and $T^{\geq n}_X$
and the fact that
$\mathcal{T}^{w \leq n}$
and $\mathcal{T}^{w \geq n}$ are closed under extensions
show that $w_n X
\in \mathcal{T}^{w=n}$ and $[n]w_n X \in {\heartsuit}:={\heartsuit}(w)$.
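In more detail: rotating the triangles $T^{\geq n}_X$ and
$T^{\leq n}_X$ from \eqref{eq:wc-weak-octaeder} yields triangles
\begin{equation*}
w_{\geq n}X \rightarrow w_n X \rightarrow [1]w_{\geq n+1}X \rightarrow [1]w_{\geq n}X
\quad\text{and}\quad
[-1]w_{\leq n-1}X \rightarrow w_n X \rightarrow w_{\leq n}X \rightarrow w_{\leq n-1}X
\end{equation*}
whose outer terms lie in $\mathcal{T}^{w \geq n}$ resp.\
$\mathcal{T}^{w \leq n}$; closure under extensions
(Lemma~\ref{l:weight-str-basic-properties}~\eqref{enum:weight-perp-prop})
then gives $w_n X \in \mathcal{T}^{w \geq n} \cap \mathcal{T}^{w \leq n}
= \mathcal{T}^{w=n}$.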
We define the (candidate) weight complex ${WC_{\op{c}}}(X) \in C({\heartsuit})$ of
$X$
as follows (the index $c$ stands for ``candidate"): Its $n$-th term is
\begin{equation*}
{WC_{\op{c}}}(X)^n:=[n]w_n X
\end{equation*}
and the
differential $d^n_{{WC_{\op{c}}}(X)}:[n]w_n X \rightarrow [n+1]w_{n+1}X$ is defined by
\begin{multline}
\label{eq:differential-weak-WC-complex}
d^n_{{WC_{\op{c}}}(X)}
:= [n](b^{n+1}_X \circ a^n_X)
= [n](([1]e^{n+1}_X) \circ v^n_X \circ a^n_X)\\
= [n](([1]e^{n+1}_X) \circ c^n_X).
\end{multline}
Note that $d^n_{{WC_{\op{c}}}(X)} \circ d^{n-1}_{{WC_{\op{c}}}(X)}=0$
since
the composition of two consecutive
maps in a triangle is zero (apply this to \eqref{eq:wc-weak-octaeder}).
Hence ${WC_{\op{c}}}(X)$ is in fact a complex in ${\heartsuit}$.
Now let $f: X \rightarrow Y$ be a morphism in $\mathcal{T}$.
We can extend $f$ to a morphism of triangles
(use Prop.~\ref{p:BBD-1-1-9-copied-for-w-str} and
\ref{enum:ws-iii})
\begin{equation}
\label{eq:f-w-trunc}
\xymatrix{
{T^n_X:} \ar[d]_-{(f_{w \geq n+1}, f, f_{w \leq n})} &
{w_{\geq n+1}X} \ar[r]^-{g^{n+1}_X} \ar[d]^-{f_{w \geq n+1}} &
{X} \ar[r]^-{k_X^n} \ar[d]^-{f} &
{w_{\leq n}X} \ar[r]^-{v_X^n} \ar[d]^-{f_{w \leq n}}&
{[1]w_{\geq n+1}X,} \ar[d]^-{[1]f_{w \geq n+1}} \\
{T^n_Y:} &
{w_{\geq n+1}Y} \ar[r]^-{g^{n+1}_Y} &
{Y} \ar[r]^-{k_Y^n} &
{w_{\leq n}Y} \ar[r]^-{v_Y^n} &
{[1]w_{\geq n+1}Y.}
}
\end{equation}
This extension is not unique in general; this will be discussed later
on. Nevertheless we now fix such an extension $(f_{w \geq n+1}, f,
f_{w \leq n})$ for every $n \in \Bl{Z}$.
Consider the following diagram in the category of triangles
(objects: triangles; morphisms: morphisms of triangles):
\begin{equation}
\label{eq:any-trunc-f-comm-hil}
\xymatrix@C=2.5cm@R=1.5cm{
{T^n_X}
\ar[d]_-{(h^n_X,\op{id}_X,l^n_X)}
\ar[r]^-{(f_{w \geq n+1}, f, f_{w \leq n})} &
{T^n_Y} \ar[d]^-{(h^n_Y,\op{id}_Y,l^n_Y)} \\
{T^{n-1}_X}
\ar[r]^-{(f_{w \geq n}, f, f_{w \leq n-1})} &
{T^{n-1}_Y} \\
}
\end{equation}
This square is commutative since a morphism of triangles $T^n_X \rightarrow
T^{n-1}_Y$ extending $f$ is unique
(use Prop.~\ref{p:BBD-1-1-9-copied-for-w-str} and \ref{enum:ws-iii}).
In particular we have $f_{w \leq n-1} l^n_X =l^n_Y f_{w \leq n}$, so
we can extend the partial morphism $(f_{w \leq n}, f_{w \leq n-1})$
by a morphism $f^n:w_nX \rightarrow w_nY$
to a morphism of triangles
\begin{equation}
\label{eq:f-w-trunc-fn}
\xymatrix{
{T^{\leq n}_X:} \ar[d]_-{(f^n, f_{w \leq n}, f_{w \leq n-1})} &
{w_n X} \ar[r]^-{a^n_X} \ar@{..>}[d]^-{f^n} &
{w_{\leq n} X} \ar[r]^-{l^n_X} \ar[d]^-{f_{w\leq n}} &
{w_{\leq n-1} X} \ar[r]^-{b^n_X} \ar[d]^-{f_{w\leq
n-1}} &
{[1]w_n X} \ar@{..>}[d]^-{[1]f^n}
\\
{T^{\leq n}_{Y}:} &
{w_n Y} \ar[r]^-{a^n_Y} &
{w_{\leq n} Y} \ar[r]^-{l^n_Y} &
{w_{\leq n-1} Y} \ar[r]^-{b^n_Y} &
{[1]w_n Y}
}
\end{equation}
as indicated.
Again this choice need not be unique, but we fix such an $f^n$ for
each $n \in \Bl{Z}$.
The commutativity of the squares on the left and right in
\eqref{eq:f-w-trunc-fn} shows that
the sequence
${WC_{\op{c}}}(f):=([n]f^n)_{n \in \Bl{Z}}$
defines a morphism of
complexes
\begin{equation}
\label{eq:nearly-weakWC}
\xymatrix{
{{WC_{\op{c}}}(X):} \ar[d]_{{WC_{\op{c}}}(f)}
& {\dots} \ar[r]
& {[n]w_n X} \ar[r]^-{d^n_{{WC_{\op{c}}}(X)}} \ar[d]^{[n]f^n}
& {[n+1]w_{n+1}X} \ar[r] \ar[d]^{[n+1]f^{n+1}}
& {\dots}\\
{{WC_{\op{c}}}(Y):}
& {\dots} \ar[r]
& {[n]w_n Y} \ar[r]^-{d^n_{{WC_{\op{c}}}(Y)}}
& {[n+1]w_{n+1}Y} \ar[r]
& {\dots.}
}
\end{equation}
Since some morphisms existed but were not unique we cannot expect that
${WC_{\op{c}}}$ defines a functor $\mathcal{T} \rightarrow C({\heartsuit})$,
cf.\ Example~\ref{ex:weak-WC-weak-homotopy} below.
Let ${WC}$ be the composition of ${WC_{\op{c}}}$ with the canonical
functor $C({\heartsuit}) \rightarrow K_{\op{weak}}({\heartsuit})$
(cf.\ \eqref{eq:can-CKKweak}), i.\,e.\ the
assignment mapping an object $X$ of $\mathcal{T}$ to ${WC}(X):=
{WC_{\op{c}}}(X)$ and a morphism $f$ of $\mathcal{T}$ to the class
of ${WC_{\op{c}}}(f)$ in $K_{\op{weak}}({\heartsuit})$. The complex
${WC}(X)$ is called a \define{weight complex of $X$}.
Recall that the assignment $X \mapsto {WC}(X)$ depends on the
choices made in \eqref{eq:choice-weight-decomp} and
\eqref{eq:wc-weak-octaeder}. For morphisms we have:
\begin{proposition}
\label{p:dependence-on-choices-weakWCfun}
Mapping a morphism $f$ in $\mathcal{T}$ to ${WC}(f)$ does
not depend on the choices made in \eqref{eq:f-w-trunc} and
\eqref{eq:f-w-trunc-fn}.
\end{proposition}
\begin{proof}
By considering appropriate differences it is easy to see that it is
sufficient to consider the case that $f=0$.
Consider \eqref{eq:f-w-trunc} now for $f=0$
(but we write $f_{w \leq n}$ instead of
$0_{w \leq n}$).
\begin{equation}
\label{eq:f-w-trunc-null-morph}
\xymatrix{
{T^n_X:} \ar[d]_-{(f_{w \geq n+1}, 0, f_{w \leq n})} &
{w_{\geq n+1}X} \ar[r]^-{g^{n+1}_X} \ar[d]^-{f_{w \geq n+1}} &
{X} \ar[r]^-{k_X^n} \ar[d]^-{f=0} &
{w_{\leq n}X} \ar[r]^-{v_X^n} \ar[d]^-{f_{w \leq n}}&
{[1]w_{\geq n+1}X,} \ar[d]^-{[1]f_{w \geq n+1}} \\
{T^n_Y:} &
{w_{\geq n+1}Y} \ar[r]^-{g^{n+1}_Y} &
{Y} \ar[r]^-{k_Y^n} &
{w_{\leq n}Y} \ar[r]^-{v_Y^n} &
{[1]w_{\geq n+1}Y.}\\
}
\end{equation}
Since $f_{w \leq n} \circ k^n_X=0$ there exists
$s^n: [1]w_{\geq n+1}X \rightarrow
w_{\leq n}Y$ such that $f_{w \leq n}=s^n v^n_X$.
Then in the situation
\begin{equation*}
\xymatrix@C=1.5cm{
{[1]w_{\geq n+2}X} \ar[r]^-{[1] h^{n+1}_X}
& {[1]w_{\geq {n+1}}X} \ar[r]^-{[1] e^{n+1}_X} \ar[d]_(.3){s^n}
& {[1]w_{n+1}X}
\ar@{..>}[lld]^(.3){\tau^{n+1}}
\\
{w_n Y} \ar[r]_-{a^n_Y}
& {w_{\leq n}Y} \ar[r]_-{l^n_Y}
& {w_{\leq n-1}Y}
}
\end{equation*}
(where both rows are part of triangles in
\eqref{eq:wc-weak-octaeder}: the upper row comes, up to signs,
from a rotation of $T^{\geq n+1}_{X}$, and the lower row is from
${T^{\leq n}_{Y}}$)
the indicated factorization
\begin{equation}
\label{eq:sn-factorization}
s^n=a^n_Y \tau^{n+1} ([1]e^{n+1}_X)
\end{equation}
exists by \ref{enum:ws-iii}.
Now \eqref{eq:f-w-trunc-fn} takes the form (the dotted diagonal arrow
will be explained)
\begin{equation*}
\hspace{-1.4cm}
\xymatrix@=1.8cm{
{T^{\leq n}_X:} \ar[d]_-{(f^n, f_{w \leq n}, f_{w \leq n-1})} &
{w_n X} \ar[r]^-{a^n_X} \ar[d]^-{f^n} \ar@{}[rd]|(0.7){\nabla}&
{w_{\leq n} X} \ar[r]^-{l^n_X}
\ar[d]^-{
s^n v^n_X}
\ar@{..>}[dl]|-{\tau^{n+1} b^{n+1}_X} &
{w_{\leq n-1} X} \ar[r]^-{b^n_X}
\ar[d]^-{
s^{n-1} v^{n-1}_X} &
{[1]w_n X} \ar[d]^-{[1]f^n}
\\
{T^{\leq n}_{Y}:} &
{w_n Y} \ar[r]^-{a^n_Y} &
{w_{\leq n} Y} \ar[r]^-{l^n_Y} &
{w_{\leq n-1} Y} \ar[r]^-{b^n_Y} &
{[1]w_n Y}
}
\end{equation*}
Equation \eqref{eq:sn-factorization} and $([1]e^{n+1}_X) v^n_X =
b^{n+1}_X$ (which follows from the octahedron $O^{n+1}_X$, cf.\
\eqref{eq:wc-weak-octaeder})
yield
\begin{equation*}
s^nv^n_X=a^n_Y \tau^{n+1} ([1]e^{n+1}_X)v^n_X =a^n_Y \tau^{n+1} b^{n+1}_X.
\end{equation*}
This shows that the honest triangle marked $\nabla$
commutes. Consider $f^n-\tau^{n+1} b^{n+1}_X a^n_X$: Since
\begin{equation*}
a^n_Y(f^n-\tau^{n+1} b^{n+1}_X a^n_X)
=a^n_Y f^n - s^nv^n_X a^n_X=0
\end{equation*}
there is a morphism $\nu^n: w_n X \rightarrow [-1]w_{\leq n-1}Y$ such that
\begin{equation}
\label{eq:factors-over-nu}
-([-1]b^n_Y) \nu^n=f^n -\tau^{n+1} b^{n+1}_X a^n_X.
\end{equation}
Now consider the following diagram
\begin{equation*}
\hspace{-0.4cm}
\xymatrix{
& {w_n X} \ar[d]^{\nu^n} \ar@{..>}[dl]_{\sigma^n}\\
{[-1]w_{n-1}Y} \ar[r]_-{-[-1]a^{n-1}_Y}
& {[-1]w_{\leq {n-1}}Y} \ar[r]_-{-[-1]l^{n-1}_Y}
& {[-1]w_{\leq {n-2}}Y} \ar[r]_-{-[-1]b^{n-1}_Y}
& {w_{n-1}Y}
}
\end{equation*}
where the lower row is the triangle
obtained from $T^{\leq n-1}_{Y}$ by three rotations.
The composition $([-1]l^{n-1}_Y) \nu^n$ vanishes by
\ref{enum:ws-iii}; hence there is a morphism $\sigma^n: w_n X \rightarrow
[-1]w_{n-1}Y$ as
indicated such that
$-([-1]a^{n-1}_Y) \sigma^n = \nu^n$. If we plug this into
\eqref{eq:factors-over-nu} we get
\begin{equation*}
f^n- \tau^{n+1} b^{n+1}_X a^n_X=-([-1]b^n_Y) \nu^n =([-1]b^n_Y)
([-1]a^{n-1}_Y) \sigma^n.
\end{equation*}
Applying $[n]$ to this equation yields
(using \eqref{eq:differential-weak-WC-complex})
\begin{equation}
\label{eq:well-defined-in-weak-homotopy-cat}
{WC_{\op{c}}}(f)^n=[n]f^n= ([n]\tau^{n+1})
d^n_{{WC_{\op{c}}}(X)} + d^{n-1}_{{WC_{\op{c}}}(Y)}
([n]\sigma^n).
\end{equation}
This shows that ${WC_{\op{c}}}(f)$ is weakly
homotopic to zero proving the claim (since we assumed that $f=0$).
\end{proof}
\begin{theorem}
[{cf.\ \cite[Ch.~3]{bondarko-weight-str-vs-t-str}}]
\label{t:weakWCfun}
The assignment $X \mapsto {WC}(X)$, $f \mapsto {WC}(f)$,
depends only on the choices made in
\eqref{eq:choice-weight-decomp} and
\eqref{eq:wc-weak-octaeder}
and defines an additive functor
\begin{equation*}
{WC}: \mathcal{T} \rightarrow K_{\op{weak}}({\heartsuit}).
\end{equation*}
This functor is in a canonical way a functor of additive categories
with translation:
There is a canonical isomorphism of functors $\phi: \Sigma \circ {WC}
\xra{\sim} {WC} \circ [1]$ such that $({WC}, \phi)$ is a functor of
additive categories with translation (the translation on
$K_{\op{weak}}({\heartsuit})$ is denoted $\Sigma$ here for clarity).
\end{theorem}
\begin{proof}
It is obvious that
${WC}(\op{id}_X)=\op{id}_{{WC}(X)}$
and
${WC}(f \circ g)={WC}(f) \circ {WC}(g)$.
Hence ${WC}$ is a well-defined functor which is obviously
additive.
We continue the proof after the following Remark~\ref{rem:choices-weakWC}.
\end{proof}
\begin{remark}
\label{rem:choices-weakWC}
Let ${WC}_1:={WC}$ be the additive functor from
Theorem~\ref{t:weakWCfun} (we do not know yet how it is compatible
with the respective translations) and let ${WC}_2$ be another
such
functor constructed from possibly different choices
in
\eqref{eq:choice-weight-decomp} and
\eqref{eq:wc-weak-octaeder}.
For each $X \in \mathcal{T}$ the identity $\op{id}_X$ gives rise to a
well-defined morphism $\psi_{21,X}: {WC}_1(X) \rightarrow {WC}_2(X)$ in
$K_{\op{weak}}({\heartsuit})$ which is constructed in the same manner as ${WC}(f)$ was
constructed from $f:X \rightarrow Y$ above. The collection of these $\psi_{21,X}$
in fact defines an isomorphism $\psi_{21}: {WC}_1 \xra{\sim} {WC}_2$.
If there is a third functor ${WC}_3$ of the same type all these
isomorphisms are compatible in the sense that
$\psi_{32}\psi_{21}=\psi_{31}$ and $\psi_{ii}=\op{id}_{{WC}_i}$ for
all $i \in \{1,2,3\}$.
\end{remark}
\begin{proof}[Proof of Thm.~\ref{t:weakWCfun} continued:]
Our aim is to define $\phi$.
Let $U^{n}_{[1]X}$ be the triangle obtained by three rotations from
the triangle $T^{n+1}_X$ (see \eqref{eq:choice-weight-decomp}):
\begin{equation}
\label{eq:choice-weight-decomp-translation}
\xymatrix{
{U^{n}_{[1]X}:} &
{[1]w_{\geq n+2}X} \ar[r]^-{-[1]g^{n+2}_X} &
{[1]X} \ar[r]^-{-[1]k_X^{n+1}} &
{[1]w_{\leq n+1}X} \ar[r]^-{-[1]v_X^{n+1}} &
{[2]w_{\geq n+2}X;}
}
\end{equation}
Note that it is a
$(w \geq n+1, w \leq n)$-weight decomposition of $[1]X$.
Using \eqref{eq:wc-weak-octaeder} it is easy to check that
\begin{equation}
\label{eq:wc-weak-octaeder-translation}
\hspace{-1.0cm}
\xymatrix@=1.2cm@dr{
&& {[2]{w_{\geq n+2}X}} \ar[r]^-{[2]h^{n+1}_X} & {[2]{w_{\geq {n+1}}X}}
\ar[r]^-{[2]e^{n+1}_X} & {[2]{w_{n+1} X}}
\\
{U^{\leq n}_{[1]X}:} &{[1]w_{n+1} X} \ar@(ur,ul)[ru]^-{-[1]c^{n+1}_X}
\ar[r]_-{[1]a^{n+1}_X} &
{[1]w_{\leq {n+1}} X} \ar[r]^-{[1]l^{n+1}_X} \ar[u]^-{-[1]v^{n+1}_X} &
{[1]w_{\leq n} X}
\ar@(dr,dl)[ru]^-{-[1]b^{n+1}_X} \ar[u]_{-[1]v^{n}_X}\\
{U^{n-1}_{[1]X}:} & [1]{w_{\geq {n+1}}X} \ar[u]^-{[1]e^{n+1}_X}
\ar[r]^-{-[1]g^{n+1}_X}
& {[1]X} \ar@(dr,dl)[ru]^-{-[1]k^{n}_X} \ar[u]_{-[1]k^{n+1}_X} \\
{U^{n}_{[1]X}:} &{[1]w_{\geq n+2}X}
\ar[u]^-{[1]h^{n+1}_X} \ar@(dr,dl)[ru]^-{-[1]g^{n+2}_X}
\\
& {U^{\geq n}_{[1]X}:}
}
\end{equation}
is an octahedron.
In the same manner in which the choices
\eqref{eq:choice-weight-decomp} and
\eqref{eq:wc-weak-octaeder} gave rise to the functor ${WC}:
\mathcal{T} \rightarrow K_{\op{weak}}({\heartsuit})$, the choices
\eqref{eq:choice-weight-decomp-translation} and
\eqref{eq:wc-weak-octaeder-translation}
give rise to an additive functor
${WC}': \mathcal{T} \rightarrow K_{\op{weak}}({\heartsuit})$.
As seen in Remark~\ref{rem:choices-weakWC} there is a canonical
isomorphism $\psi:{WC}' \xra{\sim} {WC}$.
We have
\begin{equation*}
{WC}'([1]X)^n=[n][1]w_{n+1}X={WC}(X)^{n+1}=\Sigma({WC}(X))^{n}
\end{equation*}
and
\begin{multline*}
d^{n}_{{WC}'([1]X)}=[n]((-[1]b^{n+2}_X) \circ [1]a^{n+1}_X)
=-[n+1](b^{n+2}_X \circ a^{n+1}_X) \\
= -d^{n+1}_{{WC}(X)}=
d^n_{\Sigma({WC}(X))}.
\end{multline*}
This implies that $\Sigma({WC}(X))= {WC}'([1]X)$
and it is easy to see that even $\Sigma \circ {WC} ={WC}' \circ [1]$.
Now define $\phi$ as the
composition
\begin{equation*}
\Sigma \circ {WC} ={WC}' \circ [1] \xsira{\psi \circ [1]} {WC}
\circ [1].
\end{equation*}
\end{proof}
\begin{definition}
The functor ${WC}$ (together with $\phi$) of additive categories
with translation from
Theorem~\ref{t:weakWCfun}
is called a \define{weak weight complex functor}.
\end{definition}
A weak weight complex functor depends on the choices
made in \eqref{eq:choice-weight-decomp} and
\eqref{eq:wc-weak-octaeder}.
However it follows from
the proof of Theorem~\ref{t:weakWCfun}
(and Remark~\ref{rem:choices-weakWC}) that any two weak weight complex
functors are canonically isomorphic. Hence we allow ourselves to speak
about \emph{the} weak weight complex functor.
\begin{example}
[{cf.\ \cite[Rem.~1.5.2.3]{bondarko-weight-str-vs-t-str}}]
\label{ex:weak-WC-weak-homotopy}
Let $\op{Mod}(R)$ be the category of
$R$-modules for $R=\Bl{Z}/4\Bl{Z}$ and consider the standard weight structure on
$K(\op{Mod}(R))$
(see Prop.~\ref{p:ws-hot-additive}). Let $X$ be the complex
$\dots \rightarrow 0 \rightarrow R
\xra{\cdot 2} R \rightarrow 0 \rightarrow
\dots$ with $R$ in degrees $0$ and $1$.
If we use the $\ul{w}$-weight decompositions from the proof of
Proposition~\ref{p:ws-hot-additive}, the only interesting weight
decomposition is $T^0_X$ of type $(w \geq 1, w \leq 0)$; it has the
following form (where we draw the complexes vertically and give only
their components in degrees 0 and 1, and similarly for the morphisms):
\begin{equation*}
\xymatrix{
{\ul{w}_{\geq 1}X} \ar[r]^{\svek 10} &
{X} \ar[r]^{\svek 01} &
{\ul{w}_{\leq 0}X} \ar[r]^{\svek 0{\cdot(-2)}} &
{[1]\ul{w}_{\geq 1}X}\\
{R} \ar[r]^1 &
{R} \ar[r]^0 &
{0} \ar[r]^0 &
{0}\\
{0} \ar[r]^0 \ar[u] &
{R} \ar[r]^1 \ar[u]^{\cdot 2}&
{R} \ar[r]^{\cdot(-2)=\cdot 2} \ar[u] &
{R.} \ar[u]
}
\end{equation*}
We can choose $w_1X= \ul{w}_{\geq 1}X$ and $w_0X=\ul{w}_{\leq 0}X$, and then
one checks that
${WC}(X)$ is given by the connecting morphism of this triangle,
more precisely
\begin{equation*}
{WC}(X) = (\dots \rightarrow 0 \rightarrow R \xra{\cdot 2} R \rightarrow 0 \rightarrow \dots)
\end{equation*}
concentrated in degrees 0 and 1.
Consider the morphism $0=0_X=\svek 00: X \rightarrow X$ and extend it to a
morphism of triangles
(cf.\ \eqref{eq:f-w-trunc} or \eqref{eq:f-w-trunc-null-morph})
\begin{equation*}
\xymatrix{
{w_{\geq 1}X} \ar[r] \ar@{..>}[d]^{\svek y0} &
{X} \ar[r] \ar[d]^{\svek 00} &
{w_{\leq 0}X} \ar[r] \ar@{..>}[d]^{\svek 0x} &
{[1]w_{\geq 1}X} \ar@{..>}[d]^{[1]\svek y0 =\svek 0{y}} \\
{w_{\geq 1}X} \ar[r] &
{X} \ar[r] &
{w_{\leq 0}X} \ar[r] &
{[1]w_{\geq 1}X.}
}
\end{equation*}
It is an easy exercise to check that the dotted arrows complete $0_X$
to a morphism of triangles
for any $x,y \in \{0,2\}$. Now one checks that all four morphisms $(0,0),
(0,2), (2,0), (2,2): {WC}(X) \rightarrow {WC}(X)$ are weakly
homotopic, whereas for example $(0,0)$ and $(2,0)$ are not
homotopic.
In particular, this example confirms that mapping an object $X$ to
${WC_{\op{c}}}(X)$ and a morphism
$f$ to ${WC_{\op{c}}}(f)$ (or its class in $K({\heartsuit})$) is not a
well-defined functor: We have to pass to the weak homotopy category.
(But one can easily find a preferred choice for the morphisms $f^n$
in this example
which defines a functor to $K({\heartsuit})$, see Section
\ref{sec:strong-WC-standard-wstr} below.)
\end{example}
\begin{lemma}
[{cf.\ \cite[Thm.~3.3.1.IV]{bondarko-weight-str-vs-t-str}}]
\label{l:WCweak-w-exact}
Let $a,b \in \Bl{Z}$. If
$X \in \mathcal{T}^{w \geq a}$
(resp.\ $X \in \mathcal{T}^{w \leq b}$
or $X \in \mathcal{T}^{w \in [a,b]}$)
then
${WC}(X) \in K({\heartsuit})^{w \geq a}$
(resp.\ ${WC}(X) \in K({\heartsuit})^{w \leq b}$
or ${WC}(X) \in K({\heartsuit})^{w \in [a,b]}$).
In particular, if the weight structure is bounded, then ${WC}(X) \in
K({\heartsuit})^b$ for all $X \in \mathcal{T}$.
\end{lemma}
\begin{proof}
Let $X \in \mathcal{S}$ where $\mathcal{S}$ is one of the categories
$\mathcal{T}^{w \geq a}$, $\mathcal{T}^{w \leq b}$,
$\mathcal{T}^{w \in [a,b]}$.
Lemma~\ref{l:weight-str-basic-properties}
\eqref{enum:bounded-w-decomp} shows that we can assume that in all
weight decompositions $T^n_X$
(see \eqref{eq:choice-weight-decomp})
of $X$ the objects $w_{\geq n+1}X$ and $w_{\leq n}X$ are in
$\mathcal{S}$.
Choose octahedra \eqref{eq:wc-weak-octaeder} and let
${WC}'(X)$ be constructed using these choices.
Consider the octahedron
\eqref{eq:wc-weak-octaeder} again.
We claim that $w_nX \in \mathcal{S}$.
We already know that $w_nX \in \mathcal{T}^{w=n}$. In particular the
triangle $T_X^{\geq n}$ is a $(w\geq n+1, w \leq n)$-weight
decomposition of $w_{\geq n}X$ and the triangle $T_X^{\leq n}$ is a
$(w \geq n, w \leq n-1)$-weight decomposition of $w_{\leq n}X$.
We obtain
\begin{itemize}
\item Case $\mathcal{S}= \mathcal{T}^{w \geq a}$:
If $a \leq n$ then the weight decomposition $T_X^{\geq n}$ and
Lemma~\ref{l:weight-str-basic-properties} \eqref{enum:weights-bounded}
show $w_nX \in \mathcal{T}^{w \geq a}=\mathcal{S}$.
If $a > n$ the triangle $T_X^{\leq n}$ shows that $w_nX$ is an
extension of $w_{\leq n}X \in \mathcal{T}^{w \geq a}$ and
$[-1]w_{\leq n-1}X \in \mathcal{T}^{w \geq a+1} \subset
\mathcal{T}^{w \geq a}$ and
hence in $\mathcal{T}^{w \geq a}=\mathcal{S}$.
\item Case $\mathcal{S}= \mathcal{T}^{w \leq b}$:
If $n-1 < b$ the weight decomposition $T_X^{\leq n}$ and
Lemma~\ref{l:weight-str-basic-properties} \eqref{enum:weights-bounded}
show $w_nX \in \mathcal{T}^{w \leq b}=\mathcal{S}$.
If $n-1 \geq b$ the triangle $T_X^{\geq n}$ shows that $w_nX$ is an
extension of $w_{\geq n}X \in \mathcal{T}^{w \leq b}$ and
$[1]w_{\geq n+1}X \in \mathcal{T}^{w \leq b-1} \subset
\mathcal{T}^{w \leq b}$ and
hence in $\mathcal{T}^{w \leq b}=\mathcal{S}$.
\item Case $\mathcal{S}= \mathcal{T}^{w \in [a, b]}$: Follows from
the above two cases.
\end{itemize}
This proves the claim $w_nX \in \mathcal{S}$.
Let $I$ be the one among the intervals $[a,\infty)$, $(-\infty,b]$,
$[a,b]$ that satisfies $\mathcal{S}=\mathcal{T}^{w \in I}$.
If $n \not\in I$ then
$w_nX \in \mathcal{T}^{w \in I} \cap \mathcal{T}^{w=n}=0$ and hence
${WC}'(X)^n=[n]w_nX=0$. This shows
${WC}'(X) \in K^{w \in I}({\heartsuit})$ and
${WC}(X) \in K({\heartsuit})^{w \in I}$ since ${WC}'(X) \cong
{WC}(X)$. (Here the categories $K^{w \in I}({\heartsuit})$ and
$K({\heartsuit})^{w \in I}$ are defined in the obvious way, cf.\ beginning
of Section~\ref{sec:hot-cat-idem-complete}).
\end{proof}
In the following definition the triangulated category
$K({\heartsuit}(w))^{\op{anti}}$ appears (see Section
\ref{sec:triang-categ} for the definition of $\mathcal{T}^{\op{anti}}$ for
a triangulated category $\mathcal{T}$).
This happens naturally as can be seen from
Proposition~\ref{p:strong-WC-for-standard-wstr} below.
Let us however remark that we could avoid its appearance by
replacing the definition of a weak weight complex functor ${WC}$
above with its composition with the functor induced by $(S, \op{id})$
(see \eqref{eq:KA-antiKA-triequi})
which just changes the signs of all differentials.
\begin{definition}
[{cf.~\cite[Conj.~3.3.3]{bondarko-weight-str-vs-t-str}}]
\label{d:strong-WCfun}
A \define{strong weight complex functor} is a
\emph{triangulated}
functor ${\widetilde{WC}}:\mathcal{T} \rightarrow K({\heartsuit})^{\op{anti}}$
such that the obvious composition
\begin{equation*}
{\mathcal{T}} \xra{{\widetilde{WC}}} {K({\heartsuit})^{\op{anti}}} \rightarrow {K_{\op{weak}}({\heartsuit})}
\end{equation*}
is isomorphic to the/a weak weight complex functor
as a functor of additive categories with translation.
\end{definition}
Recall the standard weight structure on $K({\heartsuit})^{\op{anti}}$ from
Remark~\ref{rem:ws-hot-additive-anti}.
\begin{lemma}
\label{l:WCstrong-w-exact}
Any strong weight complex functor ${\widetilde{WC}}:\mathcal{T} \rightarrow
K({\heartsuit})^{\op{anti}}$
is weight-exact.
\end{lemma}
\begin{proof}
This follows immediately from Lemma~\ref{l:WCweak-w-exact}.
\end{proof}
\subsection{Strong weight complex functor for the standard weight
structure}
\label{sec:strong-WC-standard-wstr}
Consider the standard weight structure $w$ on the homotopy category
$K(\mathcal{A})$ of an additive category $\mathcal{A}$
from Proposition~\ref{p:ws-hot-additive}.
Given $X \in K(\mathcal{A})$ the $\ul{w}$-weight decomposition
\eqref{eq:hot-wdecomp} is a preferred choice for the
weight decomposition $T_X^n$ in \eqref{eq:choice-weight-decomp}.
Then there is also an obvious preferred choice for the octahedron
$O_X^n$ in \eqref{eq:wc-weak-octaeder} in which $w_n X$ is just
$[-n]X^n$, the
$n$-th term $X^n$ of the complex $X$ shifted into degree $n$.
With these choices the complex
${WC_{\op{c}}}(X)={WC}(X)$ is
obtained from $X$ by multiplying all differentials by $-1$, i.\,e.\
${WC_{\op{c}}}(X) = S(X)$ where $S$ is the functor defined in
Section~\ref{sec:homotopy-categories};
here we view
$\mathcal{A} \subset {\heartsuit}(w)$ as a full
subcategory (see Cor.~\ref{c:heart-standard-wstr} for a more
precise statement).
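Spelled out degreewise, ${WC_{\op{c}}}(X)^n=[n]w_nX=[n][-n]X^n=X^n$ and
\begin{equation*}
{WC_{\op{c}}}(X) =
(\dots \rightarrow X^{n-1} \xra{-d^{n-1}} X^{n} \xra{-d^{n}} X^{n+1}
\rightarrow \dots) = S(X).
\end{equation*}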
Similarly there are preferred choices for morphisms: Let
$f: X \rightarrow Y$ be a morphism in $K(\mathcal{A})$. Let $\hat{f}: X \rightarrow Y$
be a morphism in $C(\mathcal{A})$ representing $f$.
The morphisms $\ol{w}_{\geq n+1}\hat{f}$,
$\ol{w}_{\leq n}\hat{f}$ and $\ol{w}_{n}\hat{f}$
give rise to preferred choices for the morphisms $f_{w \geq n+1}$,
$f_{w\leq n}$ and $f^n$ in
\eqref{eq:f-w-trunc} and \eqref{eq:f-w-trunc-fn}.
If we identify $\mathcal{A} \subset {\heartsuit}(w)$ as above, the morphism
${WC_{\op{c}}}(f)$ (see \eqref{eq:nearly-weakWC}) of complexes
is just $\hat{f}=S(\hat{f}):{WC_{\op{c}}}(X)=S(X)
\rightarrow {WC_{\op{c}}}(Y)=S(Y)$. Obviously its class in the homotopy
category
is $f=S(f)$ and hence
does not depend on
the choice of the representative for $f$.
\begin{proposition}
\label{p:strong-WC-for-standard-wstr}
The composition
\begin{equation*}
K(\mathcal{A}) \xsira{(S,\op{id})}
K(\mathcal{A})^{\op{anti}} \subset K({\heartsuit}(w))^{\op{anti}}
\end{equation*}
of the triangulated equivalence $(S, \op{id})$
(cf.~\eqref{eq:KA-antiKA-triequi}) and the obvious inclusion is a
strong weight complex functor.
\end{proposition}
\begin{proof}
Clear from the above arguments.
\end{proof}
\section{Filtered triangulated categories}
\label{sec:filt-tria-cat}
We very closely follow \cite[App.]{Beilinson}. Let us recall the
definition of a filtered triangulated category.
\begin{definition}
\label{d:filt-tria}
\begin{enumerate}
\item
\label{enum:filt-tria-cat}
A \define{filtered triangulated category}, or \define{f-cat\-e\-go\-ry}
for short, is a quintuple
$(\tildew{\mathcal{T}}, \tildew{\mathcal{T}}(\leq 0),
\tildew{\mathcal{T}}(\geq 0), s, \alpha)$
where $\tildew{\mathcal{T}}$ is a triangulated category,
$\tildew{\mathcal{T}}(\leq 0)$ and $\tildew{\mathcal{T}}(\geq 0)$
are strict full \emph{triangulated} subcategories,
$s: \tildew{\mathcal{T}} \xra{\sim} \tildew{\mathcal{T}}$ is a
triangulated automorphism (called ``shift of filtration'') and
$\alpha:
\op{id}_{\tildew{\mathcal{T}}} \rightarrow s$ is a morphism of
\emph{triangulated}
functors, such that the
following axioms hold
(where
$\tildew{\mathcal{T}}(\leq n):= s^n(\tildew{\mathcal{T}}(\leq 0))$
and
$\tildew{\mathcal{T}}(\geq n):= s^n(\tildew{\mathcal{T}}(\geq 0))$):
\begin{enumerate}[label=(fcat{\arabic*})]
\item
\label{enum:filt-tria-cat-shift}
$\tildew{\mathcal{T}}(\geq 1) \subset \tildew{\mathcal{T}}(\geq 0)$ and
$\tildew{\mathcal{T}}(\leq 0) \subset \tildew{\mathcal{T}}(\leq 1)$.
\item
\label{enum:filt-tria-cat-exhaust}
$\tildew{\mathcal{T}}=\bigcup_{n \in \Bl{Z}} \tildew{\mathcal{T}}(\leq n) =
\bigcup_{n \in \Bl{Z}} \tildew{\mathcal{T}}(\geq n)$.
\item
\label{enum:filt-tria-cat-no-homs}
$\op{Hom}(\tildew{\mathcal{T}}(\geq 1), \tildew{\mathcal{T}}(\leq 0))=0$.
\item
\label{enum:filt-tria-cat-triang}
For any $X$ in $\tildew{\mathcal{T}}$ there is a
triangle
\begin{equation*}
A \rightarrow X \rightarrow B \rightarrow A[1]
\end{equation*}
with $A$ in $\tildew{\mathcal{T}}(\geq 1)$ and $B$
in $\tildew{\mathcal{T}}(\leq 0)$.
\item
\label{enum:filt-tria-cat-shift-alpha}
For any $X \in \tildew{\mathcal{T}}$ one has
$\alpha_{s(X)}=s(\alpha_X)$ as morphisms
$s(X) \rightarrow s^2(X)$.
\item
\label{enum:filt-tria-cat-hom-bij}
For all $X$ in
$\tildew{\mathcal{T}}(\leq 0)$
and $Y$ in $\tildew{\mathcal{T}}(\geq 1)$, the map
\begin{align*}
\op{Hom}(X,s^{-1}(Y)) & \xra{\sim} \op{Hom}(X,Y)\\
f & \mapsto \alpha_{s^{-1}(Y)} \circ f
\end{align*}
is bijective (equivalently one can require that
\begin{align*}
\op{Hom}(s(X), Y) & \xra{\sim} \op{Hom}(X,Y)\\
g & \mapsto g \circ \alpha_X
\end{align*}
is bijective).
As diagrams these equivalent conditions look as follows:
\begin{equation*}
\xymatrix{
& {Y}\\
{X} \ar[ur]^-a \ar@{..>}[r]_-{\exists ! a'} & {s^{-1}(Y)}
\ar[u]_-{\alpha_{s^{-1}(Y)}}
}
\quad
\text{ and }
\quad
\xymatrix{
{s(X)} \ar@{..>}[r]^-{\exists ! b'} & {Y}\\
{X} \ar[u]^-{\alpha_X} \ar[ur]_-b &
}
\end{equation*}
\end{enumerate}
By abuse of notation we then say that $\tildew{\mathcal{T}}$ is an
f-category.
\item
\label{enum:filt-tria-over}
Let $\mathcal{T}$ be a triangulated category. A \define{filtered
triangulated category over\footnote
{
We do not ask here for a functor
$\tildew{\mathcal{T}} \rightarrow \mathcal{T}$ as suggested by the word
``over''; Proposition~\ref{p:functor-omega} will yield such a
functor.
} $\mathcal{T}$}
(or \define{f-category over $\mathcal{T}$})
is an f-category $\tildew{\mathcal{T}}$ together with an
equivalence
\begin{equation*}
i: \mathcal{T} \rightarrow \tildew{\mathcal{T}}(\leq 0)\cap
\tildew{\mathcal{T}}(\geq 0)
\end{equation*}
of triangulated categories.
\end{enumerate}
\end{definition}
Let $\tildew{\mathcal{T}}$ be an f-category.
We will use the shorthand notation
$\tildew{\mathcal{T}}([a,b])=\tildew{\mathcal{T}}(\leq b) \cap
\tildew{\mathcal{T}}(\geq a)$ and abbreviate
$[a,a]$ by $[a]$. Similarly $\tildew{\mathcal{T}}(<a)$
etc.\ have the obvious meaning.
For $a<b$ we have $\tildew{\mathcal{T}}(\leq a) \cap \tildew{\mathcal{T}}(\geq b)=0$: If $X$ is in this intersection then axiom \ref{enum:filt-tria-cat-no-homs} implies that $\op{id}_X=0$, hence $X=0$.
Note that $\tildew{\mathcal{T}}$ together with the identity functor
$\op{id}: \tildew{\mathcal{T}}([0]) \rightarrow \tildew{\mathcal{T}}([0])$ is an
f-category over $\tildew{\mathcal{T}}([0])$.
\begin{remark}
\label{rem:f-cat-vs-t-cat}
Let $\tildew{\mathcal{T}}$ be a filtered triangulated category. Define
$\mathcal{D}^{t \leq 0}:=\tildew{\mathcal{T}}(\geq 1)$ and
$\mathcal{D}^{t \geq 0}:=\tildew{\mathcal{T}}(\leq 0)$.
Then
$(\mathcal{D}^{t \leq 0},
\mathcal{D}^{t \geq 0})$ defines a t-structure
on
$\tildew{\mathcal{T}}$.
Note that in this example all $\mathcal{D}^{t \leq i}$
coincide since $\tildew{\mathcal{T}}(\geq 1)$ is a triangulated
subcategory;
similarly, all $\mathcal{D}^{t \geq i}$ are equal.
The heart of this t-structure is zero: for any $X$ in the heart,
\ref{enum:filt-tria-cat-no-homs} gives $\op{id}_X=0$ and hence $X=0$.
Of course we can apply the shift of filtration to this example and
obtain t-structures
$(\tildew{\mathcal{T}}(\geq n+1), \tildew{\mathcal{T}}(\leq n))$ for
all $n \in \Bl{Z}$.
Similarly, define
$\mathcal{E}^{w \leq 0} :=\tildew{\mathcal{T}}(\leq 0)$ and
$\mathcal{E}^{w \geq 0} :=\tildew{\mathcal{T}}(\geq 1)$. Then
$(\mathcal{E}^{w \leq 0}, \mathcal{E}^{w \geq 0})$ is a weight
structure on $\tildew{\mathcal{T}}$.
Note that \ref{enum:ws-i} is satisfied since
$(\tildew{\mathcal{T}}(\geq 1), \tildew{\mathcal{T}}(\leq 0))$ is a
t-structure. Again all $\mathcal{E}^{w \leq i}$
(resp.\ $\mathcal{E}^{w \geq i}$) coincide and the heart is zero.
\end{remark}
\subsection{Basic example}
\label{sec:basic-example-fcat}
We introduce the filtered derived category of an abelian category,
following \cite[V.1]{illusie-cotan-i}. The reader who is not interested
in this basic example of an f-category can skip this section and
continue with \ref{sec:firstprop-filt-cat}.
Let $\mathcal{A}$ be an abelian category and $CF(\mathcal{A})$ the
category whose objects are complexes in $\mathcal{A}$ with a finite
decreasing filtration and whose morphisms are morphisms of complexes
which respect the filtrations.
If $L$ is an object of $CF(\mathcal{A})$ and $i,j \in \Bl{Z}$ we denote the
component of $L$ in degree $i$ by $L^i$ and by $F^jL$ the $j$-th step
of the filtration, and by $F^jL^i$ the component of degree $i$ in
$F^jL$.
Pictorially an object $L$ looks as
\begin{equation*}
\xymatrix{
{L:} &
{\dots} \ar@{}[r]|-{\supset} &
{F^{-1}L} \ar@{}[r]|-{\supset} &
{F^{0}L} \ar@{}[r]|-{\supset} &
{F^{1}L} \ar@{}[r]|-{\supset} &
{F^{2}L} \ar@{}[r]|-{\supset} &
{\cdots}
}
\end{equation*}
or, if we also indicate the differentials, as
\begin{equation*}
\xymatrix@R1pc{
{\dots} &
{} &
{\dots} &
{\dots} &
{\dots} &
{} \\
{L^{1}:} \ar[u] &
{\dots} \ar@{}[r]|-{\supset} &
{F^{-1}L^{1}} \ar@{}[r]|-{\supset} \ar[u] &
{F^{0}L^{1}} \ar@{}[r]|-{\supset} \ar[u] &
{F^{1}L^{1}} \ar@{}[r]|-{\supset} \ar[u] &
{\dots} \\
{L^{0}:} \ar[u] &
{\dots} \ar@{}[r]|-{\supset} &
{F^{-1}L^{0}} \ar@{}[r]|-{\supset} \ar[u] &
{F^{0}L^{0}} \ar@{}[r]|-{\supset} \ar[u] &
{F^{1}L^{0}} \ar@{}[r]|-{\supset} \ar[u] &
{\dots} \\
{L^{-1}:} \ar[u] &
{\dots} \ar@{}[r]|-{\supset} &
{F^{-1}L^{-1}} \ar@{}[r]|-{\supset} \ar[u] &
{F^{0}L^{-1}} \ar@{}[r]|-{\supset} \ar[u] &
{F^{1}L^{-1}} \ar@{}[r]|-{\supset} \ar[u] &
{\dots} \\
{\dots} \ar[u] &
{} &
{\dots} \ar[u] &
{\dots} \ar[u] &
{\dots} \ar[u] &
{}
}
\end{equation*}
This is an additive category having kernels and cokernels, but
it is
not abelian (if $\mathcal{A}\not=0$).
There is an obvious translation functor $[1]$ on $CF(\mathcal{A})$.
A morphism $f: L \rightarrow M$ between objects of $CF(\mathcal{A})$
is called a \define{quasi-isomorphism} if one of the following
equivalent conditions is satisfied:
\begin{enumerate}
\item $F^nf:F^nL \rightarrow F^nM$ is a quasi-isomorphism for all $n \in \Bl{Z}$.
\item $\op{gr}^n(f): \op{gr}^n(L) \rightarrow \op{gr}^n(M)$ is a quasi-isomorphism for
all $n \in \Bl{Z}$.
\end{enumerate}
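Let us briefly indicate why these two conditions are equivalent: for
every $n$ there is a short exact sequence of complexes
\begin{equation*}
0 \rightarrow F^{n+1}L \rightarrow F^{n}L \rightarrow \op{gr}^n(L) \rightarrow 0
\end{equation*}
(and similarly for $M$). The five lemma applied to the associated long
exact cohomology sequences shows that the first condition implies the
second; the converse follows by descending induction on $n$, using that
$F^nL=0=F^nM$ for $n \gg 0$ since the filtrations are finite.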
We localize $CF(\mathcal{A})$ with respect to the class of all
quasi-isomorphisms and obtain the \define{filtered derived category}
$DF(\mathcal{A})$ of $\mathcal{A}$.
This category can equivalently be constructed as the localization of
the filtered homotopy category. The latter category is triangulated
with triangles isomorphic to mapping cone triangles; this
structure of a triangulated category is inherited to
$DF(\mathcal{A})$.
Morphisms in $DF(\mathcal{A})$ are equivalence classes of so-called roofs:
Any morphism $f: L \rightarrow M$ in $DF(\mathcal{A})$
can be represented as
\begin{equation*}
g s^{-1}: L \xla{s} L' \xra{g} M
\end{equation*}
where $s$ and $g$ are morphisms in $CF(\mathcal{A})$ and $s$ is a
quasi-isomorphism. Similarly, one can also represent $f$ as
\begin{equation*}
t^{-1} h: L \xra{h} M' \xla{t} M
\end{equation*}
where $t$ and $h$ are morphisms in $CF(\mathcal{A})$ and $t$ is a
quasi-isomorphism.
Let
$D(\mathcal{A})$ be the derived category of $\mathcal{A}$.
The functor $\op{gr}^n:CF(\mathcal{A}) \rightarrow C(\mathcal{A})$ passes to the
derived categories and yields a triangulated functor $\op{gr}^n:
DF(\mathcal{A}) \rightarrow D(\mathcal{A})$.
Define (strict) full subcategories
\begin{align*}
DF(\mathcal{A})(\leq n)& :=\{L \in DF(\mathcal{A})\mid \text{$\op{gr}^i (L)
=0$ for all $i>n$}\},\\
DF(\mathcal{A})(\geq n)& :=\{L \in DF(\mathcal{A})\mid \text{$\op{gr}^i (L)
=0$ for all $i<n$}\}.
\end{align*}
Let $s: CF(\mathcal{A}) \rightarrow CF(\mathcal{A})$ be the functor which shifts
the filtration:
Given an object $L$ the object $s(L)$ has the same underlying complex
but filtration $F^n(s(L))=F^{n-1}L$.
It induces a triangulated automorphism $s:
DF(\mathcal{A}) \rightarrow DF(\mathcal{A})$.
Let $\alpha: \op{id}_{DF(\mathcal{A})} \rightarrow s$ be the obvious morphism of
triangulated functors: We include a picture of $\alpha_L$ where we indicate the $0$-th part of the filtration by a box:
\begin{equation*}
\xymatrix@R1pc{
{L:} \ar[d]^-{\alpha_L} &
{\dots} \ar@{}[r]|-{\supset} &
{F^{-1}L} \ar@{}[r]|-{\supset} \ar[d] &
{\mathovalbox{F^{0}L}} \ar@{}[r]|-{\supset} \ar[d] &
{F^1L} \ar@{}[r]|-{\supset} \ar[d] &
{\cdots} \\
{s(L):}
&
{\dots} \ar@{}[r]|-{\supset} &
{F^{-2}L} \ar@{}[r]|-{\supset}
&
{\mathovalbox{F^{-1}L}} \ar@{}[r]|-{\supset}
&
{F^{0}L} \ar@{}[r]|-{\supset}
&
{\cdots}
}
\end{equation*}
Note that $s^n(DF(\mathcal{A})(\leq 0))= DF(\mathcal{A})(\leq n)$ and
$s^n(DF(\mathcal{A})(\geq 0))= DF(\mathcal{A})(\geq n)$.
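This follows from the equalities
\begin{equation*}
\op{gr}^i(s(L)) = F^{i}(s(L))/F^{i+1}(s(L)) = F^{i-1}L/F^{i}L = \op{gr}^{i-1}(L),
\end{equation*}
i.\,e.\ $\op{gr}^i \circ s = \op{gr}^{i-1}$, first on $CF(\mathcal{A})$
and hence on $DF(\mathcal{A})$.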
We define a functor
$i:D(\mathcal{A}) \rightarrow DF(\mathcal{A})$
by mapping an
object $L$ of $D(\mathcal{A})$ to $i(L)=(L,{\op{Tr}})$, where ${\op{Tr}}$ is the
trivial filtration on $L$ defined by ${\op{Tr}}^i L = L$ for $i \leq 0$
and ${\op{Tr}}^i L =0$ for $i > 0$.
We often consider $i$ as a functor to $DF(\mathcal{A})([0])$.
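Note that directly from the definition of the trivial filtration we
obtain
\begin{equation*}
\op{gr}^0(i(L)) = L \quad \text{and} \quad \op{gr}^n(i(L)) = 0 \text{ for $n \not= 0$;}
\end{equation*}
in particular $\op{gr}^0 \circ i = \op{id}_{D(\mathcal{A})}$, a fact
used in the proof below.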
\begin{proposition}
[{cf.\ \cite[Example A 2]{Beilinson}}]
\label{p:basic-ex-f-cat}
The datum
\begin{equation}
\label{eq:filt-der}
(DF(\mathcal{A}), DF(\mathcal{A})(\leq 0),
DF(\mathcal{A})(\geq 0), s, \alpha)
\end{equation}
defines a filtered triangulated
category $DF(\mathcal{A})$, and
the functor $i:D(\mathcal{A}) \rightarrow DF(\mathcal{A})([0])$ makes
$DF(\mathcal{A})$ into a filtered
triangulated category over $D(\mathcal{A})$.
\end{proposition}
\begin{proof}
We first check that \eqref{eq:filt-der} defines a filtered
triangulated category.
Since all $\op{gr}^i: DF(\mathcal{A}) \rightarrow D(\mathcal{A})$ are
triangulated functors, $DF(\mathcal{A})(\leq 0)$
and $DF(\mathcal{A})(\geq 0)$ are strict full triangulated
subcategories of $DF(\mathcal{A})$.
The conditions \ref{enum:filt-tria-cat-shift},
\ref{enum:filt-tria-cat-exhaust} (we use finite filtrations) and
\ref{enum:filt-tria-cat-shift-alpha} are obviously satisfied.
\textbf{Condition \ref{enum:filt-tria-cat-triang}}:
Let $X$ be any object in $DF(\mathcal{A})$. We define objects
$X(\geq 1)$
and $X/(X(\geq 1))$
and (obvious) morphisms
$ X(\geq 1) \xra{i} X \xra{p} X/(X(\geq 1))$
in $CF(\mathcal{A})$
by the following
diagram:
\begin{equation}
\label{eq:basic-example-fcat-def-Xgeq-triangle}
\hspace{-1.6cm}
\xymatrix@R1pc{
{X(\geq 1):} \ar[d]^-{i} &
{\dots} \ar@{}[r]|-{=} &
{F^{1}X} \ar@{}[r]|-{=} \ar[d] &
{\mathovalbox{F^{1}X}} \ar@{}[r]|-{=} \ar[d] &
{F^{1}X} \ar@{}[r]|-{\supset} \ar[d] &
{F^{2}X} \ar@{}[r]|-{\supset} \ar[d] &
{\cdots} \\
{X:} \ar[d] \ar[d]^-{p} &
{\dots} \ar@{}[r]|-{\supset} &
{F^{-1}X} \ar@{}[r]|-{\supset} \ar[d] &
{\mathovalbox{F^{0}X}} \ar@{}[r]|-{\supset} \ar[d] &
{F^1X} \ar@{}[r]|-{\supset} \ar[d] &
{F^2X} \ar@{}[r]|-{\supset} \ar[d] &
{\cdots} \\
{X/(X(\geq 1)):} &
{\dots} \ar@{}[r]|-{\supset} &
{F^{-1}X/F^1X} \ar@{}[r]|-{\supset} &
{\mathovalbox{F^{0}X/F^1X}} \ar@{}[r]|-{\supset} &
{0} \ar@{}[r]|-{=} &
{0} \ar@{}[r]|-{=} &
{\cdots} \\
}
\end{equation}
There is a morphism $X/(X(\geq 1)) \rightarrow [1](X(\geq 1))$ in
$DF(\mathcal{A})$ such that
\begin{equation}
\label{eq:triangle-fcat-basic}
\xymatrix{
{X(\geq 1)} \ar[r]^-{i} &
{X} \ar[r]^-{p} &
{X/(X(\geq 1))} \ar[r] &
{[1](X(\geq 1))}
}
\end{equation}
is a triangle in $DF(\mathcal{A})$: Use the obvious quasi-isomorphism from
the mapping cone of $i$ to $X/(X(\geq 1))$.
Since $X(\geq 1) \in DF(\mathcal{A})(\geq 1)$ and
$X/(X(\geq 1)) \in DF(\mathcal{A})(\leq 0)$ by definition this
proves condition \ref{enum:filt-tria-cat-triang}.
\textbf{Observation:}
Application of the triangulated functors $\op{gr}^i$ to the
triangle \eqref{eq:triangle-fcat-basic} shows:
If $X$ is in $DF(\mathcal{A})(\geq 1)$ then
$X (\geq 1) \rightarrow X$ is an isomorphism in $DF(\mathcal{A})$.
Similarly, if $X$ is in $DF(\mathcal{A})(\leq 0)$, then $X \rightarrow
X/(X(\geq 1))$ is an isomorphism.
We obtain:
\begin{itemize}
\item
Any object in $DF(\mathcal{A})(\geq a)$
is isomorphic to an object $Y$ with $Y=F^{-\infty}Y = \dots =F^{a-1}Y
=F^aY$.
\item Any object in $DF(\mathcal{A})(\leq b)$ is
isomorphic to an object $Y$ with $0=F^{b+1}Y=F^{b+2}Y = \dots$.
\item Any object in $DF(\mathcal{A})([a,b])$ is
isomorphic to an object $Y$ with
$Y= F^{-\infty}Y=\dots
=F^aY \supset \dots \supset 0=F^{b+1}Y = \dots$.
\end{itemize}
\textbf{Condition \ref{enum:filt-tria-cat-no-homs}}:
Let $X$ in $DF(\mathcal{A})(\geq 1)$ and $Y$ in
$DF(\mathcal{A})(\leq 0)$.
By the above observation we can assume that $0=F^1Y =F^2Y = \dots$.
Let a morphism $f: X \rightarrow Y$ be
represented by a roof $X \xla{s} Z \xra{g} Y$ with $s$ a
quasi-isomorphism.
Then the obvious morphism $\iota: Z(\geq 1) \rightarrow Z$ is a
quasi-isomorphism and the roof
$X \xla{s} Z \xra{g} Y$ is equivalent to the roof
$X \xla{s\iota} Z(\geq 1) \xra{g \iota} Y$. Since
$F^1Y=0$ and $F^1(Z(\geq 1))=Z(\geq 1)$ we obtain $g \iota=0$. Hence
$f=0$.
\textbf{Condition \ref{enum:filt-tria-cat-hom-bij}}:
Let $X$ in $DF(\mathcal{A})(\geq 1)$ and $Y$ in
$DF(\mathcal{A})(\leq 0)$ as before.
As observed above we can assume that $X= \dots =F^0X=F^1X$
and that $0=F^1Y = F^2Y =\dots$.
We prove that
\begin{align*}
\op{Hom}(sY, X) & \rightarrow \op{Hom}(Y,X)\\
g & \mapsto g \circ \alpha_Y
\end{align*}
is bijective. Here is a picture of $\alpha_Y$:
\begin{equation*}
\xymatrix@R1pc{
{Y:} \ar[d]^-{\alpha_Y} &
{\dots} \ar@{}[r]|-{\supset} &
{F^{-1}Y} \ar@{}[r]|-{\supset} \ar[d] &
{\mathovalbox{F^{0}Y}} \ar@{}[r]|-{\supset} \ar[d] &
{0} \ar@{}[r]|-{=} \ar[d] &
{0} \ar@{}[r]|-{=} \ar[d] &
{\cdots} \\
{s(Y):} &
{\dots} \ar@{}[r]|-{\supset} &
{F^{-2}Y} \ar@{}[r]|-{\supset} &
{\mathovalbox{F^{-1}Y}} \ar@{}[r]|-{\supset} &
{F^{0}Y} \ar@{}[r]|-{\supset} &
{0} \ar@{}[r]|-{=} &
{\cdots}
}
\end{equation*}
We define a map $\op{Hom}(Y, X)\rightarrow \op{Hom}(s(Y), X)$ which will be inverse
to the above map.
Let $f: Y \rightarrow X$ be a morphism, represented by a roof
$Y \xra{h} Z \xla{t} X$ where $h$ and $t$ are morphisms in
$CF(\mathcal{A})$ and $t$ is a quasi-isomorphism. We
define an object $\tildew{Z}$ and a morphism $Z \rightarrow \tildew{Z}$ by
the following picture in which we include $Z \xla{t} X$:
\begin{equation*}
\xymatrix@R1pc{
{\tildew{Z}:} &
{\dots} \ar@{}[r]|-{=} &
{Z} \ar@{}[r]|-{=} &
{\mathovalbox{Z}} \ar@{}[r]|-{=} &
{Z} \ar@{}[r]|-{\supset} &
{F^2Z} \ar@{}[r]|-{\supset} &
{\cdots} \\
{Z:} \ar[u]^-{s} &
{\dots} \ar@{}[r]|-{\supset} &
{F^{-1}Z} \ar@{}[r]|-{\supset} \ar[u] &
{\mathovalbox{F^{0}Z}} \ar@{}[r]|-{\supset} \ar[u] &
{F^{1}Z} \ar@{}[r]|-{\supset} \ar[u] &
{F^2Z} \ar@{}[r]|-{\supset} \ar[u] &
{\cdots} \\
{X:} \ar[u]^-{t} &
{\dots} \ar@{}[r]|-{=} &
{X} \ar@{}[r]|-{=} \ar[u] &
{\mathovalbox{X}} \ar@{}[r]|-{=} \ar[u] &
{X} \ar@{}[r]|-{\supset} \ar[u] &
{F^{2}X} \ar@{}[r]|-{\supset} \ar[u] &
{\cdots}
}
\end{equation*}
Since $t$ is a quasi-isomorphism, all $F^it:F^iX \rightarrow F^iZ$ are
quasi-isomorphisms. For $i$ small enough we have $Z=F^iZ$; this
implies that $X \rightarrow Z$ is a quasi-isomorphism in $C(\mathcal{A})$; hence
$st: X \rightarrow \tildew{Z}$ is a quasi-isomorphism in $CF(\mathcal{A})$.
Hence we get the following diagram
\begin{equation*}
\xymatrix{
& {\tildew{Z}}\\
{Y} \ar[d]_{\alpha_Y} \ar[r]^-h \ar[ru]^-{sh} & {Z} \ar[u]^-s &
{X} \ar[l]_t \ar[lu]_{st}\\
{s(Y)}
}
\end{equation*}
Because of the special form of the filtrations on $\tildew{Z}$ and
on $Y$ it is obvious that $sh:Y \rightarrow \tildew{Z}$ comes from a
unique morphism $\lambda:s(Y) \rightarrow \tildew{Z}$ in $CF(\mathcal{A})$
such that $\lambda \alpha_Y =sh$. We map $f$ to the class of
the roof $(st)^{-1} \lambda$; it is easy to check that this is
well-defined and inverse to the map $g \mapsto g \circ \alpha_Y$.
Now we check that $i$ makes $DF(\mathcal{A})$ into an f-category over
$D(\mathcal{A})$.
It is obvious that
$i:D(\mathcal{A}) \rightarrow DF(\mathcal{A})([0])$
is triangulated.
Our observation shows that it is essentially surjective.
Since $\op{gr}^0 \circ i=\op{id}_{D(\mathcal{A})}$, our functor $i$ is
faithful. It remains to prove fullness:
Let $X$ and $Y$ be in $D(\mathcal{A})$ and let $f: i(X) \rightarrow i(Y)$ be
a morphism in $DF(\mathcal{A})$, represented by a roof
$i(X) \xla{s} Z \xra{g} i(Y)$ with $s$ a quasi-isomorphism.
Consider the morphism
\begin{equation*}
\xymatrix@R1pc{
{i(F^0Z):} \ar[d]^-{t} &
{\dots} \ar@{}[r]|-{=} &
{F^{0}Z} \ar@{}[r]|-{=} \ar[d] &
{\mathovalbox{F^{0}Z}} \ar@{}[r]|-{\supset} \ar[d] &
{0} \ar@{}[r]|-{=} \ar[d] &
{\cdots} \\
{Z:} &
{\dots} \ar@{}[r]|-{\supset} &
{F^{-1}Z} \ar@{}[r]|-{\supset} &
{\mathovalbox{F^{0}Z}} \ar@{}[r]|-{\supset} &
{F^1Z} \ar@{}[r]|-{\supset} &
{\cdots}
}
\end{equation*}
Since $F^0s:F^0Z \rightarrow F^0i(X)=X$ is a
quasi-isomorphism, $st$ is a quasi-isomorphism (and so is $t$).
But the roof $i(X) \xla{st} i(F^0Z) \xra{gt} i(Y)$ comes from a roof
$X \leftarrow F^0Z \rightarrow Y$; hence $f$ is in the image of $i$.
\end{proof}
\subsection{First properties of filtered triangulated categories}
\label{sec:firstprop-filt-cat}
We will make heavy use of some results of
\cite[App.]{Beilinson} in Section~\ref{sec:strong-WCfun}.
Since no proofs of these results have appeared, we give proofs of the
more difficult ones we need.
\begin{proposition}
[{cf.\ \cite[Prop.~A 3]{Beilinson} (without proof)}]
\label{p:firstprop-filt-cat}
Let $\tildew{\mathcal{T}}$ be a filtered triangulated category and
$n \in \Bl{Z}$.
\begin{enumerate}
\item
\label{enum:firstprop-filt-cat-right-and-left}
The inclusion $\tildew{\mathcal{T}}(\geq n) \subset
\tildew{\mathcal{T}}$ has a right adjoint $\sigma_{\geq n}:
\tildew{\mathcal{T}} \rightarrow \tildew{\mathcal{T}}(\geq n)$,
and the inclusion $\tildew{\mathcal{T}}(\leq n) \subset
\tildew{\mathcal{T}}$ has a left adjoint $\sigma_{\leq n}:
\tildew{\mathcal{T}} \rightarrow \tildew{\mathcal{T}}(\leq n)$.
\end{enumerate}
We fix all these adjunctions.
\begin{enumerate}[resume]
\item
\label{enum:firstprop-filt-cat-trunc-triangle}
For any $X$ in $\tildew{\mathcal{T}}$ there is a unique morphism
$v^n_X: \sigma_{\leq n}X \rightarrow [1]\sigma_{\geq n+1}X$ in ${\tildew{\mathcal{T}}}$
such that the candidate triangle
\begin{equation}
\label{eq:sigma-trunc-triangle}
\xymatrix{
{\sigma_{\geq n+1}X} \ar[r]^-{g^{n+1}_X}
& {X} \ar[r]^-{k^n_X}
& {\sigma_{\leq n}X} \ar[r]^-{v^n_X}
& {[1]\sigma_{\geq n+1}X}
}
\end{equation}
is a triangle where the first two
morphisms are adjunction morphisms. For every triangle $A\rightarrow X
\rightarrow B \rightarrow[1]A$ with
$A$ in $\tildew{\mathcal{T}}(\geq n+1)$ and $B$ in
$\tildew{\mathcal{T}}(\leq n)$ there is a
unique isomorphism of triangles to the
above triangle extending $\op{id}_X$.
We call
\eqref{eq:sigma-trunc-triangle}
the
\textbf{$\sigma$-truncation triangle (of type $(\geq n+1, \leq n)$)}.
\item
\label{enum:firstprop-filtr-cat-trunc-perps}
We have
\begin{align*}
\tildew{\mathcal{T}}(\leq n) & = (\tildew{\mathcal{T}}(>n))^\perp \quad \text{and}\\
\tildew{\mathcal{T}}(\geq n) & =
\leftidx{^\perp}{(\tildew{\mathcal{T}}(< n))}.
\end{align*}
In particular if in a triangle $X \rightarrow Y \rightarrow Z
\rightarrow [1]X$ two out of the three objects $X$, $Y$, $Z$ are in
$\tildew{\mathcal{T}}(\leq n)$ (resp.\ $\tildew{\mathcal{T}}(\geq
n)$) then so is the third.
\item
\label{enum:firstprop-filt-cat-trunc-preserve}
Let $a, b \in \Bl{Z}$.
All functors $\sigma_{\leq n}$ and $\sigma_{\geq n}$ are
triangulated and preserve all subcategories
$\tildew{\mathcal{T}}(\leq a)$ and
$\tildew{\mathcal{T}}(\geq b)$.
There is a unique morphism
\begin{equation}
\label{eq:interval-isom-sigma-trunc-morph-functors}
\sigma^{[b,a]}:\sigma_{\leq a} \sigma_{\geq b} \rightarrow
\sigma_{\geq b} \sigma_{\leq a}
\end{equation}
(which is in fact an isomorphism)
such that the diagram
\begin{equation}
\label{eq:interval-isom-sigma-trunc}
\xymatrix{
{\sigma_{\geq b}X} \ar[r]^-{g^{b}_X}
\ar[d]_-{k^a_{\sigma_{\geq b}X}}
&
{X} \ar[r]^-{k_X^a}
&
{\sigma_{\leq a}X}
\\
{\sigma_{\leq a}\sigma_{\geq b}X}
\ar[rr]^-{\sigma^{[b,a]}_X}
&&
{\sigma_{\geq b}\sigma_{\leq a} X} \ar[u]_-{g^{b}_{\sigma_{\leq
a}X}}
}
\end{equation}
commutes for all $X$ in $\tildew{\mathcal{T}}$.
\end{enumerate}
\end{proposition}
Our proof of this proposition gives some more canonical isomorphisms; see
Remark~\ref{rem:consequences-from-three-times-three} below.
If we were only interested in the statements of
Proposition~\ref{p:firstprop-filt-cat}
we
could do without the
$3 \times 3$-diagram~\eqref{eq:sigma-truncation-two-parameters}.
\begin{proof}
The statements
\eqref{enum:firstprop-filt-cat-right-and-left} and
\eqref{enum:firstprop-filt-cat-trunc-triangle}
follow
from the fact that
$(\tildew{\mathcal{T}}(\geq n+1), \tildew{\mathcal{T}}(\leq n))$ is
a t-structure for all $n \in \Bl{Z}$
(see Rem.~\ref{rem:f-cat-vs-t-cat}) and \cite[1.3.3]{BBD}.
For \eqref{enum:firstprop-filtr-cat-trunc-perps} use
\cite[1.3.4]{BBD} and the fact that $\tildew{\mathcal{T}}(\geq n)$
and $\tildew{\mathcal{T}}(\leq n)$ are stable under $[1]$.
We prove \eqref{enum:firstprop-filt-cat-trunc-preserve}:
The functors $\sigma_{\leq n}$ and $\sigma_{\geq n}$ are
triangulated by Proposition~\ref{p:linksad-triang-for-wstr-art}.
Let $X \in \tildew{\mathcal{T}}$ and $a, b \in \Bl{Z}$. Consider the
following $3 \times 3$-diagram
\begin{equation}
\label{eq:sigma-truncation-two-parameters}
\xymatrix@=1.2cm{
{[1]\sigma_{\geq b+1}\sigma_{\geq a+1} X} \ar@{..>}[r]^-{[1]\sigma_{\geq b+1}(g^{a+1}_X)} &
{[1]\sigma_{\geq b+1} X} \ar@{..>}[r]^-{[1]\sigma_{\geq b+1}({k^a_X})} &
{[1]\sigma_{\geq b+1}\sigma_{\leq a}X} \ar@{..>}[r]^-{[1]\sigma_{\geq b+1}(v^a_X)} \ar@{}[rd]|{\circleddash}&
{[2]\sigma_{\geq b+1}\sigma_{\geq a+1} X} \\
{\sigma_{\leq b}\sigma_{\geq a+1}X}
\ar[u]^{v^b_{\sigma_{\geq a+1}X}} \ar[r]^-{\sigma_{\leq b}(g^{a+1}_X)} &
{\sigma_{\leq b}X} \ar[u]^{v^b_{X}}
\ar[r]^-{\sigma_{\leq b}({k^a_X})} &
{\sigma_{\leq b}\sigma_{\leq a}X}
\ar[u]^{v^b_{\sigma_{\leq a}X}} \ar[r]^-{\sigma_{\leq b}(v^a_X)} &
{[1]\sigma_{\leq b}\sigma_{\geq a+1}X} \ar@{..>}[u]^{[1]v^b_{\sigma_{\geq a+1}X}} \\
{\sigma_{\geq a+1}X} \ar[r]^-{g^{a+1}_X} \ar[u]^{k^b_{\sigma_{\geq a+1}X}} &
{X} \ar[r]^-{k^a_X} \ar[u]^{k^b_{X}}
\ar@{}[ru]|{(2)}
&
{\sigma_{\leq a}X} \ar[r]^-{v^a_X} \ar[u]^{k^b_{\sigma_{\leq a}X}} &
{[1]\sigma_{\geq a+1}X} \ar@{..>}[u]^{[1]k^b_{\sigma_{\geq a+1}X}} \\
{\sigma_{\geq b+1}\sigma_{\geq a+1} X} \ar[u]^{g^{b+1}_{\sigma_{\geq
a+1}X}} \ar[r]^-{\sigma_{\geq b+1}(g^{a+1}_X)}
\ar@{}[ru]|{(1)}
&
{\sigma_{\geq b+1} X} \ar[u]^{g^{b+1}_{X}} \ar[r]^-{\sigma_{\geq b+1}({k^a_X})} &
{\sigma_{\geq b+1}\sigma_{\leq a}X} \ar[u]^{g^{b+1}_{\sigma_{\leq a}X}}
\ar[r]^-{\sigma_{\geq b+1}(v^a_X)} &
{[1]\sigma_{\geq b+1}\sigma_{\geq a+1} X} \ar@{..>}[u]^{[1]g^{b+1}_{\sigma_{\geq a+1}X}}
}
\end{equation}
constructed as follows: All morphisms $g$ and $k$ are
adjunction morphisms. We start with the third row, which is the
$\sigma$-truncation triangle of $X$ of type $(\geq a+1, \leq a)$.
We apply the triangulated functors $\sigma_{\leq b}$ and $\sigma_{\geq
b+1}$ to this triangle and obtain the triangles in the second and
fourth row. The adjunctions give the morphisms of triangles from
fourth to third and third to second row.
Then extend the first three columns to $\sigma$-truncation
triangles; they can be uniquely connected by morphisms of triangles extending
$g^{a+1}_X$ and $k^a_X$ respectively
using Proposition~\ref{p:BBD-1-1-9-copied-for-w-str}.
Similarly (multiply the last arrow in the fourth column by $-1$ to get a
triangle) we obtain the morphism between third and fourth column.
We prove that $\sigma_{\geq b+1}$ and $\sigma_{\leq b}$ preserve the
subcategories
$\tildew{\mathcal{T}}(\geq a+1)$ and $\tildew{\mathcal{T}}(\leq a)$.
\begin{itemize}
\item
\textbf{Case $a \geq b$:} Then in the left vertical triangle the
first two objects are in $\tildew{\mathcal{T}}(\geq b+1)$; hence
$\sigma_{\leq b}\sigma_{\geq a+1}X \in \tildew{\mathcal{T}}(\geq b+1) \cap
\tildew{\mathcal{T}}(\leq b)$ is zero
(use \eqref{enum:firstprop-filtr-cat-trunc-perps}).
(This shows that
$g^{b+1}_{\sigma_{\geq a+1}X}$ and $\sigma_{\leq b}(k^a_X)$ are
isomorphisms.)
\begin{itemize}
\item \textbf{$X \in \tildew{\mathcal{T}}(\geq a+1)$:} Then
$g^{a+1}_X$ is an isomorphism and the first two vertical
triangles are isomorphic. This shows $\sigma_{\leq b}X=0 \in
\tildew{\mathcal{T}}(\geq a+1)$ and that all four morphisms of the
square $(1)$ are isomorphisms; hence
$\sigma_{\geq b+1}X
\in \tildew{\mathcal{T}}(\geq a+1)$.
\item \textbf{$X \in \tildew{\mathcal{T}}(\leq a)$:} Then
$\sigma_{\leq b}X \in \tildew{\mathcal{T}}(\leq b) \subset
\tildew{\mathcal{T}}(\leq a)$. Hence in the second vertical triangle
two objects are in $\tildew{\mathcal{T}}(\leq a)$; hence
the third object $\sigma_{\geq b+1}X$ is in
$\tildew{\mathcal{T}}(\leq a)$.
\end{itemize}
\item
\textbf{Case $a \leq b$:} Then in the third vertical triangle the
second and third object are in $\tildew{\mathcal{T}}(\leq b)$; hence
$\sigma_{\geq b+1}\sigma_{\leq a}X \in \tildew{\mathcal{T}}(\leq b)\cap
\tildew{\mathcal{T}}(\geq b+1)$ is zero.
(This shows that
$k^b_{\sigma_{\leq a}X}$ and $\sigma_{\geq b+1}(g^{a+1}_X)$ are
isomorphisms.)
\begin{itemize}
\item \textbf{$X \in \tildew{\mathcal{T}}(\leq a)$:} Then
$k^a_X$ is an isomorphism and the second and third vertical
triangles are isomorphic. This shows $\sigma_{\geq b+1}X=0 \in
\tildew{\mathcal{T}}(\leq a)$ and that all four morphisms of the
square $(2)$ are isomorphisms; hence
$\sigma_{\leq b}X \in \tildew{\mathcal{T}}(\leq a)$.
\item \textbf{$X \in \tildew{\mathcal{T}}(\geq a+1)$:} Then
$\sigma_{\geq b+1}X \in \tildew{\mathcal{T}}(\geq b+1) \subset
\tildew{\mathcal{T}}(\geq a+1)$. Hence in the second vertical triangle
two objects are in $\tildew{\mathcal{T}}(\geq a+1)$; hence
the third object $\sigma_{\leq b}X$ is in $\tildew{\mathcal{T}}(\geq
a+1)$.
\end{itemize}
\end{itemize}
For the last statement consider the diagram
\eqref{eq:interval-isom-sigma-trunc} without the arrow
$\sigma^{[b,a]}_X$ and with $b$ replaced by $b+1$.
The vertical arrows are part of $\sigma$-truncation
triangles.
Note that we already know that
$\sigma_{\geq b+1}\sigma_{\leq a}X \in \tildew{\mathcal{T}}(\leq a)$ and
$\sigma_{\leq a}\sigma_{\geq b+1}X \in \tildew{\mathcal{T}}(\geq b+1)$.
Appropriate cohomological functors
give the following
commutative diagram of isomorphisms:
\begin{equation*}
\xymatrix@C=1.5cm{
{\tildew{\mathcal{T}}(\sigma_{\geq b+1}X, \sigma_{\leq a}X)}
&
{\tildew{\mathcal{T}}(\sigma_{\geq b+1}X, \sigma_{\geq b+1}\sigma_{\leq
a}X)}
\ar[l]_-{g^{b+1}_{\sigma_{\leq a}X} \circ ?}^-{\sim}\\
{\tildew{\mathcal{T}}(\sigma_{\leq a}\sigma_{\geq b+1}X, \sigma_{\leq a}X)}
\ar[u]_-{? \circ k^a_{\sigma_{\geq b+1}X}}^-{\sim}
&
{\tildew{\mathcal{T}}(\sigma_{\leq a}\sigma_{\geq b+1}X, \sigma_{\geq b+1}\sigma_{\leq
a}X)}
\ar[l]_-{g^{b+1}_{\sigma_{\leq a}X} \circ ?}^-{\sim}
\ar[u]_-{? \circ k^a_{\sigma_{\geq b+1}X}}^-{\sim}
}
\end{equation*}
It shows that there is a unique morphism
\begin{equation*}
\sigma^{[b+1,a]}_X:
\sigma_{\leq a}\sigma_{\geq b+1}X \rightarrow \sigma_{\geq b+1}\sigma_{\leq a}X
\end{equation*}
such that $g^{b+1}_{\sigma_{\leq a}X} \circ \sigma^{[b+1,a]}_X \circ
k^a_{\sigma_{\geq b+1}X}= k^a_X \circ g^{b+1}_X$.
We have to prove that $\sigma^{[b+1,a]}_X$ is an isomorphism.
From \eqref{enum:firstprop-filt-cat-trunc-triangle} we see that
the lowest horizontal triangle in
\eqref{eq:sigma-truncation-two-parameters} is uniquely isomorphic
(by an isomorphism extending the identity) to the corresponding
$\sigma$-truncation triangle:
\begin{equation*}
\xymatrix@=1.2cm{
{\sigma_{\geq b+1}\sigma_{\geq a+1} X}
\ar[r]^-{\sigma_{\geq b+1}(g^{a+1}_X)}
&
{\sigma_{\geq b+1} X}
\ar[r]^-{\sigma_{\geq b+1}({k^a_X})}
&
{\sigma_{\geq b+1}\sigma_{\leq a}X}
\ar[r]^-{\sigma_{\geq b+1}(v^a_X)}
&
{[1]\sigma_{\geq b+1}\sigma_{\geq a+1} X}
\\
{\sigma_{\geq a+1} \sigma_{\geq b+1}X}
\ar[r]^-{g^{a+1}_{\sigma_{\geq b+1}X}}
\ar[u]^{f}_{\sim}
&
{\sigma_{\geq b+1}X}
\ar[r]^-{k^a_{\sigma_{\geq b+1}X}}
\ar[u]^{\op{id}}
&
{\sigma_{\leq a} \sigma_{\geq b+1}X}
\ar[r]^-{v^a_{\sigma_{\geq b+1}X}}
\ar[u]^{h}_{\sim}
&
{[1]\sigma_{\geq a+1} \sigma_{\geq b+1}X}
\ar[u]^{[1]f}_{\sim}
}
\end{equation*}
In combination with \eqref{eq:sigma-truncation-two-parameters} this
diagram yields
$g^{b+1}_{\sigma_{\leq a}X} \circ h \circ k^a_{\sigma_{\geq b+1}X} =
k^a_X \circ g^{b+1}_X$. This shows that $\sigma^{[b+1,a]}_X=h$
which is hence an isomorphism.
Similarly it is easy to see that $X \mapsto \sigma_X^{[b,a]}$ in
fact defines an isomorphism
\eqref{eq:interval-isom-sigma-trunc-morph-functors}
of functors.
\end{proof}
\begin{remark}
\label{rem:consequences-from-three-times-three}
Let us just mention some consequences one can now easily deduce from
the $3\times 3$-diagram~\eqref{eq:sigma-truncation-two-parameters}.
\begin{itemize}
\item
(As already mentioned in the proof:)
If $a \geq b$ the object $\sigma_{\leq b}\sigma_{\geq a+1}X$ is zero,
providing two isomorphisms
\begin{align}
\label{eq:a-geq-b-sigma-isos}
\sigma_{\leq b}(k^a_X): \sigma_{\leq b}X & \xra{\sim}
\sigma_{\leq b}\sigma_{\leq a}X
\quad \text{(for $a \geq b$), and} \\
\notag
g^{b+1}_{\sigma_{\geq a+1}X}: \sigma_{\geq b+1}\sigma_{\geq a+1}X & \xra{\sim}
\sigma_{\geq a+1}X \quad \text{(for $a \geq b$).}
\end{align}
Similarly $\sigma_{\geq b+1}\sigma_{\leq a}X$ vanishes for $a \leq b$
and provides two isomorphisms
\begin{align}
\label{eq:a-leq-b-sigma-isos}
\sigma_{\geq b+1}(g^{a+1}_X): \sigma_{\geq b+1}\sigma_{\geq a+1}X & \xra{\sim}
\sigma_{\geq b+1}X
\quad \text{(for $a \leq b$), and} \\
\notag
k^b_{\sigma_{\leq a}X}: \sigma_{\leq a}X & \xra{\sim}
\sigma_{\leq b}\sigma_{\leq a}X \quad \text{(for $a \leq b$).}
\end{align}
\item
In case $a=b$ the two squares marked $(1)$ and $(2)$ consist of
isomorphisms. Hence
\begin{equation}
\label{eq:a-eq-b-sigma-epsilon-eta-isos}
g^{a+1}_{\sigma_{\geq a+1}X}= \sigma_{\geq a+1}(g^{a+1}_X)
\quad \text{and}
\quad
k^a_{\sigma_{\leq a}X}= \sigma_{\leq a}(k^a_X).
\end{equation}
\item
Proposition~\ref{p:BBD-1-1-9-copied-for-w-str} gives several
uniqueness statements, e.\,g.\ it shows that the
morphisms connecting the horizontal triangles are also unique
extending $g^{b+1}_X$, $k^b_X$ and $v^b_X$ respectively
(in the last case one has to change the sign of the third morphism
of the top sequence to make it into a triangle).
\item Application of $\sigma_{\geq a+1} \xra{g^{a+1}} \op{id}
\xra{k^a} \sigma_{\leq a} \xra{v^a} [1]\sigma_{\geq a+1}$ to the
second vertical triangle in
\eqref{eq:sigma-truncation-two-parameters}
yields a similar $3\times 3$-diagram
which is uniquely isomorphic to the $3\times
3$-diagram~\eqref{eq:sigma-truncation-two-parameters} by an
isomorphism extending $\op{id}_X$ (and this is functorial in $X$).
The four isomorphisms in the corners are
the isomorphism $\sigma^{[b+1,a]}_X$ from the Proposition, the
inverse of the isomorphism $\sigma^{[a+1,b]}_X$, and the
isomorphisms $\sigma_{\leq a}\sigma_{\leq b}
\xra{\sim} \sigma_{\leq b}\sigma_{\leq a}$ and $\sigma_{\geq a+1}\sigma_{\geq
b+1}\xra{\sim} \sigma_{\geq b+1}\sigma_{\geq a+1}$.
\end{itemize}
We use the isomorphisms \eqref{eq:a-geq-b-sigma-isos}
and \eqref{eq:a-leq-b-sigma-isos} and those from the last point
sometimes tacitly in the following and write them as equalities.
\end{remark}
We introduce some shorthand notation: For $a,b \in \Bl{Z}$ define
$\sigma_{[a,b]}:= \sigma_{\leq b} \sigma_{\geq a}$ (which equals
$\sigma_{\geq a}\sigma_{\leq b}$ by the above convention) and
$\sigma_a:=\sigma_{[a]}:=\sigma_{[a,a]}$.
We give some commutation formulas:
Applying the triangulated functor $s$ to the triangle $(\sigma_{\geq
a+1}X, X, \sigma_{\leq a}X)$ yields a triangle that is uniquely
isomorphic to $(\sigma_{\geq a+2}s(X), s(X), \sigma_{\leq
a+1}s(X))$. Hence we obtain isomorphisms (that we write as
equalities)
\begin{equation}
\label{eq:sgle-eq-s-comm}
s\sigma_{\geq a} = \sigma_{\geq a+1}s, \quad
s\sigma_{\leq a} = \sigma_{\leq a+1}s, \quad
s\sigma_{a} = \sigma_{a+1}s.
\end{equation}
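Combining the first two equalities we obtain for instance
\begin{equation*}
s\sigma_{[a,b]} = \sigma_{[a+1,b+1]}s.
\end{equation*}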
Let $(\tildew{\mathcal{T}}, i)$ be an f-category over a triangulated
category $\mathcal{T}$. Define
\begin{equation*}
\op{gr}^n:= i^{-1} s^{-n} \sigma_{n}:
\tildew{\mathcal{T}} \rightarrow \mathcal{T}
\end{equation*}
where $i^{-1}$ is a fixed quasi-inverse of $i$.
Note that $\op{gr}^n$ is a triangulated functor.
From
$ \op{gr}^{n+1} s
= i^{-1} s^{-n-1}\sigma_{n+1}s
= i^{-1} s^{-n}\sigma_{n}
= \op{gr}^n
$
we obtain
\begin{equation}
\label{eq:gr-s-comm}
\op{gr}^{n+1} s= \op{gr}^n.
\end{equation}
Given $X \in \tildew{\mathcal{T}}$ we define its \define{support}
by
\begin{equation*}
\op{supp}(X) :=\{n \in \Bl{Z} \mid \sigma_n(X) \not=0\}.
\end{equation*}
Note that $\op{supp}(X)$ is a bounded subset of $\Bl{Z}$
by axiom \ref{enum:filt-tria-cat-exhaust} and
Proposition~\ref{p:firstprop-filt-cat},
\eqref{enum:firstprop-filt-cat-trunc-preserve}. It is empty if and only
if $X=0$.
The \define{range} $\op{range}(X)$ of $X$ is defined as the smallest
interval (possibly empty) in $\Bl{Z}$ containing $\op{supp} X$.
It is the smallest interval $I$ such that $X \in
\tildew{\mathcal{T}}(I)$.
The \define{length} $l(X)$ of $X$ is the number of
elements in $\op{range}(X)$.
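For illustration assume that $(\tildew{\mathcal{T}}, i)$ is an
f-category over $\mathcal{T}$ and that $L \in \mathcal{T}$ is nonzero.
Since $i(L) \in \tildew{\mathcal{T}}([0])$ we have $\sigma_0(i(L))
\cong i(L)$ and $\sigma_a(i(L))=0$ for $a \not=0$; together with
\eqref{eq:sgle-eq-s-comm} this gives, for any $m \in \Bl{Z}$,
\begin{equation*}
\op{supp}(s^m(i(L)))=\{m\}, \quad
\op{range}(s^m(i(L)))=[m], \quad
l(s^m(i(L)))=1.
\end{equation*}
In the basic example $DF(\mathcal{A})$ one checks similarly that
$\op{supp}(X)=\{n \in \Bl{Z} \mid \text{$\op{gr}^n (X) \not=0$ in $D(\mathcal{A})$}\}$.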
\subsection{Forgetting the filtration -- the functor
\texorpdfstring{$\omega$}{omega}}
\label{sec:forget-fitlration-functor-omega}
\begin{proposition}
[{cf.\ \cite[Prop.~A 3]{Beilinson} (without proof)}]
\label{p:functor-omega}
Let $(\tildew{\mathcal{T}}, i)$ be an f-category over a triangulated
category $\mathcal{T}$. There is a unique (up to unique isomorphism)
triangulated functor
\begin{equation*}
\omega: \tildew{\mathcal{T}} \rightarrow \mathcal{T}
\end{equation*}
such that
\begin{enumerate}[label=(om{\arabic*})]
\item
\label{enum:functor-omega-i}
The restriction $\omega|_{\tildew{\mathcal{T}}(\leq 0)}:
\tildew{\mathcal{T}}(\leq 0) \rightarrow \mathcal{T}$ is
left\footnote{typo in \cite[Prop.~A 3]{Beilinson} (and similar in
\ref{enum:functor-omega-ii})}
adjoint to
$i: \mathcal{T} \rightarrow \tildew{\mathcal{T}}(\leq 0)$.
\item
\label{enum:functor-omega-ii}
The restriction $\omega|_{\tildew{\mathcal{T}}(\geq 0)}:
\tildew{\mathcal{T}}(\geq 0) \rightarrow \mathcal{T}$ is
right
adjoint to
$i: \mathcal{T} \rightarrow \tildew{\mathcal{T}}(\geq 0)$.
\item
\label{enum:functor-omega-iii}
For any $X$ in $\tildew{\mathcal{T}}$, the arrow $\alpha_X: X \rightarrow
s(X)$ is mapped to an isomorphism $\omega(\alpha_X):\omega(X) \xra{\sim}
\omega(s(X))$.
\item
\label{enum:functor-omega-iv}
For all $X$ in $\tildew{\mathcal{T}}(\leq 0)$ and all $Y$ in
$\tildew{\mathcal{T}}(\geq 0)$ we have an isomorphism
\begin{equation}
\label{eq:functor-omega-iv}
\omega: \op{Hom}_{\tildew{\mathcal{T}}}(X,Y) \xra{\sim}
\op{Hom}_{\mathcal{T}}(\omega(X), \omega(Y)).
\end{equation}
\end{enumerate}
In fact $\omega$ is uniquely determined by the properties
\ref{enum:functor-omega-i} and
\ref{enum:functor-omega-iii}
(or by the properties
\ref{enum:functor-omega-ii} and
\ref{enum:functor-omega-iii}).
\end{proposition}
The functor $\omega$ is often called the ``forgetting of the filtration''
functor; the
reason for this name is Lemma~\ref{l:omega-in-basic-example} below.
Note that
$\omega|_{\tildew{\mathcal{T}}([0])}$ is left (and right) adjoint to
the equivalence $i:\mathcal{T} \xra{\sim} \tildew{\mathcal{T}}([0])$
and hence a quasi-inverse of $i$.
\begin{proof}
\textbf{Uniqueness:}
Assume that $\omega: \tildew{\mathcal{T}} \rightarrow
\mathcal{T}$ satisfies the conditions
\ref{enum:functor-omega-i} and
\ref{enum:functor-omega-iii}.
Let $f: X \rightarrow Y$ be a morphism in $\tildew{\mathcal{T}}$. By
\ref{enum:filt-tria-cat-exhaust} there is an $n \in \Bl{Z}$ such that
this is a morphism in $\tildew{\mathcal{T}}(\leq n)$.
By \ref{enum:filt-tria-cat-shift} we can assume that $n \geq 0$.
Consider the commutative diagram
\begin{equation*}
\hspace{-1.4cm}
\xymatrix@C4pc{
{s^{-n}(X)} \ar[r]^-{\alpha} \ar[d]^-{s^{-n}(f)} &
{s^{-n+1} (X)} \ar[r]^-{\alpha}
\ar[d]^-{s^{-n+1}(f)} &
{\dots} &
{\dots} \ar[r]^-{\alpha} &
{s^{-1}(X)} \ar[r]^-{\alpha}
\ar[d]^-{s^{-1}(f)} &
{X} \ar[d]^-{f} \\
{s^{-n}(Y)} \ar[r]^-{\alpha} &
{s^{-n+1} (Y)} \ar[r]^-{\alpha}
&
{\dots} &
{\dots} \ar[r]^-{\alpha} &
{s^{-1}(Y)} \ar[r]^-{\alpha} &
{Y}
}
\end{equation*}
where we omit the indices $s^{-i}(X)$ and $s^{-i}(Y)$ at the various
maps $\alpha$.
If we apply $\omega$, all the horizontal arrows become
isomorphisms; note that $s^{-n}(f)$ is a morphism in
$\tildew{\mathcal{T}}(\leq 0)$. This shows that $\omega$ is uniquely
determined by its restriction to $\tildew{\mathcal{T}}(\leq 0)$ and
knowledge of all isomorphisms $\omega(\alpha_Z): \omega(Z) \xra{\sim}
\omega(s(Z))$ for all $Z$ in $\tildew{\mathcal{T}}$.
If we have two functors $\omega_1$ and $\omega_2$ whose restrictions
are both adjoint to the inclusion $i: \mathcal{T} \rightarrow
\tildew{\mathcal{T}}(\leq 0)$ (i.\,e.\ satisfy
\ref{enum:functor-omega-i}), these adjunctions give rise to a unique
isomorphism between $\omega_1|_{\tildew{\mathcal{T}}(\leq 0)}$ and
$\omega_2|_{\tildew{\mathcal{T}}(\leq 0)}$. If both $\omega_1$ and
$\omega_2$ satisfy in addition
\ref{enum:functor-omega-iii}, this isomorphism can be uniquely
extended to an isomorphism between $\omega_1$ and $\omega_2$ (use
the above diagram after application of $\omega_1$ and $\omega_2$
respectively).
\textbf{Existence:}
Let $X$ in $\tildew{\mathcal{T}}$. We define objects $X_l$ and $X_r$
as follows: If $X=0$ let $X_l=X_r=0$.
If $X\not= 0$ let $a=a^X, b=b^X \in \Bl{Z}$ such that $\op{range}(X)=[a,b]$
(and $l(X)=b-a+1$);
let $X_l := s^{-b}(X)$ and $X_r := s^{-a}(X)$ (the indices stand for
left and right); observe that $X_l \in \tildew{\mathcal{T}}(\leq 0)
\not\ni s(X_l)$ and $X_r \in \tildew{\mathcal{T}}(\geq 0) \not\ni
s^{-1}(X_r)$.
We denote the composition
$X_l \xra{\alpha_{X_l}} s(X_l) \rightarrow \dots \xra{\alpha_{s^{-1}(X_r)}}
X_r$
by
$\alpha_{rl}^X:X_l \rightarrow X_r$.
Our first goal is to construct for every $X$ in
$\tildew{\mathcal{T}}$ an object $\Omega_X$ in $\mathcal{T}$ and a
factorization of $\alpha^X_{rl}$
\begin{equation*}
\xymatrix{
{X_l} \ar[r]^-{\epsilon_X} \ar@/^2pc/[rr]^-{\alpha^X_{rl}} &
{i(\Omega_X)} \ar[r]^-{\delta_X} &
{X_r}
}
\end{equation*}
such that
\begin{align}
\label{eq:univ-epsi}
\op{Hom}(i(\Omega_X), \tildew{A}) \xrightarrow[\sim]{? \circ
\epsilon_X} \op{Hom}(X_l, \tildew{A}) && \text{for all $\tildew{A}$
in $\tildew{\mathcal{T}}(\geq 0)$;}\\
\label{eq:univ-delt}
\op{Hom}(\tildew{B}, i(\Omega_X)) \xrightarrow[\sim]{\delta_X \circ ?}
\op{Hom}(\tildew{B}, X_r) && \text{for all $\tildew{B}$
in $\tildew{\mathcal{T}}(\leq 0)$.}
\end{align}
Here is a diagram to illustrate this:
\begin{equation*}
\xymatrix{
& {\tildew{B}} \ar[d]^-b \ar[rd]^-{b'}\\
{X_l} \ar[r]^-{\epsilon_X} \ar[rd]^-{a'} &
{i(\Omega_X)} \ar[r]^-{\delta_X} \ar[d]^-{a} &
{X_r}\\
& {\tildew{A}}
}
\end{equation*}
If $X$ is in $\tildew{\mathcal{T}}(\leq 0)$ the composition
$X \xra{\alpha_X} s(X) \rightarrow \dots \xra{\alpha_{s^{-1}(X_l)}} X_l$
is denoted
$\alpha_l^X:X \rightarrow X_l$. If \eqref{eq:univ-epsi} holds then
\ref{enum:filt-tria-cat-hom-bij} shows:
\begin{align}
\label{eq:univ-epsi-X}
\op{Hom}(i(\Omega_X), \tildew{A}) \xrightarrow[\sim]{? \circ
\epsilon_X\circ \alpha^X_l} \op{Hom}(X, \tildew{A}) &&
\text{for all $X$ in $\tildew{\mathcal{T}}(\leq 0)$} \\
\notag
&& \text{and $\tildew{A}$
in $\tildew{\mathcal{T}}(\geq 0)$.}
\end{align}
If $X$ is in
$\tildew{\mathcal{T}}(\geq 0)$
and \eqref{eq:univ-delt} holds, we
similarly have a morphism $\alpha^X_r: X_r \rightarrow X$ and obtain:
\begin{align}
\label{eq:univ-delt-X}
\op{Hom}(\tildew{B}, i(\Omega_X)) \xrightarrow[\sim]{\alpha^X_r \circ
\delta_X \circ ?}
\op{Hom}(\tildew{B}, X) &&
\text{for all $X \in \tildew{\mathcal{T}}(\geq 0)$}
\\
\notag
&& \text{and $\tildew{B}$
in $\tildew{\mathcal{T}}(\leq 0)$.}
\end{align}
We construct the triples $(\Omega_X, \epsilon_X, \delta_X)$
by induction on the length $l(X)$ of $X$. The case $l(X)=0$ is trivial.
\textbf{Base case $l(X)=1$:}
Then $X_l=X_r \in \tildew{\mathcal{T}}([0])$. Let $\kappa$ be a
quasi-inverse
of $i$ and fix an isomorphism
$\theta: \op{id} \xra{\sim} i\kappa$.
We define
$\Omega_X:= \kappa(X_l)$ and
\begin{equation*}
X_l \xrightarrow[\sim]{\epsilon_X:=\theta_{X_l}} i(\Omega_X)
\xrightarrow[\sim]{\delta_X:=\theta_{X_r}^{-1}} X_r.
\end{equation*}
This is a factorization of $\alpha_{rl}^X=\op{id}_X$ and the conditions
\eqref{eq:univ-epsi} and
\eqref{eq:univ-delt} are obviously satisfied.
\textbf{Inductive step:} Now let $X \in \tildew{\mathcal{T}}$ be given with $l(X) > 1$ and
assume that we have
constructed $(\Omega_Y, \epsilon_Y, \delta_Y)$ as desired for all
$Y$ with $l(Y) < l(X)$. Let $L:=l(X)-1$.
Let $P:=\sigma_{\geq 0}(X_l)$ and $Q:= \sigma_{\leq -1}(X_l)$ and
let us explain the following diagram:
\begin{equation*}
\hspace{-1cm}
\xymatrix{
{P=P_l} \ar[r] \ar[dd]^-{\epsilon_{P}}_-{\sim} &
{X_l} \ar[r] \ar@{..>}[dd]^-{\epsilon_X} &
{Q} \ar[r]^-h
\ar[d]^-{\alpha^{Q}_l}
&
{[1] P} \ar[dd]^-{[1]\epsilon_{P}}_-\sim \\
&& {s^{-b_Q}(Q)=Q_l}
\ar[d]^-{\epsilon_{Q}}\\
{i(\Omega_{P})} \ar[d]^-{\delta_{P}}_-{\sim} \ar@{..>}[r]
&
{i(\Omega_X)} \ar@{..>}[r] \ar@{~>}[dd]^-{\delta_X}
& {i(\Omega_{Q})} \ar@{..>}[r]^-{h'}
\ar[dd]^-{\delta_{Q}} &
{[1]i(\Omega_{P})} \ar[d]^-{[1]\delta_{P}}_-{\sim} \\
{P=P_r} \ar[d]^-{\alpha^L} &&
&
{[1]P} \ar[d]^-{[1]\alpha^L}
\\
{s^L(P)} \ar[r] &
{s^L(X_l)=X_r} \ar[r] &
{s^L(Q)=Q_r} \ar[r]^-{s^L(h)}
&
{[1] s^L(P)}
}
\end{equation*}
The first row is the $\sigma$-truncation triangle of type $(\geq 0,
\leq -1)$ of $X_l \in \tildew{\mathcal{T}}([-L,0])$.
The last row is the image of this triangle under the triangulated
automorphism $s^L$.
There is a morphism between these triangles given by
$\alpha^L$.
Observe that $\op{range}(P)=[0]$ and $\op{range}(Q) = [-L,b_Q] \subset [-L,-1]$; in
particular the triples
$(\Omega_P, \epsilon_P, \delta_P)$ and
$(\Omega_Q, \epsilon_Q, \delta_Q)$ are already constructed by the inductive hypothesis.
Since
$\delta\circ
\epsilon$ is a factorization of $\alpha_{rl}$, the
first, third and fourth column are components of this morphism
$\alpha^L$ of triangles.
By \eqref{eq:univ-epsi-X} there is a unique morphism $h'$ as
indicated making the upper right square commutative, and then
\eqref{eq:univ-epsi-X} again shows that the lower right square
commutes.
We can find some $\Omega_X \in \mathcal{T}$ and a completion of $h'$
to a triangle as shown in the middle row of the above diagram.
Then we complete the partial morphism of the two upper triangles
by $\epsilon_X$ to a morphism of triangles.
Let us construct the wiggly morphism $\delta_X$.
Take an arbitrary object $\tildew{A}$ in $\tildew{\mathcal{T}}(\geq
0)$ and apply
$\op{Hom}(?,\tildew{A})$
to the
morphism of the upper two triangles. The resulting morphism of long exact
sequences and the five lemma show that
\eqref{eq:univ-epsi} holds for $\epsilon_X$.
This shows in particular that the morphism $\alpha_{rl}^X:X_l \rightarrow
X_r$ factors
uniquely through $\epsilon_X$ by some $\delta_X$. Using
\eqref{eq:univ-epsi} again we see that $\delta_X$ defines in fact a
morphism of the two lower triangles.
Finally we apply for any $\tildew{B}$ in
$\tildew{\mathcal{T}}(\leq 0)$
the functor
$\op{Hom}(\tildew{B}, ?)$
to the
morphism of the two lower triangles; the five lemma again shows
that $\delta_X$ satisfies \eqref{eq:univ-delt}.
This shows that the triple $(\Omega_X, \epsilon_X, \delta_X)$ has
the properties we want.
Now we define the functor $\omega: \tildew{\mathcal{T}} \rightarrow
\mathcal{T}$. On objects we define $\omega(X):=\Omega_X$. Let $f:X
\rightarrow Y$ be a morphism. Then for some $N \in \Bl{N}$ big enough we have
$s^{-N}(X), s^{-N}(Y) \in \tildew{\mathcal{T}}(\leq 0)$, hence
$s^{-N}(f)$ is a
morphism in $\tildew{\mathcal{T}}(\leq 0)$. Similarly for some $M
\in \Bl{N}$
big enough $s^M(f)$ is a morphism in $\tildew{\mathcal{T}}(\geq 0)$.
Consider the diagram
\begin{equation}
\label{eq:omega-on-morphisms}
\xymatrix@C4pc{
{{s^{-N}(X)}} \ar[r]^-{\alpha^?} \ar[d]^-{s^{-N}(f)} &
{X_l} \ar[r]^-{\epsilon_X} \ar@/^2pc/[rr]^-{\alpha_{rl}^X}
&
{i(\omega(X))} \ar[r]^-{\delta_X} \ar@{..>}[d]^-{\Omega_f} & {X_r}
\ar[r]^-{\alpha^?} &
{s^M(X)} \ar[d]^-{s^{M}(f)}
\\
{{s^{-N}(Y)}} \ar[r]^-{\alpha^?} &
{Y_l} \ar[r]^-{\epsilon_Y} \ar@/_2pc/[rr]^-{\alpha_{rl}^Y}
&
{i(\omega(Y))} \ar[r]^-{\delta_Y} & {Y_r}
\ar[r]^-{\alpha^?} &
{s^M(Y).}
}
\end{equation}
The morphism from $s^{-N}(X)$ to $i(\omega(Y))$ factors uniquely through
the dotted arrow by \ref{enum:filt-tria-cat-hom-bij} and
\eqref{eq:univ-epsi}. Similarly the morphism from $i(\omega(X))$ to
$s^M(Y)$ factors through the dotted
arrow. These (a priori) two dotted arrows coincide, since
\begin{equation}
\label{eq:i-omega-isom}
\op{Hom}(i(\omega(X)),i(\omega(Y)))\xra{\sim} \op{Hom}(s^{-N}(X), s^{M}(Y))
\end{equation}
by \ref{enum:filt-tria-cat-hom-bij} and
\eqref{eq:univ-epsi} and \eqref{eq:univ-delt} again and since
both compositions $\delta\circ\epsilon$ are a factorization of
$\alpha_{rl}$.
Note that $\Omega_f$ does not depend on the choice of $N$ and $M$.
We define $\omega(f)$ to be the unique arrow $\omega(X) \rightarrow
\omega(Y)$ that is mapped under the equivalence $i$ to $\Omega_f$.
This respects compositions and
identity morphisms and hence defines a functor $\omega$.
We verify \ref{enum:functor-omega-i}-\ref{enum:functor-omega-iv} and
that $\omega$ can be made into a triangulated functor.
Let us show \ref{enum:functor-omega-iii} first:
We can assume that $X\not=0$.
We draw the left part of diagram
\eqref{eq:omega-on-morphisms} for the morphism $\alpha_X:X \rightarrow s(X)$
(note that $b_X+1=b_{s(X)}$ and hence $X_l=(s(X))_l$):
\begin{equation*}
\xymatrix@C4pc{
{{s^{-N}(X)}} \ar[r]^-{\alpha^?}
\ar[d]_{s^{-N}(\alpha_X)} &
{X_l=s^{-{b_X}}(X)} \ar[r]^-{\epsilon_X}
\ar@{..>}[d]^-{\op{id}}
&
{i(\omega(X))} \ar@{~>}[d]^-{\Omega_{\alpha_X}}
\\
{{s^{-N+1}(X)}} \ar[r]^-{\alpha^{?-1}} &
{(s(X))_l=s^{-{b_{s(X)}}}(s(X))} \ar[r]^-{\epsilon_{s(X)}}
&
{i(\omega(s(X)))}
}
\end{equation*}
By \ref{enum:filt-tria-cat-shift-alpha}
we have $s^{-N}(\alpha_X)=\alpha_{s^{-N}(X)}$ and hence
the left square becomes commutative with the dotted arrow.
Since $\epsilon_X$ and $\epsilon_{s(X)}$ have the same
universal property \eqref{eq:univ-epsi} the morphism $\Omega_{\alpha_X}$ must
be an isomorphism.
Let us show \ref{enum:functor-omega-i}:
Given $X$ in $\tildew{\mathcal{T}}(\leq 0)$ and $A$ in $\mathcal{T}$
replace $\tildew{A}$ by $i(A)$ in
\eqref{eq:univ-epsi-X} and use $i:\op{Hom}(\omega(X),A)\xra{\sim}
\op{Hom}(i(\omega(X)),i(A))$. This gives
\begin{equation*}
\op{Hom}_{\mathcal{T}}(\omega(X), A) \xsira{i(?) \circ \epsilon_X
\circ \alpha_l^X}
\op{Hom}_{\tildew{\mathcal{T}}}(X, i(A)).
\end{equation*}
From \eqref{eq:omega-on-morphisms} it is easy to see that this
isomorphism is compatible with morphisms $f:X \rightarrow Y$ in
$\tildew{\mathcal{T}}(\leq 0)$ and $g:A' \rightarrow A$ in $\mathcal{T}$.
The proof of \ref{enum:functor-omega-ii} is similar.
Proposition~\ref{p:linksad-triang-for-wstr-art} shows that
$\omega|_{\tildew{\mathcal{T}}(\leq 0)}$ is triangulated for a
suitable isomorphism $\phi: \omega|_{\tildew{\mathcal{T}}(\leq 0)}
[1] \xra{\sim} [1] \omega|_{\tildew{\mathcal{T}}(\leq 0)}$.
Using the above techniques it is easy to find an isomorphism $\phi:
\omega[1] \xra{\sim} [1]\omega$ such that $(\omega, \phi)$ is a
triangulated functor.
We leave the details to the reader.
We finally prove \ref{enum:functor-omega-iv}.
Let $X$ in
$\tildew{\mathcal{T}}(\leq 0)$ and $Y$ in $\tildew{\mathcal{T}}(\geq
0)$. The composition
\begin{align*}
\op{Hom}_\mathcal{T}(\omega(X),\omega(Y)) \xsira{i} &
\, \op{Hom}_{\tildew{\mathcal{T}}}(i(\omega(X)),i(\omega(Y))) \\
\xsira{\eqref{eq:i-omega-isom}} &
\, \op{Hom}_{\tildew{\mathcal{T}}}(s^{-N}(X), s^{M}(Y)) \\
\xleftarrow[\sim]{\alpha^M \circ ? \circ
\alpha^N} &
\, \op{Hom}_{\tildew{\mathcal{T}}}(X, Y)
\end{align*}
of isomorphisms (the last isomorphism comes from
\ref{enum:filt-tria-cat-hom-bij}) is easily seen to be inverse to
\eqref{eq:functor-omega-iv}.
\end{proof}
\subsection{Omega in the basic example}
\label{sec:omega-in-basic-example}
Let $\mathcal{A}$ be an abelian category and consider the basic example
of the f-category $DF(\mathcal{A})$ over $D(\mathcal{A})$
(as described in Section~\ref{sec:basic-example-fcat},
Proposition~\ref{p:basic-ex-f-cat}).
Let $\omega: CF(\mathcal{A}) \rightarrow C(\mathcal{A})$ be the functor
mapping a filtered complex $X=(\ul{X},F)$ to its underlying
non-filtered complex $\ul{X}$.
This functor obviously induces a triangulated functor $\omega:
DF(\mathcal{A}) \rightarrow D(\mathcal{A})$.
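Indeed, $\omega$ maps quasi-isomorphisms in $CF(\mathcal{A})$ to
quasi-isomorphisms in $C(\mathcal{A})$: if $f:L \rightarrow M$ is a
quasi-isomorphism in $CF(\mathcal{A})$ then, the filtrations being
finite, we have for $n$ small enough
\begin{equation*}
\ul{f}=F^nf: F^nL = \ul{L} \rightarrow F^nM=\ul{M},
\end{equation*}
and this is a quasi-isomorphism by the first of the two equivalent
conditions above.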
\begin{lemma}
\label{l:omega-in-basic-example}
The functor $\omega:DF(\mathcal{A}) \rightarrow D(\mathcal{A})$ satisfies
the conditions
\ref{enum:functor-omega-iii} and \ref{enum:functor-omega-i}
of Proposition~\ref{p:functor-omega}.
\end{lemma}
\begin{proof}
Condition \ref{enum:functor-omega-iii} is obvious, so let us show
condition \ref{enum:functor-omega-i}.
We abbreviate $\omega':= \omega|_{DF(\mathcal{A})(\leq 0)}$
and consider $i$ as a functor to $DF(\mathcal{A})(\leq 0)$.
We first define a morphism
$ \op{id}_{DF(\mathcal{A})(\leq 0)} \xra{\epsilon} i \circ
\omega'$
of functors as follows.
Let $X$ be in ${DF(\mathcal{A})(\leq 0)}$. We have seen that the obvious
morphism
$X \rightarrow X/(X(\geq 1))$
(cf.\ \eqref{eq:basic-example-fcat-def-Xgeq-triangle})
is an isomorphism in $DF(\mathcal{A})$.
As above we denote the filtration of $X$ by
$\dots \supset F^i \supset F^{i+1} \supset \dots$
and its underlying non-filtered
complex $\omega(X)$ by $\ul{X}$.
Define $\epsilon_X$ by the commutativity of the
following diagram
\begin{equation*}
\hspace{-1cm}
\xymatrix{
{X} \ar[d]^-{\sim} \ar@{..>}[r]^-{\epsilon_X} & {i(\omega(X))=
i(\ul{X})}
\ar[d]^-{\sim}
\\
{X/X(\geq 1)} \ar[r] \ar@{=}[d] & {i(\omega(X/X(\geq 1)))=
i(\ul{X}/F^1)} \ar@{=}[d]
\\
\mathovalbox{\ul{X}/F^1 : F^{-1}/F^1 \supset F^0/F^1 \supset 0} &
\mathovalbox{\ul{X}/F^1 : \ul{X}/F^1 = \ul{X}/F^1 \supset 0} &
}
\end{equation*}
where the lower horizontal map is the obvious one.
This is compatible with morphisms and defines $\epsilon$.
Let $\delta: \omega' \circ i \rightarrow \op{id}_{D(\mathcal{A})}$ be the identity
morphism.
We leave it to the reader to check that
$\delta \omega' \circ \omega' \epsilon = \op{id}_{\omega'}$ and
$i \delta \circ \epsilon i = \op{id}_{i}$.
This implies that $\epsilon$ and $\delta$ are unit and counit of an
adjunction $(\omega', i)$.
\end{proof}
\subsection{Construction of the functor \texorpdfstring{$c$}{c}}
\label{sec:construction-functor-cfun}
Let $(\tildew{\mathcal{T}},i)$ be an f-category over a triangulated
category $\mathcal{T}$.
Our aim in this section is to construct a certain functor
$c:\tildew{\mathcal{T}} \rightarrow C^b(\mathcal{T})$.
The construction is in two steps.
It can be found in a very condensed form in
\cite[Prop.~A 5]{Beilinson} or
\cite[8.4]{bondarko-weight-str-vs-t-str}.
\subsubsection{First step}
\label{sec:first-step-cprime}
We proceed similarly to the
construction of the weak weight complex functor (see
Section~\ref{sec:weak-wc-fun}).
This may seem a bit involved
(compare to the second approach explained later on)
but will turn
out to be convenient
when showing that the strong weight complex functor is
a lift of the weak one.
For every object $X$ in
$\tildew{\mathcal{T}}$ we have functorial $\sigma$-truncation
triangles (for all $n \in \Bl{Z}$)
\begin{equation}
\label{eq:sigma-trunc-construction-c}
\xymatrix{
{S^n_X:} &
{\sigma_{\geq n+1}X} \ar[r]^-{g^{n+1}_X} &
{X} \ar[r]^-{k_X^n} &
{\sigma_{\leq n}X} \ar[r]^-{v_X^n} &
{[1]\sigma_{\geq n+1}X.}
}
\end{equation}
For any $n$ there is a unique morphism of triangles
$S^n_X \rightarrow S^{n-1}_X$ extending $\op{id}_X$
(use Prop.~\ref{p:BBD-1-1-9-copied-for-w-str} and
\ref{enum:filt-tria-cat-no-homs}):
\begin{equation}
\label{eq:unique-triang-morph-Snx}
\xymatrix{
{S^n_X:} \ar[d]|{(h^n_X,\op{id}_X,l^n_X)} &
{\sigma_{\geq n+1}X} \ar[r]^-{g^{n+1}_X} \ar[d]^-{h^n_X}
\ar@{}[rd]|{\triangle} &
{X} \ar[r]^-{k^n_X} \ar[d]^-{\op{id}_X}
\ar@{}[rd]|{\nabla} &
{\sigma_{\leq n}X} \ar[r]^-{v^n_X} \ar[d]^-{l^n_X} &
{[1]\sigma_{\geq n+1}X} \ar[d]^-{[1]h^n_X} \\
{S^{n-1}_X:} &
{\sigma_{\geq n}X} \ar[r]^-{g^n_X} &
{X} \ar[r]^-{k^{n-1}_X} &
{\sigma_{\leq n-1}X} \ar[r]^-{v^{n-1}_X} &
{[1]\sigma_{\geq n}X}
}
\end{equation}
More precisely, $h^n_X$ (resp.\ $l^n_X$) is the unique morphism making the square
$\triangle$ (resp.\ $\nabla$) commutative.
It is easy to see that $h^n_X$ corresponds under
\eqref{eq:a-leq-b-sigma-isos} to the adjunction morphism
$g^{n+1}_{\sigma_{\geq n}X}:\sigma_{\geq
n+1}\sigma_{\geq n}X \rightarrow \sigma_{\geq n}X$.
Hence we see from
Proposition~\ref{p:firstprop-filt-cat}
\eqref{enum:firstprop-filt-cat-trunc-triangle} that there is
a
unique morphism
$c^n_X$
such
that
\begin{equation}
\label{eq:triang-prime-sigmaX}
S^{'n}_{\sigma_{\geq n}X}:
\xymatrix{
{\sigma_{\geq n+1}X} \ar[r]^-{h^n_X} &
{\sigma_{\geq n}X} \ar[rr]^-{e^n_X:=k^n_{\sigma_{\geq n}X}} &&
{\sigma_{n}X} \ar[r]^-{c^n_X} &
{[1]\sigma_{\geq n+1}X}
}
\end{equation}
is a triangle.
We use the
three triangles in \eqref{eq:unique-triang-morph-Snx} and
\eqref{eq:triang-prime-sigmaX} and the square marked with $\triangle$
in
\eqref{eq:unique-triang-morph-Snx}
as the germ cell and obtain (from the octahedral axiom) the following
octahedron:
\begin{equation}
\label{eq:octaeder-cprime}
\tildew{O}_X^n:=
\xymatrix@dr{
&& {[1]{\sigma_{\geq n+1}X}} \ar[r]^-{[1]h^n_X} & {[1]{\sigma_{\geq n}X}}
\ar[r]^-{[1]e^n_X} & {[1]{{\sigma_n X}}}
\\
{S^{''n}_{\sigma_{\leq n} X}:}
&{{\sigma_n X}} \ar@(ur,ul)[ru]^-{c^n_X}
\ar@{..>}[r]^-{a^n_X} &
{\sigma_{\leq n} X} \ar@{..>}[r]^-{l^n_X} \ar[u]^-{v^n_X} &
{\sigma_{\leq n-1} X}
\ar@(dr,dl)[ru]^-{b^n_X} \ar[u]_{v^{n-1}_X}\\
{S^{n-1}_X:} &{\sigma_{\geq n}X} \ar[u]^-{e^n_X} \ar[r]^-{g^n_X}
& {X} \ar@(dr,dl)[ru]^-{k^{n-1}_X} \ar[u]_{k^n_X} \\
{S^n_X:} &{\sigma_{\geq n+1}X} \ar[u]^-{h^n_X} \ar@(dr,dl)[ru]^-{g^{n+1}_X}
\ar@{}[ru]|{\triangle}
\\
& {S^{'n}_{\sigma_{\geq n}X}:}
}
\end{equation}
Note that the lower dotted morphism
is in fact $l^n_X$ by
the uniqueness of $l^n_X$ observed above.
From
Proposition~\ref{p:firstprop-filt-cat}
\eqref{enum:firstprop-filt-cat-trunc-preserve} we obtain that
the upper dotted morphism labeled $a^n_X$ is unique: It is the
morphism
$g^{n}_{\sigma_{\leq n}X}$ (more precisely it is the morphism
$g^{n}_{\sigma_{\leq n}X} \circ \sigma_X^{[n,n]}$
in the notation of \eqref{eq:interval-isom-sigma-trunc}).
We see that the triangle
${S^{''n}_{\sigma_{\leq n} X}}$ can be constructed completely
analogously to triangle \eqref{eq:triang-prime-sigmaX} above.
It is now easy to see that $X \mapsto \tildew{O}^n_X$ is in fact functorial.
Let $c'(X)$ be the following complex in $\tildew{\mathcal{T}}$: Its
$n$-th term is
\begin{equation*}
c'(X)^n:=[n]{\sigma_n X}
\end{equation*}
and the
differential $d^n_{c'(X)}:[n]{\sigma_n X} \rightarrow [n+1]\sigma_{n+1}X$ is defined by
\begin{multline}
\label{eq:differential-cprime-complex}
d^n_{c'(X)}
:= [n](b^{n+1}_X \circ a^n_X)
= [n](([1]e^{n+1}_X) \circ v^n_X \circ a^n_X)\\
= [n](([1]e^{n+1}_X) \circ c^n_X).
\end{multline}
Note that $d^n_{c'(X)} \circ d^{n-1}_{c'(X)}=0$
since
the composition of two consecutive
maps in a triangle is zero (apply this to \eqref{eq:octaeder-cprime}).
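Spelled out, this is the following computation (a sketch; it only uses
\eqref{eq:differential-cprime-complex} and the triangle
${S^{''n}_{\sigma_{\leq n}X}}$ from \eqref{eq:octaeder-cprime}):
\begin{equation*}
d^n_{c'(X)} \circ d^{n-1}_{c'(X)}
= [n-1]\bigl([1](b^{n+1}_X \circ a^n_X) \circ b^n_X \circ a^{n-1}_X\bigr)
= [n-1]\bigl([1]b^{n+1}_X \circ ([1]a^n_X \circ b^n_X) \circ a^{n-1}_X\bigr)
= 0,
\end{equation*}
since $[1]a^n_X \circ b^n_X=0$: the maps $b^n_X$ and $[1]a^n_X$ are (up
to sign) consecutive maps in a rotation of the triangle
${S^{''n}_{\sigma_{\leq n}X}}$.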
Since everything is functorial we have in fact defined a functor
\begin{equation*}
c':\tildew{\mathcal{T}} \rightarrow C(\tildew{\mathcal{T}}).
\end{equation*}
Let $C^b_{\Delta}(\tildew{\mathcal{T}}) \subset
C(\tildew{\mathcal{T}})$ be the full subcategory consisting of objects
$A \in C(\tildew{\mathcal{T}})$ such that $A^n \in
\tildew{\mathcal{T}}([n])$ for all $n \in \Bl{Z}$ and $A^n=0$ for $|n|\gg 0$.
Axiom \ref{enum:filt-tria-cat-exhaust} and
$c'(X)^n= [n]\sigma_n(X)\in \tildew{\mathcal{T}}([n])$ show that we
can view $c'$ as a functor
\begin{equation*}
c':\tildew{\mathcal{T}} \rightarrow C^b_{\Delta}(\tildew{\mathcal{T}}).
\end{equation*}
In the proof of the fact that a weak weight complex functor is a
functor of additive categories with translation we have used an
octahedron \eqref{eq:wc-weak-octaeder-translation}. A similar
octahedron with the same distribution of signs yields a canonical
isomorphism
\begin{equation}
\label{eq:cprime-translation}
c' \circ [1]s^{-1}
\cong
\Sigma \circ (s^{-1})_{C^b(\tildew{\mathcal{T}})} \circ c'
\end{equation}
where we use notation introduced at the end of
section~\ref{sec:homotopy-categories}.
We leave the details to the reader.
Note that $\Sigma$ and $(s^{-1})_{C^b(\tildew{\mathcal{T}})}$ commute.
We describe two other ways to obtain the functor $c'$:
Let $X \in \tildew{\mathcal{T}}$.
Apply $\sigma_{\leq n+1}$ to the triangle
\eqref{eq:triang-prime-sigmaX}.
This gives the triangle
\begin{equation}
\label{eq:triang-prime-sigmaX-trunc}
\xymatrix{
{\sigma_{n+1}X} \ar[r]
&
{\sigma_{[n,n+1]}X} \ar[r]
&
{\sigma_{\leq n+1}\sigma_{n}X} \ar[rr]^-{\sigma_{\leq n+1}(c^n_X)} &&
{[1]\sigma_{n+1}X.}
}
\end{equation}
The following commutative diagram is obtained by applying the
transformation $k^{n+1}:\op{id} \rightarrow \sigma_{\leq n+1}$ to its upper
horizontal morphism $c^n_X$:
\begin{equation}
\label{eq:ce-comm-square}
\xymatrix{
{\sigma_n X} \ar[rr]^{c_X^n}
\ar[d]^{k^{n+1}_{\sigma_n X}}_{\sim}
&& {[1]\sigma_{\geq n+1}X}
\ar[d]^{[1]e^{n+1}_{X}}
\\
{\sigma_{\leq n+1}\sigma_n X} \ar[rr]^{\sigma_{\leq n+1}(c_X^n)}
&& {[1]\sigma_{n+1}X}
}
\end{equation}
We use the isomorphism $k^{n+1}_{\sigma_nX}:\sigma_nX \xra{\sim}
\sigma_{\leq n+1}\sigma_n (X)$ in order to replace the third term
in
\eqref{eq:triang-prime-sigmaX-trunc}
and obtain (using \eqref{eq:ce-comm-square}) the following
triangle, where $\tildew{d}^n_X:=[1]e^{n+1}_X \circ c^n_X$:
\begin{equation}
\label{eq:sigma-truncation-for-tilde-d}
\xymatrix{
{\sigma_{n+1}X} \ar[r]
&
{\sigma_{[n,n+1]}X} \ar[r]
&
{\sigma_{n}X}
\ar[r]^-{\tildew{d}^n_X}
&
{[1]\sigma_{n+1}X}
}
\end{equation}
Completely analogously, applying $\sigma_{\geq n}$ to the triangle
${S^{''n+1}_{\sigma_{\leq n+1}X}}$ and using the isomorphism
$g^n_{\sigma_{n+1}X}:\sigma_{\geq n}\sigma_{n+1}X \xra{\sim}
\sigma_{n+1}X$
provides a triangle of the form
\eqref{eq:sigma-truncation-for-tilde-d} with the same third morphism
$\tildew{d}^n_X=b^{n+1}_X \circ a^n_X$ (cf.\
\eqref{eq:differential-cprime-complex}), which presumably coincides with
\eqref{eq:sigma-truncation-for-tilde-d} under the obvious
identifications.
Note that
\begin{equation*}
\dots \rightarrow [n]\sigma_n(X) \xra{[n]\tildew{d}^n} [n+1]\sigma_{n+1}(X)
\rightarrow \dots
\end{equation*}
is the complex $c'(X)$.
Hence we have described two slightly different (functorial)
constructions of the functor $c'$.
We will use the construction described after
\eqref{eq:sigma-truncation-for-tilde-d} later on and refer to it as
the second approach to $c'$.
\subsubsection{Second step}
\label{sec:second-step-cdoubleprime}
In the second step we define the functor $c$ via a functor
$c'':C^b_\Delta(\tildew{\mathcal{T}}) \rightarrow
C^b(\tildew{\mathcal{T}}([0]))$.
There is a shortcut to $c$ described in Remark~\ref{rem:quick-def-c}
below.
Let $A = (A^n, d_A^n) \in C^b_\Delta(\tildew{\mathcal{T}})$. We draw
this complex horizontally in the following diagram:
\begin{equation}
\label{eq:shift-differential-to-diagonal}
\xymatrix{
&
{s(A^{-1})}
\ar@{..>}[dr]
\\
{\dots} \ar[r]
&
{A^{-1}}
\ar[r]_-{{d}^{-1}_A} \ar[u]^-{\alpha}
&
{A^0}
\ar[r]^-{{d}^0_A}
\ar@{..>}[dr]
&
{A^1}
\ar[r]^-{{d}^1_A}
&
{A^2}
\ar[r]^-{{d}^2_A}
&
{\dots}
\\
&&&
{s^{-1}(A^1)}
\ar[u]^-{\alpha}
\ar@{..>}[dr]
\\
&&&&
{s^{-2}(A^2)}
\ar[uu]^-{\alpha^2}
}
\end{equation}
The dotted arrows making everything commutative are uniquely obtained
using \ref{enum:filt-tria-cat-hom-bij}.
The diagonal is again a bounded complex, i.\,e.\ $d^2=0$ (again by
\ref{enum:filt-tria-cat-hom-bij}), now even in
$\tildew{\mathcal{T}}([0])$.
We denote this complex by $c''(A)$.
This construction in fact defines a functor
$c'':C^b_\Delta(\tildew{\mathcal{T}}) \rightarrow
C^b(\tildew{\mathcal{T}}([0]))$.
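Explicitly (a sketch; we only use the bijections provided by
\ref{enum:filt-tria-cat-hom-bij}), the terms and differentials of
$c''(A)$ are
\begin{equation*}
c''(A)^n = s^{-n}(A^n),
\qquad
d^n_{c''(A)} := \bigl((\alpha_{s^{-n-1}(A^{n+1})})_*\bigr)^{-1}\bigl(s^{-n}(d^n_A)\bigr),
\end{equation*}
where
$(\alpha_{s^{-n-1}(A^{n+1})})_*:
\op{Hom}(s^{-n}(A^n), s^{-n-1}(A^{n+1})) \xra{\sim}
\op{Hom}(s^{-n}(A^n), s^{-n}(A^{n+1}))$
is bijective by \ref{enum:filt-tria-cat-hom-bij} (both objects lie in
$\tildew{\mathcal{T}}([0])$).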
It is easy to check that there is a canonical isomorphism
\begin{equation}
\label{eq:cdoubleprime-translation}
c'' \circ \Sigma \circ (s^{-1})_{C^b(\tildew{\mathcal{T}})}
\cong
\Sigma \circ c''.
\end{equation}
\subsubsection{Definition of the functor \texorpdfstring{$c$}{c}}
\label{sec:def-functor-c}
By fixing a quasi-inverse $i^{-1}$ to $i$ we now define $c$ to be the
composition
\begin{equation*}
c: \tildew{\mathcal{T}} \xra{c'} C^b_\Delta(\tildew{\mathcal{T}})
\xra{c''} C^b(\tildew{\mathcal{T}}([0]))
\xra{i^{-1}_{C^b}} C^b(\mathcal{T}).
\end{equation*}
This functor maps an object $X\in \tildew{\mathcal{T}}$ to the complex
\begin{equation}
\label{eq:cfun-gr}
\dots \rightarrow
[-1]\op{gr}^{-1}(X)
\rightarrow
\op{gr}^0(X)
\rightarrow
[1]\op{gr}^1(X)
\rightarrow
[2]\op{gr}^2(X)
\rightarrow
\dots.
\end{equation}
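For example (a sanity check): if $T \in \mathcal{T}$, then $i(T)$ lies
in $\tildew{\mathcal{T}}([0])$, so $\op{gr}^0(i(T)) \cong T$ and
$\op{gr}^n(i(T)) = 0$ for $n \neq 0$; hence
\begin{equation*}
c(i(T)) \cong (\dots \rightarrow 0 \rightarrow T \rightarrow 0 \rightarrow \dots),
\end{equation*}
the complex concentrated in degree zero.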
\begin{remark}
\label{rem:quick-def-c}
An equivalent shorter definition of $c$ would be to define it as
$\omega_{C^b} \circ c'$: indeed, $\omega$ maps all morphisms $\alpha$
to isomorphisms and $\omega|_{\tildew{\mathcal{T}}([0])}$ is
quasi-inverse to $i$, whence $i^{-1}_{C^b} \circ c'' \cong
\omega_{C^b}$ on $C^b_\Delta(\tildew{\mathcal{T}})$.
\end{remark}
Combining the above canonical isomorphisms
\eqref{eq:cprime-translation}
and
\eqref{eq:cdoubleprime-translation} we obtain:
\begin{proposition}
\label{p:cfun-is-functor-of-add-cat-trans}
The functor $c$ constructed above is
a functor
\begin{equation}
\label{eq:functor-c-as-cat-with-translation}
c: (\tildew{\mathcal{T}}, [1]s^{-1}) \rightarrow (C^b(\mathcal{T}), \Sigma)
\end{equation}
of additive categories with translation: On objects we have a
canonical isomorphism
\begin{equation}
\label{eq:c-Sigma-commute}
c([1]s^{-1}(X))\cong \Sigma c(X).
\end{equation}
\end{proposition}
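For the record, here is a sketch of the computation behind
\eqref{eq:c-Sigma-commute}; it combines \eqref{eq:cprime-translation}
and \eqref{eq:cdoubleprime-translation} with the evident fact that
$i^{-1}_{C^b}$ commutes with $\Sigma$:
\begin{equation*}
c([1]s^{-1}(X))
= i^{-1}_{C^b} c'' c' ([1]s^{-1}(X))
\cong i^{-1}_{C^b} c'' \Sigma (s^{-1})_{C^b(\tildew{\mathcal{T}})} c'(X)
\cong i^{-1}_{C^b} \Sigma c'' c'(X)
= \Sigma c(X).
\end{equation*}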
\section{Strong weight complex functor}
\label{sec:strong-WCfun}
Our aim in this section is to show the following Theorem:
\begin{theorem}
[{cf.\ \cite[8.4]{bondarko-weight-str-vs-t-str}}]
\label{t:strong-weight-cplx-functor}
Let $\mathcal{T}$ be a triangulated category with a
bounded
weight structure
$w=(\mathcal{T}^{w \leq 0},\mathcal{T}^{w \geq 0})$.
Let $(\tildew{\mathcal{T}}, i)$ be an f-category over
$\mathcal{T}$.
Assume that $\tildew{\mathcal{T}}$ satisfies
axiom~\ref{enum:filt-tria-cat-3x3-diag} stated below.
Then there is a strong weight complex functor
\begin{equation*}
{\widetilde{WC}}: \mathcal{T} \rightarrow K^b({\heartsuit}(w))^{\op{anti}}.
\end{equation*}
In particular ${\widetilde{WC}}$ is a functor of triangulated categories.
\end{theorem}
A proof of (a stronger version of) this theorem is sketched in
\cite[8.4]{bondarko-weight-str-vs-t-str}, where M.~Bondarko
attributes the argument to A.~Beilinson.
When we tried to understand the details we had to impose the
additional
conditions that
$\tildew{\mathcal{T}}$ satisfies
axiom~\ref{enum:filt-tria-cat-3x3-diag} and that the weight structure
is bounded.
We state axiom~\ref{enum:filt-tria-cat-3x3-diag} in
Section~\ref{sec:additional-axiom} and show
in Section~\ref{sec:additional-axiom-basic-example}
that it is satisfied in the basic example of a filtered derived
category.
Our proof of
Theorem~\ref{t:strong-weight-cplx-functor} is an elaboration of the ideas of
A.~Beilinson and M.~Bondarko; we sketch the idea of the proof
in Section~\ref{sec:idea-construction-strong-WC-functor} and give the
details in Section~\ref{sec:existence-strong-WCfun}.
\subsection{Idea of the proof}
\label{sec:idea-construction-strong-WC-functor}
Before giving the details let us explain the strategy of the proof
of Theorem~\ref{t:strong-weight-cplx-functor}.
Let $(\tildew{\mathcal{T}},i)$ be an f-category over a triangulated
category $\mathcal{T}$ and
assume that $w=(\mathcal{T}^{w \leq 0},\mathcal{T}^{w \geq 0})$ is a weight
structure on $\mathcal{T}$.
Its heart ${\heartsuit}(w)=\mathcal{T}^{w=0}$ is a full subcategory of $\mathcal{T}$,
and hence $C^b({\heartsuit}(w)) \subset C^b(\mathcal{T})$
and $K^b({\heartsuit}(w)) \subset K^b(\mathcal{T})$ are
full subcategories.
Let $\tildew{\mathcal{T}}^s$ be the full subcategory of
$\tildew{\mathcal{T}}$ consisting of objects $X \in
\tildew{\mathcal{T}}$ such that $c(X) \in
C^b({\heartsuit}(w))$
where $c$ is the functor constructed in Section~\ref{sec:construction-functor-cfun}: We have a ``pull-back''
diagram
\begin{equation}
\label{eq:pb-subtildet}
\xymatrix{
{\tildew{\mathcal{T}}} \ar[r]^-{c}
& {C^b(\mathcal{T})}\\
{\tildew{\mathcal{T}}^s} \ar[r]^-{c}
\ar@{}[u]|-{\cup}
& {C^b({\heartsuit}(w))} \ar@{}[u]|-{\cup}
}
\end{equation}
of categories where we denote the lower horizontal functor also by
$c$.
Note that \eqref{eq:c-Sigma-commute} shows that $\tildew{\mathcal{T}}^s$
is stable under $s^{-1}[1]=[1]s^{-1}$.
We expand diagram \eqref{eq:pb-subtildet} to
\begin{equation}
\label{eq:pb-subtildet-expanded}
\xymatrix{
{\tildew{\mathcal{T}}} \ar[r]_-{c} \ar@/^1pc/[rr]^-{h}
& {C^b(\mathcal{T})} \ar[r]_-{\op{can}}
& {K^b(\mathcal{T})^{\op{anti}}}\\
{\tildew{\mathcal{T}}^s} \ar[r]^-{c} \ar@/_1pc/[rr]_{h}
\ar@{}[u]|-{\cup}
& {C^b({\heartsuit}(w))} \ar@{}[u]|-{\cup}
\ar[r]^-{\op{can}}
\ar@{}[u]|-{\cup}
& {K^b({\heartsuit}(w))^{\op{anti}}} \ar@{}[u]|-{\cup}
}
\end{equation}
and define $h:=\op{can} \circ c$ as indicated.
We have seen in
Proposition~\ref{p:cfun-is-functor-of-add-cat-trans}
that
$c$ is a functor of additive categories with translation (where
$\tildew{\mathcal{T}}$ (or $\tildew{\mathcal{T}}^s$) is equipped with
the translation $[1]s^{-1}$); the same is obviously true for $h$.
For this statement the homotopy categories on the right are just
considered as additive categories with translation. In the following
however we view them as triangulated categories with the class of
triangles described in Sections \ref{sec:anti-triangles}
and \ref{sec:homotopy-categories}.
From now on we assume that $\tildew{\mathcal{T}}$ satisfies
axiom~\ref{enum:filt-tria-cat-3x3-diag} (stated below) and that the
weight structure is bounded.
Consider the following diagram whose dotted arrows will be explained:
\begin{equation}
\label{eq:omega-subtildeT-exp}
\xymatrix{
{\mathcal{T}} & {\tildew{\mathcal{T}}} \ar[r]^-{h} \ar[l]_{\omega}
& {K^b(\mathcal{T})^{\op{anti}}}\\
{\mathcal{T}} \ar@{=}[u]
&
{\tildew{\mathcal{T}}^s} \ar[r]^-{h}
\ar@{}[u]|-{\cup} \ar[l]_{\omega|_{\tildew{\mathcal{T}}^s}} \ar@{..>}[d]^-{\op{can}}
& {K^b({\heartsuit}(w))^{\op{anti}}} \ar@{}[u]|-{\cup}\\
&
{\mathcal{Q}}
\ar@{..>}[lu]^-{\ol{\omega}}_{\sim}
\ar@{..>}[ru]_{\ol{h}}
}
\end{equation}
We will prove below:
The restriction $\omega|_{\tildew{\mathcal{T}}^s}$ factors over some
quotient functor $\tildew{\mathcal{T}}^s \xra{\op{can}} \mathcal{Q}$ and
induces an equivalence $\ol{\omega}:\mathcal{Q} \xra{\sim} \mathcal{T}$ of
additive categories with translation
(see Prop.~\ref{p:omega-tildeTs-equiv}) where
the translation functor of $\mathcal{Q}$ is induced by $[1]s^{-1}$.
Transfer of structure turns $\mathcal{Q}$
into a triangulated category;
its class of triangles can be
explicitly described (cf.\ Lemma~\ref{l:omega-bar-triang}).
On the other hand the functor
$h:\tildew{\mathcal{T}}^s \rightarrow K^b({\heartsuit}(w))^{\op{anti}}$ factors over
$\mathcal{Q}$ to a functor $\ol{h}$ of triangulated categories
(see Cor.~\ref{c:h-factors-triang}).
Let ${\widetilde{WC}}: \mathcal{T} \rightarrow K^b({\heartsuit}(w))^{\op{anti}}$ be the
composition $\ol{h} \circ
\ol{\omega}^{-1}$,
where $\ol{\omega}^{-1}$ is a
quasi-inverse of
$\ol{\omega}$.
(In diagram \eqref{eq:omega-subtildeT-exp} ${\widetilde{WC}}$ is the
composition of
the dotted arrows.)
Then ${\widetilde{WC}}$ will turn out to be a strong weight complex functor, proving
Theorem~\ref{t:strong-weight-cplx-functor}.
\subsection{An additional axiom for filtered triangulated categories}
\label{sec:additional-axiom}
Let $\tildew{\mathcal{T}}$ be a
filtered triangulated category.
Let $Y$ be an object of $\tildew{\mathcal{T}}$ and consider the
$\sigma$-truncation triangle
\begin{equation*}
\label{eq:sigma-trunc}
\xymatrix{
{S^0_Y:} &
{\sigma_{\geq 1}Y} \ar[r]^-{g^1_Y} &
{Y} \ar[r]^-{k_Y^0} &
{\sigma_{\leq 0}Y} \ar[r]^-{v_Y^0} &
{[1]\sigma_{\geq 1}Y.}
}
\end{equation*}
Applying the natural transformation $\alpha$ we obtain a morphism of triangles
\begin{equation*}
\xymatrix{
{s(S^0_Y):} &
{s(\sigma_{\geq 1}Y)} \ar[r]^-{s(g^1_Y)} &
{s(Y)} \ar[r]^-{s(k_Y^0)} &
{s(\sigma_{\leq 0}Y)} \ar[r]^-{s(v_Y^0)} &
{[1]s(\sigma_{\geq 1}Y)}\\
{S^0_Y:} &
{\sigma_{\geq 1}Y} \ar[r]^-{g^1_Y} \ar[u]^{\alpha_{\sigma_{\geq
1}(Y)}} &
{Y} \ar[r]^-{k_Y^0} \ar[u]^{\alpha_Y} &
{\sigma_{\leq 0}Y} \ar[r]^-{v_Y^0} \ar[u]^{\alpha_{\sigma_{\leq 0}(Y)}}&
{[1]\sigma_{\geq 1}Y.} \ar[u]^{[1]\alpha_{\sigma_{\geq 1}(Y)}}
}
\end{equation*}
where we tacitly identify
$s([1]\sigma_{\geq 1}(Y))= [1]s(\sigma_{\geq 1}(Y))$ and
$\alpha_{[1]\sigma_{\geq 1}(Y)}=[1]\alpha_{\sigma_{\geq 1}(Y)}$.
Given a morphism $f:X \rightarrow Y$ in $\tildew{\mathcal{T}}$,
the morphism of triangles
\begin{equation*}
\xymatrix{
{S^0_Y:} &
{\sigma_{\geq 1}Y} \ar[r]^-{g^1_Y} &
{Y} \ar[r]^-{k_Y^0} &
{\sigma_{\leq 0}Y} \ar[r]^-{v_Y^0} &
{[1]\sigma_{\geq 1}Y.} \\
{S^0_X:} &
{\sigma_{\geq 1}X} \ar[r]^-{g^1_X} \ar[u]^{\sigma_{\geq 1}(f)} &
{X} \ar[r]^-{k_X^0} \ar[u]^{f} &
{\sigma_{\leq 0}X} \ar[r]^-{v_X^0} \ar[u]^{\sigma_{\leq 0}(f)}&
{[1]\sigma_{\geq 1}X.} \ar[u]^{[1]\sigma_{\geq 1}(f)}
}
\end{equation*}
is the unique morphism of triangles extending $f$
(use Prop.~\ref{p:BBD-1-1-9-copied-for-w-str} and
\ref{enum:filt-tria-cat-no-homs}).
We denote the composition of these two morphisms of triangles by
$\alpha \circ f: S_X^0 \rightarrow s(S_Y^0)$:
\begin{equation}
\label{eq:alpha-f-triangle}
\hspace{-1.6cm}
\xymatrix@C2cm{
{s(S^0_Y):} &
{s(\sigma_{\geq 1}Y)} \ar[r]^-{s(g^1_Y)} &
{s(Y)} \ar[r]^-{s(k_Y^0)} &
{s(\sigma_{\leq 0}Y)} \ar[r]^-{s(v_Y^0)} &
{[1]s(\sigma_{\geq 1}Y)}\\
{S^0_X:} \ar[u]^{\alpha \circ f} &
{\sigma_{\geq 1}X} \ar[r]^-{g^1_X} \ar[u]^{\alpha_{\sigma_{\geq
1}(Y)} \circ \sigma_{\geq 1}(f)} &
{X} \ar[r]^-{k_X^0} \ar[u]^{\alpha_Y \circ f} &
{\sigma_{\leq 0}X} \ar[r]^-{v_X^0} \ar[u]^{\alpha_{\sigma_{\leq
0}(Y)} \circ \sigma_{\leq 0}(f)}&
{[1]\sigma_{\geq 1}X.} \ar[u]^{[1](\alpha_{\sigma_{\geq 1}(Y)}
\circ \sigma_{\geq 1}(f))}
}
\end{equation}
(We don't see a reason why this morphism of triangles
extending
$\alpha_Y \circ f$ should be unique;
Proposition~\ref{p:BBD-1-1-9-copied-for-w-str} does not apply.
If it were unique, axiom~\ref{enum:filt-tria-cat-3x3-diag} below
would be satisfied automatically.)
Now the additional axiom can be stated:
\begin{enumerate}[label=(fcat{\arabic*}),start=7]
\item
\label{enum:filt-tria-cat-3x3-diag}
For any morphism $f: X \rightarrow Y$ in $\tildew{\mathcal{T}}$ the
morphism
\eqref{eq:alpha-f-triangle}
of triangles
$\alpha \circ f: S_X^0 \rightarrow s(S_Y^0)$ explained above can be
extended to a $3 \times 3$-diagram
\begin{equation}
\label{eq:filt-tria-cat-nine-diag}
\hspace{-1.4cm}
\xymatrix@C2cm{
&
{[1]\sigma_{\geq 1}X} \ar@{..>}[r]^-{[1]g^1_X} &
{[1]X} \ar@{..>}[r]^-{[1]k_X^0} &
{[1]\sigma_{\leq 0}X} \ar@{..>}[r]^-{[1]v_X^0}
\ar@{}[rd]|-{\circleddash} &
{[2]\sigma_{\geq 1}X} \\
&
{A} \ar[u]^-{b} \ar[r] &
{Z} \ar[u] \ar[r] &
{B} \ar[u] \ar[r] &
{[1]A} \ar@{..>}[u]^-{[1]b} \\
{s(S^0_Y):} &
{s(\sigma_{\geq 1}Y)} \ar[r]^-{s(g^1_Y)} \ar[u]^-{a} &
{s(Y)} \ar[r]^-{s(k_Y^0)} \ar[u]&
{s(\sigma_{\leq 0}Y)} \ar[r]^-{s(v_Y^0)} \ar[u]&
{[1]s(\sigma_{\geq 1}Y)} \ar@{..>}[u]^-{[1]a}\\
{S^0_X:} \ar[u]^{\alpha \circ f} &
{\sigma_{\geq 1}X} \ar[r]^-{g^1_X} \ar[u]^{\alpha_{\sigma_{\geq
1}(Y)} \circ \sigma_{\geq 1}(f)} &
{X} \ar[r]^-{k_X^0} \ar[u]^{\alpha_Y \circ f} &
{\sigma_{\leq 0}X} \ar[r]^-{v_X^0} \ar[u]^{\alpha_{\sigma_{\leq
0}(Y)} \circ \sigma_{\leq 0}(f)} &
{[1]\sigma_{\geq 1}X} \ar@{..>}[u]^{[1](\alpha_{\sigma_{\geq 1}(Y)}
\circ \sigma_{\geq 1}(f))}
}
\end{equation}
having the properties described in
Proposition~\ref{p:3x3-diagram-copied-for-w-str}.
\end{enumerate}
If we take the morphism $S_X^n \xra{\alpha \circ f} s(S^n_Y)$ instead
of $S_X^0 \xra{\alpha \circ f} s(S^0_Y)$ at the bottom of this
$3\times 3$-diagram, we get similar diagrams (use
the functor $s$ of triangulated categories).
\begin{remark}
\label{rem:fcat-nine-nearly-needless}
We first expected axiom \ref{enum:filt-tria-cat-3x3-diag} to be a
consequence of
Proposition~\ref{p:3x3-diagram-copied-for-w-str}.
In fact this proposition implies that there is a diagram
\eqref{eq:filt-tria-cat-nine-diag} with nearly all the required
properties: If we start from the small commutative square in the
lower left corner, the only thing that is not clear to us is why
one can assume that the morphism from $\sigma_{\leq 0}X$ to
$s(\sigma_{\leq 0}Y)$ is
${\alpha_{\sigma_{\leq 0}(Y)} \circ \sigma_{\leq 0}(f)}$.
\end{remark}
\begin{remark}
\label{rem:fcat-nine-rewritten}
Some manipulations of diagram~\eqref{eq:filt-tria-cat-nine-diag} (mind
the signs!) show that
axiom~\ref{enum:filt-tria-cat-3x3-diag} gives:
For any morphism $f: X \rightarrow Y$ and any $n \in \Bl{Z}$ there is a $3
\times 3$-diagram of the following\footnote{We assume here and in
similar situations in the following that $[1][-1]=\op{id}$.} form:
\begin{equation}
\label{eq:filt-tria-cat-nine-diag-sigma}
\xymatrix
{
{[-1]s({\sigma_{\geq n+1}(Y)})}
\ar[r] \ar[d]^-{[-1]s(g^{n+1}_Y)} &
{A'} \ar[r] \ar[d] &
{{\sigma_{\geq n+1}(X)}} \ar[rr]^-{\alpha_{\sigma_{\geq n+1}(Y)}
\circ \sigma_{\geq n+1}(f)} \ar[d]^-{g^{n+1}_X} & &
{s({\sigma_{\geq n+1}(Y)})} \ar@{..>}[d]^-{s({g^{n+1}_Y})} \\
{[-1]s(Y)} \ar[r] \ar[d]^-{[-1]s({k_Y^n})} &
{Z'} \ar[r] \ar[d] &
{X} \ar[rr]^-{\alpha_Y \circ f} \ar[d]^-{k_X^n} &&
{s(Y)} \ar@{..>}[d]^-{s({k_Y^n})} \\
{[-1]s({\sigma_{\leq n}(Y)})} \ar[r] \ar[d]^-{-[-1]s({v_Y^n})} &
{B'} \ar[r] \ar[d] &
{{\sigma_{\leq n}(X)}}
\ar[rr]^-{\alpha_{\sigma_{\leq n}(Y)} \circ \sigma_{\leq n}(f)}
\ar[d]^-{v_X^n} \ar@{}[rrd]|-{\circleddash} &&
{s({\sigma_{\leq n}(Y)})} \ar@{..>}[d]^-{-s({v_Y^n})} \\
{s({\sigma_{\geq n+1}(Y)})} \ar@{..>}[r] &
{[1]A'} \ar@{..>}[r] &
{[1]{\sigma_{\geq n+1}(X)}}
\ar@{..>}[rr]^-{[1](\alpha_{\sigma_{\geq n+1}(Y)}\circ
\sigma_{\geq n+1}(f))} & &
{[1]s({\sigma_{\geq n+1}(Y)})}
}
\end{equation}
\end{remark}
\subsection{The additional axiom in the basic example}
\label{sec:additional-axiom-basic-example}
Let $\mathcal{A}$ be an abelian category and consider the basic example
$DF(\mathcal{A})$ of a filtered triangulated category
(as described in Section~\ref{sec:basic-example-fcat},
Prop.~\ref{p:basic-ex-f-cat}).
\begin{lemma}
\label{l:additional-axiom-true-DFA}
Axiom~\ref{enum:filt-tria-cat-3x3-diag} is true in
$DF(\mathcal{A})$.
\end{lemma}
\begin{proof}
Let $f: X \rightarrow Y$ be a morphism in $DF(\mathcal{A})$. We can assume
without loss of generality that $f$ is (the class of) a morphism
$f:X \rightarrow Y$ in $CF(\mathcal{A})$.
We explain the following diagram:
\begin{equation*}
\hspace{-4.1cm}
\xymatrix@R=40pt@C=18pt{
{F^n[1]L_X} \ar@{..>}[r]^-{[1]{g_X}}
\ar@{}[rd]|(.2){-x} &
{F^n[1]X} \ar@{..>}[r]^-{\svek 10}
\ar@{}[rd]|(.2){-x} &
{F^n[1]X \oplus F^n[2]L_X} \ar@{..>}[r]^-{\zvek 01}
\ar@{}[rd]|-{\circleddash}
\ar@{}[rd]|(.2){\tzmat {-x}{-{g_X}}0{x}} &
{F^n[2]L_X}
\ar@{}[rd]|(.2){x} &
\\
{F^{n-1}L_Y\oplus F^n[1]L_X} \ar[u]^-{\zvek 01} \ar[r]^-{\tzmat
{g_Y}00{g_X}}
\ar@{}[rd]|(.2){\tzmat y\beta 0{-x}} &
{F^{n-1}Y\oplus F^n[1]X} \ar[u]^-{\zvek 01} \ar[r]^-{
\Big[
\begin{smallmatrix}
1&0\\ 0&0\\ 0&1\\ 0&0
\end{smallmatrix}
\Big]
}
\ar@{}[rd]|(.2){\tzmat y\gamma 0{-x}} &
{F^{n-1}Y \oplus F^{n-1}[1]L_Y \oplus F^n[1]X \oplus F^n[2]L_X}
\ar[u]^-{
\Big[
\begin{smallmatrix}
0&0&1&0\\ 0&0&0&1
\end{smallmatrix}
\Big]
} \ar[r]^-{
\Big[
\begin{smallmatrix}
0&1&0&0\\ 0&0&0&{-1}
\end{smallmatrix}
\Big]
}
\ar@{}[rd]|(.35){
\Big[
\begin{smallmatrix}
y&{g_Y}&\gamma&0\\
0&-y&0&\beta\\
0&0&-x&-{g_X}\\
0&0&0&x
\end{smallmatrix}
\Big]
}
&
{F^{n-1}[1]L_Y \oplus F^n[2]L_X} \ar@{..>}[u]^-{\zvek 01}
\ar@{}[rd]|(.2){\tzmat {-y}{-\beta} 0{x}} &
{ } \\
{F^{n-1}L_Y} \ar[u]^-{\svek 10} \ar[r]^-{{g_Y}}
\ar@{}[rd]|(.2){y} &
{F^{n-1}Y} \ar[u]^-{\svek 10} \ar[r]^-{\svek 10}
\ar@{}[rd]|(.2){y} &
{F^{n-1}Y\oplus F^{n-1}[1]L_Y} \ar[u]^-{
\Big[
\begin{smallmatrix}
1&0\\ 0&1\\ 0&0\\ 0&0
\end{smallmatrix}
\Big]
} \ar[r]^-{\zvek 01}
\ar@{}[rd]|(.2){\tzmat y{g_Y}0{-y}} &
{F^{n-1}[1]L_Y} \ar@{..>}[u]^-{\svek 10}
\ar@{}[rd]|(.2){-y} &
{ } \\
{F^nL_X} \ar[u]^-{\beta} \ar[r]^-{g_X}
\ar@{}[rd]|(.2){x} &
{F^nX} \ar[u]^-{\gamma} \ar[r]^-{\svek 10}
\ar@{}[rd]|(.2){x} &
{F^nX \oplus F^n[1]L_X} \ar[u]^-{\tzmat \gamma 0 0 \beta}
\ar[r]^-{\zvek 01}
\ar@{}[rd]|(.2){\tzmat x{g_X}0{-x}} &
{F^n[1]L_X} \ar@{..>}[u]^-{[1]\beta}
\ar@{}[rd]|(.2){-x}
& { } \\
{ } &
{ } &
{ } &
{ } &
{ }
}
\end{equation*}
This diagram is the $n$-th filtered part of the $3\times 3$ diagram
we need.
For simplicity, in the rest of this description we will not
distinguish between a morphism and its $n$-th filtered part.
The lowest row is
constructed as follows:
Let $L_X:= X(\geq 1)$ be as defined in
\eqref{eq:basic-example-fcat-def-Xgeq-triangle}
and let $g_X: L_X \rightarrow X$ be the obvious morphism (called $i$
there). The lowest row is the ($n$-th filtered part of)
the mapping cone triangle of this morphism. This triangle is
isomorphic to the triangle \eqref{eq:triangle-fcat-basic} and hence
a possible choice
for the $\sigma$-truncation triangle of $X$, cf.\ the proof of
Proposition~\ref{p:basic-ex-f-cat}.
The second row from below is the
corresponding triangle for $s(Y)$.
The lower right ``index'' at each object indicates its differential,
e.\,g.\ the differential of $X$ is $x$.
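For orientation, let us record the mapping cone convention visible in
these differentials (a sketch inferred from the diagram; note that the
differentials of $L_X$ and $X$ are both denoted $x$ here): for a
morphism $g:L \rightarrow X$ of complexes with differentials $x_L$ and
$x_X$, the mapping cone triangle is
\begin{equation*}
\xymatrix{
{L} \ar[r]^-{g} &
{X} \ar[r]^-{\svek 10} &
{X \oplus [1]L} \ar[r]^-{\zvek 01} &
{[1]L,}
}
\end{equation*}
where $X \oplus [1]L$ carries the differential $\tzmat {x_X}{g}0{-x_L}$.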
The morphism of triangles between the lower two rows
is constructed as described before \eqref{eq:alpha-f-triangle},
e.\,g.\ $\gamma = \alpha_Y \circ f:X \rightarrow s(Y)$.
Then we fit the morphisms $\beta$, $\gamma$ and $\tzmat \gamma
00\beta$ into mapping cone triangles.
Then consider the horizontal arrows in the second row from above:
They are morphisms of complexes and make all small squares
(anti-)commutative as required.
We only need to show that this second row is a triangle.
This is a consequence of the following diagram which gives an
isomorphism of this row to the mapping cone triangle of
$\tzmat{g_Y}00{g_X}$.
\begin{equation*}
\hspace{-4.4cm}
\xymatrix@R=60pt@C=23pt{
{F^{n-1}L_Y\oplus F^n[1]L_X} \ar@{=}[d] \ar[r]^-{\tzmat
{g_Y}00{g_X}}
\ar@{}[rd]|(.2){\tzmat y\beta 0{-x}} &
{F^{n-1}Y\oplus F^n[1]X} \ar@{=}[d] \ar[r]^-{
\Big[
\begin{smallmatrix}
1&0\\ 0&0\\ 0&1\\ 0&0
\end{smallmatrix}
\Big]
}
\ar@{}[rd]|(.2){\tzmat y\gamma 0{-x}} &
{F^{n-1}Y \oplus F^{n-1}[1]L_Y \oplus F^n[1]X \oplus F^n[2]L_X}
\ar[d]^-{\sim}_-{
\Big[
\begin{smallmatrix}
1&0&0&0\\
0&0&1&0\\
0&1&0&0\\
0&0&0&-1
\end{smallmatrix}
\Big]
} \ar[r]^-{
\Big[
\begin{smallmatrix}
0&1&0&0\\ 0&0&0&{-1}
\end{smallmatrix}
\Big]
}
\ar@{}[rd]|(.35){
\Big[
\begin{smallmatrix}
y&{g_Y}&\gamma&0\\
0&-y&0&\beta\\
0&0&-x&-{g_X}\\
0&0&0&x
\end{smallmatrix}
\Big]
}
&
{F^{n-1}[1]L_Y \oplus F^n[2]L_X} \ar@{=}[d]
\ar@{}[rd]|(.2){\tzmat {-y}{-\beta} 0{x}} &
{ } \\
{F^{n-1}L_Y\oplus F^n[1]L_X} \ar[r]^-{\tzmat
{g_Y}00{g_X}}
\ar@{}[rd]|(.2){\tzmat y\beta 0{-x}} &
{F^{n-1}Y\oplus F^n[1]X} \ar[r]^-{
\Big[
\begin{smallmatrix}
1&0\\ 0&1\\ 0&0\\ 0&0
\end{smallmatrix}
\Big]
}
\ar@{}[rd]|(.2){\tzmat y\gamma 0{-x}} &
{F^{n-1}Y \oplus F^n[1]X \oplus F^{n-1}[1]L_Y \oplus F^n[2]L_X}
\ar[r]^-{
\Big[
\begin{smallmatrix}
0&0&1&0\\ 0&0&0&1
\end{smallmatrix}
\Big]
}
\ar@{}[rd]|(.35){
\Big[
\begin{smallmatrix}
y&\gamma&{g_Y}&0\\
0&-x&0&{g_X}\\
0&0&-y&-\beta\\
0&0&0&x
\end{smallmatrix}
\Big]
}
&
{F^{n-1}[1]L_Y \oplus F^n[2]L_X}
\ar@{}[rd]|(.2){\tzmat {-y}{-\beta} 0{x}} &
{ } \\
{ } &
{ } &
{ } &
{ } &
{ }
}
\end{equation*}
\end{proof}
\subsection{Existence of a strong weight complex functor}
\label{sec:existence-strong-WCfun}
Let
$\mathcal{T}$ be a triangulated category with a
bounded
weight structure
$w=(\mathcal{T}^{w \leq 0},\mathcal{T}^{w \geq 0})$ and let
$(\tildew{\mathcal{T}}, i)$ be an f-category over
$\mathcal{T}$.
An object $X \in \tildew{\mathcal{T}}$ is by definition in
$\tildew{\mathcal{T}}^s$ if and only if $c(X) \in C^b({\heartsuit}(w))$.
Using \eqref{eq:cfun-gr} we obtain:
\begin{align}
\label{eq:subtildeT-equiv}
X \in \tildew{\mathcal{T}}^s
& \Leftrightarrow \forall a \in \Bl{Z}: [a]\op{gr}^a(X) \in
{\heartsuit}(w)=\mathcal{T}^{w=0}\\
\notag
& \Leftrightarrow \forall a \in \Bl{Z}: \op{gr}^a(X) = i^{-1} s^{-a} \sigma_a (X) \in
[-a]\mathcal{T}^{w=0}=\mathcal{T}^{w=a}\\
\notag
& \Leftrightarrow \forall a \in \Bl{Z}: \sigma_a (X) \tildew{\in}
s^a(i(\mathcal{T}^{w=a})),
\end{align}
where ``$\tildew{\in}$" stands for ``is isomorphic to some object in".
\begin{remark}
\label{rem:tildeTs-and-heart-comp-w-str}
Note the difference between $\tildew{\mathcal{T}}^s$ and
the heart \eqref{eq:heart-comp-w-str}
of
the unique compatible w-structure on $\tildew{\mathcal{T}}$
described in Proposition~\ref{p:compatible-w-str}.
\end{remark}
Observe that $\tildew{\mathcal{T}}^s$ is not a triangulated subcategory
of $\tildew{\mathcal{T}}$: It is not closed under the translation $[1]$.
However $\tildew{\mathcal{T}}^s$ is closed under $[1]s^{-1}$
(use \eqref{eq:c-Sigma-commute}) and under
extensions: If
$(A,X,B)$ is a triangle with $A, B \in \tildew{\mathcal{T}}^s$,
apply the triangulated functor $\op{gr}^a$ and obtain the triangle
$(\op{gr}^a(A), \op{gr}^a(X), \op{gr}^a(B))$; now use that $\mathcal{T}^{w=a}$ is closed
under extensions
(Lemma~\ref{l:weight-str-basic-properties}~\eqref{enum:weight-perp-prop}).
It is obvious from \eqref{eq:subtildeT-equiv}
that
$\tildew{\mathcal{T}}^s$
is stable under all $\sigma$-truncations.
\begin{lemma}
\label{l:omega-range-versus-weights-on-tildeTs}
Let $X \in \tildew{\mathcal{T}}^s([a,b])$ for $a,b \in \Bl{Z}$. Then
$\omega(X) \in \mathcal{T}^{w \in [a,b]}$.
\end{lemma}
\begin{proof}
We can build up $X$ as indicated in the
following diagram in the case $[a,b]=[-2,1]$:
\begin{equation}
\label{eq:build-up-X}
\xymatrix@C-1.3cm{
{X=\sigma_{\leq 1}X} \ar[rr]
&& {\sigma_{\leq 0}X} \ar[rr] \ar@{~>}[ld]
&& {\sigma_{\leq -1}X} \ar[rr] \ar@{~>}[ld]
&& {\sigma_{\leq -2}X} \ar[rr] \ar@{~>}[ld]
&& {\sigma_{\leq -3}X=0} \ar@{~>}[ld]
\\
& {\sigma_1(X)} \ar[lu]
&& {\sigma_0(X)} \ar[lu]
&& {\sigma_{-1}(X)} \ar[lu]
&& {\sigma_{-2}(X)} \ar[lu]
}
\end{equation}
All triangles are isomorphic to $\sigma$-truncation triangles with
the wiggly arrows of degree one.
Since $\sigma_n(X)\cong s^n(i(\op{gr}^n(X)))$,
Proposition~\ref{p:functor-omega}~\ref{enum:functor-omega-iii}
yields $\omega(\sigma_n(X)) \cong \op{gr}^n(X)$.
If we apply $\omega$ to diagram \eqref{eq:build-up-X}
we obtain a diagram that is isomorphic to
\begin{equation*}
\xymatrix@C-1.3cm{
{\omega(X)} \ar[rr]
&& {\omega(\sigma_{\leq 0}X)} \ar[rr] \ar@{~>}[ld]
&& {\omega(\sigma_{\leq -1}X)} \ar[rr] \ar@{~>}[ld]
&& {\omega(\sigma_{\leq -2}X)} \ar[rr] \ar@{~>}[ld]
&& {0} \ar@{~>}[ld]
\\
& {\op{gr}^1(X)} \ar[lu]
&& {\op{gr}^0(X)} \ar[lu]
&& {\op{gr}^{-1}(X)} \ar[lu]
&& {\op{gr}^{-2}(X).} \ar[lu]
}
\end{equation*}
Since $\op{gr}^n(X) \in \mathcal{T}^{w=n}$ and all $\mathcal{T}^{w \leq
n}$, $\mathcal{T}^{w\geq n}$ are closed under extensions
(Lemma \ref{l:weight-str-basic-properties}
\eqref{enum:weight-perp-prop}) we obtain the claim.
\end{proof}
\begin{remark}
\label{rem:omega-range-versus-weights-on-tildeTs}
The converse statement of
Lemma~\ref{l:omega-range-versus-weights-on-tildeTs} is not true in
general (but cf.
Prop.~\ref{p:omega-on-subtildeT}):
Take a non-zero object $X \in \tildew{\mathcal{T}}^s([0])$ (so
$X\cong \sigma_0(X) \tildew{\in} i(\mathcal{T}^{w=0})$) and fit the
morphism $\alpha_X$ into a triangle
\begin{equation*}
\xymatrix{
{[-1]s(X)} \ar[r]
&
{E} \ar[r]
&
{X} \ar[r]^-{\alpha_X}
&
{s(X).}
}
\end{equation*}
Then $E \in \tildew{\mathcal{T}}^s$. Since $\omega(\alpha_X)$ is an
isomorphism we have $\omega(E)=0$ but $\op{range}(E)=[0,1]$.
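In fact (a sketch, anticipating
Proposition~\ref{p:cone-alpha-homotopic-to-zero} below applied with
$m=\op{id}_X$): the complex $c(E)$ is isomorphic to
\begin{equation*}
(\dots \rightarrow 0 \rightarrow i^{-1}(X) \xra{\op{id}} i^{-1}(X)
\rightarrow 0 \rightarrow \dots)
\end{equation*}
concentrated in degrees $0$ and $1$; hence $h(E)=0$ in
$K^b({\heartsuit}(w))^{\op{anti}}$ although $E$ itself is non-zero.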
\end{remark}
In the rest of this section we additionally assume:
The weight structure
$w=(\mathcal{T}^{w \leq 0},\mathcal{T}^{w \geq 0})$ is bounded,
and $\tildew{\mathcal{T}}$ satisfies
axiom~\ref{enum:filt-tria-cat-3x3-diag}.
\begin{lemma}
[{\cite[Lemma 8.4.2]{bondarko-weight-str-vs-t-str}}]
\label{l:alpha-hom-subtildeT}
For all $M, N \in \tildew{\mathcal{T}}^s$ the map
\begin{equation*}
(\alpha_{N})_*=(\alpha_{N} \circ ?): \op{Hom}_{\tildew{\mathcal{T}}}(M, N)
\rightarrow \op{Hom}_{\tildew{\mathcal{T}}}(M, s(N))
\end{equation*}
is surjective and for all $a>0$ the map
\begin{equation*}
(\alpha_{s^a(N)})_*=(\alpha_{s^a(N)} \circ ?): \op{Hom}_{\tildew{\mathcal{T}}}(M, s^a(N))
\rightarrow \op{Hom}_{\tildew{\mathcal{T}}}(M, s^{a+1}(N))
\end{equation*}
is bijective.
\end{lemma}
\begin{proof}
We complete $\alpha_N$ to a triangle
\begin{equation}
\label{eq:alpha-N-tri}
\xymatrix{
{N} \ar[r]^{\alpha_N} &
{s(N)} \ar[r] &
{Q} \ar[r] &
{[1]N,}
}
\end{equation}
apply $s^a$ and obtain the triangle
\begin{equation*}
\xymatrix@C=1.5cm{
{s^a(N)} \ar[r]^{s^a(\alpha_N)=\alpha_{s^a(N)}} &
{s^{a+1}(N)} \ar[r] &
{s^a(Q)} \ar[r] &
{[1]s^a(N).}
}
\end{equation*}
Applying $\tildew{\mathcal{T}}(M,?)$
to this
triangle yields an exact sequence
\begin{equation*}
\tildew{\mathcal{T}}(M,[-1]s^a(Q)) \rightarrow
\tildew{\mathcal{T}}(M,s^a(N)) \xra{(\alpha_{s^a(N)})_*}
\tildew{\mathcal{T}}(M,s^{a+1}(N)) \rightarrow
\tildew{\mathcal{T}}(M,s^a(Q))
\end{equation*}
Hence we have to prove:
\begin{align*}
\tildew{\mathcal{T}}(M,s^a(Q)) & = 0 && \text{for $a \geq 0$, and}\\
\tildew{\mathcal{T}}(M,[-1]s^a(Q)) & = 0 && \text{for $a > 0$.}
\end{align*}
This is clearly implied by
\begin{align*}
\tildew{\mathcal{T}}(M,[b]s^a(Q)) & = 0 && \text{for all $a, b \in
\Bl{Z}$ with $a+b\geq 0$.}
\end{align*}
We claim more generally that
\begin{align}
\label{eq:m-prime-cq-claim}
\tildew{\mathcal{T}}(M',[c]Q) & = 0 && \text{for all $c \in \Bl{N}$
and all $M' \in \tildew{\mathcal{T}}^s$;}
\end{align}
the above special case is obtained by setting $M':=s^{-a}[a]M$ (note
that $\tildew{\mathcal{T}}^s$ is $[1]s^{-1}$-stable) and $c=a+b$ using
$\tildew{\mathcal{T}}(M,[b]s^a(Q)) \xsira{s^{-a}[a]}
\tildew{\mathcal{T}}(M',[c]Q)$.
We claim that we can assume in
\eqref{eq:m-prime-cq-claim} that the supports of $M'$ and $N$
(the object $N$ determines $Q$ up to isomorphism) are singletons (or empty):
For $M'$ this is obvious; for $N$ we use axiom
\ref{enum:filt-tria-cat-3x3-diag} for $\op{id}_N: N \rightarrow N$
and obtain the following $3 \times 3$-diagram:
\begin{equation*}
\xymatrix{
{[1]\sigma_{\geq d+1} (N)} \ar@{..>}[r] &
{[1]s(\sigma_{\geq d+1} (N))} \ar@{..>}[r] &
{[1]Q'} \ar@{..>}[r] \ar@{}[rd]|{\circleddash}&
{[2]\sigma_{\geq d+1} (N)} \\
{\sigma_{\leq d}(N)} \ar[u] \ar[r]^-{\alpha_{\sigma_{\leq d}(N)}} &
{s(\sigma_{\leq d}(N))} \ar[u] \ar[r] &
{Q''} \ar[u] \ar[r] &
{[1]\sigma_{\leq d}(N)} \ar@{..>}[u] \\
{N} \ar[r]^-{\alpha_N} \ar[u] &
{s(N)} \ar[r] \ar[u] &
{Q} \ar[r] \ar[u] &
{[1]N} \ar@{..>}[u] \\
{\sigma_{\geq d+1} (N)} \ar[u] \ar[r]^-{\alpha_{\sigma_{\geq d+1}(N)}} &
{s(\sigma_{\geq d+1} (N))} \ar[u] \ar[r] &
{Q'} \ar[u] \ar[r] &
{[1]\sigma_{\geq d+1} (N)} \ar@{..>}[u]
}
\end{equation*}
This shows that knowing \eqref{eq:m-prime-cq-claim} for $Q'$ and
$Q''$ implies \eqref{eq:m-prime-cq-claim} for $Q$, proving the claim.
Assume now that the support of $M'$ is $[x]$ and that of $N$
is $[y]$ for some $x,y \in \Bl{Z}$.
This means that we can assume
(cf.\ \eqref{eq:subtildeT-equiv})
that
\begin{align*}
M' & = s^x i(X) && \text{for some $X \in \mathcal{T}^{w=x}$, and}\\
N & = s^y i(Y) && \text{for some $Y \in \mathcal{T}^{w=y}$.}
\end{align*}
Since $M'$ is in $\tildew{\mathcal{T}}(\geq x)$, the triangle
$(\sigma_{\geq x}([c]Q), [c]Q, \sigma_{\leq x-1}([c]Q))$ and
\ref{enum:filt-tria-cat-no-homs} show the first isomorphism in
\begin{equation*}
\tildew{\mathcal{T}}(M', [c] Q) \overset{\sim}{\leftarrow}
\tildew{\mathcal{T}}(M', \sigma_{\geq x}([c] Q))
\xsira{\omega}
{\mathcal{T}}(\omega(M'), \omega(\sigma_{\geq x}([c] Q))),
\end{equation*}
the second isomorphism is a consequence of
\ref{enum:functor-omega-iv} and $M' \in \tildew{\mathcal{T}}(\leq
x)$.
Note that $Q$ and $[c]Q$ are in $\tildew{\mathcal{T}}([y,y+1])$ by
\eqref{eq:alpha-N-tri}.
In order to show that
${\mathcal{T}}(\omega(M'), \omega(\sigma_{\geq x}([c] Q)))$
vanishes, we consider three cases:
\begin{enumerate}
\item $x > y+1$: Then $\sigma_{\geq x}([c] Q)=0$.
\item $x \leq y$: Then $\sigma_{\geq x}([c] Q)=[c]Q$.
Applying the triangulated functor $\omega$ to
\eqref{eq:alpha-N-tri} and using \ref{enum:functor-omega-iii}
shows that $\omega(\sigma_{\geq x}([c] Q))=\omega([c]Q)=0$.
\item $x=y+1$: Applying the triangulated functor $\sigma_{\geq x}$
to \eqref{eq:alpha-N-tri} shows that
$\sigma_{\geq x}([c] Q) \cong [c]s(N)=[c]s^{y+1}i(Y)=[c]s^x i(Y)$.
Hence we have
\begin{equation*}
{\mathcal{T}}(\omega(M'), \omega(\sigma_{\geq x}([c] Q)))
= {\mathcal{T}}(\omega(s^xi(X)), \omega([c]s^xi(Y)))
\cong {\mathcal{T}}(X, [c]Y)
\end{equation*}
where we use \ref{enum:functor-omega-iii} ($i(X)$
and $s^xi(X)$ are connected by a sequence of morphisms
$\alpha_{s^?i(X)}$, and similarly for $i(Y)$) and the fact that
$\omega|_{\tildew{\mathcal{T}}([0])}$ is a quasi-inverse of
$i$. Since $X \in \mathcal{T}^{w \geq
x}=\mathcal{T}^{w \geq y+1}$ and $[c]Y
\in \mathcal{T}^{w \leq y-c} \subset \mathcal{T}^{w \leq y}$ we have
${\mathcal{T}}(X, [c]Y)=0$ by \ref{enum:ws-iii}.
\end{enumerate}
\end{proof}
\begin{proposition}
\label{p:omega-on-subtildeT}
The restriction
$\omega|_{\tildew{\mathcal{T}}^s}:\tildew{\mathcal{T}}^s \rightarrow
\mathcal{T}$ of
$\omega:\tildew{\mathcal{T}} \rightarrow \mathcal{T}$
is full (i.\,e.\ induces
epimorphisms on morphism spaces) and essentially surjective (i.\,e.\
surjective on isoclasses of objects).
More precisely
$\omega|_{\tildew{\mathcal{T}}^s([a,b])}:\tildew{\mathcal{T}}^s([a,b]) \rightarrow
\mathcal{T}^{w \in [a,b]}$
(cf.\ Lemma~\ref{l:omega-range-versus-weights-on-tildeTs})
has these properties.
\end{proposition}
\begin{proof}
We first prove that $\omega|_{\tildew{\mathcal{T}}^s}$ induces
epimorphisms on morphism spaces. Let $M, N \in
\tildew{\mathcal{T}}^s$.
By \ref{enum:filt-tria-cat-exhaust} we find $m, n \in \Bl{Z}$ such that
$M \in \tildew{\mathcal{T}}(\leq m)$ and $N \in
\tildew{\mathcal{T}}(\geq n)$.
Choose $a \in \Bl{Z}$ satisfying $a \geq m-n$.
We give a pictorial proof:
Consider the following diagrams
\begin{equation}
\label{eq:pictorial-omegas-subtilde}
\xymatrix{
& {s^a(N)} \\
& {\vdots} \ar[u] \\
& {s(N)} \ar[u] \\
{M}
\ar[r]^-{f_0}
\ar[ru]^-{f_1}
\ar[ruuu]^-{f_a}
& {N,} \ar[u]_{\alpha_N}
}
\quad
\xymatrix{
& {\omega(s^a(N))} \\
& {\vdots} \ar[u]_\sim \\
& {\omega(s(N))} \ar[u]_\sim \\
{\omega(M)}
\ar[r]^-{g_0}
\ar[ru]^-{g_1}
\ar[ruuu]^-{g_a}
& {\omega(N).} \ar[u]_{\omega(\alpha_N)}^\sim
}
\end{equation}
Assume that we are given a morphism $g_0$.
Since $\omega$ maps every $\alpha_?$ to an isomorphism (cf.\
\ref{enum:functor-omega-iii}), $g_0$ uniquely determines $g_1,\dots,
g_a$ such that the diagram on the right commutes.
Since
$M \in \tildew{\mathcal{T}}(\leq m)$ and $s^a(N) \in
\tildew{\mathcal{T}}(\geq n+a) \subset \tildew{\mathcal{T}}(\geq m)$,
\ref{enum:functor-omega-iv} implies that there is a unique $f_a$
satisfying $\omega(f_a)=g_a$. This $f_a$ yields
$f_{a-1}, \dots, f_1$ (uniquely) and $f_0$ (possibly non-uniquely)
such that the diagram on the left commutes
(use Lemma~\ref{l:alpha-hom-subtildeT}). Applying $\omega$ to the
diagram on the left shows that $\omega(f_0)=g_0$.
Now we prove that $\omega|_{\tildew{\mathcal{T}}^s}$ is surjective
on isoclasses of objects. Let $X \in \mathcal{T}$ be given.
Since the given weight structure is bounded there are $a,b \in \Bl{Z}$
such that $X \in \mathcal{T}^{w \in [a,b]}$. We prove the statement
by induction on $b-a$. If $a > b$ then $X=0$ and the statement is
obvious. Assume $a=b$. Then $s^a(i(X))$ is in $\tildew{\mathcal{T}}^s([a])$
by \eqref{eq:subtildeT-equiv} and we have $\omega(s^a(i(X)))\cong
\omega(i(X))\cong X$.
Now assume $a<b$. Choose $c \in \Bl{Z}$ with $a \leq c < b$ and
take a weight decomposition
\begin{equation}
\label{eq:omega-subtildeT-wdec}
w_{\geq c+1}X \rightarrow X \rightarrow w_{\leq c}X \rightarrow [1]w_{\geq c+1}X.
\end{equation}
Lemma~\ref{l:weight-str-basic-properties}
\eqref{enum:weights-bounded} shows that
$w_{\leq c}X \in \mathcal{T}^{w \in [a,c]}$
and $w_{\geq c+1}X \in \mathcal{T}^{w \in [c+1,b]}$.
By induction we can hence lift
$w_{\leq c}X \in \mathcal{T}^{w\in[a,c]}$
and $[1]w_{\geq c+1}X \in \mathcal{T}^{w \in [c,b-1]}$ to
objects
$\tildew{A} \in \tildew{\mathcal{T}}^s([a,c])$,
$\tildew{B} \in \tildew{\mathcal{T}}^s([c,b-1])$.
This shows that the triangle
\eqref{eq:omega-subtildeT-wdec} is isomorphic to
a triangle
\begin{equation}
\label{eq:omega-subtildeT-unten}
[-1]\omega(\tildew{B}) \rightarrow X \rightarrow \omega(\tildew{A}) \xra{g}
\omega(\tildew{B}).
\end{equation}
Since we have already proved fullness, there exists $f: \tildew{A} \rightarrow
\tildew{B}$ such that $\omega(f)=g$. We complete the composition
$\tildew{A} \xra{f} \tildew{B} \xra{\alpha_{\tildew{B}}}
s(\tildew{B})$ to a triangle
\begin{equation*}
[-1]s(\tildew{B}) \rightarrow \tildew{X} \rightarrow \tildew{A} \xra{\alpha\circ
f} s(\tildew{B}).
\end{equation*}
The image of this triangle under $\omega$ is isomorphic to triangle
\eqref{eq:omega-subtildeT-unten}.
In particular $X \cong \omega(\tildew{X})$.
Since $[-1]s(\tildew{B}) \in \tildew{\mathcal{T}}^s([c+1,b])$ and
$\tildew{A} \in \tildew{\mathcal{T}}^s([a,c])$, and since
$\tildew{\mathcal{T}}^s([a,b])$ is closed under extensions, we have
$\tildew{X} \in \tildew{\mathcal{T}}^s([a,b])$.
\end{proof}
In the following we view $\tildew{\mathcal{T}}^s$ as an additive
category with translation $[1]s^{-1}$. Proposition~\ref{p:functor-omega},
\ref{enum:functor-omega-iii} and the fact that $\omega$ is
triangulated turn
$\omega|_{\tildew{\mathcal{T}}^s}:\tildew{\mathcal{T}}^s \rightarrow
\mathcal{T}$ into a functor of additive categories with translation.
Recall (for example from \cite[A.3.1]{assem-simson-skowronski-1})
that a two-sided ideal $\mathcal{I}$ in an additive category
$\mathcal{A}$ is a subclass of
the class of all morphisms that is closed under addition and under
composition with arbitrary morphisms on either side. The quotient
$\mathcal{A}/\mathcal{I}$ has the same objects as $\mathcal{A}$, but
morphisms are identified if their difference is in the ideal. Then
$\mathcal{A}/\mathcal{I}$ is again an additive category and the
obvious \textbf{quotient functor} $\op{can}: \mathcal{A} \rightarrow
\mathcal{A}/\mathcal{I}$ has an obvious universal property.
If $F:\mathcal{A} \rightarrow \mathcal{B}$ is an additive functor of additive
categories, its kernel $\op{ker} F$ is the two-sided ideal given by
\begin{equation*}
(\op{ker} F)(A,A') = \op{ker}(F:\mathcal{A}(A,A') \rightarrow \mathcal{B}(FA,FA'))
\end{equation*}
for $A, A' \in \mathcal{A}$.
Define
$\mathcal{Q}:={\tildew{\mathcal{T}}^s/(\op{ker}
\omega|_{\tildew{\mathcal{T}}^s})}$
and let $\op{can}: \tildew{\mathcal{T}}^s \rightarrow \mathcal{Q}$ be the quotient
functor.
The functor $[1]s^{-1}: \tildew{\mathcal{T}}^s \rightarrow
\tildew{\mathcal{T}}^s$ descends to a functor $\mathcal{Q} \rightarrow
\mathcal{Q}$
denoted by the same symbol: $(\mathcal{Q}, [1]s^{-1})$ is an additive
category with translation and $\op{can}$ is a functor of such categories.
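Concretely, $\mathcal{Q}$ has the same objects as
$\tildew{\mathcal{T}}^s$, and its morphism spaces are
\begin{equation*}
\mathcal{Q}(A,A') = \tildew{\mathcal{T}}^s(A,A') \big/
\{f \in \tildew{\mathcal{T}}^s(A,A') \mid \omega(f)=0\}
\end{equation*}
for $A, A' \in \tildew{\mathcal{T}}^s$.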
\begin{proposition}
\label{p:omega-tildeTs-equiv}
The functor $\omega|_{\tildew{\mathcal{T}}^s}$ factors
uniquely
to an equivalence
\begin{equation*}
\ol{\omega}: (\mathcal{Q},[1]s^{-1}) \xra{\sim} (\mathcal{T},[1])
\end{equation*}
of additive categories with translation
(as indicated in diagram~\eqref{eq:omega-subtildeT-exp}).
\end{proposition}
\begin{proof}
This is clear from
Proposition~\ref{p:omega-on-subtildeT}.
\end{proof}
In any additive category with translation we have the notion of
candidate triangles and morphisms between them. Let
$\Delta_\mathcal{Q}$ be the class of all candidate triangles in
$\mathcal{Q}$ that are isomorphic to the image\footnote
{
We
assume that $[1]s^{-1} [-1]s =\op{id}$.
}
\begin{equation*}
\xymatrix{
{[-1]s(X)} \ar[r]^-{\ol{f}} &
{Y} \ar[r]^-{\ol{g}} &
{Z} \ar[r]^-{\ol{h}} &
{X} &
}
\end{equation*}
in $\mathcal{Q}$ of a sequence
\begin{equation*}
\xymatrix{
{[-1]s(X)} \ar[r]^-{{f}} &
{Y} \ar[r]^-{{g}} &
{Z} \ar[r]^-{{h}} &
{X} &
}
\end{equation*}
in $\tildew{\mathcal{T}}^s$ such that
\begin{equation*}
\xymatrix{
{[-1]s(X)} \ar[r]^-{f} &
{Y} \ar[r]^-{{g}} &
{Z} \ar[r]^-{{\alpha_X \circ h}} &
{s(X)} &
}
\end{equation*}
is a triangle in $\tildew{\mathcal{T}}$.
\begin{lemma}
\label{l:omega-bar-triang}
$(\mathcal{Q}, [1]s^{-1}, \Delta_\mathcal{Q})$ is a triangulated
category and $\ol{\omega}: \mathcal{Q} \rightarrow \mathcal{T}$ is a functor
of triangulated categories.
\end{lemma}
\begin{proof}
Since we already know that $\ol{\omega}$ is an equivalence of
additive categories with translation it is
sufficient to show that a candidate triangle in $\mathcal{Q}$ is in
$\Delta_{\mathcal{Q}}$ if and only if its image under $\ol{\omega}$
is a triangle in $\mathcal{T}$. This is an easy exercise left to the reader.
\end{proof}
\begin{remark}
There is another description of $\Delta_\mathcal{Q}$:
Let $\Delta'_\mathcal{Q}$ be the class of all candidate triangles in
$\mathcal{Q}$ that are isomorphic to the image
\begin{equation*}
\xymatrix{
{X} \ar[r]^-{\ol{f}} &
{Y} \ar[r]^-{\ol{g}} &
{Z} \ar[r]^-{\ol{h}} &
{[1]s^{-1}(X)} &
}
\end{equation*}
in $\mathcal{Q}$ of a sequence
\begin{equation*}
\xymatrix{
{X} \ar[r]^-{{f}} &
{Y} \ar[r]^-{{g}} &
{Z} \ar[r]^-{{h}} &
{[1]s^{-1}(X)} &
}
\end{equation*}
in $\tildew{\mathcal{T}}^s$ such that
\begin{equation*}
\xymatrix{
{s^{-1}(X)} \ar[r]^-{{f\circ \alpha_{s^{-1}(X)}}} &
{Y} \ar[r]^-{{g}} &
{Z} \ar[r]^-{{h}} &
{[1]s^{-1}(X)} &
}
\end{equation*}
is a triangle in $\tildew{\mathcal{T}}$.
Lemma~\ref{l:omega-bar-triang} is also true for the class
$\Delta'_\mathcal{Q}$ of triangles with essentially the same
proof. This shows that $\Delta_\mathcal{Q}=\Delta'_\mathcal{Q}$.
\end{remark}
Recall that $c$ and $h$ are functors of additive categories with
translation if we equip $\tildew{\mathcal{T}}$ (or
$\tildew{\mathcal{T}}^s$) with the
translation $[1]s^{-1}$.
This is used implicitly in the next
proposition.
\begin{proposition}
\label{p:cone-alpha-homotopic-to-zero}
Let $m:M\rightarrow N$ be a morphism in $\tildew{\mathcal{T}}^s$. Assume
that $\alpha_N \circ m$ appears as the last morphism in a triangle
\begin{equation}
\label{eq:alpha-triangle-MN}
\xymatrix{
{[-1]s(N)} \ar[r]^-{u} &
{Q} \ar[r]^-{v} &
{M} \ar[r]^-{\alpha_N \circ m} &
{s(N)}
}
\end{equation}
in $\tildew{\mathcal{T}}$.
Then $Q \in \tildew{\mathcal{T}}^s$,
the functor
$c:\tildew{\mathcal{T}}^s \rightarrow C^b({\heartsuit}(w))$
maps the sequence
\begin{equation}
\label{eq:alpha-lift-triangle-MN}
\xymatrix{
{[-1]s(N)} \ar[r]^-{u} &
{Q} \ar[r]^-{v} &
{M} \ar[r]^-{m} &
{N}
}
\end{equation}
in $\tildew{\mathcal{T}}^s$
to a sequence that is isomorphic to
\begin{equation}
\label{eq:h-triangulated}
\xymatrix{
{\Sigma^{-1} (c(N))} \ar[r]^-{\svek {1}0} &
{\Sigma^{-1} (\op{Cone}(-c(m)))} \ar[r]^-{\zvek 0{1}} &
{c(M)} \ar[r]^-{c(m)} &
{c(N)}
}
\end{equation}
in $C^b({\heartsuit}(w))$, and
the
functor $h:\tildew{\mathcal{T}}^s \rightarrow K^b({\heartsuit}(w))^{\op{anti}}$ maps
\eqref{eq:alpha-lift-triangle-MN} to a triangle in ${K^b({\heartsuit}(w))}^{\op{anti}}$.
In the particular case that $m=\op{id}_N:N \rightarrow N$ we have $h(Q)=0$ in
$K^b({\heartsuit}(w))^{\op{anti}}$, i.\,e.\
$\op{id}_{c(Q)}$ is homotopic to zero.
\end{proposition}
\begin{proof}
Since $\tildew{\mathcal{T}}^s$ is stable under $[-1]s$ and closed
under extensions, the first three objects in
the triangle \eqref{eq:alpha-triangle-MN}
are in $\tildew{\mathcal{T}}^s$.
When computing $c=c''\circ c'$ in this proof we use the second
approach
to $c'$ described at the end of
Section~\ref{sec:first-step-cprime}.
\textbf{First case: $m$ is a morphism in $\tildew{\mathcal{T}}^s([a])$:}
We start with the case that $m:M \rightarrow N$ is in
$\tildew{\mathcal{T}}^s([a])$ for some $a \in \Bl{Z}$.
Assume that we are given a triangle
\begin{equation}
\label{eq:alpha-triangle-MN-gr-a}
\xymatrix{
{[-1]s(N)} \ar[r]^-{u'} &
{L} \ar[r]^-{v'} &
{M} \ar[r]^-{\alpha_N \circ m} &
{s(N).}
}
\end{equation}
in $\tildew{\mathcal{T}}$ (cf.\ \eqref{eq:alpha-triangle-MN}; we
write $L$ instead of $Q$ here for
notational reasons that will become clear later on).
Note that $\op{range}(L) \subset [a,a+1]$ and $\op{range}([-1]s(N)) \subset [a+1]$.
Let
\begin{equation*}
\xymatrix{
{\sigma_{a+1}(L)} \ar[r]^-{u''} &
{L} \ar[r]^-{v''} &
{\sigma_{a}(L)} \ar[r]^-{\tildew{d}^a_{L}} &
{[1]\sigma_{a+1} ({L}).}
}
\end{equation*}
be the triangle\footnote{
We assume in this proof without loss of generality that $\sigma_{\geq n}$
(more precisely $g^n:\sigma_{\geq n} \rightarrow \op{id}$) is the identity
on objects of
$\tildew{\mathcal{T}}(\geq n)$, and similarly for $k^n$.
This gives for example $\sigma_{[a,a+1]}(L)=L$.
}
constructed
in the second approach to $c'(L)$, cf.\
\eqref{eq:sigma-truncation-for-tilde-d}.
This triangle is uniquely isomorphic to
the
triangle
\eqref{eq:alpha-triangle-MN-gr-a} by an isomorphism extending
$\op{id}_{L}$
(use Prop.~\ref{p:BBD-1-1-9-copied-for-w-str}):
\begin{equation}
\label{eq:isom-trunc-extended-alpha}
\xymatrix{
{\sigma_{a+1}(L)} \ar[r]^-{u''} \ar[d]^{p}_{\sim} &
{L} \ar[r]^-{v''} \ar@{=}[d] &
{\sigma_{a}(L)} \ar[r]^-{\tildew{d}^a_{L}}
\ar[d]^{q}_{\sim} &
{[1]\sigma_{a+1} (L)} \ar[d]^{[1]p}_{\sim}\\
{{[-1]s(N)}} \ar[r]^-{u'} &
{{L}} \ar[r]^-{v'} &
{{M}} \ar[r]^-{\alpha_N \circ m} &
{{s(N)}.}
}
\end{equation}
Proposition~\ref{p:BBD-1-1-9-copied-for-w-str} even characterizes $p$
as the unique morphism
making the square on the left commutative (and similarly for
$p^{-1}$), and $q$ as the unique
morphism making the square
in the middle commutative.
In order to compute the differential of $c'(L)$ in terms of $m$ and
to identify $p$ and $q$ we follow the second approach to $c'$
described in
Section~\ref{sec:construction-functor-cfun}:
We apply to the
triangle \eqref{eq:alpha-triangle-MN-gr-a} the sequence of functors
in the left column of the following diagram
(cf.\ \eqref{eq:sigma-truncation-for-tilde-d})
and obtain in this way the rest of the diagram:
\begin{equation}
\label{eq:alpha-triangle-MN-gr-a-sigma}
\xymatrix{
\sigma_{a+1} \ar[d] &
{[-1]s(N)} \ar[r]^{\sigma_{a+1}(u')} \ar@{=}[d]&
{\sigma_{a+1}(L)} \ar[r] \ar[d]^-{u''} &
{0} \ar[r] \ar[d] &
{s(N)} \ar@{=}[d] \\
\sigma_{[a,a+1]} \ar[d] &
{[-1]s(N)} \ar[r]^-{u'} \ar[d]&
{L} \ar[r]^-{v'} \ar[d]^-{v''} &
{M} \ar[r]^-{\alpha_N \circ m} \ar@{=}[d] &
{s(N)} \ar[d] \\
\sigma_{a} \ar[d]^-{\tildew{d}^a} &
{0} \ar[r] \ar[d]&
{\sigma_a(L)} \ar[r]^-{\sigma_a(v')} \ar[d]^-{\tildew{d}^a_{L}} &
{M} \ar[r] \ar[d] &
{0} \ar[d] \\
[1]\sigma_{a+1} &
{s(N)}
\ar[r]^{[1]\sigma_{a+1}(u')} &
{[1]\sigma_{a+1}(L)} \ar[r] &
{0} \ar[r] &
{[1]s(N)}
}
\end{equation}
From the above remarks we see that $\sigma_{a+1}(u')=p^{-1}$ and
$\sigma_a(v')=q$
(this can also be seen directly from
\eqref{eq:isom-trunc-extended-alpha} using the adjunctions but we
wanted to include the above diagram).
If we also use the commutativity of the square on the right in
\eqref{eq:isom-trunc-extended-alpha} we obtain
\begin{equation}
\label{eq:differential-eL-gr-supp-singleton}
([1]\sigma_{a+1}(u'))^{-1} \circ \tildew{d}^a_{L} \circ
(\sigma_a(v'))^{-1}
= ([1]p) \circ \tildew{d}^a_{L} \circ q^{-1}
= \alpha_N \circ m.
\end{equation}
We will need precisely this formula later on.
It means that (up to unique isomorphisms) the differential
$\tildew{d}^a_{L}$ is $\alpha_N \circ m$.
Let us finish the proof of the proposition in this special case
in order to convince the
reader that we got all signs correct.
These results show that the image of the sequence
\begin{equation}
\label{eq:lift-of-triangle-MN-gr-a}
\xymatrix{
{{[-1]s(N)}} \ar[r]^-{u'} &
{{L}} \ar[r]^-{v'} &
{{M}} \ar[r]^-{m} &
{{N}.}
}
\end{equation}
in $\tildew{\mathcal{T}}^s$
under the functor $c'$
is the sequence
\begin{equation*}
\xymatrix{
{0} \ar[r] \ar[d]&
{\sigma_a(L)} \ar[r]^-{q} \ar[d]^-{\tildew{d}^a_{L}} &
{M} \ar[r]^-{m} \ar[d] &
{N} \ar[d] \\
{s(N)}
\ar[r]^{[1]p^{-1}} &
{[1]\sigma_{a+1}(L)} \ar[r] &
{0} \ar[r] &
{0}
}
\end{equation*}
in $C^b_\Delta(\tildew{\mathcal{T}})$, where we draw the complexes
vertically, displaying only the components of degree $a$ in the upper
row and of degree $a+1$ in the lower row (all other components are
zero).
This sequence is isomorphic, via the isomorphism $(\op{id}, \svek
q{[1]p}, \op{id}, \op{id})$, to the following sequence (use
\eqref{eq:differential-eL-gr-supp-singleton}):
\begin{equation*}
\xymatrix{
{0} \ar[r] \ar[d]&
{M} \ar[r]^-{\op{id}_M} \ar[d]^-{\alpha_N \circ m} &
{M} \ar[r]^-{m} \ar[d] &
{N} \ar[d] \\
{s(N)}
\ar[r]^{\op{id}_{s(N)}} &
{s(N)} \ar[r] &
{0} \ar[r] &
{0.}
}
\end{equation*}
If we now apply the functor $c''$ we see that
the image of \eqref{eq:lift-of-triangle-MN-gr-a} under $c$ is
isomorphic to
\begin{equation*}
\xymatrix{
{0} \ar[r] \ar[d]&
{s^{-a}(M)} \ar[r]^-{\op{id}} \ar[d]^-{s^{-a}(m)} &
{s^{-a}(M)} \ar[r]^-{s^{-a}(m)} \ar[d] &
{s^{-a}(N)} \ar[d] \\
{s^{-a}(N)}
\ar[r]^{\op{id}} &
{s^{-a}(N)} \ar[r] &
{0} \ar[r] &
{0}
}
\end{equation*}
which is precisely
\eqref{eq:h-triangulated}.
If we pass to the homotopy category we obtain a candidate triangle
that has the same shape as
\eqref{eq:mapcone-minus-m-rot2} but with all morphisms multiplied
by $-1$: it is a triangle in $K^b({\heartsuit}(w))^{\op{anti}}$.
This finishes the proof in the case that
$m:M \rightarrow N$ is in $\tildew{\mathcal{T}}^s([a])$ for some $a \in \Bl{Z}$.
\textbf{Observation:}
Recall that the first three objects of triangle
\eqref{eq:alpha-triangle-MN} are in $\tildew{\mathcal{T}}^s$. If we
apply the triangulated
functor $\sigma_b$ (for any $b \in \Bl{Z}$), we obtain a triangle with
first three terms isomorphic to objects in $s^b(i(\mathcal{T}^{w=b}))$
by \eqref{eq:subtildeT-equiv};
since there are only trivial extensions in the heart of a weight
structure
(see Lemma~\ref{l:weight-str-basic-properties}), this triangle
is isomorphic to the obvious direct sum triangle.
We will construct explicit splittings.
\textbf{The general case.}
Let $m:M \rightarrow N$ be a morphism in $\tildew{\mathcal{T}}^s$.
Let $a \in \Bl{Z}$ be arbitrary.
Axiom \ref{enum:filt-tria-cat-3x3-diag}
(we use the variant \eqref{eq:filt-tria-cat-nine-diag-sigma}
for the morphism $\sigma_{\leq a}(m): X=\sigma_{\leq a}(M) \rightarrow Y =
\sigma_{\leq a}(N)$ and $n=a-1$) gives a $3\times 3$-diagram
\begin{equation}
\label{eq:sigma-leq-a-N-slice-off}
\xymatrix{
{[-1]s({\sigma_{a}(N)})} \ar[r]^-{u'_a}
\ar[d]^-{[-1]s(g^a)}
&
{L_a} \ar[r]^-{v'_a} \ar[d]^-{f'_a} &
{{\sigma_{a}(M)}} \ar[rr]^-{\alpha_{\sigma_{a}(N)} \circ \sigma_a(m)}
\ar[d]^-{g^a}
&&
{s({\sigma_{a}(N)})}
\ar@{..>}[d]^-{s(g^a)}
\\
{[-1]s({\sigma_{\leq a}(N)})} \ar[r]^-{u_a} \ar[d]^-{[-1]s({k_N^{a-1,a}})} &
{Q_a} \ar[r]^-{v_a} \ar[d]^-{f_{a-1,a}} &
{{\sigma_{\leq a}(M)}} \ar[rr]^-{\alpha_{\sigma_{\leq a}(N)}
\circ \sigma_{\leq a}(m)} \ar[d]^-{k_M^{a-1,a}} &&
{s({\sigma_{\leq a}(N)})} \ar@{..>}[d]^-{s({k_N^{a-1,a}})} \\
{[-1]s({\sigma_{\leq a-1}(N)})} \ar[r]^-{u_{a-1}}
\ar[d]
&
{Q_{a-1}} \ar[r]^-{v_{a-1}} \ar[d] &
{{\sigma_{\leq a-1}(M)}} \ar[rr]^-{\alpha_{\sigma_{\leq a-1}(N)}
\circ \sigma_{\leq a-1}(m)}
\ar[d]
\ar@{}[rrd]|-{\circleddash} & &
{s({\sigma_{\leq a-1}(N)})}
\ar@{..>}[d]
\\
{s({\sigma_{a}(N)})} \ar@{..>}[r] &
{[1]L_a} \ar@{..>}[r] &
{[1]{\sigma_{a}(M)}}
\ar@{..>}[rr]^-{[1](\alpha_{\sigma_{a}(N)}\circ \sigma_a(m))} &&
{[1]s({\sigma_{a}(N)})}
}
\end{equation}
where we write $k_?^{a-1,a}$ for the adjunction morphism
$k^{a-1}_{\sigma_{\leq a}?}$ and $g^a$ for the adjunction morphisms
$g^{a}_{\sigma_{\leq a}M}$ and $g^{a}_{\sigma_{\leq a}N}$.
We fix such a $3 \times 3$-diagram for any $a \in \Bl{Z}$.
By \ref{enum:filt-tria-cat-exhaust} we find $A \in \Bl{Z}$ such that
$M, N \in \tildew{\mathcal{T}}(\leq A)$. Then the top and bottom row in
diagram
\eqref{eq:sigma-leq-a-N-slice-off} are zero for $a>A$, and the two
rows in the middle are connected by an isomorphism of triangles.
It is easy to see (first define $f_A$, then $f_{A-1}=f_{A-1,A}f_A$
etc.) that there are morphisms
$f_a: Q \rightarrow Q_a$
(for any $a \in \Bl{Z}$) such that
$f_{a-1,a} f_a = f_{a-1}$ holds and such that
\begin{equation}
\label{eq:N-to-sigma-leq-a-second}
\xymatrix{
{{[-1]s(N)}} \ar[r]^-u \ar[d]^-{[-1]s(k_N^a)} &
{{Q}} \ar[r]^-v \ar[d]^-{f_a} &
{{M}} \ar[r]^-{\alpha_N \circ m} \ar[d]^-{k_M^a} &
{{s(N)}} \ar[d]^-{s(k_N^a)} \\
{{[-1]s(\sigma_{\leq a}(N))}} \ar[r]^-{u_a} &
{{Q_a}} \ar[r]^-{v_a} &
{{\sigma_{\leq a}(M)}} \ar[r]^-{\alpha_{\sigma_{\leq a}(N)}\circ
\sigma_{\leq a}(m)} &
{{s(\sigma_{\leq a}(N))}}
}
\end{equation}
is a morphism of triangles, where
the top triangle is \eqref{eq:alpha-triangle-MN}, the bottom triangle
is the second horizontal triangle from above in
\eqref{eq:sigma-leq-a-N-slice-off} and $k_?^a$ are the adjunction
morphisms.
Since $k_?^{a-1,a} k_?^a = k_?^{a-1}$ holds
(under the usual identifications, cf.\
\eqref{eq:sigma-truncation-two-parameters}),
the morphisms of triangles
\eqref{eq:N-to-sigma-leq-a-second} (for $a \in \Bl{Z}$)
are compatible with the morphisms between the two middle rows of
\eqref{eq:sigma-leq-a-N-slice-off}.
We claim that $\sigma_b(f_a)$ is an isomorphism for all
$b \leq a$: Applying the triangulated functor $\sigma_b$ to
\eqref{eq:N-to-sigma-leq-a-second}
gives a morphism of triangles
in which two components are isomorphisms
by \eqref{eq:a-geq-b-sigma-isos}
(and \eqref{eq:sgle-eq-s-comm}); hence it is an isomorphism of
triangles and in particular $\sigma_b(f_a)$ is an isomorphism.
This shows that any $f_a$ induces an isomorphism between
the parts of the complexes
$c(Q)$ and $c(Q_a)$ in degrees $\leq a$:
Apply $\tildew{d}^b: \sigma_b \rightarrow [1] \sigma_{b+1}$ to $f_a$ for
all $b < a$.
Note that the three horizontal triangles
in \eqref{eq:sigma-leq-a-N-slice-off}
are of the type considered
in the observation. Applying $\sigma_b$ (any $b \in \Bl{Z}$) to them
yields triangles with vanishing connecting morphisms.
Hence we omit these morphisms in the
following diagram, which is
$\sigma_a$ applied to the upper left nine entries of
\eqref{eq:sigma-leq-a-N-slice-off} (we again use some canonical
identifications):
\begin{equation}
\label{eq:sigma-leq-a-N-slice-off-sigma-a}
\xymatrix@=1.5cm{
{0} \ar[r] \ar[d] &
{{\sigma_a(L_a)}} \ar[r]^-{\sigma_a(v'_a)}_-\sim
\ar[d]_-{\sigma_a(f'_a)} &
{{\sigma_{a}(M)}} \ar[d]^-{\op{id}}
\ar@/_1pc/@{..>}[dl]^-{\delta_a} \\
{{\sigma_a([-1]s(N))}} \ar[r]^-{\sigma_a(u_a)}
\ar[d]^-{\op{id}}
&
{{\sigma_a(Q_a)}} \ar[r]^-{\sigma_a(v_a)}
\ar[d]^-{\sigma_a(f_{a-1,a})}
\ar@/^1pc/@{..>}[dl]_-{\epsilon_a}
&
{{\sigma_{a}(M)}} \ar[d] \\
{{\sigma_a([-1]s(N))}} \ar[r]_-{\sigma_a(u_{a-1})}^-\sim &
{{\sigma_a(Q_{a-1})}} \ar[r] &
{0}
}
\end{equation}
and where
the dotted arrows are defined by
\begin{align*}
\epsilon_a & := (\sigma_a(u_{a-1}))^{-1} \circ \sigma_a(f_{a-1,a}),\\
\delta_a & := \sigma_a(f'_a) \circ (\sigma_a(v'_a))^{-1}.
\end{align*}
So the four honest triangles with one dotted arrow commute, in
particular
$\epsilon_a \circ \sigma_a(u_a)=\op{id}$ and $\sigma_a(v_a)\circ
\delta_a =\op{id}$.
Note that $\epsilon_a \circ \delta_a=0$ since the middle column in
\eqref{eq:sigma-leq-a-N-slice-off-sigma-a}
is part of a triangle
(this comes from axiom~\ref{enum:filt-tria-cat-3x3-diag} used
above).
Lemma~\ref{l:zero-triang-cat}
gives an explicit splitting
of the middle
row of
\eqref{eq:sigma-leq-a-N-slice-off-sigma-a}:
\begin{equation}
\label{eq:explicit-iso-sigma-a-Q-a}
\xymatrix{
{{\sigma_a([-1]s(N))}} \ar[r]^-{\sigma_a(u_a)}
\ar@{=}[d]
&
{{\sigma_a(Q_a)}} \ar[r]^-{\sigma_a(v_a)}
\ar[d]^-{\svek{\epsilon_a}{\sigma_a(v_a)}}_-{\sim}
&
{{\sigma_{a}(M)}} \ar@{=}[d] \\
{{\sigma_a([-1]s(N))}} \ar[r]^-{\svek 10}
&
{{\sigma_a([-1]s(N))} \oplus \sigma_{a}(M)}
\ar[r]^-{\zvek 01}
&
{{\sigma_{a}(M)}}
}
\end{equation}
and states that $\zvek{\sigma_a(u_a)}{\delta_a}$ is inverse to
$\svek{\epsilon_a}{\sigma_a(v_a)}$.
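For the record, here is a sketch of the easy direction of this check;
it uses $\epsilon_a \circ \sigma_a(u_a)=\op{id}$, $\sigma_a(v_a)\circ
\delta_a =\op{id}$ and $\epsilon_a \circ \delta_a=0$ from above,
together with $\sigma_a(v_a) \circ \sigma_a(u_a)=\sigma_a(v_a \circ
u_a)=0$ ($u_a$ and $v_a$ are consecutive maps in a triangle):
\begin{equation*}
\svek{\epsilon_a}{\sigma_a(v_a)} \circ \zvek{\sigma_a(u_a)}{\delta_a}
= \tzmat{\epsilon_a \circ \sigma_a(u_a)}{\epsilon_a \circ \delta_a}
{\sigma_a(v_a) \circ \sigma_a(u_a)}{\sigma_a(v_a) \circ \delta_a}
= \tzmat{\op{id}}{0}{0}{\op{id}}.
\end{equation*}
The other composition is the identity by Lemma~\ref{l:zero-triang-cat}.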
Our aim now is to compute the morphisms which will yield the
differential of $c(Q)$ using this explicit direct sum
decomposition.
We explain the following diagram (which is commutative without the
dotted arrows).
\begin{equation}
\label{eq:start-analysis-differential-Q}
\xymatrix@=1.5cm{
{\sigma_{a-1}([-1]s(N))} \ar[r]^-{\sigma_{a-1}(u_{a-1})}
&
{\sigma_{a-1}(Q_{a-1})} \ar[r]^-{\sigma_{a-1}(v_{a-1})}
\ar@/^2pc/@{..>}[l]^-{\epsilon_{a-1}}
&
{{\sigma_{a-1}(M)}}
\ar@/_2pc/@{..>}[l]_-{\delta_{a-1}}
\\
{\sigma_{a-1}([-1]s(N))} \ar[r]^-{\sigma_{a-1}(u_a)}
\ar@{=}[u]
\ar[d]^{\tildew{d}^{a-1}_{[-1]s(N)}}
&
{\sigma_{a-1}(Q_a)} \ar[r]^-{\sigma_{a-1}(v_a)}
\ar[u]_-{\sigma_{a-1}(f_{a-1,a})}^-{\sim}
\ar[d]^{\tildew{d}^{a-1}_{Q_a}} &
{{\sigma_{a-1}(M)}}
\ar@{=}[u]
\ar[d]^{\tildew{d}^{a-1}_{M}}
\\
{{[1]\sigma_a([-1]s(N))}} \ar[r]^-{[1]\sigma_a(u_a)}
&
{{[1]\sigma_a(Q_a)}} \ar[r]^-{[1]\sigma_a(v_a)}
\ar@/^2pc/@{..>}[l]^-{[1]\epsilon_a}
&
{{[1]\sigma_{a}(M)}}
\ar@/_2pc/@{..>}[l]_-{[1]\delta_{a}}
}
\end{equation}
The first two rows are (the horizontal mirror image of)
$\sigma_{a-1}$ applied to the two middle rows of
\eqref{eq:sigma-leq-a-N-slice-off}.
The last
row is $[1]$ applied to the middle row of
\eqref{eq:sigma-leq-a-N-slice-off-sigma-a}. The morphism between
second and third row is that from
\eqref{eq:sigma-truncation-for-tilde-d}.
Note that we have split the first and third row explicitly before.
This diagram shows that
\begin{equation*}
\tildew{d}^{a-1}_{Q}: \sigma_{a-1}(Q) \rightarrow [1]\sigma_a (Q)
\end{equation*}
has the form
\begin{equation}
\label{eq:partial-differential-Q}
\begin{bmatrix}
{\tildew{d}^{a-1}_{[-1]s(N)}} & {\kappa^{a-1}} \\ {0} &
{{\tildew{d}^{a-1}_{M}}}
\end{bmatrix}
\end{equation}
if we identify
\begin{equation*}
\xymatrix@C=2cm{
{\sigma_{a-1}(Q)} \ar[r]^-{\sigma_{a-1}(f_{a-1})}_-{\sim} &
{\sigma_{a-1}(Q_{a-1})}
\ar[r]^-{\svek{\epsilon_{a-1}}{\sigma_{a-1}(v_{a-1})}}_-{\sim} &
{\sigma_{a-1}([-1]s(N)) \oplus \sigma_{a-1}(M)}
}
\end{equation*}
and
\begin{equation*}
\xymatrix@C=2cm{
{[1]\sigma_a(Q)} \ar[r]^-{[1]\sigma_a(f_{a})}_-{\sim} &
{[1]\sigma_a(Q_a)}
\ar[r]^-{\svek{[1]\epsilon_{a}}{[1]\sigma_{a}(v_a)}}_-{\sim} &
{{{[1]\sigma_a([-1]s(N))} \oplus [1]\sigma_{a}(M)}}
}
\end{equation*}
along
\eqref{eq:explicit-iso-sigma-a-Q-a}, for some morphism
\begin{equation*}
\kappa^{a-1}: \sigma_{a-1}(M) \rightarrow
[1]\sigma_a([-1]s(N)),
\end{equation*}
which we will determine now; our aim is to prove
\eqref{eq:kappa-identified} below.
We apply to the commutative diagram
\begin{equation*}
\xymatrix{
{L_{a-1}} \ar[r]^-{f'_{a-1}} &
{Q_{a-1}} &
{Q_a} \ar[l]^-{f_{a-1,a}} &
{Q} \ar[l]^-{f_{a}} \ar@/_1.5pc/[ll]_-{f_{a-1}}
}
\end{equation*}
the morphism ${\tildew{d}^{a-1}}: \sigma_{a-1} \rightarrow
[1]\sigma_a$ of functors and obtain the
middle part of
the following commutative diagram (the rest will
be explained below):
\begin{equation*}
\hspace{-0.8cm}
\xymatrix@=1.5cm{
{{\sigma_{a-1}(M)}} \ar[dr]^-{\delta_{a-1}}\\
{\sigma_{a-1}(L_{a-1})} \ar[r]^-{\sigma_{a-1}(f'_{a-1})}
\ar[d]^-{\tildew{d}^{a-1}_{L_{a-1}}}
\ar[u]^-{\sigma_{a-1}(v'_{a-1})}_-{\sim} &
{\sigma_{a-1}(Q_{a-1})}
\ar[d]^-{\tildew{d}^{a-1}_{Q_{a-1}}} &
{\sigma_{a-1}(Q_a)} \ar[l]_-{\sigma_{a-1}(f_{a-1,a})}^-{\sim}
\ar[d]^-{\tildew{d}^{a-1}_{Q_{a}}} &
{\sigma_{a-1}(Q)} \ar[l]_-{\sigma_{a-1}(f_{a})}^-{\sim}
\ar@/_2.5pc/[ll]_-{\sigma_{a-1}(f_{a-1})}^-{\sim}
\ar[d]^-{\tildew{d}^{a-1}_{Q}} \\
{[1]\sigma_a(L_{a-1})} \ar[r]^-{[1]\sigma_a(f'_{a-1})}_-{\sim} &
{[1]\sigma_a(Q_{a-1})} &
{[1]\sigma_a(Q_a)} \ar[l]_-{[1]\sigma_a(f_{a-1,a})}
\ar[dl]^-{[1]\epsilon_a} &
{[1]\sigma_a(Q)} \ar[l]_-{[1]\sigma_a(f_{a})}^-{\sim}\\
{{[1]\sigma_a([-1]s(N))}} \ar[u]_-{[1]\sigma_a(u'_{a-1})}^-\sim
\ar@{=}[r] &
{{[1]\sigma_a([-1]s(N))}} \ar[u]_-{[1]\sigma_a(u_{a-1})}^-\sim
}
\end{equation*}
The honest triangles with diagonal sides $\delta_{a-1}$ and $[1]\epsilon_a$
respectively are commutative by
\eqref{eq:sigma-leq-a-N-slice-off-sigma-a}. The
lower left square commutes since it is (up to rotation)
$[1]\sigma_a$ applied to
the upper left square
in \eqref{eq:sigma-leq-a-N-slice-off} (after substituting $a-1$ for
$a$ there). Note that $[1]\sigma_a(u'_{a-1})$ is an isomorphism
since $\sigma_{a}(\sigma_{a-1}(M))=0$.
It is immediate from this diagram that $\kappa^{a-1}$ is the
downward vertical composition in the left column of this diagram,
i.\,e.
\begin{equation}
\label{eq:kappa-nearly-identified}
\kappa^{a-1}= ([1]\sigma_a(u'_{a-1}))^{-1} \circ
\tildew{d}^{a-1}_{L_{a-1}} \circ (\sigma_{a-1}(v'_{a-1}))^{-1}.
\end{equation}
Recall that the considerations
in the first case
gave formula
\eqref{eq:differential-eL-gr-supp-singleton};
if we apply them to the morphism $\sigma_{a-1}(m):\sigma_{a-1}(M)
\rightarrow \sigma_{a-1}(N)$
in $\tildew{\mathcal{T}}^s([a-1])$ and the top horizontal
triangle in
\eqref{eq:sigma-leq-a-N-slice-off} (with $a$ replaced by $a-1$)
(which plays the role of
\eqref{eq:alpha-triangle-MN-gr-a}),
this formula describes the right hand side of
\eqref{eq:kappa-nearly-identified}
and hence yields
\begin{equation}
\label{eq:kappa-identified}
\kappa^{a-1}= \alpha_{\sigma_{a-1}(N)} \circ \sigma_{a-1}(m).
\end{equation}
Let us sum up what we know:
We use from now on tacitly the
identifications
\begin{equation*}
\sigma_a(Q)\xra{\sim}
\sigma_a([-1]s(N)) \oplus \sigma_{a}(M)
\end{equation*}
given by $\sigma_a(f_a)$ and \eqref{eq:explicit-iso-sigma-a-Q-a}.
Then the morphism
$\tildew{d}^{a}_{Q}: \sigma_{a}(Q) \rightarrow [1]\sigma_{a+1} (Q)$
is given by the matrix
\begin{equation*}
\tildew{d}^{a}_{Q} =
\begin{bmatrix}
{\tildew{d}^{a}_{[-1]s(N)}} & {\alpha_{\sigma_a(N)} \circ
\sigma_{a}(m)} \\
{0} & {{\tildew{d}^{a}_{M}}}
\end{bmatrix}.
\end{equation*}
Shifting accordingly, this describes the complex $c'(Q)$
completely. Moreover, the
morphisms $c'(u): c'([-1]s(N)) \rightarrow c'(Q)$
and $c'(v):c'(Q) \rightarrow c'(M)$ become identified with the inclusion
$\svek 10$ of the first summand and the projection
$\zvek 01$ onto the second summand respectively.
Then it is clear that $c(Q)=c''(c'(Q))$
is given by (we assume that $i$
is just the inclusion $\tildew{\mathcal{T}}([0]) \subset
\tildew{\mathcal{T}}$):
\begin{align*}
c(Q)^a & =c([-1]s(N))^{a} \oplus c(M)^{a},\\
d_{c(Q)}^a & =
\begin{bmatrix}
{d^{a}_{c([-1]s(N))}} & {c(m)^a}\\
0 & {d^{a}_{c(M)}}
\end{bmatrix}.
\end{align*}
Using the canonical identification $c([-1]s(N))\cong \Sigma^{-1}
c(N)$ we see that $c$ maps
\eqref{eq:alpha-lift-triangle-MN} to
\eqref{eq:h-triangulated}.
As before this becomes a triangle
in $K^b({\heartsuit}(w))^{\op{anti}}$,
cf.\ \eqref{eq:mapcone-minus-m-rot2}.
\end{proof}
\begin{corollary}
\label{c:e-prime-zero-on-kernel}
For all $M, N \in \tildew{\mathcal{T}}^s$,
\begin{equation*}
h:\op{Hom}_{\tildew{\mathcal{T}}^s}(M,N) \rightarrow
\op{Hom}_{K^b({\heartsuit}(w))^{\op{anti}}}(h(M), h(N))
\end{equation*}
vanishes
on
the kernel of
$\omega|_{\tildew{\mathcal{T}}^s}:\op{Hom}_{\tildew{\mathcal{T}}^s}(M,N) \rightarrow
\op{Hom}_{\mathcal{T}}(\omega(M), \omega(N))$.
\end{corollary}
\begin{proof}
(We argue as in the proof of Proposition
\ref{p:omega-on-subtildeT}.)
Let $M, N \in \tildew{\mathcal{T}}^s$.
By \ref{enum:filt-tria-cat-exhaust} we find $m, n \in \Bl{Z}$ such that
$M \in \tildew{\mathcal{T}}(\leq m)$ and $N \in
\tildew{\mathcal{T}}(\geq n)$.
Choose $a \in \Bl{Z}$ satisfying $a \geq m-n$.
Consider the following commutative diagram
where the epimorphisms and isomorphisms in the upper row come
from Lemma~\ref{l:alpha-hom-subtildeT}, the isomorphisms in the
lower row from Proposition~\ref{p:functor-omega},
\ref{enum:functor-omega-iii}, and the vertical morphisms are
application of $\omega$;
the vertical morphism on the right is an isomorphism by
Proposition~\ref{p:functor-omega}, \ref{enum:functor-omega-iv}:
\begin{equation*}
\xymatrix{
{\tildew{\mathcal{T}}(M,N)} \ar@{->>}[r]^{\alpha_N \circ ?} \ar[d]^{\omega} &
{\tildew{\mathcal{T}}(M,s(N))} \ar[r]^-{\sim} \ar[d]^{\omega} &
{\tildew{\mathcal{T}}(M,s^a(N))} \ar[d]^{\omega}_{\sim} \\
{{\mathcal{T}}(\omega(M),\omega(N))} \ar[r]^-{\sim} &
{{\mathcal{T}}(\omega(M),\omega(s(N)))} \ar[r]^-{\sim} &
{{\mathcal{T}}(\omega(M),\omega(s^a(N)))}
}
\end{equation*}
This diagram shows that the kernel of the vertical arrow $\omega$ on
the left coincides with the kernel of the horizontal arrow
$(\alpha_N \circ ?)$.
Let $f: M \rightarrow N$ be a morphism in $\tildew{\mathcal{T}}^s$ and
assume that $\omega(f)=0$.
Since $\alpha_N \circ f=0$, the morphism $f$ factors through $v$ via a morphism $f'$ as
indicated in the following commutative diagram
\begin{equation*}
\xymatrix{
&&
{M} \ar[d]^f \ar@{..>}[ld]_{f'}
\\
{[-1]s(N)} \ar[r]^-{u} &
{Q} \ar[r]^-{v} &
{N} \ar[r]^-{\alpha_N} &
{s(N),}
}
\end{equation*}
where the lower horizontal row is a completion of $\alpha_N$ into a triangle.
Proposition~\ref{p:cone-alpha-homotopic-to-zero} shows that
$h(Q)=0$ and hence $h(f)=0$.
\end{proof}
\begin{corollary}
\label{c:h-factors-triang}
As indicated in diagram~\eqref{eq:omega-subtildeT-exp},
the functor $h$ of additive categories with translation factors
uniquely to a triangulated functor
\begin{equation*}
\ol{h}: (\mathcal{Q},[1]s^{-1}, \Delta_\mathcal{Q}) \rightarrow K^b({\heartsuit}(w))^{\op{anti}}.
\end{equation*}
\end{corollary}
\begin{proof}
Corollary~\ref{c:e-prime-zero-on-kernel} shows that $h$ factors
uniquely to a functor $\ol{h}$ of additive categories with
translation.
Proposition~\ref{p:cone-alpha-homotopic-to-zero}
together with the description of the class of triangles
$\Delta_\mathcal{Q}$ before Lemma~\ref{l:omega-bar-triang} show that
$\ol{h}$ is a triangulated functor.
\end{proof}
\begin{proof}
[Proof of Thm.~\ref{t:strong-weight-cplx-functor}]
We first use Proposition~\ref{p:omega-on-subtildeT}:
For any object $X \in \mathcal{T}$ we fix an object $\tildew{X} \in
\tildew{\mathcal{T}}^s$ and an isomorphism $X \cong
\omega(\tildew{X})$. Then we fix for any
morphism $f:X \rightarrow Y$ in
$\mathcal{T}$ a morphism $\tildew{f}:\tildew{X} \rightarrow \tildew{Y}$ in
$\tildew{\mathcal{T}}^s$ such that $f$ corresponds
to $\omega(\tildew{f})$ under the isomorphisms
$X \cong \omega(\tildew{X})$ and
$Y \cong \omega(\tildew{Y})$.
Mapping $X$ to $\tildew{X}$ and $f$ to the class of $\tildew{f}$
in $\mathcal{Q}$ defines a quasi-inverse $\ol{\omega}^{-1}$ to
$\ol{\omega}$
(and any quasi-inverse is of this form).
We claim that ${\widetilde{WC}}:=\ol{h} \circ \ol{\omega}^{-1}$
(cf.\ diagram~\eqref{eq:omega-subtildeT-exp})
is a strong weight complex functor.
Lemma~\ref{l:omega-bar-triang} and
Corollary~\ref{c:h-factors-triang} show that it is a triangulated
functor. We have to show that its
composition
with the canonical functor $K({\heartsuit}(w))^{\op{anti}} \rightarrow
{K_{\op{weak}}({\heartsuit}(w))}$ is isomorphic to a weak weight complex functor.
Observe that the constructions of the weak weight complex functor
(see Section~\ref{sec:weak-wc-fun})
and of $c'$
(see Section~\ref{sec:first-step-cprime})
are almost parallel under $\omega$:
Lemma~\ref{l:omega-range-versus-weights-on-tildeTs} shows that
$\omega$ maps $\sigma$-truncation
triangles of objects $\tildew{X} \in \tildew{\mathcal{T}}^s$ to
weight decompositions. This means that we can take the image
$\omega(S^n_{\tildew{X}})$ of
$S^n_{\tildew{X}}$ (see \eqref{eq:sigma-trunc-construction-c})
under $\omega$ as
a preferred choice for the triangle $T^n_X$ (see
\eqref{eq:choice-weight-decomp}); more precisely we have to replace
$\omega(\tildew{X})$ in $\omega(S^n_{\tildew{X}})$ by $X$.
Similarly
the octahedron $\tildew{O}^n_{\tildew{X}}$ (see \eqref{eq:octaeder-cprime})
yields $\omega(\tildew{O}^n_{\tildew{X}})$ as a preferred choice for
the octahedron
$O^n_X$ (see \eqref{eq:wc-weak-octaeder}): We have
$w_nX=\omega(\sigma_n(\tildew{X}))$. Since the octahedron
$\tildew{O}^n_{\tildew{X}}$
is functorial we immediately get preferred choices for the
morphisms $f^n$ in \eqref{eq:f-w-trunc-fn}, namely
$\omega(\sigma_n(\tildew{f}))$.
These preferred choices define
the assignment $X \mapsto {WC_{\op{c}}}(X)$, $f \mapsto
{WC_{\op{c}}}(f)=([n]\omega(\sigma_n(\tildew{f})))_{n \in \Bl{Z}}$.
Passing to the weak homotopy category defines
a weak weight complex functor
${WC}:\mathcal{T} \rightarrow K_{\op{weak}}({\heartsuit}(w))$ (see Thm.~\ref{t:weakWCfun}).
Let $\op{can}:C^b({\heartsuit}) \rightarrow K^b({\heartsuit}(w))^{\op{anti}}$ be the obvious functor.
Then we have (using Rem.~\ref{rem:quick-def-c})
\begin{equation*}
\op{can} \circ {WC_{\op{c}}} \circ \omega = \op{can} \circ \omega_{C^b}
\circ c' \cong \op{can}
\circ c'' \circ c' =\op{can} \circ c = h
\end{equation*}
on objects and morphisms and, since $h$ is a functor,
as functors $\tildew{\mathcal{T}}^s \rightarrow K^b({\heartsuit}(w))^{\op{anti}}$.
This implies that $\op{can} \circ {WC_{\op{c}}} \cong \ol{h} \circ
\ol{\omega}^{-1} = {\widetilde{WC}}$.
The composition of $\op{can} \circ {WC_{\op{c}}}$
with $K^b({\heartsuit}(w))^{\op{anti}} \rightarrow K_{\op{weak}}({\heartsuit}(w))$
is the weak weight complex functor ${WC}$ from above. Hence
${\widetilde{WC}}$
is a strong weight complex functor.
\end{proof}
\begin{remark}
\label{rem:special-choice-omega-inverse}
At the beginning of the proof of Theorem
\ref{t:strong-weight-cplx-functor} we chose
some objects $\tildew{X}$.
Let us take some more care there:
For $X=0$ choose $\tildew{X}=0$. Assume $X\not=0$.
Then let $a,b \in \Bl{Z}$ be such
that $X \in \mathcal{T}^{w \in [a,b]}$ and $b-a$ is minimal. Then
Proposition~\ref{p:omega-on-subtildeT} allows us to
find an object $\tildew{X} \in \tildew{\mathcal{T}}^s([a,b])$
and an isomorphism $X \cong \omega(\tildew{X})$.
Taking these choices and proceeding as in the above proof it is
obvious that ${\widetilde{WC}}:\mathcal{T} \rightarrow K^b({\heartsuit}(w))^{\op{anti}}$ maps
$\mathcal{T}^{w \in [a,b]}$ to $K^{[a,b]}({\heartsuit}(w))$.
\end{remark}
\section{Lifting weight structures to f-categories}
\label{sec:weight-structures-and-filtered-triang}
We show some statements about compatible weight structures.
For the corresponding and motivating results for t-structures see
\cite[Prop.~A 5 a]{Beilinson}.
\begin{definition}
Let $(\tildew{\mathcal{T}},i)$ be an f-category over a triangulated
category $\mathcal{T}$. Assume that both $\tildew{\mathcal{T}}$ and
$\mathcal{T}$ are weight categories (see Def.~\ref{d:ws}), i.\,e.\
they are equipped with weight structures. Then these weight structures are
\define{compatible}
if\footnote
{
Condition \ref{enum:wstr-ft-i} is natural whereas
\ref{enum:wstr-ft-s} is perhaps a priori not clear. Here is a
(partial) justification. Take an object $0\not= X \in {\heartsuit}(w)$
(where $w$ is the w-structure on $\mathcal{T}$) and
denote $i(X)$ also by $X$. Then we have the nonzero morphism
$\alpha_X: X \rightarrow s(X)$. Now $X \in \tildew{\mathcal{T}}^{w \geq 0}$
and $s(X) \in s(\tildew{\mathcal{T}}^{w \leq 0})$. So we cannot have
$s(\tildew{\mathcal{T}}^{w \leq 0})= \tildew{\mathcal{T}}^{w \leq -1}$.
}
\begin{enumerate}[label=(wcomp-ft{\arabic*})]
\item
\label{enum:wstr-ft-i}
$i: \mathcal{T} \rightarrow \tildew{\mathcal{T}}$ is w-exact and
\item
\label{enum:wstr-ft-s}
$s(\tildew{\mathcal{T}}^{w \leq 0}) = \tildew{\mathcal{T}}^{w \leq
1}$.
\end{enumerate}
Note that condition
\ref{enum:wstr-ft-s}, though asymmetric at first sight, implies its counterpart
\begin{enumerate}[resume,label=(wcomp-ft{\arabic*})]
\item
\label{enum:wstr-ft-s-symm}
$s(\tildew{\mathcal{T}}^{w \geq 0})
= \tildew{\mathcal{T}}^{w \geq 1}$,
\end{enumerate}
as we prove in Remark~\ref{rem:wstr-ft-symm} below.
\end{definition}
\begin{remark}
\label{rem:wstr-ft-symm}
Assume that
$\tildew{\mathcal{T}}$ together with
$i:\mathcal{T} \rightarrow \tildew{\mathcal{T}}$ is an f-category over the
triangulated category $\mathcal{T}$, and that both
$\tildew{\mathcal{T}}$ and $\mathcal{T}$ are equipped with
compatible w-structures.
Then \eqref{eq:ws-geq-left-orth-of-leq} and
\ref{enum:wstr-ft-s}
show that
\begin{equation*}
\tildew{\mathcal{T}}^{w \geq 2}
= \leftidx{^{\perp}}{(\tildew{\mathcal{T}}^{w \leq 1})}{}
= \leftidx{^{\perp}}{(s(\tildew{\mathcal{T}}^{w \leq 0}))}{}
= s(\leftidx{^{\perp}}{(\tildew{\mathcal{T}}^{w \leq 0})}{})
= s(\tildew{\mathcal{T}}^{w \geq 1}).
\end{equation*}
Now apply the translation $[1]$ to get
\ref{enum:wstr-ft-s-symm}.
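Spelled out, using the convention
$\tildew{\mathcal{T}}^{w \geq n} = [-n]\tildew{\mathcal{T}}^{w \geq 0}$
(cf.\ \eqref{eq:tildeT-wstr-shifts} below) and the fact that the
triangulated functor $s$ commutes with $[1]$, this reads
\begin{equation*}
\tildew{\mathcal{T}}^{w \geq 1}
= [1]\tildew{\mathcal{T}}^{w \geq 2}
= [1]s(\tildew{\mathcal{T}}^{w \geq 1})
= s([1]\tildew{\mathcal{T}}^{w \geq 1})
= s(\tildew{\mathcal{T}}^{w \geq 0}).
\end{equation*}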
\end{remark}
The statement of the following proposition appears independently in
\cite[Prop.~3.4 (1)]{achar-kitchen-koszul-mixed-arXiv} where the proof is
essentially left to the reader.
\begin{proposition}
\label{p:compatible-w-str}
Let $(\tildew{\mathcal{T}},i)$ be an f-category over a triangulated
category $\mathcal{T}$. Given a w-structure $w=(\mathcal{T}^{w \leq
0}, \mathcal{T}^{w \geq 0})$ on $\mathcal{T}$, there
is a unique w-structure $\tildew{w}=(\tildew{\mathcal{T}}^{w \leq
0}, \tildew{\mathcal{T}}^{w \geq 0})$ on $\tildew{\mathcal{T}}$ that
is compatible
with the given w-structure on $\mathcal{T}$. It is given by
\begin{align}
\label{eq:tildeT-wstr}
\tildew{\mathcal{T}}^{w \leq 0} & =\{X \in \tildew{\mathcal{T}}\mid
\text{$\op{gr}^j(X) \in \mathcal{T}^{w \leq -j}$ for all $j \in
\Bl{Z}$}\},\\
\notag
\tildew{\mathcal{T}}^{w \geq 0} & =\{X \in
\tildew{\mathcal{T}}\mid
\text{$\op{gr}^j(X) \in \mathcal{T}^{w \geq -j}$ for all $j \in \Bl{Z}$}\}.
\end{align}
\end{proposition}
From
\eqref{eq:tildeT-wstr} we get
\begin{align}
\label{eq:tildeT-wstr-shifts}
\tildew{\mathcal{T}}^{w \leq n} &
:= [-n]\tildew{\mathcal{T}}^{w \leq 0}
=\{X \in \tildew{\mathcal{T}}\mid
\text{$\op{gr}^j(X) \in \mathcal{T}^{w \leq n-j}$ for all $j \in
\Bl{Z}$}\},\\
\notag
\tildew{\mathcal{T}}^{w \geq n} &
:= [-n]\tildew{\mathcal{T}}^{w \geq 0}
=\{X \in \tildew{\mathcal{T}}\mid
\text{$\op{gr}^j(X) \in \mathcal{T}^{w \geq n-j}$ for all $j \in \Bl{Z}$}\}.
\end{align}
The heart of the w-structure of the proposition is
\begin{equation}
\label{eq:heart-comp-w-str}
{\heartsuit}(\tildew{w}) = \tildew{\mathcal{T}}^{w =0}
=\{X \in \tildew{\mathcal{T}}\mid
\text{$\op{gr}^j(X) \in \mathcal{T}^{w =-j}$ for all $j \in \Bl{Z}$}\}.
\end{equation}
\begin{proof}
\textbf{Uniqueness:}
We assume that we already know that
\eqref{eq:tildeT-wstr}
is a compatible w-structure.
Let $X$ be any object in $\tildew{\mathcal{T}}$.
Then $X$ is in $\tildew{\mathcal{T}}([a,b])$ for some $a, b \in
\Bl{Z}$.
We can build up $X$ from its graded pieces as indicated in the
following diagram
in the case $[a,b]=[-2,1]$:
\begin{equation}
\label{eq:build-up-X2}
\xymatrix@C-1.3cm{
{X=\sigma_{\leq 1}X} \ar[rr]
&& {\sigma_{\leq 0}X} \ar[rr] \ar@{~>}[ld]
&& {\sigma_{\leq -1}X} \ar[rr] \ar@{~>}[ld]
&& {\sigma_{\leq -2}X} \ar[rr] \ar@{~>}[ld]
&& {\sigma_{\leq -3}X=0} \ar@{~>}[ld]
\\
& {s(i(\op{gr}^1(X)))} \ar[lu]
&& {i(\op{gr}^0(X))} \ar[lu]
&& {s^{-1}(i(\op{gr}^{-1}(X)))} \ar[lu]
&& {s^{-2}(i(\op{gr}^{-2}(X)))} \ar[lu]
}
\end{equation}
All triangles are isomorphic to $\sigma$-truncation triangles; the wiggly
arrows are of degree one.
Assume that $(\mathcal{C}^{w \leq 0},
\mathcal{C}^{w \geq 0})$ is a compatible w-structure on
$\tildew{\mathcal{T}}$.
We claim that
\begin{equation*}
\tildew{\mathcal{T}}^{w \leq 0} \subset \mathcal{C}^{w \leq 0}
\text{ and }
\tildew{\mathcal{T}}^{w \geq 0} \subset \mathcal{C}^{w \geq 0}
\end{equation*}
which implies equality of the two w-structures
(see Lemma \ref{c:inclusion-w-str}).
Assume that $X$ is in $\tildew{\mathcal{T}}^{w \leq 0}$. Then
$\op{gr}^j(X) \in \mathcal{T}^{w \leq -j}$ implies
$i(\op{gr}^j(X)) \in \mathcal{C}^{w \leq -j}$ and we obtain
$s^j(i(\op{gr}^j(X))) \in \mathcal{C}^{w \leq 0}$.
Now an induction using \eqref{eq:build-up-X2} (respectively its
obvious generalization to arbitrary $[a,b]$) shows that $X$ is in
$\mathcal{C}^{w \leq 0}$. Analogously we show
$\tildew{\mathcal{T}}^{w \geq 0} \subset \mathcal{C}^{w \geq 0}$.

\textbf{Existence:}
If
\eqref{eq:tildeT-wstr} defines a w-structure,
compatibility is obvious:
\ref{enum:wstr-ft-i} is clear
and \ref{enum:wstr-ft-s} follows from
$\op{gr}^j \circ s= \op{gr}^{j-1}$ (see
\eqref{eq:gr-s-comm}).
We prove that
\eqref{eq:tildeT-wstr} defines a w-structure on $\tildew{\mathcal{T}}$.
Condition \ref{enum:ws-i} holds:
All functors $\op{gr}^j$ are triangulated and in particular
additive.
Since all $\mathcal{T}^{w\leq j}$ and $\mathcal{T}^{w \geq j}$ are
additive categories and closed under retracts in $\mathcal{T}$,
$\tildew{\mathcal{T}}^{w \leq 0}$
and $\tildew{\mathcal{T}}^{w \geq 0}$ are additive categories and
closed under retracts in $\tildew{\mathcal{T}}$.
Condition \ref{enum:ws-ii} follows from
\eqref{eq:tildeT-wstr-shifts}.
Condition \ref{enum:ws-iii}:
Let $X \in \tildew{\mathcal{T}}^{w \geq 1}$ and
$Y \in \tildew{\mathcal{T}}^{w \leq 0}$.
It is obvious from \eqref{eq:tildeT-wstr-shifts} that
all $\tildew{\mathcal{T}}^{w \geq n}$
and $\tildew{\mathcal{T}}^{w \leq n}$ are stable under all
$\sigma$-truncations.
Hence we can first $\sigma$-truncate $X$ and reduce to the case that
$l(X)\leq 1$ and then similarly reduce to $l(Y) \leq 1$.
So it is sufficient to prove $\op{Hom}(X,Y)=0$ for
$X \in \tildew{\mathcal{T}}([a])$
and $Y \in \tildew{\mathcal{T}}([b])$ for arbitrary $a, b \in \Bl{Z}$.
Let $f: X\rightarrow Y$ be a morphism.
\begin{itemize}
\item
If $b<a$ then $f=0$ by \ref{enum:filt-tria-cat-no-homs}.
\item
Let $b=a$. Then $f=\sigma_a(f)$ (under the obvious identification) and
$\sigma_a(f) \cong s^a(i(\op{gr}^a(f)))$.
But $\op{gr}^a(f):\op{gr}^a(X) \rightarrow \op{gr}^a(Y)$ is zero since
$\op{gr}^a(X) \in \mathcal{T}^{w \geq 1-a}$ and
$\op{gr}^a(Y) \in \mathcal{T}^{w \leq -a}$.
\item
Let $b > a$. Then $f$ can be factorized as $X \xra{f'}
s^{-(b-a)}(Y)\xra{\alpha^{b-a}} Y$ for a unique $f'$ using
\ref{enum:filt-tria-cat-hom-bij}.
Then
\begin{equation*}
s^{-(b-a)}(Y) \in s^{-(b-a)}(\tildew{\mathcal{T}}^{w \leq
0}([b]))=\tildew{\mathcal{T}}^{w \leq a-b}([a])
\subset \tildew{\mathcal{T}}^{w \leq 0}([a])
\end{equation*}
and the case $b=a$ imply that $f'=0$ and hence $f=0$.
\end{itemize}
Condition \ref{enum:ws-iv}:
By induction on $b-a$ we prove the following statement
(which is sufficient by \ref{enum:filt-tria-cat-exhaust}):
Let $X$ be in $\tildew{\mathcal{T}}([a,b])$. Then for each $n \in
\Bl{Z}$ there are triangles
\begin{equation}
\label{eq:taun-comp}
w_{\geq n+1} X \rightarrow X \rightarrow w_{\leq n}X \rightarrow [1] w_{\geq n+1} X
\end{equation}
with
$w_{\geq n+1} X \in \tildew{\mathcal{T}}^{w \geq n+1}([a,b])$ and
$w_{\leq n}X \in \tildew{\mathcal{T}}^{w \leq n}([a,b])$
and satisfying
\begin{align}
\label{eq:omega-tau}
\omega (w_{\geq n+1} X) & \in \mathcal{T}^{w \geq n+1-b},\\
\notag
\omega (w_{\leq n} X) & \in \mathcal{T}^{w \leq n-a}.
\end{align}
For $b-a<0$ we choose everything to be zero.
Assume $b-a=0$. Then the object $s^{-a}(X)$ is isomorphic to $i(Y)$
for some $Y$ in $\mathcal{T}$.
Let $m\in \Bl{Z}$ and let
\begin{equation*}
w_{\geq m+1}Y \rightarrow Y \rightarrow w_{\leq m}Y \rightarrow [1]w_{\geq m+1}Y
\end{equation*}
be a $(w \geq m+1, w\leq m)$-weight decomposition of $Y$ in
$\mathcal{T}$ with respect to $w$.
Applying the triangulated functor $s^a \circ i$ we obtain (using an
isomorphism
$s^{-a}(X)\cong i(Y)$) a triangle
\begin{equation}
\label{eq:taun-comp-aa}
s^a(i(w_{\geq m+1}Y)) \rightarrow X \rightarrow s^a(i(w_{\leq m}Y)) \rightarrow
[1]s^a(i(w_{\geq m+1}Y)).
\end{equation}
Since $i(w_{\geq m+1} Y) \in \tildew{\mathcal{T}}^{w \geq m+1}([0])$ we have
$s^a(i(w_{\geq m+1} Y)) \in \tildew{\mathcal{T}}^{w \geq
m+1+a}([a])$, and similarly
$s^a(i(w_{\leq m}Y)) \in \tildew{\mathcal{T}}^{w \leq m+a}([a])$.
Take now $m=n-a$ and define the triangle
\eqref{eq:taun-comp} to be
\eqref{eq:taun-comp-aa}.
Then \eqref{eq:omega-tau} is satisfied by
\ref{enum:functor-omega-iii}.
Now assume $b>a$. Choose $a \leq c < b$. In the
diagram
\begin{equation}
\label{eq:construct-t-gr-supp}
\xymatrix{
{[1]w_{\geq n+1}\sigma_{\geq c+1}X} \ar[r]
& {[1]\sigma_{\geq c+1}X} \ar[r]
& {[1]w_{\leq n}\sigma_{\geq c+1}X} \ar[r]^-{\circleddash}
& {[2]w_{\geq n+1}\sigma_{\geq c+1}X} \\
{w_{\geq n+1}\sigma_{\leq c}X} \ar[r] \ar@{~>}[u]
& {\sigma_{\leq c}X} \ar[r] \ar[u]
& {w_{\leq n}\sigma_{\leq c}X} \ar[r] \ar@{~>}[u]
& {[1]w_{\geq n+1}\sigma_{\leq c}X} \ar@{~>}[u]^{[1]}
}
\end{equation}
the vertical arrow in the middle is the last arrow in the
$\sigma$-truncation triangle $\sigma_{\geq c+1}X \rightarrow X \rightarrow \sigma_{\leq c}X \rightarrow
[1]\sigma_{\geq c+1}X$. By induction, the lower row is
\eqref{eq:taun-comp} for $\sigma_{\leq c}X$, and the upper row is
$[1]$ applied to \eqref{eq:taun-comp} for $\sigma_{\geq c+1}X$,
where we also multiply the last arrow by $-1$ as indicated by
$\circleddash$; then this row is a triangle.
In order to construct the indicated completion to a morphism of
triangles
we claim that
\begin{equation}
\label{eq:tr-filt-map-zero}
\op{Hom}([\epsilon]w_{\geq n+1}\sigma_{\leq c}X, [1]w_{\leq
n}\sigma_{\geq c+1}X)=0
\text{ for $\epsilon \leq 2$.}
\end{equation}
Since the left entry is in $\tildew{\mathcal{T}}(\leq c)$ and the
right entry is in $\tildew{\mathcal{T}}(\geq c+1)$,
it is sufficient by \ref{enum:functor-omega-iv}
to show that
\begin{equation*}
\op{Hom}([\epsilon]\omega (w_{\geq n+1}(\sigma_{\leq c}(X))),
[1]\omega (w_{\leq n}(\sigma_{\geq c+1}(X))))=0
\text{ for $\epsilon \leq 2$.}
\end{equation*}
But this is true by axiom
\ref{enum:ws-iii}
since the left entry
is in $\mathcal{T}^{w \geq n+1-c-\epsilon}
\subset \mathcal{T}^{w \geq n-c-1}$ and
the right entry is in
$\mathcal{T}^{w \leq n-(c+1)-1}=\mathcal{T}^{w \leq n-c-2}$ by
\eqref{eq:omega-tau}.
Hence claim \eqref{eq:tr-filt-map-zero} is proved.
This claim for $\epsilon \in \{0,1\}$ and
Proposition~\ref{p:BBD-1-1-9-copied-for-w-str} show that we can
complete the arrow in \eqref{eq:construct-t-gr-supp} uniquely to a morphism
of triangles as indicated by the squiggly arrows.
Now multiply the last morphism in the first row of
\eqref{eq:construct-t-gr-supp} by $-1$;
this makes the square on the right anti-commutative.
Proposition~\ref{p:3x3-diagram-copied-for-w-str} and the uniqueness
of the squiggly arrows show that this
modified diagram fits as the first two rows into a $3\times
3$-diagram
\begin{equation*}
\xymatrix{
{[1]w_{\geq n+1}\sigma_{\geq c+1}X} \ar[r]
& {[1]\sigma_{\geq c+1}X} \ar[r]
& {[1]w_{\leq n}\sigma_{\geq c+1}X} \ar[r]
\ar@{}[dr]|-{\circleddash}
& {[2]w_{\geq n+1}\sigma_{\geq c+1}X } \\
{w_{\geq n+1}\sigma_{\leq c}X} \ar[r] \ar@{~>}[u]
& {\sigma_{\leq c}X} \ar[r] \ar[u]
& {w_{\leq n}\sigma_{\leq c}X} \ar[r] \ar@{~>}[u]
& {[1]w_{\geq n+1}\sigma_{\leq c}X} \ar@{~>}[u]\\
{A} \ar[r] \ar[u]
& {X} \ar[r] \ar[u]
& {B} \ar[r] \ar[u]
& {[1]A} \ar[u]\\
{w_{\geq n+1}\sigma_{\geq c+1}X} \ar[r] \ar[u]
& {\sigma_{\geq c+1}X} \ar[r] \ar[u]
& {w_{\leq n}\sigma_{\geq c+1}X} \ar[r] \ar[u]
& {[1]w_{\geq n+1}\sigma_{\geq c+1}X } \ar[u]
}
\end{equation*}
where we can assume that the second column is the
$\sigma$-truncation triangle of $X$.
We claim that we can define $w_{\geq n+1}X:=A$ and $w_{\leq n}X:=B$
and that we can take the horizontal triangle $(A,X,B)$ as
\eqref{eq:taun-comp}:
Apply the triangulated functor $\op{gr}^j$ to the vertical column
containing $A$. For $j \leq c$ this yields an isomorphism
\begin{equation*}
\op{gr}^j A \xra{\sim} \op{gr}^j w_{\geq n+1}\sigma_{\leq c}X \in
\mathcal{T}^{w \geq n+1-j},
\end{equation*}
and for $j > c$ we obtain isomorphisms
\begin{equation*}
\op{gr}^j A \overset{\sim}{\leftarrow} \op{gr}^j w_{\geq n+1}\sigma_{\geq c+1}X \in
\mathcal{T}^{w \geq n+1-j}.
\end{equation*}
This shows $A \in \tildew{\mathcal{T}}^{w \geq n+1}([a,b])$ where
the statement about the range comes from the fact that the first
isomorphism is zero for $j<a$ and the second one is zero for $j>b$.
Furthermore, if we apply $\omega$ to the column containing $A$, we
obtain a triangle
\begin{equation*}
\xymatrix{
{\omega (w_{\geq n+1}(\sigma_{\geq c+1}(X)))} \ar[r]
&
{\omega(A)} \ar[r]
&
{\omega (w_{\geq n+1}(\sigma_{\leq c}(X)))} \ar[r]
&
{[1]\omega (w_{\geq n+1}(\sigma_{\geq c+1}(X)))}
}
\end{equation*}
in which the first term is in $\mathcal{T}^{w \geq n+1-b}$ and the third
term is in $\mathcal{T}^{w\geq n+1-c} \subset \mathcal{T}^{w\geq
n+1-b}$. Hence $\omega(A) =\omega(w_{\geq n+1}X)$ is
in $\mathcal{T}^{w \geq n+1-b}$
by Lemma~\ref{l:weight-str-basic-properties}
\eqref{enum:weight-perp-prop}.
Similarly we treat $B$.
\end{proof}
\section{Introduction}
Primordial nucleosynthesis is one of the cornerstones
of the hot big-bang cosmology. The agreement between
the predictions for the abundances of D, $^3$He, $^4$He
and $^7$Li and their inferred primordial abundances provides
the big-bang cosmology's earliest, and perhaps most stringent, test.
Further, big-bang nucleosynthesis has been used to provide
the best determination of the baryon density \cite{ytsso,walker}
and to provide crucial tests of particle-physics theories, e.g.,
the stringent bound to the number of light
neutrino species \cite{nulimit,mathews}.
Over the years various aspects of
the effect of a decaying tau neutrino on primordial nucleosynthesis
have been considered \cite{ks,st1,st2,st,kaw,ketal,dol,osu}.
Each previous study focused on a specific
decay mode and incorporated different microphysics. To be sure,
no one study was complete or exhaustive. Our purpose here is to consider
all the effects of a decaying tau neutrino on nucleosynthesis
in a comprehensive and coherent manner. In particular,
for the first time interactions of decay-produced electron
neutrinos and antineutrinos, which can be important for
lifetimes shorter than $100\,{\rm sec}$ or so, are taken into account.
The nucleosynthesis limits to the mass of an unstable tau
neutrino are currently of great interest as the best laboratory
upper mass limits \cite{labmass}, $31\,{\rm MeV}$ by
the ARGUS Collaboration and $32.6\,{\rm MeV}$ by the CLEO
Collaboration,\footnote{Both are 95\% C.L.
mass limits based upon end-point analyses of tau decays to
final states containing five pions. The CLEO data set contains
113 such decays and the ARGUS data set contains 20 such decays \cite{labmass}.}
are tantalizingly close to the mass range excluded by nucleosynthesis,
approximately $0.4\,{\rm MeV}$ to $30\,{\rm MeV}$ for lifetimes
greater than about $300\,{\rm sec}$. If the upper range
of the cosmologically excluded band can be
convincingly shown to be greater than the upper
bound to the mass from laboratory experiments, the two bounds
together imply that a long-lived tau-neutrino
must be less massive than about $0.4\,{\rm MeV}$. This was the major
motivation for our study.
The effects of a massive, decaying tau neutrino on primordial
nucleosynthesis fall into
three broad categories: (i) the energy density of the tau neutrino
and its daughter product(s) increase the expansion rate, tending
to increase $^4$He, D, and $^3$He production; (ii) the
electromagnetic (EM) plasma is heated by the
daughter product(s) that interact electromagnetically
(photons and $e^\pm$ pairs), diluting the baryon-to-photon
ratio and decreasing $^4$He production and increasing
D and $^3$He production; and (iii) electron neutrino
(and antineutrino) daughters increase
the weak interaction rates that govern the neutron-to-proton
ratio, leading to decreased $^4$He production for short lifetimes
($\mathrel{\mathpalette\fun <} 30\,{\rm sec}$) and masses less than about $10\,{\rm MeV}$
and increased $^4$He production for long lifetimes.
Decays that take place long after nucleosynthesis ($\tau_\nu
\sim 10^5\,{\rm sec} -10^6\,{\rm sec}$) can lead to the destruction of
the light elements through fission
reactions and additional constraints \cite{fission}, neither
of which is considered here.
In terms of the effects on primordial nucleosynthesis
there are, broadly speaking, four generic decay modes:
\begin{enumerate}
\item Tau neutrino decays to daughter products that
are all sterile, e.g., $\nu_\tau \rightarrow \nu_\mu
+\phi$ ($\phi$ is a very weakly interacting boson).
Here, only effect (i) comes into play. Aspects of this case
were treated in Refs.~\cite{ks,st2,ketal,dol,osu}; the very recent
work in Ref.~\cite{osu} is the most complete study of this mode.
\item Tau neutrino decays to a sterile daughter product(s)
plus a daughter product(s) that interacts electromagnetically,
e.g., $\nu_\tau \rightarrow \nu_\mu + \gamma$. Here,
effects (i) and (ii) come into play. This case was
treated in Ref.~\cite{st1}, though not specifically
for a decaying tau neutrino.
\item Tau neutrino decays into an electron neutrino and sterile
daughter product(s), e.g., $\nu_\tau \rightarrow \nu_e
+\phi$. Here, effects (i) and (iii) come into play. This case
was treated in Ref.~\cite{st}; however, the interactions
of electron neutrinos and antineutrinos with the ambient
thermal plasma were not taken into account. They can be important:
The interaction rate of a high-energy electron neutrino produced
by the decay of a massive tau neutrino relative to the
expansion rate $\Gamma /H \sim (m_\nu /\,{\rm MeV})(\,{\rm sec} /t)$.
\item Tau neutrino decays into an electron neutrino
and daughter product(s) that interact electromagnetically,
e.g., $\nu_\tau \rightarrow \nu_e +e^\pm$. Here,
all three effects come into play. Aspects of this case were
treated in Ref.~\cite{kaw}, though interactions of
electron neutrinos and antineutrinos with the ambient thermal
plasma were neglected and the $\nu_e$-spectrum
was taken to be a delta function.
\end{enumerate}
{\it As we shall emphasize more than once, the effect of a tau neutrino of a
given mass and lifetime---and therefore limits to its
mass/lifetime---depends very much upon decay mode.}
\medskip
While these four generic decay modes serve to bracket
the possibilities, the situation is actually somewhat more complicated.
Muon neutrinos are not completely
sterile, as they are strongly coupled
to the electromagnetic plasma down to temperatures of order
a few MeV (times of order a fraction of a second), and thus
can transfer energy to the electromagnetic plasma. However,
for lifetimes longer than a few seconds, their interactions with the
electromagnetic plasma are not very significant (see Ref.~\cite{sd}),
and so to a reasonable approximation muon-neutrino
daughter products can be considered sterile.
Precisely how much electromagnetic entropy is produced
and the effect of high-energy neutrinos on the proton-neutron
interconversion rates depend upon the energy distribution
of the daughter products and their interactions with the
ambient plasma (photons, $e^\pm$ pairs, and neutrinos), which
in turn depends upon the number of daughter products and
the decay matrix element.
Without going to extremes,
one can easily identify more than ten possible decay modes.
However, we believe the four generic decay modes
serve well to illustrate how the nucleosynthesis
mass-lifetime limits depend upon the decay mode and provide
reasonable estimates thereof. In that regard,
input assumptions, e.g., the acceptable range for the
primordial abundances and the relic neutrino
abundance\footnote{The variations between different calculations
of the tau-neutrino abundance are of the order of 10\% to
20\%; they arise from different treatments of thermal averaging,
particle statistics, and so on. Since we use
the asymptotic value of the tau-neutrino abundance
our abundances are in general smaller, making our limits
more conservative.} probably lead to comparable, if not greater,
uncertainties in the precise limits.
Finally, a brief summary of our treatment of the microphysics:
(1) The relic abundance of the tau neutrino is determined by standard
electroweak annihilations and is assumed to be frozen out
at its asymptotic value during the epoch
of nucleosynthesis, thereafter decreasing due to decays only.
Because we assume that the relic abundance of the tau
neutrino has frozen out we cannot
accurately treat the case of short lifetimes, $\tau_\nu
\mathrel{\mathpalette\fun <} (m_\nu /\,{\rm MeV} )^{-2}\,{\rm sec}$, where inverse decays
can significantly affect the tau-neutrino abundance and that of its daughter
products \cite{inversedecay}.\footnote{For generic decay mode (1) the effect
of inverse decays for short lifetimes was considered in Ref.~\cite{osu};
it leads to additional mass constraints for short lifetimes.}
(2) Sterile daughter products, other than neutrinos, are assumed to
have a negligible pre-decay abundance (if this is not true,
the nucleosynthesis limits become even more stringent).
(3) The electromagnetic energy produced by tau-neutrino
decays is assumed to be quickly thermalized and to increase the entropy
in the electromagnetic plasma according to the first law of
thermodynamics. (4) The perturbations to the
phase-space distributions of electron and muon neutrinos
due to tau-neutrino decays and partial coupling to the electromagnetic
plasma are computed. (5) The changes in the weak rates that interconvert
neutrons and protons due to the distorted electron-neutrino
distribution are calculated. (6) The total energy of the Universe includes
that of photons, $e^\pm$ pairs, neutrinos, and sterile daughter product(s).
The paper is organized as follows: in the next Section we discuss
the modifications that we have made to the standard nucleosynthesis
code. In Section 3 we present our results, discussing how
a decaying tau neutrino affects the yields of nucleosynthesis and
deriving the mass/lifetime limits for the four generic decay
modes. In Section 4 we discuss other astrophysical and laboratory
limits to the mass/lifetime of the tau neutrino, and finish
in Section 5 with a brief summary and concluding remarks.
\section{Modifications to the Standard Code}
In the standard treatment of nucleosynthesis \cite{kawano} it is
assumed that there are three massless neutrino species that
completely decouple from the electromagnetic plasma at a
temperature well above that of the electron mass ($T \sim
10\,{\rm MeV}\gg m_e$). Thus the neutrino species do not interact with
the electromagnetic plasma and do not share in the ``heating'' of
the photons when the $e^\pm$ pairs disappear.
In order to treat the most general case of a decaying tau
neutrino we have made a number of modifications to the standard
code. These modifications are of four kinds: (1) Change the
total energy density to account for the massive tau neutrino and
its daughter products; (2) Change the first-law of
thermodynamics for the electromagnetic plasma to account for the
injection of energy by tau decays and interactions with the
other two neutrino seas; (3) Follow the Boltzmann equations for the
phase-space distributions for electron and muon neutrinos,
accounting for their interactions with one another and the
electromagnetic plasma; (4) Modify the weak interaction rates
that interconvert neutrons and protons to take account of the
perturbations to the electron-neutrino spectrum.
These modifications required tracking five quantities as
functions of $T \equiv R^{-1}$, the
neutrino temperature in the fully decoupled limit ($R=$ the
cosmic-scale factor). They are: $\rho_{\nu_\tau}$,
$\rho_\phi$ (where $\phi$ is any sterile, relativistic decay
product), $T_\gamma$, and $\Delta_e$ and $\Delta_\mu$, the
perturbations to the electron-neutrino and mu-neutrino
phase-space distributions.
Our calculations were done with two separate codes. The
first code tracks $\rho_{\nu_\tau}$,
$\rho_\phi$, $T_\gamma$, $\Delta_e$, and $\Delta_\mu$
as functions of $T$, for simplicity using Boltzmann statistics.
These five quantities were then converted to functions of the
photon temperature using the calculated $T(T_\gamma )$
relationship, and their values
were then passed to the second code, a modified version
of the standard nucleosynthesis code \cite{kawano}.\footnote{The
correct statistics for all species are of course used in the
nucleosynthesis code; the five quantities are passed
as fractional changes (to the energy density, temperature and rates)
to minimize the error made by using Boltzmann statistics in the first code.}
We now discuss in more detail the four modifications.
\subsection{Energy density}
There are four contributions to
the energy density: massive tau neutrinos, sterile decay
products, two massless neutrino species, and the EM
plasma. Let us consider each in turn.
As mentioned earlier, we fix the relic abundance of tau
neutrinos assuming that freeze out occurs before nucleosynthesis
commences ($t\ll 1\,{\rm sec}$). We follow Ref.~\cite{ketal} in writing
\begin{equation}
\rho_{\nu_\tau} = r \left[ \frac{\sqrt{(3.151 T )^2 +
{m_\nu}^2}}{{3.151 T }}\right] \rho_\nu(m_\nu =0)\,\exp (-t/\tau_\nu ),
\end{equation}
where $r$ is the ratio of the number density of massive neutrinos to a massless
neutrino species, the $(3.151T)^2$ term takes account of the
kinetic energy of the neutrino, and the exponential factor
takes account of decays. The relic abundance is taken
from Ref.~\cite{ketal}; for a Dirac neutrino it is assumed
that all four degrees are freedom are populated for masses
greater than $0.3\,{\rm MeV}$ (see Ref.~\cite{ketal} for further
discussion).
Note that for temperatures much
less than the mass of the tau neutrino, $\rho_{\nu_\tau}/\rho_\nu
(m_\nu =0) = rm_\nu e^{-t/\tau_\nu}/3.151T$,
which increases as the scale factor
until the tau neutrinos decay; further, $rm_\nu$ determines the
energy density contributed by massive tau neutrinos and hence
essentially all of their effects on nucleosynthesis.
The relic neutrino abundance times mass ($rm_\nu$) is shown in Fig.~1.
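For orientation, the expression for $\rho_{\nu_\tau}$ above is easy to
evaluate numerically. The following minimal Python sketch (the function
name and unit conventions are ours; the relic ratio $r$ is assumed to be
supplied externally, e.g.\ read off Fig.~1) returns the energy density
in units of that of one massless species:
\begin{verbatim}
import math

def rho_nu_tau_ratio(T, t, m_nu, r, tau_nu):
    """Energy density of massive tau neutrinos in units of
    rho_nu(m_nu = 0).  T and m_nu in MeV; t and tau_nu in
    seconds; r = n(nu_tau)/n(one massless species)."""
    p_mean = 3.151 * T   # thermal momentum scale
    boost = math.sqrt(p_mean**2 + m_nu**2) / p_mean
    return r * boost * math.exp(-t / tau_nu)
\end{verbatim}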
The energy density of the sterile decay products is slightly more
complicated. Since the $\phi$'s are massless, their energy
density is governed by
\begin{equation} \label{eq:phi}
\frac{d\rho_\phi}{dT} = \frac{4\rho_\phi}{T} - {f_\phi\over T}
\frac{\rho_{\nu_\tau}} {H \tau_\nu},
\end{equation}
where the first term accounts for the effect of the expansion
of the Universe and the second accounts for the energy dumped into the
sterile sector by tau-neutrino decays. The quantity $f_\phi$
is the fraction of the tau-neutrino decay energy that goes into
sterile daughters: for $\nu_\tau \rightarrow$ all-sterile daughter
products, $f_\phi =
1$; for $\nu_\tau \rightarrow \phi + \nu_e$ or $\nu_\tau \rightarrow \phi + {\rm EM}$, $f_\phi =0.5$;
and for all other modes $f_\phi =0$. Eq.~(\ref{eq:phi})
was integrated numerically, and
$\rho_\phi$ was passed to the nucleosynthesis code
by means of a look-up table.
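As an illustration only, Eq.~(\ref{eq:phi}) can be integrated with a
crude Euler step on a decreasing temperature grid; the callables
\texttt{rho\_nu\_tau} and \texttt{H} below are stand-ins (our
assumption, not the actual code described in the text) for the
tau-neutrino energy density and the expansion rate in consistent
natural units:
\begin{verbatim}
def integrate_rho_phi(T_grid, rho_nu_tau, H, f_phi, tau_nu):
    """Euler integration of the rho_phi evolution equation.
    T_grid is a decreasing sequence of temperatures;
    rho_nu_tau(T) and H(T) are caller-supplied callables."""
    rho_phi = 0.0
    for T, T_next in zip(T_grid[:-1], T_grid[1:]):
        dT = T_next - T   # negative: T falls with time
        slope = (4.0 * rho_phi / T
                 - (f_phi / T) * rho_nu_tau(T) / (H(T) * tau_nu))
        rho_phi += slope * dT
    return rho_phi
\end{verbatim}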
The neutrino seas were the most complicated to treat.
The contribution of the neutrino seas was divided into two parts, the
standard, unperturbed thermal contribution and the perturbation
due to the slight coupling of neutrinos to the EM plasma and
tau-neutrino decays,
\begin{equation}
\rho_\nu = \rho_{\nu 0} + \delta \rho_\nu.
\end{equation}
The thermal contribution is simply $6T^4/\pi^2$ per massless
neutrino species (two in our case). The second term is given
as an integral over the perturbation to the neutrino phase-space
distribution,
\begin{equation}
\delta \rho_\nu = \sum_{i=e,\mu} {2\over (2\pi )^3}
\int p d^3 p \Delta_i(p,t) ,
\end{equation}
where the factor of two accounts for neutrinos and antineutrinos.
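Reduced to one dimension ($\int p\, d^3p = 4\pi \int p^3\, dp$), the
integral above becomes $\delta\rho_\nu = \pi^{-2}\sum_i \int p^3
\Delta_i(p)\, dp$. A quadrature sketch (the cutoff \texttt{pmax} and
the grid size are our assumptions):
\begin{verbatim}
import numpy as np

def delta_rho_nu(Delta_e, Delta_mu, pmax, n=2000):
    """(1/pi^2) * sum_i int_0^pmax p^3 Delta_i(p) dp; the
    factor of two for antineutrinos is already included,
    as in the text.  Delta_e, Delta_mu are callables."""
    p = np.linspace(0.0, pmax, n)
    total = np.vectorize(Delta_e)(p) + np.vectorize(Delta_mu)(p)
    return np.trapz(p**3 * total, p) / np.pi**2
\end{verbatim}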
Finally, there is the energy density of the EM plasma.
Since the electromagnetic plasma is in thermal equilibrium
it only depends upon $T_\gamma$:
\begin{equation}
\rho_{\rm EM} =
\frac{6{T_\gamma}^4}{\pi^2} + \frac{2{m_e}^3 T_\gamma} {\pi^2}
\left[K_1(m_e/T_\gamma ) + \frac{3 K_2(m_e/T_\gamma )}
{m_e/T_\gamma }\right] ,
\end{equation}
where $K_1$ and $K_2$ are modified Bessel functions.
We numerically compute $T_\gamma$ as a function of $T$ by
using the first law of thermodynamics.
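The expression for $\rho_{\rm EM}$ transcribes directly; a sketch using
SciPy's modified Bessel functions (unit conventions are ours):
\begin{verbatim}
import numpy as np
from scipy.special import kn

def rho_EM(T_gamma, m_e=0.511):
    """EM-plasma energy density (photons plus e+- pairs,
    Boltzmann statistics) in MeV^4; T_gamma, m_e in MeV."""
    x = m_e / T_gamma
    pairs = (2.0 * m_e**3 * T_gamma / np.pi**2) \
        * (kn(1, x) + 3.0 * kn(2, x) / x)
    return 6.0 * T_gamma**4 / np.pi**2 + pairs
\end{verbatim}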
\subsection{First law of thermodynamics}
Energy conservation in the expanding Universe is
governed by the first law of thermodynamics,
\begin{equation}\label{eq:first}
d[\rho_{\rm TOT} R^3] = -p_{\rm TOT} dR^3,
\end{equation}
where in our case $\rho_{\rm TOT} = \rho_{\rm EM} + \rho_{\nu 0} +
\delta \rho_\nu + \rho_\phi + \rho_{\nu_\tau}$, $p_{\rm TOT} =
p_{EM} + p_{\nu 0} + \delta p_\nu + p_\phi + p_{\nu_\tau}$,
$\delta p_\nu = \delta\rho_\nu /3$, $p_\phi = \rho_\phi /3$, and
\begin{equation}
p_{\rm EM} = {2T_\gamma^4\over \pi^2} +
{2m_e^2T_\gamma^2\over \pi^2}\,K_2(m_e/T_\gamma ) .
\end{equation}
Eq.~(\ref{eq:first}) can be rewritten in a more useful form,
\begin{equation}
\frac{dT_\gamma}{dt} =
{-3H\left(\rho_{\rm TOT}+ p_{\rm TOT} -4\rho_{\nu 0} /3\right)
-d(\delta\rho_\nu + \rho_\phi + \rho_{\nu_\tau})/dt \over
d\rho_{\rm EM}/dT_\gamma}.
\end{equation}
The quantity $d\rho_{\rm EM}/dT_\gamma$ is easily calculated,
and the time derivatives of the densities can either be solved for
analytically or taken from the previous time step.
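Schematically, the $T_\gamma$ update is then a one-liner once the
pieces are assembled; the argument names below simply mirror the
quantities in the text and are otherwise our own:
\begin{verbatim}
def dTgamma_dt(H, rho_tot, p_tot, rho_nu0,
               d_other_dt, drhoEM_dTgamma):
    """d T_gamma/dt as above.  d_other_dt is
    d(delta_rho_nu + rho_phi + rho_nu_tau)/dt;
    drhoEM_dTgamma is d rho_EM / d T_gamma."""
    return (-3.0 * H * (rho_tot + p_tot - 4.0 * rho_nu0 / 3.0)
            - d_other_dt) / drhoEM_dTgamma
\end{verbatim}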
\subsection{Neutrino phase-space distribution functions}
The Boltzmann equations governing the neutrino phase-space
distribution functions in the standard case
were derived and solved in Ref.~\cite{sd}.
We briefly summarize that treatment here, focusing on the
modifications required to include massive tau-neutrino decays.
We start with the Boltzmann equation for the phase-space
distribution of neutrino species $a$ in the absence of decays:
\begin{eqnarray}\label{eq:boltzmann}
\frac{\partial f_a}{\partial t} -{H |p|^2\over E_a}
\frac{\partial f_a}{\partial E} & = & -\frac{1}{2E_a}
\sum_{processes} \int d\Pi_1 d\Pi_2 d\Pi_3(2\pi)^4
\delta^4({p}_a+{p}_1-{p}_2-{p}_3)\nonumber\\
&\times & |{\cal M}_{a+1\leftrightarrow 2+3}|^2 [f_a f_1-f_2 f_3],
\end{eqnarray}
where the processes summed over include
all the standard electroweak $2\leftrightarrow 2$ interactions
of neutrinos with themselves and the electromagnetic plasma, and
Boltzmann statistics have been used throughout.
We write the distribution functions for the electron and
muon neutrinos as an unperturbed part plus a small perturbation:
\begin{equation}
f_i (p,t) = \exp (-p/T) + \Delta_i(p,t),
\end{equation}
where we have assumed that both species are effectively
massless. During nucleosynthesis the photon temperature
begins to deviate from the neutrino temperature $T$, and
we define
$$\delta (t) = T_\gamma /T -1.$$
Eq.~(\ref{eq:boltzmann}) is expanded to lowest order in
$\Delta_i$ and $\delta (t)$, leading to master equations of the form:
\begin{equation}\label{eq:master}
\frac{p}{T}{\dot\Delta}_i(p,t) = 4 G_F^2 T^3
[-A_i(p,t)\Delta_i(p,t) + B_i(p,t)\delta(t)
+C_i(p,t) + C_i^\prime(p,t)],
\end{equation}
where $i=e,\mu$ and the expressions for $A_i$, $B_i$, $C_i$,
and $C_i^\prime$ are given in Ref.~\cite{sd} [in Eq.~(2.11d)
for $C_\mu$ the coefficient $(c+8)$ should be $(c+7)$].
In the context of tau-neutrino decays we treat decay-produced
muon neutrinos as a sterile species, and thus
we are only interested in modifying the master equation for electron
neutrinos to allow for decays. In the case of two-body decays
(e.g., $\nu_\tau \rightarrow \nu_e+\phi$
or $\nu_\tau \rightarrow \nu_e +$ EM) the additional term
that arises on the right-hand side of Eq.~(\ref{eq:master}) is
\begin{equation}
{p\over T}{\dot\Delta}_e(p,t) = \cdots +
{n_{\nu_\tau}\over \tau_\nu}\,{2\pi^2 \over pT}\,\delta (p-m_\nu /2),
\end{equation}
where $n_{\nu_\tau}$ is the number density of massive tau
neutrinos.\footnote{For $m_\nu$ we actually use our expression
for the total tau-neutrino energy $E_\nu= \sqrt{m_\nu^2+(3.151T)^2}$.
Except for very short lifetimes and small masses, $E_\nu\approx m_\nu$.}
The decay mode $\nu_\tau \rightarrow \nu_e +e^\pm$ has a three-body
final state, so that the energy distribution of electron neutrinos
is no longer a delta function. In this case, the source term is
\begin{equation}
{p\over T}{\dot\Delta}_e(p,t) = \cdots +
{32\pi^2 n_{\nu_\tau} p (3-4p/m_\nu ) \over \tau_\nu m_\nu^3 T}
\,\theta (m_\nu /2 - p),
\end{equation}
where for simplicity we have assumed that all particles except
the massive tau neutrino are ultrarelativistic.
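A sketch of this three-body source term, with the kinematic endpoint at
$p = m_\nu/2$ made explicit (natural units; the function name is ours):
\begin{verbatim}
import math

def source_3body(p, T, n_nu_tau, m_nu, tau_nu):
    """Source for (p/T) * dDelta_e/dt from three-body decays;
    the nu_e spectrum ~ p^2 (3 - 4p/m) ends at p = m_nu/2."""
    if p >= m_nu / 2.0:
        return 0.0
    return (32.0 * math.pi**2 * n_nu_tau * p
            * (3.0 - 4.0 * p / m_nu)
            / (tau_nu * m_nu**3 * T))
\end{verbatim}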
\subsection{Weak-interaction rates}
Given $\Delta_e $, it is simple to
calculate the perturbations to the weak interaction rates
that convert protons to neutrons and vice versa
(see Ref.~\cite{sd} for details). The perturbations to
the weak rates are obtained by substituting $\exp (-p/T) +\Delta_e (p,t)$
for the electron-neutrino phase-space distribution in the usual expressions
for the rates \cite{sd} and then expanding to lowest order.
The perturbations to the rates for proton-to-neutron conversion
and neutron-to-proton conversion (per nucleon) are respectively
\begin{eqnarray}
\delta\lambda_{pn} & = & \frac{1}{\lambda_0 \tau_n} \int_{m_e}^\infty EdE
(E^2-m_e^2)^{1/2} (E + Q)^2 \Delta_e(E+Q), \\
\delta\lambda_{np} & = & \frac{1}{\lambda_0 \tau_n} \int_{m_e}^\infty EdE
(E^2-m_e^2)^{1/2} (E - Q)^2 \Delta_e(E-Q),
\end{eqnarray}
where Boltzmann statistics have been used for all species,
$\tau_n$ is the neutron mean lifetime, $Q=1.293\,{\rm MeV}$ is the
neutron-proton mass difference, and
$$\lambda_0 \equiv \int_{m_e}^Q EdE(E^2-m_e^2)^{1/2}(E-Q)^2 .$$
The perturbations to the weak rates are computed in the first
code and passed to the nucleosynthesis code by means of a
look-up table. The unperturbed part of the weak rates are
computed by numerical integration in the nucleosynthesis
code; for all calculations we took the neutron mean lifetime to be $889\,{\rm sec}$.
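A quadrature sketch of the perturbed rates (the integration cutoff
\texttt{Emax} and the grid size are our assumptions, not choices made
in the text):
\begin{verbatim}
import numpy as np

def delta_lambda_np(Delta_e, tau_n=889.0, Q=1.293,
                    m_e=0.511, Emax=50.0, n=4000):
    """Perturbation (per nucleon) to the n -> p rate;
    Delta_e(p) is the electron-neutrino perturbation.
    Energies in MeV, tau_n in seconds."""
    E = np.linspace(m_e, Emax, n)
    w = E * np.sqrt(np.maximum(E**2 - m_e**2, 0.0))
    num = np.trapz(w * (E - Q)**2
                   * np.vectorize(Delta_e)(E - Q), E)
    E0 = np.linspace(m_e, Q, n)
    lam0 = np.trapz(E0 * np.sqrt(np.maximum(E0**2 - m_e**2, 0.0))
                    * (E0 - Q)**2, E0)
    return num / (lam0 * tau_n)
\end{verbatim}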
\section{Results}
In this section we present our results for the four generic decay modes.
Mode by mode we discuss how the light-element abundances
depend upon the mass and lifetime of the tau neutrino
and derive mass/lifetime limits. We exclude a mass and
lifetime if, for no value of the baryon-to-photon ratio,
the light-element abundances can satisfy:
\begin{eqnarray}
Y_P & \le & 0.24 ;\\
{\rm D/H} & \ge & 10^{-5}; \\
({\rm D}+{}^3{\rm He})/{\rm H} & \le & 10^{-4};\\
{\rm Li}/{\rm H} &\le & 1.4\times 10^{-10}.
\end{eqnarray}
For further discussion of this choice of constraints to
the light-element abundances we refer the reader
to Ref.~\cite{walker}.
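In code, the adopted acceptance test is simply:
\begin{verbatim}
def abundances_allowed(Y_p, D_H, DplusHe3_H, Li_H):
    """Light-element constraints adopted in the text; a
    mass/lifetime point is excluded if this returns False
    for every baryon-to-photon ratio."""
    return (Y_p <= 0.24 and D_H >= 1.0e-5
            and DplusHe3_H <= 1.0e-4 and Li_H <= 1.4e-10)
\end{verbatim}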
The $^4$He and D + $^3$He abundances play the most
important role in determining the excluded regions.
The mass/lifetime limits that follow necessarily
depend upon the range of acceptable primordial abundances that one
adopts, a fact that should be kept in mind when comparing
the work of different authors and assessing confidence levels.
Further, the relic abundances used by different
authors differ by 10\% to 20\%.
Lastly, the precise limit for a specific decay mode
will of course differ slightly from that derived for its ``generic class.''
In illustrating how the effects of a decaying tau neutrino
depend upon lifetime and in comparing different decay modes we
use as a standard case an initial (i.e., before decay
and $e^\pm$ annihilations) baryon-to-photon ratio $\eta_i =
8.25\times 10^{-10}$. In the absence of entropy production
(no decaying tau neutrino or decay modes 1 and 3 which produce
no EM entropy) the final
baryon-to-photon ratio $\eta_0 = 4\eta_i /11 = 3\times 10^{-10}$,
where $4/11$ is the usual factor that arises due to the entropy transfer
from $e^\pm$ pairs to photons. In the case of decay modes
2 and 4 there can be significant EM entropy production, and
the final baryon-to-photon ratio $\eta = \eta_0/(S_f/S_i)
\le \eta_0$ ($S_f/S_i$ is the ratio of the EM entropy per
comoving volume after decays to that before decays).
Even though $\eta_0$ does not
correspond to the present baryon-to-photon ratio if there has
been entropy production, we believe
that comparisons for fixed $\eta_0$ are best for
isolating the three different effects of a decaying tau
neutrino on nucleosynthesis. For reference,
in the absence of a decaying tau neutrino the $^4$He mass
fraction for our standard case is: $Y_P=0.2228$ (two massless
neutrino species) and $0.2371$ (three massless neutrino species).
\subsection{$\nu_\tau \rightarrow$ sterile daughter products}
Since we are considering lifetimes greater than $0.1\,{\rm sec}$, by which time muon
neutrinos are essentially decoupled, the muon neutrino is by our definition
effectively sterile, and examples of this decay mode include,
$\nu_\tau \rightarrow \nu_\mu + \phi$ where $\phi$ is some
very weakly interacting scalar particle (e.g., majoron)
or $\nu_\tau \rightarrow \nu_\mu +\nu_\mu + {\bar\nu}_\mu$.
For this decay mode the only effect of the unstable tau
neutrino on nucleosynthesis involves the energy density it and
its daughter products contribute. Thus, it is the simplest
case, and we use it as ``benchmark'' for comparison to the
other decay modes. The light-element abundances
as a function of tau-neutrino lifetime are shown in Figs.~2-4
for a Dirac neutrino of mass $20\,{\rm MeV}$.
The energy density of the massive
tau neutrino grows relative to a massless neutrino species
as $rm_\nu/3T$ until the tau neutrino decays, after
which the ratio of energy density in the daughter products
to a massless neutrino species remains constant.
For tau-neutrino masses in the $0.3\,{\rm MeV}$ to $30\,{\rm MeV}$ mass
range and lifetimes greater than about a second the energy
density of the massive tau neutrino exceeds that of a massless neutrino
species before it decays, in spite of its smaller abundance
(i.e., $r\ll 1$). The higher energy density increases the
expansion rate and ultimately $^4$He production because
it causes the neutron-to-proton ratio to freeze out earlier and at
a higher value and because fewer neutrons decay before nucleosynthesis
begins. Since the neutron-to-proton ratio freezes out around
$1\,{\rm sec}$ and nucleosynthesis occurs at
around a few hundred seconds, the $^4$He abundance is only sensitive
to the expansion rate between one and a few hundred seconds.
In Fig.~2 we see that for short lifetimes ($\tau_\nu\ll 1\,{\rm sec}$)
the $^4$He mass fraction approaches that for two massless neutrinos
(tau neutrinos decay before their energy density becomes
significant). As expected, the $^4$He mass fraction increases with
lifetime leveling off at a few hundred
seconds at a value that is significantly greater than that
for three massless neutrino species.
The yields of D and $^3$He depend upon how much of these isotopes
are not burnt to $^4$He. This in turn depends upon competition
between the expansion rate and nuclear reaction rates: Faster expansion
results in more unburnt D and $^3$He. Thus the yields of D and
$^3$He increase with tau-neutrino lifetime, and begin to level
off for lifetimes of a few hundred seconds as this is when
nucleosynthesis is taking place (see Fig.~3).
The effect on the yield of $^7$Li is a bit more complicated.
Lithium production decreases with increasing $\eta$
for $\eta \mathrel{\mathpalette\fun <} 3\times 10^{-10}$ because the final abundance
is determined by competition between the expansion rate and
nuclear processes that destroy $^7$Li, and increases
with increasing $\eta$ for $\eta \mathrel{\mathpalette\fun >} 3\times 10^{-10}$
because the final abundance is determined by competition between
the expansion rate and nuclear processes that produce $^7$Li.
Thus, an increase in expansion rate leads to increased $^7$Li
production for $\eta \mathrel{\mathpalette\fun <} 3\times 10^{-10}$ and decreased $^7$Li
production for $\eta \mathrel{\mathpalette\fun >} 3\times 10^{-10}$; this is shown
in Fig.~4. Put another way
the valley in the $^7$Li production curve shifts to
larger $\eta$ with increasing tau-neutrino lifetime.
We show in Figs.~5 and 6 the excluded
region of the mass/lifetime plane for a Dirac
and Majorana tau neutrino respectively.
As expected, the excluded mass range grows with lifetime,
asymptotically approaching $0.3\,{\rm MeV}$ to
$33\,{\rm MeV}$ (Dirac) and $0.4\,{\rm MeV}$ to
$30\,{\rm MeV}$ (Majorana). We note the significant dependence of the excluded
region on lifetime; our results are in good agreement with the
one other work where comparison is straightforward \cite{ketal},
and in general agreement with Refs.~\cite{st2,osu}.
\subsection{$\nu_\tau \rightarrow$ sterile + electromagnetic daughter products}
Again, based upon our definition of sterility, the sterile
daughter could be a muon neutrino; thus, examples of this
generic decay mode include $\nu_\tau \rightarrow
\nu_\mu + \gamma$ or $\nu_\tau \rightarrow \nu_\mu + e^\pm$.
Our results here are based upon a two-body decay (e.g.,
$\nu_\tau \rightarrow \nu_\mu + \gamma$), and
change only slightly in the case of a three-body decay
(e.g., $\nu_\tau \rightarrow \nu_\mu + e^\pm$), where a larger
fraction of the tau-neutrino mass goes into electromagnetic entropy.
Two effects now come into play: the energy density
of the massive tau neutrino and its daughter products speed
up the expansion rate, tending to increase $^4$He, $^3$He, and D
production; and EM entropy production due to tau-neutrino decays reduce
the baryon-to-photon ratio (at the time of nucleosynthesis),
tending to decrease $^4$He production
and to increase D and $^3$He production. Both
effects tend to shift the $^7$Li valley (as a function
of $\eta_0$) to larger $\eta_0$.
While the two effects have the ``same sign'' for D, $^3$He, and
$^7$Li, they have opposite signs for $^4$He.
It is instructive to compare $^4$He production as a function of
lifetime to the previous ``all-sterile'' decay mode. Because of
the effect of entropy production, there is little
increase in $^4$He production
until a lifetime greater than $1000\,{\rm sec}$ or so. For
lifetimes greater than $1000\,{\rm sec}$ the bulk of the entropy
release takes place after nucleosynthesis,
and therefore does not affect the value of $\eta$ during nucleosynthesis.
Because of the competing effects on $^4$He production,
the impact of an unstable,
massive tau neutrino on nucleosynthesis is significantly less
than that in the all-sterile decay mode for lifetimes less than
about $1000\,{\rm sec}$. The excluded region of the mass/lifetime
plane is shown in Figs.~5 and 6. For lifetimes greater than about $1000\,{\rm sec}$
the excluded mass interval is essentially the same as
that for the all-sterile decay mode; for shorter lifetimes it
is significantly smaller.
Finally, because of entropy production, the final value of the
baryon-to-photon ratio is smaller for fixed initial
baryon-to-photon ratio: it is reduced by the
factor by which the entropy per comoving volume is increased.
In the limit of significant entropy production ($S_f/S_i
\gg 1$), this factor is given by, cf. Eq. (5.73) of Ref.~\cite{kt},
\begin{equation}\label{eq:entropy}
S_f/S_i \simeq 0.13\, r m_\nu \sqrt{\tau_\nu / m_{\rm Pl}} \simeq 1.5
\,{r m_\nu \over \,{\rm MeV}}\,\sqrt{\tau_\nu \over 1000 \,{\rm sec}}.
\end{equation}
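For orientation, this expression is easily evaluated; the masses and
lifetimes below are hypothetical points in the plane, chosen only to
illustrate the scaling of Eq.~(\ref{eq:entropy}):
\begin{verbatim}
import numpy as np

def entropy_ratio(r, m_nu_MeV, tau_sec):
    """S_f/S_i from the expression above; valid where the result >> 1."""
    return 1.5 * r * m_nu_MeV * np.sqrt(tau_sec / 1000.0)

# thermal relic abundance (r = 1), hypothetical masses and lifetimes:
for m_nu, tau in [(1.0, 1e3), (10.0, 1e3), (10.0, 1e5)]:
    print(m_nu, tau, entropy_ratio(1.0, m_nu, tau))
# the baryon-to-photon ratio is reduced by this same factor
\end{verbatim}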
A precise calculation of entropy production for this decay
mode is shown in Fig.~7. As can be seen in the figure
or from Eq.~(\ref{eq:entropy}), entropy production becomes
significant for lifetimes longer than about $100\,{\rm sec}$.
\subsection{$\nu_\tau \rightarrow \nu_e$ + sterile daughter products}
Once again, by our definition of sterility this includes
decay modes such as $\nu_\tau \rightarrow \nu_e + \phi$
or $\nu_\tau \rightarrow \nu_e +\nu_\mu {\bar\nu}_\mu$.
Here, we specifically considered the two-body decay mode $\nu_\tau
\rightarrow \nu_e + \phi$, though the results for the
three-body mode are very similar.
Two effects come into play: the energy density
of the massive tau neutrino and its daughter products
and the interaction of daughter electron
neutrinos with the nucleons and the ambient plasma.
The first effect has been discussed previously.
The second effect leads to some interesting new effects.
Electron neutrinos and antineutrinos produced by tau-neutrino decays
increase the weak rates that govern the neutron-to-proton
ratio. For short lifetimes ($\mathrel{\mathpalette\fun <} 30\,{\rm sec}$) and masses less
than about $10\,{\rm MeV}$ the main effect is to delay slightly
the ``freeze out'' of the neutron-to-proton ratio, thereby
decreasing the neutron fraction at the time of nucleosynthesis
and ultimately $^4$He production. For long lifetimes, or short lifetimes
and large masses, the perturbations to the $n\rightarrow p$
and $p\rightarrow n$ rates (per nucleon) are comparable; since after freeze out
of the neutron-to-proton ratio there are about six times as
many protons as neutrons, this has the effect of increasing
the neutron fraction and $^4$He production. This is illustrated in Fig.~8.
The slight shift in the neutron fraction does not affect
the other light-element abundances significantly.
The excluded portion of the mass/lifetime plane is shown
in Figs.~5 and 6. It agrees qualitatively with the results
of Ref.~\cite{st}.\footnote{The authors of Ref.~\cite{st}
use a less stringent constraint to $^4$He production,
$Y_P\le 0.26$; in spite of this, in some regions of
the $m_\nu -\tau_\nu$ plane their bounds are as, or even more, stringent.
This is presumably due to the neglect of electron-neutrino
interactions with the ambient plasma.} Comparing the limits for this
decay mode with the all-sterile mode, the effects of electron-neutrino
daughter products are clear: for long lifetimes
much higher mass tau neutrinos are excluded
and for short lifetimes low-mass tau neutrinos are allowed.
\subsection{$\nu_\tau \rightarrow \nu_e$ + electromagnetic daughter products}
Now we consider the most complex of the decay modes, where
none of the daughter products is sterile.
Specifically, we consider the decay mode $\nu_\tau \rightarrow
\nu_e+e^\pm$, though our results change very little for
the two-body decay $\nu_\tau \rightarrow \nu_e + \gamma$.
In this case all three effects previously discussed come into
play: the energy density of the massive tau neutrino and
its daughter products speed up the expansion rate; the
entropy released dilutes the baryon-to-photon ratio;
and daughter electron neutrinos increase the weak-interaction
rates that control the neutron fraction. The net effect
on $^4$He production is shown in Fig.~9 for a variety of
tau-neutrino masses. The main difference between this
decay mode and the previous one, $\nu_\tau \rightarrow \nu_e$ +
sterile, is for lifetimes between
$30\,{\rm sec}$ and $300\,{\rm sec}$, where the increase in $^4$He production is
less due to the entropy production which
reduces the baryon-to-photon ratio at the time of nucleosynthesis.
The excluded region of the mass/lifetime plane is shown in
Figs.~5 and 6. It agrees qualitatively with the results
of Ref.~\cite{kaw}.\footnote{The authors of Ref.~\cite{kaw}
use a less stringent constraint to $^4$He production,
$Y_P\le 0.26$; in spite of this, in some regions of
the $m_\nu -\tau_\nu$ plane their bounds are as, or even more, stringent.
This is presumably due to the neglect of electron-neutrino
interactions with the ambient plasma.}
The excluded region for this decay mode
is similar to that of the previous
decay mode, except that lifetimes less than about $100\,{\rm sec}$ are
not excluded as entropy production has diminished $^4$He
production in this lifetime interval.
\subsection{Limits to a generic light species}
We can apply the arguments for the four decay
modes discussed above to a hypothetical species whose
relic abundance has frozen out at a value $r$ relative
to a massless neutrino species before the epoch of
primordial nucleosynthesis (also see Refs.~\cite{st1,st2}).
The previous limits become limits to $rm$ as a function
of lifetime $\tau$ and mass $m$, which are difficult
to display. With the exception of the effect that involves
daughter electron neutrinos, all other effects only depend
upon $rm$, which sets the energy density of the massive
particle and its daughter products. In Fig.~10, we show
that for lifetimes greater than about $100\,{\rm sec}$ and masses
greater than about $10\,{\rm MeV}$, the $^4$He production is
relatively insensitive to the mass of the decaying particle.
This means that for lifetimes greater than about $100\,{\rm sec}$
the limit to $rm$ should be relatively insensitive to
particle mass.
We show in Fig.~11 the excluded regions
of the $rm$-$\tau$ plane for a $20\,{\rm MeV}$ decaying particle.
In deriving these limits we used the same criteria for
acceptable light-element abundances and assumed three massless
neutrino species. The limits to $rm$ for decay modes without
electron-neutrino daughter products are strictly independent
of mass; the other two should be relatively insensitive
to the particle mass for $\tau \mathrel{\mathpalette\fun >} 100\,{\rm sec}$ (and the actual
limits are more stringent for $m > 20\,{\rm MeV}$).
\section{Laboratory and Other Limits}
There are a host of other constraints to the mass and lifetime
of the tau neutrino~\cite{sarkar}.
As a general rule, cosmological arguments, such as the one
presented above, pose {\it upper} limits to the tau-neutrino
lifetime for a given mass: cosmology
has nothing to say about a particle
that decays very early since it would not have affected
the ``known cosmological history.'' Laboratory experiments
on the other hand pose {\it lower}
limits to the lifetime because nothing happens inside
a detector if the lifetime of the decaying particle is too long.
Finally, astrophysical considerations generally rule out
bands of lifetime since ``signals'' can only be detected if (a) the tau
neutrinos escape the object of interest before decaying
and (b) decay before they pass by earthly detectors.
\subsection{Laboratory}
The most important limits of course are the direct limits to the
tau-neutrino mass. These have come down steadily over the past few
years. The current upper limits are $31\,{\rm MeV}$ and
$32.6\,{\rm MeV}$ \cite{labmass}.
If the tau neutrino has a mass greater than $2m_e = 1.02\,{\rm MeV}$,
then the decay $\nu_\tau
\rightarrow \nu_e+e^\pm$ takes place through ordinary
electroweak interactions at a rate
\begin{equation}\label{UET}
\Gamma = {G_{F}^2 m_\nu^5\over 192\pi^3} \vert U_{e\tau} \vert^2
\vert U_{ee} \vert^2 \simeq
{ (m_\nu /{\,{\rm MeV}})^5 \vert U_{e\tau} \vert^2
\over 2.9\times 10^4 ~{\rm sec}} ,
\end{equation}
where $U_{e\tau}$ and $U_{ee}$ are elements of the
unitary matrix that relates
mass eigenstates to weak eigenstates, the leptonic
equivalent of the Cabibbo-Kobayashi-Maskawa matrix. We note that
the rate could be larger (or even perhaps smaller) in models where
the decay proceeds through new interactions. Thus, limits to
$U_{e\tau}$ give rise to model-dependent limits to the tau-neutrino lifetime.
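To get a feel for the numbers, Eq.~(\ref{UET}) can be inverted for the
lifetime; the mixing-matrix element used below is a hypothetical value,
not an experimental determination:
\begin{verbatim}
def lifetime_sec(m_nu_MeV, U_etau_sq):
    """Lifetime from the rate above, taking |U_ee|^2 ~ 1 and assuming
    the decay proceeds through standard electroweak interactions."""
    return 2.9e4 / (m_nu_MeV**5 * U_etau_sq)

print(lifetime_sec(10.0, 1e-2))  # ~ 29 sec for a 10 MeV tau neutrino
\end{verbatim}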
A number of experiments have set limits to $U_{e\tau}$.
The most sensitive experiment in the mass range $1.5\,{\rm MeV}
< m_\nu < 4\,{\rm MeV}$ was performed at the power reactor in Gosgen,
Switzerland~\cite{gosgen}, which produces tau
neutrinos at a rate proportional to $\vert U_{e\tau}\vert^2$
through decay of heavy nuclei and $\nu_e-\nu_\tau$ mixing.
Above this mass range, experiments that search for
additional peaks in the positron spectrum of the
$\pi^+ \rightarrow e^+\nu$ decay (due to $\nu_e-\nu_\tau$ mixing)
provide the strictest limits. In the
mass range $4\,{\rm MeV} < m_\nu < 20\,{\rm MeV}$,
Bryman et al. \cite{bryman}\
set the limits shown in Fig.~12; for larger masses the best limits
come from Ref.~\cite{leener}.
There are also direct accelerator bounds to the lifetime of
an unstable tau neutrino that produces a photon or $e^\pm$ pair.
In particular, as has been recently emphasized by
Babu et al. \cite{babu}, the BEBC beam dump experiment~\cite{bebc}
provides model-independent limits based upon the direct search for the
EM decay products. These limits, while not quite as
strict as those mentioned above, are of interest
since they apply to the photon mode and to the $e^\pm$ mode
even if the decay proceeds through new interactions. The limit,
\begin{equation}
\tau_\nu > 0.18\,(m_\nu /\,{\rm MeV})\,\,{\rm sec} ,
\end{equation} is shown in Fig.~12.
\subsection{Astrophysical}
The standard picture of type II supernovae has the binding
energy of the newly born neutron star (about $3\times
10^{53}{\,\rm erg}$) shared equally by neutrinos
of all species emitted from a neutrinosphere of temperature of about $4\,{\rm MeV}$.
There are two types of limits based upon SN 1987A, and combined they
rule out a large region of $m_\nu - \tau_\nu$ plane.
First, if the tau neutrino decayed
after it left the progenitor supergiant, which has a radius
$R\simeq 3\times10^{12}\,{\rm cm}$, the high-energy daughter
photons could have been detected \cite{smm,ktsn,pvo}. The Solar Maximum
Mission (SMM) Gamma-ray Spectrometer set an upper
limit to the fluence of $\gamma$ rays during the ten
seconds in which neutrinos were detected:
\begin{equation}
f_\gamma < 0.9~{\,{\rm cm}}^{-2}; \qquad 4.1\,{\rm MeV} < E_\gamma < 6.4\,{\rm MeV}.
\end{equation}
As we will see shortly, if only one in $10^{10}$
of the tau neutrinos leaving the supernova produced a photon,
this limit would have been saturated. In the mass regime
of interest there are two ways out of
this constraint: The lifetime can be so
long that the arrival time was more than ten seconds
after the electron antineutrinos arrived, or
the lifetime can be so short that
the daughter photons were produced
inside the progenitor. We can take account of both of these
possibilities in the following formula for the
expected fluence of $\gamma$ rays:
\begin{equation}
f_{\gamma,10} = f_{\nu\bar\nu} W_\gamma B_\gamma\langle F_1 F_2 \rangle
\end{equation}
where the subscript $10$ reminds us that we are
only interested in the first ten seconds,
$f_{\nu\bar\nu} \simeq 1.4\times 10^{10}$ cm$^{-2}$ is the fluence of
a massless neutrino species, $W_\gamma \sim 1/4$ is the
fraction of decay photons produced with energies between
$4.1\,{\rm MeV}$ and $6.4\,{\rm MeV}$, $F_1$ is
the fraction of tau neutrinos that decay outside the progenitor,
and $F_2$ is the fraction of these that decay early enough so that the decay
products were delayed by less than ten seconds. The quantity
$B_\gamma$ is the branching ratio to a decay mode that includes a photon.
For $m_\nu \mathrel{\mathpalette\fun >} 1\,{\rm MeV}$ one expects the $\nu_e+e^\pm$ mode to be dominant;
however, ordinary radiative corrections should lead to
$B_\gamma \simeq 10^{-3}$ \cite{mohapatra}. Finally angular brackets denote
an average over the Fermi-Dirac distribution of neutrino momenta,
\begin{equation}
\langle A\, \rangle \equiv {1\over 1.5\zeta(3) T^3}
\int_0^\infty {A\,dp\,p^2\over e^{E/T} + 1},
\end{equation}
where $T\simeq 4$ MeV is the temperature of the neutrinosphere and
$E = (p^2 + m_\nu^2)^{1/2}$.
To evaluate the fluence of gamma rays we need to know
$F_1$ and $F_2$. The fraction $F_1$ that decay outside the
progenitor is simply $e^{-t_1/\tau_L}$ where $t_1 = R/v = RE/p$
and the ``lab'' lifetime $\tau_L
= \tau E/m_\nu$. Of these, the fraction
whose decay products arrive {\it after}
ten seconds is $e^{-t_2/\tau_L}/e^{-t_1/\tau_L}$ where $t_2 = 10\,{\rm sec}
/(1-v/c)$; thus, $F_2 = 1 - e^{(t_1-t_2)/\tau_L}$.
Figure 12 shows this constraint assuming
a branching ratio $B_\gamma=10^{-3}$.
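The thermal average $\langle F_1F_2\rangle$ and the resulting fluence
are straightforward to evaluate numerically. The sketch below is a
minimal implementation of the formulas above; the mass and lifetime are
hypothetical, and $F_2$ is clamped at zero for momenta where no decays
satisfy the ten-second window:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

T_nu  = 4.0      # neutrinosphere temperature (MeV)
R     = 3e12     # progenitor radius (cm)
c     = 3e10     # speed of light (cm/sec)
zeta3 = 1.2020569

def F1F2(p, m, tau):
    E    = np.sqrt(p**2 + m**2)   # MeV
    beta = p / E
    tL   = tau * E / m            # "lab" lifetime (sec)
    t1   = R / (beta * c)         # time to exit the progenitor
    t2   = 10.0 / (1.0 - beta)    # latest decay giving < 10 sec delay
    return np.exp(-t1/tL) * max(0.0, 1.0 - np.exp((t1 - t2)/tL))

def thermal_avg(m, tau):          # Fermi-Dirac average defined above
    num = quad(lambda p: F1F2(p, m, tau) * p**2
               / (np.exp(np.sqrt(p**2 + m**2)/T_nu) + 1.0),
               0.0, 50.0*T_nu)[0]
    return num / (1.5 * zeta3 * T_nu**3)

f_nunu, W_gam, B_gam = 1.4e10, 0.25, 1e-3
m, tau = 10.0, 1e4                # hypothetical mass (MeV), lifetime (sec)
print(f_nunu * W_gam * B_gam * thermal_avg(m, tau), "photons/cm^2")
\end{verbatim}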
The second constraint comes from observing that if tau
neutrinos decayed within the progenitor supergiant,
the energy deposited (up to about $10^{53}{\,\rm erg}$) would have
``heated up'' the progenitor so much as to conflict
with the observed optical luminosity of SN 1987A (and other
type II supernovae) \cite{mohapatra,schramm}.
We require
\begin{equation}
E_{\rm input} = \langle (1-F_1) \rangle E_\nu \mathrel{\mathpalette\fun <} 10^{47} {\,\rm erg} ,
\end{equation}
where $E_\nu \sim 10^{53}{\,\rm erg}$ is the energy carried off
by a massless neutrino species, and $1-F_1$ is the fraction
of tau neutrinos that decay within
the progenitor. This constraint is mode-independent since decay-produced
photons or $e^\pm$ pairs will equally well ``overheat'' the progenitor.
As Fig.~12 shows, the ``supernova-light'' bound is extremely powerful.
Finally, a note regarding our SN 1987A constraints.
We have assumed that a massive tau-neutrino
species has a Fermi-Dirac distribution with the same
temperature as a massless ($m_\nu \ll 10\,{\rm MeV}$)
neutrino species. This is almost certainly false.
Massive ($m_\nu \mathrel{\mathpalette\fun >} 10\,{\rm MeV}$ or so)
tau neutrinos will drop out of chemical equilibrium
(maintained by pair creation/annihilations and possibly
decays/inverse decays) interior to the usual neutrinosphere
as the Boltzmann factor suppresses annihilation
and pair creation rates relative to scattering rates.
This leads us to believe that we have actually {\it underestimated}
the fluence of massive neutrinos.
While the problem has yet to be treated rigorously,
we are confident that, if anything, our simplified treatment
results in limits that are overly conservative.
Accurate limits await a more detailed analysis \cite{sigl}.
\subsection{Cosmological}
The most stringent cosmological constraint for masses
$0.1\,{\rm MeV} \mathrel{\mathpalette\fun <} m \mathrel{\mathpalette\fun <} 100\,{\rm MeV}$ is the nucleosynthesis bound
discussed in this paper. Nonetheless, it is worthwhile to
mention some of the other cosmological limits since they are
based upon independent arguments. A stable tau neutrino
with mass in the MeV range contributes much more energy density than is
consistent with the age of the Universe.
Such a neutrino must be unstable, with a lifetime short
enough for its decay products to lose most of their energy
to ``red shifting'' \cite{dicus}.
The lifetime limit is mass dependent; a neutrino with a mass of about
$1\,{\rm MeV}$ must have a lifetime shorter than about $10^9\,{\rm sec}$,
and the constraint gets less severe for larger or smaller masses.
There is an even more stringent bound based upon the necessity
of the Universe being matter dominated by a red shift of
about $10^4$ in order to produce the observed large-scale structure
\cite{steigman}. Finally, there are other
nucleosynthesis bounds based upon the dissociation of the light
elements by decay-produced photons or electron-neutrinos \cite{fission}
and by $e^\pm$ pairs produced by the continuing annihilations
of tau neutrinos \cite{josh}.
\section{Summary and Discussion}
We have presented a comprehensive study of the effect of
an unstable tau neutrino on primordial nucleosynthesis.
The effects on the primordial abundances and the mass/lifetime
limits that follow depend crucially upon the decay
mode. In the context of primordial nucleosynthesis
we have identified four generic decay modes that bracket
the larger range of possibilities: (1)
all-sterile daughter products; (2) sterile daughter product(s)
+ EM daughter product(s); (3) $\nu_e$ + sterile daughter product(s);
and (4) $\nu_e$ + EM daughter product(s). The excluded
regions of the tau-neutrino mass/lifetime plane for these
four decay modes are shown in Figs.~5 (Dirac) and 6 (Majorana).
In the limit of long lifetime ($\tau_\nu \gg 100\,{\rm sec}$), the
excluded mass range is: $0.3\,{\rm MeV} -33\,{\rm MeV}$ (Dirac) and
$0.4\,{\rm MeV} - 30\,{\rm MeV}$ (Majorana). Together with current
laboratory upper mass limits, $31\,{\rm MeV}$ (ARGUS) and $32.6\,{\rm MeV}$
(CLEO), our results very nearly exclude a long-lived tau neutrino
more massive than about $0.4\,{\rm MeV}$. Moreover, other
astrophysical and laboratory data exclude a tau neutrino
in the $0.3\,{\rm MeV} - 50\,{\rm MeV}$ mass range if its decay product(s)
include a photon or $e^\pm$ pair. Thus, if the mass of the
tau neutrino is in the range $0.4\,{\rm MeV}$ to $30\,{\rm MeV}$, then its decay
products cannot include a photon or an $e^\pm$ pair and its
lifetime must be shorter than a few hundred seconds.
We note that the results of Ref.~\cite{osu} for the all-sterile
decay mode are more restrictive than ours, excluding masses
from about $0.1\,{\rm MeV}$ to about $50\,{\rm MeV}$ for $\tau_\nu \gg
100\,{\rm sec}$. This traces in
almost equal parts to (i) small ($\Delta Y \simeq +0.003$), but significant,
corrections to the $^4$He mass fraction and
(ii) slightly larger relic neutrino abundance.
With regard to the first difference, this illustrates the
sensitivity to the third significant figure of the $^4$He
mass fraction. With regard to the second difference, it is
probably correct that within the assumptions made
the tau-neutrino abundance during nucleosynthesis is
larger than what we used. However, other effects that have
been neglected probably lead to differences in
the tau-neutrino abundance of the
same magnitude. For example, for tau-neutrino masses
around the upper range of excluded masses, $50\,{\rm MeV} -100\,{\rm MeV}$,
finite-temperature corrections, hadronic final states
(e.g., a single pion), and tau-neutrino mixing have not
been included in the annihilation cross section and
are likely to be important at the 10\% level.
So is a tau neutrino with lifetime greater than
a few hundred seconds and mass greater than a fraction
of an $\,{\rm MeV}$ ruled out or not? Unlike a limit based upon
a laboratory experiment, it is impossible to place
standard error flags on an astrophysical or cosmological
bound. This is because of assumptions that
must be made and modeling that must be done. For example,
the precise limits that one derives depend
upon the adopted range of acceptable light-element abundances.
To be specific, in Ref.~\cite{osu} the upper limit of the
excluded mass range drops to around $38\,{\rm MeV}$ and the lower
limit increases to about $0.4\,{\rm MeV}$ when the
primordial $^4$He mass fraction is allowed to be as large as 0.245
(rather than 0.240). {\it In our opinion, a very strong case has been
made against a tau-neutrino mass in the mass range $0.4\,{\rm MeV}$ to $30\,{\rm MeV}$
with lifetime much greater than $100\,{\rm sec}$; together
with the laboratory limits this very nearly excludes a
long-lived tau neutrino of mass greater than $0.4\,{\rm MeV}$.}
Perhaps the most interesting thing found in our study is the fact that
a tau neutrino of mass $1\,{\rm MeV}$ to $10\,{\rm MeV}$ and lifetime
$0.1\,{\rm sec}$ to $10\,{\rm sec}$ that decays to an electron neutrino
and a sterile daughter product can very significantly decrease
the $^4$He mass fraction (to as low as 0.18 or so). It has
long been realized that the standard picture of nucleosynthesis
would be in trouble if the primordial $^4$He mass fraction
were found to be smaller than about 0.23; within the standard
framework we have found one way out: an unstable tau
neutrino.\footnote{Based upon dimensional considerations the
lifetime for the mode $\nu_\tau \rightarrow \nu_e+\phi$
is expected to be $\tau_\nu \sim 8\pi f^2/m_\nu^3$,
where $f$ is the energy scale of the superweak interactions
that mediate the decay. For $\tau_\nu \sim 10\,{\rm sec}$ and
$m_\nu\sim 10\,{\rm MeV}$, $f\sim 10^9\,{\rm GeV}$.}
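The dimensional estimate in the footnote is easy to check numerically,
converting from natural units with $\hbar\simeq 6.58\times10^{-25}\,{\rm
GeV\,sec}$:
\begin{verbatim}
import math
hbar = 6.582e-25                    # GeV sec

def tau_phi_sec(f_GeV, m_nu_GeV):
    """tau ~ 8 pi f^2 / m^3 in natural units, converted to seconds."""
    return 8.0 * math.pi * f_GeV**2 / m_nu_GeV**3 * hbar

print(tau_phi_sec(1e9, 0.010))      # ~ 17 sec for f ~ 1e9 GeV, m ~ 10 MeV
\end{verbatim}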
In principle, the possibility of an unstable tau neutrino
also loosens the primordial-nucleosynthesis
bound to the number of light species which is largely
based on the overproduction of $^4$He. However, an
unstable tau neutrino does not directly affect the important
primordial nucleosynthesis bound to the
baryon-to-photon ratio (and $\Omega_B$) as this bound involves
the abundances of D, $^3$He, and $^7$Li and not $^4$He.
Finally, we translated our results for the tau neutrino into limits to the
relic abundance of an unstable, hypothetical particle species that
decays into one of the four generic decay modes discussed.
Those very stringent limits are shown in Fig.~11.
\vskip 1.5cm
\noindent We thank Robert Scherrer, David Schramm, Gary Steigman,
and Terry Walker for useful comments. This work was supported in part by the
DOE (at Chicago and Fermilab), by NASA through
NAGW-2381 (at Fermilab), and by GG's NSF predoctoral fellowship.
MST thanks the Aspen Center for Physics for its hospitality
where some of this work was carried out.
\vskip 2 cm
\section{Introduction}
Understanding the structure evolution in amorphous alloys during
thermal and mechanical treatments is important for tuning their
physical and mechanical properties~\cite{Greer16}. It is well
accepted by now that in contrast to crystalline solids where
plasticity is governed by topological line defects, known as
dislocations, the elementary plastic events in amorphous materials
involve collective rearrangements of a few tens of atoms, the
so-called shear transformations~\cite{Spaepen77,Argon79}. In a
driven system, these rearrangements can assemble into shear bands
where flow becomes sharply localized; such bands act as precursors
to fracture~\cite{Wang15,Zhong16,Zaccone17,Scudino17}. Once a shear
band is formed, the structural integrity can be recovered either by
heating a sample above the glass transition temperature and then
cooling back to the glass phase (resetting the structure) or,
alternatively, via mechanical agitation. For example, it was shown
using atomistic simulations that cracks in nanocrystalline metals
can be completely healed via formation of wedge disclinations during
stress-driven grain boundary migration~\cite{Demkowicz13}. It was
also found experimentally and by means of atomistic simulations that
after steady deformation of bulk metallic glasses, the shear bands
relax during annealing below the glass transition temperature and
the local diffusion coefficient exhibits a nonmonotonic
behavior~\cite{Binkowski16}. In the case of amorphous solids,
however, the effects of periodic loading and initial glass stability
on structural relaxation within the shear band domain, degree of
annealing, and change in mechanical properties yet remain to be
understood.
\vskip 0.05in
During the last decade, molecular dynamics simulations were
particularly valuable in elucidating the atomic mechanisms of
structural relaxation, rejuvenation, and yielding in amorphous
materials under periodic loading
conditions~\cite{Lacks04,Priezjev13,
Sastry13,Reichhardt13,Priezjev14,IdoNature15,Priezjev16,Kawasaki16,
Priezjev16a,Sastry17,Priezjev17,OHern17,Priezjev18,Priezjev18a,
NVP18strload,Sastry18,PriMakrho05,PriMakrho09,Sastry19band,
PriezSHALT19,Ido2020,Priez20ba,Peng20,Jana20,Kawasaki20,KawBer20,BhaSastry20,
Priez20alt,Pelletier20,Priez20del}. Remarkably, it was found that in
athermal, disordered solids subjected to oscillatory shear in the
elastic range, the trajectories of atoms after a number of transient
cycles become exactly reversible and fall into the so-called `limit
cycles'~\cite{Reichhardt13,IdoNature15}. On the other hand, in the
presence of thermal fluctuations, the relaxation process generally
continues during thousands of cycles and the decay of the potential
energy becomes progressively slower over
time~\cite{Priezjev18,NVP18strload,PriezSHALT19}. More recently,
it was shown that the critical strain amplitude increases in more
stable athermal glasses~\cite{KawBer20,BhaSastry20}, whereas the
yielding transition can be significantly delayed in mechanically
annealed binary glasses at finite temperature~\cite{Priez20del}. In
general, the formation of a shear band during the yielding
transition is accelerated in more rapidly annealed glasses
periodically loaded at a higher strain amplitude or when the shear
orientation is alternated in two or three spatial
dimensions~\cite{Priezjev17,Priezjev18a,Sastry19band,
Priez20ba,Priez20alt}. Interestingly, after a shear band is formed
during cyclic loading, the glass outside the band remains well
annealed, and upon reducing strain amplitude below yield, the
initial shear band anneals out, which leads to reversible dynamics
in the whole domain~\cite{Sastry19band}. However, despite
extensive efforts, it remains unclear whether mechanical annealing
of a shear band or a crack in metallic glasses depends on the
preparation history, sample size and loading conditions.
\vskip 0.05in
In this paper, the influence of periodic shear deformation in the
elastic range on shear band annealing and mechanical properties of
binary glasses is studied using molecular dynamics simulations. The
system-spanning shear band is initially formed in stable glasses
that were either thermally or mechanically annealed. It will be
shown that small-amplitude oscillatory shear anneals out the shear
band and leads to nearly reversible deformation after a few hundred
cycles at finite temperature. Moreover, upon loading at higher
strain amplitudes, the glasses become increasingly better annealed,
which results in higher yield stress.
\vskip 0.05in
The rest of the paper is outlined as follows. The preparation
procedure, deformation protocol as well as the details of the
simulation model are described in the next section. The time
dependence of the potential energy, mechanical properties, and
spatial organization of atoms with large nonaffine displacements are
presented in section\,\ref{sec:Results}. The results are briefly
summarized in the last section.
\section{Molecular dynamics (MD) simulations}
\label{sec:MD_Model}
In the present study, the amorphous alloy is represented by the
binary (80:20) Lennard-Jones (LJ) mixture originally introduced by
Kob and Andersen (KA) about twenty years ago~\cite{KobAnd95}. In
this model, the interaction between different types of atoms is
strongly non-additive, thus, allowing formation of a disordered
structure upon slow cooling below the glass transition
temperature~\cite{KobAnd95}. More specifically, the pairwise
interaction is modeled via the LJ potential, as follows:
\begin{equation}
V_{\alpha\beta}(r)=4\,\varepsilon_{\alpha\beta}\,\Big[\Big(\frac{\sigma_{\alpha\beta}}{r}\Big)^{12}\!-
\Big(\frac{\sigma_{\alpha\beta}}{r}\Big)^{6}\,\Big],
\label{Eq:LJ_KA}
\end{equation}
with the parameters: $\varepsilon_{AA}=1.0$, $\varepsilon_{AB}=1.5$,
$\varepsilon_{BB}=0.5$, $\sigma_{AA}=1.0$, $\sigma_{AB}=0.8$,
$\sigma_{BB}=0.88$, and $m_{A}=m_{B}$~\cite{KobAnd95}. It should be
mentioned that a similar parametrization was used by Weber and
Stillinger to study the amorphous metal-metalloid alloy
$\text{Ni}_{80}\text{P}_{20}$~\cite{Weber85}. To save computational
time, the LJ potential was truncated at the cutoff radius
$r_{c,\,\alpha\beta}=2.5\,\sigma_{\alpha\beta}$. The total number of
atoms is fixed at $N=60\,000$ throughout the study. For clarity, all
physical quantities are reported in terms of the reduced units of
length, mass, and energy $\sigma=\sigma_{AA}$, $m=m_{A}$, and
$\varepsilon=\varepsilon_{AA}$. Using the LAMMPS parallel code, the
equations of motion were integrated via the velocity Verlet
algorithm with the time step $\triangle t_{MD}=0.005\,\tau$, where
$\tau=\sigma\sqrt{m/\varepsilon}$ is the LJ
time~\cite{Allen87,Lammps}.
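For reference, a minimal sketch of the pair potential of
Eq.~(\ref{Eq:LJ_KA}) with the KA parameters (plain truncation at the
cutoff, as described above, with no energy shift):
\begin{verbatim}
import numpy as np

eps = {('A','A'): 1.0, ('A','B'): 1.5, ('B','B'): 0.5}
sig = {('A','A'): 1.0, ('A','B'): 0.8, ('B','B'): 0.88}

def V(r, a, b):
    """Kob-Andersen LJ pair potential, truncated at 2.5 sigma_ab."""
    key = (a, b) if (a, b) in eps else (b, a)
    e, s = eps[key], sig[key]
    return np.where(r < 2.5*s, 4.0*e*((s/r)**12 - (s/r)**6), 0.0)

print(V(np.array([0.9, 1.1, 2.0]), 'A', 'B'))
\end{verbatim}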
\vskip 0.05in
All simulations were carried out at a constant density
$\rho=\rho_A+\rho_B=1.2\,\sigma^{-3}$ in a periodic box of linear
size $L=36.84\,\sigma$. It was previously found that the computer
glass transition temperature of the KA model at the density
$\rho=1.2\,\sigma^{-3}$ is
$T_c=0.435\,\varepsilon/k_B$~\cite{KobAnd95}. The system temperature
was maintained via the Nos\'{e}-Hoover
thermostat~\cite{Allen87,Lammps}. After thorough equilibration and
gradual annealing at the temperature $T_{LJ}=0.01\,\varepsilon/k_B$,
the system was subjected to periodic shear deformation along the
$xz$ plane as follows:
\begin{equation}
\gamma(t)=\gamma_0\,\text{sin}(2\pi t/T),
\label{Eq:shear}
\end{equation}
where $\gamma_0$ is the strain amplitude and $T=5000\,\tau$ is the
period of oscillation. The corresponding oscillation frequency is
$\omega=2\pi/T=1.26\times10^{-3}\,\tau^{-1}$. Once a shear band was
formed at $\gamma_0=0.080$, the glasses were periodically strained
at the strain amplitudes $\gamma_0=0.030$, $0.040$, $0.050$,
$0.060$, and $0.065$ during 3000 cycles. It was previously found
that in the case of poorly annealed (rapidly cooled) glasses, the
critical value of the strain amplitude at the temperature
$T_{LJ}=0.01\,\varepsilon/k_B$ and density $\rho=1.2\,\sigma^{-3}$
is $\gamma_0\approx0.067$~\cite{Priez20alt}. A typical simulation
of 3000 cycles takes about 80 days using 40 processors in
parallel.
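In practice this protocol is imposed on the simulation box, e.g., via an
oscillating box tilt in LAMMPS; the sketch below merely tabulates the
strain signal of Eq.~(\ref{Eq:shear}) and the computational cost of the
runs:
\begin{verbatim}
import numpy as np

gamma0, T = 0.060, 5000.0         # strain amplitude, period (LJ time)
dt, ncyc  = 0.005, 3000           # MD time step, number of cycles

t = np.linspace(0.0, ncyc*T, 200001)
gamma = gamma0 * np.sin(2.0*np.pi*t/T)   # gamma(t) defined above
print("omega =", 2.0*np.pi/T)            # 1.26e-3 per tau
print("MD steps per cycle:", int(T/dt))  # 1e6 -> 3e9 steps in 3000 cycles
\end{verbatim}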
\vskip 0.05in
For the simulation results presented in the next section, the
preparation and the initial loading protocols are the same as the
ones used in the previous MD study on the yielding transition in
stable glasses~\cite{Priez20del}. Briefly, the binary mixture was
first equilibrated at $T_{LJ}=1.0\,\varepsilon/k_B$ and
$\rho=1.2\,\sigma^{-3}$ and then slowly cooled with the rate
$10^{-5}\varepsilon/k_{B}\tau$ to $T_{LJ}=0.30\,\varepsilon/k_B$.
Furthermore, one sample was cooled down to
$T_{LJ}=0.01\,\varepsilon/k_B$ during the time interval $10^4\,\tau$
(see Fig.\,\ref{fig:shapshot}). The other sample was mechanically
annealed at $T_{LJ}=0.30\,\varepsilon/k_B$ via cyclic loading at
$\gamma_0=0.035$ during 600 cycles, and only then cooled to
$T_{LJ}=0.01\,\varepsilon/k_B$ during $10^4\,\tau$. Thus, after
relocating glasses to $T_{LJ}=0.01\,\varepsilon/k_B$, two glass
samples with different processing history and potential energies
were obtained. In what follows, these samples will be referred to as
\textit{thermally annealed} and \textit{mechanically annealed}
glasses.
\section{Results}
\label{sec:Results}
Amorphous alloys typically undergo physical aging, when a system
slowly evolves towards lower energy states, and generally this
process can be accelerated by external cyclic deformation within the
elastic range~\cite{Qiao19}. Thus, the structural relaxation of
disordered solids under periodic loading proceeds via collective,
irreversible rearrangements of
atoms~\cite{Sastry13,Priezjev18,Priezjev18a,Sastry18}, while at
sufficiently low energy levels, mechanical annealing becomes
inefficient~\cite{KawBer20}. The two glass samples considered in the
present study were prepared either via mechanical annealing at a
temperature not far below the glass transition temperature or by
computationally slow cooling from the liquid state. It was
previously shown that small-amplitude periodic shear deformation at
temperatures well below $T_g$ does not lead to further annealing of
these glasses~\cite{Priez20del}. Rather, the results presented
below focus on the annealing process of a shear band, introduced in
these samples by large periodic strain, and subsequent recovery of
their mechanical properties.
\vskip 0.05in
The time dependence of the potential energy at the end of each cycle
is reported in Fig.\,\ref{fig:poten_Quench_SB_heal} for the
\textit{thermally annealed} glass. In this case, the glass was first
subjected to oscillatory shear during 200 cycles with the strain
amplitude $\gamma_0=0.080$ (see the black curve in
Fig.\,\ref{fig:poten_Quench_SB_heal}). The strain amplitude
$\gamma_0=0.080$ is slightly larger than the critical strain
amplitude $\gamma_0\approx0.067$ at $T_{LJ}=0.01\,\varepsilon/k_B$
and $\rho=1.2\,\sigma^{-3}$~\cite{Priez20alt}, and, therefore, the
periodic loading induced the formation of a shear band across the
system after about 20 cycles. As shown in
Fig.\,\ref{fig:poten_Quench_SB_heal}, the process of shear band
formation is associated with a sharp increase in the potential
energy followed by a plateau at $U\approx-8.26\,\varepsilon$ with
pronounced fluctuations due to plastic flow within the band. It was
previously demonstrated that during the plateau period, the periodic
deformation involves two well separated domains with diffusive and
reversible dynamics~\cite{Priez20del}.
\vskip 0.05in
After the shear band became fully developed in the \textit{thermally
annealed} glass, the strain amplitude of periodic deformation was
reduced in the range $0.030\leqslant \gamma_0 \leqslant 0.065$ when
$t=200\,T$. The results in Fig.\,\ref{fig:poten_Quench_SB_heal}
indicate that the potential energy of the system is gradually
reduced when $t>200\,T$, and the energy drop increases at higher
strain amplitudes (except for $\gamma_0=0.065$). Notice that the
potential energy levels out at $t\gtrsim 1300\,T$ for
$\gamma_0=0.030$, $0.040$, and $0.050$, while the relaxation process
continues up to $t=3200\,T$ for $\gamma_0=0.060$. These results
imply that the shear band becomes effectively annealed by the
small-amplitude oscillatory shear, leading to nearly reversible
dynamics in the whole sample, as will be illustrated below via the
analysis of nonaffine displacements. By contrast, the deformation
within the shear band remains irreversible at the higher strain
amplitude $\gamma_0=0.065$ (denoted by the fluctuating grey curve in
Fig.\,\ref{fig:poten_Quench_SB_heal}). This observation can be
rationalized by realizing that the strain remains localized within
the shear band, and the effective strain amplitude within the band
is greater than the critical value
$\gamma_0\approx0.067$~\cite{Priez20alt}.
\vskip 0.05in
The potential energy minima for the \textit{mechanically annealed}
glass are presented in Fig.\,\ref{fig:poten_600cyc_SB_heal} for the
indicated strain amplitudes. It should be commented that the
preparation protocol, which included 600 cycles at $\gamma_0=0.035$
and $T_{LJ}=0.30\,\varepsilon/k_B$, produced an atomic configuration
with a relatively deep potential energy level, \textit{i.e.},
$U\approx-8.337\,\varepsilon$. Upon periodic loading at
$\gamma_0=0.080$ and $T_{LJ}=0.01\,\varepsilon/k_B$, the yielding
transition is delayed by about 450 cycles, as shown by the black
curve in Fig.\,\ref{fig:poten_600cyc_SB_heal} (the same data as in
Ref.\,\cite{Priez20del}). Similarly to the case of thermally
annealed glasses, the potential energy in
Fig.\,\ref{fig:poten_600cyc_SB_heal} is gradually reduced when the
strain amplitude is changed from $\gamma_0=0.080$ to the selected
values in the range $0.030\leqslant \gamma_0 \leqslant 0.065$.
Interestingly, the largest decrease in the potential energy at the
strain amplitude $\gamma_0=0.060$ is nearly the same ($\Delta
U\approx 0.03\,\varepsilon$) for both thermally and mechanically
annealed glasses. In addition, it can be commented that in both
cases presented in Figs.\,\ref{fig:poten_Quench_SB_heal} and
\ref{fig:poten_600cyc_SB_heal}, the potential energy remains above
the energy levels of initially stable glasses (before a shear band
is formed) even for loading at the strain amplitude
$\gamma_0=0.060$. The results of a previous MD study on mechanical
annealing of \textit{rapidly quenched} glasses imply that the energy
level $U\approx-8.31\,\varepsilon$ can be reached via cyclic loading
at $T_{LJ}=0.01\,\varepsilon/k_B$ but it might take thousands of
additional cycles~\cite{Priez20alt}.
\vskip 0.05in
While the potential energy within a shear band becomes relatively
large, the energy of the glass outside the band remains largely
unaffected during the yielding transition. As shown above, the
\textit{mechanically annealed} glass is initially more stable (has a
lower potential energy) than the \textit{thermally annealed} glass.
This in turn implies that the boundary conditions for the subyield
loading of the shear band are different in the two cases, and,
therefore, the potential energy change during the relaxation
process, in principle, might also vary. In other words, the
annealing of the shear band by small-amplitude periodic deformation
might be affected by the atomic structure of the adjacent glass.
However, the results in Figs.\,\ref{fig:poten_Quench_SB_heal} and
\ref{fig:poten_600cyc_SB_heal} suggest that the potential energy
change is roughly the same in both cases; although a more careful
analysis might be needed in the future to clarify this point.
\vskip 0.05in
We next report the results of mechanical tests that involve startup
continuous shear deformation in order to probe the effect of
small-amplitude periodic loading on the yield stress. The shear
modulus, $G$, and the peak value of the stress overshoot,
$\sigma_Y$, are plotted in Figs.\,\ref{fig:G_and_Y_thermq} and
\ref{fig:G_and_Y_600cyc} for glasses that were periodically deformed
with the strain amplitudes $\gamma_0=0.030$ and $0.060$. In each
case, the startup deformation was imposed along the $xy$, $xz$, and
$yz$ planes with the constant strain rate
$\dot{\gamma}=10^{-5}\,\tau^{-1}$. The data are somewhat scattered,
since simulations were carried out only for one realization of
disorder, but the trends are evident. First, both $G$ and $\sigma_Y$
are relatively small when shear is applied along the $xz$ plane at
$t=200\,T$ in Fig.\,\ref{fig:G_and_Y_thermq} and at $t=1000\,T$ in
Fig.\,\ref{fig:G_and_Y_600cyc} because of the shear band that was
formed previously at $\gamma_0=0.080$. Second, the shear modulus
and yield stress increase towards plateau levels during the next few
hundred cycles, and their magnitudes are greater for the larger
strain amplitude $\gamma_0=0.060$, since those samples were annealed
to deeper energy states (see Figs.\,\ref{fig:poten_Quench_SB_heal}
and \ref{fig:poten_600cyc_SB_heal}).
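The sketch below shows how $G$ and $\sigma_Y$ can be extracted from a
startup stress--strain curve; the linear fitting range is our assumption
here, and the curve is synthetic, chosen only to mimic a stress
overshoot:
\begin{verbatim}
import numpy as np

def modulus_and_yield(strain, stress, fit_max=0.01):
    lin = strain <= fit_max                 # initial linear response
    G = np.polyfit(strain[lin], stress[lin], 1)[0]
    return G, stress.max()                  # slope and stress overshoot

strain = np.linspace(0.0, 0.25, 500)
stress = 17.0 * strain * np.exp(-strain/0.065)  # synthetic overshoot
print(modulus_and_yield(strain, stress))
\end{verbatim}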
\vskip 0.05in
The results in Figures\,\ref{fig:G_and_Y_thermq}\,(b) and
\ref{fig:G_and_Y_600cyc}\,(b) show that the yield stress is only
weakly dependent on the number of cycles in glasses that were
periodically strained at the smaller amplitude $\gamma_0=0.030$,
whereas for $\gamma_0=0.060$, the yield stress increases noticeably
and levels out at $\sigma_Y\approx0.9\,\varepsilon\sigma^{-3}$ for
the \textit{mechanically annealed} glass and at a slightly smaller
value for the \textit{thermally annealed} glass. It was previously
shown that the yield stress is slightly larger, \textit{i.e.},
$\sigma_Y\approx1.05\,\varepsilon\sigma^{-3}$, for rapidly quenched
glasses that were mechanically annealed at the strain amplitude
$\gamma_0=0.060$ for similar loading conditions~\cite{PriezSHALT19}.
This discrepancy might arise because in Ref.\,\cite{PriezSHALT19}
the glass was homogeneously annealed starting from the rapidly
quenched state, while in the present study, the potential energy
within the annealed shear-band domain always remains higher than in
the rest of the sample, thus resulting in spatially heterogeneous
structure. On the other hand, it was recently shown that the
presence of an interface between relaxed and rejuvenated domains in
a relatively large sample might impede strain
localization~\cite{Kosiba19}.
\vskip 0.05in
The relative rearrangements of atoms with respect to their neighbors
in a deformed amorphous system can be conveniently quantified via
the so-called nonaffine displacements. By definition, the nonaffine
measure $D^2(t, \Delta t)$ for an atom $i$ is computed via the
transformation matrix $\mathbf{J}_i$ that minimizes the following
expression for a group of neighboring atoms:
\begin{equation}
D^2(t, \Delta t)=\frac{1}{N_i}\sum_{j=1}^{N_i}\Big\{
\mathbf{r}_{j}(t+\Delta t)-\mathbf{r}_{i}(t+\Delta t)-\mathbf{J}_i
\big[ \mathbf{r}_{j}(t) - \mathbf{r}_{i}(t) \big] \Big\}^2,
\label{Eq:D2min}
\end{equation}
where $\Delta t$ is the time interval between two atomic
configurations, and the summation is performed over the nearest
neighbors located within $1.5\,\sigma$ from the position of the
$i$-th atom at $\mathbf{r}_{i}(t)$. The nonaffine quantity defined
by Eq.\,(\ref{Eq:D2min}) was originally introduced by Falk and
Langer in order to accurately detect the localized shear
transformations that involved swift rearrangements of small groups
of atoms in driven disordered solids~\cite{Falk98}. In the last few
years, this method was widely used to study the collective,
irreversible dynamics of atoms in binary glasses subjected to time
periodic~\cite{Priezjev16,Priezjev16a,Priezjev17,Priezjev18,Priezjev18a,
PriezSHALT19,Priez20ba,Peng20,KawBer20,Priez20alt} and startup
continuous~\cite{HorbachJR16,Schall07,Pastewka19,Priez20tfic,Priez19star,Ozawa20,ShiBai20}
shear deformation, tension-compression cyclic
loading~\cite{NVP18strload,Jana20}, prolonged elastostatic
compression~\cite{PriezELAST19,PriezELAST20}, creep~\cite{Eckert21}
and thermal cyclic
loading~\cite{Priez19one,Priez19tcyc,Priez19T2000,Priez19T5000,Guan20}.
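A minimal implementation of Eq.~(\ref{Eq:D2min}) for a single atom is
given below, with the best-fit affine transformation $\mathbf{J}_i$
obtained by linear least squares; neighbor search and the $1.5\,\sigma$
cutoff are assumed to be handled elsewhere:
\begin{verbatim}
import numpy as np

def D2_min(r_old, r_new):
    """r_old, r_new: (N_i, 3) neighbor positions relative to atom i
    at times t and t + Delta t. Returns the nonaffine measure D^2."""
    # best affine map J: minimize sum_j |r_new_j - J r_old_j|^2
    JT, *_ = np.linalg.lstsq(r_old, r_new, rcond=None)
    res = r_new - r_old @ JT
    return np.mean(np.sum(res**2, axis=1))

rng = np.random.default_rng(0)
x = rng.normal(size=(12, 3))              # 12 neighbors of atom i
print(D2_min(x, x + 0.01*rng.normal(size=x.shape)))   # caged: << 0.04
\end{verbatim}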
\vskip 0.05in
The representative snapshots of \textit{thermally annealed} glasses
are presented in
Fig.\,\ref{fig:snapshots_Tquench_T001_amp080_heal_amp030_1_5_20_100}
for the strain amplitude $\gamma_0=0.030$ and in
Fig.\,\ref{fig:snapshots_Tquench_T001_amp080_heal_amp060_1_20_100_1000}
for $\gamma_0=0.060$. For clarity, only atoms with relatively large
nonaffine displacements during one oscillation period are displayed.
Note that the typical cage size at $\rho=1.2\,\sigma^{-3}$ is about
$0.1\,\sigma$~\cite{Priezjev13}, and, therefore, the displacements
of atoms with $D^2(n\,T, T)>0.04\,\sigma^2$ correspond to
cage-breaking events. It can be clearly seen in panel (a) of
Figures\,\ref{fig:snapshots_Tquench_T001_amp080_heal_amp030_1_5_20_100}
and
\ref{fig:snapshots_Tquench_T001_amp080_heal_amp060_1_20_100_1000},
that the shear band runs along the $yz$ plane right after switching
to the subyield loading regime. As expected, the magnitude of
$D^2(200\,T, T)$ on average decays towards the interfaces. Upon
continued loading, the shear band becomes thinner and eventually
breaks up into isolated clusters whose size is reduced over time.
The coarsening process is significantly slower for the strain
amplitude $\gamma_0=0.060$ (about 1000 cycles) than for
$\gamma_0=0.030$ (about 200 cycles). This trend is consistent with
the decay of the potential energy denoted in
Fig.\,\ref{fig:poten_Quench_SB_heal} by the red and orange curves.
\vskip 0.05in
Similar conclusions can be drawn by visual inspection of consecutive
snapshots of the \textit{mechanically annealed} glass cyclically
loaded at the strain amplitude $\gamma_0=0.030$ (see
Fig.\,\ref{fig:snapshots_600cyc_T001_amp080_heal_amp030_1_5_10_100})
and at $\gamma_0=0.060$ (see
Fig.\,\ref{fig:snapshots_600cyc_T001_amp080_heal_amp060_1_100_200_2000}).
It can be observed that the shear band is initially oriented along
the $xy$ plane, which is consistent with a relatively large value of
the yield stress along the $xy$ direction at $t=1000\,T$ in
Fig.\,\ref{fig:G_and_Y_600cyc}. The atomic trajectories become
nearly reversible already after about 10 cycles at the strain
amplitude $\gamma_0=0.030$, as shown in
Fig.\,\ref{fig:snapshots_600cyc_T001_amp080_heal_amp030_1_5_10_100},
while isolated clusters of atoms with large nonaffine displacements
are still present after about 2000 cycles at $\gamma_0=0.060$ (see
Fig.\,\ref{fig:snapshots_600cyc_T001_amp080_heal_amp060_1_100_200_2000}).
Altogether these results indicate that oscillatory shear deformation
with a strain amplitude just below the critical value can be used to
effectively anneal a shear band and make the amorphous material
stronger.
\section{Conclusions}
In summary, the process of shear band annealing in metallic glasses
subjected to small-amplitude periodic shear deformation was examined
using molecular dynamics simulations. The glass was modeled as a
binary mixture with non-additive interaction between atoms of
different types, and the shear band was initially developed in
stable glasses under oscillatory shear above the yielding point. It
was shown that periodic loading in the elastic range results in a
gradual decay of the potential energy over consecutive cycles, and
upon increasing strain amplitude, lower energy states can be
accessed after thousands of cycles. Furthermore, the spatiotemporal
analysis of nonaffine displacements demonstrated that a shear band
becomes thinner and breaks into separate clusters whose size is
reduced upon continued loading. Thus, in a wide range of strain
amplitudes below yield, the cyclic loading leads to nearly
reversible dynamics of atoms at finite temperature. Lastly, both the
shear modulus and yield stress saturate to higher values as the
shear band region becomes better annealed at higher strain
amplitudes.
\section*{Acknowledgments}
Financial support from the National Science Foundation (CNS-1531923)
is gratefully acknowledged. The article was prepared within the
framework of the HSE University Basic Research Program and funded in
part by the Russian Academic Excellence Project `5-100'. The
simulations were performed at Wright State University's Computing
Facility and the Ohio Supercomputer Center. The molecular dynamics
simulations were carried out using the parallel LAMMPS code
developed at Sandia National Laboratories~\cite{Lammps}.
\begin{figure}[t]
\includegraphics[width=9.0cm,angle=0]{system_snapshot_T001.pdf}
\caption{(Color online) A snapshot of the \textit{thermally
annealed} glass at the temperature $T_{LJ}=0.01\,\varepsilon/k_B$.
The system consists of 48\,000 atoms of type \textit{A} (large blue
circles) and 12\,000 atoms of type \textit{B} (small red circles) in
a periodic box of linear size $L=36.84\,\sigma$. Atoms are not shown
to scale. The black arrows indicate the direction of oscillatory
shear deformation along the $xz$ plane. }
\label{fig:shapshot}
\end{figure}
\begin{figure}[t]
\includegraphics[width=12.0cm,angle=0]{poten_xz_T001_quench_amp080_log_heal_amp030_060.pdf}
\caption{(Color online) The dependence of the potential energy
minima (at zero strain) on the number of cycles for the indicated
values of the strain amplitude. The shear band was formed in the
\textit{thermally annealed} glass during the first 200 cycles at the
strain amplitude $\gamma_0=0.080$ (the black curve). The system
temperature is $T_{LJ}=0.01\,\varepsilon/k_B$ and the oscillation
period is $T=5000\,\tau$. }
\label{fig:poten_Quench_SB_heal}
\end{figure}
\begin{figure}[t]
\includegraphics[width=12.0cm,angle=0]{poten_xz_T001_600cyc_amp080_log_heal_amp030_060.pdf}
\caption{(Color online) The variation of the potential energy (at
the end of each cycle) as a function of the cycle number for the
selected strain amplitudes. The shear band was introduced in the
\textit{mechanically annealed} glass after 1000 cycles at the strain
amplitude $\gamma_0=0.080$ (the black curve; see text for details).
The time is reported in terms of oscillation periods, \textit{i.e.},
$T=5000\,\tau$. The temperature is $T_{LJ}=0.01\,\varepsilon/k_B$. }
\label{fig:poten_600cyc_SB_heal}
\end{figure}
\begin{figure}[t]
\includegraphics[width=12.0cm,angle=0]{G_sigY_cycle_thermq.pdf}
\caption{(Color online) The shear modulus $G$ (in units of
$\varepsilon\sigma^{-3}$) and yielding peak $\sigma_Y$ (in units of
$\varepsilon\sigma^{-3}$) as a function of the cycle number for the
\textit{thermally annealed} glass. The startup continuous shear
with the strain rate $\dot{\gamma}=10^{-5}\,\tau^{-1}$ was applied
along the $xy$ plane (circles), $xz$ plane (squares), and $yz$ plane
(diamonds). Before startup deformation, the samples were
periodically deformed with the strain amplitudes $\gamma_0=0.030$
(solid blue) and $\gamma_0=0.060$ (dashed red). The time range is
the same as in Fig.\,\ref{fig:poten_Quench_SB_heal}. }
\label{fig:G_and_Y_thermq}
\end{figure}
\begin{figure}[t]
\includegraphics[width=12.0cm,angle=0]{G_sigY_cycle_600cyc.pdf}
\caption{(Color online) The shear modulus $G$ (in units of
$\varepsilon\sigma^{-3}$) and yielding peak $\sigma_Y$ (in units of
$\varepsilon\sigma^{-3}$) versus cycle number for the
\textit{mechanically annealed} glass. The startup shear deformation
with the strain rate $\dot{\gamma}=10^{-5}\,\tau^{-1}$ was imposed
along the $xy$ plane (circles), $xz$ plane (squares), and $yz$ plane
(diamonds). Before continuous shear, the samples were cyclically
deformed with the strain amplitudes $\gamma_0=0.030$ (solid blue)
and $\gamma_0=0.060$ (dashed red). The same cycle range as in
Fig.\,\ref{fig:poten_600cyc_SB_heal}. }
\label{fig:G_and_Y_600cyc}
\end{figure}
\begin{figure}[t]
\includegraphics[width=12.0cm,angle=0]{snapshots_Tquench_T001_amp080_heal_amp030_1_5_20_100.pdf}
\caption{(Color online) A series of snapshots of atomic
configurations during periodic shear with the strain amplitude
$\gamma_0=0.030$. The loading conditions are the same as in
Fig.\,\ref{fig:poten_Quench_SB_heal} (the red curve). The nonaffine
measure in Eq.\,(\ref{Eq:D2min}) is (a) $D^2(200\,T,
T)>0.04\,\sigma^2$, (b) $D^2(205\,T, T)>0.04\,\sigma^2$, (c)
$D^2(220\,T, T)>0.04\,\sigma^2$, and (d) $D^2(300\,T,
T)>0.04\,\sigma^2$. The colorcode in the legend denotes the
magnitude of $D^2$. Atoms are not shown to scale. }
\label{fig:snapshots_Tquench_T001_amp080_heal_amp030_1_5_20_100}
\end{figure}
\begin{figure}[t]
\includegraphics[width=12.0cm,angle=0]{snapshots_Tquench_T001_amp080_heal_amp060_1_20_100_1000.pdf}
\caption{(Color online) The position of atoms in the thermally
annealed glass subjected to periodic shear with the strain amplitude
$\gamma_0=0.060$. The corresponding potential energy is denoted by
the orange curve in Fig.\,\ref{fig:poten_Quench_SB_heal}. The
nonaffine measure is (a) $D^2(200\,T, T)>0.04\,\sigma^2$, (b)
$D^2(220\,T, T)>0.04\,\sigma^2$, (c) $D^2(300\,T,
T)>0.04\,\sigma^2$, and (d) $D^2(1200\,T, T)>0.04\,\sigma^2$. The
magnitude of $D^2$ is defined in the legend. }
\label{fig:snapshots_Tquench_T001_amp080_heal_amp060_1_20_100_1000}
\end{figure}
\begin{figure}[t]
\includegraphics[width=12.0cm,angle=0]{snapshots_600cyc_T001_amp080_heal_amp030_1_5_10_100.pdf}
\caption{(Color online) Instantaneous snapshots of the binary glass
periodically sheared with the strain amplitude $\gamma_0=0.030$. The
data correspond to the red curve in
Fig.\,\ref{fig:poten_600cyc_SB_heal}. The nonaffine quantity is (a)
$D^2(1000\,T, T)>0.04\,\sigma^2$, (b) $D^2(1005\,T,
T)>0.04\,\sigma^2$, (c) $D^2(1010\,T, T)>0.04\,\sigma^2$, and (d)
$D^2(1100\,T, T)>0.04\,\sigma^2$. The colorcode denotes the
magnitude of $D^2$. }
\label{fig:snapshots_600cyc_T001_amp080_heal_amp030_1_5_10_100}
\end{figure}
\begin{figure}[t]
\includegraphics[width=12.0cm,angle=0]{snapshots_600cyc_T001_amp080_heal_amp060_1_100_200_2000.pdf}
\caption{(Color online) Atomic positions in the binary glass
cyclically loaded at the strain amplitude $\gamma_0=0.060$. The data
are taken from the selected time intervals along the orange curve in
Fig.\,\ref{fig:poten_600cyc_SB_heal}. The nonaffine quantity is (a)
$D^2(1000\,T, T)>0.04\,\sigma^2$, (b) $D^2(1100\,T,
T)>0.04\,\sigma^2$, (c) $D^2(1200\,T, T)>0.04\,\sigma^2$, and (d)
$D^2(3000\,T, T)>0.04\,\sigma^2$. $D^2$ is defined in the legend. }
\label{fig:snapshots_600cyc_T001_amp080_heal_amp060_1_100_200_2000}
\end{figure}
\bibliographystyle{prsty}
\section{Phonetics} \label{sec:intro}
Phonetic variation in speech is a complex and fascinating phenomenon. The sound of our speech is influenced by the communities and groups we belong to, places we come from, the immediate social context of speech, and many physiological factors. There is acoustic variation in speech due to sex- and gender-specific differences in articulation \citep{Huber1999}, age \citep{Safavi2018}, social class and ethnicity \citep{Clayards}, and individual idiosyncrasies of sound production \citep{Noiray2014VariabilityAcoustics}. This linguistic variation is relevant to many fields of study like anthropology, economics and demography
\citep{Ginsburgh2014}, and has connections to the study of speech production and perception in the human brain. It helps us understand how languages developed in the past, and the evolutionary links that still exist between languages today \citep{Pigoli}. Modelling phonetic variation is also important for many practical applications, like speech recognition and speech synthesis. In this work, we study one source of variation in particular: geographical accent variation.
To describe phonetic variation conveniently, in his seminal work \textit{Accents of English}, \citet{wells_1982} introduced lexical sets, which are groups of words containing vowels that are pronounced the same way within an accent. The \textit{trap} lexical set contains words like trap, cat and man, and the \textit{bath} lexical set contains words like bath, class and grass. In Northern English accents both \textit{trap} and \textit{bath} words use the `short a' vowel /\ae/. In Southern English accents \textit{trap} words use /\ae/ and \textit{bath} words use the `long a' vowel /\textipa{A}/; this is known as the trap-bath split.
The trap-bath split is one of the most well studied geographical accent differences. The geographical accent variation in sounds like these has historically been studied using written transcriptions of speech from surveys and interviews by trained linguists. These were used to construct isogloss maps (see Figure~\ref{fig:isogloss}) to visualise regions having the same dialect. \citet{Upton1996} explain that in reality these isoglosses are not sharp boundaries, and they are drawn to show only the most prominent linguistic variation in a region for the sake of simplicity. The boundaries are also constantly moving and changing over time.
\begin{figure}[h]
\centering
\includegraphics[width=2.5in]{figs/trap-bath-isogloss.jpg}
\caption{Isoglosses for the ``class'' vowel in England. Reproduced with permission from \citet[][p.\ 6--7]{Upton1996}.}
\label{fig:isogloss}
\end{figure}
More recently, advances in statistical methods and technology have allowed accent variation to be modelled by directly using audio recordings of speech. A sound can be represented as a set of smooth curves, and functional data analysis \citep[FDA;][]{Ramsay2005,ferraty:vieu:2006,horvath:2012:book} offers techniques to model variation in these curves. This work demonstrates one such approach, in which we analyse variation in vowel sounds using techniques from FDA and generalised linear models.
This paper has two main contributions. The first contribution is to use functional data analysis to classify vowels by directly using speech recordings: we demonstrate two approaches for classifying \textit{bath} vowels as Northern or Southern. The first approach models variation in formant curves (see Section~\ref{sec:formants}) using a functional linear model. The second approach models variation in mel-frequency cepstral coefficient (MFCC) curves (see Section~\ref{sec:mfcc}) through penalised logistic regression on functional principal components, and it can be used to resynthesise vowel sounds in different accents, allowing us to ``listen to the model''. Both approaches classify accents using the temporal dynamics of the MFCC or formant curves in sounds. These two classifiers were trained using a dataset of labelled audio recordings that was collected for this paper in an experimental setup \citep{Koshy2020_shahin}.
The second contribution is to construct maps that visualise geographic variation in the \textit{bath} vowel that can be attributed to typical Northern and Southern accent differences, using a soap film smoother. For this we use the audio BNC dataset \citep{BNC}, which is a representative sample of accents in Great Britain. The resulting maps show a geographical variation in the vowel similar to what is seen in isogloss maps like Figure~\ref{fig:isogloss}.
The paper is structured as follows. In Section~\ref{sec:preprocessing}, we introduce two ways of representing vowel sounds as multivariate curves. Section~\ref{sec:data} introduces the two datasets used in this analysis, and the preprocessing steps involved. Section~\ref{sec:classify} gives the two models for classifying \textit{bath} vowels, and Section~\ref{sec:maps} presents the maps constructed to visualise geographical accent variation. We conclude with a discussion of the results in Section~\ref{sec:discussion}.
\section{Sound as data objects} \label{sec:preprocessing}
Sound is a longitudinal air pressure wave. Microphones measure the air pressure at fixed rates, for example at 16 kHz (Hz is a unit of frequency representing samples per second). The waveform of the vowel in the word ``class'' in Figure~\ref{fig:waveform} shows this rapidly oscillating air pressure wave as measured by a microphone. This signal can be transformed in several ways to study it; for example as a spectrogram, formants, or mel-frequency cepstral coefficients (MFCCs), see Sections~\ref{sec:spec}, \ref{sec:formants} and \ref{sec:mfcc}.
\begin{figure}[hb]
\centering
\includegraphics[width=3in]{figs/waveform.pdf}
\caption{Sound wave of the vowel from a single ``last'' utterance.}
\label{fig:waveform}
\end{figure}
\subsection{Spectrograms}
\label{sec:spec}
We begin by defining the spectrogram of a sound. A spectrogram is a time-frequency representation of a sound: it reveals how the most prominent frequencies in a sound change over time. To define it precisely, let us denote the sound wave as a time series $\{s(t): t = 1, \ldots, T\}$, where $s(t)$ is the deviation from normal air pressure at time $t$. We can define $s(t)=0$ for $t\le 0$ or $t>T$. Let $w: \mathbb{R} \rightarrow \mathbb{R}$ be a symmetric window function which is non-zero only in the interval $[-\frac{W}{2},\frac{W}{2}]$ for some $W<T$. The Short-Time Fourier Transform of $\{s(t)\}_{t=1}^T$ is computed as
\begin{align*}
\text{STFT}(s)(t, \omega) &= \sum_{u=-\infty}^\infty s(u)w(u-t)\text{exp}(-i\omega u) \\
& = \sum_{u=1}^T s(u)w(u-t)\text{exp}(-i\omega u),
\end{align*}
for $t=1,\ldots,T$, and $\omega \in \{2\pi k/N: k=0, \ldots, N-1\}$ for some $N\ge T$ which is a power of 2. The window width $W$ is often chosen to correspond to a 20 ms interval.
The spectrogram of $\{s(t)\}_{t=1}^T$ is then defined as
\begin{align*}
\text{Spec}(s)(t, \omega) & = |\text{STFT}(s)(t, \omega)|^2.
\end{align*}
At a time point $t$, the spectrogram shows the magnitude of different frequency components $\omega$ in the sound. Figure~\ref{fig:formants} shows spectrograms of recordings of different vowels, with time on the x-axis, frequency on the y-axis, and colour representing the amplitude of each frequency. The dark bands are frequency peaks in the sound, which leads us to the concept of formants.
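For concreteness, the following minimal R sketch (our own illustration, not code from the paper) computes a spectrogram as a per-frame variant of the definition above, applying a Hann window of width $W$ and zero-padding each frame to the FFT length $N$:
\begin{verbatim}
# Minimal spectrogram sketch (illustrative; all names are ours).
# s: numeric vector of air-pressure samples; W: window width in samples;
# N: FFT length (a power of 2, N >= W); hop: frame step in samples.
spectrogram <- function(s, W = 320, N = 512, hop = 80) {
  w <- 0.5 - 0.5 * cos(2 * pi * (0:(W - 1)) / (W - 1))  # Hann window
  starts <- seq(1, length(s) - W + 1, by = hop)
  sapply(starts, function(t0) {
    seg <- s[t0:(t0 + W - 1)] * w
    abs(fft(c(seg, rep(0, N - W))))[1:(N / 2)]^2        # |STFT|^2
  })  # rows index frequency, columns index time frames
}
\end{verbatim}
At a 16 kHz sampling rate, $W = 320$ samples corresponds to the 20 ms window mentioned above.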
\begin{figure}[p]
\centering
\includegraphics[width=4in]{figs/Spectrograms_of_syllables_dee_dah_doo.png}
\caption{In these spectrograms of the syllables \textit{dee, dah, doo}, the dark
bands are the formants of each vowel and the overlaid red dotted lines are estimated formant trajectories. The y axis represents frequency and
darkness represents intensity \citep{kluk2007}. }
\label{fig:formants}
\end{figure}
\subsection{Formants}
\label{sec:formants}
Formants are the strongest frequencies in a vowel sound, observed as high-intensity bands in the spectrogram of the sound. By convention they are numbered in order of increasing frequency, $\text{F}_1, \text{F}_2, \ldots$.
Formants are produced by the resonating cavities and tissues of the vocal tract \citep{Johnson2005}. The resonant frequencies depend on the shape of the vocal tract, which is influenced by factors like rounding of the lips, and height and shape of the tongue (illustrated in Figure~\ref{fig:vocaltract}). The pattern of these frequencies is what distinguishes different vowels. They are particularly important for speech perception because of their connection to the vocal tract itself, and not the vocal cords. Listeners use formants to identify vowels even when they are spoken at different pitches, or when the vowels are whispered and the vocal cords don't vibrate at all \citep{Johnson2005}. One can also sometimes ``hear'' a person smile as they speak, because the act of smiling changes the shapes of the vocal cavities and hence the formants produced \citep{Ponsot2018}.
\begin{figure}[h]
\centering
\includegraphics[width=2in]{figs/vocaltract-cc.pdf}
\caption{This diagram shows how varying the height of the tongue creates different vowels \citep{CC2008_shahin}.}
\label{fig:vocaltract}
\end{figure}
\subsection{Mel-Frequency Cepstral Coefficients}
\label{sec:mfcc}
Mel-frequency cepstral coefficients (MFCCs) are a further transformation of the spectrogram, and are often used in speech recognition and speech synthesis. The way they are constructed is related to how the human auditory system processes acoustic input; in particular, how different frequency ranges are filtered through the cochlea in the inner ear. This filtering is the reason humans can discriminate between low frequencies better than between high frequencies. MFCCs roughly correspond to the energy contained in different frequency bands, but are not otherwise easily interpretable.
There are many variants of MFCCs; we use the one from \citet{Erro2011,Erro2014}, which allows for high-fidelity sound resynthesis.
MFCCs are computed in two steps as follows \citep{Tavakoli2019}. First the mel-spectrogram is computed from the spectrogram, using a mel scale filter bank with $F+1$ filters $(b_{f,k})_{k=0,\ldots,N-1}$, $f=0, \ldots, F$. The mel scale is a perceptual scale of pitches, under which pairs of sounds that are perceptually equidistant in pitch are also equidistant in mel units.
This is unlike the linear Hz scale, in which a pair of low frequencies will sound further apart than an equidistant pair of high frequencies.
The mapping from Hz ($f$) to mels ($m$) is given by $m=2595\, \text{log}_{10}(1+f/700)$, shown in Figure~\ref{fig:melfilter}. The mel-spectrogram is defined as
\begin{align*}
\text{MelSpec}(s)(t, f) &= \sum_{k=0}^{N-1} \text{Spec}(s)(t, 2\pi k/N)b_{f,k}.
\end{align*}
\begin{figure}[h]
\centering
\includegraphics[height=3in]{figs/melhz.pdf}
\caption{Mapping from Hz to mel. A pair of high frequencies on the Hz scale sound more similar to the human ear than an equidistant pair at low frequencies. This is captured by the mel scale.}
\label{fig:melfilter}
\end{figure}
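As a quick numerical check of this mapping (an illustrative R one-liner):
\begin{verbatim}
hz_to_mel <- function(f) 2595 * log10(1 + f / 700)
hz_to_mel(c(1000, 2000, 3000, 4000))
# approx. 1000, 1522, 1876, 2146: equal 1000 Hz steps shrink
# on the mel scale as frequency grows
\end{verbatim}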
In the second step, we take the inverse Fourier transform of the logarithm of this mel-spectrogram. The first $M$ resulting coefficients are the MFCCs,
\begin{align*}
\text{MFCC}(s)(t, m) &= \frac{1}{F}\sum_{f=0}^{F} \text{log}\left(\text{MelSpec}(s)(t, f)\right) \text{exp} \left( i\frac{2\pi (m-1)f}{F+1} \right).
\end{align*}
At each time point $t$ we have $M$ MFCCs. We use the \texttt{ahocoder} software \citep{Erro2014} to extract MFCCs, which uses $M=40$ at each time point. Thus we represent each vowel sound by 40 MFCC curves.
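A direct R transcription of this second step might look as follows (a sketch under the conventions above; \texttt{melspec} is assumed to be a time-by-$(F+1)$ matrix, and we keep the real part since MFCCs are real-valued):
\begin{verbatim}
# Sketch of the inverse-transform step; names are ours.
mfcc_from_melspec <- function(melspec, M = 40) {
  Fp1 <- ncol(melspec)  # F + 1 filters, f = 0, ..., F
  t(apply(melspec, 1, function(row) {
    lmel <- log(row)
    sapply(seq_len(M), function(m)
      Re(sum(lmel * exp(1i * 2 * pi * (m - 1) * (0:(Fp1 - 1)) / Fp1))) /
        (Fp1 - 1))
  }))  # a time x M matrix: M MFCCs per time point
}
\end{verbatim}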
Formants are a low-dimensional summary of the original sound which allows interpretation of the vocal tract position. MFCCs retain a lot of information about speech sounds and do not simplify the representation in an immediately interpretable way, but the model with MFCCs allows us to resynthesise \textit{bath} vowels along the /\ae/ to /\textipa{A}/ spectrum. MFCCs and formants therefore have different strengths and limitations for analysis, depending on the goal. In this paper we demonstrate classifiers using both representations.
Regardless of whether we work with vowel formants or MFCCs, we can view the chosen sound representation as a smooth multivariate curve over time, $X(t) \in \mathbb{R}^d$, where $t \in [0,1]$ is normalised time. In practice we assume $X(t)$ is observed with additive noise due to differences in recording devices and background noise in the recording environment.
\section{Data sources} \label{sec:data}
In this section we describe the two data sources used in this paper.
\subsection{North-South Class Vowels} \label{sec:nscv}
The North-South Class Vowels \citep[NSCV;][]{Koshy2020_shahin}
dataset is a collection of 400 speech recordings of the vowels /\ae/ and /\textipa{A}/ that distinguish stereotypical Northern and Southern accents in the \textit{bath} lexical set. The vowels were spoken by a group of 4 native English speakers (100 recordings per speaker).
It was collected in order to have a high-quality labelled dataset of the /\ae/ and /\textipa{A}/ vowel sounds in \textit{bath} words.
The NSCV dataset was collected with ethical approval from the Biomedical and Scientific Research Ethics Committee of the University of Warwick.
The speech recordings were collected in an experimental setup. The speakers were two male and two female adults between the ages of 18 and 55. In order to participate they were required to be native English speakers but were not required to be proficient in Southern and Northern accents.
To elicit the vowels, the speakers were given audio recordings as pronunciation guides, and example rhyming words such as `cat' for the /\ae/ vowel and `father' for the /\textipa{A}/ vowel. They were allowed to practise using the two vowels in the list of words before being recorded saying a list of words using both vowels. The words were \textit{class, grass, last, fast}, and \textit{pass}. Each word was repeated 5 times using each vowel, by each speaker. The speech was simultaneously recorded with two different microphones.
The purpose of this dataset is to demonstrate a method of training accent classification models. By using vowels as a proxy for accent, it allows us to train models to distinguish between Northern and Southern accents, to the extent that they differ by this vowel. Using two microphones and having the same speaker producing both vowels allows us to train models that are robust to microphone and speaker effects. Despite the small number of speakers in this dataset, we are still able to classify vowels with high accuracy and resynthesise vowels well. A limitation of the dataset is that the speakers were not required to be native speakers of both Northern and Southern accents or have any phonetic training.
\subsection{British National Corpus} \label{sec:bnc}
The audio edition of the British National Corpus (BNC) is a collection of recordings taken across the UK in the mid 1990s, now publicly available for research \citep{BNC}. A wide range of people had their speech recorded as they went about their daily activities, and the audio recordings were annotated (transcriptions of the conversations, with information about the speakers). From this corpus we analyse utterances of the following words from the \textit{bath} lexical set, which we call the ``class'' words: \textit{class, glass, grass, past, last, brass, blast, ask, cast, fast}, and \textit{pass}.
Among the sound segments in the BNC labelled as a ``class'' word, not all correspond to a true utterance of a ``class'' word by a British speaker, and some are not of good quality. Some sounds were removed from the dataset using the procedure described in Appendix~\ref{app:exploration}.
The resulting dataset contains 3852 recordings from 529 speakers in 124 locations across England, Scotland and Wales. Figure~\ref{fig:obsnum} shows the number of sounds and speakers at each location. Some speakers were recorded at multiple locations, but 94\% of them have all their recording locations within a 10 kilometre radius. 88\% of all speakers only have one recording location in this dataset.
\begin{figure}[t]
\centering
\includegraphics[width=.4\linewidth]{figs/obsnum.pdf}
\includegraphics[width=.4\linewidth]{figs/idnum.pdf}
\caption{Each bubble is centred at a location at which we have observations in the BNC, and its size corresponds to the number of recordings (left plot) and number of speakers (right plot) at each location.}
\label{fig:obsnum}
\end{figure}
This dataset captures a wide range of geographical locations and socio-economic characteristics, and speakers were recorded in their natural environment. It has, however, some limitations for our analysis. For example, we do not know the true origin of a speaker, so unless the metadata shows otherwise, we must assume that speakers' accents are representative of the location where they were recorded. There are very few speech recordings available from the North, especially Scotland. The timestamps used to identify word boundaries are often inaccurate, and the sound quality varies widely between recordings, due to background noise and the different recording devices used.
\subsection{Transforming sounds into data objects} \label{sec:preprocess-steps}
Each speech recording in the BNC and NSCV datasets was stored as a mono-channel 16 kHz \texttt{.wav} file. The raw formants were computed using the \texttt{wrassp} R package \citep{wrassp}; the first four formants were computed at 200 time points per second. A sound of length 1 second is thus represented as a $200\times4$ matrix, where each column corresponds to one formant curve. For each vowel sound, raw MFCCs were extracted using the \texttt{ahocoder} software \citep{Erro2011, Erro2014}, which also computes them at 200 points per second. Hence a sound of length 1 second would be represented as a $200\times40$ matrix, where each column represents one MFCC curve.
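For illustration, the raw formants for a single recording can be computed along the following lines (the file name is hypothetical, and the field names reflect our reading of the \texttt{wrassp} interface):
\begin{verbatim}
library(wrassp)
fm <- forest("class_01.wav", toFile = FALSE)  # formant tracker
raw_formants <- fm$fm  # matrix: time points x formants
\end{verbatim}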
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/Preprocessing.pdf}
\caption{Summary of preprocessing steps.}
\label{fig:flowchart}
\end{figure}
We smooth the raw formants and raw MFCCs in order to remove unwanted variation due to noise, and to renormalise the length of the curves by evaluating each smoothed curve at a fixed number of time points \citep{Ramsay2005}.
Assuming a signal plus noise model on the raw formants and raw MFCCs, we smooth and resample them on an equidistant grid of length $T=40$. Since the raw formants exhibit large jumps that are physiologically implausible, we smooth them using robust loess \citep[R function \texttt{loess},][]{Cleveland1979} with smoothing parameter $l=0.4$ and using locally linear regression. The raw MFCCs are less rough, and we smooth them using cubic splines \citep[R function \texttt{smooth.spline},][]{R2020} with knots chosen at each point on the time grid and smoothing parameter chosen by cross-validation for each curve. We have used $T=40$ in this analysis because it captures the main features while not inflating the dataset too much. We do not model vowel duration, which also depends on other factors, such as speech context \citep{Clayards}. Other implementations and smoothing methods could be used here, such as the R package \texttt{mgcv} for smoothing MFCCs with cubic splines, and robust smoothing for formants using the scaled t family.
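A sketch of this smoothing step for a single curve (assuming \texttt{y} holds one raw curve observed at equally spaced times; the parameter values follow the text):
\begin{verbatim}
x    <- seq(0, 1, length.out = length(y))   # normalised time
grid <- seq(0, 1, length.out = 40)          # resampling grid, T = 40
# Robust locally linear loess for a formant curve (l = 0.4):
fit  <- loess(y ~ x, span = 0.4, degree = 1, family = "symmetric")
y_sm <- predict(fit, newdata = data.frame(x = grid))
# For an MFCC curve, a cross-validated cubic smoothing spline instead:
# y_sm <- predict(smooth.spline(x, y), x = grid)$y
\end{verbatim}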
Finally, we perform an alignment step to reduce misalignments between NSCV curves and BNC curves. This is necessary because the BNC speech recordings often have inaccurate timestamps and this can cause their vowels to be misaligned with the NSCV curves. Since we classify BNC vowels using models trained on NSCV curves, these misalignments can cause inaccuracies in the predictions. We consider the differences in relative timing of the vowel in the sound to be due to a random phase variation; alignment or registration of curves allows us to reduce the effect of this phase variation \citep{Ramsay2005}. We use the approach of \citet{Srivastava2011}, where the Fisher--Rao metric distance between two curves is minimised by applying a nonlinear warping function to one of the curves.
The first MFCC curve (MFCC 1) of each sound contains the volume dynamics. To align NSCV vowels, we first align all NSCV MFCC 1 curves together. These warping functions are then applied to the formant curves and other MFCC curves from the same vowels, since they come from the same underlying sounds. For each BNC vowel, we first align its MFCC 1 curve to the mean aligned NSCV MFCC 1 curve, and then use the obtained warping function to align all the other MFCC curves and formant curves from the same vowel. Alignment was performed using the R package \texttt{fdasrvf} \citep{fdasrvf2020}, and the preprocessing steps are summarised in Figure~\ref{fig:flowchart}.
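In outline, and with our assumptions about the \texttt{fdasrvf} interface (function and field names may differ between package versions), the alignment reads:
\begin{verbatim}
library(fdasrvf)
tg <- seq(0, 1, length.out = 40)
# Align all NSCV MFCC 1 curves together (columns of mfcc1_nscv):
reg <- time_warping(mfcc1_nscv, tg)
# Align one BNC MFCC 1 curve to the mean aligned NSCV curve:
pa   <- pair_align_functions(reg$fmean, mfcc1_bnc, tg)
warp <- pa$gam  # warping function, reused for the other 39 MFCC
                # curves and the formant curves of the same vowel
\end{verbatim}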
\section{Classifying accents} \label{sec:classify}
In this section, we will present two models for classifying \textit{bath} vowels as Southern or Northern.
\subsection{Modeling formants} \label{sec:formant-model}
Our first task is to build a classifier to classify \textit{bath} vowels as Northern or Southern. The model uses the fact that the first two formants $\text{F}_1$ and $\text{F}_2$ are known to predominantly differentiate vowels, and higher formants do not play as significant a role in discriminating them \citep{Adank, Johnson2005}. It has been suggested that entire formant trajectories are informative even for stationary vowels like the \textit{bath} vowels, and that vowels should not be considered as static points in the formant space \citep{Johnson2005} (see also the discussion in Section~\ref{sec:discussion}). This suggests the use of formant curves as functional covariates when modelling the vowel sounds.
Since the dynamic changes in the formant curves are not drastic, we do not believe there are time-localised effects of the formants, so we use the entire formant curve as a covariate with a roughness penalty. Due to the nested structure of the NSCV corpus with 100 speech recordings from each speaker, we also include a random effect term to account for variation between speakers.
Now we can propose the following functional logistic regression model to classify accents:
\begin{equation}
\mathrm{logit}(p_{ij}) = \beta_0 + \int_{0}^{1}\text{F}_{2ij}(t)\beta_1(t)dt + \gamma_j, \label{eq:loggam}
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=4in]{figs/f2.pdf}
\caption{Smoothed and aligned $\text{F}_2$ formant curves for the NSCV vowels. Each curve corresponds to one vowel sound.}
\label{fig:f2}
\end{figure}
where $p_{ij}$ is the probability of sound $i$ from speaker $j$ being Southern, $\text{F}_{2ij}(t)$ is the value of the $\text{F}_2$ curve at time $t$ for sound $i$ from speaker $j$, and $\gamma_j \sim N(0, \sigma_s^2)$ is a random effect for speaker $j$. The functional covariate contributes to the predictor through a linear functional term. The integral is from 0 to 1 since we have normalised the length of all sounds during preprocessing. The function $\beta_1(t)$ is represented with a cubic spline with knots at each time point on the grid, and its ``wiggliness'' is controlled by penalising its second derivative. Model selection was done by comparing the adjusted AIC \citep{Wood} to decide which other terms should be included in the model. Further details from the model selection procedure are given in Appendix \ref{app:modelselection}, where we also consider simpler non-functional models. The model was fitted using the \texttt{mgcv} package in R \citep{Wood2011}.
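In \texttt{mgcv}, model \eqref{eq:loggam} can be written using the package's `summation convention' for linear functional terms; the following sketch assumes a data layout of our own choosing (it is not the paper's code):
\begin{verbatim}
library(mgcv)
# dat: list with south (0/1), speaker (factor), F2 (n x 40 matrix of
# F2 curves), tg (n x 40 matrix with rows seq(0, 1, length = 40)),
# and L (n x 40 matrix of quadrature weights, e.g. 1/40).
fit <- gam(south ~ s(tg, by = F2 * L, bs = "cr", k = 40)
                 + s(speaker, bs = "re"),
           family = binomial, method = "REML", data = dat)
\end{verbatim}
Here \texttt{s(tg, by = F2 * L)} implements $\int_0^1\text{F}_{2}(t)\beta_1(t)dt$ by numerical quadrature, and \texttt{s(speaker, bs="re")} is the speaker random intercept.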
The fitted coefficient curve $\hat \beta_1(t)$, shown in Figure~\ref{fig:betahatt}, reveals that the middle section of the $\text{F}_2$ curve is important in distinguishing the vowels. A higher $\text{F}_2$ curve in this region indicates a Northern /\ae/ vowel. From a speech production perspective, this corresponds to the Northern vowel being more ``front'', which indicates that the highest point of the tongue is closer to the front of the mouth, compared to the Southern vowel.
The point estimate for $\beta_0$ is 328.0 (p-value $= 0.267$, 95\% CI $[-250.85, 906.87]$). The variance component explained by the speaker random effects is $\hat{\sigma}_s^2 = 0.006$ (p-value $= 0.776$).
\begin{figure}[h]
\centering
\includegraphics[width=4in]{figs/betahat.pdf}
\caption{$\hat{\beta}_{1}(t)$ shows that a higher $\text{F}_2$ region towards the middle of the sound indicates a more Northern vowel sound. The dashed lines are 95\% pointwise confidence intervals of the coefficient curve.}
\label{fig:betahatt}
\end{figure}
This model assigns a ``probability of being Southern'' to a given vowel sound, by first aligning the sound to the mean NSCV sound using MFCC 1, and then plugging its formants into \eqref{eq:loggam}. We classify a vowel sound as Southern if its predicted probability of being Southern is higher than $0.5$.
We can estimate the classification accuracy of this model through cross-validation. The model was cross-validated by training it on 3 speakers and testing on the fourth speaker's vowels, and repeating this 4 times by holding out each speaker in the dataset. Using a random split of the data instead would lead to overestimated accuracy, because different utterances by the same speaker cannot be considered independent. The cross-validated accuracy is 98\% (392 of the 400 vowels), and the corresponding confusion matrix is shown in Table~\ref{table:conf}. We can also compare the performance of this model for different classification thresholds, using the ROC curve in Figure~\ref{fig:roc}.
\begin{figure}[h]
\centering
\includegraphics{figs/flm-roc.pdf}
\caption{ROC curve for the functional logistic regression model. The dotted line corresponds to random guessing and the red dot corresponds to using a threshold of 0.5 to classify vowels.}
\label{fig:roc}
\end{figure}
\begin{table}[h]
\caption{Cross-validated confusion matrix for the functional logistic regression.}
\label{table:conf}
\centering
\begin{tabular}{rcrr}
\toprule
{} &\phantom{} &\multicolumn{2}{c}{\textbf{Truth}}\\
\cmidrule{3-4}
&& North & South \\
\textbf{Prediction} \\
North && 196 & 4 \\
South && 4 & 196\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Modeling MFCCs} \label{sec:mfcc-model}
We will now present another approach to classifying vowel sounds, which uses the MFCC curves obtained from each vowel recording. We have 40 smoothed MFCC curves for each sound.
Unlike with formants, we do not have prior knowledge about which curves contain information about the vowel quality. Additionally, since MFCC curves contain information about different parts of the frequency spectrum, they are not independent and the co-variation between curves is important. For example, setting an MFCC curve (or a region of the curve) to a constant value distorts the resulting sound. Hence a multivariate functional regression approach with $\ell_1$ penalty to remove certain curves from the model would not be appropriate, and we need to incorporate information from potentially all the MFCC curves in our model. The problem of concurvity between MFCC curves can also destabilise the resulting coefficient curve estimates in such an approach.
Interpreting the shapes of the curves is also not as useful here since MFCC trajectories do not have a physical interpretation as formants do. We are more interested in the model's ability to resynthesise vowels by capturing as much relevant information about vowel quality as possible. Hence we use functional principal components analysis to capture the co-variation of the MFCC curves. This step essentially generates new features by reparametrising the MFCC curves, which we can then use to fit the classification model.
We select the most informative functional principal components to be in the model through $\ell_1$ penalisation.
\subsubsection{Functional Principal Component Analysis}
Functional principal component analysis \citep[FPCA;][]{Ramsay2005} is an unsupervised learning technique which identifies the different modes of variation in a set of observed smooth curves $\{X_i: [0,1] \rightarrow \mathbb{R},\, i = 1 ,\ldots, n\}$. It is very similar to standard principal component analysis, except that the variables are curves instead of scalar features, and each functional principal component (FPC) is also a curve instead of a vector.
Assuming that the curves $\{X_i\}$ are centred, the $k$th FPC is a smooth curve $\varphi_k:[0,1] \rightarrow \mathbb{R}$ which maximises
\[
\frac{1}{n} \sum_{i=1}^{n} \left( \int \varphi_k(t) X_i(t) dt \right) ^2,
\]
subject to $\int \varphi_k(t)^2 dt = 1$ and $\int\varphi_k(t)\varphi_j(t)dt = 0$ for all $j < k$; for $k=1$ only the norm constraint applies. The functional principal component score (FPC score) of curve $i$ with respect to principal component $\varphi_k$ is $s_{ik} = \int \varphi_k(t)X_i(t)dt$.
In multivariate FPCA, each observation is a curve in $\mathbb{R}^M$, and the set of observations is $\{ {\boldsymbol X}_i=(X_i^{(1)}, X_i^{(2)}, \ldots, X_i^{(M)}): [0,1] \rightarrow \mathbb{R}^M,\, i = 1 ,\ldots, n\}$. Amongst the existing variants of multivariate FPCA \citep{chiou2014multivariate,Happ2018}, we use the following one:
assuming that the curves $\{\boldsymbol X_i\}$ are centred, the $k$th FPC is a smooth multivariate curve, defined as ${\boldsymbol \varphi}_k = (\varphi_k^{(1)}, \varphi_k^{(2)}, \ldots, \varphi_k^{(M)}):[0,1] \rightarrow \mathbb{R}^M$ which maximises
\[
\frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{M} \left( \int \varphi_k^{(j)}(t) X_i^{(j)}(t) dt\right)^2
\]
subject to $\sum_{j=1}^M \int [ \varphi_k^{(j)}(t) ]^2 dt = 1$ and $\sum_{j=1}^{M} \int\varphi_k^{(j)}(t)\varphi_l^{(j)}(t)dt = 0$ for all $l < k$. The $k$-th FPC score of ${\boldsymbol X}_i$ is defined as $s_{ik} = \sum_{j=1}^M \int \varphi_k^{(j)}(t) X_i^{(j)}(t) dt$.
In our case, the curves $\{ {\boldsymbol X}_i\}$ are the MFCC curves with $M=40$. Each curve $\boldsymbol{X}_i$ is discretised on a grid of $T$ equally spaced time points, yielding a $T \times M$ matrix, which is then transformed by stacking the rows into a vector in $\mathbb{R}^{MT}$. The whole dataset is then represented as an $n \times MT$ matrix, which contains observations as rows. The (discretised) FPCs and their scores can therefore be directly computed using a standard implementation of (non-functional) PCA, such as \texttt{prcomp} in R \citep{R2020}.
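Concretely, the computation can be sketched as follows (object names are ours; \texttt{prcomp} column-centres by default):
\begin{verbatim}
# mfcc_list: list of n matrices, each T x 40 (T = 40 grid points).
X <- t(sapply(mfcc_list, function(m) as.vector(t(m))))  # n x (40 T)
pca    <- prcomp(X, center = TRUE, scale. = FALSE)
scores <- pca$x         # FPC scores, one row per sound
fpcs   <- pca$rotation  # columns: discretised multivariate FPCs
\end{verbatim}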
Before performing the FPCA we centre each MFCC 1 curve at zero, because the average level of MFCC 1 mainly contains differences in the overall volume of the sound, which is influenced by factors other than accent. Centring the curve at zero retains the volume dynamics in the vowel while normalising the overall volume between sounds. Since there are 400 observations in the NSCV training data, we can decompose the MFCC curves into (at most) 400 functional principal components. The first 25 eigenvalues of the FPCs obtained are plotted in Figure~\ref{fig:screeplot}.
\begin{figure}[h]
\centering
\includegraphics[width=4.5in]{figs/mfcc-screeplot.pdf}
\caption{First 25 eigenvalues of the functional principal components of the MFCCs.}
\label{fig:screeplot}
\end{figure}
\subsubsection{$\ell_1$-Penalised Logistic Regression}
$\ell_1$-penalised logistic regression \citep[PLR;][]{Hastie2017} can be used for binary classification problems with many covariates; here we have $p=400$ FPC scores available as covariates, corresponding to a reparametrisation of the MFCC curves without any loss of information. Through the penalisation and model fitting procedure, a smaller subset of covariates is chosen for the final model.
The model is the same as for the usual logistic regression: if $Y$ is a Bernoulli random variable and $\boldsymbol{X} \in \mathbb{R}^p$ is its covariate vector, the model is
\[
\mathrm{logit}( \mathbb{P}(Y = 1 | \boldsymbol{X} = \boldsymbol{x}) ) = \beta_0 + \boldsymbol{\beta}^\mathsf{T} \boldsymbol{x},
\]
but it is fitted with an added $\ell_1$ penalty on the regression coefficients to deal with high-dimensionality, which encourages sparsity and yields a parsimonious model.
In our setting, where $y_i = 1$ if sound $i$ is Southern, $y_i=0$ if it is Northern, and $\boldsymbol{x}_i \in \mathbb{R}^{400}$ is the vector of its 400 FPC scores, PLR is fitted by solving
\begin{equation}
\label{eq:PLR}
(\hat{\beta_0}, \hat{\boldsymbol \beta}) = \arg \max_{\beta_0, {\boldsymbol \beta}} \sum_{i=1}^n \left(y_i (\beta_0 + {\boldsymbol \beta}^\mathsf{T} {\boldsymbol x}_i) - \log(1 + e^{\beta_0+ {\boldsymbol \beta}^\mathsf{T} {\boldsymbol x}_i})\right) - \lambda \sum_{j=1}^{p} \lvert \beta_j \rvert,
\end{equation}
where $\lambda \geq 0$ is a penalty weight.
Notice that the first term in \eqref{eq:PLR} is the usual log-likelihood, and the second term is an $\ell_1$ penalty term. The penalty $\lambda$ is chosen by 10-fold cross-validation.
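The paper's text does not tie this step to a specific implementation; one standard choice is \texttt{glmnet}, along these lines (a sketch; \texttt{scores} and \texttt{y} as defined above):
\begin{verbatim}
library(glmnet)
cvfit <- cv.glmnet(scores, y, family = "binomial",
                   alpha = 1, nfolds = 10)   # lasso, CV over lambda
beta_hat <- coef(cvfit, s = "lambda.min")    # sparse coefficients
p_hat <- predict(cvfit, newx = scores_new,
                 s = "lambda.min", type = "response")
\end{verbatim}
Note that \texttt{glmnet}, like \eqref{eq:PLR}, leaves the intercept unpenalised.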
A new sound with FPC scores vector $\boldsymbol{x_*}$ is assigned a ``probability of being Southern'' of $\mathrm{ilogit}( \hat \beta_0 + \boldsymbol{\hat \beta}^\mathsf{T} \boldsymbol{x_*} )$, where
$\mathrm{ilogit}(\cdot)$ is the inverse logit function. We classify the sound as Southern if $\mathrm{ilogit}( \hat \beta_0 + \boldsymbol{\hat \beta}^\mathsf{T} \boldsymbol{x_*} ) \geq 0.5$.
We can estimate the accuracy of the model by cross-validating using individual speakers as folds, as in the functional linear model of Section~\ref{sec:formant-model}. Within each training set, we first perform the FPCA to obtain the FPCs and their scores. Then we cross-validate the penalised logistic regression model to find the optimal penalty $\lambda$, and retrain on the whole training set with this $\lambda$. Finally, we project the test speaker's sounds onto the FPCs from the training set to obtain the test FPC scores, and use them to classify the vowel of each sound using the predicted probabilities from the trained model. This process is repeated holding out each speaker in turn. The cross-validated accuracy of this model is 95.25\%. The confusion matrix is shown in Table~\ref{table:plrconf}, and the ROC curve is shown in Figure~\ref{fig:plr_roc}.
To fit the full model, we choose the best $\lambda$ by cross-validating on the entire dataset, and then refit on the entire dataset using this penalty. The entries of $\hat{\boldsymbol \beta}$ are essentially weights for the corresponding FPCs. By identifying the FPC scores which have nonzero coefficients, we can visualise the weighted linear combination of the corresponding FPCs which distinguishes Northern and Southern vowels. In total 10 FPCs had nonzero weights, and all of the chosen FPCs were within the first 20. A plot of the first 25 coefficient values is given in Figure~\ref{fig:plr_coefs}.
\begin{figure}[h]
\centering
\includegraphics[width=4in]{figs/plr-roc.pdf}
\caption{ROC curve for the MFCC model using penalised logistic regression classifier.}
\label{fig:plr_roc}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=4in]{figs/plrmodel-25coefs.pdf}
\caption{The first 25 entries of $\hat{\boldsymbol \beta}$ maximising \eqref{eq:PLR}. Nonzero entries are shown in red. All the later entries are zero, not shown here.}
\label{fig:plr_coefs}
\end{figure}
\begin{table}[h]
\caption{Cross-validated confusion matrix for the penalised logistic regression classifier.}
\label{table:plrconf}
\centering
\begin{tabular}{rcrr}
\toprule
{} &\phantom{} &\multicolumn{2}{c}{\textbf{Truth}}\\
\cmidrule{3-4} && North & South \\
\textbf{Prediction} \\
North && 189 & 8 \\
South && 11 & 192 \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Resynthesising vowel sounds}
\label{sec:resynthesising}
The combined effect of the functional principal components that are predictive of accent is given by the function
\begin{equation}
\label{eq:mfcc_making_more_southern}
\sum_{k=1}^{400} \hat{\beta}_{k} \hat{\boldsymbol \varphi}_k: [0,1] \to \mathbb{R}^{40}.
\end{equation}
Discretising this function on an equispaced grid of $T$ points yields a $T \times 40$ matrix, which can be visualised (Figure~\ref{fig:contrib}), or interpreted as a set of MFCC curves (Figure~\ref{fig:contrib_first_9}).
This MFCC matrix captures the difference between the /\ae/ and /\textipa{A}/ vowels. Since MFCCs can be used to synthesise speech sounds, we can now make a given \textit{bath} vowel sound more Southern or Northern, through the following procedure:
We first extract the MFCCs for the entire utterance of a \textit{bath} word, as a $T \times 40$ matrix where $T$ is determined by the length of the sound. With manually identified timestamps we find the $T_v$ rows of this matrix which correspond to the vowel in the word. We align MFCC 1 of this vowel to the mean NSCV MFCC 1 curve, to obtain the optimal warping function for the sound. The MFCC matrix in Figure~\ref{fig:contrib} is `unwarped' using the inverse of this warping function, resampled at $T_v$ equidistant time points, and padded with $T - T_v$ rows of zeroes corresponding to the rest of the sound's MFCCs (which we do not change). We can then add multiples of this $T\times40$ matrix to the original sound's MFCC matrix and synthesise the resulting sounds using \texttt{ahodecoder} \citep{Erro2014}. Adding positive multiples of the matrix makes the vowel sound more Southern, while subtracting multiples makes it sound more Northern. In the supplementary material we provide audio files with examples of this: \texttt{blast-StoN.wav} contains the word ``blast'' uttered in a Southern accent and perturbed towards a Northern accent, and \texttt{class-NtoS.wav} contains the word ``class'' uttered in a Northern accent and perturbed towards a Southern accent. Both of these original vowels were new recordings, and not from the NSCV corpus.
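The final perturbation step then amounts to matrix addition (a sketch; \texttt{D} denotes the unwarped, zero-padded $T \times 40$ matrix described above, and the scale factor is arbitrary):
\begin{verbatim}
mfcc_more_southern <- mfcc_word + 2 * D  # push towards /A/
mfcc_more_northern <- mfcc_word - 2 * D  # push towards /ae/
# write each matrix out and resynthesise with ahodecoder
\end{verbatim}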
\begin{figure}[h]
\centering
\includegraphics[width=4.5in]{figs/perturb.pdf}
\caption{This image shows the
MFCCs of \eqref{eq:mfcc_making_more_southern} which make a vowel sound more Southern. Each row of the image is an MFCC curve.}
\label{fig:contrib}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figs/perturb-first9-cv.pdf}
\caption{The first 9 MFCCs from model \eqref{eq:mfcc_making_more_southern}, which correspond to the bottom 9 rows of the matrix in Figure~\ref{fig:contrib}, plotted sequentially. We can see that MFCC 3 and 5 have large contributions. The grey lines are the MFCC curves obtained in each cross-validation fold and thicker black lines are from the final model.}
\label{fig:contrib_first_9}
\end{figure}
\section{Modelling geographic variation} \label{sec:maps}
In this section we demonstrate an approach for visualising the trap--bath split by combining data from the BNC with the trained accent classifiers described in Sections~\ref{sec:formant-model} and~\ref{sec:mfcc-model}. For each BNC speaker we predict the probability of their vowel sound being Southern (using in turn the formant model and the MFCC model), and then smooth the predicted probabilities spatially using a soap film smoother.
The BNC \textit{bath} vowels contain more variation than the NSCV dataset. This is partly because of more natural variation in conversational speech, as well as other factors like poor quality of some recordings and background noise. The BNC recordings also contain whole words and not only the vowel portion of the utterance. The timestamps for word boundaries are often inaccurate and many sounds are either a partial word, or contain parts of other words or speech from other speakers. It is hard to automatically detect the vowel portions within these recordings. We address this issue through the alignment step described in Section~\ref{sec:preprocess-steps}, aligning each sound to the NSCV data using the mean NSCV MFCC 1 curve.
A single representative sound can be constructed for each speaker by taking an average of these aligned formant and MFCC curves from the speaker's utterances. By resynthesising the sound of the average MFCC curves, we can hear that it retains the quality of a \textit{bath} vowel, so we use these average MFCCs and formants as representative of each speaker's vowel sound. For each speaker we obtain two predicted probabilities of their accent being Southern (one based on the formants, and one on the MFCCs), using the models of Sections~\ref{sec:formant-model} and \ref{sec:mfcc-model}. Notice that for each speaker, plugging this average sound's formants (MFCCs) into the trained model of Section~\ref{sec:formant-model} (Section~\ref{sec:mfcc-model}) yields the same predicted logit probability as averaging the logit probabilities from each sound's aligned formants (aligned MFCCs). The averaging step used to get speaker-specific probabilities ensures that the model is not unduly influenced by individual speakers who have many recordings at one location, while also reducing the predicted probability uncertainties. Where a speaker has recordings at multiple locations, we attribute their average sound to the location with the most recordings.
At each location $(\texttt{lon}, \texttt{lat})$ in Great Britain, we denote by $f(\texttt{lon}, \texttt{lat})$ the logit of the expected probability of a randomly chosen person's accent being Southern.
We will estimate this surface using a spatial Beta regression model:
\begin{eqnarray}
\label{eq:betaReg}
p_{ij} &\stackrel{\text{iid}}{\sim}& \text{Beta}(\mu_i \nu, \nu (1-\mu_i)), \quad j \in \{1, \ldots, n_i\}\\
\mathrm{logit}(\mu_i) &=& f(\texttt{lon}_i, \texttt{lat}_i), \nonumber
\end{eqnarray}
where $p_{ij} \in [0,1]$ is the predicted probability of the $j$-th speaker's accent at location $(\texttt{lon}_i, \texttt{lat}_i)$ being Southern, $j=1,\ldots, n_i$.
The surface $f$ is estimated using a soap film smoother within the geographic boundary of Great Britain.
A single value of $\nu > 0$ is estimated for all observations, as in GLMs. Notice that $\mathrm{ilogit}(f(\texttt{lon}_i,\texttt{lat}_i)) = \mu_i = \mathbb{E}(p_{ij}) \in [0,1]$ represents the expected probability of the accent of a randomly chosen person being Southern at location $(\texttt{lon}_i, \texttt{lat}_i)$.
One may instead consider fitting a linear model directly on the estimated linear predictor scores obtained from the formant or MFCC models. This linear approach would not be robust to the estimated linear predictor taking large values (which is the case with our data), even though the associated probabilities are essentially equal to one (or zero). Our approach alleviates this by smoothing predictions on the probability scale, which makes it less influenced by outliers. Link functions other than the logit could also be used in alternative approaches.
Let us now recall the soap film smoother.
The soap film smoother \citep{Wood2008} is a nonparametric solution to spatial smoothing problems, which avoids smoothing across boundaries of a bounded non-convex spatial domain.
We observe data points $\{(x_i, y_i, z_i), i=1,\ldots,n\}$, where $z_i$ are the responses with random noise and $\{(x_i, y_i)\}$ lie in a bounded region $\Omega \subset \mathbb{R}^2$. The objective is to find the function $f:\Omega \rightarrow \mathbb{R}$ which minimises
\[
\sum_{i=1}^n (z_i - f(x_i, y_i))^2 + \lambda \int_{\Omega} \left( \frac{\partial^2f}{\partial x^2} + \frac{\partial^2f}{\partial y^2}\right)^2 dx dy.
\]
The smoothing parameter $\lambda$ is chosen through cross-validation. The soap film smoother is implemented in the R package \texttt{mgcv} \citep{Wood2011}.
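A sketch of model \eqref{eq:betaReg} in \texttt{mgcv} (the boundary \texttt{bnd} and interior \texttt{knots} for Great Britain are assumed to have been prepared beforehand; names are ours):
\begin{verbatim}
library(mgcv)
fit <- gam(p ~ s(lon, lat, bs = "so", xt = list(bnd = bnd)),
           family = betar(link = "logit"),
           knots = knots, data = speakers)
mu_hat <- predict(fit, newdata = grid_gb, type = "response")
\end{verbatim}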
In our model \eqref{eq:betaReg}, the predicted Southern accent probabilities $\{p_{ij}\}$ of individual speakers are observations at different locations $\{(\texttt{lon}_i, \texttt{lat}_i)\}$ in Great Britain, and we use the soap film smoother to construct a smooth surface $f(\cdot, \cdot)$ to account for the geographic variation. We can compare the results using accent predictions from the two classification models proposed in the previous section.
Plots of the fitted response surfaces $\hat \mu(\texttt{lon}, \texttt{lat}) = \mathrm{ilogit}( \hat f(\texttt{lon}, \texttt{lat}) )$ using the formant and the MFCC classification models are given in Figure~\ref{fig:accent_maps}.
Both maps seem to suggest a North against Southeast split, similar to the isogloss map in Figure~\ref{fig:isogloss}. The predicted probabilities are usually not close to 0 or 1, because the BNC contains more variation than we have in the NSCV training data, due for instance to the variation in recording environments, and since not all speakers have a stereotypical Northern or Southern accent.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{.49\textwidth}
\centering
\includegraphics[trim=50 0 20 0, clip=TRUE, width=0.9\textwidth]{figs/FLR-map.pdf}
\caption{Map using formants.}
\end{subfigure}
\begin{subfigure}[b]{.49\textwidth}
\centering
\includegraphics[trim=50 0 20 0, clip=TRUE, width=0.9\textwidth]{figs/PLR-map.pdf}
\caption{Map using MFCCs.}
\end{subfigure}
\caption{Smoothed predicted probabilities of a vowel sound being Southern, when using the two models of Section~\ref{sec:classify}. Black crosses are recording locations.}
\label{fig:accent_maps}
\end{figure}
To visualise the uncertainty associated with the contours in Figure~\ref{fig:accent_maps}, Figure~\ref{fig:accent_SE_maps} shows the approximate 95\% pointwise confidence intervals for $\mu$. These are computed as $[\mathrm{ilogit}( \hat{f} - 1.96\times \hat{\texttt{se}}(\hat{f})), \mathrm{ilogit}(\hat{f} + 1.96\times \hat{\texttt{se}}(\hat{f}))]$, based on a normal approximation on the link function scale. Notice that the uncertainty for both models is high in Northern England, Scotland and Wales, due to fewer observations in those regions. However, the North-Southeast variation is consistent and Greater London emerges as a region with significantly Southern accents.
\begin{figure}[p]
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[height=0.4\textheight]{figs/FLR-se-maps.pdf}
\caption{Pointwise confidence intervals for $\mu(\cdot, \cdot)$ for the formant model.}
\end{subfigure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[height=0.4\textheight]{figs/PLR-se-maps.pdf}
\caption{Pointwise confidence intervals for $\mu(\cdot, \cdot)$ for the MFCC model.}
\end{subfigure}
\caption{Contours of the spatially smoothed probabilities, showing the lower and upper bounds of a 95\% pointwise confidence interval for $\mu(\cdot, \cdot)$, constructed using a pointwise Normal approximation on the logit scale.}
\label{fig:accent_SE_maps}
\end{figure}
\section{Discussion} \label{sec:discussion}
We first demonstrated two principled and interpretable approaches to modelling accent variation in speech sounds, using techniques from functional data analysis and generalised additive models. We presented a model that uses formant trajectories to classify \textit{bath} vowel sounds as Northern or Southern based on their similarity to /\ae/ and /\textipa{A}/ vowels, trained on a set of labelled vowels collected in an experimental setup. The same audio dataset was also used in a different model using MFCC curves, by using functional principal components analysis to generate new features from the MFCC curves, and then classifying the sounds using $\ell_1$-penalised logistic regression on the FPC scores. We showed in Section~\ref{sec:resynthesising} how this MFCC model allowed us to resynthesise vowel sounds along a spectrum between /\ae/ and /\textipa{A}/.
These formant and MFCC models were used to predict the probability of a Southern accent for vowels from the audio BNC \citep{BNC}, our second dataset. The predictions were smoothed spatially to visualise the trap--bath split in England, Wales and Scotland, using a spatial beta regression with a soap film smoother. The resulting maps show a North versus South-east difference in accents which we can directly attribute to the variation in the /\ae/ or /\textipa{A}/ vowel quality of BNC sounds.
This analysis demonstrates how we can combine information from a labelled audio dataset such as the NSCV dataset with the unlabelled BNC dataset. Despite the small sample of 4 speakers in the NSCV dataset, it allowed vowel classification models to be trained. From cross-validation these classification models appear highly accurate, a property that we believe would hold under recording conditions (such as background noise levels) similar to those of the training data.
However, the classifiers can only distinguish between Northern and Southern BNC \textit{bath} vowels to the extent that they differ by the /\ae/ and /\textipa{A}/ vowels captured in the NSCV training dataset. To produce a more valid characterisation of accent variation, one could use a labelled dataset of speech recordings from a larger corpus of speakers who can produce both accents accurately. Another limitation of this analysis is that we cannot verify the assumption of smooth spatial accent variation since we have no accent labels for BNC sounds. An extension of this work could involve augmenting the BNC by having human listeners manually classify a random sample of BNC vowels as Northern or Southern. These labels could then be used to train accent classifiers directly on BNC vowels, and also to validate the assumption of smooth spatial accent variation.
In phonetics, an ongoing research question has been whether dynamic information about formants is necessary for differentiating between vowels, or whether formant values at the midpoint of the vowel or sampled at fewer time points are sufficient \citep{watson1999, strange1983}. In Appendix \ref{app:modelselection} we have compared the functional formant model \eqref{eq:loggam} to simpler models using $\text{F}_1$ and $\text{F}_2$ formants measured at the middle of the vowel, or at 25\%, 50\% and 75\% of the vowel. Even though the \textit{bath} vowel is a monophthong which doesn't contain significant vowel transitions, we see that the functional models show slight increases in cross-validated classification accuracy as compared to their non-functional versions, and sampling formants at more time points does not hinder classification. This is due to the regularisation of smooth terms in the generalised additive models used. The functional modelling approach also doesn't require specific time points for sampling to be chosen in advance, so it can be easily used with other vowels which have different formant trajectories. The MFCC model is another higher dimensional approach to modelling variation in vowels, as it uses information from more frequency bands. It has a slightly lower accuracy than the functional formant model, but its ability to resynthesise vowel sounds may be desirable for some applications.
We also observe some differences between the predictions of the formant model and the MFCC model. These models agree on vowel classification about 94\% of the time for the NSCV vowels and 73\% of the time for BNC vowels. The disagreements occur when the vowel quality is not clear, for example when formant curves are close to the boundary between the two vowels, or in the BNC, when there is considerable background noise and more variation in conversational speech. Nevertheless, the resulting spatial maps (Figure~\ref{fig:accent_maps} and Figure \ref{fig:accent_SE_maps}) show many similarities. Another way to compare the two models is to resynthesise vowels along a spectrum using the MFCC model, and classify these new vowels using the formant model. Changing the ``Southernness'' of a vowel with the MFCC model does change the corresponding prediction from the formant model, suggesting that similar information about vowel quality is being used by both models. We have added more detail on comparing the models in Appendix \ref{app:comparing-models}.
It is also possible to combine both MFCCs and formants from the same sound in one model. This can be done similarly to the MFCC model \eqref{eq:PLR}, by appending the matrix of formant curves to the matrix of MFCC curves from each sound, and performing FPCA and $\ell_1$-penalised logistic regression as before. The disadvantage of this model is that we can neither interpret it from a speech production perspective (since it contains MFCCs which don't have a physical interpretation), nor use it to resynthesise vowels (since we cannot resynthesise vowels using formants). We have nevertheless trained this model, which has an accuracy of 92.75\%, and the results are in Appendix \ref{app:combined-model}.
The functional approach to modelling accent variation which we have demonstrated can easily be used with a larger corpus with more words or speakers. It can also be applied to other vowels, including diphthongs (vowels which contain a transition, such as in ``house'') to visualise other accent variation in Great Britain, or other geographic regions.
\section*{Supplementary Material}
\textbf{Data and R code:} The R code and preprocessed data used to generate these results can be obtained online at https://doi.org/10.5281/zenodo.4003815.
\noindent
\textbf{Other outputs:} \texttt{nscv.gif} shows animated formant trajectories of the NSCV data. Resynthesised vowels are in \texttt{class-NtoS.wav} (perturbing the vowel in ``class'' from /\ae/ towards /\textipa{A}/), and \texttt{blast-StoN.wav} (perturbing the vowel in ``blast'' from /\textipa{A}/ towards /\ae/).
\section*{Acknowledgements}
We thank the Associate Editor and three referees for their comments that helped to improve the quality of the paper.
\bibliographystyle{agsm}
\citestyle{agsm}
\section{Introduction}
A hypothetical pseudoscalar particle called the axion is predicted by the theory
related to solving the CP-violation problem in QCD. The most
important parameter determining the axion properties is the energy scale $f_a$
of the so-called U(1) Peccei-Quinn symmetry violation. It determines both the
axion mass and the strength of its coupling to fermions and gauge bosons,
including photons. However, despite numerous direct experiments, axions
have not been discovered so far. Meanwhile, these experiments, together with the
astrophysical and cosmological limitations, leave a rather narrow band for the
permissible parameters of the invisible axion (e.g.
$10^{-6} eV \leqslant m_a \leqslant 10^{-2} eV$~\citep{ref01,ref02}), which is
also a well-motivated cold dark matter candidate in this mass region
\citep{ref01,ref02}.
A whole family of axion-like particles (ALPs) with a Lagrangian structure
similar to that of the Peccei-Quinn axion may exist alongside axions, while
retaining distinctive features of their own. The key difference is that, if
ALPs exist, the relation between their mass and their photon coupling constant
must be strongly relaxed, in contrast to axions. It should also be mentioned
that photon-ALP mixing in an electromagnetic field not only leads to classic
neutrino-like photon-ALP oscillations, but also changes the polarization state
of photons (the $a \gamma \gamma$ coupling acts like a polarimeter
\citep{ref03}) propagating in sufficiently strong magnetic fields. It is
generally assumed that light ALPs couple only to two photons, although
realistic models of ALPs with couplings both to photons and to matter are not
excluded \citep{ref04}. In any case, under certain conditions they may be
considered a well-motivated cold dark matter candidate \citep{ref01,ref02},
just like axions.
It is interesting to note that photon-ALP mixing in the magnetic fields of
different astrophysical objects, including active galaxies, clusters of
galaxies, intergalactic space and the Milky Way, may cause remarkable phenomena
such as the dimming of stellar luminosity (e.g. of supernovae in the
extragalactic magnetic field \citep{ref06,ref07}) and ``light shining through
a wall'' (e.g. light from very distant objects travelling through the Universe
\citep{ref03,ref05}). In the former case the luminosity of an astrophysical
object is dimmed because some fraction of its photons transforms into axions in
the object's magnetic field. In the latter case photons produced by the object
are first converted into axions in the object's magnetic field, and then, after
passing some distance (the width of the ``wall''), are converted back into
photons in another magnetic field (e.g. that of the Milky Way), thus emulating
an effective growth of the photon free path in the astrophysical medium
\citep{ref08,ref09}.
For the sake of simplicity let us hereinafter refer to all such particles as
axions if not stated otherwise.
In the present paper we consider the possible existence of an axion mechanism
of Sun luminosity variations\footnote{Let us point out that the axion mechanism of Sun
luminosity used for estimating the axion mass was first described in 1978
by \cite{ref10}.} based on the ``light shining through a wall'' effect. More
precisely, we suggest that photons born mainly in the solar core are first
converted into axions via the Primakoff effect \citep{ref11} in its magnetic
field, and are then converted back into photons after passing the solar
radiative zone and reaching the magnetic field of the overshoot tachocline. We
estimate this magnetic field within the framework of the Ettingshausen-Nernst
effect. In addition, we obtain consistent estimates for the axion mass ($m_a$)
and the axion-photon coupling constant ($g_{a \gamma}$) based on this
mechanism, and verify their values against axion model results and the known
experiments, including CAST, ADMX and RBF.
\section{Photon-axion conversion and the case of maximal mixing}
Let us recall some implications of the theory of photon-axion oscillations,
which describes the conversion of a photon into an axion and back in a constant
magnetic field $B$ of length $L$. It is easy to show
\citep{ref05,Raffelt-Stodolsky1988,ref07,Hochmuth2007} that in the case of a
negligible photon absorption coefficient ($\Gamma _{\gamma} \to 0$) and axion
decay rate ($\Gamma _{a} \to 0$) the conversion probability is
\begin{equation}
P_{a \rightarrow \gamma} = \left( \Delta_{a \gamma}L \right)^2 \sin ^2 \left( \frac{ \Delta_{osc}L}{2} \right) \Big/ \left( \frac{ \Delta_{osc}L}{2}
\right)^2 \label{eq01}\, ,
\end{equation}
where the oscillation wavenumber $\Delta_{osc}$ is given by
\begin{equation}
\Delta_{osc}^2 = \left( \Delta_{pl} + \Delta_{Q,\perp} - \Delta_{a} \right)^2 + 4 \Delta_{a \gamma} ^2
\label{eq02}
\end{equation}
while the mixing parameter $\Delta _{a \gamma}$, the axion-mass parameter
$\Delta_{a}$, the refraction parameter $\Delta_{pl}$ and the QED dispersion
parameter $\Delta_{Q,\perp}$ may be represented by the following expressions:
\begin{equation}
\Delta _{a \gamma} = \frac{g_{a \gamma} B}{2} = 540 \left( \frac{g_{a \gamma}}{10^{-10} GeV^{-1}} \right) \left( \frac{B}{1 G} \right) ~~ pc^{-1}\, ,
\label{eq03}
\end{equation}
\begin{equation}
\Delta _{a} = \frac{m_a^2}{2 E_a} = 7.8 \cdot 10^{-11} \left( \frac{m_a}{10^{-7} eV} \right)^2 \left( \frac{10^{19} eV}{E_a} \right) ~~ pc^{-1}\, ,
\label{eq04}
\end{equation}
\begin{equation}
\Delta _{pl} = \frac{\omega ^2 _{pl}}{2 E_a} = 1.1 \cdot 10^{-6} \left( \frac{n_e}{10^{11} cm^{-3}} \right) \left( \frac{10^{19} eV}{E_a} \right) ~~ pc^{-1},
\label{eq05}
\end{equation}
\begin{equation}
\Delta _{Q,\perp} = \frac{m_{\gamma, \perp}^2}{2 E_a} .
\label{eq06}
\end{equation}
Here $g_{a \gamma}$ is the constant of axion coupling to photons; $B$ is the
transverse magnetic field; $m_a$ and $E_a$ are the axion mass and energy;
$\omega ^2 _{pl} = 4 \pi \alpha n_e / m_e$ is an effective photon mass in terms
of the plasma frequency if the process does not take place in vacuum, $n_e$ is
the electron density, $\alpha$ is the fine-structure constant, $m_e$ is the
electron mass; $m_{\gamma, \perp}^2$ is the effective mass square of the
transverse photon which arises due to interaction with the external magnetic
field.
The conversion probability (\ref{eq01}) is energy-independent, when
$2 \Delta _{a \gamma} \approx \Delta_{osc}$, i.e.
\begin{equation}
P_{a \rightarrow \gamma} \cong \sin^2 \left( \Delta _{a \gamma} L \right)\, ,
\label{eq07}
\end{equation}
or, whenever the oscillatory term in (\ref{eq01}) is small
($\Delta_{osc} L / 2 \to 0$), implying the limiting coherent behavior
\begin{equation}
P_{a \rightarrow \gamma} \cong \left( \frac{g_{a \gamma} B L}{2} \right)^2\, .
\label{eq08}
\end{equation}
It is worth noting that the oscillation length corresponding to (\ref{eq07})
reads
\begin{equation}
L_{osc} = \frac{\pi}{\Delta_{a \gamma}} = \frac{2 \pi}{g_{a \gamma} B} \cong 5.8 \cdot 10^{-3}
\left( \frac{10^{-10} GeV^{-1}}{g_{a \gamma}} \right)
\left( \frac{1G}{B} \right) ~pc
\label{eq13}
\end{equation}
\noindent assuming a purely transverse field. If the size $L$ of the region
is appropriate, a complete transition between photons and axions is
possible.
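For convenience, the limiting cases above may be checked numerically. The
following minimal Python sketch is illustrative only: the inverse-length
parameters are expressed in $pc^{-1}$, and the numerical prefactors of
(\ref{eq03}) and (\ref{eq13}) are taken exactly as quoted above. It evaluates
(\ref{eq01}) at maximal mixing and in the coherent limit:
\begin{verbatim}
import numpy as np

def p_conversion(d_agamma, d_pl, d_Q, d_a, L):
    # general a <-> gamma conversion probability, Eq. (eq01)
    d_osc = np.sqrt((d_pl + d_Q - d_a)**2 + 4.0 * d_agamma**2)
    x = d_osc * L / 2.0
    return (d_agamma * L)**2 * np.sinc(x / np.pi)**2  # sinc(x/pi) = sin(x)/x

d_ag  = 540.0    # pc^-1, i.e. g = 1e-10 GeV^-1 and B = 1 G in Eq. (eq03)
L_osc = 5.8e-3   # pc, oscillation length for the same parameters, Eq. (eq13)

# maximal mixing (plasma, QED and mass terms cancel): P = sin^2(d_ag * L),
# reaching ~1 after half an oscillation length
print(p_conversion(d_ag, 0, 0, 0, L_osc / 2), np.sin(d_ag * L_osc / 2)**2)

# coherent limit (d_osc * L / 2 -> 0): P -> (d_ag * L)^2, Eq. (eq08)
print(p_conversion(d_ag, 0, 0, 0, 1e-4), (d_ag * 1e-4)**2)
\end{verbatim}
The first printed pair coincides and is close to unity (a complete
photon-axion transition over half an oscillation length); the second pair
agrees to within the small-argument approximation.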
From now on we focus on the energy-independent cases
(\ref{eq07}) and (\ref{eq08}), which play the key role in determining the
parameters of the hypothesized axion mechanism of solar luminosity variations
(the axion coupling constant to photons $g_{a \gamma}$, the transverse
magnetic field $B$ over the length $L$, and the axion mass $m_a$).
\section{Axion mechanism of Sun luminosity variations}
Our hypothesis is that solar axions, which are born in the solar core
\citep{ref01,ref02} through the well-known Primakoff effect \citep{ref11}, may be
converted back into $\gamma$-quanta in the magnetic field of the solar
tachocline (the base of the solar convective zone). In this case the magnetic
field variations in the tachocline modulate the intensity of the converted
$\gamma$-quanta, which in turn causes the variations of the solar
luminosity known as the active and quiet Sun states. Let us consider this
phenomenon in more detail below.
As we noted above, the expression (\ref{eq01}) for the probability of the
axion-photon oscillations in the transverse magnetic field was obtained for
media with quasi-zero refraction, i.e. for media with a negligible
photon absorption coefficient ($\Gamma_{\gamma} \to 0$). It means that in order
for the axion-photon oscillations to take place without significant losses,
a medium with a very low, quasi-zero density is required, which would
suppress the processes of photon absorption almost entirely.
Surprisingly, such ``transparent'' media do exist, and not only in
plasmas in general, but right in the convective zone
of the Sun. Here we mean the so-called magnetic flux tubes, whose
properties are examined below.
\subsection{Ideal photon channeling conditions inside the magnetic flux tubes}
\label{subsec-channeling}
The idea of energy flow channeling along a fanning magnetic field was first
suggested by
\cite{ref12} as an explanation for the
darkness of the umbra of sunspots. It was incorporated into a simple sunspot
model by
\cite{ref13}.
\cite{ref14} extended this suggestion to smaller
flux tubes to explain the dark pores and the bright faculae as well.
Summarizing the research of the convective zone magnetic fields in the form of
the isolated flux tubes,
\cite{ref15} suggested a simple mathematical model for the behavior of thin
magnetic flux tubes, dealing with the nature of the solar cycle, sunspot
structure, the origin of spicules and the source of mechanical heating in the
solar atmosphere. In this model, the so-called thin tube approximation is used
(see \cite{ref15} and references therein), i.e. the field is conceived to exist
in the form of slender bundles of field lines (flux tubes) embedded in a
field-free fluid (Fig.~\ref{fig01}). Mechanical equilibrium between the tube
and its surroundings is ensured by a reduction of the gas pressure inside the
tube, which compensates the force exerted by the magnetic field. In our
opinion, this is exactly the kind of mechanism
\cite{Parker1955} was thinking about when he wrote about the problem of flux
emergence: ``Once the field has been amplified by the dynamo, it needs to be
released into the convection zone by some mechanism, where it can be
transported to the surface by magnetic buoyancy''~\citep{ref17}.
\begin{figure*}
\begin{center}
\includegraphics[width=12cm]{TachoclineFluxTubes-3.pdf}
\end{center}
\caption{(a) Vertical cut through an active region illustrating the connection
between a sunspot at the surface and its origins in the toroidal field layer at
the base of the convection zone. Horizontal fields stored at the base of the
convection zone (the overshoot tachocline zone) during the cycle. Active
regions form from sections brought up by buoyancy (one shown in the process of
rising). After eruption through the solar surface a nearly potential field is
set up in the atmosphere (broken lines), connecting to the base of the
convective zone via an almost vertical flux tube. Hypothetical small-scale
structure of a sunspot is shown in the inset (adopted from
\cite{ref18}
and
\cite{ref15}).
(b) Detection of emerging sunspot regions in the solar interior~\citep{ref18}.
Acoustic ray paths with lower turning points between 42 and 75 Mm
(1 Mm=1000 km) crossing a region of emerging flux. For simplicity, only four
out of a total of 31 ray paths used in this study (the time-distance
helioseismology experiment) are shown here. Adopted from~\cite{ref19}.
(c) Emerging and anchoring of stable flux tubes in the overshoot tachocline
zone, and its time-evolution in the convective zone. Adopted from \cite{ref20}.
(d) Vector magnetogram of the white light image of a sunspot (taken with SOT on
board the Hinode satellite -- see inset) showing in red the direction of
the magnetic field and its strength (length of the bar). The movie shows the
evolution in the photospheric fields that has led to an X class flare in the
lower part of the active region. Adopted from~\cite{ref21}.}
\label{fig01}
\end{figure*}
In order to understand magnetic buoyancy, let us consider an isolated
horizontal flux tube in pressure equilibrium with its non-magnetic
surroundings, so that, in cgs units,
\begin{equation}
p_{ext} = p_{int} + \frac{\vert \vec{B} \vert^2}{8 \pi} ,
\label{eq21}
\end{equation}
\noindent where $p_{int}$ and $p_{ext}$ are the internal and external gas
pressures respectively, $B$ denotes the uniform field strength in the flux
tube. If the internal and external temperatures are equal so that $T_{int} =
T_{ext}$ (thermal equilibrium), then since $p_{ext} > p_{int}$, the gas in the
tube is less dense than its surroundings ($\rho _{ext} > \rho _{int}$), implying
that the tube will rise under the influence of gravity.
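As a simple numerical illustration (a sketch only, taking the tachocline gas
pressure quoted later in (\ref{eq06-15}) and the upper end of the
$10^4 - 10^5 ~G$ flux-tube range mentioned below), the relative density
deficit implied by (\ref{eq21}) for an isothermal tube is tiny yet sufficient
for buoyancy:
\begin{verbatim}
import math

p_ext = 6.5e13                  # erg/cm^3, gas pressure near 0.7 R_Sun
B     = 1.0e5                   # G, strong flux-tube field
p_mag = B**2 / (8.0 * math.pi)  # magnetic pressure, erg/cm^3
p_int = p_ext - p_mag           # pressure balance, Eq. (eq21)

# For T_int = T_ext an ideal gas gives rho_int/rho_ext = p_int/p_ext < 1:
print(p_mag / p_ext)            # fractional density deficit, ~6e-6
print(p_int / p_ext)            # rho_int / rho_ext, slightly below unity
\end{verbatim}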
In spite of the obvious, though ultimately surmountable, difficulties of
applying it to real problems, it was shown (see~\cite{ref15} and Refs.
therein) that strong buoyancy forces act in magnetic flux tubes of the required
field strength ($10^4 - 10^5 ~G$~\citep{ref23}). Under their influence the tubes
either float to the surface as a whole (e.g. Fig.~1 in \citep{ref24}) or they
form loops whose tops break through the surface (e.g. Fig.~1
in~\citep{ref14}) and whose lower parts descend to the bottom of the convective
zone, i.e. to the overshoot tachocline zone. The convective zone, being
unstable, enhances this process~\citep{ref25,ref26}. Small tubes take longer to
erupt through the surface because they feel stronger drag forces. It is
interesting to note here that the drag force which raises the magnetic
flux tubes to the convective surface with speeds of about 0.3-0.6~km/s was
discovered in direct experiments using the method of time-distance
helioseismology~\citep{ref19}. Detailed calculations of the
process~\citep{ref27} show that even a tube with the size of a very small spot,
if located within the convective zone, will erupt in less than two years. Yet,
according to~\cite{ref27}, horizontal fields which survive for about 11~yr are
needed in the overshoot tachocline zone in order to produce an activity cycle.
A simplified scenario of the magnetic flux tube (MFT) birth and space-time
evolution (Fig.~\ref{fig01}a) may be presented as follows. An MFT is born in the
overshoot tachocline zone (Fig.~\ref{fig01}c) and rises up to the surface of the
convective zone (Fig.~\ref{fig01}b) without separating from the tachocline (the
anchoring effect); when it intersects the photosphere, it forms a sunspot
(Fig.~\ref{fig01}d) or other kinds of active solar regions. Finer details of
MFT physics are expounded in the overviews by
\cite{ref17} and
\cite{ref24}, where certain fundamental questions, which need to be addressed
to understand the basic nature of magnetic activity, are discussed in detail:
How is the magnetic field generated, maintained and dispersed? What are its
properties such as structure, strength, geometry? What are the dynamical
processes associated with magnetic fields? \textbf{What role do magnetic fields
play in energy transport?}
Dwelling on this last, extremely important question associated with the energy
transport, let us note that thin magnetic flux tubes can
support longitudinal (also called sausage), transverse (also called kink),
torsional (also called torsional Alfv\'{e}n), and fluting modes
(e.g.~\cite{ref28,ref29,ref30,ref31,ref32}); for the tube modes supported by
wide magnetic flux tubes, see
\cite{ref31}. Focusing on the longitudinal tube waves known to be an important
heating agent of solar magnetic regions, it is necessary to mention the recent
papers by
\cite{ref33}, which showed that longitudinal flux tube waves are
insufficient to heat the solar transition region and corona, in agreement
with previous studies~\citep{ref34}.
\textbf{In other words, the problem of generation and transport of energy by
magnetic flux tubes remains unsolved in spite of its key role in physics of
various types of solar active regions.}
It is clear that this unsolved problem of energy transport by magnetic flux
tubes at the same time represents another unsolved problem, that of the
energy transport and sunspot darkness (see 2.2 in \cite{Rempel2011}). Among the
known concepts playing a noticeable role in understanding the
connection between energy transport and sunspot darkness, let us consider
the theory we regard as the most significant. It is based on the
Parker-Biermann cooling effect \citep{ref41-3,Biermann1941,ref43-3} and
originates from the works of~\cite{Biermann1941} and~\cite{Alfven1942}.
The main point of the Parker-Biermann cooling effect is that the classical
mechanism of magnetic tube buoyancy (e.g. Fig.~\ref{fig04-3}a,
\cite{ref41-3}), emerging as a result of the shear flow instability
development in the tachocline, should be supplemented with the following
results of the~\cite{Biermann1941} postulate and the theory developed by
\cite{ref41-3,ref43-3}: the electric conductivity in the strongly ionized
plasma may be so high that the magnetic field becomes frozen into the plasma
and causes the split magnetic tube (Fig.~\ref{fig04-3}b,c) to cool inside.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=12cm]{Parker-empty-tubes-6.pdf}
\end{center}
\caption{The possible ways of a toroidal magnetic flux tube development into a
sunspot.
(a) A rough representation of the form a tube can take after the rise to the
surface by magnetic buoyancy (adopted from Fig.~2a in \cite{ref41-3});
(b) demonstrates the ``crowding'' below the photosphere surface because of
cooling (adopted from Fig.~2b in \cite{ref41-3});
(c) demonstrates the tube splitting as a consequence of the inner region
cooling under the conditions when the tube is in the thermal disequilibrium
with its surroundings and the convective heat transfer is suppressed
\mbox{\citep{Biermann1941}} above $\sim 0.71 R_{Sun}$. This effect as well as
the mechanism of the neutral atoms appearance inside the magnetic tubes
are discussed further in the text (see \mbox{Fig.~\ref{fig-lampochka}a}).
Adopted from Fig.~2c in \cite{ref41-3}.
}
\label{fig04-3}
\end{figure}
Biermann understood that the magnetic field within the sunspots might itself be
a reason of their darkness. Around the sunspots, the heat is transported up to
the surface of the Sun by means of convection (see 2.2.1 in~\cite{Rempel2011}),
while~\cite{Biermann1941} noted that such transport is strongly inhibited by
the nearly vertical magnetic field within the sunspot, thereby providing a
direct explanation for the reduced temperature at the visible surface. Thus,
the sunspot is dark because it is cooler than its surroundings, and it is
cooler because the convection is inhibited underneath.
Still, the missing cause of a very high conductivity in strongly ionized
plasma, which would produce a strong magnetic field ``frozen'' into this
plasma, has been the major flaw of the so-called~\cite{Biermann1941} postulate.
Let us show a solution to the known problem of the Parker-Biermann cooling
effect, which is defined by the nature of the very large poloidal magnetic
fields in the tachocline (determined by the thermomagnetic Ettingshausen-Nernst
effect) and provides the physical basis for the photon channeling conditions
inside the magnetic flux tubes.
\subsubsection{The thermomagnetic Ettingshausen-Nernst effect and poloidal magnetic field in the tachocline}
For the dynamo theories of planetary, stellar and spiral galactic magnetism the
Coriolis force is of crucial importance. However, the assumed large solar
dynamo leads to very large magnetic fields ($\sim 5 \cdot 10^7$ gauss
\citep{Fowler1955,Couvidat2003}), which are not observed on the surface of the
Sun. This requires an explanation of how these fields are screened from
reaching the surface.
As is known~\citep{Schwarzschild1958}, the temperature dependence of the
thermonuclear reaction rate in the region of 10$^7$K goes in proportion to
T$^{4.5}$. This means there is a sharp boundary between a much hotter region
where most of the thermonuclear reactions occur and a cooler region where they
are largely absent~\citep{Winterberg2015}. This boundary between radiative and
convective zones is the tachocline. It is the thermomagnetic
Ettingshausen-Nernst
effect~\citep{Ettingshausen1886,Sondheimer1948,Spitzer1956,Kim1969} which by
the large temperature gradient in the tachocline between the hotter and cooler
region leads to large currents shielding the large magnetic field of the dynamo
\citep{Winterberg2015}.
Subject to a quasi-steady state characterized by a balance of the magnetic
field of the dynamo, in the limit of weak collisions (the collision frequency
much less than the cyclotron frequency of the positive ions), a thermomagnetic
current can be generated in a magnetized
plasma~\citep{Spitzer1962,Spitzer2006}. For a fully ionized gas plasma the
thermomagnetic Ettingshausen-Nernst effect leads to a current density given by
(see Eq.~(5-49) in~\citep{Spitzer1962,Spitzer2006}):
\begin{equation}
\vec{j} _{\perp} = \frac{3 k n_e c}{2 B^2} \vec{B} \times \nabla T
\label{eq06-01}
\end{equation}
\noindent where $n_e$ is the electron number density, $B$ is the magnetic
field, and $T$ is the absolute temperature (K). With $n_e = \left[ Z / (Z+1)
\right] n$, where $n = n_e + n_i$, and $n_i = n_e / Z$ is the ion number
density for a $Z$-times ionized plasma, the following is obtained:
\begin{equation}
\vec{j} _{\perp} = \frac{3 k n c}{2 B^2} \frac{Z}{Z+1} \vec{B} \times \nabla
T\, . \label{eq06-02}
\end{equation}
It exerts a force on the plasma, with the force density $\vec{F}$ given by
\begin{equation}
\vec{F} = \frac{1}{c} \vec{j} _{\perp} \times \vec{B} =
\frac{3 n k}{2 B^2} \frac{Z}{Z+1} \left( \vec{B} \times \nabla T \right)
\times \vec{B}
\label{eq06-03}
\end{equation}
or, with $\nabla T$ perpendicular to $\vec{B}$, whence
$\left( \vec{B} \times \nabla T \right) \times \vec{B} = B^2 \nabla T$,
\begin{equation}
\vec{F} = \frac{3 n k}{2} \frac{Z}{Z+1} \nabla T
\label{eq06-04}
\end{equation}
leading to the magnetic equilibrium condition (see Eqs.~(4-1)
in~\citep{Spitzer1962})
\begin{equation}
\vec{F} = \frac{1}{c} \vec{j} _{\perp} \times \vec{B} = \nabla p
\label{eq06-05}
\end{equation}
with $p = (\rho / m) kT = nkT$, so that $\nabla p = nk \nabla T + kT \nabla n$.
Equating~(\ref{eq06-04}) and~(\ref{eq06-05}) then gives
\begin{equation}
\frac{3 n k}{2} \frac{Z}{Z+1} \nabla T = nk \nabla T + kT \nabla n
\label{eq06-06}
\end{equation}
\noindent or
\begin{equation}
a \frac{\nabla T}{T} + \frac{\nabla n}{n} = 0,
~~~ where ~~ a = \frac{2 - Z}{2(Z+1)} ,
\label{eq06-06a}
\end{equation}
\noindent we obtain the condition:
\begin{equation}
T ^a n = const .
\label{eq06-07}
\end{equation}
For a singly-ionized plasma with $Z=1$, one has
\begin{equation}
T ^{1/4} n = const .
\label{eq06-08}
\end{equation}
For a doubly-ionized plasma ($Z=2$) one has $n=const$. Finally, in the limit of
large $Z$, one has $T^{-1/2}n = const$. Therefore, $n$ does not strongly
depend on $T$, unlike in a plasma of constant pressure, in which $Tn=const$. This
shows that the thermomagnetic currents may change the pressure distribution in a
magnetized plasma considerably.
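The behavior of the exponent $a$ with ionization is trivial to tabulate; a
minimal sketch of (\ref{eq06-06a})--(\ref{eq06-08}):
\begin{verbatim}
# exponent a = (2 - Z) / (2 * (Z + 1)) in the condition T^a n = const
for Z in (1, 2, 5, 20, 1000):
    print(Z, (2 - Z) / (2 * (Z + 1)))
# Z=1 -> 0.25; Z=2 -> 0.0 (n = const); large Z -> -0.5 (T^(-1/2) n = const)
\end{verbatim}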
Taking a Cartesian coordinate system with $z$ directed along $\nabla T$, the
magnetic field in the $x$-direction and the Ettingshausen-Nernst current in the
$y$-direction, and supposing a fully ionized hydrogen plasma with $Z=1$ in the
tachocline, one has
\begin{equation}
{j} _{\perp} = {j} _y = - \frac{3 n k c}{4 B} \frac{dT}{dz}.
\label{eq06-09}
\end{equation}
From Maxwell's equation $4 \pi \vec{j}_{\perp} / c = \nabla \times \vec{B}$, one has
\begin{equation}
{j} _y = \frac{c}{4 \pi} \frac{dB}{dz},
\label{eq06-10}
\end{equation}
and thus by equating~(\ref{eq06-09}) and~(\ref{eq06-10}) we obtain:
\begin{equation}
2B \frac{dB}{dz} = -6 \pi k n \frac{dT}{dz}.
\label{eq06-11}
\end{equation}
From~(\ref{eq06-08}) one has
\begin{equation}
n = \frac{n _{OT} T_{OT}^{1/4}}{T^{1/4}},
\label{eq06-12}
\end{equation}
\noindent where the values $n = n_{OT}$ and $T = T_{OT}$ correspond to the
overshoot tachocline. Inserting~(\ref{eq06-12}) into~(\ref{eq06-11}), one finds
\begin{equation}
dB^2 = -\frac{6 \pi k n _{OT} T_{OT}^{1/4}}{T^{1/4}} dT,
\label{eq06-13}
\end{equation}
\noindent and hence, integrating the left-hand side over $[B_{OT},0]$ and the
right-hand side over $[0,T_{OT}]$ (using
$\int_0^{T_{OT}} T^{-1/4} \, dT = \frac{4}{3} T_{OT}^{3/4}$),
\begin{equation}
\frac{B_{OT}^2}{8 \pi} = n _{OT} kT_{OT}
\label{eq06-14}
\end{equation}
which shows that the magnetic field of the thermomagnetic current in the
overshoot tachocline neutralizes the magnetic field of the dynamo reaching the
overshoot tachocline (see Fig.~\ref{fig-R-MagField}).
\begin{figure}[tb]
\begin{center}
\includegraphics[width=12cm]{MagField-Radius-TurckChieze-1.pdf}
\end{center}
\caption{The reconstructed solar magnetic field (in blue) simulation
from~\cite{Couvidat2003}: 10$^3$-10$^4$~Tesla (left), 30-50~Tesla (middle) and
2-3~Tesla (right), with a temperature of $\sim$9~MK, $\sim$2~MK
and~$\sim$200~kK, respectively. The thin lines show the estimated range of
values for each magnetic field component. Internal rotation was not included in
the calculation. An additional axion production at those places can modify both
intensity and shape of the solar axion spectrum (Courtesy Sylvaine
Turck-Chi\`{e}ze (see Fig.~2 in~\cite{Zioutas2007})). The reconstructed solar
magnetic field (in red) simulation from~(\ref{eq06-16}): $4 \cdot 10^3$~T in
tachocline ($\sim0.7 R_{Sun}$).}
\label{fig-R-MagField}
\end{figure}
Hence, it is not hard to understand what forces compress the field into intense
filaments, in opposition to the enormous magnetic pressure
\begin{equation}
\frac{B_{OT}^2}{8 \pi} = p_{ext} \approx 6.5 \cdot 10^{13} \frac{erg}{cm^3} ~~
at ~~ 0.7 R_{Sun},
\label{eq06-15}
\end{equation}
\noindent where the gas pressure $p_{ext}$ at the tachocline of the Sun ($\rho
\approx 0.2 ~g\cdot cm^{-3}$ and $T \approx 2.3 \cdot 10^6 K$ \citep{ref45-3}
at~$0.7 R_{Sun}$) gives rise to a poloidal magnetic field
\begin{equation}
B_{OT} \simeq 4100 T.
\label{eq06-16}
\end{equation}
According to (\ref{eq06-16}), a magnetic flux tube anchored in the tachocline
(see Fig.~\ref{fig-twisted-tube}) has a significant toroidal magnetic field
($\sim$4100~T) within a layer near the base of the convection zone, whose mean
position and thickness are about $0.7 R_{Sun}$ and $d \sim 0.05 R_{Sun}$,
respectively. Each of these anchored
magnetic flux tubes forms a pair of sunspots on the surface of the Sun.
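As a sanity check of (\ref{eq06-14})--(\ref{eq06-16}), a two-line numerical
sketch (cgs units, with $1~T = 10^4~G$) recovers the quoted field strength:
\begin{verbatim}
import math

p_ext = 6.5e13                            # erg/cm^3, Eq. (eq06-15)
B_OT  = math.sqrt(8.0 * math.pi * p_ext)  # gauss, from B_OT^2/(8 pi) = p_ext
print(B_OT / 1.0e4)                       # ~4.0e3 T, cf. B_OT ~ 4100 T
\end{verbatim}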
Let us now show the theoretical possibility of the sunspot activity correlation
with the variations of the $\gamma$-quanta of axion origin, induced by the
magnetic field variations in the overshoot tachocline.
\subsubsection{The Parker-Biermann cooling effect, Rosseland mean opacity and\\ axion-photon oscillations in twisted magnetic tubes}
\label{parker-biermann}
Several local models are known to have been used with great success to
investigate the formation and buoyant transport of
twisted magnetic tubes generated through shear amplification near the
tachocline at the base of the convection zone (e.g.~\cite{Nelson2014}), as well
as the structure and evolution of
photospheric active regions (e.g.~\cite{Rempel2011a}).
Because these models assume that the anchored magnetic flux tubes depend on the
poloidal field in the tachocline, it is not hard to show that the magnetic
field $B_{OT}$ reaching $\sim 4100 ~T$ (see~(\ref{eq06-16})) may at the same
time be the reason for the Parker-Biermann cooling effect in the twisted
magnetic tubes (see~Fig.~\ref{fig-twisted-tube}b). The theoretical consequences
of such reasoning are considered below.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=15cm]{MagTube-twisted-05.pdf}
\end{center}
\caption{An isolated and anchored in tachocline (a) magnetic flux tube (adopted
from~\cite{Parker1979}) and (b) twisted magnetic flux tube
(e.g.~\citep{Stein2012}, Fig.~2 in~\citep{Gold1960}, Fig.~1 and Fig.~2
in~\citep{Sturrock2001}) bursting through the solar photosphere to form a
bipolar region. \textbf{Inset in panel (b)}: topological effect of the magnetic
reconnection in the magnetic tube (see~\cite{Priest2000}), where the
$\Omega$-loop reconnects across its base, pinching off the $\Omega$-loop to
form a free $O$-loop (see Fig.~4 in~\cite{Parker1994}). The buoyancy of the
$O$-loop is limited by the magnetic tube interior with Parker-Biermann
cooling.}
\label{fig-twisted-tube}
\end{figure}
First of all, we suggest that the classic mechanism of magnetic tube buoyancy
(Fig.~\ref{fig-twisted-tube}a), appearing as a result of the shear instability
development in the tachocline, should be supplemented by the rise of the
twisted magnetic tubes in a stratified medium (Fig.~\ref{fig-twisted-tube}b;
see Fig.~1 and Fig.~2 in \citep{Sturrock2001}), where the magnetic field is
produced by dynamo action throughout the convection zone, primarily by
stretching and twisting in the turbulent downflows (see~\citep{Stein2012}).
Second, the twisting of the magnetic tube may not only promote its splitting,
but may also form a cool region under a certain condition
\begin{equation}
p_{ext} = \frac{B^2}{8\pi}
\label{eq06v2-01}
\end{equation}
\noindent
when the tube (inset in~Fig.~\ref{fig-twisted-tube}b) is in the thermal
disequilibrium with its surroundings and the convective heat transfer is
suppressed \citep{Biermann1941}.
It is interesting to explore how the cool region stretching from the
tachocline to the photosphere, where the magnetic tube is in thermal
non-equilibrium~(\ref{eq06v2-01}) with its surroundings, deals with the
appearance of the neutral atoms (e.g. hydrogen) in the upper convection zone
(see Fig.~\ref{fig-lampochka}a in contrast to Fig.~2c in \cite{Parker1955}). In
other words, how does this very cool region prevent the neutral atoms from
penetrating from the upper convection zone down to the base of the convection
zone, i.e. the tachocline?
\begin{figure*}[tbp]
\begin{center}
\includegraphics[width=12cm]{Sun-mag_tube-Nernst-10.pdf}
\end{center}
\caption{(a) Topological effects of the magnetic reconnection inside the
magnetic tubes with the ``magnetic steps''. The left panel shows the
temperature and pressure change along the radius of the Sun from the tachocline
to the photosphere \citep{ref45-3}, $L_{MS}$ is the height of the magnetic
shear steps. At $R \sim 0.72~R_{Sun}$ the vertical magnetic field reaches $B_z
\sim 3600$~T, and the magnetic pressure $p_{ext} = B^2 / 8\pi
\simeq 5.21 \cdot 10^{13}~erg/cm^3$ \citep{ref45-3}. The very cool regions
along the entire convective zone caused by the Parker-Biermann cooling effect
have the magnetic pressure (\mbox{\ref{eq06v2-01}}) in the twisted magnetic tubes.
\newline (b) All the axion flux, born via the Primakoff effect (i.e. the real
thermal photons interaction with the Coulomb field of the solar plasma) comes
from the region $\leq 0.1 R_{Sun}$~\citep{ref36}. Using the angle
$\alpha = 2 \arctan \left( 0.1 R_{Sun} / 0.7 R_{Sun} \right)$ marking the
angular size of this region relative to tachocline, it is possible to estimate
the flux of the axions distributed over the surface of the Sun. The flux of the
X-ray (of axion origin) is defined by the angle
$\gamma = 2 \arctan \left( 0.5 d_{spot} / 0.3 R_{Sun} \right)$, where
$d_{spot}$ is the diameter of a sunspot on the surface of the Sun (e.g.
$d_{spot} \sim 11000~km$~\citep{Dikpati2008}).}
\label{fig-lampochka}
\end{figure*}
It is essential to find the physical solution to the problem of solar convective zone which would fit the opacity experiments. The full calculation of solar opacities, which depend on the chemical composition, pressure and temperature of the gas, as well as the wavelength of the incident light, is a complex endeavour. The problem can be simplified by using a mean opacity averaged over all wavelengths, so that only the dependence on the gas physical properties remains (e.g. \cite{Rogers1994,Ferguson2005,Bailey2009}). The most commonly used is the Rosseland mean opacity $k_R$, defined as:
\begin{equation}
\frac{1}{k_R} = \frac{\int \limits_{0}^{\infty} \frac{1}{k_\nu} \frac{dB_\nu}{dT} \, d\nu}
{\int \limits_{0}^{\infty} \frac{dB_\nu}{dT} \, d\nu}
\label{eq06v2-02}
\end{equation}
\noindent
where $dB_\nu / dT$ is the derivative of the Planck function with respect to
temperature, $k_{\nu}$ is the monochromatic opacity at frequency $\nu$ of the
incident light or the total extinction coefficient, including stimulated
emission plus scattering. A large value of the opacity indicates strong
absorption from a beam of photons, whereas a small value indicates that the beam
loses very little energy as it passes through the medium.
Note that the Rosseland opacity is a harmonic mean, in which the greatest
contribution comes from the lowest values of opacity, weighted by a function
that depends on the rate at which the blackbody spectrum varies with
temperature (see Eq.~(\ref{eq06v2-02}) and Fig.~\ref{fig-opacity}), and the
photons are most efficiently transported through the ``windows'' where $k_\nu$
is the lowest (see Fig.2 in \cite{Bailey2009}).
\begin{figure}[tbp!]
\begin{center}
\includegraphics[width=15cm]{rosseland_opacity-01.pdf}
\end{center}
\caption{Rosseland mean opacity $k_R$, in units of $cm^2 g^{-1}$, shown versus
temperature (X-axis) and density (multi-color curves, plotted once per decade),
computed for a hydrogen-helium mixture with solar metallicity (X=0.7,
Z=0.02). The panel shows curves of $k_R$ versus temperature for several
``steady'' values of the density, labelled by the value of $\log {\rho}$ (in
$g/cm^3$). Curves that extend from $\log {T} = 3.5$ to 8 are from the Opacity
Project (opacities.osc.edu). Overlapping curves from $\log {T} = 2.7$ to 4.5
are from \cite{Ferguson2005}. The lowest-temperature region (black dotted
curve) shows an estimate of ice-grain and metal-grain opacity from
\cite{Stamatellos2007}. Adapted from \cite{Cranmer2015}.}
\label{fig-opacity}
\end{figure}
Taking the Rosseland mean opacities shown in Fig.~\ref{fig-opacity}, one may
calculate, for example, four consecutive cool ranges within the convective
zone (Fig.~\ref{fig-lampochka}a), where the internal gas pressure $p_{int}$ is
defined by the following values:
\begin{equation}
p_{int} = n k_B T, ~where~
\begin{cases}
T \simeq 10^{3.48} ~K, \\
T \simeq 10^{3.29} ~K, \\
T \simeq 10^{3.20} ~K, \\
T \simeq 10^{3.11} ~K, \\
\end{cases}
\rho = 10^{-7} ~g/cm^3
\label{eq06v2-03}
\end{equation}
Since the inner gas pressure~(\ref{eq06v2-03}) grows towards the tachocline so
that
\begin{align}
p_{int} &(T = 10^{3.48} ~K) \vert _{\leqslant 0.85 R_{Sun}} >
p_{int} (T = 10^{3.29} ~K) \vert _{\leqslant 0.9971 R_{Sun}} > \nonumber \\
& > p_{int} (T = 10^{3.20} ~K) \vert _{\leqslant 0.99994 R_{Sun}} >
p_{int} (T = 10^{3.11} ~K) \vert _{\leqslant R_{Sun}} ,
\label{eq06v2-04}
\end{align}
\noindent
it becomes evident that the neutral atoms appearing in the upper convection
zone ($\geqslant 0.85 R_{Sun}$) cannot descend deep to the base of the
convection zone, i.e. tachocline (see Fig.~\ref{fig-lampochka}a).
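For illustration, a minimal numerical sketch of (\ref{eq06v2-03}) (assuming,
for the estimate only, a neutral hydrogen gas with $n = \rho / m_H$) shows just
how small $p_{int}$ is compared to the $\sim 5 \cdot 10^{13} ~erg/cm^3$
magnetic pressure balancing $p_{ext}$ in (\ref{eq06v2-01}):
\begin{verbatim}
k_B, m_H = 1.380649e-16, 1.6726e-24   # erg/K and g (cgs)
rho = 1e-7                            # g/cm^3, as in Eq. (eq06v2-03)
n = rho / m_H                         # ~6e16 cm^-3
for logT in (3.48, 3.29, 3.20, 3.11):
    print(logT, n * k_B * 10.0**logT) # p_int ~ 1e4..2.5e4 erg/cm^3
\end{verbatim}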
Therefore it is very important to examine the connection between the Rosseland
mean opacity and axion-photon oscillations in twisted magnetic tubes.
Let us consider the qualitative nature of the $\Omega$-loop formation and
growth process, based on the semiphenomenological model of the magnetic
$\Omega$-loops in the convective zone.
\vspace{0.3cm}
\noindent $\bullet$ A high concentration of azimuthal magnetic flux
($B_{OT} \sim 4100$~T, see Fig.~\ref{fig-lampochka}) forms in the overshoot
tachocline through the development of the shear flow instability.
An interpretation of such a link is related to the fact that helioseismology
places the principal rotational shear $\partial \omega / \partial r$ of the Sun
in the overshoot layer immediately below the bottom of the convective zone
\citep{Parker1994}. It is also generally believed that the azimuthal magnetic
field of the Sun is produced by the shearing $r \partial \omega / \partial r$
of the poloidal field $B_{OT}$, from which it is generally concluded that the
principal azimuthal magnetic flux resides in the shear layer
\citep{Parker1955,Parker1993}.
\vspace{0.3cm}
\noindent
$\bullet$ If some ``external'' factor of local shear perturbation appears
against the background of the azimuthal magnetic flux concentration, such
additional local density of the magnetic flux may lead to a magnetic field
strength as high as, e.g., $B_z \sim 3600$~T (see Fig.~\ref{fig-lampochka}a
and \mbox{Fig.~\ref{fig-Bz}b}). Of course, this brings up the question of the
physics behind such an ``external'' factor and the local shear perturbation.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=15cm]{Bz-11.pdf}
\end{center}
\caption{
(a) Normalized external temperature, density and gas pressure as functions of
the solar depth $R/R_{Sun}$. The standard solar model with $He$ diffusion
\citep{ref45-3} was used for $R < 0.95 R_{Sun}$ (solid lines). The dotted
lines mark extrapolated values.
(b) Variation of the magnetic field strength $B_z$ along the emerging
$\Omega$-loop as a function of the solar depth $R / R_{Sun}$ throughout the
convection zone. The solid blue line marks the permitted values for
the same standard solar model with $He$ diffusion \citep{ref45-3} starting at
the theoretical estimate of the magnetic field
$B_{OT} \approx B_z(0) = 4100~T$. The dashed line is the continuation,
according to the existence of the very cool regions inside the magnetic tube.
Red point marks the up-to-date observations showing the mean magnetic field
strength at the level $\sim 0.25~T = 2500 ~G$ \citep{Pevtsov2011,Pevtsov2014}.}
\label{fig-Bz}
\end{figure}
In this regard let us consider the superintense magnetic $\Omega$-loop
formation in the overshoot tachocline through the local shear caused by the
high local concentration of the azimuthal magnetic flux. The buoyant force
acting on the $\Omega$-loop decreases slowly with concentration so the vertical
magnetic field of the $\Omega$-loop reaches $B_z \sim 3600$~T at about
$R / R_{Sun} \sim 0.72$ (see Fig.~\ref{fig-lampochka}a and Fig.~\ref{fig-Bz}b).
Because of the magnetic pressure
(see the analogous \mbox{(\ref{eq06-15})} and Fig.~\ref{fig-lampochka}a)
$p_{ext} = B_{0.72 R_{Sun}}^2 / 8\pi = 5.21\cdot 10^{13}~erg/cm^3$
\citep{ref45-3}, this leads
to a significant cooling of the $\Omega$-loop tube (see
Fig.~\ref{fig-lampochka}a).
In other words, we assume the effect of the $\Omega$-loop cooling to be the
basic effect responsible for the magnetic flux concentration. It arises from
the well known suppression of convective heat transport by a strong magnetic
field~\citep{Biermann1941}. It means that although the principal azimuthal
magnetic flux resides in the shear layer, it predetermines the additional local
shear giving rise to a significant cooling inside the $\Omega$-loop.
Thus, the ultralow pressure is set inside the magnetic tube as a result of the
sharp limitation of the magnetic steps buoyancy inside the cool magnetic tube
(Fig.~\ref{fig-lampochka}a). This happens because the buoyancy of the magnetic
flows requires finite \textbf{superadiabaticity} of the convection zone
\citep{ref47-3,ref35-3}; otherwise, expanding according to the magnetic
\textbf{adiabatic} law (with the convection being suppressed by the magnetic
field), the magnetic clusters may become cooler than their surroundings, which
compensates the effect of the magnetic buoyancy of the superintense magnetic
O-loop.
Eventually we suppose that the axion mechanism based on the X-ray
channeling along the ``cool'' region of the split magnetic tube
(Fig.~\ref{fig-lampochka}a) effectively supplies the necessary energy flux
to the photosphere while the convective heat transfer is heavily
suppressed.
In this context it is necessary to have a clear view of the energy transport by the X-rays of axion origin, which are the primary transfer mechanism here. The recent improvements in the calculation of the radiative properties of solar matter have helped to resolve several long-standing discrepancies between observations and the predictions of theoretical models (e.g. \cite{Rogers1994,Ferguson2005,Bailey2009}), and now it is possible to calculate the photon mean free path (Rosseland length) for Fig.~\ref{fig-opacity}:
\begin{equation}
l_{photon} = \frac{1}{k_R \rho} \sim
\begin{cases}
2 \cdot 10^{10} ~cm ~~ & for ~~ k_R \simeq 5 \cdot 10^{-4} ~cm^2/g, \\
10^{10} ~cm ~~ & for ~~ k_R \simeq 10^{-3} ~cm^2/g, \\
1.5 \cdot 10^{8} ~cm ~~ & for ~~ k_R \simeq 6.7 \cdot 10^{-2} ~cm^2/g, \\
10^{7} ~cm ~~ & for ~~ k_R \simeq 1 ~cm^2/g,
\end{cases}
~~ \rho = 10^{-7} ~g/cm^3
\label{eq06v2-05}
\end{equation}
\noindent
where the Rosseland mean opacity values $k_R$ and density $\rho$ are chosen so
that the very low internal gas pressure $p_{int}$ (see Eq.~(\ref{eq06v2-04}))
along the entire magnetic tube remains negligible compared to the external gas
pressure $p_{ext}$ (see (\ref{eq06v2-05}) and Fig.~\ref{fig-opacity}).
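The Rosseland lengths in (\ref{eq06v2-05}) follow from a one-line computation
(a sketch; the opacity values are those read off Fig.~\ref{fig-opacity}):
\begin{verbatim}
rho = 1e-7                             # g/cm^3
for k_R in (5e-4, 1e-3, 6.7e-2, 1.0):  # cm^2/g
    print(k_R, 1.0 / (k_R * rho))      # cm: 2e10, 1e10, ~1.5e8, 1e7
\end{verbatim}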
Let us now examine the appearance of the X-rays of axion origin, induced by the magnetic field variations near the tachocline (Fig.~\ref{fig-lampochka}a), and their impact on the Rosseland length (see~(\ref{eq06v2-05})) inside the cool region of the magnetic tubes.
Let us recall that the magnetic field strength $B_{OT}$ in the overshoot
tachocline of $\sim 4100~T$ (see Fig.~\ref{fig-lampochka}a) and the
Parker-Biermann cooling effect in~(\ref{eq06v2-01}) lead to the corresponding
value of the magnetic field strength $B(z = 0.72 R_{Sun}) \sim 3600 ~T$
(see Fig.~\ref{fig-lampochka}a), which in its turn implies a virtually zero
internal gas pressure of the magnetic tube.
As shown above (see~\cite{Priest2000}), the topological effect of the
magnetic reconnection inside the $\Omega$-loop results in the formation of the
so-called O-loops (Fig.~\ref{fig-twisted-tube} and Fig.~\ref{fig-lampochka}a)
with their buoyancy limited from above by the strong cooling inside the
$\Omega$-loop (Fig.~\ref{fig-lampochka}a). It is possible to derive the value
of the horizontal magnetic field of the magnetic steps at the top of the O-loop:
$\vert B_{MS} \vert \approx \vert B(z = 0.72 R_{Sun}) \vert \sim 3600 ~T$.
So in the case of a large enough Rosseland length (see Eq.~(\ref{eq06v2-05})),
the X-rays of axion origin induced by the horizontal magnetic field in the
O-loops reach the photosphere freely, while in the photosphere itself,
according to the Rosseland length
\begin{equation}
l_{photon} \approx 100 ~km < l \approx 300 \div 400 ~km,
\label{eq06v2-06}
\end{equation}
these photons undergo a multiple Compton scattering (see
Section~\ref{subsec-osc-parameters}) producing a typical directional pattern
(Fig.~\ref{fig-lampochka}a).
Aside from the X-rays of axion origin with mean energy of 4.2~keV, there are
only $h \nu \sim 0.95 ~keV$ X-rays (originating from the tachocline, according
to a theoretical estimate by \cite{Bailey2009}) inside the magnetic tube. Such
X-rays would produce Compton-scattered photons with mean energy of
$\leqslant 0.95~keV$, which contradicts the known measurements of photons
with mean energy of 3-4~keV (see Fig.~4 in \cite{Rieutord2014}). Our suggested
theoretical model thus removes these contradictions by involving the X-rays of
axion origin \textit{plus} the axions of the thermal X-ray origin, both
produced in the magnetic field of O-loops (see Fig.~\ref{fig-lampochka}a and
Fig.~\ref{app-b-fig01} in Appendix~\ref{appendix-luminosity}).
And finally, let us emphasize that we have just shown a theoretical possibility
for the time variation of the sunspot activity to correlate with the flux
of the X-rays of axion origin, the latter being controlled by the magnetic
field variations near the overshoot tachocline. As a result, it may be
concluded that the axion mechanism for solar luminosity variations
based on the lossless X-ray ``channeling'' along the
magnetic tubes makes it possible to explain the almost complete suppression of
the convective heat transfer, and thus to understand the known puzzling
darkness of the sunspots \citep{Rempel2011}.
\subsection{Estimation of the solar axion-photon oscillation parameters on the basis of the hadron axion-photon coupling in white dwarf cooling}
\label{subsec-osc-parameters}
It is known \citep{Cadamuro2012} that astrophysics provides a very interesting
clue concerning the evolution of white dwarf stars, whose small mass
predetermines a relatively simple cooling process. Recently it has become
possible to determine their luminosity function
with unprecedented precision \citep{Isern2008}. It seems that if the DFSZ
axion \citep{ref47,Dine1981} has a direct coupling to electrons and a decay
constant $f_a \sim 10^{9} ~GeV$, it provides an additional energy-loss channel
that permits a cooling rate fitting the white dwarf
luminosity function better than the standard one~\citep{Isern2008}. On the other
hand, the KSVZ axion \citep{ref46,ref46a}, i.e. the hadronic axion (with the
mass in the $meV$ range and $g_{a\gamma \gamma} \sim 10^{-12} ~GeV^{-1}$), would
also help in fitting the data, but in this case a stronger value of
$g_{a\gamma \gamma}$ is required to perturbatively produce an electron coupling
of the required strength (\cite{Cadamuro2012}, Fig.~1 in \cite{Srednicki1985},
Fig.~1 in \cite{Turner1990}, Eq.~82 in \cite{Kim2010}).
Our aim is to estimate the solar axion-photon oscillation parameters based on
the hadronic axion-photon coupling derived from white dwarf cooling (see
\mbox{Appendix~\ref{appendix-wd-cooling}}). The estimate
of the horizontal magnetic field in the O-loop is related not only to the
photon-axion conversion in the Sun, but also to the axions in the model of
white dwarf evolution. Therefore, along with the values of the magnetic field
strength
$B_{MS} \sim 3600 ~T$
and the height of the magnetic shear steps
$L_{MS} \sim 1.28 \cdot 10^4 ~km$
(Fig.~\ref{fig-lampochka}a,b) we use the following parameters of the hadronic
axion (from the White Dwarf area in Fig.~\ref{fig05}a \citep{Irastorza2013,
Carosi2013}):
\begin{figure}[tbp!]
\begin{center}
\begin{minipage}[h]{0.44\linewidth}
\includegraphics[width=7.6cm]{gagamma_ma-limits-2.pdf}
\end{minipage}
\hfill
\begin{minipage}[h]{0.53\linewidth}
\includegraphics[width=8.8cm]{Y-g_agamma-4.pdf}
\end{minipage}
\end{center}
\caption{\textbf{(a)} Summary of astrophysical, cosmological and laboratory
constraints on axions and axion-like particles. Comprehensive axion/ALP
parameter space, highlighting the two main front lines of direct detection
experiments: helioscopes (CAST~\citep{ref58,ref72,CAST2011,Arik2013}) and
haloscopes (ADMX~\citep{ref50} and RBF~\citep{ref51}). The astrophysical bounds
from horizontal branch and massive stars are labeled ``HB''~\citep{ref02} and
``Cepheids''~\citep{Carosi2013} respectively. The QCD motivated models
(KSVZ~\citep{ref46,ref46a} and DFSZ~\citep{ref47,Dine1981}) for axions lay in
the yellow diagonal band. The orange parts of the band correspond to
cosmologically interesting axion models: models in the ``classical axion
window'' possibly composing the totality of DM (labelled ``Axion CDM'') or a
fraction of it (``WIMP-axion CDM''~\citep{Baer2011}). For more generic ALPs,
practically all the allowed space up to the red dash line may contain valid ALP
CDM models~\citep{Arias2012}. The region of axion masses invoked in the WD
cooling anomaly is shown by the blue dash line~\citep{Irastorza2013}. The red
star marks the values of the axion mass $m_a \sim 3.2 \cdot 10^{-2} eV$ and the
axion-photon coupling constant $g_{a\gamma} \sim 4.4 \cdot 10^{-11} GeV^{-1}$
chosen in the present paper on the basis of the suggested relation between the
axion mechanisms of the Sun's and the white dwarf luminosity variations.
\newline
\textbf{(b)} $R$ parameter constraints to $Y$ and $g_{a \gamma}$ (adopted from
\cite{Ayala2014}). The dark purple area delimits the 68\%~C.L. for $Y$ and
$R_{th}$ (see Eq.~(1) in \cite{Ayala2014}). The resulting bound on the axion
($g_{10} = g_{a \gamma \gamma}/(10^{-10} ~GeV^{-1})$) is somewhere between a
rather conservative $0.5 < g_{10} \leqslant 0.8$ and most aggressive $0.35 <
g_{10} \leqslant 0.5$ \citep{Friedland2013}. The red line marks the values of
the axion-photon coupling constant $g_{a \gamma} \sim 4.4 \cdot 10^{-11}
~GeV^{-1}$ chosen in the present paper.
The blue shaded area represents the bounds from Cepheids
observation. The yellow star corresponds to $Y$=0.254 and the bounds from HB
lifetime (yellow dashed line).}
\label{fig05}
\end{figure}
\begin{equation}
g_{a \gamma} \sim 4.4 \cdot 10^{-11} ~ GeV^{-1}, ~~~ m_a \sim 3.2 \cdot 10^{-2} ~eV.
\label{eq3.30}
\end{equation}
The choice of these values is also related to the observed solar luminosity
variations in the X-ray band (see (\ref{eq3.35})). The theoretical
estimate and the
consequences of such a choice are considered below.
As shown above, the $\sim 4100~T$ magnetic field in the overshoot
tachocline and the Parker-Biermann cooling effect in~(\ref{eq06v2-01}) may
produce the O-loops with the horizontal magnetic field of
$\vert B_{MS} \vert \approx \vert B(z = 0.72 R_{Sun}) \vert \sim 3600 ~T$
stretching for about $L_{MS} \sim 1.28 \cdot 10^4 ~km$, with virtually zero
internal gas pressure inside the magnetic tube (see Fig.~\ref{fig-lampochka}a).
It is not hard to use the expression (\ref{eq08})
for the conversion probability\footnote{Hereinafter we use rationalized natural
units to convert the magnetic field units from $Tesla$ to $eV^2$, and the
conversion reads $1\,T = 195\,eV^2$~\citep{Guendelman2009}.}
\begin{equation}
P_{a \rightarrow \gamma} = \frac{1}{4} \left( g_{a \gamma} B_{MS} L_{MS} \right)^2 \sim 1
\label{eq3.31}
\end{equation}
for estimating the axion coupling constant to photons (\ref{eq3.30}).
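A short numerical sketch of (\ref{eq3.31}), using only the footnote's
conversion $1~T = 195~eV^2$ together with
$\hbar c = 1.97327 \cdot 10^{-5}~eV \cdot cm$, confirms the estimate:
\begin{verbatim}
g = 4.4e-11 * 1e-9           # GeV^-1 -> eV^-1, Eq. (eq3.30)
B = 3600.0 * 195.0           # T -> eV^2, B_MS ~ 3600 T
L = 1.28e9 / 1.97327e-5      # cm -> eV^-1, L_MS ~ 1.28e4 km
print(0.25 * (g * B * L)**2) # ~1.0: full conversion in the O-loop field
\end{verbatim}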
Thus, the hypothesis that the solar
axions born in the core of the Sun may be efficiently converted back into
$\gamma$-quanta in the magnetic field of the magnetic steps of the O-loop
(above the solar overshoot tachocline) is shown to be relevant. Here the
variations of the magnetic field in the solar tachocline are the direct cause
of the converted $\gamma$-quanta intensity variations. The latter in their turn
may be the cause of the overall solar luminosity variations known as the active
and quiet Sun phases.
It is easy to show that the theoretical estimate for the fraction of the axion
luminosity $L_a$ in the total luminosity of the Sun $L_{Sun}$, with respect to
(\ref{eq3.30}), is~\citep{ref58}
\begin{equation}
\frac{L_a}{L_{Sun}} = 1.85 \cdot 10 ^{-3} \left(
\frac{g_{a \gamma}}{10^{-10} GeV^{-1}} \right)^2 \sim 3.6 \cdot 10^{-4} .
\label{eq3.32}
\end{equation}
As opposed to the classic mechanism of solar luminosity modulation, the
axion mechanism is determined by the magnetic tubes rising to the photosphere,
and not by the over-photosphere magnetic fields. In this case the solar
luminosity modulation is determined by the axion-photon oscillations in the
magnetic steps of the O-loop causing the formation and channeling of the
$\gamma$-quanta inside the almost empty magnetic $\Omega$-tubes (see
Fig.~\ref{fig-twisted-tube} and Fig.~\ref{fig-lampochka}a). When the magnetic
tubes cross the photosphere, they ``open'' (Fig.~\ref{fig-lampochka}a), and the
$\gamma$-quanta are ejected into the photosphere, where their unimpeded journey
along the magnetic tubes (without absorption or scattering) ends. As the
calculations by \cite{ref36} show, the further fate of the $\gamma$-quanta
in the photosphere may be described by Compton scattering, which actually
agrees with the observed solar spectral shape (Fig.~\ref{fig06}b,c).
\begin{figure*}
\begin{center}
\includegraphics[width=14cm]{Sun_total_spectra-13.pdf}
\end{center}
\caption{(a) Reconstructed solar photon spectrum below 10~keV from the active
Sun (red line) and quiet Sun (blue line) from accumulated observations
(spectral bin is 6.1~eV wide). Adopted from~\cite{ref59}.
\newline
(b) Reconstructed solar photon spectrum fit in the active phase of the Sun by
the quasi-invariant soft part of the solar photon spectrum (grey shaded area;
see \mbox{Eq.~(\ref{eq06-34})}) and three spectra (\ref{eq3.33}) degraded by
Compton scattering for column densities above the initial conversion place
of 16 (adopted from~\cite{ref36}) and 2~$g / cm^2$ (present paper).
\newline
(c) The similar curves for the quiet phase of the Sun (grey shaded area
corresponds to \mbox{Eq.~(\ref{eq06-35})})
\newline
(d) Cartoon showing the interplay between magnetic field expansion and the EUV
loop. A coalescent flow forming the sunspot drags the magnetic field in the
photosphere near the solar surface into the sunspot. In response, a hot spot of
enhanced upward directed Poynting flux, $S$, forms (red arrow). The expanding
field lines (blue) move upwards and to the side. When they traverse the hot
spot of Poynting flux, the plasma on that field line gets heated and brightens
up. As the field line expands further, it leaves the hot spot and gets darker
again. In consequence a bright coronal EUV loop forms (orange) and remains
rather stable as the successively heated field lines move through (adopted from
\cite{Chen2015}). X-ray emission is the $\gamma$-quanta of axion origin coming
from the magnetic tubes and not related to the magnetic reconnection as
conjectured by e.g. \cite{Shibata2011}.}
\label{fig06}
\end{figure*}
From the axion mechanism point of view it means that the solar spectra during
the active and quiet phases (i.e. during the maximum and minimum of solar
activity) differ from each other by the smaller or larger contribution of the
Compton spectrum, the latter being produced by the $\gamma$-quanta of axion
origin ejected from the magnetic tubes into the photosphere (see Fig.~4 in
\cite{Chen2015}).
A natural question arises at this point: ``What are the actual contributions of
the Compton spectrum of axion origin in the active and quiet phases of the Sun,
and do they agree with experiment?'' Let us perform these
estimations based on the known experimental results from ROSAT/PSPC,
where the Sun's coronal X-ray spectra and the total luminosity during the
minimum and maximum of the solar coronal activity were obtained~\citep{ref59}.
Apparently, the solar photon spectrum below 10~keV of the active and quiet Sun
(Fig.~\ref{fig06}a) reconstructed from the accumulated ROSAT/PSPC observations
may be described by three Compton spectra for different column densities rather
well (Fig.~\ref{fig06}b,c). This gives grounds for the assumption that the hard
part of the solar spectrum is mainly determined by the axion-photon conversion
efficiency:
\begin{align}
\left( \frac{d \Phi}{dE} \right)^{(*)} \simeq
\left( \frac{d \Phi}{dE} \right)^{(*)}_{corona} +
\left( \frac{d \Phi _{\gamma}}{dE} \right)^{(*)}_{axions} ,
\label{eq06-33}
\end{align}
\noindent where $\frac{d \Phi}{dE}$ is the observed solar spectra during the
active (red line in Fig.~\ref{fig06}a,b) and quiet (blue line in
Fig.~\ref{fig06}a,c) phases, $\left( \frac{d \Phi}{dE} \right)_{corona}$
represents the power-like theoretical solar spectra
\begin{equation}
\left( \frac{d \Phi}{dE} \right)_{corona} \sim E^{-(1+\alpha)} e^{-E/E_0} ,
\label{eq06-33a}
\end{equation}
\noindent
where a power law decay with the ``semi-heavy tail'' takes place in practice
\citep{Lu1993} instead of the so-called power laws with heavy tails
\citep{Lu1991,Lu1993} (see e.g. Figs.~3 and~6 in \cite{Uchaikin2013}).
Consequently, the observed corona spectra
($0.25 ~keV < E \leqslant 2.5 ~keV$) (shaded area in Fig.~\ref{fig06}b)
\begin{align}
\left( \frac{d \Phi}{dE} \right)^{(active)}_{corona} \sim
5 \cdot 10^{-3} \cdot (E~[keV])^{-3} \cdot \exp{\left(-\frac{E}{1 keV} \right)}
~~for~the~active~Sun
\label{eq06-34}
\end{align}
\noindent and (shaded area in Fig.~\ref{fig06}c)
\begin{align}
\left( \frac{d \Phi}{dE} \right)^{(quiet)}_{corona} \sim
1 \cdot 10^{-4} \cdot (E~[keV])^{-3} \cdot \exp{\left(-\frac{E}{0.5 keV} \right)}
~~for~the~quiet~Sun ;
\label{eq06-35}
\end{align}
\noindent $\left( \frac{d \Phi _{\gamma}}{dE} \right)_{axions}$ is the
reconstructed solar photon spectrum fit ($0 ~keV < E < 10 ~keV$) constructed
from three spectra (\ref{eq3.33}) degraded by Compton scattering for
different column densities (see Fig.~\ref{fig06}b,c for the active and quiet
phases of the Sun respectively).
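To see why the corona components are negligible above $2 \div 3~keV$, one may
tabulate (\ref{eq06-34}) and (\ref{eq06-35}) directly (a sketch; units as in
Fig.~\ref{fig06}):
\begin{verbatim}
import math

def corona(E, A, E0):                 # Eq. (eq06-33a) with alpha = 2
    return A * E**-3 * math.exp(-E / E0)

for E in (0.5, 1.0, 2.0, 3.0, 5.0):   # keV
    print(E, corona(E, 5e-3, 1.0),    # active Sun, Eq. (eq06-34)
             corona(E, 1e-4, 0.5))    # quiet Sun,  Eq. (eq06-35)
# the active component falls by ~3 orders of magnitude from 0.5 to 3 keV
\end{verbatim}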
As is known, this class of flare models (Eqs.~(\ref{eq06-34})
and~(\ref{eq06-35})) is based on the recent paradigm in statistical physics
known as self-organized criticality
\citep{Bak1987,Bak1988,Bak1989,Bak1996,Aschwanden2011}. The basic idea is that
the flares are a result of an ``avalanche'' of small-scale magnetic
reconnection events cascading \citep{Lu1993,Charbonneau2001,Aschwanden2014}
through the highly intense coronal magnetic structure \citep{Shibata2011},
driven to the critical state by the random photospheric motions of its magnetic
footpoints. Such models thus provide a natural and computationally convenient
basis for the study of Parker's hypothesis of coronal heating by nanoflares
\citep{Parker1988}.
Another significant fact, discriminating theory from practice, or rather
giving a true understanding of the measurements against some theory, should be
recalled here (e.g. (\ref{eq06-33a}); see Eq.~(5) in \cite{Lu1993}). The
nature of power laws is related to the strong connection between consequent
events (this applies also to the ``catastrophes''), which in turn gives rise to
a spatial nonlocality related to the appropriate structure of the medium (see
page 45 in \cite{Uchaikin2013}). As a result, the ``chain reaction'', i.e. the
avalanche-like growth of a perturbation involving more and more resources,
leads to the heavy-tailed distributions. On the other hand, obviously, none of
the natural events may be characterized by infinite values of mean and
variance. Therefore, the power laws like (\ref{eq06-33a}) are approximate and
must not hold for very large arguments. It means that the power-law decay
of the probability density rather corresponds to the average asymptotics, and
``semi-heavy tails'' must be observed in practice instead.
In this regard we suppose that the application of the power-law distributions
with semi-heavy tails leads to a soft attenuation of the observed corona
spectra (which are not visible above $E > 2 \div 3 ~keV$), and thus to a close
coincidence between the observed solar spectra and the $\gamma$-spectra of axion
origin (Fig.~\ref{fig06}), i.e.
\begin{equation}
\left( \frac{d \Phi}{dE} \right)^{(*)} \simeq
\left( \frac{d \Phi _{\gamma}}{dE} \right)^{(*)}_{axions}
~~~ \text{for energies} ~~ E > 2 \div 3 ~keV.
\label{eq06-35a}
\end{equation}
It means that the physics of the formation and ejection of the $\gamma$-quanta
above $2 \div 3 ~keV$ through the sunspots into corona is not related to the
magnetic reconnection theory by e.g. \cite{Shibata2011} (Fig.~\ref{fig06}d),
and may be of the axion origin.
With this in mind, let us suppose that the part of the differential solar axion
flux at the Earth~\citep{ref58}
\begin{align}
\frac{d \Phi _a}{dE} = 6.02 \cdot 10^{10} \left( \frac{g_{a\gamma}}{10^{-10} GeV^{-1}} \right)^2 E^{2.481} \exp \left( - \frac{E}{1.205} \right) ~~cm^{-2}
s^{-1} keV^{-1} ,
\label{eq3.33}
\end{align}
\noindent which characterizes the differential $\gamma$-spectrum of the axion
origin $d \Phi _{\gamma} / dE$
(see $[ d \Phi _{\gamma} / dE ]_{axions}$ in (\ref{eq06-33}) and
(\ref{eq06-35a}))
\begin{align}
\frac{d \Phi _{\gamma}}{dE} \cong P_{\gamma} \frac{d \Phi _{a}}{dE}
~~ cm^{-2} s^{-1} keV^{-1} \approx
6.1 \cdot 10^{-3} P_{\gamma} \frac{d \Phi _{a}}{dE}
~ ph\cdot cm^{-2} s^{-1} bin^{-1}
\label{eq3.34}
\end{align}
\noindent
where the spectral bin width is 6.1~eV (see Fig.~\ref{fig06}a);
the probability $P_{\gamma}$ describing the relative portion of $\gamma$-quanta
(of axion origin) channeling along the magnetic tubes may be defined, according
to~\cite{ref59}, from the observed solar luminosity variations in the X-ray
band, recorded in ROSAT/PSPC experiments (Fig.~\ref{fig06}):
$\left(L_{corona}^X \right) _{min} \approx 2.7
\cdot 10^{26} ~erg/s$ at minimum and
$\left( L_{corona}^X \right) _{max} \approx 4.7 \cdot 10^{27} ~erg/s$
at maximum,
\begin{equation}
P_{\gamma} = P_{a \rightarrow \gamma} \cdot \dfrac{\Omega \cdot (0.5 d_{spot})^2}
{(\tan \left( \alpha / 2 \right) \cdot 0.3 R_{Sun})^2} \cdot \Lambda_a
\approx 3.4 \cdot 10^{-3},
\label{eq3.35}
\end{equation}
\noindent directly following from the geometry of the system
(Fig.~\ref{fig-lampochka}b), where the conversion probability
$P_{a \rightarrow \gamma} \sim 1$ (\ref{eq3.31});
\begin{equation}
\Omega = (I_{\gamma ~CZ} / I_0) \cdot (I_{\gamma ~photo} / I_{\gamma ~CZ})
\cdot (I_{\gamma ~corona} / I_{\gamma ~photo}) \approx 0.23
\end{equation}
\noindent
is the total relative intensity of
$\gamma$-quanta, where $(I_{\gamma ~CZ} / I_0) \sim 1$ is the relative
intensity of $\gamma$-quanta ``channeling'' through the
magnetic tubes in the convective zone,
$I_{\gamma ~photo} / I_{\gamma ~CZ} = \exp {[-(\mu l)_{photo}]} \sim 0.23$ (see
Eq.~\ref{eq06-43}) is the relative intensity of the Compton-scattered
$\gamma$-quanta in the solar photosphere, and $I_{\gamma ~corona} / I_{\gamma
~photo} = \exp {[-(\mu l)_{corona}]} \approx 1$ (see Eq.~\ref{eq06-44})
is the relative intensity of the Compton-scattered $\gamma$-quanta in the solar
corona;
$d_{spot}$ is the measured diameter of the sunspot
(umbra) \citep{Dikpati2008,Gough2010}. Its size determines the relative portion
of the axions hitting the sunspot area. Further,
\begin{equation}
\dfrac{(0.5 d_{spot})^2}{(\tan \left( \alpha / 2 \right) \cdot 0.3 R_{Sun})^2} \cong 0.034,
\end{equation}
\noindent where
\begin{equation}
0.5 d_{spot} = \left[ \frac{1}{\pi} \left(
\frac{\langle sunspot ~area \rangle _{max}}{\left\langle N_{spot} \right\rangle _{max}}
\right) \right] ^{(1/2)} \cong 5500~km,
\end{equation}
\noindent and the value $\Lambda_a$ characterizes the portion of the
axion flux going through the total $(2\left\langle N_{spot}
\right\rangle_{max})$ sunspots on the photosphere:
\begin{equation}
\Lambda_a = \dfrac{\left( sunspot\ axion\ flux \right)}{(1/3)\left( total\ axion\ flux \right)} \approx
\dfrac{2 \left\langle N_{spot} \right\rangle _{max} (\tan \left( \alpha / 2 \right) \cdot 0.3 R_{Sun})^2}{(4/3) R_{Sun} ^2} \sim 0.42 ,
\label{eq3.36}
\end{equation}
\noindent and $\left\langle N_{spot} \right\rangle _{max} \approx 150$ is the
average maximal sunspot number, and
$\langle sunspot ~area \rangle _{max} \approx 7.5 \cdot 10^9 ~km^2 \approx 2470 ~ppm ~of ~visible ~hemisphere$
is the sunspot area (over the visible
hemisphere~\citep{Dikpati2008,Gough2010}) for cycle 22, experimentally
observed by the Japanese X-ray telescope Yohkoh (1991)~\citep{ref36}.
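The geometric factors entering (\ref{eq3.35}) and (\ref{eq3.36}) can be checked
with the quoted sunspot data (a sketch; $\Omega \approx 0.23$ and
$P_{a \rightarrow \gamma} \approx 1$ are taken from the text as given):
\begin{verbatim}
import math

R_sun  = 6.96e5                               # km
alpha  = 2.0 * math.atan(0.1 / 0.7)           # angular size of the axion core
r_spot = 5500.0                               # km, 0.5 * d_spot
N_spot = 150.0                                # <N_spot>_max
r_tach = math.tan(alpha / 2.0) * 0.3 * R_sun  # footprint radius at 0.3 R_Sun

geom     = (r_spot / r_tach)**2                              # ~0.034
Lambda_a = 2 * N_spot * r_tach**2 / ((4.0/3.0) * R_sun**2)   # ~0.42
print(geom, Lambda_a, 1.0 * 0.23 * geom * Lambda_a)          # P_gamma ~ 3e-3
\end{verbatim}
which reproduces the $\approx 3.4 \cdot 10^{-3}$ of (\ref{eq3.35}) at the
quoted precision.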
On the other hand, from the known observations (see~\cite{ref59} and
Appendix~\ref{appendix-luminosity})
\begin{equation}
\frac{(L_{corona}^X)_{max}}{L_{Sun}} \cong 1.22 \cdot 10^{-6},
\label{eq3.37}
\end{equation}
\noindent where $L_{Sun} = 3.8418 \cdot 10^{33} erg / s$ is the solar
luminosity~\citep{ref63}. Using the theoretical axion impact estimate
(\ref{eq3.32}), one can clearly see that the obtained value (\ref{eq3.35}) is
in good agreement with the observations (\ref{eq3.37}):
\begin{equation}
P_{\gamma} = \left. \frac{(L_{corona}^X)_{max}}{L_{Sun}} \middle/
\frac{L_a}{L_{Sun}} \sim 3.4 \cdot 10^{-3} \right. ,
\label{eq3.38}
\end{equation}
\noindent derived independently.
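The same number follows from the observations alone; a sketch of (\ref{eq3.32})
and (\ref{eq3.38}):
\begin{verbatim}
L_x_max  = 4.7e27                              # erg/s, ROSAT/PSPC maximum
L_sun    = 3.8418e33                           # erg/s
L_a_frac = 1.85e-3 * (4.4e-11 / 1e-10)**2      # Eq. (eq3.32): ~3.6e-4
print(L_a_frac, (L_x_max / L_sun) / L_a_frac)  # ~3.4e-3, cf. Eq. (eq3.35)
\end{verbatim}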
In other words, if the hadronic axions found in the Sun are the same particles
found in the white dwarfs, with the known strength of the axion coupling to
photons (see (\ref{eq3.30}) and Fig.~\ref{fig05}a,b),
it is quite natural that the independent observations give the same estimate of
the probability $P_{\gamma}$ (see (\ref{eq3.35}) and (\ref{eq3.38})). The
consequences of the choice (\ref{eq3.30}) are thus fixed by the independent
measurements of the average sunspot radius and the sunspot
number~\citep{Dikpati2008,Gough2010}, by the model estimates of the horizontal
magnetic field and of the height $L_{MS}$ of the magnetic steps (see
Fig.~\ref{fig-lampochka}), by the hard part of the solar photon spectrum,
which is mainly determined by the axion-photon conversion efficiency, and by the
theoretical estimate of the fraction $L_a / L_{Sun}$ of the axion luminosity in
the total luminosity of the Sun (\ref{eq3.38}).
\section{Axion mechanism of the solar Equator -- Poles effect}
The axion mechanism of the Sun luminosity variations is largely validated by the
experimental X-ray images of the Sun in the quiet (Fig.~\ref{fig-Yohkoh}a) and
active (Fig.~\ref{fig-Yohkoh}b) phases~\citep{ref36}, which clearly reveal the
so-called Solar Equator -- Poles effect (Fig.~\ref{fig-Yohkoh}b).
\begin{figure*}
\centerline{\includegraphics[width=12cm]{Sun_X-ray_image_spectrum-3.pdf}}
\caption{\textbf{Top:} Solar images at photon energies from 250~eV up to a few
keV from the Japanese X-ray telescope Yohkoh (1991-2001) (adapted
from~\cite{ref36}). The following is shown:
\newline
(a) a composite of 49 of the quietest solar periods during the solar minimum in 1996;
\newline
(b) solar X-ray activity during the last maximum of the 11-year solar cycle.
Most of the X-ray solar activity (right) occurs within a wide band of
$\pm 45^{\circ}$ in latitude and is homogeneous in longitude. Note that
$\sim$95\% of the solar magnetic activity is confined to this band.
\newline
\textbf{Bottom:} (c) Axion mechanism of solar irradiance variations
above $2 \div 3 ~keV$, which is independent of the cascade reconnection
processes in the corona (see shaded areas and Fig.~\ref{fig06}b,c,d);
the red and blue curves characterize the irradiance increment in the
active and quiet phases of the Sun, respectively;
\newline
(d) schematic picture of the radial propagation of the axions inside the Sun.
Blue lines on the Sun designate the magnetic field. Near the tachocline
(Fig.~\ref{fig-lampochka}a) the axions are converted into $\gamma$-quanta,
which form the experimentally observed solar photon spectrum after passing
through the photosphere (Fig.~\ref{fig06}). Solar axions that move towards the
poles (blue cones) and in the equatorial plane (blue band) are not converted by
the Primakoff effect (inset: diagram of the inverse coherent process). The
variations of the solar axions may be observed at the Earth by dedicated
detectors such as the new-generation CAST helioscopes~\citep{ref68}. }
\label{fig-Yohkoh}
\end{figure*}
The essence of this effect lies in the following. It is known that axions
may be transformed into $\gamma$-quanta by the inverse Primakoff effect only in
a transverse magnetic field. Therefore the axions that travel towards the
poles (blue cones in Fig.~\ref{fig-Yohkoh}b) or in the equatorial plane (the
blue band in Fig.~\ref{fig-Yohkoh}b) are not transformed into $\gamma$-quanta
by the inverse Primakoff effect, since there the magnetic field vector is
almost collinear with the axions' momentum vector. The observed nontrivial
X-ray distribution in the active phase of the Sun is thus easily and naturally
described within the framework of the axion mechanism of the solar luminosity
variations.
As described in Section~\ref{subsec-channeling}, the photons of axion origin
travel through the convective zone along the magnetic flux tubes, up to the
photosphere. In the photosphere they are Compton-scattered, which results in a
substantial deviation from the initial directions of propagation of the axions
(Fig.~\ref{fig07a}).
\begin{figure*}
\centerline{\includegraphics[width=15cm]{axion-channaling-scattering-Yohkoh-3.pdf}}
\caption{The formation of the high X-ray intensity bands on the Yohkoh
matrix. \label{fig07a}}
\end{figure*}
Let us make a simple estimate of the Compton scattering efficiency in terms of
the X-ray photon mean free path (MFP) in the photosphere:
\begin{equation}
l_{\mu} = (\mu)^{-1} = \left( \sigma_c \cdot n_e \right)^{-1} ,
\label{eq-compt-01}
\end{equation}
\noindent where
$\mu$ is the total linear attenuation coefficient
(cm$^{-1}$),
the total Compton cross-section is $\sigma_c = \sigma_0 = 8 \pi r_0^2 / 3$ for
low-energy photons \citep{ref81,ref82}, $n_e$ is the electron density in the
photosphere, and $r_0 = 2.8\cdot10^{-13}~cm$ is the so-called classical
electron radius.
Taking into account the widely used value of the matter density in the solar
photosphere, $\rho \sim 10^{-7} ~g/cm^3$, and supposing, for the sake of the
estimate only, that it consists of hydrogen, we obtain that
\begin{equation}
n_e \approx \frac{\rho}{m_H} \approx 6 \cdot 10^{16} ~ electron / cm^3\, ,
\label{eq-compt-02}
\end{equation}
which yields the MFP of the photon \citep{ref81,ref82}
\begin{align}
l_{\mu} = \left( 7 \cdot 10^{-25} ~cm^2 \cdot 6 \cdot 10^{16} ~electron/cm^3 \right)^{-1}
\approx 2.4 \cdot 10^7 ~cm = 240 ~km .
\label{eq-compt-03}
\end{align}
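The numbers above are easy to reproduce; the minimal Python sketch below uses the rounded cross-section $7 \cdot 10^{-25}~cm^2$ adopted in the text (the exact Thomson value is $6.65 \cdot 10^{-25}~cm^2$).
\begin{verbatim}
# Photon mean free path in the photosphere (the two preceding equations).
rho     = 1e-7       # photospheric matter density, g/cm^3
m_H     = 1.67e-24   # hydrogen atom mass, g
sigma_c = 7e-25      # rounded Thomson cross-section, cm^2

n_e  = rho / m_H              # ~6e16 electrons/cm^3
l_mu = 1.0 / (sigma_c * n_e)  # total attenuation length, cm
print(l_mu / 1e5)             # ~240 km
\end{verbatim}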
Since this value is smaller than the thickness of the solar photosphere
($l_{photo} \sim 300 \div 400 ~km$),
the Compton scattering is efficient
enough for its imprint to be detected at the Earth
(see \mbox{Fig.~\ref{fig07a}} and \mbox{Fig.~\ref{fig06}} adapted from
\mbox{\cite{ref59}});
\begin{equation}
\frac{I_{\gamma ~photo}}{I_{\gamma ~CZ}}
= \exp {\left[ - (\mu l)_{photo} \right]} \sim 0.23,
\label{eq06-43}
\end{equation}
\noindent which follows from the low-energy limit of Compton scattering,
i.e. the Thomson differential and total cross-sections for unpolarized photons
\citep{Griffiths1995}.
And finally taking into account that $l_{chromo} \sim 2 \cdot 10^3 ~km$ and
$n_e \sim 10^{13} ~electron/cm^3$ (i.e. $l_{\mu} \sim 1.4 \cdot 10^6 ~km$) and
$l_{corona} \sim 10^5 ~km$ and $n_e < 10^{11} ~electron/cm^3$ (i.e. $l_{\mu} >
1.4 \cdot 10^8 ~km$) (Fig.~12.9 in \cite{Aschwanden2004}), one may calculate
the relative intensity of the $\gamma$-quanta by Compton scattering in the
solar corona
\begin{equation}
\frac{I_{\gamma ~corona}}{I_{\gamma ~photo}}
= \frac{I_{\gamma ~chromo}}{I_{\gamma ~photo}} \cdot
\frac{I_{\gamma ~corona}}{I_{\gamma ~chromo}} =
\exp {\left[ - (\mu l)_{chromo} \right]} \cdot
\exp {\left[ - (\mu l)_{corona} \right]} \approx 1 ,
\label{eq06-44}
\end{equation}
\noindent which enters the total relative intensity $\Omega$ of the
$\gamma$-quanta (see Eq.~(\ref{eq3.35})).
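The transmission factors in Eqs.~(\ref{eq06-43}) and (\ref{eq06-44}) follow from the same mean-free-path estimate; the sketch below takes $l_{photo} \approx 350~km$ as a representative value within the quoted $300 \div 400~km$ range.
\begin{verbatim}
import math

sigma_c = 7e-25                       # cm^2, as above

def mfp_km(n_e):                      # photon mean free path in km
    return 1.0 / (sigma_c * n_e) / 1e5

# photosphere: l ~ 350 km, l_mu ~ 240 km  ->  ~0.23, Eq. (06-43)
print(math.exp(-350.0 / 240.0))
# chromosphere: l ~ 2e3 km, n_e ~ 1e13 cm^-3  ->  ~1
print(math.exp(-2e3 / mfp_km(1e13)))
# corona: l ~ 1e5 km, n_e ~ 1e11 cm^-3  ->  ~1, Eq. (06-44)
print(math.exp(-1e5 / mfp_km(1e11)))
\end{verbatim}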
A brief summary is appropriate here. Coronal activity is a collection of plasma
processes arising from the passage through the corona of magnetic fields
generated from below by the solar dynamo in cycles of approximately 11 years
(Fig.~\ref{fig-Yohkoh}). This global process, culminating in the reversal of
the solar magnetic dipole at the end of each cycle, involves the turbulent
dissipation of the magnetic energy, the flares and the heating of the corona.
The turbulent, highly dissipative, as well as the largely ideal MHD processes
play their distinct roles, each liberating a comparable amount of the energy
stored in the magnetic fields.
This mechanism is illustrated
in Fig.~\ref{fig06}d. When the magnetic flux erupts through the photosphere, it
forms a pair of sunspots, pushing the magnetic field up and aside. The magnetic
field inside the sunspots is very high and the convection is suppressed.
Therefore the coalescence of the magnetic field is also suppressed. When some
magnetic field line crosses the region of the high Poynting flux, the energy is
distributed along this line in the form of plasma heating, which makes such a
line visible in the EUV band for a short time. While the magnetic field is
being pushed to the sides, the next field line crosses the region of high
Poynting flux and flares up at the same position as the previous one, and so
on. This creates the illusion of a static flaring loop, while the magnetic
field is in fact moving. It is interesting to note that \cite{Chen2015} expect
future investigations to show to what extent this scenario also holds for the
X-ray emission (see Supplementary Section~3 in \cite{Chen2015}).
In this context it is very important to consider the experimental observations
of solar X-ray jets (e.g. by the Yohkoh and Hinode solar space missions),
which show, for example, a gigantic coronal jet ejected from a
compact active region in a coronal hole \citep{Shibata1994} and tiny
chromospheric anemone jets \citep{Shibata2007}.
These jets are believed to be an indirect proof of small-scale ubiquitous
reconnection in the solar atmosphere and may play an important role in heating
it, as conjectured by Parker (\cite{Parker1988,Zhang2015,Sterling2015}).
Our main supposition here is that, in contrast to the EUV images (see orange line
in Fig.~\ref{fig06}d) and the coronal X-rays below $\sim 2 \div 3~keV$, the hard
X-ray emission above $\sim 3~keV$ consists in fact of the $\gamma$-quanta of
axion origin, born inside the magnetic tubes (see sunspot in Fig.~\ref{fig06}d),
and is not related to the mentioned indirect evidence (see e.g. Fig.~42 and
Fig.~47 in \cite{Shibata2011}) of the coronal jets generated by the solar dynamo
in cycles of approximately 11 years (Fig.~\ref{fig-Yohkoh}). It will be
interesting to see whether the proposed picture will ultimately be confirmed,
modified, or rejected by future observations and theoretical work pinning down
the underlying physical ideas.
Taking into account the directional patterns of the resulting radiation, as well
as the fact that the maximum of the axion-originated X-ray radiation is
situated near 30--40 degrees of latitude (because of the solar magnetic field
configuration), the mechanism of formation of the high X-ray intensity bands on
the Yohkoh matrix becomes obvious. The effect of the widening of these bands
near the edges of the image is discussed in detail in
Appendix~\ref{appendix-widening}.
\section{Summary and Conclusions}
In this paper we present a self-consistent model of the axion mechanism of
the Sun's luminosity variations, in the framework of which we estimate the values
of the axion mass ($m_a \sim 3.2 \cdot 10^{-2} ~eV$) and of the axion coupling
constant to photons ($g_{a \gamma} \sim 4.4 \cdot 10^{-11} ~GeV^{-1}$). A good
correspondence between the solar axion-photon oscillation parameters and the
hadronic axion-photon coupling derived from white dwarf cooling (see
Fig.~\ref{fig05}) is demonstrated.
One of the key ideas behind the axion mechanism of Sun luminosity variations is the effect
of $\gamma$-quanta channeling along the magnetic flux tubes (waveguides inside
the cool region) above the base of the Sun convective zone
(Figs.~\ref{fig04-3}, \ref{fig-twisted-tube} and~\ref{fig-lampochka}). The low
refraction (i.e. the high transparency) of the thin magnetic flux tubes is
achieved due to the ultrahigh magnetic pressure (Fig.~\ref{fig-lampochka}a)
induced by the magnetic field of about 4100~T (see Eq.~(\ref{eq06-16}) and
Fig.~\ref{fig-lampochka}a). So it may be concluded that the axion mechanism of
the Sun luminosity variations, based on the lossless $\gamma$-quanta channeling
along the magnetic tubes, allows one to explain the effect of the partial
suppression of the convective heat transfer, and thus to understand the known
puzzling darkness of the sunspots (see 2.2.1 in \cite{Rempel2011}).
It is shown that the axion mechanism of luminosity variations (which means that
they are produced by adding the intensity variations of the $\gamma$-quanta of
the axion origin to the coronal part of the solar spectrum
(Fig.~\ref{fig-Yohkoh}c)) easily explains the physics of the so-called Solar
Equator -- Poles effect observed in the form of the anomalous X-ray
distribution over the surface of the active Sun, recorded by the Japanese X-ray
telescope Yohkoh (Fig.~\ref{fig-Yohkoh}, top).
The essence of this effect consists in the following: axions that move towards
the poles (blue cones in Fig.~\ref{fig-Yohkoh}, bottom) and equator (blue
bandwidth in Fig.~\ref{fig-Yohkoh}, bottom) are not transformed into
$\gamma$-quanta by the inverse Primakoff effect, because the magnetic field
vector is almost collinear to the axions' momentum in these regions (see the
inset in Fig.~\ref{fig-Yohkoh}, bottom). Therefore the anomalous X-ray
distribution over the surface of the active Sun is a kind of a ``photo'' of the
regions where the axions' momentum is orthogonal to the magnetic field vector
in the solar overshoot tachocline. The solar Equator -- Poles effect is not
observed during the quiet phase of the Sun because of the weakness of the
magnetic field in the overshoot tachocline: the luminosity increment of axion
origin is extremely small in the quiet phase as compared to the active phase of
the Sun.
In this sense, the experimental observation of the solar Equator -- Poles
effect is the most striking evidence of the axion mechanism of the Sun
luminosity variations. It is hard to imagine another model or consideration
which would explain such an anomalous X-ray radiation distribution over the
active Sun surface just as well (compare Fig.~\ref{fig-Yohkoh}a,b with
Fig.~\ref{app-b-fig01}a).
And, finally, let us emphasize one essential and most painful point of the
present paper. It is related to the key problem of the axion mechanism of the
solar luminosity variations and is stated rather simply: ``Is the process of
axion conversion into $\gamma$-quanta by the Primakoff effect really possible
in the magnetic steps of an O-loop near the solar overshoot tachocline?'' This
question is directly connected with the problem of the existence of hollow
magnetic flux tubes in the convective zone of the Sun, which are supposed to
connect the tachocline with the photosphere. So either a more general theory of
the Sun or experiment has to answer the question of whether there are
waveguides in the form of hollow magnetic flux tubes in the cool region of the
convective zone of the Sun, which are perfectly transparent for
$\gamma$-quanta, or our model of the axion mechanism of the Sun luminosity
variations is built around simply guessed rules of calculation which do not
reflect the real nature of things.
\section*{Acknowledgements}
\noindent The work of M. Eingorn was supported by NSF CREST award HRD-1345219
and NASA grant NNX09AV07A.
\section{Introduction}
In this paper, $\Gamma$ is a discrete group, $K$ is a (complex) Hilbert space and $\pi$ is a unitary representation of $\Gamma$ on $K$.
We denote by $1_\Gamma$ the one dimensional trivial representation of $\Gamma$, and $K^\pi$ the set of $\pi$-invariant vectors in $K$.
Recall that (see \cite{BC}) $\pi$ is said to have a \emph{spectral gap} if the restriction of $\pi$ on the orthogonal complement $(K^\pi)^\bot$ does not weakly contain $1_\Gamma$ (in the sense of Fell), i.e.\ there does not exist a net of unit vectors $\xi_i\in (K^\pi)^\bot$ satisfying $\|\pi_t\xi_i -\xi_i\| \to 0$ for every $t\in \Gamma$.
The following result on group actions, which relates the existence of spectral gap of certain representation and the uniqueness of the invariant states (means), is crucial to the solution of the Banach-Ruziewicz problem (see \cite{Marg},\cite{Sull},\cite{Drin}).
\begin{thm2}\label{motivation}
Let $\Gamma$ be a countable group with a measure preserving ergodic action on a standard non-atomic probability space $(X, \mu)$, and $\rho$ be the associated unitary representation of $\Gamma$ on $L^2(X, \mu)$.
\smallskip\noindent
(a) (\cite{dR}, \cite{Sch}, \cite{LH}) If $\rho$ has a spectral gap, then the integration with respect to $\mu$ is the unique $\Gamma$-invariant state on $L^\infty(X,\mu)$.
\smallskip\noindent
(b) (\cite{Sch}) If there is a unique $\Gamma$-invariant state on $L^\infty(X,\mu)$, then $\rho$ has a spectral gap.
\end{thm2}
Motivated by this result we will define spectral gap actions of discrete groups on von Neumann algebras, and study the corresponding invariant states on the von Neumann algebras.
From now on, $N$ is a von Neumann algebra with standard form $(N, \mathfrak{H}, \mathfrak{J}, \mathfrak{P})$, i.e.\ $\mathfrak{H}$ is a Hilbert space, $\mathfrak{J}:\mathfrak{H} \to \mathfrak{H}$ is a conjugate linear bijective isometry, and $\mathfrak{P}$ is a self-dual cone in $\mathfrak{H}$ such that there is a (forgettable) faithful representation of $N$ on $\mathfrak{H}$ satisfying some compatibility conditions with $\mathfrak{J}$ and $\mathfrak{P}$.
The readers may find in \cite{Haag-st-form} more information on this topic.
As proved in \cite[Theorem 3.2]{Haag-st-form}, there exists a unique unitary representation $\mathcal{U}$ of the $^*$-automorphism group ${\rm Aut}(N)$ on $\mathfrak{H}$ such that
$$g(x) = \mathcal{U}_{g} x \mathcal{U}_{g}^*, \quad \mathcal{U}_g\mathfrak{J}=\mathfrak{J}\mathcal{U}_g \quad \text{and} \quad \mathcal{U}_g(\mathfrak{P})\subseteq \mathfrak{P}\qquad (x\in N; g\in{\rm Aut}(N)).$$
Hence, an action $\alpha$ of $\Gamma$ on $N$ gives rise to a unitary representation $\mathcal{U}_{\alpha}=\mathcal{U}\circ\alpha$ of $\Gamma$ on $\mathfrak{H}$.
\begin{defn2}
Let $\alpha$ be an action of $\Gamma$ on $N$.
\smallskip\noindent
(a) $\alpha$ is said to have a \emph{spectral gap} if the representation $\mathcal{U}_{\alpha}$ has a spectral gap.
\smallskip\noindent
(b) $\alpha$ is said to be \emph{standard} if every $\alpha$-invariant state on $N$ is a weak-$^*$-limit of a net of normal $\alpha$-invariant states.
\end{defn2}
We will study the relationship between spectral gap actions and standard actions of discrete groups on von Neumann algebras. This is the main topic of Section 2. Our first main result generalizes Theorem \ref{motivation}(a) to the situation of discrete group actions on von Neumann algebras.
\begin{thm2}\label{thm:main}Let $\alpha$ be an action of a (possibly uncountable) discrete group $\Gamma$ on a von Neumann algebra $N$. If $\alpha$ has a spectral gap, then $\alpha$ is standard.
\end{thm2}
In contrast to Theorem \ref{motivation}(b), the converse statement of Theorem \ref{thm:main} is not true in general (see Example \ref{eg:4-not-2}(b)).
Nevertheless, we will give several situations in which the converse does hold (see e.g.\ Proposition \ref{prop:inv-mean-sp-gap-act}(b)).
Let us also note that Theorem \ref{motivation} (and hence Theorem \ref{thm:main}) does not extend to locally compact groups.
Indeed, if $G$ is the circle group (equipped with the canonical compact topology) and $N := L^\infty(G)$ with the left translation action $\alpha$ by $G$, then there is a unique normal $\alpha$-invariant state on $N$, but there is more than one $\alpha$-invariant state (see e.g.\ \cite[Proposition 2.2.11]{Lub}).
We will give two applications of Theorem \ref{thm:main}.
The first one concerns inner amenability and will be considered in Section 3.
The notion of inner amenability was first introduced by
Effros, aiming to give a group theoretic description of property Gamma of the group von Neumann algebra $L(\Gamma)$ of an ICC group.
Recall that
$\Gamma$ is an \emph{ICC group} if $\Gamma \neq \{e\}$ and all the non-trivial conjugacy classes of $\Gamma$ are infinite.
Moreover, $\Gamma$ is \emph{inner amenable} if there exists more than one inner invariant state on $\ell^\infty(\Gamma)$.
Effros showed in \cite{Eff} that a countable ICC group $\Gamma$ is inner amenable if $L(\Gamma)$ has property Gamma.
However, Vaes recently gave, in \cite{Vaes}, a counterexample to the converse.
From an opposite angle, it is natural to ask whether one can express inner amenability of $\Gamma$ in terms of certain property of $L(\Gamma)$.
One application of Theorem \ref{thm:main} is the following result.
\begin{thm2}\label{thm:inner}
Let $\Gamma$ be a finitely generated ICC group. Then $\Gamma$ is inner amenable if and only if there exists more than one inner invariant state on the group von Neumann algebra $L(\Gamma)$.
\end{thm2}
We will also study an alternative generalization of inner amenability (called ``strong inner amenability'') to (not necessarily ICC) discrete groups, which is of independent interest.
Section 4 is concerned with our second application, which is related to property $(T)$.
Recall that $\Gamma$ is said to have \emph{property $(T)$} if every unitary representation of $\Gamma$ has a spectral gap.
It follows from Theorem \ref{thm:main} that if $\Gamma$ has property $(T)$, then every action of $\Gamma$ on a von Neumann algebra is standard, and in particular, the absence of normal invariant state implies the absence of invariant state for this action.
We will show that this property actually characterizes property $(T)$.
\begin{thm2}\label{thm:property-T} Let $\Gamma$ be a countable discrete group.
The following statements are equivalent.
\begin{enumerate}
\item $\Gamma$ has property $(T)$.
\item All actions of $\Gamma$ on von Neumann algebras are standard.
\item For any action $\alpha$ of $\Gamma$ on a von Neumann algebra with only one normal $\alpha$-invariant state, there is only one $\alpha$-invariant state.
\item For every action $\alpha$ of $\Gamma$ on a von Neumann algebra without normal $\alpha$-invariant state, there is no $\alpha$-invariant state.
\end{enumerate}
\end{thm2}
We will also consider the implication $(4)\Rightarrow (1)$ for a general discrete group $\Gamma$ with property $(T, FD)$ in the sense of \cite{LZ89} (Proposition \ref{prop:no-inv-state}). Consequently, a minimally almost periodic discrete group satisfying (4) is finitely generated.
We end this introduction by introducing more notation.
Throughout this article, $\overline{K}$ and $\mathcal{L}(K)$ denote the conjugate Hilbert space of $K$ and the space of bounded linear maps on $K$ respectively.
For any $\xi,\eta\in K$, we define $\omega_{\xi,\eta}\in \mathcal{L}(K)^*$ by $\omega_{\xi,\eta}(x) := \langle x\xi, \eta\rangle$ ($x\in \mathcal{L}(K)$) and set $\omega_\xi := \omega_{\xi,\xi}$.
We consider $p_\pi\in \mathcal{L}(K)$ to be the orthogonal projection onto $K^\pi$.
Furthermore, we denote by $C^*(\pi)$ the $C^*$-subalgebra generated by $\{\pi_t\}_{t\in \Gamma}$ and by $vN(\pi)$ the bicommutant of $C^*(\pi)$.
We define a functional $\varepsilon_\pi: C^*(\pi) \to \mathbb{C}$ formally by $\varepsilon_\pi\big(\sum_{k=1}^n c_k \pi_{t_k}\big) = \sum_{k=1}^n c_k$ (warning: $\varepsilon_\pi$ is not necessarily well-defined.)
\section{Invariant States and Spectral Gap Actions (Theorem \ref{thm:main})}
Let us start with the following lemma, which may be a known result.
Note that part (d) of this lemma is motivated by \cite{Pasch} (and one can also obtain this part using the argument in \cite{Pasch}).
\begin{lem}\label{lem:p_pi}
(a) $\varepsilon_\pi$ is well-defined if and only if $1_\Gamma$ is weakly contained in $\pi$.
This is equivalent to the existence of $\psi\in \mathcal{L}(K)^*_+$ satisfying
\begin{equation}\label{eqt:m-pi}
\psi(\pi_t)\ =\ 1 \qquad (t\in \Gamma)
\end{equation}
(in this case, $\varepsilon_\pi = \psi|_{C^*(\pi)}$).
\smallskip\noindent
(b) $\pi$ does not have a spectral gap if and only if there is $\psi\in \mathcal{L}(K)_+^*$ satisfying \eqref{eqt:m-pi} and $\psi(p_{\pi}) = 0$.
\smallskip\noindent
(c) If $\psi\in \mathcal{L}(K)_+^*$ satisfies \eqref{eqt:m-pi}, then $\psi(\pi_t x) = \psi(x)$ ($t\in \Gamma, x\in \mathcal{L}(K)$) and $\psi$ is ${\rm Ad}\ \!\pi$-invariant.
\smallskip\noindent
(d) $p_\pi\notin C^*(\pi)$ if and only if $p_\pi \neq 0$ and $\pi$ does not have a spectral gap.
\end{lem}
\begin{prf}
Let $\pi^0$ be the restriction of $\pi$ on $(K^\pi)^\bot$.
One has $\pi_t = 1_{K^\pi}\oplus \pi^0_t$ ($t\in \Gamma$) and $p_\pi = 1_{K^\pi}\oplus 0$.
\smallskip\noindent
(a) For any unitary representation $\rho$ of $\Gamma$, we denote by $\tilde \rho$ the induced $^*$-representation of the full group $C^*$-algebra of $\Gamma$.
Then $\varepsilon_\pi$ is well-defined if and only if $\ker \tilde\pi \subseteq \ker \tilde 1_\Gamma$, which in turn is equivalent to $1_\Gamma$ being weakly contained in $\pi$ (see e.g.\ \cite[Theorem F.4.4]{BHV}).
The second statement is trivial.
\smallskip\noindent
(b) Part (a) implies that $\pi$ does not have a spectral gap if and only if there is $\phi\in \mathcal{L}\big((K^\pi)^\bot\big)^*_+$ satisfying $\phi(\pi_t^0) = 1$ for each $t\in \Gamma$.
This, in turn, is equivalent to the existence of $\varphi\in \big(\mathcal{L}(K^\pi)\oplus \mathcal{L}((K^\pi)^\bot)\big)^*_+$ with $\varphi(\pi_t) = 1$ ($t\in \Gamma$) and $\varphi(p_\pi) = 0$.
\smallskip\noindent
(c) This can be obtained by applying the Cauchy-Schwarz inequality to $\psi\big((1-\pi_t)x\big)$.
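Explicitly, since $\psi(1) = \psi(\pi_e) = 1$ and $\psi(\pi_{t^{-1}}) = \psi(\pi_t^*) = 1$, the Cauchy-Schwarz inequality gives
$$\big|\psi\big((1-\pi_t)x\big)\big|^2\ \leq\ \psi\big((1-\pi_t)(1-\pi_t)^*\big)\,\psi(x^*x)\ =\ \big(2 - \psi(\pi_t) - \psi(\pi_{t^{-1}})\big)\,\psi(x^*x)\ =\ 0,$$
so that $\psi(\pi_t x) = \psi(x)$; applying the same estimate to $\psi\big(x(1-\pi_{t^{-1}})\big)$ gives $\psi(x\pi_{t^{-1}}) = \psi(x)$, and hence $\psi(\pi_t x \pi_{t^{-1}}) = \psi(x)$.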
\smallskip\noindent
(d) Notice that either of the two statements implies that $p_\pi \neq 0$, and so $\varepsilon_\pi$ is well-defined (because of part (a)).
\smallskip\noindent
$\Rightarrow)$.
Since $p_\pi\notin C^*(\pi)$, the map $\rho:C^*(\pi) \to \mathcal{L}(K)$ defined by $\rho(x) := x - \varepsilon_\pi(x) p_\pi$ is injective with its image being $0\oplus C^*(\pi^0)$.
Consequently, $\varepsilon_{\pi^0} = \varepsilon_\pi\circ \rho^{-1}$ (we identify $0\oplus C^*(\pi^0)$ with $C^*(\pi^0)$) and is well-defined.
This shows that $1_\Gamma$ is weakly contained in $\pi^0$ (by part (a)).
\smallskip\noindent
$\Leftarrow)$.
By parts (a) and (b), there is $\psi\in \mathcal{L}(K)^*_+$ with $\psi|_{C^*(\pi)} = \varepsilon_\pi$ and $\psi(p_\pi) = 0$.
Take any unit vector $\xi\in K^\pi$.
If $p_\pi \in C^*(\pi)$, then $1 = \omega_\xi(p_\pi) = \varepsilon_\pi(p_\pi) = 0$ (as $\varepsilon_\pi = \omega_\xi$ on $C^*(\pi)$), which is absurd.
\end{prf}
Let $N$ be a von Neumann algebra.
We denote by $N_*$ the pre-dual space of $N$, which is naturally identified with the subspace of all weak-$^*$-continuous linear functionals in the dual space $N^*$.
Let
$$N^*_\alpha:= \{f\in N^*: f\circ \alpha_t =f \text{ for any } t\in \Gamma\}, \quad \mathfrak{M}_{N,\alpha} := \{f\in N^*_\alpha: f\geq 0 \text{ and } f(1) =1\}$$
and $\mathfrak{M}^{N,\alpha} := \overline{\mathfrak{M}_{N,\alpha}\cap N_*}^{\sigma(N^*,N)}$ (where $\overline{E}^{\sigma(N^*,N)}$ means the weak-$^*$-closure of a subset $E\subseteq N^*$).
Clearly, $\mathfrak{M}^{N,\alpha}\subseteq \mathfrak{M}_{N,\alpha}$.
The following theorem is a refined version of Theorem \ref{thm:main}.
This result can also be regarded as generalizations of the main results in \cite{Pasch} and in \cite{LH} (see also \cite[Proposition 3.4.1]{Lub}).
\begin{thm}\label{thm:sp-gap-act-inv-mean}
Let $\alpha$ be an action of a group $\Gamma$ on a von Neumann algebra $N$ with standard form $(N, \mathfrak{H}, \mathfrak{J}, \mathfrak{P})$.
Consider the following statements.
\begin{enumerate}[(G1)]
\item If $\psi\in \mathcal{L}(\mathfrak{H})_+^*$ satisfies $\psi(\mathcal{U}_{\alpha_t}) = 1$ ($t\in \Gamma$), then $\psi(p_{\mathcal{U}_{\alpha}}) \neq 0$.
\item $\alpha$ has a spectral gap.
\item $p_{\mathcal{U}_{\alpha}}\in C^*(\mathcal{U}_{\alpha})$
\item $\mathfrak{M}_{N,\alpha} = \mathfrak{M}^{N,\alpha}$.
\end{enumerate}
\noindent
One has $(G1) \Leftrightarrow (G2) \Rightarrow (G3)$ and $(G2)\Rightarrow (G4)$.
\end{thm}
\begin{prf}
$(G1) \Leftrightarrow (G2)$.
This follows from Lemma \ref{lem:p_pi}(b).
\smallskip\noindent
$(G2) \Rightarrow (G3)$.
This follows from Lemma \ref{lem:p_pi}(d).
\smallskip\noindent
$(G2) \Rightarrow (G4)$.
Let $m\in N^*_+$ be an $\alpha$-invariant state.
If $\{f_i\}_{i\in I}$ is a net of states in $N_*$ that $\sigma(N^*, N)$-converges to $m$, the ``convergence to invariance'' type argument (see e.g.\ \cite[p.33-34]{Green}) will produce a net $\{g_j\}_{j\in J}$ in the convex hull of $\{f_i\}_{i\in I}$ that $\sigma(N^*, N)$-converges to $m$ and satisfies $\|\alpha_t^*(g_j) - g_j\|_{N_*} \to 0$ for every $t\in \Gamma$.
For each $j\in J$, there is a unique unit vector $\zeta_j\in \mathfrak{P}$ with $g_j = \omega_{\zeta_j}$.
As $\alpha_{t^{-1}}^*(g_j) = \omega_{\mathcal{U}_{\alpha_{t}}(\zeta_j)}$,
we have, by \cite[Lemma 2.10]{Haag-st-form} (and the fact that $\mathcal{U}_{\alpha_{t}}(\mathfrak{P})\subseteq \mathfrak{P}$),
\begin{equation}\label{eqt:Pow-Stor-ineq}
\|\mathcal{U}_{\alpha_{t}}(\zeta_j) - \zeta_j\|^2
\ \leq\ \|\alpha_{t^{-1}}^*(g_j) - g_j\|_{N_*}
\qquad (t\in \Gamma).
\end{equation}
Let $\zeta^0_j := \zeta_j - p_{\mathcal{U}_{\alpha}}(\zeta_j)$.
If $\|\zeta_j^0\| \nrightarrow 0$, a subnet of $\{\zeta_j^0\}_{j\in J}$ will produce an almost $\mathcal{U}_{\alpha}$-invariant unit vector in $(\mathfrak{H}^{\mathcal{U}_{\alpha}})^\bot$, which contradicts the spectral gap assumption of $\mathcal{U}_{\alpha}$.
Consequently, if we set $\zeta_j^1:= \frac{p_{\mathcal{U}_{\alpha}}(\zeta_j)}{\|p_{\mathcal{U}_{\alpha}}(\zeta_j)\|}$, then $\{\omega_{\zeta_j^1}\}_{j\in J}$ is a net of $\alpha$-invariant states that $\sigma(N^*, N)$-converges to $m$.
\end{prf}
It is easy to see that $\mathfrak{M}_{N,\alpha}$ spans $N^*_\alpha$ (see e.g.\ \cite[Proposition 2.2]{Pat}).
Moreover, if we denote $N_*^\alpha :=N^*_\alpha\cap N_*$, it is not hard to check that $\mathfrak{M}^{N,\alpha}$ spans $\overline{N_*^\alpha}^{\sigma(N^*, N)}$.
Thus, Statement (G4) is equivalent to the fact that $N^*_\alpha = \overline{N_*^\alpha}^{\sigma(N^*, N)}$.
\begin{lem}\label{lem:H-N_*}
(a) The map sending $\xi \in \mathfrak{H}$ to $\omega_\xi \in N_*$ restricts to a bijection from $\mathfrak{P} \cap \mathfrak{H}^{\mathcal{U}_{\alpha}}$ onto the positive part of
$N_*^\alpha$.
\smallskip\noindent
(b) If $\mathfrak{H}^{\mathcal{U}_{\alpha}}$ is finite dimensional, then $N_*^\alpha$ is finite dimensional.
\smallskip\noindent
(c) If $\dim \mathfrak{H}^{\mathcal{U}_{\alpha}} =1$, then $\dim N_*^\alpha = 1$.
\end{lem}
\begin{prf}
(a) This follows from the fact that for any $g\in (N_*^\alpha)_+$, if $\zeta\in \mathfrak{P}$ is the unique element satisfying $g = \omega_\zeta$, we have $\zeta\in \mathfrak{H}^{\mathcal{U}_{\alpha}}$, by the inequality in \eqref{eqt:Pow-Stor-ineq}.
\smallskip\noindent
(b) For any $\phi\in N_*^\alpha$, there exist $\alpha$-invariant elements $\phi_1,\phi_2,\phi_3,\phi_4\in (N_*^\alpha)_+$ with $\phi = (\phi_1 - \phi_2) + \mathrm{i} (\phi_3 - \phi_4)$.
By part (a), one can find $\zeta_k\in \mathfrak{P}\cap \mathfrak{H}^{\mathcal{U}_{\alpha}}$ such that $\phi_k = \omega_{\zeta_k}$ ($k=1,2,3,4$).
Therefore, the linear map from $(\mathfrak{H}^{\mathcal{U}_{\alpha}} \otimes \overline{\mathfrak{H}^{\mathcal{U}_{\alpha}}}) \oplus (\mathfrak{H}^{\mathcal{U}_{\alpha}} \otimes \overline{\mathfrak{H}^{\mathcal{U}_{\alpha}}})$ to $N_*^\alpha$ given by $(\xi_1\otimes \overline{\eta_1}, \xi_2\otimes \overline{\eta_2}) \mapsto \omega_{\xi_1,\eta_1} + \mathrm{i} \omega_{\xi_2,\eta_2}$ is surjective and $N_*^\alpha$ is finite dimensional.
\smallskip\noindent
(c) Clearly, $N_*^\alpha \neq (0)$.
If $\dim N_*^\alpha > 1$, there will be two different norm one elements in $(N_*^\alpha)_+$.
Thus, part (a) gives two norm one elements in $\mathfrak{P} \cap \mathfrak{H}^{\mathcal{U}_{\alpha}}$, which is not possible.
\end{prf}
The following proposition concerns some converses of the implications in Theorem \ref{thm:sp-gap-act-inv-mean}.
Note that the idea of the argument for $p_{\mathcal{U}_{\alpha}}\in vN(\mathcal{U}_{\alpha})$ in part (b) comes from \cite{Pasch}.
\begin{prop}\label{prop:inv-mean-sp-gap-act}
Let $\alpha$ be an action of a group $\Gamma$ on a von Neumann algebra $N$.
\smallskip\noindent
(a) If $p_{\mathcal{U}_{\alpha}}\neq 0$, then Statement (G3) implies Statement (G1).
\smallskip\noindent
(b) If $p_{\mathcal{U}_{\alpha}}\in N$, then Statement (G4) implies Statement (G1) and $p_{\mathcal{U}_{\alpha}}$ belongs to the center $Z\big(vN(\mathcal{U}_{\alpha})\big)$ of $vN(\mathcal{U}_{\alpha})$.
\end{prop}
\begin{prf}
(a) This follows from Lemma \ref{lem:p_pi}(d).
\smallskip\noindent
(b) Suppose that Statement (G4) holds, but there exists $\psi\in \mathcal{L}(\mathfrak{H})^*_+$ with $\psi(\mathcal{U}_{\alpha_t}) = 1$ ($t\in \Gamma$) and $\psi(p_{\mathcal{U}_{\alpha}}) = 0$.
Then Lemma \ref{lem:p_pi}(c) implies that $\psi|_{N}$ is $\alpha$-invariant, and Statement (G4) gives a net $\{g_j\}_{j\in J}$ of states in $N_*^\alpha$ that $\sigma(N^*,N)$-converges to $\psi|_N$.
By Lemma \ref{lem:H-N_*}(a), one can find $\zeta_j\in \mathfrak{P}\cap \mathfrak{H}^{\mathcal{U}_{\alpha}}$ with $g_j = \omega_{\zeta_j}$, which means that $g_j(p_{\mathcal{U}_{\alpha}}) = 1$ ($j\in J$).
This contradicts $\psi(p_{\mathcal{U}_{\alpha}}) = 0$.
For the second conclusion, it is clear that $p_{\mathcal{U}_{\alpha}}\in vN(\mathcal{U}_{\alpha})'$ because $\mathfrak{H}^{\mathcal{U}_{\alpha}}$ is $\mathcal{U}_{\alpha}$-invariant.
Suppose on the contrary that $p_{\mathcal{U}_{\alpha}}\notin vN(\mathcal{U}_{\alpha})$.
Since $\mathbb{C} p_{\mathcal{U}_{\alpha}}$ is one dimensional and
\begin{equation}\label{eqt:rel-p}
\mathcal{U}_{\alpha_t} p_{\mathcal{U}_{\alpha}}\ =\ p_{\mathcal{U}_{\alpha}} \qquad (t\in \Gamma),
\end{equation}
one sees that $vN(\mathcal{U}_{\alpha}) + \mathbb{C} p_{\mathcal{U}_{\alpha}}$ is a von Neumann algebra.
Moreover, if $\xi\in \mathfrak{H}^{\mathcal{U}_{\alpha}}$ is a norm one vector (note that we assume $p_{\mathcal{U}_{\alpha}}\notin vN(\mathcal{U}_{\alpha})$), we define a functional on $vN(\mathcal{U}_{\alpha}) + \mathbb{C} p_{\mathcal{U}_{\alpha}}$ by
$$\phi(x+ c p_{\mathcal{U}_{\alpha}})\ :=\ \omega_{\xi}(x)
\qquad (x\in vN(\mathcal{U}_{\alpha}); c\in \mathbb{C}).$$
Clearly, $\phi$ is weak-$^*$-continuous and it is a $^*$-homomorphism since $xp_{\mathcal{U}_{\alpha}}\in \mathbb{C} p_{\mathcal{U}_{\alpha}}$ (because of \eqref{eqt:rel-p}).
If $\psi$ is a normal state extension of $\phi$ on $\mathcal{L}(\mathfrak{H})$, then $\psi(p_{\mathcal{U}_{\alpha}}) = 0$ and Lemma \ref{lem:p_pi}(c) implies that $\psi|_N$ is an $\alpha$-invariant normal state.
Now, Lemma \ref{lem:H-N_*}(a) produces $\zeta\in \mathfrak{P}\cap \mathfrak{H}^{\mathcal{U}_{\alpha}}$ with $\psi|_N = \omega_{\zeta}$ and we have the contradiction that $\psi(p_{\mathcal{U}_{\alpha}}) = 1$.
\end{prf}
Our next example shows that Statement (G4) does not imply Statement (G2) in general.
We first set some notation and recall some facts.
Let $\mathcal{I}_N$ be the inner automorphism group of $N$ and $\beta$ be the canonical action of $\mathcal{I}_N$ on $N$.
Then $\beta$-invariant states on $N$ are precisely tracial states.
Suppose, in addition, that $N$ admits a normal faithful tracial state $\tau$.
If $(H_\tau, \Psi_\tau)$ is the GNS construction with respect to $\tau$ and $\Lambda_\tau: N \to H_\tau$ is the canonical map, then $\mathfrak{H} \cong H_\tau$ and the canonical action of $N$ on $\mathfrak{H}$ can be identified with $\Psi_\tau$.
For any $g\in {\rm Aut}(N)$, one has $\mathcal{U}_{\beta_g}(\Lambda_\tau (x)) = \Lambda_\tau(g(x))$ ($x\in N$).
\begin{eg}\label{eg:4-not-2}
Suppose that $\lambda$ is the left regular representation of $\Gamma$ and put $L(\Gamma):= vN(\lambda)$.
Then $\mathfrak{H} = \ell^2(\Gamma)$ and we let $\mathcal{I}_{L(\Gamma)}$ and $\beta$ be as in the above.
\smallskip\noindent
(a) The representation $\mathcal{U}_{\beta}\circ {\rm Ad}\circ \lambda: \Gamma\to \mathcal{L}(\ell^2(\Gamma))$ coincides with the ``conjugate representation'' $\gamma$ defined by $\gamma_t(\xi)(s) := \xi (t^{-1}st)$ ($s,t\in \Gamma; \xi\in \ell^2(\Gamma)$).
\smallskip\noindent
(b) Suppose that $\Gamma$ is an amenable countable ICC group.
Since $L(\Gamma)$ is a type $I\!I_1$-factor, it has only one tracial state and this state is normal.
Consequently, $\mathfrak{M}_{L(\Gamma), \beta} = \mathfrak{M}^{L(\Gamma), \beta}$ (because $\beta$-invariant states are precisely tracial states).
On the other hand, we have $\mathfrak{H}^{\mathcal{U}_\beta} = \mathbb{C}\delta_e$.
Moreover, as $L(\Gamma)$ is semi-discrete, it has property Gamma (see \cite[Corollary 2.2]{Con}), and the restriction of $\mathcal{U}_{\beta}$ on $(\mathfrak{H}^{\mathcal{U}_\beta})^\bot$ weakly contains the trivial representation (by an equivalent form of property Gamma in \cite[Theorem 2.1(c)]{Con}).
Thus, the representation $\mathcal{U}_\beta$ (and hence the action $\beta$) does not have a spectral gap.
\end{eg}
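Before moving on, we note that the elementary relation $\gamma = \lambda\circ\rho$ (used in the proof of Theorem \ref{thm:ICC}(b) below, with $\rho$ the right regular representation) and the homomorphism property of $\gamma$ can be checked numerically on a small group; the following Python sketch (purely illustrative, with ad hoc names) does this for the symmetric group $S_3$.
\begin{verbatim}
import itertools
import numpy as np

# S_3 as permutation tuples acting on {0,1,2}; (p*q)(i) = p[q[i]].
G = list(itertools.permutations(range(3)))
idx = {g: i for i, g in enumerate(G)}
mul = lambda p, q: tuple(p[q[i]] for i in range(3))
inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))

def op(a):
    # matrix of the operator (T xi)(s) = xi(a(s)) on l^2(S_3)
    M = np.zeros((len(G), len(G)))
    for s in G:
        M[idx[s], idx[a(s)]] = 1.0
    return M

lam = {t: op(lambda s, t=t: mul(inv(t), s)) for t in G}  # xi(t^-1 s)
rho = {t: op(lambda s, t=t: mul(s, t)) for t in G}       # xi(s t)
gam = {t: op(lambda s, t=t: mul(mul(inv(t), s), t)) for t in G}

assert all(np.allclose(gam[t], lam[t] @ rho[t]) for t in G)
assert all(np.allclose(gam[mul(t, u)], gam[t] @ gam[u])
           for t in G for u in G)
\end{verbatim}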
\section{Invariant States and Inner Amenability (Theorem \ref{thm:inner})}
In this section, we will give an application of Theorem \ref{thm:sp-gap-act-inv-mean} to inner amenability.
We recall (from the main theorem in \cite{Eff}) that an ICC group $\Gamma$ is inner amenable if and only if there is a net $\{\xi_i\}_{i\in I}$ of unit vectors in $\ell^2(\Gamma\setminus \{e\})$ such that $\|\gamma_t(\xi_i) - \xi_i\| \to 0$ for any $t\in \Gamma$ (where $\gamma$ is as in Example \ref{eg:4-not-2}(a)).
Notice that this inner amenability is slightly different from the one in \cite{LP} and \cite{Pat}, and it is called ``non-trivially inner amenable'' in \cite[p.84]{Pat}.
Let us consider another extension of inner amenability to general (not necessarily ICC) discrete groups.
\begin{defn}
$\Gamma$ is said to be \emph{strongly inner amenable} if the conjugate representation $\gamma$ does not have a spectral gap.
\end{defn}
It is obvious that strong inner amenability implies inner amenability, and the converse holds when $\Gamma$ is an ICC group.
On the other hand, no abelian group and no group with property $(T)$ is strongly inner amenable.
\begin{prop}\label{prop:st-inner}
Let $\alpha$ be the action of $\Gamma$ on $\mathcal{L}(\ell^2(\Gamma))$ given by $\alpha_t(x) := \gamma_t x \gamma_{t^{-1}}$, $\Gamma_{\rm fin}$ be the normal subgroup consisting of elements in $\Gamma$ with finite conjugacy classes and $A(\Gamma)$ be the predual of $L(\Gamma)$.
\smallskip\noindent
(a) Consider the following statements.
\begin{enumerate}[(S1)]
\item $\Gamma/\Gamma_{\rm fin}$ is inner amenable.
\item $\ell^\infty(\Gamma)^*_\alpha$ is not the $\sigma(\ell^\infty(\Gamma)^*, \ell^\infty(\Gamma))$-closure of $\ell^1(\Gamma)^\alpha$.
\item $\Gamma$ is strongly inner amenable.
\item There is $\psi\in \mathcal{L}(\ell^2(\Gamma))_+^*$ satisfying $\psi(\gamma_t) = 1$ ($t\in \Gamma$) and $\psi(p_\gamma) = 0$.
\item $p_\gamma\notin C^*(\gamma)$.
\item $L(\Gamma)^*_\alpha$ is not the $\sigma(L(\Gamma)^*, L(\Gamma))$-closure of $A(\Gamma)^\alpha$.
\item $\mathfrak{M}_{L(\Gamma),\alpha}$ does not coincide with the set of tracial states on $L(\Gamma)$.
\end{enumerate}
\noindent
Then $(S1) \Rightarrow (S2) \Rightarrow (S3) \Leftrightarrow (S4) \Leftrightarrow (S5)$ and $(S6)\Leftrightarrow (S7) \Rightarrow (S3)$.
\smallskip\noindent
(b) If $\Gamma_{\rm fin}$ coincides with the center of $\Gamma$, then $(S5) \Rightarrow (S1)$ and $p_\gamma\in Z(vN(\gamma))$.
\smallskip\noindent
(c) If $\Gamma_{\rm fin}$ is finite, we have $(S5) \Rightarrow (S1)$.
\end{prop}
\begin{prf}
Notice that if $x\in L(\Gamma)$, then $\alpha_t(x) = \lambda_t x\lambda_t^*\in L(\Gamma)$.
On the other hand, if $y\in \ell^\infty(\Gamma)$ (considered as a subalgebra of $\mathcal{L}(\ell^2(\Gamma))$ in the canonical way), then $\alpha_t(y)\in \ell^\infty(\Gamma)$ and $\alpha_t(y)(s) = y(t^{-1}st)$.
Moreover, if we either set
$$N = \ell^\infty(\Gamma) \quad \text{or} \quad N = L(\Gamma)$$
(and we denote by $\alpha$ the restriction of $\alpha$ on $N$ by abuse of notation), then $\mathfrak{H} = \ell^2(\Gamma)$ and $\mathcal{U}_{\alpha}$ coincides with $\gamma$ (even though $\mathfrak{J}$ and $\mathfrak{P}$ are different in these two cases).
\smallskip\noindent
(a) By Theorem \ref{thm:sp-gap-act-inv-mean} and Proposition \ref{prop:inv-mean-sp-gap-act}(a) (for both $N=\ell^\infty(\Gamma)$ and $N= L(\Gamma)$), it remains to show that $(S1)\Rightarrow (S2)$ and $(S6)\Leftrightarrow (S7)$.
\noindent
$(S1)\Rightarrow (S2)$.
Since $\Gamma/\Gamma_{\rm fin}$ is inner amenable, we know from \cite[Corollary 1.4]{Kan-Mark} as well as the argument of Lemma \ref{lem:p_pi}(c) that there is an $\alpha$-invariant mean $m$ on $\Gamma$ such that $m(\chi_{\Gamma_{\rm fin}}) \neq 1$.
Obviously, $\ell^2(\Gamma)^\gamma \subseteq \ell^2(\Gamma_{\rm fin})$ and Lemma \ref{lem:H-N_*}(a) implies that $g(\chi_{\Gamma_{\rm fin}}) = \|g\|$ if $g\in \ell^1(\Gamma)^\alpha_+$.
Consequently, $m\notin \mathfrak{M}^{\ell^\infty(\Gamma),\alpha}$.
\noindent
$(S6)\Rightarrow (S7)$.
Assume that $\mathfrak{M}_{L(\Gamma),\alpha}$ coincides with the set of all tracial states on $L(\Gamma)$.
It is well-known that every tracial state on $L(\Gamma)$ is a weak-$^*$-limit of a net of normal tracial states (see e.g.\ \cite[Proposition 8.3.10]{KRII}).
Thus, $\mathfrak{M}_{L(\Gamma),\alpha} = \mathfrak{M}^{L(\Gamma),\alpha}$, which contradicts Statement (S6).
\noindent
$(S7)\Rightarrow (S6)$.
Clearly, any tracial state on $L(\Gamma)$ is $\alpha$-invariant.
Moreover, $\omega\in A(\Gamma)$ is $\alpha$-invariant if and only if $$\omega(\lambda_t x)\ =\ \omega(x\lambda_{t}) \qquad (x\in L(\Gamma); t\in \Gamma),$$
which is equivalent to $\omega(yx) = \omega(xy)$ ($x,y\in L(\Gamma)$).
Hence, $\mathfrak{M}_{L(\Gamma),\alpha}\cap A(\Gamma)$ is the set of all normal tracial states on $L(\Gamma)$.
Consequently, if $L(\Gamma)^*_\alpha$ is the weak-$^*$-closure of $A(\Gamma)^\alpha$, then $\mathfrak{M}_{L(\Gamma),\alpha}$ coincides with the set of tracial states, which contradicts Statement (S7).
\smallskip\noindent
(b) The assumption implies that every finite conjugacy class of $\Gamma$ is a singleton set.
Consequently, one has $\ell^2(\Gamma)^\gamma = \ell^2(\Gamma_{\rm fin})$ and $p_\gamma = \chi_{\Gamma_{\rm fin}}\in \ell^\infty(\Gamma)$.
Thus, $p_\gamma\in Z(vN(\gamma))$ by Proposition \ref{prop:inv-mean-sp-gap-act}(b).
Furthermore, Statement (S4) produces an $\alpha$-invariant mean $m$ on $\Gamma$ (see Lemma \ref{lem:p_pi}(c)) such that $m(p_\gamma) =0$.
Now, by \cite[Corollary 1.4]{Kan-Mark}, $\Gamma/\Gamma_{\rm fin}$ is inner amenable.
\smallskip\noindent
(c) Let $C_1,...,C_n$ be all the finite conjugacy classes of $\Gamma$.
Let $p_k\in \mathcal{L}(\ell^2(\Gamma))$ be the orthogonal projection onto $\ell^2(C_k)$ ($k=1,...,n$) and $p_0\in \mathcal{L}(\ell^2(\Gamma))$ be the orthogonal projection onto $\ell^2(\Gamma\setminus \Gamma_{\rm fin})$.
Then
$$\ell^2(\Gamma)^\gamma = \{\xi\in \ell^2(\Gamma): p_0\xi = 0 \text{ and } p_k\xi \text{ is a constant function for every } k=1,...,n\}.$$
Suppose that $\{\xi_i\}_{i\in I}$ is a net of almost $\gamma$-invariant unit vectors in $(\ell^2(\Gamma)^\gamma)^\bot$.
One can find a subnet $\{\xi_{i_j}\}_{j\in J}$ such that $\{p_k\xi_{i_j}\}_{j\in J}$ is norm-converging to some $\xi^{(k)} \in \ell^2(C_k)$ with $\sum_{s\in C_k}\xi^{(k)}(s) = 0$ (for $k =1,...,n$).
As $\|\gamma_t(p_k \xi_{i_j}) - p_k \xi_{i_j}\|\to 0$, we see that $\gamma_t \xi^{(k)} = \xi^{(k)}$ ($t\in \Gamma$), which means that $\xi^{(k)} = 0$.
Consequently, $\|p_0\xi_{i_j}\| \to 1$ and a subnet of $\{\omega_{p_0\xi_{i_j}}\}_{j\in J}$ will produce a $\gamma$-invariant mean $m$ satisfying $m(\chi_{\Gamma_{\rm fin}}) =0$.
Now, Statement (S1) follows from \cite[Corollary 1.4]{Kan-Mark}.
\end{prf}
The following theorem concerns the case when there is a unique inner invariant state on $L(\Gamma)$.
Part (b) of which gives Theorem \ref{thm:inner}.
\begin{thm}\label{thm:ICC}
Let $\alpha$ be as in Proposition \ref{prop:st-inner}.
\smallskip\noindent
(a) If $\Gamma$ is amenable, then there is more than one $\alpha$-invariant state on $L(\Gamma)$.
\smallskip\noindent
(b) Suppose that $\Gamma$ is an ICC group.
If $\Gamma$ is not inner amenable, there is a unique $\alpha$-invariant state on $L(\Gamma)$.
The converse holds when $\Gamma$ is finitely generated.
\end{thm}
\begin{prf}
(a) Suppose that $n\in \ell^\infty(\Gamma)^*_+$ is an invariant mean.
Using the ``convergence to invariance'' argument, there is a net $\{g_j\}_{j\in J}$ of norm one elements in $\ell^1(\Gamma)_+$ such that $\|t\cdot g_j - g_j\|_{\ell^1(\Gamma)} \to 0$ and $\|g_j\cdot t- g_j\|_{\ell^1(\Gamma)} \to 0$, for every $t\in \Gamma$ (where $(t\cdot g_j)(s) = g_j(t^{-1}s)$ and $(g_j\cdot t)(s) = g_j(st)$).
Set $\eta_j := g_j^{1/2}\in \ell^2(\Gamma)$ ($j\in J$).
Then
\begin{equation*}\label{eqt:left-and-adj-inv}
\|\lambda_t(\eta_j) - \eta_j\|_{\ell^2(\Gamma)} \to 0
\qquad \text{and}\qquad
\|\gamma_t(\eta_j) - \eta_j\|_{\ell^2(\Gamma)} \to 0 \qquad (t\in \Gamma).
\end{equation*}
If $\phi$ is the $\sigma(L(\Gamma)^*, L(\Gamma))$-limit of a subnet of $\{\omega_{\eta_j}\}_{j\in J}$, then $\phi$ is an $\alpha$-invariant state on $L(\Gamma)$ but $\phi\neq \omega_{\delta_e}$ because $\phi(\lambda_t) = 1$ for all $t\in \Gamma$.
\smallskip\noindent
(b) Since $\Gamma$ is an ICC group, one has $\mathfrak{H}^{\mathcal{U}_{\alpha}} = \mathbb{C} \delta_e$ and $A(\Gamma)^\alpha = \mathbb{C} \omega_{\delta_e}$.
The first statement follows from Proposition \ref{prop:st-inner}(a).
To show the second statement, we suppose, on the contrary, that $\Gamma$ has a finite generating subset $\{t_1,...,t_n\}$, $\omega_{\delta_e}$ is the only $\alpha$-invariant state on $N:=L(\Gamma)$, but $\gamma$ does not have a spectral gap.
Then there is a sequence $\{\xi_k\}_{k\in \mathbb{N}}$ of unit vectors in $\ell^2(\Gamma)$ satisfying
$$\langle \xi_k, \delta_e\rangle\ =\ 0 \qquad \text{and} \qquad \|\lambda_{t_j}(\rho_{t_j}(\xi_k)) -\xi_k\| \to 0 \quad (j=1,...,n)$$
(notice that $\gamma = \lambda\circ \rho$, where $\rho$ is the right regular representation of $\Gamma$).
Set $u_j := \lambda_{t_j}\in N$ ($j=1,...,n$) and consider $\mathfrak{F}$ to be a free ultrafilter on $\mathbb{N}$.
Since the relative commutant of $F:=\{u_1,...,u_n\}$ in $N$ is the same as the relative commutant of $F$ in the ultrapower $N^\mathfrak{F}$ (by the argument of \cite[Lemma 2.6]{Con}; see lines 3 and 4 of \cite[p.86]{Con}), the relative commutant of $F$ in $N^\mathfrak{F}$ is $\mathbb{C}$.
Now, one may employ the argument for Case (1) of ``(c) $\Rightarrow$ (b)'' in \cite[p.87]{Con} to conclude that there exists a subsequence of $\{\omega_{\xi_k}\}_{k\in \mathbb{N}}$ which is not relatively $\sigma(N_*,N)$-compact in $N_*$.
Hence, this subsequence has a subnet that $\sigma(N^*,N)$-converges to a non-normal state $\varphi\in N^*$.
As $\varphi$ is invariant under $\{{\rm Ad}\ \! \gamma_{t_1}, ..., {\rm Ad}\ \! \gamma_{t_n}\}$, it is $\alpha$-invariant and we obtain a contradiction.
\end{prf}
\begin{rem}\label{rem:icc}
(a) It was shown in \cite{Pasch} that inner amenability of an ICC group $\Gamma$ is equivalent to $p_\gamma\notin C^*(\gamma)$.
However, Proposition \ref{prop:st-inner}(b) tells us that $p_\gamma\in vN(\gamma)$ for any ICC group $\Gamma$, whether or not it is inner amenable.
\smallskip\noindent
(b) A functional $\varphi\in L(\Gamma)^*$ is $\alpha$-invariant if and only if $\varphi(ax) = \varphi(xa)$ ($x\in L(\Gamma);a\in C^*(\lambda) = C^*_r(\Gamma)$).
Hence, the difference between $\alpha$-invariant states and tracial states on $L(\Gamma)$ is similar to the difference between inner amenability of $\Gamma$ and property Gamma of $L(\Gamma)$ (see e.g.\ \cite[p.394]{Vaes}).
It is shown in Proposition \ref{prop:st-inner}(a) that if $\Gamma$ is not strongly inner amenable, all $\alpha$-invariant states on $L(\Gamma)$ are tracial states.
On the other hand, if $\Gamma$ is an amenable ICC group, then $L(\Gamma)$ has only one tracial state, but there is more than one $\alpha$-invariant state on $L(\Gamma)$ (by Theorem \ref{thm:ICC}(a)).
\end{rem}
\section{Invariant States and Property ($T$) (Theorem \ref{thm:property-T})}
Although Statement (G4) does not imply Statement (G2) for an individual action, if Statement (G4) holds for every action, then so does Statement (G2).
This can be seen in the following result, which is a more elaborate version of Theorem \ref{thm:property-T}.
\begin{thm}\label{thm:equiv-prop-T}
Let $\Gamma$ be a countable discrete group.
The following statements are equivalent.
\begin{enumerate}[(T1)]
\item $\Gamma$ has property $(T)$.
\item If $\alpha$ is an action of $\Gamma$ on a von Neumann algebra $N$ and $\psi\in \mathcal{L}(\mathfrak{H})_+^*$ satisfying $\psi(\mathcal{U}_{\alpha_t}) = 1$ ($t\in \Gamma$), one has $\psi(p_{\mathcal{U}_{\alpha}}) \neq 0$.
\item $\mathfrak{M}_{N, \alpha} = \mathfrak{M}^{N, \alpha}$ for every action $\alpha$ of $\Gamma$ on a von Neumann algebra $N$.
\item For each action $\alpha$ of $\Gamma$ on a von Neumann algebra $N$ with $\dim N_*^\alpha =1$, there is only one $\alpha$-invariant state on $N$.
\item For any Hilbert space $K$ and any action $\alpha$ of $\Gamma$ with $\mathcal{L}(K)_*^\alpha = (0)$, there is no $\alpha$-invariant state on $\mathcal{L}(K)$.
\end{enumerate}
\end{thm}
\begin{prf}
Theorem \ref{thm:sp-gap-act-inv-mean} gives $(T1) \Rightarrow (T2) \Rightarrow (T3)$.
Moreover, it is clear that $(T3) \Rightarrow (T4)$ and $(T3) \Rightarrow (T5)$. It remains to show $(T4) \Rightarrow (T1)$ and $(T5) \Rightarrow (T1)$.
\smallskip\noindent
$(T4) \Rightarrow (T1)$.
This follows from \cite[Theorem 2.5]{Sch} (by considering the case when $N= L^\infty(X,\mu)$, where $(X,\mu)$ is a fixed non-atomic standard probability space, and $\alpha$ runs through all ergodic actions of $\Gamma$ on $(X,\mu)$).
\smallskip\noindent
$(T5) \Rightarrow (T1)$.
Suppose that $\Gamma$ does not have property $(T)$.
By \cite[Theorem 1]{BV}, there is a unitary representation $\mu: \Gamma\to \mathcal{L}(K)$ such that $\mu$ does not have a nonzero finite dimensional subrepresentation and $\pi:=\mu\otimes \overline{\mu}$ weakly contains $1_\Gamma$.
It is easy to see, by using \cite[Proposition A.1.12]{BHV}, that $\pi$ does not have a nonzero finite dimensional subrepresentation as well.
Let $N := \mathcal{L}(K\otimes \overline{K})$ and $\alpha := {\rm Ad}\ \!\pi$.
Then $\mathfrak{H} = K\otimes \overline{K}\otimes K\otimes \overline{K} = \mathcal{H}\mathcal{S}(K\otimes \overline{K})$ (where $\mathcal{H}\mathcal{S}$ denote the space of all Hilbert-Schmidt operators), $\mathcal{U}_{\alpha} = \pi \otimes \overline{\pi}$ and the $^*$-representation of $N$ on $\mathfrak{H}$ is given by compositions.
As $\pi$ does not have any nonzero finite dimensional subrepresentation, we see that $\mathfrak{H}^{\mathcal{U}_{\alpha}} = (0)$ (again by \cite[Proposition A.1.12]{BHV}).
On the other hand, if $\{\xi_i\}_{i\in I}$ is a net of almost $\pi$-invariant unit vectors in $K\otimes \overline{K}$, the $\sigma(N^*,N)$-limit of a subnet of $\{\omega_{\xi_i}\}_{i\in I}$ will produce an $\alpha$-invariant state on $N$.
This contradicts Statement (T5).
\end{prf}
\begin{rem}
One may regard Theorem \ref{thm:equiv-prop-T} as a sort of non-commutative analogue of \cite[Theorem 2.5]{Sch}.
There is also a non-commutative analogue of \cite[Theorem 2.4]{Sch}, which is basically a reformulation of \cite[Theorem 2.2]{B}.
More precisely, the following are equivalent for a discrete group $\Gamma$:
\begin{enumerate}[({A}1)]
\item $\Gamma$ is amenable.
\item $\mathfrak{M}_{N, \alpha} \neq \emptyset$ for every action $\alpha$ of $\Gamma$ on a von Neumann algebra $N$.
\item For any action $\alpha$ of $\Gamma$ on $\mathcal{L}(K)$ (where $K$ is a Hilbert space) with $\mathcal{L}(K)_*^\alpha = (0)$, there exists an $\alpha$-invariant state on $\mathcal{L}(K)$.
\end{enumerate}
In fact, if $\Gamma$ is amenable, then \cite[Theorem 5.1]{B} implies that $\mathcal{U}_\alpha\otimes \overline{\mathcal{U}_\alpha}$ weakly contains $1_\Gamma$.
Thus, if we identify $N$ with the subalgebra $N\otimes 1$ of $\mathcal{L}(\mathfrak{H}\otimes \mathfrak{H})$, an almost $\mathcal{U}_\alpha\otimes \overline{\mathcal{U}_\alpha}$-invariant unit vector will produce an $\alpha$-invariant state on $N$.
To show (A3) $\Rightarrow$ (A1), one may consider the action $\alpha := {\rm Ad}\ \! \lambda$ on $\mathcal{L}(\ell^2(\Gamma))$.
It is easy to see that $\mathcal{L}(\ell^2(\Gamma))^\alpha_* = (0)$ (e.g.\ by Lemma \ref{lem:H-N_*}(a)) and an $\alpha$-invariant state restricts to a left invariant state on $\ell^\infty(\Gamma)$, which implies that $\Gamma$ is amenable.
\end{rem}
Recall from \cite{LZ89} that $\Gamma$ has property $(T,FD)$ if $1_\Gamma$ is not in the closure of the subset of $\hat \Gamma$ consisting of non-trivial finite dimensional irreducible representations.
\begin{prop}\label{prop:no-inv-state}
Let $\Gamma$ be an infinite discrete group with property $(T,FD)$.
Then (T1) is equivalent to the following statement.
\begin{enumerate}[(T5')]
\item For any action $\alpha$ of $\Gamma$ on a von Neumann algebra $N$ with $N_*^\alpha = (0)$, there is no $\alpha$-invariant state on $N$.
\end{enumerate}
\end{prop}
\begin{prf}
By Theorem \ref{thm:sp-gap-act-inv-mean}, we have (T1)$\Rightarrow$(T5').
Now, suppose that $\Gamma$ does not have property $(T)$.
As $\Gamma$ has property $(T,FD)$, there is a net $\{(\pi^i,K^i)\}_{i\in I}$ in $\hat \Gamma\setminus\{1_\Gamma\}$ with each $K^i$ being infinite dimensional and there is a unit vector $\xi_i\in K^i$ ($i\in I$) such that
$$\|\pi_t^i\xi_i - \xi_i\|\ \to\ 0
\qquad (t\in \Gamma).$$
Let $N$ be the von Neumann algebra $\bigoplus_{i\in I} \mathcal{L}(K^i)$ and set $\pi:=\bigoplus_{i\in I}\pi^i$ as well as $\alpha := {\rm Ad}\ \!\pi$.
Then $\mathfrak{H} = \bigoplus_{i\in I} \mathcal{H}\mathcal{S}(K^i)\cong \bigoplus_{i\in I} K^i\otimes \overline{K^i}$ and the representation of $N$ on $\mathfrak{H}$ is given by compositions.
In this case, one has
$$\mathcal{U}_{{\rm Ad}w}\big((\zeta^i \otimes \overline{\eta^i}\big)_{i\in I}\big)\ =\ \big(w_i\zeta^i \otimes \overline{w_i\eta^i}\big)_{i\in I}
\qquad \Big(w = (w_i)_{i\in I}\in {\bigoplus}_{i\in I}U(K^i); (\zeta^i \otimes \overline{\eta^i})_{i\in I}\in \mathfrak{H} \Big),$$
where $U(K^i)$ is the group of unitaries on $K^i$.
Thus, $\mathcal{U}_{\alpha_t}(y) = \pi_t \circ y \circ \pi_{t^{-1}}$ ($y\in \mathfrak{H}; t\in \Gamma$), which implies that
$$\mathfrak{H}^{\mathcal{U}_{\alpha}}\ \subseteq\ {\bigoplus}_{i\in I}\{S_i\in \mathcal{H}\mathcal{S}(K^i): S_i\pi^i_t = \pi^i_t S_i,\ \! \forall t\in \Gamma\}$$
and hence $\mathfrak{H}^{\mathcal{U}_{\alpha}} = (0)$.
On the other hand, the $\sigma(N^*,N)$-limit of a subnet of $\{\omega_{\xi_i}\}_{i\in I}$ (we consider $\mathcal{L}(K^i)\subseteq N$ for all $i\in I$) will give an $\alpha$-invariant state on $N$.
\end{prf}
A statement similar to the above, for the strong property $(T)$ of locally compact groups, can be found in \cite{LN}.
\begin{cor}
Let $\Gamma$ be a minimally almost periodic group in the sense of \cite{vNW40} (i.e.\ there is no non-trivial finite dimensional irreducible representation of $\Gamma$).
If $\Gamma$ satisfies (T5'), then it has property (T) and hence is finitely generated.
\end{cor}
\section*{Acknowledgement}
We would like to thank the referee for helpful comments and suggestions that lead to a better presentation of the article, and for informing us that one can use the materials in \cite{BV} to obtain (T5) $\Rightarrow$ (T1) for general countable groups (we only had Proposition \ref{prop:no-inv-state} in the original version.)
We would also like to thank Prof. Roger Howe for helpful discussion leading to a simpler argument for Theorem \ref{thm:equiv-prop-T}.
\bibliographystyle{plain}
\section{Introduction}
The physics of $F=2$ spinor Bose-Einstein condensates (BECs) started to gain the attention of both theorists and experimentalists during the last decade. The interest was motivated by the structure of $F=2$ condensates: being more complex than that of $F=1$ condensates, it made possible properties and phenomena which are not present in an $F=1$ system.
One example of this can be seen in the structure of the ground states.
The energy functional of an $F=2$ condensate is characterized by one additional degree of freedom compared to the $F=1$ case.
This leads to a rich ground state manifold
as now there are two free parameters parametrizing the ground states \cite{Ciobanu00,Zheng10}.
This should be contrasted with an $F=1$ condensate, where the ground state is determined by the sign
of the spin-dependent interaction term \cite{Ho98,Ohmi98}.
Another difference can be seen in the structure of topological defects. It has been shown that non-commuting vortices can exist in an $F=2$ condensate \cite{Makela03}, while these are not possible in an $F=1$ BEC \cite{Ho98,Makela03}. The topological defects of $F=2$ condensates have been studied further by the authors of Refs. \cite{Makela06,Huhtamaki09,Kobayashi09}.
Experimental studies of $F=2$ BECs have been advancing in the past ten years.
Experiments on $F=2$ ${}^{87}$Rb atoms
cover topics such as spin dynamics \cite{Schmaljohann04,Chang04,Kuwamoto04,Kronjager06,Kronjager10}, creation of skyrmions \cite{Leslie09a}, spin-dependent inelastic collisions \cite{Tojo09}, amplification of fluctuations \cite{Klempt09,Klempt10}, spontaneous breaking of spatial and spin symmetry \cite{Scherer10}, and atomic homodyne detection \cite{Gross11}. An $F=2$ spinor condensate of ${}^{23}$Na atoms has been obtained experimentally \cite{Gorlitz03}, but it has a much shorter lifetime than $F=2$ rubidium condensates.
In this work, we study the dynamical stability of nonstationary states of homogeneous
$F=2$ spinor condensates. The stability of stationary states has been examined
both experimentally \cite{Klempt09,Klempt10,Scherer10} and theoretically \cite{Martikainen01,Ueda02}.
Interestingly, the experimental studies show that the observed instability of the $|m_F=0\rangle$ state can be used to amplify vacuum fluctuations \cite{Klempt10} and to analyze symmetry breaking \cite{Scherer10}
(see Refs. \cite{Lamacraft07,Leslie09b} for related studies in an $F=1$ system).
The stability of nonstationary states of spinor condensates, on the other hand, has received little attention. Previous studies on the topic concentrate on $F=1$ condensates \cite{Matuszewski08,Matuszewski09,Matuszewski10,Zhang05,Makela11}.
Here we extend the analysis of the authors of Ref. \cite{Makela11} to an $F=2$ rubidium condensate and present results
concerning the magnetic field dependence of the excitation spectrum and stability.
Although we concentrate on the stability of ${}^{87}$Rb condensates, many of the excitation spectra and stability conditions given in this article are not specific to rubidium condensates but have a wider applicability.
We show that, in comparison with an $F=1$ system, the stability analysis of an $F=2$ condensate is considerably
more complicated. This is partly due to the presence of a spin-singlet term in the energy functional of the latter system, but the main reason for the increased complexity is seen to be the much larger number of states available in an $F=2$ condensate.
This article is organized as follows. Section \ref{sec:overview} introduces the system and presents the Hamiltonian and equations of motion. In Sec. \ref{sec:stability} the Bogoliubov analysis of nonstationary states is introduced. This method is applied to study the stability both in the presence and absence of a magnetic field.
In this section it is also described how Floquet theory can be used in the stability analysis.
In Sec. \ref{sec:g2not0} the stability is studied under the (physically motivated) assumption that one of the interaction coefficients vanishes. Finally, Sec. \ref{sec:conclusions} contains the concluding remarks.
\section{Theory of a spin-2 condensate}\label{sec:overview}
The order parameter of a spin-$2$ Bose-Einstein condensate can be written as $\psi=(\psi_2,\psi_{1},\psi_{0},\psi_{-1},\psi_{-2})^T$, where $T$ denotes the transpose. The normalization is $\sum_{m=-2}^2|\psi_{m}|^2=n$, where $n$ is the total particle density. We assume that the trap confining the condensate is such that all the
components of the hyperfine spin can be trapped simultaneously and are degenerate
in the absence of a magnetic field. This can be readily achieved in experiments \cite{Stamper-Kurn98}.
If the system is exposed to an external magnetic field which
is parallel to the $z$ axis, the energy functional reads
\begin{align}
\label{energy}
&E[\psi] =\!\! \int d{\bm r}
\left[ \langle \hat{h}\rangle +\frac{1}{2}\left(g_0 n^2 + g_1 \langle\hat{\mathbf{F}}\rangle^2
+ g_2 |\Theta|^2\right)\right],
\end{align}
where $\hat{\mathbf{F}}=(\hat{F}_x,\hat{F}_y,\hat{F}_z)$ is the (dimensionless) spin operator of a spin-2 particle.
$\Theta$ describes singlet pairs and is given by $\Theta=2\psi_2\psi_{-2}-2\psi_1\psi_{-1}+\psi_0^2$. It can also be written as $\Theta=\psi^T e^{-i\pi \hat{F}_y}\psi$. The single-particle Hamiltonian $\hat{h}$ reads
\begin{align}
\label{h}
\hat{h}= -\frac{\hbar^2 \nabla^2}{2m} + U(\mathbf{r}) -\mu-p\hat{F}_z+q\hat{F}_z^2.
\end{align}
Here $U$ is the external trapping potential, $\mu$ is the chemical potential, and
$p=-g\mu_{\rm B}B$ is the linear Zeeman term. In the last of these $g$ is the Land\'e hyperfine $g$-factor, $\mu_{\rm B}$ is the Bohr magneton, and $B$ is the external magnetic field.
The last term in Eq. (\ref{h}) is the quadratic Zeeman term, $q=-(g\mu_{\rm B}B)^2/E_{\rm hf}$, where $E_{\rm hf}$ is the hyperfine splitting. The sign of $q$ can be controlled experimentally by using a linearly polarized microwave field \cite{Gerbier06}. In this article we consider both
positive and negative values of $q$.
The strength of the spin-independent interaction is characterized by $g_0=4\pi \hbar^2(4a_2+3a_4)/7m$, whereas $g_1=4\pi \hbar^2(a_4-a_2)/7m$ and $g_2=4\pi\hbar^2[(a_0-a_4)/5-2(a_2-a_4)/7]$ describe spin-dependent scattering. Here $a_F$ is the $s$-wave scattering length for two atoms colliding with total angular momentum $F$.
In the case of $^{87}$Rb, we calculate $g_0$ using the scattering lengths given in Ref.\ \cite{Ciobanu00}, and $g_1$ and $g_2$ are calculated using the experimentally measured scattering length differences from Ref.\ \cite{Widera06}.
Two important quantities characterizing the state $\psi$ are the spin vector
\begin{align}
\mathbf{f}(\mathbf{r})= \frac{\psi^\dag(\mathbf{r}) \hat{\mathbf{F}} \psi(\mathbf{r})}{n(\mathbf{r})},
\end{align}
and the magnetization in the direction of the magnetic field
\begin{align}
\label{Mz}
M_z= \frac{\int d\mathbf{r}\,n(\mathbf{r}) f_z (\mathbf{r})}{\int d\mathbf{r}\,n(\mathbf{r})}.
\end{align}
The length of $\mathbf{f}$ is denoted by $f$.
For rubidium the magnetic dipole-dipole interaction is weak and consequently the magnetization
is a conserved quantity. The Lagrange multiplier related to the conservation of magnetization
can be included into $p$.
The time evolution equation obtained from Eq. (\ref{energy}) is
\begin{align}
i\hbar \frac{\partial }{\partial t}\psi =\hat{H}[\psi] \psi,
\end{align}
where
\begin{align}
\label{H}
\hat{H}[\psi]= \hat{h}+ g_0 \psi^\dag\psi +g_1 \langle\hat{\mathbf{F}}\rangle\cdot\hat{\mathbf{F}} + g_2 \Theta \hat{{\mathcal{T}}}.
\end{align}
Here $\hat{{\mathcal{T}}}=e^{-i\pi \hat{F}_y}\hat{C}$ is the time-reversal operator, where $\hat{C}$ is the
complex conjugation operator.
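For readers who wish to experiment with these equations numerically, the following minimal Python sketch (an illustration only, not part of the analysis of this article; all numerical values are placeholders) constructs the spin-2 matrices, the singlet amplitude $\Theta$, and the right-hand side of the time evolution equation for a homogeneous condensate, for which the kinetic and trap terms drop out:
\begin{verbatim}
import numpy as np

# Spin-2 matrices in the basis (m_F = 2, 1, 0, -1, -2).
cp = np.array([2.0, np.sqrt(6.0), np.sqrt(6.0), 2.0])  # <m+1|F_+|m>
Fp = np.diag(cp, 1)                                    # raising operator
Fx = (Fp + Fp.T) / 2.0
Fy = (Fp - Fp.T) / 2.0j
Fz = np.diag([2.0, 1.0, 0.0, -1.0, -2.0])

# exp(-i*pi*Fy) is antidiagonal with entries (1, -1, 1, -1, 1).
Upi = np.fliplr(np.diag([1.0, -1.0, 1.0, -1.0, 1.0]))

def theta(psi):
    # Singlet amplitude Theta = psi^T exp(-i*pi*Fy) psi (no conjugation).
    return psi @ Upi @ psi

def rhs(psi, g0, g1, g2, p, q, hbar=1.0):
    # dpsi/dt from i*hbar dpsi/dt = H[psi] psi for a homogeneous state;
    # the chemical potential is dropped (it only adds a global phase).
    n = np.vdot(psi, psi).real                            # total density
    f = [np.vdot(psi, F @ psi).real for F in (Fx, Fy, Fz)]  # <F>
    H = (-p * Fz + q * Fz @ Fz + g0 * n * np.eye(5)
         + g1 * (f[0] * Fx + f[1] * Fy + f[2] * Fz))
    # The term g2*Theta*T(psi) is added separately, because the
    # time-reversal operator T = exp(-i*pi*Fy) C is antilinear.
    return (-1j / hbar) * (H @ psi + g2 * theta(psi) * (Upi @ psi.conj()))
\end{verbatim}
As a quick check, \texttt{theta(psi)} vanishes for the state $\psi_{2;-1}$ defined in the next section, since only the $m_F=2$ and $m_F=-1$ components are populated there.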
\section{Stability of nonstationary states when $g_2\not=0$}\label{sec:stability}
The stability analysis is performed in a basis where the state in question is time independent. This requires knowledge of the time evolution operator of the state; since we are interested in analytical calculations, a closed-form expression for this operator is needed. To obtain the time evolution operator analytically, the Hamiltonian has to be time independent. In particular, the singlet term $\Theta$ should not depend on time. This is clearly the case if the time evolution of the state is such that $\Theta$ vanishes at all times, and we now study this case.
We define a state
\begin{align}
\label{eq:psiparallel}
\psi_{2;-1}=
\sqrt{\frac{n}{3}}
\begin{pmatrix}
\sqrt{1+f_z}\\
0\\
0\\
\sqrt{2- f_z}\\
0
\end{pmatrix},\quad -1\leq f_z\leq 2.
\end{align}
For this state $\Theta=0$, $\langle \hat{F}_x\rangle=\langle \hat{F}_y\rangle=0$, and $\langle\hat{F}_z\rangle =nf_z$. Furthermore, the populations of the state $\psi_{2;-1}$ remain unchanged during the time evolution
determined by the Hamiltonian (\ref{H}). Consequently, $\Theta=0$ throughout the time evolution.
The state $\psi_{2;-1}$ with $f_z=0$, called the cyclic state, is a ground state at zero magnetic field \cite{Ciobanu00}. The creation of vortices with fractional winding number in states of the form $\psi_{2;-1}$ has been discussed by the authors of Ref. \cite{Huhtamaki09}. The stability properties of the state
$\psi_{1;-2}=\sqrt{n}(0,\sqrt{2+f_z},0,0,\sqrt{1-f_z})^T/\sqrt{3}$ are similar to those of $\psi_{2;-1}$ and will therefore not be studied separately.
The Hamiltonian giving the time evolution of $\psi_{2;-1}$ is
\begin{align}
\label{eq:Hparallel}
\hat{H}[\psi_{2;-1}]=g_0 n-\mu +(g_1 n f_z - p)\hat{F}_z +q\hat{F}_z^2,
\end{align}
where we have set $U=0$ as the system is assumed to be homogeneous.
This is of the same form as the Hamiltonian of an $F=1$ system discussed by the authors of Ref. \cite{Makela11}.
The time evolution operator of $\psi_{2;-1}$ is given by
\begin{align}
\label{eq:Uparallel}
\hat{U}_{2;-1}(t)= e^{-i t \hat{H}[\psi_{2;-1}]/\hbar}.
\end{align}
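Since the Hamiltonian (\ref{eq:Hparallel}) is diagonal in the $m_F$ basis, the exponential in Eq. (\ref{eq:Uparallel}) is simply a matrix of phases. The following short check (placeholder parameter values, in units where $\hbar=n=1$) verifies that the populations of $\psi_{2;-1}$ are indeed constant:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

Fz = np.diag([2.0, 1.0, 0.0, -1.0, -2.0])
g0n, g1n, p, q, fz, t = 1.0, 0.05, 0.2, 0.1, 0.5, 3.0  # placeholders
H = g0n * np.eye(5) + (g1n * fz - p) * Fz + q * Fz @ Fz  # Eq. (8), mu = 0
U = expm(-1j * t * H)                   # diagonal unitary: pure phases
psi = np.sqrt(1.0 / 3.0) * np.array(
    [np.sqrt(1.0 + fz), 0.0, 0.0, np.sqrt(2.0 - fz), 0.0])
print(np.allclose(np.abs(U @ psi), np.abs(psi)))        # True
\end{verbatim}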
We analyze the stability in a basis where the state $\psi_{2;-1}$ is time independent.
In the new basis, the energy of an arbitrary state $\phi$ is given by \cite{Makela11}
\begin{align}
\label{Erot}
E^{\textrm{new}}[\phi]&=E[\hat{U}_{2;-1}\phi]+i\hbar\langle\phi|\left(\frac{\partial}{\partial t}\hat{U}^{-1}_{2;-1}\right)\hat{U}_{2;-1}\phi\rangle,
\end{align}
and the time evolution of the components of $\phi$ can be obtained from the equation
\begin{align}\label{variE}
i\hbar\frac{\partial\phi_\nu}{\partial t}=\frac{\delta E^{\textrm{new}}[\phi]}{\delta\phi_\nu^*},\quad \nu =-2,-1,0,1,2.
\end{align}
We replace $\phi\rightarrow \psi_{2;-1} +\delta\psi$ in the time evolution equation (\ref{variE}) and expand the resulting equations to first order in $\delta\psi$. The perturbation $\delta\psi=(\delta\psi_2,\delta\psi_1,\delta\psi_0,\delta\psi_{-1},\delta\psi_{-2})^T$ is written as
\begin{align*}
\delta\psi_j=\sum_{\mathbf{k}} \left[ u_{j;\mathbf{k}}(t)\,e^{i\mathbf{k}\cdot\mathbf{r}}
-v_{j;\mathbf{k}}^{*}(t)\, e^{-i\mathbf{k}\cdot\mathbf{r}}\right],
\end{align*}
where $j=-2,-1,0,1,2$.
A straightforward calculation gives the differential equation for the time evolution of the perturbations as
\begin{align}
i\hbar\frac{\partial}{\partial t}
\begin{pmatrix}
\mathbf{u}_\mathbf{k}\\
\mathbf{v}_\mathbf{k}
\end{pmatrix}
&=\hat{B}_{2;-1}
\begin{pmatrix}
\mathbf{u}_\mathbf{k}\\
\mathbf{v}_\mathbf{k}
\end{pmatrix},\\
\label{HBG}
\hat{B}_{2;-1} &=\begin{pmatrix}
\hat{X} &- \hat{Y}\\
\hat{Y}^* & -\hat{X}^*
\end{pmatrix},
\end{align}
where
$\mathbf{u}_\mathbf{k}=(u_{2;\mathbf{k}},u_{1;\mathbf{k}},u_{0;\mathbf{k}},u_{-1;\mathbf{k}},
u_{-2;\mathbf{k}})^T$, $\mathbf{v}_\mathbf{k}$ is defined similarly, and the $5\times 5$ matrices $\hat{X}$ and $\hat{Y}$ are
\begin{align}
\nonumber
\hat{X} =&\,\, \epsilon_k + g_0 |\psi_{2;-1}\rangle\langle\psi_{2;-1}|
+g_1 \!\!\!\!\sum_{j=x,y,z} |\psi_{2;-1}^j (t)\rangle\langle\psi_{2;-1}^j (t)| \\
\label{X}
&+2g_2 |\psi_{2;-1}^\textrm{s} (t)\rangle\langle\psi_{2;-1}^\textrm{s} (t)|, \\
\label{Y}
\hat{Y} =&\,\, g_0 |\psi_{2;-1}\rangle\langle\psi^*_{2;-1}|
+g_1 \!\!\!\!\sum_{j=x,y,z} |\psi_{2;-1}^j (t)\rangle\langle (\psi_{2;-1}^{j})^*(t)|.
\end{align}
Here we have defined
\begin{align}
\label{epsilonk}
\epsilon_k &= \, \frac{\hbar^2 k^2}{2m},\\
\psi_{2;-1}^j (t) &= \, \hat{U}_{2;-1}^\dag(t)\hat{F}_j \hat{U}_{2;-1} (t)\psi_{2;-1},\quad j=x,y,z,\\
\psi_{2;-1}^\textrm{s} (t) &= \, \hat{U}_{2;-1}^T(t)e^{-i\pi \hat{F}_y} \hat{U}_{2;-1} (t)\psi_{2;-1}.
\end{align}
In the rest of the article we call the operator determining the time evolution of the perturbations the Bogoliubov matrix. In the present case, $\hat{B}_{2;-1}$ is the Bogoliubov matrix of $\psi_{2;-1}$.
It is possible to write $\hat{B}_{2;-1}$ as a direct sum of three operators
\begin{align}
\hat{B}_{2;-1} (t) &=\hat{B}_{2;-1}^{4}\oplus \hat{B}_{2;-1}^3(t) \oplus \hat{B}_{2;-1}^{3'}(t),\\
\hat{B}_{2;-1}^{3'} &=-(\hat{B}_{2;-1}^3)^*.
\end{align}
Here $\hat{B}_{2;-1}^{4}$ is a time independent $4\times 4$ matrix and $\hat{B}_{2;-1}^3$
is a time-dependent $3\times 3$ matrix. The bases in which these operators are defined are given in Appendix \ref{sec:appendixa}.
The time-dependent terms of $\hat{B}_{2;-1}^3$ are proportional to
$e^{\pm i \ell q t/\hbar}$, where $\ell=2,4$, or $6$ (not to be confused with the wave number $k$), and consequently the system is periodic with minimum period $T=\pi\hbar/q$. Hence it is possible to use Floquet theory to analyze the stability of the system \cite{Makela11}. In the following we first calculate the eigenvalues of $\hat{B}_{2;-1}^{4}$, then those of $\hat{B}_{2;-1}^3$ and $ \hat{B}_{2;-1}^{3'}$ in the case $q=0$, and finally we discuss the general case $q\not =0$ using Floquet theory.
\subsection{Eigenvalues of $\hat{B}_{2;-1}^{4}$}
First we calculate the eigenvalues and eigenvectors of $\hat{B}_{2;-1}^{4}$. This
operator is independent of $q$. The eigenvalues are
\begin{align}
\nonumber
\label{psi2m1omega1234}
\hbar\omega_{1,2,3,4} &=\pm\Big[\epsilon_k\Big(\epsilon_k +g_0 n+g_1 n(2+f_z)\\
&\pm n\sqrt{[g_0-g_1 (2+f_z)]^2+4 g_0 g_1 f_z^2}\Big)\Big]^{1/2}.
\end{align}
Here we use a labeling such that $++,-+,+-$, and $--$ correspond to $\omega_1,\omega_2,\omega_3$, and $\omega_4$, respectively. Now $\omega_{1,2}$ have a non-vanishing
imaginary part only if $g_0$ and $g_1$ are both negative, while $\omega_{3,4}$ have an imaginary component if $g_0$ and $g_1$ are not both positive. Consequently, these modes are stable for rubidium for which $g_0,g_1>0$.
The eigenvectors can be calculated straightforwardly, see Appendix \ref{sec:appendixa}.
The eigenvectors, like the eigenvalues, are independent of $g_2$.
The perturbations corresponding to the eigenvectors of $\hat{B}_{2;-1}^{4}$ can be written as
\begin{align}
\delta\psi^{1,2,3,4}(\mathbf{r},\mathbf{k};t) = C_{1,2,3,4}(\mathbf{r},\mathbf{k};t)\,\psi_{2;-1},
\end{align}
where the $C_j$'s include all position, momentum, and time dependence.
These change the total density of the condensate and are therefore called density modes.
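The reality conditions stated above are easy to verify numerically. A minimal sketch evaluating Eq. (\ref{psi2m1omega1234}) (illustrative parameter values; energies in units of $|g_1|n$):
\begin{verbatim}
import numpy as np

def omega_density(eps, g0n, g1n, fz):
    # hbar*omega_{1,2,3,4} of the density modes, kept complex so that
    # an instability shows up as a nonzero imaginary part.
    root = np.sqrt((g0n - g1n * (2.0 + fz)) ** 2
                   + 4.0 * g0n * g1n * fz ** 2 + 0j)
    modes = []
    for sign in (+1.0, -1.0):
        inner = eps * (eps + g0n + g1n * (2.0 + fz) + sign * root)
        modes += [np.sqrt(inner), -np.sqrt(inner)]
    return np.array(modes)

# Rubidium-like signs g0, g1 > 0: the spectrum is purely real (stable).
print(omega_density(eps=0.3, g0n=20.0, g1n=1.0, fz=0.5))
\end{verbatim}
Scanning negative values of \texttt{g0n} or \texttt{g1n} reproduces the instability conditions quoted above.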
\subsection{Eigenvalues of $\hat{B}_{2;-1}^3$ and $\hat{B}_{2;-1}^{3'}$ at $q=0$}
In the absence of an external magnetic field $\hat{B}_{2;-1}^3$ is time independent.
The eigenvalues of $\hat{B}_{2;-1}^{3'}$ can be obtained from
those of $\hat{B}_{2;-1}^3$ by complex conjugating and changing the sign. For this
reason we give only the eigenvalues of $\hat{B}_{2;-1}^3$:
\begin{align}
\label{psi2m1omega5}
\hbar\omega_{5} &= \epsilon_k +2 g_2 n,\\
\label{psi2m1omega67}
\hbar\omega_{6,7} &=\frac{1}{2}\Big[g_1 n f_z
\pm \sqrt{(2\epsilon_k-g_1 n f_z)^2+ 16 g_1 n\epsilon_k}\Big].
\end{align}
The eigenvalues $\hbar\omega_6$ and $\hbar\omega_7$ have a non-vanishing
imaginary part if $g_1<0$. For rubidium all eigenvalues are real.
There are two gapped excitations: at $\epsilon_k=0$ we get $\hbar\omega_5=2g_2 n$ and $\hbar\omega_{6}
\,(\hbar\omega_{7})=g_1 nf_z$ if $g_1 f_z>0$ $(g_1 f_z<0)$.
The eigenvectors are given in Appendix A.
The corresponding perturbations become
\begin{align}
\label{omega3}
\delta\psi^{5}(\mathbf{r},\mathbf{k};t) &= C_5\,
\begin{pmatrix}
0\\
\sqrt{2-f_z}\\
0\\
0\\
-\sqrt{1+f_z}
\end{pmatrix},\\
\delta\psi^{6,7}(\mathbf{r},\mathbf{k};t) &= \sum_\mathbf{k} C_{6,7} \,
\begin{pmatrix}
0\\
e^{i \mathbf{k}\cdot \mathbf{r}} g_1 n\sqrt{(2-f_z)(1+f_z)}\\
e^{-i \mathbf{k}\cdot \mathbf{r}}\sqrt{\frac{3}{2}}(\epsilon_k+2 g_1 n-\hbar\omega_{6,7})\\
0\\
e^{i \mathbf{k}\cdot \mathbf{r}} g_1 n (2-f_z)
\end{pmatrix},
\end{align}
where $C_{5,6,7}$ are functions of $\mathbf{r},\mathbf{k}$, and $t$.
These modes change both the direction of the spin and magnetization and are therefore called spin-magnetization modes.
\subsection{Non-vanishing magnetic field}
If $q\not =0$, the stability can be analyzed using Floquet theory due to the
periodicity of $\hat{B}_{2;-1}^3$ \cite{Makela11}. We denote the time evolution operator
determined by $\hat{B}_{2;-1}^3$ by $\hat{U}_{2;-1}^3$.
According to the Floquet theorem (see, e.g., Ref. \cite{Chicone}), $\hat{U}_{2;-1}^3$ can be written as
\begin{align}
\hat{U}_{2;-1}^3(t)=\hat{M}(t) e^{-i t \hat{K}},
\end{align}
where $\hat{M}$ is a periodic matrix with minimum period $T$ and $\hat{M}(0)=\hat{\textrm{I}}$,
and $\hat{K}$ is some time-independent matrix.
At times $t=nT$, where $n$ is an integer, we get $\hat{U}_{2;-1}^3(nT)=e^{-i n T \hat{K}}$. The eigenvalues of $\hat{K}$ determine the stability of the system.
We say that the system is unstable if at least one of the eigenvalues of $\hat{K}$ has a positive imaginary part. We calculate the eigenvalues $\{\hbar\omega\}$ of $\hat{K}$ from the equation
\begin{align}
\hbar\omega=\hbar\omega^\textrm{r}+i \hbar\omega^\textrm{i}=i\hbar\frac{\ln\lambda}{T},
\end{align}
where $\{\lambda\}$ are the eigenvalues of $\hat{U}_{2;-1}^3(T)$.
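In practice, the monodromy matrix $\hat{U}_{2;-1}^3(T)$ is obtained by integrating the Bogoliubov equations numerically over one period. A generic sketch of this procedure (the explicit $3\times3$ matrix $\hat{B}_{2;-1}^3(t)$ of Appendix \ref{sec:appendixa} must be supplied by the user as the function \texttt{B\_of\_t}):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def floquet_frequencies(B_of_t, T, dim, hbar=1.0):
    # Integrate i*hbar dX/dt = B(t) X over one period from X(0) = 1
    # and return omega = i*log(lambda)/T for the eigenvalues lambda
    # of the monodromy matrix X(T).  Im(omega) > 0 signals a
    # dynamical instability.
    def rhs(t, y):
        X = y.reshape(dim, dim)
        return ((-1j / hbar) * B_of_t(t) @ X).ravel()
    y0 = np.eye(dim, dtype=complex).ravel()
    sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)
    monodromy = sol.y[:, -1].reshape(dim, dim)
    return 1j * np.log(np.linalg.eigvals(monodromy)) / T
\end{verbatim}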
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.7]{psi2m1fig_rev.eps}
\end{center}
\caption{(Color online) The positive imaginary part $\omega^\textrm{i}$ related to $\hat{U}_{2;-1}(T)$ for different values of the magnetic field parameter $q$.
The unit of $\omega^\textrm{i}$ is $|g_1|n/\hbar$. Note that the scales of $\epsilon_k$ and $\omega^\textrm{i}$ are not equal in the top and bottom rows. Note also that the scale of the $q=5|g_1|n$ figure
is shifted with respect to the scale of the $q=|g_1|n$ case. The solid white line gives the approximate location of the fastest-growing instability, and the dashed white line corresponds to the largest possible size of a
stable condensate, see Eq. (\ref{lambda2m1}) and Table \ref{table}.
\label{fig:psi2m1}}
\end{figure}
We plot $\omega^\textrm{i}$ for several values of the magnetic field in Fig. \ref{fig:psi2m1}.
By comparing this to the case of a rubidium condensate with $g_2=0$,
we found that the instabilities are essentially determined by $g_1$; the effect of $g_2$ is negligible.
The eigenvectors of $\hat{U}_{2;-1}^3(T)$ correspond to perturbations which affect both
spin direction and magnetization.
With the help of numerical results we find that a good fitting formula is given by
\begin{align}
\hbar\omega^\textrm{i}
&\approx \textrm{Im}\Big\{
\sqrt{(\epsilon_k+q)\big[\epsilon_k+q+\tfrac{5}{3}(2-f_z) g_1 n\big]}\Big\},\quad q<0,\\
\nonumber
&\approx \textrm{Im}\Big\{\sqrt{(\epsilon_k-2q+ g_1 n)^2-\tfrac{4}{9}|(f_z-2)(f_z+1)g_1 n|}\Big\},\quad q>0.
\end{align}
We see that for $q>0$ the fastest-growing instability is located approximately at $\epsilon_k=\max\{0,2q-g_1 n\}$ regardless of the value of $f_z$. For $q<0$ the location
of this instability becomes magnetization dependent and is approximately given
by $\epsilon_k=\max \{0, |q|-5(2-f_z)|g_1|n/6\}$.
The values of $\epsilon_k$ corresponding to unstable wavelengths are bounded above approximately by the inequality $\epsilon_k\leq (3|q|+q)/2$. Therefore, the state $\psi_{2;-1}$
is stable if the condensate is smaller than the shortest unstable wavelength
\begin{align}
\lambda_{2;-1} =\frac{2\pi\hbar}{\sqrt{m(3|q|+q)}}.
\label{lambda2m1}
\end{align}
At $q=0$ the system is stable regardless of its size.
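For orientation, Eq. (\ref{lambda2m1}) can be evaluated with illustrative numbers (the value of $q$ below is a placeholder chosen only to indicate the order of magnitude):
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34          # J s
h = 2.0 * np.pi * hbar
m_Rb = 1.44316060e-25           # kg, mass of 87Rb

def lambda_2m1(q):
    # Shortest unstable wavelength of psi_{2;-1}, Eq. (34); q in joules.
    return 2.0 * np.pi * hbar / np.sqrt(m_Rb * (3.0 * abs(q) + q))

q = h * 10.0                    # placeholder Zeeman shift, q/h = 10 Hz
print(lambda_2m1(q))            # ~1e-5 m: condensates below ~10 um stable
\end{verbatim}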
Figure \ref{fig:psi2m1} shows that the shape of the unstable region
depends strongly on the sign of $q$. This can be understood qualitatively
with the help of the energy functional of Eq. (\ref{energy}).
We choose $\psi_{\textrm{ini}}=\sqrt{n}|m_F=-1\rangle$ to be the initial state of the system and assume that the
final state is of the form
\begin{align}
\psi_{\textrm{fin}}(x,y,z)=
\begin{cases}
\sqrt{n}|0\rangle, & x\, \textrm{mod}\, 2L \in [0,L),\\
\sqrt{n}|-2\rangle, & x\, \textrm{mod}\, 2L \in [L,2L).
\end{cases}
\end{align}
Then the energy of the initial state is $E_{\textrm{ini}}=g_1 n/2+q$ (dropping constant terms),
while the energy of the final configuration
is $E_{\textrm{fin}}=g_1 n+ 2q$. If $g_1,q>0$, $E_{\textrm{ini}}<E_{\textrm{fin}}$ and domain formation is suppressed for energetic reasons.
If, on the other hand, $g_1>0$ and $q<0$, the energy of the final state is smaller than the energy of the initial state if $q<-g_1 n/2$ and domain formation is possible.
\section{Stability of nonstationary states when $g_2=0$}\label{sec:g2not0}
For rubidium the value of $g_2$ is small in comparison with $g_0$ and $g_1$.
Consequently, it can be assumed that this term has only a minor effect on the stability of the system. This assumption is supported by the results of the previous section.
In the following we will therefore study the stability in the limit $g_2=0$.
This makes it possible to obtain an analytical expression for the time evolution
operator also for states other than $\psi_{2;-1}$. First we discuss a state that has three nonzero components,
and then two states that have two nonzero components.
\subsection{Nonzero $\psi_2$, $\psi_0$, and $\psi_{-2}$}
We consider a state of the form
\begin{align}
\psi_{2;0;-2}=
\frac{\sqrt{n}}{2}\begin{pmatrix}
\sqrt{2-2 \rho_0+f_z}\\
0\\
2e^{i\theta} \sqrt{\rho_0}\\
0\\
\sqrt{2-2 \rho_0-f_z}
\end{pmatrix},\quad |f_z|\leq 2-2\rho_0.
\end{align}
For this state $\langle\hat{F}_x\rangle=\langle\hat{F}_y\rangle=0$ and
$\langle\hat{F}_z\rangle= nf_z$. The Hamiltonian and time evolution operator of this state
are given by Eqs. (\ref{eq:Hparallel}) and (\ref{eq:Uparallel}), respectively.
The equations determining the time evolution of the perturbations can be obtained
from Eqs. (\ref{X}) and (\ref{Y}) by replacing $\psi_{2;-1}$ with $\psi_{2;0;-2}$
and setting $g_2=0$. In this way, we obtain a time dependent Bogoliubov matrix
$\hat{B}_{2;0;-2}$, which is a function of the population of the zero component $\rho_0$.
The Bogoliubov matrix can now be written as
\begin{align}
\label{eq:HB20m2}
\hat{B}_{2;0;-2} (t) =\hat{B}_{2;0;-2}^6\oplus \hat{B}_{2;0;-2}^4(t),
\end{align}
where $\hat{B}_{2;0;-2}^6$ is time independent and
$\hat{B}_{2;0;-2}^4$ is periodic in time with period $T=\pi\hbar/q$.
The bases in which these operators are defined are given in Appendix B.
The eigenvalues of $ \hat{B}_{2;0;-2}^6$ are
\begin{align}
\hbar\omega_{1,2} =& \pm\epsilon_k,\\
\nonumber
\hbar\omega_{3,4,5,6} =& \pm\Big[\epsilon_k^2 +\epsilon_k [g_0+4g_1(1-\rho_0)]n \\
&\pm \epsilon_k n\sqrt{[g_0-4g_1(1-\rho_0)]^2+4 g_0 g_1 f_z^2}\Big]^{1/2}.
\end{align}
Here $++,-+,+-$, and $--$ correspond to $\omega_3,\omega_4,\omega_5$,
and $\omega_6$, respectively.
These eigenvalues are always real if $g_0$ and $g_1$ are positive.
From the eigenvectors given in Appendix \ref{sec:appendixb} we see that $\omega_{3,4}$ are density modes and $\omega_{1,2}$ and $\omega_{5,6}$ are magnetization modes. All these are gapless excitations.
Note that the eigenvalues are independent of $\theta$.
We discuss next the stability properties determined by $\hat{B}_{2;0;-2}^4$. We consider first the special case $\rho_0=0$ and proceed then to the case $\rho_0>0$.
\subsubsection{Stability at $\rho_0=0$}
In the case $\rho_0=0$ a complete analytical solution of the excitation spectrum can be obtained.
In Appendix \ref{sec:appendixb} we show that by a suitable choice of basis the time dependence of the Bogoliubov matrix can be eliminated. The eigenvalues are
\begin{align}
\nonumber
\label{psi20m2omega78910}
&\hbar\omega_{7,8,9,10} = \frac{1}{2}\Big[\pm g_1 n f_z + 6q \\
&\pm\sqrt{4(\epsilon_k+g_1 n-3 q)^2
-(4-f_z^2)(g_1 n)^2} \Big].
\end{align}
These are gapped excitations and correspond to spin-magnetization modes (see Appendix \ref{sec:appendixb}).
If $g_1>0$, these eigenvalues have a non-vanishing complex part when $3q-2g_1 n\leq \epsilon_k \leq 3q$. This is possible only if $q$ is positive.
The location of the fastest-growing unstable mode, determined by $\epsilon_k=\max\{0,3q - g_1 n\}$, is independent of $f_z$. The maximal width of the unstable region in the $\epsilon_k$ direction, obtained at $f_z=0$,
is $2|g_1|n$. The state is stable if the system is smaller than the size given by
\begin{align}
\label{lambda20m2}
\lambda_{2;0;-2} =\frac{2\pi\hbar}{\sqrt{6mq}},\quad q>0.
\end{align}
If $q<0$, the state is stable regardless of the size of the condensate.
In Fig. \ref{fig:psi20m2} we plot the positive imaginary part of the eigenvalues (\ref{psi20m2omega78910}) for various values of $q$.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.86]{psi20m2fig_rev.eps}
\end{center}
\caption{The positive imaginary part $\omega^\textrm{i}$ related to the eigenvalues (\ref{psi20m2omega78910}) and
to $\hat{U}_{2;0;-2}(T)$
for various values of the quadratic Zeeman term $q$ and population $\rho_0$.
The unit of $\omega^\textrm{i}$ is $|g_1|n/\hbar$.
We have chosen $f_z=0$ as this choice gives the fastest-growing instabilities and the smallest size of a stable condensate. In the top row the dashed, dotted, and solid lines correspond to $q=1,2,3$, respectively, while in the bottom row they correspond to $q=-1,-2,-3$, respectively. We have set $\theta=0$ in $\psi_{2;0;-2}$ as
the stability was found to be independent of $\theta$.
\label{fig:psi20m2}}
\end{figure}
\subsubsection{Stability when $\rho_0>0$}
In the case $\rho_0>0$ the stability can be studied using Floquet theory.
The stability properties can be shown to be independent of the sign of $f_z$.
At $q=0$ the operator $\hat{B}_{2;0;-2}^4$ is time independent. The eigenvalues
can be obtained analytically but are not given here.
The eigenvalues show that in the absence of magnetic field
the state is stable in a rubidium condensate regardless of the value of $\rho_0$.
Figure \ref{fig:psi20m2} illustrates how the stability depends on the value of $q$ and the population $\rho_0$. We plot only the case $f_z=0$ as it gives the fastest-growing
instabilities and the smallest size of a stable condensate.
We found numerically that the stability properties are independent of the value of $\theta$. We have set $\theta=0$ in the calculations described here.
If $q>0$, the amplitude $\omega^\textrm{i}$ of the short-wavelength instabilities is suppressed as $\rho_0$ increases.
This can be understood with the help of the energy functional
\begin{align}
E_{2;0;-2} = \frac{1}{2} g_1 n f_z^2 + 8q (1-\rho_0).
\end{align}
If $q>0$, the energy decreases as $\rho_0$ increases.
Therefore there is less energy available to be converted into the kinetic energy
of the domain structure. From the top row of Fig. \ref{fig:psi20m2} it can be seen that Eq. (\ref{lambda20m2}) gives an upper bound for the size of a stable condensate also when the value of $\rho_0$ is larger than zero.
If $q<0$, the state is stable at $\rho_0=0$. The bottom row of Fig. \ref{fig:psi20m2} shows that now $\psi_{2;0;-2}$ becomes more unstable as $\rho_0$ grows. This is natural because the energy $E_{2;0;-2}$ grows as $\rho_0$ increases; the energy surplus can then be converted into the kinetic energy of the domains.
Figure \ref{fig:psi20m2} shows that Eq. (\ref{lambda20m2}) gives an upper bound for the size of a stable condensate also in the case $q<0$.
\subsection{Nonzero $\psi_1$ and $\psi_{-1}$}
As the next example we consider a state of the form
\begin{align}
\psi_{1;-1}=
\sqrt{\frac{n}{2}}\begin{pmatrix}
0\\
\sqrt{1+f_z}\\
0\\
\sqrt{1-f_z}\\
0
\end{pmatrix},\quad |f_z|\leq 1.
\end{align}
Also for this state $\langle\hat{F}_x\rangle=\langle\hat{F}_y\rangle=0$ and
$\langle\hat{F}_z\rangle= nf_z$ and therefore
the Hamiltonian and time evolution operator are given by Eqs. (\ref{eq:Hparallel}) and (\ref{eq:Uparallel}), respectively.
The Bogoliubov matrix reads
\begin{align}
\label{eq:HB1m1}
\hat{B}_{1;-1} (t) =\hat{B}_{1;-1}^6(t)\oplus \hat{B}_{1;-1}^4.
\end{align}
Here $\hat{B}_{1;-1}^6(t)$ is time dependent with period $T=\pi\hbar/q$ and
$\hat{B}_{1;-1}^4$ is time independent. The eigenvalues of $\hat{B}_{1;-1}^4$ are
\begin{align}
\nonumber
\hbar\omega_{1,2,3,4} =&\pm\Big[\epsilon_k\Big(\epsilon_k+ (g_0+g_1)n\\
& \pm n\sqrt{(g_0-g_1)^2+4 g_0 g_1 f_z^2}\Big)\Big]^{1/2}.
\end{align}
Now $++,-+,+-$, and $--$ correspond to $\omega_1,\omega_2,\omega_3$,
and $\omega_4$, respectively.
These are all gapless modes. For rubidium the eigenvalues are real.
In Appendix C we show that $\omega_{1,2}$ are density modes
and $\omega_{3,4}$ are magnetization modes.
We now turn to the eigenvalues of $\hat{B}_{1;-1}^6$.
At $q=0$, $\hat{B}_{1;-1}^6$ becomes time independent and the eigenvalues are
\begin{align}
\hbar\omega_{5,6} &= \pm\epsilon_k,\\
\nonumber
\hbar\omega_{7,8,9,10} &= \pm\frac{1}{\sqrt{2}}
\Big[2\epsilon_k^2+10\epsilon_k g_1 n +(g_1 n f_z)^2 \\
&\pm g_1 n \sqrt{(6\epsilon_k+g_1 n f_z^2)^2-8\epsilon_k f_z^2(4\epsilon_k-g_1 n)}\Big]^{1/2}.
\end{align}
For rubidium these are all real.
Two of the eigenvalues $\hbar\omega_{7,\ldots,10}$ have an energy gap $|g_1 n f_z|$.
These eigenvalues describe spin-magnetization modes.
For non-zero $q$ the stability can be analyzed using Floquet theory.
As in the previous section, the fastest growing instabilities are obtained
at $f_z=0$. This case can be studied analytically by changing basis as described in Appendix \ref{sec:appendixc}. The eigenvalues for the case $f_z=0$ are
\begin{align}
\label{omega1m1a}
&\hbar\omega_{5,6} = -3 q\pm \sqrt{(\epsilon_k+3 q)(\epsilon_k+ 2 g_1 n+3 q)},\\
\nonumber
\label{omega1m1b}
&\hbar\omega_{7,8,9,10} = 3q \pm \Big[(\epsilon_k+q)^2+4(\epsilon_k g_1 n+q^2)\\
&\pm 4\sqrt{[q^2+\epsilon_k (g_1 n+q)]^2-3 g_1 n q (\epsilon_k^2-q^2)}\Big]^{1/2}.
\end{align}
These are gapped excitations with a magnetic-field-dependent gap. In more detail,
at $\epsilon_k=0$ we get $\hbar\omega_{5,6}=-3q\pm\sqrt{3q(2 g_1 n+3q)}$ and
$\hbar\omega_{7,8,9,10}=3q\pm\sqrt{5q^2\pm 4q\sqrt{q(3 g_1 n+q)}}$.
For positive $q$, the fastest-growing instability
is determined by $\omega_8^{\textrm{i}}$ and is located approximately at $\epsilon_k =\textrm{max}\{0,q-3\}$; here and in the rest of this paragraph $\epsilon_k$ and $q$ are quoted in units of $g_1 n$. For negative $q$ there are three local maxima of $\omega^{\textrm{i}}$. The one with the largest amplitude is given by
$\omega^{\textrm{i}}_{7}$ and $\omega^{\textrm{i}}_{10}$ and is located at $\epsilon_k\approx \textrm{max}\{0,q^2(|q|-1)/(q^2+|q|+1)\}$. The second largest is given by
$\omega^{\textrm{i}}_5$ and is at $\epsilon_k\approx \textrm{max}\{0,3|q|+1\}$. Finally, the instability with the smallest amplitude is related to $\omega^{\textrm{i}}_8$ and is at
$\epsilon_k\approx \textrm{max}\{0,29 (10 |q|-1)/100\}$. In Fig. \ref{fig:psi1m1fig}
we plot the behavior of $\omega^\textrm{i}$ for $q=3$ and $q=-6$.
From Eqs. (\ref{omega1m1a}) and (\ref{omega1m1b}) it can be seen (see also Fig. \ref{fig:psi1m1fig})
that the state is stable if the size of the condensate is smaller than
\begin{align}
\label{lambda1m1}
\lambda_{1;-1} =\frac{2\pi\hbar}{\sqrt{2m(2|q|-q)}}.
\end{align}
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.75]{psi1m1fig_rev.eps}
\end{center}
\caption{The positive imaginary part $\omega^\textrm{i}$ of the eigenvalues (\ref{omega1m1a})
and (\ref{omega1m1b}) related to $\psi_{1;-1}$ for $q=3$ and $q=-6$.
The unit of $\omega^\textrm{i}$ is $|g_1|n/\hbar$.
We have chosen $f_z=0$ as it gives the fastest-growing instabilities and the smallest size of a stable system.
The solid and dashed lines correspond to $\omega_8^\textrm{i}$ and $\omega^{\textrm{i}}_5$, respectively, while the dotted line gives $\omega^{\textrm{i}}_7$ and $\omega^{\textrm{i}}_{10}$
[see Eqs. (\ref{omega1m1a}) and (\ref{omega1m1b})].
\label{fig:psi1m1fig}}
\end{figure}
\subsection{Nonzero $\psi_2$ and $\psi_{0}$}
As the final example we consider a state
\begin{align}
\psi_{2;0}=
\sqrt{\frac{n}{2}}\begin{pmatrix}
\sqrt{f_z}\\
0\\
\sqrt{2-f_z}\\
0\\
0
\end{pmatrix},\quad 0\leq f_z\leq 2.
\end{align}
As for other states considered in this article, now $\langle\hat{F}_x\rangle=\langle\hat{F}_y\rangle=0$ and
$\langle\hat{F}_z\rangle= nf_z$ and
the Hamiltonian and time evolution operator are given by Eqs. (\ref{eq:Hparallel}) and (\ref{eq:Uparallel}), respectively. We note that the stability properties of the states $\psi_{2;0}$ and
$\psi_{0;-2}=\sqrt{n}(0,0,\sqrt{2-f_z},0,\sqrt{f_z})^T/\sqrt{2}$ are similar. Therefore the latter
state will not be discussed in more detail.
The Bogoliubov matrix of $\psi_{2;0}$ reads
\begin{align}
\label{eq:HB20}
\hat{B}_{2;0} (t) =\hat{B}_{2;0}^2\oplus\hat{B}_{2;0}^4\oplus \hat{B}_{2;0}^{4'}(t),
\end{align}
where only $\hat{B}_{2;0}^{4'}$ is time dependent (with period $T=\pi\hbar/q$). The eigenvalues of $\hat{B}_{2;0}^2$ and $\hat{B}_{2;0}^4$ are
\begin{align}
\hbar\omega_{1,2} = &\pm \epsilon_k, \\
\nonumber
\hbar\omega_{3,4,5,6} =& \pm\Big[\epsilon_k^2+ \epsilon_k( g_0 n+2 g_1 n f_z)\\
&\pm \epsilon_k n\sqrt{(g_0-2 g_1 f_z)^2+4 g_0 g_1 f_z^2} \Big]^{1/2}.
\end{align}
In the lower equation, $++, -+, +-$, and $--$ correspond to $\omega_3,\omega_4,\omega_5$, and $\omega_6$, respectively.
These are gapless excitations. In Appendix \ref{sec:appendixd} we show that $\omega_{3,4}$ correspond to density modes, while $\omega_{1,2,5,6}$ are magnetization modes. For rubidium, these are all stable modes.
After a suitable change of basis the Bogoliubov matrix $\hat{B}_{2;0}^{4'}$ becomes
time independent, see Appendix \ref{sec:appendixd}. The eigenvalues of the new matrix are found to be
\begin{align}
\label{omega20}
&\hbar\omega_{7,8,9,10} = \pm\frac{1}{\sqrt{2}}\sqrt{s_1 \pm \sqrt{(2\epsilon_k+g_1 n f_z +2 q)s_2}},
\end{align}
where
\begin{align}
\nonumber
&s_1 = 2 \epsilon_k^2+(g_1 n f_z)^2 +4 \epsilon_k [(3-f_z)g_1 n +q]\\
\nonumber
&\hspace{0.7cm} - 8 f_z g_1 n q +2 q (6 g_1 n+5q),\\
\nonumber
& s_2 = f_z (g_1 n)^2[24(\epsilon_k+q)-10 \epsilon_k f_z -18 q f_z+ g_1 n f_z^2]\\
\nonumber
&\hspace{0.7cm} +32 q^2 [q+\epsilon_k +3 g_1 n (2- f_z)] - 16 g_1 n q \epsilon_k f_z.
\end{align}
Now $++,-+,+-$, and $--$ are related to $\omega_7,\omega_8,\omega_9$,
and $\omega_{10}$, respectively. These are gapped excitations and correspond to spin-magnetization modes.
These modes can be unstable for rubidium; an example of the behavior of the positive imaginary component of
$\omega_{7,8,9,10}$ is shown in Fig. \ref{fig:psi20fig}.
An upper bound for the size of a stable condensate is the same as in the case of $\psi_{1;-1}$, see
Eq. (\ref{lambda1m1}). With the help of Eq. (\ref{omega20}) it can be seen that the fastest-growing
instability is approximately at $\epsilon_k=\textrm{max}\{0,-2+0.9 q+0.04 f_z (1+q)\}$ when $q>0$ and
at $\epsilon_k=\textrm{max}\{0,|q|-3+1.3 f_z-0.16 f_z^2\}$ when $q<0$.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.65]{psi20fig_rev.eps}
\end{center}
\caption{(Color online) The positive imaginary part $\omega^\textrm{i}$ of the eigenvalues
(\ref{omega20}) related to $\psi_{2;0}$ for $q=5$ and $q=-2$.
The unit of $\omega^\textrm{i}$ is $|g_1|n/\hbar$. The solid white line gives the approximate location of the fastest-growing instability, and the dashed white line corresponds to the largest possible size of a
stable condensate, see Table \ref{table}.
\label{fig:psi20fig}}
\end{figure}
\begin{table}[h]
\begin{tabular}{|c|c|c|c|}
\hline
State & $q$ & Stable size & Fastest-growing instability ($\epsilon_k$)\\
\hline
\hline
$\psi_{2;-1},$ & $>0$ & $\frac{2\pi\hbar}{\sqrt{4mq}}$ & $2q-g_1 n$\\
\cline{2-4}
$\psi_{1;-2}$ &$<0$ & $\frac{2\pi\hbar}{\sqrt{2m|q|}}$ & $|q|-\frac{5}{6}(2\mp f_z)|g_1|n$\\
\hline
$\psi_{2;0;-2}$ & $>0$ & $\frac{2\pi\hbar}{\sqrt{6mq}}$ & $3q-g_1 n$\\
\cline{2-4}
& $<0$ & $\infty$ & -\\
\hline
$\psi_{1;-1}$ & $>0$ & $\frac{2\pi\hbar}{\sqrt{2mq}}$ & $q-3g_1 n$\\
\cline{2-4}
& $<0$ & $\frac{2\pi\hbar}{\sqrt{6m|q|}}$ & $\frac{q^2(|q|-g_1 n)}{q^2+|q| g_1 n+(g_1 n)^2}$\\
\hline
$\psi_{2;0},$ & $>0$ & $\frac{2\pi\hbar}{\sqrt{2mq}}$ & $-2+0.9 q+0.04 |f_z| (1+q)$\\
\cline{2-4}
$\psi_{0;-2}$ & $<0$ & $\frac{2\pi\hbar}{\sqrt{6m|q|}}$ & $|q|-3+1.3 |f_z| -0.16 f_z^2$\\
\hline
\end{tabular}
\caption{Summary of the results. Stable size gives the largest possible size of a stable
homogeneous condensate and the fastest-growing instability indicates the approximate value of $\epsilon_k$ corresponding to the fastest growing instability. If $q$ is such that the $\epsilon_k$ given in the table
is negative, the fastest-growing instability is at $\epsilon_k=0$. On the second line of the table,
the $-$ sign holds for $\psi_{2;-1}$ and the $+$ sign for $\psi_{1;-2}$.
\label{table}}
\end{table}
\section{Conclusions} \label{sec:conclusions}
In this article, we have studied the dynamical
stability of some nonstationary states of homogeneous $F=2$ rubidium
BECs. The states were chosen to be such
that the spin vector remains parallel to the magnetic field throughout the
time evolution, making it possible to study the stability analytically.
The stability analysis was done using the Bogoliubov approach in a frame of reference where the states were stationary. The states considered had two or three spin components populated simultaneously. These types of states were found to be stable in a rubidium condensate in the absence of a magnetic field, but a finite magnetic field makes them unstable. The wavelength and the growth rate of the instabilities depend on the strength of the magnetic field. The locations of the fastest-growing instabilities and the upper bounds for the sizes of stable condensates are given in Table \ref{table}.
For positive $q$, the most unstable state, in the sense that its upper bound for the size of a stable condensate is the smallest, is $\psi_{2;0;-2}$.
However, this is the only state that is stable when $q$ is negative. For $q<0$, the states giving the smallest size of a stable condensate are $\psi_{1;-1}$ and $\psi_{2;0}$.
In comparison with $F=1$ condensates, the structure of the instabilities is much richer in an $F=2$
condensate. In an $F=1$ system, there is only one type of state whose spin is parallel to the magnetic field.
The excitations related to this state can be classified into spin and magnetization excitations \cite{Makela11}.
In the present system, there are many types of states which are parallel to the magnetic field;
we have discussed six of these. In addition to the spin and magnetization excitations, there also exist modes which change spin and magnetization simultaneously. The increase in complexity can be attributed to the larger number of spin components.
The stability properties of the states discussed in this article can be studied straightforwardly in experiments.
These states had two or three non-zero components, a situation which can be readily achieved by current experimental means \cite{Ramanathan11}.
Furthermore, the stability of these states does not depend on the relative phases of the populated components,
making it unnecessary to prepare states with specific relative phases.
Finally, we note that the lifetime of an $F=2$
rubidium condensate is limited by hyperfine changing collisions \cite{Schmaljohann04}. Consequently, the instabilities are visible
only if their growth time is short compared to the lifetime of the condensate. We also remark that
the stability analysis was performed for a homogeneous condensate, whereas in experiments an inhomogeneous trapping potential is used. The stability properties can be sensitive to the shape of this potential \cite{Klempt09}.
\section{Introduction}
A large part of the visible matter in our universe is made up of protons and neutrons, collectively
called hadrons. Hadrons are made up of more fundamental particles called quarks and gluons. The
quantum theory for these particles is Quantum Chromodynamics (QCD). QCD is a strongly interacting theory, and the strength of the interaction becomes vanishingly small only at asymptotically high energies. For this reason, the quarks and gluons are not visible directly in our world and remain
confined within the hadrons. Lattice gauge theory has emerged as the most successful non-perturbative
tool to study QCD, with very precise lattice results available for hadron masses and decay constants which are
in excellent agreement with the experimental values~\cite{lat1}.
It is expected that at high enough temperatures that existed in the early universe, the hadrons would melt
into a quark gluon plasma (QGP) phase. Signatures of such a phase have been seen during the last decade in the heavy ion collision experiments at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. This is particularly exciting for the lattice theory community, which has been predicting such a phase transition for a long time \cite{lat2}. The formation of the QGP phase occurs at temperatures near
$\Lambda_{QCD}$, where QCD is strongly interacting, which means that the lattice is the most reliable
tool to understand the properties of the hot QCD medium. Over the past three decades, the lattice community
has contributed significantly to the understanding of the physics of heavy ion experiments and strongly interacting
matter under extreme conditions, in general. Lattice computations are
entering into the precision regime, where lattice data can be directly used for interpreting the experimental
results and set benchmarks for the heavy ion experiments at RHIC and at the ALICE facility in CERN. It is now generally
believed that the hot and dense matter created in the collision of two heavy nuclei at RHIC and ALICE equilibrates
within 1 fm/c of the initial impact. The equilibrated QGP medium then expands and cools down in the process,
ultimately forming hadrons at the chemical freezeout. The evolution of the fireball from its equilibration till
the chemical freezeout is described by relativistic hydrodynamics~\cite{hydro}. The QCD Equation of State (EoS) is
an input for the hydrodynamic equations and lattice can provide a non-perturbative estimate of this quantity from first principles.
The lattice data for the speed of sound in the QCD medium is also an important input for the hydrodynamic
study, once bulk viscosity is considered.
In this article, I have selected the most recent results from lattice QCD thermodynamics
that are relevant for the heavy ion phenomenology. I have tried to review the necessary background, but not attempted
to provide a comprehensive account of the development of the subject throughout these years.
I have divided this article into two major sections. The first section deals with QCD at finite temperature
and zero baryon density, where lattice methods are very robust. I have given a basic introduction to the lattice techniques,
and how the continuum limit is taken, which is essential to relate the lattice data with the real world experiments.
I have discussed the current understanding we have of the nature of QCD phase transition as a function of quark masses,
inferred from lattice studies.
Subsequently the different aspects of the hot QCD medium for physical quark masses are discussed; the EoS, the
nature and the temperature of the transition, and the behaviour of various thermodynamic observables in the different phases. In the study of thermodynamics, the contributions of the lighter u, d, and s quarks are usually considered.
The effect of heavier charm quarks on QCD thermodynamics is discussed in this section, in view of their relevance for the
heavy ion experiments at the LHC, where hydrodynamic evolution is expected to set in already at temperatures of about 500 MeV, and also for the physics of the early universe. The relevance of chiral symmetry for the QCD phase diagram and the effects of the chiral anomaly are discussed in
detail. The chiral anomaly is believed to have an important role in shaping the phase diagram and several lattice studies in the
recent years are trying to understand its effect. It is a difficult problem and I have tried to compile the recent results
and review the general understanding within the community, about how to improve upon them.
The second section is about lattice QCD at finite density, where there is an inherent short-coming of the lattice algorithms
due to the so-called sign problem. A brief overview of the different methods used and those being developed
by the lattice practitioners to circumvent this problem, is given. It is an active field of research, with a lot
of understanding of the origin and the severity of this problem gained in recent years, which is motivating
the search for its possible cure.
In the regime where the density of baryons is not too large, which is being probed by the experiments at RHIC, lattice
techniques have been used successfully to produce some interesting results. One important recent proposal is the first-principles determination of the chemical freezeout curve using experimental data on the electric charge fluctuations. This and the
lattice results on the fluctuations of different quantum numbers in the hot medium and the EoS at finite baryon density are discussed in detail. An important feature of the QCD phase diagram is the possible presence of a critical end-point for the chiral first order transition.
Since the critical end-point search is one of the main objectives at RHIC, I have reviewed the current lattice results on
this topic. The presence of the critical end-point is still not conclusively proven from lattice studies. It is a very challenging
problem, and I mention the further work in progress to address it effectively. Fermions
with exact chiral symmetry on the lattice are important in this context. I have discussed the recent successful development of fermion operators that have exact chiral symmetry even at finite density, which would be important for future studies of the critical end-point.
The signatures of the critical end-point could be detected in the experiments if the critical region is not separated from
the freezeout curve. It is thus very crucial to estimate the curvature of the critical line from first principles and I
devote an entire subsection to discuss the lattice results on this topic.
I apologize for my inability to include all the pioneering works that have firmly established this subject and also to review the extensive
set of interesting contemporary works. For a comprehensive review of the current activity in lattice thermodynamics, at
finite temperature and density, I refer to the excellent review talks of the Lattice conference, 2012 \cite{maria,gaarts}.
\section{QCD at finite temperature on the lattice}
The starting point of any thermodynamic study is the partition function. The QCD partition function
for $N_f$ flavours of quarks in the canonical ensemble is given as,
\begin{equation}
\mathcal{Z}_{QCD}(T,V)=\int \mathcal{D}U_\mu(x) \prod_{f=1}^{N_f} \det D_f~\mathrm{e}^{-S_G}
\end{equation}
where $D_f$ is the fermion operator for each flavour of quark $f$.
$U_\mu$ is the gauge link defined as $U_\mu(x)=\exp(-ig\int^x A_\mu(x')\, d x')$ in terms of
gauge fields $A_\mu$, which transform in the adjoint representation of the $SU(3)$ color group, and
$g$ is the strength of the gauge coupling.
$S_G$ is the gluon action in Euclidean space, with a finite temporal extent given by the inverse of the temperature of the system, $T$.
Lattice QCD involves discretizing the spacetime into a lattice with a spacing
denoted by $a$. The volume of the lattice is given as $V=N^3 a^3$, where $N$ is the number of lattice sites along each spatial direction, and the temperature is $T=1/(N_\tau a)$, where $N_\tau$ is the number of sites along the temporal direction (for instance, $N_\tau=8$ at $a=0.1$~fm corresponds to $T\approx247$ MeV, using $\hbar c\approx 197.3$ MeV~fm). The lattice is usually denoted as $N^3\times N_\tau$. The
gluon action and the fermion determinant are discretized on the lattice. The simplest gluon action,
known as Wilson plaquette action is of the form,
\begin{equation}
S_G=\frac{6}{g^2}\sum_{x,\mu,\nu,\mu<\nu}(1-\frac{1}{3}Tr\text{Re}~U_{\mu,\nu}(x))~,~
U_{\mu,\nu}(x)=U_{\mu}(x)U_{\nu}(x+\mu)U^\dagger_{\mu}(x+\nu)U^\dagger_{\nu}(x).
\end{equation}
where $U_{\mu,\nu}(x)$ is called a plaquette. The naive discretization of the continuum
Dirac equation on the lattice results in the fermion operator of the form,
\begin{equation}
D_f(x,y)=\sum_{x,y}\left[\sum_{\mu=1}^4\frac{1}{2}\gamma_\mu \left(U_{\mu}(x)\delta_{y,x+\mu}-
U^\dagger_{\mu}(y) \delta_{y,x-\mu}\right)+ a m_f \delta_{x,y}\right].
\end{equation}
where in each of the expressions the site index runs from $x=1$ to $N^3\times N_\tau$.
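As an illustration of these building blocks, the following Python sketch evaluates the Wilson plaquette action on a small random configuration (the SU(3) sampling below is crude and purely illustrative; a production code would generate configurations with a proper heat-bath or hybrid Monte-Carlo algorithm):
\begin{verbatim}
import numpy as np

def random_su3(rng):
    # Crude SU(3) element: QR-unitarize a Gaussian matrix, fix det = 1.
    z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    q, r = np.linalg.qr(z)
    q = q @ np.diag(np.exp(-1j * np.angle(np.diag(r))))
    return q / np.linalg.det(q) ** (1.0 / 3.0)

def shift(x, mu, dims):
    # Nearest neighbour of site x in direction mu, periodic boundaries.
    y = list(x); y[mu] = (y[mu] + 1) % dims[mu]
    return tuple(y)

def wilson_action(U, dims, beta):
    # S_G = beta * sum_{x, mu<nu} (1 - Re Tr U_{mu,nu}(x) / 3).
    S = 0.0
    for x in np.ndindex(*dims):
        for mu in range(4):
            for nu in range(mu + 1, 4):
                P = (U[x, mu] @ U[shift(x, mu, dims), nu]
                     @ U[shift(x, nu, dims), mu].conj().T
                     @ U[x, nu].conj().T)
                S += 1.0 - np.trace(P).real / 3.0
    return beta * S

rng = np.random.default_rng(0)
dims = (4, 4, 4, 4)                   # N^3 x N_tau, direction 3 = time
U = {(x, mu): random_su3(rng)
     for x in np.ndindex(*dims) for mu in range(4)}
print(wilson_action(U, dims, beta=6.0))
\end{verbatim}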
The discretization of the gluon and fermion operators is not unique, and there are several choices which give the correct continuum limit. Usually, discretized operators with small finite-$a$ corrections are preferred. Reducing $a$-dependent corrections
by adding suitable ``irrelevant'' terms in the Renormalization Group(RG) sense, is known
as improvement of the operator. Another issue related to the discretization of the fermion
operator is called the ``fermion doubling problem''. It arises because the
naive discretization of the continuum fermion operator introduces extra unphysical fermion
species called the doublers. The existence of the doublers can be traced back to a No-Go theorem \cite{nn}
on the lattice which states that fermion actions which are ultra-local, have exact chiral symmetry, and have the
correct continuum limit, cannot be free from the doublers. Doublers are problematic
since in the continuum limit we would get a theory with 16 fermion species and QCD with 16 flavours is very close
to the upper bound of the number of flavours beyond which the asymptotic freedom is lost. It is thus
important to ensure that the discrete fermion operator should be free of the doublers. In order to do so
the chiral symmetry is explicitly broken on the lattice, like for the case of Wilson fermions \cite{wilson}, or only a
remnant of it is preserved for the staggered fermions \cite{ks}. The staggered fermion discretization retains the
doubling problem in a milder form. In the continuum limit, the staggered fermion determinant would give the contribution of four degenerate fermion species, or tastes. However, on a finite lattice there is considerable mixing among the tastes, so a simple fourth root of the determinant would not yield the contribution of a single fermion flavour. This is called the rooting problem. The severity of the rooting problem can be minimized by choosing either the stout-smeared staggered quarks~\cite{stoutref} or the Highly Improved Staggered Quarks (HISQ)~\cite{hisqref}.
Other improved versions of staggered fermions used for QCD thermodynamics are the p4 and asqtad fermions~\cite{p4ref,asqtadref}.
Only the overlap \cite{neunar} and the
domain wall fermions \cite{kaplan} have exact chiral symmetry on the lattice at the expense of breaking
the ultra-locality condition of the Nielsen-Ninomiya No-go theorem. As a result overlap and domain wall
fermions are much more expensive to simulate compared to the staggered and the Wilson fermions. For QCD thermodynamics,
the staggered and to some extent the Wilson fermions are favourites, with very high precision data
available with improved versions of staggered quarks \cite{bf1,bw1}.
With the advent of faster computing resources and smarter algorithms, even large scale simulations with
chiral fermions are becoming a reality \cite{cossu1, bwov,dwold,twqcd}.
With the choice of a suitable gauge and the fermion operators on the lattice,
different physical observables are measured on statistically independent configurations generated using suitable
Monte-Carlo algorithms. To make connection with the continuum physics, one needs to take the
$a\rightarrow0$ limit of the observables measured on the lattice. The gauge coupling is related to the lattice
spacing through the beta-function and the continuum limit, in turn, implies $g\rightarrow 0$.
In the space of coupling constants and the
fermion masses, the continuum limit is a second order fixed point and the approach to the fixed point
should be done along the correct RG trajectory or the lines of constant physics. The line of
constant physics is defined by setting the mass of hadrons on the lattice to the continuum
values, at each value of the coupling constant. The number of such relations required depends on the number of fermion flavours.
To relate the lattice hadron masses to their experimental values, one has to define a scale to express the lattice spacing $a$ in terms of physical units. There are two often-used methods in QCD to set the scale, using the quantities $r_1$
and the kaon decay constant $f_K$. The $r_1$ scale is defined from the quark-antiquark potential $V_{\bar q q}(r)$, as,
\begin{equation}
\left(r^2\frac{\partial V_{\bar q q}(r)}{\partial r}\right)_{r=r_1}=1.0.
\end{equation}
On the lattice one measures $V_{\bar q q}(r)$ and $r_1$ is extracted from it using a suitable fit ansatz for
the potential. To quantify the value of $r_1$ in physical units, one uses either the pion decay constant or the
splitting of energy levels of bottom mesons to set the lattice spacing~\cite{milcscale}. An advantage of this scale is that it is not sensitive to fermion discretization effects or to the choice of quark masses that defines the line of constant physics. However, the accurate determination
of the potential requires very good statistics. One can also set the scale by choosing the $f_K$ measured on the
lattice to its physical value. The $f_K$ is known with very high accuracy from the experiments.
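To make the $r_1$ prescription concrete, one can fit the measured potential with a Cornell-type ansatz $V(r)=-\alpha/r+\sigma r+c$, for which $r^2 V'(r)=\alpha+\sigma r^2$ and hence $r_1=\sqrt{(1-\alpha)/\sigma}$. A minimal sketch with synthetic data (the numbers below are placeholders, not lattice measurements):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Synthetic V(r) samples in lattice units -- placeholders, not data.
rng = np.random.default_rng(1)
r = np.linspace(0.3, 1.2, 12)
V = -0.3 / r + 1.1 * r + 0.6 + 0.005 * rng.normal(size=r.size)

def cornell(r, alpha, sigma, c):
    return -alpha / r + sigma * r + c

(alpha, sigma, c), _ = curve_fit(cornell, r, V)

# r^2 dV/dr = alpha + sigma*r^2 = 1  =>  r1 = sqrt((1 - alpha)/sigma)
r1 = np.sqrt((1.0 - alpha) / sigma)
print(r1)   # ~0.80 in lattice units for these placeholder parameters
\end{verbatim}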
Once the line of constant physics is set, one has to take care of the finite size and lattice spacing
effects such that the continuum extrapolation is correctly performed.
To minimize such corrections, the correlation length, which is given by the inverse of the mass of the lowest excitation of the system, should be much larger than the lattice spacing but sufficiently smaller than the spatial size. Also, for thermodynamics it is crucial to minimize finite volume corrections, which is ensured for
the choice $\zeta\geq3$, where $\zeta=N/N_\tau$.
To characterize different phases one needs to define a suitable order parameter which depends
on the symmetries of the theory. In the limit of infinitely heavy quark masses QCD is just a
pure gauge theory with an exact order parameter, the expectation value of the Polyakov loop given as,
\begin{equation}
L(\mathbf x)=\frac{1}{3}Tr P \prod_{x_4=1}^{N_\tau} U_4(\mathbf x,x_4)~,~\text{P}\Rightarrow\text{Path ordering}.
\end{equation}
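Given a gauge configuration stored as in the sketch above, the volume-averaged Polyakov loop can be measured as follows (again a sketch; direction 3 is taken as Euclidean time):
\begin{verbatim}
def polyakov_loop(U, dims):
    # L = (1/V) sum_x Tr prod_{x4} U_4(x, x4) / 3: path-ordered
    # product of the temporal links at each spatial site x.
    Nx, Ny, Nz, Nt = dims
    acc = 0.0 + 0.0j
    for xs in np.ndindex(Nx, Ny, Nz):
        P = np.eye(3, dtype=complex)
        for x4 in range(Nt):
            P = P @ U[xs + (x4,), 3]
        acc += np.trace(P) / 3.0
    return acc / (Nx * Ny * Nz)

print(polyakov_loop(U, dims))  # near zero on a random starting config
\end{verbatim}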
The phase transition from a phase of confined colour degrees of freedom to the deconfined regime of free gluons is of first order and is established very firmly from lattice studies \cite{karsch1}.
The corresponding transition temperature is $T_c$(pure-gauge)$=276(2)$ MeV \cite{karsch2}, using the string tension value $\sqrt{\sigma}=425$ MeV to set the scale.
If the quarks are massless, the QCD partition function with $N_f$ quark flavours has an exact
$SU(N_f)\otimes SU(N_f)$ chiral symmetry. At some temperature, there is a phase transition from a chiral symmetry
broken phase to the symmetry restored phase, characterized by the order parameter called the chiral condensate,
\begin{equation}
\langle\bar\psi_f\psi_f\rangle=\lim_{m_f\rightarrow0}\lim_{V\rightarrow\infty}\frac{T}{V}\frac{\partial ln\mathcal{Z}_{QCD}}{\partial m_f}~,~f=1,..,N_f.
\end{equation}
The phase transition in the chiral limit for $N_f=3$ is expected to be of first order, and there are several lattice results supporting this \cite{nf3}. For $N_f=2$, the lattice results are contradictory, with some claiming a first order transition \cite{nf2fo} whereas recent results show that a second order transition is also a possibility \cite{mageos}. The current status of the $N_f=2$ QCD phase transition in the chiral limit is discussed again in a later subsection. For any finite value of the quark masses, however, there is no unique order parameter, and no sharp phase transition is expected but only a gradual crossover.
Based on effective field theories with same symmetries as QCD, using universality arguments
and renormalization group inspired techniques, a schematic diagram of different phases of QCD as a function
of quark mass is summarized in the famous ``Columbia plot'' \cite{colplot}. The first order regions
in the quenched and the chiral limits are separated from the cross-over region by second order lines
belonging to the $Z(2)$ universality class. These boundaries are schematic, though, and it is important to
estimate the precise location of the physical point in this diagram. Lattice studies over the years
have helped to redraw the boundaries more quantitatively. A latest version of the ``Columbia plot''
is shown in figure \ref{colplot}. With the high precision lattice data with physical
light and strange quark masses, it is now known that the QCD transition in our world is a crossover \cite{milc,bf2,yaoki}.
The boundary of the first order region in the upper right-hand corner of figure \ref{colplot} is fairly well known \cite{saito}. The extent of the first order region in the bottom left-hand corner is now believed to be small and far away from the physical point \cite{ding,bwnf3}. However, the extent of the $Z(2)$ line in the left-hand corner is still not well established; it can either continue along the $m_{u,d}=0$ axis to the $m_s\rightarrow\infty$ corner or end at a tricritical point. Efforts to understand this issue better are currently underway. The key to its resolution is to understand the effects of the chiral anomaly through rigorous lattice computations. Since the light u,d-quark masses are much smaller than $\Lambda_{QCD}$, the QCD action has an approximate $SU(2)\times SU(2)\times U_B(1)$ symmetry, with an additional classical
$U_A(1)$ symmetry broken explicitly by quantum effects. This is known as the $U_A(1)$ anomaly \cite{abj,fujikawa}.
At zero temperature, the magnitude of this anomaly is related to the instanton-density. If the magnitude of this
anomaly is temperature independent, the phase transition along the $m_{u,d}=0$ axes has to be
of second order, belonging to the $O(4)$ universality class \cite{piswil}.
This would mean that the $Z(2)$ line has to end at a tri-critical point characterized by the strange quark mass,
$m_s^{tric}$. The difference between the physical and tri-critical mass for the strange quark is not yet known with good precision.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.3]{cplot.eps}
\caption{The present status of the Columbia plot.}
\label{colplot}
\end{center}
\end{figure}
In the following subsections, the lattice results for the QCD EoS with physical quark
masses are discussed, which is an input for the hydrodynamics of the QGP medium.
The current results on the pseudo-critical temperature, the entropy
density, and the speed of sound are also shown. All the results are for $2+1$ flavour QCD, i.e., two light
degenerate u and d quarks and a heavier strange quark. The effect of the heavy charm quarks on the
thermodynamic quantities is also highlighted. At the end of this section, I touch upon $N_f=2$ QCD
near the chiral limit and the effects of the $U_A(1)$ anomaly on QCD thermodynamics.
\subsection{Equation of state}
The Equation of State (EoS) is the relation between the pressure and the energy density of a system
in thermal equilibrium. For estimating the QCD EoS, the method most frequently used by lattice
practitioners is the integral method \cite{intmethod}. In this method, one first computes the
trace anomaly $I(T)$, which is the trace of the energy-momentum tensor. It equals
$\epsilon-3p$, where $\epsilon$ is the energy density of the
system and $p$ is the pressure. Moreover, it is related to the pressure of the
system through the relation
\begin{equation}
I(T)=T^5\frac{\partial}{\partial T}\frac{p}{T^4}
\end{equation}
So if $I(T)$ is known, the pressure can be computed by integrating $I(T)$ over a range of
temperatures, with the lower end of the range chosen such that the corresponding value of the
pressure is vanishingly small.
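A minimal numerical sketch of the integral method is the following (in Python; the tabulated trace anomaly is a purely illustrative toy form, not lattice data):
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Illustrative placeholder table for I(T)/T^4 -- not lattice data.
T = np.linspace(100.0, 500.0, 81)                     # temperature in MeV
I_over_T4 = 4.0 * np.exp(-((T - 190.0) / 60.0) ** 2)  # toy peaked shape

# Integral method: p(T)/T^4 = int_{T0}^{T} dT' I(T')/T'^5,
# i.e. the integrand is [I(T')/T'^4] / T'.  The lower limit T0 = 100 MeV
# is chosen where the pressure is assumed to be vanishingly small.
p_over_T4 = cumulative_trapezoid(I_over_T4 / T, T, initial=0.0)
print(p_over_T4[-1])   # p/T^4 at T = 500 MeV for the toy input
\end{verbatim}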
The trace anomaly is related to the chiral condensate and the gluon action as,
\begin{equation}
\frac{I(T)}{T^4}=-N_\tau^4\left(a\frac{d \beta}{d a}(\langle S_G\rangle-\langle S_G\rangle_0)
+\sum_f a \frac{d (m_f a)}{d a}(\langle \bar\psi_f\psi_f\rangle-\langle \bar\psi_f\psi_f\rangle_0)\right)~,
~\beta=\frac{6}{g^2}~,
\end{equation}
where the subscript zero denotes the vacuum expectation values of the corresponding quantities. The
subtraction is necessary to remove the zero temperature ultraviolet divergences, and the vacuum expectation
values are usually computed on a lattice with the number of sites $(N_\tau)_0$ in the temporal direction equal to the
number of sites $N$ in each spatial direction. The subtraction is an unavoidable expense of this method.
A new idea of deriving thermodynamic observables from cumulants of the momentum distribution has emerged, in which
the vacuum subtraction is not required~\cite{giusti}, and it would be interesting to check the application of this method
to QCD. Also, one needs to know the functional dependence of the inverse QCD coupling
$\beta$ and of the quark masses on the lattice spacing $a$ along the line of constant physics.
On the lattice, $I(T)$ is known only for a finite number of temperature values. The pressure computed by numerical integration of
the $I(T)$ data has errors both due to statistical fluctuations and due to systematic uncertainties involved in the numerical interpolation of the data.
The results for the trace anomaly are available for different lattice discretizations of the fermions. For staggered quarks there are two sets of results, one from the
HotQCD collaboration using the HISQ discretization \cite{petreczky,hotqcd1} and the other from the Budapest-Wuppertal collaboration using stout smeared staggered quarks \cite{bwcharm,bw1}. These results are compiled in figures \ref{intmeashisq} and \ref{intmeasbw}. For the HISQ results, the bare lattice parameters are fixed by setting the lowest strange pseudo-scalar meson mass to its physical value of about 686 MeV and $m_\pi=160$ MeV, which defines the line of constant physics. The kaon decay constant $f_K=156.1$ MeV, or alternatively the scale $r_1=0.3106$ fm from the static quark potential, is used to set the scale. The corresponding parameters for the stout smeared quarks are $m_\pi=135$ MeV, $m_K=498$ MeV and the kaon decay constant.
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{trace_hisq.eps}
\includegraphics[scale=0.5]{trace_hisqhigh.eps}
\caption{The results for the trace anomaly using the HISQ action for low (left panel) and high (right panel) temperatures
for lattice sizes with temporal extent $N_\tau$ and spatial size $4 N_\tau$, from \cite{petreczky}. In the right panel, the HISQ results are also compared to the results using p4 fermions, which have an improved behaviour at high temperatures, and to the continuum perturbation theory results at 1-loop (solid line) and 2-loop (dashed line) for the trace anomaly. The stout data are the continuum estimates from the
$N_\tau=6,8,10$ data in \cite{bw1}.}
\label{intmeashisq}
\end{center}
\end{figure}
From figure \ref{intmeashisq}, it is evident that there is a good agreement between the two sets of results for $T<180~$MeV and also for high enough temperatures, $T>350~$MeV. The stout continuum results in the figure were obtained by extrapolation of the $N_\tau=6,8,10$ data from Ref. \cite{bw1}. In the intermediate temperature range there is some discrepancy; in particular, the peaks of the interaction measure do not coincide for these two different discretization schemes, which may be due to finite lattice spacing effects. However, the HISQ $N_\tau=12$ data are inching closer to the stout results in this regime. The recent continuum stout results, obtained from a continuum extrapolation including the new $N_\tau=12$ data in addition to the older data, are consistent with the HISQ results, with the peak position shifting to 200 MeV (left panel of figure \ref{intmeasbw}). There is also a good agreement of the HISQ and stout data with the trace anomaly obtained from the Hadron Resonance Gas (HRG) model for $T<140~$MeV and with the
resummed perturbation theory results at high temperatures. Using the $N_\tau=6,8$ data, which are available up to temperatures of 1000 MeV, a continuum extrapolation of the stout data
was performed, the result of which is shown in the right panel of figure \ref{intmeasbw}. For this entire range of temperatures, there is a useful parameterization characterizing the trace anomaly \cite{bw1} with the following parametric form,
\begin{equation}
\frac{I(T)}{T^4}=\rm{e}^{-h_1/t-h_2/t^2}\cdot\left(h_0+\frac{f_0[\tanh(f_1 t+f_2)+1]}{1+g_1 t+g_2 t^2}\right)~,
~t=T/(200~\rm{MeV})~,
\end{equation}
where the best fit parameters are
\begin{equation}
\nonumber
h_0=0.1396~,~h_1=-0.18~,~h_2=0.035~,~f_0=2.76~,~f_1=6.79~,~f_2=-5.29~,~g_1=-0.47~,~g_2=1.04.
\end{equation}
This parametric form could be a useful input for hydrodynamical simulations, which usually use the lattice EoS
before hadronization and the HRG EoS after the freezeout of hadrons.
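Since all the best fit parameters are quoted above, the parameterization is straightforward to evaluate; a minimal sketch (the function name is arbitrary), which could also be fed into the numerical integration sketched earlier:
\begin{verbatim}
import numpy as np

# Best fit parameters of the stout continuum parameterization quoted above.
h0, h1, h2 = 0.1396, -0.18, 0.035
f0, f1, f2 = 2.76, 6.79, -5.29
g1, g2 = -0.47, 1.04

def trace_anomaly(T_mev):
    """I(T)/T^4 from the parametric form, with t = T/(200 MeV)."""
    t = np.asarray(T_mev) / 200.0
    return np.exp(-h1 / t - h2 / t**2) * (
        h0 + f0 * (np.tanh(f1 * t + f2) + 1.0) / (1.0 + g1 * t + g2 * t**2))

print(trace_anomaly([150.0, 200.0, 500.0]))
\end{verbatim}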
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.55]{tracebw.ps}
\includegraphics[scale=0.4]{par.eps}
\caption{The latest data with the stout smeared fermions (left panel), from \cite{bwcharm}. In the right panel, the fit to the
trace anomaly data from the continuum extrapolation of the $N_\tau=6,8$ results, from \cite{bw1}. The results are in perfect agreement with
the hadron resonance gas model calculations for $T<140~$MeV.}
\label{intmeasbw}
\end{center}
\end{figure}
There are also lattice results for the EoS using an alternative fermion discretization, the Wilson fermions. The WHOT-QCD collaboration has results for $2+1$ flavours of improved Wilson fermions \cite{whot} with the physical value of the strange quark mass but a large pion mass
equal to $0.63 m_\rho$. The tmfT collaboration has results for 2 flavours of maximally twisted Wilson fermions \cite{tmft} with $m_\pi>400~$MeV. Both these results are compiled in figure \ref{intmeaswil}. These are in rough qualitative agreement with the staggered fermion data; in particular, the peak of the WHOT-QCD data occurring at 200 MeV is consistent with the HISQ and stout results. A more quantitative
agreement at this stage is difficult, since the pion masses for the Wilson fermions are much larger than the physical value.
\begin{figure}[h!]
\begin{center}
\includegraphics[height=4 cm,width=0.3\textwidth]{eos_umed.eps}
\includegraphics[height=4 cm,width=0.3\textwidth]{eos_tm.eps}
\caption{The results for the pressure, energy density and the trace anomaly with clover-improved Wilson fermions
on a $32^3\times8$ lattice, from \cite{whot} (left panel), and the trace anomaly data with the twisted mass Wilson
fermions, from \cite{tmft} (right panel).}
\label{intmeaswil}
\end{center}
\end{figure}
\subsection{The pseudo-critical temperature}
We recall that the QCD transition, from a phase of color singlet states to a phase of colored quantum states,
is an analytic crossover for physical quark masses. This is fairly well established by now from lattice studies using two different approaches. One approach is to monitor the behaviour of the thermodynamic observables in the transition region for physical values
of the quark masses, while the other is to map out the chiral critical line as a function of the light quark mass~\cite{fph1}. The absence of
a sharp phase transition implies that there is no unique transition temperature but only different pseudo-critical temperatures
corresponding to different observables. There is no order parameter, but observables like the renormalized
Polyakov loop, $L_R$, have a point of inflexion across the crossover region. Another observable relevant in the crossover regime is the
renormalized chiral condensate, which has been defined \cite{hotqcd2} in the following manner to take into account the
multiplicative renormalization as well as the additive one due to a finite bare quark mass,
\begin{equation}
\Delta_{l,s}(T)=\frac{\langle \bar\psi\psi\rangle_{l,T}-\frac{m_l}{m_s}\langle \bar\psi\psi\rangle_{s,T}}
{\langle \bar\psi\psi\rangle_{l,0}-\frac{m_l}{m_s}\langle \bar\psi\psi\rangle_{s,0}} ~,~l=u,d.
\end{equation}
The renormalized chiral susceptibility $\chi_R$ for the light quarks, defined as,
\begin{equation}
\chi_R=\frac{1}{VT^3}m_l^2\frac{\partial^2 }{\partial m_l^2}(\ln\mathcal{Z}(T)- \ln\mathcal{Z}(0))
\end{equation}
is a good observable as well. Both $L_R$ and $\Delta_{l,s}(T)$ have a point of inflexion at the pseudo-critical temperature and
$\chi_R$ has a smooth peak. From the continuum extrapolated data of the stout-smeared staggered fermions,
the pseudo-critical temperatures corresponding to these observables for physical quark masses are,
\[ T_c = \left\{
\begin{array}{ll}
170(4)(3)~\text{MeV} & \text{for}~ L_R\\
157(3)(3)~\text{MeV} & \text{for}~ \Delta_{l,s}\\
147(2)(3)~\text{MeV} & \text{for}~ \chi_R
\end{array}
\right.
\]
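A minimal sketch of how such a pseudo-critical temperature is extracted as a point of inflexion, using synthetic placeholder data for $\Delta_{l,s}(T)$ in place of actual lattice results:
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic placeholder data with a crossover-like shape -- not lattice results.
T = np.linspace(130.0, 190.0, 13)                     # MeV
delta_ls = 0.5 * (1.0 - np.tanh((T - 157.0) / 10.0))  # toy crossover

spline = CubicSpline(T, delta_ls)
T_fine = np.linspace(T[0], T[-1], 2001)
# The inflexion point is where |d(Delta_{l,s})/dT| is maximal.
Tc = T_fine[np.argmax(np.abs(spline(T_fine, 1)))]
print(f"pseudo-critical temperature ~ {Tc:.1f} MeV")  # ~157 MeV for toy data
\end{verbatim}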
The data for $L_R$ and $\Delta_{l,s}$ with the HISQ discretization are shown in figure \ref{deltaLhisq}. These are for
lattices of size $N_\tau\times (4 N_\tau)^3$. The HISQ data are in good agreement with the continuum extrapolated stout-smeared
staggered results from \cite{bwtc}. The fact that the rise of $L_R$ is more gradual than the corresponding rise of $\Delta_{l,s}$
signals that the crossover is more likely driven by the chiral symmetry restoration.
Previous scaling studies of the renormalized chiral condensate with the p4-staggered quarks
showed that the physical light quarks already approximate the O(4) critical behaviour of the chiral
quarks \cite{mageos}. Using the O(4) scaling of the renormalized chiral condensate, the $T_c$ obtained for HISQ quarks
through chiral and continuum extrapolation is $154\pm 9~$ MeV. This value is in excellent agreement with
the stout result, implying that the continuum extrapolation done with the staggered fermions is quite robust.
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{deltalshisq.eps}
\includegraphics[scale=0.5]{ploophisq.eps}
\caption{The results for the subtracted chiral condensate (left panel) and the renormalized Polyakov loop (right panel)
from the HotQCD collaboration, from \cite{hotqcd1}. These data are compared with the continuum results using stout
smeared fermions, from \cite{bwtc}.}
\label{deltaLhisq}
\end{center}
\end{figure}
\subsection{Comparing results for different fermion discretizations}
The results for the EoS and the pseudo-critical temperature discussed so far have been obtained using different improved
versions of the staggered quarks. For these fermion species, the so-called ``rooting'' problem may alter the continuum limit due
to the breaking of the $U_A(1)$ anomaly \cite{creutz}, though other work refutes this claim \cite{gsh}. It is important to
check the effects of the rooting procedure on the continuum
extrapolation of finite temperature observables. The Budapest-Wuppertal collaboration has recently compared the
continuum extrapolated results for different observables using the Wilson and staggered fermions \cite{bwwil}
as the former discretization does not suffer from the rooting problem.
The scale for the Wilson fermions was determined using $m_{\Omega}=1672~$ MeV and the line of constant physics
was set using $m_\pi/m_{\Omega}\sim 0.3$ and $m_K/m_{\Omega}\sim 0.36$. For the staggered quarks, the
line of constant physics was set such that the ratios $m_\pi/m_{\Omega}$ and $m_K/m_{\Omega}$ are within
3\% of the corresponding values for the Wilson fermions. This means that the pions are quite heavy
with $m_\pi\sim 540~$ MeV for both these discretizations. The continuum extrapolated results for $L_R$ and the renormalized chiral condensate
are shown in figure \ref{wil}. The continuum results for both these quantities are in good agreement for the whole range of temperature,
implying that these two different fermion discretizations indeed have the correct continuum limit.
In all these computations an improved Wilson operator was used, in which the dominant $\mathcal{O}(a)$
correction terms due to the explicit breaking of chiral symmetry by these fermions were cancelled. This ensured that in both studies the approach to the continuum limit was the same. However, at such large values of the quark masses, the rooting problem may be too mild to show any adverse effects, and it would be desirable to perform a similar comparison at the physical value of the quark masses.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.85]{ppbwilbw.eps}
\includegraphics[scale=0.85]{ploopwilbw.eps}
\caption{The continuum extrapolated renormalized chiral condensate (left panel) and the Polyakov loop (right panel) are compared
for Wilson and stout-smeared staggered fermions, from \cite{bwwil}. }
\label{wil}
\end{center}
\end{figure}
Since the effects of chiral symmetry persist in the crossover region, it is important to compare the existing
results for $T_c$ with those using fermions with exact chiral symmetry on the lattice. For the Wilson and the
staggered actions, even for massless quarks, the full
$SU(2)\otimes SU(2)$ chiral symmetry is realized only in the continuum limit. For chiral fermions on the lattice, like the overlap or the domain wall fermions, the chiral and the continuum limits are disentangled, allowing us to understand the remnant effects of chiral symmetry in the crossover region even on a finite lattice. However, lattice QCD with overlap fermions is computationally prohibitive \cite{fodor}, and currently better algorithms are being developed to simulate them with comparatively less effort \cite{fixedtop}. The domain wall fermions have exact chiral symmetry only when the extent of the fifth dimension, $N_5$, of the five dimensional lattice on which these fermions are defined, is infinite. For smooth gauge fields, the chiral symmetry violation on a finite lattice is suppressed as an exponential of $N_5$, but the suppression could be much slower, as $1/N_5$, for rough gauge configurations in the crossover region. Better algorithms have been employed to ensure exponential suppression even
for rough gauge fields \cite{hotqcddw}. The most recent results for the overlap fermions from the Budapest-Wuppertal collaboration \cite{bwov} and the domain wall fermions from the HotQCD collaboration \cite{hotqcddw} are shown in figure \ref{ppbchiral}. The renormalized chiral condensate for the overlap fermions is qualitatively consistent with the continuum staggered fermion results, even for small volumes and large pion masses of about 350 MeV around the crossover region. The
lattice cut-off effects seem to be quite small for $N_\tau=8$.
The renormalized chiral condensate and $\Delta_{l,s}$ for the domain wall fermions are shown in figure \ref{ppbchiral}.
The lattice size is $16^3\times8$, with the number of lattice sites along the fifth dimension taken to be 32 for $T>160~$MeV and
48 otherwise, and the pion mass is about 200 MeV. The lattice volume is comparatively small; therefore these results do not show a sharp rise
in the crossover region. With larger volumes, the rise in these thermodynamic quantities is expected to be much steeper. The value of $T_c$ estimated from the peak of the chiral susceptibility, i.e., the derivative
of the chiral condensate, is between 160 and 170 MeV, which is consistent with the continuum results from the
HISQ fermions.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.3\textwidth]{ppbovbw.eps}
\includegraphics[width=0.15\textwidth,angle=-90]{pbp_subdw.eps}
\caption{The renormalized chiral condensate for the overlap quarks is compared to the continuum extrapolated results using
the stout smeared staggered quarks in the left panel, from \cite{bwov}. In the right panel, the behaviour of different chiral
condensates defined using the domain wall fermions are shown in the critical region, from \cite{hotqcddw}.}
\label{ppbchiral}
\end{center}
\end{figure}
\subsection{The thermodynamical observables}
Thermodynamic observables characterize the different phases across a phase transition.
From the behaviour of these observables, one can infer the degrees of freedom of
the different phases and the nature of the interactions among the constituents. It was already
known from an important lattice study that the pressure in the high temperature phase of QCD shows a strong
dependence on the number of quark flavours~\cite{peikert}, signaling deconfinement of the
quark and gluon degrees of freedom.
Recent results for the pressure, entropy density and the speed of sound for QCD, using the
stout-smeared staggered quarks, are compiled in figure \ref{obsv}. Though in our
world there is no real phase transition, the entropy density increases rapidly with temperature,
again signaling the liberation of a large number of colour degrees of freedom. The entropy density for QCD
is almost 20\% below the value of a free gas of quarks and gluons, even at temperatures of about 1000 MeV.
The deviation of the pressure of the QGP from its free theory value at similar temperatures is even
larger, about 25\%. Another observable that characterizes the different phases is
the speed of sound, $c_s$. If the QGP at high temperatures were qualitatively close to a strongly interacting
conformal theory, the speed of sound would be exactly $1/\sqrt{3}$. However, the deviation from conformality is quite
significant even at temperatures of about $T=500~$MeV, which hints that AdS-CFT inspired studies of the QGP medium should
be done with more care.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.25]{eos_s.eps}
\includegraphics[scale=0.25]{eos_p.eps}
\includegraphics[scale=0.25]{eos_cs.eps}
\vspace{-2 cm}
\caption{The entropy density, pressure and the speed of sound for the stout-smeared fermions as a function of temperature,
from ~\cite{bw1}. }
\label{obsv}
\end{center}
\end{figure}
The values of the entropy density computed with different discretizations of staggered fermions, like the asqtad or the
p4 fermions \cite{petreczkymunich}, show about a $10\%$ deviation from the free theory value at very high temperatures.
The departure from the AdS-CFT values is even more severe using these fermions. The stout results are about $10\%$ lower
than the corresponding asqtad and p4 results. This deviation is attributed to the fact that the latter discretizations have smaller
cut-off effects at higher temperatures and would be closer to the continuum results. The stout continuum values shown in the
figure were obtained by averaging the $N_\tau=8,10$ data. A proper continuum extrapolation of the results for both fermion discretizations
is necessary for resolving the difference and for the use of these values in real world calculations. However, the fact that the lattice results
are at least $10\%$ off from the free theory values even at very high temperatures implies that the QGP phase is
strongly interacting, more like a liquid rather than a gas of quarks and gluons, confirming the similar conclusion
from the RHIC experiments.
For $T<T_c$, the results for all these observables are in agreement with the hadron resonance gas model predictions.
\subsection{Effects of charm quarks on the EoS}
The effect of charm quarks on the pressure in the QGP phase was estimated some time ago, using next-to-leading order
perturbation theory \cite{charmpt}. It was observed that the contribution of charm quarks becomes significant
for temperatures $T>2T_c$. Preliminary data from the LHC already indicate that the charm quarks
thermalize as quickly as the lighter quarks. They would then affect the EoS and thus the hydrodynamical
evolution of the fireball formed at LHC energies. Lattice studies are important to quantify the contribution of charm to
the EoS in the QGP phase. The first lattice studies were done by the RBC \cite{mcheng} as well as the MILC collaboration \cite{milc1}
with quenched charm quarks, i.e., by neglecting quantum fluctuations due to the charm quarks. The quenched charm results for the EoS
differ from the 2+1 flavour results already at $1.2~T_c$.
Recent results from the Budapest-Wuppertal collaboration with dynamical charm quarks \cite{bwcharm}, however, show that the effects of charm quarks show up only around 300 MeV, more in agreement with the perturbative estimates.
Both approaches highlight the fact that the effects of the charm quark should be considered in the EoS used as an input for the hydrodynamical evolution of the fireball at LHC energies, which may set in at $T\sim 500$ MeV. It would also be important for the EoS of the Standard Model, which is relevant for the cosmological evolution of the early universe~\cite{hind,soldner}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.27\textwidth]{charmeosmilc.eps}
\includegraphics[width=0.3\textwidth]{charmeosbw.ps}
\caption{In the left panel, the effects of a quenched charm quark on the pressure, energy density and trace anomaly are shown as a function of temperature, from \cite{milc1}. The lattice size is $24^3\times 6$. In the right panel, the effect of dynamical charm quarks on the pressure is shown as a function of temperature, from \cite{bwcharm}.}
\label{eoscharm}
\end{center}
\end{figure}
\subsection{The 2 flavour QCD transition and the fate of the $U_A(1)$ anomaly}
The chiral phase transition for $N_f=2$ QCD is still not well understood from lattice studies, as was emphasized at the beginning of this section.
Though the lattice results for $2+1$ flavours with different fermion discretizations are in good agreement, the corresponding ones for the two
light flavour case are still inconclusive. Two major approaches have been undertaken in recent years to understand the order of this transition.
One of them is to check the scaling properties of the order parameter. If the phase transition is indeed second order, the order
parameter should show O(4) scaling in the transition region. The second approach is to understand the effects of the $U_A(1)$ anomaly near
the phase transition. If the quantum fluctuations responsible for this $U_A(1)$ anomaly decrease significantly with temperature, it would
result in the degeneracy of the masses of mesons of certain quantum numbers and a characteristic behaviour of
the density of low lying eigenmodes of the fermion operator. I discuss the major lattice results using both these approaches
in the following paragraphs. Most of these approaches hint that the two flavour chiral phase transition may be second order.
\subsubsection{Scaling analysis in the critical region}
\label{sec:curv}
The order parameter that characterizes the chiral phase transition is the chiral condensate.
A suitable dimensionless definition of the chiral condensate used in the lattice study
by the BNL-Bielefeld collaboration \cite{mageos} is,
\begin{equation}
\label{eqn:odparam}
M_b=m_s\frac{\langle\bar\psi\psi\rangle}{T^4}
\end{equation}
The additive ultraviolet divergences are not explicitly subtracted from the condensate, and hence it is the bare
value, denoted by the subscript $b$. This additive divergence is included
in the regular part and, in the transition region, is much smaller in magnitude than the singular part of $M_b$. In the
vicinity of the transition region, the order parameter can be written as,
\begin{equation}
\label{eqn:func}
M_b(T,H)=h^{1/\delta}f_G(t/h^{1/\beta\delta})+f_{reg}(T,H)~,
\end{equation}
where $f_G$ is the universal scaling function, known from analysis of the $O(N)$ spin-models \cite{on1,on2,on3} with $\beta$ and
$\delta$ being the corresponding critical exponents. The quantities $h$ and $t$ are
dimensionless parameters that determine the deviations from the critical point and are defined as,
\begin{equation}
t=\frac{1}{t_0}\frac{T-T_{c,0}}{T_{c,0}}~,~h=\frac{H}{h_0}~,~H=\frac{m_l}{m_s}~,
\end{equation}
with $T_{c,0}$ being the transition temperature in the chiral regime, i.e., for $h\rightarrow0$, and
$h_0$ and $t_0$ being non-universal constants. One choice of the regular part of the order
parameter used in the lattice study is,
\begin{equation}
f_{reg}=H\left(a_0+a_1 \frac{T-T_{c,0}}{T_{c,0}}+a_2 \left(\frac{T-T_{c,0}}{T_{c,0}}\right)^2\right)~,~
\end{equation}
where one assumes that the regular part is an analytic function of the relevant parameters
around the transition point. The BNL-Bielefeld collaboration used an improved
variety of the staggered quarks, called the p4 quarks, to compute the order parameter defined in
Eq. (\ref{eqn:odparam}) and $\chi_m$, its derivative with respect to $m_l$ for different
values of the light quark masses, $m_l$. The strange quark mass was fixed at its physical value.
These quantities were fitted to the functional form given in Eq. (\ref{eqn:func}) and its derivative
respectively. The scaling analysis was done for a fixed lattice of size $N^3\times4$, so
the order parameter and its derivatives are expected to have an O(2) scaling in the chiral regime
since the fermion discretization only retains a remnant of the continuum O(4) symmetry group.
From the plots for the order parameter in the left panel of figure \ref{meos}, it is evident that for $m_l/m_s=1/80$ the phase
transition is indeed second order with O(2) critical exponents, though O(4) scaling cannot
be ruled out completely with the precision currently available. In the scaling regime,
the variable $M_b/h^{1/\delta}$ should be a universal function of $t/h^{1/\beta\delta}$.
In the right panel of figure \ref{meos}, the scaled chiral condensate is seen to be
almost universal for $m_l/m_s<1/20$, which provides a hint that even for the physical
quark masses there is a remnant effect of the chiral symmetry. The crossover transition
for 2+1 flavour QCD should therefore be sensitive to the effects of chiral symmetry, and
also to the effects of the $U_A(1)$ anomaly.
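A sketch of the corresponding scaling collapse, assuming the standard 3d O(2) exponents $\beta\simeq0.349$, $\delta\simeq4.78$ and placeholder values for the non-universal constants; data sets at different quark masses should fall on a single curve when plotted in these variables:
\begin{verbatim}
import numpy as np

# 3d O(2) critical exponents (standard literature values) and placeholder
# non-universal constants; the real constants must be fitted to lattice data.
beta, delta = 0.349, 4.78
Tc0, t0, h0 = 154.0, 1.0, 1.0

def scaling_variables(T, ml_over_ms, Mb):
    """Map (T, H = ml/ms, M_b) to (t/h^{1/(beta*delta)}, M_b/h^{1/delta})."""
    h = ml_over_ms / h0
    t = (T - Tc0) / (Tc0 * t0)
    return t / h ** (1.0 / (beta * delta)), Mb / h ** (1.0 / delta)

# After subtracting the regular part f_reg, points from all quark masses
# should collapse onto the universal function f_G if the transition is
# second order in this universality class.
\end{verbatim}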
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.3\textwidth]{pbp_o2.eps}
\includegraphics[width=0.3\textwidth]{scalingallm.eps}
\caption{The interpolated data for $M_b$ for different light quark masses are compared with the
corresponding plot for an O(4) spin model in the continuum, denoted by the solid blue line (left panel).
In the right panel, the scaling plots for the renormalized chiral condensate for QCD are shown to match with
the universal function with O(2) symmetry for $m_l/m_s<1/20$. Both plots are for p4 staggered quarks,
from \cite{mageos}.}
\label{meos}
\end{center}
\end{figure}
\subsubsection{The effects of $U_A(1)$ anomaly}
The QCD partition function breaks $U_A(1)$ symmetry explicitly. However, its effect varies with
temperature, since we know that at asymptotically high temperatures we approach the ideal Fermi gas
limit, where this symmetry is restored. It is important to understand the temperature dependence
of $U_A(1)$ breaking near the chiral phase transition. If $U_A(1)$ breaking is significantly reduced
from that at zero temperature, one would then claim that the symmetry is effectively restored.
This would result in the degeneracy of the masses of the isospin triplet pseudoscalar (pion) and scalar
(delta) mesons. The order parameter for such an effective restoration is the quantity defined as,
\begin{equation}
\chi_\pi-\chi_\delta=\int~d^4 x~ \left[\langle\bar\psi(x)\tau\gamma_5\psi(x)\bar\psi(0)\tau\gamma_5\psi(0)\rangle
-\langle\bar\psi(x)\tau\psi(x)\bar\psi(0)\tau\psi(0)\rangle\right]~,
\end{equation}
and the order parameter for the restoration of the chiral symmetry is the chiral condensate.
These quantities are also related to the fundamental theory through the density of eigenvalues,
$\rho(\lambda)$ of the Dirac operator as,
\begin{eqnarray}
\nonumber
\langle\bar\psi\psi\rangle&=&\int d\lambda~ \rho(\lambda,m)\frac{2 m}{m^2+\lambda^2}\\
\chi_\pi-\chi_\delta&=&\int d\lambda~ \rho(\lambda,m)~\frac{4 m^2}{(m^2+\lambda^2)^2}~.
\end{eqnarray}
Different scenarios that could lead to different functional behaviour of $\rho(\lambda)$ were
discussed in detail in Ref. \cite{hotqcddw}. I summarize the arguments below,
\begin{itemize}
\item From dilute instanton gas approximation: $\rho(\lambda,m)=c_0m^2\delta(\lambda)~\Rightarrow$
$\langle\bar\psi\psi\rangle\sim m$ and $\chi_\pi-\chi_\delta\sim2$
\item Analyticity of $\rho(\lambda,m)$ as a function of $\lambda$ and $m$ when chiral symmetry
is restored. To the leading order $\rho(\lambda,m)=c_m m+c_\lambda \lambda+\mathcal{O}(m^2,\lambda^2)$.
If $\rho(\lambda,m)\sim \lambda~\Rightarrow \langle\bar\psi\psi\rangle\sim -2m\ln m~,~
\chi_\pi-\chi_\delta\sim2$.
If $\rho(\lambda,m)\sim m~\Rightarrow \langle\bar\psi\psi\rangle\sim \pi m~,~
\chi_\pi-\chi_\delta\sim\pi$.
\end{itemize}
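The quoted limiting values of $\chi_\pi-\chi_\delta$ in the last two scenarios are easy to verify numerically from the integral representation above; a minimal sketch with unit coefficients and an arbitrary cutoff, purely for illustration:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# chi_pi - chi_delta = int dlambda rho(lambda,m) 4m^2/(m^2+lambda^2)^2,
# evaluated for rho ~ lambda and rho ~ m with unit coefficients, cutoff 1.
cutoff = 1.0
for m in (1e-2, 1e-3, 1e-4):
    lin, _ = quad(lambda l: l * 4 * m**2 / (m**2 + l**2) ** 2,
                  0.0, cutoff, points=[10 * m])
    const, _ = quad(lambda l: m * 4 * m**2 / (m**2 + l**2) ** 2,
                    0.0, cutoff, points=[10 * m])
    print(m, lin, const)
# lin -> 2 (rho ~ lambda) and const -> pi (rho ~ m) as m -> 0, as quoted.
\end{verbatim}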
In fact, to understand the effect of the anomaly, it is desirable to use fermions with exact chiral
symmetry on the lattice. The overlap and the domain wall fermions are such candidates, for which the chiral anomaly
can be defined. Indeed, the overlap fermions satisfy an exact index theorem on the lattice~\cite{hln}.
A recent study of the eigenvalue spectrum with the domain wall fermions from the HotQCD collaboration
\cite{hotqcddwlat} seems to favour $\rho(\lambda,m)=c_0m^2\delta(\lambda)+c_1 \lambda$ for
the density of eigenvalues. This would imply that in the chiral limit the $U_A(1)$ anomaly would still survive when
the chiral symmetry is restored. This is also consistent with the behaviour of $\chi_\pi-\chi_\delta$
as a function of temperature, shown in the left panel of figure \ref{dweigen}. At the crossover temperature of around
160 MeV, $\chi_\pi-\chi_\delta$ is far from zero, implying that the effects of the anomaly may be large
in the crossover region.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.27\textwidth]{pscorr_dw.eps}
\includegraphics[width=0.3\textwidth]{evdw177.ps}
\caption{The susceptibilities for different meson quantum states constructed with the domain wall fermions are shown as a function
of temperature in the left panel, from \cite{hotqcddw}. The eigenvalue distribution with domain wall fermions, shown
in the right panel, from \cite{hotqcddwlat}, has a peak in the near zero mode distribution at 177 MeV. The lattice size is
$16^3\times 8\times N_5$ where $N_5=32$ for $T\geq 160~$MeV and $N_5=48$ otherwise.}
\label{dweigen}
\end{center}
\end{figure}
A recent theoretical study \cite{aoki} with the overlap fermions shows that in the chiral symmetry
restored phase, where $\langle\bar\psi\psi\rangle=0$, the eigenvalue density in the chiral
limit should behave as,
\begin{equation}
\lim_{m\rightarrow0}\langle\rho(\lambda,m)\rangle=\lim_{m\rightarrow0}
\langle\rho(m)\rangle\frac{\lambda^3}{3!}+\mathcal{O}(\lambda^4)~,
\end{equation}
which would imply that $\chi_\pi-\chi_\delta\rightarrow0$ as $m\rightarrow0$.
Moreover, it is argued that if an operator is invariant under some symmetry transformation,
then its expectation value becoming zero does not necessarily imply that the symmetry is
restored, whereas the converse is true \cite{aoki}. This would mean that the observable $\chi_\pi-\chi_\delta$
may not be a good candidate to study $U_A(1)$ restoration. Rather, the equality of the
correlators of the pion and delta mesons could be a more robust observable to indicate the
restoration of the $U_A(1)$ symmetry. Recent results from the JLQCD collaboration with
2 flavours of overlap fermions seem to indicate that $U_A(1)$ may be restored near the chiral symmetry
restoration temperature, making it a first order transition \cite{cossu2}. Two of their main results are compiled in
figure \ref{oveigen}. The correlators of the scalar mesons become degenerate at about 196 MeV, and at the same
temperature a gap opens up in the small eigenvalue region of the eigenvalue spectrum. $T=196~$MeV
is slightly above the transition temperature, which is about 177 MeV. For $T=177~$MeV, there is no degeneracy between the
scalar and the pseudoscalar correlators, and the density of zero modes is finite, implying that the
chiral symmetry is broken; this means that the effects of $U_A(1)$ change rapidly near the phase transition.
However, the lattice size is $16^3\times8$, which is small enough to introduce significant finite volume and cut-off
effects in the present results.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.4\textwidth]{evov_j.eps}
\includegraphics[width=0.4\textwidth]{corrov_j.eps}
\caption{In the left panel, the quark mass dependence of the eigenvalue distribution for the overlap quarks is compared
at different temperatures, from \cite{cossu2}. In the right panel, the degeneracy of the scalar and pseudoscalar mesons for
overlap fermions is shown at a temperature of 192 MeV, which is slightly higher than the corresponding pseudo-critical temperature,
from \cite{cossu2}.}
\label{oveigen}
\end{center}
\end{figure}
With the chiral fermions, the fate of $U_A(1)$ in the crossover region is still undetermined, and more work needs to be done
for a conclusive understanding of this issue. With Wilson and staggered quarks, the anomaly is recovered only in the continuum limit. For
fine enough lattice spacings, one can however check the behaviour of the low lying eigenmodes and the meson masses for
different quantum numbers, to understand the effects of the remnant $U_A(1)$ anomaly using these fermions. From the eigenvalue distribution
of the HISQ operator shown in the left panel of figure \ref{wilhisqeigen}, it is evident that the effect of $U_A(1)$
still persists at $T=330~$MeV~\cite{hiroshi}. The long tail in the low lying eigenmodes is not a finite volume artifact, since it persists even for
very large volumes. However, the data are quite noisy and more statistics are required before drawing a final conclusion. The
screening masses for the mesons of different quantum numbers were obtained from lattice studies~\cite{phwil} with improved
Wilson fermions (right panel of figure \ref{wilhisqeigen}). In the transition region, the scalar and pseudoscalar mesons
are not degenerate, and agreement is seen only for temperatures above $1.2~T_c$. However, the input quark masses are quite
large compared to the physical values, and more data are needed for a definite conclusion. At present, the effects of quantum
anomalies are not yet fully understood from lattice studies.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.3\textwidth]{hisqev.ps}
\includegraphics[width=0.4\textwidth]{wilsonpscorr.ps}
\caption{The density of eigenvalues at $T=330.1$ MeV for HISQ discretization showing a long tail even with large
volumes, from \cite{hiroshi}(left panel). In the right panel, the screening masses for scalar, pseudo-scalar, vector and
axial vector mesons using Wilson fermions are shown as a function of temperature, from \cite{phwil}.}
\label{wilhisqeigen}
\end{center}
\end{figure}
\section{Lattice QCD at finite density}
QCD with a finite number of baryons is relevant for the physics of neutron stars and supernovae. It is
the theoretical setup for the heavy ion physics phenomena occurring at low center of mass energy, $\sqrt{s}$, of the colliding nuclei.
Some of these low $\sqrt s$ collisions are being investigated at the RHIC and will be probed further with the start of
the heavy ion experiments at FAIR, GSI and NICA, Dubna. In fact, an interesting feature of the QCD
phase diagram is the critical end-point related to chiral symmetry restoration. The existence of the critical point has
important consequences for the QCD phase diagram, and it is the aim of the extensive Beam Energy Scan (BES) program
at the RHIC to search for it.
To explain these experimental results from first principles, we need to extend the lattice QCD formulation to
include a finite baryon density. One of the methods is to work in a grand canonical ensemble. In such
an ensemble, the partition function is given by
\begin{equation}
\mathcal{Z}_{QCD}(T,\mu)=Tr\left(\rm{e}^{-(\mathcal{H}_{QCD}-\mu N)/T}\right)=\int
\mathcal {D}U_\mu \prod_{f=1}^{N_f} det D_f(\mu)\rm{e}^{-S_G}~,
\end{equation}
where the chemical potential $\mu$ is the Lagrange multiplier corresponding to the conserved charge $N$ that commutes with the
QCD Hamiltonian $\mathcal{H}_{QCD}$; $N$ can be the baryon number or the net electric charge. The $\mu$ enters into the lattice fermion action as $\exp(\pm\mu a)$
factors multiplying the forward and backward temporal links respectively~\cite{hk,kogut}, referred to as the
Hasenfratz-Karsch method. The naive fermion operator at finite $\mu$ on the lattice would then be of the form,
\begin{equation}
D_f(\mu)_{x,y}=\left[\sum_{i=1}^{3}\frac{1}{2}\gamma_i\left( U_{i}(x)\delta_{y,x+i}-
U^\dagger_{i}(y) \delta_{y,x-i}\right)
+\frac{1}{2}\gamma_4 \left(\rm{e}^{\mu a} U_{4}(x)\delta_{y,x+4}-\rm{e}^{-\mu a} U^\dagger_{4}(y)\delta_{y,x-4}
\right)+a m_f \delta_{x,y}\right] .
\end{equation}
This is not a unique way of introducing $\mu$; it can also be done in several other ways~\cite{gavai}.
The lattice fermion determinant at finite $\mu$, as in the continuum, is no longer positive definite, since
\begin{equation}
det D_f^\dagger(\mu)= det D_f(-\mu)\Rightarrow det D_f(\mu)=\vert det D_f(\mu)\vert~\rm{e}^{i\theta}
\end{equation}
and the interpretation of $\int \mathcal{D}U det D_f(\mu)\rm{e}^{-S_G}$ as a probability weight in
standard Monte Carlo simulations is no longer well defined.
This is known as the ``sign problem''. One may generate configurations using only the absolute value of the fermion determinant as the Monte Carlo
weight, so-called phase quenching. Once the partition function is known in the
phase quenched limit, one can then use reweighting techniques to recover the partition function of the full theory
at different values of $\mu$.
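The determinant identity above follows from the $\gamma_5$-hermiticity $D^\dagger(\mu)=\gamma_5 D(-\mu)\gamma_5$ and can be checked explicitly on a toy example. The sketch below builds the naive operator written above on a $2^4$ lattice, with random U(1) phases in place of SU(3) links and periodic temporal boundary conditions, purely as an illustration (neither simplification affects the identity):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
L, mu, m = 2, 0.3, 0.5      # lattice extent, chemical potential, mass (a = 1)
vol = L**4

# Euclidean gamma matrices (chiral basis) and gamma_5.
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
z2, e2 = np.zeros((2, 2)), np.eye(2)
gam = [np.block([[z2, -1j * s], [1j * s, z2]]) for s in sig]
gam.append(np.block([[e2, z2], [z2, -e2]]).astype(complex))
g5 = gam[0] @ gam[1] @ gam[2] @ gam[3]

sites = [np.unravel_index(i, (L,) * 4) for i in range(vol)]
idx = {x: i for i, x in enumerate(sites)}
U = np.exp(2j * np.pi * rng.random((vol, 4)))   # random U(1) link phases

def dirac(mu):
    D = m * np.eye(4 * vol, dtype=complex)
    for i, x in enumerate(sites):
        for d in range(4):
            y = list(x); y[d] = (y[d] + 1) % L
            j = idx[tuple(y)]
            f = np.exp(mu) if d == 3 else 1.0   # e^{+mu a}: forward temporal hop
            b = np.exp(-mu) if d == 3 else 1.0  # e^{-mu a}: backward temporal hop
            D[4*i:4*i+4, 4*j:4*j+4] += 0.5 * f * U[i, d] * gam[d]
            D[4*j:4*j+4, 4*i:4*i+4] -= 0.5 * b * np.conj(U[i, d]) * gam[d]
    return D

Dp, Dm = dirac(mu), dirac(-mu)
G5 = np.kron(np.eye(vol), g5)
print(np.allclose(Dp.conj().T, G5 @ Dm @ G5))                   # True
print(np.isclose(np.linalg.det(Dp).conj(), np.linalg.det(Dm)))  # True
\end{verbatim}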
The expectation value of the phase of the determinant, needed for re-weighting, at some finite $\mu$ is given as
\begin{equation}
\langle \rm{e}^{i\theta}\rangle=\frac{\int
\mathcal {D}U \prod_{f=1}^{N_f}~\vert det D_f(\mu)\vert~\rm{e}^{i\theta}~\rm{e}^{-S_G}}{\int
\mathcal {D}U \prod_{f=1}^{N_f} ~\vert det D_f(\mu)\vert~\rm{e}^{-S_G}}=\rm{e}^{\frac{-V~\Delta F}{T}}~,
\end{equation}
where $\Delta F$ is the difference between the free energy densities of the full and the phase quenched QCD. For
two degenerate quark flavours, the phase quenched theory is equivalent to a theory with a finite isospin chemical potential
\cite{son}, and $\Delta F$ is the difference of the free energies of QCD at a finite baryon (quark) chemical
potential and at an isospin chemical potential. These two theories are qualitatively quite different, and the sign
problem results in a very small overlap between them. In the isospin QCD,
the charged pions are the lightest excitations, and these can undergo Bose-Einstein condensation for $\mu>m_\pi/2$.
The difference between the respective free energies in this regime is quite large, leading to a severe sign problem.
This is an algorithmic problem that can arise for any theory with chiral symmetry breaking. A better understanding of
the sign problem has been achieved in recent years, with a knowledge of the regions of the phase diagram with a
severe sign problem and those where it is controllable~\cite{sign1,sign2}. There are several methods for circumventing
this problem on the lattice, some of which are listed below:
\begin{itemize}
\item Reweighting of the $\mu=0$ partition function~\cite{reweight1, bwreweos,reweight2}
\item Taylor series expansion \cite{tay1,gg1}
\item Canonical ensemble method \cite{cano1,cano2}
\item Imaginary chemical potential approach~\cite{imchem1,imchem2,imchem3}
\item Complex Langevin algorithm \cite{complexl1,complexl2,complexl3}
\item Worm algorithms \cite{shailesh,gattringer}
\item QCD on a Lefschetz thimble~\cite{luigi}
\end{itemize}
The Taylor series method has been widely used in lattice QCD studies in recent years, and has led to interesting
results relevant for the experiments. One such proposal is the determination of the line of chemical freezeout
for the hadrons in the phase diagram at small baryon density from a first principles lattice study. It was first
proposed that cumulants of baryon number fluctuations could
be used for determining the freezeout parameters~\cite{gg5} on the lattice. Last year, another interesting suggestion was made~\cite{freezeouthisq}, namely that the experimental data on cumulants of electric charge fluctuations could be used as an input to compute the freezeout curve using lattice data. This and some other results are discussed in the subsequent subsections. Most of the results are obtained with improved versions of staggered fermions. It has been known that
the rooting problem may be more severe at finite density~\cite{rootingmu}. It is thus important to explore other fermion formulations
as well for lattice studies. Wilson fermions have been used, but it is important to use chiral fermions, especially for the
study of the critical point. I outline in the next subsection the theoretical efforts in recent years that have led to the
development of fermion operators at finite density with exact chiral symmetry on the lattice, which can be used for future
lattice studies of the critical point.
\subsection{Chiral fermions at finite density}
The contribution of the $U_A(1)$ anomaly is believed to affect the order of the chiral phase transition at zero density
and hence is crucial for the presence or absence of the critical point. If the anomaly is not represented correctly at finite density, it may
affect the location of the critical point in the phase diagram, if it exists. Overlap fermions
have exact chiral symmetry on the lattice, in the sense that the overlap action is invariant under suitable chiral transformations
known as the Luscher transformations \cite{luscher}. It can further be shown that the fermion measure in the path integral is not invariant under the Luscher transformations, and its change gives the chiral anomaly. The index theorem, relating the anomaly to the difference
between the fermion zero modes, can be proved for them \cite{hln}. Thus the overlap fermions have properties analogous to the fermions of continuum QCD. In the continuum, it is known that the anomaly is not affected in the presence of a finite baryon chemical potential. It
would be desirable to preserve this continuum property with the overlap fermions as well, such that the physical properties
important for the existence of the critical point are faithfully represented on a finite lattice. Defining an overlap fermion
action at finite chemical potential is non-trivial, as the conserved currents have to be defined with care~\cite{mandula}.
The first attempt to define an overlap fermion operator at finite density \cite{blochw} was made in the last decade,
and an index theorem at finite $\mu$ was also derived for them. However, these overlap fermions did not have exact chiral
symmetry on a finite lattice~\cite{bgs}. Moreover, the index theorem for them was $\mu$-dependent, unlike in the continuum. Recently,
an overlap fermion at finite density has been defined from first principles~\cite{ns}, which has exact chiral symmetry on the
lattice \cite{gs1} and preserves the $\mu$-independent anomaly as well. A suitable domain wall fermion action has also been defined at finite density \cite{gs1}, which was shown to reproduce the overlap action in the appropriate limit. It would be important to
check the application of these overlap and domain wall fermion operators at finite $\mu$ in future large scale QCD simulations.
\subsection{Correlations and fluctuations on the lattice }
The studies of fluctuations of the conserved charges are important to understand the nature of the degrees of freedom in a
thermalized medium and the interactions among them~\cite{koch,asakawa1}.
The diagonal susceptibility of order $n$, defined as,
\begin{equation}
\chi^{X}_{n}=\frac{T}{V}\frac{\partial ^{n}\ln \mathcal{Z}}{\partial \mu_X^n }~,~X\equiv B,S,Q,
\end{equation}
measures the fluctuations of the conserved quantum number $X$. In a heavy-ion
experiment the relevant conserved numbers are the baryon number $B$ and the electric charge $Q$. The strangeness $S$ is zero at the
initial time of the collision of the heavy nuclei, but strange quark excitations are produced at a later time in the QGP, and it is also believed
to be a good quantum number. These fluctuations can be computed
exactly on the lattice at $\mu=0$ from the quark number susceptibilities~\cite{gottleib}. Continuum extrapolated results for the second order susceptibilities of baryon number, strangeness and electric charge exist for both HISQ~\cite{hisqsusc} and stout smeared staggered quarks~\cite{bwsusc}. The fluctuations of the baryon number are very well explained by the hadron resonance gas model for $T<160~$MeV.
However, the fluctuations of strangeness are usually larger than the HRG values by about 20\% in the freezeout region characterized by
$160\leq T\leq 170~$MeV. The electric charge fluctuations, on the other hand, are smaller than the corresponding HRG values by 10\% in the
same region. The ratio $\chi_2^Q/\chi_2^B(\mu=0)\simeq
0.29$--$0.35$ in the freezeout region. A first principles determination of this ratio is crucial, as it would allow us to relate the net baryon number fluctuations to the net proton number fluctuations, which is an observable in the heavy ion experiments~\cite{hisqsusc}. At high temperatures, these fluctuations slowly approach the corresponding free theory values, with the
continuum extrapolated data for the baryon number susceptibility showing about a 20\% deviation from the free theory value even at
$2T_c$~\cite{hisqsusc}. The data are in good agreement with resummed perturbation theory estimates at these temperatures~\cite{sylvain,gm}, indicating that the QGP is still fairly strongly interacting even at temperatures around $2T_c$.
To relate to the results of the heavy ion experiments at a lower collision energy, $\sqrt{s}$, one has to compute the fluctuations on the lattice at a finite value of $\mu$. The most widely used lattice method to compute the susceptibilities at a
finite value of the quark chemical potential $\mu$ is through the Taylor expansion of the corresponding quantity at $\mu=0$, e.g.,
\begin{equation}
\label{eqn:series}
\chi^{B}_2(\mu)/T^2=\chi^{B}_2(0)/T^2+\frac{\mu^2}{2! T^2}\chi^{B}_4(0)+\frac{\mu^4}{4! T^4}\chi^{B}_6(0)T^2+\cdots
\end{equation}
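A minimal sketch of such a truncated expansion; the coefficient values must be supplied from lattice tables, and none are assumed here:
\begin{verbatim}
def chi2B_taylor(mu_over_T, chi2_T2, chi4, chi6_T2):
    """Truncated Taylor series for chi_2^B(mu)/T^2.

    chi2_T2 = chi_2^B(0)/T^2, chi4 = chi_4^B(0), chi6_T2 = chi_6^B(0)*T^2;
    all three must come from lattice data at the chosen temperature.
    """
    x = mu_over_T
    return chi2_T2 + x**2 / 2.0 * chi4 + x**4 / 24.0 * chi6_T2
\end{verbatim}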
The light and strange quark susceptibilities have been
computed at finite but small densities from Taylor expansion, using asqtad staggered quarks~\cite{milc1} and the ratios of
baryon number susceptibilities using the unimproved staggered fermions~\cite{gg5} in the region of interest for the
RHIC experiments. All these ratios agree well with the estimates from the HRG model~\cite{gg5}, the
results for which are compiled in the right panel of figure \ref{criticalpt}. The ratios of susceptibilities
serve as a good observable for comparing the lattice and the experimental data since these are free from the unknown
quantities, like the volume of the fireball during freezeout~\cite{sgupta}.
The higher order susceptibilities $\chi_n$, for $n>4$, are important even in the $\mu=0$ regime. In the chiral limit, it
is expected that the fourth order baryon number susceptibility would have a cusp and the sixth order one would diverge
with O(4) scaling at the critical temperature. Even for physical quark masses, $\chi_6^B$ for QCD would show oscillations near
the pseudo-critical temperature, and $\chi_8^B$ would have negative values in the same region~\cite{kr}, quite contrary
to the HRG predictions. Thus signatures of critical behaviour could be understood
by a careful study of these quantities already at $\mu\sim0$, which is probed by the experiments at the
LHC~\cite{kr}.
Other important quantities of relevance are the off-diagonal susceptibilities. These, defined as
\begin{equation}
\chi_{ijk}^{BSQ}=\frac{T}{V}\frac{\partial ^{i+j+k}\ln \mathcal{Z}}{\partial \mu_B^i\partial \mu_S^j \partial\mu_Q^k}~,
\end{equation}
are a measure of the correlations between different quantum numbers and hence good observables for
estimating the effects of interactions in the different phases of the QCD medium. It has been suggested that the
quantity $C_{BS}=-3\chi_{11}^{BS}/\chi^S_2$ is a good observable to characterize deconfinement in thermal QCD \cite{maz}.
If the strangeness is carried by quark like excitations, the value of $C_{BS}$ would be unity, whereas it would
be much smaller than unity in the phase where only baryons and mesons carry the strangeness quantum number. Recent results
from the HotQCD collaboration using the HISQ action~\cite{hisqsusc} show that $C_{BS}$ approaches unity very quickly,
at around 200 MeV, implying that almost no strange hadrons survive in the QGP phase above $T_c$.
This is compiled in the left panel of figure \ref{corr}. The HotQCD data are consistent with the corresponding continuum extrapolated data
with the stout smeared fermions~\cite{bwsusc}. Also, $C_{BS}$ is not sensitive to the sea strange quark masses for $T>T_c$, since the
first partially quenched results~\cite{ggcbs} for this quantity are consistent with the full QCD results. The other
important observable is the baryon-electric charge correlation. In the confined phase, the electric charge
in the baryon sector is mainly carried by the protons and anti-protons; therefore this
correlation would rise exponentially with temperature if this phase could be described as a non-interacting gas consisting of
these particles. At high temperatures, however, quark like excitations would be important, and, their masses being much smaller than the temperature, this correlation would fall to zero. From the behaviour of the continuum extrapolated HISQ
data for $\chi_{11}^{BQ}$, compiled in the right panel of figure \ref{corr}, it is evident that near the pseudo-critical temperature
there is a change in the fundamental properties of the degrees of freedom of the medium, with quark like excitations
dominating at $1.5~T_c$.
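The two limits of $C_{BS}$ can be illustrated with a toy two-species Boltzmann gas of strange hadrons (kaons with $S=1$, $B=0$ and $\Lambda$-like hyperons with $S=-1$, $B=1$); a realistic HRG computation would of course include the full tower of strange hadrons, so the number below only indicates the qualitative suppression:
\begin{verbatim}
import numpy as np
from scipy.special import kn

def z(m_over_T, g):
    """Boltzmann weight g*(m/T)^2*K_2(m/T); common factors cancel in ratios."""
    return g * m_over_T**2 * kn(2, m_over_T)

def cbs_hadron_gas(T):
    zK = z(498.0 / T, 4)    # K+, K-, K0, K0bar: all with S^2 = 1, B = 0
    zL = z(1116.0 / T, 2)   # Lambda and anti-Lambda: B*S = -1, S^2 = 1
    chi11_BS = -zL          # kaons do not contribute (B = 0)
    chi2_S = zK + zL
    return -3.0 * chi11_BS / chi2_S

print(cbs_hadron_gas(150.0))   # well below 1: strangeness mostly in kaons
# For free quarks, B_s*S_s = (1/3)(-1) and S_s^2 = 1, so C_BS = 1 exactly.
\end{verbatim}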
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.3\textwidth]{cbs_hisq.eps}
\includegraphics[width=0.3\textwidth]{chibq_hisq.eps}
\caption{The HISQ data for $C_{BS}$(left panel) and $\chi_{11}^{BQ}/T^2$(right panel)
as a function of temperature, from \cite{hisqsusc}.}
\label{corr}
\end{center}
\end{figure}
\subsection{The freezeout curve from lattice}
To relate the results from the heavy ion experiments with the lattice data, it is crucial to map
the center of mass energy of the colliding nuclei in the heavy ion collisions, $\sqrt{s}$, to the corresponding point in the
$T-\mu_B$ plane of the QCD phase diagram. This is called the freezeout curve. Phenomenologically, the freezeout curve
is obtained from a particular parameterization of the HRG model obtained by fitting the experimental
data on hadron abundances~\cite{freezeout}.
At chemical freezeout, the chemical composition of the baryons gets frozen, meaning that inelastic collisions between these
species become less probable under further cooling of the system.
However, the systematic uncertainties in determining the hadron yields are not taken into account in the phenomenological
determination of the freezeout curve. Recent work by the BNL-Bielefeld collaboration shows how lattice techniques
can provide a first principles determination of the freezeout curve through suitable experimental observables~\cite{freezeouthisq}. As emphasized
in the last subsection, the ratios of susceptibilities are believed to be good observables for comparing the lattice and
the experimental data. Two such observables proposed in Ref. \cite{freezeouthisq} are,
\begin{eqnarray}
\nonumber
R_{12}^{X}\equiv\frac{M_X}{\sigma_X^2}&=&\frac{\mu_B}{T}\left( R_{12}^{X,1}+\frac{\mu_B^2}{T^2}R_{12}^{X,3}+
\mathcal{O}(\mu_B^4)\right)\\
R_{31}^{X}\equiv\frac{S_X\sigma_X^3}{M_X}&=&R_{31}^{X,1}+\frac{\mu_B^2}{T^2}R_{31}^{X,3}+
\mathcal{O}(\mu_B^4)
\end{eqnarray}
where $M_X,~\sigma_X,~S_X$ denote the mean, variance and skewness in dimensionless units for the conserved quantum number $X$. These
observables are chosen because they are odd and even functions of $\mu_B$ respectively, allowing us to independently
determine $T$ and $\mu_B$ from these two quantities. The quantum number $X$ can be chosen to be either the net electric
charge $Q$ or the net baryon number $B$. In the experiments one can only measure the proton number fluctuations, and it
is not clear whether the proton number fluctuations could be a proxy for the net baryon fluctuations~\cite{asakawa2}.
It was thus suggested that the ratios of net charge fluctuations would be a better observable to compare with the experiments. Once
$R^{Q}_{31}$ is known from experiments, one can determine the freezeout temperature $T_f$ from it by comparing with the
continuum extrapolated lattice data. Analogously, one can obtain the $\mu_B$ at freezeout from a comparison of the
$R^{Q}_{12}$ data. In the left panel of figure \ref{freezeoutmu}, the results for $R^{Q}_{31}$ are shown as a function
of temperature. It is evident that the first order correction to the value of the ratio is within 10\% of the
leading order value for $\mu_B/T<1.3$ and in the freezeout region, i.e., $T>140~$MeV. From the leading order
results for $R^Q_{31}$ one can estimate the freezeout temperature. For $\sqrt{s}$ in the range of 39--200 GeV currently
probed in the Beam Energy Scan (BES) experiment at the RHIC, the freezeout temperature from the HRG
parameterization of the hadron multiplicities is about 165 MeV. At this temperature, the ratio $R^Q_{31}$ calculated from
the HRG model is considerably larger than the lattice estimate, which would mean that the freezeout temperature estimated from
the lattice data would differ from the model results by at least 5\%. Similarly, if $R^Q_{12}$ is known from the
experiments, $\mu_B$ can be accurately estimated, and is expected to be different from the current HRG estimates. This is
not very surprising, because the freezeout of the fluctuations happens through diffusive processes, a mechanism different from
the freezeout of hadrons, which is driven by the decreasing probability of inelastic collisions. Another question that was addressed
in this work is how relevant the other parameters, like $\mu_S$ and $\mu_Q$, are for the phase diagram and the
freezeout curve. It was seen that $\mu_S$ and $\mu_Q$ are significantly smaller
than $\mu_B$, and the ratios of these quantities have a very small $\mu_B$ dependence in the
entire temperature range of 140-170 MeV relevant for the freezeout studies. This signifies that the relevant axes for the
phase diagram are indeed $T$ and $\mu_B$, and these two parameters are sufficient for characterizing the freezeout curve.
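As a sketch of how these ratios are built from event-by-event data, the following uses a synthetic Skellam-distributed toy sample, for which $S_X\sigma_X^3/M_X=1$ (the HRG-like baseline), rather than real events:
\begin{verbatim}
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(7)
# Toy net charge per event: difference of two Poisson variates (Skellam).
net_q = rng.poisson(12.0, 100_000) - rng.poisson(10.0, 100_000)

M, var, S = net_q.mean(), net_q.var(), skew(net_q)
R12 = M / var             # odd in mu_B: fixes mu_B at freezeout
R31 = S * var**1.5 / M    # even in mu_B: fixes T at freezeout
print(R12, R31)           # Skellam baseline: R12 = 2/22 ~ 0.091, R31 = 1
\end{verbatim}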
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.3\textwidth]{R31Q.eps}
\includegraphics[width=0.3\textwidth]{muQ_muS.eps}
\caption{In the left panel, the leading term for $R^{Q}_{31}$, shown in the yellow band, is compared to its
NLO value, denoted by the blue band, in the
continuum limit. In the right panel, the ratios of $\mu_Q$ and $\mu_S$ with respect to
$\mu_B$ are compared with the HRG model predictions at different temperatures. Both figures are from Ref.
\cite{freezeouthisq}.}
\label{freezeoutmu}
\end{center}
\end{figure}
\subsection{Physics near the critical point}
It is known from models with the same symmetries as QCD that the chiral phase transition at $T=0$ and
finite $\mu$ is of first order. At zero density and high enough temperatures, QCD undergoes a crossover
from the hadron to the QGP phase. By continuity, it is expected that the first order line should end
at a critical end-point in the phase diagram~\cite{crit}. The determination of its existence from first principles
lattice computations has been quite challenging, and the currently available lattice results are summarized in the
left panel of figure \ref{criticalpt}. These are all obtained using staggered fermions. The first lattice study
of the critical point was done using the reweighting technique.
Configurations generated at the critical value of the gauge coupling for $\mu_B=0$ were used to
determine the partition function at different values of $T$ and $\mu_B$ using two-parameter reweighting~\cite{reweight2}.
By observing the finite volume behaviour of the Lee-Yang zeroes of the partition function, it was predicted that for
2+1 flavour QCD, there is a critical end-point at $T_E=160(4)~$MeV and $\mu_B=725(35)~$MeV. In this study the light quark was
four times heavier than its physical value. Reducing the light quark mass, shifted the critical end-point to
$\mu_B=360(40)~$MeV with $T_E=162(2)~$MeV remaining the same~\cite{reweight3}. However, this result is
for a rather small lattice of size $16^3\times 4$ and is expected to change in the continuum limit
and with larger volumes. Reweighting becomes more expensive with increasing volume of the lattice, so going to a larger
lattice seems difficult with this method.
The other results for the critical point were obtained using the Taylor series method. In this method,
the baryon number susceptibility at finite density is expanded in powers
of $\mu_B/T$ as a Taylor series as shown in Eq. (\ref{eqn:series}), for each value of temperature.
The baryon number susceptibility is expected to diverge at the critical end-point~\cite{shuryak1}, so
the radius of convergence of the series would give the location of the critical end-point~\cite{gg1}.
However on a finite lattice, there are no divergences but the different estimates of the radius of
convergence given as
\begin{equation}
\label{eqn:radiusc}
r_n(\text{n=odd})= \sqrt{\frac{\chi^B_{n+1}}{T^2\chi^B_{n+3}}}~,~r_n(\text{n=even})=
\left[\frac{\chi^B_{2}}{T^n\chi^B_{n+2}}\right]^{1/n}
\end{equation}
should all be positive and equal within errors at the critical end-point. Currently, the state of the art
on the lattice is the estimation of baryon number susceptibilities up to $\chi^B_{8}$. This
gives five different independent estimates of the radius of convergence up to $r_6$, which were shown to be consistent within
errors for $N_\tau=4,6,8$ at $T_E=0.94(1) T_c$~\cite{gg2,gg3,gg4}. The radius of convergence after finite volume correction
is $\mu_B/T_E=1.7(1)$ \cite{gg4}, which means $\mu_B=246(15)~$MeV at the critical end-point if we choose $T_c=154~$MeV. The input
pion mass for this computation is about 1.5 times the physical value and could affect the final coordinates
of the end-point. Moreover, the different estimates for the radius of convergence $r_n$ in Eq. (\ref{eqn:radiusc}) agree
with each other only for asymptotically large
values of $n$, and one might need to check the consistency of the results with the radii of convergence estimates
beyond $r_6$. Hints of the critical end-point were also obtained~\cite{critcano} using a different
fermion discretization and a different methodology. Working with the canonical ensemble of improved
Wilson fermions, the presence of a critical point was reported at $T_E=0.925(5)T_c$ and $\mu_B/T_c=2.60(8)$.
This is a very preliminary study though, with a small lattice volume and a very heavy pion mass of about 700 MeV.
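As a simple numerical illustration of how these estimators behave, the following sketch evaluates the expressions in Eq. (\ref{eqn:radiusc}) for dimensionless susceptibilities (so the explicit factors of $T$ drop out), using placeholder values of $\chi^B_n$ that are invented for illustration and are not lattice data.
\begin{verbatim}
import numpy as np

# Placeholder dimensionless susceptibilities chi_n^B (NOT lattice data).
chi = {2: 0.25, 4: 0.11, 6: 0.085, 8: 0.07}

def r_odd(n):
    # Eq. (radius of convergence), odd n: r_n = sqrt(chi_{n+1}/chi_{n+3})
    return np.sqrt(chi[n + 1]/chi[n + 3])

def r_even(n):
    # Eq. (radius of convergence), even n: r_n = (chi_2/chi_{n+2})^(1/n)
    return (chi[2]/chi[n + 2])**(1.0/n)

estimates = {"r1": r_odd(1), "r3": r_odd(3), "r5": r_odd(5),
             "r4": r_even(4), "r6": r_even(6)}
for name, value in estimates.items():
    print(name, round(value, 2))
# Near a genuine critical end-point these estimates should approach a
# common positive value as n grows.
\end{verbatim}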
Though there is growing evidence in support of the existence of the critical end-point, the systematics for all
these lattice studies are still not under control. It would be desirable to follow a different strategy
to determine its existence. An alternative method suggested~\cite{dfp1} was to determine the curvature of the surface
of second order chiral phase transitions as a function of the baryon chemical potential $\mu_B$. If the chiral
critical surface bends towards larger values of $m_{u,d}$ with increasing baryon chemical potential for a fixed
value of the strange quark mass, it would pass through the physical point, ensuring the existence of a critical end-point.
However, if the curvature is of opposite sign, the chiral critical end-point would not exist. For a lattice of size
$8^3\times4$, the critical mass of the light quarks was estimated up to $\mathcal{O}(\mu_B^4)$~\cite{dfp2},
\begin{equation}
\frac{m_c(\mu_B)}{m_c(0)}=1-39(8)\left(\frac{\mu_B}{3 \pi T}\right)^2-...~,~
\end{equation}
with the strange quark mass fixed at its physical value. The leading value of the curvature has the same sign even
for a finer lattice of extent $N_\tau=6$~\cite{dfp3}. These studies show that the region of first order transition
shrinks for small values of $\mu_B$, which would mean that the critical point may not exist in this regime of $\mu_B$.
However, for larger values of $\mu_B$, the higher order terms could be important and may bend the chiral critical line
towards the physical values of the quark masses. The finite cut-off effects are still sizeable, and
it is currently premature to make any definite predictions in the continuum limit with this method.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.3\textwidth]{crit.eps}
\includegraphics[width=0.3\textwidth]{m16_gg.eps}
\caption{The estimates of the critical point from lattice studies are shown in the left panel, from \cite{gg4}. The
magenta solid circle, box and star denote the $N_\tau=4,6,8$ data respectively for 2 flavours of staggered quarks~\cite{gg2,gg3,gg4}
and the open circles denote $N_\tau=4$ data for 2+1 flavours obtained with reweighting techniques~\cite{reweight2,reweight3}.
In the right panel, the ratio of the third and the second order baryon number susceptibility is plotted as a function of $\sqrt s$ relevant
for the RHIC and LHC experiments and compared with the HRG model data, from~\cite{gg5}.}
\label{criticalpt}
\end{center}
\end{figure}
It is equally important to understand the possible experimental signatures of the
critical point. The search for the critical end-point is one of the important aims of the extensive
BES program at RHIC. In a heavy ion experiment,
one measures the number of charged hadrons at the chemical freezeout and its cumulants. During the expansion
of the fireball, the hot and dense QCD medium would pass through the critical region and cool down, eventually
forming hadrons. If the freezeout and the critical regions are far separated, the system
would have no memory of the critical fluctuations, and the baryon number susceptibility measured in the
experiments could be consistent with the predictions of thermal HRG models, which have no critical behaviour.
If the freezeout region is within the critical region, the critical fluctuations would be larger than
the thermal fluctuations. It is thus important to estimate the chiral critical line for QCD from first
principles.
The curvature of the chiral critical line has been estimated by the BNL-Bielefeld collaboration~\cite{curvhisq}, by extending the
scaling analysis of the dimensionless chiral condensate $M_b$ outlined in Section \ref{sec:curv} to finite values
of the baryon chemical potential, using a Taylor series expansion. The corresponding scaling variables at finite $\mu_B$ are
\begin{equation}
t=\frac{1}{t_0}\left(\frac{T-T_{c,0}}{T_{c,0}}+\kappa_B\left(\frac{\mu_B}{3T}\right)^2\right)~,~h=\frac{m_l}{h_0 m_s}.
\end{equation}
The quantity $M_b$ can be expanded as a Taylor series in $\mu_B/3T$ as,
\begin{equation}
M_b(\mu)=M_b(0)+\frac{\chi_{m,B}}{2 T}\left(\frac{\mu_B}{3T}\right)^2+\mathcal{O}\left(\frac{\mu_B}{3T}\right)^4
\end{equation}
where $\chi_{m,B}$ is the mixed susceptibility defined as $\chi_{m,B}=\frac{T^2}{m_s}\partial^2 M_b/\partial (\mu_B/3T)^2$ computed
at $\mu_B=0$. In the critical region, it would show a scaling behaviour of the form,
\begin{equation}
\frac{\chi_{m,B}}{T} =\frac{2 \kappa_B T}{t_0 m_s}h^{-(1-\beta)/\beta\delta}f'_G(t/h^{1/\beta\delta}).
\end{equation}
The universality of the scaled $\chi_{m,B}$ data is clearly visible in the right panel of figure \ref{criticalcv}, both for
p4 staggered quarks on an $N_\tau=4$ lattice with mass ratios of light and strange quarks varying from $1/20$ to $1/80$ and
for the HISQ discretization on a $32^3\times8$ lattice with the mass ratio fixed at $1/20$. The fit of the complete lattice
data set to the scaling relation for $\chi_{m,B}$ gave the value $\kappa_B=0.00656(66)$. At non-vanishing
$\mu_B$, the phase transition point is located at $t=0$, which implies that the critical temperature at finite density
can be parameterized as
\begin{equation}
\frac{T_c(\mu_B)}{T_c(0)}=1-\kappa_B\left(\frac{\mu_B}{3T}\right)^2+\mathcal{O}\left(\frac{\mu_B}{3T}\right)^4
\Rightarrow T_c(\mu_B)\simeq154(1-0.0066\left(\frac{\mu_B}{3T}\right)^2)~\text{MeV}.
\end{equation}
This estimate of the curvature is about three times larger than the corresponding prediction from the hadron
resonance gas model. It would be interesting to compare the curvature of the freezeout line computed on the lattice
with that of the critical line, once the experimental data for the electric charge cumulants are available.
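To get a feeling for how flat the critical line is, the parameterization above can be evaluated at, say, $\mu_B=300~$MeV, approximating $T\simeq T_c$ in the correction term:
\begin{equation*}
T_c(300~\textrm{MeV})\simeq 154\left[1-0.0066\left(\frac{300}{3\times 154}\right)^2\right]~\textrm{MeV}\simeq 153.6~\textrm{MeV},
\end{equation*}
a downward shift of less than half an MeV, which illustrates why resolving the curvature on the lattice, and comparing it with the freezeout curve, requires very precise data.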
Another complementary study of the fate of the critical region at finite density was done by the Budapest-Wuppertal group~
\cite{bwfreezeout}. It was suggested that if the critical region shrinks with increasing $\mu_B$, it would imply that
one slowly converges to the critical end-point. The width of the critical region was measured from two different
observables, the renormalized chiral condensate and the strange quark number susceptibility. Stout smeared staggered
quarks were employed and the continuum limit was taken with the $N_\tau=6,8,10$ data. The results are summarized in the left panel of
figure \ref{criticalcv}. From the plots, it seems that the width of the crossover region does not
change significantly from its $\mu_B=0$ value for $\mu_B<500~$MeV, which implies either that the critical end-point does not
exist at all or that it is present at a higher value of $\mu_B$. The corresponding curvature measured from the light quark chiral condensate
is $0.0066(20)$, which is consistent with the result from the BNL-Bielefeld collaboration. The results indicate that the
chiral pseudo-critical line and the phenomenological freezeout curve would separate at larger values of
$\mu_B$, with the separation being largest at the critical end-point.
It was noted that the higher order fluctuations depend more strongly on the correlation length
of the system~\cite{stephanov} and would survive even if the chiral and freezeout lines are far apart.
It has been proposed~\cite{kr} that the signature of the critical point can be detected by monitoring
the behaviour of the sixth and higher order fluctuations of the electric charge along the freezeout
curve.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.3\textwidth]{phased_bw.eps}
\includegraphics[width=0.35\textwidth]{c2q_pbp_scaled.eps}
\caption{In the left panel, the width of the pseudocritical region for chiral condensate is shown as a blue curve and that
for strange quark susceptibility is shown as a red curve, from~\cite{bwfreezeout}. In the right panel, the scaling of the mixed
susceptibility is shown for different light quark masses and at the physical value of strange quark mass, from~\cite{curvhisq}. }
\label{criticalcv}
\end{center}
\end{figure}
\subsection{The EoS at finite density}
The EoS at finite density is an important input for understanding the hydrodynamical evolution of the
fireball formed at low values of the collision energy, at RHIC and at the future experiments at FAIR
and NICA. It is believed that there is no generation of entropy once the fireball thermalizes~\cite{shuryak2}.
In that case, as pointed out in Ref.~\cite{christian}, it is important to determine the EoS along lines of constant entropy per net baryon
number, $S/n_B$, to relate the lattice results with the experiments. The isentrope, determined by a fixed value of $S/n_B$
that characterizes the evolution of the fireball, is $S/n_B\simeq300$ for the RHIC experiments at $\sqrt{s}=200~$GeV. For the future
experiments at FAIR, the isentropes would be labeled by $S/n_B=30$, close to that of the early SPS experiments at CERN, where $S/n_B\sim 45$. For two flavour QCD with p4 staggered quarks and a pion mass heavier than its physical value, it was already observed
that the ratio of pressure and energy density showed little variation as a function of $S/n_B$. The pressure and
the energy density at finite $\mu$ are usually computed on the lattice as a Taylor series about their values at zero
baryon density, as,
\begin{equation}
\frac{P(\mu_l,\mu_s)}{T^4}= \frac{P(0)}{T^4}+\sum_{i,j}\frac{\chi_{ij}(T)}{T^{4-i-j}}\left(\frac{\mu_l}{T}\right)^i
\left(\frac{\mu_s}{T}\right)^j~.
\end{equation}
The formula is valid for two degenerate light quark flavours and a heavier strange quark. The coefficients $\chi_{ij}$ are the quark number susceptibilities at $\mu=0$ and are non-zero for even $i+j$. The corresponding expression for the trace anomaly is given as,
\begin{equation}
\frac{I(\mu_l,\mu_s)}{T^4}=-\frac{N_\tau^3}{N^3}\frac{d\ln \mathcal{Z}}{d\ln a}= \frac{I(0)}{T^4}+\sum_{i,j}b_{ij}(T)\left(\frac{\mu_l}{T}\right)^i
\left(\frac{\mu_s}{T}\right)^j~.
\end{equation}
The $\chi_{ij}$s can also be obtained from the coefficients $b_{ij}$ by integrating the latter along the line of constant physics.
For 2+1 flavours of improved asqtad staggered quarks with physical strange quark mass and $m_l=m_s/10$, the interaction
measure was computed up to $\mathcal{O}(\mu^6)$ for two different lattice spacings (the left
panel of figure \ref{eosmu}). The interaction measure did not change significantly from the earlier results with heavier quarks
and showed very little sensitivity to the cut-off effects along the isentropes~\cite{milc1}. However, it was observed that the
light and the strange quark number susceptibilities change significantly from the zero temperature values along the isentropes.
No peaks were found in the quark number susceptibilities along the isentrope $S/n_B=300$, which led to the conclusion that the
critical point may not be observed at RHIC~\cite{milc1}. The EoS and the thermodynamic quantities were computed for physical values
of quark masses by the Budapest-Wuppertal collaboration~\cite{bweosmu}. In this study, the light
quark chemical potentials were set to $\mu_l=\mu_B/3$, and the strange quark chemical potential
to $\mu_s=-2 \mu_l\chi_{11}^{us}/\chi_2^s$, to mimic the experimental conditions
where the net strangeness is zero. The pressure and the energy density were computed up to $\mathcal{O}(\mu^2)$. The ingredients that
went into the computations were a) the near continuum values of the interaction measure data from the $N_\tau=10$ lattice and b) the
spline interpolated values of $\chi_2^s,\chi_{11}^{us}$ for the range $125<T<400~$MeV obtained using the continuum extrapolated data
for $\chi_2^s,\chi_{11}^{us}$.
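The strangeness neutrality condition quoted above follows from the leading order expansion of the net strangeness density in the chemical potentials. Keeping only the quadratic susceptibilities for two degenerate light flavours,
\begin{equation*}
\frac{n_S}{T^3}=\chi_2^s\,\frac{\mu_s}{T}+2\,\chi_{11}^{us}\,\frac{\mu_l}{T}+\ldots~,
\end{equation*}
so that demanding $n_S=0$ gives $\mu_s=-2\mu_l\chi_{11}^{us}/\chi_2^s$, with the factor of two counting the two light flavours.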
It was observed, as evident from the right panel of figure \ref{eosmu}, that the finite density effects along the RHIC isentropes are
negligible, consistent with the earlier work. However, for the isentropes given by $S/n_B=30$, the finite density effects become more
important. The effect of truncation at $\mathcal{O}(\mu^2)$ was also estimated on a reasonably large $N_\tau=8$ lattice.
It was observed that
\begin{equation}
\frac{p \textmd{ up to }\mathcal{O}((\mu_B/T)^4)}{p \textmd{ up to }\mathcal{O}((\mu_B/T)^2)} \le
\begin{cases}
1.1, \hspace*{0.31cm} \textmd{ for $\mu_B/T\le2$}, \\
1.35, \hspace*{0.14cm} \textmd{ for $\mu_B/T\le3$}.
\end{cases}
\end{equation}
This implies that the fourth and higher order terms need to be determined for even modest values of $\mu_B$ in the Taylor series
method. An independent study of the truncation effects of the Taylor series was performed in Ref. \cite{dfp4}. The derivatives
of the pressure were computed for two flavour QCD with staggered quarks at imaginary chemical potential. These derivatives are related
to the successive terms of the Taylor coefficients of the pressure evaluated at $\mu=0$. By fitting the imaginary $\mu$ data with a polynomial ansatz, these Taylor coefficients were obtained and compared with the exact values. It was observed that for
$T_c\leq T\leq 1.04 T_c$, at least the 8th order Taylor coefficient is necessary for a good fit. This highlights the necessity
of evaluating higher order susceptibilities, beyond the currently measured eighth order, in studies of the EoS or the critical end-point.
New ideas to extend the Taylor series to higher order susceptibilities are evolving~\cite{gs2,dfp4} and these should be explored
in full QCD simulations.
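As a minimal numerical illustration of such a truncation check, the sketch below evaluates a Taylor-expanded pressure truncated at $\mathcal{O}(\mu_B^2)$ and at $\mathcal{O}(\mu_B^4)$ and prints their ratio; the coefficients are placeholder values chosen only so that the ratios come out of the size quoted above, not lattice results.
\begin{verbatim}
# Truncation check for a Taylor-expanded pressure,
# P/T^4 = p0 + c2 (mu_B/T)^2 + c4 (mu_B/T)^4,
# with placeholder coefficients (NOT lattice data).
p0, c2, c4 = 1.0, 0.12, 0.008

def pressure(mu_over_t, order):
    p = p0 + c2*mu_over_t**2
    if order >= 4:
        p += c4*mu_over_t**4
    return p

for mu_over_t in (1.0, 2.0, 3.0):
    ratio = pressure(mu_over_t, 4)/pressure(mu_over_t, 2)
    print(f"mu_B/T = {mu_over_t}: p(O(mu^4))/p(O(mu^2)) = {ratio:.3f}")
\end{verbatim}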
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.3\textwidth]{Iisentrmilc.eps}
\includegraphics[width=0.3\textwidth]{eosSNB.eps}
\caption{The EoS for different isentropes using asqtad quarks are shown in the left panel, from~\cite{milc1}.
In the right panel, the data for the energy density and pressure is compared for different isentropes using stout smeared staggered
quarks, from~\cite{bweosmu}.}
\label{eosmu}
\end{center}
\end{figure}
\section{Summary}
As emphasized in the introduction, I have tried to compile some of the important instances which show that
the lattice results have already entered the precision regime, with different fermion discretizations
giving consistent continuum results for the pseudo-critical temperature and the fluctuations of different
quantum numbers. The continuum result for the EoS will be available in the very near future, with consistency
already observed among different discretizations. The lattice community has opened the door for a very active collaboration
between theorists and experimentalists. With the EoS as an input, one can study the phenomenology of the
hot and dense matter created at the heavy ion colliders. On the other hand, there is a proposal for a non-perturbative
determination of the freezeout curve using lattice techniques, once the experimental data on the cumulants of the
charged hadrons are available.
A good understanding of the QCD phase diagram at zero baryon density has been achieved from the lattice studies.
While the early universe transition from the QGP to the hadron phase is now known to be an analytic crossover
and not a real phase transition, it is expected that the chiral dynamics will have observable effects in the
crossover region. One of the remnant effects of the chiral symmetry would be the presence of a critical end-point.
The search for the still elusive critical end-point is one of the focus areas of lattice studies, and the
important developments made so far in this area have been reviewed here.
While QCD at small baryon density is reasonably well understood with lattice techniques, the physics of
baryon rich systems cannot be formulated satisfactorily on the lattice due to the infamous sign problem.
A lot of conceptual work in understanding the severity and consequences of the sign problem, as well as algorithmic
developments aimed at circumventing it, is ongoing; this remains one of the most challenging problems in the field of lattice
thermodynamics.
\section{Acknowledgements}
I would like to thank all the members of the Theoretical physics group at Bielefeld University, and in particular,
Frithjof Karsch, Edwin Laermann, Olaf Kaczmarek and Christian Schmidt for a lot of discussions that have enriched
my knowledge about QCD thermodynamics and lattice QCD. I express my
gratitude to Edwin Laermann for a careful reading of the manuscript and his helpful suggestions, and to Toru Kojo and
Amaresh Jaiswal for their constructive criticism, which has led to a considerable improvement of this writeup. I also acknowledge
Rajiv Gavai and Rajamani Narayanan for a very enjoyable collaboration, through which I learnt many aspects of the subject.
\section{Introduction}
When one is attempting to associate newly discovered meteoroid streams to their parent bodies, there are four critical steps that need to be carried out. The first is obviously the stream discovery through the search of databases and past records, which is ideally performed on meteor data comprised of Keplerian orbital elements. The second phase involves the verification of the meteoroid stream using completely independent meteor databases and stream searches as published online and/or reported in the literature. This is to help validate the existence of the stream. The third step explores the identification of candidate parent bodies, such as comets and asteroids, which show similar orbits to the space-time aggregated meteoroid Keplerian elements of the found stream. However, close similarity of the orbits between a meteoroid stream and a potential parent body is not necessarily conclusive proof of association or linkage, since the two object types (parent body and meteoroid) can undergo significantly different orbital evolution as shown by \citet{vaubaillon2006mechanisms}. Thus the most critical fourth step in determining the actual association is to perform dynamic modeling and orbital evolution on a sample of particles ejected from a candidate parent body. Given a comet's or asteroid's best estimated orbit in the past, and following the ejected stream particles through many hundreds to thousands of years, one looks for eventual encounters with the Earth at the time of meteor observation, and whether those encounters have a geometric similarity to the observed meteoroids of the stream under investigation. The work by \citet{mcnaught1999leonid} demonstrates this point. However, this current paper follows the approach of \citet{vaubaillon2005new} in focusing on the results of the dynamical modeling phase and is a culmination of all the steps just outlined and performed on new streams discovered from recent Croatian Meteor Network stream searches. The application of dynamical stream modeling indicates, with a high level of confidence, that seven new streams can be associated to either comets or asteroids, the latter of which are conjectured to be dormant comets.
\section{Processing approach}
The seven streams and their hypothetical parent body associations were initially discovered using a meteor database search technique as described in \citet{segon2014parent}. In summary, the method compared every meteor orbit to every other meteor orbit using the combined Croatian Meteor Network \citep{segon2012croatian, korlevic2013croatian}\footnote{\label{cmn_orbits}\url{http://cmn.rgn.hr/downloads/downloads.html#orbitcat}.} and SonotaCo \citep{sonotaco2009meteor}\footnote{\label{sonotaco_orbits}\url{http://sonotaco.jp/doc/SNM/index.html}.} video meteor orbit databases, looking for clusters and groupings in the five-parameter, Keplerian orbital element space. This was based on the requirement that three D-criteria (\citet{southworth1963statistics}, \citet{drummond1981test}, \citet{jopek2008meteoroid}) were all satisfied within a specified threshold. These groups had their mean orbital elements computed and sorted by number of meteor members. Mean orbital elements were computed by a simple averaging procedure. Working down from the largest-sized group, meteors with orbits similar to the group under evaluation were assigned to the group and eliminated from further aggregation. This captured the known streams quickly, removing them from the meteor pool, and eventually revealed the newly discovered streams. According to International Astronomical Union (IAU) shower nomenclature rules \citep{jenniskens2006iau}, all results for the stream discoveries were first published; the search results can be found in three papers posted to WGN, The Journal of the International Meteor Organization \citep{andreic2014, gural2014results, segon2014results}.
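To make the similarity criterion concrete, the following minimal Python sketch implements the \citet{southworth1963statistics} $D_{SH}$ function in its standard form (an illustrative reimplementation, not the actual search code, and omitting the quadrant correction for node differences larger than 180°). Applied to the mean 49 Andromedids orbit and comet 2001 W2 Batters from Table \ref{tab:table1} below, it returns a value close to the quoted $D_{SH}$ = 0.14.
\begin{verbatim}
import numpy as np

def d_sh(q1, e1, i1, node1, peri1, q2, e2, i2, node2, peri2):
    """Southworth & Hawkins (1963) D-criterion; angles in degrees, q in AU."""
    i1, i2 = np.radians(i1), np.radians(i2)
    dnode = np.radians(node2 - node1)
    # squared chord of the angle I21 between the two orbital planes
    s2I = (2*np.sin((i2 - i1)/2))**2 \
        + np.sin(i1)*np.sin(i2)*(2*np.sin(dnode/2))**2
    I21 = 2*np.arcsin(np.sqrt(s2I)/2)
    # difference of the longitudes of perihelion, measured from the
    # common node of the two orbital planes
    arg = np.cos((i1 + i2)/2)*np.sin(dnode/2)/np.cos(I21/2)
    pi21 = np.radians(peri2 - peri1) + 2*np.arcsin(np.clip(arg, -1.0, 1.0))
    d2 = (e2 - e1)**2 + (q2 - q1)**2 + s2I \
        + ((e1 + e2)/2)**2 * (2*np.sin(pi21/2))**2
    return np.sqrt(d2)

# Mean 49 Andromedids orbit vs. comet 2001 W2 Batters (values from Table 1)
print(d_sh(0.918, 0.925, 118.2, 114.0, 143.1,
           1.051, 0.941, 115.9, 113.4, 142.1))   # ~0.14
\end{verbatim}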
Next, the literature was scoured for similar stream searches in other independent data sets, such as the CAMS \citep{rudawska2014new, jenniskens2016cams} and the EDMOND \citep{rudawska2014independent} video databases, to determine the validity of the new streams found. The verified new streams were then compared against known cometary and asteroidal orbits, from which a list of candidate parent bodies was compiled based once again on meeting multiple D-criteria for orbital similarity. Each section below describes in greater detail the unique processes and evidence for each stream's candidate association to a parent body. Besides the seven reported shower cases and their hypothetical parent bodies, the possibility of producing a meteor shower has also been investigated for four possible streams with orbital parameters similar to those of the asteroids 2002 KK3, 2008 UZ94, 2009 CR2, and 2011 YX62, but the results were inconclusive or negative. The remaining possible parent bodies from the search were not investigated, either because their orbital elements are not known precisely enough or because they are stated to have parabolic orbits.
The dynamical analysis for each object was performed as follows. First, the nominal orbit of the body was retrieved from the JPL HORIZONS ephemeris\footnote{\label{horizonsJPL}\url{http://horizons.jpl.nasa.gov}} for the current time period as well as for each perihelion passage for the past few centuries (typically two to five hundred years). Assuming the object presented cometary-like activity in the past, the meteoroid stream ejection and evolution was simulated and propagated following \citet{vaubaillon2005new}. In detail, the method considers the ejection of meteoroids when the comet is within 3 AU from the Sun. The ejection velocity is computed following \citet{crifo1997dependence}. The ejection velocities typically range from 0 to \textasciitilde100 m/s. Then the evolution of the meteoroids in the solar system is propagated using numerical simulations. The gravitation of all the planets as well as non-gravitational forces (radiation pressure, solar wind, and the Poynting-Robertson effect) are taken into account. More details can be found in \citet{vaubaillon2005new}. When the parent body possessed a long orbital period, the stream was propagated starting from a more distant epoch, a few thousand years in the past. The intersection of the stream and the Earth was accumulated over 50 to 100 years, following the method by \citet{jenniskens2008minor}. Such a method provides a general view of the location of the meteoroid stream and gives statistically meaningful results. For each meteoroid that is considered as intersecting the Earth, the radiant was computed following the \citet{neslusan1998computer} method (the software was kindly provided by those authors). Finally, the size distribution of particles intercepting the Earth was not considered in this paper, nor was the size of modeled particles compared to the size of observed particles. The size distribution comparison will be the topic of a future paper.
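A quick consistency check on any candidate stream is whether one of its orbital nodes lies near 1 AU, since only then can meteors be observed at the Earth. The short sketch below (an illustration, not part of the modeling pipeline described above) computes the heliocentric node distances from the standard conic equation, using the mean 49 Andromedids orbit of Table \ref{tab:table1} as an example.
\begin{verbatim}
import numpy as np

def node_distances(q, e, peri_deg):
    """Heliocentric distances (AU) of the ascending and descending
    nodes of a Keplerian orbit: r = p/(1 +/- e cos(peri))."""
    p = q*(1 + e)                       # semi-latus rectum
    cw = np.cos(np.radians(peri_deg))
    return p/(1 + e*cw), p/(1 - e*cw)

# Mean 49 Andromedids orbit (first row of Table 1)
r_asc, r_desc = node_distances(0.918, 0.925, 143.1)
print(f"ascending node {r_asc:.2f} AU, descending node {r_desc:.2f} AU")
# -> the descending node sits near 1 AU, as needed for an Earth-crossing stream
\end{verbatim}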
\section{IAU meteor shower \#549 FAN - 49 Andromedids and Comet 2001 W2 Batters
}
The first case to be presented here is that of meteor shower IAU \#549 49 Andromedids. Following the IAU rules, this shower was first reported as part of a paper in WGN, the Journal of the International Meteor Organization, by \citet{andreic2014}. Independent meteor shower database searches resulted in confirmation of the existence of this shower, namely \citet{rudawska2015independent} and \citet{jenniskens2016cams}. The radiant position from the Croatian Meteor Network (CMN) search into the SonotaCo and CMN orbit databases was found to be R.A. = 20.9°, Dec. = +46.7°, with a mean geocentric velocity V$_{g}$ = 60.1 km/s near the center of the activity period (solar longitude $\lambda_{0}$ = 114°, 35 orbits). \citet{rudawska2015independent} found the same radiant to be at R.A. = 19.0°, Dec. = +45.3° and V$_{g}$ = 59.8 km/s ($\lambda_{0}$ = 112.5°, 226 orbits), while \citet{jenniskens2016cams} give R.A. = 20.5°, Dec. = +46.6°, and V$_{g}$ = 60.2 km/s ($\lambda_{0}$ = 112°, 76 orbits). This shower was accepted as an established shower during the 2015 IAU Assembly\footnote{\label{IAU2015}\url{https://astronomy2015.org}.} and is now listed in the IAU meteor database.
At the time of the initial finding, there were 35 meteors associated with this shower, resulting in orbital parameters similar to the published values for a known comet, namely 2001 W2 Batters. This Halley type comet, with an orbital period of 75.9 years, has been well observed, and its orbital parameters have been determined with higher precision than those of many other comets of this type. The mean meteoroid orbital parameters, as found by the above mentioned procedure, are compared with the orbit of 2001 W2 Batters in Table \ref{tab:table1}. Despite the fact that the orbital parameters' distance according to the Southworth-Hawkins D-criterion, D$_{SH}$ = 0.14, seems a bit high to claim an association, the authors pointed out the necessity of using dynamic stream modeling to confirm or deny the association hypothesis because of the nearly identical ascending node values. Moreover, changes in 2001 W2 Batters' orbital parameters as far back as 3000 BC, as extracted from HORIZONS, have shown that the comet approached closer to Earth's orbit in 3000 BC than it has during the last few hundred years. Thus stream particles ejected from the comet farther in the past could have the possibility of producing a meteoroid stream observed at the Earth in the current epoch.
\begin{table*}[t]
\caption{Orbital parameters for the 49 Andromedids and Comet 2001 W2 Batters with corresponding D$_{SH}$ values. If the value of 112° for the ascending node (from \citet{jenniskens2016cams}) is used instead of the mean value (118°), then the resulting D$_{SH}$ is 0.16. Orbital elements (mean values for shower data): q = perihelion distance, e = eccentricity, i = inclination, Node = longitude of the ascending node, $\omega$ = argument of perihelion, D$_{SH}$ = Southworth and Hawkins D-criterion with respect to 2001 W2 Batters.}
\label{tab:table1}
\centering
\begin{tabular}{l c c c c c c}
\hline\hline
49 Andromedids & q & e & i & Node & $\omega$ & D$_{SH}$ \\% table heading
References & (AU) & & (\degr) & (\degr) & (\degr) & \\
\hline
1 & 0.918 & 0.925 & 118.2 & 114.0 & 143.1 & 0.14\\
2 & 0.907 & 0.878 & 119.2 & 112.5 & 142.2 & 0.17\\
3 & 0.898 & 0.922 & 117.9 & 118.0 & 139.8 & 0.19\\
2001 W2 Batters & 1.051 & 0.941 & 115.9 & 113.4 & 142.1 & 0\\
\hline
\end{tabular}
\tablebib{(1) \citet{andreic2014}; (2) \citet{rudawska2015independent}; (3) \citet{jenniskens2016cams}.
}
\end{table*}
The dynamical modeling for the hypothetical parent body 2001 W2 Batters was performed following \citet{vaubaillon2005new} and \citet{jenniskens2008minor}. In summary, the dynamical evolution of the parent body is considered over a few hundred to a few thousand years. At a specific chosen time in the past, the creation of a meteoroid stream is simulated and its evolution is followed forward in time until the present day. The intersection of the particles with the Earth is recorded and the radiant of each particle is computed and compared to observations. The simulated perihelion passages were initially limited to 500 years back in time. No direct hits to the Earth were found for meteoroids ejected during that period. However, the authors were convinced that such close similarity of orbits might yield more favorable results if the dynamical modeling were repeated for perihelion passages going back to 3000 BC. The new run did provide positive results, with direct hits to the Earth predicted at R.A. = 19.1°, Dec. = +46.9°, and V$_{g}$ = 60.2 km/s, at a solar longitude of $\lambda_{0}$ = 113.2°. A summary of the observed and modeled results is given in Table \ref{tab:table2}.
\begin{table}[h]
\caption{Observed and modeled radiant positions for the 49 Andromedids and comet Batters' meteoroids ejected 3000 years ago.}
\label{tab:table2}
\centering
\begin{tabular}{l c c c c}
\hline\hline
49 Andromedids & R.A. & Dec. & V$_{g}$ & $\lambda_{0}$\\
References & (\degr) & (\degr) & (km/s) & (\degr)\\
\hline
1 & 20.9 & 46.7 & 60.1 & 114.0\\
2 & 19.0 & 45.3 & 59.8 & 112.5\\
3 & 20.5 & 46.6 & 60.2 & 112.0\\
2001 W2 Batters \\meteoroids, this work & 19.1 & 46.9 & 60.2 & 113.2\\
\hline
\end{tabular}
\tablebib{(1) \citet{andreic2014}; (2) \citet{rudawska2015independent}; (3) \citet{jenniskens2016cams}.
}
\end{table}
The maximum difference between the average observed radiant positions and modeled mean positions is less than 2° in both right ascension and declination, while there are also single meteors very close to the predicted positions according to the model. Since the observed radiant position fits very well with the predictions, we may conclude that there is a strong possibility that comet 2001 W2 Batters is indeed the parent body of the 49 Andromedids shower. The high radiant dispersion seen in the observations can be accounted for by 1) less precise observations in some of the reported results, and 2) the 3000-year-old nature of the stream, which produces a more dispersed trail. The next closest possible association was with comet 1952 H1 Mrkos, but with a D$_{SH}$ of 0.28 it was considered too distant to be connected with the 49 Andromedids stream.
Figures \ref{fig:figure1} and \ref{fig:figure2} show the location of the stream with respect to the Earth's path, as well as the theoretical radiant. These results were obtained by concatenating the locations of the particles intersecting the Earth over 50 years in order to clearly show the location of the stream (otherwise there are too few particles to cross the Earth each year). As a consequence, it is expected that the level of activity of this shower would not change much from year to year.
\begin{figure}[h]
\resizebox{\hsize}{!}{\includegraphics{media/image1.jpeg}}
\caption{Location of the nodes of the particles released by 2001 W2 Batters over several centuries, concatenated over the years 2000 to 2050. The Earth crosses the stream.}
\label{fig:figure1}
\end{figure}
\begin{figure}[h]
\resizebox{\hsize}{!}{\includegraphics{media/image2.png}}
\caption{Theoretical radiant of the particles released by 2001 W2 Batters which were closest to the Earth. The range of solar longitudes for modeled radiants is from 113.0\degr to 113.9\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.}
\label{fig:figure2}
\end{figure}
\section{IAU meteor shower \#533 JXA - July $\xi$ Arietids and comet 1964 N1 Ikeya}
The discovery of the possible meteor shower July $\xi$ Arietids was first published in \citet{segon2014new}. The shower had been found as a grouping of 61 meteoroid orbits, active from July 4 to August 12 and peaking around July 21. Three other searches for meteor showers in different meteoroid orbit databases, done by \citet{rudawska2015independent}, \citet{jenniskens2016cams}, and \citet{kornovs2014confirmation}, found this shower as well, but with slight differences in the period of activity. This shower had been accepted as an established shower during the 2015 IAU Assembly held in Hawaii and is now referred to as shower \#533.
Among the possible parent bodies known at the time of this shower's discovery, comet C/1964 N1 Ikeya was found to have orbital parameters similar to those of the July $\xi$ Arietids. Comet C/1964 N1 Ikeya is a long-period comet with an orbital period of 391 years and, contrary to comet 2001 W2 Batters, its orbit is determined with less precision. A summary of the mean orbital parameters of the shower compared with C/1964 N1 Ikeya is shown in Table \ref{tab:table3}, from which it can be seen that the distance estimated from D$_{SH}$ suggests a possible connection between the shower and the comet.
\begin{table*}[t]
\caption{Orbital parameters for the July $\xi$ Arietids and Comet 1964 N1 Ikeya with corresponding D$_{SH}$ values. Orbital elements (mean values for shower data): q = perihelion distance, e = eccentricity, i = inclination, Node = longitude of the ascending node, $\omega$ = argument of perihelion, D$_{SH}$ = Southworth and Hawkins D-criterion with respect to 1964 N1 Ikeya.}
\label{tab:table3}
\centering
\begin{tabular}{l c c c c c c}
\hline\hline
July $\xi$ Arietids & q & e & i & Node & $\omega$ & D$_{SH}$\\
References & (AU) & & (\degr) & (\degr) & (\degr) & \\
\hline
1 & 0.883 & 0.965 & 171.6 & 299.0 & 318.0 & 0.10\\
2 & 0.863 & 0.939 & 171.8 & 292.6 & 313.8 & 0.08\\
3 & 0.836 & 0.919 & 171.5 & 291.1 & 309.8 & 0.09\\
4 & 0.860 & 0.969 & 170.4 & 292.7 & 312.4 & 0.08\\
C/1964 N1 Ikeya & 0.822 & 0.985 & 171.9 & 269.9 & 290.8 & 0\\
\hline
\end{tabular}
\tablebib{(1) \citet{segon2014new}; (2) \citet{kornovs2014confirmation}; (3) \citet{rudawska2015independent}; (4) \citet{jenniskens2016cams}.
}
\end{table*}
Similar to the previous case, the dynamical modeling was performed for perihelion passages starting from 5000 BC onwards. Only two direct hits were found in the complete analysis, but those two hits indicate a high possibility that comet C/1964 N1 Ikeya is indeed the parent body of the July $\xi$ Arietids. The mean radiant positions for those two modeled meteoroids as well as the mean radiant positions found by other searches are presented in Table \ref{tab:table4}. As can be seen from Table \ref{tab:table4}, the difference in radiant position between the model and the observations appears to be very significant.
\begin{table}[h]
\caption{Observed and modeled radiant positions for July $\xi$ Arietids and comet C/1964 N1 Ikeya. Rows in bold letters show radiant positions of the entries above them at 106.7° of solar longitude. The applied radiant drift was provided in the respective papers.}
\label{tab:table4}
\centering
\begin{tabular}{l c c c c}
\hline\hline
July $\xi$ Arietids & R.A. & Dec. & V$_{g}$ & $\lambda_{0}$\\
References & (\degr) & (\degr) & (km/s) & (\degr)\\
\hline
1 & 40.1 & 10.6 & 69.4 & 119.0\\
& \textbf{32.0} & \textbf{7.5} & \textbf{...} & \textbf{106.7}\\
2 & 35.0 & 9.2 & 68.9 & 112.6\\
3 & 33.8 & 8.7 & 68.3 & 111.1\\
4 & 41.5 & 10.7 & 68.9 & 119.0\\
& \textbf{29.6} & \textbf{7.0} & \textbf{...} & \textbf{106.7}\\
1964 N1 Ikeya \\meteoroids, this work & 29.0 & 6.5 & 68.7 & 106.7\\
\hline
\end{tabular}
\tablebib{(1) \citet{segon2014new}; (2) \citet{kornovs2014confirmation}; (3) \citet{rudawska2015independent}; (4) \citet{jenniskens2016cams}.
}
\end{table}
However, the radiant position at the modeled solar longitude fits very well with that predicted by the radiant's daily motion: assuming $\Delta$R.A. = 0.66° and $\Delta$Dec. = 0.25° from \citet{segon2014new}, the radiant position at $\lambda_{0}$ = 106.7° would be located at R.A. = 32.0°, Dec. = 7.5°, about three degrees from the modeled radiant. If we use the results from \citet{jenniskens2016cams} ($\Delta$R.A. = 0.97° and $\Delta$Dec. = 0.30°), the resulting radiant position fits even better: R.A. = 29.6°, Dec. = 7.0°, about one degree from the modeled radiant. The fact that the model does not fit the observed activity may be explained by various factors, from the lack of precise data on the comet's position in the past, derived from the relatively small orbital arc covered by observations, to the possibility that this shower has some other parent body (possibly associated with C/1964 N1 Ikeya) as well. The next closest possible association was with comet 1987 B1 Nishikawa-Takamizawa-Tago, where the D$_{SH}$ was 0.21, but due to the high nodal distance between the orbits we consider it not to be connected to the July $\xi$ Arietids.
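The extrapolation used above is a simple linear drift of the radiant with solar longitude; a minimal sketch of the arithmetic (with the drift rates taken from the cited papers) reads:
\begin{verbatim}
def drift_radiant(ra0, dec0, sol0, dra, ddec, sol):
    """Linearly extrapolate a mean radiant from solar longitude sol0
    to sol, given drift rates in degrees per degree of solar longitude."""
    dsol = sol - sol0
    return ra0 + dra*dsol, dec0 + ddec*dsol

# Mean July xi Arietids radiant of Segon et al. (2014), moved from
# lambda_0 = 119.0 deg to the modeled 106.7 deg:
print(drift_radiant(40.1, 10.6, 119.0, 0.66, 0.25, 106.7))  # ~(32.0, 7.5)
\end{verbatim}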
The simulation of the meteoroid stream was performed for hypothetical comet returns back to 5000 years before the present. According to the known orbit of the comet, it experienced close encounters with Jupiter and Saturn in 1676 and 1673 AD respectively, making the orbital evolution prior to these dates much more uncertain. Nevertheless, the simulation of the stream was performed in order to get a big-picture view of the stream in the present day solar system, as visualized in Figures \ref{fig:figure3} and \ref{fig:figure4}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image3.jpg}}
\caption{Location of the particles ejected by comet C/1964 N1 Ikeya over several centuries, concatenated over 50 years in the vicinity of the Earth.}
\label{fig:figure3}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image4.png}}
\caption{Theoretical radiant of the particles released by C/1964 N1 Ikeya which were closest to the Earth. The match with the July $\xi$ Arietids is not convincing in this case. The range of solar longitudes for modeled radiants is from 99.0\degr to 104.8\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.}
\label{fig:figure4}
\end{figure}
\section{IAU meteor shower \#539 ACP - $\alpha$ Cepheids and comet 255P Levy
}
The $\alpha$ Cepheids shower had been reported by \citet{segon2014new}, as a dispersed grouping of 41 meteors active from mid-December to mid-January at a mean radiant position of R.A. = 318°, Dec. = 64° at $\lambda_{0}$ = 281° (January 2). The authors investigated the possibility that this new shower could be connected to the predicted enhanced meteor activity of IAU shower \#446 DPC December $\phi$ Cassiopeiids. However, the authors pointed out that \#446 DPC and \#539 ACP cannot be the same meteor shower \citep{segon2014new}. Despite the fact that a predicted meteor outburst was not detected \citep{roggemans2014letter}, there is a strong possibility that the activity from comet 255P/Levy produces a meteor shower which can be observed from the Earth as the $\alpha$ Cepheids shower. Meteor searches conducted by \citet{kornovs2014confirmation} and \citet{jenniskens2016cams} failed to detect this shower, but \citet{rudawska2015independent} found 11 meteors with a mean radiant position at R.A. = 333.5°, Dec. = +66°, V$_{g}$ = 13.4 km/s at $\lambda_{0}$ = 277.7°.
The mean geocentric velocity for the $\alpha$ Cepheids meteors has been found to be small, only 15.9 km/s, but it ranges from 12.4 to 19.7 kilometres per second. Such a high dispersion in velocities may be explained by the fact that the D-criterion threshold for the automatic search had been set to D$_{SH}$ = 0.15, which allowed a wider range of orbits to be accepted as meteor shower members. According to the dynamical modeling results, the geocentric velocity for meteoroids ejected from 255P/Levy should be about 13 km/s, and the observations show that some of the $\alpha$ Cepheids meteors indeed have such velocities at more or less the predicted radiant positions, as can be seen from Figure \ref{fig:figure5}. This leads us to the conclusion that this meteor shower has to be analyzed in greater detail, but that at least some of the observations represent meteoroids coming from comet 255P/Levy.
\begin{figure}[b]
\resizebox{\hsize}{!}{\includegraphics{media/image5.png}}
\caption{Radiant positions of observed $\alpha$ Cepheids and predicted meteors from 255P/Levy. The range of solar longitudes for modeled radiants is from 250\degr to 280\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.}
\label{fig:figure5}
\end{figure}
The simulation of the meteoroid stream ejected by comet 255P/Levy includes trails ejected from 1801 through 2017, as visualized in Figures \ref{fig:figure6} and \ref{fig:figure7}. Several past outbursts were forecast by the dynamical modeling but none was observed, namely during the apparitions in 2006 and 2007 (see Table \ref{tab:table5}). As a consequence, the conclusion is that the activity of the $\alpha$ Cepheids is most likely due to the global background of the stream.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image6.jpg}}
\caption{Location of the particles ejected by comet 255P/Levy in the vicinity of the Earth in 2006: an outburst should have been detected.}
\label{fig:figure6}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image7.jpg}}
\caption{Location of all the particles ejected by 255P over 50 years in order to show the location of the whole stream in the solar system. This graph does not imply several outbursts but rather provides a global indication of the stream.}
\label{fig:figure7}
\end{figure}
\begin{table}
\caption{Expected outbursts caused by 255P/Levy. No unusual outburst was reported in 2006 and 2007. Columns: Year = the year of Earth's collision with the trail, Trail = year of particle ejection from the given trail, $\lambda_{0}$ = solar longitude in degrees, yyyy-mm-ddThh:mm:ss = date and time of the trail's closest approach, ZHR = zenithal hourly rate.}
\label{tab:table5}
\centering
\begin{tabular}{c c c c c}
\hline\hline
Year & Trail & $\lambda_{0}$ & yyyy-mm-ddThh:mm:ss & ZHR\\
& & (\degr) & &\\
\hline
2001 & 1963 & 279.132 & 2001-12-30T18:37:00 & 1\\
2001 & 1975 & 279.765 & 2001-12-31T12:01:00 & 3\\
2001 & 1980 & 279.772 & 2001-12-31T15:00:00 & 2\\
2001 & 1985 & 279.828 & 2001-12-31T11:24:00 & 11\\
2001 & 1991 & 279.806 & 2001-12-31T10:44:00 & 13\\
2002 & 1963 & 278.914 & 2002-10-20T14:56:00 & 1\\
2002 & 1980 & 279.805 & 2002-12-31T10:23:00 & 2\\
2002 & 1985 & 279.808 & 2002-12-31T10:40:00 & 15\\
2002 & 1991 & 279.789 & 2002-12-31T10:24:00 & 6\\
2006 & 1963 & 279.285 & 2006-12-31T08:01:00 & 1\\
2007 & 1963 & 279.321 & 2007-12-31T07:04:00 & 1\\
2012 & 1980 & 279.803 & 2012-12-31T06:25:00 & 6\\
2013 & 1980 & 279.882 & 2013-12-31T08:16:00 & 2\\
2014 & 1969 & 264.766 & 2014-12-17T00:07:00 & 1\\
2017 & 1930 & 342.277 & 2017-09-21T18:39:00 & 1\\
2017 & 1941 & 279.510 & 2017-12-30T03:41:00 & 1\\
2018 & 1969 & 278.254 & 2018-12-29T07:29:00 & 1\\
2033 & 1975 & 275.526 & 2033-12-27T10:12:00 & 1\\
2033 & 1980 & 275.488 & 2033-12-27T10:06:00 & 1\\
2033 & 1985 & 275.452 & 2033-12-27T09:55:00 & 1\\
2033 & 1991 & 275.406 & 2033-12-27T09:54:00 & 1\\
2033 & 1996 & 275.346 & 2033-12-27T08:58:00 & 1\\
2034 & 1975 & 262.477 & 2034-12-13T22:22:00 & 1\\
2034 & 1980 & 261.456 & 2034-06-06T03:40:00 & 1\\
2034 & 1985 & 261.092 & 2034-04-05T17:02:00 & 1\\
2034 & 1991 & 260.269 & 2034-03-09T11:52:00 & 1\\
2035 & 1914 & 276.553 & 2035-01-09T07:59:00 & 1\\
2035 & 1952 & 271.463 & 2035-12-20T03:11:00 & 1\\
2039 & 1980 & 272.974 & 2039-12-25T01:51:00 & 1\\
2039 & 1991 & 272.131 & 2039-12-25T01:05:00 & 1\\
\hline
\end{tabular}
\end{table}
There are several other parent bodies possibly connected to the $\alpha$ Cepheids stream: 2007 YU56 (D$_{SH}$ = 0.20), 2005 YT8 (D$_{SH}$ = 0.19), 1999 AF4 (D$_{SH}$ = 0.19), 2011 AL52 (D$_{SH}$ = 0.19), 2013 XN24 (D$_{SH}$ = 0.12), 2008 BC (D$_{SH}$ = 0.17), and 2002 BM (D$_{SH}$ = 0.16). The analysis for those bodies will be presented in future work.
\section{IAU meteor shower \#541 SSD - 66 Draconids and asteroid 2001 XQ
}
The meteor shower 66 Draconids had been reported by \citet{segon2014new}, as a grouping of 43 meteors having a mean radiant at R.A. = 302°, Dec. = +62°, V$_{g}$ = 18.2 km/s. This shower has been found to be active from solar longitude 242° to 270° (November 23 to December 21), with a peak activity period around 255° (December 7). Searches by \citet{jenniskens2016cams} and \citet{kornovs2014confirmation} failed to detect this shower, but \citet{rudawska2015independent} again found it, consisting of 39 meteors from the EDMOND meteor orbit database, at R.A. = 296°, Dec. = 64°, V$_{g}$ = 19.3 km/s for solar longitude $\lambda_{0}$ = 247°.
A search for a possible parent body of this shower resulted in asteroid 2001 XQ, which, having D$_{SH}$ = 0.10, represented the most probable candidate. A summary of the mean orbital parameters from the above mentioned searches compared with 2001 XQ is shown in Table \ref{tab:table6}.
\begin{table*}[t]
\caption{Orbital parameters for the 66 Draconids and 2001 XQ with respective D$_{SH}$ values. Orbital elements (mean values for shower data): q = perihelion distance, e = eccentricity, i = inclination, Node = longitude of the ascending node, $\omega$ = argument of perihelion, D$_{SH}$ = Southworth and Hawkins D-criterion with respect to 2001 XQ.}
\label{tab:table6}
\centering
\begin{tabular}{l c c c c c c}
\hline\hline
66 Draconids & q & e & i & Node & $\omega$ & D$_{SH}$\\
References & (AU) & & (\degr) & (\degr) & (\degr) & \\
\hline
1 & 0.981 & 0.657 & 27.2 & 255.2 & 184.4 & 0.10\\
2 & 0.980 & 0.667 & 29.0 & 247.2 & 185.2 & 0.13\\
2001 XQ & 1.035 & 0.716 & 29.0 & 251.4 & 190.1 & 0\\
\hline
\end{tabular}
\tablebib{(1) \citet{segon2014new}; (2) \citet{rudawska2015independent}.
}
\end{table*}
Asteroid 2001 XQ has a Tisserand parameter of T$_{j}$ = 2.45, a value common for Jupiter family comets, which makes us suspect it may not be an asteroid per se, but rather a dormant comet. To the authors' knowledge, no cometary activity has been observed for this body. Nor was there any significant difference in the full-width half-max spread between stars and the asteroid on the imagery provided courtesy of Leonard Kornoš (personal communication) from Modra Observatory. They had observed this asteroid (at that time designated 2008 VV4) on its second return to perihelion, during which it reached \nth{18} magnitude.
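For reference, the quoted Tisserand parameter follows from the standard definition $T_j = a_J/a + 2\cos i\sqrt{(a/a_J)(1-e^2)}$; the minimal sketch below reproduces it from the asteroid's elements in Table \ref{tab:table6}, assuming $a_J$ = 5.204 AU for Jupiter.
\begin{verbatim}
import numpy as np

def tisserand(a, e, i_deg, a_jup=5.204):
    """Tisserand parameter with respect to Jupiter; a in AU, i in degrees."""
    return a_jup/a + 2*np.cos(np.radians(i_deg))*np.sqrt(a/a_jup*(1 - e**2))

# Orbit of 2001 XQ (Table 6): q = 1.035 AU, e = 0.716, i = 29.0 deg
q, e, i = 1.035, 0.716, 29.0
a = q/(1 - e)                 # semi-major axis from q and e
print(f"T_j = {tisserand(a, e, i):.2f}")  # ~2.45, in the JFC range 2 < T_j < 3
\end{verbatim}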
Numerical modeling of the hypothetical meteor shower whose particles originate from asteroid 2001 XQ was performed for perihelion passages from 800 AD up to 2100 AD. The modeling showed multiple direct hits into the Earth for many years, even outside the period covered by the observations. The summary of observed and modeled radiant positions is given in Table \ref{tab:table7}.
\begin{table}
\caption{Observed 66 Draconid and modeled 2001 XQ meteors' mean radiant positions (prefix C\_ stands for calculated (modeled), while prefix O\_ stands for observed). The number in the parenthesis indicates the number of observed 66 Draconid meteors in the given year. $\theta$ is the angular distance between the modeled and the observed mean radiant positions.}
\label{tab:table7}
\centering
\begin{tabular}{l c c c c c}
\hline\hline
Year & $\lambda_{0}$ & R.A. & Dec. & V$_{g}$ & $\theta$\\
& (\degr) & (\degr) & (\degr) & (km/s) & (\degr)\\
\hline
C\_2007 & 250.3 & 308.2 & 65.3 & 19.3 & ...\\
O\_2007 (5) & 257.5 & 300.1 & 63.2 & 18.2 & 4.1\\
C\_2008 & 248.2 & 326.8 & 56.9 & 16.1 & ...\\
O\_2008 (8) & 254.0 & 300.5 & 62.6 & 18.0 & 14.3\\
C\_2009 & 251.1 & 309.6 & 64.0 & 18.8 & ...\\
O\_2009 (5) & 253.6 & 310.4 & 61.0 & 17.0 & 3.0\\
C\_2010 & 251.2 & 304.0 & 63.1 & 19.1 & ...\\
O\_2010 (17) & 253.7 & 300.4 & 63.4 & 18.9 & 1.6\\
\hline
\end{tabular}
\end{table}
Despite the fact that the difference in the mean radiant positions may seem significant, radiant plots of individual meteors show that some of the meteors predicted to hit the Earth at the observation epoch were observed at positions almost exactly as predicted. It is thus considered that the results of the simulations statistically represent the stream correctly, but individual trails cannot be identified as responsible for any specific outburst, as visualized in Figures \ref{fig:figure8} and \ref{fig:figure9}. The activity of this shower is therefore expected to be quite regular from year to year.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image8.jpg}}
\caption{Location of the nodes of the particles released by 2001 XQ over several centuries, concatenated over 50 years. The Earth crosses the stream.}
\label{fig:figure8}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image9.png}}
\caption{Theoretical radiants of the particles released by 2001 XQ which were closest to the Earth. The range of solar longitudes for modeled radiants is from 231.1\degr to 262.8\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.}
\label{fig:figure9}
\end{figure}
Two other candidate parent bodies were initially considered, 2004 YY23 and 2015 WB13, both of which had a D$_{SH}$ of 0.26. This was deemed too distant for them to be associated with the 66 Draconids stream.
\section{IAU meteor shower \#751 KCE - $\kappa$ Cepheids and asteroid 2009 SG18
}
The meteor shower $\kappa$ Cepheids had been reported by \citet{segon2015four}, as a grouping of 17 meteors with very similar orbits, having an average D$_{SH}$ of only 0.06. The activity period was found to last from September 11 to September 23, covering solar longitudes from 168° to 180°. The radiant position was R.A. = 318°, Dec. = 78° with V$_{g}$ = 33.7 km/s, at a mean solar longitude of 174.4°. Since the new shower discovery was reported only recently, the search by \citet{kornovs2014confirmation} can be considered totally blind, having not found its existence, while the search by \citet{jenniskens2016cams} did not detect it in the CAMS database either. Once again, the search by \citet{rudawska2015independent} found the shower, but with a much higher number of meteors than found in the SonotaCo and CMN orbit databases. In total, 88 meteors were extracted as $\kappa$ Cepheids members in the EDMOND database. A summary of the mean orbital parameters from the above mentioned searches compared with 2009 SG18 is shown in Table \ref{tab:table8}.
\begin{table*}[t]
\caption{Orbital parameters for the $\kappa$ Cepheids and asteroid 2009 SG18 with corresponding D$_{SH}$ values. Orbital elements (mean values for shower data): q = perihelion distance, e = eccentricity, i = inclination, Node = longitude of the ascending node, $\omega$ = argument of perihelion, D$_{SH}$ = Southworth and Hawkins D-criterion with respect to 2009 SG18.}
\label{tab:table8}
\centering
\begin{tabular}{l c c c c c c}
\hline\hline
$\kappa$ Cepheids & q & e & i & Node & $\omega$ & D$_{SH}$\\
References & (AU) & & (\degr) & (\degr) & (\degr) & \\
\hline
1 & 0.983 & 0.664 & 57.7 & 174.4 & 198.4 & 0.10\\
2 & 0.987 & 0.647 & 55.9 & 177.2 & 190.4 & 0.17\\
2009 SG18 & 0.993 & 0.672 & 58.4 & 177.6 & 204.1 & 0\\
\hline
\end{tabular}
\tablebib{(1) \citet{segon2014new}; (2) \citet{rudawska2015independent}.
}
\end{table*}
What can be seen at a glance is that the mean orbital parameters from both searches are very consistent (D$_{SH}$ = 0.06), while the mean shower orbits and the asteroid's orbit differ mainly in the argument of perihelion and the perihelion distance. Asteroid 2009 SG18 has a Tisserand parameter for Jupiter of T$_{j}$ = 2.31, meaning that it could be a dormant comet.
Numerical modeling of the hypothetical meteor shower originating from asteroid 2009 SG18 for perihelion passages from 1804 AD up to 2020 AD yielded multiple direct hits into the Earth over more years than the period covered by the observations, as seen in Figures \ref{fig:figure10} and \ref{fig:figure11}. The remarkable coincidence found between the predicted and observed meteors for the years 2007 and 2010 is summarized in Table \ref{tab:table9}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image10.jpg}}
\caption{Location of the nodes of the particles released by 2009 SG18 over several centuries, concatenated over 50 years. The Earth crosses the stream.}
\label{fig:figure10}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image11.png}}
\caption{Theoretical radiant of the particles released by 2009 SG18 which were closest to the Earth. Several features are visible due to the different trails, but care must be taken when interpreting these data. The range of solar longitudes for modeled radiants is from 177.0\degr to 177.7\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.}
\label{fig:figure11}
\end{figure}
\begin{table}
\caption{Observed $\kappa$ Cepheids and modeled 2009 SG18 meteors' mean radiant positions (prefix C\_ stands for calculated or modeled, while prefix O\_ stands for observed). The number in the parenthesis indicates the number of observed meteors in the given year. $\theta$ is the angular distance between the modeled and the observed mean radiant positions.}
\label{tab:table9}
\centering
\begin{tabular}{l c c c c c}
\hline\hline
Year & $\lambda_{0}$ & R.A. & Dec. & V$_{g}$ & $\theta$\\
& (\degr) & (\degr) & (\degr) & (km/s) & (\degr)\\
\hline
C\_2007 & 177.4 & 327.5 & 77.0 & 34.0 & ...\\
O\_2007 (3) & 177.1 & 328.3 & 77.9 & 35.3 & 0.9\\
C\_2010 & 177.7 & 327.7 & 77.7 & 34.3 & ...\\
O\_2010 (2) & 177.7 & 326.5 & 80.5 & 34.7 & 2.8\\
\hline
\end{tabular}
\end{table}
Based on the initial analysis given in this paper, a prediction of possible enhanced activity on September 21, 2015 was made by \citet{segon2015croatian}. At the moment, there are no video meteor data that confirm the prediction of the enhanced activity, but a paper on visual observations of the $\kappa$ Cepheids by a highly reputable visual observer confirmed some level of increased activity \citep{rendtel2015minor}.
The encounters between the trails ejected by 2009 SG18 and the Earth, found theoretically through the dynamical modeling, are shown in Table \ref{tab:table10}. These predictions should be interpreted with caution, since any historical outbursts still need to be confirmed before such predictions can be trusted.
\begin{table*}[t]
\caption{Prediction of possible outbursts caused by 2009 SG18. Columns: Year = the year of Earth's collision with the trail, Trail = year of particle ejection from the given trail, rE-rD = the distance between the Earth and the center of the trail, $\lambda_{0}$ = solar longitude in degrees, yyyy-mm-ddThh:mm:ss = date and time of the trail's closest approach, ZHR = zenithal hourly rate}
\label{tab:table10}
\centering
\begin{tabular}{c c c c c c}
\hline\hline
Year & Trail & rE-rD & $\lambda_{0}$ & yyyy-mm-ddThh:mm:ss & ZHR\\
& & (AU) & (\degr) & &\\
\hline
2005 & 1967 & 0.00066 & 177.554 & 2005-09-20T12:08:00 & 11\\
2006 & 1804 & 0.00875 & 177.383 & 2006-09-20T11:31:00 & 13\\
2010 & 1952 & -0.00010 & 177.673 & 2010-09-20T21:38:00 & 12\\
2015 & 1925 & -0.00143 & 177.630 & 2015-09-21T03:29:00 & 10\\
2020 & 1862 & -0.00064 & 177.479 & 2020-09-20T06:35:00 & 11\\
2021 & 1962 & 0.00152 & 177.601 & 2021-09-20T15:39:00 & 11\\
2031 & 2004 & -0.00126 & 177.267 & 2031-09-20T21:15:00 & 12\\
2031 & 2009 & -0.00147 & 177.222 & 2031-09-20T19:55:00 & 13\\
2033 & 1946 & 0.00056 & 177.498 & 2033-09-20T14:57:00 & 10\\
2036 & 1978 & -0.00042 & 177.308 & 2036-09-20T04:44:00 & 20\\
2036 & 2015 & -0.00075 & 177.220 & 2036-09-20T02:33:00 & 20\\
2036 & 2025 & 0.00109 & 177.254 & 2036-09-20T03:19:00 & 13\\
2037 & 1857 & -0.00031 & 177.060 & 2037-09-20T04:37:00 & 13\\
2037 & 1946 & 0.00021 & 177.273 & 2037-09-20T09:56:00 & 10\\
2038 & 1841 & -0.00050 & 177.350 & 2038-09-20T18:02:00 & 10\\
2038 & 1925 & 0.00174 & 177.416 & 2038-09-20T19:39:00 & 11\\
2039 & 1815 & -0.00018 & 177.303 & 2039-09-20T23:01:00 & 10\\
\hline
\end{tabular}
\end{table*}
The next closest possible association was 2002 CE26 with D$_{SH}$ of 0.35, which was deemed too distant to be connected to the $\kappa$ Cepheids stream.
\section{IAU meteor shower \#753 NED - November Draconids and asteroid 2009 WN25
}
The November Draconids were previously reported by \citet{segon2015four}, and consist of 12 meteors on very similar orbits, with a maximal distance from the mean orbit of D$_{SH}$ = 0.08 and an average of only D$_{SH}$ = 0.06. The activity period was found to be between November 8 and 20, with peak activity at a solar longitude of 232.8°. The radiant position at peak activity was found to be at R.A. = 194°, Dec. = +69°, with V$_{g}$ = 42.0 km/s. There are no results from other searches, since the shower has been reported only recently. Other meteor showers have been reported at coordinates similar to \#753 NED, namely \#387 OKD October $\kappa$ Draconids and \#392 NID November i Draconids \citep{brown2010meteoroid}. The D$_{SH}$ for \#387 OKD is far too large (0.35) for it to be considered the same stream. \#392 NID may be closely related to \#753, since the D$_{SH}$ of 0.14 derived from radar observations shows significant similarity; however, the mean orbits derived from optical observations by \citet{jenniskens2016cams} differ by D$_{SH}$ = 0.24, which we consider too large for the two to be the same shower.
The possibility that asteroid 2009 WN25 is the parent body of this possible meteor shower has been investigated by numerical modeling of hypothetical meteoroids ejected over the period from 3000 BC up to 1500 AD, visualized in Figures \ref{fig:figure12} and \ref{fig:figure13}. Asteroid 2009 WN25 has a Tisserand parameter with respect to Jupiter of T$_{j}$ = 1.96. Although direct encounters with modeled meteoroids were not found for all years in which meteors were observed, and the number of hits is relatively small compared to other modeled showers, the averaged predicted positions fit the observations very well (see Table \ref{tab:table11}). This shows that the theoretical results are statistically meaningful, and it validates the approach of simulating the stream over a long period of time and concatenating the results to provide an overall view of the shower.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image12.jpg}}
\caption{Location of the nodes of the particles released by 2009 WN25 over several centuries, concatenated over 100 years. The Earth crosses the stream.}
\label{fig:figure12}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image13.png}}
\caption{Theoretical radiant of the particles released by 2009 WN25 which were closest to the Earth. The range of solar longitudes for modeled radiants is from 230.3\degr to 234.6\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.}
\label{fig:figure13}
\end{figure}
\begin{table}
\caption{Averaged observed and modeled radiant positions for \#753 NED and 2009 WN25. $\theta$ is the angular distance between the modeled and the observed mean radiant positions.}
\label{tab:table11}
\centering
\begin{tabular}{l c c c c c}
\hline\hline
November Draconids & $\lambda_{0}$ & R.A. & Dec. & V$_{g}$ & $\theta$\\
& (\degr) & (\degr) & (\degr) & (km/s) & (\degr)\\
\hline
Predicted & 232.8 & 194.2 & 68.6 & 42.0 & ...\\
Observed & 232.4 & 196.5 & 67.6 & 41.8 & 1.3\\
\hline
\end{tabular}
\end{table}
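The angular distance $\theta$ quoted in Tables \ref{tab:table9}, \ref{tab:table11}, and \ref{tab:table12} is the great-circle separation between the modeled and observed mean radiants. As a quick check (our sketch, using the rounded values of Table \ref{tab:table11}), the quoted 1.3° is reproduced by:
\begin{verbatim}
import math

def ang_dist(ra1, dec1, ra2, dec2):
    # great-circle distance between two radiants; all angles in degrees
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    c = (math.sin(dec1) * math.sin(dec2)
         + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(min(1.0, c)))

# predicted (194.2, +68.6) vs. observed (196.5, +67.6), Table 11
print(round(ang_dist(194.2, 68.6, 196.5, 67.6), 1))  # -> 1.3
\end{verbatim}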
Moreover, the predicted 2014 activity sits at exactly the equatorial location (R.A. = 199°, Dec. = +67°) seen in Canadian Meteor Orbit Radar (CMOR) plots.\footnote{\label{cmor_plots}\url{http://fireballs.ndc.nasa.gov/} - "radar".} The activity has been attributed to the NID, but its position fits the NED more closely. Since orbital data from the CMOR database are not available online, the authors were not able to confirm that the radar is seeing the same meteoroid orbits as the model produces. However, the authors received confirmation from Dr. Peter Brown at the University of Western Ontario (private communication) that this stream has shown activity each year in the CMOR data and likely belongs to the QUA-NID complex. A recently published paper \citep{micheli2016evidence} suggests that asteroid 2009 WN25 may be the parent body of the NID shower as well, so additional analysis with more observations will be needed to reveal the true nature of this shower complex. The next closest possible association was 2012 VF6, with D$_{SH}$ = 0.49, which was deemed too distant to be connected to the November Draconids stream.
\section{IAU meteor shower \#754 POD - $\psi$ Draconids and asteroid 2008 GV
}
The possible new meteor shower $\psi$ Draconids was reported by \citet{segon2015four}, consisting of 31 tight meteoroid orbits with a maximal distance from the mean orbit of D$_{SH}$ = 0.08 and an average of only D$_{SH}$ = 0.06. The $\psi$ Draconids were found to be active from March 19 to April 12, with average activity around a solar longitude of 12° and a radiant at R.A. = 262°, Dec. = +73°, and V$_{g}$ = 19.8 km/s. No confirmation from other shower searches exists at the moment, since the shower has been reported only recently.
If this shower's existence is confirmed, the most probable parent body known at this time would be asteroid 2008 GV. This asteroid was found to have an orbit very similar to the average orbit of the $\psi$ Draconids, with D$_{SH}$ = 0.08. Since the asteroid has a Tisserand parameter of T$_{j}$ = 2.90, it may be a dormant comet as well. Dynamical modeling was done for hypothetical meteoroids ejected at perihelion passages from 3000 BC to 2100 AD, resulting in direct hits on the Earth in almost every year from 2000 onwards.
For the period covered by observations used in the CMN search, direct hits were found for years 2008, 2009, and 2010. The summary of the average radiant positions from the observations and from the predictions are given in Table \ref{tab:table12}. The plots of modeled and observed radiant positions are shown in Figure \ref{fig:figure15}, while locations of nodes of the modeled particles released by 2008 GV are shown in Figure \ref{fig:figure14}.
\begin{table}
\caption{Observed $\psi$ Draconids and modeled 2008 GV meteors' mean radiant positions (prefix C\_ stands for calculated (modeled), while prefix O\_ stands for observed). The number in the parenthesis indicates the number of observed meteors in the given year. $\theta$ is the angular distance between the modeled and the observed mean radiant positions.}
\label{tab:table12}
\centering
\begin{tabular}{l c c c c c}
\hline\hline
Year & $\lambda_{0}$ & R.A. & Dec. & V$_{g}$ & $\theta$\\
& (\degr) & (\degr) & (\degr) & (km/s) & (\degr)\\
\hline
C\_2008 & 15.9 & 264.6 & 75.2 & 19.4 & ...\\
O\_2008 (2) & 14.2 & 268.9 & 73.3 & 20.7 & 2.2\\
C\_2009 & 13.9 & 254.0 & 74.3 & 19.3 & ...\\
O\_2009 (11) & 9.5 & 257.4 & 72.0 & 19.9 & 2.5\\
C\_2010 & 12.8 & 244.7 & 73.4 & 19.1 & ...\\
O\_2010 (6) & 15.1 & 261.1 & 73.0 & 19.8 & 4.7\\
\hline
\end{tabular}
\end{table}
As can be seen from Table \ref{tab:table12}, the mean observed radiants fit the positions predicted by the dynamical modeling very well, and in two cases there were single meteors very close to the predicted positions. On the other hand, the predictions for the year 2015 show that a few meteoroids should have hit the Earth around solar longitude 14.5° at R.A. = 260°, Dec. = +75°, but no significant activity was detected in CMN observations. Small groups of meteors can be seen in CMOR plots at that solar longitude at a position slightly lower in declination, but this should be verified using radar orbital measurements once available. According to Dr. Peter Brown at the University of Western Ontario (private communication), there is no significant activity from this shower in the CMOR orbital data.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image14.jpg}}
\caption{Location of the nodes of the particles released by 2008 GV over several centuries, concatenated over 100 years. The Earth crosses the stream.}
\label{fig:figure14}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image15.png}}
\caption{Theoretical radiant of the particles released by 2008 GV which were closest to the Earth. The range of solar longitudes for modeled radiants is from 355.1\degr to 17.7\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.}
\label{fig:figure15}
\end{figure}
One other potential parent body, 2015 FA118, may be connected to the $\psi$ Draconids stream. That alternative will be examined in future work.
\section{IAU meteor shower \#755 MID - May $\iota$ Draconids and asteroid 2006 GY2
}
The possible new meteor shower May $\iota$ Draconids was reported by \citet{segon2015four}, consisting of 19 tight meteoroid orbits with a maximal distance from the mean orbit of D$_{SH}$ = 0.08 and an average of only D$_{SH}$ = 0.06. The May $\iota$ Draconids were found to be active from May 7 to June 6, with peak activity around a solar longitude of 60° at R.A. = 231°, Dec. = +53°, and V$_{g}$ = 16.7 km/s. No confirmation from other searches exists at the moment, since the shower has been reported in the literature only recently. Greaves (from the meteorobs mailing-list archives\footnote{\label{meteorobs}\url{http://lists.meteorobs.org/pipermail/meteorobs/2015-December/018122.html}.}) stated that this shower should be the same as \#273 PBO $\phi$ Bootids. However, looking at the details of this shower as presented in \citet{jenniskens2006meteor}, we find that the solar longitude stated in the IAU Meteor Data Center does not correspond to the mean ascending node of the three meteors chosen to represent the $\phi$ Bootid shower. If a weighted orbit average over all references is calculated, the resulting D$_{SH}$ from the MID is 0.18, which we consider large enough to treat it as a separate shower (if the MID exists at all). Three \#273 PBO orbits from the IAU MDC do indeed match \#755 MID, suggesting that these two possible showers may somehow be connected.
Asteroid 2006 GY2 was investigated as a probable parent body using dynamical modeling, as in the previous cases. Asteroid 2006 GY2 has a Tisserand parameter with respect to Jupiter of T$_{j}$ = 3.70. Of all the cases discussed in this paper, this one shows the poorest match between the observed and predicted radiant positions. The theoretical stream was modeled with trails ejected from 1800 AD through 2100 AD. According to the dynamical modeling analysis, this parent body should produce meteors in all years covered by the observations, at more or less the same position, R.A. = 248.5°, Dec. = +46.2°, at the same solar longitude of 54.4°, with V$_{g}$ = 19.3 km/s, as visualized in Figures \ref{fig:figure16} and \ref{fig:figure17}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image16.jpg}}
\caption{Location of the nodes of the particles released by 2006 GY2 over several centuries, concatenated over 50 years. The Earth crosses the stream.}
\label{fig:figure16}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image17.png}}
\caption{Theoretical radiant of the particles released by 2006 GY2 which were closest to the Earth. The range of solar longitudes for modeled radiants is from 54.1\degr to 54.5\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.}
\label{fig:figure17}
\end{figure}
However, the six meteors belonging to the possible \#755 MID shower found in the solar longitude range from 52.3° to 53.8° (the next meteor was found at 58.6°) show a mean radiant position of R.A. = 225.8°, Dec. = +46.4°, with a mean V$_{g}$ of 16.4 km/s. Given the angular distance of 15.6\degr between the observed and modeled radiants, the 3 km/s difference in geocentric velocity, and the fact that no individual modeled meteors were observed at or near that position, we cannot conclude that this asteroid is the parent body of the possible meteor shower May $\iota$ Draconids.
Another potential parent body was 2002 KK3, with D$_{SH}$ = 0.18; however, the dynamical modeling for 2002 KK3 showed no crossings with the Earth's orbit. There were also three more distant bodies at D$_{SH}$ = 0.20: 2010 JH3, 2013 JY2, and 2014 WC7. These bodies will be examined in future work.
\section{IAU meteor shower \#531 GAQ - $\gamma$ Aquilids and comet C/1853G1 (Schweizer), and other investigated bodies
}
The possible new meteor shower $\gamma$ Aquilids was reported by \citet{segon2014new}, and was found in other stream search papers as well \citep{kornovs2014confirmation, rudawska2015independent, jenniskens2016cams}. Meteoroids from the suggested parent body, comet C/1853G1 (Schweizer), were modeled for perihelion passages ranging from 3000 BC up to the present and evaluated. Despite the close similarity between the orbits of \#531 GAQ and comet C/1853G1 (Schweizer), no direct hits on the Earth were found.
Besides C/1853G1 (Schweizer), negative results were found in the dynamical analyses performed for asteroid 2011 YX62 (as a possible parent body of \#604 ACZ $\zeta$1 Cancrids) and for asteroids 2002 KK3, 2008 UZ94, and 2009 CR2 (no shower association reported).
\section{Discussion}
The new meteoroid stream discoveries described in this work were reported previously, based on searches with well-defined conditions and constraints that individual meteoroid orbits must satisfy for association. The simulated particles ejected by the hypothetical parent bodies were treated in the same rigorous manner. Although, following the approach of \citet{vaubaillon2005new}, we consider the similarity of the observed and dynamically modeled radiants as sufficient evidence for an association with the hypothetical parent body, there are several points worth discussing.
All meteoroid orbits used in the analysis of \citet{segon2014parent} were generated using the UFOorbit software package \citep{sonotaco2009meteor}. As this software does not estimate the errors of the observations or of the calculated orbital elements, it is not possible to assess the real precision of the individual meteoroid orbits used in the initial search. Furthermore, all UFOorbit-generated orbits are calculated from the meteor's average geocentric velocity, without taking deceleration into consideration. This simplification introduces errors in the orbital elements of very slow, very long, and/or very bright meteors. The real impact of this simplification is discussed in \citet{segon2014draconids}, where the 2011 Draconid outburst is analyzed: two average meteoroid orbits generated from average velocities, one with and one without a linear deceleration model, differed by as much as 0.06 in D$_{SH}$ (D$_{H}$ = 0.057, D$_{D}$ = 0.039). Such a deviation does not necessarily prevent the clustering from being identified, but it does mean that those orbits will certainly differ from orbits generated with deceleration taken into account, as well as from the numerically generated orbits of hypothetical parent bodies. Consequently, beyond the natural radiant dispersion, the radiants of slower meteors can be additionally dispersed by the varying influence of deceleration on the derived true radiant. This issue is relevant not only for UFOorbit but for all software that does not properly model the actual deceleration: the CAMS Coincidence software uses an exponential deceleration model \citep{jenniskens2011cams}, yet not all meteors decelerate exponentially, as shown by \citet{borovivcka2007atmospheric}. The real influence of deceleration on radiant dispersion will be the topic of future work. An important open question is whether the dispersion caused by improperly modeled deceleration of slow meteors (e.g., those produced by near-Earth objects) can cause automated stream searches to fail to associate members of a meteoroid stream with one another.
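To illustrate the size of this effect, consider the following toy example (ours; the entry speed, deceleration, and duration are invented, and this is not the UFOorbit or CMN pipeline). A plain average over the trail underestimates the initial speed of a decelerating meteor, while a linear-deceleration (quadratic path length) fit recovers it:
\begin{verbatim}
import numpy as np

# invented example: v0 = 16 km/s, constant deceleration 2 km/s^2, 1.2 s trail
v0, a_dec = 16.0, 2.0
t = np.linspace(0.0, 1.2, 25)
s = v0 * t - 0.5 * a_dec * t**2            # path length along the trail [km]

v_mean = (s[-1] - s[0]) / (t[-1] - t[0])   # trail-averaged speed, no deceleration
v_init = np.polyfit(t, s, 2)[1]            # linear-deceleration model fit

print(round(v_mean, 2), round(v_init, 2))  # 14.8 vs. 16.0 km/s
\end{verbatim}
A bias of this size in the geocentric velocity propagates directly into the derived semi-major axis and eccentricity, which is the kind of shift behind the D$_{SH}$ $\approx$ 0.06 difference quoted above.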
Besides the lack of error estimates for the meteor observations, parent bodies on relatively unstable orbits are observed over short observation arcs and thus often do not have very precise orbital solutions. Moreover, unknown past activity of a parent body presents a seemingly unsolvable issue: how the parent orbit was perturbed on each perihelion passage close to the Sun. In addition, if the ejection modeling assumes that particles were ejected during a perihelion passage at which the parent body was in fact not active, no meteors will be present when the Earth passes through the falsely predicted filament. Conversely, if the Earth encounters meteoroids from a perihelion passage of particularly high activity, an unpredicted outburst can occur. \citet{vaubaillon2015latest} discuss the unknowns regarding parent bodies and the problems of meteor shower outburst prediction in greater detail.
Another fundamental problem encountered during this analysis was the lack of any rigorous definition of what meteor showers or meteoroid streams actually are, and of any common consensus to refer to. This issue was briefly discussed in \citet{brown2010meteoroid}, and no real advances towards a clear definition have been made since. We can consider a meteor shower to be a group of meteors that appear annually near the same radiant with approximately the same entry velocity. To better embrace the higher-dimensional nature of the orbital parameters and their time evolution, rather than a radiant fixed year after year, this should be extended to mean that there exists a meteoroid stream whose meteoroids are distributed along and across the whole orbit, with constraints dictated by the dynamical evolution away from the mean orbit. Under the first definition, however, some meteor showers produced by Jupiter-family comets will not be well covered, as they are not active annually; the orbits of such meteor showers are not stable in the long term, owing to gravitational and non-gravitational influences on the meteoroid stream.\footnote{\label{vaubaillonIMC2014}\url{http://www.imo.net/imc2014/imc2014-vaubaillon.pdf}.} On the other hand, if we are prepared to call a meteor shower any group of radiants that exhibits similar features but does not appear annually, we can expect thousands of meteor shower candidates in the near future.
It is the opinion of the authors that, with the rising number of observed multi-station video meteors and consequently of estimated meteoroid orbits, the number of newly detected potential meteor showers will increase as well, regardless of the stream search method used. As a consequence of the vague meteor shower definition, several methods of meteor shower identification have been used in recent papers. \citet{vida2014meteor} discussed a rudimentary method of visual identification combined with D-criterion validation of shower candidates. \citet{rudawska2014new} used the Southworth and Hawkins D-criterion as a measure of meteoroid orbit similarity in an automatic single-linkage grouping algorithm, while in the subsequent paper by \citet{rudawska2015independent} the geocentric parameters were evaluated as well. In \citet{jenniskens2016cams} the results of automatic grouping by orbital parameters were disputed and a manual approach was proposed.
Although there are concerns about automated stream identification methods, we believe it would be worthwhile to explore density-based clustering algorithms, such as the DBSCAN or OPTICS algorithms \citep{kriegel2005density}, for the purpose of meteor shower identification. They could help discriminate shower meteors from the background, since they incorporate a notion of noise and can handle varying density in the data. We also strongly encourage attempts to define meteor showers in a more rigorous manner, or to introduce an alternative term that would properly describe such a complex phenomenon. The authors believe that a clear definition would be of great help in determining whether a parent body actually produces meteoroids, at least until meteor observations become precise enough to connect a parent body to a single meteoroid orbit.
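As a concrete starting point, the following sketch (ours; it assumes the scikit-learn implementation of DBSCAN, and the synthetic orbits as well as the \texttt{eps} and \texttt{min\_samples} values are invented for the example) clusters mock meteoroid orbits using a precomputed D$_{SH}$ distance matrix:
\begin{verbatim}
import numpy as np
from sklearn.cluster import DBSCAN

def d_sh(o1, o2):
    # Southworth & Hawkins (1963); orbit = (q [AU], e, i, Node, argperi [deg])
    q1, e1, i1, O1, w1 = o1
    q2, e2, i2, O2, w2 = o2
    i1, i2, O1, O2, w1, w2 = np.radians([i1, i2, O1, O2, w1, w2])
    s2 = (np.sin((i2 - i1) / 2) ** 2
          + np.sin(i1) * np.sin(i2) * np.sin((O2 - O1) / 2) ** 2)
    I21 = 2.0 * np.arcsin(np.sqrt(s2))     # angle between the orbital planes
    pi21 = w2 - w1 + 2.0 * np.arcsin(np.clip(
        np.cos((i1 + i2) / 2) * np.sin((O2 - O1) / 2) / np.cos(I21 / 2), -1, 1))
    return np.sqrt((q2 - q1) ** 2 + (e2 - e1) ** 2
                   + (2 * np.sin(I21 / 2)) ** 2
                   + ((e1 + e2) / 2) ** 2 * (2 * np.sin(pi21 / 2)) ** 2)

rng = np.random.default_rng(0)
stream = rng.normal([0.98, 0.66, 57.7, 174.4, 198.4],
                    [0.01, 0.02, 1.0, 2.0, 3.0], (30, 5))      # tight mock stream
backgr = rng.uniform([0.1, 0.1, 0.0, 0.0, 0.0],
                     [1.0, 0.9, 80.0, 360.0, 360.0], (70, 5))  # mock sporadics
orbits = np.vstack([stream, backgr])

D = np.array([[d_sh(a, b) for b in orbits] for a in orbits])
labels = DBSCAN(eps=0.1, min_samples=5, metric="precomputed").fit_predict(D)
# label -1 marks "noise" (background) meteors; other labels are stream candidates
\end{verbatim}
A precomputed metric also makes it cheap to tune \texttt{eps}, and meteors labeled $-1$ remain unassociated background rather than being forced into a stream.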
\section{Conclusion}
From this work, we can conclude that the following associations between newly discovered meteoroid streams and parent bodies are validated:
\begin{itemize}
\item \#549 FAN 49 Andromedids and Comet 2001 W2 Batters
\item \#533 JXA July $\xi$ Arietids and Comet C/1964 N1 Ikeya
\item \#539 ACP $\alpha$ Cepheids and Comet 255P/Levy
\item \#541 SSD 66 Draconids and Asteroid 2001 XQ
\item \#751 KCE $\kappa$ Cepheids and Asteroid 2009 SG18
\item \#753 NED November Draconids and Asteroid 2009 WN25
\item \#754 POD $\psi$ Draconids and Asteroid 2008 GV
\end{itemize}
The connection between \#755 MID May $\iota$ Draconids and asteroid 2006 GY2 is not yet firmly established and requires additional observational data before any conclusion can be drawn. The asteroidal associations are interesting in that each asteroid has a Tisserand parameter with respect to Jupiter suggesting that it was possibly a Jupiter-family comet in the past, and thus each may now be a dormant comet. It may therefore be worth searching for outgassing from asteroids 2001 XQ, 2009 SG18, 2009 WN25, 2008 GV, and even 2006 GY2 during their perihelion passages in the near future using high-resolution imaging.
\begin{acknowledgements}
JV would like to acknowledge the availability of computing resources on the Occigen supercomputer at CINES (France), used to perform the computations required for modeling the theoretical meteoroid streams. Special acknowledgement also goes to all members of the Croatian Meteor Network for their devoted work on this project.
\end{acknowledgements}
\bibpunct{(}{)}{;}{a}{}{,}
\bibliographystyle{aa}
\section{Introduction}
When one attempts to associate newly discovered meteoroid streams with their parent bodies, there are four critical steps to carry out. The first is the stream discovery itself, through searches of databases and past records, ideally performed on meteor data in the form of Keplerian orbital elements. The second phase is the verification of the meteoroid stream using completely independent meteor databases and stream searches, as published online and/or reported in the literature, to help validate the existence of the stream. The third step is the identification of candidate parent bodies, comets or asteroids, whose orbits are similar to the space-time aggregated Keplerian elements of the found stream. However, close similarity between the orbits of a meteoroid stream and a potential parent body is not necessarily conclusive proof of association, since the two object types (parent body and meteoroid) can undergo significantly different orbital evolution, as shown by \citet{vaubaillon2006mechanisms}. Thus the most critical fourth step in establishing the association is dynamical modeling of the orbital evolution of a sample of particles ejected from the candidate parent body. Given the comet's or asteroid's best estimated orbit in the past, and following the ejected stream particles over many hundreds to thousands of years, one looks for eventual encounters with the Earth at the time of meteor observation and checks whether those encounters are geometrically similar to the observed meteoroids of the stream under investigation. The work by \citet{mcnaught1999leonid} demonstrates this point. This paper follows the approach of \citet{vaubaillon2005new}, focusing on the results of the dynamical modeling phase, and is the culmination of all the steps just outlined, performed on new streams discovered in recent Croatian Meteor Network stream searches. The application of dynamical stream modeling indicates, with a high level of confidence, that seven new streams can be associated with either comets or asteroids, the latter of which are conjectured to be dormant comets.
\section{Processing approach}
The seven streams and their hypothetical parent body associations were initially discovered using the meteor database search technique described in \citet{segon2014parent}. In summary, the method compared every meteor orbit to every other meteor orbit in the combined Croatian Meteor Network \citep{segon2012croatian, korlevic2013croatian}\footnote{\label{cmn_orbits}\url{http://cmn.rgn.hr/downloads/downloads.html#orbitcat}.} and SonotaCo \citep{sonotaco2009meteor}\footnote{\label{sonotaco_orbits}\url{http://sonotaco.jp/doc/SNM/index.html}.} video meteor orbit databases, looking for clusters and groupings in the five-parameter Keplerian orbital element space. This was based on the requirement that three D-criteria \citep{southworth1963statistics, drummond1981test, jopek2008meteoroid} were all satisfied within a specified threshold. The resulting groups had their mean orbital elements computed, by a simple averaging procedure, and were sorted by the number of meteor members. Working down from the largest group, meteors with orbits similar to that of the group under evaluation were assigned to the group and eliminated from further aggregation. This captured the known streams quickly, removing them from the meteor pool, and eventually revealed the newly discovered streams. Following the International Astronomical Union (IAU) shower nomenclature rules \citep{jenniskens2006iau}, all stream discoveries were first published; the search results can be found in three papers in WGN, the Journal of the International Meteor Organization \citep{andreic2014, gural2014results, segon2014results}.
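For reference, the first of these criteria, the \citet{southworth1963statistics} distance between two orbits $A$ and $B$, is commonly written as
\begin{align*}
D_{SH}^{2} ={}& (q_{B}-q_{A})^{2} + (e_{B}-e_{A})^{2} + \left(2\sin\frac{I_{AB}}{2}\right)^{2} + \left(\frac{e_{A}+e_{B}}{2}\right)^{2}\left(2\sin\frac{\Pi_{AB}}{2}\right)^{2},\\
\sin^{2}\frac{I_{AB}}{2} ={}& \sin^{2}\frac{i_{B}-i_{A}}{2} + \sin i_{A}\sin i_{B}\sin^{2}\frac{\Omega_{B}-\Omega_{A}}{2},\\
\Pi_{AB} ={}& \omega_{B}-\omega_{A} + 2\arcsin\left(\cos\frac{i_{B}+i_{A}}{2}\sin\frac{\Omega_{B}-\Omega_{A}}{2}\sec\frac{I_{AB}}{2}\right),
\end{align*}
where $I_{AB}$ is the angle between the two orbital planes and $\Pi_{AB}$ is the difference between the longitudes of perihelion measured from the intersection of the orbits; the \citet{drummond1981test} and \citet{jopek2008meteoroid} criteria are variants with different weightings of these terms.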
Next, the literature was scoured for similar stream searches in other independent data sets, such as the CAMS \citep{rudawska2014new, jenniskens2016cams} and EDMOND \citep{rudawska2014independent} video databases, to determine the validity of the new streams. The verified new streams were then compared against known cometary and asteroidal orbits, from which a list of candidate parent bodies was compiled, based once again on meeting multiple D-criteria for orbital similarity. Each section below describes in greater detail the unique processes and evidence for each stream's candidate association with a parent body. Besides the seven reported shower cases and their hypothetical parent bodies, the possibility of producing a meteor shower was also investigated for four possible streams with orbital parameters similar to those of asteroids 2002 KK3, 2008 UZ94, 2009 CR2, and 2011 YX62, but the results were inconclusive or negative. The remaining possible parent bodies from the search were not investigated, because their orbital elements are not precise enough or their orbits are given as parabolic.
The dynamical analysis for each object was performed as follows. First, the nominal orbit of the body was retrieved from the JPL HORIZONS ephemeris\footnote{\label{horizonsJPL}\url{http://horizons.jpl.nasa.gov}} for the current time period as well as for each perihelion passage during the past few centuries (typically two to five hundred years). Assuming the object presented comet-like activity in the past, the meteoroid stream ejection and evolution were simulated and propagated following \citet{vaubaillon2005new}. In detail, the method considers the ejection of meteoroids when the comet is within 3 AU of the Sun. The ejection velocity is computed following \citet{crifo1997dependence}; ejection velocities typically range from 0 to \textasciitilde100 m/s. The evolution of the meteoroids in the solar system is then propagated using numerical simulations, taking into account the gravitation of all the planets as well as non-gravitational forces (radiation pressure, solar wind, and the Poynting-Robertson effect). More details can be found in \citet{vaubaillon2005new}. When the parent body possessed a long orbital period, the stream was propagated starting from a more distant epoch, up to a few thousand years in the past. The intersections of the stream with the Earth were accumulated over 50 to 100 years, following the method of \citet{jenniskens2008minor}. Such a method provides a general view of the location of the meteoroid stream and gives statistically meaningful results. For each meteoroid considered to intersect the Earth, the radiant was computed following the method of \citet{neslusan1998computer} (the software was kindly provided by those authors). Finally, the size distribution of the particles intercepting the Earth was not considered in this paper, nor was the size of the modeled particles compared to that of the observed particles. The size distribution comparison will be the topic of a future paper.
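As an illustration of the propagation step alone, the following minimal sketch (ours; it keeps only point-mass solar gravity reduced by a radiation-pressure parameter $\beta$ and a fixed-step RK4 integrator, whereas the actual simulations include all planetary perturbations, solar wind drag, and the Poynting-Robertson effect; the initial state and $\beta$ are invented) advances one meteoroid for a year:
\begin{verbatim}
import numpy as np

GM_SUN = 1.32712440018e20   # m^3 s^-2
AU = 1.495978707e11         # m

def accel(r, beta):
    # point-mass solar gravity reduced by radiation pressure (factor 1 - beta)
    d = np.linalg.norm(r)
    return -(1.0 - beta) * GM_SUN * r / d**3

def rk4_step(r, v, beta, dt):
    k1v, k1r = accel(r, beta), v
    k2v, k2r = accel(r + 0.5 * dt * k1r, beta), v + 0.5 * dt * k1v
    k3v, k3r = accel(r + 0.5 * dt * k2r, beta), v + 0.5 * dt * k2v
    k4v, k4r = accel(r + dt * k3r, beta), v + dt * k3v
    return (r + dt / 6.0 * (k1r + 2 * k2r + 2 * k3r + k4r),
            v + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v))

# meteoroid released near perihelion with a 20 m/s ejection kick (invented)
r = np.array([0.99 * AU, 0.0, 0.0])
v = np.array([20.0, 41.0e3, 0.0])
for _ in range(24 * 365):                # one year in 1 h steps
    r, v = rk4_step(r, v, beta=1e-3, dt=3600.0)
print(r / AU, np.linalg.norm(v) / 1e3)   # position [AU], speed [km/s]
\end{verbatim}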
\section{IAU meteor shower \#549 FAN - 49 Andromedids and Comet 2001 W2 Batters
}
The first case to be presented here is that of meteor shower IAU \#549 49 Andromedids. Following the IAU rules, this shower was first reported as part of a paper in WGN, the Journal of the International Meteor Organization, by \citet{andreic2014}. Independent meteor shower database searches confirmed the existence of this shower \citep{rudawska2015independent, jenniskens2016cams}. The radiant position from the Croatian Meteor Network (CMN) search of the SonotaCo and CMN orbit databases was found to be R.A. = 20.9°, Dec. = +46.7°, with a mean geocentric velocity V$_{g}$ = 60.1 km/s near the center of the activity period (solar longitude $\lambda_{0}$ = 114°, 35 orbits). \citet{rudawska2015independent} found the radiant to be at R.A. = 19.0°, Dec. = +45.3° and V$_{g}$ = 59.8 km/s ($\lambda_{0}$ = 112.5°, 226 orbits), while \citet{jenniskens2016cams} give R.A. = 20.5°, Dec. = +46.6°, and V$_{g}$ = 60.2 km/s ($\lambda_{0}$ = 112°, 76 orbits). This shower was accepted as an established shower during the 2015 IAU Assembly\footnote{\label{IAU2015}\url{https://astronomy2015.org}.} and is now listed in the IAU meteor database.
At the time of the initial finding, there were 35 meteors associated with this shower, with orbital parameters similar to the published values for a known comet, 2001 W2 Batters. This Halley-type comet, with an orbital period of 75.9 years, has been well observed, and its orbital parameters have been determined with higher precision than those of many other comets of this type. The mean meteoroid orbital parameters, as found by the above-mentioned procedure, are compared with the orbit of 2001 W2 Batters in Table \ref{tab:table1}. Although the orbital distance according to the Southworth-Hawkins D-criterion, D$_{SH}$ = 0.14, seems somewhat high for claiming an association, the authors pointed out the necessity of dynamical stream modeling to confirm or reject the association hypothesis, because of the nearly identical ascending node values. Moreover, the changes in 2001 W2 Batters' orbital parameters as far back as 3000 BC, as extracted from HORIZONS, show that the comet approached the Earth's orbit more closely in 3000 BC than it has during the last few hundred years. Stream particles ejected from the comet farther in the past could thus produce a meteoroid stream observable at the Earth in the current epoch.
\begin{table*}[t]
\caption{Orbital parameters for the 49 Andromedids and Comet 2001 W2 Batters with corresponding D$_{SH}$ values. If the value of 112° for the ascending node (from \citet{jenniskens2016cams}) is used instead of the mean value (118°), the resulting D$_{SH}$ is 0.16. Orbital elements (mean values for the shower data): q = perihelion distance, e = eccentricity, i = inclination, Node = longitude of the ascending node, $\omega$ = argument of perihelion, D$_{SH}$ = Southworth and Hawkins D-criterion with respect to 2001 W2 Batters.}
\label{tab:table1}
\centering
\begin{tabular}{l c c c c c c}
\hline\hline
49 Andromedids & q & e & i & Node & $\omega$ & D$_{SH}$ \\% table heading
References & (AU) & & (\degr) & (\degr) & (\degr) & \\
\hline
1 & 0.918 & 0.925 & 118.2 & 114.0 & 143.1 & 0.14\\
2 & 0.907 & 0.878 & 119.2 & 112.5 & 142.2 & 0.17\\
3 & 0.898 & 0.922 & 117.9 & 118.0 & 139.8 & 0.19\\
2001 W2 Batters & 1.051 & 0.941 & 115.9 & 113.4 & 142.1 & 0\\
\hline
\end{tabular}
\tablebib{(1) \citet{andreic2014}; (2) \citet{rudawska2015independent}; (3) \citet{jenniskens2016cams}.
}
\end{table*}
The dynamical modeling for the hypothetical parent body 2001 W2 Batters was performed following \citet{vaubaillon2005new} and \citet{jenniskens2008minor}. In summary, the dynamical evolution of the parent body is considered over a few hundred to a few thousand years. At a specific chosen time in the past, the creation of a meteoroid stream is simulated and its evolution is followed forward in time until the present day. The intersection of the particles with the Earth is recorded, and the radiant of each particle is computed and compared to the observations. The simulated perihelion passages were initially limited to 500 years back in time, and no direct hits to the Earth were found for meteoroids ejected during that period. However, the authors were convinced that such close orbital similarity might yield more favorable results if the dynamical modeling were repeated for perihelion passages back to 3000 BC. The new run did provide positive results, with direct hits to the Earth predicted at R.A. = 19.1°, Dec. = +46.9°, and V$_{g}$ = 60.2 km/s, at a solar longitude of $\lambda_{0}$ = 113.2°. A summary of the observed and modeled results is given in Table \ref{tab:table2}.
\begin{table}[h]
\caption{Observed and modeled radiant positions for the 49 Andromedids and comet Batters' meteoroids ejected 3000 years ago.}
\label{tab:table2}
\centering
\begin{tabular}{l c c c c}
\hline\hline
49 Andromedids & R.A. & Dec. & V$_{g}$ & $\lambda_{0}$\\
References & (\degr) & (\degr) & (km/s) & (\degr)\\
\hline
1 & 20.9 & 46.7 & 60.1 & 114.0\\
2 & 19.0 & 45.3 & 59.8 & 112.5\\
3 & 20.5 & 46.6 & 60.2 & 112.0\\
2001 W2 Batters meteoroids, this work & 19.1 & 46.9 & 60.2 & 113.2\\
\hline
\end{tabular}
\tablebib{(1) \citet{andreic2014}; (2) \citet{rudawska2015independent}; (3) \citet{jenniskens2016cams}.
}
\end{table}
The maximum difference between the average observed radiant positions and the modeled mean positions is less than 2° in both right ascension and declination, and there are also individual meteors very close to the predicted positions. Since the observed radiant position fits the predictions very well, we may conclude that there is a strong possibility that comet 2001 W2 Batters is indeed the parent body of the 49 Andromedids shower. The high radiant dispersion seen in the observations can be accounted for by 1) less precise observations in some of the reported results, and 2) the roughly 3000-year age of the stream, which produces a more dispersed trail. The next closest possible association was with comet 1952 H1 Mrkos, but with D$_{SH}$ = 0.28 it was considered too distant to be connected with the 49 Andromedids stream.
Figures \ref{fig:figure1} and \ref{fig:figure2} show the location of the stream with respect to the Earth's path, as well as the theoretical radiant. These results were obtained by concatenating the locations of the particles intersecting the Earth over 50 years in order to clearly show the location of the stream (otherwise there are too few particles to cross the Earth each year). As a consequence, it is expected that the level of activity of this shower would not change much from year to year.
\begin{figure}[h]
\resizebox{\hsize}{!}{\includegraphics{media/image1.jpeg}}
\caption{Location of the nodes of the particles released by 2001 W2 Batters over several centuries, concatenated over the years 2000 to 2050. The Earth crosses the stream.}
\label{fig:figure1}
\end{figure}
\begin{figure}[h]
\resizebox{\hsize}{!}{\includegraphics{media/image2.png}}
\caption{Theoretical radiant of the particles released by 2001 W2 Batters which were closest to the Earth. The range of solar longitudes for modeled radiants is from 113.0\degr to 113.9\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.}
\label{fig:figure2}
\end{figure}
\section{IAU meteor shower \#533 JXA - July $\xi$ Arietids and comet 1964 N1 Ikeya}
The discovery of the possible meteor shower July $\xi$ Arietids was first published in \citet{segon2014new}. The shower was found as a grouping of 61 meteoroid orbits, active from July 4 to August 12 and peaking around July 21. Three other searches for meteor showers in different meteoroid orbit databases, by \citet{rudawska2015independent}, \citet{jenniskens2016cams}, and \citet{kornovs2014confirmation}, found this shower as well, but with slight differences in the period of activity. This shower was accepted as an established shower during the 2015 IAU Assembly held in Hawaii and is now referred to as shower \#533.
Among the possible parent bodies known at the time of this shower's discovery, comet C/1964 N1 Ikeya was found to have orbital parameters similar to those of the July $\xi$ Arietids. Comet C/1964 N1 Ikeya is a long-period comet with an orbital period of 391 years and, in contrast to comet 2001 W2 Batters, its orbit is less precisely determined. A summary of the mean orbital parameters of the shower compared with those of C/1964 N1 Ikeya is shown in Table \ref{tab:table3}, from which it can be seen that the D$_{SH}$ distance suggests a possible connection between the shower and the comet.
\begin{table*}[t]
\caption{Orbital parameters for the July $\xi$ Arietids and Comet 1964 N1 Ikeya with corresponding D$_{SH}$ values. Orbital elements (mean values for the shower data): q = perihelion distance, e = eccentricity, i = inclination, Node = longitude of the ascending node, $\omega$ = argument of perihelion, D$_{SH}$ = Southworth and Hawkins D-criterion with respect to 1964 N1 Ikeya.}
\label{tab:table3}
\centering
\begin{tabular}{l c c c c c c}
\hline\hline
July $\xi$ Arietids & q & e & i & Node & $\omega$ & D$_{SH}$\\
References & (AU) & & (\degr) & (\degr) & (\degr) & \\
\hline
1 & 0.883 & 0.965 & 171.6 & 299.0 & 318.0 & 0.10\\
2 & 0.863 & 0.939 & 171.8 & 292.6 & 313.8 & 0.08\\
3 & 0.836 & 0.919 & 171.5 & 291.1 & 309.8 & 0.09\\
4 & 0.860 & 0.969 & 170.4 & 292.7 & 312.4 & 0.08\\
C/1964 N1 Ikeya & 0.822 & 0.985 & 171.9 & 269.9 & 290.8 & 0\\
\hline
\end{tabular}
\tablebib{(1) \citet{segon2014new}; (2) \citet{kornovs2014confirmation}; (3) \citet{rudawska2015independent}; (4) \citet{jenniskens2016cams}.
}
\end{table*}
As in the previous case, the dynamical modeling was performed for perihelion passages from 5000 BC onwards. Only two direct hits were found in the complete analysis, but those two hits support the possibility that comet C/1964 N1 Ikeya is indeed the parent body of the July $\xi$ Arietids. The mean radiant positions of those two modeled meteoroids, together with the mean radiant positions found by the other searches, are presented in Table \ref{tab:table4}. As can be seen from the table, the difference in radiant position between the model and the observations appears at first to be very significant.
\begin{table}[h]
\caption{Observed and modeled radiant positions for the July $\xi$ Arietids and comet C/1964 N1 Ikeya. Rows in bold give the radiant positions of the entries above them extrapolated to a solar longitude of 106.7°, using the radiant drift provided in the respective papers.}
\label{tab:table4}
\centering
\begin{tabular}{l c c c c}
\hline\hline
July $\xi$ Arietids & R.A. & Dec. & V$_{g}$ & $\lambda_{0}$\\
References & (\degr) & (\degr) & (km/s) & (\degr)\\
\hline
1 & 40.1 & 10.6 & 69.4 & 119.0\\
& \textbf{32.0} & \textbf{7.5} & \textbf{...} & \textbf{106.7}\\
2 & 35.0 & 9.2 & 68.9 & 112.6\\
3 & 33.8 & 8.7 & 68.3 & 111.1\\
4 & 41.5 & 10.7 & 68.9 & 119.0\\
& \textbf{29.6} & \textbf{7.0} & \textbf{...} & \textbf{106.7}\\
1964 N1 Ikeya meteoroids, this work & 29.0 & 6.5 & 68.7 & 106.7\\
\hline
\end{tabular}
\tablebib{(1) \citet{segon2014new}; (2) \citet{kornovs2014confirmation}; (3) \citet{rudawska2015independent}; (4) \citet{jenniskens2016cams}.
}
\end{table}
However, the modeled radiant position agrees very well with that predicted by the radiant's daily motion: assuming $\Delta$R.A. = 0.66° and $\Delta$Dec. = 0.25° per degree of solar longitude from \citet{segon2014new}, the radiant at $\lambda_{0}$ = 106.7° would be located at R.A. = 32.0°, Dec. = 7.5°, about three degrees from the modeled radiant. If we instead use the results from \citet{jenniskens2016cams} ($\Delta$R.A. = 0.97° and $\Delta$Dec. = 0.30°), the resulting radiant position fits even better, at R.A. = 29.6°, Dec. = 7.0°, about one degree from the modeled radiant. The fact that the model does not fit the observed activity may be explained by various factors, from the imprecise knowledge of the comet's past position, derived from a relatively short observed orbital arc, to the possibility that this shower also has some other parent body (possibly associated with C/1964 N1 Ikeya). The next closest possible association was with comet 1987 B1 Nishikawa-Takamizawa-Tago, with D$_{SH}$ = 0.21, but owing to the large nodal distance between the orbits we consider it unconnected to the July $\xi$ Arietids.
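The extrapolation above is a simple linear drift in solar longitude; the following few lines (ours, using the peak radiants and drifts quoted above) reproduce the bold rows of Table \ref{tab:table4}:
\begin{verbatim}
for ra0, dec0, dra, ddec in [(40.1, 10.6, 0.66, 0.25),   # Segon et al. (2014)
                             (41.5, 10.7, 0.97, 0.30)]:  # Jenniskens et al. (2016)
    dlam = 106.7 - 119.0        # target minus peak solar longitude [deg]
    print(round(ra0 + dra * dlam, 1), round(dec0 + ddec * dlam, 1))
# -> 32.0 7.5  and  29.6 7.0
\end{verbatim}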
The simulation of the meteoroid stream was performed for hypothetical comet returns going back 5000 years before the present. According to the known orbit of the comet, it experienced close encounters with Jupiter and Saturn in 1676 and 1673 AD, respectively, making the orbital evolution prior to those dates much more uncertain. Nevertheless, the simulation of the stream was performed in order to get a big-picture view of the stream in the present-day solar system, as visualized in Figures \ref{fig:figure3} and \ref{fig:figure4}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image3.jpg}}
\caption{Location of the particles ejected by comet C/1964 N1 Ikeya over several centuries, concatenated over 50 years, in the vicinity of the Earth.}
\label{fig:figure3}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image4.png}}
\caption{Theoretical radiant of the particles released by C/1964 N1 Ikeya which were closest to the Earth. The match with the July $\xi$ Arietids is not convincing in this case. The range of solar longitudes for modeled radiants is from 99.0\degr to 104.8\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.}
\label{fig:figure4}
\end{figure}
\section{IAU meteor shower \#539 ACP - $\alpha$ Cepheids and comet 255P Levy
}
The $\alpha$ Cepheids shower was reported by \citet{segon2014new} as a dispersed grouping of 41 meteors, active from mid-December to mid-January at a mean radiant position of R.A. = 318°, Dec. = +64° at $\lambda_{0}$ = 281° (January 2). The authors investigated the possibility that this new shower could be connected to the predicted enhanced meteor activity of IAU shower \#446 DPC December $\phi$ Cassiopeiids, but pointed out that \#446 DPC and \#539 ACP cannot be the same meteor shower \citep{segon2014new}. Although the predicted meteor outburst was not detected \citep{roggemans2014letter}, there is a strong possibility that the activity of comet 255P/Levy produces a meteor shower observable from the Earth as the $\alpha$ Cepheids. The meteor searches conducted by \citet{kornovs2014confirmation} and \citet{jenniskens2016cams} failed to detect this shower, but \citet{rudawska2015independent} found 11 meteors with a mean radiant position of R.A. = 333.5°, Dec. = +66°, and V$_{g}$ = 13.4 km/s at $\lambda_{0}$ = 277.7°.
The mean geocentric velocity of the $\alpha$ Cepheids meteors was found to be small, only 15.9 km/s, but ranging from 12.4 to 19.7 km/s. Such a high dispersion in velocities may be explained by the fact that the D-criterion threshold for the automatic search was set to D$_{SH}$ = 0.15, which allowed a wider range of orbits to be accepted as meteor shower members. According to the dynamical modeling results, the geocentric velocity of meteoroids ejected from 255P/Levy should be about 13 km/s, and the observations show that some of the $\alpha$ Cepheids meteors indeed have such velocities at more or less the predicted radiant positions, as can be seen in Figure \ref{fig:figure5}. This leads us to the conclusion that this meteor shower has to be analyzed in greater detail, but that at least some of the observations represent meteoroids coming from comet 255P/Levy.
\begin{figure}[b]
\resizebox{\hsize}{!}{\includegraphics{media/image5.png}}
\caption{Radiant positions of observed $\alpha$ Cepheids and predicted meteors from 255P/Levy. The range of solar longitudes for modeled radiants is from 250\degr to 280\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.}
\label{fig:figure5}
\end{figure}
The simulation of the meteoroid stream ejected by comet 255P/Levy includes trails ejected from 1801 through 2017, as visualized in Figures \ref{fig:figure6} and \ref{fig:figure7}. Several past outbursts were forecast by the dynamical modeling, namely during the 2006 and 2007 apparitions, but none were observed (see Table \ref{tab:table5}). As a consequence, the conclusion is that the activity of the $\alpha$ Cepheids is most likely due to the global background of the stream.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image6.jpg}}
\caption{Location of the particles ejected by comet 255P/Levy in the vicinity of the Earth in 2006: an outburst should have been detected.}
\label{fig:figure6}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image7.jpg}}
\caption{Location of all the particles ejected by 255P over 50 years in order to show the location of the whole stream in the solar system. This graph does not imply several outbursts but rather provides a global indication of the stream.}
\label{fig:figure7}
\end{figure}
\begin{table}
\caption{Expected outbursts caused by 255P/Levy. No unusual activity was reported in 2006 or 2007. Columns: Year = the year of Earth's collision with the trail, Trail = year of particle ejection from the given trail, $\lambda_{0}$ = solar longitude in degrees, yyyy-mm-ddThh:mm:ss = date and time of the trail's closest approach, ZHR = zenithal hourly rate.}
\label{tab:table5}
\centering
\begin{tabular}{c c c c c}
\hline\hline
Year & Trail & $\lambda_{0}$ & yyyy-mm-ddThh:mm:ss & ZHR\\
& & (\degr) & &\\
\hline
2001 & 1963 & 279.132 & 2001-12-30T18:37:00 & 1\\
2001 & 1975 & 279.765 & 2001-12-31T12:01:00 & 3\\
2001 & 1980 & 279.772 & 2001-12-31T15:00:00 & 2\\
2001 & 1985 & 279.828 & 2001-12-31T11:24:00 & 11\\
2001 & 1991 & 279.806 & 2001-12-31T10:44:00 & 13\\
2002 & 1963 & 278.914 & 2002-10-20T14:56:00 & 1\\
2002 & 1980 & 279.805 & 2002-12-31T10:23:00 & 2\\
2002 & 1985 & 279.808 & 2002-12-31T10:40:00 & 15\\
2002 & 1991 & 279.789 & 2002-12-31T10:24:00 & 6\\
2006 & 1963 & 279.285 & 2006-12-31T08:01:00 & 1\\
2007 & 1963 & 279.321 & 2007-12-31T07:04:00 & 1\\
2012 & 1980 & 279.803 & 2012-12-31T06:25:00 & 6\\
2013 & 1980 & 279.882 & 2013-12-31T08:16:00 & 2\\
2014 & 1969 & 264.766 & 2014-12-17T00:07:00 & 1\\
2017 & 1930 & 342.277 & 2017-09-21T18:39:00 & 1\\
2017 & 1941 & 279.510 & 2017-12-30T03:41:00 & 1\\
2018 & 1969 & 278.254 & 2018-12-29T07:29:00 & 1\\
2033 & 1975 & 275.526 & 2033-12-27T10:12:00 & 1\\
2033 & 1980 & 275.488 & 2033-12-27T10:06:00 & 1\\
2033 & 1985 & 275.452 & 2033-12-27T09:55:00 & 1\\
2033 & 1991 & 275.406 & 2033-12-27T09:54:00 & 1\\
2033 & 1996 & 275.346 & 2033-12-27T08:58:00 & 1\\
2034 & 1975 & 262.477 & 2034-12-13T22:22:00 & 1\\
2034 & 1980 & 261.456 & 2034-06-06T03:40:00 & 1\\
2034 & 1985 & 261.092 & 2034-04-05T17:02:00 & 1\\
2034 & 1991 & 260.269 & 2034-03-09T11:52:00 & 1\\
2035 & 1914 & 276.553 & 2035-01-09T07:59:00 & 1\\
2035 & 1952 & 271.463 & 2035-12-20T03:11:00 & 1\\
2039 & 1980 & 272.974 & 2039-12-25T01:51:00 & 1\\
2039 & 1991 & 272.131 & 2039-12-25T01:05:00 & 1\\
\hline
\end{tabular}
\end{table}
There are several other parent bodies possibly connected to the $\alpha$ Cepheids stream: 2007 YU56 (D$_{SH}$ = 0.20), 2005 YT8 (D$_{SH}$ = 0.19), 1999 AF4 (D$_{SH}$ = 0.19), 2011 AL52 (D$_{SH}$ = 0.19), 2013 XN24 (D$_{SH}$ = 0.12), 2008 BC (D$_{SH}$ = 0.17), and 2002 BM (D$_{SH}$ = 0.16). These bodies will be examined in future work.
\section{IAU meteor shower \#541 SSD - 66 Draconids and asteroid 2001 XQ
}
The meteor shower 66 Draconids was reported by \citet{segon2014new} as a grouping of 43 meteors with a mean radiant at R.A. = 302°, Dec. = +62°, and V$_{g}$ = 18.2 km/s. The shower was found to be active from solar longitude 242° to 270° (November 23 to December 21), with a peak activity period around 255° (December 7). The searches by \citet{jenniskens2016cams} and \citet{kornovs2014confirmation} failed to detect this shower, but \citet{rudawska2015independent} again found it, consisting of 39 meteors from the EDMOND meteor orbit database, at R.A. = 296°, Dec. = +64°, and V$_{g}$ = 19.3 km/s for solar longitude $\lambda_{0}$ = 247°.
A search for a possible parent body of this shower resulted in asteroid 2001 XQ, which, with D$_{SH}$ = 0.10, represented the most probable candidate. A summary of the mean orbital parameters from the above-mentioned searches, compared with those of 2001 XQ, is shown in Table \ref{tab:table6}.
\begin{table*}[t]
\caption{Orbital parameters for the 66 Draconids and 2001 XQ with corresponding D$_{SH}$ values. Orbital elements (mean values for the shower data): q = perihelion distance, e = eccentricity, i = inclination, Node = longitude of the ascending node, $\omega$ = argument of perihelion, D$_{SH}$ = Southworth and Hawkins D-criterion with respect to 2001 XQ.}
\label{tab:table6}
\centering
\begin{tabular}{l c c c c c c}
\hline\hline
66 Draconids & q & e & i & Node & $\omega$ & D$_{SH}$\\
References & (AU) & & (\degr) & (\degr) & (\degr) & \\
\hline
1 & 0.981 & 0.657 & 27.2 & 255.2 & 184.4 & 0.10\\
2 & 0.980 & 0.667 & 29.0 & 247.2 & 185.2 & 0.13\\
2001 XQ & 1.035 & 0.716 & 29.0 & 251.4 & 190.1 & 0\\
\hline
\end{tabular}
\tablebib{(1) \citet{segon2014new}; (2) \citet{rudawska2015independent}.
}
\end{table*}
Asteroid 2001 XQ has a Tisserand parameter of T$_{j}$ = 2.45, a value common for Jupiter-family comets, which makes us suspect that it may not be an asteroid per se but rather a dormant comet. To the authors' collective knowledge, no cometary activity has been observed for this body. Nor was there any significant difference in the full-width half-maximum spread between stars and the asteroid in the imagery provided courtesy of Leonard Kornoš (personal communication) from Modra Observatory, where this asteroid (at that time designated 2008 VV4) was observed on its second return to perihelion, during which it reached \nth{18} magnitude.
Numerical modeling of the hypothetical meteor shower whose particles originate from asteroid 2001 XQ was performed for perihelion passages from 800 AD up to 2100 AD. The modeling showed multiple direct hits on the Earth in many years, even outside the period covered by the observations. A summary of the observed and modeled radiant positions is given in Table \ref{tab:table7}.
\begin{table}
\caption{Observed 66 Draconid and modeled 2001 XQ meteors' mean radiant positions (prefix C\_ stands for calculated (modeled), while prefix O\_ stands for observed). The number in the parenthesis indicates the number of observed 66 Draconid meteors in the given year. $\theta$ is the angular distance between the modeled and the observed mean radiant positions.}
\label{tab:table7}
\centering
\begin{tabular}{l c c c c c}
\hline\hline
Year & $\lambda_{0}$ & R.A. & Dec. & V$_{g}$ & $\theta$\\
& (\degr) & (\degr) & (\degr) & (km/s) & (\degr)\\
\hline
C\_2007 & 250.3 & 308.2 & 65.3 & 19.3 & ...\\
O\_2007 (5) & 257.5 & 300.1 & 63.2 & 18.2 & 4.1\\
C\_2008 & 248.2 & 326.8 & 56.9 & 16.1 & ...\\
O\_2008 (8) & 254.0 & 300.5 & 62.6 & 18.0 & 14.3\\
C\_2009 & 251.1 & 309.6 & 64.0 & 18.8 & ...\\
O\_2009 (5) & 253.6 & 310.4 & 61.0 & 17.0 & 3.0\\
C\_2010 & 251.2 & 304.0 & 63.1 & 19.1 & ...\\
O\_2010 (17) & 253.7 & 300.4 & 63.4 & 18.9 & 1.6\\
\hline
\end{tabular}
\end{table}
Despite the fact that the difference in the mean radiant positions may seem significant, radiant plots of individual meteors show that some of the meteors predicted to hit the Earth at the observation epoch were observed at positions almost exactly as predicted. It is thus considered that the results of the simulations statistically represent the stream correctly, but individual trails cannot be identified as responsible for any specific outburst, as visualized in Figures \ref{fig:figure8} and \ref{fig:figure9}. The activity of this shower is therefore expected to be quite regular from year to year.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image8.jpg}}
\caption{Location of the nodes of the particles released by 2001 XQ over several centuries, concatenated over 50 years. The Earth crosses the stream.}
\label{fig:figure8}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image9.png}}
\caption{Theoretical radiants of the particles released by 2001 XQ which were closest to the Earth. The range of solar longitudes for modeled radiants is from 231.1\degr to 262.8\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.}
\label{fig:figure9}
\end{figure}
Two other candidate parent bodies, 2004 YY23 and 2015 WB13, were initially considered, but both had D$_{SH}$ = 0.26, which was deemed too distant for an association with the 66 Draconids stream.
\section{IAU meteor shower \#751 KCE - $\kappa$ Cepheids and asteroid 2009 SG18
}
The meteor shower $\kappa$ Cepheids had been reported by \citet{segon2015four}, as a grouping of 17 meteors with very similar orbits, having average D$_{SH}$ of only 0.06. The activity period was found to from September 11 to September 23, covering solar longitudes from 168° to 180°. The radiant position was R.A. = 318°, Dec. = 78° with V$_{g}$ = 33.7 km/s, at a mean solar longitude of 174.4°. Since the new shower discovery has been reported only recently, the search by \citet{kornovs2014confirmation} could be considered totally blind having not found its existence, while the search by \citet{jenniskens2016cams} did not detect it as well in the CAMS database. Once again, the search by \citet{rudawska2015independent} found the shower, but in much higher numbers than it has been found in the SonotaCo and CMN orbit databases. In total 88 meteors have been extracted as $\kappa$ Cepheids members in the EDMOND database. A summary of the mean orbital parameters from the above mentioned searches compared with 2009 SG18 are shown in Table \ref{tab:table8}.
\begin{table*}[t]
\caption{Orbital parameters for $\kappa$ Cepheids and asteroid 2009 SG18 with corresponding D$_{SH}$ values. Orbital elements (mean values for shower data): q = perihelion distance, e = eccentricity, i = inclination, Node = longitude of ascending node, $\omega$ = argument of perihelion, D$_{SH}$ = Southworth and Hawkins D-criterion with respect to 2009 SG18.}
\label{tab:table8}
\centering
\begin{tabular}{l c c c c c c}
\hline\hline
$\kappa$ Cepheids & q & e & i & Node & $\omega$ & D$_{SH}$\\
References & (AU) & & (\degr) & (\degr) & (\degr) & \\
\hline
1 & 0.983 & 0.664 & 57.7 & 174.4 & 198.4 & 0.10\\
2 & 0.987 & 0.647 & 55.9 & 177.2 & 190.4 & 0.17\\
2009 SG18 & 0.993 & 0.672 & 58.4 & 177.6 & 204.1 & 0\\
\hline
\end{tabular}
\tablebib{(1) \citet{segon2014new}; (2) \citet{rudawska2015independent}.
}
\end{table*}
What can be seen at a glance is that the mean orbital parameters from the two searches are very consistent (D$_{SH}$ = 0.06), while the mean shower orbits differ from the asteroid's orbit mainly in the argument of perihelion and the perihelion distance. Asteroid 2009 SG18 has a Tisserand parameter for Jupiter of T$_{j}$ = 2.31, meaning that it could be a dormant comet.
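The Tisserand parameters quoted here and elsewhere in this paper follow from the usual definition $T_j = a_J/a + 2\cos i\,\sqrt{(a/a_J)(1-e^2)}$, with the semi-major axis recovered from $q$ and $e$; values between 2 and 3 are typical of Jupiter-family comets. A short illustrative sketch reproduces the quoted value for 2009 SG18:
\begin{verbatim}
import numpy as np

A_JUPITER = 5.204  # semi-major axis of Jupiter (AU)

def tisserand(q, e, i_deg):
    # Tisserand parameter w.r.t. Jupiter from q (AU), e and i (deg).
    a = q / (1.0 - e)  # semi-major axis from perihelion distance
    return (A_JUPITER / a
            + 2.0 * np.cos(np.radians(i_deg))
            * np.sqrt((a / A_JUPITER) * (1.0 - e ** 2)))

# 2009 SG18, elements from Table 8
print(tisserand(0.993, 0.672, 58.4))  # ~2.31
\end{verbatim}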
Numerical modeling of the hypothetical meteor shower originating from asteroid 2009 SG18, for perihelion passages from 1804 AD up to 2020 AD, yielded multiple direct hits on the Earth for more years than the period covered by the observations, as seen in Figures \ref{fig:figure10} and \ref{fig:figure11}. The remarkable agreement found between the predicted and observed meteors for the years 2007 and 2010 is summarized in Table \ref{tab:table9}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image10.jpg}}
\caption{Location of the nodes of the particles released by 2009 SG18 over several centuries, concatenated over 50 years. The Earth crosses the stream.}
\label{fig:figure10}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image11.png}}
\caption{Theoretical radiant of the particles released by 2009 SG18 which were closest to the Earth. Several features are visible due to the difference trails, but care must be taken when interpreting these data. The range of solar longitudes for modeled radiants is from 177.0\degr to 177.7\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.}
\label{fig:figure11}
\end{figure}
\begin{table}
\caption{Observed $\kappa$ Cepheids and modeled 2009 SG18 meteors' mean radiant positions (prefix C\_ stands for calculated or modeled, while prefix O\_ stands for observed). The number in the parenthesis indicates the number of observed meteors in the given year. $\theta$ is the angular distance between the modeled and the observed mean radiant positions.}
\label{tab:table9}
\centering
\begin{tabular}{l c c c c c}
\hline\hline
Year & $\lambda_{0}$ & R.A. & Dec. & V$_{g}$ & $\theta$\\
& (\degr) & (\degr) & (\degr) & (km/s) & (\degr)\\
\hline
C\_2007 & 177.4 & 327.5 & 77.0 & 34.0 & ...\\
O\_2007 (3) & 177.1 & 328.3 & 77.9 & 35.3 & 0.9\\
C\_2010 & 177.7 & 327.7 & 77.7 & 34.3 & ...\\
O\_2010 (2) & 177.7 & 326.5 & 80.5 & 34.7 & 2.8\\
\hline
\end{tabular}
\end{table}
Based on the initial analysis given in this paper, a prediction of possible enhanced activity on September 21, 2015 was made by \citet{segon2015croatian}. At the moment there are no video meteor data confirming the predicted enhanced activity, but a paper on visual observations of the $\kappa$ Cepheids by a highly reputable visual observer confirmed some level of increased activity \citep{rendtel2015minor}.
The encounters shown in Table \ref{tab:table10} between the trails ejected by 2009 SG18 and the Earth were found theoretically through the dynamical modeling. Caution is required when interpreting these results, since any historical outbursts still need to be confirmed before such predictions can be trusted.
\begin{table*}[t]
\caption{Prediction of possible outbursts caused by 2009 SG18. Columns: Year = the year of Earth's collision with the trail, Trail = year of particle ejection from the given trail, rE-rD = the distance between the Earth and the center of the trail, $\lambda_{0}$ = solar longitude in degrees, yyyy-mm-ddThh:mm:ss = date and time of the trail's closest approach, ZHR = zenithal hourly rate}
\label{tab:table10}
\centering
\begin{tabular}{c c c c c c}
\hline\hline
Year & Trail & rE-rD & $\lambda_{0}$ & yyyy-mm-ddThh:mm:ss & ZHR\\
& & (AU) & (\degr) & &\\
\hline
2005 & 1967 & 0.00066 & 177.554 & 2005-09-20T12:08:00 & 11\\
2006 & 1804 & 0.00875 & 177.383 & 2006-09-20T11:31:00 & 13\\
2010 & 1952 & -0.00010 & 177.673 & 2010-09-20T21:38:00 & 12\\
2015 & 1925 & -0.00143 & 177.630 & 2015-09-21T03:29:00 & 10\\
2020 & 1862 & -0.00064 & 177.479 & 2020-09-20T06:35:00 & 11\\
2021 & 1962 & 0.00152 & 177.601 & 2021-09-20T15:39:00 & 11\\
2031 & 2004 & -0.00126 & 177.267 & 2031-09-20T21:15:00 & 12\\
2031 & 2009 & -0.00147 & 177.222 & 2031-09-20T19:55:00 & 13\\
2033 & 1946 & 0.00056 & 177.498 & 2033-09-20T14:57:00 & 10\\
2036 & 1978 & -0.00042 & 177.308 & 2036-09-20T04:44:00 & 20\\
2036 & 2015 & -0.00075 & 177.220 & 2036-09-20T02:33:00 & 20\\
2036 & 2025 & 0.00109 & 177.254 & 2036-09-20T03:19:00 & 13\\
2037 & 1857 & -0.00031 & 177.060 & 2037-09-20T04:37:00 & 13\\
2037 & 1946 & 0.00021 & 177.273 & 2037-09-20T09:56:00 & 10\\
2038 & 1841 & -0.00050 & 177.350 & 2038-09-20T18:02:00 & 10\\
2038 & 1925 & 0.00174 & 177.416 & 2038-09-20T19:39:00 & 11\\
2039 & 1815 & -0.00018 & 177.303 & 2039-09-20T23:01:00 & 10\\
\hline
\end{tabular}
\end{table*}
The next closest possible association was 2002 CE26 with D$_{SH}$ of 0.35, which was deemed too distant to be connected to the $\kappa$ Cepheids stream.
\section{IAU meteor shower \#753 NED - November Draconids and asteroid 2009 WN25
}
The November Draconids had been previously reported by \citet{segon2015four}, and consist of 12 meteors on very similar orbits, with a maximal distance from the mean orbit of D$_{SH}$ = 0.08 and an average of only D$_{SH}$ = 0.06. The activity period was found to be between November 8 and 20, with peak activity at solar longitude 232.8°. The radiant position at peak activity was found to be at R.A. = 194°, Dec. = +69°, with V$_{g}$ = 42.0 km/s. There are no results from other searches, since the shower has been reported only recently. Other meteor showers have been reported at coordinates similar to those of \#753 NED, namely \#387 OKD October $\kappa$ Draconids and \#392 NID November i Draconids \citep{brown2010meteoroid}. The D$_{SH}$ for \#387 OKD (0.35) is far too large for it to be the same stream. \#392 NID may be closely related to \#753, since the D$_{SH}$ of 0.14 derived from radar observations shows significant similarity; however, the mean orbit derived from optical observations by \citet{jenniskens2016cams} differs by D$_{SH}$ = 0.24, which we consider too large for the same shower.
The possibility that asteroid 2009 WN25 is the parent body of this possible meteor shower has been investigated by numerically modeling the hypothetical meteoroids ejected during the period from 3000 BC up to 1500 AD, visualized in Figures \ref{fig:figure12} and \ref{fig:figure13}. The asteroid 2009 WN25 has a Tisserand parameter for Jupiter of T$_{j}$ = 1.96. Despite the fact that direct encounters with modeled meteoroids were not found for all years in which meteors were observed, and that the number of hits is relatively small compared to the other modeled showers, the averaged predicted positions fit the observations very well (see Table \ref{tab:table11}). This shows that the theoretical results have a statistically meaningful value, and it validates the approach of simulating the stream over a long period of time and concatenating the results to provide an overall view of the shower.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image12.jpg}}
\caption{Location of the nodes of the particles released by 2009 WN25 over several centuries, concatenated over 100 years. The Earth crosses the stream.}
\label{fig:figure12}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image13.png}}
\caption{Theoretical radiant of the particles released by 2009 WN25 which were closest to the Earth. The range of solar longitudes for modeled radiants is from 230.3\degr to 234.6\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.}
\label{fig:figure13}
\end{figure}
\begin{table}
\caption{Averaged observed and modeled radiant positions for \#753 NED and 2009 WN25. $\theta$ is the angular distance between the modeled and the observed mean radiant positions.}
\label{tab:table11}
\centering
\begin{tabular}{l c c c c c}
\hline\hline
November Draconids & $\lambda_{0}$ & R.A. & Dec. & V$_{g}$ & $\theta$\\
& (\degr) & (\degr) & (\degr) & (km/s) & (\degr)\\
\hline
Predicted & 232.8 & 194.2 & 68.6 & 42.0 & ...\\
Observed & 232.4 & 196.5 & 67.6 & 41.8 & 1.3\\
\hline
\end{tabular}
\end{table}
Moreover, the predicted 2014 activity sits at exactly the same equatorial location (R.A. = 199°, Dec. = +67°) seen in Canadian Meteor Orbit Radar (CMOR) plots.\footnote{\label{cmor_plots}\url{http://fireballs.ndc.nasa.gov/} - "radar".} The shower has been noted as NID, but its position fits the NED more closely. Since orbital data from the CMOR database are not available online, the authors were not able to confirm the hypothesis that the radar is seeing the same meteoroid orbits as the model produces. However, the authors received confirmation from Dr. Peter Brown at the University of Western Ontario (private correspondence) that this stream has shown activity each year in the CMOR data, and that it likely belongs to the QUA-NID complex. A recently published paper \citep{micheli2016evidence} suggests that asteroid 2009 WN25 may be a parent body of the NID shower as well, so additional analysis with more observations will be needed to reveal the true nature of this shower complex. The next closest possible association, 2012 VF6 with D$_{SH}$ = 0.49, was deemed too distant to be connected to the November Draconid stream.
\section{IAU meteor shower \#754 POD - $\psi$ Draconids and asteroid 2008 GV
}
The possible new meteor shower $\psi$ Draconids was reported by \citet{segon2015four}, consisting of 31 tight meteoroid orbits, with a maximal distance from the mean orbit of D$_{SH}$ = 0.08 and an average of only D$_{SH}$ = 0.06. The $\psi$ Draconids were found to be active from March 19 to April 12, with average activity around solar longitude 12° and a radiant at R.A. = 262°, Dec. = +73°, and V$_{g}$ = 19.8 km/s. No confirmation from other shower searches exists at the moment, since the shower has been reported only recently.
If this shower's existence is confirmed, the most probable parent body known at this time would be asteroid 2008 GV, whose orbit is very similar to the average orbit of the $\psi$ Draconids (D$_{SH}$ = 0.08). Since the asteroid has a Tisserand parameter of T$_{j}$ = 2.90, it may be a dormant comet as well. Dynamical modeling was done for hypothetical meteoroids ejected at perihelion passages from 3000 BC to 2100 AD, resulting in direct hits on the Earth for almost every year from 2000 onwards.
For the period covered by observations used in the CMN search, direct hits were found for years 2008, 2009, and 2010. The summary of the average radiant positions from the observations and from the predictions are given in Table \ref{tab:table12}. The plots of modeled and observed radiant positions are shown in Figure \ref{fig:figure15}, while locations of nodes of the modeled particles released by 2008 GV are shown in Figure \ref{fig:figure14}.
\begin{table}
\caption{Observed $\psi$ Draconids and modeled 2008 GV meteors' mean radiant positions (prefix C\_ stands for calculated (modeled), while prefix O\_ stands for observed). The number in the parenthesis indicates the number of observed meteors in the given year. $\theta$ is the angular distance between the modeled and the observed mean radiant positions.}
\label{tab:table12}
\centering
\begin{tabular}{l c c c c c}
\hline\hline
Year & $\lambda_{0}$ & R.A. & Dec. & V$_{g}$ & $\theta$\\
& (\degr) & (\degr) & (\degr) & (km/s) & (\degr)\\
\hline
C\_2008 & 15.9 & 264.6 & 75.2 & 19.4 & ...\\
O\_2008 (2) & 14.2 & 268.9 & 73.3 & 20.7 & 2.2\\
C\_2009 & 13.9 & 254.0 & 74.3 & 19.3 & ...\\
O\_2009 (11) & 9.5 & 257.4 & 72.0 & 19.9 & 2.5\\
C\_2010 & 12.8 & 244.7 & 73.4 & 19.1 & ...\\
O\_2010 (6) & 15.1 & 261.1 & 73.0 & 19.8 & 4.7\\
\hline
\end{tabular}
\end{table}
As can be seen from Table \ref{tab:table12}, the mean observed positions fit the positions predicted by the dynamical modeling very well, and in two cases single meteors were observed very close to the predicted positions. On the other hand, the predictions for the year 2015 show that a few meteoroids should hit the Earth around solar longitude 14.5° at R.A. = 260°, Dec. = +75°, but no significant activity has been detected in CMN observations. Small groups of meteors can be seen on CMOR plots for that solar longitude at a position slightly lower in declination, but this should be verified using radar orbital measurements once they become available. According to Dr. Peter Brown at the University of Western Ontario (private correspondence), there is no significant activity from this shower in the CMOR orbital data.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image14.jpg}}
\caption{Location of the nodes of the particles released by 2008 GV over several centuries, concatenated over 100 years. The Earth crosses the stream.}
\label{fig:figure14}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image15.png}}
\caption{Theoretical radiant of the particles released by 2008 GV which were closest to the Earth. The range of solar longitudes for modeled radiants is from 355.1\degr to 17.7\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.}
\label{fig:figure15}
\end{figure}
One other potential parent body, 2015 FA118, may be connected to the $\psi$ Draconid stream; that alternative will be examined in future work.
\section{IAU meteor shower \#755 MID - May $\iota$ Draconids and asteroid 2006 GY2
}
The possible new meteor shower May $\iota$ Draconids was reported by \citet{segon2015four}, consisting of 19 tight meteoroid orbits, with a maximal distance from their mean orbit of D$_{SH}$ = 0.08 and an average of only D$_{SH}$ = 0.06. The May $\iota$ Draconids were found to be active from May 7 to June 6, with peak activity around solar longitude 60° at R.A. = 231°, Dec. = +53°, and V$_{g}$ = 16.7 km/s. No confirmation from other searches exists at the moment, since the shower has been reported in the literature only recently. Greaves (from the meteorobs mailing-list archives\footnote{\label{meteorobs}\url{http://lists.meteorobs.org/pipermail/meteorobs/2015-December/018122.html}.}) stated that this shower should be the same as \#273 PBO $\phi$ Bootids. However, looking at the details of that shower as presented in \citet{jenniskens2006meteor}, we find that the solar longitude stated in the IAU Meteor Data Center does not correspond to the mean ascending node of the three meteors chosen to represent the $\phi$ Bootid shower. If a weighted orbit average over all references is calculated, the resulting D$_{SH}$ relative to MID is 0.18, which we consider large enough to treat them as separate showers (if the MID exists at all). Three \#273 PBO orbits from the IAU MDC do indeed match \#755 MID, suggesting that these two possible showers may somehow be connected.
Asteroid 2006 GY2 was investigated as a probable parent body using dynamical modeling, as in the previous cases. The asteroid 2006 GY2 has a Tisserand parameter for Jupiter of T$_{j}$ = 3.70. Of all the cases discussed in this paper, this one shows the poorest match between the observed and predicted radiant positions. The theoretical stream was modeled with trails ejected from 1800 AD through 2100 AD. According to the dynamical modeling analysis, this parent body should produce meteors in all years covered by the observations, at more or less the same position, R.A. = 248.5°, Dec. = +46.2°, at the same solar longitude of 54.4°, and with V$_{g}$ = 19.3 km/s, as visualized in Figures \ref{fig:figure16} and \ref{fig:figure17}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image16.jpg}}
\caption{Location of the nodes of the particles released by 2006 GY2 over several centuries, concatenated over 50 years. The Earth crosses the stream.}
\label{fig:figure16}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{media/image17.png}}
\caption{Theoretical radiant of the particles released by 2006 GY2 which were closest to the Earth. The range of solar longitudes for modeled radiants is from 54.1\degr to 54.5\degr. Pluses represent the modeled radiants in the given solar longitude range, while the circles represent the observed radiants during the whole activity of the shower.}
\label{fig:figure17}
\end{figure}
However, the six meteors belonging to the possible \#755 MID shower found in the solar longitude range from 52.3° to 53.8° (the next meteor was found at 58.6°) show a mean radiant position of R.A. = 225.8°, Dec. = +46.4°, with a mean V$_{g}$ of 16.4 km/s. Given the angular distance of 15.6\degr between the observed and modeled radiants, the 3 km/s difference in geocentric velocity, and the fact that no modeled meteors were observed at or near the observed position, we cannot conclude that this asteroid is the parent body of the possible May $\iota$ Draconid shower.
Another potential parent body was 2002 KK3, with D$_{SH}$ = 0.18; however, the dynamical modeling for 2002 KK3 showed no crossings with Earth's orbit. There were also three more distant bodies at D$_{SH}$ = 0.20: 2010 JH3, 2013 JY2, and 2014 WC7. These bodies will be examined in future work.
\section{IAU meteor shower \#531 GAQ - $\gamma$ Aquilids and comet C/1853G1 (Schweizer), and other investigated bodies
}
The possible new meteor shower $\gamma$ Aquilids was reported by \citet{segon2014new}, and it was found in other stream search papers as well (\citealt{kornovs2014confirmation}, \citealt{rudawska2015independent}, and \citealt{jenniskens2016cams}). Meteoroids from the suggested parent body, comet C/1853G1 (Schweizer), were modeled for perihelion passages ranging from 3000 BC up to the present, and were evaluated. Despite the very similar orbits of \#531 GAQ and comet C/1853G1 (Schweizer), no direct hits on the Earth were found.
Besides C/1853G1 (Schweizer), negative results were found for dynamical analyses done on asteroids 2011 YX62 (as a possible parent body of \#604 ACZ $\zeta$1 Cancrids) and 2002 KK3, 2008 UZ94, and 2009 CR2 (no shower association reported).
\section{Discussion}
\paragraph{}
The new meteoroid stream discoveries described in this work have been reported previously, and were based on searches with well-defined conditions and constraints that individual meteoroid orbits must satisfy for association. The simulated particles ejected by the hypothetical parent bodies were treated in the same rigorous manner. Although we consider the similarity of the observed radiant and the dynamically modeled radiant to be sufficient evidence for association with the hypothetical parent body, following the approach of \citet{vaubaillon2005new}, there are several points worth discussing.
All meteoroid orbits used in the analysis of \citet{segon2014parent} were generated using the UFOorbit software package by \citet{sonotaco2009meteor}. As this software does not estimate the errors of the observations, or of the calculated orbital elements, it is not possible to assess the real precision of the individual meteoroid orbits used in the initial search analysis. Furthermore, all UFOorbit-generated orbits are calculated on the basis of the meteor's average geocentric velocity, without taking deceleration into account. This simplification introduces errors into the orbital elements of very slow, very long, and/or very bright meteors. The real impact of this simplification is discussed in \citet{segon2014draconids}, where the 2011 Draconid outburst is analyzed. Two average meteoroid orbits generated from average velocities were compared, one with and one without a linear deceleration model; these two orbits differed by as much as 0.06 in D$_{SH}$ (D$_{H}$ = 0.057, D$_{D}$ = 0.039). Such a deviation does not necessarily mean that the clustering would not be detected, but it does mean that those orbits will certainly differ from orbits generated with deceleration taken into account, as well as from the numerically generated orbits of hypothetical parent bodies. Consequently, the radiants of slower meteors can be dispersed, beyond the natural radiant dispersion, by the varying influence of the deceleration on the position of the true radiant. This observation is relevant not only to UFOorbit, but to all software which does not properly model the actual deceleration. The CAMS Coincidence software uses an exponential deceleration model \citep{jenniskens2011cams}; however, not all meteors decelerate exponentially, as was shown in \citet{borovivcka2007atmospheric}. The real influence of deceleration on radiant dispersion will be a topic of future work. Undoubtedly, an important question is whether the dispersion caused by improperly calculated deceleration of slow (e.g., near-Earth-object-generated) meteors can render members of a meteoroid stream unassociated with each other by automated stream searching methods.
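To make the effect of the deceleration model concrete, the following hedged sketch (with invented numbers, purely for illustration) fits one synthetic meteor trail twice, once with a constant-velocity model and once with a linear-deceleration model; the difference between the two initial-velocity estimates is what propagates into the orbital elements:
\begin{verbatim}
import numpy as np

# Synthetic trail: times (s) and along-track lengths (km), generated
# with v0 = 20 km/s and a linear deceleration of a = 2 km/s^2.
t = np.linspace(0.0, 0.8, 17)
length = 20.0 * t - 0.5 * 2.0 * t ** 2

# Constant-velocity model: length = v * t + const.
v_const = np.polyfit(t, length, 1)[0]

# Linear-deceleration model: length = -0.5*a*t^2 + v0*t + const.
coef2, v0, _ = np.polyfit(t, length, 2)

print(v_const, v0)  # ~19.2 vs. ~20.0 km/s
\end{verbatim}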
Besides the lack of error estimation of meteor observations, parent bodies on relatively unstable orbits are observed over a short observation arc, thus they often do not have very precise orbital element solutions. Moreover, unknown past parent body activity presents a seemingly unsolvable issue of how the parent orbit could have been perturbed on every perihelion pass close to the Sun. Also if the ejection modeling assumed that the particle ejection occurred during a perihelion passage when the parent body was not active, there would be no meteors present when the Earth passes through the point of the falsely predicted filament. On the other hand, if the Earth encounters meteoroids from a perihelion passage of particularly high activity, an unpredicted outburst can occur. \citet{vaubaillon2015latest} discuss the unknowns regarding parent bodies and the problems regarding meteor shower outburst prediction in greater detail.
Another fundamental problem encountered during this analysis was the lack of any rigorous definition of what meteor showers or meteoroid streams actually are, and the absence of a common consensus to refer to. This issue was briefly discussed in \citet{brown2010meteoroid}, and no real advances towards a clear definition have been made since. We can consider a meteor shower to be a group of meteors which appear annually near the same radiant with approximately the same entry velocity. To better embrace the higher-dimensional nature of the orbital parameters and their time evolution, as opposed to a radiant fixed year after year, this should be extended to mean that there exists a meteoroid stream with meteoroids distributed along and across the whole orbit, with constraints dictated by the dynamical evolution away from the mean orbit. Under the first definition, however, some meteor showers caused by Jupiter-family comets will not be covered very well, as they are not active annually; the orbits of such meteor showers are not stable in the long term, owing to gravitational and non-gravitational influences on the meteoroid stream.\footnote{\label{vaubaillonIMC2014}\url{http://www.imo.net/imc2014/imc2014-vaubaillon.pdf}.} On the other hand, if we consider any group of radiants which exhibit similar features but do not appear annually to be a meteor shower, we can expect thousands of meteor shower candidates in the near future.
It is the opinion of the authors that with the rising number of observed multi-station video meteors, and consequently the rising number of estimated meteoroid orbits, the number of new potential meteor showers detected will increase as well, regardless of the stream search method used. As a consequence of the vague meteor shower definition, several methods of meteor shower identification have been used in recent papers. \citet{vida2014meteor} discussed a rudimentary method of visual identification combined with D-criterion shower candidate validation. \citet{rudawska2014new} used the Southworth and Hawking D-criterion as a measure of meteoroid orbit similarity in an automatic single-linkage grouping algorithm, while in the subsequent paper by \citet{rudawska2015independent}, the geocentric parameters were evaluated as well. In \citet{jenniskens2016cams} the results of the automatic grouping by orbital parameters were disputed and a manual approach was proposed.
Although there are concerns about automated stream identification methods, we believe it would be worthwhile to explore the use of density-based clustering algorithms, such as the DBSCAN or OPTICS algorithms of \citet{kriegel2005density}, for meteor shower identification. These algorithms could discriminate shower meteors from the background, as they incorporate a notion of noise and of varying density in the data. We also strongly encourage attempts to define meteor showers in a more rigorous manner, or to introduce an alternative term which would help to properly describe such a complex phenomenon. The authors believe that a clear definition would be of great help in determining whether a parent body actually produces meteoroids – at least until meteor observations become precise enough to connect a parent body to a single meteoroid orbit.
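As a purely illustrative sketch of such an experiment, a precomputed D$_{SH}$ distance matrix can be fed to a density-based clusterer; the eps and min_samples values below are placeholders rather than tuned thresholds, and d_sh refers to the earlier sketch of the D-criterion.
\begin{verbatim}
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_orbits(orbits, eps=0.08, min_samples=5):
    # orbits: list of tuples (q, e, i, Node, peri).
    n = len(orbits)
    dist = np.zeros((n, n))
    for a in range(n):
        for b in range(a + 1, n):
            dist[a, b] = dist[b, a] = d_sh(*orbits[a], *orbits[b])
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="precomputed").fit_predict(dist)
    return labels  # label -1 marks background (sporadic) meteors
\end{verbatim}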
\section{Conclusion}
From this work, we can conclude that the following associations between newly discovered meteoroid streams and parent bodies are validated:
\begin{itemize}
\item \#549 FAN 49 Andromedids and Comet 2001 W2 Batters
\item \#533 JXA July $\xi$ Arietids and Comet C/1964 N1 Ikeya
\item \#539 ACP $\alpha$ Cepheids and Comet P/255 Levy
\item \#541 SSD 66 Draconids and Asteroid 2001 XQ
\item \#751 KCE $\kappa$ Cepheids and Asteroid 2009 SG18
\item \#753 NED November Draconids and Asteroid 2009 WN25
\item \#754 POD $\psi$ Draconids and Asteroid 2008 GV
\end{itemize}
The connection between \#755 MID May $\iota$ Draconids and asteroid 2006 GY2 is not firmly established and requires additional observational data before any conclusion can be drawn. The asteroidal associations are interesting in that each asteroid has a Tisserand parameter for Jupiter indicating that it was possibly a Jupiter-family comet in the past, and thus each may now be a dormant comet. It may therefore be worth looking for outgassing from asteroids 2001 XQ, 2009 SG18, 2009 WN25, 2008 GV, and even 2006 GY2 during their future perihelion passages, using high-resolution imaging.
\begin{acknowledgements}
JV would like to acknowledge the availability of computing resources on the Occigen super computer at CINES (France) to perform the computations required for modeling the theoretical meteoroid streams. Special acknowledgement also goes to all members of the Croatian Meteor Network for their devoted work on this project.
\end{acknowledgements}
\bibpunct{(}{)}{;}{a}{}{,}
\bibliographystyle{aa}
\section{Introduction}
We are motivated by the classical question of irregularities of distribution \cite{MR903025} and recent results
which give new lower bounds on the star-Discrepancy in all dimensions $ d\ge 3$ \cites{MR2409170,MR2414745}. We recall these results.
Given integer $ N$, and selection $ \mathcal P$ of $ N$ points in the unit cube $ [0,1] ^{d}$, we define a \emph{Discrepancy
Function} associated to $ \mathcal P$ as follows. At any point $ x\in [0,1] ^{d}$, set
\begin{equation*}
D_N (x) = \sharp ( \mathcal P \cap [0,x))- N \lvert [0,x)\rvert \,.
\end{equation*}
Here, by $ [0,x)$ we mean the $ d$-dimensional rectangle with left-hand corner at the origin, and
right-hand corner at $ x\in [0,1] ^{d}$. Thus, if we write $ x= (x_1 ,\dotsc, x_d)$ we then have
\begin{equation*}
[0,x)= \prod _{j=1} ^{d} [0,x_j) \,.
\end{equation*}
At point $ x$ we are taking the difference between the actual and
the expected number of points in the rectangle.
Traditionally, the dependence of $ D_N$
on the selection of points $ \mathcal P$ is only indicated through the number of points in the collection
$ \mathcal P$. We mention only the main points of the subject here, and leave the (interesting)
history of the subject to references such as \cite{MR903025}.
The result of Klaus Roth \cite{MR0066435} gives a definitive average case lower bound on the Discrepancy function.
\begin{roth}
For any dimension $ d\ge 2$, we have the following estimate
\begin{equation} \label{e.roth}
\norm D_N. 2. \gtrsim (\log N) ^{(d-1)/2} \,.
\end{equation}
\end{roth}
The same lower bound holds in all $ L ^{p}$, $ 1<p<\infty $, as observed by Schmidt \cite{MR0319933}.
But, the $ L ^{\infty } $ infinity estimate is much harder. In dimension $ d=2$ the definitive
result was obtained by Schmidt again \cite{MR0491574}.
\begin{schmidt} \label{t.schmidt}
We have the estimates below, valid for all collections $\mathcal P\subset [0,1]^2 $:
\begin{align} \label{e.schmidt}
\norm D_N .\infty. {}\gtrsim{} \log N .
\end{align}
\end{schmidt}
The $ L ^{\infty }$ estimates are referred to as star-Discrepancy bounds. Extending and greatly
simplifying an intricate estimate of Jozef Beck \cite{MR1032337}, some of these authors have obtained a partial
extension of Schmidt's result to all dimensions $ d\ge3$.
\begin{theorem}[\cites{MR2409170,MR2414745}]\label{t.dge3} For dimensions $ d\ge 3$ there is an $ \eta = \eta (d)>0$ for which we have
the inequality
\begin{equation*}
\norm D_N. \infty . \gtrsim (\log N) ^{(d-1)/2 + \eta } \,.
\end{equation*}
That is, there is an $ \eta $ improvement in the Roth exponent.
\end{theorem}
As explained in these references, the analysis of the star-Discrepancy function is closely related to other
questions in probability theory, approximation theory, and harmonic analysis. We turn to one of these,
the simplest to state question, which is central to all of these issues. We begin with the definition of the Haar
functions.
In one dimension, the dyadic intervals of the real line $ \mathbb R $ are given by
\begin{equation*}
\mathcal D = \bigl\{ [j 2 ^{k}, (j+1) 2 ^{k}) \;:\; j,k\in \mathbb Z \bigr\}\,.
\end{equation*}
Any interval $ I$ is a union of its left and right halves, denoted by $ I _{\textup{left/right}}$,
which are also dyadic.
The \emph{Haar function $ h_I$ associated to $ I$}, or simply \emph{ Haar function} is
\begin{equation*}
h_I = -\mathbf 1_{I _{\textup{left}}}+ \mathbf 1_{I_{\textup{right}}} \,.
\end{equation*}
Note that for dyadic intervals $ J\subsetneq I$, the
Haar function $ h_J$ is completely supported on a set where $ h_I $ is
constant. This basic property leads to far-reaching implications that we will exploit in these
notes.
In higher dimensions $ d\ge 2$, we take the dyadic rectangles to be the tensor product of dyadic intervals in dimension $ d$:
\begin{equation*}
\mathcal D ^{d} = \bigl\{ R=R_1 \times \cdots \times R_d \;:\; R_1 ,\dotsc, R_d\in \mathcal D \bigr\}\,.
\end{equation*}
The \emph{Haar function} associated to $ R\in \mathcal D ^{d}$ is likewise defined as
\begin{equation} \label{e.haarFunctionDef}
h _{R } (x_1 ,\dotsc, x_d) = \prod _{j=1} ^{d} h _{R_j} (x_j)\,, \qquad R= R_1 \times \cdots \times R_d\,.
\end{equation}
While making these definitions on all of $ \mathbb R ^{d}$, we are mainly interested in local questions,
thus rectangles $ R \subset [0,1] ^{d}$ are always dyadic rectangles $ R\in \mathcal D ^{d}$.
Namely, we are mainly interested in the following conjectural \emph{reverse triangle inequality} for sums of Haar functions
on $ L ^{\infty }$:
\begin{sbi} For dimensions $ d\ge 3$, there is a constant $ C_d$ so that for all integers $ n\ge 1$, and
constants $ \{ a_R \;:\; \lvert R\rvert=2 ^{-n}\,,\ R\subset [0,1] ^{d} \}$, we have
\begin{equation} \label{e.sbi}
n ^{ (d-2)/2} \NORm \sum
_{\substack{\lvert R\rvert\ge 2 ^{-n} \\ R\subset [0,1] ^{d} }}
a_R \cdot h_R . \infty . \ge C_d 2 ^{-n} \sum
_{\substack{\lvert R\rvert=2 ^{-n} \\ R\subset [0,1] ^{d} }} \lvert a_R\rvert \,.
\end{equation}
\end{sbi}
We are stating this inequality in its strongest possible form. On the left, the sum goes over all
rectangles with volume \emph{at least} $ 2 ^{-n}$, while on the right, we only sum over rectangles with
volume \emph{equal to} $ 2 ^{-n}$. Given the primitive state of our knowledge of this conjecture,
we will not insist on this distinction below.
For the case of $ d=2$, \eqref{e.sbi} holds, and is a Theorem of Talagrand \cite{MR95k:60049}.
(Also see \cites{MR637361,MR0319933,MR96c:41052}).
The special case of the Small Ball Inequality when all the coefficients $ a_R$ are equal to either
$ -1$ or $ +1$ we refer to as the `Signed Small Ball Inequality.' Before stating this conjecture,
let us note that we have the following (trivial) variant of Roth's Theorem in the Signed case:
\begin{equation*}
\NORm \sum
_{\substack{\lvert R\rvert= 2 ^{-n} \\ R\subset [0,1] ^{d} }}
a_R \cdot h_R . \infty . \gtrsim n ^{ (d-1)/2} \,, \qquad a_R\in \{\pm 1\} \,.
\end{equation*}
The reader can verify this by noting that the left-hand side can be written as about $ n ^{d-1}$ orthogonal
functions, by partitioning the unit cube into homothetic copies of dyadic rectangles of a fixed volume.
The Signed Small Ball Inequality asserts a `square root of $ n$' gain over this average case estimate.
\begin{ssbi} For coefficients $ a_R \in \{\pm1\}$,
\begin{equation} \label{e.signed}
\NORm \sum
_{\substack{\lvert R\rvert=2 ^{-n} \\ R\subset [0,1] ^{d} }}
a_R \cdot h_R . \infty . \ge C'_d n ^{ d/2} \,.
\end{equation}
Here, $ C'_d$ is a constant that only depends upon dimension.
\end{ssbi}
We should emphasize that a random selection of the coefficients shows that the power of $ n$ on the right
is sharp. Unfortunately, random coefficients are very far from the `hard instances' of the inequality, and so
they do not point toward a proof of the conjecture.
The Signed Small Ball Conjecture should be easier, but even this special case eludes us.
To illustrate the difficulty in this question, note that in dimension $ d=2$, each point
$ x$ in the unit square is in $ n+1$ distinct dyadic rectangles of volume $ 2 ^{-n}$. Thus, it suffices
to find a \emph{single} point where all the Haar functions have the same sign. This we will do explicitly
in \S~\ref{s.talagrand} below.
Passing to three dimensions reveals a much harder problem. Each point $ x$ in the unit cube is in about $ n ^2 $
rectangles of volume $ 2 ^{-n}$,
but in general we can only achieve an $ n ^{3/2} $ supremum norm. Thus, the task is to find a single point $ x$
where the number of pluses exceeds the number of minuses by $ n ^{3/2}$. In relative terms this
represents only an $ n ^{-1/2}$ imbalance over an equal distribution of signs.
The main Theorem of this note is Theorem~\ref{t.3} below,
which gives the best exponent we are aware of in the Signed Small Ball Inequality. The method of proof
is also the simplest we are aware of. (In particular, it gives a better result than
the more complicated argument in \cite{MR2375609}). Perhaps this argument can inspire further progress on this
intriguing and challenging question.
The authors thank the anonymous referee, whose attention to detail has brought greater clarity to
our arguments.
\begin{dedication} One of us was a PhD student of Walter Philipp, the last of seven students. Walter
was very fond of the subject of this note, though the insights he would have into the recent developments
are lost to us. As a scientist, he held himself to high standards in all his areas of study.
As a friend, he was faithful, loyal, and took great pleasure in renewing contacts and friendship.
He is very much missed.
\end{dedication}
\section{The Two Dimensional Case} \label{s.talagrand}
This next definition is due to Schmidt, refining a definition of Roth.
Let $\vec r\in \mathbb N^d$ be a partition of $n$, thus $\vec r=(r_1 ,\dotsc, r_d)$,
where the $r_j$ are nonnegative integers and $\abs{ \vec r} \coloneqq \sum _{t=1} ^{d} r_t=n$.
Denote the set of all such vectors by $ \mathbb H _n$. (`$ \mathbb H $' for `hyperbolic.')
For vector $ \vec r $, let $ \mathcal R _{\vec r} $ be all dyadic rectangles
$ R$ such that for each coordinate $ 1\le t \le d$, $ \lvert R _t\rvert= 2 ^{-r_t} $.
\begin{definition}\label{d.rfunction}
We call a function $f$ an \emph{$\mathsf r$-function with parameter $ \vec r$ } if
\begin{equation}
\label{e.rfunction} f=\sum_{R\in \mathcal R _{\vec r}}
\varepsilon_R\, h_R\,,\qquad \varepsilon_R\in \{\pm1\}\,.
\end{equation}
We will use $f _{\vec r} $ to denote a generic $\mathsf r$-function.
A fact used without further comment is that $ f _{\vec r} ^2 \equiv 1$.
\end{definition}
Note that in the Signed Small Ball Inequality, one is seeking lower bounds on
sums $ \sum _{\lvert \vec r\rvert=n } f _{\vec r}$.
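Let us record why $ n ^{(d-1)/2}$ is the benchmark here. Distinct $ \mathsf r$-functions are orthogonal: for $ \vec r \neq \vec s$ there is a coordinate $ t$ with $ r_t \neq s_t$, and the corresponding one-dimensional Haar factors integrate to zero. Since $ f _{\vec r} ^2 \equiv 1$ on $ [0,1] ^{d}$,
\begin{equation*}
\NOrm \sum _{\lvert \vec r\rvert=n} f _{\vec r} . 2. ^{2}
= \sum _{\lvert \vec r\rvert=n} \int _{[0,1] ^{d}} f _{\vec r} ^{2} \; dx
= \sharp \mathbb H _{n} = \binom{n+d-1}{d-1} \simeq n ^{d-1}\,,
\end{equation*}
and the $ L ^{\infty }$ norm dominates the $ L ^{2}$ norm.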
\smallskip
There is a trivial proof of the two dimensional Small Ball Inequality.
\begin{proposition}\label{p.independent}
The random variables $ f_{(j,n-j)}$, $ 0\le j \le n$ are independent.
\end{proposition}
\begin{proof}
The sigma-field generated by the functions $ \{ f _{ (k,n-k)} \;:\; 0\le k < j\} $
is generated by the dyadic rectangles $ S = S_1 \times S_2$ with $ \lvert S_1\rvert = 2 ^{-j} $
and $ \lvert S_2\rvert= 2 ^{-n} $.
On each line segment $ S_1 \times \{x_2\}$, $ f _{(j, n-j)}$ takes the values $ \pm 1$
in equal measure, which finishes the proof.
\end{proof}
We then have
\begin{proposition}\label{p.2^n} In the case of two dimensions,
\begin{equation*}
\mathbb P \Bigl(\sum _{k=0} ^{n} f _{ (k,n-k)} = {n+1} \Bigr)= 2 ^{-n-1}
\end{equation*}
\end{proposition}
\begin{proof}
Note that
\begin{equation*}
\mathbb P \Bigl(\sum _{k=0} ^{n} f _{ (k,n-k)} = {n+1} \Bigr)=
\mathbb P \bigl( f _{ (k,n-k)}=1\ \forall 0\le k \le n \bigr)= 2 ^{-n-1}\,.
\end{equation*}
\end{proof}
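In particular, since the event in Proposition~\ref{p.2^n} has positive probability, we conclude
\begin{equation*}
\NOrm \sum _{k=0} ^{n} f _{ (k,n-k)} . \infty . \ge n+1 \,,
\end{equation*}
which is the Signed Small Ball Inequality \eqref{e.signed} in dimension $ d=2$, with exponent $ d/2 = 1$.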
It is our goal to give a caricature of this argument in three dimensions; see \S\ref{s.heuristics}
for a discussion.
\section{Elementary Lemmas}
We recall some elementary Lemmas that we will need in our three dimensional proof.
\begin{pz}
Suppose that $ Z$ is a positive random variable with $ \mathbb E Z = \mu _1 $, $ \mathbb E Z ^{2}
= \mu ^{2} _{2}$. Then,
\begin{equation} \label{e.pz}
\mathbb P ( Z \ge \mu _1/2 )\ge \tfrac 14 \frac {\mu _1 ^2 } { \mu _2 ^{2}} \,.
\end{equation}
\end{pz}
\begin{proof}
\begin{align*}
\mu _1 = \mathbb E Z
&= \mathbb E Z \mathbf 1_{ Z < \mu _1/2} + \mathbb E Z \mathbf 1_{Z \ge \mu _1/2}
\\
& \le \mu _1/2 + \mu _2 \mathbb P ( Z\ge \mu _1/2 ) ^{1/2} \,,
\end{align*}
using the Cauchy--Schwarz inequality in the last step. Solving for $ \mathbb P ( Z\ge \mu _1/2 )$ gives
$ \mathbb P ( Z\ge \mu _1/2 ) \ge (\mu _1/ 2 \mu _2 ) ^{2}$, which is \eqref{e.pz}.
\end{proof}
\begin{pz2}
For all $ \rho _1>1$ there is a $ \rho _2>0$ so that for all random variables $Z $ which
satisfy
\begin{equation} \label{e.pz2}
\mathbb E Z=0\,, \qquad \norm Z.2. \le \norm Z . 4. \le \rho _1 \norm Z.2.
\end{equation}
we have the inequality $ \mathbb P (Z > \rho _2 \norm Z.2.)> \rho _2 $.
\end{pz2}
\begin{proof}
Let $ Z_+ \coloneqq Z \mathbf 1_{ Z>0} $ and $ Z_- \coloneqq - Z \mathbf 1_{Z<0}$, so that $ Z= Z_+-Z_- $.
Note that $ \mathbb E Z=0$ forces $ \mathbb E Z_+=\mathbb E Z_-$. And,
\begin{align*}
\sigma _2 ^2 &\coloneqq \mathbb E Z ^2 = \mathbb E Z_+ ^2 + \mathbb E Z_- ^2 \,,
\\
\sigma _4 ^{4} & \coloneqq \mathbb E Z ^{4}= \mathbb E Z_+ ^4 + \mathbb E Z_- ^4\,.
\end{align*}
Suppose that the conclusion is not true. Namely $ \mathbb P (Z > \rho _2 \sigma _2) < \rho _2 $
for a very small $ \rho _2$. It follows that
\begin{align*}
\mathbb E Z_+ &\le
\mathbb E Z_+ \mathbf 1_{ Z_+ \leq \rho _2 \sigma _2} + \mathbb E Z_+ \mathbf 1_{ Z_+ > \rho _2 \sigma _2}
\\ & \le
\rho _2 \sigma _2 + \mathbb P (Z > \rho _2 \sigma _2) ^{1/2} \sigma _2 \le
2 \rho _2 ^{1/2} \sigma _2\,,
\end{align*}
for $ \rho _2<1$. Hence $ \mathbb E Z_- = \mathbb E Z_+ \le 2 \rho _2 ^{1/2} \sigma _2$.
It is this condition that we will contradict below.
\smallskip
We also have
\begin{align*}
\mathbb E Z_+ ^2 & \le
\mathbb E Z_+ ^2 \mathbf 1_ { \{ Z_+ \leq \rho _2 \sigma _2\}} + \mathbb E Z_+ ^2 \mathbf 1_{ \{ Z_+ > \rho _2 \sigma _2\}}
\\ & \le
\rho _2 ^2 \sigma _2 ^2 + \rho _2 ^{1/2} \sigma _4 ^2
\\
& \le 2 \rho _2 ^{1/2} \rho _1 ^2 \sigma _2 ^2 \,.
\end{align*}
So for $ \rho _2 < (4 \rho _1) ^{-4}$, we have $ \mathbb E Z_+ ^2 \le \tfrac 12 \sigma _2 ^2 $.
It follows that we have $ \mathbb E Z_- ^2 \ge \tfrac 12 \sigma _2 ^2 $, and $ \mathbb E Z_- ^{4} \le
\sigma _4 ^{4} \le \rho _1 ^{4} \sigma _2 ^{4}$. So, applying \eqref{e.pz} to $ Z_- ^{2}$, we have
\begin{equation*}
\mathbb P (Z_-> \rho _3 \sigma _2) > \rho _3
\end{equation*}
where $ \rho _3$ is only a function of $ \rho _1$. But this contradicts $ \mathbb E Z_-\le 2 \rho _2 ^{1/2} \sigma _2$,
for small $ \rho _2$, so finishes our proof.
\end{proof}
The Paley-Zygmund inequalities require a higher moment, and in application we find it convenient to use
the Littlewood-Paley inequalities to control this higher moment.
Let $ \mathcal F_0, \mathcal F _1 ,\dotsc, \mathcal F _{T}$ be
a sequence of increasing sigma-fields generated by dyadic intervals,
and let $ d _{t} $, $ 1\le t\le T$ be a martingale difference sequence,
namely $ \mathbb E (d_t \;:\; \mathcal F _{t-1})=0$ for all $ t=1,2 ,\dotsc, T $. Set $ f= \sum _{t=1} ^{T} d_t $.
The martingale square function of $ f$ is $ \operatorname S (f) ^2 \coloneqq \sum_{t=1} ^{T} d_t ^2 $.
The instance of the Littlewood-Paley inequalities we need are:
\begin{lemma}\label{l.lp} With the notation above, suppose that we have in addition
that the distribution of $ d_t $ is conditionally symmetric given $ \mathcal F_{t-1}$.
By this we mean that on each atom $ A$ of $ \mathcal F _{t-1}$,
the distribution of $ d_t \mathbf 1_{A}$ is equal to that
of $ - d_t \mathbf 1_{A}$. Then, we have
\begin{equation} \label{e.LP}
\norm f . 4. \simeq \norm \operatorname S (f). 4. \,.
\end{equation}
\end{lemma}
\begin{proof}
The case of the Littlewood-Paley for even integers can be proved by expansion of the integral, an argument
that goes back many decades, and our assumption of being conditionally symmetric is added just to
simplify this proof. Thus,
\begin{align*}
\norm f. 4. ^{4} = \sum _{ 1\le t_1, t_2, t_3, t_4\le T} \mathbb E \prod _{u=1} ^{4} d _{t_u} \,.
\end{align*}
We claim that unless the integers $ 1\le t_1, t_2, t_3, t_4\le T$ occur in pairs of
equal integers, the expectation on the right above is zero. This claim shows that
\begin{equation*}
\norm f. 4. ^{4} \le \tfrac {4!}2 \sum _{ 1\le t_1, t_2\le T} \mathbb E \, d _{t_1} ^2 \cdot d _{t_2} ^2 \,,
\end{equation*}
since each surviving quadruple contributes a term $ \mathbb E \, d _{t_1} ^2 d _{t_2} ^2 $, with multiplicity at most $ 4!/2$;
the diagonal quadruples $ t_1=t_2$, $ t_3=t_4$ alone give the matching lower bound. That is,
\begin{equation*}
\norm \operatorname S (f). 4. ^{4} \le
\norm f. 4. ^{4} \le \tfrac {4!} 2 \norm \operatorname S (f). 4. ^{4} \,,
\end{equation*}
which proves the Lemma.
\smallskip
Let us suppose $ t_1\le t_2 \le t_3 \le t_4$. If we have $ t_3$ strictly less than $ t_4$, then
\begin{equation*}
\mathbb E \prod _{u=1} ^{4} d _{t_u} =
\mathbb E \prod _{u=1} ^{3} d _{t_u} \cdot \mathbb E (d _{t_4} \;:\; \mathcal F _{t_3})=0\,.
\end{equation*}
If we have $ t_1 < t_2=t_3=t_4$, then by conditional symmetry, $ \mathbb E (d _{t_2} ^{3} \;:\; \mathcal F_{t_1})=0$,
and so we have
\begin{equation*}
\mathbb E \prod _{u=1} ^{4} d _{t_u} =
\mathbb E d _{t_1} \cdot \mathbb E (d^3 _{t_2} \;:\; \mathcal F _{t_1})=0\,.
\end{equation*}
If we have $ t_1<t_2<t_3=t_4$, the conditional symmetry again implies that
$ \mathbb E (d _{t_2} \cdot d _{t_3}^{2} \;:\; \mathcal F_{t_1})=0$, so that
\begin{equation*}
\mathbb E \prod _{u=1} ^{4} d _{t_u} =
\mathbb E d _{t_1} \cdot \mathbb E ( d _{t_2} \cdot d _{t_3} ^2 \;:\; \mathcal F _{t_1})=0\,.
\end{equation*}
Thus, the claim is proved.
\end{proof}
We finish this section with an elementary, slightly technical, Lemma.
\begin{lemma}\label{l.condIndep} Let $ \mathcal F_0, \mathcal F _1 ,\dotsc, \mathcal F_{q} $ be
a sequence of increasing sigma-fields, and let $ A_1 ,\dotsc, A_q$ be events with $ A_t \in \mathcal F_t $.
Assume that for some $ 0<\gamma <1$,
\begin{equation}\label{e.E>}
\mathbb E \bigl( \mathbf 1_{A_t} \;:\; \mathcal F_{t-1} \bigr)\ge \gamma \,, \qquad 1\le t \le q
\end{equation}
We then have that
\begin{equation}\label{e.F>}
\mathbb P \Bigl( \bigcap _{t=1} ^{q} A_t \Bigr)\ge \gamma ^{q} \,.
\end{equation}
More generally, assume that
\begin{equation}\label{e.E>>}
\mathbb P \Bigl( \bigcup _{t=1} ^{q} \bigl\{ \mathbb E \bigl( \mathbf 1_{A_t} \;:\; \mathcal F_{t-1} \bigr)\le \gamma \bigr\} \Bigr)
\le \tfrac 12 \cdot \gamma ^{q}\,.
\end{equation}
Then,
\begin{equation}\label{e.F>>}
\mathbb P \Bigl( \bigcap _{t=1} ^{q} A_t \Bigr)\ge \tfrac 12 \cdot \gamma ^{q} \,.
\end{equation}
\end{lemma}
\begin{proof}
To prove \eqref{e.F>}, note that by assumption \eqref{e.E>}, and backwards induction we have
\begin{align*}
\mathbb P \Bigl( \bigcap _{t=1} ^{q} A_t \Bigr) & =
\mathbb E \prod _{t=1} ^{q} \mathbf 1_{A_t}
\\
& = \mathbb E \prod _{t=1} ^{q-1} \mathbf 1_{A_t} \times \mathbb E \bigl(\mathbf 1_{A_q} \;:\; \mathcal F_{q-1} \bigr)
\\
& \ge \gamma \mathbb E \prod _{t=1} ^{q-1} \mathbf 1_{A_t}
\\
& \;\; \vdots
\\
&\ge \gamma ^{q} \,.
\end{align*}
\medskip
To prove \eqref{e.F>>}, let us consider an alternate sequence of events. Define
\begin{equation*}
\beta _t \coloneqq
\bigl\{ \mathbb E ( \mathbf 1_{A_t} \;:\; \mathcal F_{t-1} )\le \gamma \bigr\} \,.
\end{equation*}
These are the `bad' events. Now define $ \widetilde A_t \coloneqq A_t \cup \beta _t $.
By construction, the sets $ \widetilde A_t$ satisfy \eqref{e.E>}. Hence, we have by \eqref{e.F>},
\begin{equation*}
\mathbb P \Bigl( \bigcap _{t=1} ^{q} \widetilde A_t \Bigr)\ge \gamma ^{q} \,.
\end{equation*}
But, now note that by \eqref{e.E>>},
\begin{align*}
\mathbb P \Bigl( \bigcap _{t=1} ^{q} A_t \Bigr)
& \ge \mathbb P \Bigl( \bigcap _{t=1} ^{q} \widetilde A_t \Bigr) - \mathbb P \Bigl(\bigcup _{t=1} ^{q} \beta _t \Bigr)
\\
& \ge \gamma ^{q}- \tfrac 12 \cdot \gamma ^{q} = \tfrac 12 \cdot \gamma ^{q} \,.
\end{align*}
\end{proof}
\section{Conditional Expectation Approach in Three Dimensions}
This is the main result of this note.
\begin{theorem}\label{t.3} For $ \lvert a_R\rvert=1 $ for all $ R$, we have the estimate
\begin{equation*}
\NOrm \sum _{\substack{\lvert R\rvert= 2 ^{-n}\\ \lvert R_1\rvert\ge 2 ^{-n/2} } } a_R \, h_R . L ^{\infty }. \gtrsim n ^{9/8}\,.
\end{equation*}
We restrict the sum to those dyadic rectangles whose first side has the lower bound $ \lvert R_1\rvert\ge 2 ^{-n/2} $.
\end{theorem}
Heuristics for our proof are given in the next section.
The restriction on the first side lengths of the rectangles is natural from the point of view of our proof,
in which the first coordinate plays a distinguished role. Namely, if we hold the first side length fixed,
we want the corresponding sum over $ R$ to be suitably generic.
Let $ 1\ll q \ll n$ be integers. The integer $ q $ will be taken to be $ q \simeq n ^{1/4}$. Our `gain over
average case' estimate will be $ \sqrt q \simeq n ^{1/8}$. While this is a long way from $ n ^{1/2}$, it is
much better than the explicit gain of $ 1/24$ in \cite{MR2375609}.
\smallskip
We begin the proof.
Let $ \mathcal F _t$ be the sigma-field generated by dyadic intervals $ I\subset [0,1]$ with
$ \lvert I\rvert = 2 ^{- \lfloor t n/q\rfloor} $, for $ 1\le t \le \tfrac 12 q $.
Let $ \mathbb I _{t} \coloneqq \{ \vec r \;:\; (t-1)n/q\le r_1 < tn/q\}$. Note that $ \sharp \mathbb I _{t} \simeq n^2 / q$: there are about $ n/q$ choices of $ r_1$, and for each of these about $ n$ choices of the pair $ (r_2,r_3)$ with $ r_2+r_3=n-r_1$.
Let $ f _{\vec r}$ be the $ \mathsf r$-functions specified by the choice of signs in Theorem~\ref{t.3}.
Here is a basic observation.
\begin{proposition}\label{p.condExp} Let $ I\in \mathcal F _t$.
The distribution of $ \{ f _{\vec r} \;:\; \vec r\in \mathbb I _t\}$ restricted to the set
$ I \times [0,1] ^2 $ with normalized Lebesgue measure is that of
\begin{equation*}
\{ f _{\vec s} \;:\; \lvert \vec s\rvert=n-\lfloor t n/q\rfloor \,,\, 0\le s_1< n/q\} \,,
\end{equation*}
where the $ f _{\vec s}$ are some $ \mathsf r$-functions.
The exact specification of this collection depends upon the atom in $ \mathcal F_t$.
\end{proposition}
\begin{proof}
The atoms $ I$ of $ \mathcal F_t$ are dyadic intervals of length $ 2 ^{-\lfloor t n/q\rfloor}$.
For $\vec r\in \mathbb I _{t}$, $ f _{\vec r}$ restricted to $ I \times [0,1] ^2 $, with normalized
measure, is an $ \mathsf r$-function with index
\begin{equation*}
(r_1- \lfloor t n/q\rfloor,\, r_2, r_3) \,.
\end{equation*}
The statement holds jointly in $ \vec r\in \mathbb I _t$, which finishes the proof.
\end{proof}
Define the sums over `blocks' of the $ f _{\vec r}$ as
\begin{align} \label{e.B}
B_t &\coloneqq \sum _{\vec r \in \mathbb I _t } f _{\vec r} \,,
\\ \label{e.cap}
{\textstyle\bigsqcap}_t & \coloneqq \sum _{\substack{\vec r \neq \vec s\in \mathbb I _t \\ r_1=s_1 }} f _{\vec r} \cdot f _{\vec s} \,.
\end{align}
The sums ${\textstyle\bigsqcap} _t$ play a distinguished role in our analysis, as revealed by the
basic computation of a square function in \eqref{e.condSqfn} and the fundamental Lemma~\ref{l.exp}.
Let us set $ \sigma _t ^2 = \norm B_t. 2 . ^2 = \sharp \mathbb I _t \simeq n ^2 /q $, for $ 0\le t \le q/2$, the middle equality holding by the orthogonality of distinct $ \mathsf r$-functions.
We want to show that for $ q$ as big as $ c n ^{1/4}$, we have
\begin{equation}\label{e.>}
\mathbb P \Bigl( \sum _{t=1} ^{q/2} B_t \gtrsim n \sqrt q \Bigr) > 0
\end{equation}
In fact, we will show
\begin{equation*}
\mathbb P \Bigl(\bigcap _{t=1} ^{q/2} \bigl\{B_t \gtrsim n/\sqrt q\bigr\} \Bigr)> 0\,,
\end{equation*}
from which \eqref{e.>} follows immediately.
Note that the event $ \bigl\{B_t \gtrsim n/\sqrt q\bigr\}$ simply requires that $ B_t$ be of typical size,
and positive, that is this event will have a large probability.
Clearly, we should try to show that these events are in some sense independent, in which
case the lower bound in \eqref{e.>} will be of the form $ \operatorname e ^{-C q}$, for some $ C>0$.
Exact independence, as we had in the two-dimensional case, is too much to hope for. Instead, we will
aim for some conditional independence, as expressed in Lemma~\ref{l.condIndep}.
\smallskip
There is a crucial relationship between $ B_t $ and $ {\textstyle\bigsqcap}_t$, which is expressed
through the martingale square function of $ B_t$, computed in the first coordinate.
Namely, define
\begin{equation}\label{e.SqDef}
\operatorname S (B_t) ^2 \coloneqq
\sum _{j \in J_t} \ABs{\sum _{\vec r \;:\; r_1 =j} f _{\vec r}} ^2
\end{equation}
where $ J_t = \{ s \in \mathbb N \;:\; (t-1)n/q\le s < tn/q\}$.
\begin{proposition}\label{p.sqfn} We have
\begin{align} \label{e.sqfn}
\operatorname S (B_t) ^2 &= \sigma ^2 _t + {\textstyle\bigsqcap}_t \,,
\\ \label{e.condSqfn}
\operatorname S (B_t \;:\; \mathcal F_t) ^2 &\coloneqq \mathbb E \bigl( \operatorname S (B_t) ^{2} \;:\; \mathcal F_t \bigr) =
\sigma ^2 _t + \mathbb E ({\textstyle\bigsqcap}_t \;:\; \mathcal F_t) \,.
\end{align}
\end{proposition}
By construction, we have $ {}^{\sharp}\, \mathbb I _t \simeq n ^2 /q$, for $ 0\le t<\tfrac 12 \, q$.
\begin{proof}
In \eqref{e.SqDef}, one expands the square on the right hand side. Notice that this shows that
\begin{equation*}
\operatorname S (B_t) ^2 = \sum _{\substack{\lvert \vec r\rvert= \lvert \vec s\rvert=n \\ r_1=s_1 \in J_t }}
f _{\vec r} \cdot f _{\vec s} \,.
\end{equation*}
We can have $ \vec r=\vec s$ for $ ^\sharp\, \mathbb I _t$ choices of $ \vec r$. Otherwise, we have a
term that contributes to $ {\textstyle\bigsqcap}_t $. The conditional expectation conclusion follows from
\eqref{e.sqfn}.
\end{proof}
The next fact, the critical observation in \cites{MR2414745,MR2409170,MR2375609} concerning coincidences,
assures us that on the right-hand side of \eqref{e.sqfn} the first term $\sigma ^2 _t \simeq n ^2 /q $
is typically much larger than the second term $ {\textstyle\bigsqcap}_t$. See
\cite{MR2414745}*{4.1, and the discussion afterwards}.
\begin{lemma}\label{l.exp} We have the uniform estimate
\begin{equation*}
\norm {\textstyle\bigsqcap} _t . \operatorname {exp} (L ^{2/3}). \lesssim n ^{3/2} / \sqrt q \,.
\end{equation*}
Here, we are using standard notation for an exponential Orlicz space.
\end{lemma}
\begin{remark}\label{r.higher} A variant of Lemma~\ref{l.exp} holds in higher dimensions, which permits
an extension of Theorem~\ref{t.3} to higher dimensions. We return to this in \S\ref{s.heuristics}.
\end{remark}
Let us quantify the relationship between these two observations and our task of proving \eqref{e.>}.
\begin{proposition}\label{p.tau} There is a universal constant $ \tau >0$ so that defining the event
\begin{equation} \label{e.Gamma}
\Gamma _t \coloneqq \Bigl\{ \mathbb E ({\textstyle\bigsqcap}_t ^2 \;:\; \mathcal F_t)^{1/2}
< \tau n ^2 /q
\Bigr\}
\end{equation}
we have the estimate
\begin{equation} \label{e.B>}
\mathbb P ( B_t > \tau \cdot n /\sqrt q \;:\; \Gamma _t ) > \tau \mathbf 1_{\Gamma _t} \,.
\end{equation}
\end{proposition}
The point of this estimate is that the events $ \Gamma _t$ will be overwhelmingly likely for $ q \ll n$.
\begin{proof}
This is a consequence of the Paley-Zygmund Inequalities, Proposition~\ref{p.condExp},
Littlewood-Paley inequalities, and \eqref{e.condSqfn}.
Namely, by Proposition~\ref{p.condExp}, we have $ \mathbb E (B_t \;:\; \mathcal F_t)=0$,
and the conditional distribution of $ B_t$ given $ \mathcal F_t$ is symmetric.
By \eqref{e.condSqfn}, we have
\begin{align*}
\mathbb E (B_t ^2 \;:\; \mathcal F_t)= \operatorname S (B_t \;:\; \mathcal F_t) ^2
=\sigma _{t} ^2 + \mathbb E ({\textstyle\bigsqcap}_t \;:\; \mathcal F_t) \,.
\end{align*}
We apply the Littlewood-Paley inequalities \eqref{e.LP} to see that
\begin{align*}
\mathbb E (B_t ^ 4\;:\; \mathcal F_t)
& \lesssim \mathbb E ( \operatorname S (B_t) ^{4} \;:\; \mathcal F_t )
\\
& = \sigma _t ^{4} + 2\sigma _t ^2 \mathbb E ({\textstyle\bigsqcap}_t \;:\; \mathcal F_t)
+ \mathbb E ({\textstyle\bigsqcap}_t ^2 \;:\; \mathcal F_t) \,.
\end{align*}
The event $ \Gamma _t $ gives an upper bound on the terms involving $ {\textstyle\bigsqcap}_t $ above.
This permits us to estimate, as $ \Gamma _t \in \mathcal F_t$,
\begin{equation*}
\bigl\lvert
\mathbb E (B_t ^2 \;:\; \Gamma _t ) ^{1/2} - \sigma _t \mathbf 1_{\Gamma _t}
\bigr\rvert \le \tau n/\sqrt q \,,
\end{equation*}
but $ \sigma _{t} \simeq n/\sqrt q$, so we have $ \mathbb E (B_t ^2 \;:\; \Gamma _t ) ^{1/2} \simeq n/\sqrt q $.
Similarly,
\begin{align*}
\mathbb E (B_t ^4 \;:\; \Gamma _t ) ^{1/4} & \lesssim
\sigma _t + \sigma _t ^{1/2}
\lvert \mathbb E ({\textstyle\bigsqcap}_t \;:\; \mathcal F_t) \rvert ^{1/4}
+ \mathbb E ({\textstyle\bigsqcap}_t ^2 \;:\; \mathcal F_t) ^{1/4}
\\
& \lesssim (1+ \tau )\sigma _t \,.
\end{align*}
Hence, we can apply the Paley-Zygmund inequality \eqref{e.pz2} to conclude the Proposition.
\end{proof}
By way of explaining the next steps, let us observe the following. If we have
\begin{equation} \label{e.x}
\mathbb E (\mathbf 1 _{\Gamma _t} \;:\; \mathcal F_t )\ge \tau \qquad \textup{a.s. } (x_1)\,,
\qquad 1\le t\le q/2 \,,
\end{equation}
then \eqref{e.B>} holds, namely $ \mathbb P ( B_t ( \cdot , x_2, x_3) > \tau \cdot n /\sqrt q \;:\; \Gamma _t ) > \tau $
almost surely. Applying
Lemma~\ref{l.condIndep}, and in particular \eqref{e.F>}, we then have
\begin{equation*}
\mathbb P _{x_1} \Bigl( \bigcap _{t=1} ^{q/2} \{ B_t ( \cdot , x_2, x_3)> \tau n/\sqrt q\} \Bigr)
\ge \tau ^{q/2} \,.
\end{equation*}
Of course there is no reason that such a pair $ (x_2,x_3)$ exists. Still,
the second half of Lemma~\ref{l.condIndep} will apply
if we can show that
$ x_2, x_3$ can be chosen so that \eqref{e.x} holds
except on a set, in the $ x_1$ variable, of sufficiently small probability.
\smallskip
Keeping \eqref{e.E>>} in mind, let us identify an exceptional set. Use the sets $ \Gamma _t $ as given
in \eqref{e.Gamma} to define
\begin{equation}\label{e.E}
E \coloneqq \Biggl\{ (x_2, x_3) \;:\;
\mathbb P _{x_1} \Biggl[
\bigcup _{t=1} ^{ q/2 } \Gamma _t ^{c}
\Biggr] > \operatorname {exp}\bigl(- c_1 (n/ q) ^{1/3}\bigr)
\Biggr\}
\end{equation}
Here, $ c_1>0$ will be a sufficiently small constant, independent of $ n$.
Let us give an upper bound on this set.
\begin{align}
\mathbb P _{x_2,x_3} (E) & \le \operatorname {exp}\bigl( c_1 (n/ q) ^{1/3}\bigr)
\cdot
\mathbb P _{x_1,x_2,x_3} \Bigl( \bigcup _{t=1} ^{ q/2 } \Gamma _t ^{c}\Bigr)
\\&
\le
\operatorname {exp}\bigl( c_1 (n/ q) ^{1/3}\bigr)
\sum _{t=1} ^{ q/2 } \mathbb P _{x_1,x_2,x_3} (\Gamma _t ^{c})
\\ &
\lesssim q \operatorname {exp}\bigl( c_1 (n/ q) ^{1/3}\bigr) \cdot
\operatorname {exp} \Biggl( - \Bigl[
\tau (n ^2 /q)
\norm \mathbb E \bigl( {\textstyle\bigsqcap}_t ^2
\;:\; \mathcal F_t\bigr) ^{1/2} . \operatorname {exp}(L ^{2/3}) . ^{-1}
\Bigr] ^{2/3} \Biggr)
\\ \label{e.c2}
& \le
q \operatorname {exp} \bigl( (c_1-c_2 \tau ^{2/3}) \cdot (n/ q) ^{1/3} \bigr)
\end{align}
Here, we have used the Chebyshev inequality and, more importantly, the convexity of
conditional expectation and $ L ^2 $-norms to estimate
\begin{equation*}
\norm \mathbb E \bigl( {\textstyle\bigsqcap}_t ^2
\;:\; \mathcal F_t\bigr) ^{1/2} . \operatorname {exp}(L ^{2/3}) .
\lesssim n ^{3/2}/\sqrt q \,,
\end{equation*}
by Lemma~\ref{l.exp}. The implied constant is absolute, and determines the constant $ c_2$
in \eqref{e.c2}. For an appropriate absolute choice of $ c_1$ and a constant $ \tau '>0$, we have
\begin{equation}\label{e.PE}
\mathbb P _{x_2,x_3} (E) \lesssim \operatorname {exp}(-\tau ' (n/q) ^{1/3} ) \,.
\end{equation}
We only need $ \mathbb P_{x_2,x_3} (E)< \tfrac 12$, but an exponential estimate of this
type is to be expected.
Our last essential estimate is
\begin{lemma}\label{l.EB} For $ 0<\kappa <1$ sufficiently small,
$ q \le \kappa n ^{1/4}$, and $ (x_2,x_3)\not\in E$, we have
\begin{equation*}
\mathbb P _{x_1} \Bigl( \bigcap _{t=1} ^{q/2} \{ B_t ( \cdot , x_2, x_3)> \tau n/\sqrt q\} \Bigr) \gtrsim \tau ^{q} \,.
\end{equation*}
\end{lemma}
Assuming this Lemma, we can select
$ (x_2,x_3)\not\in E$. Thus,
we see that there is some $ (x_1,x_2,x_3) $ so that for all
$ 1\le t\le q/2$ we have $ B_t(x_1,x_2,x_3) > \tau n/\sqrt q$, whence
\begin{equation*}
\sum _{t=1} ^{q/2} B_t(x_1,x_2,x_3) > \tfrac \tau2 \cdot n \sqrt q \,.
\end{equation*}
That is, \eqref{e.>} holds.
Moreover, taking $ q \simeq \kappa n ^{1/4}$, the last expression is $ \gtrsim n ^{9/8}$.
\begin{proof}
If $ (x_2, x_3) \not \in E$, bring together the
definition of $ E$ in \eqref{e.E}, Proposition~\ref{p.tau}, and Lemma~\ref{l.condIndep}.
We see that \eqref{e.F>>} holds (with $ \gamma = \tau $, and the $ q$ in \eqref{e.F>>} equal to the current
$ q/2$) provided
\begin{equation*}
\tfrac 12 \cdot \tau ^{q/2}> \operatorname {exp}\bigl(- c_1 (n/ q) ^{1/3}\bigr)\,.
\end{equation*}
But this is true by inspection, for $ q\le \kappa n ^{1/4}$.
\end{proof}
\section{Heuristics} \label{s.heuristics}
In two dimensions, Proposition~\ref{p.2^n} clearly reveals an underlying exponential-square distribution
governing the Small Ball Inequality. The average case estimate is $ n ^{1/2}$, and the set on which
the sum is about $ n$ (a square root gain over the average case) is exponential in $ n$.
Let us take it for granted that the same phenomenon should hold in three dimensions. Namely, in three dimensions
the average case estimate for a signed small ball sum is $ n $, and the
event that the sum exceeds $ n ^{3/2}$ (a square root gain over the average case) should also be exponential in $ n$.
How could this be proved? Let us write
\begin{align*}
H & = \sum _{\substack{\lvert R\rvert= 2 ^{-n}\\ \lvert R_1\rvert \ge 2 ^{-n/2} } } a_R h_R
= \sum _{\substack{\lvert \vec r\rvert= n\\ r_1\le n/2 } } f _{\vec r}
= \sum _{j=0} ^{n/2} \beta _j \,,
\\
\beta _j & \coloneqq \sum _{\substack{\lvert \vec r\rvert=n \\ r_1=j }} f _{\vec r} \,.
\end{align*}
Here we have imposed the same restriction on the first coordinate as we did in Theorem~\ref{t.3}. With this
restriction, note that each $ \beta _j$ is a two-dimensional sum, hence by Proposition~\ref{p.independent},
a sum of bounded independent random variables. It follows that we have by the usual Central Limit Theorem,
\begin{equation*}
\mathbb P (\beta _j > c \sqrt n) \ge \tfrac 14 \,,
\end{equation*}
for a fixed constant $ c$. If one could argue for some sort of independence of the events $ \{\beta _j>c \sqrt n\}$
one could then write
\begin{align*}
\mathbb P (H>c n ^{3/2}) \ge \mathbb P \Bigl( \bigcap _{j=0} ^{n/2} \{\beta _j>c \sqrt n\} \Bigr)
\gtrsim \epsilon ^{n} \,,
\end{align*}
for some $ \epsilon >0$. This matches the `exponential in $ n$' heuristic.
We cannot implement this proof for the $ \beta _j$, but can in the more restrictive
`block sums' used above.
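The central limit step in this heuristic is easy to check numerically. Below is a minimal simulation (in Python), under the simplifying assumption that each $ \beta _j$ behaves like a sum of $ n$ independent Rademacher signs; for $ c = 1/2$ the empirical probability is about $ 0.31$, comfortably above the $ \tfrac 14$ used above.
\begin{verbatim}
# Minimal numerical check of the CLT step in the heuristic.
# Simplifying assumption: beta_j ~ sum of n independent +/-1 signs,
# so beta_j / sqrt(n) is approximately standard normal.
import numpy as np

rng = np.random.default_rng(0)
n, trials, c = 1024, 100_000, 0.5

signs = rng.choice([-1, 1], size=(trials, n))
beta = signs.sum(axis=1)               # one sample of beta_j per trial
prob = (beta > c * np.sqrt(n)).mean()  # estimate P(beta_j > c sqrt(n))

print(prob)  # ~0.31 for c = 1/2, above the 1/4 used in the heuristic
\end{verbatim}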
\bigskip
We comment on extensions of Theorem~\ref{t.3} to higher dimensions. Namely, the methods of this paper will
prove
\begin{theorem}\label{t.4} For $ \lvert a_R\rvert=1 $ for all $ R$, we have the following estimate in
dimensions $ d\ge 4$:
\begin{equation*}
\NOrm \sum _{\substack{\lvert R\rvert= 2 ^{-n}\\ \lvert R_1\rvert\ge 2 ^{-n/2} } } a_R \, h_R . L ^{\infty }.
\gtrsim n ^{ (d-1)/2+ 1/4d}\,.
\end{equation*}
We restrict the sum to those dyadic rectangles whose first side has the lower bound $ \lvert R_1\rvert\ge 2 ^{-n/2} $.
\end{theorem}
This estimate, when specialized to $ d=3$ is worse than that of Theorem~\ref{t.3} due to the fact that the
full extension of the critical estimate Lemma~\ref{l.exp} is not known to hold in dimensions $ d\ge 4$.
Instead, the following weaker estimate is known. Fix the coefficients $ a_R \in \{\pm1\}$ as in Theorem~\ref{t.4},
and let $ f _{\vec r}$ be the corresponding $ \mathsf r$-functions. For $ 1 \ll q \ll n$, define $ \mathbb I _t$
as above, namely $ \{\vec r \;:\; \lvert \vec r\rvert=n\,,\ r_1\in J _t \}$. Define
$ {\textstyle\bigsqcap}_t$ as in \eqref{e.cap}. The analog of Lemma~\ref{l.exp} in dimensions $ d\ge 4$ is
\begin{lemma}\label{l.exp4} In dimensions $ d\ge 4$ we have the estimate
\begin{equation*}
\norm {\textstyle\bigsqcap}_t . \operatorname {exp}(L ^{2/(2d-1)}). \lesssim n ^{(2d-3)/2} /\sqrt q \,.
\end{equation*}
\end{lemma}
See \cite{MR2409170}*{Section 5, especially (5.3)}, which proves the estimate above for the case of $ q=1$.
The details of the proof of Theorem~\ref{t.4} are omitted, since the Theorem is at this moment only
a curiosity.
It would be quite interesting to extend Theorem~\ref{t.4} to the case where, say,
one-half of the coefficients are permitted to be zero. This result would have implications for
Kolmogorov entropy of certain Sobolev spaces; as well this case is much more indicative of the
case of general coefficients $ a_R$. As far as the authors are aware, there is no straightforward
extension of this argument to the case of even a small percentage of the $ a_R$ being zero.
\begin{bibsection}
\begin{biblist}
\bib{MR1032337}{article}{
author={Beck, J{\'o}zsef},
title={A two-dimensional van Aardenne-Ehrenfest theorem in
irregularities of distribution},
journal={Compositio Math.},
volume={72},
date={1989},
number={3},
pages={269\ndash 339},
issn={0010-437X},
review={\MR{1032337 (91f:11054)}},
}
\bib{MR903025}{book}{
author={Beck, J{\'o}zsef},
author={Chen, William W. L.},
title={Irregularities of distribution},
series={Cambridge Tracts in Mathematics},
volume={89},
publisher={Cambridge University Press},
place={Cambridge},
date={1987},
pages={xiv+294},
isbn={0-521-30792-9},
review={\MR{903025 (88m:11061)}},
}
\bib{MR2375609}{article}{
author={Bilyk, Dmitriy},
author={Lacey, Michael},
author={Vagharshakyan, Armen},
title={On the signed small ball inequality},
journal={Online J. Anal. Comb.},
number={3},
date={2008},
pages={Art. 6, 7},
issn={1931-3365},
review={\MR{2375609}},
}
\bib{MR2409170}{article}{
author={Bilyk, Dmitriy},
author={Lacey, Michael T.},
author={Vagharshakyan, Armen},
title={On the small ball inequality in all dimensions},
journal={J. Funct. Anal.},
volume={254},
date={2008},
number={9},
pages={2470--2502},
issn={0022-1236},
review={\MR{2409170}},
}
\bib{MR2414745}{article}{
author={Bilyk, Dmitriy},
author={Lacey, Michael T.},
title={On the small ball inequality in three dimensions},
journal={Duke Math. J.},
volume={143},
date={2008},
number={1},
pages={81--115},
issn={0012-7094},
review={\MR{2414745}},
}
\bib{MR637361}{article}{
author={Hal{\'a}sz, G.},
title={On Roth's method in the theory of irregularities of point
distributions},
conference={
title={Recent progress in analytic number theory, Vol. 2},
address={Durham},
date={1979},
},
book={
publisher={Academic Press},
place={London},
},
date={1981},
pages={79--94},
review={\MR{637361 (83e:10072)}},
}
\bib{MR0066435}{article}{
author={Roth, K. F.},
title={On irregularities of distribution},
journal={Mathematika},
volume={1},
date={1954},
pages={73--79},
issn={0025-5793},
review={\MR{0066435 (16,575c)}},
}
\bib{MR0319933}{article}{
author={Schmidt, Wolfgang M.},
title={Irregularities of distribution. VII},
journal={Acta Arith.},
volume={21},
date={1972},
pages={45--50},
issn={0065-1036},
review={\MR{0319933 (47 \#8474)}},
}
\bib{MR0491574}{article}{
author={Schmidt, Wolfgang M.},
title={Irregularities of distribution. X},
conference={
title={Number theory and algebra},
},
book={
publisher={Academic Press},
place={New York},
},
date={1977},
pages={311--329},
review={\MR{0491574 (58 \#10803)}},
}
\bib{MR95k:60049}{article}{
author={Talagrand, Michel},
title={The small ball problem for the Brownian sheet},
journal={Ann. Probab.},
volume={22},
date={1994},
number={3},
pages={1331\ndash 1354},
issn={0091-1798},
review={\MR{95k:60049}},
}
\bib{MR96c:41052}{article}{
author={Temlyakov, V. N.},
title={An inequality for trigonometric polynomials and its application
for estimating the entropy numbers},
journal={J. Complexity},
volume={11},
date={1995},
number={2},
pages={293\ndash 307},
issn={0885-064X},
review={\MR{96c:41052}},
}
\end{biblist}
\end{bibsection}
\end{document}
\section{Introduction}
\label{sec:intro}
Solar jets are one type of pervasive and explosive phenomena on the Sun,
usually appearing as a footpoint brightening followed by a nearly collimated plasma ejection \citep[e.g.,][]{2016SSRv..201....1R,2021RSPSA.47700217S,2018ApJ...861..108Z}.
Coronal jets are often observed in extreme ultraviolet \citep[EUV, e.g., ][]{2009SoPh..259...87N,2011RAA....11.1229Y,2012SoPh..281..729S,2014A&A...562A..98C,2018ApJ...854..155K,2019ApJ...885L..15K,2019ApJ...873...93K} and X-ray passbands \citep[e.g.,][]{1996PASJ...48..123S,2019ApJ...871..220S} in active regions (ARs) and coronal holes.
These coronal jets normally appear as anemone jets \citep[i.e., inverted-Y or $\lambda$ shape, e.g.,][]{2007Sci...318.1580C,2009SoPh..259...87N} or
two-sided-loop jets \citep[e.g.,][]{2017ApJ...845...94T,2020MNRAS.498L.104W}.
The anemone jets can be divided into blowout jets and standard jets \citep[e.g.,][]{2010ApJ...720..757M}.
The most obvious differences are that a blowout jet often hosts two bright points at its base (a standard jet has only one) and often involves a twisting mini-filament there \citep{2010ApJ...720..757M,2018ApJ...859....3M,2015Natur.523..437S,2017ApJ...844L..20Z,2019ApJ...887..239Y}.
Observational studies show that the coronal jets are often associated with or occur above satellite sunspots \citep{2008A&A...478..907C,2015ApJ...815...71C,2017PASJ...69...80S,2018ApJ...861..105S,2020ApJ...891..149P},
coronal bright points \citep[CBPs,][]{2007PASJ...59S.751C,2016Ap&SS.361..301L,2019LRSP...16....2M},
mini-filaments \citep{2016ApJ...821..100S,2016ApJ...830...60H,2018ApJ...854..155K,2019ApJ...873...93K}, or sigmoidal structures \citep{2016ApJ...817..126L,2014A&A...561A.104C}.
Coronal jets have been frequently found to be closely related to flux emergence, flux convergence, and/or cancellation, indicating their generation by magnetic reconnection \citep{2007A&A...469..331J,2010A&A...519A..49H,2012ApJ...745..164S,2016ApJ...832L...7P}.
The brightening of the loops at the bases of some jets is found to be associated with a pattern of Doppler blue-to-red shifts, indicating an outbreak of siphon flow along the newly reconnected and submerging closed loop during the interchange magnetic reconnection \citep{2010A&A...519A..49H}.
The lengths, widths, projected speeds, lifetimes, temperatures, and densities of coronal jets are mostly 10$-$400\,Mm, 5$-$100\,Mm,
10$-$1000\,{\rm km\;s$^{-1}$}, tens of minutes, 3$-$8 MK, and 0.7$-$4.0$\times$10$^{9}$ cm$^{-3}$, respectively \citep{2000ApJ...542.1100S,2007PASJ...59S.771S,2016A&A...589A..79M,2018ApJ...868L..27P,2015A&A...579A..96P}.
Jet-like features have also been observed in the chromosphere and transition region (TR). These low-temperature jets are usually smaller than the coronal jets, and they appear as anemone jets in plage regions \citep[e.g.,][]{2007Sci...318.1591S,2021ApJ...913...59W}, penumbral jets \citep[e.g.,][]{2007Sci...318.1594K,2016ApJ...816...92T,2020A&A...642A..44H},
light bridge jets \citep[e.g.,][]{2018ApJ...854...92T,2017AA...597A.127B,2017ApJ...848L...9H,2019ApJ...870...90B}, flare ribbon jets \citep{2019PASJ...71...14L}, cool polar jets \citep{2011A&A...534A..62S}, mini- or nano-jets from rotating prominence structures \citep{2017ApJ...841L..13C,2021NatAs...5...54A},
chromospheric spicules \citep[e.g.,][]{2007PASJ...59S.655D,2015ApJ...799L...3R,2019Sci...366..890S}, TR network jets \citep[e.g.,][]{2014Sci...346A.315T,2016SoPh..291.1129N,2019ApJ...873...79C,2019SoPh..294...92Q}, and TR explosive events \citep[TREEs, interpreted as bi-directional jets,][]{1997Natur.386..811I}.
Though smaller in size, many chromospheric/TR jets have morphologies that are similar to those of coronal jets. For instance, many of these small jets also reveal inverted-Y shaped structures, indicating that magnetic reconnection between small-scale magnetic loops and the background field might be a common formation mechanism for these jets \citep[e.g.,][]{2020RSPSA.47690867N}.
The lengths, widths, velocities, and lifetimes of these chromospheric/TR jets have been found to be 1$-$11\,Mm, 100$-$400\,km, 5$-$250\,{\rm km\;s$^{-1}$}, and 20$-$500\,s, respectively.
Due to their high occurrence rate and ubiquity, these small-scale jets have been suggested to contribute significantly to the heating of the upper solar atmosphere and origin of the solar wind \citep[e.g.,][]{2011Sci...331...55D,2014Sci...346A.315T,2019Sci...366..890S}.
With recent high-resolution observations taken by the Extreme Ultraviolet Imager \citep[EUI,][]{2020A&A...642A...8R}
onboard Solar Orbiter \citep[SO,][]{2020A&A...642A...1M},
we report the smallest coronal jets ever observed in the quiet Sun and investigate their physical properties.
We describe our observations in Section\,\ref{sec:obs}, present the analysis results in Section\,\ref{sec:res},
discuss the results in Section\,\ref{sec:dis} and summarize our findings in Section\,\ref{sec:sum}.
\begin{table*}[htp]
\centering
\renewcommand\tabcolsep{3.pt}
\caption{Detailed information of the EUI datasets}
\begin{tabular}{c c c c c c c c }
\hline
Dataset&Date &Time &Distance from&Pixel size &Time &Ly$\alpha$ ~data & SDO data \\
& & [UT] &Sun [AU] & HRI$_{EUV}$ (HRI$_{Ly\alpha}$) [Mm] & cadence [s] & available? & available? \\\hline
1 & 2020-05-20 & 21:12$-$22:09 & 0.61 & 0.22 (/) & 10 & / & yes \\
2 & 2020-05-21 & 16:04$-$17:30 & 0.60 & 0.22 (/) & 10 (60) & / & yes \\
3 & 2020-05-30 & 14:46$-$14:50 & 0.56 & 0.19 (0.21) & 5 & yes &yes \\
4 & 2020-10-19 & 19:44$-$20:31 & 0.99 & 0.35 (/) & 12 & / &/ \\
5 & 2020-10-22 & 14:23$-$14:36 & 0.98 & 0.35 (/) & 10 & / &/ \\
6 & 2020-11-19 & 11:52$-$14:31 & 0.92 & 0.32 (0.68) & 15 & yes &/ \\\hline
\end{tabular}
\\
\tablenotetext{}{\textbf{For dataset\,2}, the regular time cadence is 10 s, covering most microjets, whereas the time cadence of 60 s covers two microjets (Jet-09 and Jet-12). \textbf{For dataset\,6}, the HRI$_{Ly\alpha}$ images were binned with 2$\times$2 pixels.}
\label{datasets}
\end{table*}
\section{Observations}
\label{sec:obs}
The primary data\footnote{https://doi.org/10.24414/z2hf-b008} we used were taken by the two High Resolution Imagers (HRI) of EUI,
the passbands of which are centered at 174 \AA\ (HRI$_{EUV}$: dominated by Fe~{\sc{x}} and Fe~{\sc{ix}} lines) and
1216 \AA\ (HRI$_{Ly\alpha}$: dominated by the Ly$\alpha$\ line of Hydrogen), respectively \citep{2021arXiv210403382B}.
The HRI$_{EUV}$ passband has a peak response at a temperature of $\sim$1 MK \citep{2021arXiv210410940C}, and thus samples plasma in the corona. We mainly used HRI$_{EUV}$ images to search for coronal microjets.
Six datasets taken on 2020 May 20, May 21, May 30, Oct 19, Oct 22, and Nov 19 were analyzed.
The detailed information of these datasets is shown in Table\,\ref{datasets}.
We also analyzed simultaneous observations taken by the Atmospheric Imaging Assembly \citep[AIA,][]{2012SoPh..275...17L} and
the Helioseismic and Magnetic Imager \citep[HMI,][]{2012SoPh..275..207S}
onboard the Solar Dynamics Observatory \citep[SDO,][]{2012SoPh..275....3P}
on 2020 May 20, May 21, and May 30, when the fields of view (FOVs) of the EUI HRIs were also observed by SDO.
We used the AIA 1600 \AA\ images at a 24 s cadence and
the 304 \AA, 171 \AA, 193 \AA, 211 \AA, 131 \AA, 335 \AA, and 94 \AA\ images at a 12 s cadence
to investigate the thermal property of the coronal microjets.
These AIA images have a pixel size of 430 km and a resolution of about 1 Mm.
To investigate their origin, we also examined the HMI line-of-sight (LOS) magnetic field data at a 45 s cadence.
The pixel size of the magnetograms is 360 km.
Since the HRI$_{EUV}$ and the AIA 171 \AA\ images have similar temperature responses, we first aligned them through a linear Pearson correlation. The other AIA and HMI images were then aligned with the AIA 171 \AA\ images using the aia\_prep.pro routine available in SolarSoftware (SSW).
For aligning the HRI$_{EUV}$ and HRI$_{Ly\alpha}$ images, several CBPs in the images were used as reference features.
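As an illustration of the correlation-based coalignment, the following minimal sketch (in Python) scans integer pixel offsets and keeps the shift that maximizes the Pearson correlation; the array names are hypothetical, and the actual reduction relied on the SSW routines mentioned above with sub-pixel refinement.
\begin{verbatim}
# Sketch: find the (dy, dx) shift maximizing the Pearson correlation
# between two co-spatial intensity maps on the same plate scale.
# img_a, img_b are hypothetical 2D numpy arrays (e.g., HRI_EUV, AIA 171).
import numpy as np

def pearson_shift(img_a, img_b, max_shift=20):
    best = (-2.0, 0, 0)  # (correlation, dy, dx)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img_b, dy, axis=0), dx, axis=1)
            r = np.corrcoef(img_a.ravel(), shifted.ravel())[0, 1]
            if r > best[0]:
                best = (r, dy, dx)
    return best
\end{verbatim}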
Due to the difference in the heliocentric distance of SO and SDO,
the solar radiation emitted at a particular time reaches the two spacecraft at different times.
To avoid confusion in further analysis, we converted the observation times of all telescopes into the light emitting times on the Sun, which are listed in Table\,\ref{datasets}.
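The light-travel-time correction itself is elementary; a minimal sketch (rounded constants, spacecraft distances as in Table~\ref{datasets}):
\begin{verbatim}
# Convert an observation time to the light-emitting time on the Sun:
# t_emit = t_obs - d/c, with d the spacecraft-Sun distance.
from datetime import datetime, timedelta

AU_KM, C_KM_S = 1.496e8, 2.998e5  # 1 AU in km; speed of light in km/s

def emitting_time(t_obs, distance_au):
    return t_obs - timedelta(seconds=distance_au * AU_KM / C_KM_S)

# Dataset 3: Solar Orbiter at 0.56 AU -> light travel time of ~279 s.
print(emitting_time(datetime(2020, 5, 30, 14, 46), 0.56))
\end{verbatim}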
\section{Results}
\label{sec:res}
\begin{figure*}
\centering
\includegraphics[trim=0.0cm 1.0cm 0.0cm 0.0cm,width=1.0\textwidth]{overview_v1.eps}
\caption{ Overviews for all six datasets in the HRI$_{EUV}$ passband.
The green open diamonds in each panel mark the footpoint locations of the identified coronal microjets.
}
\label{fig:overview}
\end{figure*}
Figure\,\ref{fig:overview} shows an overview for each of the six datasets in the HRI$_{EUV}$ passband.
The Sun was very quiet on these days, and the FOVs mainly include widespread CBPs and diffuse regions in between. Several ARs appear near the edges of the FOVs in datasets 4$-$6; these ARs were saturated due to their high radiance and were excluded when searching for coronal jets.
The high spatial and temporal resolutions allow us to investigate various types of small-scale transient activity. Previous investigations have identified numerous small brightenings termed campfires from these HRI$_{EUV}$ images \citep{2021arXiv210403382B}. Here we focus on small-scale collimated jet-like features, which we call coronal microjets.
In total, we identified 52 coronal microjets from the six HRI$_{EUV}$ image sequences. The footpoint locations of all these jets are marked in Fig.\,\ref{fig:overview}.
Detailed information such as the starting times of their occurrence on the Sun, durations, and pixel locations is listed in Table\,\ref{paras}.
There are 2 events in each of datasets\,1 and 5, 3 events in dataset\,3, 11 events in dataset\,2, 7 events in dataset\,4, and 27 events in dataset\,6.
Figure\,\ref{fig:overview} shows that most of these microjets are located in the quiet-Sun regions that are far away from the ARs.
A few of them are located close to but not inside the ARs.
\subsection{Morphology and parameters of the coronal microjets in HRI$_{EUV}$}
\label{subsec:jets_in_174}
The identified microjets are generally preceded by clear brightenings at their footpoints in the HRI$_{EUV}$ images. They mostly appear as collimated structures shooting up from the footpoint brightenings. Their morphology is very similar to that of known larger-scale coronal jets.
As an example, we show 6 coronal microjets in Figure\,\ref{fig:example}.
Figure\,\ref{fig:example}(a) gives a closer look at Jet-04. To highlight the jet, we manually drew a green contour enclosing the jet structure.
The green arrow indicates the propagation direction of the jet.
This jet appears to consist of two footpoints and a collimated jet body, forming an inverted-Y shaped structure.
Figure\,\ref{fig:example}(b) gives a closer look at Jet-17.
In addition to the two footpoints at the base and the quasi-collimated jet body, two blobs marked by the red arrows in Figure\,\ref{fig:example}(b) can be clearly seen in the jet. These moving blobs appear to be similar to those reported in large-scale coronal jets \citep{2014A&A...567A..11Z,2016SoPh..291..859Z,2018ApJ...854..155K,2019ApJ...885L..15K,2019ApJ...873...93K}.
Two blobs can also be identified from Jet-35, as marked by the red arrows in Figure\,\ref{fig:example}(c). For a detailed evolution of these moving blobs, we refer to the associated animation of Figure\,\ref{fig:example}. The jet shown in Figure\,\ref{fig:example}(d) (Jet-07) is a highly collimated one. It includes a brighter, confined base and a fuzzy but identifiable jet body.
Figure\,\ref{fig:example}(e) shows a bi-directional jet (Jet-20). We see that plasma is ejected towards the opposite directions from the jet base, which is marked by the plus symbol.
The last example is Jet-50, shown in Figure\,\ref{fig:example}(f).
Two microjets occur close to each other simultaneously. The footpoints of these two jets appear to be kinked or reveal two branches, possibly indicating an inverted-Y shape morphology.
From the associated animation, we can see the appearance of two dark regions following the disappearance of Jet-50, which might be similar to the coronal dimmings after eruptions of coronal mass ejections \citep[e.g.,][]{1997ApJ...491L..55S,2020Innov...100059X}.
Moreover, from Figure\,\ref{fig:example} and the associated animation we see that most microjets appear as isolated structures without any indication of pre-eruption mini-filaments at their bases, which is different from many previously reported coronal jets in ARs and coronal holes.
\begin{figure*}
\centering
\includegraphics[trim=0.0cm 0.8cm 0.0cm 0.0cm,width=1.0\textwidth]{jets_example_v1.eps}
\caption{Morphologies of Jet-04, Jet-17, Jet-35, Jet-07, Jet-20, and Jet-50 in the HRI$_{EUV}$ images.
The green contours outline the microjets, and the green arrows indicate the propagation directions of the microjets.
In (a) and (b), the blue arrows mark the footpoints of the microjets.
In (b) and (c), the red arrows indicate some blobs within the microjets.
In (e), the green plus marks the location of the starting brightening in Jet-20.
An animation of this figure is available, showing the evolution of these six microjets in the HRI$_{EUV}$ images.
It includes six panels covering durations of 2 min from 16:09:38 UT to 16:11:38 UT on 2020 May 21, 10.6 min from 20:02:24 UT to 20:13:00 UT on 2020 October 19, 8.3 min from 14:23:51 UT to 14:32:06 UT on 2020 November 19, 5.7 min from 16:41:28 UT to 16:47:08 UT on 2020 May 21, 3.3 min from 20:00:41 UT to 20:04:00 UT on 2020 October 19, and 7.8 min from 14:07:51 UT to 14:15:36 UT on 2020 November 19, respectively.
In this animation, the green arrows mark the microjets, the red arrows in (b) and (c) mark the blobs, and the white arrows in (f) denote two dark regions that appear after the disappearance of Jet-50.
}
\label{fig:example}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[trim=0.0cm 0.6cm 0.0cm 0.0cm,width=1.0\textwidth]{jets_paras_v1.eps}
\caption{Parameters of the coronal microjets.
(a) An HRI$_{EUV}$ image showing Jet-14 at 14:51:35 UT on 2020 May 30.
The green arrow indicates the propagation direction, and the green dotted line marks the trajectory of the ejected plasma.
(b) The time-distance map for the green dotted line in (a). Four tracks of ejected plasma and the corresponding projected speeds in units of {\rm km\;s$^{-1}$}\ are marked by the green lines and the numbers, respectively.
(c) The intensity variation along the long side of the rectangle s$-$e shown in (a).
The original intensity profile and the Gaussian fit are indicated by the black and red curves, respectively.
(d)$-$(h) Distributions of the parameters for the microjets.
In each panel, `A' and `S' represent the average value and standard deviation, respectively.}
\label{fig:paras}
\end{figure*}
We measured the physical parameters for these coronal microjets, i.e., the duration, projected speed, length, width, and aspect ratio, which are listed in Table\,\ref{paras}.
The lifetime of each event was calculated as the time difference between the first appearance of the footpoint brightening and the disappearance of the jet body.
For several events, either the first appearance of the footpoint brightening or the disappearance of the jet body was not captured by EUI; we thus
cannot accurately estimate their full lifetimes. Instead, we registered the existing periods of these jets in the HRI$_{EUV}$ images (see Table\,\ref{paras}).
We marked the trajectory of each microjet and calculated the speed from the corresponding time-distance map.
As an example, Figure\,\ref{fig:paras}(a) and (b) show the trajectory and the time-distance map for Jet-14, respectively.
Four slant stripes, marking four episodes of plasma ejection in Jet-14, can be seen from Figure\,\ref{fig:paras}(b). The speeds for the four ejections were estimated from the slopes, which range from 89 to 162 {\rm km\;s$^{-1}$}. We took the average speed 123 {\rm km\;s$^{-1}$}\ as the speed of this jet for further analysis. We would like to mention that most microjets only show one episode of ejection, and in such cases there is no need to average the speeds.
We calculated the length of a microjet when it was fully developed and clearly resolved.
To estimate the width of a microjet, we put a narrow white rectangle perpendicular to the microjet, as exemplified in Figure\,\ref{fig:paras}(a). We then plotted
the intensity along the long side of the rectangle and obtained the intensity profile across the jet, e.g., the black line in Figure\,\ref{fig:paras}(c).
We applied a Gaussian fit to the intensity profile, and the full-width-at-half-maximum (FWHM) was taken as the width of the microjet.
Finally, the aspect ratio of the microjet could be calculated as the ratio of the length and width.
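Both measurements reduce to simple fits; a minimal sketch (in Python, on synthetic stand-ins for the time$-$distance track and the transverse intensity profile):
\begin{verbatim}
# Sketch of the speed and width measurements, on synthetic data.
import numpy as np
from scipy.optimize import curve_fit

# (1) Projected speed: slope of a track in the time-distance map.
t = np.arange(0.0, 60.0, 5.0)       # time, s
d = 100.0 * t + 500.0               # distance, km (a 100 km/s track)
speed = np.polyfit(t, d, 1)[0]      # ~100 km/s

# (2) Width: FWHM of a Gaussian fitted to the transverse profile.
def gauss(x, amp, x0, sigma, bg):
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + bg

x = np.linspace(-2.0, 2.0, 41)      # position along the slit, Mm
prof = gauss(x, 1.0, 0.0, 0.42, 0.1) + 0.02 * np.random.randn(x.size)
popt, _ = curve_fit(gauss, x, prof, p0=[1.0, 0.0, 0.5, 0.0])
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])  # ~1.0 Mm

print(speed, fwhm)
\end{verbatim}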
Figure\,\ref{fig:paras}(d)$-$(h) show the distributions of their parameters.
Their durations are less than 11 min, with an average of $\sim$4.6 min.
Their projected speeds are mostly less than 100\,{\rm km\;s$^{-1}$}, with an average value of $\sim$62\,{\rm km\;s$^{-1}$}.
These microjets have lengths ranging from 2 to 20 Mm, with an average of $\sim$7.7 Mm. Their widths, obtained as the FWHM of a Gaussian function fitted to the intensity profile, are in the range of 0.3 to 3.0 Mm, and the average value is $\sim$1.0 Mm.
The aspect ratios of the microjets range from 3 to 20, and the average is 7.7.
\subsection{Response in the HRI$_{Ly\alpha}$ images}
\label{subsec:lya}
\begin{figure*}
\centering
\includegraphics[trim=0.0cm 0.5cm 0.0cm 0.0cm,width=0.78\textwidth]{jets_lya_v1.eps}
\caption{Responses of Jet-30, Jet-40, and Jet-50 in the HRI$_{Ly\alpha}$ images.
Left and middle columns: snapshots of three microjets in the HRI$_{EUV}$ and HRI$_{Ly\alpha}$ images.
The green contours outline the microjets, and the green arrows represent the propagation directions of the jets.
The red contours were used to obtain the average intensity variations at 174 \AA\ and H\,{\sc i}\ Ly$\alpha$,
and the corresponding light curves are shown in the right column.
In the right column, the vertical bars indicate the standard errors of the mean values, and the vertical dashed lines represent the starting and ending times of the events.
}
\label{fig:lya}
\end{figure*}
We also examined responses of the identified coronal microjets in the HRI$_{Ly\alpha}$ images.
Datasets 3 (3 events) and \,6 (27 events) have simultaneous observations in the HRI$_{Ly\alpha}$ passband.
We found that 11 out of the 30 jets (see the details in Table\,\ref{paras}) have emission responses at the jet locations in the HRI$_{Ly\alpha}$ images, and three examples are shown in Figure\,\ref{fig:lya}.
We chose one region around the footpoint of each microjet
(red contours in the left and middle columns of Figure\,\ref{fig:lya}) and
plotted the light curves of the total intensities at 174 \AA\ and H\,{\sc i}\ Ly$\alpha$\ in the right column of Figure\,\ref{fig:lya}.
The two light curves reveal a similar trend for these three microjets, indicating that emissions in the two passbands are mainly contributed by the same process. These microjets with clear response in Ly$\alpha$\ might be generated at lower heights as compared to others.
By comparing the left and middle columns of Figure\,\ref{fig:lya},
we can see that these microjets are located at or near the network lanes that are characterized by the enhanced emission in the HRI$_{Ly\alpha}$ images.
Actually, at least half of the 30 microjets (including the three microjets shown in Figure\,\ref{fig:lya}) appear to be located at the edges of the network lanes. Only one microjet (Jet-43, see Table\,\ref{paras}) is not near a network lane.
\begin{figure*}
\centering
\includegraphics[trim=0.0cm 0.8cm 0.0cm 0.0cm,width=0.9\textwidth]{jets_sdo_v1.eps}
\caption{From left to right: Images of the HRI$_{EUV}$, AIA 171\,\AA, 211 \AA, and HMI LOS magnetic field for Jet-02 (upper panels) and Jet-04 (lower panels).
The green arrows indicate the propagation directions of the jets.
The green contours mark the locations of the microjets.
The red and blue contours in (a4) and (b4) represent magnetic field strength with the levels of $\pm$10 G, respectively.
The HMI magnetograms are saturated at $\pm$50\,G.
In (a4) and (b4), the overplotted cyan curves show the normalized variations of the minority-polarity (positive for (a4) and negative for (b4)) magnetic fluxes in the white boxes.
The black squares and triangles represent the starting and ending times for the events, respectively.
}
\label{fig:sdo}
\end{figure*}
\subsection{Response in the AIA images and the associated magnetic field}
\label{subsec:mgrec}
The FOVs of datasets \,1 (2 events), \,2 (11 events), and \,3 (3 events)
have also been observed by AIA and HMI.
We thus examined the responses of these events in the AIA UV and EUV images, and investigated the possible origin of these jets using the HMI line-of-sight magnetograms.
We did not find any signature of these events in the AIA 1600 \AA\ images.
Many microjets reveal obvious signatures in the AIA 304 \AA, 171 \AA, 193 \AA, and 211 \AA\ passbands,
and for some cases weak signatures can be identified from the AIA 335 \AA\ and 131 \AA\ passbands (Table\,\ref{paras}).
As an example, Figure\,\ref{fig:sdo} shows two events, Jet-02 and Jet-04,
in the HRI$_{EUV}$, AIA 171 \AA, and 211 \AA\ images.
These two events reveal clear signatures in the AIA 171 \AA\ and 211 \AA\ images, especially around their footpoint regions.
The jet signatures are fuzzier in the AIA EUV passbands
than in the HRI$_{EUV}$ images.
Without the high-resolution HRI$_{EUV}$ observations, it would be difficult to identify these coronal microjets from the AIA EUV passbands.
With the HMI magnetograms, we have examined the magnetic field structures around the coronal microjets.
We found that ten events are located in regions with opposite magnetic polarities (see the details in Table\,\ref{paras}).
In Figure\,\ref{fig:sdo}\,(a4) and (b4), we show the line-of-sight magnetic field around Jet-02 and Jet-04.
Jet-02 is clearly located above a region with opposite polarities, where the negative polarity dominates.
The footpoint region of Jet-04 is also a mixed-polarity region, where a weak negative flux is close to the strong positive flux of the network.
In some cases, there is more than one patch of the minority polarity around the footpoints.
These events were also considered to be related to opposite-polarity fluxes if at least one patch consisted of more than 4 pixels
with a field strength larger than 10\,G.
The resolution of the HMI magnetograms is not high enough to reveal clear flux changes of the magnetic field associated with most of the microjets. However, we did find clear magnetic flux changes for a few microjets. We present the variations of the minority-polarity magnetic fluxes for two events in Figure\,\ref{fig:sdo}\,(a4) and (b4), where we can see clear signatures of flux decrease between the starting and ending times of each event. In addition, the positive magnetic flux (not shown in Figure\,\ref{fig:sdo}\,(b4)) associated with Jet-04 reveals an obvious decrease during the occurrence of the jet, similar to the negative flux.
Magnetic cancellation has also been found to be associated with some large-scale coronal jets \citep[e.g.,][]{2015ApJ...815...71C,2017ApJ...840...54C}.
These observational signatures indicate that magnetic cancellation is likely associated with at least some microjets.
Using the coaligned AIA 1600 \AA\ images and HMI magnetograms, we also found that almost all of the 16 microjets identified from datasets 1, 2, and 3 originate from the bright network lanes or relatively strong magnetic flux concentrations.
The footpoints of some microjets are clearly located at the edges of magnetic flux concentrations.
As mentioned above, a similar result was also found from the HRI$_{Ly\alpha}$ images (see the details in Table\,\ref{paras}).
\subsection{DEM analysis}
\label{subsec:details}
\begin{figure*}
\centering
\includegraphics[trim=0.0cm 0.3cm 0.0cm 0.0cm,width=0.99\textwidth]{dem_v2.eps}
\caption{DEM analysis for Jet-04.
(a) Evolution of the DEM. The two white dotted lines indicate the temperature range within which the DEM is obviously enhanced when the jet occurs.
(b) Temporal evolution of the electron density (black) and AIA 171 \AA\ intensity (yellow) averaged over the green contour in Figure\,\ref{fig:sdo}\,(b2).}
\label{fig:dem}
\end{figure*}
For the coronal microjets observed in 2020 May (except Jet-15, which shows no emission response in the AIA EUV passbands),
we performed a differential emission measure (DEM)
analysis \citep{2015ApJ...807..143C,2018ApJ...856L..17S,2020ApJ...898...88X,2021Innov...200083S} to investigate their plasma properties with observations
in the AIA 94 \AA, 131 \AA, 171 \AA, 193 \AA, 211 \AA, and 335 \AA\ passbands.
As an example, we averaged the intensities within the green contour in Figure\,\ref{fig:sdo}\,(b2) and then performed the DEM analysis for Jet-04. We show the temporal evolution of the resultant DEM in Figure\,\ref{fig:dem}\,(a).
From 16:09:40 UT to 16:10:20 UT, Jet-04 appears in the AIA EUV passbands and
the DEM in the jet region is obviously enhanced in a broad temperature range of log (T/K)\,=\,5.65$-$6.55.
We also calculated its EM-weighted temperature and obtained a value of log (T/K)\,=\,$\sim$6.20, which is similar to that reported by \citet{2015A&A...579A..96P} using the filter-ratio technique.
A similar behavior was also found for other coronal microjets.
This indicates the multi-thermal nature for the plasma in these microjets.
Taking the width (0.4\,Mm) of Jet-04 as the LOS integration depth, we can estimate the electron density of Jet-04
by integrating the DEM over the temperature range of log (T/K)\,=\,5.65$-$6.55.
Figure\,\ref{fig:dem}\,(b) shows the temporal evolution of the density, which closely matches the light curve of the AIA 171 \AA\ intensity.
From Figure\,\ref{fig:dem}\,(b), we can see that the density of Jet-04 is $\sim$2.2$\times$10$^{9}$ cm$^{-3}$.
The densities for other microjets are shown in Table\,\ref{paras}.
The average density for these quiet-Sun microjets is $\sim$1.4$\times$10$^{9}$ cm$^{-3}$, which is roughly one order of magnitude lower than the typical density of coronal jets observed in ARs \citep[e.g.,][]{2008A&A...481L..57C,2012ApJ...748..106T}.
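The density estimate follows from $\rm EM = \int DEM\,dT = N_{e}^{2}\,l$, with the jet width taken as the LOS depth $l$; a minimal sketch, where the DEM array is a hypothetical stand-in for the inversion output:
\begin{verbatim}
# Sketch: electron density and EM-weighted temperature from a DEM.
# The dem array is a hypothetical stand-in for the inversion output.
import numpy as np

logT = np.arange(5.65, 6.56, 0.05)
T = 10.0 ** logT                                          # K
dem = 1.0e20 * np.exp(-0.5 * ((logT - 6.2) / 0.2) ** 2)   # cm^-5 K^-1

em = np.trapz(dem, T)                 # emission measure, cm^-5
depth = 0.4 * 1.0e8                   # LOS depth: 0.4 Mm in cm (Jet-04)
n_e = np.sqrt(em / depth)             # ~2e9 cm^-3 for these inputs

T_ew = np.trapz(dem * T, T) / em      # EM-weighted temperature
print(n_e, np.log10(T_ew))
\end{verbatim}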
Since we have obtained the physical parameters (spatial scale, projected speed, temperature, density), we can calculate the thermal energy
(E$_{t}$ = 3N$_{e}$k$_{B}$TV) and kinetic energy (E$_{k}$ = 0.5N$_{e}$m$_{p}$$v^{2}$V) for the coronal microjets \citep[e.g.,][]{2014ApJ...790L..29T,2020ApJ...899...19C}. Here N$_{e}$, m$_{p}$, k$_{B}$, $v$, V, and T are the electron number density, proton mass, Boltzmann constant, velocity, volume, and temperature of a microjet.
By assuming a coronal microjet with a cylindrical shape (a height of 7.7\,Mm and a cross section diameter of 1.0\,Mm), a projected speed of 62\,{\rm km\;s$^{-1}$},
a mean coronal temperature of 10$^{6}$\,K, and an electron density of $\sim$1.4$\times$10$^{9}$ cm$^{-3}$,
its thermal and kinetic energies were estimated to be $\sim$3.9$\times$10$^{24}$\,erg and $\sim$2.9$\times$10$^{23}$\,erg, respectively.
Thus, the total energy of a typical microjet appears to be of the same order as the released energy predicted by the nanoflare theory \citep[$\sim$10$^{24}$\,erg,][]{1988ApJ...330..474P}.
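A back-of-the-envelope check of these numbers (cgs units; the exact figures depend on the assumed cylindrical geometry and the rounded inputs):
\begin{verbatim}
# Back-of-the-envelope check of the thermal and kinetic energies (cgs).
import numpy as np

k_B, m_p = 1.381e-16, 1.673e-24         # erg/K; g
n_e, T, v = 1.4e9, 1.0e6, 62.0e5        # cm^-3; K; cm/s

length, diameter = 7.7e8, 1.0e8         # cm (7.7 Mm, 1.0 Mm)
V = np.pi * (diameter / 2.0) ** 2 * length  # cylindrical volume, cm^3

E_t = 3.0 * n_e * k_B * T * V           # ~3.5e24 erg
E_k = 0.5 * n_e * m_p * v ** 2 * V      # ~2.7e23 erg
print(E_t, E_k)  # same order as the values quoted in the text
\end{verbatim}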
\section{Discussion}
\label{sec:dis}
At first sight, our microjets appear to be different from the campfires recently discovered from the HRI$_{EUV}$ images. Campfires are characterized by small-scale brightenings in quiet-Sun regions \citep{2021arXiv210403382B}. Most campfires have a loop-like morphology, with a length smaller than 4 Mm and a length-to-width ratio of 1--5. In contrast, the microjets detected here generally have a much larger aspect ratio, 7.7$\pm$3.2. The lengths of the microjets, 7.7$\pm$4.3 Mm, are also considerably larger than those of most campfires. In addition, the microjets are characterized as plasma moving upward from a footpoint brightening, which is not typical for campfires. Another difference is that some coronal microjets also reveal a response in the HRI$_{Ly\alpha}$ images, whereas campfires generally show no signature in Ly$\alpha$.
Nevertheless, we have examined the same dataset (dataset\,3) used by \citet{2021arXiv210403382B} and found that
the coronal microjets we identified from dataset\,3 have also been identified as campfires by the detection method of \citet{2021arXiv210403382B}.
In several cases, a coronal microjet was identified as sequential campfires or spatially adjacent campfires.
Thus, we conclude that the identified coronal microjets are a distinct subset of coronal campfires.
Different from the suggestion that many campfires might correspond to apexes of low-lying small-scale network loops heated at the coronal heights by component reconnection between different loops \citep{2021arXiv210410940C}, the coronal microjets are more likely to be generated by magnetic reconnection between small-scale magnetic loops and the ambient network field at lower heights.
Another group of small-scale dynamic events resulting from magnetic reconnection are TREEs \citep[e.g., ][]{1991JGR....96.9399D,1997Natur.386..811I,2006SoPh..238..313C}, which are characterized by non-Gaussian line profiles at TR temperatures \citep[e.g.,][]{1983ApJ...272..329B,2003A&A...403..287P}.
Observations have shown that TREEs are usually located at the boundary of magnetic network \citep[e.g., ][]{2003A&A...403..731M,2004A&A...427.1065T}.
Furthermore, they are often associated with jet-like structures \citep[e.g., ][]{2014ApJ...797...88H,2017MNRAS.464.1753H,2019ApJ...873...79C}.
Here we found that the bases of almost all coronal microjets are located at the network lanes, with many of them at the edges of the lanes.
Future investigations are required to understand whether the coronal microjets and TREEs are related.
With AIA observations, \citet{2014ApJ...787..118R} reported a new type of coronal jets, called `jetlets', which are smaller than the typical coronal jets.
They are often located at the bases of coronal plumes and associated with opposite magnetic polarities \citep[e.g.,][]{2015ApJ...807...71P,2018ApJ...868L..27P}. Sometimes they can also be identified from the TR images taken by the Interface Region Imaging Spectrograph \citep[IRIS,][]{2014SoPh..289.2733D}.
These jetlets have an average length of 27\,Mm and an average width of 3\,Mm, about three times larger than the coronal microjets we report here.
More recently, using observation from the High-resolution Coronal Imager 2.1 \citep[Hi-C 2.1,][]{2019SoPh..294..174R},
\citet{2019ApJ...887L...8P} found six smaller jetlet-like structures with an average length of 9\,Mm, at edges of network lanes close to an AR.
Some of these jetlets appear to be the large-scale end of the population of TR network jets \citep{2014Sci...346A.315T}. Some other jetlets might be similar to our quiet-Sun microjets, although our microjets are generally smaller and not associated with coronal plumes. More investigations are required to understand the relationship between the coronal microjets and jetlets. With observations of Hi-C 2.1, \citet{2019ApJ...887...56T} recently discovered some fine-scale surge/jet-like features in the core of an AR. Future observations should be performed to examine whether these features are the AR counterpart of our microjets.
Huge efforts have been made to investigate the physical nature and formation mechanisms of solar jets. Magnetic reconnection is generally believed to be the primary cause of solar jets. Numerical simulations of reconnection driven jets have reproduced many observational features of solar jets. For example, \citet{1995Natur.375...42Y} and \citet{1996PASJ...48..353Y} performed MHD simulations and reproduced both the anemone jets (inverted-Y shaped jets) and two-sided-loop jets (bidirectional jets).
These features, as well as the brightenings caused by plasma heating at the jet footpoints, have also been detected in some of our identified coronal microjets.
In reconnection regions, plasma blobs (or plasmoids) can form and move within the jets \citep[e.g.,][]{1995Natur.375...42Y,1996PASJ...48..353Y}.
In a high plasma-$\beta$ environment, the Kelvin$-$Helmholtz instability may develop and lead to the formation of vortex-like blobs \citep{2017ApJ...841...27N,2018RAA....18...45Z}.
On the other hand, the plasmoid instability could occur in the low plasma-$\beta$ case, and multi-thermal plasmoid blobs may form and move upward in the jets \citep{2017ApJ...841...27N}.
Considering the low plasma-$\beta$ environment of the corona, the moving blobs identified in some microjets are more likely to be plasmoids.
The fact that many microjets are associated with mixed-polarity magnetic fluxes and located at the edges of network lanes favors the scenario of jet production by magnetic reconnection between small-scale magnetic loops and the ambient network field in the quiet-Sun regions.
This scenario appears to be consistent with that of the simulations performed by \citet{2013ApJ...777...16Y} and \citet{2018ApJ...852...16Y}.
In these simulations, magnetic reconnection occurs between closed field and the locally open network field, leading to the generation of an inverted-Y shaped jet and moving blobs in the simulated jet.
Some of these microjets might also be part of a continuum of jets that emanate from embedded bipoles, i.e., the three-dimensional (3D) fan-spine topology, consisting of a minority polarity surrounded by majority-polarity fluxes.
This scenario has been extensively studied through observational analysis \citep[e.g.,][]{2018ApJ...854..155K,2019ApJ...885L..15K,2019ApJ...873...93K} and 3D simulations
\citep[e.g.,][]{2009ApJ...691...61P,2015A&A...573A.130P,2016A&A...596A..36P,2016ApJ...820...77W,2016ApJ...827....4W,2017Natur.544..452W,2017ApJ...834...62K}. Mini-filament eruptions are often involved in this process.
For the microjets reported here, no obvious signatures of small-scale (or tiny-, micro-) filaments were observed in the HRI$_{EUV}$ images. However, this might be due to the fact that the associated filament structures are too small to be resolved by EUI.
Our observations of numerous microjets in quiet-Sun regions might be consistent with the results of \citet{2016SoPh..291.1357W}, who suggested interchange reconnection as the coronal source of superhalo electrons \citep{2012ApJ...753L..23W}. Specifically, they proposed that
small-scale interchange reconnections in quiet-Sun regions produce an upward-traveling population of accelerated electrons,
which could escape into the interplanetary space and form the superhalo electrons measured in the solar wind.
\section{Summary}
\label{sec:sum}
We have identified 52 coronal microjets from the HRI$_{EUV}$ images taken on six different days.
These coronal microjets appear as quasi-collimated plasma ejections from clear brightenings in the quiet-Sun regions.
Some of these microjets reveal an inverted-Y shape and include moving blobs, which are similar to many previously known jets.
The footpoints of most microjets are located at the edges of network lanes and associated with mixed-polarity magnetic fluxes.
These observational features suggest that the coronal microjets are the small-scale version of coronal jets, which are likely generated by magnetic reconnection between small-scale magnetic loops and the ambient network field in the quiet-Sun regions.
We have measured various physical parameters of the coronal microjets, and found that
(1) the durations are mostly a few minutes,
(2) the projected speeds are mostly 40--90 {\rm km\;s$^{-1}$},
(3) the average length and width are $\sim$7.7 Mm and $\sim$1.0 Mm, respectively, and
(4) the aspect ratios of the jets are generally 4--13.
We have examined the response of the coronal microjets in the HRI$_{Ly\alpha}$ images and found that
11 out of 30 jets show a response, possibly indicating the generation of these 11 microjets at lower heights compared to others.
Where AIA data were available, we found that almost all of the microjets have signatures in the AIA EUV passbands,
especially in the 304 \AA, 171 \AA, 193 \AA, and 211 \AA\ passbands.
Through a DEM analysis, we found that these coronal microjets are multi-thermal, and their average density is $\sim$1.4$\times$10$^{9}$ cm$^{-3}$.
The thermal and kinetic energies of these coronal microjets were estimated to be
$\sim$3.9$\times$10$^{24}$\,erg and $\sim$2.9$\times$10$^{23}$\,erg, respectively, which fall into the energy range of coronal nanoflares.
\acknowledgments
This work was supported by NSFC grants 11825301, 11790304, and 41874200, and by the Strategic Priority Research Program of CAS (grant XDA17040507).
H.C.C. was supported by the National Postdoctoral Program for Innovative Talents (BX20200013) and China Postdoctoral Science Foundation (2020M680201).
Solar Orbiter is a space mission of international collaboration between ESA and NASA, operated by ESA. The EUI instrument was built by CSL, IAS, MPS, MSSL/UCL, PMOD/WRC, ROB, LCF/IO with funding from the Belgian Federal Science Policy Office (BELSPO/PRODEX PEA 4000112292); the Centre National d'Etudes Spatiales (CNES); the UK Space Agency (UKSA); the Bundesministerium für Wirtschaft und Energie (BMWi) through the Deutsches Zentrum für Luft- und Raumfahrt (DLR); and the Swiss Space Office (SSO).
AIA and HMI are instruments onboard the Solar Dynamics Observatory, a mission for NASA's Living With a Star program.
\bibliographystyle{aasjournal}
\section{Introduction}\label{INTRODUCTION}
Ann decides to go to her local Italian restaurant for dinner. Ann whittles down her choice to either the spaghetti or the risotto. The spaghetti is always the same and always pretty good - a safe choice. The risotto is risky:\ sometimes it is excellent but on occasion it has been bad. Ann suffers from regret. If she chooses the risotto and it turns out to be bad then she will regret her choice; likewise, if she chooses the spaghetti and the risotto turns out to be excellent. If Ann is regret averse, then she will account for the possibility of these unpleasant regretful experiences when making her decision.
Information plays an integral role in the above story. Ann's regret is evaluated as the utility loss experienced from comparing a choice made - that turned out to be suboptimal - to a foregone alternative. But implicit in this construction is that such a comparison can always be made. While this may be true for financial assets listed on an exchange, there are many situations in life where information about unchosen options is not automatically available, as in the case of our restaurant example, since it is uncommon to order two main courses. In this paper we allow for this possibility by allowing the ex-post information available to a decision maker to depend on her choice.
Now suppose the environment is modified slightly in that if Ann orders the spaghetti then she will never learn the quality of the risotto, but since the spaghetti is always the same its quality is learned whether it is chosen or not. In this environment there is an informational asymmetry. Does this matter? The answer is:\ yes, it can matter because differing levels of ex-post information matter for behavioural individuals like Ann who factor ex-post information into their decision-making. In this new environment, the spaghetti dish is now relatively more desirable than before because choosing it completely insures against regret, while the total benefit associated with choosing the risotto remains unchanged. Depending on the parameters, it is easy to see that ex-post information structures varying with choice can lead to preference reversals.
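To fix ideas, here is a minimal numerical sketch (our own illustrative parameters, not estimates from the paper): utilities are linear, and regret enters linearly, with coefficient $k$, measured against the best \emph{learned} alternative.
\begin{verbatim}
# Illustrative preference reversal under choice-dependent feedback.
# Risky dish ("risotto"): 12 or 2 with equal probability.
# Safe dish ("spaghetti"): 6.5 for sure; its quality is always learned.
# Regret-averse utility: payoff - k * max(best learned payoff - payoff, 0).
k = 0.5
risotto = [(0.5, 12.0), (0.5, 2.0)]
spaghetti = 6.5

# Full ex-post feedback: the risotto outcome is learned either way.
U_r = sum(p * (x - k * max(spaghetti - x, 0.0)) for p, x in risotto)
U_s = spaghetti - k * sum(p * max(x - spaghetti, 0.0) for p, x in risotto)

# Choice-dependent feedback: risotto outcome learned only if chosen.
U_r_cd = U_r        # unchanged: the safe dish is always a comparison point
U_s_cd = spaghetti  # fully insured against regret

print(U_r, U_s)        # 5.875 > 5.125: risky preferred under full feedback
print(U_r_cd, U_s_cd)  # 5.875 < 6.5: safe preferred under partial feedback
\end{verbatim}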
People may learn from others. Imagine that Ann has a friend called Barry who joins her for dinner. If Barry orders the risotto, then ordering the spaghetti is no longer a safe haven from regret for Ann as it was when she was dining alone. Previously, by ordering the spaghetti, Ann was completely insured against regret. But now, if Ann orders the spaghetti and learns via Barry's order that the risotto was excellent, then she will experience regret. In this case, Barry's choice has imposed an externality on Ann:\ by ordering the risotto, Barry impacts Ann's ex-post information, which in turn impacts the psychological payoff (i.e., the payoff from the choice made plus the potential psychological loss due to regret) she associates with a given choice. So, what on the surface appeared to be two independent decision problems made in isolation is in fact a behavioural game, since each person's choice can reveal information to others.
In this paper we present a model that captures the Ann and Barry story above.\footnote{The example is based on \citet{ArielyLevav:2000:JCR}. In that paper, the focus was on the differences in patrons' orders in two different settings:\ one where patrons order simultaneously and another where they order sequentially. \citet{ArielyLevav:2000:JCR} noted disparities in order choice (i.e., the propensity to ``coordinate'') depending on the protocol. The model in this paper can explain this finding.} While the model can handle environments far richer than this example, we believe that the simple story captures all the salient features. That is, we start out with the (obvious) observation that ex-post information matters for regret. We then provide some formal machinery that allows the analyst to model how the ex-post information available to a regret averse individual can vary with choice and hence affect optimal choice.\footnote{In fact we allow for the possibility that the ex-post information structure is not fixed. That is, there may be multiple ex-post informational environments associated with a given choice and there is a given probability distribution over the different ex-post information structures.} Effectively, this requires extending the domain of preferences from `objects of choice' to `objects of choice and their associated information environment'. Our first result, Theorem \ref{thm:regretEquivalent}, shows that when there is full ex-post information, an individual with standard preferences and an individual who is regret averse with a linear regret term are observationally equivalent in their choice behaviour.
We then observe how Theorem \ref{thm:regretEquivalent} breaks down once ex-post information depends on choice. In our opinion it seems important that experimenters are aware of this fact. A full classification of precisely how and when ex-post information environments impact optimal choice is not possible: not all information sets can be ordered. But our second result, Theorem \ref{thm:lessInfo}, shows how information sets can be partially ordered. In particular, we show that `more' ex-post information (formally defined later) is never desirable for a regret averse decision maker. Ignorance is bliss. In a sense this is obvious:\ a regret averse individual `zeroes-in' on the best-performing lottery that is learned about in each state, creating a state-dependent reference point that can only be matched but never exceeded. So when the number of outcomes that are learned about increases, regret can only go up.
After this, we allow for the possibility that the ex-post information available depends not only on one's own choice but also on the choices of others. Precisely, we consider an environment wherein a large number of regret averse individuals choose from a common choice set. We show that such an environment is not a series of independent decision problems to be analysed in isolation, but is in fact a rich multi-player behavioural game, that we term the regret game. When the behaviour of others can impact one's ex-post information, for certain parameter specifications the regret game is a game of coordination that admits multiple equilibria; this despite the fact that the same individuals all have a strictly dominant choice when faced with the same decision problem in isolation. Theorem \ref{thm:Nashmany} is a precise statement of this.
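Continuing the numerical sketch above (again with our own illustrative parameters), the induced regret game has exactly this coordination structure:
\begin{verbatim}
# The regret game induced by the illustrative parameters above (k = 0.5).
# A player who chooses the safe dish learns the risky outcome only if
# her partner chooses the risky dish.
U = {
    ("risky", "risky"): 5.875,
    ("risky", "safe"):  5.875,
    ("safe",  "risky"): 5.125,  # partner's order reveals the risky outcome
    ("safe",  "safe"):  6.5,    # no one learns it: full insurance
}

for partner in ("risky", "safe"):
    br = max(("risky", "safe"), key=lambda own: U[(own, partner)])
    print(partner, "->", br)    # the best response matches the partner

# Two pure-strategy equilibria, (risky, risky) and (safe, safe): a
# coordination game, although "safe" is strictly dominant in isolation.
\end{verbatim}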
The term `asymmetric information' is typically understood to mean `ex-ante asymmetric information'. The idea is that in advance of some economic interaction, say a potential exchange, one individual is relatively more informed. The focus is on how the outcomes in such an incomplete information environment may differ from that wherein information is complete. The regret game has, in a sense, exactly the opposite core feature of `ex-post asymmetric information'. At the onset everyone possesses the same information; but after actions have been taken individuals may possess differing levels of information. A particular type of market failure can emerge:\ all individuals take a decision that is collectively suboptimal so as to avoid being the one with more, not less, information ex-post.
We then move to testing our theory experimentally. In a big-picture sense, we begin by eliciting certainty equivalents when ex-post information is total and compare it with the elicited certainty equivalents when ex-post information depends on choice. We find strong support for Theorem \ref{thm:lessInfo} in that many subjects make different choices when the only difference is the ex-post information supplied. (Theorem \ref{thm:regretEquivalent} ensures that our mechanism is incentive compatible.) The results from these initial decisions also allow us to classify participants into those who are regret averse and those who are not,\footnote{As further discussed in Section \ref{experiment1}, the idea that the expectation of feedback affects anticipated regret and thus behaviour has been studied in the psychology literature \cite{Zeelenberg:1996}. Yet ours is the first paper that identifies regret averse individuals by manipulating feedback and using an incentive-compatible mechanism.} and to calibrate the parameters used in the second part of the experiment wherein our participants play a version of the regret game.
More specifically, we run two experiments both using a within-subject design. The first experiment consists of two parts. In the first part, we elicit participants' preferences over a sure amount of money and a risky lottery under two different information conditions -- one where participants learn the risky lottery's outcome even if they do not choose it, and one where they do not learn the risky lottery's outcome unless they choose it.
In the second part of the experiment, participants are matched in pairs and play the regret game. They must choose between a sure amount of money and a lottery. If they do not choose the lottery, they will learn its outcome only if their partner chose the lottery.
We find that, when the regret game has a unique equilibrium in dominant strategies, the vast majority of participants choose it whether they are regret averse or not. When the regret game is a game of coordination, regret averse participants try to coordinate with their partner. This supports our model's predictions. We observe a positive and highly significant impact of beliefs on choice for regret averse participants. When we focus on the last iteration of the game, the effect is even stronger. We also observe a positive impact of beliefs on choice for non regret averse participants, but the effect is smaller and only marginally significant.
These results indicate that regret aversion can drive coordination. However, in our setting there may have been two additional drivers of coordination:\ preferences for conformism \cite{CharnessRigottiRustichini:2017:GEB,CharnessNaefSontuoso:2019:JET} and inequity averse preferences \cite{FehrSchmidt:1999:QJE}. To eliminate these potential confounds we ran a second experiment. The second experiment consists of two parts. In the first part, participants choose between a sure amount of money and a risky lottery, with subjects revealing to us their certainty equivalent for the risky lottery. Subjects are later asked whether they want to find out how much they would have earned had they chosen the risky lottery. If they choose not to find out, they forgo a small amount of money. This procedure allows us to classify participants into those who are regret averse and those who are not. In the second part of the experiment, participants are matched in pairs and play a variant of the regret game:\ they must choose whether they want to find out the risky lottery's outcome or not, but they can avoid finding out only if their partner also decides not to find out.
Again we find that regret averse participants try to coordinate with their partner. The probability that they choose what they think their partner chose is significantly higher than the probability that they choose the alternative. All regret averse participants who believe that their partner chose to avoid information, choose to avoid information. In contrast, non regret averse participants play the dominant strategy, and some of them do so under the belief that their partner chose the alternative option. Thus, once the aforementioned confounds are removed by design, we still find strong support for our model's predictions.
Regret aversion dates back to the classic works of \citet{Bell:1982} and \citet{LoomesSugden:1982:EJ}. However, these and subsequent studies of regret aversion assume that ex-post information is total.
In order for any behavioural bias to ``have bite'' after a decision has been made, information about foregone alternatives is required. We show how to model behavioural agents with biases that ``kick in'' after decisions have been taken, and further document how choice can vary with uncertainty over the environment that will be faced ex-post (in addition to the standard uncertainty associated with the outcomes of each choice). While we have focused on regret, we are hopeful the machinery will be useful for addressing other biases too.
While a number of papers explore how regret impacts choice in individual decision problems, the literature on regret averse preferences in strategic environments is extremely limited. A notable exception is \citet{Filiz-OzbayOzbay:2007:AER}, who incorporate anticipated regret into the preferences of agents partaking in a sealed-bid auction.\footnote{There is a large literature that studies the connection between dynamics based on ``regret minimisation'' and convergence to Nash equilibria and correlated equilibria \cite{Aumann:1974:JME}. See for example, \citet{FosterVohra:1997:GEB}, \citet{HartMas-Colell:2000:E}, and the textbook \citet{Cesa-BianchiLugosi:2006:}.} However, unlike in our paper, individuals in their environment either \emph{always} find out the outcome of the foregone alternative or never find out. That is, the informational environment is set exogenously and does not depend endogenously on the behaviour of individuals. In our set up, anticipated regret can serve as a coordination device.
In models of \emph{social learning} \cite{BikhchandaniHirshleifer:1992:JPE,Banerjee:1992:QJE}, the behaviour of others generates information but is payoff irrelevant. Mapping this to the Ann and Barry story, individuals choose sequentially basing their choice on a weighted combination of prior belief and the choices of those who went before. In a model of \emph{social interaction} (a game), someone else's choice is payoff relevant, but does not generate information. The regret game is a behavioural game that, in a sense, lies somewhere between the two settings above:\ someone else's choice is payoff relevant, but only because it can affect ex-post information and this can be potentially harmful to a behavioural individual.
The two papers closest to ours are \citet{Benabou:2013:RES} and \citet{CooperRege:2011:GEB}. Using preferences for late-resolution of uncertainty from \citet{KrepsPorteus:1978:E}, \citet{Benabou:2013:RES} presents a dynamic model that addresses the harmful issue of ``groupthink''. In the model each individual's payoff depends on the effort level of everyone (including his own) and the realisation of a random variable. The key feature is the inclusion of \emph{anticipatory utility} experienced from thinking about one's future prospects. The more positive the forecast, the better for the individual. This allows for multiple equilibria including, for example, one in which everyone in the population collectively ignores a negative public signal about the random variable. Such delusions can persist because individual $j$'s informational decision and effort choice can affect the risks of individual $i$'s psychological payoff, leading $i$ to make a different informational decision than he otherwise would. In our model by contrast, $j$'s choice of lottery can impact the informational environment that $i$ faces, changing $i$'s psychological payoffs and leading $i$ to make a different risk-taking decision than he would in isolation.
\citet{CooperRege:2011:GEB} present a model of ``social regret'' wherein the regret from a choice that turned out suboptimal is dampened if others chose the same. An individual tells himself that his decision could not have been too wrong if many others acted the same. Misery loves company. This model's key feature is the belief that an individual assigns to others choosing an alternative. The more likely another individual is to choose an alternative, the greater the expected regret from not choosing that alternative. Results from laboratory experiments confirm their hypothesis that social regret is a powerful force. In our model, an individual's choice imposes an externality on others by changing the information environment that they may face. In \citet{CooperRege:2011:GEB}, the effect of social regret is magnified by the number of others who make a different choice, but social regret is less intense when others make the same choice.\footnote{A related literature is that on envy or ``keeping up with the Joneses'', according to which people care not only about the absolute value of their own consumption, but also about the average (or per capita) level of consumption in the economy \cite{Gali:1994:JMCB}.}
The remainder of the paper is structured as follows. Section \ref{MODEL} presents the model.
Section \ref{experiments} describes the designs and results of our two experiments. Section \ref{CONCLUSION} concludes.
\section{The Model}\label{MODEL}
Ex-post information feedback is an integral part of the regret story. If the outcome of a foregone alternative is never learned, then how can it be regretted? In Subsection \ref{subsec:RegPref}, we build on this insight to formally explore the mechanics of regret aversion. In effect, a regret averse individual `zeroes-in' on the best-performing lottery in each state, creating a state-dependent reference point that can only be matched but never exceeded. Formally, identifying each state of nature by its best-performing lottery generates a \emph{regret-relevant information partition} of the state space. However, implicit in this construction is that alternatives, in particular the best-performing lottery, in each state will be learned about even if unchosen. We consider how preference reversals may occur when the regret-relevant partition that is faced depends on choice. {Using our example from Section \ref{INTRODUCTION}, we imagine that Ann only learns the outcome of the risotto if she orders it (since the spaghetti dish is a sure thing, it is always known)}.\footnote{We emphasise that risk free lotteries therefore play a very important role for regret averse individuals.} This asymmetry in ex-post information, and hence regret, increases the relative benefit of choosing spaghetti as it is a safe haven from regret.
In Subsection \ref{subsec:RegPrefUncertainInfo}, we extend the environment to one in which the information that will be learned ex-post is unknown in advance. That is, in addition to the `standard' uncertainty captured by risky lotteries, there is an additional layer of uncertainty due to not knowing what regret-relevant information partition will be faced ex-post. Regret may or may not be experienced. {Again using our example, we suppose that there is a possibility that Ann will learn of the risotto's quality even if she orders the spaghetti (perhaps she will observe someone at a different table order it). Ordering the spaghetti no longer provides full insurance from regret.} In Subsection \ref{subsec:strategicSetting} we extend the model to an environment where there are many regret averse individuals, and the regret-relevant information partition that is faced by an individual depends probabilistically upon her own choice \emph{and} the choices of others. {We suppose that Ann has a friend, Barry, who joins her for dinner. If Barry orders the risotto, then Ann will learn the risotto's quality.} In this case, Barry's behaviour imposes a negative externality on Ann's ex-post psychological payoff. This is a game in which Barry does not alter Ann's payoff directly, but rather he impacts the ex-post informational environment that Ann will face.
\subsection{The Decision Environment}\label{subsec:RegPref}
Let $\Omega$ denote a finite state space with typical element given by $\omega$. Uncertainty is captured by a probability measure $\mathbb{P}$ defined on $2^{\Omega}$, where $\mathbb{P}[\set{\omega}] >0$ for all $\omega \in \Omega$.\footnote{Since $\Omega$ is finite there are no technical headaches.} There is a choice set, $\mathcal{L}$, containing $n$ risky lotteries, labelled $\ell_{1}, \ell_{2}, \dots, \ell_{n}$, and a safe (risk-free) lottery, $\ell_S$. The outcome of lottery $\ell$ in state $\omega \in \Omega$ is denoted by $\ell(\omega)$. For simplicity we assume that each risky lottery, $\ell_{i}$, $i = 1, \dots, n$, is an independent Bernoulli random variable with outcome $\underline{\ell}_{i}$ occurring with probability $1-p_{i} \in (0, 1)$ and outcome $\overline{\ell}_{i}$ occurring with probability $p_{i}$. We further assume that there are no payoff ties and that outcomes are structured such that $\max_{i}{\underline{\ell}_{i}} < \ell_{S} < \min_{i}{\overline{\ell}_{i}}$.
Therefore there are exactly $2^{n}$ states in $\Omega$. Note that the risk free lottery, $\ell_S$, is the lottery with highest return in only one state; in all other states, at least one of the risky lotteries will outperform it.\footnote{\label{fn:risk-free}Since the $n$ risky lotteries have uncorrelated returns, then the probability that the risk-free lottery is the best-performing lottery equals $\prod_{i=1}^{n}(1-p_{i})$. This tends to zero as $n$ gets large.}
Let $u(\cdot)$ be a real-valued \emph{choiceless utility} function defined on $\mathcal{L} \times \Omega$ that satisfies the usual conditions.\footnote{The term ``choiceless utility'' was introduced by \citet{LoomesSugden:1982:EJ}. It is so-called as it is the utility experienced if the decision maker is simply \emph{assigned} lottery $\ell$ and the resulting state is $\omega$.} Let $R(\ell; \omega)$ capture the experienced regret in state $\omega$ when lottery $\ell$ was chosen; formally it is a function of the difference between `choiceless utility from what turned out to be the best possible decision' and `choiceless utility from the decision made'.\footnote{This is different to the set up of \citet{LoomesSugden:1982:EJ}, who view regret as stemming from pairwise comparisons. Our formulation is based on \citet{Sarver:2008:E}, where the comparison lottery is that which performed best in the realised state.}
The total utility experienced by a decision maker in state $\omega \in \Omega$, when lottery $\ell \in \mathcal{L}$ is chosen, is denoted by $u^{T}$ and is defined as
\begin{align}
u^{T}\big(\ell(\omega)\big) &= u\big(\ell(\omega)\big) - R(\ell; \omega) \label{eq:regret1}\\
&= u\big(\ell(\omega)\big) - \kappa \big( \max_{\ell' \in \mathcal{L}}u(\ell'(\omega)) - u(\ell(\omega)) \big) \label{eq:regret2}
\end{align}
where the parameter $\kappa \geq 0$ is the coefficient of regret aversion, which is assumed to be state-independent.
A decision maker compares the expected total utility of all the lotteries. Letting $\mathbb{E}_{\omega}$ denote the expectation operator with respect to $\mathbb{P}$, we can state the decision maker's optimisation problem as,
\begin{equation}
\max_{\ell \in \mathcal{L}} \mathbb{E}_{\omega}\Big[u^{T} \big(\ell(\omega)\big)\Big] \label{eq:Dproblem}
\end{equation}
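To fix ideas, the following minimal numerical sketch (our own illustration; the parameter values and variable names are assumptions, not taken from the paper) computes the expected total utility of \eqref{eq:regret1}--\eqref{eq:Dproblem} for a small environment with two risky Bernoulli lotteries and a safe lottery, taking $u$ to be the identity:
\begin{verbatim}
import itertools

# A minimal sketch (ours): two independent Bernoulli lotteries plus a safe
# lottery; the choiceless utility u is the identity.  All values are
# illustrative assumptions.
p  = {"l1": 0.5, "l2": 0.4}      # success probabilities p_i
hi = {"l1": 12.0, "l2": 10.0}    # outcomes \overline{ell}_i
lo = {"l1": 1.0, "l2": 2.0}      # outcomes \underline{ell}_i
safe, kappa = 5.0, 0.6           # ell_S and the regret coefficient

lotteries = ["l1", "l2", "S"]

def outcome(l, state):
    if l == "S":
        return safe
    return hi[l] if state[l] else lo[l]

def prob(state):
    pr = 1.0
    for i in p:
        pr *= p[i] if state[i] else 1 - p[i]
    return pr

def expected_total_utility(l):
    # E[ u(l) - kappa * (max_l' u(l') - u(l)) ], as in eq. (regret1)-(regret2)
    total = 0.0
    for vals in itertools.product([True, False], repeat=len(p)):
        state = dict(zip(p.keys(), vals))
        best = max(outcome(m, state) for m in lotteries)
        total += prob(state) * (outcome(l, state)
                                - kappa * (best - outcome(l, state)))
    return total

print(max(lotteries, key=expected_total_utility))  # eq. (Dproblem)
\end{verbatim}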
We now state our first result.
\begin{theorem}\label{thm:regretEquivalent}
Suppose the outcome of all lotteries will always be learned ex-post. Then, for every pair of lotteries $\ell_{i}$ and $\ell_{j}$ in $\mathcal{L}$,
\[
\mathbb{E}_{\omega}\Big[u \big(\ell_{i}(\omega)\big)\Big] \geq \mathbb{E}_{\omega}\Big[u \big(\ell_{j}(\omega)\big)\Big] \iff \mathbb{E}_{\omega}\Big[u^{T} \big(\ell_{i}(\omega)\big)\Big] \geq \mathbb{E}_{\omega}\Big[u^{T}\big(\ell_{j}(\omega)\big)\Big]
\]
\end{theorem}
The proof is in the Appendix. Theorem \ref{thm:regretEquivalent} states that two decision makers, one with standard preferences ($\kappa=0$) and the other regret averse ($\kappa>0$), are indistinguishable in their choice behaviour. While the proof is almost immediate, it is important for our experimental purposes as it ensures that the incentive compatible mechanism that we use in our experiments, the BDM \citep{BDM}, remains incentive compatible under regret averse preferences. We note that while Theorem \ref{thm:regretEquivalent} is stated for Bernoulli lotteries, it is easily extended to an environment with more general lotteries provided everything remains finite.
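The idea behind the proof can be seen in one line. Rearranging \eqref{eq:regret1} gives
\[
\mathbb{E}_{\omega}\Big[u^{T}\big(\ell(\omega)\big)\Big] = (1+\kappa)\,\mathbb{E}_{\omega}\Big[u\big(\ell(\omega)\big)\Big] - \kappa\, \mathbb{E}_{\omega}\Big[\max_{\ell' \in \mathcal{L}}u\big(\ell'(\omega)\big)\Big],
\]
and when ex-post information is total the final term does not depend on the lottery chosen, so expected total utility is a positive affine transformation of expected choiceless utility.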
While interesting, Theorem \ref{thm:regretEquivalent} does require that the outcome of every lottery is learned ex-post. However, while a lottery's outcome will always be learned when it is chosen, the same may not be true for unchosen lotteries. In such environments, what really matters to a regret averse individual is not the best performing lottery in each state, but the best performing lottery in each state that is learned about. The standard regret framework is insufficient for these purposes and needs to be extended to allow for the possibility that the information available ex-post depends on choice. We develop such an extension now.
Begin by relabelling the lotteries in such a way that $\overline{\ell}_{1} > \overline{\ell}_{2} > \overline{\ell}_{3} > \dots > \overline{\ell}_{n-1} > \overline{\ell}_{n} > \ell_{S}$. Now, labelling the safe lottery $\ell_{S}$ as lottery $n+1$, for each $i =1, \dots, n+1$, define the event $F(i)$ as follows:
\begin{equation}
F(i) := \set{\omega \in \Omega \,\, \Big| \,\, \ell_{i}(\omega) = \max_{\ell_j\in\mathcal{L}} \ell_{j}(\omega)} \label{eq:fullPartition}
\end{equation}
In words, event $F(i)$ is the set of states upon which lottery $\ell_{i}$ is the best-performing. {From our assumptions on the structure of lottery outcomes, each $F(i)$ is nonempty and the collection $\set{F(j)}_{j = 1}^{n+1}$ forms a partition of the state space $\Omega$.} That is, a regret averse individual creates a reference point for every state, where the state-dependent reference point is the outcome of the best-performing lottery. Such a reference point can only be matched but never exceeded.
The partition given in \eqref{eq:fullPartition} assumes that ex-post information will be total. But what happens when the ex-post information the decision maker receives depends on choice? That is, consider the event $F(j)$ but suppose that the environment is changed such that the outcome of $\ell_{j}$ in state $\omega \in F(j)$ is not learned when lottery $\ell_{k}$ is chosen. Clearly the state dependent reference point $\ell_{j}(\omega)$ is no longer valid. We assume that it is replaced with the payoff to the best-performing lottery that is learned about in state $\omega$.
When lottery $\ell_{k}$ is chosen, we let $\mathcal{O}_{k}\subseteq \mathcal{L}$ denote the set of lotteries whose outcomes are observed.\footnote{Note that this assumes that, conditional on choosing lottery $k$, the set of lotteries whose outcomes are learned does not vary with the realised state. While this generalisation is easily incorporated, the simpler version is sufficient for our purposes.} Clearly $\mathcal{O}_{k}$ is nonempty for all $k$ as the outcome of the chosen lottery is always learned. To incorporate ex-post information that depends on choice, we amend \eqref{eq:fullPartition} above, and define $F_k(j)$ as follows.
\begin{equation}
F_k(j) := \set{\omega \in \Omega \,\, \Big| \,\, \ell_{j}(\omega) = \max_{\ell_h\in\mathcal{O}_{k}} \ell_{h}(\omega)} \label{eq:partition}
\end{equation}
That is, $F_k(j)$ is the set of states where lottery $j$ is the best performing lottery that is learned about conditional on lottery $k$ being chosen. Unlike the events defined in \eqref{eq:fullPartition}, it is possible for $F_k(j)$ to be empty. This is despite the fact that every lottery is the best-performing in at least one state.
So, with every lottery $k$ the decision maker associates a partition of $\Omega$ given by $\pi_{k} = \set{F_{k}(j)}_{\ell_{j}\in \mathcal{L}}$. We call this the \emph{regret relevant information partition} associated with lottery $k$. Note that when the outcomes of all lotteries can be learned ex-post, $F_{k}(j)=F (j)$, for every lottery $k$.\footnote{In this case the collection $\set{F(i)}_{i=1}^{n+1}$ forms the \emph{fundamental regret relevant information partition} that we denote by $\pi_{0}$.}
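As a concrete illustration (our own sketch, with hypothetical payoffs), the regret relevant information partitions in \eqref{eq:partition} can be computed directly from the observation sets $\mathcal{O}_{k}$:
\begin{verbatim}
# A sketch (ours): the regret-relevant partition pi_k = {F_k(j)} for one
# risky lottery "r" and the safe lottery "S", identity choiceless utility.
states = ["w1", "w2"]                      # w1: r succeeds, w2: r fails
payoff = {("r", "w1"): 12.0, ("r", "w2"): 1.0,
          ("S", "w1"): 5.0,  ("S", "w2"): 5.0}

# O[k]: lotteries observed when k is chosen; here the risky outcome is
# learned only if the risky lottery is chosen.
O = {"r": {"r", "S"}, "S": {"S"}}

def partition(k):
    pi = {}
    for j in sorted(O[k]):
        # F_k(j): states where j is the best-performing observed lottery
        F = [w for w in states
             if payoff[(j, w)] == max(payoff[(h, w)] for h in O[k])]
        if F:                              # F_k(j) may be empty in general
            pi[j] = F
    return pi

print(partition("r"))   # {'S': ['w2'], 'r': ['w1']} -- the fine partition
print(partition("S"))   # {'S': ['w1', 'w2']}        -- coarse: no regret info
\end{verbatim}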
We define an \emph{ex-post information environment}, $\Pi$, as the collection of regret relevant information partitions, one for each lottery. That is, $\Pi = \set{\pi_{k}}_{k=1}^{n+1}$. We now show how to define a partial order on the collection of all ex-post information environments that allows us to rank them in terms of greater ex-post `informativeness'.
\begin{defn}\label{def:moreInfo}
We say that ex-post information environment $\Pi' = \set{\pi_{k}'}_{k=1}^{n+1}$ is \emph{more informative} than ex-post information environment $\Pi'' = \set{\pi_{k}''}_{k=1}^{n+1}$ if for every $k = 1, \dots, n+1$, regret relevant information partition $\pi_{k}'$ is as fine as regret relevant information partition $\pi_{k}''$, and $\pi_{j}'$ is strictly finer than $\pi_{j}''$ for at least one lottery $j$.\footnote{Given a set $S$ and two partitions of $S$, $\rho_{1}$ and $\rho_{2}$, we say that $\rho_{1}$ is finer than $\rho_{2}$ if every element $\alpha$ of $\rho_{1}$ is a subset of some element of $\rho_{2}$.}
\end{defn}
We now show that a more informative ex-post information environment is not desirable for a regret averse individual.
\begin{theorem}\label{thm:lessInfo}
Consider two ex-post information environments $\Pi'$ and $\Pi''$ such that $\Pi'$ is more informative than $\Pi''$. Then a regret averse individual prefers ex-post information environment $\Pi''$ to $\Pi'$, in the sense that the expected total utility associated with a given choice of lottery is never higher in $\Pi'$.
\end{theorem}
Theorem \ref{thm:lessInfo} is very intuitive. A regret averse individual constructs a reference point for every (lottery, state) pair, given by the best-performing lottery that is learned about. A higher reference point is bad, as the difference between the reference point and the chosen lottery's outcome can only increase, thereby increasing the regret that is experienced. By the definition of a more informative environment, the reference point never decreases and strictly increases for at least one (lottery, state) pair. Since the choiceless utility experienced does not depend on the ex-post information environment and a more informative environment brings with it more regret, such an environment is not attractive to a regret averse individual.
We now show how to encode the example of Ann deciding between restaurants from Section \ref{INTRODUCTION} into our framework.
\begin{ex}
Regret averse Ann must decide between two restaurants, $R$ and $S$. There are two states of the world, $\omega_{1}$ and $\omega_{2}$, with the probability of state $\omega_{1}$ given by $p \in (0, 1)$. Restaurant $R$ is risky:\ its risotto brings payoff ${\underline{\ell}_{R}}$ in state $\omega_{1}$ and payoff ${\overline{\ell}_{R}}$ in state $\omega_{2}$. Restaurant $S$ is the safe choice, with its spaghetti bringing payoff $\ell_{S}$ in both states. Payoffs are structured such that ${\underline{\ell}_{R}} < \ell_{S} < {\overline{\ell}_{R}}$.
There are two information environments $\Pi'$ and $\Pi''$. Under $\Pi'$, the outcome of both lotteries is always learned ex post, while under $\Pi''$, the outcome of lottery $R$ is learned only if chosen. Therefore, we have
\begin{align*}
\text{Under } \Pi': & \hspace{.2in} \mathcal{O}_{\ell_{R}} = \set{\ell_{R}, \ell_{S}}, \mathcal{O}_{\ell_{S}} = \set{\ell_{R}, \ell_{S}} \text{ and } \pi'_{\ell_{R}} = \set{\set{\omega_{1}}, \set{\omega_{2}} }= \pi'_{\ell_{S}} \\
\text{Under } \Pi'': & \hspace{.2in} \mathcal{O}_{\ell_{R}} = \set{\ell_{R}, \ell_{S}}, \mathcal{O}_{\ell_{S}} = \set{\ell_{S}} \text{ and } \pi''_{\ell_{R}} = \set{\set{\omega_{1}}, \set{\omega_{2}}} \neq \pi''_{\ell_{S}} = \set{\set{\omega_{1}, \omega_{2}}}
\end{align*}
Let us now consider the expected total utility associated with each choice of lottery. That is, abusing notation slightly, we evaluate $u^{T}(\ell, \Pi)$, making the dependence on the information environment $\Pi$ explicit. We have
\begin{align*}
u^{T}(\ell_{R}, \Pi') &= p\Big(u({\underline{\ell}_{R}}) - \kappa\big(u(\ell_{S}) - u({\underline{\ell}_{R}})\big)\Big) &+ (1-p)\big(u({\overline{\ell}_{R}})\big)\\
u^{T}(\ell_{S}, \Pi') &= pu({\ell}_{S}) &+ (1-p)\Big(u(\ell_{S}) - \kappa\big(u({\overline{\ell}_{R}}) - u(\ell_{S})\big)\Big)
\end{align*}
while
\begin{align*}
u^{T}(\ell_{R}, \Pi'') &= p\Big(u({\underline{\ell}_{R}}) - \kappa\big(u(\ell_{S}) - u({\underline{\ell}_{R}})\big)\Big) &+ (1-p)\big(u({\overline{\ell}_{R}})\big)\\
u^{T}(\ell_{S}, \Pi'') &= pu({\ell}_{S}) &+ (1-p)u(\ell_{S})
\end{align*}
where we note that under information environment $\Pi''$, by choosing $\ell_{S}$ Ann is completely insured against regret. Note that $\ell_{S}$ has become relatively more attractive to Ann in $\Pi''$.
\end{ex}
We conclude this section by reiterating the main observation. Models of regret aversion assume that the decision maker identifies each state by its best-performing lottery. But this is not always possible. When it is not, the regret averse decision maker identifies each state by the best-performing lottery that is learned about. Finally, which lottery outcomes are learned and which are not may be choice dependent. It then immediately follows that varying the information environment can impact (optimal) choice. In the next subsection we will allow for the possibility that multiple regret-relevant information partitions can be associated with each lottery choice. That is, there will be further uncertainty about what ex-post information environment will be faced.
\subsection{Uncertainty Over the Ex-post Information Structure}\label{subsec:RegPrefUncertainInfo}
In this section we allow for the possibility that the outcomes of unchosen lotteries may or may not be learned. That is, there may be more than one regret relevant partition associated with a lottery choice. To illustrate the effect this can have on optimal choice, we limit attention to the environment where the choice set $\mathcal{L}$ contains only one risky lottery $\ell_{r}$ and the risk free lottery $\ell_{S}$. With only one risky lottery the state space is $\set{\omega_{1}, \omega_{2}}$, where $\omega_{1}$ and payoff $\overline{\ell}_{r}$ occur with probability $p \in (0, 1)$, and $\omega_{2}$ and payoff $\underline{\ell}_{r}$ occur with probability $1-p$.
The risk-free lottery has the property that its outcome will always be known whether it was chosen or not. The same need not be true of the risky lottery. We let $q \in [0, 1]$ be the probability that an agent learns the outcome of the risky lottery, $\ell_{r}$, conditional on choosing the safe lottery, $\ell_{S}$. That is, conditional on choosing the risk-free lottery, with probability $q$ the individual faces regret relevant partition $\pi_{0} = \set{F(S), F(\ell_{r})}$, and with probability $1-q$ he faces partition $\pi_{S}' = \set{F'(S)}$ where $F'(S) = \Omega$. Utility is then given by
\begin{equation}\label{eq:URstate0}
u^{T}(\ell_{S}, \omega, q) = \left\{ \begin{array}{rl}
u(\ell_{S}) - q\kappa\big(u(\overline{\ell}_{r}) - u(\ell_{S}) \big), &\mbox{ if } \omega = \omega_{1} \\
u(\ell_{S}), &\mbox{ if } \omega = \omega_{2}
\end{array} \right.
\end{equation}
and
\begin{equation}\label{eq:URstate1}
u^{T}(\ell_{r}, \omega, q) = \left\{ \begin{array}{rl}
u(\overline{\ell}_{r}) , &\mbox{ if } \omega = \omega_{1} \\
u(\underline{\ell}_{r}) - \kappa\big( u(\ell_{S}) - u(\underline{\ell}_{r}) \big), &\mbox{ if } \omega = \omega_{2}
\end{array} \right.
\end{equation}
Both the risky lottery, $\ell_{r}$, and the risk-free lottery, $\ell_{S}$, bring with them a benefit and a cost. The benefit is the direct choiceless utility associated with each; the cost is the psychological penalty that may be incurred in the event one's choice is not optimal in the realised state. For the risky lottery, $\ell_{r}$, both the expected benefit and expected cost are fixed. For the risk-free lottery, $\ell_{S}$, the expected benefit is fixed but the expected cost is not. It may or may not be incurred.
Using \eqref{eq:URstate0} and \eqref{eq:URstate1}, it is simple to calculate when the risky option is preferable for a regret averse individual. To make things as clear as possible, we normalise the choiceless utility function $u$ so that $u(\underline{\ell}_{r}) = 0$ and $u(\ell_{S}) = 1$. With this, the condition for the risky lottery $\ell_{r}$ to be preferred to the risk-free lottery $\ell_{S}$ reduces to
\begin{equation}\label{eq:EURcondition}
u(\overline{\ell}_{r}) \geq \frac{1}{p}\left(\frac{1+\kappa \big(1-p(1-q)\big)}{1+q\kappa}\right)
\end{equation}
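For completeness, we record the short calculation behind \eqref{eq:EURcondition}. Under the normalisation above, \eqref{eq:URstate0} and \eqref{eq:URstate1} give
\begin{align*}
\mathbb{E}\big[u^{T}(\ell_{r}, \omega, q)\big] &= p\, u(\overline{\ell}_{r}) - (1-p)\kappa, \\
\mathbb{E}\big[u^{T}(\ell_{S}, \omega, q)\big] &= 1 - p q \kappa \big(u(\overline{\ell}_{r}) - 1\big),
\end{align*}
and rearranging $\mathbb{E}\big[u^{T}(\ell_{r}, \omega, q)\big] \geq \mathbb{E}\big[u^{T}(\ell_{S}, \omega, q)\big]$ yields \eqref{eq:EURcondition}.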
Expression \eqref{eq:EURcondition} is bookended by two important cases. When $q=1$ the decision maker will learn the outcome of the risky option no matter what. Here, there is no distortion to the threshold rule relative to standard preferences, in that,
\begin{equation}\label{eq:EURconditionq=1}
u(\overline{\ell}_{r}) \geq \frac{1}{p}
\end{equation}
When $q = 0$ however, the decision maker knows that he will definitely \emph{not} learn the outcome of the risky option unless he opts for it. Then \eqref{eq:EURcondition} becomes,
\begin{equation}\label{eq:EURconditionq=0}
u(\overline{\ell}_{r}) \geq \frac{1}{p}\big( 1+\kappa(1-p) \big)
\end{equation}
Given that $\kappa>0$ and $p \in (0, 1)$, the inequality in \eqref{eq:EURconditionq=0} is a more demanding condition on the risky lottery $\ell_{r}$ than that in \eqref{eq:EURconditionq=1}. That is, when a regret averse decision maker will certainly not find out the realisation of the risky option without choosing it, he requires it to have a more desirable payoff distribution. (It can be checked that the right-hand side of \eqref{eq:EURcondition} is strictly decreasing in $q$ over the interval $[0, 1]$.)
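The monotonicity claim in parentheses is easily verified symbolically; a quick check (ours), using the \texttt{sympy} library:
\begin{verbatim}
import sympy as sp

# Check (ours) that the RHS of eq. (EURcondition) strictly decreases in q.
p, q, kappa = sp.symbols("p q kappa", positive=True)
rhs = (1 + kappa * (1 - p * (1 - q))) / (p * (1 + q * kappa))

# The derivative simplifies to kappa*(p - 1)*(1 + kappa)/(p*(1 + kappa*q)**2),
# which is negative whenever p < 1 and kappa > 0.
print(sp.simplify(sp.diff(rhs, q)))
\end{verbatim}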
The reason for the discrepancy above is that the risk-free lottery can provide insurance against regret. When $q=0$, the risk-free lottery provides complete insurance against regret. There is complete asymmetry in anticipated regret:\ if the decision maker chooses the risky option, he knows he will be able to make an ex-post comparison and feel regret if the risky option is not successful, whereas if he chooses the risk-free lottery, he knows he will not be able to make such a comparison. Because of the insurance against regret offered by the risk-free lottery, the outcome of the risky lottery in the event of a success must be high enough to tempt the decision maker away from the security of the risk-free lottery. Ignorance is bliss. On the other hand, when $q=1$, the risk-free lottery offers no insurance against regret. The decision maker knows he will learn the outcome of the risky lottery whether he chooses it or not. The regret considerations are symmetric and cancel each other out, and the individual's condition is the same as if he were regret neutral.
Intuitively, the benefit that the risky lottery must yield in order to be chosen becomes lower as $q$ increases, due to the reduction in anticipated regret. As $q$ decreases (i.e., as the likelihood of making an {ex-post} comparison when the risky option is not chosen falls), the agent is increasingly insured, from an {ex-ante} perspective, against regret. And because of this insurance against potential regret, the risky lottery's outcome must increase to tempt the agent away from the safe option.
In the next subsection, we extend the setting to one with multiple regret averse agents, and we suppose that $q$ is endogenously determined by the choices of others. In particular, we assume that $q$ is increasing in the number of other agents who choose the risky lottery. This seemingly minor amendment turns a series of single-person decision problems into a multi-player behavioural game.
\subsection{Strategic Setting: The Regret Game}\label{subsec:strategicSetting}
\subsubsection{The set up}
We stick with the 2-state 2-lottery environment from the previous subsection. Suppose now that there are $N$ decision makers, each of whom is regret averse, i.e., with preferences represented by $u^{T}$ with $\kappa>0$. The key modelling feature we introduce is that an individual's likelihood of learning about the risky outcome conditional on choosing the risk-free lottery is a function of the behaviour of others.
To capture the above, we define a symmetric $N$-player (simultaneous-move) behavioural game with common action set $A$ and common utility function $u^{T}$.\footnote{This is an abuse of notation as the domain on which utility is defined will now be extended to include the behaviour of others.} We identify the action set $A$ with the 2-element choice set $\mathcal{L} = \set{\ell_{r}, \ell_{S}}$. Each player $i$ chooses an action $a_{i} \in A = \set{\ell_{r}, \ell_{S}}$, and has utility function $\Ui^{T} : \mathbf{A} \times \set{\omega_{1}, \omega_{2}} \to \mathbb{R}$, where $\mathbf{A} := \prod_{j =1}^{N} A_{j}$, with typical element $\mathbf{a} = (a_{1}, \dots, a_{N})$. From player $i$'s perspective, a pure action profile $\mathbf{a} \in \mathbf{A}$ can be viewed as $(a_{i}, \mathbf{a}_{-i})$, so that $(\hat{a}_{i}, \mathbf{a}_{-i})$ will refer to the profile $(a_{1}, \dots, a_{i-1}, \hat{a}_{i}, a_{i+1}, \dots, a_{N})$, i.e., the action profile $\mathbf{a}$ with $\hat{a}_{i}$ replacing $a_{i}$. The utility function of agent $i$ is as defined in \eqref{eq:URstate0} and \eqref{eq:URstate1} save one difference:\ the probability that $i$ learns the outcome of the risky lottery when it is not chosen, denoted $q_{i}$, depends on the behaviour of the other agents. Formally,
\begin{equation}\label{eq:UiRstate0}
\Ui^{T}\big((\ell_{S}, \mathbf{a}_{-i}), \omega\big) = \left\{ \begin{array}{rl}
u(\ell_{S}) - q_{i}(\mathbf{a}_{-i})\kappa\big(u(\overline{\ell}_{r}) - u(\ell_{S}) \big), &\mbox{ if } \omega = \omega_{1} \\
u(\ell_{S}), &\mbox{ if } \omega = \omega_{2}
\end{array} \right.
\end{equation}
and
\begin{equation}\label{eq:UiRstate1}
\Ui^{T}\big((\ell_{r}, \mathbf{a}_{-i}), \omega\big) = \left\{ \begin{array}{rl}
u(\overline{\ell}_{r}) , &\mbox{ if } \omega = \omega_{1} \\
u(\underline{\ell}_{r}) - \kappa\big( u(\ell_{S}) - u(\underline{\ell}_{r}) \big), &\mbox{ if } \omega = \omega_{2}
\end{array} \right.
\end{equation}
where $q_{i}(\mathbf{a}_{-i})$
is assumed to be strictly increasing in the number of others that choose $\ell_{r}$.\footnote{Assuming that $q_{i}$ is increasing seems natural to us. The more individuals who choose the risky lottery, the harder it ought to be not to learn about its performance ex-post. But while it seems implausible that the function $q_{i}$ would be decreasing at any point, we leave its precise functional form unspecified. One can imagine settings in which $q_{i}$ is linear, concave, or convex. One can even envisage settings where $q_{i}$ is a step function, in that the outcome of $\ell_{r}$ cannot be avoided once \emph{enough} individuals have chosen it.} That is, abusing notation somewhat, we let $\abs{\mathbf{a}_{-i}} := \#\set{j \neq i : a_{j} = \ell_{r} }$. Then for any two profiles $\mathbf{a}'$ and $\mathbf{a}''$ we have $q_{i}(\abs{\mathbf{a}_{-i}'}) > q_{i}(\abs{\mathbf{a}_{-i}''})$ if and only if $\abs{\mathbf{a}_{-i}'} > \abs{\mathbf{a}_{-i}''}$. Finally, we assume that $q_{i}(0) = 0$ and $q_{i}(N-1) = 1$.
We emphasise that, while player $i$'s utility depends on the choices of everyone, and hence the set up is a strategic game, the dependence is not direct. Rather, it manifests through the likelihood that player $j$'s choice of lottery will impact player $i$'s ex-post regret relevant partition. It is individual $j$'s risk taking that can generate information for individual $i$, in turn altering $i$'s psychological payoffs, in turn altering the relative benefits of each choice. We now characterise the set of equilibria of this set up, showing how individual psychological motives can lead to socially interdependent decisions.
\subsubsection{Equilibria}
The above defines the Regret Game. There are three classes of the game, depending upon the parameters. When
$u(\overline{\ell}_{r})<1/p$, it is a dominant choice for each player to choose the safe lottery $\ell_{S}$. Similarly, when $u(\overline{\ell}_{r}) > \frac{1}{p}\left( 1+\kappa(1-p) \right)$, choosing the risky lottery $\ell_{r}$ is the dominant action. For intermediate values of $u(\overline{\ell}_{r})$ however, the game is one of coordination and has two pure-strategy Nash equilibria:\ all players choose $\ell_{S}$, and all players choose $\ell_{r}$.\footnote{There is also a completely mixed strategy equilibrium but we ignore it as it is very unstable.} Theorem \ref{thm:Nashmany} below states this formally.
\begin{theorem}\label{thm:Nashmany}
In the regret game,
\begin{enumerate}
\item\label{condition:Nashmany1}
When $u(\overline{\ell}_{r}) < \frac{1}{p}$, uniform adoption of the risk-free lottery, $\ell_{S}$, is the unique (dominant) pure strategy Nash equilibrium.
\item\label{condition:Nashmany2}
When $u(\overline{\ell}_{r}) > \frac{1}{p}\left( 1+\kappa(1-p) \right)$, uniform adoption of the risky lottery, $\ell_{r}$, is the unique (dominant) pure strategy Nash equilibrium.
\item\label{condition:Nashmany3}
When $u(\overline{\ell}_{r}) \in \big[\frac{1}{p}, \frac{1}{p}\left( 1+\kappa(1-p) \right) \big]$, uniform adoption of the risk-free lottery, $\ell_{S}$, is a pure strategy Nash equilibrium, and uniform adoption of the risky lottery, $\ell_{r}$, is also a pure strategy Nash equilibrium.
\end{enumerate}
\end{theorem}
Theorem \ref{thm:Nashmany} can be understood as follows. While the expected utility from the risky lottery, $\ell_{r}$, does not depend on the behaviour of others, the expected utility associated with the risk-free lottery decreases with every (other) individual who chooses the risky lottery. The reason is that the likelihood of learning about the alternative, and hence experiencing regret, goes up. Thus, for the parameters of case \ref{condition:Nashmany3}, we have a slightly unusual coordination game, in that each additional individual who chooses $\ell_{r}$ decreases the net utility of choosing $\ell_{S}$ without improving the net utility of choosing $\ell_{r}$.
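To illustrate case \ref{condition:Nashmany3} numerically, the following sketch (ours, with an assumed linear specification of $q_{i}$ and illustrative parameter values) verifies that both uniform profiles are equilibria in the coordination region:
\begin{verbatim}
# Sketch (ours): N-player regret game with linear q_i(m) = m/(N-1), where
# m is the number of *others* choosing the risky lottery.  Normalisation:
# u(lower outcome) = 0, u(safe) = 1, v = u(upper outcome).
N, p, kappa = 4, 0.5, 1.0
v = 2.4          # inside [1/p, (1/p)*(1 + kappa*(1-p))] = [2.0, 3.0]

q = lambda m: m / (N - 1)
eu_risky = p * v - (1 - p) * kappa                  # fixed in others' play
eu_safe = lambda m: 1 - p * q(m) * kappa * (v - 1)  # falls as m rises

print(eu_safe(0) >= eu_risky)        # True: all-safe is an equilibrium
print(eu_risky >= eu_safe(N - 1))    # True: all-risky is an equilibrium
\end{verbatim}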
\subsubsection{Welfare and equilibrium selection}
It is interesting to compare the (common) expected utility levels at each equilibrium in case \ref{condition:Nashmany3}. From \eqref{eq:UiRstate0} and \eqref{eq:UiRstate1} we can compute that uniform adoption of the risky lottery, $\ell_{r}$, is Pareto optimal if and only if $u(\overline{\ell}_{r}) \geq \frac{1}{p}\big( 1+\kappa(1-p) \big)$. But this is precisely the threshold at which individuals would always choose the risky lottery in any case. Thus, for parameters such that the regret game is a coordination game (case \ref{condition:Nashmany3}), coordinating on the risk-free lottery is Pareto-optimal (and always preferred ex-ante).
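The comparison is immediate from the (common) equilibrium utilities. Under the normalisation of Subsection \ref{subsec:RegPrefUncertainInfo}, the two uniform profiles yield
\[
\underbrace{1}_{\text{all choose } \ell_{S},\;\; q_{i}(0)=0} \quad \text{versus} \quad \underbrace{p\,u(\overline{\ell}_{r}) - (1-p)\kappa}_{\text{all choose } \ell_{r}},
\]
and the latter exceeds the former precisely when $u(\overline{\ell}_{r}) \geq \frac{1}{p}\big(1+\kappa(1-p)\big)$.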
There is a large literature addressing the tension that exists between the multiple equilibria in coordination games. The most commonly studied environment is a large-population binary-action game (e.g., the Stag Hunt) where the Pareto efficient equilibrium does not coincide with the \emph{safe} risk-dominant equilibrium.\footnote{Probably the best known experimental test of the efficiency vs safety issue is \cite{Van-HuyckBattalio:1990:AER}, who study human behaviour in the \emph{minimum effort game}, of which the Stag Hunt is a simple version.} Existing equilibrium selection techniques, be they evolutionary like \emph{stochastic stability} \citep{KandoriMailath:1993:E, Young:1993:E} or higher-order belief-based like \emph{global games} \citep{CarlssonDamme:1993:E, MorrisShin:2003:}, favour the equilibrium that is more difficult to destabilise (i.e., the risk-dominant one). For our regret game, simple algebra shows that $q \geq q^{\star} = 1/2$ is the threshold at which the risky lottery becomes most desirable. If we imagine a \emph{steep} specification of $q$ (in that $q_{i}(\abs{\mathbf{a}_{-i}}) \approx 1$ when $\abs{\mathbf{a}_{-i}} \geq 1$), then uniform adoption of the risky lottery would be the prediction. This means the regret game has the quite curious property that the risk-free Pareto dominant outcome need not be risk-dominant \citep{HarsanyiSelten:1988:}.
\subsubsection{Extensions}
In what follows, we briefly describe some extensions of our model. Further details can be found in Appendix \ref{extensions}.
In the regret game above we have assumed that everyone is regret averse. However, in reality it is possible that only some proportion of a population are regret averse ($\kappa > 0$) and the remainder are regret neutral ($\kappa = 0$). The first extension in Appendix \ref{hetero-prefs} considers the case in which some proportion of the population are similarly regret averse and the remaining proportion are regret neutral. The preferences over lotteries for regret neutral individuals
are fixed no matter the information feedback. So these individuals always choose the lottery with the highest expected (choiceless) utility. When the environment becomes a game these individuals have a dominant strategy. The presence of regret neutral individuals makes it more difficult for the regret averse individuals to coordinate on the \emph{other} lottery. The reason is that the regret neutral individuals will always choose one of the lotteries.
The second extension in Appendix \ref{hetero-prefs} considers the more general case where all individuals may have a different coefficient of regret aversion. Ex-ante, the regret coefficients are unknown, with everyone's coefficient being a random draw from some distribution over $[0, \infty)$. This generates a rich Bayesian game. In this sense, the first extension above is one of many possible realisations of the strategic environment.
Individuals who are not regret averse may be regret neutral or \emph{rejoice lovers}.
Rejoice is defined as the psychological gain that a decision maker (a so-called rejoice lover) experiences when the option chosen turns out to be better than the unchosen option. Appendix \ref{rejoice} describes how rejoice can be incorporated in our model. When a rejoice lover does not learn the lottery's outcome, he will be \emph{more} likely to choose the lottery, as he wants to know whether his choice was the best. It is easy to see that, while in equilibrium regret averse players will coordinate, rejoice lovers will anti-coordinate.
\section{Experiments}\label{experiments}
We test the predictions of a two-player variant of the regret game through two
experiments. Both experiments use a within-subject design and have two main goals. The first goal is to identify the participants who are regret averse. The second goal is to test whether regret averse participants behave as our model predicts. The two experiments achieve these goals through two different designs, with the second experiment serving as a robustness check of the results by eliminating some confounds that may have been present in the first experiment.
\subsection{Experiment 1}\label{experiment1}
\subsubsection{Design}
\paragraph{Overview}
The experiment consists of two parts. In the first part we elicit participants' preferences over a riskless option (a sure amount of money) and a risky option (a lottery) under two different information environments. In the first environment participants learn the risky lottery's outcome even if they do not choose it, while in the second environment they do not learn the risky lottery's outcome unless chosen. These initial decisions allow us to classify participants into regret averse types and non regret averse types and to calibrate the parameters of the second part of the experiment.
We also ask an additional question as a robustness check -- to verify whether participants behave according to the type they were classified as.
In the second part of the experiment, participants are matched in pairs and play the regret game. They must choose between a sure amount of money and a risky lottery. If they do not choose the lottery, they will learn its outcome only if their partner chose the lottery.
\paragraph{Part 1} In the first part of the experiment, Decisions 1 through 3, participants have to choose between a sure amount (\euro{5} with certainty) and a
risky lottery (\euro{}$x$ with 50\% probability and \euro{0}
with 50\% probability) under different conditions.
The standard incentive compatible mechanism for eliciting lottery thresholds is the BDM \citep{BDM}, and by Theorem \ref{thm:regretEquivalent} the BDM remains incentive compatible for regret averse individuals. We ask each participant to state the smallest lottery outcome $x$ (henceforth \emph{lottery threshold}) such that they prefer playing the lottery to receiving the sure
amount. They
can choose any number from the list $\{5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15\}$. After they submit their choice, the computer randomly picks a number from the same list, independently drawn for
each participant. All the numbers are equally likely. If the number
picked by the computer is smaller than the number $x$ chosen by the participant, the sure amount is the implemented option,
i.e., the participant receives \euro{5}. If the number picked by the
computer is equal to or larger than the number $x$ chosen by the participant, the lottery is the implemented option, i.e.,
the participant receives the number picked by the computer in
\euro{} with 50\% probability and \euro{0}
otherwise.\footnote{Whether the lottery is successful or not is perfectly correlated across participants.}
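For concreteness, the mechanics of the elicitation can be sketched as follows (our own illustration; the function and variable names are hypothetical):
\begin{verbatim}
import random

# Sketch (ours) of the BDM mechanics in Decisions 1-3: the participant
# states a threshold x; the computer draws y uniformly from {5,...,15};
# the sure amount is implemented if y < x, else the lottery paying
# y euro or 0 euro with equal probability is implemented.
SURE, NUMBERS = 5, list(range(5, 16))

def bdm(threshold_x, rng=random):
    y = rng.choice(NUMBERS)
    if y < threshold_x:
        return {"implemented": "sure amount", "draw": y, "payoff": SURE}
    success = rng.random() < 0.5
    return {"implemented": "lottery", "draw": y,
            "payoff": y if success else 0}

print(bdm(threshold_x=9))
\end{verbatim}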
The above is common to Decisions 1 and 2 (and, as discussed later, to Decision 3). The difference between the decisions is seen in the information feedback provided to the participants who do not choose the lottery. Labelling the number $x$ chosen by participants in Decision 1 as $x_1$ and that chosen by participants in Decision 2 as $x_2$, we have the following.
\bigskip
\emph{Decision 1}. If a participant's choice, $x_1$, results in the sure amount being the implemented option, they are nevertheless informed about the lottery's
outcome.
\bigskip
\emph{Decision 2}. If a participant's choice, $x_2$, results in the sure amount being the implemented option, they are \emph{not} informed about the lottery's outcome.
\bigskip
After each of the first two decisions, participants learn the number randomly picked by the computer,
and thus whether the implemented option is the lottery or the sure amount. In the latter case, the information provided varies across the two decisions as described above.
Participants who choose a higher lottery threshold in Decision 2 (no information) than in Decision 1 (information), i.e., $x_2>x_1$, are classified as \emph{regret averse}. The sure amount is more appealing to them when feedback about the lottery is withheld, as it allows them to remain ignorant about the outcome of the unchosen option.
Participants who choose the same or a lower lottery threshold in Decision 2 than in Decision 1, i.e., $x_2\leq x_1$, are classified as \emph{non regret averse}. These participants can be further classified in two categories. Participants who choose $x_2=x_1$ are \emph{regret neutral}, as they do not react to feedback about the unchosen option. Participants who choose $x_2<x_1$ are \emph{rejoice lovers}. The sure amount is less appealing to them when feedback about the lottery is withheld, as they cannot learn the lottery's outcome unless they choose it.\footnote{Since we use a within-subject design, we are potentially exposed to the risk of experimenter demand effect. However, we believe that it is very difficult for participants to figure out what the experimenter is trying to achieve through Decisions 1 and 2, and the heterogeneity in participants' responses seems to support our belief.}
To the best of our knowledge, our paper is the first to classify participants into regret averse and non regret averse types by using an incentive-compatible mechanism under different feedback conditions. However, the idea that the expectation of feedback affects anticipated regret and thus the behaviour of regret averse individuals has been documented by psychologists. For example, \citet{Zeelenberg:1996} show that, when faced with the choice between two equally attractive gambles, most participants chose the gamble without feedback, thereby avoiding ex-post comparisons of outcomes.\footnote{For a different method to classify subjects into regret averse and not, see \citet{BleichrodtCillo:2010:MS}. Similarly to our design, \citet{ImasLameWilson} compare participants' valuations for identical lotteries under two different feedback scenarios. However, they use a between-subject design and it is not possible to back out the number of regret averse individuals from their data.}
By classifying participants into types on the basis of a single decision, we are exposed to the risk of measurement error. However, if multiple decisions had been used to classify participants into types, we would have faced two other issues. First, earlier decisions may have affected later decisions, and varying the order to control for any possible order effect may not have been possible. Second, by going through multiple, similar questions participants may have felt bored and answered later questions less accurately. In our second experiment, we use an alternative method to classify participants into types. While the new classification is still based on one decision only, by employing a different method it can serve as a robustness check for the classification into types used in our first experiment.
Finally, we ask participants an additional question (Decision 3) to verify whether they behave according to the type they were classified as through Decisions 1 and 2. Each participant is randomly paired with another participant and faces the same decision as in Decisions 1 and 2, with one difference:\ the feedback provided to participants who do not play the lottery. If the number $x$ chosen by a participant in Decision 3, labelled $x_3$, results in the sure amount being the implemented option, they will be informed about the lottery's outcome only if their partner played the lottery.
This means that if their partner played the lottery,
participants are under the same information environment as in Decision 1, where they learn the
lottery's outcome. If their partner did not play the lottery, they are under the same information environment as in Decision 2, where they do not learn the lottery's outcome unless they play it.
Participants are also asked what they believe is the lottery threshold $x$ chosen by their partner in Decision 3. If their guess is within one unit of the number chosen by their partner, then they receive an additional \euro{1} at the end of the study (provided that Decision 3 is randomly selected for payment).
As with Decisions 1 and 2, in Decision 3 participants are informed as to the number randomly picked by the computer,
and thus whether the implemented option is the lottery or the sure amount.
\paragraph{Part 2}
In the second part of the experiment (Decision 4 onwards), participants play the regret game described in Section \ref{subsec:strategicSetting}. They must choose between a sure amount (earning \euro{5} with certainty) and a lottery (earning ``an amount'' in \euro{} with 50\% probability and \euro{0} with 50\% probability).
Unlike in the first part of the experiment, the lottery's outcome in the good state is given and, for each participant, it is determined using the lottery thresholds $x_1$ and $x_2$ elicited in Decisions 1 and 2.
The sure amount is known regardless of the decision made. If participants choose the sure amount, they learn the lottery's outcome only if their partner chooses the lottery. This implies that if participants choose the lottery, they experience regret if the bad state materialises, whereas if they choose the sure amount, they experience regret if the good state materialises \emph{and} they learn this.
This setting is identical in all the decisions in the second part of the experiment.
However, there are three types of decisions, differing in the amount the lottery yields in the good state, corresponding to the three parts of the parameter space classified in Theorem \ref{thm:Nashmany}. Two types of decisions are designed to test the regret game when it has a dominant strategy -- in one the dominant strategy is the sure amount and in the other the dominant strategy is the lottery. The third type of decision is designed to test the regret game when it is a game of coordination. More interestingly, in each of these three decisions the amount that the lottery yields in the good state is individual-specific:\ it depends on a participant's choices in Decisions 1 and 2. This ensures that each participant, despite potential heterogeneity in their (regret averse) preferences, plays the regret game in its three key cases.
\bigskip
\emph{Decision 4}. The lottery's outcome in the good state is \emph{smaller} than the amount a participant chose in Decision 1, namely $x_1-2$. This allows us to test the prediction of the regret game when choosing the sure amount is the unique equilibrium in dominant strategies.
\bigskip
\emph{Decision 5}. The lottery's outcome in the good state is \emph{bigger} than the amount a participant chose in Decision 2, namely $x_2+2$. This allows us to test the prediction of the regret game when choosing the lottery is the unique dominant strategy equilibrium.
\bigskip
\emph{Decision 6}. The lottery's outcome in the good state is \emph{in between} the amount chosen in Decision 1 and that chosen in Decision 2, namely $\frac{x_1+x_2}{2}$. This allows us to test the prediction of the regret game when it is a game of coordination.
\bigskip
Decision 6, being the most interesting case of the regret game and thus the core part of the experiment, is repeated 20 times (Decisions 6 to 25).
Pairs are rematched before Decisions 4, 5 and 6. This means that a participant's partner could be the same as before or a different one. To mitigate potential dependence resulting from the repeated interaction of participants, we use matching groups of size 4. That is, for each participant, there are three potential participants who can be randomly assigned to them. In Decision 6 and its repetitions, participants keep the same partner.
From Decision 4 onwards, we elicit first order beliefs. We ask participants to guess whether their partner chose the sure amount of money or the risky lottery. If they guess their partner's choice correctly, they receive an additional \euro{1} at the end of the study, if that decision is randomly selected for payment.
After the last iteration of Decision 6, i.e., Decision 25, the participants are asked some non-incentivised questions:\ the Regret Scale \cite{Schwartz:2002:JPSP}, the Big-Five personality traits \cite{Gosling:2003:JRP} and demographic characteristics.
\paragraph{Procedure}
The sessions were run in May 2017 in the experimental laboratory at
the University of Bonn. The experiment was programmed and conducted
with the software z-Tree \cite{Fishbacher:2007:EE}.
At the end of the experiment, each participant received:\ (i) a show-up fee of \euro{4}, (ii) the payment for one randomly chosen decision out of the 25 decisions made, (iii) the payment for one randomly chosen belief out of the 23 beliefs elicited, but only in case of a correct guess. On average, a participant earned \euro{11.50}.
\subsubsection{Testable predictions}\label{predictions}
Decision 6 and its repetitions test Theorem \ref{thm:Nashmany} Part \ref{condition:Nashmany3}, i.e., they test the predictions of the regret game when it is a game of coordination with two pure strategy Nash Equilibria. A regret averse type will choose the lottery if he believes that his partner chose the lottery, and the sure amount otherwise. This yields our first testable prediction.
\begin{pred}\label{pred3} In Decision 6 and its repetitions, believing that his partner will choose the lottery (sure amount) increases a regret averse agent's probability of choosing the lottery (sure amount).
\end{pred}
\bigskip
Decision 4 tests Theorem \ref{thm:Nashmany} Part \ref{condition:Nashmany1}, i.e., it tests the predictions of the regret game when choosing the sure amount is the unique equilibrium in dominant strategies. Decision 5 tests Theorem \ref{thm:Nashmany} Part \ref{condition:Nashmany2}, i.e., it tests the predictions of the regret game when choosing the lottery is the unique equilibrium in dominant strategies. When the regret game has a unique equilibrium in dominant strategies, an agent will choose it regardless of their regret preferences and regardless of their beliefs.
This yields our second testable prediction.
\begin{pred}\label{pred2} Regret averse agents will choose the sure amount in Decision 4 and the lottery in Decision 5, regardless of their beliefs.
\end{pred}
We also have a third prediction, aimed at testing whether, under a partner-dependent information condition (basically a threshold-based variant of the regret game), participants behave according to the type they were classified as in Decisions 1 and 2.
In Decision 3, regret averse (and rejoice loving) participants should choose the same lottery threshold as in Decision 1 if they believe that their partner played the lottery (i.e., if they expect to learn the lottery's outcome) and the same lottery threshold as in Decision 2 if they believe that their partner did not play the lottery (i.e., if they expect not to learn the lottery's outcome).
That is, they should choose a lottery threshold in between the lottery thresholds chosen under information and under no information.
As $x_2>x_1$ for regret averse types and $x_2<x_1$ for rejoice loving types, for regret averse (rejoice loving) participants $x_3$ should exceed (fall short of) $x_1$ and fall short of (exceed) $x_2$. For regret neutral participants $x_3$ should equal $x_1$ and $x_2$:\ they should choose the same lottery threshold as in Decisions 1 and 2, independently of their beliefs.
\begin{pred}\label{pred1} For regret averse (rejoice loving) agents, $x_3$ will exceed (fall short of) $x_1$ and fall short of (exceed) $x_2$. For regret neutral agents, $x_1=x_2=x_3$.
\end{pred}
\subsubsection{Results}
A total of 144 subjects participated in the experiment with over 90\% of them students and 56\% of them female. The average age was 25 (24 among students and 27 among non students).
In our sample, 22\% of the participants chose $x_2>x_1$ and are classified as regret averse. Half the participants chose $x_2=x_1$ and are classified as regret neutral. The remaining participants chose $x_2<x_1$ and are classified as rejoice loving. Figure \ref{fig:thresholds} shows the distribution of the difference between $x_2$ and $x_1$, which is a proxy for the strength of participants' regret aversion.
\begin{figure}[htb]
\centering \includegraphics[scale=0.8]{thresholds}
\caption{Difference between $x_2$ and $x_1$.}
\label{fig:thresholds}\end{figure}
To test Prediction \ref{pred3}, we estimate a
logit panel-model with random effects, where the unit of observation is the individual observed in Decisions 6 through 25. We report the marginal effect of an agent's belief that his partner chose the lottery on his own likelihood of choosing the lottery.\footnote{We cluster the standard errors at the matching-group level.}
The dependent variable \emph{lottery choice} takes value 1 if the agent chooses the lottery and 0 if he chooses the sure amount. We use the following independent variables. The variable
\emph{belief} equals 1 if the agent believes that his partner
chose the lottery in the current round and 0 if he believes
that his partner chose the sure amount. The variables \emph{belief if regret averse} and \emph{belief if non regret averse} equal \emph{belief} if, respectively, the agent is regret averse and non regret averse (i.e., regret neutral or rejoice lover).
The variable \emph{past regret} captures regret
generated by previous decisions, and equals 1 (i) if an agent
has not chosen the lottery in the previous round, while his partner
has, and the lottery has been successful, or (ii) if an agent
has chosen the lottery in the previous round and the lottery was not
successful. It equals 0 otherwise.\footnote{Note that, while both
(i) and (ii) can be interpreted as regret driven by past,
unsuccessful decisions, their nature can potentially differ. While
(i) captures peer-induced regret, as well as personal loss, (ii)
only captures loss. Given that, we also repeat our regressions
splitting \emph{past regret} into two dummies respectively
corresponding to cases (i) and (ii). Our results do not change.}
In column (1), we only control for \emph{belief if regret averse} and \emph{belief if non regret averse}. In column (2), we additionally control for \emph{past regret}.
In column (3), we additionally control for demographics (female dummy, student dummy and age).
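For concreteness, the sketch below illustrates the column (1) specification in Python; the simulated data frame, the column names, and the use of a pooled logit with clustered standard errors (a simpler stand-in for the random-effects panel logit we actually estimate) are all assumptions made for illustration.
\begin{verbatim}
# Illustrative sketch only: simulated data, pooled logit with
# matching-group-clustered SEs instead of a random-effects panel logit.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2880  # 144 subjects x 20 rounds
df = pd.DataFrame({
    "belief": rng.integers(0, 2, n),
    "regret_averse": rng.integers(0, 2, n),
    "matching_group": rng.integers(0, 36, n),
})
df["belief_ra"] = df["belief"] * df["regret_averse"]
df["belief_nra"] = df["belief"] * (1 - df["regret_averse"])
p = 1 / (1 + np.exp(-(-0.5 + 0.8 * df["belief"])))  # beliefs raise choice prob.
df["lottery_choice"] = rng.binomial(1, p)

fit = smf.logit("lottery_choice ~ belief_ra + belief_nra", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["matching_group"]})
print(fit.get_margeff(at="overall").summary())  # average marginal effects
\end{verbatim}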
We find that, consistent with Prediction \ref{pred3}, believing that their partner chose the lottery significantly increases regret averse participants' likelihood of choosing the lottery. We also observe a positive impact of beliefs on choice for non regret averse participants; however, the magnitude of the marginal effects is smaller for them than for regret averse participants.
\begin{table}[htbp]\centering
\def\sym#1{\ifmmode^{#1}\else\(^{#1}\)\fi}
\caption{Impact of beliefs on choice (all rounds)\label{regret-belief}}
\setlength\tabcolsep{20pt}
\begin{tabular}{l*{3}{c}}
\hline
DV: lottery choice &\multicolumn{1}{c}{(1)} &\multicolumn{1}{c}{(2)} &\multicolumn{1}{c}{(3)} \\
\hline
& & &\\
belief if regret averse & 0.175\sym{***}& 0.167\sym{***}& 0.166\sym{***}\\
& (0.032) & (0.033) & (0.033) \\
belief if non regret averse & 0.172\sym{***}& 0.159\sym{***}& 0.157\sym{***}\\
& (0.022) & (0.022) & (0.022) \\
past regret & No & Yes & Yes\\
female dummy & No & No & Yes \\
student dummy & No & No & Yes \\
age & No & No & Yes \\
\hline
N & 2880 & 2736 & 2736 \\
\hline
\end{tabular}
\begin{tablenotes}
\item {\footnotesize Marginal effects from logit regression. \sym{*} \(p<0.1\), \sym{**} \(p<0.05\), \sym{***} \(p<0.01\). Standard errors in parentheses, clustered at matching group-level. DV equals 1 if the agent chose the lottery and 0 otherwise.}
\end{tablenotes}
\end{table}
Interestingly, when we focus on the last iteration of Decision 6 (Table \ref{last_round}), where we expect that learning may have helped participants to converge to equilibrium, we observe that the impact of beliefs on choice is larger than in Table \ref{regret-belief} and highly significant for regret averse participants. In contrast, it is only marginally significant for non regret averse participants. Moreover, the difference in the magnitude of the marginal effects between regret averse and non regret averse is larger in the last iteration than in all rounds pooled. Our first and core result follows.
\begin{table}[htbp]\centering
\def\sym#1{\ifmmode^{#1}\else\(^{#1}\)\fi}
\caption{Impact of beliefs on choice (last round)\label{last_round}}
\setlength\tabcolsep{20pt}
\begin{tabular}{l*{3}{c}}
\hline
DV: lottery choice &\multicolumn{1}{c}{(1)} &\multicolumn{1}{c}{(2)} &\multicolumn{1}{c}{(3)} \\
\hline
& & &\\
belief if regret averse & 0.301\sym{***}& 0.291\sym{***}& 0.284\sym{***}\\
 & (0.091) & (0.092) & (0.093) \\
belief if non regret averse & 0.195\sym{**}& 0.179\sym{**}& 0.162\sym{*}\\
 & (0.085) & (0.085) & (0.087) \\
past regret & No & Yes & Yes\\
female dummy & No & No & Yes \\
student dummy & No & No & Yes \\
age & No & No & Yes \\
\hline
N & 144 & 144 & 144 \\
\hline
\end{tabular}
\begin{tablenotes}
\item {\footnotesize Marginal effects from logit regression. \sym{*} \(p<0.1\), \sym{**} \(p<0.05\), \sym{***} \(p<0.01\). Standard errors in parentheses, clustered at matching group-level. DV equals 1 if the agent chose the lottery and 0 otherwise.}
\end{tablenotes}
\end{table}
\begin{result} In Decision 6 and its repetitions, believing that their partner chose the lottery (sure amount) significantly increases regret averse participants' probability of choosing the lottery (sure amount). This effect becomes stronger in the last iteration of the game.
\end{result}
This result indicates that regret aversion drives coordination.
However, in our setting there may have been two additional drivers of coordination:\ preferences for conformism \cite{CharnessRigottiRustichini:2017:GEB,CharnessNaefSontuoso:2019:JET} and inequity averse preferences \cite{FehrSchmidt:1999:QJE}.
As further discussed in Section \ref{experiment2}, we ran a second experiment to eliminate these potential confounds.
Table \ref{prediction2} reports the percentages of participants choosing the lottery in Decisions 4 and 5, i.e., when the regret game has a dominant strategy. Consistent with Prediction \ref{pred2}, the large majority of regret averse participants chose the dominant strategy, i.e., the sure amount in Decision 4 and the lottery in Decision 5. As expected, the large majority of non regret averse participants also followed this pattern of decisions.\footnote{In Table \ref{prediction2} the non regret averse participants include regret neutral participants, and rejoice lovers when the game had a dominant strategy ($x_1-x_2\leq 2$). As expected, when the game had no dominant strategy ($x_1-x_2> 2$), the percentages of rejoice lovers choosing the sure amount in Decision 4 and the lottery in Decision 5 were remarkably lower.}
\begin{table}\centering
\def\sym#1{\ifmmode^{#1}\else\(^{#1}\)\fi}
\caption{Participants choosing the lottery in Decisions 4 and 5}\label{prediction2}
\centerline{
{
\begin{tabular}{c|c|c}
\hline
& \textbf{regret averse} & \textbf{non regret averse}\\
\hline
Agents who chose lottery in D4 & $6\%$ & $20\%$ \\
Agents who chose lottery in D5 & $66\%$ & $80\%$ \\
\hline
\end{tabular}
}
}
\end{table}
\bigskip
\begin{result} Most regret averse agents choose the sure amount in Decision 4 and the lottery in Decision 5.
\end{result}
Finally, as a robustness check, we verify whether under a partner-dependent information condition (Decision 3), participants behave consistently with the type they were classified as through Decisions 1 and 2. Table \ref{robustness} presents the amounts chosen in Decision 1, Decision 2 and
Decision 3 -- overall and broken down by type. We find that, in line with Prediction \ref{pred1}, for regret averse participants the mean lottery threshold chosen in Decision 3, $x_3$, is higher than mean $x_1$ and lower than mean $x_2$. For rejoice loving participants, the opposite happens, and for regret neutral participants mean $x_3$ does not significantly differ from mean $x_1$ and mean $x_2$.
\begin{table}\centering
\def\sym#1{\ifmmode^{#1}\else\(^{#1}\)\fi}
\caption{Mean lottery thresholds chosen in Decisions 1-3 by type}\label{robustness}
\setlength\tabcolsep{10pt}
\centerline{
{
\begin{tabular}{lccc|c}
\hline
& \textbf{regret averse} & \textbf{regret neutral} & \textbf{rejoice lover} & \textbf{all} \\
\hline
Decision 1 ($x_1$) & 9.15 & 11.53 & 12.33 & 11.22\\
& (2.40)& (2.04) & (1.98) &(2.39)\\
Decision 2 ($x_2$) & 12.00 & 11.10 & 9.77 & 11.16 \\
& (2.56)& (2.70) & (2.34) &(2.40)\\
Decision 3 ($x_3$) & 11.25 & 11.64 & 11.10 & 11.41\\
& (2.59)& (2.30) & (2.70) &(2.47)\\
\hline
$x_1-x_3$ & -2.1\sym{***} & -0.11 & 1.23\sym{**} & -0.19\\
$x_2-x_3$ & 0.75\sym{*} & -0.54 & -1.33\sym{*} & -0.25\\
\hline
N & 32 & 73 & 39 & 144 \\
\hline
\end{tabular}
}
}
\begin{tablenotes}
\small\item {\footnotesize Standard deviation in parentheses. The Wilcoxon test tests $H_0 : x_1-x_3=0$ and $H_0 : x_2-x_3=0$. \sym{*} \(p<0.1\), \sym{**} \(p<0.05\), \sym{***} \(p<0.01\).}
\end{tablenotes}
\end{table}
To check whether these differences are statistically significant, we run the Wilcoxon equality test on matched data. The null hypotheses $H_0 : x_1-x_3=0$ and $H_0 : x_2-x_3=0$ are rejected for regret averse participants and rejoice loving participants, and not rejected for regret neutral participants. These results strongly support Prediction \ref{pred1}, thereby offering some reassurance that participants have been classified into types accurately.
\begin{result} The partner-dependent lottery threshold $x_3$ is significantly higher (lower) than the lottery threshold $x_1$ and significantly lower (higher) than the lottery threshold $x_2$ for regret averse (rejoice loving) participants. It does not significantly differ from $x_1$ or $x_2$ for regret neutral participants.
\end{result}
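The matched-pairs comparisons above can be reproduced with a standard signed-rank test; the sketch below uses hypothetical threshold vectors in place of the experimental sample.
\begin{verbatim}
# Hypothetical data: per-subject thresholds from Decisions 1 and 3.
import numpy as np
from scipy.stats import wilcoxon

x1 = np.array([9, 8, 10, 7, 9, 11, 8, 10])
x3 = np.array([11, 10, 12, 9, 11, 12, 9, 12])

stat, p = wilcoxon(x1, x3)  # tests H0: x1 - x3 = 0 on matched data
print(f"W = {stat}, p = {p:.3f}")
\end{verbatim}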
Prediction \ref{pred1} implies that in Decision 3, the correlation between lottery threshold chosen and belief about the partner's lottery threshold, denoted by $\chi$, will be higher for regret averse participants than for non regret averse participants, i.e., $\chi_{R}>\chi_{NR}$.
This is due to the fact that regret aversion induces a desire to coordinate. We find that $\chi_{R}$ equals 0.77 and $\chi_{NR}$ equals 0.47.
The test for equality of correlation
coefficients rejects the null hypothesis $H_0 : \chi_{R}=\chi_{NR}$. In particular, $\chi_{R}$
is significantly higher than $\chi_{NR}$ ($p = 0.01$).
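One standard way to compare two correlation coefficients is Fisher's z-transformation; the sketch below applies it to the reported values, assuming independent subsamples with the group sizes from the type classification (the exact test used here may differ).
\begin{verbatim}
# Fisher z-test for H0: chi_R = chi_NR (independent samples assumed).
import numpy as np
from scipy.stats import norm

chi_r, n_r = 0.77, 32     # regret averse
chi_nr, n_nr = 0.47, 112  # non regret averse (73 neutral + 39 rejoice loving)

z = (np.arctanh(chi_r) - np.arctanh(chi_nr)) \
    / np.sqrt(1 / (n_r - 3) + 1 / (n_nr - 3))
p = 2 * (1 - norm.cdf(abs(z)))
print(f"z = {z:.2f}, p = {p:.3f}")  # roughly z = 2.4, p = 0.01
\end{verbatim}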
\subsection{Experiment 2}\label{experiment2}
\subsubsection{Design}
\paragraph{Overview} If participants have preferences for conformism \cite{CharnessRigottiRustichini:2017:GEB,CharnessNaefSontuoso:2019:JET}, they may have chosen what they believed their partner chose because, in the game in Experiment 1, they did not know ex ante what the best choice was and thus preferred to do what their partner did in a previous round or what they believed their partner did in the current round. Moreover, if participants have inequity averse preferences \cite{FehrSchmidt:1999:QJE}, they may have coordinated on their partner's decision because, in the game in Experiment 1, their earnings could have substantially differed from their partner's earnings and they did not want to risk earning less than their partner.
To eliminate these potential confounds we designed and ran a second experiment.
The main goal of Experiment 2 is to test our model's key prediction (i.e., that regret averse players try to coordinate with their partner) using a one shot variant of the regret game that eliminates the aforementioned potential confounds. The secondary goal of Experiment 2 is to provide an alternative and simple method to classify participants into regret averse and non regret averse types, which serves as a robustness check for the classification into types provided by Experiment 1.
Experiment 2 consists of two parts. In the first part, participants have to choose between a sure amount of money and a lottery. This decision is calibrated such that they prefer the sure amount. Then they are asked whether they want \emph{to find out} how much they would have earned had they chosen the risky lottery.
If they choose \emph{not to find out}, they forgo a small amount of money. This question allows us to classify participants into regret averse types and non regret averse types. In the second part of the experiment, participants are matched in pairs and play a variant of the regret game: they must choose whether they want to find out the risky lottery's outcome or not, but they can avoid finding out only if their partner decides not to find out too.
\paragraph{Part 1} In this part subjects have to take 3 decisions.
\emph{Decision 1.} We elicit each participant's valuation of a lottery. We ask each participant for the smallest sure amount of money that they would choose over a risky lottery paying \textsterling80 with 20\% probability and \textsterling0 with 80\% probability.\footnote{To make the lottery easier to understand, we told them that the computer would randomly draw a ball from an urn containing 4 blue balls and 1 red ball. If the ball drawn was blue, they would earn \textsterling0; if the ball drawn was red, they would earn \textsterling80. We also showed a picture of the urn. See Appendix \ref{instructions2} for further details.} We use an incentive compatible mechanism, the BDM, in the simpler version developed by \citet{healy}. Participants are shown a list of 80 questions, each asking if they prefer the risky lottery (referred to as Option A) or a sure amount of money (referred to as Option B). The sure amount of money ranges from \textsterling1 (first question) to \textsterling80 (last question). Participants are then asked at which value of Option B they want to switch from Option A to Option B. Only 10\% of the participants are paid for this decision.\footnote{To ensure that only participants who read the instructions carefully remain in the study, participants who chose a number higher than twice the expected value of the lottery (24\% of the initial sample) ended the experiment after this decision.}
\emph{Decision 2.} We ask participants to choose between a sure amount of money (equal to the amount elicited in Decision 1 plus \textsterling2) and the risky lottery paying \textsterling80 with 20\% probability and \textsterling0 with 80\% probability. In order to be consistent with Decision 1, participants should choose the sure amount of money. All participants are paid for their decision in Decision 2.
\emph{Decision 3}. All the participants who choose the sure amount in Decision 2 are asked whether they want to find out how much they would have earned had they chosen the risky lottery.\footnote{The participants who chose the lottery in Decision 2 (9\%), thereby contradicting their previous decision, were then asked alternative questions and excluded from our data analysis. For further details, see Appendix \ref{instructions2}.} By choosing not to find out, they forgo a small amount of money. In particular, if they choose to find out, they are informed about the lottery's outcome at the end of the experiment, and their earnings are increased by \textsterling0.04. If they choose not to find out, they are not informed about the lottery's outcome at the end of the experiment, and their earnings are not increased by \textsterling0.04.
Those who choose not to find out are classified as regret averse and those who choose to find out as non regret averse.
\paragraph{Part 2} In this part subjects play one shot of a variant of the regret game.
\emph{Decision 4.} Participants are matched in pairs. Again, they must choose whether they want to find out the risky lottery's outcome or not. However, whether they learn the lottery's outcome also depends on their partner's behaviour. If they choose not to find out, they can avoid learning the lottery's outcome only if their partner also chose not to find out. Either this decision or the previous decision is randomly selected and implemented. If a participant chooses to find out in the randomly selected decision, their total earnings are increased by \textsterling0.04.
It is now easy to see how the design of Experiment 2 removes the potential confounds generated by preferences for conformism and inequity averse preferences. First, we use a one shot game in Decision 4, thus participants cannot imitate their partner's decisions in previous rounds. Second, in the variant of the game in Decision 4 participants can immediately identify the best decision to take given their preferences. This eliminates the concern generated by conformism motives, which would be at play if participants were unsure of the best thing to choose. Finally, in Decision 4 the potential earning difference generated by a participant's decision is negligible (at most \textsterling0.04). This eliminates the concern generated by inequity averse motives.
After making their decision, participants are also asked to guess their partner's decision. If they correctly guess their partner's decision, they earn an additional \textsterling0.50. Only one of the last two questions is randomly selected and implemented.
Unlike in Experiment 1, in Experiment 2 the strategic decision aimed at testing the predictions of the regret game was not repeated, for the following reasons. First, we wanted to avoid repeated-game effects, particularly the effect of conforming with the partner's previous decisions. Second, given that the strategic decision was simple and built on the previous decision, it did not appear necessary to provide participants with learning opportunities. Third, we thought that the effect of regret on behaviour might be more salient when the game is played only once.
\paragraph{Procedure} Due to COVID-19-related lab closures, the sessions were run online in May 2021. To increase the robustness and external validity of our results, we used two different samples:\ students from Royal Holloway University of London and Prolific participants.
The experiment was programmed and conducted
with the software o-Tree \cite{ChenSchongerWickens}.
At the end of the experiment, 10\% of participants were paid for Part 1 and every participant was paid for Decision 2. If in the randomly selected decision out of Decision 3 and Decision 4, a participant chose to find out the lottery's outcome, their total earnings were increased by \textsterling0.04. If they guessed their partner's decision, they earned an additional \textsterling0.50. On average, a participant earned \textsterling18.21.
\subsubsection{Testable predictions}
Decision 4 tests the prediction of the regret game when it is a game of coordination. In Decision 4, a regret averse participant will choose to find out the lottery's outcome if he believes that his partner chose to find out the lottery's outcome too, and will choose not to find out if he believes that his partner also chose not to find out. This implies that the share of regret averse agents choosing to find out under the belief that their partner chose to find out will be significantly higher than the share choosing to find out under the belief that their partner chose not to find out. Similarly, the share of regret averse agents choosing not to find out under the belief that their partner also chose not to find out will be significantly higher than the share choosing not to find out under the belief that their partner chose to find out.
This yields our testable prediction.
\begin{pred}\label{pred4} The fraction of regret averse agents choosing the option that they believe their partner chose will be significantly higher than the fraction choosing the alternative option.
\end{pred}
\subsubsection{Results}
We have a sample of 213 participants who completed the experiment: 84 students from Royal Holloway University and 129 participants from Prolific. 54\% of the participants were female, 44\% were male and the remaining 2\% classified themselves as ``other''. The gender distribution is very similar across our two subsamples. The average age was 27 (21 among Royal Holloway students and 32 among Prolific participants).
In our sample, 12\% of the participants chose not to find out the lottery's outcome and are classified as \emph{regret averse} and 88\% of the participants chose to find out the lottery's outcome and are classified as \emph{non regret averse}.
Table \ref{tab_q3_ra} shows the distribution of choices -- between finding out and not finding out -- and beliefs for regret averse participants. We can observe that 80\% of the regret averse participants chose not to find out, as in the individual decision, and 20\% chose to find out. It makes sense that either option gets chosen, as in Decision 4 there is no dominant strategy. All the participants who expect their partner to choose not to find out, also chose not to find out. Some participants chose not to find out even if they believed that their partner found out. This may have the following explanation. Given that we elicited point beliefs, participants who reported that their partner chose to find out could still believe that with some probability their partner chose not to find out. In this case it would be optimal not to find out, because the small amount of money to forgo is compensated by the reduction of the expected regret.
\begin{table}\centering
\def\sym#1{\ifmmode^{#1}\else\(^{#1}\)\fi}
\caption{regret averse subjects' choices and beliefs in Decision 4}\label{tab_q3_ra}
\setlength\tabcolsep{10pt}
\centerline{
{
\begin{tabular}{c|cc|c}
\hline
& Believes partner & Believes partner & Total \\
& finds out & does not find out & \\
\hline
Finds out & 5 & 0 & 5\\
Does not find out & 8 & 12 & 20 \\
\hline
Total & 13 & 12 & 25 \\
\hline
\end{tabular}
}
}
\end{table}
Table \ref{tab_q3_nra} shows the distribution of choices -- between finding out and not finding out -- and beliefs for non regret averse participants. Over 98\% of the non regret averse participants (185 out of 188) chose to find out, as in the individual decision. This is expected, as finding out is a dominant strategy for them. It also shows that they are consistent across decisions. Out of these 185 participants choosing to find out, 10 believed that their partner chose not to find out. This is interesting, as it further confirms that finding out is a dominant strategy for them.
\begin{table}\centering
\def\sym#1{\ifmmode^{#1}\else\(^{#1}\)\fi}
\caption{Non regret averse subjects' choices and beliefs in Decision 4}\label{tab_q3_nra}
\setlength\tabcolsep{10pt}
\centerline{
{
\begin{tabular}{c|cc|c}
\hline
& Believes partner & Believes partner & Total \\
& finds out & does not find out & \\
\hline
Finds out & 175 & 10 & 185\\
Does not find out & 0 & 3 & 3 \\
\hline
Total & 175 & 13 & 188 \\
\hline
\end{tabular}
}
}
\end{table}
To check whether the differences observed in Table \ref{tab_q3_ra} are statistically significant, we run a t-test. Our null hypothesis is that the frequency with which a regret averse participant chooses not to find out under the belief that his partner did the same equals the frequency with which he chooses not to find out under the belief that his partner chose to find out. The relative frequencies are 1 and 0.61, respectively. We reject the null hypothesis (p=0.0151). Our results support Prediction \ref{pred4}.
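As a rough check of this comparison, a two-sample proportions test on the cell counts of Table \ref{tab_q3_ra} (12 of 12 versus 8 of 13 choosing not to find out) gives a similar p-value; the z-approximation below is an illustrative stand-in for the t-test reported above.
\begin{verbatim}
# Two-sample proportions z-test on the table's cell counts.
from statsmodels.stats.proportion import proportions_ztest

stat, p = proportions_ztest(count=[12, 8], nobs=[12, 13])
print(f"z = {stat:.2f}, p = {p:.4f}")
\end{verbatim}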
\begin{result} The frequency with which regret averse participants choose the option that they believe their partner chose is significantly higher than the frequency with which they choose the alternative option.
\end{result}
\section{Conclusion}\label{CONCLUSION}
This paper began with the following simple observation:\ in many situations, ranging from technology adoption to ordering food in a restaurant, learning the outcome of unchosen alternatives is not guaranteed. Given that a decision maker can only regret a foregone alternative if she learns its outcome, what should be done? We showed how to incorporate this observation into the classic model of a decision maker who is regret averse. Our first contribution is a formalisation of ex post information structures that allows for the possibility that unchosen alternatives may or may not be learned about. That is, the domain of preferences needs to be extended from simply `objects of choice' to `objects of choice and their associated information environment'. For a given choice set, we provide a definition that ranks two informational environments according to which is ``more informative''. And we show, in Theorem \ref{thm:lessInfo}, that a more informative environment is never preferred by a regret-averse decision maker.
In Section \ref{subsec:strategicSetting}, we allow for the possibility that the ex-post information environment that will be faced depends not only on one's own choice but also on the choices of others. Thus, what on the surface appears to be a collection of individual decision problems - like for example ordering food in a restaurant - is in fact a rich multi-player behavioural game. The reason, of course, is that the decisions of others - one's fellow diners - can be informative about foregone alternatives, and for a regret averse individual that matters. We term this environment the regret game, and in Theorem \ref{thm:Nashmany} we classify conditions on preferences for which it is a coordination game with multiple equilibria.
We tested the predictions of our model through two experiments. Both experiments have two main goals:\ identifying the participants who are regret averse and testing whether they behave as our theory predicts. In the first experiment, we find that, as predicted by our model, regret averse participants try to coordinate with their partner. Believing that their partner chose an option significantly increases their likelihood of choosing that option. We observe a positive impact of beliefs on choice also for non regret averse participants. However, for non regret averse participants this impact is smaller. Moreover, when we focus on the last iteration of the game, the impact of beliefs on choice is larger and highly significant for regret averse participants, but is only marginally significant for non regret averse participants.
These results indicate that regret aversion drives coordination.
However, preferences for conformism \cite{CharnessRigottiRustichini:2017:GEB,CharnessNaefSontuoso:2019:JET} and inequity averse preferences \cite{FehrSchmidt:1999:QJE} may have been two additional drivers of coordination. We ran a second experiment to eliminate these potential confounds.
The results of the second experiment support the key findings of the first experiment:\ regret averse participants try to coordinate with their partner. The probability that they choose what they think their partner chose is significantly higher than the probability that they choose the alternative option. Most notably, all the regret averse participants who believe that their partner chose to avoid information, choose to avoid information too. Instead, almost all of the non regret averse participants (over 98\%) play the dominant strategy, and some of them do so under the belief that their partner chose the alternative option. Thus, once the aforementioned confounds are removed by design, we still find strong support for our model's predictions.
It is traditional in economic modelling to view the information available to individuals as part of the environment and not something over which there is choice. While we have focused on regret, one can imagine a host of other behavioural explanations for why an individual might (i) want to avoid information, and then (ii) take steps in order to avoid information. It then immediately follows that, since the behaviour of others can impact one's available information, there are strategic interdependencies at play. It is our hope that more models of the sort presented here can be developed to better understand the role that evolving information plays for economic agents, and in particular the way in which it connects them.
\newpage
\bibliographystyle{elsarticle-harv}
\section{Architecture}
\begin{figure}[htb]
\includegraphics[width=\columnwidth] {framework.png}
\caption{\label{fig:framework} Illustration of the DCAF system. The \textit{Request Value Online Estimation} module scores each request conditioned on action $j$ using online features, with an estimator trained offline. The \textit{Policy Execution} module executes the final action $j$ for each request based on the system status collected by the \textit{Information Collection and Monitoring} module, the $\lambda$ calculated offline, and the $Q_{ij}$ obtained from the previous module.}
\medskip
\end{figure}
In general, DCAF comprises an online decision maker and an offline estimator:
\begin{itemize}
\item The online modules make the final decision based on personalized request value and system status.
\item The offline modules leverage the logs to calculate the Lagrange Multiplier $\lambda$ and to train an estimator of each request's expected value conditioned on actions, based on historical data.
\end{itemize}
\subsection{Online Decision Maker}
\subsubsection{Information Collection and Monitoring}~\\
This module monitors and provides timely information about the system's current status, including GPU utilization, CPU utilization, runtime (RT), and failure rate. The acquired information enables the framework to dynamically allocate computation resources without exceeding the budget by limiting the action space.
\subsubsection{Request Value Estimation}~\\
This module estimates each request's $Q_{ij}$ based on the features provided by the information collection module. Notably, to avoid increasing the system load, the online estimator needs to be lightweight, which necessitates balancing efficiency against accuracy. One possible solution is for the estimation of $Q_{ij}$ to adequately re-utilize the request context features, e.g. high-level features generated by other models in different modules.
\subsubsection{Policy Execution}~\\
Basically, this module assigns the best action $j$ to request $i$ by Equation (6). Moreover, for the stability of the online system, we put forward a concept called \textit{MaxPower}, an upper bound on $q_j$ to which each request is subject. DCAF sets a limit on \textit{MaxPower} in order to strongly control the online engine. \textit{MaxPower} is automatically controlled by the system's runtime and failure rate through a control loop feedback mechanism, e.g. Proportional Integral Derivative (PID) control \cite{ang2005pid}. The introduction of \textit{MaxPower} guarantees that the system can adjust itself and remain stable automatically and in a timely manner when encountering sudden request spikes. \newline
Following the formulation of PID control, $u(t)$ and $e(t)$ are the control action and the system instability at time step $t$. We define $e(t)$ as the weighted sum of the average runtime and the fail rate over a time interval, denoted by $rt$ and $fr$ respectively. $k_p$, $k_i$ and $k_d$ are the corresponding weights for the proportional, integral and derivative terms. $\theta$ is a tuned scale factor for the weighted sum of $rt$ and $fr$. Formally,
\begin{align}
u(t)= k_p{e(t)}+k_i{\sum_{n=1}^{t}e(n)}+k_d({e(t)-e(t-1)})
\end{align}
\begin{algorithm}
\caption{PID Control for MaxPower}
\begin{flushleft}
1: \textbf{Input:} $k_p$, $k_i$, $k_{d}$, $MaxPower$ \\
2: \textbf{Output:} $MaxPower$ \\
3: \textbf{while} (true): \\
4: \hspace{5mm} Obtain $rt$ and $fr$ from \textit{Information Collection and Monitoring} \\
5: \hspace{5mm} ${e(t)={rt}+\theta{fr}}$ \\
6: \hspace{5mm} $u(t)= k_p{e(t)}+k_i{\sum_{n=1}^{t}e(n)}+k_d({e(t)-e(t-1)})$\\
7: \hspace{5mm} Update $MaxPower$ with $u(t)$\\
8: \textbf{end while} \\
\end{flushleft}
\end{algorithm}
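A minimal Python sketch of this control loop is shown below; the gains, the targets, $\theta$, and the mapping from $u(t)$ to \textit{MaxPower} are illustrative assumptions rather than production values.
\begin{verbatim}
# Illustrative MaxPower control loop (all constants are made up).
def pid(errors, k_p, k_i, k_d):
    # u(t) = k_p*e(t) + k_i*sum_n e(n) + k_d*(e(t) - e(t-1))
    e_t = errors[-1]
    e_prev = errors[-2] if len(errors) > 1 else 0.0
    return k_p * e_t + k_i * sum(errors) + k_d * (e_t - e_prev)

k_p, k_i, k_d, theta = 0.6, 0.05, 0.1, 5.0
rt_target, fr_target = 1.0, 0.01   # desired runtime and fail rate
max_power, errors = 100.0, []
for rt, fr in [(1.0, 0.01), (1.5, 0.05), (3.0, 0.20)]:  # monitored samples
    errors.append((rt - rt_target) + theta * (fr - fr_target))  # e(t)
    u = pid(errors, k_p, k_i, k_d)
    max_power = max(1.0, max_power - u)  # more instability -> tighter bound
    print(f"e={errors[-1]:+.2f} u={u:+.2f} MaxPower={max_power:.1f}")
\end{verbatim}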
\subsection{Offline Estimator}
\subsubsection{Lagrange Multiplier Solver} ~\\
As mentioned above, we can obtain the globally optimal Lagrange Multiplier by a simple bisection search. In the real system, we use the logs as a request pool in which to search for the best candidate Lagrange Multiplier $\lambda$. Formally,
\begin{itemize}
\item Sample $N$ records from the logs with $Q_{ij}$, $q_j$ and computation cost $C$, e.g. the total amount of advertisements that request the CTR model within a time interval.
\item Adjust the computation cost $C$ to the current system status in order to keep the dynamic allocation problem correctly constrained over time. For example, we denote the regular QPS by $QPS_r$ and the current QPS by $QPS_c$. Then the adjusted computation cost $\hat{C}$ is $C \times \nicefrac{QPS_r}{QPS_c}$, which keeps the $N$ records under the current computation constraint.
\item Search for the best candidate Lagrange Multiplier $\lambda$ by Algorithm \ref{algo:lagrange}.
\end{itemize}
It is worth noting that we implicitly assume the distribution of the request pool is the same as that of online requests, which could introduce bias into the estimate of the Lagrange Multiplier. In practice, however, we can partly remove this bias by updating $\lambda$ frequently.
\subsubsection{Expected Gain Estimator} ~\\
In our setting, $Q_{ij}$ is associated with each request's eCPM under action $j$, a common choice of performance metric in online display advertising. Further, we build a CTR model to estimate the CTR, because eCPM decomposes into $ctr \times bid$, where the bids are usually provided directly by advertisers. Note that the CTR model is conditioned on actions in our case, since it is essential to evaluate each request's gain under different actions.
This estimator is updated routinely and provides real-time inference to the \textit{Policy Execution} module.
\section{Conclusion}
In this paper, we propose a novel dynamic computation allocation framework (DCAF), which can break the pre-defined quota constraints imposed on the different modules of existing cascade systems. By deploying DCAF online, we empirically show that DCAF consistently maintains the same business performance in an online advertising system with a great reduction in computation resources, and meanwhile keeps the system stable when facing sudden request spikes. Specifically, we formulate the dynamic computation allocation problem as a knapsack problem. We then theoretically prove that total revenue can be maximized under a computation budget constraint by properly allocating resources according to the value of each individual request. Moreover, under some general assumptions, the globally optimal Lagrange Multiplier $\lambda$ can be obtained, which completes the constrained optimization problem in theory. We also put forward a concept called \textit{MaxPower}, controlled in real time by a designed control loop feedback mechanism. Through \textit{MaxPower}, which constrains the range of action candidates, the system can be controlled powerfully and automatically.
\section{experiments}
\subsection{Offline Experiments}
To validate the framework's correctness and effectiveness, we extensively analyse real-world logs collected from the display advertising system of Taobao and conduct offline experiments on them. As mentioned above, it is common practice for most systems to ignore differences in the value of requests and to execute the same procedure on each request. Therefore, we set equally sharing the computation budget among requests as the \textbf{baseline} strategy.
As shown in Figure \ref{fig:cascade}, we simulate the performance of DCAF in the Ranking stage using offline logs. We note upfront that all data has been rescaled to avoid disclosing sensitive commercial data. We conduct our offline and online experiments in Taobao's display advertising system, where GPU resources are allocated automatically through DCAF. In detail, we instantiate the dynamic allocation problem as follows:
\begin{itemize}
\item Action $j$ controls the number of advertisements that need to be evaluated by the online CTR model in the Ranking stage.
\item $q_j$ represents the quota of advertisements that request the online CTR model under action $j$.
\item $Q_{ij}$ is the sum of the top-k ads' eCPM for request $i$ conditioned on action $j$ in the Ranking stage, which closely tracks online performance. $Q_{ij}$ is estimated in the experiment.
\item $C$ stands for the total number of advertisements that request the online CTR model in a period of time, within the serving capacity.
\item \textbf{Baseline}: The original system, which allocates the same computation resources to all requests. With the baseline strategy, the system scores the same number of advertisements in the Ranking stage for each request.
\end{itemize}
\begin{figure}[htb]
\includegraphics[width=\columnwidth] {Figure_1.png}
\caption{\label{fig:exp1} Global optima under different $\lambda$ candidates. In Figure \ref{fig:exp1}, the x-axis stands for the $\lambda$ candidates; the left y-axis represents $\sum_{ij}{x_{ij}Q_{ij}}$; the right y-axis denotes the corresponding cost. The red shadow area corresponds to the part of $\max\sum_{ij}{x_{ij}Q_{ij}}$ exceeding the baseline, and the yellow shadow area is the reduction of $\sum_{ij}{x_{ij}q_{j}}$ under these $\lambda$'s compared with the baseline. The random strategy is also shown in Figure \ref{fig:exp1} for comparison with DCAF.}
\medskip
\end{figure}
\textbf{Global optima under different $\lambda$ candidates.} In DCAF, the Lagrange Multiplier $\lambda$ works by imposing a constraint on the computation budget. Figure \ref{fig:exp1} shows the relation among $\lambda$'s magnitude, $\max\sum_{ij}{x_{ij}Q_{ij}}$ and its corresponding $\sum_{ij}{x_{ij}q_{j}}$ under a fixed budget constraint. Clearly, $\lambda$ monotonically affects both $\max\sum_{ij}{x_{ij}Q_{ij}}$ and its corresponding $\sum_{ij}{x_{ij}q_{j}}$, and DCAF outperforms the baseline when $\lambda$ lies in an appropriate interval. As demonstrated by the two dotted lines, in comparison with the baseline, DCAF can achieve either higher performance with the same computation budget or the same performance with a much smaller computation budget. DCAF also outmatches the random strategy by a large margin. \newline
\begin{figure}[htb]
\includegraphics[width=\columnwidth] {Figure_2.png}
\caption{\label{fig:exp2} Comparison of DCAF with the original system on computation cost. In Figure \ref{fig:exp2}, the x-axis denotes $\sum_{ij}{x_{ij}Q_{ij}}$; the y-axis represents $\sum_{ij}{x_{ij}q_{j}}$. For points on the two lines with the same x-coordinate, Figure \ref{fig:exp2} shows that DCAF always performs as well as the baseline with much less computation.}
\medskip
\end{figure}
\textbf{Comparison of DCAF with the original system on computation cost.} Figure \ref{fig:exp2} shows that DCAF consistently achieves the same performance as the baseline while saving cost by a huge margin. Furthermore, DCAF plays an even more important role in more resource-constrained systems.
\begin{figure}[htb]
\includegraphics[width=\columnwidth] {Figure_3.png}
\caption{\label{fig:exp3} Total eCPM and its cost over different actions. In this figure, the x-axis stands for the actions $j$ and the left y-axis represents $\sum_{ij}{x_{ij}Q_{ij}}$ conditioned on action $j$; the right y-axis denotes the corresponding cost. For each action $j$, we sum over the $Q_{ij}$, the sum of the top-k ads' eCPM, for requests that are assigned action $j$ by DCAF.}
\medskip
\end{figure}
\textbf{Total eCPM and its cost over different actions.} As shown by the distributions in Figure \ref{fig:exp3}, DCAF treats each request differently by taking different actions $j$. Moreover, $\nicefrac{\sum_{ij}Q_{ij}}{\sum_{ij}q_{j}}$ decreases with $j$, which empirically shows that the relation between expected gain and its corresponding cost overall follows the law of diminishing marginal utility.
\subsection{Online Experiments}
DCAF has been deployed in the Alibaba display advertising system since 2020. From 2020-05-20 to 2020-05-30, we conducted an online A/B test to validate the effectiveness of DCAF. The settings of the online experiments are almost identical to those of the offline experiments. Action $j$ controls the number of advertisements sent to the CTR model in the Ranking stage, and we use a simple linear model to estimate $Q_{ij}$. The original system without DCAF is set as the baseline. DCAF is deployed between the Pre-Ranking stage and the Ranking stage, aiming to dynamically allocate the GPU resources consumed by Ranking's CTR model. Table \ref{exp:samecost} shows that DCAF brings improvement at the same computation cost. Considering the massive daily traffic of Taobao, we also deploy DCAF to reduce the computation cost without hurting the revenue of the ads system. The results are illustrated in Table \ref{exp:samerpm}: DCAF reduces the computation cost, measured by the total number of advertisements sent to the CTR model, by 25\%, and the total utilization of GPU resources by 20\%. It should be noted that, in the online system, $Q_{ij}$ is estimated by a simple linear model which may not be sufficiently complex to fully capture the data distribution. Thus the improvement of DCAF in the online system is smaller than in the offline experiments. This simple method nevertheless demonstrates the effectiveness of the overall framework, which is our main concern in this paper. In the future, we will dedicate more effort to modeling $Q_{ij}$. Figure \ref{fig:exp4} shows the performance of DCAF under the pressure of online traffic in an extreme case, e.g. the Double 11 shopping festival. Through the control mechanism of \textit{MaxPower}, the online serving system can react quickly to sudden traffic increases and return to normal status by consistently keeping the fail rate and runtime at a low level. It is worth noticing that the control mechanism of \textit{MaxPower} is superior to human intervention in scenarios where large traffic arrives suddenly and human interventions are inevitably delayed.
\begin{table}[!h]
\caption{Results with Same Computation Budget}
\label{exp:samecost}
\begin{tabular}{lcl}
\toprule
& CTR & RPM\\
\midrule
Baseline & +0.00\% & +0.00\% \\
DCAF & +0.91\% & +0.42\% \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[!h]
\caption{Results with Same Revenue}
\label{exp:samerpm}
\begin{tabular}{lcccl}
\toprule
& CTR & RPM & Computation Cost & GPU-utils\\
\midrule
Baseline & +0.00\% & +0.00\% & -0.00\% & -0.00\%\\
DCAF & -0.57\% & +0.24\% & -25\% & -20\%\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[htb]
\centering
\begin{subfigure}[b]{\columnwidth}
\includegraphics[width=1\linewidth]{Figure_4.png}
\caption{}
\label{fig:Ng1}
\end{subfigure}
\begin{subfigure}[b]{\columnwidth}
\includegraphics[width=1\linewidth]{Figure_5.png}
\caption{}
\label{fig:Ng2}
\end{subfigure}
\caption{\label{fig:exp4} The effect of the \textit{MaxPower} mechanism. In this experiment, we manually change the traffic of the system at time $158$ so that the requests per second increase 8-fold. Figure \ref{fig:Ng1} shows the trend of \textit{MaxPower} over time and Figure \ref{fig:Ng2} shows the trend of the fail rate over time. As shown in Figure \ref{fig:exp4}, \textit{MaxPower} takes effect immediately when the QPS rises suddenly, which keeps the fail rate at a low level. At the same time, the baseline strategy fails to serve some requests, because it does not change its computing strategy when the system's computation power is insufficient.}
\medskip
\end{figure}
\section{Formulation}
We formulate the dynamic computation allocation problem as a knapsack problem aimed at maximizing the total revenue under a computation budget constraint. We assume that there are $N$ requests $\{i=1,\dots,N\}$ arriving at the e-commerce platform within a time period. For each request, one of $M$ actions $\{j=1,\dots,M\}$ can be taken. We define $Q_{ij}$ and $q_j$ as the expected gain for request $i$ assigned action $j$ and the cost of action $j$ respectively. $C$ represents the total computation budget within a time period. For instance, in a display advertising system deployed in e-commerce, $q_j$ usually stands for the quota of items (ads) sent to the online engine for evaluation, which usually correlates positively with system load, and $Q_{ij}$ usually represents the eCPM~(effective cost per mille) conditioned on action $j$, which increases with $q_j$. $x_{ij}$ is the indicator that request $i$ is assigned action $j$. For each request, one and only one action $j$ can be taken; in other words, $x_{i.}$ is a one-hot vector. \newline
Following the definitions above, our target is to maximize the total revenue under the computation budget by assigning each request $i$ an appropriate action $j$. Formally,
\begin{align*}
{\rm \max_{x}}&\sum_{ij}{x_{ij}Q_{ij}} \\
&{\rm s.t.} \sum_{ij}{x_{ij}q_{j}} \le C \\
&\sum_j{x_{ij}} \le 1 \\
&x_{ij} \in \{0,1\} \numberthis
\end{align*}
where we assume that each individual request has its own ``personalized'' value and thus should be treated differently. Each request's expected gain depends on the action $j$, which the platform takes automatically in order to maximize the objective under the constraint. In this paper, we mainly focus on proving the effectiveness of DCAF's framework as a whole. In the real case, however, we face several challenges which are beyond the scope of this paper; we list them below for future consideration:
\begin{itemize}
\item The dynamic allocation problem \cite{berger2018robinhood} is usually coupled with real-time requests and system status. As both the online traffic and the system status vary with time, the knapsack problem should be treated as a real-time one, e.g. with a real-time computation budget.
\item $Q_{ij}$ is unknown and thus needs to be estimated. Predicting $Q_{ij}$ is vital to maximizing the objective, which means real-time and efficient estimation approaches are required. Besides, to avoid increasing the system's burden, it is essential to consider lightweight methods.
\end{itemize}
\section{Future Work}
Fairness has attracted more and more attention in the fields of recommendation systems and online display advertising. In this paper we propose DCAF, which allocates computation resources dynamically among requests. The value of a request varies with time, scenario, user and other factors, which motivates us to treat each request differently and customize the computation resources devoted to it. However, we also note that DCAF may discriminate among users: since the allocated computation budget varies across users, DCAF may give the impression that it further aggravates the unfairness of the system. In our opinion, the unfairness problem stems from the fact that all approaches to modeling users are data-driven, while most systems create a data feedback loop in which the system is trained and evaluated on the data impressed to users \cite{chaney2018algorithmic}. We think the fairness of recommender and ads systems is important and deserves more attention. In the future, we will extensively analyse DCAF's long-term effect on fairness and carefully incorporate fairness considerations into DCAF. \newline
Besides, DCAF is still in the early stage of development: modules in the cascade system are considered independently, and in our experiments the action $j$ is defined as the number of candidates to be evaluated. Obviously, DCAF could work with more diverse actions, such as models with different computational complexity. Meanwhile, instead of maximizing the total revenue within a particular module, DCAF will in the future pursue the global optimum from the viewpoint of the whole cascade system. Moreover, in subsequent stages, we will endow DCAF with the abilities of quick adaptation and fast reaction, which will enable it to exert its full effect in any scenario immediately.
\section{Introduction}
Modern large-scale systems such as recommender systems and online advertising are built upon computation-intensive infrastructure \cite{cheng2016wide} \cite{zhou2018deep} \cite{zhou2019deep}. With the popularity of e-commerce shopping, platforms such as Taobao, one of the world's leading e-commerce platforms, are now enjoying a huge boom in traffic \cite{cardellini1999dynamic}; user requests at Taobao are increasing year by year. As a result, the system load is under great pressure \cite{zhou2018rocket}. Moreover, request fluctuation poses a critical challenge to the online serving system. For example, the Taobao recommendation system regularly bears many request spikes during the Double 11 shopping festival.
\begin{figure}[thb]
\includegraphics[width=\columnwidth] {cascade.png}
\caption{\label{fig:cascade} Illustration of our cascaded display advertising system. Each request will be served through these modules sequentially. Considering the limitation of computation resource and latency for online serving system, the fixed quota of candidate advertisements, denoted by $N$ for each module, is usually pre-defined manually by experience. }
\medskip
\small
\end{figure}
To address the above challenges, the prevailing practices for online engines are: 1) decomposing the cascade system \cite{liu2017cascade} into multiple modules and manually allocating a fixed quota to each module by experience, as shown in Figure~\ref{fig:cascade}; 2) designing computation downgrade plans in case sudden traffic spikes arrive and manually executing these plans when needed.
These non-automated strategies often lack flexibility and require human intervention. Furthermore, once executed, most of these practices affect all requests and ignore the fact that the value of requests varies greatly. A straightforward but better strategy is to bias computation resources towards the requests that are more valuable than others, thereby maximizing the total revenue.
Considering the shortcomings of existing works, we aim to build a dynamic allocation framework that can allocate the computation budget flexibly among requests. Moreover, this framework should also take into account the stability of the online serving system, which is frequently challenged by request booms and spikes. Specifically, we formulate the problem as a knapsack problem whose objective is to maximize the total revenue under a computation budget constraint. We propose a dynamic allocation framework named DCAF which considers both computation budget allocation and the stability of the online serving system simultaneously and automatically.
Our main contributions are summarized as follow:
\begin{itemize}
\item We break through the stereotype in most cascade systems whereby each individual module is independently limited by a static computation budget. We introduce the brand-new idea that the computation budget can be allocated w.r.t.\ the value of traffic requests in a ``personalized'' manner.
\item We propose a dynamic allocation framework, DCAF, which guarantees in theory that the total revenue can be maximized under a computation budget constraint. Moreover, we provide a powerful control mechanism that keeps the online serving system stable when encountering sudden request spikes.
\item DCAF has been deployed in the display advertising system of Taobao, bringing notable improvements. Specifically, the system with DCAF maintains the same business performance with a 20\% reduction in GPU (Graphics Processing Unit) resources for the online ranking system. Meanwhile, it greatly boosts the online engine's stability.
\item By defining this new paradigm, DCAF lays the cornerstone for jointly optimizing the cascade system across different modules and further raises the performance ceiling of online serving systems.
\end{itemize}
\section{Acknowledgment}
We thank Zhenzhong Shen and Chi Zhang for helping us with deep neural net inference optimization and for conducting the dynamic resource allocation experiments.
\bibliographystyle{ACM-Reference-Format}
\section{Methodology}
\subsection{Global Optimal Solution and Proof}
To solve the problem, we first construct the Lagrangian from the formulation above,
\begin{align*}
L = -\sum_{ij}{x_{ij}Q_{ij}}+\lambda (\sum_{ij}{x_{ij}q_{j}}-C)+\sum_i{(\mu_i(\sum_j{x_{ij}}-1))} \\
= \sum_{ij}{x_{ij}(-Q_{ij}+\lambda q_j+\mu_i)}-\lambda C-\sum_i{\mu_i} \\
{\rm s.t.} \lambda \ge 0 \\
\mu_i \ge 0 \\
x_{ij} \ge 0 \numberthis
\end{align*}
where we relax the discrete constraint on the indicator $x_{ij}$; we can show that the relaxation does no harm to the optimal solution. From the primal, the dual function \cite{boyd2004convex} is
\begin{align}
{\rm \max_{\lambda,\mu}}\ {{\rm \min_{x_{ij}}}}(\sum_{ij}{x_{ij}(-Q_{ij}+\lambda q_j+\mu_i)}-\lambda C-\sum_i{\mu_i})
\end{align}
With $x_{ij} \ge 0$ ($x_{ij} \le 1$ is implicitly described in the Lagrangian), the linear function is bounded below only when $-Q_{ij}+\lambda q_j+\mu_i\ge0$, and $x_{ij} > 0$ can hold only when $-Q_{ij}+\lambda q_j+\mu_i=0$, which means $x_{ij} = 1$ in our case (recall that $x_{i.}$ is a one-hot vector). Formally,
\begin{align*}
{\rm \max_{\lambda,\mu}}(-\lambda C-\sum_i{\mu_i}) \\
{\rm s.t.} -Q_{ij}+\lambda q_j+\mu_i\ge0 \\
\lambda \ge 0 \\
\mu_i \ge 0 \\
x_{ij} \ge 0 \numberthis
\end{align*}
As the dual objective is decreasing in $\mu$, the globally optimal solution for $\mu$ is
\begin{align}\label{equ5}
\mu_{i} = {\rm \max_{j}}(Q_{ij}-\lambda q_j)
\end{align}
Hence, the globally optimal solution for $x_{ij}$, indicating which action $j$ is assigned to request $i$, is
\begin{align}
j = {\rm arg\max_{j}}(Q_{ij}-\lambda q_j)
\end{align}
From Slater's theorem \cite{slater1950lagrange}, it can easily be shown that strong duality holds in our case, which means that this solution is also the globally optimal solution to the primal problem.
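As a toy illustration of this decision rule, the snippet below picks the action for a single request; the gains, costs and multiplier are made-up numbers.
\begin{verbatim}
# j = argmax_j (Q_ij - lambda * q_j) for one request (toy numbers).
import numpy as np

q = np.array([10.0, 20.0, 40.0, 80.0])  # cost q_j of each action
Q_i = np.array([1.0, 1.6, 2.0, 2.2])    # expected gain Q_ij for request i
lam = 0.015                             # Lagrange multiplier

j_star = int(np.argmax(Q_i - lam * q))  # scores: [0.85, 1.30, 1.40, 1.00]
print("action for request i:", j_star)  # -> 2
\end{verbatim}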
\subsection{Parameter Estimation}
\subsubsection{Lagrange Multiplier}~\\
The analytical form of the Lagrange multiplier cannot easily, if at all, be derived in our case, and computing the exact globally optimal solution in the general case is computationally prohibitive. However, under some general assumptions, a simple bisection search is guaranteed to find the globally optimal $\lambda$. Without loss of generality, we re-index the action space in ascending order of $q_j$'s magnitude.
\begin{assumption}\label{ass1}
$Q_{ij}$ is monotonically increasing with $j$.
\end{assumption}
\begin{assumption}\label{ass2}
$\nicefrac{Q_{ij}}{q_j}$ is monotonically decreasing with $j$.
\end{assumption}
\begin{lemma}\label{lem0}
Suppose Assumptions (\ref{ass1}) and (\ref{ass2}) hold, for each $i$, $\nicefrac{Q_{i{j_1}}}{q_{j_1}} \ge \nicefrac{Q_{i{j_2}}}{q_{j_2}}$ will hold if $\lambda_1 \ge \lambda_2$, where $j_1$ and $j_2$ are the actions that maximize the objective under $\lambda_1$ and $\lambda_2$ respectively.
\end{lemma}
\begin{proof}
By Equation (\ref{equ5}) and $\mu_{i}\ge0$, the inequality $Q_{ij}-\lambda q_j \ge 0$ holds; equivalently, $\nicefrac{Q_{ij}}{q_j} \ge \lambda$. Suppose $\nicefrac{Q_{i{j_1}}}{q_{j_1}} < \nicefrac{Q_{i{j_2}}}{q_{j_2}}$; then $\nicefrac{Q_{i{j_2}}}{q_{j_2}} > \nicefrac{Q_{i{j_1}}}{q_{j_1}} \ge \lambda_1 \ge \lambda_2$. However, by Assumptions (\ref{ass1}) and (\ref{ass2}) we could always find $j_2^*$ with $Q_{i{j_2^*}} \ge Q_{i{j_2}}$ and ${q_{j_2^*}} > {q_{j_2}}$ such that $\nicefrac{Q_{i{j_1}}}{q_{j_1}} \ge \nicefrac{Q_{i{j_2^*}}}{q_{j_2^*}} \ge \lambda_1 \ge \lambda_2$. In other words, $j_2$ would not be the action that maximizes the objective. Therefore, $\nicefrac{Q_{i{j_1}}}{q_{j_1}} \ge \nicefrac{Q_{i{j_2}}}{q_{j_2}}$.
\end{proof}
\begin{lemma}\label{lem1}
Suppose Assumptions (\ref{ass1}) and (\ref{ass2}) hold. Then both $\max\sum_{ij}{x_{ij}Q_{ij}}$ and its corresponding $\sum_{ij}{x_{ij}q_{j}}$ are monotonically decreasing in $\lambda$.
\end{lemma}
\begin{proof}
As $\lambda$ increases, $\nicefrac{Q_{ij}}{q_j}$ at the chosen action also increases monotonically by Lemma (\ref{lem0}). Combined with Assumptions (\ref{ass1}) and (\ref{ass2}), this implies that both $\max\sum_{ij}{x_{ij}Q_{ij}}$ and its corresponding $\sum_{ij}{x_{ij}q_{j}}$ are monotonically decreasing in $\lambda$.
\end{proof}
\begin{theorem}
Suppose Lemma (\ref{lem1}) holds. Then the globally optimal Lagrange Multiplier $\lambda$ can be obtained by a bisection search for a solution that makes $\sum_{ij}{x_{ij}q_{j}} = C$ hold.
\end{theorem}
\begin{proof}
By Lemma (\ref{lem1}), this proof is almost trivial. We denote the Lagrange Multiplier that makes $\sum_{ij}{x_{ij}q_{j}} = C$ hold by $\lambda^*$. By the monotonicity in Lemma (\ref{lem1}), decreasing $\lambda^*$ would result in computation overload, while increasing $\lambda^*$ would inevitably reduce $\max\sum_{ij}{x_{ij}Q_{ij}}$. Hence, $\lambda^*$ is the globally optimal solution to the constrained maximization problem. Besides, the bisection search is guaranteed to work in this case, again by monotonicity.
\end{proof}
Assumption (\ref{ass1}) usually holds because the gain is generally proportional to the cost, e.g., more sophisticated models usually bring better online performance. Assumption (\ref{ass2}) follows the law of diminishing marginal utility \cite{scott1955fishery}, an economic phenomenon that is reasonable in our constrained dynamic allocation setting.
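As a toy illustration (hypothetical numbers, not drawn from our production system), consider a request $i$ with three actions of costs $q = (1, 2, 4)$ and gains $Q_{i\cdot} = (0.5, 0.8, 0.9)$: the gains increase with $j$ (Assumption \ref{ass1}) while the gain-to-cost ratios $(0.5, 0.4, 0.225)$ decrease with $j$ (Assumption \ref{ass2}). With $\lambda = 0.25$, the scores $Q_{ij} - \lambda q_j = (0.25, 0.30, -0.10)$ select the mid-priced action, while $\lambda = 0.45$ gives $(0.05, -0.10, -0.90)$ and pushes the choice to the cheapest action.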
The procedure for searching for the Lagrange multiplier $\lambda$ is described in Algorithm \ref{algo:lagrange}. In general, we run a bisection search over a pre-defined interval to find the globally optimal $\lambda$. Suppose $\sum_i \min_j q_j \le C \le \sum_i \max_j q_j$ (otherwise there is no need for dynamic allocation); it can then be shown that $\lambda$ lies in the interval $[0, \min_{ij}(\nicefrac{Q_{ij}}{q_{j}})]$. We then obtain the globally optimal $\lambda$ through a bisection search whose target is the solution of $\sum_{ij}{x_{ij}q_{j}} = C$.
\begin{algorithm}
\caption{\label{algo:lagrange} Calculate Lagrange Multiplier}
\begin{flushleft}
1: \textbf{Input:} $Q_{ij}$, $q_j$, $C$, interval $[0, \min_{ij}(\frac{Q_{ij}}{q_{j}})]$ and tolerance $\epsilon$ \\
2: \textbf{Output:} Global optimal solution of Lagrange Multiplier $\lambda$\\
3: Set $\lambda_l = 0$, $\lambda_r = \min_{ij}(\frac{Q_{ij}}{q_{j}})$, $gap = +\infty$ \\
4: \textbf{while} ($gap > \epsilon$): \\
5: \hspace{5mm} $\lambda_m = \lambda_l + \frac{\lambda_r - \lambda_l}{2}$ \\
6: \hspace{5mm} For each request $i$, choose action $j_m^*$ by
\centerline{$\{j: {\rm arg\,max}_{j}(Q_{ij}-\lambda_m q_j), ~ Q_{ij}-\lambda_m q_j \ge 0\}$}\\
7: \hspace{5mm} Calculate $C_m = \sum_i q_{j_m^*}$ \\
8: \hspace{5mm} $gap = |C_m - C|$ \\
9: \hspace{5mm} \textbf{if} $gap \le \epsilon$: \\
10: \hspace{1cm} \textbf{return} $\lambda_m$ \\
11: \hspace{5mm} \textbf{else if} $C_m \le C$: \\
12: \hspace{1cm} $\lambda_r = \lambda_m$ \\
13: \hspace{5mm} \textbf{else}: \\
14: \hspace{1cm} $\lambda_l = \lambda_m$ \\
15: \textbf{end while} \\
16: \textbf{Return} the globally optimal $\lambda_m$, which satisfies $|\sum_i q_{j_m^*} - C| \le \epsilon$.
\end{flushleft}
\end{algorithm}
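For concreteness, a minimal Python sketch of Algorithm \ref{algo:lagrange} is given below (the arrays \texttt{Q} and \texttt{q} and all numerical values are illustrative assumptions rather than our production code; the branch directions rely on the cost being monotonically non-increasing in $\lambda$, cf.~Lemma~\ref{lem1}):
\begin{verbatim}
import numpy as np

def solve_lambda(Q, q, C, eps=1e-6, max_iter=100):
    # Q: (n_requests, n_actions) gains Q_ij; q: (n_actions,) costs q_j
    lam_l, lam_r = 0.0, float(np.min(Q / q))  # interval [0, min_ij Q_ij/q_j]
    lam_m = 0.5 * (lam_l + lam_r)
    for _ in range(max_iter):
        lam_m = 0.5 * (lam_l + lam_r)
        scores = Q - lam_m * q                  # Q_ij - lambda_m * q_j
        j_star = scores.argmax(axis=1)          # best action per request
        ok = scores[np.arange(Q.shape[0]), j_star] >= 0
        C_m = q[j_star[ok]].sum()               # realized total cost
        if abs(C_m - C) <= eps:
            break
        if C_m <= C:   # under budget: decrease lambda to spend more
            lam_r = lam_m
        else:          # over budget: increase lambda to spend less
            lam_l = lam_m
    return lam_m
\end{verbatim}
Since bisection halves the search interval at every step, the uncertainty in $\lambda$ after $k$ iterations is at most $2^{-k}$ times the initial interval length.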
For more general cases, more sophisticated methods than bisection search, e.g., reinforcement learning, could be employed to explore the solution space and find the globally optimal $\lambda$.
\subsubsection{Request Expected Gain}~\\
In e-commerce, the expected gain is usually defined as an online performance metric, e.g., effective cost per mille (eCPM), which directly indicates the value of each individual request to the platform.
Four categories of features are mainly used: User Profile, User Behavior, Context, and System Status. It is worth noting that our features are quite different from those of a typical CTR model:
\begin{itemize}
\item No specific target-ad features are provided, because we estimate the CTR conditioned on actions.
\item System status is included because we intend to establish the connection between the system and the actions.
\item The context features consist of the inference results from previous modules, in order to re-utilize the request information efficiently.
\end{itemize}
\section{Related work}
A considerable amount of research has focused on improving serving performance. Park et al. \cite{park2018deep} describe the practice of serving deep learning models at Facebook. Clipper \cite{crankshaw2017clipper} is a general-purpose low-latency prediction serving system. Both use latency, accuracy, and throughput as the optimization targets of the system. They also mention techniques such as batching, caching, hyper-parameter tuning, model selection, and computation kernel optimization to improve overall serving performance. In addition, much research and many systems use model compression \cite{han2015deep}, mixed-precision inference, quantization \cite{gupta2015deep,courbariaux2015binaryconnect}, kernel fusion \cite{chen2018tvm}, and model distillation \cite{hinton2015distilling, zhou2018rocket} to accelerate deep neural network inference.
Traditional work usually focuses on improving the performance of individual blocks and the overall serving performance across all possible queries. Some newer systems have been designed to take query diversity into consideration and provide dynamic planning. DQBarge \cite{chow2016dqbarge} is a proactive system that uses monitoring data to make data-quality tradeoffs. RobinHood \cite{berger2018robinhood} provides tail-latency-aware caching to dynamically allocate cache resources. Zhang et al. \cite{zhang2019e2e} take user heterogeneity into account to improve quality of experience on the web. These systems provide inspiring insights for our design, but they do not offer solutions for computation resource reduction or a comprehensive study of personalized planning algorithms.
\section{Introduction}
Formation of relativistic jets in active galactic nuclei (AGN) is one of the
biggest challenges in astrophysics. The radio galaxy M87 is one of the closest
examples of this phenomena, and its jet has been investigated across the entire
electromagnetic spectrum over years~\citep[e.g.,][]{owen1989, biretta1999,
harris2006, abramowski2012}. Due to its proximity
\citep[$D=16.7$~Mpc;][]{jordan2005} and a large estimated mass of the central
black hole~\citep[$M_{\rm BH} \simeq (3-6) \times
10^9~M_{\odot}$;][]{macchetto1997, gebhardt2009, walsh2013}\footnote{In this
Letter we adopt $M_{\rm BH} = 6.0\times 10^9M_{\odot}$ along with \citet{hada2011,
hada2012}.}, 1 milliarcsecond corresponds to 0.08~pc or 140~Schwarzschild radii $(R_{\rm
s})$, providing an unprecedented opportunity to probe the jet formation processes
at its base.
The inner jet structure of M87 has been intensively investigated with
Very-Long-Baseline-Interferometry (VLBI). \citet{junor1999} discovered a
broadening of the jet opening angle at $\sim$100~$R_{\rm s}$ from the radio core
with an edge-brightened structure, and this was later confirmed in several
works~\citep{ly2004, dodson2006, krichbaum2006, ly2007, kovalev2007}. More
recently, \citet[][hereafter AN12]{asada2012} discovered that the jet maintains a
parabolic structure between a few $100~R_{\rm s}$ and $10^5~R_{\rm s}$ from the
core with a transition into a conical streamline above a location of
$\sim10^6~R_{\rm s}$. These results provide compelling evidence that the
collimation region is beginning to be resolved near the base of this jet.
However, the jet structure within $\sim$100$~R_{\rm s}$ remains unclear. Probing
the innermost region is essential to directly test theoretical models of
relativistic jet formation. Because the radio core at the jet base corresponds to
an optically-thick surface of synchrotron emission~\citep{bk1979}, previous
studies could not determine the location of the central engine
relative to the radio core, preventing them from estimating the exact collimation
profile in this region.
Recently, we have overcome this difficulty~\citep[][hereafter H11]{hada2011}. By
measuring the core position shift~\citep{konigl1981, lobanov1998} with
multi-frequency phase-referencing Very-Long-Baseline-Array (VLBA) observations, we
have constrained the location of the central engine of the M87 jet as
$\sim$20$~R_{\rm s}$ upstream of the 43-GHz core. This allows us to probe the
radial profile of the jet structure as a function of the distance from the central
engine. Moreover, the determination of the frequency dependence of the core
position $r_{\rm c}\propto \nu^{-0.94\pm0.09}$ enables us to reveal the
structure of the innermost jet by investigating the multi-frequency properties of
the core. Indeed, the recent VLBI observation of M87 at 230~GHz detected a compact
core which is comparable to the size of event horizon~\citep{doeleman2012}, being
consistent with the core of this jet to be located in the black hole vicinity.
In this paper, we explore the collimation profile of the inner jet of M87,
especially based on the multi-frequency properties of the VLBI core as well as the
edge-brightened jet. The data analysis is described in the next section. In
section 3, we show our new results. In the final section, we discuss the derived
inner jet structure.
\begin{table*}[htbp]
\begin{minipage}[t]{1.0\textwidth}
\centering
\caption{Multi-frequency VLBA observations of M87}
\scalebox{1.00}{
\begin{tabular}{lllccccccc}
\hline
\hline
Freq. & Date & Beam $\theta_{\rm bm}$ & $I_{\rm rms}$ & \multicolumn{ 6}{c}{JMFIT Gaussian} \\\cline{5-10}
band & & size, P.A. & & & $\theta_{\rm maj}$ & $\theta_{\rm min}$ & P.A. &
$\theta_{20^{\circ}} (\equiv W_{\rm c})$ & $\frac{\theta_{20^{\circ}}}{\theta_{\rm bm, 20^{\circ}}}$ \\
(GHz) & & (mas)$^2$, (deg.) & $\left(\frac{\rm mJy}{\rm beam}\right)$ & & (mas) & (mas) & (deg.) & (mas) & \\
& & (a) & (b) & & (c) & (d) & (e) & (f) & (g) \\
\hline
2.3 & 2010 Apr 8 & 6.01 $\times$ 2.92, $-3$ & 0.79 & & $3.26\pm 0.08$ & $< 1.06$ & $286\pm 3$ & $<1.06$ & $<0.20$ \\
& 2010 Apr 18 & 5.97 $\times$ 3.00, $-4$ & 0.66 & & $3.47\pm 0.08$ & $< 1.06$ & $287\pm 3$ & $<1.06$ & $<0.20$ \\\cline{5-10}
& & & & Ave. & 3.37 & $< 1.06$ & 287 & $<1.06$ & $<0.20$ \\
& & & & & & & & & \\
5.0 & 2010 Apr 8 & 2.63 $\times$ 1.36, 1 & 0.43 & & $1.36\pm 0.06$ & $0.57\pm 0.16$ & $285\pm 4$ & $0.58\pm 0.17$ & 0.25\\
& 2010 Apr 18 & 2.70 $\times$ 1.31, 0 & 0.37 & & $1.38\pm 0.06$ & $0.56\pm 0.13$ & $288\pm 3$ & $0.56\pm 0.14$ & 0.24 \\\cline{5-10}
& & & & Ave. & 1.37 & 0.57 & 287 & 0.57 & 0.25 \\
& & & & & & & & & \\
8.4 & 2009 May 23 & 1.48 $\times$ 0.55, $-9$ & 0.21 & & $0.61\pm 0.07$ & $0.38\pm 0.05$ & $292\pm 5$ & $0.38\pm 0.06$ & 0.40 \\
& 2010 Apr 8 & 1.67 $\times$ 0.84, 2 & 0.37 & & $0.84\pm 0.04$ & $0.42\pm 0.06$ & $295\pm 4$ & $0.42\pm 0.07$ & 0.29 \\
& 2010 Apr 18 & 1.63 $\times$ 0.85, $-3$ & 0.34 & & $0.87\pm 0.05$ & $0.40\pm 0.08$ & $295\pm 4$ & $0.40\pm 0.09$ & 0.29 \\\cline{5-10}
& & & & Ave. & 0.77 & 0.40 & 294 & 0.40 & 0.33 \\
& & & & & & & & & \\
15 & 2009 Jan 7 & 1.04 $\times$ 0.47, $-3$ & 0.82 & & $0.45\pm 0.02$ & $0.28\pm 0.01$ & $298\pm 5$ & $0.28\pm 0.02$ & 0.34 \\
& 2009 Jul 5 & 1.00 $\times$ 0.46, $-8$ & 0.63 & & $0.39\pm 0.05$ & $0.26\pm 0.03$ & $305\pm 7$ & $0.26\pm 0.06$ & 0.35 \\
& 2010 Feb 11 & 1.00 $\times$ 0.43, $-2$ & 1.07 & & $0.41\pm 0.02$ & $0.21\pm 0.04$ & $286\pm 2$ & $0.21\pm 0.04$ & 0.27 \\
& 2010 Apr 8 & 0.94 $\times$ 0.48, $-$5 & 0.59 & & $0.47\pm 0.02$ & $0.25\pm 0.04$ & $300\pm 5$ & $0.25\pm 0.04$ & 0.33 \\
& 2010 Apr 18 & 0.92 $\times$ 0.49, $-10$ & 0.44 & & $0.49\pm 0.02$ & $0.24\pm 0.02$ & $298\pm 3$ & $0.24\pm 0.03$ & 0.33 \\
& 2010 Sep 29 & 1.00 $\times$ 0.45, $-2$ & 0.72 & & $0.42\pm 0.07$ & $0.26\pm 0.03$ & $307\pm 4$ & $0.27\pm 0.08$ & 0.35 \\
& 2011 May 21 & 0.93 $\times$ 0.43, $-5$ & 0.74 & & $0.51\pm 0.05$ & $0.28\pm 0.05$ & $302\pm 5$ & $0.29\pm 0.07$ & 0.38 \\\cline{5-10}
& & & & Ave. & 0.45 & 0.25 & 299 & 0.26 & 0.34 \\
& & & & & & & & & \\
23.8 & 2010 Jan 18 & 0.60 $\times$ 0.29, $-8$ & 0.76 & & $0.25\pm 0.04$ & $0.20\pm 0.02$ & $300\pm 3$ & $0.20\pm 0.04$ & 0.42 \\
& 2010 Apr 4 & 0.63 $\times$ 0.29, $-4$ & 0.68 & & $0.27\pm 0.02$ & $0.18\pm 0.03$ & $313\pm 7$ & $0.19\pm 0.04$ & 0.38 \\
& 2010 Apr 8 & 0.54 $\times$ 0.29, $-5$ & 1.36 & & $0.24\pm 0.03$ & $0.18\pm 0.04$ & $326\pm 6$ & $0.20\pm 0.05$ & 0.43 \\
& 2010 Apr 18 & 0.57 $\times$ 0.28, $-13$ & 1.12 & & $0.25\pm 0.02$ & $0.20\pm 0.03$ & $327\pm 3$ & $0.21\pm 0.04$ & 0.54 \\
& 2010 May 1 & 0.62 $\times$ 0.28, $-9$ & 0.73 & & $0.29\pm 0.02$ & $0.19\pm 0.03$ & $315\pm 6$ & $0.20\pm 0.04$ & 0.42 \\
& 2010 May 15 & 0.64 $\times$ 0.29, $-12$ & 0.80 & & $0.31\pm 0.03$ & $0.20\pm 0.04$ & $317\pm 4$ & $0.21\pm 0.05$ & 0.48 \\
& 2010 May 30 & 0.62 $\times$ 0.28, $-11$ & 0.83 & & $0.28\pm 0.02$ & $0.16\pm 0.05$ & $314\pm 5$ & $0.17\pm 0.05$ & 0.38 \\\cline{5-10}
& & & & Ave. & 0.27 & 0.19 & 316 & 0.20 & 0.44 \\
& & & & & & & & & \\
43 & 2009 Mar 13 & 0.29 $\times$ 0.13, $-6$ & 0.55 & & $0.15\pm 0.01$ & $0.13\pm 0.02$ & $313\pm 9$ & $0.13\pm 0.02$ & 0.61 \\
& 2010 Jan 18 & 0.29 $\times$ 0.13, 2 & 1.01 & & $0.11\pm 0.02$ & $0.10\pm 0.02$ & $262\pm 3$ & $0.10\pm 0.03$ & 0.39 \\
& 2010 Apr 8 & 0.26 $\times$ 0.13, $-3$ & 0.91 & & $0.13\pm 0.02$ & $0.11\pm 0.02$ & $354\pm 4$ & $0.13\pm 0.03$ & 0.57 \\
& 2010 Apr 18 & 0.26 $\times$ 0.13, $-8$ & 1.12 & & $0.13\pm 0.01$ & $0.11\pm 0.01$ & $358\pm 3$ & $0.13\pm 0.01$ & 0.63 \\
& 2010 May 1 & 0.29 $\times$ 0.14, $-6$ & 0.89 & & $0.11\pm 0.02$ & $0.11\pm 0.01$ & $318\pm 19$& $0.11\pm 0.02$ & 0.49 \\
& 2010 May 15 & 0.29 $\times$ 0.14, $-9$ & 0.95 & & $0.12\pm 0.01$ & $0.10\pm 0.02$ & $254\pm 6$ & $0.11\pm 0.02$ & 0.48 \\
& 2010 May 30 & 0.30 $\times$ 0.15, $-6$ & 1.07 & & $0.13\pm 0.01$ & $0.09\pm 0.01$ & $257\pm 8$ & $0.10\pm 0.02$ & 0.42 \\\cline{5-10}
& & & & Ave. & 0.13 & 0.11 & 280 & 0.11 & 0.51 \\
& & & & & & & & & \\
86 & 2007 Feb 18 & 0.25 $\times$ 0.08, $-18$ & 6.47 & & $0.079\pm 0.021$ & $0.065\pm 0.023$ & $52\pm 23$ & $0.074\pm 0.032$ & 0.64 \\
\hline
\end{tabular}
}
\medskip
\end{minipage}
\label{tab:tab1} Note: (a) uniformly-weighted beam size; (b) image noise level;
(c),(d),(e) FWHM sizes of major/minor axes and position angle of derived model;
(f) projected FWHM size of the model in $\rm{P.A.}=20^{\circ}$; (g) ratio of the
Gaussian size divided by the beam size in $\rm{P.A.}=20^{\circ}$.
\end{table*}
\begin{figure}[htbp]
\centering
\includegraphics[angle=0,width=1.0\columnwidth]{fig01.eps}
\caption{Uniformly-weighted, averaged VLBA image of M87 at 43~GHz. Contours
start from the 3$\sigma$ image rms level and increase by factors of 1.4.}
\label{fig:image}
\end{figure}
\section{Observations and Data Reduction}
We observed M87 with VLBA at 2, 5, 8.4, 15, 23.8 and 43~GHz on 2010 April 8 and
18. These are the same data presented in H11, where we investigated the core shift
of M87 using the phase-referencing technique relative to the nearby radio source
M84. Details of the observations and the data reduction processes are described
in H11.
To better constrain the averaged multi-frequency properties of the core and the
inner jet, we also analyzed VLBA archival data at 8.4, 15, 23.8 and 43~GHz. We
selected data observed after 2009 with sufficiently good quality (all 10
stations participating, good $uv$-coverage, and thus high angular resolution).
Initial data calibration before imaging was performed using the
National Radio Astronomy Observatory (NRAO) Astronomical Image Processing System
(AIPS) based on the standard VLBI data reduction procedures. These data were not
acquired with phase-referencing mode.
Moreover, we added one VLBA archival data set at 86~GHz, which allows us to probe
the inner jet even closer to the black hole because of its higher transparency and
resolution. While several VLBA observations of M87 have been performed at 86~GHz,
we selected the data observed in 2007 February, because this is the only 86~GHz
data at present for which a reliable self-calibrated image has been published
with VLBA alone (peak-to-noise ratio of the self-calibrated image higher than
$\sim$70), as shown in \citet{rioja2011}. The observation was carried out with 8
stations, without Saint Croix and Hancock. We analyzed these data following the
procedures described in their paper.
Images were created with the DIFMAP software using iterative phase/amplitude
self-calibration. We used a uniform weighting scheme to produce high-resolution
images.
\section{Results}
The M87 jet was clearly detected for all of the analyzed data. In Figure~1 we show
a representative image of M87 at 43~GHz, which was made by stacking the seven sets
of data. We confirmed that the jet is characterized by a compact core with an
edge-brightened structure directed at an overall position angle of
P.A.$\sim$290$^{\circ}$.
\subsection{Model fitting on the core region}
In the present study, we aim at measuring the width of the innermost jet. For this
purpose, we fitted a single elliptical Gaussian model to each image with the AIPS
task JMFIT, and derived deconvolved parameters of the core region. Note that the
derived Gaussian size in this simple modeling could yield a larger size than that
of the true core (i.e., optical depth $\sim1$ surface) in the jet propagation
direction, because of blending of the optically-thin jet part. Also in the
transverse direction to the jet axis, this method would give a total size of the
true core plus surrounding emission, if the core region has a sub-structure that
is not resolved by VLBA baselines in this direction~\citep{dodson2006}. However,
here we are interested in measuring precisely this entire width of the innermost jet.
The results are summarized in Table~\ref{tab:tab1}. Most of the derived values
(especially for the minor axes) are smaller than the beam size for each
data set. Nonetheless, it is known that such sizes are measurable when the emitting
region is sufficiently bright and robust self-calibration using as many as 10 VLBA
stations (8 at 86~GHz) can calibrate the fringe visibilities accurately. For the
M87 core, the derived sizes along the minor axes at 5, 8.4, 15, 23.8, 43 and
86~GHz correspond to amplitude decreases of 15\%, 18\%, 25\%, 30\%, 33\% and 30\%
at $\sim$60\% of the longest baseline (80M$\lambda$, 140M$\lambda$, 240M$\lambda$,
380M$\lambda$, 700M$\lambda$ and 1100M$\lambda$ respectively, which yield
effective angular resolutions in ${\rm P.A.=20^{\circ}}$). These decreases are
sufficiently larger than a typical VLBA amplitude calibration accuracy of
$\sim$5\% \citep[analogous discussion is presented in][]{kellermann1998}. At
86~GHz, previous Global-Millimeter-VLBI-Array observations report similar sizes to
the present value in Table~\ref{tab:tab1} \citep[$\lesssim 50$~$\mu$as or
$99\pm21$~$\mu$as;][respectively]{krichbaum2006, lee2008}.
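The percentages quoted above follow from the standard visibility amplitude of a circular Gaussian component (a textbook relation, restated here for convenience): a component of FWHM $\theta$ observed on a projected baseline of length $u$ (in wavelengths) has
\[
V(u) = \exp \left[ - \frac{(\pi \theta u)^2}{4 \ln 2} \right] ~,
\]
so that, e.g., $\theta = 0.57$~mas at $u = 80$~M$\lambda$ gives $V \simeq 0.84$, i.e., a $\sim$16\% amplitude decrease, in line with the $\sim$15\% quoted for the 5-GHz minor axis.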
We estimated the standard errors of the derived Gaussian parameters for each data set
as follows. Formally, statistical size uncertainties purely based on
SNR~\citep[size of the fitted model divided by its peak-to-noise ratio;
e.g.,][]{fomalont1999} result in very small values for M87 (a level of a few
$\mu$as or smaller) because the core is bright at each frequency (peak-to-noise
$>70\sim1000$). In practice, however, the model parameters are more strongly
affected by imperfect CLEAN/deconvolution processes under limited
$uv$-coverages. We therefore divided each individual data set into three subsets with
equal integration time, and repeated deconvolution and JMFIT processes for each
data individually. Through this procedure, we obtained three sets of
quasi-independent fitting results for each epoch, and the rms scatters can be
calculated for each parameter (i.e., major/minor axes and P.A.). These scatters
are adopted as realistic errors for each model parameter in Table~\ref{tab:tab1}.
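For illustration, the bookkeeping of this subset-based error estimate reduces to a few lines of Python (the numbers below are made up and serve only to show the computation, not actual JMFIT output):
\begin{verbatim}
import numpy as np

# hypothetical JMFIT results (theta_maj, theta_min, P.A.) from the
# three equal-time subsets of one epoch; purely illustrative values
subset_fits = np.array([
    [0.46, 0.26, 297.0],
    [0.44, 0.29, 300.0],
    [0.45, 0.28, 303.0],
])

# sample standard deviation of each parameter across the subsets,
# adopted as the realistic 1-sigma error of that parameter
param_errors = subset_fits.std(axis=0, ddof=1)
print(param_errors)
\end{verbatim}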
As an additional check, we conducted a fake visibility analysis to examine how
precisely sizes smaller than beam sizes can be recovered~\citep[a similar analysis
is described in][]{taylor2004}; using the AIPS task UVCON, we created fake
visibilities which are equivalent to the derived Gaussian parameters at each
frequency with similar $uv$-coverages of the actual observations. We produced 10
independent visibility data sets at each frequency by adding random noises at a
level seen in the actual observations, repeated CLEAN/deconvolution and JMFIT on
each data, and calculated rms scatters of the recovered model parameters at each
frequency. We confirmed that these scatters were smaller than one-third of the
quoted errors in Table~\ref{tab:tab1} at frequencies smaller than 43~GHz, and less
than one-half at 86~GHz.
We note that, for the minor axis of the 2-GHz core only, JMFIT derived a
very small size (less than 15\% of the beam size), indicating only a marginal
constraint. In Table~\ref{tab:tab1} we therefore instead set one-fifth of the beam
size as an upper limit, which corresponds to a $\sim$5\% amplitude decrease at the
longest baseline in this direction.
\begin{figure}[bbb]
\centering \includegraphics[angle=0,width=1.0\columnwidth]{fig02.eps}
\caption{Gaussian sizes for the core region as a function of frequency. Sets of
filled/open rectangle indicates projected sizes in ${\rm
P.A.}=20^{\circ}/290^{\circ}$. Two lines are power-law fit to each distribution
using 8.4, 15, 23.8 and 43~GHz data. A gray rectangle indicates the derived
core size at 230~GHz~\citep{doeleman2012}. } \label{fig:coresize}
\end{figure}
\begin{figure*}[htbp]
\centering \includegraphics[angle=0,width=0.75\textwidth]{fig03.eps}
\caption{Jet width profile of M87 as a function of distance. A jet viewing
angle of $i=15^{\circ}$ and a black hole mass of $M_{\rm
BH}=6.0\times10^9M_{\odot}$ are adopted. Densely-sampled part indicates the
width profile of the edge-brightened jet measured at multiple frequency with
errors. Discretely-sampled part indicates the FWHM size profile for the core
region projected in P.A=20$^{\circ}$ ($W_{\rm c}$). The core positions and the
errors are estimated based on the previous astrometry
result~\citep{hada2011}. A solid line indicates the best fit solution for the
edge-brightened jet width $W_{\rm j}\propto r^{0.56}$, while a dashed line
indicates $r^{0.76}$. The grey/black rectangles at the bottom left corner of
the figure represent surfaces (event horizons) for
non-spinning/maximally-spinning black holes, respectively.}
\label{fig:jetwidth}
\end{figure*}
In Figure~\ref{fig:coresize}, we show a Gaussian size projected along ${\rm
P.A.=20^{\circ}}$ as a function of frequency (hereafter denoted $W_{\rm
c}$)\footnote{Here $W_{\rm c}$ is defined as the FWHM of the derived elliptical
Gaussians when sliced along a position angle of $20^{\circ}$, i.e., $W_{\rm c}
\equiv \left[\frac{\theta_{\rm maj}^2 \theta_{\rm min}^2 \left(1 + \tan^2 (20^{\circ} -
{\rm P.A.})\right)}{\theta_{\rm min}^2 + \theta_{\rm maj}^2 \tan^2 (20^{\circ} - {\rm
P.A.})}\right]^{1/2}$, where $\theta_{\rm maj}, \theta_{\rm min}$ and
P.A. indicate values in columns (c), (d) and (e) of Table~1. The errors in this
direction, shown in column (f) of Table~\ref{tab:tab1}, are calculated from the usual
error propagation analysis using the errors of $\theta_{\rm maj}, \theta_{\rm min}$
and P.A.}. This projection is perpendicular to the overall jet axis, corresponding
to a direction of the jet width. We found $W_{\rm c}$ to be clearly frequency
dependent, becoming smaller as frequency increases. To determine an averaged
frequency dependence of $W_{\rm c}$, we fitted a power law function to this plot
using the data at 5, 8.4, 15, 23.8, 43 and 86~GHz (2~GHz data are excluded because
of upper limits). We found the best-fit solution to be $W_{\rm c}(\nu) \propto
\nu^{\xi}$ where $\xi = -0.71\pm0.05$. Interestingly, when this frequency dependence
is extrapolated toward higher frequencies, the predicted size appears to be
similar to the size measured by the recent 230~GHz VLBI
experiment~\citep{doeleman2012}, which was determined with a circular Gaussian
fit. For reference, we also show the Gaussian size distribution projected along the
jet propagation direction, but this does not seem to fit the 230~GHz core size.
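The projection and the power-law fit are straightforward to reproduce; the following Python sketch (our own illustrative code, using the averaged sizes from Table~\ref{tab:tab1}) computes $W_{\rm c}$ from the fitted Gaussian parameters and the frequency dependence from the averaged values:
\begin{verbatim}
import numpy as np

def w_c(theta_maj, theta_min, pa_deg, slice_pa_deg=20.0):
    # FWHM of an elliptical Gaussian sliced along slice_pa_deg
    # (the formula given in the footnote)
    t = np.tan(np.radians(slice_pa_deg - pa_deg))
    num = theta_maj**2 * theta_min**2 * (1.0 + t**2)
    den = theta_min**2 + theta_maj**2 * t**2
    return np.sqrt(num / den)

# averaged core sizes W_c (mas) and frequencies (GHz) from Table 1
nu = np.array([5.0, 8.4, 15.0, 23.8, 43.0, 86.0])
wc = np.array([0.57, 0.40, 0.26, 0.20, 0.11, 0.074])

# unweighted log-log fit: W_c ~ nu^xi
xi, lognorm = np.polyfit(np.log(nu), np.log(wc), 1)
print(xi)  # ~ -0.73; the error-weighted fit in the text gives -0.71 +/- 0.05
\end{verbatim}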
\subsection{Jet width measurements}
In Figure~\ref{fig:jetwidth}, we show the radial profile of the M87 jet width as a
function of (de-projected) distance along jet \citep[assuming a jet inclination
angle $i=15^{\circ}$;][]{biretta1999, perlman2011}. Here we investigated the jet
width profile in the following two procedures: (1) measurements of $W_{\rm j}(r)$;
for the region where the jet is clearly resolved into a two-humped shape at each
frequency, we made transverse slices of the jet every 0.01$\sim$0.5~mas distance
along the jet, and each two-humped structure was fitted by a
double-Gaussian function. We then defined the separation between the outer sides
of the half-maximum points of the two Gaussians as the jet width. When the jet at a
frequency becomes single-humped toward the upstream region, we measured the jet
width using higher frequency images because the jet is clearly resolved again into
a two-humped shape at higher resolutions. By using the images between 2 and
43~GHz, such a measurement was repeated over a distance of $\sim$10$^{5}$~$R_{\rm
s}$ down to $\sim$100~$R_{\rm s}$ along the jet. This process is basically the
same as that used in AN12, but we exclude measurements for the single-humped
region. We aligned radial positions of $W_{\rm j}(r)$ profiles between different
frequencies by adding the amount of core shift measured in H11 (described
below). This amount and the associated position error at each frequency provide
only tiny fractions of the distance where $W_{\rm j}(r)$ was measured at each
frequency (a level of $10^{-1\sim-2}$ at 43~GHz to $10^{-3}$ at 2~GHz), so
horizontal error bars for $W_{\rm j}(r)$ are removed in
Figure~\ref{fig:jetwidth}. At 86~GHz, we could not perform reliable measurements
of the jet width because the edge-brightened jet was only marginally imaged at a
level of (2--3)$\sigma$. (2) measurements of $W_{\rm c}(r)$; closer to the central
engine, we further constructed a radial distribution of $W_{\rm c}$. Because our
previous astrometric study H11 measured locations of the cores as 41, 69, 109,
188, 295 and 674~$\mu$as at 43, 23.8, 15, 8.4, 5 and 2~GHz from the convergence
point of the core shift (in the R.A. direction, or P.A.$=270^{\circ}$), we can set their
de-projected distances along the jet (P.A.$=290^{\circ}$) as 24, 40, 63, 108, 170
and 388~$R_{\rm s}$ for $i=15^{\circ}$, respectively. For 86/230~GHz cores, we can
also set their de-projected positions as 12 and 5~$R_{\rm s}$ by assuming the same
asymptotic relation ($r_{\rm c}\propto\nu^{-0.94}$; H11) for the upstream of the
43~GHz core. Here we assume that the central engine is located at the convergence
point of the core shift specified in H11.
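As a worked example of this conversion (our own arithmetic with the numbers above), the 43-GHz core shift of 41~$\mu$as, measured along the R.A. direction, is first projected onto the jet axis at P.A.$=290^{\circ}$ and then de-projected with $i=15^{\circ}$:
\[
r_{43} \simeq \frac{41~\mu{\rm as}}{\cos 20^{\circ} \, \sin 15^{\circ}} \simeq 169~\mu{\rm as} \simeq 24~R_{\rm s} ~,
\]
where we used the projected scale of 1~mas~$\simeq 140~R_{\rm s}$.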
We confirmed that the edge-brightened region is well-expressed as a parabolic
structure of $W_{\rm j}(r)\propto r^{0.56\pm0.03}$ (solid line in Figure~3). This
is in good agreement with the value $r^{0.58\pm0.02}$ found by AN12. For the region
around $\sim$100~$R_{\rm s}$, where the independent measurements of $W_{\rm j}$
and $W_{\rm c}$ overlap each other, $W_{\rm c}$ at 5 and 8.4~GHz smoothly
connects with $W_{\rm j}$ at 43~GHz. The combination of the core size ($W_{\rm
c}\propto \nu^{\xi}$ where $\xi=-0.71\pm0.05$) and core position ($r\propto
\nu^{\alpha}$ where $\alpha={-0.94\pm0.09}$; H11) yields a radial dependence of
$W_{\rm c}$ as $W_{\rm c}(r) \propto r^{\frac{\xi}{\alpha}} = r^{0.76\pm0.13}$
(dashed line in Figure~3), which is slightly steeper than that of the outer jet
$W_{\rm j}(r)$, although the uncertainty is still large. In the present result, it
is still difficult to distinguish whether $W_{\rm c}$ at 5, 8.4, 15 and 23.8~GHz
are on the solid line or the dashed line due to their position uncertainties in
addition to those of the sizes. On the other hand, the values of $W_{\rm c}$ at 43 and
86~GHz tend to lie below the solid line. At 230~GHz, the exact profile cannot be
discriminated either, because the data point is totally dominated by its position uncertainty.
We note that the two methods used for $W_{\rm j}(r)$ (a double-Gaussian fit on
each slice image) and $W_{\rm c}(r)$ (based on a single two-dimensional elliptical
Gaussian model on each 2-D image) are different from each other. Nevertheless, the
values obtained with the two methods were confirmed to be consistent in the
overlap region, indicating that $W_{\rm c}$ is indeed a good tracer of the
width of the innermost jet region.
\section{Discussion}
Probing the collimation profile of the M87 jet is crucial to understand the
formation processes of relativistic jets. Although the actual energetics of the
M87 jet is still under debate~\citep[e.g.,][]{abdo2009}, here we focus on the
framework of magnetohydrodynamic (MHD) jets, because it is widely explored as the
most successful scenario for jet production.
\subsection{$r\gtrsim100~R_{\rm s}$ Region}
Theoretically, the shape of a magnetized jet is determined by the detailed force
balance across the poloidal field lines. This is described as the trans-field
equation, which was originally derived by \citet{okamoto1975} for steady,
axisymmetric, rotating magnetized flow in a gravitational field without external
pressure. Later, it was proposed that magnetic hoop stresses associated with
toroidal field lines play a major role in realizing the global collimation of a
magnetized jet~\citep[]{bp1982, heyvaerts1989, sakurai1985, chiueh1991}.
Here we observationally confirmed that the M87 jet is well characterized as a
parabolic collimation profile between $\sim$100 and $\sim$10$^{5}~R_{\rm s}$. This
is consistent with the prior work by AN12, where they also found a transition into
a conical shape above $\sim$$10^{6}~R_{\rm s}$. Regarding formation of a parabolic
shape, recent theoretical studies indicate the importance of external pressure
support at the jet boundary. \citet{okamoto1999} analytically showed that hoop
stresses alone would not realize global collimation, and numerical studies also
support this picture~\citep[e.g.,][]{nakamura2006, komissarov2007,
komissarov2009, toma2013}.
\citet{komissarov2009} shows that when the external gas
pressure follows $p_{\rm ext}\propto r^{-a}$ with $a \lesssim 2$, the jet
remains parabolic with $W_{\rm j}\propto r^{a/4}$, whereas for $a > 2$ the jet
eventually becomes conical due to insufficient external support. If the observed
radio emission of M87 traces the exterior edge of a magnetized jet, the measured
width profile suggests $a \sim 2$. As a source of such a confining medium, AN12
propose an interstellar medium bound by the gravitational influence of the
central black hole, such as a giant ADAF~\citep{narayan2011}. We note in addition
that a purely hydrodynamic (HD) jet could also produce the gradual parabolic
collimation of M87~\citep[e.g.,][]{bromberg2009}.
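For reference, simply inverting the relation $W_{\rm j}\propto r^{a/4}$ of \citet{komissarov2009} with our measured exponent gives $a = 4\times(0.56\pm0.03) \simeq 2.2\pm0.1$, i.e., close to the critical value $a=2$ quoted above.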
Interestingly, at the end of the collimation region, both AN12 and
\citet{bromberg2009} suggest HST-1 (a peculiar knot at a deprojected distance
$\gtrsim$120~pc or $2\times10^5R_{\rm s}$) as a stationary shocked feature
resulting from overcollimation of the inner jet, a scenario originally proposed by
\citet{stawarz2006} and \citet{cheung2007} to explain the broadband flaring
activity. While our recent kimematic observations of HST-1 shows clear evidence of
superluminal motions of the overall structure, a weak, slowly-moving feature is
also found in the upstream of HST-1~\citep{giroletti2012}, which indeed could be
related to a recollimation shock.
\begin{figure}[ttt]
\centering \includegraphics[angle=0,width=1.01\columnwidth]{fig04.eps}
\caption{The same image as Fig.~1 but convolved with a circular Gaussian beam
of 0.14~mas diameter, yielding roughly twice as high a resolution in the north-south
direction. Contours in the inner region start at the 20$\sigma$ image rms level and
increase by factors of 1.4. The length bars correspond to the projected scale.}
\label{fig:m87q_sr}
\end{figure}
\subsection{$r\lesssim100~R_{\rm s}$ Region}
For the first time, we have revealed a detailed collimation profile down to $r\sim
10~R_{\rm s}$ by investigating the multi-frequency properties of the radio
core. An intriguing point here is that the measured collimation profile suggests a
possibly wider jet opening angle between $\sim$10 and $\sim$100~$R_{\rm
s}$ from the central engine.
Since the two methods of our width measurements are switched around
$r\sim100~R_{\rm s}$, one could speculate that the profile change is related to
some systematic effect due to the different methods. To check the
two-humped jet shape close to the core ($r\lesssim 100~R_{\rm s}$) more
clearly, we therefore created a 43-GHz image convolved with a slightly higher resolution (a
circular beam of 0.14~mas diameter). The image is shown in Figure 4. Above
$\sim$0.4~mas downstream of the core (de-projected distance $\sim$220~$R_{\rm s}$
for $i=15^{\circ}$), where the jet is parabolic on logarithmic distance, the two
ridges are already oriented into a similar direction (dark-blue region in Figure
4). On the other hand, the opening angle made by the two ridges appears to broaden
more rapidly within $\sim$0.3~mas of the core (the region with contours in Figure
4), resulting in a more radially-oriented structure near the base. Such a tendency
is consistent with the observed possible transition from the solid line to the
(steeper) dashed line around $r\sim100~R_{\rm s}$ in Figure 3.
A transition of jet collimation profile near the black hole is actually suggested
by several theoretical considerations. In the framework of relativistic MHD jet models,
most of the energy conversion from magnetic-to-kinetic occurs after the flow
passes through the fast-magnetosonic point~\citep[``magnetic nozzle''
effect;][]{li1992, vlahakis2003}. Beyond this point, the magnetized jet starts to
be collimated asymptotically into a parabolic shape, because the increasing plasma
inertia winds the field lines into toroidal directions and thus amplifies the hoop
stresses~\citep[e.g.,][]{tomimatsu2003}. The radius of the fast point is typically
a few times the light cylinder radius $R_{\rm lc}$~\citep{li1992}, where $R_{\rm
lc}$ is of the order of (1$\sim$5)~$R_{\rm s}$~\citep{komissarov2007}. Thus, if
the M87 jet is magnetically launched, the observed possible transition of the jet
shape around $10\sim100~R_{\rm s}$ could be explained by this process. Moreover,
the jet in this scale is likely to have complicated interactions with surrounding
medium such as accretion flow, corona and disk
wind~\citep[e.g.,][]{mckinney2006}. Their geometries and the local pressure
balance at the jet boundary would affect the initial jet shape. Alternatively,
such a change of the jet shape could happen as an apparent effect due to
projection, if the jet inclination angle is not constant down to the black
hole~\citep{mckinney2013}.
It is interesting to note that a time-averaged dependence $\nu^{-0.71}$ of $W_{\rm
c}$ appears to connect with the 230~GHz core size. However, this apparent
connection should be interpreted with caution; the $uv$-coverage used in
\citet{doeleman2012} yields the highest ($\sim$3000M$\lambda$) angular resolution
\textit{along} the jet direction for M87, while the projected baselines
transverse to the jet are $\sim$5 times shorter. Thus the derived size of $\sim$40~$\mu$as from
their circular Gaussian fit could be weighted more toward the structure along the jet,
unless the brightness pattern of the jet base (when projected toward us) is
actually close to a circular shape. To clarify the exact relationship of the core
size at the higher frequency side, the addition of north-south baselines in the
future Event-Horizon-Telescope array is crucial, which can be realized by
including Chilean stations such as ASTE, APEX and ALMA~\citep{broderick2009,
doeleman2009}.
The results presented here shed new light on crucial issues which
should be addressed more rigorously in future observations: where is the exact
location of the profile change between $\sim$10 and $\sim$100~$R_{\rm s}$ and how
does it change (e.g., a sharp break or a gradual transition)? In addition,
simultaneous observations at multiple frequencies are important; the dynamical
timescale of our target region is of the order of $t_{\rm dyn}\sim 10~R_{\rm s}/c \sim
7$~days, so the jet structure (size and position of the core) can be variable on
this timescale~\citep[e.g.,][]{acciari2009}. To address these issues, we are
currently conducting new high-sensitivity VLBA observations at 86~GHz in
combination with quasi-simultaneous sessions at lower frequencies, which allows
more robust investigations of the jet structure within $100~R_{\rm s}$ and thus
tests of some specific models more quantitatively. Finally, we also stress the
significance of future imaging opportunities with RadioAstron at 22~GHz or
global-VLBI at 43/86~GHz including ALMA baselines, because these will provide
images at drastically improved resolutions, especially in the direction transverse
to the M87 jet.
\acknowledgments
We acknowledge the anonymous referee for his/her careful review and suggestions
for improving the paper. We thank K.~Asada and M.~Nakamura for valuable
discussion. We are also grateful to M.~Takahashi, A.~Tomimatsu, M.~Rioja,
R.~Dodson, Y.~Asaki, S.~Mineshige, S.~Kameno, K.~Sekimoto, T.~Tatematsu and
M.~Inoue for useful comments. The Very-Long-Baseline-Array is operated by National
Radio Astronomy Observatory, a facility of the National Science Foundation,
operated under cooperative agreement by Associated Universities, Inc. This work
made use of the Swinburne University of Technology software
correlator~\citep{deller2011}, developed as part of the Australian Major National
Research Facilities Programme and operated under license. This research has made
use of data from the MOJAVE database that is maintained by the MOJAVE
team~\citep{lister2009}. This work was partially supported by KAKENHI (24340042
and 24540240). Part of this work was done with the contribution of the Italian
Ministry of Foreign Affairs and University and Research for the collaboration
project between Italy and Japan. KH is supported by Canon Foundation between April
2012 and March 2013, and by the Research Fellowship from the Japan Society for the
Promotion of Science (JSPS) from April 2013.
\section{Introduction}
In quantum systems with purely unitary evolutions, the total energy is a conserved quantity, since the corresponding Hamiltonian always commutes with itself. However, this argument does not apply to systems with Hamiltonians which are self-adjoint but not Hermitian \cite{Kurcz_NJP} and to open quantum systems \cite{Bartana}. The spontaneous emission of a photon is always related to the loss of energy from its source \cite{Hegerfeldt93,Dalibard,Carmichael}. For example, in laser sideband cooling, a red-detuned laser field excites an electronic state of a strongly confined ion via the annihilation of a phonon from its motion. When followed by spontaneous photon emission, the phonon is permanently lost \cite{Wineland}. Contrary to common belief, we show here that even un-excited and un-driven quantum systems might constantly leak energy into their environment. The predicted effect originates from non-zero decay rates and from the counter-rotating terms in the interaction Hamiltonian, which are usually neglected as part of the rotating wave approximation (RWA).
Although the validity of this approximation has been questioned in the past \cite{agarwal,knight}, it is commonly used to describe quantum optical systems. An exception is Hegerfeldt \cite{hegerfeldt2}, who shows that the counter-rotating terms in the interaction between two atoms and the free radiation field can result in a small violation of Einstein's causality. Zheng {\em et al.}~\cite{Zubairy} also avoided the RWA and predicted corrections to the spontaneous decay rate of a single atom at very short times. Recently, Werlang {\em et al.}~\cite{Werlang} pointed out that it might be possible to obtain photons by simply placing an atom inside an optical cavity, but no quantitative predictions were made and no justification for the relevant master equation was given. When the RWA is avoided, it has to be avoided everywhere, including in the system-bath interaction.
This paper contains a rigorous derivation of the quantum optical master equation for bosonic systems which uses only the Born and the dipole approximation. We apply our results to individual quantum systems (trapped atoms and optical cavities), and to a composite quantum system consisting of many atoms inside an optical resonator. It is shown that for sufficiently large numbers of atoms inside the cavity, the stationary state photon emission rate can be as large as typical detector dark count rates. Its parameter dependence could be verified experimentally using currently available atom-cavity systems \cite{Trupke,Reichel,Esslinger}. Our calculations confirm the relevance of the effects predicted in \cite{Werlang} when the atom-cavity interactions are collectively enhanced \cite{Holstein,Shah}. Similar energy concentrating effects might contribute significantly to the sudden heating in sonoluminescence experiments \cite{SL} and are responsible for temperature limits in cooling experiments \cite{review,cool}.
There are five sections in this paper. Section \ref{Model} contains a rigorous derivation of the master equation of a single bosonic quantum system beyond the validity range of the RWA. Section \ref{single} calculates the corresponding stationary photon emission rate. Assuming that the photon emission in the absence of external driving remains negligible in this case, we find that two of the constants in this master equation are approximately zero. Section \ref{comp} uses an analogously derived master equation to calculate the stationary state cavity photon emission rate for a composite quantum system consisting of many atoms inside an optical resonator. For feasible experimental parameters, we predict stationary state emission rates as high as $300$ photons per second. A detailed discussion of our results can be found in Section \ref{conc}.
\section{Master equation of a single quantum system beyond the RWA} \label{Model}
Let us begin by studying individual quantum systems, like optical cavities and tightly confined atoms, with the ability to emit photons. Their Hamiltonian $H$ in the Schr\"odinger picture and in the Born and the dipole approximation can be written as $H = H_0 + H_{\rm int}$ with \footnote{This form of $H_{\rm int}$ avoids fixing phase factors of states.}
\begin{eqnarray} \label{H}
H_0 &=& \hbar \omega ~s^+ s^- + \sum _{{\bf k},\lambda} \hbar \omega _k ~ a ^\dagger _{{\bf k} \lambda} a _{{\bf k} \lambda}~, \nonumber \\
H_{\rm int} &=& \sum_{{\bf k}, \lambda} \hbar ~ \big( g_{{\bf k} \lambda} ~ a _{{\bf k} \lambda} + \tilde g_{{\bf k} \lambda} ~ a _{{\bf k} \lambda}^\dagger \big) ~ s^+ + {\rm h.c.}
\end{eqnarray}
and $|\tilde g_{{\bf k} \lambda} | = |g_{{\bf k} \lambda}|$. The free radiation field consists of an infinite number of one-dimensional harmonic oscillators with wave vectors ${\bf k}$, frequencies $\omega_k$, polarisations $\lambda$, annihilation operators $a_{{\bf k} \lambda}$, and coupling constants $g _{{\bf k} \lambda}$ and $\tilde g _{{\bf k} \lambda}$. In the case of an optical cavity, $\omega \equiv \omega_{\rm c}$ is the frequency of its field mode and the $s^\pm$ are the photon creation and annihilation boson operators $c^\dagger$ and $c$:
\begin{equation} \label{bose}
\left [ c, c^\dagger \right] =1 ~, ~~ \left [ c^\dagger, c^\dagger \right] = 0 = \left [ c, c \right] ~.
\end{equation}
In the case of a large number of tightly confined two-level atoms with states $|0 \rangle $ and $| 1 \rangle $, $\hbar \omega \equiv \hbar \omega_0$ is the energy of the excited state $|1 \rangle$ of a single atom and the $s^\pm$ are the collective raising and lowering operators $S^\pm$ with
\begin{equation} \label{Dicke4}
\left [ S^-, S^+ \right ] = 1 ~, ~~ \left [ S^+, S^+ \right] = 0 = \left [ S^-, S^- \right] ~,
\end{equation}
as we shall see in the next paragraph.
Suppose the atoms are confined in a region with linear dimensions that are much smaller than the wavelength of the emitted light. Then $|{\bf k} \cdot ( {\bf r}_j - {\bf r}_i )| \ll 1$ for most particle positions ${\bf r}_i$ and ${\bf r}_j$ and for a wide range of wave vectors ${\bf k}$. This implies that all particles experience approximately the same $g _{{\bf k} \lambda}$ and $\tilde g _{{\bf k} \lambda}$. The Hamiltonian $H$ therefore remains the same if we replace $s^+s^-$ in $H_0$ by $\sigma_3$ and $s^\pm$ in $H_{\rm int}$ by $\sigma^\pm$. Here $\sigma^\pm$ and $\sigma_3$ are defined as
\begin{eqnarray} \label{sigma}
\sigma^{\pm} \equiv \sum _{i=1} ^N \sigma _i ^{\pm} ~, ~~
\sigma _3 \equiv \sum_{i=1} ^N \sigma _{3i}
\end{eqnarray}
with $\sigma _{3i} = {1 \over 2} \left ( | 1 \rangle_{ii} \langle 1 |-| 0 \rangle_{ii} \langle 0 | \right )$, $\sigma_i^+ = | 1 \rangle_{ii} \langle 0 |$ and $\sigma_i ^- = | 0 \rangle_{ii} \langle 1 |$ being the su(2) spin-like operators of atom $i$:
\begin{equation} \label{su2}
\left [ \sigma _{3i} , \sigma_i ^{\pm} \right ] = \pm \sigma_i ^ {\pm} \, , ~ \left [ \sigma_i ^- , \sigma_i ^+\right] = - 2 \sigma _{3i} ~ .
\end{equation}
If the atoms are initially all in their ground state, they evolve under the
action of the operators (\ref{sigma}) into the Dicke-symmetric states:
\begin{eqnarray}
| l \rangle _{\rm p} &\equiv& \left [ | 0_1 0_2 0_3 \dots 0_{N-l} 1_{N-l+1} 1_{N-l+2} \dots 1 \rangle + \dots \right . \nonumber\\
&& \left . + | 1_1 1_2 \dots 1_l 0_{l+1} 0_{l+2} \dots 0 \rangle \right ] / \left (
\begin{array}{c} N \\ l \end{array} \right )^{1/2}
\end{eqnarray}
which are the eigenstates of $\sigma_3$. The difference between excited and unexcited particles is counted by $\sigma _3$, since $_{\rm p} \langle l | \sigma _3 | l \rangle _{\rm p} = l - {1 \over 2} N$. For any $l$ we have \cite{cool}:
\begin{eqnarray}
\sigma ^+ ~ | l \rangle _{\rm p} &=& \sqrt{l+1} \sqrt{N-l} ~ | l + 1 \rangle _{\rm p} ~ , \nonumber \\ \label{Dicke1}
\sigma ^- ~ | l \rangle _{\rm p} &=& \sqrt{N-(l-1)} \sqrt{l} ~ | l - 1 \rangle _{\rm p} ~ .
\end{eqnarray}
This shows that $\sigma ^{\pm}$ and $\sigma _3$ are represented on $| l
\rangle_{\rm p}$ by the Holstein-Primakoff non-linear boson realization
\cite{Holstein,Shah} $\sigma ^+ = \sqrt{N} S ^+ A_s$, $\sigma ^- = \sqrt{N}
A_s S^- $ with $ \sigma _3 = S^+ S^- - {1 \over 2} N$, $ A_s = \sqrt{1-S^+ S^-
/N}$, $S^+ | l \rangle _{\rm p} = \sqrt{l+1} | l + 1 \rangle _{\rm p}$, and
$S^- | l \rangle _{\rm p} = \sqrt{l} | l - 1 \rangle _{\rm p}$ for any
$l$. The $\sigma$'s still satisfy the su(2) algebra (\ref{su2}). However, for
$N \gg l$, (\ref{Dicke1}) becomes
\begin{equation}
\sigma ^{\pm} ~ |l \rangle _{\rm p} = \sqrt{N} S^{\pm} ~ | l \rangle _{\rm p}
~.
\end{equation}
Consequently, in the large $N$ limit, (\ref{su2}) contracts to the projective algebra e(2) \cite{Shah}
\begin{equation} \label{Dicke3}
\left[ S_3, S^{\pm} \right ] = \pm S^{\pm} ~, ~~ \left [ S^-, S^+ \right ] = 1 ~,
\end{equation}
in terms of $S^{\pm}$ and $S_3 \equiv \sigma_3$. This means that the $s^\pm$ operators in the cavity case and in the many-atom case are formally the same (cf.~(\ref{H})--(\ref{Dicke4})).
To derive the master equation of a single system, we assume that its state $|\varphi \rangle$ at $t=0$ is known. Moreover, we notice that spontaneously emitted photons leave at a very high speed and cannot be reabsorbed. The free radiation field is hence initially in a state with only a negligible photon population in the optical regime \cite{Hegerfeldt93,Dalibard,Carmichael}. Denoting this state by $|{\cal O} \rangle$, the (unnormalised) state vector of system and bath equals
\begin{equation} \label{kernel}
|{\cal O} \rangle |\varphi^0_{\rm I} \rangle = |{\cal O} \rangle \langle {\cal O}| ~ U_{\rm I} (\Delta t,0) ~ |{\cal O} \rangle |\varphi \rangle
\end{equation}
under the condition of {\em no} photon emission in $(0,\Delta t)$. In the interaction picture with respect to $H_0$, this equation can be calculated using second order perturbation theory, even when $\Delta t \gg 1/\omega$. Doing so, we find
\begin{equation} \label{kernel2}
|\varphi^0_{\rm I} \rangle = \big[ 1 - A ~ s^+ s^- - B ~ s^- s^+ - C ~ s^{+2} - D ~ s^{-2} \big] ~ |\varphi \rangle
\end{equation}
with
\begin{eqnarray} \label{ABC}
A &=& \int_0^{\Delta t} \!\! {\rm d}t \int_0^t \!\! {\rm d}t' ~ \sum _{{\bf k}, \lambda} g_{{\bf k} \lambda} \tilde g_{{\bf k} \lambda}^* ~ {\rm e}^{{\rm i}(\omega-\omega_k)(t-t')} ~, \nonumber \\
B &=& \int_0^{\Delta t} \!\! {\rm d}t \int_0^t \!\! {\rm d}t' ~ \sum _{{\bf k}, \lambda} g_{{\bf k} \lambda}^* \tilde g_{{\bf k} \lambda} ~ {\rm e}^{-{\rm i}(\omega+\omega_k)(t-t')} ~, \nonumber \\
C &=& \int_0^{\Delta t} \!\! {\rm d}t \int_0^t \!\! {\rm d}t' ~ \sum _{{\bf k}, \lambda} g_{{\bf k} \lambda} \tilde g_{{\bf k} \lambda} ~ {\rm e}^{{\rm i} (\omega -\omega_k) t + {\rm i} (\omega + \omega_k) t'} ~, \nonumber \\
D &=& \int_0^{\Delta t} \!\! {\rm d}t \int_0^t \!\! {\rm d}t' ~ \sum _{{\bf k}, \lambda} g_{{\bf k} \lambda}^* \tilde g_{{\bf k} \lambda}^* ~ {\rm e}^{- {\rm i} (\omega + \omega_k) t - {\rm i} (\omega - \omega_k) t'} ~.~~~~~~
\end{eqnarray}
All four parameters could, in principle, be of first order in $\Delta t$ due to the sum over the infinitely many modes of the free radiation field.
In analogy to (\ref{kernel}), the (unnormalised) density matrix of the system in case of an emission equals
\begin{equation} \label{kernel4}
\rho^>_{\rm I} = {\rm Tr}_{\rm R} \left[ \sum _{{\bf k}, \lambda} a ^\dagger _{{\bf k} \lambda} a _{{\bf k} \lambda}
~ U_{\rm I} (\Delta t,0) ~ \tilde \rho ~ U_{\rm I}^\dagger (\Delta t,0) \right]
\end{equation}
with $\tilde \rho = |{\cal O} \rangle \langle {\cal O}| \otimes \rho$ being the initial state of system and bath. Proceeding as above and using again second order perturbation theory, this yields
\begin{equation} \label{kernel5}
\rho^>_{\rm I} = \tilde A ~ s^- \rho s^+ + \tilde B ~ s^+ \rho s^- + \tilde C ~ s^- \rho s^- + \tilde D ~ s^+ \rho s^+ ~.
\end{equation}
The coefficients $\tilde A$, $\tilde B$, $\tilde C$, and $\tilde D$ are obtained when taking the complex conjugate of the coefficients $A$, $B$, $C$, and $D$ in Eq.~(\ref{ABC}) and extending the integration of the inner integral to $\Delta t$. To obtain relations between these coefficients, we decompose
\begin{equation}
\int_0^{\Delta t} \!\! {\rm d}t \int_0^{\Delta t} \!\! {\rm d}t' ... =
\int_0^{\Delta t} \!\! {\rm d}t \int_0^t \!\! {\rm d}t' ... + \int_0^{\Delta
t} \!\! {\rm d}t \int_t^{\Delta t} \!\! {\rm d}t' ... ~.
\end{equation}
Substituting $u=\Delta t - t$ and $u' = \Delta t - t'$ in the second integral (which maps its area onto that of the first one) we find
\begin{eqnarray} \label{CD}
&& \tilde A = 2 {\rm Re} A ~, ~~ \tilde C = C^* + {\rm e}^{- 2 {\rm i} \omega \Delta t} ~ C ~,~~~ \nonumber \\
&& \tilde B = 2 {\rm Re} B ~, ~~ \tilde D = D^* + {\rm e}^{2 {\rm i} \omega \Delta t} ~ D ~.
\end{eqnarray}
Choosing the overall phase of $C$ accordingly \footnote{This is done by adjusting the phases of the states of the free radiation field which affects the $g_{{\bf k} \lambda}$ and $\tilde g_{{\bf k} \lambda}$ in (\ref{H}).}, the parameters $C$, $D$, $\tilde C$, and $\tilde D$ can hence be written as
\begin{equation}
C = D^* = {1 \over 2}f ~ \gamma_{\rm C} ~, ~~ \tilde C = \tilde D^* = f^* ~
\gamma_{\rm C} ~,
\end{equation}
with
\begin{equation}
f \equiv {\rm e}^{{\rm i} \omega \Delta t} ~ \sin(\omega \Delta t) / \omega
\end{equation}
and with $\gamma_{\rm C}$ being a real but not specified function of $\Delta t$. One can easily check that this notation is consistent with $\tilde D = \tilde C^*$ (cf.~(\ref{ABC})) and with (\ref{CD}).
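For completeness, the consistency with (\ref{CD}) follows from a one-line computation:
\[
C^* + {\rm e}^{-2{\rm i}\omega\Delta t}\,C = {\textstyle{1 \over 2}} \gamma_{\rm C} \left( f^* + {\rm e}^{-2{\rm i}\omega\Delta t} f \right) = \gamma_{\rm C} ~ {\rm e}^{-{\rm i}\omega\Delta t} ~ {\sin(\omega\Delta t) \over \omega} = f^* \gamma_{\rm C} = \tilde C ~,
\]
and analogously for $\tilde D$.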
Averaging over the subensemble with and the subensemble without photon
emission (cf.~(\ref{kernel2}) and (\ref{kernel5})) at $\Delta t$ hence yields
the density matrix
\begin{eqnarray} \label{kernel7}
\rho_{\rm I} (\Delta t) &=& \rho - \big[ \big( A ~ s^+ s^- + B ~ s^- s^+) ~ \rho + {\rm h.c.} \big] \nonumber \\
&& \hspace*{-0.4cm} - {1 \over 2} \gamma_{\rm C} ~ \big[ \big( f ~ s^{+2} + {\rm h.c.} \big) ~ \rho + {\rm h.c.} \big] + 2 {\rm Re} A ~ s^- \rho s^+ \nonumber \\
&& \hspace*{-0.4cm} + 2 {\rm Re} B ~ s^+ \rho s^- + \gamma_{\rm C} ~ \big[ f ~ s^+ \rho s^+ + {\rm h.c.} \big] ~.
\end{eqnarray}
In the following we return to the Schr\"odinger picture, considering a master equation \footnote{This equation is different from the one in
G. S. Agarwal, {\em Quantum Optics}, Springer Tracts of Modern Physics
Vol. {\bf 70} (Springer-Verlag, Berlin 1974).}
\begin{eqnarray} \label{deltarho2}
\dot \rho &=& - {{\rm i} \over \hbar} \left[ H_{\rm cond} \rho - \rho H_{\rm cond}^\dagger \right] + {\cal R}(\rho) ~ , \nonumber \\
{\cal R}(\rho) &=& \gamma_{\rm A} ~ s^- \rho s^+ + \gamma_{\rm B} ~ s^+ \rho s^- + \gamma_{\rm C} ~ \big( s^- \rho s^- + {\rm h.c.} \big) ~ , \nonumber \\
H_{\rm cond} &=&- {{\rm i} \over 2} \hbar \big[ \gamma_{\rm A} ~ s^+ s^- + \gamma_{\rm B} ~ s^- s^+ + \gamma_{\rm C} ~ \big( s^{+2} + {\rm h.c.} \big) \big] \nonumber \\
&& + \hbar {\widetilde \omega} ~ s^+ s^-
\end{eqnarray}
with $\gamma_{\rm A} = 2 {\rm Re} A/\Delta t$, $\gamma_{\rm B} = 2 {\rm Re} B/\Delta t$, and with ${\widetilde \omega}$ being the shifted bare transition frequency. Checking the validity of (\ref{deltarho2}) can be done easily by returning to the interaction picture and integrating $\dot \rho_{\rm I} (t)$ from zero to $\Delta t$. The result is indeed (\ref{kernel7}).
Before continuing, we remark that the master equation (\ref{deltarho2}) has been derived using second order perturbation theory. This means that it applies to quantum optical systems with the ability to spontaneously emit photons. The assumption of a Markovian bath has been avoided. Instead, we assumed rapidly repeated (absorbing) measurements of whether or not a photon has been emitted \cite{Hegerfeldt93,Dalibard,Carmichael}. These measurements constantly reset the free radiation field into $|{\cal O} \rangle$ and make the system dynamics on the coarse-grained time scale $\Delta t$ with $1/\omega \ll \Delta t \ll 1/\gamma$ automatically Markovian. Predictions based on this assumption have already been found to be in good agreement with actual experiments \cite{Toschek,Schoen}.
\section{Photon emission from a single quantum system} \label{single}
Let us continue by calculating the probability density $I_\gamma = {\rm Tr} ({\cal R}(\rho))$ for a photon emission of a system prepared in $\rho$,
\begin{eqnarray} \label{kernel3}
I_\gamma &=& \left \langle \gamma_{\rm A} ~ s^+ s^- + \gamma_{\rm B} ~ s^- s^+ + \gamma_{\rm C} ~ \left( s^{+2} + s^{-2} \right) \right \rangle ~.~~~~
\end{eqnarray}
Using (\ref{bose}) or (\ref{Dicke4}), respectively, and considering the time evolution of the expectation values $\mu_1 \equiv \langle s^+ s^- \rangle$, $\xi_1 \equiv {\rm i} \langle s^{-2} - s^{+2} \rangle$, and $\xi_2 \equiv \langle s^{-2} + s^{+2} \rangle$, we obtain a closed set of rate equations,
\begin{eqnarray}
&& \dot \mu_1 = - (\gamma_{\rm A} - \gamma_{\rm B}) ~ \mu_1 + \gamma_{\rm B} ~, \nonumber \\
&& \dot \xi_1 = - (\gamma_{\rm A} - \gamma_{\rm B}) ~ \xi_1 + 2 \widetilde \omega ~ \xi_2 ~, \nonumber \\
&& \dot \xi_2 = - (\gamma_{\rm A} - \gamma_{\rm B}) ~ \xi_2 - 2 \widetilde \omega ~ \xi_1 - 2 \gamma_{\rm C} ~.
\end{eqnarray}
Setting these derivatives equal to zero, we find that the stationary photon emission rate of a single bosonic system (like an optical cavity or many tightly trapped atoms) is
\begin{eqnarray}
I_\gamma = {2 \gamma_{\rm A} \gamma_{\rm B} \over \gamma_{\rm A} - \gamma_{\rm B}} - {2 \gamma_{\rm C}^2 (\gamma_{\rm A} - \gamma_{\rm B}) \over 4 \widetilde \omega^2 + (\gamma_{\rm A} - \gamma_{\rm B})^2} ~.
\end{eqnarray}
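As a quick numerical sanity check of this expression (a sketch of our own; the parameter values are arbitrary illustrations in units of $\gamma_{\rm A}$), one can integrate the rate equations above to their stationary state and evaluate $I_\gamma = \gamma_{\rm A}\mu_1 + \gamma_{\rm B}(\mu_1 + 1) + \gamma_{\rm C}\xi_2$, using $s^- s^+ = s^+ s^- + 1$ (cf.~(\ref{bose})):
\begin{verbatim}
import numpy as np

# illustrative parameters in units of gamma_A (not values from the text)
gA, gB, gC, w = 1.0, 0.2, 0.1, 50.0   # w plays the role of omega-tilde

def rhs(y):
    mu1, xi1, xi2 = y
    return np.array([
        -(gA - gB) * mu1 + gB,
        -(gA - gB) * xi1 + 2.0 * w * xi2,
        -(gA - gB) * xi2 - 2.0 * w * xi1 - 2.0 * gC,
    ])

# forward-Euler integration to the stationary state
y, dt = np.zeros(3), 1.0e-4
for _ in range(500000):
    y = y + dt * rhs(y)

mu1, xi1, xi2 = y
I_num = gA * mu1 + gB * (mu1 + 1.0) + gC * xi2
I_ana = 2*gA*gB/(gA - gB) - 2*gC**2*(gA - gB)/(4*w**2 + (gA - gB)**2)
print(I_num, I_ana)   # the two numbers agree at the integration accuracy
\end{verbatim}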
In the absence of external driving, photon emission ceases only when
\begin{eqnarray} \label{last}
\gamma_{\rm B} = \gamma_{\rm C} = 0 ~,
\end{eqnarray}
as it is assumed almost everywhere in the literature \cite{Hegerfeldt93,Dalibard,Carmichael,Werlang}. However, this assumption relies strongly on how the integrals in (\ref{ABC}) are evaluated and whether relations like $\tilde D(\omega) = \tilde C(-\omega)$ are taken into account or not.
\section{Photon emission from a composite quantum system} \label{comp}
Let us now have a look at a large number $N$ of tightly confined atoms inside an optical cavity. The energy of this composite system is the sum of the free energies of both subsystems, their interactions with the free radiation field, and the interaction between the atoms and the cavity field, which changes (\ref{H}) into
\begin{eqnarray} \label{H2}
H_0 &=& \hbar ~ \omega_{\rm c} ~c^\dagger c + \hbar \omega_0 ~ S^+ S^- + \sum _{{\bf k},\lambda} \hbar \omega _k ~ a ^\dagger _{{\bf k} \lambda} a _{{\bf k} \lambda} ~ , \nonumber \\
H_{\rm int} &=& \sum_{{\bf k}, \lambda} \hbar \big( g_{{\bf k} \lambda} ~ a _{{\bf k} \lambda} + \tilde g_{{\bf k} \lambda} ~ a _{{\bf k} \lambda}^\dagger \big) ~ c^\dagger + \sqrt{N} \hbar ~ \big( q_{{\bf k} \lambda} ~ a _{{\bf k} \lambda} \nonumber \\
&& \hspace*{-0.5cm} + \tilde q_{{\bf k} \lambda} ~ a _{{\bf k} \lambda}^\dagger \big) ~ S^+ + \sqrt{N} \hbar g_{\rm c} ~ \big( c+c^\dagger \big) ~ S^+ + {\rm h.c.} ~~~~
\end{eqnarray}
with $g_{\rm c}$, $g_{{\bf k} \lambda}$, $\tilde g_{{\bf k} \lambda}$,
$q_{{\bf k} \lambda}$ and $\tilde q_{{\bf k} \lambda}$ being coupling
constants. For simplicity, the cavity photon states should be chosen such that
$g_{\rm c}$ becomes real. Proceeding as in the single system case, assuming
the same properties of the bath, and returning to the Schr\"odinger picture,
we again obtain the master equation (\ref{deltarho2}) but with
\begin{eqnarray} \label{last2}
{\cal R}(\rho) &=& \kappa~ c \rho c^\dagger + N\Gamma ~ S^+ \rho S^- ~, \nonumber \\
H_{\rm cond} &=& \hbar \Big( \widetilde \omega_{\rm c} - {{\rm i} \over 2} \kappa \Big) ~ c^\dagger c + \hbar \Big( \widetilde \omega_0 - {{\rm i} \over 2} N \Gamma \Big) ~ S^+ S^- \nonumber \\
&& + \sqrt{N} \hbar g_{\rm c} ~ \big( c+ c^\dagger \big) \big( S^+ + S^- \big) ~.
\end{eqnarray}
Here $\widetilde \omega_{\rm c}$ and $\widetilde \omega_0$ denote the bare cavity and atom frequencies, $\kappa$ is the cavity decay rate, and $\Gamma$ is the decay rate of the excited state of a single atom. The crucial difference from the usual Jaynes-Cummings model \cite{Knight} is the presence of the $c S^-$ and $c^\dagger S^+$ terms in (\ref{H2}), which vanish in the RWA. As we shall see below, these operators result in a non-zero stationary population of excited states and the continuous emission of photons, even without external driving.
To calculate this rate, we take a conservative point of view and neglect $\gamma_{\rm B}$ and $\gamma_{\rm C}$ as suggested in (\ref{last}), since this assures that no emissions occur in the absence of external driving in the single system case. Using (\ref{bose}), (\ref{Dicke4}), (\ref{deltarho2}), and (\ref{last2}), we again obtain a closed set of rate equations:
\begin{eqnarray}
&&\dot \mu _1 = \sqrt{N} g_{\rm c} \eta _1 - \kappa \mu _1 \, , ~~
\dot \mu _2 = \sqrt{N} g_{\rm c} \eta _2 - N \Gamma \mu _2 ~ , \nonumber \\
&&\dot \eta _1 = 2 \sqrt{N} g_{\rm c} (1 + 2 \mu_2 + \xi_4) + \widetilde \omega_0 \eta _3 + \widetilde \omega_{\rm c} \eta _4 - {\textstyle {1 \over 2}} \zeta \eta_1 ~ , \nonumber \\
&&\dot \eta _2 = 2 \sqrt{N} g_{\rm c} (1 + 2 \mu_1 + \xi_2) + \widetilde \omega _0 \eta _4 + \widetilde \omega _{\rm c} \eta _3 - {\textstyle {1 \over 2}} \zeta \eta_2 ~ , \nonumber \\
&&\dot \eta _3 = - 2 \sqrt{N} g_{\rm c} (\xi_1 + \xi_3) - \widetilde \omega _0 \eta _1 - \widetilde \omega _{\rm c} \eta _2 - {\textstyle {1 \over 2}} \zeta \eta _3 ~ , \nonumber \\
&&\dot \eta _4 = - \widetilde \omega _0 \eta _2 - \widetilde \omega_{\rm c} \eta _1 - {\textstyle {1 \over 2}} \zeta \eta_4 ~, \nonumber \\
&&\dot \xi _1 = 2 \sqrt{N} g_{\rm c} \eta_4 + 2 \widetilde \omega_{\rm c} \xi_2 - \kappa \xi_1 ~, \nonumber \\
&&\dot \xi _2 = - 2 \sqrt{N} g_{\rm c} \eta_1 - 2 \widetilde \omega_{\rm c} \xi_1 - \kappa \xi_2 ~, \nonumber \\
&&\dot \xi _3 = 2 \sqrt{N} g_{\rm c} \eta_4 + 2 \widetilde \omega_0 \xi_4 - N \Gamma \xi_3 ~, \nonumber \\
&&\dot \xi _4 = - 2 \sqrt{N} g_{\rm c} \eta_2 - 2 \widetilde \omega_0 \xi_3 - N \Gamma \xi_4
\end{eqnarray}
with $\mu_1 \equiv \langle c^\dagger c \rangle$, $\mu_2 \equiv \langle S^+ S^- \rangle$, $\eta_{1,2} \equiv {\rm i} \langle (S ^- \pm S ^+) (c \mp c^\dagger) \rangle$, $\eta _{3,4} \equiv\langle (S ^- \mp S ^+) (c \mp c^\dagger )\rangle$, $\xi_1 \equiv {\rm i} \langle c^2 - c^{\dagger 2} \rangle$, $\xi_2 \equiv \langle c^2 + c^{\dagger 2} \rangle$, $\xi_3 \equiv {\rm i} \langle S^{-2} - S^{+2} \rangle$, $\xi_4 \equiv \langle S^{-2} + S^{+2} \rangle$, and $\zeta \equiv \kappa + N \Gamma$. Combined with (\ref{kernel3}), the stationary state of these equations yields the cavity photon emission rate
\begin{eqnarray} \label{IN}
I_{\kappa} &=& {N \zeta \kappa g_{\rm c}^2 \left [ \, 8 \zeta g_{\rm c}^2 + \zeta^2 \Gamma + 4 \Gamma \left ( \widetilde \omega _0 - \widetilde \omega_{\rm c} \right )^2 \, \right] \over 16 \zeta^2 g_{\rm c}^2 \widetilde \omega_0 \widetilde \omega_{\rm c} + 2 \zeta^2 \kappa \Gamma \left ( \widetilde \omega _0 ^2 + \widetilde \omega_{\rm c}^2 \right ) + 4 \kappa \Gamma \left( \widetilde \omega _0^2 - \widetilde \omega_{\rm c}^2 \right)^2} \nonumber \\
\end{eqnarray}
which applies for $N \Gamma, \sqrt{N} g_{\rm c}, \kappa \ll \widetilde \omega_0,\widetilde \omega_{\rm c}$. For example, the parameters of the recent cavity experiment with $^{85}$Rb \cite{Trupke} combined with $N=10^4$ are expected to result in a rate as large as $I_\kappa = 301~\rm{s}^{-1}$, which can be detected experimentally (cf.~Fig.~\ref{fig2}).
\begin{figure}[t]
\begin{minipage}{\columnwidth}
\begin{center}
{\includegraphics[scale=0.7]{fig.eps}}
\end{center}
\caption{The cavity photon emission rate $I_\kappa$ as a function of time for $N=10^4$ atoms inside the resonator obtained from a numerical solution of the respective rate equations. Here $\omega_{\rm c} = \omega_0 = 384.2 \cdot 10^{12} \, {\rm s}^{-1}$ ($D_2$ line), $g_{\rm c} = 6.1\cdot 10^8 \, {\rm s}^{-1}$, $\Gamma = 1.9 \cdot 10^7 \, {\rm s}^{-1}$, and $\kappa = 1.3 \cdot 10^{10} \, {\rm s}^{-1}$, as in Ref.~\cite{Trupke}. After being initially empty, the cavity becomes populated --- even in the absence of external driving.} \label{fig2}
\end{minipage}
\end{figure}
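For completeness, the following Python sketch evaluates the closed-form rate (\ref{IN}) with the parameter values quoted above; the values themselves are taken from the text, and only their interpretation as plain rates in ${\rm s}^{-1}$ is our assumption.
\begin{verbatim}
import numpy as np

N     = 1.0e4                # number of atoms, as assumed in the text
w0    = 384.2e12             # atomic D2-line frequency in s^-1
wc    = w0                   # resonant cavity
g_c   = 6.1e8                # atom-cavity coupling in s^-1
Gamma = 1.9e7                # single-atom decay rate in s^-1
kappa = 1.3e10               # cavity decay rate in s^-1
zeta  = kappa + N * Gamma

num = N * zeta * kappa * g_c**2 * (8 * zeta * g_c**2
                                   + zeta**2 * Gamma
                                   + 4 * Gamma * (w0 - wc)**2)
den = (16 * zeta**2 * g_c**2 * w0 * wc
       + 2 * zeta**2 * kappa * Gamma * (w0**2 + wc**2)
       + 4 * kappa * Gamma * (w0**2 - wc**2)**2)
print(num / den)   # of order 3 x 10^2 photons per second
\end{verbatim}
This reproduces the order of magnitude of the rate quoted above, up to rounding of the input parameters.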
\section{Conclusions} \label{conc}
Let us now discuss the same topic in a more physical way. Among the reasons usually given to justify the RWA is that of time scales: the counter-rotating terms oscillate very rapidly, so that their contribution to the dynamics remains in general negligible with respect to that of the resonant terms \cite{Grynberg}. Other authors apply the RWA in order to preserve quantum numbers and energy \cite{Milonni,Schleich}, since this approximation drops all processes transferring energy between non-resonant modes. In Nature, however, there exist processes in which energy is redistributed among different degrees of freedom of a system, making some amount of self-organization possible. In particular, one could examine the possibility of concentrating the total energy of the system into a subset of degrees of freedom, thereby decreasing its entropy; to avoid a violation of the second law of thermodynamics, this would compel the release of energy to the environment, thus keeping the free energy constant. This is possible, of course, only when the system is open.
In this paper, we consequently examined the situation occurring when the counter-rotating terms are not dropped. Avoiding the RWA, and thereby solving the dynamics of the system under consideration more exactly, gives rise to non-trivial consequences. As a matter of fact, the mechanism of concentration of energy on a subset of degrees of freedom could help the understanding of the hitherto rather mysterious processes of self-organization. The mathematical analysis above indeed shows that in a quantum system a leakage of energy can occur among different degrees of freedom. This leakage is not necessarily triggered by an external pump of energy, but could also be triggered by the virtual photons coming from the quantum vacuum, as occurs, e.g., in the Casimir effect or in the Lamb shift. From the standpoint of the receiving system, the origin of the triggering energy is not important as long as the balance between the variations of energy and entropy is satisfied so as to keep the free energy constant. In this respect, we recall that the ratio between these variations is just the temperature, as required by the thermodynamic definition:
\begin{eqnarray}
k_{\rm B} T &=& {{\mathrm d}U \over {\mathrm d}S} ~.
\end{eqnarray}
The interplay between the microscopic quantum dynamics and the thermal properties certainly deserves further analysis, which, however, is outside the scope of the present paper. Here we limit ourselves to a specific physical picture, which, however, does not exclude other physical scenarios, such as, for example, the conversion of energy from the thermal bath phonons to leaking photons (as occurs in the laser cooling mechanism). More appealing could be the occurrence of a dynamics where a system is able to reach a state of lower energy by jumping over a separating barrier with the help of virtual photons coming from the vacuum. The possibility of realizing these scenarios of course requires further study. First indications along the above lines can be found in the literature \cite{Kurcz_NJP,hegerfeldt2,Zubairy,Werlang}. In all these examples, the system dynamics is irreversible.
In summary, we derived the master equation for a single bosonic system (an optical cavity or a large number of tightly confined particles) without making any approximation other than the usual dipole and Born approximations. We find that the effect of the counter-rotating terms in the interaction between a quantum optical system and its free radiation field might be annihilated by environment-induced measurements of whether or not a photon has been emitted. Assuming that this is the case, we then show that these measurements cannot suppress the interaction between a large number of atoms and an optical cavity. The result is the continuous leakage of photons through the resonator mirrors, even in the absence of external driving. For sufficiently many atoms, a relatively strong signal might be created. Its dependence on the system parameters can be verified experimentally using optical cavities like those described in \cite{Trupke,Reichel,Esslinger}. We recognize that, in order to better understand the physical mechanisms responsible for the mathematical results presented in this paper, some more work is needed, which is in our plans for future publications.
\vspace*{0.05cm}
\noindent {\em Acknowledgement.} A. B. acknowledges a James Ellis University Research Fellowship from the Royal Society and the GCHQ. This work was supported in part by the UK Research Council EPSRC, the EU Research and Training Network EMALI, University of Salerno, and INFN.
\section{Introduction.}
\label{section:intro}
Since being introduced by W. Hoeffding \cite{hoeffding1948class}, U-statistics have become an active topic of research.
Many classical results in estimation and testing are related to U-statistics; detailed treatments of the subject can be found in the excellent monographs \cite{Decoupling, korolyuk2013theory,serfling2009approximation,kowalski2008modern}.
A large body of research has been devoted to understanding the asymptotic behavior of real-valued U-statistics.
Such asymptotic results, as well as moment and concentration inequalities, are discussed in the works \cite{PM-decoupling-1995,Decoupling,gine2000exponential,gine1992hoffmann,ibragimov1999analogues, gine2001lil,houdre2003exponential}, among others.
The case of vector-valued and matrix-valued U-statistics has received less attention; natural examples of matrix-valued U-statistics include various estimators of covariance matrices, such as the usual sample covariance matrix and the estimators based on Kendall's tau \cite{wegkamp2016adaptive,han2017statistical}.
Exponential and moment inequalities for Hilbert space-valued U-statistics have been developed in \cite{adamczak2008lil}.
The goal of the present work is to obtain moment and concentration inequalities for generalized degenerate U-statistics of order 2 with values in the set of matrices with complex-valued entries equipped with the operator (spectral) norm.
The emphasis is made on expressing the upper bounds in terms of \emph{computable} parameters.
Our results extend the matrix Rosenthal's inequality for the sums of independent random matrices due to Chen, Gittens and Tropp \cite{chen2012masked} (see also \cite{junge2013noncommutative,mackey2014matrix}) to the framework of U-statistics.
As a corollary of our bounds, we deduce a variant of the Matrix Bernstein inequality for U-statistics of order 2.
We also discuss connections of our bounds with general moment inequalities for Banach space-valued U-statistics due to R. Adamczak \cite{adamczak2006moment}, and leverage Adamczak's inequalities to obtain additional refinements and improvements of the results.
We note that U-statistics with values in the set of self-adjoint matrices have been considered in \cite{chen2016bootstrap}, however, most results in that work deal with the element-wise sup-norm, while we are primarily interested in results about the moments and tail behavior of the spectral norm of U-statistics.
Another recent work \cite{minsker2018robust} investigates robust estimators of covariance matrices based on U-statistics, but deals only with the case of non-degenerate U-statistics that can be reduced to the study of independent sums.
The key technical tool used in our arguments is the extension of the non-commutative Khintchine's inequality (Lemma \ref{bound-on-expectation}) which could be of independent interest.
\section{Notation and background material.}
Given $A\in \mathbb C^{d_1\times d_2}$, $A^\ast\in \mathbb C^{d_2\times d_1}$ will denote the Hermitian adjoint of $A$.
$\mathbb H^d\subset \mathbb C^{d\times d}$ stands for the set of all self-adjoint matrices.
If $A=A^\ast$, we will write $\lambda_{\mbox{\footnotesize{max}\,}}(A)$ and $\lambda_{\mbox{\footnotesize{min}\,}}(A)$ for the largest and smallest eigenvalues of $A$.
Everywhere below, $\|\cdot\|$ stands for the spectral norm $\|A\|:=\sqrt{\lambda_{\mbox{\footnotesize{max}\,}}(A^\ast A)}$.
If $d_1=d_2=d$, we denote by $\mbox{tr\,} (A)$ the trace of $A$.
The Schatten p-norm of a matrix $A$ is defined as $\|A\|_{S_p} = \left( \mbox{tr\,} (A^\ast A)^{p/2} \right)^{1/p}.$
When $p=1$, the resulting norm is called the nuclear norm and will be denoted by $\|\cdot\|_\ast$.
The Schatten 2-norm is also referred to as the Frobenius norm or the Hilbert-Schmidt norm, and is denoted by $\|\cdot\|_{\mathrm{F}}$; and the associated inner product is
$\dotp{A_1}{A_2}=\mbox{tr\,}(A_1^\ast A_2)$.
Given $z\in \mathbb C^d$, $\left\| z \right\|_2=\sqrt{z^\ast z}$ stands for the usual Euclidean norm of $z$.
Let $A,~B\in\mathbb{H}^{d}$.
We will write $A\succeq B$ (or $A\succ B$) iff $A-B$ is nonnegative (respectively, positive) definite.
For $a,b\in \mathbb R$, we set $a\vee b:=\max(a,b)$ and $a\wedge b:=\min(a,b)$.
We use $C$ to denote absolute constants that can take different values in various places.
Finally, we introduce the so-called Hermitian dilation, a tool that often allows one to reduce problems involving general rectangular matrices to the case of Hermitian matrices.
\begin{definition}
Given a rectangular matrix $A\in\mathbb C^{d_1\times d_2}$, the Hermitian dilation $\mathcal D: \mathbb C^{d_1\times d_2}\mapsto \mathbb C^{(d_1+d_2)\times (d_1+d_2)}$ is defined as
\begin{align}
\label{eq:dilation}
&
\mathcal D(A)=\begin{pmatrix}
0 & A \\
A^\ast & 0
\end{pmatrix}.
\end{align}
\end{definition}
\noindent Since
$\mathcal D(A)^2=\begin{pmatrix}
A A^\ast & 0 \\
0 & A^\ast A
\end{pmatrix},$
it is easy to see that $\| \mathcal D(A) \|=\|A\|$.
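A minimal numerical illustration of this property, assuming NumPy is available, is sketched below; the matrix dimensions and random entries are our arbitrary choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 5)) + 1j * rng.normal(size=(3, 5))

# Hermitian dilation of A as in (eq:dilation)
D = np.block([[np.zeros((3, 3)), A],
              [A.conj().T, np.zeros((5, 5))]])

# the spectral norms coincide: ||D(A)|| = ||A||
print(np.linalg.norm(D, 2), np.linalg.norm(A, 2))
\end{verbatim}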
The rest of the paper is organized as follows.
Section \ref{sec:setup} contains the necessary background on U-statistics.
Section \ref{sec:moment} contains our main results -- bounds on the $\mathbb H^d$-valued Rademacher chaos and moment inequalities for $\mathbb H^d$-valued U-statistics of order 2.
Section \ref{section:adamczak} provides comparison of our bounds to relevant results in the literature, and discusses further improvements.
Finally, Section \ref{sec:proof} contains the technical background and proofs of the main results.
\subsection{Background on U-statistics.}
\label{sec:setup}
Consider a sequence of i.i.d. random variables $X_1,\ldots,X_n$ ($n\geq2$) taking values in a measurable space
$(\mathcal S,\mathcal{B})$, and let $P$ denote the distribution of $X_1$.
Define
\[
I_n^m:=\{(i_1,\ldots,i_m):~1\leq i_j\leq n,~i_j\neq i_k~\textrm{if}~j\neq k\},
\]
and assume that $H_{i_1,\ldots,i_m}: \mathcal S^m\rightarrow\mathbb H^d$, $(i_1,\ldots,i_m)\in I_n^m$, $2\leq m\leq n$, are $\mathcal{B}^{\otimes m}$-measurable, permutation-symmetric kernels, meaning that $H_{i_1,\ldots,i_m}(x_1,\ldots,x_m) = H_{i_{\pi_1},\ldots,i_{\pi_m}}(x_{\pi_1},\ldots,x_{\pi_m})$ for any $(x_1,\ldots,x_m)\in \mathcal S^m$ and any permutation $\pi$. For example, when $m=2$, this condition reads as $H_{i_1,i_2}(x_1,x_2) = H_{i_2,i_1}(x_2,x_1)$ for all $i_1\ne i_2$ and $x_1,x_2$.
The generalized U-statistic is defined as \cite{Decoupling}
\begin{equation}
\label{u-stat}
U_n:=\sum_{(i_1,\ldots,i_m)\in I_n^m}H_{i_1,\ldots,i_m}(X_{i_1},\ldots,X_{i_m}).
\end{equation}
When $H_{i_1,\ldots,i_m}\equiv H$, we obtain the classical U-statistics.
It is often easier to work with the decoupled version of $U_n$ defined as
\[
U'_n = \sum_{(i_1,\ldots, i_m)\in I_n^m} H_{i_1,\ldots,i_m}\left( X^{(1)}_{i_1},\ldots,X^{(m)}_{i_m} \right),
\]
where $\left\{ X_i^{(k)} \right\}_{i=1}^n, \ k=1,\ldots,m$ are independent copies of the sequence $X_1,\ldots,X_n$.
Our ultimate goal is to obtain the moment and deviation bounds for the random variable $\| U_n - \mathbb E U_n \|$.
Next, we recall several useful facts about U-statistics.
The projection operator $\pi_{m,k}~(k\leq m)$ is defined as
\[
\pi_{m,k}H(\mathbf{x}_{i_1},\ldots,\mathbf{x}_{i_k})
:= (\delta_{\mathbf{x}_{i_1}}- P)\ldots(\delta_{\mathbf{x}_{i_k}} - P) P^{m-k}H,
\]
where
\[
Q^mH := \int\ldots\int H(\mathbf{y}_1,\ldots,\mathbf{y}_m)\,dQ(\mathbf{y}_1)\ldots dQ(\mathbf{y}_m)
\]
for any probability measure $Q$ on $(\mathcal S,\mathcal{B})$ (so that $P^{m-k}$ above denotes integration of the last $m-k$ arguments against $P$), and $\delta_{x}$ is a Dirac measure concentrated at $x\in \mathcal S$.
For example, $\pi_{m,1}H(x) = \mathbb E \left[ H(X_1,\ldots,X_m)| X_1=x\right] - \mathbb E H(X_1,\ldots,X_m)$.
\begin{definition}
Let $F: \mathcal S^m\rightarrow\mathbb H^d$ be a measurable function. We will say that $F$ is $P$-degenerate of order $r$
($1\leq r<m$) iff
\[
\mathbb EF(\mathbf{x}_1,\ldots,\mathbf{x}_r,X_{r+1},\ldots,X_m)=0~\forall \mathbf{x}_1,\ldots,\mathbf{x}_r\in \mathcal S,
\]
and $\mathbb E F(\mathbf{x}_1,\ldots,\mathbf{x}_r, \mathbf{x}_{r+1},X_{r+2},\ldots,X_m)$ is not a constant function.
Otherwise, $F$ is non-degenerate.
\end{definition}
For instance, it is easy to check that $\pi_{m,k}H$ is degenerate of order $k-1$.
If $F$ is degenerate of order $m-1$, then it is called \emph{completely degenerate}.
From now on, we will only consider generalized U-statistics of order $m=2$ with completely degenerate (that is, degenerate of order 1) kernels.
The case of non-degenerate U-statistics is easily reduced to the degenerate case via \emph{Hoeffding's decomposition}; see page 137 in \cite{Decoupling} for the details.
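To illustrate the projection $\pi_{2,2}$ concretely, the following Python sketch verifies degeneracy by Monte Carlo; the toy kernel and the choice $P = N(0, I_3)$ (for which the conditional expectations are available in closed form) are ours, made purely for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d = 3
I = np.eye(d)

def H(x, y):                     # toy non-degenerate symmetric kernel
    s = x + y
    return np.outer(s, s)

# for P = N(0, I): E H(x, Y) = x x^T + I and E H(X, Y) = 2 I, hence
def piH(x, y):                   # pi_{2,2} H, written out explicitly
    return H(x, y) - (np.outer(x, x) + I) - (np.outer(y, y) + I) + 2 * I

x = rng.normal(size=d)           # fix the first argument
Y = rng.normal(size=(20000, d))  # Monte Carlo sample from P
avg = np.mean([piH(x, y) for y in Y], axis=0)
print(np.linalg.norm(avg, 2))    # close to zero: piH is completely degenerate
\end{verbatim}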
\section{Main results.}
\label{sec:moment}
Rosenthal-type moment inequalities for sums of independent matrices
have appeared in a number of previous works, including \cite{chen2012masked, mackey2014matrix, JT-matrix}. For example, the following inequality follows from Theorem A.1 in \cite{chen2012masked}:
\begin{lemma}[Matrix Rosenthal inequality]
\label{lemma:rosenthal-1}
Suppose that $q \geq 1$ is an integer and fix $r\geq q\vee \log d$. Consider a finite sequence $\{\mathbf{Y}_i\}$ of independent $\mathbb H^d$-valued random matrices. Then
\begin{multline}
\label{iid-rosenthal}
\left(\mathbb E\left\| \sum_i \left( \mathbf{Y}_i -\mathbb E \mathbf{Y}_i \right)\right\|^{2q} \right)^{1/2q}\leq 2\sqrt{er}\left\| \left(\sum_i \mathbb E
\left( \mathbf{Y}_i - \mathbb E\mathbf{Y}_i \right)^2 \right)^{1/2} \right\|
\\
+4\sqrt{2} er\left( \mathbb E\max_i\|\mathbf{Y}_i - \mathbb E\mathbf{Y}_i \|^{2q} \right)^{1/2q}.
\end{multline}
\end{lemma}
The bound above improves upon the moment inequality that follows from the matrix Bernstein's inequality (see Theorem 1.6.2 in \cite{JT-matrix}):
\begin{lemma}[Matrix Bernstein's inequality]
\label{bernstein}
Consider a finite sequence $\{\mathbf{Y}_i\}$ of independent $\mathbb H^d$-valued random matrices such that
$\|\mathbf{Y}_i - \mathbb E\mathbf{Y}_i\|\leq B$ almost surely. Then
\[
\Pr\left( \left\| \sum_i \left( \mathbf{Y}_i -\mathbb E\mathbf{Y}_i \right) \right\| \geq 2\sigma\sqrt{u} + \frac{4}{3}Bu \right)\leq 2de^{-u},
\]
where $\sigma^2:= \left\| \sum_i \mathbb E\left(\mathbf{Y}_i -\mathbb E\mathbf{Y}_i\right)^2 \right\|$.
\end{lemma}
\noindent Indeed, applying Lemma \ref{tail-to-moment} with $a_0 = C\left(\sigma\sqrt{\log(2d)} +B\log (2d)\right)$ for some absolute constant $C>0$ (the term $a_0$ absorbs the dimensional factor $2d$ in the tail bound), one obtains after some simple algebra that
\[
\left( \mathbb E\left\| \sum_i \left( \mathbf{Y}_i - \mathbb E\mathbf{Y}_i \right) \right\|^q \right)^{1/q}\leq C_2\left( \sqrt{q+\log(2d)}\,\sigma + (q+\log(2d))B \right),
\]
for an absolute constant $C_2>0$ and all $q\geq1$.
This bound is weaker than \eqref{iid-rosenthal} as it requires almost sure boundedness of $\|\mathbf{Y}_i - \mathbb E \mathbf{Y}_i\|$ for all $i$.
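A simple simulation illustrates the tail bound of Lemma \ref{bernstein}; the fixed matrices and Rademacher signs below are our own construction, chosen so that the summands are bounded and centered, and the sketch serves only as an empirical sanity check.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
d, n, trials = 8, 300, 2000

A = rng.normal(size=(n, d, d))
A = (A + A.transpose(0, 2, 1)) / np.sqrt(2 * d)   # fixed Hermitian A_i
sigma2 = np.linalg.norm(np.sum(A @ A, axis=0), 2) # matrix variance
B = max(np.linalg.norm(Ai, 2) for Ai in A)        # a.s. bound on ||Y_i||

u = 3.0
thr = 2 * np.sqrt(sigma2) * np.sqrt(u) + 4.0 / 3.0 * B * u
eps = rng.choice([-1.0, 1.0], size=(trials, n))   # Y_i = eps_i A_i
norms = [np.linalg.norm(np.tensordot(e, A, axes=1), 2) for e in eps]
print(np.mean(np.array(norms) >= thr), 2 * d * np.exp(-u))
# the empirical tail frequency is dominated by the Bernstein bound
\end{verbatim}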
One of the main goals of this work is to obtain operator norm bounds similar to inequality \eqref{iid-rosenthal} for $\mathbb H^d$-valued U-statistics of order 2.
\subsection{Degenerate U-statistics of order 2.}
Moment bounds for scalar U-statistics are well-known, see for example the work \cite{gine2000exponential} and references therein.
Moreover, in \cite{adamczak2006moment}, the author obtained moment inequalities for general Banach space-valued U-statistics.
Here, we aim at improving these bounds for the special case of $\mathbb H^d$-valued U-statistics of order 2.
We discuss connections and provide comparison of our results with the bounds obtained by R. Adamczak \cite{adamczak2006moment} in Section \ref{section:adamczak}.
\subsection{Matrix Rademacher chaos.}
The starting point of our investigation is a moment bound for the matrix Rademacher chaos of order 2.
This bound generalizes the spectral norm inequality for the matrix Rademacher series, see \cite{JT-matrix, tropp2016expected, tropp2016second,random-matrix-2010}.
We recall Khintchine's inequality for the matrix Rademacher series for the ease of comparison: let $A_1,\ldots,A_n\in\mathbb{H}^{d}$ be a sequence of fixed matrices, and $\varepsilon_1,\ldots,\varepsilon_n$ -- a sequence of i.i.d. Rademacher random variables. Then
\begin{align}
\label{matrix-Khintchine}
&
\left( \mathbb E\left\|\sum_{i=1}^n\varepsilon_i A_i\right\|^2 \right)^{1/2}\leq \sqrt{e(1+2\log d)}\cdot
\left\| \sum_{i=1}^n A_i^2 \right\|^{1/2}.
\end{align}
Furthermore, Jensen's inequality yields the matching lower bound $\left\| \sum_{i=1}^n A_i^2 \right\|^{1/2} \leq \left( \mathbb E\left\|\sum_{i=1}^n\varepsilon_i A_i\right\|^2 \right)^{1/2}$, so this bound is tight up to the logarithmic factor.
Note that the expected norm of $\sum \varepsilon_i A_i$ is controlled by the single ``matrix variance'' parameter $\left\| \sum_{i=1}^n A_i^2 \right\|$.
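The following Python sketch compares the two sides of \eqref{matrix-Khintchine}; the fixed matrices are randomly generated (our choice, for illustration only) and the expectation is approximated by an empirical average.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
d, n, trials = 20, 50, 500

A = rng.normal(size=(n, d, d))
A = (A + A.transpose(0, 2, 1)) / 2                # fixed Hermitian A_i
var = np.linalg.norm(np.sum(A @ A, axis=0), 2)    # ||sum A_i^2||

eps = rng.choice([-1.0, 1.0], size=(trials, n))
second_moment = np.mean(
    [np.linalg.norm(np.tensordot(e, A, axes=1), 2)**2 for e in eps])
print(np.sqrt(second_moment),                     # left-hand side
      np.sqrt(np.e * (1 + 2 * np.log(d)) * var))  # right-hand side
\end{verbatim}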
Next, we state the main result of this section, the analogue of inequality \eqref{matrix-Khintchine} for the Rademacher chaos of order 2.
\begin{lemma}
\label{bound-on-expectation}
Let $\{A_{i_1,i_2}\}_{i_1,i_2=1}^{n}\in \mathbb H^d$ be a sequence of fixed matrices.
Assume that $\left\{\varepsilon_j^{(i)}\right\}_{j\in\mathbb{N}}, \ i=1,2,$ are two independent sequences of i.i.d. Rademacher random variables, and define
\[
X= \sum_{(i_1,i_2)\in I_n^2} A_{i_1,i_2}\varepsilon_{i_1}^{(1)}\varepsilon_{i_2}^{(2)}.
\]
Then for any $q\geq1$,
\begin{multline}
\label{eq:khintchine}
\max\left\{\left\|GG^\ast \right\|, \left\| \sum_{(i_1,i_2)\in I_n^2}A_{i_1,i_2}^2 \right\| \right\}^{1/2}
\leq \left( \mathbb E\|X\|^{2q}\right)^{1/(2q)} \\
\leq
\frac{4}{\sqrt{e}}\cdot r\cdot\max\left\{ \left\| GG^\ast \right\|, \left\| \sum_{(i_1,i_2)\in I_n^2}A_{i_1,i_2}^2 \right\| \right\}^{1/2},
\end{multline}
where $r:= q \vee\log d$, and the matrix $G\in\mathbb{H}^{nd}$ is defined via its block structure as
\begin{equation}
\label{eq:G}
G:=
\left(
\begin{array}{cccc}
0 & A_{1,2} & \ldots & A_{1,n} \\
A_{2,1} & 0 & \ldots & A_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n,1} & A_{n,2} & \ldots & 0
\end{array}
\right).
\end{equation}
\end{lemma}
\begin{remark}[Constants in Lemma \ref{bound-on-expectation}]
\label{remark:log-constant}
Matrix Rademacher chaos of order 2 has been studied previously in \cite{rauhut2009circulant}, \cite{pisier1998non} and \cite{HR-CS}, where Schatten-$p$ norm upper bounds were obtained by iterating Khintchine's inequality for Rademacher series. Specifically, the following bound holds for all $p\geq 1$ (see Lemma \ref{k-inequality-2} for the details):
\begin{align*}
&
\mathbb E\left\| X \right\|_{S_{2p}}^{2p}
\leq
2\left( \frac{2\sqrt 2}{e} p\right)^{2p}
\max\left\{ \left\| \left(G G^\ast \right)^{1/2}\right\|_{S_{2p}}^{2p}, \left\|\left(\sum_{i_1,i_2=1}^n A_{i_1,i_2}^2\right)^{1/2}\right\|_{S_{2p}}^{2p} \right\}.
\end{align*}
Using the fact that for any $B\in\mathbb{H}^{d}$, $\|B\|\leq\|B\|_{S_{2p}}\leq d^{1/2p}\|B\|$
and taking $p=q\vee \log(nd)$, one could obtain a ``na\"{i}ve'' extension of the inequality above, namely
\begin{align*}
&
\left( \mathbb E\|X\|^{2q}\right)^{1/(2q)}\leq
C\max\left( q,\log(nd) \right) \max\left\{ \left\| GG^\ast \right\|, \left\|\sum_{(i_1,i_2)\in I^n_2}A_{i_1,i_2}^2\right\| \right\}^{1/2}
\end{align*}
that contains an extra $\log(n)$ factor which is removed in Lemma \ref{bound-on-expectation}.
\end{remark}
One may wonder if the term $\left\| GG^\ast \right\|$ in Lemma \ref{bound-on-expectation} is redundant.
For instance, in the case when $\{A_{i_1,i_2}\}_{i_1,i_2}$ are scalars, it is easy to see that
$\left\| \sum_{(i_1,i_2)\in I_n^2 } A_{i_1,i_2}^2\right\|\geq\left\| GG^\ast \right\|$.
However, a more careful examination shows that there is no strict dominance between $\left\| GG^\ast \right\|$ and
$\left\| \sum_{(i_1,i_2)\in I_n^2 } A_{i_1,i_2}^2\right\|$.
The following example presents a situation where $\left\| \sum_{(i_1,i_2)\in I_n^2 } A_{i_1,i_2}^2\right\|<\left\| GG^\ast \right\|$.
\begin{example}
\label{example:01}
Assume that $d~\geq~n~\geq~2$, let
$\{\mathbf{a}_1,\ldots,\mathbf{a}_d\}$ be any orthonormal basis in $\mathbb{R}^d$,
and let $\mathbf{a}:=[\mathbf{a}_1^T,\ldots,\mathbf{a}_n^T]^T \in \mathbb R^{nd}$ be the ``vertical concatenation'' of $\mathbf{a}_1,\ldots,\mathbf{a}_n$.
Define
\[
A_{i_1,i_2} := \mathbf{a}_{i_1}\mathbf{a}_{i_2}^T + \mathbf{a}_{i_2}\mathbf{a}_{i_1}^T,
~~i_1,i_2\in\{1,2,\ldots,n\},
\]
and
\[
X:= \sum_{(i_1,i_2)\in I_n^2} \varepsilon_{i_1}^{(1)}\varepsilon_{i_2}^{(2)}
A_{i_1,i_2}.
\]
Then $\left\| GG^\ast \right\| = \left\| GG^T \right\| \geq (n-2) \| \mathbf{a}\|_{2}^2 = (n-2)n$, and
$\left\| \sum_{(i_1,i_2)\in I_n^2} A_{i_1,i_2}^2 \right\| = 2(n-1)$.
Details are outlined in Section \ref{section:example-proof}.
\end{example}
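A direct numerical check of Example \ref{example:01} is sketched below; NumPy is assumed, and the standard basis is taken for concreteness.
\begin{verbatim}
import numpy as np

n = d = 8
a = np.eye(d)        # the standard basis as the orthonormal basis

def A(i, j):
    return np.outer(a[i], a[j]) + np.outer(a[j], a[i])

Z = np.zeros((d, d))
G = np.block([[A(i, j) if i != j else Z for j in range(n)]
              for i in range(n)])
S = sum(A(i, j) @ A(i, j) for i in range(n) for j in range(n) if i != j)
print(np.linalg.norm(G @ G.T, 2), np.linalg.norm(S, 2))
# consistent with ||GG^T|| >= (n-2)n = 48 while ||sum A^2|| = 2(n-1) = 14
\end{verbatim}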
\noindent It follows from Lemma \ref{lemma:block-matrix} that
\begin{align}
\label{eq:useful-bound}
&
\left\| GG^\ast \right\| \leq \sum_{i_1}\left\| \sum_{i_2: i_2\ne i_1} A^2_{i_1,i_2}\right\|.
\end{align}
Often, this inequality yields a ``computable'' upper bound for the right-hand side of the inequality \eqref{eq:khintchine}, however, in some cases it results in the loss of precision, as the following example demonstrates.
\begin{example}
\label{example:02}
Assume that $n$ is even, $d~\geq~n~\geq~2$, let
$\{\mathbf{a}_1,\ldots,\mathbf{a}_d\}$ be an orthonormal basis in $\mathbb{R}^d$, and let $\mathcal C\in \mathbb R^{n\times n}$ be an orthogonal matrix with entries $c_{i,j}$ such that $c_{i,i}=0$ for all $i$. Define
\[
A_{i_1,i_2}= c_{i_1,i_2}\left(\mathbf{a}_{i_1}\mathbf{a}_{i_2}^T + \mathbf{a}_{i_2}\mathbf{a}_{i_1}^T \right),
~~i_1,i_2\in\{1,2,\ldots,n\},
\]
and $X:= \sum_{(i_1,i_2)\in I_n^2} \varepsilon_{i_1}^{(1)}\varepsilon_{i_2}^{(2)} A_{i_1,i_2}.$
Then $\left\| GG^\ast \right\| = 1$, $\left\| \sum_{(i_1,i_2)\in I_n^2} A_{i_1,i_2}^2 \right\| = 2$, but
\[
\sum_{i_1} \left\| \sum_{i_2:i_2\neq i_1}A_{i_1,i_2}^2\right\| = n.
\]
Details are outlined in Section \ref{section:example-proof}.
\end{example}
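The following sketch instantiates Example \ref{example:02} with one concrete choice of $\mathcal C$, namely a perfect-matching permutation matrix (our choice; any orthogonal matrix with zero diagonal would do), and evaluates all three quantities.
\begin{verbatim}
import numpy as np

n = d = 8            # n even
a = np.eye(d)
C = np.zeros((n, n)) # perfect-matching permutation: orthogonal, zero diagonal
for k in range(0, n, 2):
    C[k, k + 1] = C[k + 1, k] = 1.0

def A(i, j):
    return C[i, j] * (np.outer(a[i], a[j]) + np.outer(a[j], a[i]))

Z = np.zeros((d, d))
G = np.block([[A(i, j) if i != j else Z for j in range(n)]
              for i in range(n)])
S = sum(A(i, j) @ A(i, j) for i in range(n) for j in range(n) if i != j)
row_sum = sum(
    np.linalg.norm(sum(A(i, j) @ A(i, j) for j in range(n) if j != i), 2)
    for i in range(n))
print(np.linalg.norm(G @ G.T, 2), np.linalg.norm(S, 2), row_sum)  # 1, 2, n
\end{verbatim}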
\subsection{Moment inequalities for degenerate U-statistics of order 2.}
\label{sec:main-results}
Let $H_{i_1,i_2}:\mathcal{S}\times \mathcal{S}\mapsto \mathbb H^d$, $(i_1,i_2)\in I_n^2$, be a sequence of degenerate kernels, for example, $H_{i_1,i_2}(x_1,x_2)=\pi_{2,2}\widehat H_{i_1,i_2}(x_1,x_2)$ for some non-degenerate permutation-symmetric $\widehat H_{i_1,i_2}$.
Recall that $U_n$, the generalized U-statistic of order 2, has the form
\[
U_n:=\sum_{(i_1,i_2)\in I_n^2}H_{i_1,i_2}(X_{i_1},X_{i_2}).
\]
Everywhere below, $\mathbb E_j[\cdot], \ j=1,2,$ stands for the expectation with respect to $\left\{X_i^{(j)}\right\}_{i=1}^n$ only (that is, conditionally on all other random variables).
The following theorem is our most general result; it can be used as a starting point to derive more refined bounds.
\begin{theorem}
\label{thm:degen-moment}
Let $\left\{X_i^{(j)}\right\}_{i=1}^n, \ j=1,2,$ be $\mathcal{S}$-valued i.i.d. random variables, $H_{i,j}:\mathcal{S}\times \mathcal{S}\mapsto \mathbb H^d$ -- permutation-symmetric degenerate kernels. Then for all $q \geq 1$ and $r=\max(q ,\log (ed))$,
\begin{align*}
\left(\mathbb E \left\| U_n \right\|^{2q} \right)^{1/2q}
\leq &
4 \left(\mathbb E \left\| \sum_{(i_1,i_2)\in I^2_n} H_{i_1,i_2}\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^{2q}\right)^{1/2q}
\\
\leq &
128/\sqrt{e} \Bigg[
16 r^{3/2} \left( \mathbb E\max_{i_1} \left\| \sum_{i_2:i_2\ne i_1 } H^2_{i_1,i_2} \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q \right)^{1/(2q)}
\\
&
+ r \left\| \sum_{(i_1,i_2)\in I_n^2 } \mathbb E H^2_{i_1,i_2}\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^{1/2}
+ r\left( \mathbb E\left\| \mathbb E_2 \widetilde G \widetilde G^\ast \right\|^{q}\right)^{1/2q} \Bigg],
\end{align*}
where the matrix $\widetilde G\in \mathbb H^{n d}$ is defined as
\begin{align}
\label{eq:g}
&
\widetilde G:=
\left(
\begin{array}{cccc}
0 & H_{1,2}\left(X_{1}^{(1)}, X_{2}^{(2)} \right) & \ldots & H_{1,n}\left(X_{1}^{(1)}, X_{n}^{(2)} \right) \\
H_{2,1}\left(X_{2}^{(1)}, X_{1}^{(2)} \right) & 0 & \ldots & H_{2,n}\left(X_{2}^{(1)}, X_{n}^{(2)} \right) \\
\vdots & \vdots & \ddots & \vdots \\
H_{n,1}\left(X_{n}^{(1)}, X_{1}^{(2)} \right) & H_{n,2}\left(X_{n}^{(1)}, X_{2}^{(2)} \right) & \ldots & 0
\end{array}
\right).
\end{align}
\end{theorem}
\begin{proof}
See Section \ref{proof:degen-moment}.
\end{proof}
The following lower bound (proven in Section \ref{proof:lower}) demonstrates that all the terms in the bound of Theorem \ref{thm:degen-moment} are necessary.
\begin{lemma}
\label{lemma:degen-lower-bound}
Under the assumptions of Theorem \ref{thm:degen-moment},
\begin{multline*}
\left(\mathbb E \left\| U_n \right\|^{2q} \right)^{1/2q} \geq
C \left[
\left( \mathbb E\max_{i_1} \left\| \sum_{i_2:i_2\ne i_1} H^2_{i_1,i_2}\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q \right)^{1/(2q)} \right.
\\
\left.+ \left( \mathbb E\left\| \mathbb E_2 \widetilde G \widetilde G^\ast \right\|^{q}\right)^{1/2q} +
\left( \mathbb E\left\| \sum_{(i_1,i_2)\in I_n^2 } \mathbb E_2 H^2_{i_1,i_2}\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^{q} \right)^{1/2q}\right]
\end{multline*}
where $C>0$ is an absolute constant.
\end{lemma}
\begin{example}
Let $\{ A_{i_1,i_2} \}_{1\leq i_1<i_2\leq n}$ be fixed elements of $\mathbb H^d$ and $X_1,\ldots,X_n$ -- centered i.i.d. real-valued random variables such that $\mbox{Var}(X_1)=1$. Consider $\mathbf{Y}:=\sum_{i_1\ne i_2} A_{i_1,i_2} X_{i_1}X_{i_2}$, where $A_{i_2,i_1}=A_{i_1,i_2}$ for $i_2>i_1$.
We will apply Theorem \ref{thm:degen-moment} to obtain the bounds for $\left( \mathbb E\|\mathbf{Y}\|^{2q} \right)^{1/2q}$. In this case, $H_{i_1,i_2} \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) = A_{i_1,i_2}X_{i_1}^{(1)} X_{i_2}^{(2)}$, and it is easy to see that
\[
\left( \mathbb E\max_{i_1} \left\| \sum_{i_2:i_2\ne i_1 } H^2_{i_1,i_2} \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q \right)^{1/(2q)} \leq
\max_i \left\| \sum_{j\ne i} A^2_{i,j}\right\|^{1/2} \left( \mathbb E\max_{1\leq i\leq n} |X_i|^{2q} \right)^{1/q}
\]
and $ \left\| \sum_{(i_1,i_2)\in I_n^2 } \mathbb E H^2_{i_1,i_2}\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^{1/2} = \left\| \sum_{(i_1,i_2)\in I_n^2} A^2_{i_1,i_2}\right\|^{1/2}$. Moreover,
\[
\left( \mathbb E_2 \widetilde G \widetilde G^\ast \right)_{i,j} = X_i^{(1)} X_j^{(1)}\sum_{k\ne i,j} A_{i,k} A_{j,k},
\]
implying that $\mathbb E_2 \widetilde G \widetilde G^\ast = D \, G G^\ast \, D$, where $G$ is defined as in \eqref{eq:G} and $D\in \mathbb H^{n d}$ is the diagonal matrix $D = \mathrm{diag}(X_1^{(1)},\ldots,X_n^{(1)}) \otimes I_d$, where $\otimes$ denotes the Kronecker product.
It yields that $\left\| \mathbb E_2 \widetilde G \widetilde G^\ast \right\| \leq \max_i \left| X_i^{(1)} \right|^2 \cdot \left\| G G^\ast\right\|$, hence
\[
\left( \mathbb E\left\| \mathbb E_2 \widetilde G \widetilde G^\ast \right\|^{q}\right)^{1/2q} \leq \left\| G G^\ast\right\|^{1/2} \left( \mathbb E\max_{1\leq i\leq n} |X_i|^{2q} \right)^{1/2q}.
\]
Combining the inequalities above, we deduce from Theorem \ref{thm:degen-moment} that
\begin{multline}
\label{eq:polynomial}
\left( \mathbb E\|\mathbf{Y}\|^{2q} \right)^{1/2q} \leq C\Bigg[ r \left( \left\| \sum_{(i_1,i_2)\in I_n^2} A^2_{i_1,i_2}\right\|^{1/2} + \left( \mathbb E\max_{1\leq i\leq n} |X_i|^{2q} \right)^{1/2q} \left\| GG^\ast \right\|^{1/2} \right) \\
+ r^{3/2} \max_{i_1} \left\| \sum_{i_2\ne i_1} A^2_{i_1,i_2}\right\|^{1/2} \left( \mathbb E\max_{1\leq i\leq n} |X_i|^{2q} \right)^{1/q} \Bigg],
\end{multline}
where $r = \max(q,\log (ed))$. If for instance $|X_1|\leq M$ almost surely for some $M\geq 1$, it follows that
\begin{equation*}
\left( \mathbb E\|\mathbf{Y}\|^{2q} \right)^{1/2q} \leq C\Bigg[ r \left( M \, \left\| GG^\ast \right\|^{1/2} + \left\| \sum_{(i_1,i_2)\in I_n^2} A^2_{i_1,i_2}\right\|^{1/2} \right)
+ r^{3/2} M^2 \max_{i_1} \left\| \sum_{i_2\ne i_1} A^2_{i_1,i_2}\right\|^{1/2} \Bigg].
\end{equation*}
On the other hand, if $X_1$ is not bounded but is sub-Gaussian, meaning that $\left(\mathbb E|X_1|^q \right)^{1/q} \leq C \sigma \sqrt{q}$ for all $q\in \mathbb N$ and some $\sigma>0$, then it is easy to check that
\[
\left(\mathbb E\max_{1\leq i\leq n} |X_i|^{2q}\right)^{1/2q} \leq C_1 \sqrt{\log(n)} \sigma \sqrt{2q},
\]
and the estimate for $\left(\mathbb E \left\|\mathbf{Y} \right\|^{2q}\right)^{1/2q}$ follows from \eqref{eq:polynomial}.
\end{example}
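As a numerical companion to this example, the following Python sketch compares the empirical value of $\mathbb E\|\mathbf{Y}\|$ with the terms of \eqref{eq:polynomial}, ignoring the absolute constant $C$; the coefficient blocks are randomly generated (our choice) and the $X_i$ are taken Rademacher, so that $M=1$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
d, n, trials = 10, 30, 200

# fixed blocks with A_{ij} symmetric, A_{ij} = A_{ji}, zero "diagonal"
A = rng.normal(size=(n, n, d, d))
A = A + A.transpose(0, 1, 3, 2)
A = (A + A.transpose(1, 0, 2, 3)) / 2
for i in range(n):
    A[i, i] = 0.0

X = rng.choice([-1.0, 1.0], size=(trials, n))     # Rademacher, so M = 1
EY = np.mean([np.linalg.norm(np.einsum('i,j,ijab->ab', x, x, A), 2)
              for x in X])

S2 = np.einsum('ijab,ijbc->ac', A, A)             # sum of A_{ij}^2
G = A.transpose(0, 2, 1, 3).reshape(n * d, n * d)
maxrow = max(np.linalg.norm(np.einsum('jab,jbc->ac', A[i], A[i]), 2)
             for i in range(n))
r = np.log(np.e * d)                              # r for q = 1
bound = (r * (np.linalg.norm(S2, 2)**0.5 + np.linalg.norm(G @ G.T, 2)**0.5)
         + r**1.5 * maxrow**0.5)
print(EY, bound)   # the bound dominates E||Y|| up to the absolute constant
\end{verbatim}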
Our next goal is to obtain more ``user-friendly'' versions of the upper bound, and we first focus on the term
$ \mathbb E \big\| \mathbb E_2 \widetilde G \widetilde G^\ast \big\|^q$ appearing in Theorem \ref{thm:degen-moment} that might be difficult to deal with directly.
It is easy to see that the $(i,j)$-th block of the matrix $\mathbb E_2 \widetilde G\widetilde G^\ast$ is
\[
\left( \mathbb E_2 \widetilde G \widetilde G^\ast\right)_{i,j} = \sum_{k\ne i,j} \mathbb E_2 \left[ H_{i,k}(X_i^{(1)},X_k^{(2)}) H_{j,k}(X_j^{(1)},X_k^{(2)})\right].
\]
It follows from Lemma \ref{lemma:block-matrix} that
\begin{align}
\label{eq:d20}
\| \mathbb E_2 \widetilde G \widetilde G^\ast \| & \leq \sum_{i} \left\| \left( \mathbb E_2 \widetilde G \widetilde G^\ast\right)_{i,i}\right\| =
\sum_{i_1} \left\| \sum_{i_2:i_2\ne i_1}\mathbb E_2 H_{i_1,i_2}^2\left(X^{(1)}_{i_1}, X_{i_2}^{(2)}\right) \right\|,
\end{align}
hence
\begin{multline*}
\left( \mathbb E\left\| \mathbb E_2\widetilde G \widetilde G^\ast \right\|^{q}\right)^{1/2q}\leq
\left( \mathbb E \left( \sum_{i_1} \left\| \sum_{i_2:i_2\ne i_1} \mathbb E_2 H_{i_1,i_2}^2\left(X^{(1)}_{i_1}, X_{i_2}^{(2)}\right) \right\| \right)^q \right)^{1/2q}
\\
\leq \left( \sum_{i_1}\mathbb E\left\| \sum_{i_2:i_2\ne i_1} \mathbb E_2 H_{i_1,i_2}^2\left( X^{(1)}_{i_1}, X_{i_2}^{(2)}\right)\right\| \right)^{1/2} \\
+ 2\sqrt{2eq} \left( \mathbb E\max_{i_1} \left\| \sum_{i_2:i_2\ne i_1} \mathbb E_2 H_{i_1,i_2}^2\left( X^{(1)}_{i_1}, X_{i_2}^{(2)}\right)\right\|^q \right)^{1/2q},
\end{multline*}
where we used Rosenthal's inequality (Lemma \ref{lemma:rosenthal-pd} applied with $d=1$) in the last step.
Together with the fact that $\left\| \mathbb E H_{i_1,i_2}^2\left( X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\| \leq \mathbb E\left\| \mathbb E_2 H_{i_1,i_2}^2\left( X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|$ for all $i_1,i_2$ (a consequence of Jensen's inequality), and the inequality
\[
\left( \mathbb E\max_{i_1} \left\| \sum_{i_2:i_2\ne i_1} \mathbb E_2 H_{i_1,i_2}^2\left( X^{(1)}_{i_1}, X_{i_2}^{(2)}\right)\right\|^q \right)^{1/2q}
\leq
\left( \mathbb E\max_{i_1} \left\| \sum_{i_2:i_2\ne i_1} H_{i_1,i_2}^2 \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q \right)^{1/(2q)},
\]
we obtain the following result.
\begin{corollary}
\label{cor:simple}
Under the assumptions of Theorem \ref{thm:degen-moment},
\begin{align*}
\left(\mathbb E \left\| U_n\right\|^{2q} \right)^{1/2q}
\leq &
256/\sqrt{e}\Bigg[
r \left( \sum_{i_1}\mathbb E\left\| \sum_{i_2:i_2\ne i_1} \mathbb E_2 H_{i_1,i_2}^2\left( X^{(1)}_{i_1}, X_{i_2}^{(2)}\right)\right\| \right)^{1/2}
\\
&
+ 11 \, r^{3/2}\left( \mathbb E\max_{i_1} \left\| \sum_{i_2:i_2\ne i_1} H_{i_1,i_2}^2 \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q \right)^{1/(2q)}
\Bigg].
\end{align*}
\end{corollary}
\begin{remark}
Assume that $H_{i,j} = H$ is independent of $i,j$ and is such that $\|H(x_1,x_2)\|\leq M$ for all $x_1,x_2\in S$.
Then
\[
\mathbb E\max_{i_1} \left\| \sum_{i_2:i_2\ne i_1} H_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q
\leq (n-1)^q M^{2q},
\]
and it immediately follows from Lemma \ref{moment-to-tail} and Corollary \ref{cor:simple} that for all $t\geq 1$ and an absolute constant $C>0$,
\begin{align}
\label{eq:concentration}
&
\Pr\left( \left\| U_n \right\| \geq C\left( \sqrt{ \mathbb E\left\| \mathbb E_2 H^2(X_1^{(1)},X_2^{(2)})\right\| }\,\left(t+\log d\right)\cdot n + M\sqrt{n} \left(t+\log d\right)^{3/2} \right) \right)\leq e^{-t}.
\end{align}
\end{remark}
\noindent Next, we obtain further refinements of the result that follow from estimating the term
\[
r^{3/2}\left( \mathbb E\max_{i_1} \left\| \sum_{i_2:i_2\ne i_1} H_{i_1,i_2}^2 \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q \right)^{1/(2q)} .
\]
\begin{lemma}
\label{lemma:remainder}
Under the assumptions of Theorem \ref{thm:degen-moment},
\begin{multline*}
r^{3/2} \left( \mathbb E\max_{i_1} \left\| \sum_{i_2:i_2\ne i_1} H_{i_1,i_2}^2 \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q \right)^{1/(2q)} \\
\leq
4e\sqrt{2} \sqrt{1 + \frac{\log d}{q}} \Bigg[ r \left( \sum_{i_1} \mathbb E \left\| \sum_{i_2:i_2\ne i_1} \mathbb E_2 H_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|\right)^{1/2}
\\
+ r^{3/2}\left( \mathbb E\max_{i_1}\left\| \sum_{i_2:i_2\ne i_1} \mathbb E_2 H_{i_1,i_2}^2 \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q \right)^{1/2q}
\\
+ r^2 \left( \sum_{i_1}\mathbb E\max_{i_2:i_2\ne i_1} \left\| H_{i_1,i_2}^2 \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q \right)^{1/2q}
\Bigg].
\end{multline*}
\end{lemma}
\begin{proof}
See Section \ref{proof:extraterm}.
\end{proof}
One of the key features of the bounds established above is the fact that they yield estimates for $\mathbb E \left\| U_n \right\|$: for example,
Theorem \ref{thm:degen-moment} implies that
\begin{align}
\label{eq:mom1}
\nonumber
\mathbb E\left\| U_n \right\| \leq \,& C\log d\bigg( \left( \mathbb E \left\| \mathbb E_2 \widetilde G \widetilde G^\ast \right\| \right)^{1/2} +
\left\| \sum_{(i_1,i_2)\in I_n^2 } \mathbb E H^2_{i_1,i_2}\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^{1/2} \\
&
+\sqrt{\log d} \left( \mathbb E\max_{i_1} \left\| \sum_{i_2:i_2\ne i_1 } H^2_{i_1,i_2} \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\| \right)^{1/2}\Bigg)
\end{align}
for some absolute constant $C$.
On the other hand, direct application of the non-commutative Khintchine's inequality \eqref{matrix-Khintchine} followed by Rosenthal's inequality (Lemma \ref{lemma:rosenthal-pd}) only gives that
\begin{align}
\label{eq:mom3}
\nonumber
\mathbb E\left\| U_n \right\| \leq &C \log d \left( \sum_{i_1} \mathbb E\left\| \sum_{i_2:i_2\ne i_1} H_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\| \right)^{1/2}
\\
\leq & C\log d \Bigg( \left(\sum_{i_1} \mathbb E \left\| \sum_{i_2:i_2\ne i_1} \mathbb E_2 H^2_{i_1,i_2}\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\| \right)^{1/2}
\\ \nonumber
&+ \sqrt{\log d}\left( \sum_{i_1}\mathbb E \max_{i_2: i_2\ne i_1} \left\| H_{i_1,i_2}^2 \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|\right)^{1/2} \Bigg),
\end{align}
and it is easy to see that the right-hand side of \eqref{eq:mom1} is never worse than the bound \eqref{eq:mom3}.
To verify that it can be strictly better, consider the framework of Example \ref{example:02}, where it is easy to check (following the same calculations as those given in Section \ref{section:example-proof}) that
\begin{multline*}
\left( \mathbb E \left\| \mathbb E_2 \widetilde G \widetilde G^\ast \right\| \right)^{1/2} = 1, \
\left( \mathbb E\max_{i_1} \left\| \sum_{i_2:i_2\ne i_1 } H^2_{i_1,i_2} \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\| \right)^{1/2} = 1,
\\
\left\| \sum_{(i_1,i_2)\in I_n^2 } \mathbb E H^2_{i_1,i_2}\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^{1/2} = 2 ,
\end{multline*}
while $\left(\sum_{i_1} \mathbb E \left\| \sum_{i_2:i_2\ne i_1} \mathbb E_2 H^2_{i_1,i_2}\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\| \right)^{1/2} = \sqrt{n}.$
\begin{remark}[Extensions to rectangular matrices]
All results in this section can be extended to the general case of $\mathbb C^{d_1\times d_2}$ - valued kernels by considering
the Hermitian dilation $\mathcal D(U_n)$ of $U_n$ as defined in \eqref{eq:dilation}, namely
\[
\mathcal D(U_n) = \sum_{(i_1,i_2)\in I_n^2} \mathcal D\left( H_{i_1,i_2}\left(X_{i_1}^{(1)},X_{i_2}^{(2)} \right) \right) \in \mathbb H^{d_1+d_2},
\]
and observing that $\|U_n\| = \left\| \mathcal D(U_n)\right\|$.
\end{remark}
\section{Adamczak's moment inequality for U-statistics.}
\label{section:adamczak}
The paper \cite{adamczak2006moment} by R. Adamczak developed moment inequalities for general Banach space-valued completely degenerate U-statistics of arbitrary order.
More specifically, application of Theorem 1 in \cite{adamczak2006moment} to our scenario $\mathbb{B} = \left(\mathbb{H}^{d},~\|\cdot\|\right)$ and $m=2$ yields the following bounds for all $q\geq 1$ and $t\geq 2$:
\begin{align}
\label{adam-1}
&
\left( \mathbb E\| U_n \|^{2q} \right)^{1/(2q)}
\leq C \Bigg( \mathbb E \left\| U_n \right\|
+ \sqrt{q}\cdot A + q \cdot B
+ q^{3/2}\cdot \Gamma + q^2\cdot D \Bigg), \\
&
\nonumber
\Pr\left( \|U_n\| \geq C \left( \mathbb E \left\| U_n \right\| + \sqrt{t}\cdot A + t \cdot B
+ t^{3/2}\cdot \Gamma + t^2\cdot D\right) \right)\leq e^{-t},
\end{align}
where $C$ is an absolute constant, and the quantities $A,B,\Gamma,D$ will be specified below (see Section \ref{section:adamczak-calculations} for the complete statement of Adamczak's result).
Notice that inequality \eqref{adam-1} contains the ``sub-Gaussian'' term corresponding to $\sqrt{q}$ that did not appear in the previously established bounds.
We should mention another important distinction between \eqref{adam-1} and the results of Theorem \ref{thm:degen-moment} and its corollaries, such as inequality \eqref{eq:concentration}: while \eqref{adam-1} describes the deviations of $\| U_n \|$ from its expectation, \eqref{eq:concentration} states that $U_n$ is close to its expectation \emph{as a random matrix}; similar connections exist between the Matrix Bernstein inequality \cite{tropp2012user} and Talagrand's concentration inequality \cite{boucheron2013concentration}.
In particular, \eqref{adam-1} can be combined with the bound \eqref{eq:mom1} for $\mathbb E\| U_n \|$ to obtain a moment inequality that is superior (in a certain range of $q$) to the results derived from Theorem \ref{thm:degen-moment}.
\begin{theorem}
\label{thm:adamczak}
Inequalities \eqref{adam-1} hold with the following choice of $A,B,\Gamma$ and $D$:
\begin{align*}
A = &
\sqrt{\log(de)} \left( \mathbb E\left\| \mathbb E_2 \widetilde G \widetilde G^\ast \right\| +
\left\| \sum_{(i_1,i_2)\in I_n^2} \mathbb E H_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)} \right)\right\| \right)^{1/2} \\
& +\log (de) \left( \mathbb E\max_{i_1} \left\| \sum_{i_2:i_2\ne i_1} H_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\| \right)^{1/2},
\\
B = & \left( \sup_{z\in \mathbb C^d:\| z \|_2\leq 1} \sum_{(i_1,i_2)\in I_n^2} \mathbb E\left( z^\ast H_{i_1,i_2}\left(X^{(1)}_{i_1},X^{(2)}_{i_2}\right) z \right)^2\right)^{1/2}
\\
&\leq \left(\left\| \sum_{(i_1,i_2)\in I_n^2} \mathbb E H_{i_1,i_2}^2(X^{(1)}_{i_1},X^{(2)}_{i_2})\right\| \right)^{1/2},
\\
\Gamma = & \,\sqrt{1+\frac{\log d}{q}} \left( \sum_{i_1}\, \mathbb E_1 \left\| \sum_{i_2: i_2\ne i_1} \mathbb E_2 H_{i_1,i_2}^2\left(X^{(1)}_{i_1},X^{(2)}_{i_2}\right) \right\|^{q} \right)^{1/2q},
\\
D = & \left( \sum_{(i_1,i_2)\in I_n^2} \mathbb E\left\| H_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^{q} \right)^{1/(2q)}
\\
&+ \left( 1 + \frac{\log d}{q} \right)\left( \sum_{i_1}\,\mathbb E \max_{i_2: i_2\ne i_1} \left\| H^2_{i_1,i_2}\left(X^{(1)}_{i_1},X^{(2)}_{i_2}\right)\right\|^{q}\right)^{1/2q},
\end{align*}
where $\widetilde G$ was defined in \eqref{eq:g}.
\end{theorem}
\begin{proof}
See Section \ref{section:adamczak-calculations}.
\end{proof}
\noindent It is possible to further simplify the bounds for $A$ (via Lemma \ref{lemma:remainder}) and $D$ to deduce that one can choose
\begin{align}
\nonumber
A = &
\log(de) \left( \mathbb E\left\| \mathbb E_2 \widetilde G \widetilde G^\ast \right\| +
\left\| \sum_{(i_1,i_2)\in I_n^2} \mathbb E H_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)} \right)\right\| \right)^{1/2},
\\
\nonumber
B = & \left( \sup_{z\in \mathbb C^d:\| z \|_2\leq 1} \sum_{(i_1,i_2)\in I_n^2} \mathbb E\left( z^\ast H_{i_1,i_2}\left(X^{(1)}_{i_1},X^{(2)}_{i_2}\right) z \right)^2\right)^{1/2},
\\
\nonumber
\Gamma = & \,(\log(de))^{3/2} \left( \sum_{i_1}\, \mathbb E_1 \left\| \sum_{i_2: i_2\ne i_1} \mathbb E_2 H_{i_1,i_2}^2\left(X^{(1)}_{i_1},X^{(2)}_{i_2}\right) \right\|^{q} \right)^{1/2q},
\\
\label{eq:adam-simplified}
D = & \log(de)\left( \sum_{(i_1,i_2)\in I_n^2} \mathbb E\left\| H_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^{q} \right)^{1/(2q)}.
\end{align}
The upper bound for $A$ can be modified even further as in \eqref{eq:d20}, using the fact that
\[
\mathbb E\left\| \mathbb E_2 \widetilde G \widetilde G^\ast \right\| \leq
\sum_{i_1} \mathbb E \left\| \sum_{i_2:i_2\ne i_1}\mathbb E_2 H_{i_1,i_2}^2\left(X^{(1)}_{i_1}, X_{i_2}^{(2)}\right) \right\|.
\]
\section{Proofs.}
\label{sec:proof}
\subsection{Tools from probability theory and linear algebra.}
This section summarizes several facts that will be used in our proofs.
The first inequality is a bound connecting the norm of a matrix to the norms of its blocks.
\begin{lemma}
\label{lemma:block-matrix}
Let $M\in \mathbb H^{d_1+ d_2}$ be nonnegative definite and such that
$M=\begin{pmatrix}
A & X \\
X^\ast & B
\end{pmatrix}
$, where $A\in \mathbb H^{d_1}$ and $B\in \mathbb H^{d_2}$.
Then
\[
\vvvert M \vvvert\leq \vvvert A \vvvert + \vvvert B \vvvert
\]
for any unitarily invariant norm $\vvvert \cdot \vvvert$.
\end{lemma}
\begin{proof}
It follows from the result in \cite{bourin2012unitary} that under the assumptions of the lemma, there exist unitary operators $U,V$ such that
\[
\begin{pmatrix}
A & X \\
X^\ast & B
\end{pmatrix} =
U \begin{pmatrix}
A & 0 \\
0 & 0
\end{pmatrix} U^\ast +
V\begin{pmatrix}
0 & 0 \\
0 & B
\end{pmatrix} V^\ast,
\]
hence the result is a consequence of the triangle inequality.
\end{proof}
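A quick numerical check of the lemma for the spectral norm, assuming NumPy and using an arbitrary random nonnegative definite matrix, is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
d1, d2 = 4, 6
R = rng.normal(size=(d1 + d2, d1 + d2))
M = R @ R.T                                 # nonnegative definite
A_blk, B_blk = M[:d1, :d1], M[d1:, d1:]
print(np.linalg.norm(M, 2),
      np.linalg.norm(A_blk, 2) + np.linalg.norm(B_blk, 2))
# the first value never exceeds the second
\end{verbatim}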
The second result is the well-known decoupling inequality for U-statistics due to de la Pe\~na and Montgomery-Smith \cite{PM-decoupling-1995}.
\begin{lemma}
\label{decoupling-lemma}
Let $\{X_i\}_{i=1}^n$ be a sequence of independent random variables with values in a measurable space $(S,\mathcal{S})$, and let $\{X_i^{(k)}\}_{i=1}^n$, $k=1,2,\ldots,m$ be $m$ independent copies of this sequence.
Let $B$ be a separable Banach space and, for each $(i_1,\ldots,i_m)\in I^m_n$, let $H_{i_1,\ldots,i_m}: S^m\rightarrow B$ be a measurable function.
Moreover, let $\Phi:[0,\infty)\rightarrow[0,\infty)$ be a convex nondecreasing function such that
\[
\mathbb E\Phi(\|H_{i_1,\ldots,i_m}(X_{i_1},\ldots, X_{i_m})\|) < \infty
\]
for all $(i_1,\ldots,i_m)\in I^m_n$. Then
\begin{multline*}
\mathbb E\Phi\left(\left\|\sum_{(i_1,\ldots,i_m)\in I^m_n}H_{i_1,\ldots,i_m}(X_{i_1},\ldots, X_{i_m})\right\|\right)
\leq
\\
\mathbb E\Phi\left(C_m\left\|\sum_{(i_1,\ldots,i_m)\in I^m_n}H_{i_1,\ldots,i_m}\left(X_{i_1}^{(1)},\ldots, X_{i_m}^{(m)}\right)\right\|\right),
\end{multline*}
where $C_m:=2^m(m^m-1)\cdot((m-1)^{m-1}-1)\cdot\ldots\cdot 3$.
Moreover, if $H_{i_1,\ldots,i_m}$ is $P$-canonical (that is, completely degenerate), then the constant $C_m$ can be taken to be $m^m$.
Finally, there exists a constant $D_m>0$ such that for all $t>0$,
\begin{multline*}
\Pr\left(\left\|\sum_{(i_1,\ldots,i_m)\in I^m_n}H_{i_1,\ldots,i_m}(X_{i_1},\ldots, X_{i_m})\right\|\geq t\right)\\
\leq
D_m \Pr\left(D_m\left\|\sum_{(i_1,\ldots,i_m)\in I^m_n}H_{i_1,\ldots,i_m}\left(X_{i_1}^{(1)},\ldots, X_{i_m}^{(m)}\right)\right\|\geq t\right).
\end{multline*}
Furthermore, if $H_{i_1,\ldots,i_m}$ is permutation-symmetric, then both of the above inequalities can be reversed (with different constants $C_m$ and $D_m$).
\end{lemma}
The following results are variants of the non-commutative Khintchine inequalities (which first appeared in the works of Lust-Piquard and Pisier) for Rademacher sums and the Rademacher chaos, with explicit constants; see \cite{lust1991non,lust1986inegalites}, page 111 in \cite{pisier1998non}, Theorems 6.14 and 6.22 in \cite{HR-CS}, and Corollary 20 in \cite{tropp2008conditioning}.
\begin{lemma}
\label{k-inequality}
Let $B_{j}\in\mathbb{C}^{r\times t}$, $j=1,\ldots,n$, be matrices of the same dimension, and let
$\{\varepsilon_j\}_{j\in\mathbb{N}}$ be a sequence of i.i.d. Rademacher random variables.
Then for any $p\geq1$,
\begin{align*}
&
\mathbb E\left\|\sum_{j=1}^n\varepsilon_j B_{j}\right\|_{S_{2p}}^{2p}
\leq
\left( \frac{2\sqrt 2}{e} p\right)^p \cdot
\max\left\{\left\|\left(\sum_{j=1}^n B_{j}B_{j}^*\right)^{1/2}\right\|_{S_{2p}}^{2p},
\left\|\left(\sum_{j=1}^n B_{j}^*B_{j}\right)^{1/2}\right\|_{S_{2p}}^{2p}\right\}.
\end{align*}
\end{lemma}
\begin{lemma}
\label{k-inequality-2}
Let $\{A_{i_1,i_2}\}_{i_1,i_2=1}^{n}$ be a sequence of Hermitian matrices of the same dimension, and let
$\left\{\varepsilon_i^{(k)}\right\}_{i=1}^n, \ k=1,2,$ be i.i.d. Rademacher random variables.
Then for any $p\geq 1$,
\begin{multline*}
\mathbb E\left\| \sum_{i_1=1}^n\sum_{i_2=1}^n A_{i_1,i_2}\varepsilon_{i_1}^{(1)}\varepsilon_{i_2}^{(2)} \right\|_{S_{2p}}^{2p}
\leq
\\
2\left( \frac{2\sqrt 2}{e} p\right)^{2p}
\max\left\{ \left\| \left( G G^\ast \right)^{1/2}\right\|_{S_{2p}}^{2p}, \left\|\left(\sum_{i_1,i_2=1}^n A_{i_1,i_2}^2\right)^{1/2}\right\|_{S_{2p}}^{2p} \right\},
\end{multline*}
where the matrix $G\in\mathbb{H}^{nd}$ is defined as
\[
G:=
\left(
\begin{array}{cccc}
A_{11} & A_{12} & \ldots & A_{1n} \\
A_{21} & A_{22} & \ldots & A_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \ldots & A_{nn}
\end{array}
\right).
\]
\end{lemma}
The following result (Theorem A.1 in \cite{chen2012masked}) is a variant of matrix Rosenthal's inequality for nonnegative definite matrices.
\begin{lemma}
\label{lemma:rosenthal-pd}
Let $Y_1,\ldots,Y_n\in \mathbb H^d$ be a sequence of independent nonnegative definite random matrices.
Then for all $q\geq 1$ and $r=\max(q, \log(d))$,
\[
\left( \mathbb E \left\| \sum_j Y_j \right\|^q \right)^{1/2q} \leq \left\| \sum_j \mathbb E Y_j \right\|^{1/2} + 2\sqrt{2er} \left( \mathbb E\max_j \|Y_j\|^q\right)^{1/2q}.
\]
\end{lemma}
The next inequality (see equation (2.6) in \cite{gine2000exponential}) allows one to replace the sum of moments of nonnegative random variables with maxima.
\begin{lemma}
\label{lemma:sum-max}
Let $\xi_1,\ldots,\xi_n$ be independent random variables. Then for all $q>1$ and $\alpha\geq 0$,
\[
q^{\alpha q} \sum_{i=1}^n \mathbb E|\xi_i|^q \leq 2(1 + q^{\alpha})\max\left( q^{\alpha q} \mathbb E\max_i |\xi_i|^q, \left(\sum_{i=1}^n \mathbb E|\xi_i| \right)^q \right).
\]
\end{lemma}
Finally, the following inequalities allow transitioning between moment and tail bounds.
\begin{lemma}
\label{moment-to-tail}
Let $X$ be a random variable satisfying
$
\left(\mathbb E |X|^p\right)^{1/p}\leq a_4 p^2 + a_3 p^{3/2} + a_2 p + a_1\sqrt{p} + a_0
$
for all $p\geq 2$ and some positive real numbers $a_j, \ j=0,\ldots,4$.
Then for any $u\geq 2$,
\[
\Pr \left(|X| \geq e(a_4 u^2 + a_3 u^{3/2} + a_2 u + a_1\sqrt{u} + a_0)\right)\leq\exp\left(-u\right).
\]
\end{lemma}
\noindent See Propositions 7.11 and 7.15 in \cite{foucart2013mathematical} for the proofs of closely related bounds.
\begin{lemma}
\label{tail-to-moment}
Let $X$ be a random variable such that
$
\Pr\left( |X| \geq a_0 + a_1\sqrt{u} + a_2 u \right) \leq e^{-u}
$
for all $u\geq 1$ and some $0\leq a_0,a_1,a_2<\infty$. Then
\[
\left( \mathbb E |X|^p \right)^{1/p}\leq C(a_0 + a_1\sqrt{p} + a_2 p)
\]
for an absolute constant $C>0$ and all $p\geq 1$.
\end{lemma}
The proof follows from the formula $\mathbb E|X|^p = p\int_0^\infty \Pr\left( |X|\geq t\right)t^{p-1} dt$, see Lemma A.2 in \cite{dirksen2015tail} and Proposition 7.14 in \cite{foucart2013mathematical} for the derivation of similar inequalities.
Next, we will use Lemma \ref{decoupling-lemma} combined with a well-known argument to obtain the symmetrization inequality for degenerate U-statistics.
\begin{lemma}
\label{lemma:symmetry}
Let $H_{i_1,i_2}:S\times S\mapsto \mathbb H^d$ be degenerate kernels, $X_1,\ldots,X_n$ -- i.i.d. $S$-valued random variables, and assume that $\{X_i^{(k)}\}_{i=1}^n$, $k=1,2,$ are independent copies of this sequence.
Moreover, let $\left\{\varepsilon_i^{(k)}\right\}_{i=1}^n, \ k=1,2,$ be i.i.d. Rademacher random variables.
Define
\begin{align}
\label{eq:v'}
U'_n := \sum_{(i_1,i_2)\in I^2_n}\varepsilon_{i_1}^{(1)}\varepsilon_{i_2}^{(2)} H_{i_1,i_2}\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right).
\end{align}
Then for any $p\geq1$,
\begin{equation*}
\Big( \mathbb E\left\| U_n \right\|^p \Big)^{1/p}
\leq 16\Big( \mathbb E\left\| U'_n \right\|^p \Big)^{1/p}.
\end{equation*}
\end{lemma}
\begin{proof}
Note that
\begin{align*}
\mathbb E\|U_n\|^p =& \mathbb E \left\|\sum_{(i_1,i_2)\in I^2_n} H_{i_1,i_2}(X_{i_1},X_{i_2})\right\|^p \\
\leq
& \mathbb E\left\| 2^2\sum_{(i_1,i_2)\in I^2_n} H_{i_1,i_2}\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right)\right\|^p,
\end{align*}
where the inequality follows from the fact that $H_{i_1,i_2}$ is $\mathcal{P}$-canonical, hence Lemma \ref{decoupling-lemma} applies with constant equal to $C_2=4$.
Next, for $i=1,2$, let $\mathbb{E}_i[\cdot]$ stand for the expectation with respect to $\left\{ X_j^{(i)}, \varepsilon_j^{(i)} \right\}_{j\geq 1}$ only (that is, conditionally on $\left\{ X_j^{(k)}, \varepsilon_j^{(k)} \right\}_{j\geq 1}, \ k\ne i$).
Using iterative expectations and the symmetrization inequality for the Rademacher sums twice (see Lemma 6.3 in \cite{LT-book-1991}), we deduce that
\begin{align*}
\mathbb E\|U_n\|^p
\leq
&4^p\,\mathbb E\left\| \sum_{(i_1,i_2)\in I^2_n} H_{i_1,i_2}\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right)\right\|^p
\\
=&4^p\,\mathbb E\left[ \mathbb{E}_1 \left\| \sum_{i_1=1}^n\sum_{i_2\ne i_1} H_{i_1,i_2}\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right)\right\|^p\right]
\\ \leq &
4^p\,\expect{\mathbb{E}_1 \left\| 2\sum_{i_1=1}^n \varepsilon_{i_1}^{(1)} \sum_{i_2\ne i_1}
H_{i_1,i_2}\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right)\right\|^p }
\\ =&
4^p\,\expect{\mathbb{E}_2 \left\| 2\sum_{i_2=1}^n
\sum_{i_1\ne i_2} \varepsilon_{i_1}^{(1)} H_{i_1,i_2}\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right)\right\|^p }
\\ \leq &
4^p\,\expect{\left\| 4 \sum_{(i_1,i_2)\in I^2_n}\varepsilon_{i_1}^{(1)}\varepsilon_{i_2}^{(2)}
H_{i_1,i_2}\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right)\right\|^p}.
\end{align*}
Taking the $p$-th root of both sides yields the claim with constant $4\cdot 4=16$.
\end{proof}
\subsection{Proofs of results in Section \ref{sec:moment}.}
\subsubsection{Proof of Lemma \ref{bound-on-expectation}.}
Recall that
\[
X=\sum_{i_1=1}^n\sum_{i_2\neq i_1} A_{i_1,i_2}\varepsilon_{i_1}^{(1)}\varepsilon_{i_2}^{(2)},
\]
where $A_{i_1,i_2}\in \mathbb H^d$ for all $i_1,i_2$, and let
$C_p:=2\left( \frac{2\sqrt 2}{e} p\right)^{2p}$.
We will first establish the upper bound.
Application of Lemma \ref{k-inequality-2} (Khintchine's inequality) to the sequence of matrices $\left\{ A_{i_1,i_2}\right\}_{i_1,i_2=1}^n$ such that $A_{j,j}=0$ for $j=1,\ldots,n$ yields
\begin{align}
\label{main-bound}
&
\left(\mathbb E\|X\|_{S_{2p}}^{2p} \right)^{1/2p} \leq 2^{1/2p} \frac{2\sqrt 2 }{e}\cdot p \cdot
\max\left\{\left\|\left(GG^\ast\right)^{1/2}\right\|_{S_{2p}}, \left\|\left(\sum_{(i_1,i_2)\in I_n^2} A_{i_1,i_2}^2\right)^{1/2}\right\|_{S_{2p}}\right\},
\end{align}
where
\[
G:=
\left(
\begin{array}{cccc}
0 & A_{12} & \ldots & A_{1n} \\
A_{21} & 0 & \ldots & A_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \ldots & 0
\end{array}
\right) \in \mathbb{H}^{nd}.
\]
Our goal is to obtain a version of inequality \eqref{main-bound} for $p=\infty$.
To this end, we need to find an upper bound for
\[
\inf_{p\geq q} \left[ \frac{p \cdot
\max\left\{\left\|\left(GG^\ast\right)^{1/2}\right\|_{S_{2p}}, \left\|\left(\sum_{(i_1,i_2)\in I_n^2} A_{i_1,i_2}^2\right)^{1/2}\right\|_{S_{2p}}\right\}}
{\max\left\{\left\|\left(GG^\ast\right)^{1/2}\right\|, \left\|\left(\sum_{(i_1,i_2)\in I_n^2} A_{i_1,i_2}^2\right)^{1/2}\right\| \right\}} \right].
\]
Since $G$ is an $nd\times nd$ matrix, a naive upper bound is of order $\log(nd)$.
We will show that it can be improved to $\log d$.
To this end, we need to distinguish between the cases when the maximum in \eqref{main-bound} is attained by the first or second term.
Define
\[
\widehat{B}_{i_1,i_2} = [0~|~0~|\ldots|~A_{i_1,i_2}~|\ldots|~0~|~0]\in\mathbb{C}^{d\times nd},
\]
where $A_{i_1,i_2}$ sits in the $i_1$-th position of the above block matrix. Moreover, let
\begin{align}
\label{eq:b}
B_{i_2}=\sum_{i_1:i_1\neq i_2}\widehat{B}_{i_1,i_2}.
\end{align}
Then it is easy to see that
\begin{align*}
GG^\ast & = \sum_{i_2} B_{i_2}^\ast B_{i_2}, \\
\sum_{(i_1,i_2)\in I_n^2} A_{i_1,i_2}^2 & = \sum_{i_2} B_{i_2} B_{i_2}^\ast.
\end{align*}
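For completeness, here is a direct check of these identities using only the definitions above: $B_{i_2}=[A_{1,i_2}|\ldots|A_{n,i_2}]$, with the $i_2$-th block equal to zero since $A_{j,j}=0$, so the $(i_1,i_1')$-th $d\times d$ block of $B_{i_2}^\ast B_{i_2}$ is $A_{i_1,i_2}A_{i_1',i_2}$. Summing over $i_2$ recovers the blocks
\[
\left(GG^\ast\right)_{i_1,i_1'}=\sum_{k} A_{i_1,k}A_{i_1',k},
\]
while $B_{i_2}B_{i_2}^\ast = \sum_{i_1:i_1\ne i_2} A_{i_1,i_2}^2$, and summing over $i_2$ gives the second identity.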
The following bound gives a key estimate.
\begin{lemma}
\label{lemma:eigen-compare}
Let $M_1,\ldots,M_N\in \mathbb C^{d\times nd}$ be a sequence of matrices.
Let $\lambda_1,\ldots,\lambda_{nd}$ be eigenvalues of $\sum_{j} M_{j}^\ast M_{j}$ and let
$\nu_1,\ldots,\nu_d$ be eigenvalues of $\sum_{j} M_{j} M_{j}^\ast$.
Then
$\sum_{i=1}^{nd}\lambda_i = \sum_{j=1}^d\nu_j$.
Furthermore,
if $\max_i\lambda_i\leq\frac{1}{d}\sum_{j=1}^d\nu_j$, then
\[
\left\|\left(\sum_{j}M_{j} M_{j}^\ast\right)^{1/2}\right\|_{S_{2p}}^{2p} \geq \left\|\left(\sum_{j} M_{j}^\ast M_{j}\right)^{1/2}
\right\|_{S_{2p}}^{2p},
\]
for any integer $p\geq2$.
\end{lemma}
The proof of the Lemma is given in Section \ref{sec:proof-eigencompare}.
We will apply this fact with $M_j = B_j$, $j=1,\ldots,n$.
Assuming that $\max_i \lambda_i\leq\frac{1}{d}\sum_{j=1}^{nd}\lambda_j$, it is easy to see that the second term in the maximum in \eqref{main-bound} dominates, hence
\begin{multline}
\mathbb E\|X\|_{S_{2p}}^{2p} \leq C_p \left\|\left(\sum_{(i_1,i_2)\in I_n^2} A_{i_1,i_2}^2\right)^{1/2}\right\|_{S_{2p}}^{2p}
=C_p \, \mbox{tr\,} \left(\sum_{(i_1,i_2)\in I_n^2}A_{i_1,i_2}^2\right)^p
\\
\leq C_p \cdot d \cdot\left\|\left(\sum_{(i_1,i_2)\in I_n^2} A_{i_1,i_2}^2\right)^p\right\|
= C_p \cdot d\cdot\left\|\sum_{(i_1,i_2)\in I^2_n}A_{i_1,i_2}^2\right\|^p,
\label{inter-square-bound}
\end{multline}
where the last equality follows from the fact that for any positive semidefinite matrix $H$, $\|H^p\|=\|H\|^p$.
On the other hand, when $\max_i \lambda_i>\frac{1}{d}\sum_{j=1}^{nd}\lambda_j$, it is easy to see that for all $p\geq 1$,
\begin{align*}
d>\sum_{j=1}^{nd}\frac{\lambda_j}{\max_i \lambda_i}\geq
\sum_{j=1}^{nd}\left(\frac{\lambda_j}{\max_i \lambda_i}\right)^{p},
\end{align*}
which in turn implies that
\begin{align}
\label{eq:eig1}
d \left(\max_i\lambda_i\right)^{p}\geq\sum_{j=1}^{nd}\lambda_j^{p}.
\end{align}
Moreover,
\begin{align}
\label{eq:eig2}
\left\|\left(\sum_{i_2}B_{i_2}^\ast B_{i_2}\right)^{1/2}
\right\|_{S_{2p}}^{2p} =
\mbox{tr\,}\left(\left(\sum_{i_2}B_{i_2}^\ast B_{i_2}\right)^p\right) = \sum_{i=1}^{nd}\lambda_i^{p}.
\end{align}
Combining \eqref{eq:eig1}, \eqref{eq:eig2}, we deduce that
\[
\left\|\left(\sum_{i_2}B_{i_2}^\ast B_{i_2} \right)^{1/2}
\right\|_{S_{2p}}^{2p} \leq d \left\| \left(\sum_{i_2}B_{i_2}^\ast B_{i_2}\right)^p \right\| = d\left\| \left(GG^\ast\right)^p \right\|
=d\left\| GG^\ast \right\|^p,
\]
where the second from the last equality follows again from the fact that for any positive semi-definite matrix $H$, $\|H^p\|=\|H\|^p$. Thus, combining the bound above with \eqref{main-bound} and \eqref{inter-square-bound}, we obtain
\[
\mathbb E\|X\|_{S_{2p}}^{2p}
\leq d\cdot C_p\max\left\{ \left\| GG^\ast \right\|^p, \left\|\sum_{(i_1,i_2)\in I^2_n}A_{i_1,i_2}^2\right\|^p \right\}.
\]
Finally, set $p=\max(q, \log(d))$ and note that $d^{1/2p}\leq \sqrt{e}$, hence
\[
\left( \mathbb E\|X\|^{2q}\right)^{1/2q}\leq \left( \mathbb E\|X\|^{2p}\right)^{1/2p}\leq \frac{4}{\sqrt{e}}\max\{\log d, q\}\cdot\max\left\{ \left\| GG^\ast\right\|, \left\|\sum_{(i_1,i_2)\in I^2_n}A_{i_1,i_2}^2\right\| \right\}^{1/2}.
\]
This finishes the proof of the upper bound.
Now, we turn to the lower bound.
Let $\mathbb E_{1}[\cdot]$ stand for the expectation with respect to $\left\{ \varepsilon_j^{(1)}\right\}_{j\geq 1}$ only.
Then
\begin{align*}
\left( \mathbb E\|X\|^{2p}\right)^{1/(2p)}\geq& \left(\mathbb E\|X\|^{2}\right)^{1/2} =
\left(\mathbb{E}\mathbb{E}_1 \left\| \left(\sum_{(i_1,i_2)\in I_n^2} \varepsilon_{i_1}^{(1)}\varepsilon_{i_2}^{(2)}A_{i_1,i_2} \right)^2\right\| \right)^{1/2}\\
\geq&
\left( \mathbb{E}\left\|\mathbb{E}_1\left(\sum_{(i_1,i_2)\in I_n^2} \varepsilon_{i_1}^{(1)}\varepsilon_{i_2}^{(2)}A_{i_1,i_2}\right)^2 \right\| \right)^{1/2}\\
=&
\left( \mathbb{E}\left\|\sum_{i_1} \left(\sum_{i_2:i_2\neq i_1}\varepsilon_{i_2}^{(2)}A_{i_1,i_2}\right)^2\right\| \right)^{1/2}.
\end{align*}
It is easy to check that
\[
\sum_{i_1=1}^n\left(\sum_{i_2:i_2\neq i_1}\varepsilon_{i_2}^{(2)}A_{i_1,i_2}\right)^2 =
\left(\sum_{i_2}\varepsilon_{i_2}^{(2)}B_{i_2}\right) \left(\sum_{i_2}\varepsilon_{i_2}^{(2)} B_{i_2}\right)^\ast,
\]
where $B_i$ were defined in \eqref{eq:b}. Hence
\[
\left(\mathbb E\|X\|^{2p}\right)^{1/(2p)}\geq \left( \mathbb E\left\|\left(\sum_{i_2}\varepsilon_{i_2}^{(2)}B_{i_2}\right)
\left(\sum_{i_2}\varepsilon_{i_2}^{(2)}B_{i_2}\right)^\ast \right\| \right)^{1/2}.
\]
Next, for any matrix $A\in\mathbb{C}^{d_1\times d_2}$,
\begin{align*}
\left\|
\left(
\begin{array}{ccc}
0 & A^\ast \\
A & 0 \\
\end{array}
\right)^2
\right\|
=
\left\|
\left(
\begin{array}{ccc}
A^\ast A & 0 \\
0 & A A^\ast \\
\end{array}
\right)
\right\|
= \max\{\|A^\ast A\|, \|A A^\ast\|\}=\|AA^\ast\|,
\end{align*}
where the last equality follows from the fact that $\|AA^\ast\|=\|A^\ast A\|$, as $AA^\ast$ and $A^\ast A$ have the same nonzero eigenvalues.
Taking $A=B:=\sum_{i_2}\varepsilon_{i_2}^{(2)}B_{i_2}$ yields that
\begin{align*}
\left(\mathbb E\|X\|^{2p}\right)^{1/(2p)}
\geq&
\left( \mathbb E \left\| BB^\ast \right\| \right)^{1/2}
=\left( \mathbb E
\left\| \left( \sum_{i_2}\varepsilon_{i_2}^{(2)}\left(
\begin{array}{ccc}
0 & B_{i_2}^\ast \\
B_{i_2} & 0 \\
\end{array}
\right) \right)^2\right\| \right)^{1/2}
\\
\geq &
\left\| \mathbb E \left( \sum_{i_2}\varepsilon_{i_2}^{(2)}\left(
\begin{array}{ccc}
0 & B_{i_2}^\ast \\
B_{i_2} & 0 \\
\end{array}
\right) \right)^2 \right\|
^{1/2}
=\left\|\sum_{i_2}\left(
\begin{array}{ccc}
B_{i_2}^\ast B_{i_2} & 0 \\
0 & B_{i_2} B_{i_2}^\ast \\
\end{array}
\right) \right\|
^{1/2}\\
=&\max\left\{\left\| \sum_{i_2}B_{i_2}^\ast B_{i_2} \right\|, \left\| \sum_{i_2}B_{i_2} B_{i_2}^\ast \right\|\right\}^{1/2}
= \max\left\{ \left\| GG^\ast\right\|, \left\|\sum_{(i_1,i_2)\in I^2_n}A_{i_1,i_2}^2\right\| \right\}^{1/2}.
\end{align*}
\subsubsection{Proof of Lemma \ref{lemma:eigen-compare}.}
\label{sec:proof-eigencompare}
The equality of traces is obvious since
\[
\mbox{tr\,}\left( \sum_{j=1}^N M_j M_j^\ast\right) = \sum_{j=1}^N \mbox{tr\,} \left(M_j M_j ^\ast\right) =
\sum_{j=1}^N \mbox{tr\,} \left(M_j^\ast M_j \right) = \mbox{tr\,}\left( \sum_{j=1}^N M_j^\ast M_j\right).
\]
Set
\[
S:=\sum_{i=1}^{nd}\lambda_i = \sum_{i=1}^d\nu_i.
\]
Note that
\begin{align*}
&\left\|\left(\sum_{j=1}^N M_{j}^\ast M_{j}\right)^{1/2}\right\|_{S_{2p}}^{2p} = \mbox{tr\,}\left(\left(\sum_{j=1}^N M_{j}^\ast M_{j}\right)^p\right) =
\sum_{i=1}^{nd}\lambda_i^p, \\
&\left\|\left(\sum_{j=1}^N M_{j} M_{j}^\ast\right)^{1/2}\right\|_{S_{2p}}^{2p}=\mbox{tr\,}\left(\left(\sum_{j=1}^N M_{j}M_{j}^\ast\right)^p\right) =
\sum_{i=1}^{d}\nu_i^p.
\end{align*}
Moreover, $\lambda_i\geq0$, $\nu_j\geq0$ for all $i,j$, and $\max_i\lambda_i\leq \frac{1}{d}\sum_{j=1}^d \nu_j = \frac{S}{d}$ by assumption.
It is clear that
\begin{align*}
&\left\|\left(\sum_{j=1}^N M_{j}^\ast M_{j}\right)^{1/2}\right\|_{S_{2p}}^{2p} \leq \max_{0\leq\lambda_i\leq\frac{S}{2d},~\sum_{i=1}^{nd}\lambda_i= S}\sum_{i=1}^{nd}\lambda_i^p, \\
&\left\|\left(\sum_{j=1}^N M_{j} M_{j}^\ast\right)^{1/2}\right\|_{S_{2p}}^{2p}\geq\min_{\nu_i\geq0~,\sum_{i=1}^{d}\nu_i= S}\sum_{i=1}^{d}\nu_i^p.
\end{align*}
Hence, it is enough to show that
\begin{align}
\label{optimize-problem}
\max_{0\leq\lambda_i\leq\frac{S}{d},~\sum_{i=1}^{nd}\lambda_i=S}\sum_{i=1}^{nd}\lambda_i^p
\leq \min_{\nu_i\geq0,~\sum_{i=1}^{d}\nu_i= S}\sum_{i=1}^{d}\nu_i^p.
\end{align}
The right hand side of the inequality \eqref{optimize-problem} can be estimated via Jensen's inequality as
\begin{multline}
\label{rhs}
\min_{\nu_i\geq0,~\sum_{i=1}^{d}\nu_i= S}\sum_{i=1}^{d}\nu_i^p
=d\cdot\min_{\nu_i\geq0,~\sum_{i=1}^{d}\nu_i= S}\frac1d\sum_{i=1}^{d}\nu_i^p \\
\geq d\cdot\min_{\nu_i\geq0,~\sum_{i=1}^{d}\nu_i= S}\left(\frac{1}{d}\sum_{i=1}^{d}\nu_i\right)^p
= d\cdot\left(\frac{S}{d}\right)^{p}.
\end{multline}
It remains to show that $\sum_{i=1}^{nd} \lambda_i^p \leq d\cdot\left(\frac{S}{d}\right)^{p}$.
For a sequence $\left\{ a_j \right\}_{j=1}^N \subset \mathbb R$, let $a_{(j)}$ be the $j$-th smallest element of the sequence, where ties are broken arbitrarily.
A sequence $\left\{ a_j \right\}_{j=1}^N$ majorizes a sequence $\left\{ b_j \right\}_{j=1}^N$ whenever
$\sum_{j=0}^k a_{(N-j)} \geq \sum_{j=0}^k b_{(N-j)}$ for all $0\leq k\leq N-2$, and $\sum_j a_j = \sum_j b_j$.
A function $g:\mathbb R^N\mapsto \mathbb R$ is called Schur-convex if $g( a_1,\ldots, a_N)\geq g(b_1,\ldots,b_N)$ whenever
$\left\{ a_j \right\}_{j=1}^N$ majorizes $\left\{ b_j \right\}_{j=1}^N$. It is well known that if $f:\mathbb R\mapsto \mathbb R$ is convex, then $g(a_1,\ldots,a_N)=\sum_{j=1}^N f(a_j)$ is Schur-convex. In particular,
$g(a_1,\ldots,a_N) = \sum_{j=1}^N a_j^p$, where $a_1,\ldots,a_N\geq 0$, is Schur-convex for $p\geq 1$.
Consider the sequence $a_1=\ldots=a_d = \frac{S}{d}, \ a_{d+1}=\ldots = a_{nd}=0$ and $b_1 = \lambda_1,\ldots,b_{nd} = \lambda_{nd}$.
Since $\max_i\lambda_i\leq\frac{S}{d}$ by assumption, the sequence $\{a_j\}$ majorizes $\{b_j\}$, hence Schur convexity yields that $\sum_{i=1}^{nd} \lambda_i^p \leq \sum_{i=1}^d \left( \frac{S}{d}\right)^p = d\cdot \left( \frac{S}{d}\right)^p$, implying the result.
\footnote{We are thankful to the anonymous Referee for suggesting an argument based on Schur convexity, instead of the original proof that was longer and not as elegant.}
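We record, purely as a quick sanity check of \eqref{optimize-problem}, that under the assumption $\max_i\lambda_i\leq S/d$ the same bound also follows from the elementary one-line estimate
\[
\sum_{i=1}^{nd}\lambda_i^p \leq \left(\max_i\lambda_i\right)^{p-1}\sum_{i=1}^{nd}\lambda_i \leq \left(\frac{S}{d}\right)^{p-1} S = d\left(\frac{S}{d}\right)^{p}.
\]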
\begin{comment}
Given positive integers $K\geq d$ and $K'\geq d'$, we will write $(K,d)>(K',d')$ if $K\geq K'$, $d\geq d'$ and at least one of the inequalities is strict.
We will now prove by induction that for all $(K,d), \ K\geq d$ and any $S>0$,
\[
\max_{\lambda_1,\ldots,\lambda_K\in \Lambda(K,d,S)}\sum_{i=1}^{K}\lambda_i^p = d\left(\frac{S}{d}\right)^p,
\]
where
\[
\Lambda(K,d,S)=\left\{\lambda_1,\ldots,\lambda_K: \ 0\leq\lambda_j\leq \frac{S}{d} \ \forall j,~\sum_{i=1}^{K}\lambda_i = S \right\}.
\]
The inequality is obvious for all pairs $(K,d)$ with $K=d$ or with $d=1$.
Fix $(K,d)$ with $K>d>1$, and assume that the claim holds for all $(K',d')$ such that $K'\geq d'$ and $(K,d)>(K',d')$.
It is easy to check that the only critical point of the Lagrangian
\[
F(\lambda_1,\ldots,\lambda_K,\tau)=\sum_{i=1}^K \lambda_i^p +\tau\left(\sum_{i=1}^K \lambda_i - S\right)
\]
in the relative interior of the set $\Lambda(K,d,S)$ is $\hat\lambda_1=\ldots=\hat\lambda_K=S/K$ where the function achieves its minimum, hence the maximum is attained on the boundary of the set $\Lambda(K,d,S)$.
There are 2 possibilities:
\begin{enumerate}
\item Without loss of generality, $\lambda_1=0$. Then the situation is reduced to $(K',d')=(K-1,d)$, whence we conclude that
\[
\max\limits_{\lambda_1,\ldots,\lambda_{K-1}\in \Lambda(K-1,d,S)}\sum_{i=1}^{K-1}\lambda_i^p = d\left(\frac{S}{d}\right)^p.
\]
\item Without loss of generality, $\lambda_1=S/d$. Then the situation is reduced to $(K',d')=(K-1,d-1)$ for $S'=S(d-1)/d$, hence
\[
(S/d)^p + \max\limits_{\lambda_1,\ldots,\lambda_{K-1}\in \Lambda(K-1,d-1,S')}\sum_{i=1}^{K-1}\lambda_i^p =
(S/d)^p + (d-1)\left(\frac{S'}{d-1}\right)^p = d(S/d)^p.
\]
\end{enumerate}
\end{comment}
\subsubsection{Proof of Theorem \ref{thm:degen-moment}.}
\label{proof:degen-moment}
The first inequality in the statement of the theorem follows immediately from Lemma \ref{decoupling-lemma}. Next, it is easy to deduce from the proof of Lemma \ref{lemma:symmetry} that
\begin{equation}
\label{initial-1}
\left(\mathbb E \left\| \sum_{(i_1,i_2)\in I^2_n} H_{i_1,i_2}\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^{2q}\right)^{1/2q}
\leq 4\Big( \mathbb E\left\| U'_n \right\|^{2q} \Big)^{1/2q},
\end{equation}
where $U'_n$ was defined in \eqref{eq:v'}.
Applying Lemma \ref{bound-on-expectation} conditionally on $\{X_i^{(j)}\}_{i=1}^{n}, \ j=1,2$, we get
\begin{multline}
\label{inter-chaos-1}
\Big( \mathbb E\left\| U'_n \right\|^{2q} \Big)^{1/2q}=
\left( \mathbb E\left\| \sum_{(i_1,i_2)\in I_n^2}\varepsilon_{i_1}^{(1)}\varepsilon_{i_2}^{(2)} H_{i_1,i_2}(X_{i_1}^{(1)},X_{i_2}^{(2)}) \right\|^{2q} \right)^{1/(2q)}
\\
\leq 4e^{-1/2}\max(q,\log d) \,
\left( \mathbb E\max\left\{ \| \widetilde G \widetilde G^\ast\|, \left\|\sum_{(i_1,i_2)\in I_n^2} H_{i_1,i_2}^2\left(X^{(1)}_{i_1},X^{(2)}_{i_2}\right) \right\| \right\}^q \right)^{1/2q},
\end{multline}
where $\widetilde G$ was defined in \eqref{eq:g}.
Let $\widetilde G_i$ be the $i$-th column of $\widetilde G$, then
\[
\widetilde G \widetilde G^\ast = \sum_{i=1}^n \widetilde G_i \widetilde G_i^\ast,
~~\sum_{(i_1,i_2)\in I_n^2} H_{i_1,i_2}^2(X_{i_1}^{(1)},X_{i_2}^{(2)}) = \sum_{i=1}^n \widetilde G_i^\ast \widetilde G_i.
\]
Let $Q_i\in\mathbb{H}^{(n+1)d}$ be defined as
\[
Q_i =
\left(
\begin{array}{ccc}
0 & \widetilde G_{i}^\ast \\
\widetilde G_{i} & 0 \\
\end{array}
\right),
\]
so that
\[
Q_i^2 =
\left(
\begin{array}{ccc}
\widetilde G_{i}^\ast \widetilde G_i & 0 \\
0 & \widetilde G_{i} \widetilde G_i^\ast \\
\end{array}
\right).
\]
Inequality \eqref{inter-chaos-1} implies that
\begin{equation}
\label{inter-chaos-2}
\left( \mathbb E\left\| \sum_{(i_1,i_2)\in I_n^2}\varepsilon_{i_1}^{(1)}\varepsilon_{i_2}^{(2)} H_{i_1,i_2}\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^{2q} \right)^{1/(2q)}
\leq 4e^{-1/2}\max(q,\log d) \left( \mathbb E\left\| \sum_{i=1}^n Q_i^2 \right\|^q\right)^{1/(2q)}.
\end{equation}
Let $\mathbb{E}_2[\cdot]$ stand for the expectation with respect to $\left\{X^{(2)}_i\right\}_{i=1}^n$ only (that is, conditionally on
$\left\{X^{(1)}_i\right\}_{i=1}^n$).
Then Minkowski's inequality followed by the symmetrization inequality implies that
\begin{align}
\left( \mathbb E\left\| \sum_{i=1}^n Q_i^2 \right\|^q\right)^{1/(2q)} \leq
&
\left( \mathbb E\left\| \sum_{i=1}^n \left( Q_i^2 - \mathbb E_2 Q_i^2\right) \right\|^q \right)^{1/(2q)} +
\left( \mathbb E \left\| \sum_{i=1}^n \mathbb E_2 Q_i^2 \right\|^{q} \right)^{1/2q}
\nonumber\\
=& \left( \mathbb{E}\,\mathbb{E}_2 \left\| \sum_{i=1}^n Q_i^2 - \mathbb E_2{Q_i^2} \right\|^q \right)^{1/(2q)} +
\left( \mathbb E \left\| \sum_{i=1}^n \mathbb E_2 Q_i^2 \right\|^{q} \right)^{1/2q}
\nonumber\\
\leq& \sqrt 2 \left( \mathbb E\left\| \sum_{i=1}^n\varepsilon_i Q_i^2 \right\|^q \right)^{1/(2q)} +
\left( \mathbb E \left\| \sum_{i=1}^n \mathbb E_2 Q_i^2 \right\|^{q} \right)^{1/2q}.
\label{inter-chaos-3}
\end{align}
\noindent
Next, we obtain an upper bound for $ \left( \mathbb E\left\| \sum_{i=1}^n\varepsilon_i Q_i^2 \right\|^q \right)^{1/(2q)}$.
To this end, we apply Khintchine's inequality (Lemma \ref{k-inequality}).
Denote $C_r:=\left( \frac{2\sqrt 2}{e} r\right)^{2r}$, and let
$\mathbb{E}_{\varepsilon}[\cdot]$ be the expectation with respect to $\{\varepsilon_i\}_{i=1}^n$ only.
Then for $r>q$ we deduce that
\begin{align*}
\mathbb{E}_{\varepsilon}\left\| \sum_{i=1}^n\varepsilon_i Q_i^2 \right\|_{S_{2r}}^{2r} \leq&
C_r^{1/2} \left\| \left(\sum_{i=1}^n Q_i^4\right)^{1/2} \right\|_{S_{2r}}^{2r}
\\
=& C_r^{1/2} \mbox{tr\,} \left(\sum_{i=1}^n Q_i^4\right)^r \leq C_r^{1/2} \mbox{tr\,} \left(\sum_{i=1}^n Q_i^2\cdot \left\| Q_i^2 \right\|\right)^r \\
\leq& C_r^{1/2} \max_i \left\| Q_i^2 \right\|^r \cdot \left\| \left(\sum_{i=1}^n Q_i^2\right)^{1/2} \right\|_{S_{2r}}^{2r},
\end{align*}
where we used the fact that $Q_i^4\preceq \|Q_i^2\| Q_i^2$ for all $i$, and the fact that $A\preceq B$ implies that $\mbox{tr\,} \,g(A)\leq \mbox{tr\,} \,g(B)$ for any non-decreasing $g:\mathbb R\mapsto \mathbb R$.
Next, we will focus on the term
\[
\left\| \left( \sum_{i=1}^n Q_i^2 \right)^{1/2} \right\|_{S_{2r}}^{2r}
= \mbox{tr\,}\left( \left(\sum_{i=1}^n \widetilde G_i \widetilde G_i^\ast\right)^r \right) + \mbox{tr\,}\left( \left(\sum_{i=1}^n \widetilde G_i^\ast \widetilde G_i\right)^r \right).
\]
Applying Lemma \ref{lemma:eigen-compare} with $M_j = \widetilde G_j^\ast, \ j=1,\ldots,n$, we deduce that
\begin{itemize}
\item if $\left\| \sum_{i=1}^n \widetilde G_i \widetilde G_i^\ast \right\| \leq \frac{1}{d}\mbox{tr\,}\left( \sum_{i=1}^n \widetilde G_i^\ast \widetilde G_i \right)$, then
$\left\| \left(\sum_{i=1}^n \widetilde G_i \widetilde G_i^\ast\right)^{1/2} \right\|_{S_{2r}}^{2r}\leq \left\| \left(\sum_{i=1}^n \widetilde G_i^\ast \widetilde G_i\right)^{1/2} \right\|_{S_{2r}}^{2r}$, which implies that
$\mbox{tr\,}\left( \left(\sum_{i=1}^n \widetilde G_i \widetilde G_i^\ast \right)^r \right)\leq \mbox{tr\,}\left( \left(\sum_{i=1}^n \widetilde G_i^\ast \widetilde G_i\right)^r \right)$, and
\begin{align*}
&
\left\| \left(\sum_{i=1}^n Q_i^2\right)^{1/2} \right\|_{S_{2r}}^{2r}
\leq 2d\cdot\left\| \sum_{i=1}^n \widetilde G_i^\ast \widetilde G_i \right\|^r.
\end{align*}
\item if $\left\| \sum_{i=1}^n \widetilde G_i \widetilde G_i^\ast \right\| > \frac{1}{d}\mbox{tr\,}\left( \sum_{i=1}^n \widetilde G_i^\ast \widetilde G_i \right)$, let $\lambda_j$ be the $j$-th eigenvalue of $\sum_{i=1}^n \widetilde G_i \widetilde G_i^\ast$, and note that
\begin{multline*}
d > \frac{\mbox{tr\,}\left( \sum_{i=1}^n \widetilde G_i^\ast \widetilde G_i \right)}{\left\| \sum_{i=1}^n \widetilde G_i \widetilde G_i^\ast \right\|} =
\frac{\mbox{tr\,}\left( \sum_{i=1}^n \widetilde G_i \widetilde G_i^\ast \right)}{\left\| \sum_{i=1}^n \widetilde G_i \widetilde G_i^\ast \right\|}
= \sum_{i=1}^{nd} \frac{\lambda_i}{\max_j \lambda_j}\geq \sum_{i=1}^{nd} \left(\frac{\lambda_i}{\max_j\lambda_j}\right)^r,\end{multline*}
where $r\geq 1$. In turn, it implies that
\begin{equation*}
\mbox{tr\,} \left(\left( \sum_{i=1}^n \widetilde G_i \widetilde G_i^\ast \right)^r\right) <
d \left\| \sum_{i=1}^n \widetilde G_i \widetilde G_i^\ast \right\|^r.
\end{equation*}
Thus
\begin{align*}
\left\| \left(\sum_{i=1}^n Q_i^2\right)^{1/2} \right\|_{S_{2r}}^{2r} =& \mbox{tr\,}\left( \left(\sum_{i=1}^n \widetilde G_i \widetilde G_i^\ast\right)^r \right) + \mbox{tr\,}\left( \left(\sum_{i=1}^n \widetilde G_i^\ast \widetilde G_i\right)^r \right)\\
\leq& d \left\| \sum_{i=1}^n \widetilde G_i \widetilde G_i^\ast \right\|^r + d \left\| \sum_{i=1}^n \widetilde G_i^\ast \widetilde G_i \right\|^r.
\end{align*}
\end{itemize}
Putting the bounds together, we obtain that
\begin{align}
\label{eq:f10}
\mathbb{E}_{\varepsilon} \left\| \sum_{i=1}^n\varepsilon_i Q_i^2 \right\|_{S_{2r}}^{2r} \leq
&
2d C_r^{1/2} \max_i \left\| Q_i^2 \right\|^r \max \left\{ \left\| \sum_{i=1}^n \widetilde G_i \widetilde G_i^\ast \right\|^r, \left\| \sum_{i=1}^n \widetilde G_i^\ast \widetilde G_i \right\|^r \right\} \\
\nonumber
\leq &
2d C_r^{1/2} \max_i \left\| Q_i^2 \right\|^r \cdot\left\| \sum_{i=1}^n Q_i^2 \right\|^r.
\end{align}
Next, observe that for $r$ such that $2r\geq q$,
$\mathbb E_\varepsilon \left\| \sum_{j=1}^n \varepsilon_j Q_j^2 \right\|^{q}\leq \left( \mathbb E_\varepsilon \left\| \sum_{j=1}^n \varepsilon_j Q_j^2 \right\|^{2r}\right)^{q/2r}$ by H\"{o}lder's inequality, hence
\begin{align*}
\left( \mathbb E \left\| \sum_{j=1}^n \varepsilon_j Q_j^2 \right\|^q \right)^{1/2q} &
= \left( \mathbb E \mathbb E_\varepsilon \left\| \sum_{j=1}^n \varepsilon_j Q_j^2 \right\|^{q} \right)^{1/2q}
\\
& \leq \left( \mathbb E \left( \mathbb E_\varepsilon \left\| \sum_{j=1}^n \varepsilon_j Q_j^2 \right\|^{2r} \right)^{q/2r} \right)^{1/2q}
\leq \left( \mathbb E \left( \mathbb E_\varepsilon \left\| \sum_{j=1}^n \varepsilon_j Q_j^2 \right\|_{S_{2r}}^{2r} \right)^{q/2r} \right)^{1/2q}
\\
&\leq \left( 2d C_r^{1/2}\right)^{1/4r} \left( \mathbb E \left[ \max_i \left\| Q_i^2 \right\|^{q/2} \cdot\left\| \sum_{i=1}^nQ_i^2 \right\|^{q/2}\right] \right)^{1/2q}.
\end{align*}
Set $r = q\vee\log d$ and apply the Cauchy--Schwarz inequality to deduce that
\begin{align}
\label{eq:f20}
&
\left( \mathbb E \left\| \sum_{j=1}^n \varepsilon_j Q_j^2 \right\|^q \right)^{1/2q} \leq
(8r)^{1/4}
\left(\mathbb E \max_i \left\| Q_i^2 \right\|^{q}\right)^{1/(4q)} \left( \mathbb E\left\| \sum_{i=1}^n Q_i^2 \right\|^{q}\right)^{1/(4q)}.
\end{align}
Substituting bound \eqref{eq:f20} into \eqref{inter-chaos-3} and letting
\[
R_q := \left( \mathbb E\left\| \sum_{i=1}^n Q_i^2 \right\|^{q}\right)^{1/(2q)},
\]
we obtain
\[
R_q \leq (8r)^{1/4}\sqrt{2R_q} \, \left( \mathbb E\max_i \left\| Q_i^2 \right\|^{q}\right)^{1/(4q)} +
\left( \mathbb E \left\| \sum_{i=1}^n \mathbb E_2 Q_i^2 \right\|^{q} \right)^{1/2q}.
\]
If $x,a,b>0$ are such that $x\leq a\sqrt{x}+b$, then $x\leq 4a^2 \vee 2b$ (indeed, either $x\leq 2b$, or $b<x/2$, in which case $x\leq a\sqrt x + x/2$ and $\sqrt x\leq 2a$); hence
\[
R_q\leq 16\sqrt{2r} \left( \mathbb E\max_i \left\| Q_i^2 \right\|^{q}\right)^{1/(2q)} +
2\left( \mathbb E \left\| \sum_{i=1}^n \mathbb E_2 Q_i^2 \right\|^{q} \right)^{1/2q}.
\]
Finally, it follows from \eqref{inter-chaos-2} that
\begin{align}
\label{eq:d10}
\nonumber
\Bigg( \mathbb E\Bigg\| \sum_{(i_1,i_2)\in I_n^2}\varepsilon_{i_1}^{(1)}\varepsilon_{i_2}^{(2)} & H_{i_1,i_2}(X_{i_1}^{(1)},X_{i_2}^{(2)}) \Bigg\|^{2q}\Bigg)^{1/(2q)} \\
\leq \nonumber
64\sqrt{\frac{2}{e}} \, r^{3/2} & \left( \mathbb E\max_i \left\| Q_i^2 \right\|^{q}\right)^{1/(2q)} +
\frac{8}{\sqrt{e}} r \left( \mathbb E \left\| \sum_{i=1}^n \mathbb E_2 Q_i^2 \right\|^{q} \right)^{1/2q}
\\ \nonumber
\leq
64\sqrt{\frac{2}{e}} \, r^{3/2} & \left(\mathbb E \max_i \left\| \widetilde G_i^\ast \widetilde G_i \right\|^q \right)^{1/(2q)}
+\frac{8}{\sqrt{e}} r \left( \mathbb E\left\| \sum_{i=1}^n \mathbb E_2 \widetilde G_i \widetilde G_i^\ast \right\|^q + \mathbb E \left\| \sum_{i=1}^n \mathbb E_2 \widetilde G_i^\ast \widetilde G_i \right\|^{q} \right)^{1/2q}
\\ \nonumber
=
64\sqrt{\frac{2}{e}} r^{3/2} & \left( \mathbb E\max_{i_1} \left\| \sum_{i_2:i_2\ne i_1} H_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q \right)^{1/(2q)}
\\
&
+ \frac{8}{\sqrt{e}} r \left( \mathbb E\left\| \sum_{i=1}^n \mathbb E_2 \widetilde G_i \widetilde G_i^\ast \right\|^{q} +
\mathbb E\left\| \sum_{i_1=1}^n \left( \sum_{i_2:i_2\ne i_1} \mathbb E_2H_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right)\right\|^{q} \right)^{1/2q},
\end{align}
where the last equality follows from the definition of $\widetilde G_i$.
To bring the bound to its final form, we will apply Rosenthal's inequality (Lemma \ref{lemma:rosenthal-pd}) to the last term in \eqref{eq:d10} to get that
\begin{multline*}
\left(\mathbb E\left\| \sum_{i_1=1}^n \left( \sum_{i_2:i_2\ne i_1} \mathbb E_2H_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right) \right\|^{q} \right)^{1/2q}
\leq \left\| \sum_{(i_1, i_2)\in I_n^2} \mathbb EH_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^{1/2}
\\
+2\sqrt{2er} \left( \mathbb E \max_{i_1} \left\| \sum_{i_2:i_2\ne i_1} \mathbb E_2 H_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q \right)^{1/2q}.
\end{multline*}
Moreover, Jensen's inequality implies that
\begin{align*}
&
\mathbb E \max_{i_1} \left\| \sum_{i_2:i_2\ne i_1} \mathbb E_2 H_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q
\leq
\mathbb E \max_{i_1} \left\| \sum_{i_2:i_2\ne i_1} H_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)} \right)\right\|^q,
\end{align*}
hence this term can be combined with one of the terms in \eqref{eq:d10}.
\subsubsection{Proof of Lemma \ref{lemma:degen-lower-bound}.}
\label{proof:lower}
Let $\mathbb E_i[\cdot]$ stand for the expectation with respect to the variables with the upper index $i$ only.
Since $H_{i_1,i_2}\left(\cdot, \cdot\right)$ are permutation-symmetric, we can apply the second part of Lemma \ref{decoupling-lemma} and (twice) the desymmetrization inequality (see Theorem 3.1.21 in \cite{gine2016mathematical}) to get that for some absolute constant $C_0>0$
\begin{align*}
\left( \mathbb E\left\| U_n \right\|^{2q} \right)^{1/(2q)}\geq
& \frac{1}{C_0} \left( \mathbb E\left\| \sum_{(i_1,i_2)\in I_n^2} H_{i_1,i_2}\left(X_{i_1}^{(1)}, X_{i_2}^{(2)}\right) \right\|^{2q} \right)^{1/(2q)}
\\
=& \frac{1}{C_0} \left( \mathbb E_2\,\mathbb{E}_1 \left\| \sum_{i_1}\sum_{i_2:i_2\ne i_1} H_{i_1,i_2}\left(X_{i_1}^{(1)}, X_{i_2}^{(2)}\right) \right\|^{2q} \right)^{1/(2q)}
\\
\geq&
\frac{1}{2C_0} \left( \mathbb E\left\| \sum_{i_1}\varepsilon_{i_1}^{(1)}\sum_{i_2:i_2\ne i_1} H_{i_1,i_2}\left(X_{i_1}^{(1)}, X_{i_2}^{(2)}\right) \right\|^{2q} \right)^{1/(2q)}
\\
=& \frac{1}{2C_0}\left( \mathbb E\mathbb{E}_2 \left\| \sum_{i_2}\sum_{i_1:i_1\ne i_2}\varepsilon_{i_1}^{(1)} H_{i_1,i_2}\left(X_{i_1}^{(1)}, X_{i_2}^{(2)}\right) \right\|^{2q} \right)^{1/(2q)}
\\
\geq&
\frac{1}{4C_0} \left( \mathbb E\left\| \sum_{(i_1,i_2)\in I_n^2}\varepsilon_{i_1}^{(1)}\varepsilon_{i_2}^{(2)} H_{i_1,i_2}\left(X_{i_1}^{(1)}, X_{i_2}^{(2)}\right) \right\|^{2q} \right)^{1/(2q)}.
\end{align*}
Applying the lower bound of Lemma \ref{bound-on-expectation} conditionally on $\left\{ X_{i}^{(1)} \right\}_{i=1}^n$ and $\left\{ X_{i}^{(2)} \right\}_{i=1}^n$, we obtain, for an absolute constant $c>0$,
\begin{align}
\left( \mathbb E\left\| U_n \right\|^{2q}\right)^{1/(2q)}
\geq&
c \left(\mathbb E\max\left\{ \left\| \sum_i \widetilde G_i^\ast \widetilde G_i \right\|, \left\| \sum_i \widetilde G_i \widetilde G_i^\ast \right\| \right\}^{q} \right)^{1/2q}
\label{lower-bound-1}\\
\geq&\frac{1}{4\sqrt 2C_0} \left( \left( \mathbb E\left\| \sum_{i} \widetilde G_i \widetilde G_i^\ast \right\|^{q} \right)^{1/2q} +
\left( \mathbb E\left\| \sum_{i} \widetilde G_i^\ast \widetilde G_i\right\|^{q} \right)^{1/2q} \right)
\nonumber \\
\geq & \frac{1}{4\sqrt{2} C_0} \left( \left( \mathbb E\left\| \sum_{i} \mathbb E_2 \widetilde G_i \widetilde G_i^\ast \right\|^{q}\right)^{1/2q} +
\left( \mathbb E\left\| \sum_{i}\mathbb E_2 \widetilde G_i^\ast \widetilde G_i\right\|^{q} \right)^{1/2q} \right),
\nonumber
\end{align}
where $\widetilde G_i$ is the $i$-th column of the matrix $\widetilde G$ defined in \eqref{eq:g}; we also used the identities
$\widetilde G \widetilde G^\ast = \sum_{i=1}^n \widetilde G_i \widetilde G_i^\ast,
~~\sum_{(i_1,i_2)\in I_n^2} H_{i_1,i_2}^2(X_{i_1}^{(1)},X_{i_2}^{(2)}) = \sum_{i=1}^n \widetilde G_i^\ast \widetilde G_i$.
The inequality above takes care of the second and third terms in the lower bound of the lemma.
To show that the first term is necessary, let
\[
Q_i =
\left(
\begin{array}{ccc}
0 & \widetilde G_i^\ast \\
\widetilde G_i & 0 \\
\end{array}
\right).
\]
It follows from the first line of \eqref{lower-bound-1} that
\begin{align*}
\left( \mathbb E\left\| U_n \right\|^{2q} \right)^{1/(2q)}
\geq&
\frac{1}{4 C_0} \left( \mathbb E\left\| \sum_{i=1}^n Q_i^2 \right\|^{q} \right)^{1/(2q)}.
\end{align*}
Let $i_\ast$ be the smallest index $i\leq n$ at which the maximum $\max_i \left\| Q_i^2 \right\|$ is attained.
Then
$
\sum_{i=1}^n Q_i^2 \succeq Q_{i_\ast}^2,
$
hence $\left\|Q_{i_\ast}^2\right\|\leq \left\|\sum_{i=1}^n Q_i^2\right\|$.
Together with the bound above, this implies that
\begin{align*}
\left( \mathbb E\left\| U_n \right\|^{2q} \right)^{1/(2q)}
&\geq \frac{1}{4 C_0}\left( \mathbb E\left\| \sum_{i=1}^n Q_i^2 \right\|^{q} \right)^{1/(2q)}
\\
&\geq
\frac{1}{4 C_0} \left( \mathbb E\max_i \left\|Q_i^2\right\|^q \right)^{1/(2q)}
\geq
\frac{1}{4 C_0} \left( \mathbb E \max_{i}\left\| \widetilde G_i^\ast \widetilde G_i \right\|^q \right)^{1/2q},
\end{align*}
where the last step uses the identity $\left\| \widetilde G_i^\ast \widetilde G_i \right\| = \left\| \widetilde G_i \widetilde G_i^\ast \right\|$.
The claim follows.
\subsubsection{Proof of Lemma \ref{lemma:remainder}.}
\label{proof:extraterm}
Note that
\begin{multline}
\label{eq:app20}
r^{3/2} \left( \mathbb E\max_{i_1} \left\| \sum_{i_2:i_2\ne i_1} H_{i_1,i_2}^2 \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q \right)^{1/(2q)}
\\
\leq
r^{3/2} \left( \mathbb E\sum_{i_1} \left\| \sum_{i_2:i_2\ne i_1} H_{i_1,i_2}^2 \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q \right)^{1/(2q)}
\\
=r^{3/2} \left( \mathbb E_1\sum_{i_1} \mathbb E_2 \left\| \sum_{i_2:i_2\ne i_1} H_{i_1,i_2}^2 \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q \right)^{1/(2q)}.
\end{multline}
Next, Lemma \ref{lemma:rosenthal-pd} implies that, for $r=\max(q,\log(d))$,
\begin{align}
\label{eq:app25}
\mathbb E_2 \left\| \sum_{i_2:i_2\ne i_1} H_{i_1,i_2}^2 \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q
\leq
2^{2q-1} \Bigg[ & \left\| \sum_{i_2:i_2\ne i_1} \mathbb E_2 H_{i_1,i_2}^2 \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q \\
& \nonumber
+(2\sqrt{2e})^{2q} r^{q} \, \mathbb E_2 \max_{i_2:i_2\ne i_1} \left\| H_{i_1,i_2}^2 \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right)\right\|^q \Bigg].
\end{align}
We will now apply Lemma \ref{lemma:sum-max} with $\alpha=1$ and
$\xi_{i_1}:= \left\| \sum_{i_2:i_2\ne i_1} \mathbb E_2 H_{i_1,i_2}^2 \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|$ to get that
\begin{multline}
\label{eq:app30}
\sum_{i_1}\mathbb E\xi^q_{i_1} \leq
2(1 + q) \Bigg( \mathbb E\max_{i_1}\left\| \sum_{i_2: i_2\ne i_1} \mathbb E_2 H_{i_1,i_2}^2 \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q
\\
+q^{-q} \left( \sum_{i_1} \mathbb E\left\| \sum_{i_2:i_2\ne i_1} \mathbb E_2 H_{i_1,i_2}^2 \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\| \right)^q \Bigg).
\end{multline}
Combining \eqref{eq:app20} with \eqref{eq:app25} and \eqref{eq:app30}, we obtain (using the inequality $1+q\leq e^q$) that
\begin{multline}
\label{eq:app40}
r^{3/2} \left( \mathbb E\max_{i_1} \left\| \sum_{i_2:i_2\ne i_1} H_{i_1,i_2}^2 \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q \right)^{1/(2q)}
\\
\leq 4e\sqrt{2} \Bigg[ r^{3/2}\left( \mathbb E\max_{i_1}\left\| \sum_{i_2: i_2\ne i_1} \mathbb E_2 H_{i_1,i_2}^2 \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q \right)^{1/2q} \\
+ r \sqrt{1 + \frac{\log d}{q}}\left( \sum_{i_1} \mathbb E \left\| \sum_{i_2: i_2\ne i_1} \mathbb E_2 H_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|\right)^{1/2}
\\
+ r^2 \left( \sum_{i_1}\mathbb E\max_{i_2:i_2\ne i_1} \left\| H_{i_1,i_2}^2 \left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|^q \right)^{1/2q}
\Bigg],
\end{multline}
which yields the result.
\subsection{Proof of Theorem \ref{thm:adamczak}.}
\label{section:adamczak-calculations}
Let $J\subseteq I\subseteq \{1,2\}$.
We will write $\mathbf i$ to denote the multi-index $(i_1,i_2)\in \{1,\ldots,n\}^2$.
We will also let $\mathbf i_I$ be the restriction of $\mathbf i$ onto its coordinates indexed by $I$, and, for a fixed value of
$\mathbf i_{I^c}$, let $\left( H_\mathbf i \right)_{\mathbf i_I}$ be the array $\left\{ H_\mathbf i, \ \mathbf i_I \in \{1,\ldots,n\}^{| I |} \right\}$, where
$H_{\mathbf i}:=H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2})$.
Finally, we let $\mathbb E_I$ stand for the expectation with respect to the variables with upper indices contained in $I$ only.
Following section 2 in \cite{adamczak2006moment}, we define
\begin{align}
\label{eq:adam-norm}
\nonumber
\left\| \left( H_\mathbf i \right)_{\mathbf i_I}\right\|_{I,J} = \mathbb E_{I\setminus J}
\sup & \left\{ \mathbb E_J \sum_{\mathbf i_I}\langle \Phi, H_\mathbf i\rangle \prod_{j\in J} f^{(j)}_{ i_j}(X^{(j)}_{ i_j}): \ \|\Phi\|_\ast\leq 1, \right.
\\
&
\left. f_i^{(j)}:S\mapsto \mathbb R \text{ for all } i,j, \text{ and }
\sum_i \mathbb E \left| f_i^{(j)}(X_i^{(j)})\right|^2 \leq 1, \ j\in J \right\}
\end{align}
and $\big\| \left( H_\mathbf i \right)_{\mathbf i_\emptyset}\big\|_{\emptyset,\emptyset}:=\left\| H_\mathbf i\right\|$,
where $\langle A_1,A_2\rangle:=\mbox{tr\,}(A_1\,A_2^\ast)$ for $A_1,A_2\in \mathbb H^d$ and $\|\cdot\|_\ast$ denotes the nuclear norm.
Theorem 1 in \cite{adamczak2006moment} states that for all $q\geq 1$,
\begin{align*}
&
\left( \mathbb E \left\| U_n \right\|^{2q} \right)^{1/2q}\leq C \left[ \sum_{I\subseteq \{1,2\} }\sum_{J\subseteq I}
q^{|J|/2 + | I^c |} \left( \sum_{\mathbf i_{I^c}} \mathbb E_{I^c} \left\| \left( H_\mathbf i \right)_{\mathbf i_I}\right\|^{2q}_{I,J} \right)^{1/2q}
\right],
\end{align*}
where $C$ is an absolute constant. Obtaining upper bounds for each term in the sum above, we get that
\begin{align*}
&
\left( \mathbb E \left\| U_n \right\|^{2q} \right)^{1/2q}\leq C\, \Big[ \mathbb E\left\| U_n \right\| +
\sqrt{q} \cdot A + q\cdot B + q^{3/2}\cdot \Gamma + q^2 \cdot D \Big],
\end{align*}
where
\begin{align*}
A \leq & 2 \, \mathbb E_1 \left(\sup_{\Phi:\|\Phi\|_\ast\leq 1} \sum_{i_2} \mathbb E_2 \left\langle \sum_{i_1} H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}), \Phi \right\rangle^2 \right)^{1/2}, \\
B \leq & \left( \sup_{\Phi:\|\Phi\|_\ast\leq 1} \sum_{(i_1, i_2) \in I_n^2} \mathbb E\left\langle H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}), \Phi \right\rangle^2\right)^{1/2} \\
&+ 2\left( \sum_{i_2} \mathbb E_2 \left( \mathbb E_1 \sup_{\Phi:\|\Phi\|_\ast\leq 1} \left\langle \sum_{i_1} H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}), \Phi \right\rangle\right)^{2q}\right)^{1/2q}, \\
\Gamma \leq & 2 \left( \sum_{i_2} \mathbb E_2 \left( \sup_{\Phi:\|\Phi\|_\ast\leq 1} \sum_{i_1} \mathbb E_1 \left\langle H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}),\Phi \right\rangle^2\right)^{q} \right)^{1/2q}, \\
D \leq & \left( \sum_{(i_1, i_2) \in I_n^2} \mathbb E \left\| H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}) \right\|^{2q}\right)^{1/2q}.
\end{align*}
The bounds for $A,B,\Gamma,D$ above are obtained from \eqref{eq:adam-norm} via the Cauchy-Schwarz inequality.
For instance, to get a bound for $A$, note that it corresponds to the choice $I = \{1,2\}$ and $J=\{1\}$ or $J=\{2\}$. Due to the symmetry of the kernels, it suffices to consider the case $J=\{2\}$ and multiply the upper bound by a factor of $2$.
When $J=\{2\}$,
\begin{multline*}
\left( \sum_{\mathbf i_{I^c}} \mathbb E_{I^c} \left\| \left( H_\mathbf i \right)_{\mathbf i_I}\right\|^{2q}_{I,J} \right)^{1/2q} =
\left\| \left( H_\mathbf i \right)_{\mathbf i_{\{1,2\}}}\right\|_{\{1,2\},\{2\}}
\\
= \mathbb E_1 \sup\left\{ \mathbb E_2 \sum_{(i_1,i_2)\in I_n^2 } \langle H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}),\Phi\rangle \cdot f^{(2)}_{i_2}\left( X^{(2)}_{i_2}\right) : \|\Phi\|_\ast \leq 1, \ \sum_{i_2} \mathbb E\left|f^{(2)}_{i_2}\left( X^{(2)}_{i_2}\right)\right|^2 \leq 1 \right\}
\\
= \mathbb E_1\sup\left\{ \mathbb E_2 \sum_{i_2} \left\langle \sum_{i_1} H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}),\Phi \right\rangle \cdot f^{(2)}_{i_2}\left( X^{(2)}_{i_2}\right) : \|\Phi\|_\ast \leq 1, \ \sum_{i_2} \mathbb E\left|f^{(2)}_{i_2}\left( X^{(2)}_{i_2}\right)\right|^2 \leq 1 \right\}
\\
\leq \mathbb E_1 \sup_{\Phi, f^{(2)}_{1},\ldots, f^{(2)}_{n} }\left\{ \sum_{i_2} \sqrt{ \mathbb E_2 \left\langle \sum_{i_1} H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}),\Phi \right\rangle^2 }
\sqrt{\mathbb E \left|f^{(2)}_{i_2}\left( X^{(2)}_{i_2}\right)\right|^2} \right\}
\\
\leq \mathbb E_1 \sup \left\{ \left( \sum_{i_2} \mathbb E_2 \left\langle \sum_{i_1} H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}),\Phi \right\rangle^2 \right)^{1/2} : \|\Phi\|_\ast \leq 1\right\}
\\
= \mathbb E_1 \left(\sup_{\Phi:\|\Phi\|_\ast\leq 1} \sum_{i_2} \mathbb E_2 \left\langle \sum_{i_1} H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}), \Phi \right\rangle^2 \right)^{1/2}.
\end{multline*}
It is not hard to see that the inequality above is in fact an equality: the supremum is attained by setting, for every fixed $\Phi$,
\[
f^{(2)}_{i_2}\left( X_{i_2}^{(2)}\right) = \alpha_{i_2}\frac{ \left\langle \sum_{i_1} H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}),\Phi \right\rangle}
{ \sqrt{ \mathbb E_2 \left\langle \sum_{i_1} H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}),\Phi \right\rangle^2} },
\]
where $\alpha_{i_2} = \frac{ \sqrt{\mathbb E_2 \left\langle \sum_{i_1} H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}),\Phi \right\rangle^2} }
{\sqrt{ \sum_{i_2} \mathbb E_2 \left\langle \sum_{i_1} H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}),\Phi \right\rangle^2 } }$ are such that $\sum_{i_2} \alpha^2_{i_2}=1$.
The bounds for other terms are obtained quite similarly.
Next, we will further simplify the upper bounds for $A,B,\Gamma,D$ by analyzing the supremum over $\Phi$ with nuclear norm not exceeding $1$. To this end, note that
\[
\Phi\mapsto \mathbb E\left\langle H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}), \Phi \right\rangle^2
\]
is a convex function, hence its maximum over the convex set $\{\Phi\in\mathbb H^d: \ \|\Phi \|_\ast \leq 1\}$ is attained at an extreme point, which in the case of the unit ball of the nuclear norm must be a rank-one matrix of the form $\phi \phi^\ast$ for some unit vector $\phi\in \mathbb C^d$.
It implies that
\begin{align}
\label{eq:app70}\nonumber
\sup_{\Phi:\|\Phi\|_\ast\leq 1} \mathbb E\left\langle H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}), \Phi \right\rangle^2
&\leq \sup_{\phi:\|\phi\|_2\leq 1} \mathbb E \left\langle H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}), \phi \phi^\ast \right\rangle^2 \\
&
\leq \left\| \mathbb E H_{i_1,i_2}^2(X^{(1)}_{i_1},X^{(2)}_{i_2})\right\|.
\end{align}
Moreover,
\begin{multline}
\label{eq:app80}
\sum_{i_2} \mathbb E_2 \left( \mathbb E_1 \sup_{\Phi:\|\Phi\|_\ast\leq 1} \left\langle \sum_{i_1} H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}), \Phi \right\rangle\right)^{2q}
\\
= \sum_{i_2} \mathbb E_2 \left( \mathbb E_1 \left\| \sum_{i_1} H_{i_1,i_2}\left(X^{(1)}_{i_1},X^{(2)}_{i_2}\right) \right\| \right)^{2q}
\\
\leq \sum_{i_2} \mathbb E_2\, \mathbb E_1 \left\| \sum_{i_1} H_{i_1,i_2}\left(X^{(1)}_{i_1},X^{(2)}_{i_2}\right) \right\|^{2q}
\\
\leq
\sum_{i_2} \mathbb E_2 \Bigg( 2\sqrt{er}\left\| \sum_{i_1} \mathbb E_1 H_{i_1,i_2}^2\left(X^{(1)}_{i_1},X^{(2)}_{i_2}\right) \right\|^{1/2}
\\
+4\sqrt{2} e r \left( \mathbb E_1 \max_{i_1} \left\| H_{i_1,i_2}\left(X^{(1)}_{i_1},X^{(2)}_{i_2}\right)\right\|^{2q} \right)^{1/2q} \Bigg)^{2q},
\end{multline}
where we have used Lemma \ref{lemma:rosenthal-1} in the last step, and $r=q\vee \log d$.
Combining \eqref{eq:app70},\eqref{eq:app80}, we get that
\begin{multline}
\label{eq:app90}
B\leq \left( \left\| \sum_{(i_1, i_2) \in I_n^2} \mathbb E H_{i_1,i_2}^2(X^{(1)}_{i_1},X^{(2)}_{i_2})\right\| \right)^{1/2}
+4\sqrt{er} \left( \sum_{i_2}\, \mathbb E_2 \left\| \sum_{i_1} \mathbb E_1 H_{i_1,i_2}^2\left(X^{(1)}_{i_1},X^{(2)}_{i_2}\right) \right\|^{q} \right)^{1/2q} \\
+ 8\sqrt{2}er \left( \sum_{i_2}\,\mathbb E \max_{i_1} \left\| H_{i_1,i_2}\left(X^{(1)}_{i_1},X^{(2)}_{i_2}\right)\right\|^{2q}\right)^{1/2q}.
\end{multline}
It is also easy to get the bound for $\Gamma$: first, recall that
\[
\Phi\mapsto \sum_{i_1} \mathbb E_1 \left\langle H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}),\Phi \right\rangle^2
\]
is a convex function, hence its maximum over the convex set $\{\Phi\in\mathbb H^d: \ \|\Phi \|_\ast \leq 1\}$ is attained at an extreme point of the form $\phi \phi^\ast$ for some unit vector $\phi$.
Moreover,
\begin{multline*}
\left\langle H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}),\phi\phi^\ast \right\rangle^2 =
\phi^\ast H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2})\phi \phi^\ast H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}) \phi
\\
\leq \phi^\ast H^2_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}) \phi
\end{multline*}
due to the fact that $\phi\phi^\ast \preceq I$. Hence
\begin{multline*}
\sup_{\Phi:\|\Phi\|_\ast\leq 1} \sum_{i_1} \mathbb E_1 \left\langle H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}),\Phi \right\rangle^2 \leq
\sup_{\phi: \ \|\phi\|_2=1} \sum_{i_1} \mathbb E_1 \left(\phi^\ast H^2_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}) \phi \right)
\\
= \sup_{\phi: \ \|\phi\|_2=1} \phi^\ast \left( \mathbb E_1 \sum_{i_1} H^2_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}) \right)\phi
= \left\| \mathbb E_1 \sum_{i_1} H^2_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}) \right\|,
\end{multline*}
and we conclude that
\begin{multline}
\label{eq:a110}
\Gamma \leq 2 \left( \sum_{i_2} \mathbb E_2 \left( \sup_{\Phi:\|\Phi\|_\ast\leq 1} \sum_{i_1} \mathbb E_1 \left\langle H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}),\Phi \right\rangle^2\right)^{q} \right)^{1/2q}
\\
\leq 2 \left( \sum_{i_2} \mathbb E_2 \left\| \sum_{i_1} \mathbb E_1 H_{i_1,i_2}^2\left(X^{(1)}_{i_1},X^{(2)}_{i_2}\right) \right\|^{q} \right)^{1/2q}.
\end{multline}
The bound for $A$ requires a bit more work.
\begin{lemma}
\label{lemma:A-bound}
The following inequality holds:
\begin{multline*}
A\leq 2\mathbb E_1 \left(\sup_{\Phi:\|\Phi\|_\ast\leq 1} \sum_{i_2} \mathbb E_2 \left\langle \sum_{i_1} H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}), \Phi \right\rangle^2 \right)^{1/2} \leq \\
64\sqrt{e}\log (de) \left( \mathbb E\max_{i_1} \left\| \sum_{i_2:i_2\ne i_1} H_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|\right)^{1/2}
\\
+8\sqrt{2e\log (de)} \left( \mathbb E\left\| \sum_{i} \mathbb E_2 \widetilde G_i \widetilde G_i^\ast \right\| +
\left\| \sum_{(i_1,i_2)\in I_n^2} \mathbb E H_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)} \right)\right\| \right)^{1/2},
\end{multline*}
where $\widetilde G$ was defined in \eqref{eq:g}.
\end{lemma}
Combining the bounds \eqref{eq:app90}, \eqref{eq:a110} and Lemma \ref{lemma:A-bound}, and grouping the terms with the same power of $q$, we get the result of Theorem \ref{thm:adamczak}.
It remains to prove Lemma \ref{lemma:A-bound}.
To this end, note that Jensen's inequality and an argument similar to \eqref{eq:app70} imply that
\begin{multline*}
\mathbb E_1 \left(\sup_{\Phi:\|\Phi\|_\ast\leq 1} \sum_{i_2} \mathbb E_2 \left\langle \sum_{i_1} H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}), \Phi \right\rangle^2 \right)^{1/2} \\
\leq
\left( \mathbb E \sup_{\Phi:\|\Phi\|_\ast\leq 1} \sum_{i_2} \left\langle \sum_{i_1} H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2}), \Phi \right\rangle^2 \right)^{1/2}
\\
\leq \left(\mathbb E\left\| \sum_{i_2} \left(\sum_{i_1} H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2})\right)^2 \right\| \right)^{1/2}.
\end{multline*}
Next, arguing as in the proof of Lemma \ref{bound-on-expectation}, we define
\[
\widehat{B}_{i_1,i_2} = [0~|~0~|\ldots|~H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2})~|\ldots|~0~|~0]\in\mathbb{C}^{d\times nd},
\]
where $H_{i_1,i_2}$ sits on the $i_1$-th position of the block matrix above. Moreover, let
\begin{align}
\label{eq:b2}
B_{i_2}=\sum_{i_1:i_1\neq i_2}\widehat{B}_{i_1,i_2}.
\end{align}
Using the representation \eqref{eq:b2}, we have
\begin{multline*}
\left(\mathbb E\left\| \sum_{i_2} \left(\sum_{i_1} H_{i_1,i_2}(X^{(1)}_{i_1},X^{(2)}_{i_2})\right)^2 \right\| \right)^{1/2}
= \left( \mathbb E\left\| \left(\sum_{i_2}B_{i_2}\right)\left(\sum_{i_2}B_{i_2}\right)^\ast \right\| \right)^{1/2} \\
=\left( \mathbb E
\left\| \left( \sum_{i_2}\left(
\begin{array}{ccc}
0 & B_{i_2}^\ast \\
B_{i_2} & 0 \\
\end{array}
\right) \right)^2\right\| \right)^{1/2}
\leq2\left( \mathbb E
\left\| \left( \sum_{i_2}\varepsilon_{i_2}\left(
\begin{array}{ccc}
0 & B_{i_2}^\ast \\
B_{i_2} & 0 \\
\end{array}
\right) \right)\right\|^2 \right)^{1/2},
\end{multline*}
where $\left\{\varepsilon_{i_2}\right\}_{i_2=1}^n$ is a sequence of i.i.d. Rademacher random variables, and the last step follows from the symmetrization inequality.
Next, Khintchine's inequality \eqref{matrix-Khintchine} yields that
\begin{align*}
A\leq& 4\sqrt{e(1+2\log d)}\left(\mathbb E\left\|\sum_{i_2}\left(
\begin{array}{ccc}
B_{i_2}^\ast B_{i_2} & 0 \\
0 & B_{i_2}B_{i_2}^\ast \\
\end{array}
\right) \right\|\right)
^{1/2}\\
=&4\sqrt{e(1+2\log d)}\left(\mathbb E\max\left\{\left\| \sum_{i_2}B_{i_2}^\ast B_{i_2} \right\|, \left\| \sum_{i_2}B_{i_2}B_{i_2}^\ast \right\|\right\}\right)^{1/2}
\\
=& 4\sqrt{e(1+2\log d)}\left( \mathbb E\max\left\{ \left\| \widetilde G\widetilde G^\ast\right\|, \left\|\sum_{(i_1,i_2)\in I^2_n}H_{i_1,i_2}^2\left( X^{(1)}_{i_1},X^{(2)}_{i_2} \right) \right\| \right\} \right)^{1/2}.
\end{align*}
Note that the last expression is of the same form as equation \eqref{inter-chaos-1} in the proof of Theorem \ref{thm:degen-moment} with $q=1$. Repeating the same argument, one can show that
\begin{align*}
A \leq 64\sqrt{e}\log (de)& \left( \mathbb E\max_{i_1} \left\| \sum_{i_2:i_2\ne i_1} H_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)}\right) \right\|\right)^{1/2}
\\
&
+8\sqrt{2e\log (de)} \left( \mathbb E\left\| \sum_{i} \mathbb E_2 \widetilde G_i \widetilde G_i^\ast \right\| +
\left\| \sum_{(i_1,i_2)\in I_n^2} \mathbb E H_{i_1,i_2}^2\left(X_{i_1}^{(1)},X_{i_2}^{(2)} \right)\right\| \right)^{1/2},
\end{align*}
which is an analogue of \eqref{eq:d10}.
\subsection{Calculations related to Examples \ref{example:01} and \ref{example:02}.}
\label{section:example-proof}
We will first consider Example \ref{example:01} and estimate $\left\|GG^\ast\right\|$. Note that the $(i,i)$-th block of the matrix $GG^\ast$ is
\[
\left(GG^\ast\right)_{ii} = \sum_{j:j\neq i}A_{i,j}^2 = \sum_{j:j\neq i}\left(\mathbf{a}_i\mathbf a_j^T+\mathbf a_j\mathbf a_i^T\right)^2
=\sum_{j:j\neq i}\left(\mathbf a_i\mathbf a_i^T+\mathbf a_j\mathbf a_j^T\right)
=(n-1)\mathbf a_i\mathbf a_i^T + \sum_{j\neq i}\mathbf a_j\mathbf a_j^T,
\]
where we used the orthonormality of $\{\mathbf{a}_1,\ldots,\mathbf{a}_n\}$.
The $(i,j)$-th block for $j\neq i$ is
\[
\left(GG^\ast\right)_{ij} = \sum_{k\neq i,j}A_{i,k}A_{j,k}
=\sum_{k\neq i,j}\left(\mathbf a_i\mathbf a_k^T+\mathbf a_k\mathbf a_i^T\right)\left(\mathbf a_j\mathbf a_k^T+\mathbf a_k\mathbf a_j^T\right)=(n-2)\mathbf a_i\mathbf a_j^T.
\]
We thus obtain that
\begin{align*}
GG^\ast = (n-2)\mathbf{a}\mathbf{a}^T + \mathrm{Diag}\left( \underbrace{ \sum_{j=1}^n\mathbf{a}_j\mathbf{a}_j^T
,\ldots, \sum_{j=1}^n\mathbf{a}_j\mathbf{a}_j^T }_{\text{n terms}}\right),
\end{align*}
where $\mathrm{Diag}(\cdot)$ denotes the block-diagonal matrix with diagonal blocks in the brackets.
Since
$$\mathrm{Diag}\left( \sum_{j=1}^n\mathbf{a}_j\mathbf{a}_j^T
,\ldots, \sum_{j=1}^n\mathbf{a}_j\mathbf{a}_j^T\right)\succeq 0,$$
it follows that
\[
\left\|GG^\ast \right\| \geq (n-2) \|\mathbf{a}\|^2_2=(n-2)n.
\]
On the other hand,
\begin{multline*}
\left\| \sum_{(i_1,i_2)\in I_n^2} A_{i_1,i_2}^2\right\|
= \left\| \sum_{(i_1,i_2)\in I_n^2} \left( \mathbf{a}_{i_1}\mathbf{a}_{i_2}^T + \mathbf{a}_{i_2}\mathbf{a}_{i_1}^T \right)^2\right\|
= \left\| \sum_{(i_1,i_2)\in I_n^2} \left( \mathbf{a}_{i_1}\mathbf{a}_{i_1}^T +\mathbf{a}_{i_2}\mathbf{a}_{i_2}^T \right) \right\|
\\
=2(n-1)\left\| \sum_{i} \mathbf{a}_{i}\mathbf{a}_{i}^T \right\| = 2(n-1),
\end{multline*}
where the last equality follows from the fact that $\{\mathbf{a}_1,\ldots,\mathbf{a}_n\}$ are orthonormal.
For Example \ref{example:02}, we similarly obtain that
\begin{align*}
\left(GG^\ast\right)_{ii} &= \sum_{j:j\neq i}c^2_{i,j}\left(\mathbf{a}_i\mathbf a_j^T+\mathbf a_j\mathbf a_i^T\right)^2
\\
&=\left( \sum_{j:j\ne i} c^2_{i,j}\right)\mathbf a_i\mathbf a_i^T + \sum_{j:j\neq i} c^2_{i,j}\mathbf a_j\mathbf a_j^T
= \mathbf a_i\mathbf a_i^T + \sum_{j:j\neq i} c^2_{i,j}\mathbf a_j\mathbf a_j^T,
\\
\left(GG^\ast\right)_{ij} &= \sum_{k\neq i,j}c_{i,k}c_{j,k}\left(\mathbf a_i\mathbf a_k^T+\mathbf a_k\mathbf a_i^T\right)\left(\mathbf a_j\mathbf a_k^T+\mathbf a_k\mathbf a_j^T\right)=0, \ i\ne j,
\end{align*}
hence $\left\| GG^\ast \right\|=\max_i \left\| (GG^\ast)_{ii}\right\|=1$. On the other hand,
\begin{multline*}
\left\| \sum_{(i_1,i_2)\in I_n^2} A_{i_1,i_2}^2\right\|
= \left\| \sum_{(i_1,i_2)\in I_n^2} c^2_{i_1,i_2}\left( \mathbf{a}_{i_1}\mathbf{a}_{i_2}^T + \mathbf{a}_{i_2}\mathbf{a}_{i_1}^T \right)^2\right\| \\
= \left\| \sum_{(i_1,i_2)\in I_n^2} c^2_{i_1,i_2}\left( \mathbf{a}_{i_1}\mathbf{a}_{i_1}^T +\mathbf{a}_{i_2}\mathbf{a}_{i_2}^T \right) \right\| = 2\left\| \sum_{i} \mathbf{a}_{i} \mathbf{a}_{i}^T \right\|=2,
\end{multline*}
and
\begin{multline*}
\sum_{i_1} \left\| \sum_{i_2:i_2\neq i_1}A_{i_1,i_2}^2\right\| = \sum_{i_1} \left\| \sum_{i_2\neq i_1} c^2_{i_1,i_2}\left( \mathbf{a}_{i_1}\mathbf{a}_{i_2}^T + \mathbf{a}_{i_2}\mathbf{a}_{i_1}^T \right)^2\right\| \\
= \sum_{i_1}\left\| \sum_{i_2:i_2\ne i_1} c_{i_1,i_2}^2 \left( \mathbf{a}_{i_1} \mathbf{a}_{i_1}^T + \mathbf{a}_{i_2} \mathbf{a}_{i_2}^T\right) \right\|
= \sum_{i_1}\left\| \mathbf{a}_{i_1} \mathbf{a}_{i_1}^T + \sum_{i_2:i_2\ne i_1} c_{i_1,i_2}^2 \mathbf{a}_{i_2} \mathbf{a}_{i_2}^T \right\| = n.
\end{multline*}
\section{Introduction}
In a circuit of capacitively coupled metallic islands, static screening
describes the redistribution of polarization charges on the capacitive plates
of the islands in response to a static offset charge on one of the islands.
The resulting voltages are determined by a (screened) Poisson equation. The
way in which the solution decays with the distance from the offset charge
depends on the effective circuit dimensionality. In one-dimensional networks,
the polarization charge is generically constant up to the screening length
and then follows a purely exponential decay. For metallic islands coupled by
capacitances $C$, the screening length $\sqrt{2 C/C_g}$ is determined by the
ratio of $C$ to the capacitances $C_g$ of the islands to
ground.\cite{fazio:01}
Josephson junctions in superconducting circuits add an interesting twist to
the screening of static offset charge configurations. Conventional, linear
inductances coupling two metal grains are normally not of interest since they
invalidate the notion of islands with well-defined offset charges. In
contrast, nonlinear Josephson inductances only allow tunneling of single
Cooper-pairs such that offset charge cannot simply flow off an island
contacted by a junction. While Josephson junctions formally leave charge
quantization on the islands intact, charge quantization effects are
effectively weakened by large quantum fluctuations of the charge when the
charging energy $E_C$ of the islands is much smaller than the Josephson energy
$E_J$. This gives rise to what we will in the following refer to as nonlinear
screening, an effect that has been exploited very successfully in the transmon
qubit~\cite{koch:07}.
In the regime of dominating Josephson energy, the dynamics of a single
junction is dominated by quantum phase slips corresponding to tunneling of the
superconducting phase difference by $2\pi$. For one-dimensional chains of
Josephson junctions coupling the islands, the effects of phase slips have been
extensively studied theoretically both in infinite~\cite{bradley:84,
korshunov:86, korshunov:89} and finite~\cite{matveev:02,
Hekking13,ribeiro:14,vogt:15} networks in the past. There has also
been considerable effort in studying these systems
experimentally~\cite{pop:10,manucharyan:12,ergul:13}. Although many junctions
are present in these systems, the junctions are not strongly coupled, so that
the dynamics are dominated by independent phase-slip events of the individual
junctions and interactions do not play a crucial role.
\begin{figure}
\includegraphics[scale=1.0]{setup}
\caption{\label{Fig:Setup} In a) we show a conventional one-dimensional
infinite chain of capacitors with capacitance $C$. Each island is connected to
the ground by a capacitance $C_g$. There is a fixed bias charge $q_0$ on a
single island, inducing charges on the neighboring capacitor plates. However,
due to the ground capacitances there is an exponential screening of this
charge along the chain. At each island a fraction of the induced charge is
stored on the ground capacitance so that the charge on the coupling
capacitances $C$ decays exponentially with the distance to the original bias
charge. The system in b) is similar to the system in a) with a Josephson
junction (Josephson energy $E_J$) providing an additional shunt to the ground.
Induced charges can now also be screened by tunneling through the Josephson
junction, giving rise to a novel nonlinear screening behavior. }
\end{figure}
While the nonlinear screening effected by a single Josephson junction as in
the transmon is well-studied, the screening properties of systems of many
junctions that are strongly coupled have not been investigated to the best of
our knowledge. Motivated by the efficient screening of a single transmon, we
therefore study a one-dimensional system of transmons that are strongly
coupled by large capacitances $C$, see Fig.~\ref{Fig:Setup}. This corresponds
to a system of superconducting islands that are coupled by capacitances $C$
and shunted to ground by a Josephson junction with Josephson energy $E_J$ and
associated capacitance $C_g$. We are interested in a regime of nonlinear
screening dominated by the Josephson junctions which corresponds to a very
small capacitance $C_g$ to ground with associated large (linear) screening
length $\sqrt{2 C/C_g}$. Phase slips dominate when the Josephson energy $E_J$
is much larger than the energy $E_{C} = e^2/2 C$ associated with a nonzero
voltage with respect to the ground on a single island. We will show below
that the nonlinear screening due to the Josephson junctions leads to a
universal power-law decay of the electron-electron interaction with exponent two.
This implies a power-law decay of the polarization charge markedly different
from the conventional exponential decay obtained for static screening due to
capacitances.
The outline of the paper is as follows. In Sec.~\ref{Sec:Setup}, we introduce
the problem and its corresponding imaginary-time partition function in the
path-integral formulation. We discuss the dilute instanton-gas approximation
in Sec.~\ref{Sec:DiluteInstantonGas} and introduce the equal-time action that
is accumulated when slips of the island phases by $2\pi$ occur simultaneously
on different islands in Sec.~\ref{Sec:FirstOrderAction}. In
Sec.~\ref{Sec:PartitionFunction}, we use the equal-time action to compute the
action of an arbitrary tunneling path of the island phases. The corresponding
partition function maps onto a classical (interacting) partition function
which we compute using a Mayer expansion. We use these results in
Sec.~\ref{Sec:GroundStateEnergy} to compute the ground state energy. We
discuss the consequences for charge screening in
Sec.~\ref{Sec:ChargeScreening} and conclude with a short discussion of our
results.
\section{Setup and Model}
\label{Sec:Setup}
The system of interest is shown in Fig.~\ref{Fig:Setup}b). We analyze an
infinite one-dimensional chain of superconducting islands with the
superconducting phase $\varphi_j$ on the $j$-th island. The islands are
coupled by capacitances $C$ and connected to the ground by a Josephson
junction with Josephson energy $E_J$ and a capacitance $C_g$ in parallel. This
gives charges the possibility to tunnel on and off the island, changing the
screening behavior of the chain. We treat the problem within the quantum
statistical path integral approach to calculate the partition function $Z$ of
the system. As the phase variables of the islands are compact and defined only
on the circle $[0,2\pi)$, it is useful to introduce the winding number $n_j \in
\mathbb{Z}$ for the $j$-th island. With this, we can split the path integral
for each phase $\varphi_j$ into sectors containing paths that wind $n_j$
times around the circle. For the full partition function, we have to sum
over all closed paths corresponding to all possible winding numbers.
Additionally, we have to integrate over the starting positions $\phi_j$.
Hence, the partition function is given by
\begin{align}
\label{Eq:PartitionSum}
Z=\prod_j\Biggl(\sum_{n_j}\int_0^{2\pi}\!d\phi_j\int_{\varphi_j(0)=\phi_j}^{\varphi_j(\beta)=\phi_j+2\pi n_j}\mathcal{D}[\varphi_j]\Biggr) e^{-S/\hbar},
\end{align}
where we have introduced the Euclidean action $S=\int_0^\beta\!
d\tau\;(L_C+L_q)$ with the inverse temperature $\beta=\hbar/k_B T$. The Lagrangian $L_C$ corresponding to the circuit without bias charges is given by
\begin{align}
L_C&=\sum_{j=-\infty}^{\infty} \Bigl\{ \frac{\hbar^2}{16 E_{C_g}} \dot{\varphi}_j^2+\frac{\hbar^2}{16 E_{C}}(\dot{\varphi}_{j+1}-\dot{\varphi}_j)^2\nonumber\\
&\quad-E_J[1-\cos(\varphi_j)] \Bigr \},
\end{align}
where $\dot\varphi_i=d\varphi_i/d\tau$. The first term in the sum describes
the capacitive coupling to the ground, the second term the coupling between
the islands, and the last term the Josephson junctions with Josephson energy
$E_J$ connecting the islands to the ground. The energy scales of the capacitive terms are given by $E_{C_g}=e^2/2C_g$ and $E_{C}=e^2/2C$. To study the screening effect of the system in the presence of bias charges on selected islands, we need the additional Lagrangian
\begin{align}
\label{Eq:BiasChargeLagrangian}
L_q=\sum_{j=-\infty}^\infty \frac{i\hbar}{2e}q_j\dot{\varphi}_j
\end{align}
which implements the bias charges $q_j$ on the $j$-th island. This term is
special for two reasons: On one hand, it is a total time derivative and thus
does not enter the classical equations of motion. On the other hand, it is
imaginary so that it only adds a phase to the partition function underlining its nonclassicality.
From the free energy $F=-\hbar\log(Z)/\beta$, we can calculate the ground state energy $E$ of the system by applying the low temperature limit
\begin{align}
\label{Eq:LowTempLimit}
E=\lim_{\beta\rightarrow\infty} F.
\end{align}
The aim of this work is to calculate this ground state energy as a function of two bias charges and use it to gain information about the screening behavior of the chain.
\section{Dilute Instanton-Gas Approximation}\label{Sec:DiluteInstantonGas}
We are in particular interested in the regime where $E_{C}\ll E_J\ll E_{C_g}$ so
that the conventional capacitance to the ground $C_g$ is very small and the
Josephson junctions are mainly responsible for any charge screening on the
islands. From the fact that $E_J/E_{C}\gg 1$, we know that the ground state of
the system will be well-localized in the phase variables $\varphi_j$.
Therefore, the main contributions in (\ref{Eq:PartitionSum}) are due to paths
starting and ending in the minimum of the cosine potentials. As we are only
interested in exponential accuracy for the calculation of the ground state
energy with (\ref{Eq:LowTempLimit}), we can set $\phi_j=0$ and omit the
integral over $\phi_j$. We are left with the evaluation of
\begin{align}
\label{Eq:PartitionSum_0}
Z_0=\prod_j\Biggl(\sum_{n_j}\int_{\varphi_j(0)=0}^{\varphi_j(\beta)=2\pi n_j}\mathcal{D}[\varphi_j] \Biggr)e^{-S_C/\hbar+ i\pi\bm{n} \cdot \bm{q}/e}.
\end{align}
Here, the action $S_C$ is $S_C = \int_0^\beta d \tau \, L_C$ and we have
already carried out the time integral over the term
\begin{align}
\int_0^\beta\! d\tau \;L_q&=\frac{i\hbar}{2e}\sum_j q_j\int_0^\beta \!d\tau\;\dot{\varphi}_j
=\frac{i\hbar\pi}{e}\; \bm{n} \cdot \bm{q}
\end{align}
due to the bias charges.
The vector $\bm{n}$ with components $n_j$ encodes the winding sector and $\bm{q}$ with components $q_j$ is the vector of bias charges.
As we are analyzing a regime where the phases are good variables, fluctuations
around the classical paths defined by the solutions of the Euler-Lagrange
equations (corresponding to $L_C$) are small. Hence, we apply an instanton
approximation where we replace the path integral by a sum over all classical
solutions, while quantum fluctuations around the classical paths play only a
subdominant role. In general, the main contribution to these fluctuations
arises from the Gaussian integration of the action expanded to second order around
the classical paths. We assume that the fluctuations can be factorized so
that they simply renormalize the bare parameters. This amounts to introducing
the weight prefactor $K(\bm{n},\varphi_\text{cl})$, accounting for the
fluctuations. By summing over all saddle-point solutions of the Euler-Lagrange
equations $\{\varphi_{\text{cl}}(\bm{n})\}$, the partition function in the instanton approximation reads
\begin{align}
\label{Eq:PartitionSum_Instanton}
Z_{0}=\sum_{\bm{n}}\sum_{\{\varphi_{\text{cl}}(\bm{n})\}} K(\bm{n},\varphi_\text{cl}) e^{-S_{\text{cl}}[\varphi_{\text{cl}}]/\hbar+i\pi \bm{n} \cdot \bm{q}/e},
\end{align}
where $S_{\text{cl}}[\varphi_{\text{cl}}]$ is the action corresponding to the classical path $\varphi_{\text{cl}}(\bm{n})$.
\begin{figure}
\includegraphics[scale=1.0]{PathLong}
\caption{\label{Fig:Path} A possible saddle-point solution of the
equations of motion for a single island with phase variable $\varphi$. Every
step (phase slip) in the curve is described by an instanton localized in time. The action corresponding to such a path can be approximated by the action of a single phase slip times the total number of phase slips; this is possible because the constant parts of the steps do not contribute to the action. }
\end{figure}
An example of such a simple path for just a single island (only one $n_j$ is
different from 0) is shown in Fig.~\ref{Fig:Path}.
The fact that we analyze the semi-classical regime where the phase is
well-localized allows us to use the dilute instanton gas approximation. The
approximation
holds as the phase-slip rate is so small that the phase slips (instantons) are well
separated from each other, i.e., there is at most a single phase slip present within
the duration $\tau_0=\hbar/\sqrt{E_J E_{C}}$ of a single phase-slip process. Thus, the classical paths
consist of almost instantaneous individual phase slips that are well separated
in (imaginary) time, the $j$-th phase slip being centered at its occurrence
time $\tau_j$. In between the phase slips, the phase stays
constant. In this way, we treat paths with more than a single phase slip at the
same island as independent phase slips, i.e., there is no temporal interaction
between the instantons. As a consequence, the total action is simply the sum of
individual instanton contributions
\cite{Coleman85}. However, simultaneous phase slips at different islands cannot
be treated independently because they are subject to a spatial interaction due
to the coupling between the islands. Therefore, this case needs special
treatment that we deal with in the next section.
In principle, we also have to calculate the prefactor $K(\bm{n})$ due to the
fluctuations around the classical paths. These fluctuations are not
important when it comes to exponential accuracy. However, due to time
translation invariance, in the single-instanton sector, the second-order
integration for the fluctuations additionally contains an integration over a zero
mode, corresponding to a simple shift of the full instanton solution in time.
We separate the prefactor $K(\bm{n})=\tilde{K}(\bm{n})\beta$ into a factor
$\tilde{K}(\bm{n})$ containing the real fluctuations on one side and the imaginary-time interval
$\beta$ resulting from the zero-mode integration on the other side. Thus, every
contribution is weighted by the fluctuations $\tilde{K}(\bm{n})$ and the length
of the time interval $\beta$ in which we consider the evolution of the system.
For a single instanton on a single island, which is a noninteracting problem,
it is known that $\tilde{K}\simeq1/\tau_0$ (compare, e.g., to
Refs.~\onlinecite{koch:07,pop:10,matveev:02}). However, as the precision of
this prefactor is not as important as the precision of the exponentiated
instanton action, we assume the single-instanton value $1/\tau_0$ to be
sufficient even for the interacting problem with many simultaneous instantons
at different islands~\cite{Sethna81}. This approximation is appropriate because
the contributions become smaller with the number of instantons, such that in the end
only prefactors $\tilde{K}$ with a moderate number of instantons are
relevant.
\section{Equal-Time Action}
\label{Sec:FirstOrderAction}
\begin{figure}
\includegraphics[scale=1.0]{PhaseSlipIllustrationsvg}
\caption{\label{Fig:ConfigLattice}Example configuration of the system where
we have sliced the time dimension into a lattice to better visualize the finite
time an instanton process needs. The arrows pointing up correspond to
instantons, while the arrows pointing down correspond to anti-instantons.
(Anti-)instantons occurring at the same time are subject to a spatial
interaction (here marked by the same gray scale) and add a contribution to the
full action given by their equal-time action. For the full partition function,
we need to sum over all possible configurations. To that end, we consider each
phase slip event at an island as a particle with the island position,
occurrence time and instanton type as generalized coordinates. In this
picture, evaluating the full partition function corresponds to calculating
the classical grand-canonical partition function of the particles.} \end{figure}
Considering a single island with a single phase variable, the dilute instanton
gas approximation allows us to treat the different phase slips (in imaginary
time) independently. However, in our problem, we have many interacting phase
degrees of freedom (in space), rendering the situation more involved.
Therefore, in this section, we determine the irreducible equal-time action
$S_{\text{ET}}$ including the simultaneous phase-slip processes explicitly.
For a proper definition of the equal-time action, we use the fact that within
a time interval of size $\tau_0$ there can be at most a single phase slip per
island. Together with the diluteness of the instanton gas, it is convenient
to define the equal-time action as the action picked up by the total system in
a time window $\tau_0$ around a given time $\tau^*$. In this context it is
useful to imagine the (imaginary) time to be discrete with a temporal lattice
constant $\tau_0$. Figure~\ref{Fig:ConfigLattice} illustrates an example
configuration on such a lattice. For the calculation of $S_{\text{ET}}$ at the
time $\tau^*$, we only need to know which of the phases execute an
(anti-)phase slip. Later on, we will employ the equal-time action in the limit
of short instanton processes $\tau_0\rightarrow 0$, which applies in our
regime of interest, to construct the full action by adding the different
contributions independently in accordance with the dilute gas approximation.
In principle, for the explicit calculation of $S_{\text{ET}}$ at time $\tau^*$, we need to
extract the part of the classical paths matching the time window of size
$\tau_0$ around $\tau^*$ and insert this into the Lagrangian. However, as the
phase slips are almost instantaneous, we are only interested in whether a
particular island $j$ exhibits a phase slip ($n^*_j=1$), an anti-phase slip
($n^*_j=-1$), or no phase slip ($n^*_j=0$) at time $\tau^*$. Moreover, since the system does not pick up any action as long as the phases are constant,
we can extend the
time-integration from minus infinity to plus infinity as the phases are only nonconstant for the short time interval $\tau_0$ around
$\tau^*$. This yields $S_{\text{ET}}=\int_{-\infty}^{\infty}\!d\tau\; L_{C1}(\tau)$,
where $L_{C1}$ is the circuit Lagrangian with the classical solution for a
single phase slip per phase inserted. The boundary conditions are provided by
\begin{align}
\varphi_j(\tau)=2\pi \begin{cases}m_j^*,&\tau<\tau^*-\tau_0/2,\\
m_j^*+n_j^*,&\tau>\tau^*+\tau_0/2.
\end{cases}
\end{align}
Here, the discrete variable $m_j^*\in\mathbb{Z}$ contains the information
about the phase before the time interval of size $\tau_0$ around $\tau^*$.
The task is the calculation of the classical action for an interacting
nonlinear system that in general cannot be carried out exactly. Therefore, we
introduce an approximation to the nonlinear Josephson cosine potential by
replacing it by a periodic parabolic potential called the Villain
approximation\cite{Villain75}, i.e., $1-
\cos(\varphi_j)\approx \text{min}_{m_j}(\varphi_j-2\pi m_j)^2/2$ with
$m_j\in\mathbb{Z}$. Taking the minimum with respect to the discrete variable
$m_j$ corresponds to taking the phases modulo $2\pi$. In the case of no phase
slip with $n_j^*=0$ this means $m_j=m_j^*$ independent of the time. However, in
the case of $n_j^*=\pm1$ we have $m_j=m_j^*$ for $\tau<\tau^*$ and $m_j=m_j^*+n_j^*$ for $\tau >\tau^*$. This can be summarized for all cases as
\begin{align}
m_j(\tau)=m_j^*+n_j^*\Theta(\tau-\tau^*),
\end{align}
where $\Theta(\tau)$ is the Heaviside theta function. With the Euler-Lagrange
equations for the Villain potential, it is straightforward to show that the
Lagrangian $L_{C1}$ is mirror symmetric with respect to the time $\tau^*$.
Thus, it is sufficient to calculate the action for times before $\tau^*$ and
double the result, yielding $S_{\text{ET}}=2\int_{-\infty}^{\tau^*}\!d\tau\,
L_{C1}(\tau)$. Additionally, the symmetry provides the boundary condition
$\varphi_j(\tau^*)=\pi (2m_j^*+n_j^*)$.
Apart from the bias charge term, which does not change the
classical equations of motion, the system is translationally invariant and
thus can be diagonalized by the Fourier transform
\begin{align}
\varphi_j&=\frac{1}{2\pi}\int_0^{2\pi}\! dk\; e^{i kj}\varphi_k,&
\varphi_k&=\sum_{j} e^{ -i kj}\varphi_j.
\end{align}
Expressing the circuit Lagrangian $L_{C1}$ for $\tau\leq\tau^*$ in terms of $\varphi_k$ gives rise to
\begin{align}
L_{C1}=\frac{1}{2\pi}\int\!dk\;&\biggl\{\frac{\hbar^2}{16 E_{C_\Sigma} }\biggl[1-\frac{\cos(k)}{1+\varepsilon^2}\biggr]|\dot{\varphi}_k|^2\nonumber\\
&+\frac{E_J}{2} |\varphi_k|^2 \biggr\},
\end{align}
where $E_{C_\Sigma}=e^2/2(C_g+2C)$ is the full charging energy and
\begin{equation}
\varepsilon=
\sqrt{C_g/2C}
\end{equation}
the inverse screening length. At this point, we make use of the fact that the Hamiltonian corresponding to $L_C$ is a conserved quantity. For the
instanton, it is equal to zero because the instanton paths start and end at rest
in the minima of the potentials. The conservation of
the Hamiltonian directly yields
\begin{align}
\frac{\hbar^2}{16 E_{C_\Sigma} }\biggl[1-\frac{\cos(k)}{1+\varepsilon^2}\biggr]|\dot{\varphi}_k|^2=\frac{E_J}{2} |\varphi_k|^2.
\end{align}
With this equation, we can express the equal-time action as
\begin{align}
S_{\text{ET}}(\bm{n}^*)&=2\int_{-\infty}^{\tau^*}\!d\tau\; L_{C1}\nonumber\\
&=\int\!\frac{dk}{\pi}\!\int_0^{\pi n_k^*}\!d|\varphi_k |\sqrt{\frac{E_J}{8E_{C_\Sigma}}\biggl[1-\frac{\cos(k)}{1+\varepsilon^2}\biggr]}|\varphi_k|\nonumber\\
&=\int \! dk\; U(k) |n_k^*|^2,
\end{align}
with $U(k)=\pi\sqrt{[1- \cos(k)/(1+\varepsilon^2)]E_J/32E_{C_\Sigma}}$ and $n_k^*$ the Fourier transform of $n_j^*$. In real space, we obtain the expression
\begin{align}
\label{Eq:RealSpaceAction}
S_{\text{ET}}(\bm{n}^*)&=\sum_{i,j}\; n_i^* U(i-j) n_j^*,
\end{align}
where $U(j)$ is the Fourier transform of $U(k)$. For small $\varepsilon$ an accurate approximation for the real space potential can be given by
\begin{align}
U(j)&=\alpha \frac{2\varepsilon j K_1(2\varepsilon j)}{\tfrac14-j^2}
\approx\alpha \frac{1}{\tfrac14-j^2} \quad(\text{for }\varepsilon\ll1).
\end{align}
Here, $K_1$ is the modified Bessel function of the second kind of order one, with
$K_1(x)\approx 1/x$ for $x\ll1$. Hence, the potential shows an inverse-square
decay up to the screening length $\varepsilon^{-1}$, beyond which it turns into an exponential decay. The coupling strength is given by
\begin{equation}
\alpha=\pi\sqrt{E_J/8E_{C}}.
\end{equation}
Note that the equal-time action of interacting instantons is fully described by the two-particle interaction $U(j)$ between all corresponding instantons. Additionally, we want to highlight that although the equal-time action describes the action picked up at a selected time, it does not explicitly depend on that time but only on the underlying instanton configuration $\bm{n}$.
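The crossover between the two regimes is easily verified numerically. The following short script (our own illustration, not part of the derivation; the values $E_J/E_{C}=5$ and $\varepsilon=0.01$ match Fig.~\ref{Fig:Charges}, and we rewrite $E_J/32E_{C_\Sigma}=(1+\varepsilon^2)E_J/16E_{C}$ using the definitions above) evaluates the Fourier transform $U(j)=\int_0^{2\pi}\!dk\;U(k)e^{ikj}$ directly and compares it with the closed form:
\begin{verbatim}
# Numerical check of U(j) against the Bessel-function closed form and
# the bare inverse-square tail -alpha/j^2; parameters are illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.special import k1

ej_ec, eps = 5.0, 0.01
alpha = np.pi * np.sqrt(ej_ec / 8.0)

def U_k(k):
    # E_J/(32 E_{C_Sigma}) rewritten as (1 + eps^2) E_J/(16 E_C)
    return np.pi * np.sqrt(ej_ec * (1.0 + eps**2 - np.cos(k)) / 16.0)

def U_num(j):
    # U(j) = int_0^{2pi} dk U(k) cos(k j); the odd (sine) part vanishes
    val, _ = quad(U_k, 0.0, 2.0 * np.pi, weight='cos', wvar=j, limit=400)
    return val

for j in (1, 5, 50, 500):
    closed = alpha * 2 * eps * j * k1(2 * eps * j) / (0.25 - j**2)
    print(j, U_num(j), closed, -alpha / j**2)
\end{verbatim}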
With the action at a given moment in time, we can proceed to calculate the
full action for a specific instanton configuration within the dilute gas
approximation. In the next section, we are going to use the equal-time action
to calculate the partition function by summing over all instanton
configurations $\bm{n}$ and integrating over all times the instantons
occur; this step is analogous to going over from a first to a second quantized
description of the problem.
\section{Partition Function}
\label{Sec:PartitionFunction}
We now turn to the estimation of the ground state energy $E$. As a first step, we calculate the full partition function $Z_0$. With the instanton approximation and the equal-time action in real space, this is equivalent to a classical interacting statistical mechanics problem.
To make this correspondence clearer, we introduce a particle picture for the
phase slips. The general task is to evaluate
(\ref{Eq:PartitionSum_Instanton}) in the dilute gas approximation, which means
summing over all configurations of instantons on all islands
at all possible times. In the particle picture, the sum over all configurations
is realized by a sum over all numbers of instanton particles together with the
sum over the generalized coordinate $x_a$ of every particle $a=1,\dots,N$. The generalized coordinate $x_a$ of every particle includes its island coordinate $r_a \in
\mathbb{Z}$, the time $\tau_a$, and the instanton type $\sigma_a\in \{+,-\}$ (instanton or
anti-instanton). Hence, such an instanton particle
corresponds to a single phase slip at a specific time and location. We use the shorthand notation
\begin{align}
\int dx_a=\sum_{r_a,\sigma_a}\int_0^\beta\!\frac{d\tau_a}{\tau_0} \;
\end{align}
to express the summation over all configurations of the $a$-th particle. With
that, we can rewrite (\ref{Eq:PartitionSum_Instanton}) as
\begin{align}
\label{Eq:PartitionSum_Classical}
Z_{0}&=\sum_{N=0}^{\infty}\frac{z^N}{N!}\int \!dx_1\cdots dx_N\nonumber\\
&\qquad\quad
\times\exp\biggl[{-\sum_{\makebox[2.5em]{$\scriptstyle 1\leq a<b \leq N$}}V_{a,b}+i\sum_{1\leq a\leq N}\pi\sigma_a q(r_a)/e}\biggr],
\end{align}
which is a classical partition function in the grand canonical ensemble.
Note that the factor $N!$ prevents overcounting of the configurations. The
fugacity $z$ is defined by the self-interaction part of
Eq.~(\ref{Eq:RealSpaceAction}) (with $a=b$) of a single instanton, $z=\exp[{-U(0)}]$. In this context, we can interpret $z/\tau_0$ as the instanton rate. As $z\ll1$, there will be far fewer than one instanton per time $\tau_0$ on average, justifying the dilute gas approximation. The rest of the interacting part is absorbed in the interaction potential
\begin{align}
V_{a,b}=\begin{cases}\infty, &r_a=r_b,\; |\tau_a-\tau_b|\lesssim\tau_0/2, \\
2\sigma_a U(r_a-r_b)\sigma_b, &r_a\neq r_b,\; |\tau_a-\tau_b|\lesssim \tau_0/2, \\
0, &\text{else}.
\end{cases}
\end{align}
We implement the potential as a hardcore potential, so that only a single
phase slip can happen on a given island at a given time.
For phase slips occurring at different times, the interaction potential is zero because, in the dilute gas approximation, a spatial interaction between phase slips is only included for simultaneous events, as explained in Sec.~\ref{Sec:FirstOrderAction}. The bias charge part is implemented by the single-particle potential $\pi\sigma_a q(r_a)/e$, where $q(r)$ is the charge distribution over the islands.
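To get a feeling for the diluteness invoked here, the fugacity can be evaluated for typical parameters; the following one-liner (our own, with the illustrative ratio $E_J/E_{C}=5$) uses $U(0)=4\alpha=\pi\sqrt{2E_J/E_{C}}$, which follows from the closed form of the previous section:
\begin{verbatim}
# Fugacity z = exp(-U(0)) with U(0) = 4*alpha = pi*sqrt(2*E_J/E_C);
# E_J/E_C = 5 is an illustrative choice. z/tau_0 is the instanton rate.
import math
z = math.exp(-math.pi * math.sqrt(2.0 * 5.0))
print(z)   # ~ 4.8e-05, i.e., far fewer than one instanton per tau_0
\end{verbatim}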
The free energy corresponding to such an interacting partition function can be evaluated perturbatively in $z$
by the Mayer cluster expansion \cite{Mayer41}. The idea is to rewrite
\begin{align}
\label{Eq:MayerTrick}
e^{-V_{a,b}}=1+f_{a,b},
\end{align}
so that we split the contribution into a noninteracting and an interacting
part. In the limit $\tau_0\rightarrow 0$, the interacting part $f_{a,b}$ is
proportional to $\tau_0\delta(\tau_a-\tau_b)$, a Dirac delta function \cite{Note1} of width
$\tau_0$. This again reflects the fact that in a dilute gas spatial
interaction affects only simultaneous instantons. For large distances
$|r_a-r_b|$, the interacting part $f_{a,b}$ is negligibly small, hence
suggesting an expansion in the number of interacting particles. We can proceed
similarly with the bias charge potential. Here, we consider only two charges
separated by a distance $M$ with the charge distribution
$q(r)=q_0\delta_{r,0}+q_M\delta_{r,M}$. We can write
\begin{align}
e^{i\pi\sigma_a q(r_a)/e}&=\exp({i\pi\sigma_a q_0 \delta_{r_a,0}/e})\exp({i\pi\sigma_a q_M \delta_{r_a,M}/e})\nonumber\\
&=(1+g_{0,a})(1+g_{M,a}),
\end{align}
where $g_{a,b}=\exp[i\pi\sigma_b q_a \delta_{r_b,a}/e]-1$ expresses the
interaction of phase slip $b$ with the charge on island $a$. Using these
relations, the partition function assumes the form
\begin{align}
\label{Eq:Mayer_PartitionSum}
Z_{0}=\sum_{N=0}^{\infty}\frac{z^N}{N!}&\int \!dx_1\cdots dx_N\nonumber\\
&\times\prod_{\begin{subarray}{c}a<b\\ l\end{subarray}} (1+f_{a,b})(1+g_{0,l})(1+g_{M,l}),
\end{align}
where the part in the product of the partition function contains terms with different numbers of $f$ functions like
\begin{align}
\prod_{a<b}(1+f_{a,b})&=[1+(f_{1,2}+f_{1,3}+\cdots)\nonumber\\
&\quad +(f_{1,2}f_{1,3}+f_{1,2}f_{1,4}+\cdots)+\cdots]
\end{align}
and similarly for the $g$ functions. A simple way to keep track of the terms
appearing in the expansion is given by a diagrammatic approach: For every
particle coordinate $x_a$ in an $n$-particle term we draw a circle (node) with
the particle label $a$ inside. If an $f$-function $f_{a,b}$ is part of the
term, we connect the $a$-th and $b$-th circle by a straight line (link). A
$g_{0/M,l}$ is accounted for by a wavy line starting from the $l$-th
circle and ending in a circle with the corresponding label $q_0$ or $q_M$. In
the end, we have to carry out a $dx_a$ integral for every node. Connected nodes represent interacting clusters of particles, which means that the integration of connected coordinates (clusters) is not necessarily independent, while nonconnected parts of the diagrams can be integrated independently.
\begin{figure}
\includegraphics[scale=1.0]{diagrams}
\caption{\label{Fig:ClusterVars}The contributions to the cluster variables
$b_1$ and $b_2$ as given in Eq.~(\ref{Eq:ClusterVars}). The first-order
diagrams in $b_1$ contain no spatial interaction at all, while the last
diagram in $b_2$ mediates an interaction between the charges $q_0$ and $q_M$.
The factor of 2 in front of some of the diagrams is due to the fact that these
contributions can additionally be realized with the labels $1$ and $2$ interchanged. }
\end{figure}
It is important to realize that the value of such clusters after the
integration does not depend on their labels, but only on the cluster topology
and the number of coordinates included. Thus, we introduce the cluster variables $b_j$ given by
\begin{align}
\label{Eq:ClusterVars}
b_1&=\frac{1}{1!\,L}\int\! dx_1\;(1+g_{0,1}+g_{M,1}),\nonumber\\
b_2&=\frac{1}{2!\,L}\int \! dx_1dx_2 \;f_{1,2} \;[1+g_{0,1}+g_{M,1}+g_{0,2}+g_{M,2}\nonumber\\
&\qquad\qquad
+g_{0,1}g_{M,2}+g_{0,2}g_{M,1}+g_{0,1}g_{0,2}+g_{M,1}g_{M,2}],\nonumber\\
&\qquad\qquad\qquad\vdots
\end{align}
where $L$ is the number of islands in the system so that $b_n$ is a finite
quantity that includes all connected diagrams with $n$ particles and all their
possible interactions with the bias charges. Terms corresponding to a single
phase slip interacting with two charges at different islands do not
contribute; thus, terms involving $g_{0,a}g_{M,a}$ vanish. In Fig.~\ref{Fig:ClusterVars} we show the diagrams corresponding to $b_1$ and $b_2$.
Every term in (\ref{Eq:Mayer_PartitionSum}) consists of different numbers of one-, two-, three-, and more-particle clusters. A term including $m_1$ single-particle clusters, $m_2$ two-particle clusters, and so on contains $N=\sum_j j\,m_j$ particles. It contributes
\begin{align}
T&=C(1!\,Lb_1)^{m_1}(2!\,Lb_2)^{m_2}(3!\,Lb_3)^{m_3}\cdots,\nonumber\\
C&=\frac{N!}{[(1!)^{m_1}(2!)^{m_2}\cdots][m_1!\,m_2!\cdots]}
\end{align}
where the Fa\`{a} di Bruno coefficient $C$ counts the number of ways of
partitioning the $N$ particles into the different particle clusters. The sum
over the instanton number $N$ in (\ref{Eq:Mayer_PartitionSum}) translates into
a sum over all cluster numbers $m_i$. Finally, we arrive at the expression for the partition function
\begin{align}
Z_0=\lim_{L\rightarrow\infty}\sum_{m_1,m_2,\dots}\biggl[\frac{(Lzb_1)^{m_1}}{m_1!}\frac{(Lz^2b_2)^{m_2}}{m_2!}\cdots\biggr]
\end{align}
Each factor sums to an exponential, so that $Z_0=\exp\bigl(L\sum_l b_l z^l\bigr)$; hence, with $F=-\hbar\log(Z_0)/\beta$ and (\ref{Eq:LowTempLimit}), the ground state energy is given by
\begin{align}
\label{Eq:FreeEnergyExpansion}
E=\lim_{\begin{subarray}{c}\beta\rightarrow\infty\\
L\rightarrow\infty\end{subarray}}\Bigl(-\frac{L\hbar}{\beta}\sum_l b_l z^l
\Bigr).
\end{align}
This is an expansion in the fugacity $z$, where the order $l$ corresponds to the maximum size of the particle clusters we take into account. Truncating the series at order two, for example, does not mean that we consider only terms with two instantons, but that we only take interactions between pairs of instantons into account. In other words, we treat the system as a sum of many two-body problems. For small $z\ll1$, as in our case $E_J/E_{C_\Sigma}\gg1$, such a truncation is justified.
We analyze an infinite system, and therefore the extensive energy $E$, scaling with the system size $L$, diverges. As we are interested in charge screening, we split the ground state energy as
\begin{align}
E=\mathcal{E}_0L+E_1(q_0)+E_1(q_M)+E_{2}(q_0,q_M),
\end{align}
where $\mathcal{E}_0$ is the energy density accounting for the bias-charge-independent
energy per island that is stored in the chain. From the latter,
corresponding to all diagrams without a bias charge circle, we can in
principle obtain information about the pressure and other thermodynamic variables
of the system. However, here we are only interested in the influence of
the bias charges. The parts that do depend on the bias charges break the
translational invariance of the chain and therefore do not scale with the
system size. This gives rise to the two energies $E_1(q)$ and $E_2(q_1,q_2)$,
where the former provides the change in the ground state energy due to a single charge and the latter corresponds to the interaction energy between two charges.
\section{Single Charge and Interaction Energy}\label{Sec:GroundStateEnergy}
We proceed by evaluating the ground state energy. First, we
calculate the dependence of the ground state energy on a single bias charge,
corresponding to $E_1(q_0)$. In our case, it is enough to consider the first
order in (\ref{Eq:FreeEnergyExpansion}), because the second order is already
suppressed by an additional factor of the fugacity $z$. The task is to
calculate the part of $b_1$ corresponding to $E_1(q_0)$. The diagrammatic
expansion makes it easy to select the correct terms. There is only a single
diagram that we have to take into account: a single particle circle connected
to a single bias charge $q_0$ (Fig.~\ref{Fig:ClusterVars}, the second diagram in $b_1$), which is given by the simple expression
\begin{align}
b_1(q_0)&=\frac{1}{1!\,L}\int\! dx_1\;g_{0,1}\nonumber\\
&=\frac{\beta}{L\tau_0}\Bigl[(e^{i\pi q_0/e}-1)+(e^{-i\pi q_0/e}-1)\Bigr]\nonumber\\
&=\frac{2\beta}{L\tau_0}[\cos(\pi q_0/e)-1].
\end{align}
As the $g$-functions are zero everywhere except at the island where the
corresponding charge is located, they act as a Kronecker delta and project the sum over the instanton coordinate onto the location of the bias charge. From $b_1$, we find
\begin{align}
\label{Eq:SingleEnergy}
E_1(q_0)&=\frac{2\hbar}{\tau_0} e^{-\pi\sqrt{2E_J/E_{C}}}[1-\cos(\pi q_0/e)].
\end{align}
This energy is similar to known results for problems with only single junctions
(see e.g.~\onlinecite{koch:07}). The exponent is slightly different from the
conventional scaling $\sqrt{8E_J/E_{C_g}}$ as every junction in the chain,
in contrast to single-junction systems, is coupled to other junctions. If we
redid the calculation in the limit $\varepsilon\rightarrow\infty$, which turns
off the coupling between the islands, and scaled $E_J\mapsto (8/\pi^2)^2E_J$ to
compensate for an error due to the Villain approximation~\cite{Hekking13}, we
would recover the known result.
The next step is the calculation of the interaction energy $E_{2}(q_0,q_M)$.
Here, it is not enough to consider only first-order terms in
(\ref{Eq:FreeEnergyExpansion}), because single-instanton diagrams can only
contain single charges. Thus, the leading order of the interaction energy is
given by the two-instanton diagrams where every particle circle is connected
to a charge circle (the last contribution in $b_2$ in Fig.~\ref{Fig:ClusterVars}) corresponding to the expression
\begin{align}
b_2(q_0,q_M)&=\frac{1}{2!\,L}\int \! dx_1dx_2 \;2f_{1,2}g_{0,1}g_{M,2}.
\end{align}
By considering solely the terms depending on both charges in the result for $b_2(q_0,q_M)$,
we find the interaction energy
\begin{align}\label{eq:e2}
E_{2}(q_0,q_M)&=\frac{2\hbar}{\tau_0} e^{-2\pi\sqrt{2E_J/E_{C}}}\nonumber\\
&\quad\times \biggl\{\Bigl[e^{-2U(M)}-1\Bigr]\cos[\pi(q_0+q_M)/e]\nonumber\\
&\quad\quad+\Bigl[e^{2U(M)}-1\Bigr]\cos[\pi(q_0-q_M)/e]\biggr\}.
\end{align}
For large enough $M$, we have $U(M)\ll1$ and can therefore expand the
interaction energy in $U$ with the result
\begin{align}
\label{Eq:InteractionEnergy}
E_{2}\approx \frac{8\hbar}{\tau_0}e^{-2\pi\sqrt{2E_J/E_{C}}}
U(M)\sin(\pi q_0/e)\sin(\pi q_M/e).
\end{align}
In this approximation, the charge interaction is directly proportional to the
instanton interaction $U(M)$ and decays with the inverse square of the
distance between the charges (below the screening length).
\section{Charge Screening}\label{Sec:ChargeScreening}
In this final section, we return to the original question of the charge
screening effect of Josephson junctions. As a first step, we treat the case of
a single bias charge $q_0$ on island $0$. The presence of a charge on an
island induces an average charge on neighboring capacitor plates. To calculate
the latter, we need to know the average voltages $V_M$ at the different
islands ($M\neq 0$). We can handle this task by using the results (\ref{Eq:SingleEnergy}) and (\ref{Eq:InteractionEnergy}) and employing linear response theory. The derivative of the ground state energy with respect to an external parameter
gives the average value of the derivative of the action with respect to the
same parameter. From (\ref{Eq:BiasChargeLagrangian}) and
(\ref{Eq:LowTempLimit}) and the time translation invariance in the system, we
obtain that the voltages $V_M$ are given by the derivative of the ground state
energy $E$ with respect to the bias charge. To this end, we shift the bias
charges on the $M$-th island such that the ground state energy
$E(q_0,\delta_M)$ depends on the small shift $\delta_M$.
We then find
\begin{align}
\label{Eq:HellmanFeynman}
\frac{\partial E(q_0+\delta_0,\delta_M)}{\partial
\delta_M}\bigg|_{\delta_0,\delta_M=0}&=\frac{i\hbar}{2e}\langle
\dot\varphi_M\rangle = V_M;
\end{align}
here, in order to determine the voltage expectation values $V_M$ on island
$M$, we have used the Josephson relation $d\varphi_M/dt=2eV_M/\hbar$ together
with the relation $\tau = it$ between real and imaginary time.
\begin{figure}
\includegraphics[scale=1.0]{voltagesBoth}
\caption{\label{Fig:Charges}Double logarithmic plots of the voltage
$V_M$ induced on island $M$ by an offset charge at
distance $M$ for $E_J/E_{C}=5$ and $\varepsilon=0.01$. In panel a), we
show the full result
corresponding to Eq.~\eqref{Eq:SeconOrderVoltage}. The screened voltage
follows a power-law behavior until the screening length is reached at which point
the
exponential screening due to the ground capacitances $C_g$ takes over.
In b) we compare the result obtained by using the power-law approximation
\eqref{Eq:ApproxVoltages} represented by the dashed line with the black
squares representing the full result. It can be
seen
that the decay of the induced voltages follows the inverse-square law essentially starting from $M=2$. }
\end{figure}
The leading contribution to $ E(q_0+\delta_0,\delta_M)$ is given by the
interaction energy
$E_2(q_0+\delta_0,\delta_M)$ of \eqref{eq:e2}. Taking the derivative, we
obtain the result ($\tilde{V}=2\pi\hbar/e\tau_0$)
\begin{align}
\label{Eq:SeconOrderVoltage}
V_M&=\tilde{V} e^{-2\pi\sqrt{2E_J/E_{C}}}
\Bigl[e^{-2U(M)}-e^{2U(M)}\Bigr]\sin(\pi q_0/e),
\end{align}
which can be approximated by the expression
\begin{align}
\label{Eq:ApproxVoltages}
V_M&\approx -4\tilde{V} e^{-2\pi\sqrt{2E_J/E_{C}}} U(M)\sin(\pi q_0/e)
\nonumber\\
& \approx \frac{4 \tilde{V} \alpha
e^{-2\pi\sqrt{2E_J/E_{C}}} \sin(\pi q_0/e)}{M^2} .
\end{align}
By expanding the exponentials in (\ref{Eq:SeconOrderVoltage}) we see that all
even orders cancel, so that there are no corrections until the third order in
$U(M)$. This makes the approximation (\ref{Eq:ApproxVoltages}) accurate already
for $M>\sqrt{2\alpha+1/4}$, where the magnitude of the exponent drops below 1.
Even in the regime of our interest, $E_J\gg E_{C}$, $M$ does not have to be too
large, as $\alpha$ scales only with the square root of $E_J/E_{C}$.
Hence, for intermediate distances (smaller than the screening length
$\varepsilon^{-1}$, larger than $\sqrt{2\alpha+1/4}$) the voltages obey a
universal inverse-square decay given by $U(M)$. In Figure~\ref{Fig:Charges} we show a plot of the decay of the induced voltages. Note that an additional charge can be
treated by adding up the voltage contributions of every single charge.
Deviations from this simple rule require at least a three-particle
interaction energy. Such contributions appear only in three-instanton
diagrams or higher and are thus strongly suppressed. For completeness, we
provide also the expression for the voltage on the $0$-th island
\begin{align}
\label{Eq:FirstOrderVoltage}
V_0=\tilde{V} e^{-\pi\sqrt{2E_J/E_{C}}}\sin(\pi q_0/e),
\end{align}
obtained
from $E\approx E_1(q_0+\delta_0)$.
With the voltages at hand, it is a simple task to determine the charges on all
capacitor plates. From the capacitor relation $Q_M=C(V_M-V_{M+1})$, the
charge $Q_M$ on the capacitor plate to the right of the $M$-th island (relative to
the bias charge) is simply proportional to the voltage difference across
the capacitance. The largest voltage difference can be found between the bias
charge island and its two direct neighbors. Here, we have to take the
difference of the first-order contribution (\ref{Eq:FirstOrderVoltage}) and
the second-order contribution (\ref{Eq:SeconOrderVoltage}), where the latter
is suppressed by $z$. Thus, the two capacitor plates directly attached to the
bias charge island are charged the most. From the second island on, we only
need to take differences of (\ref{Eq:SeconOrderVoltage}). This results in the
power-law decay
\begin{equation}\label{eq:cubic}
Q_M \approx \frac{8 C \tilde{V} \alpha
e^{-2\pi\sqrt{2E_J/E_{C}}} \sin(\pi q_0/e)}{M^3}
\end{equation}
for $1\ll M \ll \varepsilon^{-1}$. The result is thus fundamentally different
from an exponential decay in usual linear screening.
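The power laws derived above are straightforward to verify numerically. The following sketch (our own illustration, using the same parameters as Fig.~\ref{Fig:Charges}; the common prefactor $\tilde{V}e^{-2\pi\sqrt{2E_J/E_{C}}}\sin(\pi q_0/e)$ is set to unity) evaluates the full voltage (\ref{Eq:SeconOrderVoltage}), its approximation (\ref{Eq:ApproxVoltages}), and the plate charges:
\begin{verbatim}
# Induced voltages V_M (full vs. 1/M^2 approximation) and charges Q_M;
# E_J/E_C = 5, eps = 0.01; the overall prefactor is set to unity.
import numpy as np
from scipy.special import k1

ej_ec, eps = 5.0, 0.01
alpha = np.pi * np.sqrt(ej_ec / 8.0)

def U(j):
    # real-space instanton interaction (small-eps closed form)
    return alpha * 2 * eps * j * k1(2 * eps * j) / (0.25 - j**2)

M = np.arange(2, 1001)
V_full = np.exp(-2 * U(M)) - np.exp(2 * U(M))    # Eq. (SeconOrderVoltage)
V_approx = 4 * alpha / M**2                      # Eq. (ApproxVoltages)
Q = -np.diff(V_full)      # Q_M/(C*prefactor) ~ 8*alpha/M^3 for M << 1/eps
\end{verbatim}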
\section{Conclusion}
In this work, we have calculated the effect of Josephson junctions on the
charge screening in the ground state of a one-dimensional chain of
capacitively coupled superconducting islands in the semi-classical limit
$E_J/E_{C}\gg 1$. We have solved the problem of the interacting nonlinear
system by using an instanton approximation within the quantum statistical path
integral approach. To deal with the interactions in the chain, we have
introduced the equal-time action corresponding to the action picked up by the
whole system at a given moment in time. The latter includes spatial
correlations between simultaneous phase slips on different islands. With this
action and a dilute instanton gas approximation, which applies in the regime
of interest, we have mapped the task of solving the quantum system onto a
classical statistical mechanics problem. With a slightly modified Mayer
cluster expansion, supporting the interaction with bias charges at selected
islands of the chain, we have calculated a power series of the ground state
energy $E(q_0,q_M)$ in the number of interacting instantons as a function of
two bias charges $q_0$ and $q_M$. We have calculated the average induced
voltages in the linear response regime and furthermore the induced charges on
the capacitor plates. Compared to the known exponential decay for chains
without the Josephson junctions, we have found that the induced voltages decay
with the inverse square of the distance. This power-law decay is
fundamentally different from the conventional exponential screening. The
effect arises from interacting quantum phase slips across the nonlinear potentials of the Josephson junctions.
\section{Acknowledgments} The authors acknowledge support from the Alexander
von Humboldt foundation and the Deutsche Forschungsgemeinschaft (DFG) under
grant HA 7084/2-1.
\section{Introduction}
\subsubsection{Derandomized LWE.} The learning with errors (LWE) problem~\cite{Reg[05]} is at the basis of multiple cryptographic constructions~\cite{Peikert[16],Hamid[19]}. Informally, LWE requires solving a system of `approximate' linear modular equations. Given positive integers $w$ and $q \geq 2$, an LWE sample is defined as: $(\textbf{a}, b = \langle \textbf{a}, \textbf{s} \rangle + e \bmod q)$, where $\textbf{s} \in \mathbb{Z}^w_q$ and $\textbf{a} \xleftarrow{\; \$ \;} \mathbb{Z}^w_q$. The error term $e$ is sampled randomly, typically from a normal distribution with standard deviation $\alpha q$ where $\alpha = 1/\poly(w)$, after which it is rounded to the nearest integer and reduced modulo $q$. Banerjee et al.~\cite{Ban[12]} introduced a derandomized variant of LWE, called learning with rounding (LWR), wherein instead of adding a random small error, a \textit{deterministically} rounded version of the sample is announced. Specifically, for some positive integer $p < q$, the elements of $\mathbb{Z}_q$ are divided into $p$ contiguous intervals containing (roughly) $q/p$ elements each. The rounding function, defined as: $\lfloor \cdot \rceil_p: \mathbb{Z}_q \rightarrow \mathbb{Z}_p$, maps the given input $x \in \mathbb{Z}_q$ to the index of the interval that $x$ belongs to. An LWR instance is generated as: $(\textbf{a}, \lfloor \langle \textbf{a}, \textbf{s} \rangle \rceil_p)$ for vectors $\textbf{s} \in \mathbb{Z}_q^w$ and $\textbf{a} \xleftarrow{\; \$ \;} \mathbb{Z}_q^w$. For a certain range of parameters, Banerjee et al. proved the hardness of LWR under the LWE assumption. In this work, we propose a new derandomized variant of LWE, called learning with linear regression (LWLR). We reduce the hardness of LWLR to that of LWE for certain choices of parameters.
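For concreteness, the following toy snippet (our own illustration; the parameters $w=8$, $q=257$, $p=16$ are purely illustrative and far too small for any security) generates one LWE sample and the corresponding LWR sample for the same $(\textbf{a}, \textbf{s})$:
\begin{verbatim}
# Toy LWE vs. LWR samples; parameters are illustrative, not secure.
import numpy as np

rng = np.random.default_rng(0)
w, q, p, alpha = 8, 257, 16, 0.01
s = rng.integers(0, q, size=w)               # secret s in Z_q^w
a = rng.integers(0, q, size=w)               # uniform a in Z_q^w
inner = int(a @ s) % q

e = int(np.rint(rng.normal(0, alpha * q)))   # rounded Gaussian error
b_lwe = (inner + e) % q                      # LWE sample (a, b_lwe)
b_lwr = (p * inner) // q                     # LWR: interval index in Z_p
print(b_lwe, b_lwr)
\end{verbatim}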
\subsubsection{Physical Layer Communications and Shared Secret Extraction.} In the OSI model, physical layer consists of the fundamental hardware transmission technologies. It provides electrical, mechanical, and procedural interface to the transmission medium. Physical layer communication between parties has certain inherent characteristics that make it an attractive source of renewable and shared secrecy. Multiple methods to extract secret bits from channel measurements have been explored~\cite{Prem[13],Xiao[08],Zhang[08],Zeng[15],Kepe[15],Jiang[13],Ye[10],Ye[06],Ye[07]}. Papers~\cite{Sheh[15],Poor[17]} survey the notable results in the area. It follows from \textit{channel reciprocity} that two nodes of a physical layer channel obtain identical channel state information. Secrecy of this information follows directly from the \textit{spatial decorrelation} property. Specifically, the channel reciprocity property implies that the signal
distortion (attenuation, delay, phase shift, and fading) is identical in both directions of a link. At the same time, the spatial decorrelation property means that, in rich scattering environments,
receivers located a few wavelengths apart experience uncorrelated channels; this ensures that an eavesdropper separated by at least half a wavelength from the communicating nodes experiences a different channel, and hence makes inaccurate measurements. Both of these properties have been demonstrated to hold in practice~\cite{MarPao[14],ZenZimm[15]}. In this work, we use these two properties to securely generate sufficiently independent yet deterministic errors to derandomize LWE. Specifically, we use the Gaussian errors occurring in physical layer communications to generate special linear regression models that are later used to derandomize LWE.
\subsubsection{Rounded Gaussians.} Using discrete Gaussian elements to hide secrets is a common approach in lattice-based cryptography. The majority of digital methods for generating Gaussian random variables are based on transformations of uniform random variables~\cite{Knuth[97]}. Popular methods include Ziggurat~\cite{Zigg[00]}, inversion~\cite{Invert[03]}, Wallace~\cite{Wallace[96]}, and Box-Muller~\cite{Box[58]}. Sampling discrete Gaussians can also be done by sampling from some continuous Gaussian distribution, followed by rounding the coordinates to nearby integers~\cite{Pie[10],Box[58],Hul[17]}. Using such rounded Gaussians can lead to better efficiency and, in some cases, better security guarantees for lattice-based cryptographic protocols~\cite{Hul[17]}. In our work, we use rounded Gaussian errors that are derived from continuous Gaussians.
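As a minimal sketch of this rounding approach (our own illustration, not the sampler used later in the paper), one can draw a continuous Gaussian via the Box-Muller transform~\cite{Box[58]} and round it to the nearest integer:
\begin{verbatim}
# Rounded Gaussian via Box-Muller; sigma is an illustrative parameter.
import math, random

def rounded_gaussian(sigma):
    u1 = 1.0 - random.random()     # in (0, 1], avoids log(0)
    u2 = random.random()
    z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
    return round(sigma * z)        # round the continuous sample

print([rounded_gaussian(3.0) for _ in range(10)])
\end{verbatim}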
\subsubsection{Key-homomorphic PRFs.} In a pseudorandom function (PRF) family~\cite{Gold[86]}, each function is specified by a key such that it can be evaluated deterministically given the key but behaves like a random function without the key. For a PRF $F_k$, the index $k$ is called its key or seed. A PRF family $F$ is called key-homomorphic if the set of keys has a group structure and if there is an efficient algorithm that, given $F_{k_1}(x)$ and $F_{k_2}(x)$, outputs $F_{k_1 \oplus k_2}(x)$, where $\oplus$ is the group operation~\cite{Naor[99]}. Multiple key-homomorphic PRF families have been constructed via varying approaches~\cite{Naor[99],Boneh[13],Ban[14],Parra[16],SamK[20],Navid[20]}. In this work, we introduce and construct an extended version of key-homomorphic PRFs, called star-specific key-homomorphic (SSKH) PRFs, which are defined for settings wherein parties constructing the PRFs are part of an interconnection network that can be (re)arranged as a graph comprised of only (undirected) star graphs with restricted vertex intersections. An undirected star graph $S_k$ can be defined as a tree with one internal node and $k$ leaves. \Cref{FigStar} depicts an example star graph, $S_7$, with seven leaves.
\begin{figure}[h!]
\centering
\stargraph{7}{2}
\caption{An Example Star Graph, $S_7$}\label{FigStar}
\end{figure}
Henceforth, we use the terms star and star graph interchangeably.
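To make the key-homomorphic property concrete, the following toy sketch (entirely our own illustration, in the spirit of LWR-based constructions such as~\cite{Ban[14]}; it is \textit{not} the SSKH PRF construction given later, and all parameters are illustrative) realizes an ``almost'' key-homomorphic function $F_k(x)=\lfloor \langle \mathbf{a}(x), k \rangle \rceil_p$, for which $F_{k_1}(x)+F_{k_2}(x)$ equals $F_{k_1 \oplus k_2}(x)$ up to a small rounding error:
\begin{verbatim}
# Toy "almost key-homomorphic" function; parameters are illustrative.
import hashlib

q, p, w = 2**16, 2**8, 16

def a_of_x(x: bytes):
    # hash-derived public vector a(x); a stand-in for a real public map
    h = hashlib.shake_128(x).digest(2 * w)
    return [int.from_bytes(h[2*i:2*i + 2], 'little') for i in range(w)]

def F(k, x: bytes) -> int:
    inner = sum(a * b for a, b in zip(a_of_x(x), k)) % q
    return (p * inner) // q      # deterministic rounding into Z_p

k1, k2, x = [3] * w, [5] * w, b"input"
ksum = [(u + v) % q for u, v in zip(k1, k2)]
# equal up to a possible +-1 rounding error (mod p):
print(F(ksum, x), (F(k1, x) + F(k2, x)) % p)
\end{verbatim}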
\subsubsection{Cover-free Families with Restricted Intersections.} Cover-free families were first defined by Kautz and Singleton~\cite{Kautz[64]} in 1964 --- as superimposed binary codes. They were motivated by the study of binary codes wherein the disjunctions (Boolean sums) of any at most $r~(\geq 2)$ codewords are all distinct. In the early 1980s, cover-free families were studied in the context of group testing \cite{BushFed[84]} and information theory \cite{Ryk[82]}. Erd\"{o}s et al. called the corresponding set systems $r$-cover-free and studied their cardinality for $r=2$~\cite{PaulFrankl[82]} and $r < n$~\cite{PaulFrankl[85]}.
\begin{definition}[$r$-cover-free Families~\cite{PaulFrankl[82],PaulFrankl[85]}]
\emph{We say that a family of sets $\mathcal{H} = \{H_i\}_{i=1}^\alpha$ is $r$-cover-free for some integer $r < \alpha$ if there exists no $H_i \in \mathcal{H}$ such that:
\[H_i \subseteq \bigcup_{H_j \in \mathcal{H}^{(r)}} H_j, \]
where $\mathcal{H}^{(r)} \subseteq \mathcal{H} \setminus \{H_i\}$ is any subset of cardinality $r$.}
\end{definition}
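A brute-force check of this definition may help to fix ideas; the following helper (our own, purely illustrative) tests whether a small family of sets is $r$-cover-free:
\begin{verbatim}
# Brute-force r-cover-free test for small families of sets.
from itertools import combinations

def is_r_cover_free(H, r):
    for i, Hi in enumerate(H):
        others = [Hj for j, Hj in enumerate(H) if j != i]
        for group in combinations(others, r):
            if Hi <= set().union(*group):
                return False  # Hi is covered by r other sets
    return True

print(is_r_cover_free([{1, 2}, {2, 3}, {3, 4}, {1, 4}], 2))  # False
\end{verbatim}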
Cover-free families have found many applications in cryptography and communications, including blacklisting~\cite{RaviSri[99]}, broadcast encryption~\cite{CanGara[99],Garay[00],DougR[97],DougRTran[98]}, anti-jamming~\cite{YvoSafavi[99]}, source authentication in networks~\cite{Safavi[99]}, group key predistribution~\cite{ChrisFred[88],Dyer[95],DougRTran[98],DougRTranWei[00]}, compression schemes \cite{Thasis[19]}, fault-tolerant signatures \cite{Gunnar[16],Bardini[21]}, frameproof/traceability codes~\cite{Staddon[01],WeiDoug[98]}, traitor tracing \cite{DonTon[06]}, batch signature verification \cite{Zave[09]}, and one-time and multiple-times digital signature schemes \cite{Josef[03],GM[11]}.
In this work, we initiate the study of new variants of $r$-cover-free families. The motivation behind exploring this direction is to compute the maximum number of SSKH PRFs that can be constructed by overlapping sets of parties. We prove various bounds on the novel variants of $r$-cover-free families and later use them to establish the maximum number of SSKH PRFs that can be constructed by overlapping sets of parties in the presence of active/passive and internal/external adversaries.
\subsection{Our Contributions}
\subsubsection{Cryptographic Contributions.}
We know that physical layer communications over Gaussian channels introduce independent Gaussian errors. Therefore, it is logical to wonder whether we can use some processed form of those Gaussian errors to generate deterministic --- yet sufficiently independent --- errors to derandomize LWE. Such an ability would have direct applications to use-cases wherein LWR is used to realize derandomized LWE. Our algorithm to derandomize LWE uses physical layer communications as the training data for linear regression analysis, whose (optimal) hypothesis is used to generate deterministic errors belonging to a truncated Gaussian distribution. We round the resulting errors to the nearest integer, hence moving to a rounded Gaussian distribution, which is reduced modulo the LWE modulus to generate the desired errors. It is worth mentioning that many hardness proofs for LWE, including Regev's initial proof~\cite{Reg[05]}, used an analogous approach --- but without the linear regression component --- to sample random ``LWE errors''~\cite{Reg[05],Albre[13],Gold[10],Duc[15],Hul[17]}. We call our derandomized variant of LWE: learning with linear regression (LWLR). Under certain parameter choices, we prove that LWLR is as hard as LWE. After establishing the theoretical validity of our idea, we demonstrate its practicality via experiments.
We introduce a new class of PRFs, called star-specific key-homomorphic (SSKH) PRFs, which are key-homomorphic PRFs that are defined by the sets of parties that construct them. In our construction, the sets of parties are arranged as star graphs wherein the leaves represent the parties and edges denote communication channels between them. For instance, an SSKH PRF $F^{(\partial_i)}_k$ is unique to the set/star of parties, $\partial_i$, that constructs it, i.e., $\forall i \neq j: F^{(\partial_i)}_k \neq F^{(\partial_j)}_k$. As an example application of LWLR, we replace LWR with LWLR in the LWR-based key-homomorphic PRF construction from \cite{Ban[14]} to construct the first SSKH PRF family. Due to their conflicting goals, statistical inference and cryptography are almost duals of each other. Given some data, statistical inference aims to identify the distribution that they belong to, whereas in cryptography, the central aim is to design a distribution that is hard to predict. Interestingly, in our work, we use statistical inference to construct a novel cryptographic tool. In addition to all known applications of key-homomorphic PRFs --- as given in \cite{Boneh[13],Miranda[21]} --- our SSKH PRF family also allows collaborating parties to securely generate pseudorandom nonces/seeds without relying on any pre-provisioned secrets.
\subsubsection{Mutual Information between Linear Regression Models.}
To quantify the relation between different SSKH PRFs, we examine the mutual information between linear regression hypotheses that are generated via (training) datasets with overlapping data points. A higher mutual information translates into a stronger relation between the corresponding SSKH PRFs that are generated by using those linear regression hypotheses. The following text summarizes the main result that we prove in this context.
Suppose, for $i=1,2,\ldots,\ell$, we have:
$$y_i\sim \mathcal{N}(\alpha+\beta x_i,\sigma^2)\quad\text{and}\quad z_i\sim \mathcal{N}(\alpha+\beta w_i,\sigma^2),$$
with $x_i=w_i$ for $i=1,\ldots,a$. Let $h_1(x)=\hat{\alpha}_1 x+\hat{\beta}_1$ and $h_2(w)=\hat{\alpha}_2 w+\hat{\beta}_2$ be the linear regression hypotheses obtained from the samples $(x_i,y_i)$ and $(w_i,z_i)$, respectively.
\begin{theorem}\label{MutualThm}
The mutual information between $(\hat{\alpha_1},\hat{\beta_1})$ and $(\hat{\alpha_2},\hat{\beta_2})$ is:
\begin{align*}
&\ -\frac{1}{2}\log\left(1-\frac{\left(\ell C_2-2C_1X_1+aX_2\right)\left(\ell C_2-2C_1W_1+aW_2\right)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}\right. \\
&\qquad\left.+\frac{\left((a-1)C_2-C_3\right)\left((a-1)C_2-C_3+\ell(X_2+W_2)-2X_1W_1\right)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}\right),
\end{align*}
where $X_1=\sum_{i=1}^{\ell} x_i$, $X_2=\sum_{i=1}^{\ell} x_i^2$, $W_1=\sum_{i=1}^{\ell} w_i$, $W_2=\sum_{i=1}^{\ell} w_i^2$, $C_1=\sum_{i=1}^a x_i=\sum_{i=1}^a w_i$, $C_2=\sum_{i=1}^a x_i^2=\sum_{i=1}^a w_i^2$ and $C_3=\sum_{i=1}^{\ell}\sum_{j=1,j\neq i}^{\ell}x_ix_j$.
\end{theorem}
\subsubsection{New Bounds on $t$-intersection Maximally Cover Free Families.}
Since we use physical layer communications to generate deterministic LWE instances, a large enough overlap among different sets of devices can lead to reduced collective and conditional entropy for the different SSKH PRFs constructed by those sets.
We say that a set system $\mathcal{H}$ is (i) $k$-uniform if: $\forall A \in \mathcal{H}: |A| = k$, (ii) at most $t$-intersecting if: $\forall A, B \in \mathcal{H}, B \neq A: |A \cap B| \leq t$.
\begin{definition}[Maximally Cover-free Families]
\emph{A family of sets $\mathcal{H}$ is \textit{maximally} cover-free if it holds that:
\[\forall A \in \mathcal{H}: A \not\subseteq \bigcup\limits_{\substack{B \in \mathcal{H} \\ B \neq A}} B.\]}
\end{definition}
It follows trivially that if the sets of parties --- each of which is arranged as a star --- belong to a maximally cover-free family, then no SSKH PRF can have zero conditional entropy since each set of parties must have at least one member/device that is exclusive to it. We know from \Cref{MutualThm} that based on the overlap in training data, we can compute the mutual information between different linear regression hypotheses. Since the training dataset for a set of parties performing linear regression analysis is simply their mutual communications, it follows that the mutual information between any two SSKH PRFs increases with the overlap between the sets of parties that construct them. Hence, given a maximum mutual information threshold, \Cref{MutualThm} can be used to compute the maximum acceptable overlap between different sets of parties. To establish the maximum number of SSKH PRFs that can be constructed by such overlapping sets, we revisited cover-free families. Based on our requirements, we focused on the following two cases:
\begin{itemize}
\item $\mathcal{H}$ is at most $t$-intersecting and $k$-uniform,
\item $\mathcal{H}$ is maximally cover-free, at most $t$-intersecting and $k$-uniform.
\end{itemize}
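Both properties listed above, together with maximal cover-freeness, can be verified directly on small examples; the following helpers (our own, purely illustrative) implement the three checks:
\begin{verbatim}
# Direct checks of the set-system properties for small families.
from itertools import combinations

def is_k_uniform(H, k):
    return all(len(A) == k for A in H)

def is_at_most_t_intersecting(H, t):
    return all(len(A & B) <= t for A, B in combinations(H, 2))

def is_maximally_cover_free(H):
    # every set must contain an element exclusive to it
    return all(any(all(x not in B for B in H if B is not A) for x in A)
               for A in H)
\end{verbatim}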
We derive multiple bounds on the size of $\mathcal{H}$ for both these cases and later use them to establish the maximum number of SSKH PRFs that can be constructed securely against active/passive and internal/external adversaries. We mention our central results here.
\begin{theorem}\label{MainThm1}
Let $k,t\in\mathbb{Z}^+$, and $C<1$ be any positive real number.
\begin{enumerate}[label = {(\roman*)}]
\item Suppose $t<k-1$. Then, for all sufficiently large $N$, the maximum size $\nu(N,k,t)$ of a maximally cover-free, at most $t$-intersecting and $k$-uniform family $\mathcal{H}\subseteq 2^{[N]}$ satisfies
$$CN\leq\nu(N,k,t)<N.$$
\item Suppose $t<k$. Then, for all sufficiently large $n$, the maximum size $\varpi(n,k,t)$ of an at most $t$-intersecting and $k$-uniform family $\mathcal{H}\subseteq 2^{[n]}$ satisfies
$$\frac{Cn^{t+1}}{k(k-1)\cdots(k-t)}\leq\varpi(n,k,t)<\frac{n^{t+1}}{k(k-1)\cdots(k-t)}.$$
\end{enumerate}
In particular, $\nu(N,k,t)\sim N$ and $\varpi(n,k,t)\sim\frac{n^{t+1}}{k(k-1)\cdots(k-t)}$.
\end{theorem}
We also provide an explicit construction for at most $t$-intersecting and $k$-uniform set systems.
\subsubsection{Maximum Number of SSKH PRFs.}
We use the results from \Cref{MainThm1,MutualThm} to derive the maximum number, $\zeta$, of SSKH PRFs that can be constructed against various adversaries (modeled as probabilistic polynomial-time Turing machines). To outline, we prove the following:
\begin{itemize}
\item For an external/eavesdropping adversary with oracle access to the SSKH PRF family, we get:
\[\zeta \sim \dfrac{n^k}{k!}.\]
\item For non-colluding semi-honest parties, we get:
\[\zeta \geq Cn,\]
where $C<1$ is a positive real number.
\end{itemize}
We also establish the ineffectiveness of the man-in-the-middle attack against our SSKH PRF construction.
\subsection{Organization}
The rest of the paper is organized as follows: Section~\ref{Sec2} recalls the concepts and constructs that are relevant to our solutions and constructions. Section~\ref{Sec3} reviews the related work. Section~\ref{Sec4} gives a formal definition of SSKH PRFs. We prove various bounds on (maximally cover-free) at most $t$-intersecting $k$-uniform families in \Cref{Extremal}. In Section~\ref{Sec6}, we present our protocol for generating rounded Gaussian errors from physical layer communications. The section also discusses the implementation, simulation, test results, error analysis, and complexity for our protocol. In \Cref{Mutual}, we analyze the mutual information between different linear regression hypotheses that are generated over overlapping training datasets. In \Cref{LWLRsec}, we define LWLR and generate LWLR instances. In the same section, we reduce the hardness of LWLR to that of LWE. In Section~\ref{Sec7}, we use LWLR to adapt the key-homomorphic PRF construction from~\cite{Ban[14]} to construct the first star-specific key-homomorphic PRF family, and prove its security under the hardness of LWLR (and hence LWE). In the same section, we use our results from \Cref{Extremal,Mutual} to establish the maximum number of SSKH PRFs that can be constructed by a given set of parties in the presence of active/passive and external/internal adversaries. Section~\ref{Sec8} gives the conclusion.
\section{Preliminaries}\label{Sec2}
For a positive integer $n$, let: $[n] = \{1, \dots, n\}.$ As mentioned earlier, we use the terms star and star graph interchangeably. For a vector $\mathbf{v}= (v_1, v_2, \ldots, v_w) \in \mathbb{R}^w$, the Euclidean and infinity norms are defined as: $||\mathbf{v}|| = \sqrt{\sum_{i=1}^w v_i^2}$ and $||\mathbf{v}||_\infty = \max(|v_1|, |v_2|, \ldots, |v_w|),$ respectively. In this text, vectors and matrices are denoted by bold lower case letters and bold upper case letters, respectively.
\subsection{Entropy}
The concept of entropy was originally introduced as a thermodynamic construct by Rankine in 1850 \cite{True[80]}. It was later adapted to information theory by Shannon \cite{Shannon[48]}, where it denotes a measure of the uncertainty associated with a random variable, i.e., (information) entropy is defined as a measure of the average information content that is missing when the value of a random variable is not known.
\begin{definition}
\emph{For a finite set $S = \{s_1, s_2, \ldots, s_n\}$ with probabilities $p_1, p_2, \ldots, p_n$, the entropy of the probability distribution over $S$ is defined as:
\[H(S) = \sum\limits_{i=1}^n p_i \log \dfrac{1}{p_i}. \]}
\end{definition}
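To make the definition concrete, the following Python sketch (purely illustrative; the function name and example distributions are our own) computes $H(S)$ in bits, i.e., with the logarithm taken to base $2$:
\begin{verbatim}
import math

def entropy(probs):
    # H(S) = sum_i p_i * log2(1/p_i); zero-probability outcomes contribute 0
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))  # 1.0 bit: a fair coin is maximally uncertain
print(entropy([0.9, 0.1]))  # ~0.469 bits: a biased coin is more predictable
\end{verbatim}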
\subsection{Lattices}
A lattice $\mathrm{\Lambda}$ of $\mathbb{R}^w$ is defined as a discrete subgroup of $\mathbb{R}^w$. In cryptography, we are interested in integer lattices, i.e., $\mathrm{\Lambda} \subseteq \mathbb{Z}^w$. Given $w$ linearly independent vectors $\textbf{b}_1,\dots,\textbf{b}_w \in \mathbb{R}^w$, a basis of the lattice generated by them can be represented as the matrix $\mathbf{B} = (\textbf{b}_1,\dots,\textbf{b}_w) \in \mathbb{R}^{w \times w}$. The lattice generated by $\mathbf{B}$ is the following set of vectors:
\[\mathrm{\Lambda}(\textbf{B}) = \left\{ \sum\limits_{i=1}^w c_i \textbf{b}_i: c_i \in \mathbb{Z} \right\}.\]
Historically, lattices received attention from illustrious mathematicians, including Lagrange, Gauss, Dirichlet, Hermite, Korkine-Zolotareff, and Minkowski (see \cite{Laga[73],Gauss[81],Herm[50],Kork[73],Mink[10],JacSte[98]}). Problems in lattices have been of interest to cryptographers since 1997, when Ajtai and Dwork~\cite{Ajtai[97]} proposed a lattice-based public key cryptosystem following Ajtai's~\cite{Ajtai[96]} seminal worst-case to average-case reductions for lattice problems. In lattice-based cryptography, \textit{q-ary} lattices are of particular interest; they satisfy the following condition: $$q \mathbb{Z}^w \subseteq \mathrm{\Lambda} \subseteq \mathbb{Z}^w,$$ for some (possibly prime) integer $q$. In other words, the membership of a vector $\textbf{x}$ in $\mathrm{\Lambda}$ is determined by $\textbf{x}\bmod q$. Given a matrix $\textbf{A} \in \mathbb{Z}^{w \times n}_q$ for some integers $q, w, n,$ we can define the following two $n$-dimensional \textit{q-ary} lattices,
\begin{align*}
\mathrm{\Lambda}_q(\textbf{A}) &= \{\textbf{y} \in \mathbb{Z}^n: \textbf{y} = \textbf{A}^T\textbf{s} \bmod q \text{ for some } \textbf{s} \in \mathbb{Z}^w \}, \\
\mathrm{\Lambda}_q^{\perp}(\textbf{A}) &= \{\textbf{y} \in \mathbb{Z}^n: \textbf{Ay} = \textbf{0} \bmod q \}.
\end{align*}
The first \textit{q-ary} lattice is generated by the rows of $\textbf{A}$; the second contains all vectors that are orthogonal (modulo $q$) to the rows of $\textbf{A}$. Hence, the first \textit{q-ary} lattice, $\mathrm{\Lambda}_q(\textbf{A})$, corresponds to the code generated by the rows of $\textbf{A}$ whereas the second, $\mathrm{\Lambda}_q^{\perp}(\textbf{A})$, corresponds to the code whose parity check matrix is $\textbf{A}$. For a complete introduction to lattices, we refer the interested reader to the monographs by Gr\"{a}tzer~\cite{Gratzer[03],Gratzer[09]}.
\subsection{Gaussian Distributions}
Gaussian sampling is an extremely useful tool in lattice-based cryptography. Introduced by Gentry et al. \cite{Gentry[08]}, Gaussian sampling takes a short basis $\textbf{B}$ of a lattice $\mathrm{\Lambda}$ and an arbitrary point $\textbf{v}$ as inputs and outputs a point from a Gaussian distribution discretized on the lattice points and centered at $\textbf{v}$. Gaussian sampling does not leak any information about the lattice $\mathrm{\Lambda}$. It has been used directly to construct multiple cryptographic schemes, including hierarchical identity-based encryption \cite{AgDan[10],CashDenn[10]}, standard model signatures \cite{AgDan[10],XavBoy[10]}, and attribute-based encryption \cite{DanSer[14]}. In addition, the Gaussian distribution and Gaussian sampling play an important role in other hard lattice problems, such as learning single periodic neurons \cite{SongZa[21]}, and have direct connections to standard lattice problems \cite{DivDan[15],NoDan[15],SteDavid[15]}.
\begin{definition}
\emph{A continuous Gaussian distribution, $\mathcal{N}^w(\textbf{v},\sigma^2)$, over $\mathbb{R}^w$, centered at some $\mathbf{v} \in \mathbb{R}^w$ with standard deviation $\sigma$ is defined for $\textbf{x} \in \mathbb{R}^w$ as the following density function:
\[\mathcal{N}_\textbf{x}^w(\textbf{v},\sigma^2) = \left( \dfrac{1}{\sqrt{2 \pi \sigma^2}} \right)^w \exp \left(\frac{-||\textbf{x} - \textbf{v}||^2}{2 \sigma^2}\right). \]}
\end{definition}
A rounded Gaussian distribution can be obtained by simply rounding the samples from a continuous Gaussian distribution to their nearest integers. Rounded Gaussians have been used to establish hardness of LWE \cite{Reg[05],Albre[13],Gold[10],Duc[15]} --- albeit not as frequently as discrete Gaussians.
\begin{definition}[Adapted from \cite{Hul[17]}]\label{roundGauss}
\emph{A rounded Gaussian distribution, $\mathrm{\Psi}^w(\textbf{v},\hat{\sigma}^2)$, over $\mathbb{Z}^w$, centered at some $\textbf{v} \in \mathbb{Z}^w$ with parameter $\sigma$ is defined for $\textbf{x} \in \mathbb{Z}^w$ as:
\[\mathrm{\Psi}^w_\textbf{x}(\textbf{v},\hat{\sigma}^2) = \int_{A_\textbf{x}} \mathcal{N}_\textbf{s}^w(\textbf{v},\sigma^2)\,d\textbf{s} = \int_{A_\textbf{x}} \left( \dfrac{1}{\sqrt{2 \pi \sigma^2}} \right)^w \exp\left( \dfrac{-||\textbf{s} - \textbf{v}||^2}{2 \sigma^2} \right)\,d\textbf{s}, \]
where $A_\textbf{x}$ denotes the region $\prod_{i=1}^{w} [x_i - \frac{1}{2}, x_i + \frac{1}{2})$; $\hat{\sigma}$ and $\sigma$ are the standard deviations of the rounded Gaussian and its underlying continuous Gaussian, respectively, such that: $\hat{\sigma} = \sqrt{\sigma^2 + 1/12}$.}
\end{definition}
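As a sanity check on \Cref{roundGauss}, the following Python sketch (illustrative only; helper names and parameters are our own) samples a one-dimensional rounded Gaussian by rounding continuous Gaussian samples, and empirically confirms the relation $\hat{\sigma} = \sqrt{\sigma^2 + 1/12}$:
\begin{verbatim}
import math
import random

def rounded_gaussian_sample(v, sigma):
    # Round each coordinate of a continuous Gaussian sample (center v) to
    # the nearest integer; ties have probability zero and can be ignored.
    return [round(random.gauss(v_i, sigma)) for v_i in v]

sigma, n = 3.0, 200_000
xs = [rounded_gaussian_sample([0], sigma)[0] for _ in range(n)]
mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n
print(math.sqrt(var), math.sqrt(sigma**2 + 1/12))  # both ~3.014
\end{verbatim}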
\begin{definition}[Gaussian channel]\label{Gauss}
\emph{A Gaussian channel is a discrete-time channel with input $x_i$ and output $y_i = x_i + \varepsilon_i$, where $\varepsilon_i$ is drawn i.i.d. from a Gaussian distribution $\mathcal{N}(0, \sigma^2)$, with mean 0 and standard deviation $\sigma$, which is assumed to be independent of the signal $x_i$.}
\end{definition}
\begin{definition}[Discrete Gaussian over Lattices]
\emph{Given a lattice $\mathrm{\Lambda} \subseteq \mathbb{Z}^w$, the discrete Gaussian distribution over $\mathrm{\Lambda}$ with standard deviation $\sigma \in \mathbb{R}$ and center $\textbf{v} \in \mathbb{R}^w$ is defined as:
\[D(\mathrm{\Lambda},\textbf{v},\sigma^2)_\textbf{x} = \dfrac{\rho_\textbf{x}(\textbf{v},\sigma^2)}{\rho_\mathrm{\Lambda}(\textbf{v},\sigma^2)};\ \forall \textbf{x} \in \mathrm{\Lambda},\]
where $\rho_\mathrm{\Lambda}(\textbf{v}, \sigma^2) = \sum_{\textbf{x}_i \in \mathrm{\Lambda}} \rho_{\textbf{x}_i}(\textbf{v},\sigma^2)$
and $$\rho_\textbf{x}(\textbf{v},\sigma^2) = \exp\left( -\pi \dfrac{||\textbf{x} - \textbf{v}||_{GS}^2}{\sigma^2}\right)$$
and $||\cdot||_{GS}$ denotes the Gram-Schmidt norm. }
\end{definition}
The smoothing parameter is defined as a measure of the ``difference'' between a discrete Gaussian and a continuous Gaussian that are defined over identical parameters. Informally, it is the smallest $\sigma$ required by a discrete Gaussian distribution, over a lattice $\mathrm{\Lambda}$, to behave like a continuous Gaussian --- up to some acceptable statistical error. For more details, see \cite{MicReg[04],DO[07],ChungD[13]}.
\begin{theorem}[Drowning/Smudging \cite{MartDeo[17],Gold[10],Dodis[10]}]
Let $\sigma > 0$ and $y \in \mathbb{Z}$. The statistical distance between $\mathrm{\Psi}(v,\sigma^2)$ and $\mathrm{\Psi}(v,\sigma^2) + y$ is at most $|y|/\sigma$.
\end{theorem}
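The bound above can be checked numerically. The following Python sketch (a one-dimensional example; all helper names are our own) computes the exact probability mass function of $\mathrm{\Psi}(v,\sigma^2)$ via the Gaussian CDF and estimates the statistical distance to its shift by $y$; the observed distance stays below the stated bound $|y|/\sigma$:
\begin{verbatim}
import math

def phi(z):
    # standard normal CDF
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def pmf(x, v, sigma):
    # Psi(v, sigma^2) gives x the Gaussian mass of [x - 1/2, x + 1/2)
    return phi((x + 0.5 - v) / sigma) - phi((x - 0.5 - v) / sigma)

def stat_dist(v, sigma, y, support=500):
    # 0.5 * L1-distance between Psi(v, sigma^2) and its shift by y
    return 0.5 * sum(abs(pmf(x, v, sigma) - pmf(x - y, v, sigma))
                     for x in range(v - support, v + support + 1))

v, sigma, y = 0, 50.0, 3
print(stat_dist(v, sigma, y), abs(y) / sigma)  # ~0.024 <= 0.06
\end{verbatim}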
Let $X_1, X_2, \ldots, X_n$ be i.i.d. random variables, i.e., all $X_i$'s are drawn independently from the same distribution, with mean $\mu$ and standard deviation $\sigma$. Let the random variable $\overline{X}_n$ denote the average of $X_1, \ldots, X_n$. Then, the following theorem holds.
\begin{theorem}[Strong Law of Large Numbers]
$\overline{X}_n$ converges almost surely to $\mu$ as $n\rightarrow\infty$.
\end{theorem}
\subsection{Learning with Errors}\label{LWE}
The learning with errors (LWE) problem~\cite{Reg[05]} is at the center of the majority of lattice-based cryptographic constructions~\cite{Peikert[16]}. LWE is known to be hard based on the worst-case hardness of standard lattice problems such as GapSVP (decision version of the Shortest Vector Problem) and SIVP (Shortest Independent Vectors Problem)~\cite{Reg[05],Pei[09]}. Multiple variants of LWE such as ring LWE~\cite{Reg[10]}, module LWE~\cite{Ade[15]}, cyclic LWE~\cite{Charles[20]}, continuous LWE~\cite{Bruna[20]}, \textsf{PRIM LWE}~\cite{SehrawatVipin[21]}, middle-product LWE~\cite{Miruna[17]}, group LWE~\cite{NicMal[16]}, entropic LWE \cite{ZviVin[16]}, universal LWE \cite{YanHua[22]}, and polynomial-ring LWE~\cite{Damien[09]} have been developed since 2010. Many cryptosystems rely on the hardness of LWE, including (identity-based, leakage-resilient, fully homomorphic, functional, public-key/key-encapsulation, updatable, attribute-based, inner product, predicate) encryption~\cite{AnaFan[19],KimSam[19],WangFan[19],Reg[05],Gen[08],Adi[09],Reg[10],Shweta[11],Vinod[11],Gold[13],Jan[18],Hayo[19],Bos[18],Bos[16],WBos[15],Brak[14],Fan[12],Joppe[13],Adriana[12],Lu[18],AndMig[22],MiaSik[22],Boneh[13],Vipin[19],LiLi[22],RaviHow[22],ShuiTak[20],SerWe[15]}, oblivious transfer~\cite{Pei[08],Dott[18],Quach[20]}, (blind) signatures~\cite{Gen[08],Vad[09],Markus[10],Vad[12],Tesla[20],Dili[17],FALCON[20]}, PRFs with special algebraic properties~\cite{Ban[12],Boneh[13],Ban[14],Ban[15],Zvika[15],Vipin[19],KimDan[17],RotBra[17],RanChen[17],KimWu[17],KimWu[19],Qua[18],KevAna[14],SinShi[20],BanDam[18],PeiShi[18]}, verifiable/homomorphic/function secret sharing~\cite{SehrawatVipin[21],GHL[21],Boy[17],DodHal[16],GilLin[17],LisPet[19]}, hash functions~\cite{Katz[09],Pei[06]}, secure matrix multiplication computation~\cite{Dung[16],Wang[17]}, verifiable quantum computations~\cite{Urmila[18],Bra[21],OrNam[21],ZhenAlex[21]}, noninteractive zero-knowledge proof system for (any) NP language~\cite{Sina[19]}, classically verifiable quantum computation~\cite{Urmila[18]}, certifiable randomness generation \cite{Bra[21]}, obfuscation~\cite{Huijia[16],Gentry[15],Hal[17],ZviVin[16],AnanJai[16],CousinDi[18]}, multilinear maps \cite{Grg[13],Gentry[15],Gu[17]}, lossy-trapdoor functions \cite{BellKil[12],PeiW[08],HoWee[12]}, quantum homomorphic encryption \cite{Mahadev[18]}, key exchange \cite{Cost[15],Alkim[16],StebMos[16]}, and many more~\cite{Peikert[16],JiaZhen[20],KatzVadim[21]}.
\begin{definition}[Decision-LWE \cite{Reg[05]}]\label{defLWE}
\emph{For positive integers $w$ and $q \geq 2$, and an error (probability) distribution $\chi$ over $\mathbb{Z}$, the decision-LWE${}_{w, q, \chi}$ problem is to distinguish between the following pairs of distributions:
\[((\textbf{a}_i, \langle \textbf{a}_i, \textbf{s} \rangle + e_i \bmod q))_i \quad \text{and} \quad ((\textbf{a}_i, u_i))_i,\]
where $i \in [\poly(w)], \textbf{a}_i \xleftarrow{\; \$ \;} \mathbb{Z}^{w}_q, \textbf{s} \in \mathbb{Z}^w_q, e_i \xleftarrow{\; \$ \;} \chi,$ and $u_i \xleftarrow{\; \$ \;} \mathbb{Z}_q$.}
\end{definition}
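For intuition, the following Python sketch (toy, insecure parameters; all names are our own) produces decision-LWE samples with rounded Gaussian errors next to uniformly random pairs; distinguishing the two streams is exactly the decision-LWE${}_{w,q,\chi}$ problem:
\begin{verbatim}
import random

def lwe_sample(s, q, sigma):
    # one sample (a, <a, s> + e mod q) with rounded Gaussian error e
    a = [random.randrange(q) for _ in range(len(s))]
    e = round(random.gauss(0, sigma))
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

def uniform_sample(w, q):
    return [random.randrange(q) for _ in range(w)], random.randrange(q)

q, w, sigma = 3329, 8, 2.0
s = [random.randrange(q) for _ in range(w)]
print(lwe_sample(s, q, sigma))
print(uniform_sample(w, q))
\end{verbatim}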
Regev~\cite{Reg[05]} showed that for certain noise distributions and a sufficiently large $q$, the LWE problem is as hard as the worst-case SIVP and GapSVP under a quantum reduction (see~\cite{Pei[09],Zvika[13],ChrisOded[17]} for other reductions). Standard instantiations of LWE assume $\chi$ to be a rounded or discrete Gaussian distribution. Regev's proof requires $\alpha q \geq 2 \sqrt{w}$ for ``noise rate'' $\alpha \in (0,1)$. These results were extended by Applebaum et al.~\cite{Benny[09]} to show that the fixed secret $\textbf{s}$ can be sampled from a low norm distribution. Specifically, they showed that sampling $\textbf{s}$ from the noise distribution $\chi$ does not weaken the hardness of LWE. Later, Micciancio and Peikert discovered that a simple low-norm distribution also works as $\chi$~\cite{Micci[13]}.
\subsection{Pseudorandom Functions}
In a pseudorandom function (PRF) family~\cite{Gold[86]}, each function is specified by a key such that, with the key, the function can be evaluated efficiently and deterministically, whereas without the key, it behaves like a random function. Here, we recall the formal definition of a PRF family. Recall that an ensemble of probability distributions is a sequence $\{X_n\}_{n \in \mathbb{N}}$ of probability distributions.
\begin{definition}[Negligible Function]\label{Neg}
\emph{For security parameter $\L$, a function $\eta(\L)$ is called \textit{negligible} if for all $c > 0$, there exists a $\L_0$ such that $\eta(\L) < 1/\L^c$ for all $\L > \L_0$.}
\end{definition}
\begin{definition}[Computational Indistinguishability~\cite{Gold[82]}]
\emph{Let $X = \{X_\lambda\}_{\lambda \in \mathbb{N}}$ and $Y = \{Y_\lambda\}_{\lambda \in \mathbb{N}}$ be ensembles, where $X_\lambda$'s and $Y_\lambda$'s are probability distributions over $\{0,1\}^{\kappa(\lambda)}$ for $\lambda \in \mathbb{N}$ and some polynomial $\kappa(\lambda)$. We say that $\{X_\lambda\}_{\lambda \in \mathbb{N}}$ and $\{Y_\lambda\}_{\lambda \in \mathbb{N}}$ are polynomially/computationally indistinguishable if the following holds for every (probabilistic) polynomial-time algorithm $\mathcal{D}$ and all $\lambda \in \mathbb{N}$:
\[\Big| \Pr[t \leftarrow X_\lambda: \mathcal{D}(t) = 1] - \Pr[t \leftarrow Y_\lambda: \mathcal{D}(t) = 1] \Big| \leq \eta(\lambda),\]
where $\eta$ is a negligible function.}
\end{definition}
\begin{remark}[Perfect Indistinguishability]
We say that $\{X_\lambda\}_{\lambda \in \mathbb{N}}$ and $\{Y_\lambda\}_{\lambda \in \mathbb{N}}$ are perfectly indistinguishable if the following holds for all $t$:
\[\Pr[t \leftarrow X_\lambda] = \Pr[t \leftarrow Y_\lambda].\]
\end{remark}
We consider adversaries interacting as part of probabilistic experiments called games. For an adversary $\mathcal{A}$ and two games $\mathfrak G_1, \mathfrak G_2$ with which it can interact, $\mathcal{A}$'s distinguishing advantage is:
\[Adv_{\mathcal{A}}(\mathfrak{G}_1, \mathfrak{G}_2) := \Big|\Pr[\mathcal{A} \text{ accepts in } \mathfrak G_1] - \Pr[\mathcal{A} \text{ accepts in } \mathfrak G_2]\Big|.\]
For the security parameter $\L$, the two games are said to be computationally indistinguishable if it holds that: $$Adv_{\mathcal{A}}(\mathfrak{G}_1, \mathfrak{G}_2) \leq \eta(\L),$$ where $\eta$ is a negligible function.
\begin{definition}[PRF]
\emph{Let $A$ and $B$ be finite sets, and let $\mathcal{F} = \{ F_k: A \rightarrow B \}$ be a function family, endowed with an efficiently sampleable distribution (more precisely, $\mathcal{F}, A$ and $B$ are all indexed by the security parameter $\L)$. We say that $\mathcal{F}$ is a PRF family if the following two games are computationally indistinguishable:
\begin{enumerate}[label=(\roman*)]
\item Choose a function $F_k \in \mathcal{F}$ and give the adversary adaptive oracle access to $F_k$.
\item Choose a uniformly random function $U: A \rightarrow B$ and give the adversary adaptive oracle access to $U.$
\end{enumerate}}
\end{definition}
Hence, PRF families are efficient distributions of functions that cannot be efficiently distinguished from the uniform distribution. For a PRF $F_k \in \mathcal{F}$, the index $k$ is called its key/seed. PRFs have a wide range of applications, most notably in cryptography, but also in computational complexity and computational learning theory. For a detailed introduction to PRFs and review of the noteworthy results, we refer the interested reader to the survey by Bogdanov and Rosen \cite{AndAlo[17]}. In 2020, Liu and Pass \cite{LiuPass[20]} made a remarkable breakthrough and tied the existence of pseudorandom functions --- and one-way functions, in general --- to the average-case hardness of $K^{\poly}$-complexity, which denotes polynomial-time-bounded Kolmogorov complexity (see \cite{Solomon[64],Chai[69],Kko[86]} for an introduction to Kolmogorov complexity).
\subsection{Linear Regression}\label{Sec5}
Linear regression is a linear approach to modeling the relationship between a dependent variable and one or more explanatory/independent variables. As is the case with most statistical analysis, the goal of regression is to make sense of the observed data in a useful manner. It analyzes the training data and attempts to model the relationship between the dependent and explanatory/independent variable(s) by fitting a linear equation to the observed data. The resulting predictions (often) carry errors, which cannot be predicted exactly~\cite{Trevor[09],Montgo[12]}. For linear regression, the mean and variance functions are defined as:
\[\E(Y |X = x) = \beta_0 + \beta_1 x \quad \text{and} \quad \text{var}(Y |X = x) = \sigma^2,\]
respectively, where $\E(\cdot)$ and $\sigma$ denote the expected value and standard deviation, respectively; $\beta_0$ represents the intercept, which is the value of $\E(Y |X = x)$ when $x$ equals zero; $\beta_1$ denotes the slope, i.e., the rate of change in $\E(Y |X = x)$ for a unit change in $X$. The parameters $\beta_0$ and $\beta_1$ are also known as \textit{regression coefficients}.
For any regression model, the observed value $y_i$ might not always equal its expected value $\E(Y |X = x_i)$. To account for this difference between the observed data and the expected value, the statistical error, defined as $$\epsilon_i = y_i - \E(Y |X = x_i),$$ was introduced. The value of $\epsilon_i$ depends on \emph{unknown} parameters in the mean function. For linear regression, errors are random variables that correspond to the vertical distance between the point $y_i$ and the mean function $\E(Y |X = x_i)$. Depending on the type and size of the training data, different algorithms, such as gradient descent or least squares, may be used to compute the values of $\beta_0$ and $\beta_1$. In this paper, we employ least squares linear regression to estimate $\beta_0$ and $\beta_1$, and generate the optimal hypothesis for the target function. Due to the inherent error in all regression models, it holds that: $$h(x) = f(x) + \varepsilon_x,$$ where $h(x)$ is the (optimal) hypothesis of the linear regression model, $f(x)$ is the target function and $\varepsilon_x$ is the total (reducible + irreducible) error at point $x$.
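The following Python sketch (synthetic data; the function names are our own) illustrates least squares estimation of the regression coefficients $\beta_0$ and $\beta_1$:
\begin{verbatim}
import random

def least_squares(xs, ys):
    # ordinary least squares estimates of (beta_0, beta_1)
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
          / sum((x - xbar) ** 2 for x in xs))
    return ybar - b1 * xbar, b1

# target f(x) = 1.5 + 2x; Gaussian noise plays the role of the error
xs = [x / 10 for x in range(100)]
ys = [1.5 + 2 * x + random.gauss(0, 0.3) for x in xs]
print(least_squares(xs, ys))  # approximately (1.5, 2.0)
\end{verbatim}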
\subsection{Interconnection Network}
In an interconnection network, each device is independent and connects with other devices via point-to-point links, which are two-way communication lines. Therefore, an interconnection network can be modeled as an undirected graph $G = (V, E)$, where each device is a vertex in $V$ and edges in $E$ represent links between the devices. Next, we recall some basic definitions/notations for undirected graphs.
\begin{definition}
\emph{The degree $\deg(v)$ of a vertex $v \in V$ is the number of adjacent vertices it has in a graph $G$. The degree of a graph $G$ is defined as: $\deg(G) = \max\limits_{v \in V}(\deg(v))$.}
\end{definition}
If $\deg(v_i) = \deg(v_j)$ for all $v_i, v_j \in V$, then $G$ is called a regular graph. Since it is easy to construct star graphs that are hierarchical, vertex and edge symmetric, maximally fault tolerant, and strongly resilient, along with having other desirable properties such as small(er) degree, diameter, genus and fault diameter \cite{Akera[89],Sheldon[94]}, networks of star graphs are well-suited to model interconnection networks. For a detailed introduction to interconnection networks, we refer the interested reader to the comprehensive book by Duato et al. \cite{Sudha[02]}.
\section{Related Work}\label{Sec3}
\subsection{Learning with Rounding}\label{LWR}
Naor and Reingold \cite{Naor[9]} introduced synthesizers to construct PRFs via a hard-to-learn deterministic function. The obstacle in using LWE as the hard learning problem in their synthesizers is that LWE's hardness depends on random errors. In fact, without the error, LWE becomes a trivial problem that can be solved via Gaussian elimination. Therefore, in order to use these synthesizers for constructing LWE-based PRFs, there was a need to replace the random errors with deterministic --- yet sufficiently independent --- errors such that the hardness of LWE is not weakened. Banerjee et al.~\cite{Ban[12]} addressed this problem by introducing the learning with rounding (LWR) problem, wherein instead of adding a small random error, as done in LWE, a deterministically rounded version of the sample is generated. For $q \geq p \geq 2$, the rounding function, $\lfloor \cdot \rceil_p: \mathbb{Z}_q \rightarrow \mathbb{Z}_p$, is defined as:
\[\lfloor x \rceil_p = \left\lfloor \dfrac{p}{q} \cdot x \right\rceil,\]
i.e., if $\lfloor x \rceil_p = y$, then $y \cdot \lfloor q/p \rceil$ is the integer multiple of $\lfloor q/p \rceil$ that is nearest to $x$. The error in LWR comes from deterministically rounding $x$ to a (relatively) nearby value in $\mathbb{Z}_p$.
\begin{definition}[LWR Distribution~\cite{Ban[12]}]
\emph{Let $q \geq p \geq 2$ be positive integers, then: for a vector $\textbf{s} \in \mathbb{Z}^w_q$, LWR distribution $L_\textbf{s}$ is defined to be a distribution over $\mathbb{Z}^w_q \times \mathbb{Z}_p$ that is obtained by choosing a vector $\textbf{a} \xleftarrow{\; \$ \;} \mathbb{Z}^w_q$, and outputting: $(\textbf{a},b = \lfloor \langle \textbf{a},\textbf{s} \rangle \rceil_p).$}
\end{definition}
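Both the rounding function and the LWR distribution are easy to express in code. The following Python sketch (illustrative toy parameters; names are our own) generates one LWR sample:
\begin{verbatim}
import random

def round_qp(x, q, p):
    # |x]_p = round((p/q) * x) reduced mod p; round-half-up (exact for even q)
    return ((p * x + q // 2) // q) % p

def lwr_sample(s, q, p):
    # one LWR sample (a, |<a, s>]_p); deterministic, no sampled error
    a = [random.randrange(q) for _ in range(len(s))]
    return a, round_qp(sum(ai * si for ai, si in zip(a, s)) % q, q, p)

q, p = 1024, 8
s = [random.randrange(q) for _ in range(4)]
print(lwr_sample(s, q, p))
\end{verbatim}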
For a given distribution over $\textbf{s} \in \mathbb{Z}^w_q$ (e.g., the uniform distribution), the decision-LWR${}_{w,q,p}$ problem is to distinguish (with advantage non-negligible in $w$) between some fixed number of independent samples $(\textbf{a}_i,b_i) \leftarrow L_\textbf{s}$, and the same number of samples drawn uniformly from $\mathbb{Z}^w_q \times \mathbb{Z}_p$. Banerjee et al. proved decision-LWR to be as hard as decision-LWE for a setting of parameters where the modulus and modulus-to-error ratio are superpolynomial in the security parameter \cite{Ban[12]}. Alwen et al.~\cite{Alwen[13]}, Bogdanov et al.~\cite{Andrej[16]}, and Bai et al.~\cite{ShiBai[18]} made further improvements on the range of parameters and hardness proofs for LWR. LWR has been used to construct pseudorandom generators/functions~\cite{Ban[12],Boneh[13],Ban[14],Vipin[19],VipinThesis[19],BenoSan[17]}, and probabilistic~\cite{Jan[18],Hayo[19]} and deterministic~\cite{Xie[12]} encryption schemes.
As mentioned earlier, hardness reductions of LWR hold for superpolynomial approximation factors over worst-case lattices. Montgomery~\cite{Hart[18]} partially addressed this issue by introducing a new variant of LWR, called Nearby Learning with Lattice Rounding problem, which supports unbounded number of samples and polynomial (in the security parameter) modulus.
\subsection{LWR/LWE-based Key-homomorphic PRF\lowercase{s}}\label{foll}
Since LWR allows generating derandomized LWE instances, it can serve as the hard-to-learn deterministic function in Naor and Reingold's synthesizers, and can therefore be used to construct LWE-based PRF families for specific parameters. Due to the indispensable small error, LWE-based key-homomorphic PRFs only achieve what is called `almost homomorphism'~\cite{Boneh[13]}.
\begin{definition}[Key-homomorphic PRF~\cite{Boneh[13]}]
\emph{Let $F: \mathcal{K} \times \mathcal{X} \rightarrow \mathbb{Z}^w_q$ be an efficiently computable function such that $(\mathcal{K}, \oplus)$ is a group. We say that the tuple $(F, \oplus)$ is a $\gamma$-almost key-homomorphic PRF if the following two properties hold:
\begin{enumerate}[label=(\roman*)]
\item $F$ is a secure PRF,
\item for all $k_1, k_2 \in \mathcal{K}$ and $x \in \mathcal{X}$, there exists $\textbf{e} \in [0, \gamma]^w$ such that: $$F_{k_1}(x) + F_{k_2}(x) = F_{k_1 \oplus k_2}(x) + \textbf{e} \bmod q.$$
\end{enumerate}}
\end{definition}
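The source of the additive error $\textbf{e}$ is easy to observe in isolation. In the following Python sketch (our own toy experiment), the LWR-style rounding of two values and the rounding of their sum differ by at most $1$ (mod $p$), which is precisely the kind of small deviation that condition (ii) tolerates:
\begin{verbatim}
import random

def round_qp(x, q, p):
    # round-half-up version of the LWR rounding function
    return ((p * x + q // 2) // q) % p

q, p, max_err = 1024, 8, 0
for _ in range(10_000):
    x1, x2 = random.randrange(q), random.randrange(q)
    lhs = (round_qp(x1, q, p) + round_qp(x2, q, p)) % p
    rhs = round_qp((x1 + x2) % q, q, p)
    max_err = max(max_err, min((lhs - rhs) % p, (rhs - lhs) % p))
print(max_err)  # 1: rounding the sum vs. summing the roundings
\end{verbatim}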
Multiple key-homomorphic PRF families have been constructed via varying approaches~\cite{Naor[99],Boneh[13],Ban[14],Parra[16],SamK[20],Navid[20]}. In addition to key-homomorphism, PRF families have been defined/constructed with various other properties and features, e.g., bi-homomorphic PRFs~\cite{Vipin[19]}, (private) constrained PRFs~\cite{Zvika[15],KimDan[17],RotBra[17],RanChen[17],DanDav[15],SinShi[20],PeiShi[18]}, Legendre PRFs \cite{IVD[88]}, power residue PRFs \cite{Ward[20]}, traceable PRFs \cite{GoyalWu[21],MaiWu[22]}, quantum PRFs \cite{MaZha[12],NicPu[20]}, oblivious PRFs \cite{SilJul[22],BonKog[20],FreePin[05]}, domain-preserving PRFs \cite{WaGu[19]}, structure-preserving PRFs \cite{MasaRafa[19]}, related-key attack (RKA) secure PRFs \cite{MihiDav[10],MihiDav[03],DavLis[10],Lucks[04],KrzT[08],EyaWid[14],Fab[19],KevAna[14],Boneh[13],Vipin[19],Ban[14]}, threshold/distributed PRFs \cite{CacKla[05],StanHug[18],BanDam[18],Boneh[13],Vipin[19],Ban[14]}, privately programmable PRFs \cite{DanDav[15],PeiShi[18]}, (zero-knowledge) provable PRFs \cite{BenoSan[17],CarMit[19]}, and watermarkable PRFs~\cite{KimWu[17],KimWu[19],Qua[18],ManHo[20],Yukun[21],SinShi[20]}. Note that key-homomorphic PRFs directly lead to RKA-secure PRFs and (one-round) distributed PRFs \cite{Boneh[13]}.
\section{Star-specific Key-homomorphic PRF: Definition}\label{Sec4}
In this section, we formally define star-specific key-homomorphic (SSKH) PRFs. Let $G = (V,E)$ be a graph, representing an interconnection network, containing multiple star graphs wherein the leaves of each star graph, $\partial$, represent unique parties and the root represents a central hub that broadcasts messages to all leaves/parties in $\partial$. Different star graphs may have an arbitrary number of shared leaves. Henceforth, we call such a graph an \textit{interconnection graph}. \Cref{DefFig} depicts a simple interconnection graph with two star graphs, each containing its own central hub, along with eight parties/leaves, one of which is shared by both star graphs. Note that an interconnection graph is simply a bipartite graph with a given partition of its vertices into two disjoint and independent subsets $V_1$ and $V_2$. (The vertices in $V_1$ are the central hubs and the vertices in $V_2$ are the parties.)
\begin{figure}
\centering
\includegraphics[scale=.5]{PRF2.png}
\caption{Example Interconnection Graph}\label{DefFig}
\end{figure}
\begin{definition}\label{MainDef}
\emph{Let graph $G = (V, E)$ be an interconnection graph with a set of vertices $V$ and a set of edges $E$. Let there be $\rho$ star graphs $\partial_1, \ldots, \partial_\rho$ in $G$. Let $F=\left(F^{(\partial_i)}\right)_{i=1,\ldots,\rho}$ be a family of PRFs, where, for each $i$, $F^{(\partial_i)}: \mathcal{K} \times \mathcal{X} \rightarrow \mathbb{Z}^w_q$ with $(\mathcal{K}, \oplus)$ a group. Then, we say that the tuple $(F, \oplus)$ is a star-specific $(\delta,\gamma,p)$-almost key-homomorphic PRF if the following two conditions hold:
\begin{enumerate}[label=(\roman*)]
\item for all $\partial_i \neq \partial_j~(i,j \in [\rho]), k \in \mathcal{K}$ and $x \in \mathcal{X}$, it holds that:
\[\Pr[F^{(\partial_i)}_k(x) = F^{(\partial_j)}_k(x)] \leq \delta^w+\eta(\L),\]
where $F^{(\partial)}_k(x)$ denotes the PRF computed by parties in star graph $\partial \subseteq V(G)$ on input $x \in \mathcal{X}$ and key $k \in \mathcal{K}$, and $\eta(\L)$ is a negligible function in the security parameter $\L$,
\item for all $k_1, k_2 \in \mathcal{K}$ and $x \in \mathcal{X}$, there exists a vector $\textbf{e}=(e_1,\ldots,e_w)$ satisfying:
$$F_{k_1}^{(\partial)}(x) + F^{(\partial)}_{k_2}(x) = F^{(\partial)}_{k_1 \oplus k_2}(x) + \textbf{e} \bmod q,$$
such that for all $i\in [w]$, it holds that: $\Pr[-\gamma\leq e_i\leq\gamma]\geq p$.
\end{enumerate}}
\end{definition}
\section{Maximally Cover-free At Most $t$-intersecting $k$-uniform Families}\label{Extremal}
Extremal combinatorics deals with the problem of determining or estimating the maximum or minimum cardinality of a collection of finite objects that satisfies some specific set of requirements. It is also concerned with the investigation of inequalities between combinatorial invariants, and questions dealing with relations among them. For an introduction to the topic, we refer the interested reader to the books by Jukna \cite{Stas[11]} and Bollob\'{a}s \cite{BollB[78]}, and the surveys by Alon \cite{Alon1[03],Alon1[08],Alon1[16],Alon1[20]}. Extremal combinatorics can be further divided into the following distinct fields:
\begin{itemize}
\item Extremal graph theory, which began with the work of Mantel in 1907 \cite{Aign[95]} and was first investigated in earnest by Tur\'{a}n in 1941 \cite{Tura[41]}. For a survey of the important results in the field, see \cite{Niki[11]}.
\item Ramsey theory, which was popularised by Erd\H{o}s and Szekeres \cite{Erd[47],ErdGeo[35]} by extending a result of Ramsey from 1929 (published in 1930 \cite{FrankRam[30]}). For a survey of the important results in the field, see \cite{JacFox[15]}.
\item Extremal problems in arithmetic combinatorics, which grew from the work of van der Waerden in 1927 \cite{BL[27]} and the Erd\H{o}s-Tur\'{a}n conjecture of 1936 \cite{PLPL[36]}. For a survey of the important results in the field, see \cite{Choon[12]}.
\item Extremal (finite) set theory, which was first investigated by Sperner~\cite{Sperner[28]} in 1928 by establishing the maximum size of an antichain, i.e., a set-system where no member is a superset of another. However, it was Erd\H{o}s et al. \cite{Erdos[61]} who started systematic research in extremal set theory.
\end{itemize}
Extremal set theory deals with determining the size of set-systems that satisfy certain restrictions. It is one of the most rapidly developing areas in combinatorics, with applications in various other branches of mathematics and theoretical computer science, including functional analysis, probability theory, circuit complexity, cryptography, coding theory, probabilistic methods, discrete geometry, linear algebra, spectral graph theory, ergodic theory, and harmonic analysis \cite{Beimel[15],Beimel[12],Zeev[15],Zeev[11],Klim[09],Sergey[08],Liu[17],SehrawatVipin[21],VipinYvo[20],Polak[13],Blackburn[03],WangThesis[20],Sudak[10],GarVac[94],Gro[00],IWF[20]}. For more details on extremal set theory, we refer the reader to the book by Gerbner and Patkos \cite{GerbBala[18]}; for probabilistic arguments/proofs, see the books by Bollob\'{a}s \cite{Boll[86]} and Spencer \cite{JSpen[87]}.
Our work in this paper concerns a subfield of extremal set theory, called \textit{intersection theorems}, wherein set-systems under specific intersection restrictions are constructed, and bounds on their sizes are derived. A wide range of methods have been employed to establish a large number of intersection theorems over various mathematical structures, including vector subspaces, graphs, subsets of finite groups with given group actions, and uniform hypergraphs with stronger or weaker intersection conditions. The methods used to derive these theorems have included purely combinatorial methods such as shifting/compressions, algebraic methods (including linear-algebraic, Fourier analytic and representation-theoretic), analytic, probabilistic and regularity-type methods. We shall not give a full account of the known intersection theorems, but only touch upon the results that are particularly relevant to our set-system and its construction. For a broader account, we refer the interested reader to the comprehensive surveys by Ellis \cite{Ell[21]}, and Frankl and Tokushige~\cite{Frankl[16]}. For an introduction to intersecting and cross-intersecting families related to hypergraphs, see~\cite{AMDD[2020],Kleit[79]}.
\begin{note}
Set-system and hypergraph are very closely related terms, and commonly used interchangeably. Philosophically, in a hypergraph, the focus is more on vertices, vertex subsets being in ``relation'', and subset(s) of vertices satisfying a specific configuration of relations; whereas in a set-system, the focus is more on set-theoretic properties of the sets.
\end{note}
In this section, we derive multiple intersection theorems for:
\begin{enumerate}
\item at most $t$-intersecting $k$-uniform families of sets,
\item maximally cover-free at most $t$-intersecting $k$-uniform families of sets.
\end{enumerate}
We also provide an explicit construction for at most $t$-intersecting $k$-uniform families of sets. Later, we use the results from this section to establish the maximum number of SSKH PRFs that can be constructed securely by a set of parties against various active/passive and internal/external adversaries.
For $a,b\in\mathbb{Z}$ with $a\leq b$, let $[a,b]:=\{a,a+1,\ldots,b-1,b\}$.
\begin{definition}
\emph{$\mathcal{H}\subseteq 2^{[n]}$ is $k$-uniform if $|A|=k$ for all $A\in\mathcal{H}$.}
\end{definition}
\begin{definition}
\emph{$\mathcal{H}\subseteq 2^{[n]}$ is maximally cover-free if
$$A\not\subseteq\bigcup_{B\in\mathcal{H}, B\neq A}B$$
for all $A\in\mathcal{H}$.}
\end{definition}
It is clear that $\mathcal{H}\subseteq 2^{[n]}$ is maximally cover-free if and only if every $A\in\mathcal{H}$ has some element $x_A$ such that $x_A\not\in B$ for all $B\in\mathcal{H}$ with $B\neq A$. Furthermore, the maximum size of a $k$-uniform family $\mathcal{H}\subseteq 2^{[n]}$ that is maximally cover-free is $n-k+1$, and it is given by the set system
$$\mathcal{H}=\left\{[k-1]\cup\{x\}: x\in[k, n]\right\}$$
(and this is unique up to permutations of $[n]$).
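A short Python sketch (our own helper names) constructs this extremal family and verifies maximal cover-freeness, i.e., that every set retains a private element:
\begin{verbatim}
def max_cover_free_family(n, k):
    # the family {[k-1] U {x} : x in [k, n]} of size n - k + 1
    core = set(range(1, k))
    return [core | {x} for x in range(k, n + 1)]

def is_maximally_cover_free(H):
    # every A must contain an element lying in no other set of H
    return all(any(all(x not in B for B in H if B is not A) for x in A)
               for A in H)

H = max_cover_free_family(10, 4)
print(len(H), is_maximally_cover_free(H))  # 7 True
\end{verbatim}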
\begin{definition}
\label{intersecting_definitions}
\emph{Let $t$ be a non-negative integer. We say the set system $\mathcal{H}$ is
\begin{enumerate}[label = {(\roman*)}]
\item at most $t$-intersecting if $|A\cap B|\leq t$,
\item exactly $t$-intersecting if $|A\cap B|=t$,
\item at least $t$-intersecting if $|A\cap B|\geq t$,
\end{enumerate}
for all $A, B\in\mathcal{H}$ with $A\neq B$.}
\end{definition}
Property (iii) in \Cref{intersecting_definitions} is often simply called ``$t$-intersecting'' \cite{Borg[11]}, but we shall use the term ``at least $t$-intersecting'' for clarity.
\begin{definition}
\emph{Let $\mathcal{F},\mathcal{G}\subseteq 2^{[n]}$. We say that $\mathcal{F}$ and $\mathcal{G}$ are equivalent (denoted $\mathcal{F}\sim\mathcal{G}$) if there exists a permutation $\pi$ of $[n]$ such that $\pi^\ast(\mathcal{F})=\mathcal{G}$, where
$$\pi^\ast(\mathcal{F})=\left\{\{\pi(a) :a\in A\}: A\in\mathcal{F}\right\}.$$}
\end{definition}
For $n,k,t,m\in\mathbb{Z}^+$ with $t\leq k\leq n$, let $N(n,k,t,m)$ denote the collection of all set systems $\mathcal{H}\subseteq 2^{[n]}$ of size $m$ that are at most $t$-intersecting and $k$-uniform, and $M(n,k,t,m)$ denote the collection of set systems $\mathcal{H}\in N(n,k,t,m)$ that are also maximally cover-free.
The following proposition establishes a bijection between equivalence classes of these two collections of set systems (for different parameters):
\begin{proposition}
\label{maximally_cover_free}
Suppose $n,k,t,m\in\mathbb{Z}^+$ satisfy $t\leq k\leq n$ and $m<n$. Then there exists a bijection
$$M(n,k,t,m)\ /\sim\ \leftrightarrow N(n-m,k-1,t,m)\ /\sim.$$
\end{proposition}
\begin{proof}
We will define functions
\begin{align*}
\bar{f}:&& M(n,k,t,m)\ /\sim\ &\to N(n-m,k-1,t,m)\ /\sim \\
\bar{g}:&& N(n-m,k-1,t,m)\ /\sim\ &\to M(n,k,t,m)\ /\sim
\end{align*}
that are inverses of each other.
Let $\mathcal{H}\in M(n,k,t,m)$. Since $\mathcal{H}$ is maximally cover-free, for every $A\in\mathcal{H}$, there exists $x_A\in A$ such that $x_A\not\in B$ for all $B\in\mathcal{H}$ with $B\neq A$. Consider the set system $\{A\setminus\{x_A\}: A\in\mathcal{H}\}$.
First, note that although this set system depends on the choice of $x_A\in A$ for each $A\in\mathcal{H}$, the equivalence class of $\{A\setminus\{x_A\}: A\in\mathcal{H}\}$ is independent of this choice, hence this gives us a map
\begin{align*}
f:M(n,k,t,m)&\to N(n-m,k-1,t,m)\ /\sim \\
\mathcal{H}&\mapsto [\{A\setminus\{x_A\}: A\in\mathcal{H}\}].
\end{align*}
Furthermore, it is clear that if $\mathcal{H}\sim\mathcal{H}'$, then $f(\mathcal{H})\sim f(\mathcal{H}')$, so $f$ induces a well-defined map
$$\bar{f}: M(n,k,t,m)\ /\sim\ \to N(n-m,k-1,t,m)\ /\sim.$$
Next, for a set system $\mathcal{G}=\{G_1,\ldots, G_m\}\in N(n-m,k-1,t,m)$, define
\begin{align*}
g: N(n-m,k-1,t,m) &\to M(n,k,t,m)\ /\sim \\
\mathcal{G} &\mapsto [\{G_i\cup\{n-m+i\}:i\in[m]\}].
\end{align*}
Again, this induces a well-defined map
$$\bar{g}: N(n-m,k-1,t,m)\ /\sim\ \to M(n,k,t,m)\ /\sim$$
since $g(\mathcal{G})\sim g(\mathcal{G}')$ for any $\mathcal{G}$, $\mathcal{G}'$ such that $\mathcal{G}\sim\mathcal{G}'$.
We can check that $\bar{f}\circ \bar{g}=id_{N(n-m,k-1,t,m)}$ and that $\bar{g}\circ \bar{f}=id_{M(n,k,t,m)}$, and thus $\bar{f}$ and $\bar{g}$ are bijections, as desired. \qed
\end{proof}
\begin{corollary}
\label{maximally_cover_free_corollary}
Let $n,k,t,m\in\mathbb{Z}^+$ be such that $t\leq k\leq n$ and $m<n$. Then there exists a maximally cover-free, at most $t$-intersecting, $k$-uniform set system $\mathcal{H}\subseteq 2^{[n]}$ of size $m$ if and only if there exists an at most $t$-intersecting, $(k-1)$-uniform set system $\mathcal{G}\subseteq 2^{[n-m]}$.
\end{corollary}
\begin{remark}
Both Proposition \ref{maximally_cover_free} and Corollary \ref{maximally_cover_free_corollary} remain true if, instead of at most $t$-intersecting families, we consider exactly $t$-intersecting or at least $t$-intersecting families.
\end{remark}
At least $t$-intersecting families have been completely characterized by Ahlswede and Khachatrian \cite{Ahlswede[97]}, but the characterizations of exactly $t$-intersecting and at most $t$-intersecting families remain open.
Let $\varpi(n,k,t)=\max\left\{|\mathcal{H}|: \mathcal{H}\subseteq 2^{[n]}\text{ is at most }t\text{-intersecting and }k\text{-uniform}\right\}.$
\begin{proposition}
\label{simple_bound}
Suppose $n,k,t\in\mathbb{Z}^+$ are such that $t\leq k\leq n$. Then
$$\varpi(n,k,t)\leq\frac{\binom{n}{t+1}}{\binom{k}{t+1}}.$$
\end{proposition}
\begin{proof}
Let $\mathcal{H}\subseteq 2^{[n]}$ be an at most $t$-intersecting and $k$-uniform family. The number of pairs $(X, A)$, where $A\in\mathcal{H}$ and $X\subseteq A$ with $|X|=t+1$, is equal to $|\mathcal{H}|\cdot\binom{k}{t+1}$.
Since $\mathcal{H}$ is at most $t$-intersecting, any $(t+1)$-element subset of $[n]$ lies in at most one set in $\mathcal{H}$. Thus,
$$|\mathcal{H}|\cdot\binom{k}{t+1}\leq\binom{n}{t+1}\implies |\mathcal{H}|\leq\frac{\binom{n}{t+1}}{\binom{k}{t+1}}.$$ \qed
\end{proof}
Using Proposition \ref{maximally_cover_free}, we immediately obtain the following as a corollary:
\begin{corollary}
\label{simple_bound_corollary}
Suppose $\mathcal{H}\subseteq 2^{[n]}$ is maximally cover-free, at most $t$-intersecting and $k$-uniform. Then
$$|\mathcal{H}|\leq\frac{\binom{n-|\mathcal{H}|}{t+1}}{\binom{k-1}{t+1}}.$$
\end{corollary}
Similarly, by applying Proposition \ref{maximally_cover_free}, other results on at most $t$-intersecting and $k$-uniform set systems can also be translated into results on set systems that, in addition to having these two properties, are maximally cover-free. Often, we will not explicitly state these results, but it will be understood that such results can be easily derived.
\subsection{Bounds for Small $n$}
In this section, we give several bounds on $\varpi(n,k,t)$ for small values of $n$.
\begin{lemma}
\label{bound_for_small_n_lemma}
Let $n,k,t\in\mathbb{Z}^+$ be such that $t\leq k\leq n<\frac{1}{2}k\left(\frac{k}{t}+1\right)$. Let $m'$ be the least positive integer such that $n<m'k-\frac{1}{2}m'(m'-1)t$. Then
$$\varpi(n,k,t)=m'-1.$$
\end{lemma}
\begin{proof}
We will first show that there exists $m^\star\in\mathbb{Z}^+$ such that $n<m^\star k-\frac{1}{2}m^\star(m^\star-1)t$. Consider the quadratic polynomial $p(x)=xk-\frac{1}{2}x(x-1)t$. Note that $p(x)$ achieves its maximum value at $x=\frac{k}{t}+\frac{1}{2}$. If we let $m^\star$ be the unique positive integer such that $\frac{k}{t}\leq m^\star<\frac{k}{t}+1$, then
$$p(m^\star)\geq p\left(\frac{k}{t}\right)=\frac{1}{2}k\left(\frac{k}{t}+1\right)>n,$$
as required.
Next, suppose $\mathcal{H}$ is an at most $t$-intersecting, $k$-uniform set family with $|\mathcal{H}|\geq m'$. Let $A_1,\ldots, A_{m'}\in\mathcal{H}$ be distinct. Then
$$n\geq\left|\bigcup_{i=1}^{m'} A_i\right|=\sum_{i=1}^{m'} \left|A_i\setminus\bigcup_{j=1}^{i-1}A_j\right|\geq\sum_{i=0}^{m'-1} (k-it)=m'k-\frac{1}{2}m'(m'-1)t,$$
which is a contradiction. This proves that $|\mathcal{H}|\leq m'-1$.
It remains to construct an at most $t$-intersecting, $k$-uniform set family $\mathcal{H}\subseteq 2^{[n]}$ with $|\mathcal{H}|=m'-1$. Let $m=m'-1$. The statement is trivial if $m=0$, so we may assume that $m\in\mathbb{Z}^+$. By the minimality of $m'$, we must have $n\geq mk-\frac{1}{2}m(m-1)t$. Let $k=\alpha t+\beta$ with $\alpha,\beta\in\mathbb{Z}$ and $0\leq\beta\leq t-1$. Now, define a set system $\mathcal{H}=\{A_1,\ldots,A_m\}$ as follows:
$$A_i=\left\{(l,\{i,j\}):l\in[t], j\in[\alpha+1]\setminus\{i\}\right\}\cup\left\{(i,j):j\in[\beta]\right\}.$$
It is clear, by construction, that $\mathcal{H}$ is at most $t$-intersecting and $k$-uniform. Furthermore, since $\alpha=\lfloor k/t\rfloor\geq m$, the number of elements in the universe of $\mathcal{H}$ is
\begin{align*}
&t\cdot\left|\left\{\{i,j\}:1\leq i<j\leq\alpha+1,\,i\leq m\right\}\right|+m\beta \\
=\ &t\left(\binom{\alpha+1}{2}-\binom{\alpha+1-m}{2}\right)+m\beta \\
=\ &t\left(m\alpha-\frac{1}{2}m(m-1)\right)+m\beta \\
=\ &mk-\frac{1}{2}m(m-1)t.
\end{align*} \qed
\end{proof}
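The construction of the sets $A_i$ in the proof above is easy to realize and check programmatically. In the following Python sketch (our own encoding: tagged tuples emulate the two kinds of elements $(l,\{i,j\})$ and $(i,j)$), the resulting family is $k$-uniform and at most $t$-intersecting:
\begin{verbatim}
def family(k, t, m):
    alpha, beta = divmod(k, t)  # k = alpha * t + beta
    H = []
    for i in range(1, m + 1):
        A = {('pair', l, frozenset({i, j}))
             for l in range(1, t + 1)
             for j in range(1, alpha + 2) if j != i}
        A |= {('own', i, j) for j in range(1, beta + 1)}
        H.append(A)
    return H

k, t, m = 7, 2, 3  # alpha = 3 >= m, as required in the proof
H = family(k, t, m)
print(all(len(A) == k for A in H))                          # True
print(max(len(A & B) for A in H for B in H if A is not B))  # 2 = t
\end{verbatim}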
\begin{proposition}
\label{bound_for_small_n}
Let $n,k,t\in\mathbb{Z}^+$ be such that $t\leq k\leq n$.
\begin{enumerate}[label = {(\alph*)}]
\item If $n<\frac{1}{2}k\left(\frac{k}{t}+1\right)$, then
$$\varpi(n,k,t)=\left\lfloor\frac{1}{2}+\frac{k}{t}-\sqrt{\left(\frac{1}{2}+\frac{k}{t}\right)^2-\frac{2n}{t}}\right\rfloor.$$
\item If $t\mid k$ and $n=\frac{1}{2}k\left(\frac{k}{t}+1\right)$, then
$$\varpi(n,k,t)=\frac{k}{t}+1.$$
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}[label = {(\alph*)}]
\item Note that $m=\left\lfloor\frac{1}{2}+\frac{k}{t}-\sqrt{\left(\frac{1}{2}+\frac{k}{t}\right)^2-\frac{2n}{t}}\right\rfloor$ satisfies $n\geq mk-\frac{1}{2}m(m-1)t$ and $m'=m+1$ satisfies $n< m'k-\frac{1}{2}m'(m'-1)t$, hence the result follows immediately from Lemma \ref{bound_for_small_n_lemma}.
\item Let $\mathcal{H}\subseteq 2^{[n]}$ be an at most $t$-intersecting, $k$-uniform set family. We may assume that $|\mathcal{H}|\geq\frac{k}{t}$. We will first show that any three distinct sets in $\mathcal{H}$ have empty intersection. Let $A_1$, $A_2$ and $A_3$ be any three distinct sets in $\mathcal{H}$, and let $A_4,\,\ldots,\,A_{\frac{k}{t}}\in\mathcal{H}$ be such that the $A_i$'s are all distinct. Then
$$\left|\bigcup_{i=1}^{\frac{k}{t}} A_i\right|=\sum_{i=1}^{\frac{k}{t}} \left|A_i\setminus\bigcup_{j=1}^{i-1}A_j\right|\geq\sum_{i=0}^{\frac{k}{t}-1} (k-it)=\frac{1}{2}k\left(\frac{k}{t}+1\right)=n,$$
and thus we must have equality everywhere. In particular, we obtain $|A_3\setminus (A_1\cup A_2)|=k-2t$, which, together with the fact that $\mathcal{H}$ is at most $t$-intersecting, implies that $A_1\cap A_2\cap A_3=\varnothing$, as claimed. Therefore, every $x\in[n]$ lies in at most $2$ sets in $\mathcal{H}$. Now,
$$|\mathcal{H}|\cdot k=|\{(A,x):A\in\mathcal{H},\ x\in A\}|\leq 2n\implies |\mathcal{H}|\leq\frac{2n}{k}=\frac{k}{t}+1,$$
proving the first statement.
Next, we shall exhibit an at most $t$-intersecting, $k$-uniform set family $\mathcal{H}\subseteq 2^{[n]}$, where $n=\frac{1}{2}k\left(\frac{k}{t}+1\right)$, with $|\mathcal{H}|=\frac{k}{t}+1$. Let $\mathcal{H}=\{A_1,\ldots,A_{\frac{k}{t}+1}\}$ with
$$A_i=\left\{(l,\{i,j\}):l\in[t], j\in\left[\frac{k}{t}+1\right]\setminus\{i\}\right\}.$$
It is clear that $\mathcal{H}$ is exactly $t$-intersecting and $k$-uniform, and that it is defined over a universe of $t\cdot\dbinom{k/t+1}{2}=n$ elements.
\end{enumerate} \qed
\end{proof}
\begin{remark}
The condition $n<\frac{1}{2}k\left(\frac{k}{t}+1\right)$ in Proposition \ref{bound_for_small_n}(a) is necessary. Indeed, if $n=\dfrac{1}{2}k\left(\dfrac{k}{t}+1\right)$, then $$\left\lfloor\frac{1}{2}+\frac{k}{t}-\sqrt{\left(\frac{1}{2}+\frac{k}{t}\right)^2-\frac{2n}{t}}\right\rfloor=\frac{k}{t}<\frac{k}{t}+1.$$
\end{remark}
We will now look at the case where $n=\frac{1}{2}k\left(\frac{k}{t}+1\right)+1$. Unlike above, we do not have exact bounds for this case. But what is perhaps surprising is that, for certain $k$ and $t$, the addition of a single element to the universe set can increase the maximum size of the set family by $3$ or more.
\begin{proposition}
\label{bound_beyond_small_n}
Let $n,k,t\in\mathbb{Z}^+$ be such that $t\leq k\leq n$ and $t\mid k$. If $n=\frac{1}{2}k\left(\frac{k}{t}+1\right)+1$, then
$$\varpi(n,k,t)\leq\frac{\frac{k}{t}+1}{1-\frac{k}{n}}=\left(\frac{k^2+kt+2t}{k^2-kt+2t}\right)\left(\frac{k}{t}+1\right).$$
\end{proposition}
\begin{proof}
Let $\mathcal{H}\subseteq 2^{[n]}$ be an at most $t$-intersecting and $k$-uniform family. There exists some element $x\in[n]$ such that $x$ is contained in at most $\lfloor\frac{k|\mathcal{H}|}{n}\rfloor$ sets in $\mathcal{H}$. We construct a set family $\mathcal{H}'\subseteq 2^{[n]\setminus\{x\}}$ by taking those sets in $\mathcal{H}$ that do not contain $x$. Since $\mathcal{H}'$ is defined over a universe of $\frac{1}{2}k\left(\frac{k}{t}+1\right)$ elements, applying Proposition \ref{bound_for_small_n}, we obtain
\begin{align*}
|\mathcal{H}|-\left\lfloor\frac{k|\mathcal{H}|}{n}\right\rfloor\leq|\mathcal{H}'|\leq\frac{k}{t}+1&\implies\left\lceil|\mathcal{H}|-\frac{k|\mathcal{H}|}{n}\right\rceil\leq\frac{k}{t}+1 \\
&\implies|\mathcal{H}|-\frac{k|\mathcal{H}|}{n}\leq\frac{k}{t}+1 \\
&\implies|\mathcal{H}|\leq\frac{\frac{k}{t}+1}{1-\frac{k}{n}}.
\end{align*} \qed
\end{proof}
\begin{remark}
\begin{enumerate}[label = {(\alph*)}]
\item If $k=3$, $t=1$ and $n=\frac{1}{2}k\left(\frac{k}{t}+1\right)+1=7$, then the bound in Proposition \ref{bound_beyond_small_n} states that $\varpi(n,k,t)\leq\left(\frac{k^2+kt+2t}{k^2-kt+2t}\right)\left(\frac{k}{t}+1\right)=7$. The Fano plane is an example of a $3$-uniform family of size $7$ defined over a universe of $7$ elements that is exactly $1$-intersecting. Thus, the bound in \Cref{bound_beyond_small_n} can be achieved, at least for certain choices of $k$ and $t$.
\begin{figure}[H]
\centering
\includegraphics[scale=.5]{Fano.png}
\caption{The Fano plane}
\label{fano_plane}
\end{figure}
\item Note that
$$\left(\frac{k^2+kt+2t}{k^2-kt+2t}\right)\left(\frac{k}{t}+1\right)-\left(\frac{k}{t}+1\right)=\frac{2k^2+2kt}{k^2-kt+2t}.$$
We can show that the above expression is (strictly) bounded above by $6$ (for $k\neq t$), with slightly better bounds for $t=1,\,2,\,3,\,4$. It follows that
$$\varpi(n,k,t)\leq\begin{cases}
\frac{k}{t}+4 & \text{if }t=1, \\
\frac{k}{t}+5 & \text{if }t=2,\,3,\,4, \\
\frac{k}{t}+6 & \text{if }t\geq 5.
\end{cases}
$$
Furthermore, $\lim_{k\rightarrow\infty}\frac{2k^2+2kt}{k^2-kt+2t}=2$, thus, for fixed $t$, we have $\varpi(n,k,t)\leq\frac{k}{t}+3$ for large enough $k$.
\end{enumerate}
\end{remark}
Next, we give a necessary condition for the existence of at most $t$-intersecting and $k$-uniform families $\mathcal{H}\subseteq 2^{[n]}$, which implicitly gives a bound on $\varpi(n,k,t)$.
\begin{proposition}
\label{larger_n}
Let $n,k,t\in\mathbb{Z}^+$ satisfy $t\leq k\leq n$, and $\mathcal{H}\subseteq 2^{[n]}$ be an at most $t$-intersecting and $k$-uniform family with $|\mathcal{H}|=m$. Then
$$(n-r)\left\lfloor\frac{km}{n}\right\rfloor^2+r\left\lceil\frac{km}{n}\right\rceil^2\leq (k-t)m+tm^2$$
where $r=km-n\left\lfloor\frac{km}{n}\right\rfloor$.
\end{proposition}
\begin{proof}
Let $\alpha_j$ be the number of elements that are contained in exactly $j$ sets in $\mathcal{H}$. We claim that the following holds:
\begin{align}
\sum_{j=0}^{m}\alpha_j&=n, \label{eq_1} \\
\sum_{j=0}^{m}j\alpha_j&=km, \label{eq_2} \\
\sum_{j=0}^{m}j(j-1)\alpha_j&\leq tm(m-1) \label{eq_3}.
\end{align}
(\ref{eq_1}) is immediate, (\ref{eq_2}) follows from double counting the set $\{(A,x):A\in\mathcal{H},\ x\in A\}$, and (\ref{eq_3}) follows from considering $\{(A,B,x):A,B\in\mathcal{H},\ A\neq B,\ x\in A\cap B\}$ and using the fact that $\mathcal{H}$ is at most $t$-intersecting. This proves the claim.
Next, let us find non-negative integer values of $\alpha_0,\ldots,\alpha_m$ satisfying both (\ref{eq_1}) and (\ref{eq_2}) that minimize the expression $\sum_{j=0}^{m}j(j-1)\alpha_j$. Note that
$$\sum_{j=0}^{m}j(j-1)\alpha_j=\sum_{j=0}^{m}(j^2\alpha_j-j\alpha_j)=\sum_{j=0}^{m}j^2\alpha_j-km,$$
so we want to minimize $\sum_{j=0}^{m}j^2\alpha_j$ subject to the restrictions (\ref{eq_1}) and (\ref{eq_2}). If $n\nmid km$, this is achieved by letting $\alpha_{\lfloor\frac{km}{n}\rfloor}=n-r$ and $\alpha_{\lceil\frac{km}{n}\rceil}=r$, with all other $\alpha_j$'s equal to $0$. If $n\mid km$, we let $\alpha_{\frac{km}{n}}=n$ with all other $\alpha_j$'s equal to $0$.
Indeed, it is easy to see that the above choice of $\alpha_0,\ldots,\alpha_m$ satisfies both (\ref{eq_1}) and (\ref{eq_2}). Now, let $\alpha_0,\ldots,\alpha_m$ be some other choice that also satisfies both (\ref{eq_1}) and (\ref{eq_2}). We will show that the value of $f(\alpha_0,\ldots,\alpha_m)=\sum_{j=0}^{m}j^2\alpha_j$ can then be decreased by passing to a different choice of $\alpha_0,\ldots,\alpha_m$.
Suppose $\alpha_i\neq 0$ for some $i\neq\lfloor\frac{km}{n}\rfloor,\lceil\frac{km}{n}\rceil$, and assume that $i<\lfloor\frac{km}{n}\rfloor$ (the other case where $i>\lceil\frac{km}{n}\rceil$ is similar). Since the $\alpha_j$'s satisfy both (\ref{eq_1}) and (\ref{eq_2}), there must be some $i_1$ with $i_1\geq\lceil\frac{km}{n}\rceil$ (the inequality is strict if $n\mid km$) such that $\alpha_{i_1}\neq 0$. Then if we decrease $\alpha_i$ and $\alpha_{i_1}$ each by one, and increase $\alpha_{i+1}$ and $\alpha_{i_1-1}$ each by one, constraints (\ref{eq_1}) and (\ref{eq_2}) continue to be satisfied. Furthermore,
\begin{align*}
f(&\alpha_1,\ldots,\alpha_i,\alpha_{i+1},\ldots,\alpha_{i_1-1},\alpha_{i_1},\ldots,\alpha_m)\\ &-f(\alpha_1,\ldots,\alpha_i-1,\alpha_{i+1}+1,\ldots,\alpha_{i_1-1}+1,\alpha_{i_1}-1,\ldots,\alpha_m) \\
&=\ i^2-(i+1)^2-(i_1-1)^2+i_1^2 = 2i_1-2i-2>0
\end{align*}
since $i_1\geq\lfloor\frac{km}{n}\rfloor+1>i+1$.
This proves the claim that the choice of $\alpha_{\lfloor\frac{km}{n}\rfloor}=n-r$ and $\alpha_{\lceil\frac{km}{n}\rceil}=r$ minimizes $f$.
Therefore, we can find non-negative integers $\alpha_0,\ldots,\alpha_m$ satisfying all three conditions above if and only if
$$(n-r)\left\lfloor\frac{km}{n}\right\rfloor^2+r\left\lceil\frac{km}{n}\right\rceil^2-km\leq tm(m-1),$$
as desired. \qed
\end{proof}
\begin{remark}
For fixed $k$ and $t$, if $n$ is sufficiently large, then the inequality in Proposition \ref{larger_n} will be true for all $m$. Thus, the above proposition is only interesting for values of $n$ that are not too large.
\end{remark}
\subsection{Asymptotic Bounds}
The study of Steiner systems has a long history, dating back to the mid-19th century work on triple block designs by Pl\"{u}cker \cite{Plucker[35]}, Kirkman \cite{Kirkman[47]}, Steiner \cite{Steiner[53]}, and Reiss \cite{Reiss[59]}. The term Steiner (triple) systems was coined in 1938 by Witt \cite{Witt[38]}. A Steiner system is defined as an arrangement of a set of elements in triples such that each pair of elements is contained in exactly one triple. Steiner systems have strong connections to a wide range of topics, including statistics, linear coding, finite group theory, finite geometry, combinatorial design, experimental design, storage systems design, wireless communication, low-density parity-check code design, distributed storage, batch codes, and low-redundancy private information retrieval. For a broader introduction to the topic, we refer the interested reader to~\cite{Wilson[03]} (also see \cite{Tri[99],Charles[06]}).
In this section, we will see how at most $t$-intersecting families are related to Steiner systems. Using a remarkable recent result from Keevash \cite{Keevash[19]} about the existence of Steiner systems with certain parameters, we obtain an asymptotic bound for the maximum size of at most $t$-intersecting families.
\begin{definition}
\emph{A Steiner system $S(t,k,n)$, where $t\leq k\leq n$, is a family $\mathcal{S}$ of subsets of $[n]$ such that
\begin{enumerate}
\item $|A|=k$ for all $A\in\mathcal{S}$,
\item any $t$-element subset of $[n]$ is contained in exactly one set in $\mathcal{S}$.
\end{enumerate}
The elements of $\mathcal{S}$ are known as blocks.}
\end{definition}
From the above definition, it is clear that there exists a family $\mathcal{H}$ that achieves equality in Proposition \ref{simple_bound} if and only if $S(t+1,k,n)$ exists. It is easy to derive the following well known necessary condition for the existence of a Steiner system with given parameters:
\begin{proposition}
If $S(t,k,n)$ exists, then $\binom{n-i}{t-i}$ is divisible by $\binom{k-i}{t-i}$ for all $0\leq i\leq t$, and the number of blocks in $S(t,k,n)$ is equal to $\binom{n}{t}/\binom{k}{t}$.
\end{proposition}
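The divisibility conditions, and the forced block count, are straightforward to test. The following Python sketch (our own helper name) returns the number of blocks when the necessary conditions hold and \texttt{None} otherwise:
\begin{verbatim}
from math import comb

def steiner_necessary(t, k, n):
    # necessary conditions for S(t, k, n); returns the block count if they hold
    if any(comb(n - i, t - i) % comb(k - i, t - i) for i in range(t + 1)):
        return None
    return comb(n, t) // comb(k, t)

print(steiner_necessary(2, 3, 7))  # 7: the Fano plane S(2, 3, 7)
print(steiner_necessary(2, 3, 8))  # None: no S(2, 3, 8) exists
\end{verbatim}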
In 2019, Keevash \cite{Keevash[19]} proved the following result, providing a partial converse to the above, and answering in the affirmative a longstanding open problem in the theory of designs.
\begin{theorem}[\cite{Keevash[19]}]
\label{keevash}
For any $k,t\in\mathbb{Z}^+$ with $t\leq k$, there exists $n_0(k,t)$ such that for all $n\geq n_0(k,t)$, a Steiner system $S(t,k,n)$ exists if and only if
$$\binom{k-i}{t-i}\text{ divides }\binom{n-i}{t-i}\quad\text{for all }i=0,1,\ldots,t-1.$$
\end{theorem}
Using this result, we will derive asymptotic bounds for the maximum size of an at most $t$-intersecting and $k$-uniform family.
\begin{proposition}
\label{asymptotic_bound}
Let $k,t\in\mathbb{Z}^+$ with $t<k$, and $C<1$ be any positive real number.
\begin{enumerate}[label = {(\roman*)}]
\item There exists $n_1(k,t,C)$ such that for all integers $n\geq n_1(k,t,C)$, there is an at most $t$-intersecting and $k$-uniform family $\mathcal{H}\subseteq 2^{[n]}$ with
$$|\mathcal{H}|\geq\frac{Cn^{t+1}}{k(k-1)\cdots(k-t)}.$$
\item For all sufficiently large $n$,
$$\frac{Cn^{t+1}}{k(k-1)\cdots(k-t)}\leq\varpi(n,k,t)<\frac{n^{t+1}}{k(k-1)\cdots(k-t)}.$$
In particular,
$$\varpi(n,k,t)\sim\frac{n^{t+1}}{k(k-1)\cdots(k-t)}.$$
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}[label = {(\roman*)}]
\item Let $t'=t+1$. By Theorem \ref{keevash}, there exists $n_0(k,t')$ such that for all $N\geq n_0(k,t')$, a Steiner system $S(t',k,N)$ exists if
\begin{equation}
\tag{$\ast$}
\label{keevash_condition}
\binom{k-i}{t'-i}\text{ divides }\binom{N-i}{t'-i}\quad\text{for all }i=0,1,\ldots,t'-1.
\end{equation}
Suppose $n$ is sufficiently large. Let $n'\leq n$ be the largest integer such that (\ref{keevash_condition}) is satisfied with $N=n'$. Since
\begin{align*}
&\binom{k-i}{t'-i}\text{ divides }\binom{N-i}{t'-i} \\
\iff\ &(k-i)\cdots(k-t'+1)\mid(N-i)\cdots(N-t'+1),
\end{align*}
all $N$ of the form $\lambda k(k-1)\cdots(k-t'+1)+t'-1$ with $\lambda\in\mathbb{Z}$ will satisfy (\ref{keevash_condition}). Hence,
$n-n'\leq k(k-1)\cdots(k-t'+1)$.
By our choice of $n'$, there exists a Steiner system $S(t',k,n')$. This Steiner system is an at most $t$-intersecting and $k$-uniform set family, defined on the universe $[n']\subseteq [n]$, such that
\begin{align*}
|S(t',k,n')|&=\frac{\binom{n'}{t'}}{\binom{k}{t'}}=\frac{n'(n'-1)\cdots(n'-t'+1)}{k(k-1)\cdots(k-t'+1)} \\
&\geq\frac{(n-\alpha)(n-\alpha-1)\cdots(n-\alpha-t'+1)}{k(k-1)\cdots(k-t'+1)},
\end{align*}
where $\alpha=\alpha(k,t')=k(k-1)\cdots(k-t'+1)$ is independent of $n$. Since $C<1$, there exists $n_2(k,t',C)$ such that for all $n\geq n_2(k,t',C)$,
$$\frac{(n-\alpha)(n-\alpha-1)\cdots(n-\alpha-t'+1)}{n^{t'}}\geq C,$$
from which it follows that
$$|S(t',k,n')|\geq\frac{Cn^{t'}}{k(k-1)\cdots(k-t'+1)}=\frac{Cn^{t+1}}{k(k-1)\cdots(k-t)}$$
for all sufficiently large $n$. From the above argument, we see that we can pick $$n_1(k,t,C)=\max\left(n_0(k,t')+\alpha(k,t'), n_2(k,t',C)\right). $$
\item By Proposition \ref{simple_bound},
$$\varpi(n,k,t)\leq\frac{\binom{n}{t+1}}{\binom{k}{t+1}}=\frac{n(n-1)\cdots(n-t)}{k(k-1)\cdots(k-t)}<\frac{n^{t+1}}{k(k-1)\cdots(k-t)}.$$
The other half of the inequality follows immediately from (i).
\end{enumerate} \qed
\end{proof}
\begin{proposition}
\label{asymptotic_bound_for_maximally_cover_free}
Let $k,t\in\mathbb{Z}^+$ with $t<k-1$, and $C<1$ be any positive real number. Then for all sufficiently large $N$,
\begin{enumerate}[label = {(\roman*)}]
\item there exists a maximally cover-free, at most $t$-intersecting and $k$-uniform family $\mathcal{H}\subseteq 2^{[N]}$ with $|\mathcal{H}|\geq CN$,
\item the maximum size $\nu(N,k,t)$ of a maximally cover-free, at most $t$-intersecting and $k$-uniform family $\mathcal{H}\subseteq 2^{[N]}$ satisfies
$$CN\leq\nu(N,k,t)<N.$$
\end{enumerate}
\end{proposition}
\begin{proof}
We note that (ii) follows almost immediately from (i). Let us prove (i).
Fix $C_0$ such that $C<C_0<1$. By Propositions \ref{maximally_cover_free} and \ref{asymptotic_bound}, for all integers $n\geq n_1(k-1,t,C_0)$, there exists a maximally cover-free, at most $t$-intersecting and $k$-uniform family $\mathcal{H}\subseteq 2^{\left[n+\frac{n^{t+1}}{(k-1)(k-2)\cdots(k-t-1)}\right]}$ with
$$|\mathcal{H}|\geq\frac{C_0n^{t+1}}{(k-1)(k-2)\cdots(k-t-1)}.$$
Since $C<C_0$, there exist $\delta>1$ and $\varepsilon>0$ such that $C_0>\delta(1+\varepsilon)C$. Given $N$, let $n\in\mathbb{Z}^+$ be maximum such that
$$n+\frac{n^{t+1}}{(k-1)(k-2)\cdots(k-t-1)}\leq N.$$
Assume that $N$ is sufficiently large so that $n\geq n_1(k-1,t,C_0)$. Then, by the above, there is a maximally cover-free, at most $t$-intersecting and $k$-uniform family $\mathcal{H}\subseteq 2^{[N]}$ so that
$$|\mathcal{H}|\geq\frac{C_0n^{t+1}}{(k-1)(k-2)\cdots(k-t-1)}.$$
Since $n$ is maximal, we have
$$N<(n+1)+\frac{(n+1)^{t+1}}{(k-1)(k-2)\cdots(k-t-1)}.$$
If $N$ (and thus $n$) is sufficiently large such that
$$(n+1)<\frac{\varepsilon(n+1)^{t+1}}{(k-1)(k-2)\cdots(k-t-1)}\quad\text{and}\quad\left(1+\frac{1}{n}\right)^{t+1}<\delta,$$
then
$$N<\frac{(1+\varepsilon)(n+1)^{t+1}}{(k-1)(k-2)\cdots(k-t-1)}<\frac{\delta(1+\varepsilon)n^{t+1}}{(k-1)(k-2)\cdots(k-t-1)}$$
and it follows that
$$|\mathcal{H}|\geq\frac{C_0n^{t+1}}{(k-1)(k-2)\cdots(k-t-1)}>\frac{C_0N}{\delta(1+\varepsilon)}>CN.$$ \qed
\end{proof}
\subsection{An Explicit Construction}
While Proposition \ref{asymptotic_bound} provides an answer for the maximum size of an at most $t$-intersecting and $k$-uniform family for large enough $n$, we cannot explicitly construct such set families since Theorem \ref{keevash} (and hence Proposition \ref{asymptotic_bound}) is nonconstructive. In this section, we will look at a method that explicitly constructs set families with larger parameters from set families with smaller parameters.
Fix a positive integer $t$. For an at most $t$-intersecting and $k$-uniform family $\mathcal{H}\subseteq 2^{[n]}$, define
$$s(\mathcal{H})=\frac{k|\mathcal{H}|}{n}.$$
The quantity $s(\mathcal{H})$ measures the ``relative size'' of $\mathcal{H}$ while taking the parameters $k$ and $n$ into account: since the maximum possible value of $|\mathcal{H}|$ should increase with larger $n$ and decrease with larger $k$, normalizing $|\mathcal{H}|$ by $n/k$ yields a reasonable notion of relative size.
The following result shows that it is possible to construct a sequence of at most $t$-intersecting and $k_j$-uniform families $\mathcal{H}\subseteq 2^{[n_j]}$, where $k_j\rightarrow\infty$, such that all set families in the sequence have the same relative size.
\begin{proposition}
Let $\mathcal{H}\subseteq 2^{[n]}$ be an at most $t$-intersecting and $k$-uniform family. Then there exists a sequence of set families $\mathcal{H}_j$ such that
\begin{enumerate}[label = {(\alph*)}]
\item $\mathcal{H}_j$ is an at most $t$-intersecting and $k_j$-uniform set family,
\item $s(\mathcal{H}_j)=s(\mathcal{H})$ for all $j$,
\item $\lim_{j\rightarrow\infty}k_j=\infty$.
\end{enumerate}
\end{proposition}
\begin{proof}
We will define the set families $\mathcal{H}_j$ inductively. First, let $\mathcal{H}_1=\mathcal{H}$. Suppose we have defined $\mathcal{H}_j\subseteq 2^{[n_j]}$, an at most $t$-intersecting and $k_j$-uniform family for some $j\in\mathbb{Z}^+$. Let $m=|\mathcal{H}_j|$.
Consider set families $\mathcal{G}^{(1)},\ldots,\mathcal{G}^{(m)},\mathcal{H}^{(1)},\ldots,\mathcal{H}^{(m)}$ defined on disjoint universes such that each $\mathcal{G}^{(\ell)}$ (and similarly, each $\mathcal{H}^{(\ell)}$) is isomorphic to $\mathcal{H}_j$. Write
$$\mathcal{G}^{(\ell)}=\{B_1^{(\ell)},\ldots,B_m^{(\ell)}\},\quad \mathcal{H}^{(\ell)}=\{C_1^{(\ell)},\ldots,C_m^{(\ell)}\}.$$
Now, for $1\leq h,i\leq m$, define the sets $A_{h,i}=B_h^{(i)}\sqcup C_i^{(h)}$, and let
$$\mathcal{H}_{j+1}=\{A_{h,i}: 1\leq h,i\leq m\}.$$
It is clear that $\mathcal{H}_{j+1}$ is a $2k_j$-uniform family defined over a universe of $2mn_j$ elements, and that $|\mathcal{H}_{j+1}|=m^2$. We claim that $\mathcal{H}_{j+1}$ is at most $t$-intersecting. Indeed, if $(h_1,i_1)\neq (h_2,i_2)$, then
\begin{align*}
|A_{h_1,i_1}\cap A_{h_2,i_2}|&=|(B_{h_1}^{(i_1)}\sqcup C_{i_1}^{(h_1)})\cap (B_{h_2}^{(i_2)}\sqcup C_{i_2}^{(h_2)})| \\
&=|B_{h_1}^{(i_1)}\cap B_{h_2}^{(i_2)}|+|C_{i_1}^{(h_1)}\cap C_{i_2}^{(h_2)}| \\
&=
\begin{cases}
|C_{i_1}^{(h_1)}\cap C_{i_2}^{(h_2)}|\leq t & \text{if }h_1=h_2\text{ and }i_1\neq i_2, \\
|B_{h_1}^{(i_1)}\cap B_{h_2}^{(i_2)}|\leq t & \text{if }h_1\neq h_2\text{ and }i_1=i_2, \\
0 & \text{if }h_1\neq h_2\text{ and }i_1\neq i_2.
\end{cases}
\end{align*}
Finally,
$$s(\mathcal{H}_{j+1})=\frac{k_{j+1}|\mathcal{H}_{j+1}|}{n_{j+1}}=\frac{2k_jm^2}{2mn_j}=\frac{k_j|\mathcal{H}_j|}{n_j}=s(\mathcal{H}_j).$$ \qed
\end{proof}
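The doubling step in this proof is straightforward to implement. The following Python sketch (illustrative only; it represents sets as \texttt{frozenset}s over the universe $\{0,\ldots,n-1\}$ and relabels each copy onto fresh points, and the name \texttt{double} is ours) builds $\mathcal{H}_{j+1}$ from $\mathcal{H}_j$:
\begin{verbatim}
def double(H, n):
    """Given an at most t-intersecting, k-uniform family H (a list of
    frozensets) on {0, ..., n-1}, return the family {A_{h,i}} of the
    proof: 2k-uniform, still at most t-intersecting, of size |H|^2,
    on a universe of 2*|H|*n points."""
    m = len(H)
    shift = lambda A, off: frozenset(x + off for x in A)
    # 2m pairwise disjoint copies of H: G^(1),...,G^(m), H^(1),...,H^(m).
    G  = [[shift(A, l * n) for A in H] for l in range(m)]
    Hc = [[shift(A, (m + l) * n) for A in H] for l in range(m)]
    # A_{h,i} is the disjoint union of B_h^{(i)} and C_i^{(h)}.
    return [G[i][h] | Hc[h][i] for h in range(m) for i in range(m)]
\end{verbatim}
Starting from the Fano plane ($t=1$, $k=3$, $n=7$, $|\mathcal{H}|=7$), repeated application yields $6$-uniform, $12$-uniform, \ldots{} families that remain at most $1$-intersecting, with $s(\mathcal{H}_j)=3$ throughout.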
\begin{remark}
In the above proposition, $n_j$, $k_j$ and $|\mathcal{H}_j|$ grow with $j$. Clearly, given a family $\mathcal{H}$ as in the above proposition, it is also possible to construct a sequence of set families $\mathcal{H}_j$ such that $s(\mathcal{H}_j)=s(\mathcal{H})$ for all $j$, where $n_j$ and $|\mathcal{H}_j|$ grow with $j$, while $k_j$ stays constant.
It is natural to ask, therefore, if it is possible to construct a sequence of set families satisfying $s(\mathcal{H}_j)=s(\mathcal{H})$, where $n_j$ and $k_j$ grow with $j$, but $|\mathcal{H}_j|$ stays constant. In fact, this is not always possible. Indeed, let $\mathcal{H}$ be the Fano plane, then $t=1$, $n=7$, $k=3$ and $|\mathcal{H}|=7$. Note that $\mathcal{H}$ satisfies Proposition \ref{larger_n} with equality, i.e.
$$\frac{(k|\mathcal{H}|)^2}{n}=k|\mathcal{H}|+t(|\mathcal{H}|^2-|\mathcal{H}|).$$
If we let $n'=\lambda n$ and $k'=\lambda k$ for some $\lambda>1$, then
$$\frac{(k'|\mathcal{H}|)^2}{n'}=\lambda\frac{(k|\mathcal{H}|)^2}{n}=\lambda\left(k|\mathcal{H}|+t(|\mathcal{H}|^2-|\mathcal{H}|)\right)>k'|\mathcal{H}|+t(|\mathcal{H}|^2-|\mathcal{H}|).$$
Hence, by Proposition \ref{larger_n}, there is no $k'$-uniform and at most $t$-intersecting family $\mathcal{H}'\subseteq 2^{[n']}$ such that $|\mathcal{H}'|=|\mathcal{H}|=7$.
\end{remark}
\section{Generating Rounded Gaussians from Physical Communications}\label{Sec6}
In this section, we describe our procedure, called rounded Gaussians from physical (layer) communications (RGPC), to generate deterministic errors from a rounded Gaussian distribution, which we later prove to be sufficiently independent in specific settings. RGPC comprises the following two subprocedures:
\begin{itemize}
\item Hypothesis generation: a protocol to generate a linear regression hypothesis from the training data, which, in our case, is comprised of the physical layer communications between participating parties.
\item Rounded Gaussian error generation: this procedure allows us to use the linear regression hypothesis --- generated by using physical layer communications as training data --- to derive deterministic rounded Gaussian errors. That is, this process samples from a rounded Gaussian distribution in a manner that is (pseudo)random to a polynomial external/internal adversary but is deterministic to the authorized parties.
\end{itemize}
\subsection{Setting and Central Idea}\label{broad}
For the sake of intelligibility, we begin by giving a brief overview of our central idea. Let there be a set of $n \geq 2$ parties, $\mathcal{P} = \{P_i\}_{i=1}^n$. All parties agree upon a function $f(x) = \beta_0 + \beta_1 x,$ with intercept $\beta_0 \leftarrow \mathbb{Z}$ and slope $\beta_1 \leftarrow \mathbb{Z}$. Let $\mathcal{H} \subseteq 2^{\mathcal{P}}$ be a family of sets such that each set $H_i \in \mathcal{H}$ forms a star graph $\partial_i$, wherein each party is connected to a central hub $C_i \notin H_i$ (for all $i \in [|\mathcal{H}|]$) via two channels: one Gaussian and another error corrected. If $\mathcal{H}$ is $k$-uniform and at most $t$-intersecting, then each star in the interconnection graph formed by the sets $H_i \in \mathcal{H}$ contains exactly $k$ members and $2k$ channels, and $|\partial_i \cap \partial_j| \leq t$ for all $i \neq j$. During the protocol, each party $P_j$ sends out message pairs of the form $(x_j, f(x_j))$, where $x_j \leftarrow \mathbb{Z}$ and $f$ is a randomly selected function of a specific type (more on this later), to the central hubs of all stars that it is a member of, such that:
\begin{itemize}
\item $f(x)$ is sent over the Gaussian channel,
\item $x$ is sent over the error corrected channel.
\end{itemize}
For the sake of simplicity, in this section, we only consider a single star. Due to the errors guaranteed to occur in the Gaussian channel, the messages recorded at each central hub $C_i$ are of the form: $y = f(x) + \varepsilon_x,$ where $\varepsilon_x$ follows some Gaussian distribution $\mathcal{N}(0, \sigma^2)$ with mean zero and standard deviation $\sigma$ (in our experiments, discussed in \Cref{Sim}, we take $\sigma \in [10,300]$). $C_i$ forwards $\{x, y\}$ to all parties over the respective error corrected channels in $\partial_i$.
In our algorithm, we use least squares linear regression, which aims to minimize the sum of the squared residuals. We know that the hypothesis generated by linear regression is of the form $h(x) = \hat{\beta}_0 + \hat{\beta}_1 x$. Thus, the statistical error, with respect to our target function, comes out as:
\begin{equation}\label{hypoEq}
\bar{e}_x = |y - h(x)|.
\end{equation}
Due to the nature of the physical layer errors and the independent channels, we know that the errors $\varepsilon_x$ are random and independent. Thus, for restricted settings, the error terms $\bar{e}_{x_i}$ and $\bar{e}_{x_j}$ are almost independent for all $x_i \neq x_j$, and are expected to belong to a Gaussian distribution. Next, we round $\bar{e}_x$ to the nearest integer as $e_x = \lfloor \bar{e}_x \rceil$ to get the final error, $e_x$ (see the sketch after this list), which:
\begin{itemize}
\item is determined by $x$,
\item belongs to a rounded Gaussian distribution.
\end{itemize}
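A minimal numerical sketch of this step, assuming a plain (non-modular) linear target and using NumPy's least squares fit in place of the full protocol, is given below; all parameter values are ours, chosen for illustration:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: f(x) = b0 + b1*x, observed through a
# Gaussian channel with standard deviation sigma.
b0, b1, sigma, ell = 7, 3, 25.0, 4096
x = rng.integers(0, 10**5, size=ell)
y = b0 + b1 * x + rng.normal(0.0, sigma, size=ell)  # y = f(x) + eps_x

# Least squares hypothesis h(x) = b0_hat + b1_hat*x.
b1_hat, b0_hat = np.polyfit(x, y, 1)
e_bar = np.abs(y - (b0_hat + b1_hat * x))  # statistical error |y - h(x)|
e = np.rint(e_bar).astype(int)             # rounded error e_x
\end{verbatim}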
We know from~\cite{Reg[05],Albre[13],Gold[10],Duc[15],Hul[17]} that --- with appropriate parameters --- rounded Gaussians satisfy the hardness requirements for multiple LWE-based constructions. Next, we discuss our RGPC protocol in detail.
\begin{note}\label{ImpNote}
With a sufficiently large number of messages, $f(x)$ can be very closely approximated by the linear regression hypothesis $h(x)$. Therefore, with a suitable choice of parameters, it is quite reasonable to expect that the error distribution is Gaussian (which is indeed the case --- see \Cref{Lemma1}, where we use drowning/smudging to argue about the insignificance of negligible uniform error introduced by linear regression analysis). Considering this, we also examine the more interesting case wherein the computations are performed in $\mathbb{Z}_m$ (for some $m \in \mathbb{Z}^+ \setminus \{1\}$) instead of over $\mathbb{Z}$. However, our proofs and arguments are presented according to the former case, i.e., where the computations are performed over $\mathbb{Z}$. We leave adapting the proofs and arguments for the latter case --- i.e., computations over $\mathbb{Z}_m$ --- as an open problem.
\end{note}
\subsection{Hypothesis Generation from Physical Layer Communications}\label{Gen}
In this section, we describe hypothesis generation from physical layer communications, which allows us to generate an optimal linear regression hypothesis, $h(x)$, for the target function $f(x) + \varepsilon_x$. As mentioned in \Cref{ImpNote}, we consider the case wherein the error computations are performed in $\mathbb{Z}_m$. As described in \Cref{broad}, the linear regression data for each subset of parties $H_i \in \mathcal{H}$ comprises the messages exchanged within the star graph $\partial_i$ formed by the parties in $H_i$.
\subsubsection{Assumptions.}\label{assumption}
We assume that the following conditions hold:
\begin{enumerate}
\item Value of the integer modulus $m$:
\begin{itemize}
\item is either known beforehand, or
\item can be derived from the target function.
\end{itemize}
\item \label{assume2} Size of the dataset, i.e., the total number of recorded physical layer messages, is reasonably large such that there are enough data points to accurately fit linear regression on any function period. In our experiments, we set it to $2^{16}$ messages.
\item \label{assume} For a dataset $\mathcal{D} = \{(x_i,y_i)\}~(i \in [\ell])$ of unique (function input, received message) pairs, the slope $\beta_1$ of $f(x)$ is such that $\ell/\beta_1$ is superpolynomial. For our experiments, we use values of $\beta_1$ with $\ell/\beta_1\geq 100$.
\end{enumerate}
\subsubsection{Setup.}
Recall that the goal of linear regression is to find subset(s) of data points that can be used to generate a hypothesis $h(x)$ approximating the target function, which in our case is $f(x) + \varepsilon_x$; we then extrapolate it to the entire dataset. However, since reduction modulo $m$ is periodic, there is no explicit linear relationship between $x \leftarrow \mathbb{Z}$ and $y = f(x) + \varepsilon_x \bmod m$, even without the error term $\varepsilon_x$. Thus, we cannot directly apply linear regression to the entire dataset $\mathcal{D} = \{(x_i,y_i)\}~(i \in [\ell])$ and expect meaningful results unless $\beta_0=0$ and $\beta_1 = 1$.
We arrange the dataset $\mathcal{D}$ in ascending order with respect to the $x_i$ values, i.e., for $1 \leq i < j \leq \ell$ and all $x_i, x_j \in \mathcal{D}$, it holds that $x_i < x_j$. Let $\mathcal{S} = \{x_i\}_{i=1}^\ell$ denote the resulting ordered set. Observe that the slope of $y = f(x) + \varepsilon_x \bmod m$ is directly proportional to the number of periods on any given range $[x_i, x_j]$. For example, observe the slope in \Cref{Fig1}, which depicts the scatter plot of $y = 3x + \varepsilon_x \bmod 12288$ with three periods. Therefore, in order to find a good linear fit for our target function on the subset of the dataset that lies inside a given range $[x_i, x_j]$, we want to correctly estimate the length of a single period. Consequently, our aim is to find a range $[x_i, x_j]$ for which the following is minimized:
\begin{equation}\label{eq}
\Big| \hat{\beta}_1 - \dfrac{m}{x_j - x_i} \Big|,
\end{equation}
where $\hat{\beta}_1$ denotes the slope for our linear regression hypothesis $h(x) = \hat{\beta}_0 + \hat{\beta}_1 x$ computed on the subset with $x$ values in $[x_i,x_j]$.
\begin{figure}[h!]
\centering
\includegraphics[scale=.084]{1_8x.png}
\caption{Scatter plot for $y = 3 x + \varepsilon_x \bmod 12288$ (three periods)}\label{Fig1}
\end{figure}
\subsubsection{Generating Optimal Hypothesis.}
The following procedure describes our algorithm for finding the optimal hypothesis $h(x)$ and the target range $[x_i, x_j]$ that minimizes \Cref{eq} for $\beta_0=0$. When $\beta_0$ is not necessarily $0$, a small modification to the procedure is needed (namely, searching over all intervals $[x_i, x_j]$ instead of only the intervals described below).
Let $\kappa$ denote the total number of periods; then it follows from Assumption~\ref{assume} (given in Section~\ref{assumption}) that $\kappa \leq \lceil \ell/100 \rceil$. Let $\delta_{\kappa,i} = |\hat{\beta}_1(\kappa, i) - \kappa|$, where $\hat{\beta}_1(\kappa, i)$ denotes the estimate $\hat{\beta}_1$ computed over the range $\left[x_{\left\lfloor(i-1)\ell/\kappa\right\rfloor+1},x_{\left\lfloor i\ell/\kappa\right\rfloor}\right]$.
\begin{enumerate}
\item Initialize the number of periods with $\kappa = 1$ and calculate $\delta_{1,1} = |\hat{\beta}_1(1,1) - 1|$.
\item Compute the $\delta_{\kappa,i}$ values for all $1 < \kappa \leq \lceil \ell/100 \rceil$ and $i \in [\kappa]$. For instance, $\kappa = 2$ denotes that we consider two ranges: $\hat{\beta}_1 (2,1)$ is calculated on $\left[ x_1, x_{\lfloor \ell/\kappa \rfloor}\right]$ and $\hat{\beta}_1 (2,2)$ on $\left[x_{\lfloor \ell/\kappa \rfloor +1}, x_\ell\right]$. Hence, we compute $\delta_{2,i}$ for these two ranges. Similarly, $\kappa =3$ denotes that we consider three ranges $\left[ x_1, x_{\lfloor \ell/\kappa \rfloor}\right]$, $\left[x_{\lfloor \ell/\kappa \rfloor +1}, x_{\lfloor 2\ell/\kappa \rfloor}\right]$ and $\left[x_{\lfloor 2\ell/\kappa \rfloor +1}, x_\ell\right]$, and we compute $\hat{\beta}_1(3,i)$ and $\delta_{3,i}$ over these three ranges. Hence, $\delta_{\kappa,i}$ values are computed for all $(\kappa,i)$ satisfying $1 \leq i \leq \kappa \leq \lceil \ell/100 \rceil$.
\item Identify the optimal value $\delta = \min_{\kappa,i}(\delta_{\kappa,i})$, which is the minimum over all $\kappa \in [\lceil \ell/100 \rceil]$ and $i \in [\kappa]$.
\item After finding the minimal $\delta$, output the corresponding (optimal) hypothesis $h(x)$.
\end{enumerate}
The above algorithm is essentially a grid search over $\kappa$ and $i$, with the performance metric being the minimization of $\delta_{\kappa,i}$.
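Under the same assumptions as the previous sketch (NumPy least squares as the regression subroutine, $\beta_0=0$), the grid search may be sketched as follows; the function name \texttt{optimal\_hypothesis} is ours:
\begin{verbatim}
import numpy as np

def optimal_hypothesis(xs, ys):
    """Grid search over (kappa, i): split the x-sorted data into kappa
    equal blocks, fit a line on block i, and keep the hypothesis whose
    slope estimate minimizes delta = |b1_hat(kappa, i) - kappa|."""
    order = np.argsort(xs)
    xs, ys = xs[order], ys[order]
    ell = len(xs)
    kmax = -(-ell // 100)                 # ceil(ell/100), by Assumption 3
    best_delta, best_h = np.inf, None
    for kappa in range(1, kmax + 1):
        for i in range(1, kappa + 1):
            lo, hi = (i - 1) * ell // kappa, i * ell // kappa
            if hi - lo < 2:
                continue
            b1_hat, b0_hat = np.polyfit(xs[lo:hi], ys[lo:hi], 1)
            delta = abs(b1_hat - kappa)
            if delta < best_delta:
                best_delta, best_h = delta, (b0_hat, b1_hat)
    return best_h, best_delta
\end{verbatim}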
\begin{center}
\textbf{Grid search: more details}
\end{center}
\begin{myenv}
Grid search is an approach to hyperparameter tuning that methodically builds and evaluates a model for each combination of parameters. Owing to its ease of implementation and parallelization, grid search has prevailed as the de facto standard for hyperparameter optimization in machine learning, especially in lower dimensions. For our purpose, we tune two parameters, $\kappa$ and $i$: we perform grid search to find hypotheses $h(x)$ for all $\kappa \in [\lceil \ell/100 \rceil]$ and $i \in [\kappa]$. The optimal hypothesis is the one with the smallest value of the performance metric $\delta_{\kappa,i}$.
\end{myenv}
\subsection{Simulation and Testing}\label{Sim}
\begin{figure}
\centering
\includegraphics[scale=.080]{4_1_4x.png}
\caption{Distribution plot of $\bar{e}_x$ for $y = 546 x + \varepsilon_x \bmod 12288$. Slope estimate: $\hat{\beta}_1 = 551.7782$.}\label{Fig2}
\end{figure}
We tested our RGPC algorithm with varying values of $m$ and $\beta_1$ for the following functions:
\begin{itemize}
\item $f(x) = \beta_0 + \beta_1 x,$
\item $f(x) = \beta_0 + \beta_1 \sqrt{x},$
\item $f(x) = \beta_0 + \beta_1 x^2,$
\item $f(x) = \beta_0 + \beta_1 \sqrt[3]{x},$
\item $f(x) = \beta_0 + \beta_1 \ln (x+1).$
\end{itemize}
To generate the training data, we simulated the channel noise $\varepsilon_x$ (introduced by the Gaussian channel) as random Gaussian noise, sampled from various distributions with zero mean and standard deviation ranging from $10$ to $300$; we used values of the integer modulus $m$ up to $20000$. The channel noise was computed by rounding $\varepsilon_x$ to the nearest integer and reducing the result modulo $m$.
For each function, we simulated $2^{16}$ (input, output) pairs, exchanged over Gaussian channels, i.e., for each function, the dataset $\mathcal{D} = \{(x_i,y_i)\}~(i \in [2^{16}])$ contains $2^{16}$ unique pairs $(x_i, y_i)$. As expected, given the dataset $\mathcal{D}$ with data points $x_i, y_i = f(x_i) + \varepsilon_i \bmod m$, our algorithm always converged to the optimal range, yielding close approximations of the target function with deterministic errors $\bar{e}_{x} = |y - h(x)| \bmod m$. \Cref{Fig2} shows a histogram of the errors $\bar{e}_x$ generated by our RGPC protocol on our training data for the target (linear) function $y = 546 x + \varepsilon_x \bmod 12288$. The errors indeed appear to follow a truncated Gaussian distribution, bounded by the modulus $12288$ at both tails.
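The training data for these tests can be regenerated in a few lines; the sketch below mirrors the setup of \Cref{Fig2}, with parameter values chosen by us for illustration:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m, b1, sigma, ell = 12288, 546, 120.0, 2**16
x = rng.integers(0, m, size=ell)
eps = np.rint(rng.normal(0.0, sigma, size=ell)).astype(int)
y = (b1 * x + eps) % m          # y_i = f(x_i) + eps_i mod m
# Fitting y on x with the grid search sketched above and plotting a
# histogram of |y - h(x)| mod m reproduces the distribution plots.
\end{verbatim}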
\begin{figure}
\centering
\includegraphics[scale=.115]{2_8x.png}
\caption{Distribution plot of $\bar{e}_x$ for $y = 240 \sqrt{x} + \varepsilon_x \bmod 12288$. Slope estimate: $\hat{\beta}_1 = 239.84$.}\label{Fig3}
\end{figure}
Moving on to the cases wherein the independent variable $x$ and the dependent variable $y$ have a nonlinear relation: the most representative example is the power function $f(x)=\beta_1 x^\vartheta$, where $\vartheta \in \mathbb{R}$. Nonlinearities between variables can often be linearized by transforming the independent variable. Hence, we applied the following transformation: letting $x_{\upsilon}=x^\vartheta$, the function $f_\upsilon(x_\upsilon) = \beta_1 x_\upsilon = f(x)$ is linear in $x_\upsilon$, and can be handled by applying our hypothesis generation algorithm for linear functions. \Cref{Fig3,Fig4,Fig6,Fig7} show the histograms of the errors $\bar{e}_x$ generated by our training datasets for the various nonlinear functions from the list given at the beginning of \Cref{Sim}. Again, the errors for these nonlinear functions appear to follow truncated Gaussian distributions, bounded by their respective moduli at both tails.
\begin{figure}
\centering
\includegraphics[scale=.115]{3_8x.png}
\caption{Distribution plot of $\bar{e}_x$ for $y = 125 x^2 + \varepsilon_x \bmod 10218$. Slope estimate: $\hat{\beta}_1 = 124.51$.}\label{Fig4}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.115]{5_8x.png}
\caption{Distribution plot of $\bar{e}_x$ for $y = 221 \sqrt[3]{x} + \varepsilon_x \bmod 11278$. Slope estimate: $\hat{\beta}_1 = 221.01$.}\label{Fig6}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.115]{6_8x.png}
\caption{Distribution plot of $\bar{e}_x$ for $y = 53 \ln (x+1) + \varepsilon_x \bmod 8857$. Slope estimate: $\hat{\beta}_1 = 54.48$.}\label{Fig7}
\end{figure}
\subsection{Complexity}\label{Time}
Let $\ell$ be the size of the dataset collected by recording the physical layer communications. Then least squares linear regression costs $\mathrm{\Theta}(\ell)$ additions and multiplications. It follows from Assumption~\ref{assume} (from \Cref{assumption}) that $\ell^2$ is an upper bound on the number of regression evaluations required by the grid search. Therefore, the overall asymptotic complexity of our algorithm to find the optimal hypothesis, and thereafter generate deterministic rounded Gaussian errors, is $O(\ell^3)$. In comparison, the complexity of performing least squares linear regression on a dataset $\{(x_i, y_i)\}$ that has not been reduced modulo $m$ is simply $\mathrm{\Theta}(\ell)$ additions and multiplications.
\subsection{Error Analysis}
Before moving ahead, we recommend that the reader revisit \Cref{ImpNote}.
Suppose that there are a total of $\ell$ samples, and that the difference of two values modulo $m$ is always represented by an element in $(-m/2,(m+1)/2]$. We make the further assumption that there exists a constant $b\geq 1$ such that the $x_i$ values satisfy:
\begin{equation}
\label{assumption_on_x}
\tag{$\dagger$}
\frac{(x_i-\bar{x})^2}{\sum_{j=1}^{\ell} (x_j-\bar{x})^2}\leq\frac{b}{\ell}
\end{equation}
for all $i=1,\ldots,\ell$, where $\bar{x}=\sum_{j=1}^{\ell}\frac{x_j}{\ell}$.
Observe that, if $\ell-1$ divides $m-1$ and the $x_i$ values are $0,\frac{m-1}{\ell-1},\frac{2(m-1)}{\ell-1},\ldots,m-1$, then $\sum_{j=1}^{\ell} (x_j-\bar{x})^2=\frac{\ell(\ell^2-1)(m-1)^2}{12(\ell-1)^2}$, while the numerator is bounded above by $\frac{(m-1)^2}{4}$; thus (\ref{assumption_on_x}) is satisfied with $b=3$. In general, by the strong law of large numbers, choosing a large enough number of $x_i$'s uniformly at random from $[0, m-1]$ will, with very high probability, yield $\bar{x}$ close to $\frac{m-1}{2}$ and $\frac{1}{\ell}{\sum_{j=1}^{\ell} (x_j-\bar{x})^2}$ close to $\frac{(m^2-1)}{12}$ (since $X\sim U(0, m-1)\implies \E(X)=\frac{m-1}{2}$ and $\var(X)=\frac{m^2-1}{12}$), hence (\ref{assumption_on_x}) will be satisfied with a small constant $b$, say with $b=4$.
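As a quick sanity check of (\ref{assumption_on_x}) for uniformly random inputs, the smallest admissible $b$ can be estimated empirically (illustrative sketch; parameter values are ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
m, ell = 12288, 2**16
x = rng.integers(0, m, size=ell).astype(float)
dev2 = (x - x.mean())**2
b_min = ell * dev2.max() / dev2.sum()   # smallest b satisfying (dagger)
print(b_min)                            # typically close to 3, below b = 4
\end{verbatim}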
The dataset is $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^\ell$, where $y_i=f(x_i)+\varepsilon_i=\beta_0+\beta_1 x_i+\varepsilon_i$, with $\varepsilon_i\sim \mathcal{N}(0,\sigma^2)$. Suppose the regression line is given by $y=\hat{\beta_0}+\hat{\beta}_1x$. Then the error $\bar{e}_i$ is given by
\begin{align*}
\bar{e}_i&=(\hat{\beta_0}+\hat{\beta}_1x_i)-y_i=(\hat{\beta_0}+\hat{\beta}_1x_i)-(\beta_0+\beta_1 x_i+\varepsilon_i) \\
&=(\hat{\beta_0}-\beta_0)+(\hat{\beta_1}-\beta_1)x_i-\varepsilon_i.
\end{align*}
The joint distribution of the regression coefficients $\hat{\beta_0}$ and $\hat{\beta_1}$ is given by the following well known result:
\begin{proposition}
\label{regression_hypothesis_distribution}
Let $y_1,y_2,\ldots,y_\ell$ be independently distributed random variables such that $y_i\sim \mathcal{N}(\alpha+\beta x_i,\sigma^2)$ for all $i=1,\ldots ,\ell$. If $\hat{\alpha}$ and $\hat{\beta}$ are the least square estimates of $\alpha$ and $\beta$ respectively, then:
$$\begin{pmatrix}
\hat{\alpha} \\
\hat{\beta}
\end{pmatrix}
\sim \mathcal{N}\left(
\begin{pmatrix}
\alpha \\
\beta
\end{pmatrix}
,\,
\sigma^2
\begin{pmatrix}
\ell & \sum_{i=1}^{\ell} x_i \\
\sum_{i=1}^{\ell} x_i & \sum_{i=1}^{\ell} x_i^2
\end{pmatrix}^{-1}
\right).$$
\end{proposition}
Applying Proposition \ref{regression_hypothesis_distribution}, and using the fact that $\mathbf{X}\sim\mathcal{N}(\boldsymbol{\mu},\mathbf{\Sigma})\implies\mathbf{A}\mathbf{X}\sim\mathcal{N}(\mathbf{A}\boldsymbol{\mu},\mathbf{A}\mathbf{\Sigma}\mathbf{A}^T)$, we have
\begin{align*}
(\hat{\beta_0}-\beta_0)+(\hat{\beta_1}-\beta_1)x_i&\sim
\mathcal{N}\left(0,\,
\sigma^2\frac{\sum_{j=1}^\ell x_j^2-2x_i\sum_{j=1}^\ell x_j+\ell x_i^2}{\ell\sum_{j=1}^\ell x_j^2-(\sum_{j=1}^\ell x_j)^2}
\right) \\
&=
\mathcal{N}\left(0,\,
\frac{\sigma^2}{\ell}\left(1+\frac{\ell(x_i-\bar{x})^2}{\sum_{j=1}^\ell (x_j-\bar{x})^2}\right)
\right).
\end{align*}
Thus, by assumption (\ref{assumption_on_x}), the variance of $(\hat{\beta_0}-\beta_0)+(\hat{\beta_1}-\beta_1)x_i$ is bounded above by $(1+b)\frac{\sigma^2}{\ell}$.
Since $Z\sim \mathcal{N}(0,1)$ satisfies $|Z|\leq 2.807034$ with probability $0.995$, by the union bound, $\bar{e}_i$ is bounded by
$$|\bar{e}_i|\leq 2.807034\left(1+\sqrt{\frac{1+b}{\ell}}\right)\sigma$$
with probability at least $0.99$.
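For concreteness, with $b=4$ and $\ell=2^{16}$ samples, this reads $|\bar{e}_i|\leq 2.807034\left(1+\sqrt{5/2^{16}}\right)\sigma\approx 2.83\sigma$, i.e., the regression fit inflates the channel-noise bound by less than $1\%$.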
\begin{note}\label{note1}
Our protocol allows fine-tuning the Gaussian errors by tweaking the standard deviation $\sigma$ and mean $\mu$ for the Gaussian channel. Hence, in our proofs and arguments, we only use the term ``target rounded Gaussian''.
\end{note}
\begin{lemma}\label{Lemma1}
Suppose that the number of samples $\ell$ is superpolynomial, and that \emph{(\ref{assumption_on_x})} is satisfied with some constant $b$. Then, the errors $e_i$ belong to the target rounded Gaussian distribution.
\end{lemma}
\begin{proof}
Recall that the error $\bar{e}_i$ has two components: the first is the noise introduced by the Gaussian channel, and the second is the error due to the regression fit. The first component is naturally Gaussian. The standard deviation of the second component is of order $\sigma/\sqrt{\ell}$. Hence, it follows from drowning/smudging that for a superpolynomial $\ell$, the distribution of $\bar{e}_i$ is statistically indistinguishable from the Gaussian distribution to which the first component belongs. Therefore, the final output, $e_i = \lfloor \bar{e}_i \rceil$, of the RGPC protocol belongs to the target rounded Gaussian distribution (see \Cref{note1}). \qed
\end{proof}
Hence, our RGPC protocol generates errors, $e_{x} = \lfloor |y - h(x)| \rceil$, in a deterministic manner by generating a mapping defined by the hypothesis $h$; let $M_h: x \mapsto e_x$ denote this mapping.
\begin{lemma}\label{Obs1}
For an external adversary $\mathcal{A}$, it holds that $M_h: x \mapsto e_x$ maps a random element $x \leftarrow \mathbb{Z}$ to a random element $e_x$ in the target rounded Gaussian distribution $\mathrm{\Psi}(0, \hat{\sigma}^2)$.
\end{lemma}
\begin{proof}
It follows from \Cref{Lemma1} that $M_h$ outputs from the target rounded Gaussian distribution. Note the following straightforward observations:
\begin{itemize}
\item Since each coefficient of $f(x)$ is randomly sampled from $\mathbb{Z}_m$, $f(x)$ is a random linear function.
\item The inputs $x \leftarrow \mathbb{Z}$ are sampled randomly.
\item The Gaussian channel introduces a noise $\varepsilon_x$ to $f(x)$, that is drawn i.i.d. from a Gaussian distribution $\mathcal{N}(0,\sigma^2)$. Hence, the receiving parties get a random element $f(x) + \varepsilon_x$.
\item It follows that $\lfloor |y - h(x)| \rceil$ outputs a random element from the target rounded Gaussian $\mathrm{\Psi}(0, \hat{\sigma}^2)$.
\end{itemize}
Hence, $M_h: x \mapsto e_x$ is a deterministic mapping from a random element $x \leftarrow \mathbb{Z}$ to a random element $e_x$ in the desired rounded Gaussian $\mathrm{\Psi}(0, \hat{\sigma}^2)$. So, given an input $x$, an external adversary $\mathcal{A}$ has no advantage in guessing $e_x$. \qed
\end{proof}
\section{Mutual Information Analysis}\label{Mutual}
\begin{definition}
\emph{Let $f_X$ be the probability density function (p.d.f.) of the continuous random variable $X$. Then, the \emph{differential entropy} of $X$ is
$$H(X)=-\int f_X(x)\log f_X(x)\,dx.$$}
\end{definition}
\begin{definition}
\emph{The \emph{mutual information}, $I(X;Y)$, of two continuous random variables $X$ and $Y$ with joint p.d.f.\ $f_{X,Y}$ and marginal p.d.f.s $f_X$ and $f_Y$, respectively, is
$$I(X;Y)=\int\int f_{X,Y}(x,y)\log\left(\frac{f_{X,Y}(x,y)}{f_X(x)f_Y(y)}\right)\,dy\,dx.$$}
\end{definition}
From the above definitions, it is easy to prove that $I(X;Y)$ satisfies the equality:
$$I(X;Y)=H(X)+H(Y)-H(X,Y).$$
Let us now describe our aim. Suppose, for $i=1,2,\ldots,\ell$, we have
$$y_i\sim \mathcal{N}(\alpha+\beta x_i,\sigma^2)\quad\text{and}\quad z_i\sim \mathcal{N}(\alpha+\beta w_i,\sigma^2),$$
with $x_i=w_i$ for $i=1,\ldots,a$. Let $h_1(x)=\hat{\alpha}_1+\hat{\beta}_1 x$ and $h_2(w)=\hat{\alpha}_2+\hat{\beta}_2 w$ be the linear regression hypotheses obtained from the samples $(x_i,y_i)$ and $(w_i,z_i)$, respectively (so that $\hat{\alpha}_j$ estimates the intercept $\alpha$ and $\hat{\beta}_j$ the slope $\beta$, matching Proposition \ref{regression_hypothesis_distribution}). We would like to compute an expression for the mutual information
$$I((\hat{\alpha_1},\hat{\beta_1});(\hat{\alpha_2},\hat{\beta_2})).$$
First, we recall the following standard fact:
\begin{proposition}
\label{multivariate_normal_differential_entropy}
Let $X\sim \mathcal{N}(\mathbf{v},\Sigma)$, where $\mathbf{v}\in\mathbb{R}^d$ and $\Sigma\in\mathbb{R}^{d\times d}$. Then:
$$H(X)=\frac{1}{2}\log(\det\Sigma)+\frac{d}{2}(1+\log(2\pi)).$$
\end{proposition}
Our main result is the following:
\begin{proposition}\label{mainProp}
Let $\hat{\alpha_1}$, $\hat{\beta_1}$, $\hat{\alpha_2}$, $\hat{\beta_2}$ be as above. Then
\begin{gather*}
H(\hat{\alpha_1},\hat{\beta_1})=2\log\sigma-\frac{1}{2}\log(\ell X_2-X_1^2)+(1+\log(2\pi)), \\
H(\hat{\alpha_2},\hat{\beta_2})=2\log\sigma-\frac{1}{2}\log(\ell W_2-W_1^2)+(1+\log(2\pi)),
\end{gather*}
and
\begin{align*}
&\ H(\hat{\alpha_1},\hat{\beta_1},\hat{\alpha_2},\hat{\beta_2}) \\
=&\ 4\log\sigma-\frac{1}{2}\log\left((\ell X_2-X_1^2)(\ell W_2-W_1^2)\right)+2(1+\log(2\pi)) \\
&\qquad+\frac{1}{2}\log\left(1-\frac{\left(\ell C_2-2C_1X_1+a X_2\right)\left(\ell C_2-2C_1W_1+a W_2\right)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}\right. \\
&\qquad\left.+\frac{\left((a -1)C_2-C_3\right)\left((a -1)C_2-C_3+\ell(X_2+W_2)-2X_1W_1\right)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}\right),
\end{align*}
where $X_1=\sum_{i=1}^{\ell} x_i$, $X_2=\sum_{i=1}^{\ell} x_i^2$, $W_1=\sum_{i=1}^{\ell} w_i$, $W_2=\sum_{i=1}^{\ell} w_i^2$, $C_1=\sum_{i=1}^a x_i=\sum_{i=1}^a w_i$, $C_2=\sum_{i=1}^a x_i^2=\sum_{i=1}^a w_i^2$ and $C_3=\sum_{i=1}^{\ell}\sum_{j=1,j\neq i}^{\ell}x_ix_j$. The mutual information between $(\hat{\alpha_1},\hat{\beta_1})$ and $(\hat{\alpha_2},\hat{\beta_2})$ is:
\begin{align*}
&I((\hat{\alpha_1},\hat{\beta_1});(\hat{\alpha_2},\hat{\beta_2})) \\
=&\ -\frac{1}{2}\log\left(1-\frac{\left(\ell C_2-2C_1X_1+aX_2\right)\left(\ell C_2-2C_1W_1+aW_2\right)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}\right. \\
&\qquad\left.+\frac{\left((a-1)C_2-C_3\right)\left((a-1)C_2-C_3+\ell(X_2+W_2)-2X_1W_1\right)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}\right).
\end{align*}
\end{proposition}
\begin{proof}
The expressions for $H(\hat{\alpha_1},\hat{\beta_1})$ and $H(\hat{\alpha_2},\hat{\beta_2})$ follow from Propositions \ref{regression_hypothesis_distribution} and \ref{multivariate_normal_differential_entropy}, and the expression for $I((\hat{\alpha_1},\hat{\beta_1});(\hat{\alpha_2},\hat{\beta_2}))$ follows from
$$I((\hat{\alpha_1},\hat{\beta_1});(\hat{\alpha_2},\hat{\beta_2}))=H(\hat{\alpha_1},\hat{\beta_1})+H(\hat{\alpha_2},\hat{\beta_2})-H(\hat{\alpha_1},\hat{\beta_1},\hat{\alpha_2},\hat{\beta_2}).$$
It remains to derive the expression for $H(\hat{\alpha_1},\hat{\beta_1},\hat{\alpha_2},\hat{\beta_2})$.
First, define the matrices
$$X=
\begin{pmatrix}
1 & x_1 \\
1 & x_2 \\
\vdots & \vdots \\
1 & x_\ell
\end{pmatrix},
\qquad
W=
\begin{pmatrix}
1 & w_1 \\
1 & w_2 \\
\vdots & \vdots \\
1 & w_\ell
\end{pmatrix}.
$$
Then
\begin{align*}
\boldsymbol{\hat{\theta}}:=
\begin{pmatrix}
\hat{\alpha_1} \\
\hat{\beta_1} \\
\hat{\alpha_2} \\
\hat{\beta_2}
\end{pmatrix}
&=
\begin{pmatrix}
\alpha \\
\beta \\
\alpha \\
\beta
\end{pmatrix}
+
\begin{pmatrix}
(X^TX)^{-1}X^TU \\
(W^TW)^{-1}W^TV
\end{pmatrix} \\
&=\begin{pmatrix}
\alpha \\
\beta \\
\alpha \\
\beta
\end{pmatrix}
+
\begin{pmatrix}
(X^TX)^{-1}X^T & 0 \\
0 & (W^TW)^{-1}W^T
\end{pmatrix}
\begin{pmatrix}
U \\
V
\end{pmatrix},
\end{align*}
where $U$, $V\sim \mathcal{N}(0,\sigma^2 I_\ell)$; so:
$$\var(\boldsymbol{\hat{\theta}})=
\begin{pmatrix}
(X^TX)^{-1}X^T & 0 \\
0 & (W^TW)^{-1}W^T
\end{pmatrix}
\var{
\begin{pmatrix}
U \\
V
\end{pmatrix}
}
\begin{pmatrix}
X(X^TX)^{-1} & 0 \\
0 & W(W^TW)^{-1}
\end{pmatrix}
.$$
For any matrix $M=(M_{i,j})$, let $[M]_a$ denote the matrix with the same dimensions as $M$, and with entries
$$([M]_a)_{i,j}=
\begin{cases}
M_{i,j} & \text{if }i,j\leq a, \\
0 & \text{otherwise}.
\end{cases}
$$
Note that
$$\var{
\begin{pmatrix}
U \\
V
\end{pmatrix}
}=
\begin{pmatrix}
\sigma^2 I_\ell & \sigma^2 [I_\ell]_a \\
\sigma^2 [I_\ell]_a & \sigma^2 I_\ell
\end{pmatrix},
$$
hence
\begin{align*}
\var(\boldsymbol{\hat{\theta}})=\sigma^2
\begin{pmatrix}
(X^TX)^{-1} & [(X^TX)^{-1}X^T]_a(W(W^TW)^{-1}) \\
[(W^TW)^{-1}W^T]_a(X(X^TX)^{-1}) & (W^TW)^{-1}
\end{pmatrix},
\end{align*}
and
$$\det(\var(\boldsymbol{\hat{\theta}}))=\sigma^8\det(A-BD^{-1}C)\det(D)$$
where
\begin{align*}
A&=(X^TX)^{-1}=
\begin{pmatrix}
\frac{X_2}{\ell X_2-X_1^2} & -\frac{X_1}{\ell X_2-X_1^2} \\
-\frac{X_1}{\ell X_2-X_1^2} & \frac{\ell}{\ell X_2-X_1^2}
\end{pmatrix}, \\
B&=[(X^TX)^{-1}X^T]_a(W(W^TW)^{-1}) \\
&=
\begin{pmatrix}
\frac{\sum_{i=1}^{a} (X_2-x_iX_1)(W_2-w_iW_1)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)} & \frac{\sum_{i=1}^{a} (\ell w_i-W_1)(X_2-x_iX_1)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)} \\
\frac{\sum_{i=1}^{a} (\ell x_i-X_1)(W_2-w_iW_1)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)} & \frac{\sum_{i=1}^{a} (\ell x_i-X_1)(\ell w_i-W_1)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}
\end{pmatrix},\\
C&=[(W^TW)^{-1}W^T]_a(X(X^TX)^{-1}) \\
&=
\begin{pmatrix}
\frac{\sum_{i=1}^{a} (X_2-x_iX_1)(W_2-w_iW_1)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)} & \frac{\sum_{i=1}^{a} (\ell x_i-X_1)(W_2-w_iW_1)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)} \\
\frac{\sum_{i=1}^{a} (\ell w_i-W_1)(X_2-x_iX_1)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)} & \frac{\sum_{i=1}^{a} (\ell x_i-X_1)(\ell w_i-W_1)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}
\end{pmatrix}, \\
D&=(W^TW)^{-1}=
\begin{pmatrix}
\ell & W_1 \\
W_1 & W_2
\end{pmatrix}^{-1}.
\end{align*}
After a lengthy computation, we obtain the following expression for $\det(\var(\boldsymbol{\hat{\theta}}))$:
\begin{align*}
&\frac{\sigma^8}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}\left(1-\frac{\left(\ell C_2-2C_1X_1+aX_2\right)\left(\ell C_2-2C_1W_1+aW_2\right)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}\right. \\
&\qquad\left.+\frac{\left((a-1)C_2-C_3\right)\left((a-1)C_2-C_3+\ell(X_2+W_2)-2X_1W_1\right)}{(\ell X_2-X_1^2)(\ell W_2-W_1^2)}\right).
\end{align*}
The expression for $H(\hat{\alpha_1},\hat{\beta_1},\hat{\alpha_2},\hat{\beta_2})$ follows by applying Proposition \ref{multivariate_normal_differential_entropy}. \qed
\end{proof}
\section{Learning with Linear Regression (LWLR)}\label{LWLRsec}
In this section, we define LWLR and reduce its hardness to that of LWE. As mentioned in Note \ref{note1}, our RGPC protocol gives us the freedom to tune the target rounded Gaussian distribution $\mathrm{\Psi}(0, \hat{\sigma}^2)$ by simply selecting the desired standard deviation $\sigma$. Therefore, when referring to the desired rounded Gaussian distribution in the hardness proofs, we use $\mathrm{\Psi}(0, \hat{\sigma}^2)$ (or $\mathrm{\Psi}(0, \hat{\sigma}^2) \bmod m$, i.e., $\mathrm{\Psi}_m(0, \hat{\sigma}^2)$) without dwelling on the specific value of $\sigma$.
Let $\mathcal{P} = \{P_i\}_{i=1}^n$ be a set of $n$ parties.
\begin{definition}
\emph{For modulus $m$ and a uniformly sampled $\textbf{a} \leftarrow \mathbb{Z}^w_m$, the learning with linear regression (LWLR) distribution LWLR${}_{\textbf{s},m,w}$ over $\mathbb{Z}_m^w \times \mathbb{Z}_m$ is defined as: $(\textbf{a}, x + e_{x}),$ where $x = \langle \textbf{a}, \textbf{s} \rangle$ and $e_{x} \in \mathrm{\Psi}(0, \hat{\sigma}^2)$ is a rounded Gaussian error generated by the RGPC protocol, on input $x$.}
\end{definition}
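In code, sampling from LWLR differs from sampling an LWE instance only in where the error comes from. A hypothetical sketch follows (the name \texttt{lwlr\_sample} is ours, and \texttt{M\_h} stands for the RGPC error map, e.g., the fitted hypothesis composed with rounding from \Cref{Gen}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def lwlr_sample(s, m, M_h):
    """One LWLR_{s,m,w} sample: a is uniform over Z_m^w,
    x = <a, s> mod m, and e_x = M_h(x) is the deterministic rounded
    Gaussian error produced by the RGPC protocol on input x."""
    a = rng.integers(0, m, size=len(s))
    x = int(a @ s) % m
    return a, (x + M_h(x)) % m
\end{verbatim}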
\begin{theorem}\label{LWLRthm}
For modulus $m$, security parameter $\L$, $\ell = g(\L)$ samples (where $g$ is a superpolynomial function), polynomial adversary $\mathcal{A} \notin \mathcal{P}$, some distribution over secret $\textbf{s} \in \mathbb{Z}_m^w$, and a mapping $M_h: \mathbb{Z} \to \mathrm{\Psi}(0, \hat{\sigma}^2)$ generated by the RGPC protocol, where $\mathrm{\Psi}(0, \hat{\sigma}^2)$ is the target rounded Gaussian distribution, solving decision-LWLR${}_{\textbf{s},m,w}$ is at least as hard as solving the decision-LWE${}_{\textbf{s},m,w}$ problem for the same distribution over $\textbf{s}$.
\end{theorem}
\begin{proof}
Recall from \Cref{Lemma1} that, since $\ell$ is superpolynomial, the errors belong to the desired rounded Gaussian distribution $\mathrm{\Psi}(0, \hat{\sigma}^2)$. As given, for a fixed secret $\textbf{s} \in \mathbb{Z}^w_m$, a decision-LWLR${}_{\textbf{s},m,w}$ instance is defined as $(\textbf{a}, x + e_x)$ for $\textbf{a} \xleftarrow{\; \$ \;} \mathbb{Z}^w_m$ and $x = \langle \textbf{a},\textbf{s} \rangle$. Recall that a decision-LWE${}_{\textbf{s},m,w}$ instance is defined as $(\textbf{a}, \langle \textbf{a}, \textbf{s} \rangle + e)$ for $\textbf{a} \leftarrow \mathbb{Z}^w_m$ and $e \leftarrow \chi$ for a rounded (or discrete) Gaussian distribution $\chi$. We know from \Cref{Obs1} that $M_h$ is a deterministic mapping --- generated by the RGPC protocol --- from random inputs $x \leftarrow \mathbb{Z}$ to random elements $e_x \in \mathrm{\Psi}(0, \hat{\sigma}^2)$.
Next, we define the following two games.
\begin{itemize}
\item $\mathfrak{G}_1$: in this game, we begin by fixing a secret $\textbf{s}$. Each query from the attacker is answered with an LWLR${}_{\textbf{s},m,w}$ instance as: $(\textbf{a}, x + e_x)$ for a unique $\textbf{a} \xleftarrow{\; \$ \;} \mathbb{Z}^w_m$, and $x = \langle \textbf{a},\textbf{s} \rangle$. The error $e_x \in \mathrm{\Psi}(0, \hat{\sigma}^2)$ is generated as: $e_x = M_h(x)$.
\item $\mathfrak{G}_2$: in this game, we begin by fixing a secret $\textbf{s}$. Each query from the attacker is answered with an LWE${}_{\textbf{s},m,w}$ instance as: $(\textbf{a}, \langle \textbf{a}, \textbf{s} \rangle + e)$ for $\textbf{a} \xleftarrow{\; \$ \;} \mathbb{Z}^w_m$ and $e \leftarrow \mathrm{\Psi}(0, \tilde{\sigma}^2)$, where $\mathrm{\Psi}(0, \tilde{\sigma}^2)$ denotes a rounded Gaussian distribution that is suitable for sampling LWE errors.
\end{itemize}
Suppose that $\mathcal{A} \notin \mathcal{P}$ can distinguish LWLR${}_{\textbf{s},m,w}$ from LWE${}_{\textbf{s},m,w}$ with some non-negligible advantage, i.e., $Adv_{\mathcal{A}}(\mathfrak{G}_1, \mathfrak{G}_2) \geq \varphi(w)$ for a non-negligible function $\varphi$. It then follows that $Adv_{\mathcal{A}}(\mathrm{\Psi}(0, \tilde{\sigma}^2), \mathrm{\Psi}(0, \hat{\sigma}^2)) \geq \varphi(w)$. However, we have already established in \Cref{Obs1} that $M_h$ is random to $\mathcal{A} \notin \mathcal{P}$. Furthermore, we know that $\hat{\sigma}$ can be brought arbitrarily close to $\tilde{\sigma}$ (see \Cref{note1}). Therefore, with a careful selection of parameters, it must hold that $Adv_{\mathcal{A}}(\mathrm{\Psi}(0, \tilde{\sigma}^2), \mathrm{\Psi}(0, \hat{\sigma}^2)) \leq \eta(w)$ for a negligible function $\eta$, which directly yields $Adv_{\mathcal{A}}(\mathfrak{G}_1, \mathfrak{G}_2) \leq \eta(w)$, a contradiction. Hence, for any distribution over a secret $\textbf{s} \in \mathbb{Z}_m^w$, solving decision-LWLR${}_{\textbf{s},m,w}$ is at least as hard as solving decision-LWE${}_{\textbf{s},m,w}$ for the same distribution over $\textbf{s}$. \qed
\end{proof}
\section{Star-specific Key-homomorphic PRF\lowercase{s}}\label{Sec7}
In this section, we use LWLR to construct the first star-specific key-homomorphic (SSKH) PRF family. We adapt the key-homomorphic PRF construction from~\cite{Ban[14]} by replacing the deterministic errors generated via the rounding function from LWR with the deterministic errors generated by our RGPC protocol.
\subsection{Background}
For the sake of completeness, we begin by recalling the key-homomorphic PRF construction from~\cite{Ban[14]}. Let $T$ be a full binary tree with at least one node, i.e., a tree in which every non-leaf node has two children. Let $T.r$ and $T.l$ denote its right and left subtrees, respectively. Let $\lfloor \cdot \rceil_p$ denote the rounding function used in LWR (see \Cref{LWR} for an introduction to LWR).
Let $q \geq 2$, $d = \lceil \log q \rceil$ and $x[i]$ denote the $i^{th}$ bit of a bit-string $x$. Define a gadget vector as:
\[\textbf{g} = (1,2,4,\dots, 2^{d-1}) \in \mathbb{Z}^d_q,\]
where $q$ is the LWE modulus. Define a decomposition function $\textbf{g}^{-1}: \mathbb{Z}_q \rightarrow \{0,1\}^d$ such that $\textbf{g}^{-1}(a)$ is a ``short'' vector and, for all $a \in \mathbb{Z}_q$, it holds that $\langle \textbf{g}, \textbf{g}^{-1}(a) \rangle = a$, where $\langle \cdot, \cdot \rangle$ denotes the inner product. The function $\textbf{g}^{-1}$ is defined as:
\begin{center}
$\textbf{g}^{-1}(a) = (x[0], x[1], \dots, x[d-1]) \in \{0,1\}^d,$
\end{center}
where $x[0], x[1], \ldots, x[d-1]$ are the bits of the binary representation $a = \sum\limits^{d-1}_{i=0} x[i] \cdot 2^i$. The gadget vector is used to define the gadget matrix $\textbf{G}$ as:
\[\textbf{G} = \textbf{I}_w \otimes \textbf{g} = \text{diag}(\textbf{g}, \dots, \textbf{g}) \in \mathbb{Z}^{w \times wd}_q,\]
where $\textbf{I}_w$ is the $w \times w$ identity matrix and $\otimes$ denotes the Kronecker product~\cite{Kath[04]}. The binary decomposition function, $\textbf{g}^{-1}$, is applied entry-wise to vectors and matrices over $\mathbb{Z}_q$. Thus, $\textbf{g}^{-1}$ can be extended to get another deterministic decomposition function as: $$\textbf{G}^{-1}: \mathbb{Z}^{w \times u}_q \rightarrow \{0,1\}^{wd \times u}$$ such that $\textbf{G} \cdot \textbf{G}^{-1}(\textbf{A}) = \textbf{A}$.
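The gadget machinery is easy to realize concretely; the sketch below (ours, for illustration) implements $\textbf{g}^{-1}$, the entry-wise $\textbf{G}^{-1}$, and checks the identity $\textbf{G} \cdot \textbf{G}^{-1}(\textbf{A}) = \textbf{A}$:
\begin{verbatim}
import numpy as np

def g_inv(a, d):
    """Bit decomposition of a: a {0,1}-vector with <g, g_inv(a)> = a."""
    return np.array([(int(a) >> i) & 1 for i in range(d)])

def G_inv(A, d):
    """Entry-wise decomposition of a w x u matrix over Z_q into a
    wd x u binary matrix satisfying G @ G_inv(A) = A (mod q)."""
    w, u = A.shape
    out = np.zeros((w * d, u), dtype=int)
    for r in range(w):
        for c in range(u):
            out[r * d:(r + 1) * d, c] = g_inv(A[r, c], d)
    return out

q, d = 2**10, 10                              # d = ceil(log2 q)
g = 2 ** np.arange(d)                         # gadget vector
G = np.kron(np.eye(3, dtype=int), g)          # gadget matrix for w = 3
A = (np.arange(6).reshape(3, 2) * 111) % q
assert np.array_equal((G @ G_inv(A, d)) % q, A)
\end{verbatim}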
Given uniformly sampled matrices, $\textbf{A}_0, \textbf{A}_1 \in \mathbb{Z}^{w \times wd}_q$, define function $\textbf{A}_T(x): \{0,1\}^{|T|} \rightarrow \mathbb{Z}^{w \times wd}_q$ as:
\begin{equation}
\textbf{A}_T(x) =
\begin{cases}
\textbf{A}_x & \text{if } |T| = 1, \\
\textbf{A}_{T.l}(x_l) \cdot \textbf{G}^{-1}(\textbf{A}_{T.r}(x_r)) & \text{otherwise},
\end{cases}
\label{AlignEq}
\end{equation}
where $|T|$ denotes the total number of leaves in $T$ and $x \in \{0,1\}^{|T|}$ such that $x = x_l || x_r$ for $x_l \in \{0,1\}^{|T.l|}$ and $x_r \in \{0,1\}^{|T.r|}$. The key-homomorphic PRF family is defined as:
\[\mathcal{F}_{\textbf{A}_0, \textbf{A}_1, T, p} =
\left\lbrace F_\textbf{s}: \{0,1\}^{|T|} \rightarrow \mathbb{Z}^{wd}_p \right\rbrace,\]
where $p \leq q$ is the modulus. A member of the function family $\mathcal{F}$ is indexed by the seed $\textbf{s} \in \mathbb{Z}^w_q$ as: \[F_{\textbf{s}}(x) = \lfloor \textbf{s} \cdot \textbf{A}_{T}(x) \rceil_p.\]
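For reference, the recursion in Equation~\eqref{AlignEq} and the final rounding translate directly into code. The sketch below is ours: trees are nested pairs with leaves marked \texttt{None}, \texttt{G\_inv} is reused from the previous sketch, and the standard convention $\lfloor v \rceil_p = \lfloor (p/q)\,v \rceil \bmod p$ is assumed for the rounding step:
\begin{verbatim}
import numpy as np

def A_T(tree, bits, A0, A1, d, q):
    """Recursion defining A_T(x): a leaf consumes one input bit and
    returns A_0 or A_1; an internal node combines its two subtrees
    via the decomposition G^{-1}. Returns (matrix, remaining bits)."""
    if tree is None:                                  # leaf
        return (A1 if bits[0] else A0), bits[1:]
    Al, bits = A_T(tree[0], bits, A0, A1, d, q)
    Ar, bits = A_T(tree[1], bits, A0, A1, d, q)
    return (Al @ G_inv(Ar, d)) % q, bits

def F(s, tree, bits, A0, A1, d, q, p):
    """Key-homomorphic PRF of [Ban14]: round s . A_T(x) from Z_q to Z_p."""
    A, rest = A_T(tree, bits, A0, A1, d, q)
    assert not rest                                   # all |T| bits consumed
    v = (s @ A) % q
    return np.floor(v * p / q + 0.5).astype(int) % p
\end{verbatim}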
\subsection{Our Construction}
We are now ready to present the construction for the first SSKH PRF family.
\subsubsection{Settings.}
Let $\mathcal{P} = \{P_i\}_{i=1}^n$ be a set of $n$ honest parties, arranged as the vertices of an interconnection graph $G = (V, E)$ that is comprised of $\rho$ stars $\partial_1, \ldots, \partial_\rho$, i.e., each subgraph $\partial_i$ is a star $S_k$ with $k$ leaves. As mentioned in Section~\ref{broad}, we assume that each party in $\partial_i$ is connected to $\partial_i$'s central hub $C_i$ via two channels: one Gaussian channel with the desired parameters and another error corrected channel. Each party in $\mathcal{P}$ receives the parameters $\textbf{A}_0, \textbf{A}_1$, i.e., all parties are provisioned with identical parameters. Hence, physical layer communications and measurements are the exclusive source of variety and secrecy in this protocol. Since we are dealing with vectors, the data points for linear regression analysis, i.e., the messages exchanged among the parties in the stars, are of the form $\{(\mathbf{x}_i, \mathbf{y}_i)\}_{i=1}^\ell$, where $\textbf{x}_i, \textbf{y}_i \in \mathbb{Z}^{wd}$. Consequently, the resulting rounded Gaussian distribution (generated by the RGPC protocol) becomes $\mathrm{\Psi}^{wd}(\textbf{0}, \hat{\sigma}^2)$. Let the parties in each star graph exchange messages in accordance with the RGPC protocol such that messages from different central hubs $C_i, C_j~(\forall i,j \in [\rho];\ i \neq j)$ are distinguishable to the parties belonging to multiple star graphs.
\subsubsection{Construction.}
Without loss of generality, consider a star $\partial_i$ in $G$. Each party in $\partial_i$ follows the RGPC protocol to generate its linear regression hypothesis $h^{(\partial_i)}$. Parties in star $\partial_i$ construct a $\partial_i$-specific key-homomorphic PRF family, whose member $F_{\textbf{s}_{i}}^{(\partial_i)}(x)$, indexed by the key/seed $\textbf{s}_{i} \in \mathbb{Z}^w_m$, is defined as:
\begin{equation}
F_{\textbf{s}_{i}}^{(\partial_i)}(x) = \textbf{s}_{i} \cdot \textbf{A}_{T}(x) + \textbf{e}^{(\partial_i)}_{\textbf{b}} \bmod m, \label{eq1}
\end{equation}
where $\textbf{A}_T(x)$ is as defined by Equation \eqref{AlignEq}, $\textbf{b} = \textbf{s}_{i} \cdot \textbf{A}_{T}(x)$, and $\textbf{e}^{(\partial_i)}_{\textbf{b}}$ denotes a rounded Gaussian error computed --- on input $\textbf{b}$ --- by our RGPC protocol via its star-specific least squares hypothesis $h^{(\partial_i)}$ for $\partial_i$. The star-specific secret $\textbf{s}_{i}$ can be generated by using a reconfigurable antenna (RA) \cite{MohAli[21],inbook} at the central hub $C_i$ and thereafter reconfiguring it to randomize the error-corrected channel between itself and the parties in $\partial_i$. Specifically, $\textbf{s}_{i}$ can be generated via the following procedure (a toy sketch follows the list):
\begin{enumerate}
\item After performing the RGPC protocol, each party $P_j \in \partial_i$ sends a random $r_j \in [\ell]$ to $C_i$ via the error corrected channel. $C_i$ broadcasts $r_j$ to all parties in $\partial_i$ and randomizes all error-corrected channels by reconfiguring its RA. If two parties' $r_j$ values arrive simultaneously, then $C_i$ randomly discards one of them and notifies the concerned party to resend another random value. This ensures that the channels are re-randomized after receiving each $r_j$ value. By the end of this cycle, each party receives $k$ random values $\{r_j\}_{j=1}^k$. Let $\wp_i$ denote the set of all $r_j$ values received by the parties in $\partial_i$.
\item Each party in $\partial_i$ computes $\bigoplus\limits_{r_j \in \wp_i} r_j = s \bmod m$.
\item This procedure is repeated to extract the required number of bits to generate the vector $\textbf{s}_{i}$.
\end{enumerate}
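A toy sketch of one round of this procedure (the function name is ours; channel randomization and broadcast are abstracted away):
\begin{verbatim}
import secrets
from functools import reduce

def star_secret_component(k, ell, m):
    """Toy version of one round: each of the k parties in a star
    broadcasts (via the hub) a random r_j in [ell]; every party XORs
    the k received values and reduces mod m. Repeating this yields
    the components of the star-specific secret s_i."""
    rs = [secrets.randbelow(ell) + 1 for _ in range(k)]
    return reduce(lambda a, b: a ^ b, rs) % m
\end{verbatim}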
Since $C_i$ randomizes all its channels by simply reconfiguring its RA, no active or passive adversary can compromise all $r_j \in \wp_i$ values \cite{Aono[05],MehWall[14],YanPan[21],MohAli[21],inbook,Alan[20],PanGer[19],MPDaly[12]}. In honest settings, secrecy of the star-specific secret $\textbf{s}_{i}$, generated by the aforementioned procedure, follows directly from the following three facts:
\begin{itemize}
\item All parties are honest.
\item All data points $\{x_i\}_{i=1}^\ell$ are randomly sampled integers, i.e., $x_i \leftarrow \mathbb{Z}$.
\item The coefficients of $f(x)$, and hence $f(x)$ itself, are sampled randomly.
\end{itemize}
We examine other settings, wherein there are active/passive and internal/external adversaries, in the following section. Note that the protocol does not require the parties to share their identities. Hence, the above protocol is trivially anonymous over anonymous channels (see \cite{GeoCla[08],MattYen[09]} for surveys on anonymous communications). Since anonymity has multiple applications in cryptographic protocols \cite{Stinson[87],Phillips[92],Blundo[96],Kishi[02],Deng[07],SehrawatVipin[21],Sehrawat[17],AmosMatt[07],Daza[07],Gehr[97],Anat[15],Mida[03],OstroKush[06],DijiHua[07]}, it is a useful feature of our construction.
\subsection{Maximum number of SSKH PRFs and Defenses Against Various Attacks}\label{Max}
In this section, we employ our results from \Cref{Extremal} to derive the maximum number of SSKH PRFs that can be constructed by a set of $n$ parties. Recall that we use the terms star and star graph interchangeably. We know that in order to construct an SSKH PRF family, the parties are arranged in an interconnection graph $G$ wherein the --- possibly overlapping --- subsets of $\mathcal{P}$ form different star graphs, $\partial_1, \ldots, \partial_\rho$, within $G$. We assume that for all $i \in [\rho]$, it holds that: $|\partial_i| = k$. Recall from \Cref{Extremal} that we derived various bounds on the size of the following set families $\mathcal{H}$ defined over a set of $n$ elements:
\begin{enumerate}
\item $\mathcal{H}$ is an at most $t$-intersecting $k$-uniform family,
\item $\mathcal{H}$ is a maximally cover-free at most $t$-intersecting $k$-uniform family.
\end{enumerate}
We set $n$ to be the number of nodes/parties in $G$. Hence, $k$ represents the size of each star with $t$ being equal to (or greater than) $\max\limits_{i\neq j}(|\partial_i \cap \partial_j|)$.
In our SSKH PRF construction, no member of a star $\partial$ has any secrets that are hidden from the other members of $\partial$. Also, irrespective of their memberships, all parties are provisioned with an identical set of initial parameters. The secret keys and regression models are generated via physical layer communications and collaboration. Due to these facts, the parties in our SSKH PRF construction must be either honest or semi-honest but non-colluding. We consider these factors while computing the maximum number of SSKH PRFs that can be constructed securely against various types of adversaries. For a star $\partial$, let $\mathcal{O}_{\partial}$ denote an oracle for the SSKH PRF $F^{(\partial)}_{\textbf{s}}$, i.e., on receiving input $x$, $\mathcal{O}_{\partial}$ outputs $F^{(\partial)}_{\textbf{s}}(x)$. Given oracle access to $\mathcal{O}_{\partial}$, it must hold that for a probabilistic polynomial-time adversary $\mathcal{A}$, allowed $\poly(\L)$ queries to $\mathcal{O}_{\partial}$, the SSKH PRF $F^{(\partial)}_{\textbf{s}}$ remains indistinguishable from a uniformly random function $U$ defined over the same domain and range as $F^{(\partial)}_{\textbf{s}}$.
Let $E_{i}$ denote the set of Gaussian and error-corrected channels that are represented by the edges in star $\partial_i$.
\subsubsection{External Adversary with Oracle Access.}
In this case, the adversary can only query the oracle for the SSKH PRF, and hence the secrecy follows directly from the hardness of LWLR. Therefore, at most $t$-intersecting $k$-uniform families are sufficient for this case, i.e., we do not need the underlying set family $\mathcal{H}$ to be maximally cover-free. Moreover, $t = k-1$ suffices for this case because maximum overlap between different stars can be tolerated. Hence, it follows from \Cref{asymptotic_bound} (in \Cref{Extremal}) that the maximum number $\zeta$ of SSKH PRFs that can be constructed is:
$$\zeta\sim\frac{n^{k}}{k!}.$$
\subsubsection{Eavesdropping Adversary with Oracle Access.}
Let $\mathcal{A}$ be an eavesdropping adversary, who is able to observe a subset of Gaussian and/or error-corrected channels between parties and central hubs. We call this subset $E'$ and we assume that $E_{i} \not\subseteq E'$. Let us analyze the security w.r.t. this adversary.
\begin{enumerate}
\item Secrecy of $\textbf{s}_i$: After each party $P_z \in \partial_i$ contributes to the generation of $\textbf{s}_i$ by sending a random value $r_z$ to $C_i$ and $r_z$ is broadcasted to all parties in $\partial_i$, $C_i$ randomizes all error-corrected channels by reconfiguring its RA. This means that an adversary $\mathcal{A}$ is unable to compromise all $r_z$ values. Hence, it follows that no information about $\textbf{s}_i$ is leaked to $\mathcal{A}$.
\item Messages exchanged via the channels in $E'$: leakage of enough messages exchanged within star $\partial_i$ would allow $\mathcal{A}$ to closely reproduce the outcome of the RGPC protocol within $\partial_i$. Hence, if $\mathcal{A}$ can eavesdrop on enough channels in $\partial_i$, it can approximate the deterministic errors generated for any given input. We know that without proper errors, LWE reduces to a system of linear equations that is easily solvable via Gaussian elimination. Hence, this leakage can be devastating. Fortunately, the use of physical layer security technologies can provide information-theoretic protection --- against an eavesdropping adversary --- for the messages exchanged over the physical layer \cite{YonWu[18],ShiJia[21]}.
\end{enumerate}
Hence, with these physical layer security measures, an eavesdropping adversary with oracle access has no advantage over an adversary with oracle access alone.
\subsubsection{Man-in-the-Middle.}
Physical-layer-based key generation schemes exploit channel reciprocity for secret key extraction, which can achieve information-theoretic secrecy against eavesdroppers. However, these schemes have been shown to be vulnerable to man-in-the-middle (MITM) attacks. During a typical MITM attack, the adversary creates separate connection(s) with the communicating node(s) and relays altered transmission packets to them. Eberz et al. \cite{EbeMatt[12]} demonstrated a practical MITM attack against RSS-based key generation protocols \cite{JanaSri[09],SuWade[08]}, wherein the MITM adversary $\mathcal{A}$ exploits the same channel characteristics as the target/communicating parties $P_1, P_2$. To summarize, in the attack from \cite{EbeMatt[12]}, $\mathcal{A}$ injects packets that cause a similar channel measurement at both $P_1$ and $P_2$. This attack enables $\mathcal{A}$ to recover up to 47\% of the secret bits generated by $P_1$ and $P_2$.
To defend against such attacks, we can apply techniques that detect an MITM attack over the physical layer \cite{LeAle[16]}; if one is detected, the antenna of $\partial_i$'s central hub, $C_i$, can be reconfigured to randomize all channels in $\partial_i$ \cite{YanPan[21]}. This only requires a reconfigurable antenna (RA) at each central hub. An RA can swiftly reconfigure its radiation pattern, polarization, and frequency by rearranging its antenna currents \cite{MohAli[21],inbook}. It has been shown that, due to the multipath resulting from having an RA, even small variations by the RA can create large variations in the channel, effectively creating fast varying channels with a random distribution \cite{JunPhD[21]}. One way an RA may randomize the channels is by randomly selecting antenna configurations in the transmitting array at the symbol rate, leading to a random phase and amplitude multiplied onto the transmitted symbol. The resulting randomness is compensated by appropriate element weights so that the intended receiver does not experience any random variations. In this manner, an RA can be used to re-randomize the channel and hence break the temporal correlation of the channels between $\mathcal{A}$ and the attacked parties, while preserving the reciprocity of the other channels.
Therefore, even if an adversary $\mathcal{A}$ is able to perform successful injection in communication round {\ss}, its channels with the attacked parties change randomly when it attempts injection attacks in round {\ss}+1. On the other hand, the channels between the parties in star $\partial_i$ and their central hub $C_i$ remain reciprocal, i.e., they can still make correct (and identical) measurements from the randomized channel. Hence, by reconfiguring its RA, $C_i$ can prevent further injections from $\mathcal{A}$ without affecting the legitimate parties' ability to make correct channel measurements. Further details on this defense technique are beyond the scope of this paper.
For a detailed introduction to the topic and its applications in different settings, we refer the interested reader to \cite{Aono[05],MehWall[14],YanPan[21],MohAli[21],inbook,Alan[20],PanGer[19],MPDaly[12]}. In this manner, channel state randomization can be used to effectively reduce an MITM attack to the less harmful jamming attack \cite{MitChor[21]}. See \cite{HossHua[21]} for a thorough introduction to jamming and anti-jamming techniques.
\subsubsection{Non-colluding Semi-honest Parties.}
Suppose that some or all parties in $\mathcal{P}$ are semi-honest (also referred to as honest-but-curious), i.e., they follow the protocol correctly but try to infer more information than what is allowed by the protocol transcript. Further suppose that the parties do not collude with each other. In such settings, the only way any party $P_j \notin \partial_i$ can gain additional information about the SSKH PRF $F^{(\partial_i)}_{\textbf{s}_{i}}$ is to use the SSKH PRFs of the stars that it is a member of. For instance, if $P_j \in \mathcal{P}_{\partial_d}, \mathcal{P}_{\partial_j}, \mathcal{P}_{\partial_o}$ and $\mathcal{P}_{\partial_i} \subset \mathcal{P}_{\partial_o} \cup \mathcal{P}_{\partial_j} \cup \mathcal{P}_{\partial_d}$, then because the parties send identical messages to all central hubs they are connected to, it follows that $H(F^{(\partial_i)}_{\textbf{s}_{i}} | P_j) = 0$; this holds trivially because $P_j$ can compute $\textbf{s}_i$. Employing maximally cover-free families eliminates this vulnerability against non-colluding semi-honest parties, because with maximally cover-free families (with member sets denoting the stars), the following can never hold for any integer $\varrho \in [\rho]$:
\[\mathcal{P}_{\partial_i} \subseteq \bigcup_{j \in [\varrho]} \mathcal{P}_{\partial_j}, \text{ where } \forall j \in [\varrho], \text{ it holds that: } \partial_i \neq \partial_j.\]
It follows from \Cref{asymptotic_bound_for_maximally_cover_free} that the maximum number of SSKH PRFs that can be constructed with non-colluding semi-honest parties is at least $Cn$ for some positive real number $C < 1$.
Hence, in order to construct SSKH PRFs that are secure against all the adversaries and models discussed above, the representative/underlying family of sets must be maximally cover-free, at most $(k-1)$-intersecting and $k$-uniform.
\subsection{Runtime and Key Size}
We know that the complexity of a single evaluation of the key-homomorphic PRF from~\cite{Ban[14]} is $\mathrm{\Theta}(|T| \cdot w^\omega \log^2 m)$ ring operations in $\mathbb{Z}_m$, where $\omega \in [2, 2.37286]$ is the exponent of matrix multiplication~\cite{Josh[21]}. Using the fast multiplication algorithm in \cite{HarveyHoeven[21]}, this gives a time complexity of $\mathrm{\Theta}(|T| \cdot w^\omega \cdot m\log^3 m)$.
The time required to complete the setup in our SSKH PRF construction is equal to the time required by our RGPC algorithm to find the optimal hypothesis, which we know from Section~\ref{Time} to be $\mathrm{\Theta}(\ell)$ additions and multiplications. If $B$ is an upper bound on $x_i$ and $y_i$, then the time complexity is $O\left(\ell B\log B\right)$. Once the optimal hypothesis is known, it takes $\mathrm{\Theta}(wm\log^2 m)$ time to generate a deterministic LWLR error for a single input. Hence, after the initial setup, the time complexity of a single function evaluation of our star-specific key-homomorphic PRF remains $\mathrm{\Theta}(|T| \cdot w^\omega \cdot m\log^3 m)$.
Similarly, the key size for our star-specific key-homomorphic PRF family is the same as that of the key-homomorphic PRF family from~\cite{Ban[14]}. Specifically, for security parameter $\L$ and $2^{\L}$ security against known lattice reduction algorithms~\cite{Ajtai[01],Fincke[85],Gama[06],Gama[13],Gama[10],LLL[82],Ngu[10],Vid[08],DanPan[10],DaniPan[10],Poha[81],Schnorr[87],Schnorr[94],Schnorr[95],Nguyen[09],EamFer[21],RicPei[11],MarCar[15],AvrAda[03],GuaJoh[21],SatoMasa[21],TamaStep[20],AleQi[21]}, the key size for our star-specific key-homomorphic PRF family is $\L$.
\subsection{Correctness and Security}
Recall that LWR employs the rounding function $\lfloor \cdot \rceil_p$ to hide all but some of the most significant bits of $\textbf{s} \cdot \textbf{A}$; the rounded-off bits become the deterministic error. Our solution, LWLR, instead uses a special linear regression hypothesis to generate the desired rounded Gaussian errors, which are derived from the (independent) errors occurring in physical layer communications over Gaussian channel(s). For the sake of simplicity, the proofs assume honest parties in the absence of any adversary. For the other supported cases, it is easy to adapt the statements of the results according to the bounds/conditions established in \Cref{Max}.
Observe that the RGPC protocol ensures that all parties in a star $\partial$ receive an identical dataset $\mathcal{D}$, and therefore arrive at the same linear regression hypothesis $h^{(\partial)}$, and the same errors $\textbf{e}^{(\partial)}_{\textbf{b}}$.
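To illustrate this determinism with a toy example (ours; the actual construction uses the special hypothesis and rounded Gaussian residuals described earlier, not plain one-dimensional least squares), two parties holding the same dataset necessarily compute the same hypothesis and the same rounded residuals:
\begin{verbatim}
# Schematic sketch: parties holding the *same* dataset compute the
# same least-squares line, hence identical rounded residuals.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    denom = sum((x - mx) ** 2 for x in xs)  # assumed nonzero
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom
    return my - beta * mx, beta             # (intercept, slope)

def rounded_residuals(xs, ys):
    a, b = fit_line(xs, ys)
    return [round(y - (a + b * x)) for x, y in zip(xs, ys)]
\end{verbatim}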
\begin{theorem}\label{thm1}
The function family defined by \Cref{eq1} is a star-specific key-homomorphic PRF under the decision-LWE assumption.
\end{theorem}
\begin{proof}
We know from \Cref{LWLRthm} that for $\textbf{s}_i \xleftarrow{\$} \mathbb{Z}_m^w$ and a superpolynomial number of samples $\ell$, the LWLR instances generated in \Cref{eq1} are as hard as LWE --- to solve for $\textbf{s}_i$ (and $\textbf{e}_\textbf{b}^{(\partial_i)})$. The randomness of the function family follows directly from the randomness of $\textbf{s}_i$. The deterministic behavior follows from the above observation and the fact that $\textbf{A}_T(x)$ is a deterministic function. Hence, the family of functions defined in \Cref{eq1} is a PRF family.
Define
$$G_{\mathbf{s}}^{(\partial_i)}(x) = \textbf{s} \cdot \mathbf{A}_{T}(x) + \lfloor\boldsymbol{\varepsilon}^{(\partial_i)}_{\textbf{b}}\rceil \bmod m,$$
where $\boldsymbol{\varepsilon}^{(\partial_i)}_{\textbf{b}}$ is the (raw) Gaussian error corresponding to $\mathbf{b}$ for star $\partial_i$, and define $G_{\mathbf{s}}^{(\partial_j)}$ similarly. Since the errors $\boldsymbol{\varepsilon}^{(\partial_i)}_{\textbf{b}}$ and $\boldsymbol{\varepsilon}^{(\partial_j)}_{\textbf{b}}$ are independent Gaussian random variables, each with variance $\sigma^2$,
\begin{align*}
\Pr[G_{\mathbf{s}}^{(\partial_i)}(x)=G_{\mathbf{s}}^{(\partial_j)}(x)]&=\Pr[\lfloor\boldsymbol{\varepsilon}^{(\partial_i)}_{\textbf{b}}\rceil=\lfloor\boldsymbol{\varepsilon}^{(\partial_j)}_{\textbf{b}}\rceil] \\
&\leq \Pr[||\boldsymbol{\varepsilon}^{(\partial_i)}_{\textbf{b}}-\boldsymbol{\varepsilon}^{(\partial_j)}_{\textbf{b}}||_\infty<1] \\
&= \Pr[|Z|<(\sqrt{2}\sigma)^{-1}]^w
\end{align*}
where $Z$ is a standard Gaussian random variable.
Furthermore, since the number of samples is superpolynomial in the security parameter $\L$, by drowning/smudging, the statistical distance between $\boldsymbol{e}^{(\partial_i)}_{\textbf{b}}$ and $\boldsymbol{\varepsilon}^{(\partial_i)}_{\textbf{b}}$ is negligible (similarly for $\boldsymbol{\varepsilon}^{(\partial_j)}_{\textbf{b}}$ and $\boldsymbol{e}^{(\partial_j)}_{\textbf{b}}$). Hence,
\begin{align*}
\Pr[F_{\mathbf{s}}^{(\partial_i)}(x)=F_{\mathbf{s}}^{(\partial_j)}(x)]&=\Pr[G_{\mathbf{s}}^{(\partial_i)}(x)=G_{\mathbf{s}}^{(\partial_j)}(x)]+\eta(\L) \\
&\leq\Pr[|Z|<(\sqrt{2}\sigma)^{-1}]^w+\eta(\L),
\end{align*}
where $\eta(\L)$ is a negligible function in $\L$. By choosing $\delta=\Pr[|Z|<(\sqrt{2}\sigma)^{-1}]$, this function family satisfies \Cref{MainDef}(a).
Finally, by Chebyshev's inequality and the union bound, for any $\tau>0$,
\[F_{\textbf{s}_1}^{(\partial)}(x) + F_{\textbf{s}_2}^{(\partial)}(x) = F_{\textbf{s}_1 + \textbf{s}_2}^{(\partial)}(x) + \textbf{e}' \bmod m,\]
where each entry of $\textbf{e}'$ lies in $[-3\tau\hat{\sigma}, 3\tau\hat{\sigma}]$ with probability at least $1-3/\tau^2$. For example, choosing $\tau=\sqrt{300}$ gives us the bound that the absolute value of each entry is bounded by $\sqrt{2700}\hat{\sigma}$ with probability at least $0.99$.
Therefore, the function family defined by \Cref{eq1} is a family of star-specific key-homomorphic PRFs --- as defined by \Cref{MainDef} --- under the decision-LWE assumption. \qed
\end{proof}
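As a rough numeric illustration of the collision probability appearing in the proof above (our back-of-the-envelope check, not part of the argument), one can evaluate $\Pr[|Z| < (\sqrt{2}\sigma)^{-1}]^w$ via the identity $\Pr[|Z| < t] = \operatorname{erf}(t/\sqrt{2})$:
\begin{verbatim}
import math

def collision_prob(sigma, w):
    t = 1.0 / (math.sqrt(2.0) * sigma)
    per_coord = math.erf(t / math.sqrt(2.0))  # Pr[|Z| < t]
    return per_coord ** w

print(collision_prob(3.0, 256))  # roughly 1e-187: negligible in w
\end{verbatim}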
\section{Conclusion}\label{Sec8}
In this paper, we introduced a novel derandomized variant of the celebrated learning with errors (LWE) problem, called learning with linear regression (LWLR), which derandomizes LWE via deterministic --- yet sufficiently independent --- errors that are generated by using special linear regression models whose training data consists of physical layer communications over Gaussian channels. Prior to our work, learning with rounding and its variant nearby learning with lattice rounding were the only known derandomized variants of the LWE problem, both of which relied on rounding. LWLR relies on the naturally occurring errors in physical layer communications to generate deterministic yet sufficiently independent errors from the desired rounded Gaussian distributions.
We also introduced star-specific key-homomorphic (SSKH) pseudorandom functions (PRFs), which are directly defined by the physical layer communications among the respective sets of parties that construct them. We used LWLR to construct the first SSKH PRF family. In order to quantify the maximum number of SSKH PRFs that can be constructed by sets of overlapping parties, we derived:
\begin{itemize}
\item a formula to compute the mutual information between linear regression models that are generated via overlapping training datasets,
\item bounds on the size of at most $t$-intersecting $k$-uniform families of sets, along with an explicit construction of such set systems,
\item bounds on the size of maximally cover-free at most $t$-intersecting $k$-uniform families of sets.
\end{itemize}
Using these results, we established the maximum number of SSKH PRFs that can be constructed by a given set of parties in the presence of active/passive and internal/external adversaries.
\bibliographystyle{plainurl}
\section{Acknowledgement}
The authors would like to thank Vitaly Feldman and Jalaj Upadhyay for helpful discussions during the preparation of this paper, and Daniel Levy for comments on an earlier draft.
Part of this work was done while HA and AF were interning at Apple.
\section{Proofs of Section~\ref{sec:LB} (Lower bounds)}
\label{sec:proofs-LB}
\subsection{Proof of~\Cref{prop:sign-LB-var}}
\label{sec:proof-LB-signs}
\newcommand{\sigma_{\max}}{\sigma_{\max}}
\newcommand{\sigma_{\min}}{\sigma_{\min}}
We begin with the following lemma which gives
a lower bound for the sign estimation problem when $\sigma_j = \sigma$ for all $j \in [d]$. \citet{AsiFeKoTa21} use a similar result to prove lower bounds for private optimization over $\ell_1$-bounded domains. For completeness, we give a proof in Section~\ref{sec:proof-LB-signs-identity}.
\begin{lemma}
\label{prop:sign-LB-identity}
Let $\mech$ be $(\diffp,\delta)$-DP and $\Ds = (z_1,\dots,z_n) $
where $z_i \in \mc{Z} = \{-\sigma,\sigma \}^d $.
Then
\begin{equation*}
\sup_{\Ds \in \mc{Z}^n }
\E\left[ \sum_{j=1}^d |\bar{z}_j|
\indic {\sign(\mech_j(\Ds)) \neq \sign(\bar{z}_j)} \right]
\ge \min \left( \sigma d , \frac{\sigma d^{3/2} }{n \diffp \log d} \right).
\end{equation*}
\end{lemma}
We are now ready to complete the proof of~\Cref{prop:sign-LB-var}
using bucketing-based techniques.
First, we assume without loss of generality that
$\sigma_j \le 1$ for all $1 \le j \le d$ (otherwise
we can divide by $\max_{1 \le j \le d} \sigma_j$).
Now, we define
buckets of coordinates $B_0,\dots,B_K$ such that
\begin{equation*}
B_i = \{j : 2^{-i-1} \le \sigma_j \le 2^{-i} \}.
\end{equation*}
For $i=K$, we set $B_K = \{j : \sigma_j \le 2^{-K} \}$.
We let $\sigma_{\max}(B_i) = \max_{j \in B_i} \sigma_j$ denote the maximal
value of $\sigma_j$ inside $B_i$. Similarly,
we define $\sigma_{\min}(B_i) = \min_{j \in B_i} \sigma_j$.
Focusing now on the $i$'th bucket, since
$\sigma_j \ge \sigma_{\min}(B_i) $ for all $j \in B_i$, ~\Cref{prop:sign-LB-identity}
now implies (as $d \log^2 d \le (n\diffp)^2 $) the lower bound
\begin{equation*}
\sup_{\Ds \in \mc{Z}^n }
\E\left[ \sum_{j \in B_i} |\bar{z}_j|
\indic {\sign(\mech_j(\Ds)) \neq \sign(\bar{z}_j)} \right]
\ge \frac{\sigma_{\min}(B_i) |B_i|^{3/2} }{n \diffp \log d} .
\end{equation*}
Therefore this implies that
\begin{equation*}
\sup_{\Ds \in \mc{Z}^n }
\E\left[ \sum_{j =1}^d |\bar{z}_j|
\indic {\sign(\mech_j(\Ds)) \neq \sign(\bar{z}_j)} \right]
\ge \max_{0 \le i \le K} \frac{\sigma_{\min}(B_i) |B_i|^{3/2} }{n \diffp \log d} .
\end{equation*}
To finish the proof of the theorem, it is now enough to prove that
\begin{equation*}
\sum_{j=1}^d \sigma_j^{2/3} \le
O(1) ~ \log d \max_{0 \le i \le K} {\sigma_{\min}(B_i)^{2/3} |B_i|}.
\end{equation*}
We now have
\begin{align*}
\sum_{j=1}^{d} \sigma_j^{2/3}
& \le \sum_{i=0}^K |B_i| \sigma_{\max}(B_i)^{2/3} \\
& \le K \max_{0 \le i \le K-1} |B_i| \sigma_{\max}(B_i)^{2/3} \\
& \le 4 K \max_{0 \le i \le K-1} |B_i| \sigma_{\min}(B_i)^{2/3},
\end{align*}
where the second inequality follows since the maximum cannot
be achieved for $i=K$ given our choice of $K = 10 \log d$,
and the last inequality follows since $\sigma_{\max}(B_i) \le 2 \sigma_{\min}(B_i)$
for all $i \le K - 1$.
This proves the claim.
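The bucketing inequality at the heart of this argument is easy to sanity-check numerically; the following small script (ours, with arbitrary test values for the $\sigma_j$) compares $\sum_j \sigma_j^{2/3}$ against $\log d \cdot \max_i |B_i|\, \sigma_{\min}(B_i)^{2/3}$:
\begin{verbatim}
import math, random

d = 1000
K = int(10 * math.log(d))
sigma = [random.uniform(1e-4, 1.0) for _ in range(d)]

buckets = {}
for s in sigma:
    i = min(int(-math.log2(s)), K)  # 2^{-i-1} <= s <= 2^{-i}, capped at K
    buckets.setdefault(i, []).append(s)

lhs = sum(s ** (2 / 3) for s in sigma)
rhs = max(len(B) * min(B) ** (2 / 3) for B in buckets.values())
print(lhs, math.log(d) * rhs)  # lhs <= O(1) * log(d) * rhs
\end{verbatim}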
\subsection{Proof of~\Cref{prop:sign-LB-identity}}
\label{sec:proof-LB-signs-identity}
Instead of proving lower bounds on the error of private mechanisms,
it is more convenient for this result to prove lower bounds on the
sample complexity required to achieve a certain error.
Given a mechanism $\mech$ and data $\Ds \in \mc{Z}^n$,
define the error of the mechanism to be:
\newcommand{\mathsf{Err}}{\mathsf{Err}}
\begin{equation*}
\mathsf{Err}(\mech,\Ds) =
\E\left[ \sum_{j=1}^d |\bar{z}_j|
\indic {\sign(\mech_j(\Ds)) \neq \sign(\bar{z}_j)} \right] .
\end{equation*}
The error of a mechanism for datasets of size $n$ is
$\mathsf{Err}(\mech, n) = \sup_{\Ds \in \mc{Z}^n} \mathsf{Err}(\mech,\Ds)$.
We let $n\opt(\alpha,\diffp)$ denote the minimal $n$
such that there is an $(\diffp,\delta)$-DP (with $\delta = n^{-\omega(1)}$) mechanism $\mech$
such that $\mathsf{Err}(\mech,n\opt(\alpha,\diffp)) \le \alpha$.
We prove the following lower bound on the sample
complexity.
\begin{proposition}
\label{prop:sample-complexity-LB}
If $\linf{z} \le 1$ then
\begin{equation*}
n\opt(\alpha,\diffp)
\ge \Omega(1) \cdot \frac{d^{3/2} }{\alpha \diffp \log d} .
\end{equation*}
\end{proposition}
To prove this result, we first state the following
lower bound for constant $\alpha$ and $\diffp$
which follows from Theorem 3.2 in~\cite{TalwarThZh15}.
\begin{lemma}[\citet{TalwarThZh15}, Theorem 3.2]
\label{lemma:sample-complexity-LB-low-accuracy}
Under the above setting,
\begin{equation*}
n\opt(\alpha = d/4,\diffp = 0.1)
\ge \Omega(1) \cdot \frac{\sqrt{d}}{\log d} .
\end{equation*}
\end{lemma}
We now prove a lower bound on the sample
complexity for small values of $\alpha$ and $\diffp$
which implies Proposition~\ref{prop:sample-complexity-LB}.
\begin{lemma}
\label{lemma:low-to-high-accuracy}
Let $\diffp_0 \le 0.1$.
For $\alpha \le \alpha_0/2$ and $\diffp \le \diffp_0/2$,
\begin{equation*}
n\opt(\alpha,\diffp )
\ge \frac{\alpha_0 \diffp_0}{\alpha \diffp}
n\opt(\alpha_0, \diffp_0) .
\end{equation*}
\end{lemma}
\begin{proof}
Assume there exists an $(\diffp,\delta)$-DP mechanism
$\mech$ such that $\mathsf{Err}(\mech,n) \le \alpha$. We
now show that there is $ \mech'$
that is $(\diffp_0, \frac{2 \diffp_0}{\diffp} \delta)$-DP
with $ n' = \Theta(\frac{\alpha \diffp}{\alpha_0 \diffp_0} n )$ such that
$\mathsf{Err}( \mech', n') \le \alpha_0 $. This proves the claim.
Let us now show how to define $ \mech'$ given $\mech$.
Let $k = \floor{\log (1+\diffp_0)/\diffp}$. For $\Ds' \in \mc{Z}^{n'}$,
we define $\Ds$ to have $k$ copies of $\Ds'$ and
$(n - kn')/2$ users which have $z_i = (\sigma,\dots,\sigma)$ and $(n - kn')/2$ users which have $z_i = (-\sigma,\dots,-\sigma)$.
Then we simply define $\mech'(\Ds') = \mech(\Ds)$.
Notice that now we have
\begin{equation*}
\bar{z} = \frac{ k n'}{n} \bar{z'} .
\end{equation*}
Therefore for a given $\Ds'$ we have that:
\begin{equation*}
\mathsf{Err}(\mech', \Ds') = \frac{n}{k n'} \mathsf{Err}(\mech,\Ds) \le \frac{n \alpha}{k n'}
\end{equation*}
Thus if $n' \ge \frac{2n \alpha}{k \alpha_0} $ then
\begin{equation*}
\mathsf{Err}(\mech', \Ds') \le \alpha_0.
\end{equation*}
Thus it remains to argue for the privacy of $\mech'$.
By group privacy,
$\mech'$ is $(k\diffp, \frac{e^{k \diffp} - 1}{e^\diffp- 1} \delta)$-DP,
hence our choice of $k$ implies that $ k \diffp \le \diffp_0$ and
$\frac{e^{k \diffp} - 1}{e^\diffp- 1} \delta \le \frac{2 \diffp_0}{\diffp} \delta$.
\end{proof}
\subsection{Proof of~\Cref{thm:LB-opt-l2-var}}
\label{sec:proof-LB-l2}
We assume without loss of generality that
$\sigma_j \le 1$ for all $1 \le j \le d$ (otherwise
we can divide by $\max_{1 \le j \le d} \sigma_j$).
We follow the bucketing-based technique we had in the proof
of~\Cref{prop:sign-LB-var}.
We define
buckets of coordinates $B_0,\dots,B_K$ such that
\begin{equation*}
B_i = \{j : 2^{-i-1} \le \sigma_j \le 2^{-i} \}.
\end{equation*}
For $i=K$, we set $B_K = \{j : \sigma_j \le 2^{-K} \}$.
We let $\sigma_{\max}(B_i) = \max_{j \in B_i} \sigma_j$ denote the maximal
value of $\sigma_j$ inside $B_i$. Similarly,
we define $\sigma_{\min}(B_i) = \min_{j \in B_i} \sigma_j$.
Focusing now on the $i$'th bucket, since
$\sigma_j \ge \sigma_{\min}(B_i) $ for all $j \in B_i$, ~\Cref{prop:LB-opt-l2-identity}
now implies the lower bound
\begin{equation*}
\sup_{\Ds \in \mc{Z}^n }
\E \left[ f(\mech(\Ds);\Ds) - f(x\opt_\Ds;\Ds) \right]
\ge \min \left( \sigma_{\min}(B_i) \sqrt{|B_i|} , \frac{|B_i| \sigma_{\min}(B_i)}{n \diffp} \right).
\end{equation*}
Since $ d \le (n \diffp)^2$,
taking the maximum over buckets, we get that the error of
any mechanism is lower bounded by:
\begin{equation*}
\sup_{\Ds \in \mc{Z}^n }
\E \left[ f(\mech(\Ds);\Ds) - f(x\opt_\Ds;\Ds) \right]
\ge \max_{0 \le i \le K}
\frac{|B_i| \sigma_{\min}(B_i)}{n \diffp} .
\end{equation*}
To finish the proof, we only need to show now that
\begin{equation*}
\frac{\sum_{j=1}^d \sigma_j }{\log d}
\le O(1) \max_{0 \le i \le K} {|B_i| \sigma_{\min}(B_i)}.
\end{equation*}
Indeed, we have that
\begin{align*}
\sum_{j=1}^d \sigma_j
& \le \sum_{i=0}^K |B_i| \sigma_{\max}(B_i) \\
& \le K \max_{0 \le i \le K-1} |B_i| \sigma_{\max}(B_i) \\
& \le 2 K \max_{0 \le i \le K-1} |B_i| \sigma_{\min}(B_i),
\end{align*}
where the second inequality follows since the maximum cannot
be achieved for $i=K$ given our choice of $K = 10 \log d$,
and the last inequality follows since $\sigma_{\max}(B_i) \le 2 \sigma_{\min}(B_i)$
for all $i \le K - 1$. The claim follows.
\section{Proof of Theorem~\ref{thm:unknown-cov}}
\label{sec:proof-unknown-cov}
We begin with the following lemma, which upper bounds the bias from truncation.
\begin{lemma}
\label{lemma:trunc-bias}
Let $Z$ be a random vector
satisfying Definition~\ref{definition:bouned-moments-ratio}. Let $ \sigma_j^2 = \E[z_j^2]$ and $\Delta \ge 4 r \sigma_j \log r$. Then we have
\begin{equation*}
|\E[\min(z_j^2,\Delta^2)] - \E[z_j^2] | \le \sigma_j^2/8 .
\end{equation*}
\end{lemma}
\begin{proof}
Let $\sigma_j^2 = \E[z_j^2]$.
To upper bound the bias, we need to control the upper tail of $z_j^2$. Since $z_j$ is $r^2\sigma_j^2$-sub-Gaussian, we have
\begin{equation*}
P(z_j^2 \ge t r^2 \sigma_j^2) \le 2 e^{-t}.
\end{equation*}
Thus, if $Y = |\min(z_j^2,\Delta^2) - z_j^2|$ then
$P(Y \ge t r^2 \sigma_j^2) \le 2 e^{-t}$ hence
\begin{align*}
\E[Y]
& = \int_{0}^\infty P(Y \ge t) dt \\
& = \int_{0}^\infty P(z_j^2 \ge \Delta^2 + t) dt \\
& \le \int_{0}^\infty 2 e^{-(\Delta^2 + t)/r^2 \sigma_j^2} dt \\
& \le 2 r^2 \sigma_j^2 e^{-\Delta^2/r^2 \sigma_j^2}
\le \sigma_j^2/8,
\end{align*}
where the last inequality follows since $\Delta \ge 4 r \sigma_j \log r$.
\end{proof}
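Lemma~\ref{lemma:trunc-bias} is also easy to probe empirically; the sketch below (ours, with Gaussian $z_j$ and arbitrary parameters) shows the truncation bias staying well below $\sigma_j^2/8$:
\begin{verbatim}
import math, random

r, sigma, n = 2.0, 1.0, 100_000
delta = 4 * r * sigma * math.log(r) + 1.0  # comfortably above threshold
zs = [random.gauss(0, sigma) for _ in range(n)]
mean_true = sum(z * z for z in zs) / n
mean_trunc = sum(min(z * z, delta * delta) for z in zs) / n
print(abs(mean_trunc - mean_true), sigma ** 2 / 8)  # bias << sigma^2/8
\end{verbatim}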
The following lemma demonstrates that the random variable $Y_i = \min(z_{i,j}^2,\Delta^2)$ quickly concentrates around its mean.
\begin{lemma}
\label{lemma:sub-exp-conc}
Let $Z$ be a random vector satisfying
Definition~\ref{definition:bouned-moments-ratio}. Then with probability at
least $1- \beta$,
\begin{equation*}
\left|\frac{1}{n} \sum_{i=1}^n \min(z_{i,j}^2,\Delta^2) - \E[\min(z_{j}^2,\Delta^2)]\right|
\le \frac{2 r^2 \sigma_j^2 \sqrt{\log(2/\beta)}}{\sqrt{n}}.
\end{equation*}
\end{lemma}
\begin{proof}
Let $Y_i = \min(z_{i,j}^2,\Delta^2)$. Since $z_j$ is
$r^2\sigma_j^2$-sub-Gaussian, we get that $z_j^2$ is
$r^4\sigma_j^4$-sub-exponential, meaning that $\E[(z_j^2)^k]^{1/k} \le
\bigO(k) r^2 \sigma_j^2$ for all $k \ge 1$. Thus $Y_i$ is also
$r^4\sigma_j^4$-sub-exponential, and using Bernstein's
inequality~\citep[Theorem 2.8.1]{Vershynin19}, we obtain
\begin{equation*}
P\left(\left|\frac{1}{n} \sum_{i=1}^n Y_i - \E[Y_i]\right| \ge t \right)
\le 2 \exp \left(- \min \left\{ \frac{n t^2}{2r^4 \sigma_j^4},
\frac{n t}{r^2 \sigma_j^2}\right\} \right) .
\end{equation*}
Setting $t = r^2 \sigma_j^2 \frac{2 \sqrt{\log(2/\beta)}}{\sqrt{n}} $
yields the result.
\end{proof}
Given Lemmas~\ref{lemma:trunc-bias} and~\ref{lemma:sub-exp-conc},
we are now ready to finish the proof of~\Cref{thm:unknown-cov}.
\paragraph{Proof of Theorem~\ref{thm:unknown-cov}}
First, privacy follows immediately, as each iteration $t$ is
$(\diffp/T,\delta/T)$-DP (using standard properties of the Gaussian
mechanism~\cite{DworkRo14}), so basic composition implies that the final
output is $(\diffp,\delta)$-DP. We now proceed to prove the claim about
utility. Let $\rho_t^2$ be the truncation value at iterate $t$, i.e.,
$\rho_t = 4 r \log r/2^{t-1}$. First, note that~\Cref{lemma:sub-exp-conc}
implies that with probability $1-\beta/2$ for every $j \in [d]$
\begin{equation*}
\left| \frac{1}{n} \sum_{i=1}^n \min(z_{i,j}^2, \rho_t^2) - \E[\min(z_{j}^2,\rho_t^2)] \right|
\le \frac{2 r^2 \sigma_j^2 \sqrt{\log(8 d/\beta)}}{\sqrt{n}} \le \sigma_j^2 / 10,
\end{equation*}
and similar arguments show that
\begin{equation*}
\left| \sigma_j^2 - \frac{1}{n} \sum_{i=1}^n z_{i,j}^2\right|
\le \frac{2 r^2 \sigma_j^2 \sqrt{\log(8 d/\beta)}}{\sqrt{n}} \le \sigma_j^2 / 10,
\end{equation*}
where the last inequality follows since $n \ge 400 r^4 \log(8d/\beta)$.
Moreover, for $\sigma_j$ such that $\rho_t \ge 4 r \sigma_j \log r$,
\Cref{lemma:trunc-bias} implies that
\begin{equation*}
| \E[\min(z_{j}^2,\rho_t^2)] - \sigma_j^2 | \le \sigma_j^2/8.
\end{equation*}
Let us now prove that if $\sigma_j = 2^{-k}$, then its value is set no later than
iterate $t = k$. Indeed, at iterate $t=k$ we have $\rho_t = 4 r
2^{-k} \log r \ge 4 r \sigma_j \log r$; hence, using the triangle
inequality and standard concentration results for Gaussian distributions,
we have with probability $1- \beta/2$ that
\begin{equation*}
| \hat \sigma_{k,j}^2 - \sigma_j^2|
\le \sigma_j^2 / 5 + \frac{16 r^2 T \sqrt{d} \log^2 r \log(T/\delta) \log(4 d/\beta) }{ 2^{2k} n \diffp}
\le \sigma_j^2/4,
\end{equation*}
where the last inequality follows since $n \diffp \ge 1000 r^2 T \sqrt{d}
\log^2 r \log(T/\delta) \log(4 d/\beta)$. Thus, in this case we get that
$\hat \sigma_{k,j}^2 \ge \sigma_j^2/2 \ge 2^{-2k-1}$, hence the value of
coordinate $j$ is set no later than iterate $k$, and therefore $\hat \sigma_j \ge
\sigma_j / 2$.
On the other hand, we now assume that $\sigma_j = 2^{-k}$ and show that the
value of $\hat \sigma_j$ cannot be set before iterate $t = k-3$, and hence
$\hat \sigma_j \le 2^{-k+3} \le 8 \sigma_j$. The above arguments show that
at iterate $t$ we have $\hat \sigma_{t,j}^2 \le 3/2 \sigma_j^2 + \frac{1}{10
\cdot 2^{2k}} \le 2^{-2k +1} + \frac{1}{10 \cdot 2^{2k}} \le 2^{-2k+2}$
hence the first part of the claim follows.
To prove the second part, first note that $z_j$ is $r
\sigma_j$-sub-Gaussian, hence using~\Cref{theorem:private-adagrad}, it is
enough to show that $\lipobj_{2p}(\hat C) \le O(\lipobj_{2p}(C))$ and that
$\sum_{j=1}^d \hat C_j^{-1/2} \le O(1) \cdot \sum_{j=1}^d C_j^{-1/2}$ where
$C = (r \sigma_j)^{-4/3}$ is the optimal choice of $C$ as in the
bound~\eqref{eqn:pagan-subgaussian-bound}. The first condition immediately
follows from the definition of $\lipobj_{2p}$ since $\hat C_j \le C_j$ for
all $j \in [d]$. The latter condition follows immediately since $\frac{1}{2}
\max (\sigma_j,1/d^2) \le \hat \sigma_j $, implying
\begin{equation*}
\sum_{j=1}^d \hat C_j^{-1/2}
\le O(r^{-2/3}) \sum_{j=1}^d \hat \sigma_j^{-2/3}
\le O(r^{-2/3}) \sum_{j=1}^d \sigma_j^{-2/3} + 1/d
\le O(r^{-2/3}) \sum_{j=1}^d \sigma_j^{-2/3}.
\end{equation*}
\section{Proofs of Section \ref{sec:algs}} \label{sec:proofs-alg}
\subsection{Proof of Lemma \ref{lemma:privacy}}\label{sec:proof-lemma:privacy}
The proof mainly follows from Theorem 1 in \cite{AbadiChGoMcMiTaZh16} where the authors provide a tight privacy bound for mini-batch SGD with bounded gradient using the Moments Accountant technique. Here we do not have the bounded gradient assumption. However, recall that we have
\begin{equation*}
\hg^k = \frac{1}{\batch} \sum_{i=1}^\batch \tilde{g}^{k,i} + \frac{\sqrt{\log(1/\delta)}}{\batch \diffp} \noise^k, \quad \tilde{g}^{k,i} = \pi_{A_k}(g^{k,i}),
\end{equation*}
where $\norm{\tilde{g}^{k,i}}_{A_k} \leq 1$ and $\noise^k \simiid \normal(0, A_k^{-1})$. Note that for any Borel-measurable set $O \subset \reals^d$, $A_k^{1/2} O$ is also Borel-measurable, and furthermore, we have
\begin{align*}
\P\left(\hg^k \in O \right) &= \P\left(A_k^{1/2} \hg^k \in A_k^{1/2} O \right) = \P\left(\frac{1}{\batch} \sum_{i=1}^\batch A_k^{1/2} \tilde{g}^{k,i} + \frac{\sqrt{\log(1/\delta)}}{\batch \diffp} A_k^{1/2} \noise^k \in A_k^{1/2} O \right),
\end{align*}
where, now, $\norm{A_k^{1/2} \tilde{g}^{k,i}}_2 \leq 1$ and $A_k^{1/2} \noise^k \simiid \normal(0, I_d)$ and we can use Theorem 1 in \cite{AbadiChGoMcMiTaZh16}.
\subsection{The proof deferred from Example \ref{example-sub-Gaussian}} \label{sec:proof-example-sub-Gaussian}
Note that $\nabla F(x; z) = \nabla g(x) + Z$, and hence we could take $G(Z,C) = \sup_{x \in \xdomain} \|\nabla g(x)\|_C + \|Z\|_C $. As a result, by Minkowski inequality, we have
\begin{align}
\E \left[ G(Z,C)^p \right ] ^{1/p}
& \le \sup_{x \in \xdomain} \|\nabla g(x)\|_C + \E \left[ \|Z\|_C^{p} \right ] ^{1/p}
\le \mu + \E \left[ \|Z\|_C^{p} \right ] ^{1/p}. \label{eqn:example_1}
\end{align}
Now note that $C^{1/2} Z$ is $(C_{11}\sigma_1^2, \ldots, C_{dd}\sigma_d^2)$ sub-Gaussian.
We also know that if $X$ is $\sigma^2$ sub-Gaussian, then
$\E[|X|^p]^{1/p} \le O(\sigma \sqrt{p})$, which implies the desired result.
\subsection{Intermediate Results}
Before discussing the proofs of Theorems \ref{theorem:convergence-SGD} and \ref{theorem:private-adagrad}, we need to state a few intermediate results which will be used in our analysis.
First, recall the definition of $\bias_{\norm{\cdot}}(\tilde g^k)$ from Section \ref{sec:Biased-SGD-Adagrad}:
\begin{equation*}
\bias_{\norm{\cdot}}(\tilde g^k) = \E_{\mathcal{D}_k}\left[ \norm{\tilde g^k - g^k} \right]
\end{equation*}
Here, we first bound the bias term. To do so, we use the following lemma:
\begin{lemma}[Lemma 3,~\cite{BarberDu14a}]
\label{lemma:projection-bias}
Consider the ellipsoid projection operator $\pi_D$ with $D = C/B^2$. Then, for any random vector $X$ with $\E[\| X \|_C^p]^{1/p} \le \lipobj$, we have
\begin{equation*}
\E_X[\norm{\pi_D(X) - X}_C] \leq \frac{\lipobj^p}{(p - 1) B^{p-1}}.
\end{equation*}
\end{lemma}
We will find this lemma useful in our proofs. Another useful lemma is the following:
\begin{lemma}\label{lemma:sum_l2}
Let $a_1, a_2, \ldots$ be an arbitrary sequence in $\R$. Let
$a_{1:k} = (a_1, \ldots, a_k) \in \R^k$. Then
\begin{equation*}
\sum_{k = 1}^n \frac{a_k^2}{\ltwo{a_{1:k}}} \le
2 \ltwo{a_{1:n}}.
\end{equation*}
\end{lemma}
\begin{proof}
We proceed by induction. The base case that $n = 1$ is immediate.
Now, let us assume the result holds through index $n - 1$, and we wish to
prove it for index $n$. The concavity of $\sqrt{\cdot}$ guarantees
that $\sqrt{b + a} \le \sqrt{b} + \frac{1}{2 \sqrt{b}} a$, and so
\begin{align*}
\sum_{k = 1}^n \frac{a_k^2}{\ltwo{a_{1:k}}}
& = \sum_{k = 1}^{n - 1} \frac{a_k^2}{\ltwo{a_{1:k}}}
+ \frac{a_n^2}{\ltwo{a_{1:n}}} \\
& \leq 2 \ltwo{a_{1:n-1}}
+ \frac{a_n^2}{\ltwo{a_{1:n}}}
= 2 \sqrt{\ltwo{a_{1:n}}^2 - a_n^2}
+ \frac{a_n^2}{\ltwo{a_{1:n}}} \\
& \leq
2 \ltwo{a_{1:n}},
\end{align*}
where the first inequality follows from the inductive hypothesis and the second one uses the concavity of $\sqrt{\cdot}$.
\end{proof}
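Lemma~\ref{lemma:sum_l2} also admits a quick numeric sanity check (ours; the first summand equals $|a_1|$, which is almost surely nonzero for Gaussian draws, so no division by zero occurs):
\begin{verbatim}
import math, random

a = [random.gauss(0, 1) for _ in range(1000)]
prefix, total = 0.0, 0.0
for ak in a:
    prefix += ak * ak                     # ||a_{1:k}||_2^2
    total += ak * ak / math.sqrt(prefix)  # a_k^2 / ||a_{1:k}||_2
assert total <= 2 * math.sqrt(prefix)     # <= 2 ||a_{1:n}||_2
\end{verbatim}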
\subsection{Proof of Theorem \ref{theorem:convergence-SGD}}
\label{proof-theorem:convergence-SGD}
We first state a more general version of the theorem here:
\begin{theorem}
\label{theorem:convergence-SGD(general)}
Let $\Ds$ be a dataset with $n$ points sampled from distribution $P$, and let $C$ be a diagonal positive definite matrix. Consider running Algorithm \ref{Algorithm1} with $T=cn^2/b^2$ and $A_k = C/B^2$, where $B > 0$ and $c$ is given by Lemma \ref{lemma:privacy}. Then, with probability $1-1/n$, we have
\begin{align*}
\E [f(\wb{x}^T;\Ds) - \min_{x \in \xdomain} f(x;\Ds)]
& \leq \bigO(1) \left( \frac{\diam_2(\xdomain)}{T} \sqrt{\sum_{k=1}^T \E [\ltwos{g^k}^2] } \right. \\
& \left. \quad + \frac{\diam_2(\xdomain) B \sqrt{\tr(C^{-1})\log(1/\delta)} }{n \diffp}
+ \frac{{ \diam_{\norm{\cdot}_{C^{-1}}}(\xdomain) } ~ (2\lipobj_{2p}(C))^p}{(p - 1) B^{p-1}} \right),
\end{align*}
where the expectation is taken over the internal randomness of the algorithm.
\end{theorem}
\begin{proof}
Let $x\opt \in \argmin _{x \in \xdomain} f(x;\Ds)$. Also, for simplicity, we suppress the dependence of $f$ on $\Ds$ throughout the proof.
First, by Lemma \ref{lemma:empirical_Lipschitz}, we know that with probability at least $1-1/n$, we have
\begin{equation*}
\hat{\lipobj}_p(\Ds;C) \leq 2 \lipobj_{2p}(C).
\end{equation*}
We condition on the event that this bound holds.
Now, note that by Theorem \ref{theorem:biased-sgd} we have
\begin{equation}\label{eqn:SGD_initial_bound}
\E[f(\wb{x}^T) - f(x\opt)]
\le \frac{\diam_2(\xdomain)^2}{2 T \stepsize_{T-1}}
+ \frac{1}{2T} \sum_{k = 0}^{T-1} \E[\stepsize_k \ltwo{\hg^k}^2]
+ \frac{\diam_{\norm{\cdot}_{C^{-1}}}(\xdomain)}{T} \sum_{k = 0}^{T-1} \bias_{\norm{\cdot}_C}(\tilde g^k).
\end{equation}
Using Lemma \ref{lemma:projection-bias}, we immediately obtain the following bound
\begin{equation}\label{eqn:SGD_bias_bound}
\bias_{\norm{\cdot}_{C}}(\tilde g^k)
= \E \left[ \norm{\tilde g^k - g^k}_{C} \right]
\le \frac{\hat{\lipobj}_p(\Ds;C)^p}{(p-1) B^{p-1}}
\le \frac{(2 \lipobj_{2p}(C))^p}{(p-1) B^{p-1}}
\end{equation}
Plugging \eqref{eqn:SGD_bias_bound} into \eqref{eqn:SGD_initial_bound}, we obtain
\begin{equation}\label{eqn:SGD_2_bound}
\E[f(\wb{x}^T) - f(x\opt)]
\le \frac{\diam_2(\xdomain)^2}{2 T \stepsize_{T-1}}
+ \frac{1}{2T} \sum_{k = 0}^{T-1} \E[\stepsize_k \ltwo{\hg^k}^2]
+ \frac{{ \diam_{\norm{\cdot}_{C^{-1}}}(\xdomain) } ~ (2\lipobj_{2p}(C))^p}{(p - 1)B^{p-1}}.
\end{equation}
Next, we substitute the value of $\alpha_k$ and use Lemma \ref{lemma:sum_l2} to obtain
\begin{equation*}
\sum_{k = 0}^{T-1} \E[\stepsize_k \ltwo{\hg^k}^2] \leq 2 \diam_2(\xdomain) \sqrt{\sum_{k=1}^T \E [\ltwos{\hg^k}^2] },
\end{equation*}
and by replacing it in \eqref{eqn:SGD_2_bound}, we obtain
\begin{equation}\label{eqn:SGD_3_bound}
\E[f(\wb{x}^T) - f(x\opt)]
\le \frac{3\diam_2(\xdomain)}{2T} \sqrt{\sum_{k=1}^T \E [\ltwos{\hg^k}^2] }
+ \frac{{ \diam_{\norm{\cdot}_{C^{-1}}}(\xdomain) } ~ (2\lipobj_{2p}(C))^p}{(p - 1)B^{p-1}}.
\end{equation}
Finally, note that
\begin{align}
\sqrt{\sum_{k=1}^T \E [\ltwos{\hg^k}^2] }
& = \sqrt{\sum_{k=1}^T \E [\ltwos{\tg^k}^2] + \frac{\log(1/\delta)}{\batch^2 \epsilon^2} \sum_{k=0}^{T-1} \tr(A_k^{-1})} \nonumber \\
& = \sqrt{\sum_{k=1}^T \E [\ltwos{\tg^k}^2] + T \frac{B^2 \log(1/\delta) \tr(C^{-1})}{\batch^2 \epsilon^2}} \nonumber \\
& \leq \sqrt{2} \left ( \sqrt{\sum_{k=1}^T \E [\ltwos{\tg^k}^2] }
+ \frac{B \sqrt{\log(1/\delta) \tr(C^{-1})}}{\batch \epsilon} \sqrt{T} \right), \label{eqn:SGD_4_bound}
\end{align}
where the last inequality follows from the fact that $\sqrt{x+y} \leq \sqrt{2} \left ( \sqrt{x} + \sqrt{y} \right)$ for nonnegative real numbers $x$ and $y$. Plugging \eqref{eqn:SGD_4_bound} into \eqref{eqn:SGD_3_bound} completes the proof.
\end{proof}
\subsection{Proof of Theorem \ref{theorem:private-adagrad}} \label{proof-theorem:private-adagrad}
We first state the more general version of theorem:
\begin{theorem}\label{theorem:private-adagrad(general)}
Let $\Ds$ be a dataset with $n$ points sampled from distribution $P$, and let $C$ be a diagonal positive definite matrix. Consider running Algorithm \ref{Algorithm2} with $T=cn^2/b^2$ and $A_k = C/B^2$, where $B > 0$ and $c$ is given by Lemma \ref{lemma:privacy}. Then,
with probability $1-1/n$, we have
\begin{align*}
\E [f(\wb{x}^T;\Ds) &- \min_{x \in \xdomain} f(x;\Ds)]
\leq \bigO(1) \left ( \frac{ \diam_\infty(\xdomain)}{T} \sum_{j=1}^d
\E \left [ \sqrt{\sum_{k=1}^T (g^{k}_j)^2} \right ] \right. \\
& \left. \quad + \frac{\diam_\infty(\xdomain) B \sqrt{\log(1/\delta)} (\sum_{j=1}^d C_{jj}^{-\half})}{n \diffp }
+ \frac{ \diam_{\norm{\cdot}_{C^{-1}}}(\xdomain) (2\lipobj_{2p}(C))^p}{(p-1) B^{p-1}} \right),
\end{align*}
where the expectation is taken over the internal randomness of the algorithm.
\end{theorem}
\begin{proof}
Similar to the proof of Theorem \ref{theorem:convergence-SGD}, we choose $x\opt \in \argmin _{x \in \xdomain} f(x;\Ds)$. We suppress the dependence of $f$ on $\Ds$ throughout this proof as well.
Again, we restrict to the event that the bound
\begin{equation*}
\hat{\lipobj}_p(\Ds;C) \leq 2 \lipobj_{2p}(C)
\end{equation*}
holds, which we know occurs with probability at least $1-1/n$.
Using Theorem \ref{theorem:biased-adagrad}, we have
\begin{align*}
\E [f(\wb{x}^T) - f(x\opt)]
\leq \frac{\diam_\infty(\xdomain)}{T} \sum_{j=1}^d \E \left [ \sqrt{\sum_{k=0}^{T-1} (\hg^{k}_j)^2} \right ]
+ \frac{\diam_{\norm{\cdot}_{C^{-1}}}(\xdomain)}{T} \sum_{k = 0}^{T-1} \bias_{\norm{\cdot}_C}(\tilde g^k).
\end{align*}
Similar to the proof of Theorem \ref{theorem:convergence-SGD}, and by using Lemma \ref{lemma:projection-bias}, we could bound the second term with
\begin{equation*}
\frac{ \diam_{\norm{\cdot}_{C^{-1}}}(\xdomain) (2\lipobj_{2p}(C))^p}{(p-1) B^{p-1}}.
\end{equation*}
Now, it just suffices to bound the first term. Note that
\begin{align}
\sum_{j=1}^d \E \left [ \sqrt{\sum_{k=1}^T (\hg^{k}_j)^2} \right ]
& = \sum_{j=1}^d \E \left [ \sqrt{\sum_{k=1}^T (\tg^{k}_j + \noise^k_j )^2} \right ] \nonumber \\
& \leq \sum_{j=1}^d \E \left [ \sqrt{\sum_{k=1}^T 2 \left( (\tg^{k}_j)^2 + (\noise^k_j )^2 \right )} \right ] \nonumber \\
& \leq 2 \sum_{j=1}^d \left ( \E \left [ \sqrt{\sum_{k=1}^T (\tg^{k}_j)^2} \right ] + \E \left [\sqrt{\sum_{k=1}^T (\noise^k_j )^2} \right ] \right ) \label{eqn:AdaGrad-regret-dpnoise1} \\
& \leq 2 \sum_{j=1}^d \left ( \E \left [ \sqrt{\sum_{k=1}^T (\tg^{k}_j)^2} \right ] + \sqrt{\E \left [\sum_{k=1}^T (\noise^k_j )^2 \right ]} \right ) \label{eqn:AdaGrad-regret-dpnoise2} \\
& \leq 2 \sum_{j=1}^d \E \left [ \sqrt{\sum_{k=1}^T (\tg^{k}_j)^2} \right ]
+ 2 B \sqrt{T \log(1/\delta)} \frac{\sum_{j=1}^d C_{jj}^{-1/2}}{\batch \epsilon},
\label{eqn:AdaGrad-regret-dpnoise3}
\end{align}
where \eqref{eqn:AdaGrad-regret-dpnoise1} is obtained by using $\sqrt{x+y} \leq \sqrt{2} \left ( \sqrt{x} + \sqrt{y} \right)$ with $x= \sum_{k=1}^T (\tg^{k}_j)^2$ and $y=\sum_{k=1}^T (\noise^k_j )^2$, and \eqref{eqn:AdaGrad-regret-dpnoise2} follows from $\E \left [ X \right ] \leq \sqrt{\E \left [ X^2 \right ]}$ with $X = \sqrt{\sum_{k=1}^T (\noise^k_j )^2}$. Combining the above bounds completes the proof.
\end{proof}
\section{Convergence of SGD and AdaGrad with biased gradients estimates}\label{sec:Biased-SGD-Adagrad}
For the sake of our analysis, we find it helpful to first study the convergence of SGD and AdaGrad when the stochastic estimates of the subgradients may be biased and noisy (Algorithms \ref{Algorithm3} and \ref{Algorithm4}.)
\begin{algorithm}
\caption{Biased SGD}
\label{Algorithm3}
\begin{algorithmic}[1]
\REQUIRE Dataset $\mathcal{S} = (\ds_1,\dots,\ds_n) \in \domain^n$, convex set $\xdomain$, mini-batch size $\batch$, number of iterations $T$.
\STATE Choose arbitrary initial point $x^0 \in \xdomain$;
\FOR{$k = 0;\ k \leq T-1;\ k = k + 1$}
\STATE Sample a batch $\mathcal{D}_k:= \{z_i^k\}_{i=1}^\batch$ from $\mathcal{S}$ uniformly with replacement;
\STATE Set $g^k := \frac{1}{\batch} \sum_{i=1}^\batch g^{k,i}$ where $g^{k,i} \in \partial F(x^k; z_i^k)$;
\STATE Set $ \tilde g^k$ to be the biased estimate of $g^k$;
\STATE Set $ \hg^k := \tilde g^k + \noise^k $ where $\noise^k$ is a zero-mean random variable, independent from previous information;
\STATE $x^{k+1} := \proj_{\xdomain}( x^k - \alpha_k \hg^k)$;
\ENDFOR
\STATE {\bfseries Return:} $\wb{x}^T \defeq \frac{1}{T} \sum_{k = 1}^T x^k$.
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Biased Adagrad}
\label{Algorithm4}
\begin{algorithmic}[1]
\REQUIRE Dataset $\mathcal{S} = (\ds_1,\dots,\ds_n) \in \domain^n$, convex set $\xdomain$, mini-batch size $\batch$, number of iterations $T$.
\STATE Choose arbitrary initial point $x^0 \in \xdomain$;
\FOR{$k = 0;\ k \leq T-1;\ k = k + 1$}
\STATE Sample a batch $\mathcal{D}_k:= \{z_i^k\}_{i=1}^\batch$ from $\mathcal{S}$ uniformly with replacement;
\STATE Set $g^k := \frac{1}{\batch} \sum_{i=1}^\batch g^{k,i}$ where $g^{k,i} \in \partial F(x^k; z_i^k)$;
\STATE Set $ \tilde g^k$ to be the biased estimate of $g^k$;
\STATE Set $ \hg^k := \tilde g^k + \noise^k $ where $\noise^k$ is a zero-mean random variable, independent from previous information;
\STATE Set $H_k = \diag\left(\sum_{i=1}^{k} \hg^i \hg^{i^T} \right)^{\frac{1}{2}}/ \diam_\infty(\xdomain)$;
\STATE $x^{k+1} = \proj_{\xdomain}( x^k - H_k^{-1} \hg^k)$ where the projection is with respect to $\norm{\cdot}_{H_k}$;
\ENDFOR
\STATE {\bfseries Return:} $\wb{x}^T \defeq \frac{1}{T} \sum_{k = 1}^T x^k$.\\
\end{algorithmic}
\end{algorithm}
Also, let
\begin{equation*}
\bias_{\norm{\cdot}}(\tilde g^k) = \E_{\mathcal{D}_k}\left[ \norm{\tilde g^k - g^k} \right]
\end{equation*}
be the bias of $\tilde g_k$ with respect to a general norm $\norm{\cdot}$. The next two theorems characterize the convergence of these two algorithms using this term.
\begin{theorem}
\label{theorem:biased-sgd}
Consider the biased SGD method (Algorithm \ref{Algorithm3}) with a non-increasing sequence of stepsizes $\{\stepsize_k\}_{k=0}^{T-1}$. Then
for any $x\opt \in \argmin_{\xdomain} f$, we have
\begin{equation*}
\E[f(\wb{x}^T) - f(x\opt)]
\le \frac{\diam_2(\xdomain)^2}{2 T \stepsize_{T-1}}
+ \frac{1}{2T} \sum_{k = 0}^{T-1} \E[ \stepsize_k \ltwo{\hg^k}^2]
+\frac{\diam_{\dnorm{\cdot}}(\xdomain)}{T} \sum_{k=0}^{T-1} \bias_{\norm{\cdot}}(\tilde g^k).
\end{equation*}
\end{theorem}
\begin{proof}
We first consider the progress of a single step of the gradient-projected
stochastic gradient method.
We have
\begin{align*}
\half \ltwos{x^{k + 1} - x\opt}^2
& \le \half \ltwos{x^k - x\opt}^2
- \stepsize_k \<\hat{g}^k, x^k - x\opt\>
+ \frac{\stepsize_k^2}{2} \ltwos{\hat{g}^k}^2 \\
& = \half \ltwos{x^k - x\opt}^2
- \stepsize_k \<f'(x^k), x^k - x\opt\>
+ \stepsize_k E_k
+ \frac{\stepsize_k^2}{2} \ltwos{\hat{g}^k}^2,
\end{align*}
where the error random variable $E_k$ is given by
\begin{equation*}
E_k \defeq \<f'(x^k) - g^k, x^k - x\opt\>
+ \<g^k - \tilde{g}^k, x^k - x\opt\>
+ \<\tilde{g}^k - \hat{g}^k, x^k - x\opt\>.
\end{equation*}
Using that $\<f'(x^k), x^k - x\opt\>
\le f(x^k) - f(x\opt)$ then yields
\begin{equation*}
f(x^k) - f(x\opt)
\le \frac{1}{2 \stepsize_k}
\left(\ltwos{x^k - x\opt}^2 - \ltwos{x^{k+1} - x\opt}^2\right)
+ \frac{\stepsize_k}{2} \ltwos{\hat{g}^k}^2
+ E_k.
\end{equation*}
Summing for $k = 0, \ldots, T-1$, by rearranging the terms and using that the stepsizes
are non-increasing, we obtain
\begin{equation}
\label{eqn:intermediate-regret-bound}
\sum_{k = 1}^T [f(x^k) - f(x\opt)]
\le \frac{\diam_2(\xdomain)^2}{2 \stepsize_{T-1}}
+ \sum_{k = 0}^{T-1} \frac{\stepsize_k}{2} \ltwos{\hat{g}^k}^2
+ \sum_{k = 0}^{T-1} E_k.
\end{equation}
Taking expectations from both sides, we have
\begin{align*}
\E[E_k]
&= \E \left[ \<f'(x^k) - g^k, x^k - x\opt\> \right ] + \E \left[ \<g^k - \tilde{g}^k, x^k - x\opt\> \right ] + \E \left[ \<\tilde{g}^k - \hat{g}^k, x^k - x\opt\> \right ]\\
&= \E \left[ \<g^k - \tilde{g}^k, x^k - x\opt\> \right ] \\
& \le \bias_{\norm{\cdot}}(\tilde g^k) \cdot \diam_{\dnorm{\cdot}}(\xdomain),
\end{align*}
where the second equality comes from the fact that the other two expectations are zero, and the last inequality follows from H\"older's inequality. Dividing~\eqref{eqn:intermediate-regret-bound} by $T$, taking expectations, and applying Jensen's inequality, $f(\wb{x}^T) \le \frac{1}{T}\sum_{k=1}^T f(x^k)$, yields the claim.
\end{proof}
\begin{remark}
This result holds in the case that $\stepsize_k$'s are adaptive and depend on observed gradients.
\end{remark}
The next theorem states the convergence of biased Adagrad (Algorithm \ref{Algorithm4}).
\begin{theorem}
\label{theorem:biased-adagrad}
Consider the biased Adagrad method (Algorithm \ref{Algorithm4}). Then
for any $x\opt \in \argmin_{\xdomain} f$, we have
\begin{align*}
\E [f(\wb{x}^T) - f(x\opt)]
\leq \frac{\diam_\infty(\xdomain)}{T} \sum_{j=1}^d \E \left [ \sqrt{\sum_{k=0}^{T-1} (\hg^{k}_j)^2} \right ]
+ \frac{\diam_{\dnorm{\cdot}}(\xdomain)}{T} \sum_{k=0}^{T-1} \bias_{\norm{\cdot}}(\tilde g^k).
\end{align*}
\end{theorem}
\begin{proof}
Recall that $x^{k+1}$ is the projection of $x^k - H_k^{-1} \hat{g}^k$ onto $\xdomain$ with respect to $\norm{\cdot}_{H_k}$. Hence, since $x \opt \in \xdomain$ and projections are non-expansive, we have
\begin{equation} \label{eqn:non-expansive}
\normH{ x^{k+1} - x \opt}{k}^2
\leq \normH{ x^k - H_k^{-1} \hg^k - x \opt}{k}^2.
\end{equation}
Now, expanding the right hand side yields
\begin{align*}
\half \normH{x^{k + 1} - x\opt}{k}^2
& \leq \half \normH{x^k - x\opt}{k}^2 - \<\hg^k, x^k - x\opt\> + \half \normHi{\hg^k}{k}^2 \\
& = \half \normH{x^k - x\opt}{k}^2
- \<g^k, x^k - x\opt\>
+ \<g^k - \hg^k, x^k - x\opt\>
+ \half \normHi{\hg^k}{k}^2.
\end{align*}
Taking expectation and using that $ \E \left [\<g^k, x^k - x\opt\> \right ] \geq \E \left [ f(x^k) - f(x\opt) \right]$ from convexity along with the fact that $\E \left [ \<g^k - \hg^k, x^k - x\opt\> \right ] = \E \left [ \<g^k - \tilde g^k, x^k - x\opt\> \right ] $, we have
\begin{align*}
\half \E & \left [ \normH{x^{k + 1} - x\opt}{k}^2 \right ] \nonumber \\
& \leq \E \left [ \half \normH{x^k - x\opt}{k}^2 - (f(x^k) - f(x\opt)) + \half \normHi{\hg^k}{k}^2 \right ]
+ \E \left [ \<g^k - \tilde g^k, x^k - x\opt\> \right ].
\end{align*}
Thus, using H\"older's inequality, we have
\begin{equation*}
f(x^k) - f(x\opt)
\le \half \E \left[ \normH{x^{k} - x\opt}{k}^2 - \normH{x^{k + 1} - x\opt}{k}^2
+\normHi{\hg^k}{k}^2 \right]
+ \bias_{\norm{\cdot}}(\tilde g^k) \cdot \diam_{\dnorm{\cdot}}(\xdomain).
\end{equation*}
Now the claim follows using standard techniques for Adagrad (as
for example Corollary 4.3.8 in~\cite{Duchi18}).
\end{proof}
\section{Private Adagrad using coordinate-wise projection}
The previous algorithms project the gradient as a vector to some
ellipsoids and can be thought of as dividing the clipping budget
between the different coordinates. In this section, we instead
clip each coordinate independently to some threshold and divide
the privacy budget between the coordinates.
In particular, at iterate $k$, given samples $\statrv_1,\dots,\statrv_\batch$ with
$g^{k,i} \in \partial F(x^k; \statrv_i)$, we define the following mechanism
for the $j$'th coordinate: first, we project the $j$'th coordinate to
the range $[-\lambda_j,\lambda_j]$,
\begin{equation*}
\tilde g^k_j = \frac{1}{b} \sum_{i=1}^b \pi_{\lambda_j}(g^{k,i}_j),
\end{equation*}
then we add noise to preserve $\diffp_j$-differential privacy,
\begin{equation*}
\hg^k_j = \tilde g^k_j + \frac{\rho \lambda_j}{\batch \diffp_j} \noise^k_j, \quad
\noise^k_j \simiid \normal(0, 1).
\end{equation*}
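A minimal sketch of this per-coordinate mechanism (ours, for illustration only; it elides the surrounding optimization loop) is the following:
\begin{verbatim}
import random

def private_coord_grad(grads, lam, eps, rho):
    """grads: b per-example gradients; lam, eps: per-coordinate lists."""
    b, d = len(grads), len(lam)
    out = []
    for j in range(d):
        clipped = [max(-lam[j], min(lam[j], g[j])) for g in grads]
        mean = sum(clipped) / b                    # \tilde g^k_j
        noise = random.gauss(0.0, 1.0) * rho * lam[j] / (b * eps[j])
        out.append(mean + noise)                   # \hat g^k_j
    return out
\end{verbatim}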
We begin by stating the privacy guarantees in the following lemma.
\begin{lemma}
\label{lemma:privacy-coordinate}
Assume $\sum_{j=1}^d \diffp_j^2 \le \diffp^2$ and
we run the above iteration for $T \lesssim \frac{n^2}{b^2}$
iterations. Then the algorithm is $(\rho^2, \diffp^2)$-Renyi differentially
private.
\end{lemma}
\begin{proof}[sketch]
First, the private mechanism of each coordinate
preserves $(\rho^2, \diffp_j^2 b^2/n^2)$-Renyi DP. Thus
the privacy of each iterate is
$(\rho^2, \diffp^2 b^2/n^2)$-Renyi DP. Therefore
the composition of $T \lesssim n^2/b^2$ iterations
results in $(\rho^2, \diffp^2)$-Renyi differential privacy.
\end{proof}
\subsection{Regret bounds for Adagrad}
Let us now analyze the regret bounds that private Adagrad
achieves using the scheme of the previous section.
For this algorithm, we need the following modification of
Assumption~\ref{assumption:bounded-moment}.
\begin{assumption}
\label{assumption:bounded-moment-coord}
For every $1 \le j \le d$ there is $\sigma_j > 0$
such that for any selection $F'(x; \statval) \in \partial F(x;
\statval)$ we have
\begin{equation}
\label{eqn:moment-coord-norm}
\E \left[ | \nabla F_j(x; \statrv) |^p \right ] ^{1/p} \leq \sigma_j,
\end{equation}
where $p \geq 2$ is a positive integer.
\end{assumption}
We have the following bounds in this setting.
\begin{theorem}
\label{theorem:private-adagrad-coord}
Let Assumption~\ref{assumption:bounded-moment-coord} hold
and $x^k$ follow the coordinate-based iteration.
Let $\diffp_j = \diffp \cdot \frac{\lambda_j^{1/3}}{\sqrt{\sum_{j=1}^d \lambda_j^{2/3}}}$
and $T = \frac{n^2}{b^2}$.
Define
$\wb{x}^T \defeq \frac{1}{T} \sum_{k = 1}^T x^k$. Then
for any $x\opt \in \xdomain$,
\begin{align*}
\E [f(\wb{x}^T) - f(x\opt)]
& \leq \frac{\sqrt{2} \diam_\infty(\xdomain)}{T} \sum_{j=1}^d
\E \left [ \sqrt{\sum_{k=1}^T (g^{k}_j)^2} \right ] \\
& \quad + \frac{\sqrt{2} \diam_\infty(\xdomain) \rho
\left( \sum_{j=1}^d \lambda_j^{2/3} \right)^{3/2}}{n \diffp } \\
& \quad + \frac{ \diam_{\infty}(\xdomain)}{(p-1)}
\sum_{j=1}^d \frac{\sigma_j^p}{\lambda_j^{p-1}}.
\end{align*}
In particular, setting $\lambda_j = \sigma_j \cdot
\left( \frac{n \diffp \sum_{j=1}^d \sigma_j}{( \sum_{j=1}^d \sigma_j^{2/3})^{3/2}} \right)^{1/p}$
yields,
\begin{align*}
\E [f(\wb{x}^T) - f(x\opt)]
& \lesssim \frac{\diam_\infty(\xdomain)}{T} \sum_{j=1}^d
\E \left [ \sqrt{\sum_{k=1}^T (g^{k}_j)^2} \right ] \\
& \quad + \diam_\infty(\xdomain) \rho
\left(\frac{ \left( \sum_{j=1}^d \sigma_j^{2/3} \right)^{3/2}}
{n \diffp } \right)^{1-1/p}
\left( \sum_{j=1}^d \sigma_j \right)^{1/p}.
\end{align*}
\end{theorem}
The proof follows from the next two lemmas which bound
the bias and second moment.
\begin{lemma}
\label{lemma:private-bias-coord}
Let Assumption~\ref{assumption:bounded-moment-coord} hold.
Then
\begin{equation*}
\bias_1(\tilde g^k)
= \E \left[ \lone{\tilde g^k - g^k} \right]
\le \sum_{j=1}^d \frac{\sigma_j^p}{(p-1) \lambda_j^{p-1}}.
\end{equation*}
\end{lemma}
\begin{proof}
Follows immediately from Lemma~\ref{lemma:projection-bias} applied
to each coordinate.
\end{proof}
\noindent
The following lemma upper bounds the second moment.
\begin{lemma}
\label{lemma:private-second-moment-coord}
We have that
\begin{equation*}
\E \left [ \sum_{j=1}^d \sqrt{\sum_{k=1}^T (\hg^{k}_j)^2} \right ]
\le \sqrt{2} \sum_{j=1}^d \E \left [ \sqrt{\sum_{k=1}^T (g^{k}_j)^2} \right ]
+ \frac{ \sqrt{2} \rho n \left( \sum_{j=1}^d \lambda_j^{2/3} \right)^{3/2}}{\batch^2 \diffp}.
\end{equation*}
\end{lemma}
Theorem~\ref{theorem:private-adagrad-coord} immediately gives the following
corollary for the setting of sub-gaussian coordinates ($ p \ge \log d$).
\begin{corollary}
\label{cor:sub-gaussian-adagrad-coord}
Let Assumption~\ref{assumption:bounded-moment-coord} hold
with $p \ge \log d$. If
$\batch = 1$, then the coordinate-based Adagrad has
\begin{align*}
\E [f(\wb{x}^T) - f(x\opt)]
& \lesssim \diam_\infty(\xdomain)
\frac{ \left(\sum_{j=1}^d \sigma_j ^{2/3}\right)^{3/2} }{n \diffp} .
\end{align*}
\end{corollary}
\subsection{Unknown parameters}
The advantage of the coordinate-based algorithm is that it allows
an easy extension to the case where we do not know the
parameters of the distribution.
In this section, we focus on the setting where
Assumption~\ref{assumption:bounded-moment-coord} holds
but we do not know the parameters $\sigma_j$.
The idea is to construct an online estimate $\hat \sigma^k_j$ of the
true parameters. The following proposition shows that if the online
estimate is accurate on most iterations, then we recover the same
regret bounds.
\begin{theorem}
\label{theorem:private-adagrad-coord-unknown}
Let Assumption~\ref{assumption:bounded-moment-coord} hold
and $x^k$ follow the coordinate-based iteration.
Let $\diffp^k_j = \diffp \cdot \frac{(\lambda^k_j)^{1/3}}{\sqrt{\sum_{j=1}^d (\lambda^k_j)^{2/3}}}$,
$\lambda^k_j = \hat \sigma^k_j \log d \cdot
\left( n \diffp \right)^{1/p}$,
and $T = \frac{n^2}{b^2}$.
Define
$\wb{x}^T \defeq \frac{1}{T} \sum_{k = 1}^T x^k$.
If $\hat \sigma^k_j \le \sigma_j / \log d$ for at most $O(\log d)$ iterates,
and $\hat \sigma^k_j \le \sigma_j \cdot \log d $ for all iterates, then
\begin{align*}
\E [f(\wb{x}^T) - f(x\opt)]
& \lesssim \frac{\diam_\infty(\xdomain)}{T} \sum_{j=1}^d
\E \left [ \sqrt{\sum_{k=1}^T (g^{k}_j)^2} \right ] \\
& \quad + \diam_\infty(\xdomain) \rho
\frac{ \left( \sum_{j=1}^d \sigma_j^{2/3} \right)^{3/2} \log^2 d }
{(n \diffp)^{1-1/p} } \\
& \quad + \diam_\infty(\xdomain) \frac{\log d ~ \sum_{j=1}^d \sigma_j}{T}.
\end{align*}
\end{theorem}
\begin{proof}
The proof follows from the next two lemmas and the
bounds of Theorem~\ref{theorem:biased-adagrad}:
Lemma~\ref{lemma:private-bias-coord-unknown} provides
an upper bound for the bias and
Lemma~\ref{lemma:private-second-moment-coord-unknown}
provides an upper bound for the second moment.
\end{proof}
\begin{lemma}
\label{lemma:private-bias-coord-unknown}
Let the conditions of Theorem~\ref{theorem:private-adagrad-coord-unknown} hold.
Then
\begin{equation*}
\frac{1}{T} \sum_{k=1}^T \bias_1(\tilde g^k)
= \frac{1}{T} \sum_{k=1}^T \E \left[ \lone{\tilde g^k - g^k} \right]
\le \frac{ \sum_{j=1}^d \sigma_j}{(p-1) (n \diffp)^{1 - 1/p}} +
\frac{\sum_{j=1}^d \sigma_j \log d}{T}.
\end{equation*}
\end{lemma}
\begin{proof}
As we want to upper bound the bias for $\lone{\cdot}$, we
focus on the $j$'th coordinate.
Let $S_g = \{k : \hat \sigma^k_j \ge \sigma_j /\log d \}$ be the
set of iterates where $\hat \sigma^k_j $ is large
and $S_b = [T] \setminus S_g$ be the remaining iterates.
First, we bound the bias for iterates $k \in S_g$.
If $k \in S_g$, then we get that
$\lambda_j \ge \sigma_j ~ (n \diffp)^{1/p}$. Therefore
for $k \in S_g$ we have using Lemma~\ref{lemma:private-bias-coord}
that the bias at iteration $k$ is:
\begin{equation*}
\bias(\tilde g^k_j)
= \E \left[ |\tilde g^k_j - g^k_j| \right]
\le \frac{\sigma_j^p}{(p-1) (\lambda^k_j)^{p-1}}
\le \frac{\sigma_j}{(p-1) (n \diffp)^{1 - 1/p}}.
\end{equation*}
For $k \in S_b$, we have that
\begin{equation*}
\E \left[ |\tilde g^k_j - g^k_j| \right]
\le \E \left[ |g^k_j| \right]
\le \sigma_j,
\end{equation*}
where the first inequality follows from the definition of the projection and
the second inequality follows from Assumption~\ref{assumption:bounded-moment-coord}.
Therefore as $| S_b | \le O(\log d)$, we get that
\begin{align*}
\frac{1}{T} \sum_{k=1}^T \bias(\tilde g^k_j)
& = \frac{1}{T} \sum_{k \in S_g} \bias(\tilde g^k_j)
+ \frac{1}{T} \sum_{k \in S_b} \bias(\tilde g^k_j) \\
& \le \frac{\sigma_j}{(p-1) (n \diffp)^{1 - 1/p}} +
\frac{\sigma_j \log d}{T}.
\end{align*}
Summing over all coordinates, the claim follows.
\end{proof}
\noindent
The following lemma upper bounds the second moment.
\begin{lemma}
\label{lemma:private-second-moment-coord-unknown}
Let the conditions of Theorem~\ref{theorem:private-adagrad-coord-unknown} hold.
Then
\begin{equation*}
\E \left [ \sum_{j=1}^d \sqrt{\sum_{k=1}^T (\hg^{k}_j)^2} \right ]
\le \sqrt{2} \sum_{j=1}^d \E \left [ \sqrt{\sum_{k=1}^T (g^{k}_j)^2} \right ]
+ \sqrt{2} \rho \frac{n \log^2 d \left( \sum_{j=1}^d \sigma_j^{2/3} \right)^{3/2} }
{\batch^2 \diffp} (n \diffp)^{1/p}.
\end{equation*}
\end{lemma}
\begin{proof}
The claim follows immediately from Lemma~\ref{lemma:private-second-moment-coord}
and the assumption that $\hat \sigma^k_j \le \sigma_j \log d$ for all iterates
which implies that $\lambda^k_j \le \sigma_j \log^2 d \cdot
\left( n \diffp \right)^{1/p}$ for all $k$, hence we get
\begin{align*}
{\left( \sum_{j=1}^d \lambda_j^{2/3} \right)^{3/2}}
\le {\log^2 d \left( \sum_{j=1}^d \sigma_j^{2/3} \right)^{3/2} }(n \diffp)^{1/p}.
\end{align*}
\end{proof}
\subsubsection{Constructing an online estimate}
An open question is how to construct an online estimate $\hat \sigma^k_j$ that satisfies the assumptions of Theorem~\ref{theorem:private-adagrad-coord-unknown}.
\section{Private Adaptive Gradient Methods} \label{sec:algs}
In this section, we study and develop \PASAN and \PAGAN, differentially private versions of
Stochastic Gradient Descent (SGD) with adaptive stepsize (Algorithm
\ref{Algorithm1}) and Adagrad \cite{DuchiHaSi11} (Algorithm
\ref{Algorithm2}). The challenge in making these algorithms private is that
adding isotropic Gaussian noise---as is standard in the differentially
private optimization literature---completely eliminates the geometrical
properties that are crucial for the performance of adaptive gradient
methods. We thus add noise that adapts to gradient geometry while
maintaining privacy.
More precisely, our private versions of adaptive optimization algorithms
proceed as follows: to privatize the gradients, we first project them to an
ellipsoid capturing their geometry, then add anisotropic Gaussian noise
whose covariance corresponds to the positive definite matrix $A$ that
defines the ellipsoid. Finally, we apply the adaptive algorithm's step with
the private gradients. We present our private versions of SGD with adaptive
stepsizes and Adagrad in Algorithms~\ref{Algorithm1} and~\ref{Algorithm2},
respectively.
\begin{algorithm}[tb]
\caption{Private Adaptive SGD with Adaptive Noise (\PASAN)}
\label{Algorithm1}
\begin{algorithmic}[1]
\REQUIRE Dataset $\mathcal{S} = (\ds_1,\dots,\ds_n) \in \domain^n$, convex set $\xdomain$, mini-batch size $\batch$, number of iterations $T$, privacy parameters $\diffp, \delta$;
\STATE Choose arbitrary initial point $x^0 \in \xdomain$;
\FOR{$k=0$ to $T-1$\,}
\STATE Sample a batch $\mathcal{D}_k:= \{z_i^k\}_{i=1}^\batch$ from $\mathcal{S}$ uniformly with replacement;
\STATE Choose ellipsoid $A_k$;
\STATE Set $\tilde g^k := \frac{1}{\batch} \sum_{i=1}^\batch \pi_{A_k}(g^{k,i})$ where $g^{k,i} \in \partial F(x^k; z_i^k)$;
\STATE Set $ \hg^k = \tilde g^k + \sqrt{\log(1/\delta)}/{(\batch \diffp)} \noise^k$ where $\noise^k \simiid \normal(0, A_k^{-1})$;
\STATE Set $\stepsize_k = \alpha/{\sqrt{\sum_{i = 0}^k \ltwos{\hg^i}^2}}$;
\STATE $x^{k+1} := \proj_{\xdomain}( x^k - \alpha_k \hg^k)$;
\ENDFOR
\STATE {\bfseries Return:} $\wb{x}^T \defeq \frac{1}{T} \sum_{k = 1}^T x^k$
\end{algorithmic}
\end{algorithm}
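For concreteness, one \PASAN{} iteration with a diagonal ellipsoid $A_k = \mathrm{diag}(a)$ might be sketched as follows (our illustration, not a reference implementation: it assumes $\pi_{A_k}$ rescales $g$ so that $\norm{g}_{A_k} \le 1$, and it omits the projection onto $\xdomain$):
\begin{verbatim}
import math, random

def pasan_step(x, grads, a_diag, eps, delta, alpha, state):
    """One PASAN step; grads = per-example subgradients, A = diag(a_diag)."""
    b, d = len(grads), len(x)

    def proj(g):  # projection onto the ellipsoid {v : ||v||_A <= 1}
        na = math.sqrt(sum(ai * gi * gi for ai, gi in zip(a_diag, g)))
        c = 1.0 / max(1.0, na)
        return [c * gi for gi in g]

    pg = [proj(g) for g in grads]
    tg = [sum(p[j] for p in pg) / b for j in range(d)]  # clipped mean
    scale = math.sqrt(math.log(1.0 / delta)) / (b * eps)
    hg = [tg[j] + scale * random.gauss(0.0, 1.0 / math.sqrt(a_diag[j]))
          for j in range(d)]                            # noise ~ N(0, A^{-1})
    state["sumsq"] += sum(g * g for g in hg)            # for adaptive step
    step = alpha / math.sqrt(state["sumsq"])
    return [xj - step * gj for xj, gj in zip(x, hg)], state
\end{verbatim}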
\begin{algorithm}[tb]
\caption{Private Adagrad with Adaptive Noise (\PAGAN)}
\label{Algorithm2}
\begin{algorithmic}[1]
\REQUIRE Dataset $\mathcal{S} = (\ds_1,\dots,\ds_n) \in \domain^n$, convex set $\xdomain$, mini-batch size $\batch$, number of iterations $T$, privacy parameters $\diffp, \delta$;
\STATE Choose arbitrary initial point $x^0 \in \xdomain$;
\FOR{$k=0$ to $T-1$\,}
\STATE Sample a batch $\mathcal{D}_k:= \{z_i^k\}_{i=1}^\batch$ from $\mathcal{S}$ uniformly with replacement;
\STATE Choose ellipsoid $A_k$;
\STATE Set $\tilde g^k := \frac{1}{\batch} \sum_{i=1}^\batch \pi_{A_k}(g^{k,i})$ where $g^{k,i} \in \partial F(x^k; z_i^k)$;
\STATE Set $ \hg^k = \tilde g^k + \sqrt{\log(1/\delta)}/{(\batch \diffp)} \noise^k$ where $\noise^k \simiid \normal(0, A_k^{-1})$;
\STATE Set $H_k = \diag\left(\sum_{i=0}^{k} \hg^i \hg^{i^T} \right)^{\frac{1}{2}}/ \alpha$;
\STATE $x^{k+1} = \proj_{\xdomain}( x^k - H_k^{-1} \hg^k)$ where the projection is with respect to $\norm{\cdot}_{H_k}$;
\ENDFOR
\STATE {\bfseries Return:} $\wb{x}^T \defeq \frac{1}{T} \sum_{k = 1}^T x^k$
\end{algorithmic}
\end{algorithm}
Before analyzing the utility of these algorithms, we provide their
privacy guarantees in the following lemma (see Appendix \ref{sec:proof-lemma:privacy} for its proof).
\begin{lemma}
\label{lemma:privacy}
There exist constants $\bar{\diffp}$ and $c$ such that, for any
$\diffp \leq \bar{\diffp}$, and with $T = c{n^2}/{b^2}$,
Algorithm~\ref{Algorithm1} and Algorithm~\ref{Algorithm2} are
$(\diffp,\delta)$-differentially private.
\end{lemma}
Having established the privacy guarantees of our algorithms, we now proceed
to demonstrate their performance. To do so, we introduce an
assumption that, as we shall see presently, will allow
us to work in gradient geometries different from the classical Euclidean
($\ell_2$) one common to current private optimization analyses.
\begin{assumption}
\label{assumption:Lipschitz_C}
There exists a function
$\lipobj : \mc{Z} \times \R^{d \times d} \to \R_+$ such
that for any diagonal $C \succ 0$,
the function $F(\cdot; z)$ is
$\lipobj(z;C)$-Lipschitz with respect to the Mahalanobis norm
$\norm{\cdot}_{C^{-1}}$ over $\xdomain$, i.e., $\norm{\nabla F(x;z)}_C \le
\lipobj(z;C) $ for all $x \in \xdomain$.
\end{assumption}
\noindent
The moments of the Lipschitz constant $G$ will be central
to our convergence analyses, and to that end,
for $p \ge 1$ we define the shorthand
\begin{equation}
\label{eqn:moment-matrix-norm}
\lipobj_p(C):=
\E_{z \sim P} \left[ \lipobj(z;C)^p \right]^{1/p}.
\end{equation}
The quantity $\lipobj_p(C)$ is the $p$th moment of the gradients in the
Mahalanobis norm $\norm{\cdot}_C$; these moments are the key to our stronger
convergence guarantees and govern the error in projecting our gradients. In
most standard analyses of private optimization (and stochastic optimization
more broadly), one takes $C = I$ and $p = \infty$, corresponding to the
assumption that $F(\cdot; z)$ is $\lipobj$-Lipschitz for all $z$ and that
subgradients $F'(x; z)$ are uniformly bounded in both $x$ and $z$. Even when
this is the case---which may be unrealistic---we always have $\lipobj_p(C)
\le \lipobj_\infty(C)$, and in realistic settings there is often a
significant gap; by depending instead on appropriate moments $p$, we shall
see it is often possible to achieve far better convergence guarantees than
would be possible by relying on uniformly bounded moments. (See also
Barber and Duchi's
discussion of these issues in the context of mean
estimation~\cite{BarberDu14a}.)
An example may be clarifying:
\begin{example}\label{example-sub-Gaussian}
Let $g:\reals^d \to \reals$ be a convex and differentiable function, let
$F(x;Z) = g(x) + \<x, Z\>$ where $Z \in \R^d$ and the coordinates $Z_j$
are independent and $\sigma_j^2$-sub-Gaussian, and let $C \succ 0$ be diagonal.
Then by standard moment bounds (see
Appendix~\ref{sec:proof-example-sub-Gaussian}), if $\|\nabla g(x)\|_C \leq
\mu$ we have
\begin{equation}\label{eqn:lip-sub-Gaussian}
\lipobj_p(C)
\le \mu + O(1) \sqrt{p} \sqrt{\sum_{j=1}^d C_{jj} \sigma_j^2}.
\end{equation}
As this
bound shows, while $\lipobj_\infty$ is infinite in this example,
$\lipobj_p$ is finite. As a result, our analysis extends to
settings in which the stochastic gradients are not uniformly bounded.
\end{example}
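To see the gap numerically, a short Monte Carlo sketch (ours; the scales $\sigma_j = j^{-3/2}$ and the choice $C_{jj} = \sigma_j^{-4/3}$ anticipate the stylized example in Section~\ref{sec:ub-adagrad}) estimates $\lipobj_p(C) = \E[\norm{Z}_C^p]^{1/p}$ in this example with $g = 0$, so that $\lipobj(Z;C) = \norm{Z}_C$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d = 100
sigma = np.arange(1, d + 1) ** -1.5     # sigma_j = j^{-3/2}
C = sigma ** (-4.0 / 3.0)               # diagonal of C

Z = rng.normal(scale=sigma, size=(100_000, d))
lip = np.sqrt((C * Z**2).sum(axis=1))   # G(Z; C) = ||Z||_C when g = 0

for p in (1, 2, 4, 8):
    print(p, (lip**p).mean() ** (1.0 / p))
print("sample max (proxy for G_inf):", lip.max())
\end{verbatim}
The low-order moments stabilize near $\sqrt{\sum_j C_{jj}\sigma_j^2}$, while the sample maximum keeps growing with the sample size, exactly the $\lipobj_p \ll \lipobj_\infty$ behavior the bound~\eqref{eqn:lip-sub-Gaussian} describes.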
While we defined $\lipobj_p(C)$ by taking expectation with respect to the
original distribution $P$, we mainly focus on empirical risk minimization
and thus require the empirical Lipschitz
constant for a given dataset $\Ds$:
\begin{equation} \label{eqn:empirical_lipobj}
\hat{\lipobj}_p(\Ds;C) := \left(\frac{1}{n} \sum_{i=1}^n \lipobj(z_i;C)^p
\right)^{1/p}.
\end{equation}
A calculation using Chebyshev's inequality and the fact that $p$-norms are
increasing immediately gives the next lemma:
\begin{lemma}
\label{lemma:empirical_Lipschitz}
Let $\Ds$ be a dataset with $n$ points sampled from distribution $P$.
Then with probability at least $1-1/n$, we have
\begin{equation*}
\hat{\lipobj}_p(\Ds;C) \leq \lipobj_p(C) + \lipobj_{2p}(C) \leq 2 \lipobj_{2p}(C).
\end{equation*}
\end{lemma}
\noindent
It is possible to get bounds of the form
$\hat{\lipobj}_p(\Ds; C) \lesssim \lipobj_{kp}(C)$ with probability at least
$1 - 1/n^k$ using Khintchine's inequalities, but this is secondary for us.
Given these moment bounds, we can characterize the convergence of both
algorithms, deferring proofs to Appendix~\ref{sec:proofs-alg}.
\subsection{Convergence of {\PASAN}}
\label{sec:ub-sgd}
We begin with \PASAN (Algorithm
\ref{Algorithm1}). As in the non-private setting, where SGD (and its
adaptive variants) is most appropriate for domains $\xdomain$ with small
$\ell_2$-diameter $\diam_2(\xdomain)$, our bounds in this section
mostly depend on $\diam_2(\xdomain)$.
\begin{theorem}
\label{theorem:convergence-SGD}
Let $\Ds \in \mc{Z}^n$ and $C \succ 0$ be diagonal, $p \ge 1$, and assume
that $\hat{\lipobj}_p(\Ds; C) \le \lipobj_{2p}(C)$. Consider running \PASAN
(Algorithm \ref{Algorithm1}) with $\alpha=\diam_2(\xdomain)$, $T=cn^2/b^2$, $A_k = \frac{1}{B^2} C$,
where\footnote{We provide the general statement of this theorem for
positive $B$ in Appendix \ref{proof-theorem:convergence-SGD}.}
\begin{equation*}
B =
2\lipobj_{2p}(C) \left( \frac{\diam_{\norm{\cdot}_{C^{-1}}}(\xdomain) n
\diffp}{\diam_2(\xdomain) \sqrt{\tr(C^{-1})} \sqrt{\log(1/\delta)}}
\right )^{1/p}
\end{equation*}
and $c$ is the constant in Lemma~\ref{lemma:privacy}. Then
\begin{align*}
& \E [f(\wb{x}^T;\Ds) - \min_{x \in \xdomain} f(x;\Ds)] \\
& \leq \bigO \left( \frac{\diam_2(\xdomain)}{T} \sqrt{\sum_{k=1}^T \E [\ltwos{g^k}^2] }
+ \diam_2(\xdomain) \lipobj_{2p}(C) \times \right. \\
& \left. \;\;\;\;\;~ \left( \frac{\sqrt{\tr(C^{-1}) \ln\tfrac 1 \delta} }{n \diffp} \right)^{\frac{p-1}{p}} \!\! \!
\left( \frac{\diam_{\norm{\cdot}_{C^{-1}}}(\xdomain)}{\diam_2(\xdomain)} \right)^{\frac{1}{p}} \right ),
\end{align*}
where the expectation is taken over the internal randomness of the algorithm.
\end{theorem}
To gain intuition for these
bounds, note that for large enough $p$, the bound from Theorem
\ref{theorem:convergence-SGD} is approximately
\iftoggle{arxiv}{
\begin{equation}\label{eqn:SGD-approx}
\diam_2(\xdomain) \Bigg(
\underbrace{\frac{1}{T} \sqrt{\sum_{k=1}^T \E [\ltwos{g^k}^2] }}_{=:
R_{\textup{std}}(T)} + \lipobj_{2p}(C) \cdot \frac{\sqrt{\tr(C^{-1}) \log(1/\delta)} }{n \diffp}
\Bigg).
\end{equation}
}
{
\begin{equation}\label{eqn:SGD-approx}
\begin{split}
\diam_2(\xdomain) \Bigg(
& \underbrace{\frac{1}{T} \sqrt{\sum_{k=1}^T \E [\ltwos{g^k}^2] }}_{=:
R_{\textup{std}}(T)} \\
& + \lipobj_{2p}(C) \cdot \frac{\sqrt{\tr(C^{-1}) \log(1/\delta)} }{n \diffp}
\Bigg).
\end{split}
\end{equation}
}
The term $R_{\textup{std}}(T)$ in~\eqref{eqn:SGD-approx} is the standard non-private
convergence rate for SGD with adaptive stepsizes~\cite{BartlettHaRa07,
Duchi18} and (in a minimax sense) is unimprovable even without
privacy; the second term is the
cost of privacy. In the standard setting of gradients uniformly bounded in
$\ell_2$-norm, where $C = I$ and $p=\infty$, this bound recovers the
standard rate $\diam_2(\xdomain) \lipobj_\infty(I) \frac{\sqrt{d
\log(1/\delta)}}{n \diffp}$. However, as we show in our examples, this
bound can offer significant improvements whenever $C \neq I$ such that
$\tr(C^{-1}) \ll d$ or $\lipobj_{2p}(C) \ll \lipobj_\infty$ for some $p <
\infty$.
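For instance (an illustration of ours, using the polynomially decaying scales of the later examples), taking $C_{jj} = j^{2}$, i.e., the choice $C_{jj} = \sigma_j^{-4/3}$ when $\sigma_j = j^{-3/2}$, gives $\tr(C^{-1}) = \sum_{j} j^{-2} \le \pi^2/6$, a dimension-free constant:
\begin{verbatim}
import numpy as np

for d in (10**2, 10**4, 10**6):
    tr_C_inv = np.sum(np.arange(1, d + 1, dtype=float) ** -2.0)
    print(f"d = {d:>7}:  tr(C^-1) = {tr_C_inv:.4f}   vs   d = {d}")
\end{verbatim}
so the privacy term in~\eqref{eqn:SGD-approx} loses its explicit dimension dependence.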
\subsection{Convergence of \PAGAN}
\label{sec:ub-adagrad}
Having established our bounds for \PASAN, we now present our results for \PAGAN (Algorithm \ref{Algorithm2}).
In the non-private setting, adaptive gradient methods such as Adagrad are superior to SGD for constraint sets
such as $\xdomain = [-1,1]^d$, where $\diam_\infty(\xdomain) \ll \diam_2(\xdomain)$. Accordingly, our bounds in this section depend on $\diam_\infty(\xdomain)$.
\begin{theorem}\label{theorem:private-adagrad}
Let $\Ds \in \mc{Z}^n$ and $C \succ 0$ be diagonal, $p \ge 1$, and assume
that $\hat{\lipobj}_p(\Ds; C) \le \lipobj_{2p}(C)$. Consider running \PAGAN
(Algorithm~\ref{Algorithm2}) with $\alpha = \diam_\infty(\xdomain)$, $T=cn^2/b^2$, $A_k = \frac{1}{B^2} C$,
where
\begin{equation*}
B = 2\lipobj_{2p}(C) \left( \frac{\diam_{\norm{\cdot}_{C^{-1}}}(\xdomain) n \diffp}{\diam_\infty(\xdomain) \sqrt{\log(1/\delta)} \tr(C^{-\half})} \right)^{1/p}
\end{equation*}
and $c$ is the constant in Lemma \ref{lemma:privacy}. Then
\begin{align*}
& \E [f(\wb{x}^T;\Ds) - \min_{x \in \xdomain} f(x;\Ds)] \\
& \leq \! \bigO \left( \frac{\diam_\infty(\xdomain)}{T} \sum_{j=1}^d
\E \left [ \sqrt{\sum_{k=1}^T (g^{k}_j)^2} \right ]
\! + \diam_\infty(\xdomain) \times
\right. \\
& \left. \! \;\;\;\lipobj_{2p}(C)
\left( \frac{\sqrt{\ln\tfrac 1 \delta} \tr(C^{-\half})}{n \diffp} \right)^{\frac{p-1}{p}} \!\!\!
\left( \frac{\diam_{\norm{\cdot}_{C^{-1}}}(\xdomain)}{\diam_\infty(\xdomain)} \right)^{\frac{1}{p}} \right),
\end{align*}
where the expectation is taken over the internal randomness of the algorithm.
\end{theorem}
To gain intuition, we again consider the large $p$ case, where
Theorem~\ref{theorem:private-adagrad} simplifies to roughly
\iftoggle{arxiv}{
\begin{equation*}
\diam_\infty(\xdomain) \Bigg(
\underbrace{\frac{1}{T} \sum_{j=1}^d
\E \left [ \sqrt{\sum_{k=1}^T (g^{k}_j)^2} \right ]}_{=:
R_{\textup{ada}}(T)} + \lipobj_{2p}(C) \left( \frac{\sqrt{\log(1/\delta)} \tr(C^{-\half})}{n \diffp} \right) \Bigg).
\end{equation*}
}
{
\begin{equation*}
\begin{split}
\diam_\infty(\xdomain) \Bigg(
& \underbrace{\frac{1}{T} \sum_{j=1}^d
\E \left [ \sqrt{\sum_{k=1}^T (g^{k}_j)^2} \right ]}_{=:
R_{\textup{ada}}(T)} \\
& + \lipobj_{2p}(C) \left( \frac{\sqrt{\log(1/\delta)} \tr(C^{-\half})}{n \diffp} \right) \Bigg).
\end{split}
\end{equation*}
}
In analogy with Theorem~\ref{theorem:convergence-SGD}, the first term
$R_{\textup{ada}}(T)$ is the
standard error for non-private Adagrad after $T$
iterations~\cite{DuchiHaSi11}---and hence
unimprovable~\cite{LevyDu19}---while the second is the privacy cost. In
some cases, we may have
$\diam_\infty(\xdomain) = \diam_2(\xdomain) / \sqrt{d}$, so private
Adagrad can offer significant improvements over SGD whenever the matrix $C$
has polynomially decaying diagonal.
To clarify the advantages and scalings we expect,
we may consider an extremely stylized example with
sub-Gaussian distributions. Assume now---in the context
of Example~\ref{example-sub-Gaussian}---that we are
optimizing the random linear function $F(x; Z) = \<x, Z\>$,
where $Z$ has independent $\sigma_j^2$-sub-Gaussian components.
In this case, taking $p = \log d$,
$C_{jj} = \sigma_j^{-4/3}$, and $b = 1$,
Theorem~\ref{theorem:private-adagrad} guarantees that
\PAGAN~(Algorithm~\ref{Algorithm2}) has convergence
\begin{align}
\label{eqn:pagan-subgaussian-bound}
& \E [f(\wb{x}^T;\Ds) - \min_{x \in \xdomain} f(x;\Ds)]
\leq \iftoggle{arxiv}{}{\\
&} \bigO(1)
\diam_\infty(\xdomain) \bigg[ R_{\textup{ada}}(T)
+ \frac{ (\sum_{j=1}^d \sigma_j^{2/3})^{3/2} }{n \diffp}
\log \frac{d}{\delta}\bigg].
\end{align}
On the other hand, for \PASAN~(Algorithm~\ref{Algorithm1}),
with $p = \log d$, $b=1$, the choice
$C_{jj} = \sigma_j^{-1}$ optimizes the bound
of Theorem~\ref{theorem:convergence-SGD} and yields
\begin{align}\label{eqn:pasan-subgaussian-bound-pasan}
& \E [f(\wb{x}^T;\Ds) - \min_{x \in \xdomain} f(x;\Ds)]
\leq \iftoggle{arxiv}{}{\\
&} \bigO(1)
\diam_2(\xdomain) \left[ R_{\textup{std}}(T)
+ \frac{ \sum_{j=1}^d \sigma_j }{n \diffp}
\log \frac{d}{\delta}\right].
\end{align}
Comparing these results, two differences are salient:
$\diam_\infty(\xdomain)$ replaces
$\diam_2(\xdomain)$ in Eq.~\eqref{eqn:pasan-subgaussian-bound-pasan}, which
can be an improvement by as much as $\sqrt{d}$, while
$(\sum_{j=1}^d \sigma_j
^{2/3})^{3/2}$ replaces $\sum_{j=1}^d
\sigma_j$, and H\"{o}lder's inequality gives
\begin{equation*}
\sqrt{d} \sum_{j=1}^d \sigma_j \geq \left(\sum_{j=1}^d \sigma_j ^{2/3}\right)^{3/2} \geq \sum_{j=1}^d \sigma_j.
\end{equation*}
Depending on gradient moments, there are situations in which
\PAGAN offers significant improvements; these evidently
depend on the expected magnitudes of the gradients and noise,
as the $\sigma_j$ terms evidence. As a special case, consider
$\xdomain = [-1,+1]^d$ and assume $\{\sigma_j\}_{j=1}^d$ decrease
quickly, e.g.\ $\sigma_j = 1/j^{3/2}$. In such a
setting, the upper bound of \PAGAN is roughly
$\frac{\mathsf{poly}(\log d)}{n \diffp}$ while \PASAN
achieves $\frac{\sqrt{d}}{n \diffp}$.
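A short computation (ours) makes this gap concrete: on the box $[-1,1]^d$ (so $\diam_\infty = 2$ and $\diam_2 = 2\sqrt{d}$) with $\sigma_j = j^{-3/2}$, we can compare the privacy-cost factors $\diam_\infty \cdot (\sum_j \sigma_j^{2/3})^{3/2}$ for \PAGAN and $\diam_2 \cdot \sum_j \sigma_j$ for \PASAN:
\begin{verbatim}
import numpy as np

for d in (10**2, 10**4, 10**6):
    sigma = np.arange(1, d + 1, dtype=float) ** -1.5
    pagan = 2 * np.sum(sigma ** (2.0 / 3.0)) ** 1.5  # ~ (log d)^{3/2}
    pasan = 2 * np.sqrt(d) * np.sum(sigma)           # ~ sqrt(d) * const
    print(f"d = {d:>7}:  PAGAN {pagan:10.2f}   PASAN {pasan:12.2f}")
\end{verbatim}
The \PAGAN factor grows polylogarithmically in $d$ while the \PASAN factor grows as $\sqrt{d}$, matching the rates displayed above.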
\section{Experiments}
\label{sec:experiments}
We conclude the paper with several experiments to demonstrate the
performance of \PAGAN~and \PASAN~algorithms. We perform experiments
both on synthetic data, where we
may control all aspects of the experiment, and a real-world example
training large-scale private language models.
\subsection{Regression on Synthetic Datasets}
\label{sec:exp-syn}
\iftoggle{arxiv}{
\begin{figure}[t]
\begin{center}
\begin{tabular}{ccc}
\begin{overpic}[width=.34\columnwidth]
{{plots/final-eps=.1}.pdf}
\end{overpic} &
\hspace{-1cm}
\begin{overpic}[width=.34\columnwidth]
{{plots/final-eps=1}.pdf}
\end{overpic} &
\hspace{-1cm}
\begin{overpic}[width=.34\columnwidth]
{{plots/final-eps=4}.pdf}
\end{overpic}
\\
(a) & (b) & (c)
\end{tabular}
\caption{\label{fig:abs-reg} Sample loss as a function of the iterate for
various optimization methods for synthetic
absolute regression problem~\eqref{eqn:abs-regression}
with varying privacy parameters $\diffp$. (a)
$\diffp = 0.1$. (b) $\diffp = 1$. (c) $\diffp = 4$. }
\end{center}
\end{figure}
}
{
\begin{figure*}[t]
\begin{center}
\begin{tabular}{ccc}
\begin{overpic}[width=.75\columnwidth]
{{plots/final-eps=.1}.pdf}
\end{overpic} &
\hspace{-1cm}
\begin{overpic}[width=.75\columnwidth]
{{plots/final-eps=1}.pdf}
\end{overpic} &
\hspace{-1cm}
\begin{overpic}[width=.75\columnwidth]
{{plots/final-eps=4}.pdf}
\end{overpic}
\\
(a) & (b) & (c)
\end{tabular}
\caption{\label{fig:abs-reg} Sample loss as a function of the iterate for
various optimization methods for synthetic
absolute regression problem~\eqref{eqn:abs-regression}
with varying privacy parameters $\diffp$. (a)
$\diffp = 0.1$. (b) $\diffp = 1$. (c) $\diffp = 4$. }
\end{center}
\end{figure*}
}
If our \PAGAN~algorithm indeed captures the benefits of AdaGrad and other
adaptive methods, we expect it to outperform other private stochastic
optimization methods at least in those scenarios where AdaGrad improves
upon stochastic gradient methods---a basic sanity check. To that end, in
our first collection of experiments, we compare \PAGAN against standard
implementations of private AdaGrad and SGD methods. We also compare our
method against Projected DP-SGD (PDP-SGD)~\cite{ZhouWuBa20}, which projects
the noisy gradients into the (low-dimensional) subspace of the top $k$
eigenvectors of the second moment of gradients.
We consider an absolute regression problem with data $(a_i, b_i)
\in \R^d \times \R$ and loss $F(x;a_i,b_i) = |\<a_i,x\> -
b_i|$. Given $n$ datapoints $(a_1,b_1),\dots,(a_n,b_n)$, we
wish to solve
\begin{equation}
\label{eqn:abs-regression}
\mbox{minimize} ~~
f(x) = \frac{1}{n} \sum_{i=1}^n |\<a_i,x\> - b_i|.
\end{equation}
We construct the data by drawing
an optimal $x\opt \sim \uniform\{-1, 1\}^d$,
sampling $a_i \simiid \normal(0,
\diag(\sigma)^2)$ for a vector $\sigma \in \R_+^d$,
and setting $b_i = \<a_i,x\opt\> + \noise_i$ for noise $\noise_i
\simiid \laplace(0,\tau)$, where $\tau \ge 0$.
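For concreteness, the data-generating process just described may be sketched as follows (ours; the constants match those reported below):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, tau = 5000, 100, 0.01
sigma = np.arange(1, d + 1) ** -1.5                 # sigma_j = j^{-3/2}

x_opt = rng.choice([-1.0, 1.0], size=d)             # x* ~ Uniform{-1,1}^d
a_mat = rng.normal(scale=sigma, size=(n, d))        # a_i ~ N(0, diag(sigma)^2)
b = a_mat @ x_opt + rng.laplace(scale=tau, size=n)  # b_i = <a_i, x*> + noise

def loss(x):
    # Empirical objective f(x) of problem (abs-regression).
    return np.mean(np.abs(a_mat @ x - b))
\end{verbatim}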
We compare several algorithms in this experiment: non-private AdaGrad; the
naive implementations of private SGD (\PASAN, Alg.~\ref{Algorithm1}) and
AdaGrad (\PAGAN, Alg.~\ref{Algorithm2}), with $A_k = I$; \PAGAN~with the
optimal diagonal matrix scaling $A_k$ we derive in~\Cref{sec:ub-adagrad};
and Zhou et al.'s PDP-SGD with ranks $k = 20$ and $k = 50$. In our
experiments, we use the parameters $n = 5000$, $d=100$, $\sigma_j =
j^{-3/2}$, $\tau = 0.01$, and the batch size for all methods is $b = 70$.
As optimization methods are sensitive
to stepsize choice even non-privately~\cite{AsiDu19siopt}, we
run each method with initial stepsizes in $\{ 0.005,
0.01, 0.05, 0.1, 0.15, 0.2, 0.4, 0.5, 1.0 \}$ to find the best value. We then repeat each method 30 times and report the median loss as a function of the iteration, with 95\% confidence intervals.
\Cref{fig:abs-reg} demonstrates the results of this experiment. Each plot
shows the loss of the methods against iteration count in various privacy
regimes. In the high-privacy setting (\Cref{fig:abs-reg}(a)), the
performance of all private methods is worse than the non-private algorithms,
though \PAGAN~(Alg.~\ref{Algorithm2}) appears to outperform the other
algorithms. As we increase the privacy parameter---reducing the privacy
preserved---we see that \PAGAN~quickly starts to enjoy faster convergence,
resembling non-private AdaGrad. In contrast, the standard
implementation of private AdaGrad---even in the moderate privacy regime with
$\diffp=4$---appears to obtain the slower convergence of SGD rather than the
adaptive methods. This is consistent with the predictions our theory makes:
the isometric Gaussian noise addition that standard private stochastic
gradient methods (e.g.\ \PASAN~and variants) employ eliminates the geometric
properties of gradients (e.g., sparsity) that adaptive methods can---indeed,
must~\citep{LevyDu19}---leverage for improved convergence.
\subsection{Training Private Language Models on WikiText-2}
\label{sec:exp-lm}
Following our simulation results, we study the performance of \PAGAN and \PASAN for fitting a next-word prediction model. Here, we train a variant of
a recurrent neural network with Long Short-Term Memory
(LSTM) units~\cite{HochreiterJu97} on the WikiText-2
dataset~\cite{MerityXiBrSo17}, which is split into train, validation, and
test sets. We further split the train set into 59,674 data points, where each
data point has $35$ tokens. The input data to the model consists of a
one-hot vector $x \in \{0, 1\}^{d}$, where $d = 8,\!000$. The first
7,999 coordinates correspond to the most frequent tokens in the
training set, and the model reserves the last coordinate for unknown/unseen
tokens. We train a full network, which consists of a fully connected
embedding layer $x \mapsto Wx$ mapping to 120 dimensions, where $W \in
\R^{120 \times 8000}$; two layers of LSTM units with 120 hidden units,
which then output a vector $h \in \R^{120}$; followed by a fully
connected layer $h \mapsto \Theta h$ expanding the representation
via $\Theta \in \R^{8000 \times 120}$; and then into a softmax
layer to emit next-word probabilities via a logistic regression model.
The entire model contains $2,\!160,\!320$ trainable parameters.
We use Abadi et al.'s moments accountant analysis~\cite{AbadiChGoMcMiTaZh16} to
track the privacy losses of each of the methods. In each experiment, for
\PAGAN and \PASAN we use gradients computed while training for one epoch on a held-out dataset (a subset of the WikiText~103 dataset~\cite{MerityXiBrSo17} that does not intersect with WikiText-2) to estimate moment
bounds and gradient norms, as in Section~\ref{sec:unknown-cov}; these
choices---while not private---reflect the common practice that we may have
access to public data that provides a reasonable proxy for the actual
moments on our data. Moreover, our convergence guarantees in
Section~\ref{sec:algs} are robust in the typical sense of stochastic
gradient methods~\citep{NemirovskiJuLaSh09}, in that mis-specifying the
moments by a multiplicative constant factor yields only constant factor
degradation in convergence rate guarantees, so we view this as an acceptable
tradeoff in practice.
It is worth noting that we ignore the model trained over the public data and use that one epoch solely for estimating the second moment of gradients.
In our experiments, we evaluate the performance of the trained models with
validation- and test-set perplexity. While we
propose adaptive algorithms, we still require
hyperparameter tuning, and thus perform a hyperparameter search
over two algorithm-specific constants:
a step-size multiplier $\alpha \in \{0.1, 0.2, 0.4, 0.8, 1.0, 10.0, 50.0 \}$ and a projection
threshold $B \in \{0.05, 0.1, 0.5, 1.0\}$, with the mini-batch size fixed to $b = 250$. Each run of these algorithms takes less than 4 hours on
a standard workstation without any accelerators.
We train the LSTM model above with
\PAGAN and \PASAN and compare their performance with DP-SGD \cite{AbadiChGoMcMiTaZh16}.
We also include completely
non-private SGD and AdaGrad for reference.
{
We do not include PDP-SGD \cite{ZhouWuBa20} in this experiment as, for our
$N=2.1 \cdot 10^6$ parameter model, computing the low-rank subspace for
gradient projection that PDP-SGD requires is quite challenging. Indeed,
computing the gradient covariance matrix Zhou et al.~\cite{ZhouWuBa20}
recommend is certainly infeasible. While power iteration or Oja's method
can make computing a $k$-dimensional projection matrix feasible, the
additional memory footprint of this $kN$-sized matrix (compared to the
original model size $N$) can be prohibitive, restricting us to smaller
models or very small $k$.
For such small values ($k=50$),
our experiments show that PDP-SGD achieves significantly worse error than the algorithms we consider hence we do not include it in the plots.
On the other hand, our (diagonal)
approach, like diagonal AdaGrad, only requires an additional memory of
size $N$. }
For
each of the privacy levels
$\diffp \in \{0.5, 1, 3\}$ we consider, we
present the performance of each algorithm in terms of best validation set
and test-set perplexity in Figure~\ref{fig:perplexing} and Table \ref{Table1}.
\iftoggle{arxiv}{
\begin{figure}[t]
\begin{center}
\begin{tabular}{ccc}
\begin{overpic}[width=.32\columnwidth]
{{plots/LSTM-exp-eps=.5-arxiv}.pdf}
\end{overpic} &
\begin{overpic}[width=.32\columnwidth]
{{plots/LSTM-exp-eps=1-arxiv}.pdf}
\end{overpic} &
\begin{overpic}[width=.32\columnwidth]
{{plots/LSTM-exp-eps=3-arxiv}.pdf}
\end{overpic}
\\
(a) & (b) & (c)
\end{tabular}
\caption{\label{fig:perplexing} Minimum validation perplexity versus training rounds for seven epochs for \PAGAN, \PASAN, and the standard differentially private stochastic gradient method (DP{-}SGD) \cite{AbadiChGoMcMiTaZh16},
at varying privacy levels (a) $\diffp = .5$, (b) $\diffp = 1$, and (c) $\diffp = 3$.}
\end{center}
\end{figure}
}
{
\begin{figure*}[t]
\begin{center}
\begin{tabular}{ccc}
\begin{overpic}[width=.7\columnwidth]
{{plots/LSTM-exp-eps=.5}.pdf}
\end{overpic} &
\hspace{-.3cm}
\begin{overpic}[width=.7\columnwidth]
{{plots/LSTM-exp-eps=1}.pdf}
\end{overpic} &
\hspace{-.3cm}
\begin{overpic}[width=.7\columnwidth]
{{plots/LSTM-exp-eps=3}.pdf}
\end{overpic}
\\
(a) & (b) & (c)
\end{tabular}
\caption{\label{fig:perplexing} Minimum validation perplexity versus training rounds for seven epochs for \PAGAN, \PASAN, and the standard differentially private stochastic gradient method (DP{-}SGD) \cite{AbadiChGoMcMiTaZh16},
at varying privacy levels (a) $\diffp = .5$, (b) $\diffp = 1$, and (c) $\diffp = 3$.}
\end{center}
\end{figure*}
}
\newcommand{\fontseries{b}\selectfont}{\fontseries{b}\selectfont}
\begin{table}[t]
\caption{Test perplexity error of different methods. For reference, non-private SGD and AdaGrad (without clipping) achieve 75.45 and 79.74, respectively. \label{Table1}}
\vspace{0.4cm}
\begin{center}
\begin{tabular}{|l|lll|}
\hline
Algorithm & $\diffp=3$ & $\diffp=1$ & $\diffp=.5$ \\
\hline
DP-SGD \cite{AbadiChGoMcMiTaZh16} & 238.44 & 285.11 & 350.23 \\
\PASAN & 238.87 & 274.63 & 332.52 \\
\PAGAN & \fontseries{b}\selectfont 224.82 & \fontseries{b}\selectfont 253.41 & \fontseries{b}\selectfont 291.41 \\
\hline
\end{tabular}
\end{center}
\end{table}
We highlight a few messages present in Figure~\ref{fig:perplexing}. First,
\PAGAN consistently outperforms the non-adaptive methods---though all allow
the same hyperparameter tuning---at all privacy levels, excepting the
non-private $\diffp =
+\infty$, where \PAGAN without clipping is just AdaGrad
and its performance is comparable to the non-private stochastic gradient
method. Certainly, there remain non-negligible gaps between the performance
of the private and non-private methods, but we hope that this
is at least a step toward effective large-scale private optimization and
modeling.
\section{Introduction}
While the success of stochastic gradient methods for solving empirical risk
minimization has motivated their adoption across much of machine learning,
increasing privacy risks in data-intensive tasks have made applying them
more challenging~\cite{DworkMcNiSm06}: gradients can leak users' data,
intermediate models can compromise individuals, and even final trained
models may be non-private without substantial care. This motivates a growing
line of work developing private variants of stochastic gradient descent
(SGD), where algorithms guarantee differential privacy by perturbing
individual gradients with random noise~\cite{DuchiJoWa13_focs, SmithTh13,
AbadiChGoMcMiTaZh16, DuchiJoWa18, BassilyFeTaTh19, FeldmanKoTa20}. Yet
these noise addition procedures typically fail to reflect the geometry
underlying the optimization problem, which in non-private cases is
essential: for high-dimensional problems with sparse parameters, mirror
descent and its variants~\cite{BeckTe03, NemirovskiJuLaSh09} are essential,
while in the large-scale stochastic settings prevalent in deep learning,
AdaGrad and other adaptive variants~\cite{DuchiHaSi11} provide stronger
theoretical and practical performance. Even more, methods that do not adapt
(or do not leverage geometry) can be provably sub-optimal, in that there
exist problems where their convergence is much slower than adaptive variants
that reflect appropriate geometry~\cite{LevyDu19}.
To address these challenges, we introduce \PAGAN~(Private AdaGrad with
Adaptive Noise), a new differentially private variant of stochastic gradient
descent and AdaGrad. Our main contributions center on a few ideas. Standard
methods for privatizing adaptive algorithms that add isometric (typically
Gaussian) noise to gradients necessarily reflect the worst-case behavior of
functions to be optimized and eliminate the geometric structure one might
leverage for improved convergence. By carefully adapting noise to the actual
gradients at hand, we can achieve convergence rates that reflect the
observed magnitude of the gradients---similar to the approach of
\citet{BartlettHaRa07} in the non-private case---which can yield marked
improvements over the typical guarantees that depend on worst-case
magnitudes. (Think, for example, of a standard normal variable: its second
moment is 1, while its maximum value is unbounded.) Moreover, we propose a
new private adaptive optimization algorithm that analogizes AdaGrad, showing
that under certain natural distributional assumptions for the
problems---similar to those that separate AdaGrad from non-adaptive
methods~\cite{LevyDu19}---our private versions of adaptive methods
significantly outperform the standard non-adaptive private algorithms.
Additionally, we prove several lower bounds that both highlight the
importance of geometry in the problems and demonstrate the tightness of the
bounds our algorithms achieve.
Finally, we provide several experiments on real-world and synthetic datasets that support our theoretical results, demonstrating the improvements of our private adaptive algorithm (\PAGAN) over DP-SGD and other private adaptive methods.
\subsection{Related Work}
Since the introduction of differential privacy~\cite{DworkMcNiSm06,
DworkKeMcMiNa06}, differentially private empirical risk minimization has
been a subject of intense interest~\cite{ChaudhuriMoSa11, BassilySmTh14,
DuchiJoWa13_focs, SmithTh13lasso}. The current standard approach to
solving this problem is noisy SGD~\cite{BassilySmTh14, DuchiJoWa13_focs,
AbadiChGoMcMiTaZh16, BassilyFeTaTh19, FeldmanKoTa20}. Current bounds
focus on the standard Euclidean geometries familiar from classical analyses
of gradient descent~\cite{Zinkevich03, NemirovskiJuLaSh09}, and the
prototypical result~\cite{BassilySmTh14, BassilyFeTaTh19} is that, for
Lipschitz convex optimization problems on the $\ell_2$-ball in
$d$-dimensions, an $\diffp$-differentially private version of SGD achieves
excess empirical loss $O(\frac{\sqrt{d}}{n \diffp})$ given a sample of size
$n$; this is minimax optimal.
Similar bounds also hold for other geometries ($\ell_p$-balls for $1 \le p \le 2$) using noisy mirror descent~\cite{AsiFeKoTa21}.
Alternative approaches use the stability of
empirical risk minimizers of (strongly) convex functions, and include both
output perturbation, where one adds noise to a regularized empirical
minimizer, and objective perturbation, where one incorporates random linear
noise in the objective function before optimization~\cite{ChaudhuriMoSa11}.
Given the success of private SGD for such Euclidean cases and adaptive
gradient algorithms for modern large-scale learning, it is unsurprising that
recent work attempts to incorporate adaptivity into private empirical risk
minimization (ERM) algorithms~\cite{ZhouWuBa20, KairouzRiRuTh20}. In this
vein, \citet{ZhouWuBa20} propose a private SGD algorithm where the gradients
are projected to a low-dimensional subspace---which is learned using public
data---and~\citet{KairouzRiRuTh20} developed an $\diffp$-differentially
private variant of Adagrad which (similarly) projects the gradient to a low
rank subspace. These works show that excess loss $\widetilde O (\frac{1}{n
\diffp}) \ll \frac{\sqrt{d}}{n \diffp}$ is possible whenever the rank of
the gradients is small. Yet these both work under the assumption that
gradients lie in (or nearly in) a low-dimensional subspace; this misses the
contexts for which adaptive algorithms (AdaGrad and its relations) are
designed~\cite{DuchiHaSi11, McMahanSt10}. Indeed, most stochastic
optimization algorithms rely on particular dualities between the parameter
space and gradients; stochastic gradient descent requires Euclidean spaces,
while mirror descent works in an $\ell_1/\ell_\infty$ duality (that is, it
is minimax optimal when optimizing over an $\ell_1$-ball while gradients
belong to an $\ell_\infty$ ball). AdaGrad and other adaptive algorithms, in
contrast, are optimal in an (essentially) dual geometry~\cite{LevyDu19}, so
that for such algorithms, the interesting geometry is when the parameters
belong (e.g.) to an $\ell_\infty$ box and the gradients are sparse---but
potentially from a very high-rank space. Indeed, as Levy and
Duchi~\cite{LevyDu19} show, adaptive algorithms achieve benefits only when
the sets over which one optimizes are quite different from $\ell_2$ balls;
the private projection algorithms in the papers by Kairouz et
al.~\cite{KairouzRiRuTh20} and Zhou et al.~\cite{ZhouWuBa20} achieve bounds
that scale with the $\ell_2$-radius of the underlying space, suggesting that
they may not enjoy the performance gains one might hope to achieve using an
appropriately constructed and analyzed adaptive algorithm.
In more recent work, Yu et al.~\cite{YuZCL21} use PCA to decompose gradients into two
orthogonal subspaces, allowing separate learning rate treatments in the
subspaces, and achieve promising empirical results, but they provide no
provable convergence bounds.
Also related to the current paper is Pichapati et al.'s
AdaClip algorithm~\cite{PichapatiSYRK20}; they obtain parallels to
Bartlett et al.'s
non-private convergence guarantees~\cite{BartlettHaRa07}
for private SGD. In contrast to our analysis
here, their analysis applies to smooth non-convex functions, while our focus
on convex optimization allows more complete convergence guarantees and
associated optimality results.
\section{Lower bounds for private optimization}
\label{sec:LB}
To give a more complete picture of the complexity of private stochastic
optimization, we now establish (nearly) sharp lower bounds, which in turn
imply the minimax optimality of \PAGAN and \PASAN. We do so in
two parts, reflecting the necessary dependence on geometry in the
problems~\citep{LevyDu19}: in Section~\ref{sec:LB-ell-box}, we show that
\PAGAN achieves optimal complexity for minimization over $\xdomain_\infty =
\{x \in \R^d: \linf{x} \le 1 \}$. Moreover, in
Section~\ref{sec:LB-ell2-ball} we show that \PASAN achieves optimal rates in
the Euclidean case, that is, for domain $\xdomain_2 = \{x \in \R^d: \ltwo{x}
\le 1 \}$.
As one of our foci here is data with varying norms, we prove lower bounds
for
sub-Gaussian data---the strongest setting for our upper bounds.
In particular, we shall consider linear functionals $F(x; z) = z^T x$,
where the entries $z_j$ of $z$ satisfy $|z_j| \le \sigma_j$ for a prescribed
$\sigma_j$; this is sufficient for the data $Z$ to
be $\frac{\sigma_j^2}{4}$-sub-Gaussian~\citep{Vershynin19}.
Moreover, our upper bounds are conditional on the observed sample
$\Ds$, and so we focus on this setting in our lower bounds,
where $|g_j| \le \sigma_j$ for all subgradients $g \in \partial F(x; z)$
and $j \in [d]$.
\subsection{Lower bounds for $\ell_\infty$-box}
\label{sec:LB-ell-box}
The starting point for our lower bounds for stochastic optimization over
$\xdomain_\infty$ is the following lower bound for the
problem of estimating the sign of the mean of a dataset. This will then
imply our main lower bound for private optimization. We defer the proof of
this result to Appendix~\ref{sec:proof-LB-signs}.
\begin{proposition}
\label{prop:sign-LB-var}
Let $\mech$ be $(\diffp,\delta)$-DP and $\Ds = (\ds_1,\dots,\ds_n) $
where $\ds_i \in \domain = \{ z \in \R^d: |z_{j}| \le \sigma_j \}$.
Let $\bar z = \frac{1}{n} \sum_{i=1}^n z_i$ be the mean of the dataset $\Ds$.
If $\sqrt{d} \log d \le n \diffp$, then
\begin{align*}
\sup_{\Ds \in \domain^n }
\E & \left[ \sum_{j=1}^d |\bar{\ds}_j|
\indic {\sign(\mech_j(\Ds)) \neq \sign(\bar{\ds}_j)} \right] \iftoggle{arxiv}{}{\\
&} \ge \frac{(\sum_{j=1}^d \sigma_j^{2/3} )^{3/2} }{n \diffp \log^{5/2} d} .
\end{align*}
\end{proposition}
We can now use this lower bound to establish
a lower bound for private optimization over the
$\ell_\infty$-box by an essentially straightforward reduction.
Consider the problem
\begin{equation*}
\minimize_{x \in \xdomain_\infty}
f(x;\Ds) \defeq - \frac{1}{n} \sum_{i=1}^n x^T \ds_i
= - x^T \bar{\ds},
\end{equation*}
where $\bar z = \frac{1}{n} \sum_{i=1}^n z_i$ is the mean of the dataset.
Letting $x^\star_\Ds \in \argmin_{x \in \xdomain_\infty} f(x; \Ds)$,
we have the following result.
\begin{theorem}
\label{thm:LB-opt-l1}
Let $\mech$ be $(\diffp,\delta)$-DP and $\Ds \in \domain^n$, where
$\domain = \{ z \in \R^d: |z_{j}| \le \sigma_j \}$. If $\sqrt{d}
\log d \le n \diffp$,
then
\begin{equation*}
\sup_{\Ds \in \domain^n }
\E \left[ f(\mech(\Ds);\Ds) - f(x^\star_\Ds;\Ds) \right]
\ge \frac{(\sum_{j=1}^d \sigma_j^{2/3})^{3/2}}{n \diffp \log^{5/2} d}.
\end{equation*}
\end{theorem}
\begin{proof}
For a given dataset $\Ds$,
the minimizer satisfies $x\opt_j = \sign(\bar{\ds}_j)$.
Therefore for every $x$ we have
\begin{align*}
f(x;\Ds) - f(x\opt;\Ds)
& = \lone{\bar{\ds}} - x^T \bar{\ds} \\
& \ge \sum_{j=1}^d |\bar{\ds}_j|
\indic {\sign(x_j) \neq \sign(\bar{\ds}_j)} .
\end{align*}
As $\sign(\mech(\Ds))$ is $(\diffp,\delta)$-DP by post-processing, the
claim follows from~\Cref{prop:sign-LB-var} by taking expectations.
\end{proof}
Recalling the upper bounds that \PAGAN achieves in~\Cref{sec:ub-adagrad},
\Cref{thm:LB-opt-l1} establishes the tightness of these bounds to
within logarithmic factors.
\subsection{Lower bounds for $\ell_2$-ball}
\label{sec:LB-ell2-ball}
Having established \PAGAN's optimality for $\ell_\infty$-box constraints,
in this section we turn to proving lower bounds for optimization over the
$\ell_2$-ball, which demonstrate the optimality of \PASAN. The lower
bound builds on the lower bounds of~\citet{BassilySmTh14}. Following their
arguments, let
$\xdomain_2 = \{x \in \R^d: \ltwo{x} \le 1 \}$ and consider
the problem
\begin{equation*}
\minimize_{x \in \xdomain_2}
f(x;\Ds) \defeq - \frac{1}{n} \sum_{i=1}^n x^T \ds_i
= - x^T \bar{\ds}.
\end{equation*}
The following bound follows by appropriately re-scaling the data points in
Theorem 5.3 of~\cite{BassilySmTh14}.
\begin{proposition}
\label{prop:LB-opt-l2-identity}
Let $\mech$ be $(\diffp,\delta)$-DP and $\Ds = (\ds_1,\dots,\ds_n) $
where $\ds_i \in \domain = \{ z \in \R^d: \linf{z} \le \sigma \}$.
Then
\begin{equation*}
\sup_{\Ds \in \domain^n }
\E \left[ f(\mech(\Ds);\Ds) - f(x^\star_\Ds;\Ds) \right]
\ge \min \left( \sigma \sqrt{d} , \frac{d \sigma}{n \diffp} \right).
\end{equation*}
\end{proposition}
Using~\Cref{prop:LB-opt-l2-identity}, we can establish the
tight lower bounds---to within logarithmic factors---for
\PASAN (Section~\ref{sec:algs}).
We defer the proof to Appendix~\ref{sec:proof-LB-l2}.
\begin{theorem}
\label{thm:LB-opt-l2-var}
Let $\mech$ be $(\diffp,\delta)$-DP and $\Ds = (\ds_1,\dots,\ds_n) $
where $\ds_i \in \domain = \{ z \in \R^d: |z_{j}| \le \sigma_j \}$.
If $\sqrt{d} \le n \diffp$, then
\begin{equation*}
\sup_{\Ds \in \domain^n }
\E \left[ f(\mech(\Ds);\Ds) - f(x^\star_\Ds;\Ds) \right]
\ge \frac{\sum_{j=1}^d \sigma_j }{n \diffp \log d}.
\end{equation*}
\end{theorem}
\section{Preliminaries and notation}
Before proceeding to the paper proper, we give notation. Let $\domain$ be a
sample space and $P$ a distribution on $\domain$. Given a function $F :
\xdomain \times \domain \to \R$, convex in its first argument, and a dataset
$\Ds = (\ds_1,\dots,\ds_n) \in \domain^n$ of $n$ points drawn i.i.d.\ from $P$,
we wish to privately find the minimizer of the empirical loss
\begin{equation}
\label{eqn:emp_loss}
\argmin_{x \in \xdomain} f(x;\Ds) \defeq
\frac{1}{n} \sum_{i=1}^n F(x;\ds_i).
\end{equation}
We suppress dependence on $\Ds$ and simply write $f(x)$ when the
dataset is clear from context. We use the standard definitions of
differential privacy~\cite{DworkMcNiSm06, DworkKeMcMiNa06}:
\begin{definition}
\label{def:DP}
A randomized algorithm $\mech$ is
\emph{$(\diffp,\delta)$-differentially private} if for all neighboring
datasets
$\Ds,\Ds' \in \domain^n$ and all
measurable $O$ in the output
space of $\mech$,
\begin{equation*}
\P\left(\mech(\Ds) \in O \right)
\le e^{\diffp} \P\left(\mech(\Ds') \in O \right) + \delta.
\end{equation*}
If $\delta=0$, then $\mech$ is \emph{$\diffp$-differentially private}.
\end{definition}
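For intuition on how Definition~\ref{def:DP} is typically met by noise addition, recall the classical Gaussian-mechanism calibration (a standard fact we state only as a reference point for the noise scale $\sqrt{\log(1/\delta)}/(\batch \diffp)$ used in Algorithms~\ref{Algorithm1} and~\ref{Algorithm2}; the snippet is ours): for a statistic with $\ell_2$-sensitivity $s$ and $\diffp \le 1$, adding $\normal(0, \sigma^2 I)$ noise with $\sigma = s \sqrt{2 \log(1.25/\delta)}/\diffp$ gives $(\diffp,\delta)$-differential privacy.
\begin{verbatim}
import numpy as np

def gaussian_mechanism_sigma(sensitivity, eps, delta):
    # Standard (eps, delta)-DP calibration for the Gaussian mechanism,
    # valid for eps <= 1.
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps

print(gaussian_mechanism_sigma(sensitivity=1.0, eps=0.5, delta=1e-6))
\end{verbatim}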
\noindent
It will also be useful to discuss the tail properties of random variables
and vectors:
\begin{definition}\label{def:sub-Gaussian}
A random variable $X$ is \emph{$\sigma^2$
sub-Gaussian} if $\E[\exp(s(X - \E[X]))]
\leq \exp((\sigma^2 s^2)/2)$ for all $s
\in \reals$. A random vector $X \in \reals^d$ is
$\Sigma$-sub-Gaussian if for any vector $a \in
\reals^d$, $a^\top X$ is $a^\top \Sigma a$ sub-Gaussian.
\end{definition}
\noindent
We also frequently use different norms and geometries,
so it is useful to recall Lipschitz continuity:
\begin{definition}
\label{def:Lipschitz_function}
A function $\Phi : \R^d \to \R$ is \emph{$G$-Lipschitz with respect to
norm $\norm{\cdot}$ over $\mathcal{W}$} if for every $w_1,w_2 \in
\mathcal{W}$,
\begin{equation*}
|\Phi(w_1) - \Phi(w_2) | \leq G \norm{w_1 - w_2}.
\end{equation*}
\end{definition}
\noindent
A convex function $\Phi$ is $G$-Lipschitz over an open set
$\mathcal{W}$ if and only if $\| \Phi'(w) \|_{*} \leq G$ for any $w \in
\mathcal{W}$ and $\Phi'(w) \in \partial \Phi(w)$, where
$\dnorm{y} = \sup\{x^\top y \mid \norm{x} \le 1\}$ is the
dual norm of $\norm{\cdot}$~\citep{HiriartUrrutyLe93ab}.
\paragraph{Notation}
We define $\diag(a_1,\ldots,a_d)$ as a diagonal matrix with diagonal entries
$a_1,\ldots,a_d$. To state that a matrix $A$ is positive (semi)definite, we use the
notation $A \succ 0_{d\times d}$ ($A \succcurlyeq 0_{d \times d}$). For $A
\succcurlyeq 0_{d \times d}$, let $E_A$ denote the ellipsoid $\{x: \normA{x}
\leq 1 \}$ where $\normA{x} = \sqrt{x^\top A x}$ is the Mahalanobis norm,
and $\pi_{A}(x) = \argmin_y\{\ltwo{y - x} \mid y \in E_A\}$ is the
projection of $x$ onto $E_A$. For a set $\xdomain$,
$\diam_{\norm{\cdot}}(\xdomain) = \sup_{x, y \in \xdomain} \norm{x - y}$
denotes the diameter of $\xdomain$ with respect to the norm
$\norm{\cdot}$. For the special case of $\norm{\cdot}_p$, we write
$\diam_p(\xdomain)$ for simplicity. For an integer $n \in \N$, we let $[n] =
\{1,\dots,n\}$.
\section{Problem Formulation \& Algorithm Description}
\section{Some approaches to unknown moments}
\label{sec:unknown-cov}
As the results of the previous section demonstrate,
bounding the gradient moments allows us to establish tighter
convergence guarantees; it behooves us to
estimate them with accuracy sufficient to achieve
(minimax) optimal bounds.
\subsection{Unknown moments for generalized linear models}
\label{sec:cov-GLM}
Motivated by the standard practice of training the last layer of a
pre-trained neural network~\cite{AbadiChGoMcMiTaZh16}, in this section we
consider algorithms for generalized linear models, where we have losses of
the form $F(x;z) = \ell(z^T x)$ for $z,x\in \R^d$ and $\ell : \R \to \R_+$
is a convex and $1$-Lipschitz loss. As $\nabla F(x;z) = \ell'(z^T x) z$,
bounds on the Lipschitzian moments~\eqref{eqn:moment-matrix-norm}
follow from moment bounds on $z$ itself, as $\norm{\nabla F(x; z)} \le
\norm{z}$.
The results of Section \ref{sec:algs} suggest optimal choices for
$C$ under sub-Gaussian assumptions on the vectors $z$, where in our stylized
cases of $\sigma_j^2$-sub-Gaussian entries, $C_{jj} = \sigma_j^{-4/3}$
minimizes our bounds. Unfortunately, it is hard in general to estimate
$\sigma_j$ even without privacy~\cite{Duchi19}. Therefore, we make the
following bounded moments ratio assumption, which relates higher moments to
lower moments to allow estimation of moment-based parameters
(even with privacy).
\begin{definition}
\label{definition:bouned-moments-ratio}
A random vector $z \in \R^d$ has \emph{moment ratio $r < \infty$}
if for all $1 \le p \le 2 \log d$ and $1 \le j \le d$
\begin{equation*}
\E [ z_j^p ]^{2/p} \le r^2 p \cdot \E[ z_j^2 ].
\end{equation*}
\end{definition}
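As a quick sanity check (ours; we read the definition with absolute moments, since odd moments of symmetric coordinates vanish), a Gaussian coordinate satisfies this condition with a ratio $r$ of order one:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=1_000_000)
second = np.mean(z**2)
for p in (2, 4, 8, 16):
    ratio = np.mean(np.abs(z)**p) ** (2.0 / p) / (p * second)
    print(f"p = {p:>2}: implied r^2 >= {ratio:.3f}")
\end{verbatim}
All the implied values stay below one, so $r = 1$ suffices for this distribution.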
When $z$ satisfies Def.~\ref{definition:bouned-moments-ratio}, we can
give a
private procedure (Algorithm~\ref{alg:unknown-cov}) that yields a good
approximation to the second moments of the coordinates $z_j$---and hence to
higher-order moments---allowing the application of
a minimax optimal \PAGAN algorithm. We defer the proof
to~\Cref{sec:proof-unknown-cov}.
\begin{theorem}
\label{thm:unknown-cov}
Let $z$ have moment ratio $r$
(Def.~\ref{definition:bouned-moments-ratio}) and let $\sigma_j^2 =
\E[z_j^2]$. Let $\beta>0$, $T = \frac{3}{2} \log d$,
\begin{equation*}
n \ge 1000 r^2
\log\frac{8d}{\beta}
\max \left\{\frac{T \sqrt{d} \log^2 r \log \frac{T}{\delta}}{\diffp}, r^2
\right\},
\end{equation*}
and $\max_{1 \le j \le d} \sigma_j = 1$. Then
Algorithm~\ref{alg:unknown-cov} is $(\diffp,\delta)$-DP and
outputs $\hat \sigma$ such that with probability $1 - \beta$,
\begin{equation}
\label{eqn:sigma-hat-good}
\frac{1}{2} \max \{\sigma_j, d^{-3/2}\}
\le \hat \sigma_j
\le 2 \sigma_j
~~~~ \mbox{for~all~} j \in [d].
\end{equation}
Moreover, when condition~\eqref{eqn:sigma-hat-good} holds,
\PAGAN (Alg.~\ref{Algorithm2})
with $\hat C_j = (r \hat \sigma_j)^{-4/3} / 4$, $p =\log
d$ and $\batch = 1$ has convergence
\begin{align*}
& \E [f(\wb{x}^T;\Ds) - \min_{x \in \xdomain} f(x;\Ds)]
\le R_{\textup{ada}}(T) + \iftoggle{arxiv}{}{\\
& \qquad~~~} \bigO(1) \diam_\infty(\xdomain) r
\frac{\left(\sum_{j=1}^d \sigma_j ^{2/3}\right)^{3/2} }{n \diffp}
\log \frac{d}{\delta}.
\end{align*}
\end{theorem}
\begin{algorithm}[tb]
\caption{Private Second Moment Estimation}
\label{alg:unknown-cov}
\begin{algorithmic}[1]
\REQUIRE Dataset $\mathcal{S} = (\ds_1,\dots,\ds_n) \in \domain^n$,
number of iterations $T$,
privacy parameters $\diffp, \delta$;
\STATE Set $\Delta = 1$ and $S = [d]$
\FOR{$t=1$ to $T$\,}
\FOR{$j \in S$\,}
\STATE $ \rho_t \gets 4 r \Delta \log r $
\STATE $\hat \sigma_{t,j}^2 \gets \frac{1}{n} \sum_{i=1}^n \min(z_{i,j}^2,\rho_t^2) + \noise_{t,j}$ where $\noise_{t,j} \sim \normal(0,\rho_t^4 T^2 d \log(T/\delta)/n^2 \diffp^2)$
\IF{$\hat \sigma_{t,j} \ge 2^{-t - 1}$}
\STATE $\hat \sigma_j \gets 2^{-t}$
\STATE $S \gets S \setminus \{j\}$
\ENDIF
\ENDFOR
\STATE $\Delta \gets \Delta/2$
\ENDFOR
\FOR{$j \in S$\,}
\STATE $\hat \sigma_j \gets 2^{-T}$
\ENDFOR
\STATE {\bfseries Return:} $(\hat \sigma_1,\dots,\hat \sigma_d)$
\end{algorithmic}
\end{algorithm}
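A compact sketch of Algorithm~\ref{alg:unknown-cov} follows (ours; the guard in the clipping level and the treatment of never-resolved coordinates are illustrative choices consistent with the pseudocode, not tuned values):
\begin{verbatim}
import numpy as np

def private_second_moments(Z, eps, delta, r, rng=None):
    # Z: (n, d) data matrix; returns hat_sigma coordinate-wise.
    rng = rng or np.random.default_rng(0)
    n, d = Z.shape
    T = int(np.ceil(1.5 * np.log(d)))
    hat_sigma = np.full(d, 2.0**-T)     # coords never resolved get 2^{-T}
    active = np.ones(d, dtype=bool)
    Delta = 1.0
    for t in range(1, T + 1):
        rho = 4 * r * Delta * np.log(max(r, np.e))   # clip level rho_t
        sd = rho**2 * T * np.sqrt(d * np.log(T / delta)) / (n * eps)
        clipped = np.minimum(Z[:, active]**2, rho**2).mean(axis=0)
        est = clipped + rng.normal(scale=sd, size=clipped.shape)
        done = np.sqrt(np.maximum(est, 0.0)) >= 2.0**-(t + 1)
        idx = np.where(active)[0]
        hat_sigma[idx[done]] = 2.0**-t
        active[idx[done]] = False
        Delta /= 2.0
    return hat_sigma
\end{verbatim}
The estimates $\hat\sigma_j$ then feed the choice $\hat C_j = (r\hat\sigma_j)^{-4/3}/4$ in Theorem~\ref{thm:unknown-cov}.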
\section{Introduction: Cardy's Conjecture}
Except in the rare cases (e.g.\ \cite{2}) of integrable or otherwise solvable models, it is generally impossible to derive by hand the spectrum of a quantum theory from only the Hamiltonian. Perturbation theory is therefore a standard technique for the professional theorist, but despite its many successes in describing phenomenological aspects of quantum theory, it is plagued by challenges\cite{3}\cite{4}. For the quantum field theorist, there are challenges aplenty; the first and most glaring issue is the short-distance divergences of Feynman diagrams when using the interaction picture to compute observables or scattering amplitudes. To amend this particular problem, one defines a `renormalized' version of their quantum theory, which introduces a mass scale $\mu$ that serves as an upper bound for the momentum-space resolution of the theory - or if one likes, a coarse-graining of spacetime. Since the theory is now manifestly insensitive to arbitrarily short distance events, the operators and parameters of the original theory must be modified in a $\mu$-dependent way to both avoid the divergences and make observable quantities independent of the choice of $\mu$. Since the parameters of the renormalized theory now depend on this scale (we say they ``run'' with $\mu$), there is a potential problem with the application of perturbation theory should these parameters ever become large for a given value of $\mu$. This happens in Quantum Electrodynamics (QED): when renormalized at a very high scale, the electric charge of the fermions becomes arbitrarily large. Since this limit is exactly where the renormalized theory meets the original QFT, such theories are generally considered to be unphysical, unless they are embedded into a new theory at some intermediate scale so as to avoid this issue (as QED is in Grand Unified Theories). On the contrary, we have theories like Quantum Chromodynamics (QCD) where the running of the coupling is opposite - the beta function is negative and the coupling becomes very large when renormalized at a low energy scale\cite{5}\cite{6}. QCD renormalized at scales much higher than a few hundred MeV is considered perturbative. However, perturbative QCD has failed to provide a description of the theory's spectrum which is consistent with observations of the long-distance physics\cite{7}. Given that there are many technical challenges with perturbation theory (Haag's theorem\cite{3}, non-convergence of power series expansions\cite{4}, general difficulties with calculating to high order), it should not be surprising that it generally fails to predict the spectrum of QFTs with negative beta functions.\\
Nevertheless, perturbation theory has taught us much about QFTs. The necessary introduction of the renormalized perturbation theory opened the door for the study of the Renormalization Group (RG)\cite{8}, from which we have learned much about non-perturbative physics. One such piece of knowledge is the c-theorem: in 2-dimensional conformal field theories (CFTs), there is a number $c[\mu]$ that enters as a proportionality constant in front of the anomalous divergence of the scale current. It has been established that $c$ decreases monotonically\cite{9} as one flows to the IR ($\mu$ is decreased), regardless of the microscopic details of the theory. This establishes that (at least for 2-dimensional CFTs) a theory renormalized in the UV has more degrees of freedom than one renormalized in the IR. John Cardy conjectured that this is true for all field theories, and proposed a candidate for $c$ in higher-dimensional or non-conformal QFTs\cite{10}. A similar result has been proven in 4 dimensions, called the $a$-theorem\cite{11}. As with the c-theorem, $a$ is a multiplicative constant of the anomalous divergence of the scale current in a CFT. It is not the only anomalous term, but it is the component that survives integration over spacetime of the anomaly. In this context, one often sees the following ``equation'' in the literature
\begin{equation}
\int d^D x \braket{T^\mu_\mu}\footnote{Here we are careful to mention that the use of $\mu$ is to indicate a spacetime index, not the scale at which the theory has been renormalized at. When we wish to make RG scale dependence explicit, we will use square brackets, e.g. $c[\mu]$} \sim \text{anomaly}
\end{equation}
In all relevant classical field theories it is the case that the divergence of the scale current is equal to the trace of the stress-energy tensor. Cardy's conjecture, simply put, is that the left hand side of this equation is a monotonically increasing function of $\mu$. In two and four dimensions, the $c$- and $a$-theorems, respectively, are examples of Cardy's conjecture for theories which exhibit a pattern of spontaneous conformal symmetry breaking. To endow an interacting field theory with conformal invariance, it is generally necessary to couple it to the metric tensor. At least in the case of the proof of the $a$-theorem, this is done through a massless mediator known as the dilaton. The dilaton-matter and dilaton-metric interactions are tuned so that the trace of the total stress-energy tensor vanishes identically. Additionally, the dilaton can be coupled arbitrarily weakly to the matter theory, allowing one to study the trace anomaly perturbatively as a consequence of the RG flow between conformal fixed points\cite{11}. This is possible to do because the IR effective theory of the dilaton and metric tensor is highly constrained by the assumed conformal (or at least Weyl) symmetry, and the anomaly coefficient appears in the four-derivative terms of the effective action. This is effectively how the $a$- and $c$-theorems were established. In three dimensions, any attempt to replicate the proof of the $a$-theorem is doomed, since there are no conformal (Weyl)-invariant terms constructed from the Riemann tensor or its derivatives with which to build an IR effective theory of the dilaton and metric \cite{12}. It is thus often claimed that there is no trace anomaly in three-dimensional quantum field theories. We wish to emphasize now that this is meant to be a statement about CFTs and their pattern of symmetry breaking in three dimensions, not about three-dimensional field theories in general. The present work does \textit{not} consider this pattern of symmetry breaking. We will focus on the physically relevant case of non-conformal theories, and whether conformal/scale symmetry is acquired in the UV is of no consequence to our results. Therefore, we do not need to assume any particular properties of the IR effective actions, and the methods and language used here may be quite different than most literature on Cardy's conjecture and CFTs. The use of the word `anomaly' to describe this phenomenon then takes on a different meaning: instead of a violation of some classical conservation law (in general there is no conserved scale current), the trace anomaly serves as an obstruction to using the stress-energy tensor as a generator of scale transformations in the quantum theory.\\
The layout of this paper is as follows. In Section 2 we will make the ``equation'' above more precise. In particular, we will argue that what belongs in those brackets (what has been called the divergence of the scale current) is not the trace of the stress-energy tensor, but rather a different tensor $\theta^\mu_\mu$ which only happens to equal $T^\mu_\mu$ when quantum mechanics is turned off. This argument is phrased entirely in the context of RG invariance and makes obvious what role the anomalous scale dimensions of operators should play in the discussion.\\
In Section 3, once Cardy's conjecture is translated into a statement about the anomalous scaling, we will briefly discuss a criterion for the existence of solutions to the field equations that makes direct use of these anomalous dimensions. It is noticed that for IR strongly coupled theories, the anomalous dimensions generically behave in such a way that allows solutions to become manifest in the field equations of the renormalized theory which are not present in the classical theory - something which the present author has conjectured might happen\cite{13}. The conditions on which such solutions become manifest are proposed.\\
\section{Scale Transformations and RG Invariance}
\subsection{Anomalous Dimension}
When a theory is defined by an arbitrary renormalization scale $\mu$, it no longer obeys the scaling relations expected from classical field theory. For example, consider the correlation function of renormalized fields $\hat{\phi}[\mu]$ with couplings $g_i[\mu]$
\begin{equation}
G^{(3)}(\mu;g_i, x_1,x_2,x_3) = \braket{\hat{\phi}(x_1)\hat{\phi}(x_2)\hat{\phi}(x_3)}
\end{equation}
\noindent It is useful to imagine the result of re-scaling these coordinates by a proportionality constant $\lambda$. From classical physics, we have the so-called `engineering dimension' $\Delta$ of the field defined by
\begin{equation}
\phi(\lambda x) = \lambda^{-\Delta}\phi(x)
\end{equation}
\noindent and naively, one would expect the dimension of $G^{(3)}$ to be $-3\Delta$. This is not the case, as a scale transformation of this form must be accompanied by a change of renormalization scale $\mu \rightarrow \mu^\prime(\lambda)$ as well. The correct result, which is consistent with demanding invariance under change of renormalization scale, is
\begin{equation}
G^{(3)}(\mu;g_i,\lambda x_1,\lambda x_2,\lambda x_3) = \lambda^{-3(\Delta + \gamma(g_i))}G^{(3)}(\mu^\prime;g_i, x_1,x_2,x_3)
\end{equation}
\noindent where $\gamma$ is the anomalous dimension of the field, and on the right hand side of this equation, it is understood that $g_i$ is being renormalized at the scale $\mu^\prime$. In a perturbative theory, $\gamma$ is usually a polynomial function in the couplings that starts at quadratic order. Thus it is small. However, the above equation should be interpreted as an exact statement about the relationship between scale transformations and RG flow, irrespective of the theorist's ability to actually calculate such quantities. In a non-perturbative regime, the anomalous dimensions could be potentially $\mathcal{O}(1)$ or greater.\\
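For readers who prefer the infinitesimal version, differentiating this relation at $\lambda = 1$ gives the familiar Callan-Symanzik form (a standard rewriting, stated here for a single dimensionless coupling $g$ for brevity)
\begin{equation*}
\left[\sum_{a=1}^{3} x_a^\nu \frac{\partial}{\partial x_a^\nu}
+ \beta(g)\frac{\partial}{\partial g}
+ 3\left(\Delta + \gamma(g)\right)\right] G^{(3)}(\mu; g, x_1, x_2, x_3) = 0,
\qquad \beta(g) \equiv \mu\frac{dg}{d\mu}
\end{equation*}
which makes explicit that the anomalous dimension and the running of the coupling enter on exactly the same footing.\\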
The uniqueness of the scale $\mu^\prime$ is also an issue. We suspect that as one increases $\lambda$ and probes the large distance properties of the UV renormalized theory, the corresponding choice of $\mu^\prime$ should decrease, corresponding to sensitivity to the IR theory. If $\mu^\prime$ is then a monotonically decreasing function of $\lambda$, we should then also suspect that the anomalous dimension does not take the same values twice at different scales, ensuring the uniqueness of the formula. These expectations, as we shall see, are at the heart of Cardy's conjecture and are tied to expectations that RG flows cannot be cyclic\cite{9}.\\
It is also imperative that we discuss the anomalous dimension of composite field operators, like $\hat{\phi}^2$. The anomalous dimension of this operator is not equal to twice that of the fundamental field $\hat{\phi}$. It is easy enough to check that the divergence structures of $\braket{\hat{\phi}^2(x)}$ and $\braket{\hat{\phi}(x)\hat{\phi}(y)}$ are different, and so their renormalized counterparts must subtract divergences differently and therefore scale differently. The exact scaling dimension of operators at separated points will still be only the sum of the scaling dimensions of each operator; this is a consequence of the cluster decomposition property\cite{15}. This rule, however, does not apply to composite operators in interacting field theories: one might consider this a consequence of the operator product expansion.\\
\subsection{Scale Currents}
In a classical field theory there is a straightforward, easy way to calculate the effects of a scale transformation on solutions to the field equations. One must only identify the right hand side of
\begin{equation}
\phi(x^\mu + \delta x^\mu) = \phi(x^\mu) + \delta\phi(x^\mu)
\end{equation}
\noindent which with the identification $\delta x^\mu = (\lambda-1)x^\mu$ is exactly the scale transformation mentioned previously. With $\lambda$ expanded in a power series around the value of 1, a scale current is obtained by variation of the action in the usual way prescribed by Noether's theorem. The result is
\begin{equation}
j_\mu = x_\nu \theta^{\mu\nu}
\end{equation}
\noindent for some symmetric tensor which is conserved ($\partial_\mu \theta^{\mu\nu} = 0$) if the action is manifestly translation invariant. In the case of gauge theories, it is possible to choose $\delta \phi$ ($\phi$ is meant as a stand-in for any field) in such a way that this quantity is gauge invariant. The divergence of this scale current is therefore the trace, since $\partial_\mu(x_\nu \theta^{\mu\nu}) = g_{\mu\nu}\theta^{\mu\nu} + x_\nu\,\partial_\mu\theta^{\mu\nu} = \theta^\mu_\mu$ by conservation:
\begin{equation}
\partial_\mu j^\mu = \theta^\mu_\mu
\end{equation}
In a classically scale-invariant theory, this vanishes. However, even when the theory is not classically scale invariant, this is a useful quantity to know. By the equations of motion, the spacetime integral of $\theta^\mu_\mu$ can be found to be proportional to any mass terms that are implicit in the Lagrangian. The scale current is therefore a useful tool for analyzing the breaking of scale invariance (or a larger conformal invariance), be it spontaneous or explicit. Given a Lagrangian composed of ``operators'' $\mathcal{O}_i(x)$ and their engineering dimensions $\Delta_i$:
\begin{equation}
\mathcal{L} = \sum_i \mathcal{O}_i(x)
\end{equation}
\noindent it is quite easy to compute the divergence of this current in $D=d+1$ dimensions. By definition of the engineering dimensions, it is
\begin{equation}
\theta^\mu_\mu(x) = \sum_i (\Delta_i-D)\mathcal{O}_i(x)
\end{equation}
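For example, by the formula above, a free massive scalar in $D=4$ has $\theta^\mu_\mu = m^2\phi^2$: the kinetic term ($\Delta = 4$) drops out of the sum, while the mass operator $-\frac{1}{2}m^2\phi^2$ ($\Delta = 2$) contributes $(2-4)\left(-\frac{1}{2}m^2\phi^2\right)$, consistent with the statement that the integrated trace tracks the mass terms.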
In many reasonable classical field theories it happens that $\theta^{\mu\nu} = T^{\mu\nu}$, the stress energy tensor. As in gravitational physics, $T^{\mu\nu}$ can be computed by variation of the action with respect to the metric tensor. In General Relativity the metric is a dynamical field, so $T^{\mu\nu}$ emerges in its equations of motion, but in non-gravitational field theories the variation with respect to the metric still computes the same quantity. For a static solution to the field equations whose stresses vanish at spatial infinity, the integral of $T^\mu_\mu$ is exactly the energy/rest-mass of the solution\cite{16}, in agreement with the interpretation of $\theta^\mu_\mu$. So long as there is no explicit coordinate dependence in the Lagrangian, these two tensors will be the same, classically.\\
All of this changes once quantum mechanics is turned on. Scale invariance (if it is present) is broken by the RG flow, and can only be effectively restored if the flow terminates at a fixed point\cite{17}. This can happen, but exact scale invariance is broken and $\braket{\hat{\theta}^\mu_\mu}$ is anomalous. It is possible that this `trace anomaly' is largely responsible for the existence of \textit{all} mass scales in nature. In particular, non-Abelian Yang-Mills theory with no matter fields is classically scale invariant, and its excitations are massless. In the quantum theory, however, a mass gap is conjectured\cite{7} and the theory dynamically acquires a scale which was not present classically.\\
Furthermore, one should not expect that $\hat{\theta}^{\mu\nu} = \hat{T}^{\mu\nu}$ remains generically true once quantum mechanics is involved. One reason is quite simple: variation with respect to the metric is not a quantum mechanical operation. If the theory is not a quantum theory of gravity, there is no reason to suspect that the metric dependence of the quantum action is sensitive to the quantum nature of the matter fields. For example, if
\begin{equation}
S_\text{int} = -\int d^D x \sqrt{-g}\hat{\phi}^4(x)
\end{equation}
\noindent then $\hat{T}^\mu_\mu$ necessarily contains a term $D \hat{\phi}^4(x)$. This is completely insensitive to the fact that the scaling dimension of this operator depends on what other operators are included in the Lagrangian, as well as on the scale $\mu$ at which we have renormalized. The derivation of $\hat{\theta}^\mu_\mu$ naturally leads to the inclusion of a term $-\left(\gamma_{\phi^4}-D\right)\hat{\phi}^4(x)$ which contains information both about the renormalization scale and about the rest of the dynamics of the theory (here we have worked in a setting where $\Delta = 0$). If that is not enough to convince one that these tensors are different, consider the principle of RG invariance:
\subsection{RG Invariance and Cardy's Conjecture}
A principal tenet of the renormalization group procedure is the invariance of physical quantities with respect to $\mu$. The foremost example of a physical quantity, one which is fundamental to the construction of any theory, is the invariant eigenvalue of the squared momentum operator $\hat{P}^2$. For a stationary state identified with a quantum of the theory, this is just the physical mass $M^2$. Consider a stationary state $\ket{\Psi}$ whose wavefunction vanishes sufficiently fast at spatial infinity. Manifest translation invariance implies, e.g., $\partial_\mu \hat{T}^{\mu i} =0$ or\footnote{This argument was borrowed from \cite{16}}
\begin{equation}
\int d^d x x_i \braket{\Psi|\partial_\mu \hat{T}^{\mu i}|\Psi} = \int d^d x x_i \partial_j\braket{\Psi| \hat{T}^{j i}|\Psi} = -\int d^d x \braket{\Psi|\hat{T}^i_i|\Psi} = 0
\end{equation}
Therefore, since the ``00'' component of the stress energy tensor is the Hamiltonian density, the spatial integral of $\braket{\Psi| \hat{T}^{\mu}_\mu|\Psi}$ is just the mass of the state, $M_\Psi$. For the vacuum, this is zero. How then could it be that the integrated trace of the stress energy tensor depends on $\mu$, for the vacuum or for other states? We argue of course that it does not, and that the operator relevant to Cardy's conjecture is not the true stress energy tensor, but rather $\hat{\theta}^\mu_\mu$, which generically differs from $\hat{T}^\mu_\mu$ by the $\mu$-dependent scale anomaly.\\
We will now re-state Cardy's conjecture. For a Lagrangian density as in eq. (8), where the terms are now true operators in a quantum theory, the divergence of the scale current is
\begin{equation}
\hat{\theta}^\mu_\mu(x) = \sum_i (\Delta_i + \gamma_i-D)\hat{\mathcal{O}}_i(x)
\end{equation}
\noindent where the RG scale dependence is now implicit in $\gamma_i$'s. Cardy's conjecture is then about the quantity
\begin{equation}
\Theta[\mu] \equiv\sum_i (\Delta_i + \gamma_i-D)\int d^D x \braket{\hat{\mathcal{O}}_i(x)} = \int d^D x\braket{\hat{T}^\mu_\mu} + \int d^D x \sum_i\gamma_i \braket{\hat{\mathcal{O}}_i}
\end{equation}
Specifically, $\Theta_\text{IR} \leq \Theta_\text{UV}$, i.e. it is a monotonically increasing function of $\mu$. The first term on the right hand side is a constant: the anomaly is the remainder. Of course there are two sources of $\mu$ dependence in the anomaly: the $\gamma_i$'s and the expectation values $\braket{\hat{\mathcal{O}}_i(x)}$. We claim that a model-independent interpretation of Cardy's conjecture is really just a statement about the anomalous dimensions: $\gamma_i[\mu]$ increases monotonically as we flow to the IR. This would in fact affirm our earlier assumption that eq. (4) necessitates a unique choice of $\mu^\prime$. As a reminder, were this not the case we should conclude that RG flows can be cyclic, something which certainly should not describe any physical system. For the remainder of this paper we will work under the assumption that this is the correct interpretation of Cardy's conjecture, and that the conjecture is correct.\\
\section{Finding New States}
Briefly, let's take a detour back to the land of classical field theory. Consider the case of an interacting Klein-Gordon field in 2+1 dimensions, with a $\mathbb{Z}_2$ potential in its spontaneously broken phase (perhaps $V(\phi) = -m^2\phi^2 + g_6 \phi^6 + V_0$). Naively, one might suspect that domain walls are formed, and could be stable. This is not the case. We divide the Hamiltonian into kinetic and potential terms
\begin{equation}
H = K+U
\end{equation}
\noindent where K and U are positive definite. We now imagine scaling a static domain wall of spatial size 1 to a size $\lambda^{-1}$. Since $\Delta_K = 2$ and $\Delta_U = 0$ we have in $d=2$ spatial dimensions:
\begin{equation}
H[\lambda] = K + \lambda^{-2}U
\end{equation}
\noindent which has no stationary point at finite $\lambda$; the energy is lowered monotonically as $\lambda \to \infty$. Therefore the domain wall is unstable to shrinking indefinitely (this is of course due to the unbalanced tension on the wall). Furthermore, no stable extended field configurations exist in this model. The above classical scaling argument is the core of what is known as Derrick's theorem\cite{1}.\\
\subsection{Large Anomalous Dimensions}
The story is potentially much different in the quantum theory. It is easy to see how different things can be by assuming, at first, that $K$ and $U$ acquire anomalous dimensions $\gamma_K$ and $\gamma_U$. The conditions for a stable state then become the conditions for a local minimum of $H[\lambda]$ at $\lambda = 1$:
\begin{align}
\gamma_K K + (\gamma_U - 2)U &= 0\\
\gamma_K (\gamma_K -1) K + (\gamma_U - 2)(\gamma_U - 3)U &> 0
\end{align}
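These two conditions are simply the first and second derivative tests applied to the rescaled energy: writing $H[\lambda] = \lambda^{\gamma_K} K + \lambda^{\gamma_U - 2} U$ (each term assumed to scale homogeneously with its total dimension), the requirements $H^\prime(1) = 0$ and $H^{\prime\prime}(1) > 0$, together with $\frac{d}{d\lambda}\lambda^a = a\lambda^{a-1}$, reproduce eqs. (16) and (17).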
Alternatively (eliminating $U$ in favor of $K$ using eq. (16)), $(2-\gamma_U)/\gamma_K> 0$ and $\gamma_K(2+\gamma_K -\gamma_U) > 0$. Now obviously, in a realistic model, the terms that appear in the potential energy will not all have identical scaling dimensions. The argument is clear though: the activation of anomalous scaling of operators that contribute to the energy density of a state can circumvent the conclusions of Derrick's theorem. This of course depends on the relative signs and magnitudes of the anomalous dimensions in question. This is where Cardy's conjecture enters the picture.\\
If the theory is weakly coupled in the IR, $\gamma_K$ and $\gamma_U$ are small, close to zero. As one increases the RG scale, these quantities either decrease or stay the same. At some UV scale, then, they stand a chance of being large (perhaps before the theory is embedded to become UV complete) but negative. The stated conditions for the circumvention of Derrick's theorem are then not satisfied.\\
This theory in 2+1 dimensions happens to have a negative beta function: it is strongly coupled in the IR and weakly coupled in the UV\cite{18}. Suppose then that in the UV, where the anomalous dimensions are close to zero (and possibly negative), Derrick's theorem is not circumvented. Cardy's conjecture then implies that as we flow to the IR, $\gamma_K$ and $\gamma_U$ grow large as we approach the non-perturbative regime. If the flow does not terminate at a fixed point before this happens, the anomalous dimensions stand a chance of approaching unity (and being positive), where the above conditions can almost certainly be satisfied. The meaning of the title of this paper is now revealed: it is precisely the case of an IR strongly-coupled QFT where one expects new massive solitonic states to become manifest in the field equations. That they are not manifest as solutions along the entire RG trajectory should not be surprising: perturbation theory in the UV is by construction only sensitive to a subsector of the total Hilbert space of a QFT\cite{13}.\\
We did not just prove that stable domain walls exist in the 2+1 dimensional scalar field theory described at the beginning of this section. Rather, we used this simple case to visualize the potential for strongly-coupled theories to host solutions which have no classical analogue. Whether a theory actually hosts such solitonic states must be checked on a case-by-case basis, and is subject to details that we will not address here. In upcoming works we will investigate specific cases and demonstrate that significant deviations from classical estimates of soliton masses may occur.\\
\subsection{The Masses}
Suppose a theorist makes some exact non-perturbative evaluation of the operator dimensions in the Hamiltonian at some UV scale $\mu$, and uses the beta function for all relevant coupling constants to put bounds on how slowly those dimensions evolve with $\mu$. This theorist then discovers that, by our monotonicity arguments, conditions (16) and (17) are met as one flows to the IR, at an RG scale no smaller than $\mu^*$. What then is the mass of the state which emerges there, and how does it depend on the value of $\mu^*$?\\
We do not have an exact answer to this question, of course, and this is necessarily a regime where perturbative calculations of anomalous dimensions are likely not very accurate. But if the conditions above can be met, one should be able to put a lower bound on the mass of the lightest solitonic state. Generically we should consider a time-independent state (i.e. a static field configuration) and a Hamiltonian of the form
\begin{align}
H = \sum_i H_i[\mu] + \Omega[\mu]
\end{align}
\noindent where the $H_i$ are derived from the integrated expectation values (with respect to the massive state, not the vacuum) of operators in eq. (8), and $\Omega$ is a free energy. Often one will encounter texts which claim this free energy is a cosmological constant, but this is not correct: while renormalization of vacuum graphs requires the introduction of a field-independent term in the action to regulate divergences, this is an RG-variant term and is compatible with a zero vacuum energy. The true meaning of $\Omega[\mu]$ is that of the free energy associated with the invisible fluctuation modes with momenta greater than $\mu$. Both $\Omega$ and the $H_i$ are RG-variant quantities, but by construction their sum should not be.\\
What matters now is essentially applying stability conditions (16) and (17) to our Hamiltonian (18). The result is not literally (16) and (17), but rather (in $d$ spatial dimensions)
\begin{align}
&\sum_i (\Delta_i+\gamma_i - d)H_i + (\gamma_\Omega -d)\Omega = 0\\
&\sum_i (\Delta_i+\gamma_i - d)(\Delta_i+\gamma_i - d+1)H_i + (\gamma_\Omega -d)(\gamma_\Omega -d+1)\Omega > 0
\end{align}
Here $\Omega$ is assumed to have no engineering dimension, and only acquires the dimension $\gamma_\Omega$ through the anomalous running of the renormalized coupling constants of which it is a function. Assuming that conditions (19) and (20) are met at and below the RG scale $\mu^*$, we now have known relationships between the numerical values of $H_i[\mu^*]$ and $\Omega[\mu^*]$. As is typical with classical extended field configurations, all of these terms should generically differ only by $\mathcal{O}(1)$ coefficients\cite{19}. The anomalous dimensions will generically\footnote{The previous 2+1 dimensional example was a special case. There it seems that these solutions turn on as soon as the $\gamma$'s are positive, even if very small. However, in this number of dimensions many models already host solitonic states, e.g. vortices, a consequence of the fact that $\Delta_K = d = 2$. In higher dimensions, where soliton solutions are generically not present, the $\gamma$'s have more work to do and will therefore be $\mathcal{O}(d-\Delta)$ at $\mu^*$.} be $\mathcal{O}(d-\Delta)\sim\mathcal{O}(1)$ at $\mu^*$, and since the coefficients of the $H_i$'s and $\Omega$ in eqs. (19) and (20) differ in sign, we argue that $H \sim \Omega[\mu^*]$ is as good an estimate for the lower bound of the mass as any. What is the value of $\Omega[\mu^*]$? Naively, it is some integral of the free energy density $\omega(g_i)$, a function of the couplings of the theory renormalized at $\mu^*$. The correct expression should depend on some normalized energy density profile of the state, and since this is at the moment indeterminate, we conjecture the following:
\begin{align}
M \sim \Omega[\mu^*] \gtrsim \frac{\omega[\mu^*]}{(\mu^*)^d}
\end{align}
Since the state only manifests as a solution to our quantum equations of motion when we are insensitive to distance scales less than $1/\mu^*$, the volume of the physical object should be at least $(1/\mu^*)^d$. The natural guess is then eq. (21).
\section{Conclusion and Going Forward}
We have interpreted Cardy's conjecture as a statement about the change of anomalous dimensions under RG flow. The basic requirements that RG flows not be cyclic, and that an IR renormalized theory have fewer degrees of freedom than a UV renormalized theory, are realized if some scaling dimensions are larger in the IR than in the UV. This becomes consequential if those dimensions deviate from their classical values by $\mathcal{O}(1)$ corrections, and those deviations are positive. This happens when the theory is strongly coupled in the IR, and it opens the door to a circumvention of Derrick's classical no-go theorem. Should new solutions to the renormalized field equations emerge, we identify them as solitons, and we have proposed a way to estimate a lower bound on their mass using perturbation theory in the UV. Such a thing is only possible because of the monotonicity arguments. Once the solutions are found and characterized, we propose a reorganization of the degrees of freedom at scales at and below $\mu^*$ to reflect the manifestation of these new solutions. One should expand perturbatively around such solutions, rather than around the free field theory configurations.
\newpage
\section{Long-range correlated disorder and self-consistent Born approximation}
\subsection{Long-range interacting impurity}
On a square lattice, at each site $\bm{R}$ we introduce an impurity whose strength $W(\bm{R})$ independently takes a random value from a uniform distribution on $[-W/2,W/2]$. These impurities are uncorrelated by construction:
\begin{equation}
\mean{W(\bm{R})} = 0,\quad \mean{W(\bm{R})W(\bm{R}')} = \frac{W^2}{12}\delta_{\bm{R},\bm{R}'} = \frac{W^2a^2}{12}\delta(\bm{R}-\bm{R}')
\end{equation}
The lattice constant $a$ appears when we promote the discrete Kronecker delta to the continuum field description, because we place an impurity at every distance $a$ in a two-dimensional lattice. We now argue that if these impurities interact with electrons via a finite-range interaction, the disorder field felt by an electron is effectively long-range correlated. We assume this electron-impurity interaction is given by a Gaussian function, so that the net disorder potential is
\begin{equation}
u(\bm{r}) = \int \frac{e^{-(\bm{r}-\bm{R})^2/\xi^2}}{\pi \xi^2}W(\bm{R})d^2\bm{R}.
\end{equation}
As a result, $u(\bm{r})$ is spatially correlated by
\begin{equation}\label{correlated}
\begin{split}
\mean{u(\bm{r})u(\bm{r}')} = \frac{W^2a^2}{12\pi^2\xi^4}\int e^{-(\bm{r}-\bm{R})^2/\xi^2}e^{-(\bm{r}'-\bm{R})^2/\xi^2}d^2\bm{R} = \frac{W^2a^2}{24\pi\xi^2}e^{-(\bm{r}-\bm{r}')^2/2\xi^2}
\end{split}
\end{equation}
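This correlator is easy to verify numerically. The following minimal sketch (in units $a=1$, with illustrative parameter values) smooths uncorrelated lattice impurities with the Gaussian profile by FFT convolution and compares the measured correlator of $u$ with Eq.~\eqref{correlated}:
\begin{verbatim}
import numpy as np

# Sample uncorrelated impurities W(R) with variance W^2/12, smooth them with
# exp(-r^2/xi^2)/(pi xi^2), and compare <u(r)u(r')> with the prediction
# W^2/(24 pi xi^2) * exp(-dr^2/(2 xi^2)).
rng = np.random.default_rng(0)
L, W, xi = 512, 1.0, 4.0
w_imp = rng.uniform(-W / 2, W / 2, size=(L, L))

x = np.fft.fftfreq(L) * L                     # periodic lattice coordinates
X, Y = np.meshgrid(x, x, indexing="ij")
kernel = np.exp(-(X**2 + Y**2) / xi**2) / (np.pi * xi**2)
u = np.fft.ifft2(np.fft.fft2(w_imp) * np.fft.fft2(kernel)).real

measured = [np.mean(u * np.roll(u, d, axis=0)) for d in range(10)]
predicted = [W**2 / (24 * np.pi * xi**2) * np.exp(-d**2 / (2 * xi**2))
             for d in range(10)]
\end{verbatim}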
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{SM1.pdf}
\caption{Self-consistent diagram for the disorder-induced self-energy. The double-line represents the renormalized Green function with the self-energy included. The cross denotes the impurity and the dashed lines denote the impurity-electron interaction. The self-consistent diagram can be thought of as the sum of all non-crossing diagrams with single solid lines representing the bare Green function. After averaging over disorder configurations, each dashed line segment connecting two points of a Green function is substituted by Eq.~\ref{correlated}. \label{fig1}}
\end{figure}
The self-consistent Born approximation sums all the non-crossing diagrams, as shown in Fig.~\ref{fig1}. Unlike in the $T$-matrix approximation, we also do not consider correlation functions beyond two-point. After averaging over disorder configurations the theory recovers translational symmetry, so we can compute the self-energy self-consistently as
\begin{equation}\label{selfconsistent}
\begin{split}
\Sigma &= \frac{W^2a^2}{12}\int \frac{d^2\bm{k}}{(2\pi)^2} \frac{e^{-k^2\xi^2/2}}{\mu-H(m,\alpha,\beta,\gamma;\bm{k})-\Sigma} = \frac{W^2a^2}{12\xi^2}\int \frac{d^2\bm{k}}{(2\pi)^2} \frac{e^{-k^2/2}}{\mu-H(m,\alpha/\xi,\beta/\xi^2,\gamma/\xi^2;\bm{k})-\Sigma}
\end{split}
\end{equation}
The second equality is obtained by rescaling the integration variable $\bm{k}\to \bm{k}\xi$. Compared with the case of uncorrelated disorder, the extra exponential decay comes from the Fourier transform of the finite-range correlator $O(\bm{r}-\bm{r}')=\mean{u(\bm{r})u(\bm{r}')}$. This term provides a natural cutoff for the integral, so we do not need to impose an artificial UV cutoff. For brevity, we define the rescaled quantities $\bar{W}=W/\xi, \bar{\alpha}=\alpha/\xi, \bar{\beta}=\beta/\xi^2$, and $\bar{\gamma}=\gamma/\xi^2$. The integral in Eq.~\ref{selfconsistent} can be performed exactly, yielding the explicit system of equations
\begin{equation}\label{SCsystem}
\begin{split}
&\Sigma_0 = \frac{\bar{W}^2}{48\pi \Delta}\left[ F(x_2)\left(\mu-\Sigma_0 + 2\bar{\gamma}x_2\right) - F(x_1)\left(\mu-\Sigma_0 + 2\bar{\gamma}x_1\right)\right],\\
&\Sigma_z = \frac{\bar{W}^2}{48\pi \Delta}\left[ F(x_2)\left(m+\Sigma_z - 2\bar{\beta}x_2\right) - F(x_1)\left(m+\Sigma_z - 2\bar{\beta}x_1\right)\right]
\end{split}
\end{equation}
Here, we decompose the self-energy matrix as $\Sigma=\Sigma_0\sigma_0+\Sigma_z\sigma_z$. Other terms in Eq.~\ref{SCsystem} are given by
\begin{equation}
\begin{split}
& F(x) = e^x\left[i\pi \text{sign}[\text{Im}(x)] + \text{Ei}(-x)\right]\\
&\Delta = (a^2-4bc)^{1/2},\quad x_{1,2} = \frac{a\mp \Delta}{4b},\\
&a=-\bar{\alpha}^2-2\bar{\gamma}(\mu-\Sigma_0)-2\bar{\beta}(m+\Sigma_z),\quad b=\bar{\gamma}^2-\bar{\beta}^2, \quad c=(\mu-\Sigma_0)^2-(m+\Sigma_z)^2
\end{split}
\end{equation}
The solution of Eq.~\eqref{SCsystem} is transcendental but can be obtained quickly by numerical iteration. The renormalized mass term and chemical potential are then
\begin{equation}
\bar{\mu} = \mu -\text{Re}(\Sigma_0),\quad \bar{m} = m +\text{Re}(\Sigma_z).
\end{equation}
We emphasize that it is necessary to obtain the full self-consistent solution. If one assumes $\Sigma=0$ on the RHS of Eq.~\ref{selfconsistent}, a simple closed-form expression can be found, but its accuracy might be limited and, more importantly, it is not clear how the imaginary part of the self-energy could then be obtained.
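For concreteness, a minimal sketch of such an iteration is given below. Instead of the closed form with $F(x)$, it integrates the rescaled form of Eq.~\ref{selfconsistent} over $\bm{k}$ numerically; the BHZ-like form $H = \gamma k^2\sigma_0 + \alpha(k_x\sigma_x + k_y\sigma_y) + (m - \beta k^2)\sigma_z$ is our assumed reading of $H(m,\alpha,\beta,\gamma;\bm{k})$, units with $a=1$ are used, and the parameter values are purely illustrative. The off-diagonal $\alpha$ term averages out of the angular integral, leaving a closed problem for $(\Sigma_0, \Sigma_z)$:
\begin{verbatim}
import numpy as np

def scba_sigma(m, alpha, beta, gamma, mu, Wbar,
               eta=1e-3, n_iter=500, mixing=0.5, nk=4000, kmax=10.0):
    # Damped fixed-point iteration for Sigma = Sigma_0*s0 + Sigma_z*sz;
    # exp(-k^2/2) is the natural UV cutoff of the rescaled integral.
    k = np.linspace(1e-4, kmax, nk)
    w = k * np.exp(-k**2 / 2) * (k[1] - k[0]) / (2 * np.pi)  # radial measure
    s0, sz = 0j, 0j
    for _ in range(n_iter):
        u = (mu + 1j * eta - s0) - gamma * k**2   # sigma_0 channel of G^-1
        v = (m + sz) - beta * k**2                # sigma_z channel of G^-1
        det = u**2 - v**2 - alpha**2 * k**2       # angle-averaged determinant
        s0_new = Wbar**2 / 12 * np.sum(w * u / det)
        sz_new = Wbar**2 / 12 * np.sum(w * v / det)
        s0 = mixing * s0_new + (1 - mixing) * s0
        sz = mixing * sz_new + (1 - mixing) * sz
    return s0, sz

# Illustrative (rescaled) parameters; the renormalized mass and chemical
# potential then follow as in the equation above.
s0, sz = scba_sigma(m=-1.0, alpha=1.0, beta=0.5, gamma=0.25, mu=0.0, Wbar=2.0)
mu_bar, m_bar = 0.0 - s0.real, -1.0 + sz.real
\end{verbatim}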
\subsection{$L^{-4}$ scaling of the CTI phase}
In this subsection, we show that the quantized conductance is reduced by the edge-edge hybridization probability, which is suppressed as $L^{-4}$ in the weak-coupling limit. The longitudinal conductance is given by $G=e^2v_F\rho$, where $v_F$ is the Fermi velocity and $\rho$ is the density of states at the Fermi energy. For a chiral state $\psi(x) = D^{-1/2}e^{ik_Fx}$, with $D$ the longitudinal dimension for normalization, $v_F=\text{Re}\bra{\psi}-i\partial_x\ket{\psi}/m_e$ and $\rho=1/(2\pi\hbar v_F)$, leading to the quantized conductance $G=e^2/h$. We assume that a small coupling between opposite chiral edges, which effectively acts as backscattering, mainly reduces the propagation velocity. Specifically, if $\psi(x)\to\psi'(x)\sim \sqrt{1-|\eta|^2}e^{ikx} + \eta e^{-ikx}$, with $\eta$ the mixing amplitude, then $v_F\to v_F' = v_F(1-2|\eta|^2)$. As a result, the conductance loss $1-G$ is proportional to the hybridization probability.
We now estimate the scaling of the hybridization probability. We assume that the probability of an excitation of energy $E$ to overlap with an eigenstate of energy $E+\Delta E$ is given by a function $F(\Delta E /\Gamma)$ where $\Gamma$ is the self-energy-induced incoherent level broadening. Here, $F(x)$ decreases with increasing $|x|$. Eigenstates within the energy band gap decay exponentially into the bulk, so the main contribution comes from states near the gap edge where the localization length diverges. Specifically, the probability of an edge mode of energy $E_0$ to appear at a distance $y$ in the bulk is
\begin{equation}\label{scaling}
P(E_0, y) = \int F\left(\frac{E-\bar{E}_0}{\Gamma} \right) e^{-Cy\sqrt{|\bar{m}|-E}}dE \propto y^{-2}F\left(\frac{|\bar{m}|-\bar{E}_0}{\Gamma} \right).
\end{equation}
Here, $\bar{E}_0$ is the disorder-renormalized $E_0$ and $|\bar{m}|$ is the upper edge of the bulk gap, with $\bar{m}$ the renormalized mass in the disorder-averaged Hamiltonian. The same argument can be applied to leakage into the bulk through the lower edge $-|\bar{m}|$ of the bulk band. The energy level broadening is $\Gamma=\text{Im}\Sigma_0$. Since hybridization requires both counter-propagating edge modes to leak across the sample, the inter-edge overlap probability is the product of two such factors and, by Eq.~\eqref{scaling}, scales as $L^{-4}$, where $L$ is the sample width or the inter-edge separation. We compare Eq.~\ref{scaling} with the disorder-induced density of states inside the bulk gap
\begin{equation}
\rho(E_0) = -\frac{1}{\pi}\,\text{Im}\left[ \text{Tr} \int\frac{d^2\bm{k}}{E_0-H_0-\Sigma} \right] \approx -\frac{1}{\pi} \int\frac{ \text{Tr}\left[(\text{Im}\Sigma_0)\sigma_0 +(\text{Im}\Sigma_z)\sigma_z\right]}{(\bar{m}^2-\bar{E}_0^2) + \alpha^2k^2 +\mathcal{O}(k^4)} d^2\bm{k} \sim \frac{\text{Im}\Sigma_0}{\bar{m}^2-\bar{E}_0^2}.
\end{equation}
As $\rho(E_0)$ increases, the edge-edge hybridization probability grows concomitantly. This shows the consistency of our Born approximation with the microscopic picture that, in the CTI phase, the two edges couple through percolating bulk states inside the bulk gap.
\subsection{TAI suppression by long-range disorder}
Even though a closed-form solution for the self-energy is generally not available, some properties can be derived in the large-$\xi$ limit, where $\bar{\beta},\bar{\gamma}$ and $\bar{\alpha}^2$ are suppressed by a factor $\xi^{-2}$. We can then simplify Eq.~\eqref{SCsystem} into
\begin{equation}
\begin{split}
&\Sigma_0 = \frac{\bar{W}^2}{24\pi} \frac{\mu-\Sigma_0}{(\mu-\Sigma_0)^2-(m+\Sigma_z)^2} \\
&\Sigma_z = \frac{\bar{W}^2}{24\pi} \frac{m+\Sigma_z}{(\mu-\Sigma_0)^2-(m+\Sigma_z)^2}
\end{split}
\end{equation}
It is then easy to show that if one starts at the clean energy gap edge $\mu=\pm m$ (more precisely $|\mu|=|m|-\epsilon$ with infinitesimal $\epsilon$ to avoid the singularity), the renormalized values satisfy $\bar{\mu}=\pm\bar{m}$, i.e. the width of the bulk gap is unchanged by the disorder, ruling out the existence of the TAI phase. In addition, one has the symmetry $\Sigma_0\to-\Sigma_0$ for $\mu\to-\mu$, so the solution is symmetric around $\mu=0$. This limit is approximately reached when $\xi^2\sim (\gamma,\beta,\alpha^2)/|m|$. However, we now argue that the TAI phase is suppressed much sooner than this limit, mainly due to the fast growth of the imaginary part of the self-energy.
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{SM2.pdf}
\caption{Variations of $\text{Im}(\Sigma_0)$ (a), $\bar{m}^2-\bar{\mu}^2$ or the squared gap width (b), and $\Xi$ (c) as a function of the disorder correlation length $\xi$. The parameters are $m=1, \alpha=120,\beta=400,\gamma=200,\bar{W}=300,\mu=5$ (corresponding to Fig.3 in the main text). \label{fig2}}
\end{figure}
As shown in Fig.~\ref{fig2}(a), the imaginary part of the self-energy grows rapidly beyond the critical point. Comparing Figs.~\ref{fig2}(a) and (c), the variation of the $\Xi$-factor used to bound the TAI region is almost concomitant with that of the imaginary part. As a result, the threshold $\Xi=2$ is reached while the renormalized gap is still open and $\bar{m}<0$. For the given parameters, based on the gap-opening condition alone, the threshold $\xi$ for a complete TAI loss would be 8; according to our numerical simulation, however, the point $\bar{W}=300,\mu=5$ is already AI for $\xi=3$, which agrees better with the threshold $\xi=2.5$ from the self-consistent Born approximation [see the crossing point in Fig.~\ref{fig2}(c)].
\section{Additional numerical data}
\subsection{One-parameter scaling function}
In this subsection, we provide more numerical evidence to support our one-parameter scaling ansatz. In Figs.~\ref{fig3}(a-c), we present the scaling trend across the CTI-AI transition at different chemical potentials $\mu$. Within the error caused by random fluctuations, there is little size dependence for $G<0.3$, supporting the claim that the CTI-AI critical value is universal for a given set of lattice parameters. In the main text, we have already demonstrated the one-parameter scaling function within the CTI phase. To map out this function more fully, we add two more sets of data near the CTI-AI phase boundary, as shown in Fig.~\ref{fig3}(d), and are able to fit the collapsed data with a smooth line (dashed line). By taking the log derivative of the fitted line, we compute the scaling exponent $\beta=d\ln(1-G)/d\ln L$ in Fig.~\ref{fig3}(e); a sketch of this extraction is given below. The scaling exponent approaches $-4$ in the limit $G\to 1$, then gradually increases until the critical value $G\sim 0.3$, and stays at zero afterward. For the transverse conductance, one should see the $\beta$ function crossing zero at $G=0.5$ and turning positive afterward. We explain the reason in the main text.
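To illustrate this extraction, the following minimal sketch computes the log derivative on synthetic data obeying $1-G\propto L^{-4}$ at large $L$ (placeholder numbers standing in for the simulation output):
\begin{verbatim}
import numpy as np

# beta(G) = d ln(1-G) / d ln L from a smooth (L, G) relation.  The synthetic
# curve saturates toward G -> 0 at small L and falls as L^-4 at large L.
L = np.logspace(1, 3, 40)
one_minus_G = 5.0 * L**-4 / (1.0 + 5.0 * L**-4)
beta = np.gradient(np.log(one_minus_G), np.log(L))
# beta -> -4 as G -> 1, and -> 0 where the quantization is lost
\end{verbatim}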
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{SM3.pdf}
\caption{(a-c) Finite-size scaling across the CTI-AI transition with respect to the disorder strength $\bar{W}$ at different applied chemical potentials $\mu$. (d) Quantization error with respect to the scaled size. The red dashed line is fitted from the numerical data. (e) Scaling exponent derived from the fitted line in (d). The $1-G$ axis in (e) is plotted on the $\ln[x/(1-x)]$ scale. The lattice parameters are similar to those in Fig.~1 of the main text. \label{fig3}}
\end{figure}
\subsection{Other types of disorders}
In this subsection, we consider the full time-reversal invariant Hamiltonian, i.e. two copies of the Hamiltonian used in the main text, so that $H_{AII}(\bm{k})=h(\bm{k})\oplus -h^*(-\bm{k})$, with the time-reversal operator $i\mathcal{K}\tau_y$ and $\mathcal{K}$ being complex conjugation. Here $\tau_{x,y,z,0}$ are the Pauli matrices acting on the spin degrees of freedom. Two more types of disorder can be studied in this case: the time-reversal symmetric spin-orbit coupling (SOC) disorder $u(\bm{r})\tau_x\sigma_y$ and the time-reversal breaking magnetic disorder $u(\bm{r})\tau_x\sigma_x$. In the former case, the difference from the onsite potential studied in the main text is that in one spin sector
\begin{equation}
\Sigma = \sigma_y\left[\frac{W^2a^2}{12}\int \frac{d^2\bm{k}}{(2\pi)^2} \frac{e^{-k^2\xi^2/2}}{\mu-h(\bm{k})-\Sigma}\right]\sigma_y.
\end{equation}
Since $\sigma_y\sigma_z\sigma_y=-\sigma_z$, the sign of the mass correction is inverted. As a result, a clean non-trivial system ($m<0$) will eventually be trivialized once $\bar{m}>0$. This is reflected in Fig.~\ref{fig4}(a). On the other hand, a clean trivial system $(m>0)$ stays trivial. In the case of time-reversal symmetry breaking disorder, the quantization plateau is destroyed at a much weaker disorder strength than in the cases of symmetry-preserving disorder [see Fig.~\ref{fig4}(b)], consistent with the fact that the topology of AII systems is protected by time-reversal symmetry.
\begin{figure}
\includegraphics[width=0.7\textwidth]{SM4.pdf}
\caption{Conductance in the presence of SOC (a) and magnetic (b) disorders. The black lines in panel (a) represent the renormalized energy gap with the mass term inverted. The parameters for the clean system are similar to those in Fig.~2(a) of the main text, with $\xi=2$. For the symmetry-breaking disorder, the quantized conductance is lost at $\bar{W}\sim \mathcal{O}(1)$, while in the presence of symmetry-preserving disorder the quantization plateau can extend up to $\bar{W}\sim \mathcal{O}(10^2)$. \label{fig4}}
\end{figure}
\subsection{Pseudo-quantized conductance plateau}
In Fig.~\ref{fig5}, we show that in the CTI phase, due to the slow convergence of the conductance to the quantized value (especially near the CTI-AI transition), it is possible to observe a conductance plateau by tuning the chemical potential, but the plateau value is not precisely quantized.
\begin{figure}
\includegraphics[width=0.5\textwidth]{SM5.pdf}
\caption{Conductance as a function of the applied chemical potential for different strengths of disorder. The parameters for the clean system are similar to those in Fig.~2(c) of the main text, with $\xi=8$. The quantization error of the conductance plateau gets worse as the disorder strength increases, driving the system from the TI to the CTI phase. \label{fig5}}
\end{figure}
\end{document}
\section{Introduction}
Methods of modern spectroscopy make it possible to measure hyperfine splittings in electronic states of atoms and molecules with high accuracy, resulting in a large amount of data for analysis and interpretation \cite{barzakh2013changes, barzakh2017changes, beiersdorfer2001hyperfine,schmidt2018nuclear,Petrov:13,Mawhorter:2011}. These data are important not only for nuclear theory but also for the development and testing of methods of precise electronic structure calculations. Such methods are required in the search for spatial and time-reversal symmetry violating effects of fundamental interactions in atomic and molecular systems~\cite{GFreview,SafronovaRev:2018,Skripnikov:14c,Skripnikov:17c}.
To accurately reproduce experimental values of the hyperfine splitting energy in heavy atoms with an uncertainty of the order of 1\%, one has to take into account both relativistic and correlation effects at a very good level of theory. Moreover, one should also take into account nuclear structure contributions (as well as quantum electrodynamics (QED) corrections~\cite{Ginges:2018}). These are the contributions from the distribution of the charge (Breit-Rosenthal effect) \cite{rosenthal1932isotope, crawford1949mf} and of the magnetization (Bohr-Weisskopf effect) \cite{bohr1950influence, bohr1951bohr} over the nucleus. For many atomic systems these effects have been extensively studied (for example see Refs.
\cite{maartensson1995magnetic, konovalova2017calculation, gomez2008nuclear, ginges2017ground, dzuba1984va}).
In the point-nucleus model, the ratio of the hyperfine splittings of two different isotopes is proportional to the ratio of the nuclear \emph{g}-factors of the isotopes. However, this is not the case for real systems, due to the effects mentioned above. The corresponding correction is called the magnetic anomaly:
\begin{equation}
{}^{1}\Delta^{2}=\frac{A_1 \mu_2 I_1}{A_2 \mu_1 I_2}-1,
\end{equation}
where $A_1$ and $A_2$ are hyperfine structure (HFS) constants, $\mu_1$ and $\mu_2$ are nuclear magnetic moments and $I_1$ and $I_2$ are nuclear spins of considered isotopes.
The theoretical value of the anomaly strongly depends on the model of the magnetization distribution. However, as was noted previously (e.g. in Refs.~\cite{0954-3899-37-11-113101,schmidt2018nuclear,barzakh2012hyperfine}), the ratio of the anomalies for different electronic states has a very small model dependence. This feature has been employed here to predict the nuclear magnetic moments of the short-lived ${}^{191}$Tl and ${}^{193}$Tl isotopes.
In the present paper, relativistic coupled cluster calculations of the hyperfine structure constants and of the magnetic anomaly in the thallium atom are performed for several electronic states. We show that in this case it is possible to use a simple model of the magnetization distribution in the Tl isotopes, to fix the parameter of the model from the available experimental data, and to use it for further predictions. We also show that Gaussian basis sets can be used for such electronic structure calculations, which implies a direct generalization to similar molecular problems.
\section{Theory}
For both of the stable isotopes of thallium considered here ($^{203}$Tl and $^{205}$Tl) the nuclear spin is $1/2$. We consider a simple one-particle model in which the nuclear magnetization is due to a single spherically distributed valence nucleon (proton) with spin equal to $1/2$ and zero orbital momentum. The hyperfine interaction in the case of a point magnetic dipole moment is given by the following Hamiltonian:
\begin{equation}
\label{hfs1}
h^{\rm HFS}=\frac{1}{c}\frac{\mathbf{\mu}\cdot[\mathbf{r}_{el}\times \bm{\alpha}]}{r_{el}^3},
\end{equation}
where $\bm{\alpha}$ are the Dirac matrices and $\mathbf{r}_{el}$ is the electron radius-vector. In the model of a uniformly magnetized ball with radius $R_m$, this interaction is modified inside the nucleus to the following form~\cite{bohr1950influence}:
\begin{equation}
\label{hfs2}
\frac{1}{c}\frac{\mathbf{\mu}\cdot[\mathbf{r}_{el}\times \bm{\alpha}]}{R_m^3}.
\end{equation}
Outside the nucleus the expression for the interaction coincides with that of the point dipole given by Eq. (\ref{hfs1}). In this paper we use the root-mean-square radius $r$, related to the radius of the ball $R$ by $R=\left(\frac{5}{3}\right)^{1/2} r$.
For the hyperfine structure constant one often uses the following parametrization:
\begin{equation}
A=A_0(1-\delta)(1-\epsilon),
\end{equation}
where $A_0$ is the hyperfine structure constant corresponding to the point nucleus, $\delta$ is the Breit-Rosenthal correction, and $\epsilon$ is the Bohr-Weisskopf (BW) correction. The former takes into account the finite charge distribution and is about 10\% for heavy atoms like Tl \cite{shabaev1994hyperfine}. The latter concerns the finite magnetization distribution and is usually smaller than $\delta$.
In relativistic correlation calculations of neutral atoms it is more practical to use a finite charge distribution model from the beginning. In the present paper the Gaussian charge distribution has been employed. Thus, we do not introduce the Breit-Rosenthal correction. In this case the Bohr-Weisskopf correction $\epsilon$ is a function of both the nuclear charge and magnetic radii: $\epsilon = \epsilon(r_c, r_m)$.
For hydrogen-like ions one can use the following analytic expression for the hyperfine splitting energy~\cite{shabaev1997ground}:
\begin{equation} \label{eq1}
\begin{split}
\Delta E & = \frac{4}{3}\alpha (\alpha Z)^3
\frac{\mu}{\mu _N} \frac{m}{m _p}
\frac{2I+1}{2I}mc^2 \\
& \times \big (A(\alpha Z)(1-\delta)
(1-\varepsilon)+x_{rad} \big)
\end{split}
\end{equation}
where $\alpha$ is the fine-structure constant, $Z$ is the nuclear charge, $m_p$ is the proton mass, $m$ is the electron mass, and $x_{rad}$ is the correction due to QED effects. Besides, the following analytic behavior of the Breit-Rosenthal correction was obtained in Ref.~\cite{ionesco1960nuclear}:
\begin{equation}
\label{dep}
\delta = b_N \cdot R_N^{2\gamma -1}
\text{ , }
\gamma = \sqrt{\kappa^2-(\alpha Z)^2}.
\end{equation}
Here $b_N$ is a constant independent of the nuclear structure, $\kappa$ is the relativistic quantum number.
This expression can be used to test numerical approaches.
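For example, a minimal numerical sketch of Eq.~(\ref{eq1}) in the point-nucleus limit ($\delta=\epsilon=x_{rad}=0$), assuming the standard relativistic factor $A(\alpha Z)=1/[\gamma(2\gamma-1)]$ for the $1s_{1/2}$ state, reproduces the extrapolated point-nucleus value for hydrogen-like ${}^{205}$Tl discussed below:
\begin{verbatim}
import numpy as np

# Hyperfine splitting of hydrogen-like 205Tl (1s state), point-nucleus limit.
alpha = 1 / 137.035999
Z, I = 81, 0.5
mu_over_muN = 1.6382           # 205Tl nuclear magnetic moment
m_over_mp = 1 / 1836.15267     # electron-to-proton mass ratio
mc2_eV = 510998.95             # electron rest energy in eV

gam = np.sqrt(1 - (alpha * Z)**2)
A = 1 / (gam * (2 * gam - 1))  # relativistic factor for the 1s_{1/2} state
dE = (4 / 3) * alpha * (alpha * Z)**3 * mu_over_muN * m_over_mp \
     * (2 * I + 1) / (2 * I) * mc2_eV * A
print(dE)                      # ~3.704 eV
\end{verbatim}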
Finally, in Ref.~\cite{atoms6030039} the following parametrization of the hyperfine structure constant and Bohr-Weisskopf correction has been used:
\begin{eqnarray}
\label{d_nuc_dep}
A=A_0(1-(b_N+b_M d_{nuc}R_N^{2\gamma - 1})),\\
\epsilon(R_N, d_{nuc})=b_M d_{nuc}R_N^{2\gamma - 1},
\end{eqnarray}
where $b_M$ is the electronic factor independent of the nuclear radius and structure and $d_{nuc}$ is a factor which depends only on the nuclear spin and configuration.
\section{Calculation details}
In the calculations of the hyperfine splittings in hydrogen-like thallium, the values of the QED corrections from Ref.~\cite{shabaev1997ground} were used. The values of the nuclear charge radii were taken from Ref.~\cite{ANGELI201369}. The nuclear magnetic moments of the $^{203}$Tl and $^{205}$Tl isotopes were taken from Ref.~\cite{stone2005table}.
For the test calculations of hydrogen-like Tl, three basis sets were used: CVDZ~\cite{Dyall:06,Dyall:98}, which consists of $24s, 20p, 14d, 9f$ uncontracted Gaussian functions with a maximum $s$-type exponent parameter equal to $5.8 \cdot 10^7$; GEOM1, which consists of $50s$ functions with exponent parameters forming a geometric progression with common ratio $1.8$ and largest term $5 \cdot 10^8$; and GEOM2, which differs from the previous one only in that its largest term is $5 \cdot 10^9$.
In the calculations of the neutral thallium atom, QED effects were not taken into account. The main all-electron (81e) correlation calculations were performed within the coupled cluster with single, double and perturbative triple cluster amplitudes, CCSD(T), method \cite{bartlett2007coupled}, using the Dirac-Coulomb Hamiltonian.
In these calculations the AAE4Z basis set~\cite{Dyall:12}, augmented with one $h$ and one $i$ function, was used. It includes $35s, 32p, 22d, 16f, 10g, 5h, 2i$ functions. This basis set is called LBas below. In this calculation, virtual orbitals were truncated at an energy of 10000 Hartree. The importance of a high energy cutoff for properties that depend on the behaviour of the wave function close to a heavy-atom nucleus was analyzed in Ref.~\cite{Skripnikov:17a}. Besides, a basis set correction has been calculated. For this we have considered the extended basis set LBasExt, which includes $44s, 40p, 31d, 24f, 15g, 9h, 8i$ basis functions. This correction has been calculated within the CCSD(T) method with frozen $1s..3d$ electrons; virtual orbitals were truncated at an energy of 150 Hartree.
To test convergence with respect to electron correlation effects, we have used the SBas basis set, which consists of $30s, 26p, 15d, 9f$ functions and equals the CVDZ~\cite{Dyall:06,Dyall:98} basis set augmented by diffuse functions.
To calculate the correlation correction, the $1s..3d$ electrons were excluded from the correlation treatment. In this way, correlation effects up to the level of the coupled cluster with single, double, triple and perturbative quadruple amplitudes, CCSDT(Q), method were considered. The contribution of the Gaunt interaction was calculated within the SBas basis set using the CCSD(T) method. In this calculation all electrons were correlated.
For a number of intermediate all-electron correlation calculations we also used the MBas basis set consisting of $31s, 27p, 18d, 13f, 4g, 1h$ functions and corresponding to the AAETZ basis set~\cite{Dyall:06}.
For the relativistic coupled cluster calculations the {\sc dirac15} \cite{DIRAC15} and {\sc mrcc} codes \cite{MRCC2013,Kallay:1,Kallay:2} were used. The code developed in Ref. \cite{Skripnikov:16b} has been used to compute point dipole magnetic HFS constant integrals. To treat the Gaunt interaction contributions we used the code developed in Ref.~\cite{Maison:2019}. The code to compute the Bohr-Weisskopf correction was developed in the present paper.
\section{Results and discussion}
\subsection{Hydrogen-like thallium \texorpdfstring{$1S_{1/2}$}{1S 1/2} }
Fig. \ref{figure:1} shows the calculated dependence of the hyperfine splitting of the ground electronic state of hydrogen-like ${}^{205}$Tl in the point nuclear magnetic dipole moment model (without the QED correction) on $r_c^{2\gamma-1}$ in different basis sets. One can see that the calculated dependence is in very good agreement with the analytical dependence given by Eq.~(\ref{dep}). The extrapolated value of the hyperfine splitting for the point-like nucleus (3.7041 eV) almost coincides with the analytical value obtained from Eq.~(\ref{eq1}) (3.7042 eV) for the GEOM2 basis set.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1.0\linewidth]{1.eps}
\end{center}
\caption{Calculated dependence of the hyperfine splitting, $\Delta E$, of the ground electronic state of hydrogen-like ${}^{205}$Tl on $r_c^{2\gamma-1}$ in the point nuclear magnetic dipole moment model in different basis sets (see text). $r_c$ is the nuclear charge radius.}
\label{figure:1}
\end{figure}
Figs. \ref{figure:2} and \ref{figure:3} show the calculated dependence of the hyperfine splittings of the ground electronic state of hydrogen-like $^{203}$Tl and $^{205}$Tl on the magnetic radius. Horizontal lines show the experimental energy splitting with the corresponding uncertainty, taken from Ref.~\cite{beiersdorfer2001hyperfine}. From these data it is possible to fix the magnetic radii within the adopted model of the magnetization distribution. For $^{205}$Tl one obtains $r_m / r_c^{exp} = $ 1.109(2) and for $^{203}$Tl $r_m / r_c^{exp} = $ 1.104(2).
Combining theoretical and experimental data, the coefficients $d_{nuc}=1.17$ for ${}^{203}$Tl and $d_{nuc}=1.18$ for ${}^{205}$Tl were obtained for the parametrization given by Eq.~(\ref{d_nuc_dep}).
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1.0\linewidth]{2.eps}
\end{center}
\caption{
Calculated dependence of the hyperfine splitting ($\Delta E$) of the ground electronic state of hydrogen-like ${}^{203}$Tl on the ratio $r_m / r_c^{exp}$ of the magnetic radius $r_m$ and the experimental charge radius $r_c^{exp}$. Solid and dashed horizontal lines show the experimental energy splitting value with the corresponding uncertainty from Ref.~\cite{beiersdorfer2001hyperfine}. Vertical lines show the extracted magnetic radius with the corresponding uncertainty.}
\label{figure:2}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1.0\linewidth]{3.eps}
\end{center}
\caption{
Calculated dependence of the hyperfine splitting ($\Delta E$) of the ground electronic state of hydrogen-like ${}^{205}$Tl on the ratio $r_m / r_c^{exp}$ of the magnetic radius $r_m$ and the experimental charge radius $r_c^{exp}$. Solid and dashed horizontal lines show the experimental energy splitting value with the corresponding uncertainty from Ref.~\cite{beiersdorfer2001hyperfine}. Vertical lines show the extracted magnetic radius with the corresponding uncertainty.}
\label{figure:3}
\end{figure}
\subsection{Neutral thallium
\texorpdfstring{${}^{205}$Tl}{Tl 205} in
\texorpdfstring{$6P_{1/2}$}{6P 1/2} state}
Table \ref{table:2} gives the calculated values of the ${}^{205}$Tl hyperfine structure constant for the $6P_{1/2}$ state for a number of magnetic radii. The last column gives the values for $r_m/r_c^{exp}=1.11$, which is close to the value obtained above from the analysis of the hyperfine splitting in hydrogen-like ${}^{205}$Tl.
\begin{table}[ht]
\centering
\begin{tabular}{lrrrr}
\hline
\hline
$r_m/r_c^{exp}$ & 0 & 1 & 1.1 & 1.11 \\
\hline
DHF &~ 18805 &~ 18681 &~ 18660 &~ 18658\\
CCSD & 21965 & 21807 & 21781 & 21778\\
CCSD(T) & 21524 & 21372 & 21347 & 21345\\
\hline
~+Basis corr. & -21 & -21 & -21 & -21 \\
~+CCSDT-CCSD(T) & +73 & +73 & +73 & +73 \\
~+CCSDT(Q)-CCSDT & -5 & -5 & -5 & -5\\
~+Gaunt & -83 & -83 & -83 & -83 \\
\hline
Total & 21488 & 21337 & 21312 & 21309\\
\hline
\hline
\end{tabular}
\caption{Calculated values of the hyperfine structure constant of the $6P_{1/2}$ state of ${}^{205}$Tl (in MHz) at different levels of theory.}
\label{table:2}
\end{table}
The final value of the HFS constant, taking into account the Bohr-Weisskopf correction, is 21309 MHz; it is in very good agreement with the experimental value 21310.8 MHz \cite{lurio1956hfs} and with previous studies (see Table~\ref{tableCompare}). One can estimate the theoretical uncertainty of the calculated HFS constant to be smaller than 1\%.
\subsection{Neutral thallium
\texorpdfstring{${}^{205}$Tl}{Tl 205} in
\texorpdfstring{$6P_{3/2}$}{6P 3/2} state}
Table \ref{table:5} gives the calculated values of the HFS constant for the $6P_{3/2}$ state of the ${}^{205}$Tl atom. One can see that correlation effects contribute dramatically to this constant. This has also been noted in previous studies of this state \cite{konovalova2017calculation, maartensson1995magnetic}. Interestingly, even quadruple cluster amplitudes give a non-negligible relative contribution to the HFS constant.
\begin{table}[ht]
\centering
\begin{tabular}{lrrrr}
\hline
\hline
$r_m/r_c^{exp}$ & 0 & 1 & 1.1 & 1.11 \\
\hline
DHF &~ 1415 &~ 1415 &~ 1415 &~ 1415\\
CCSD & 6 & 40 & 46 & 47\\
CCSD(T) & 244 & 273 & 278 & 278\\
\hline
~+Basis corr. & +4 & +4 & +4 & +4\\
~+CCSDT-CCSD(T) & -49 & -49 & -49 & -49 \\
~+CCSDT(Q)-CCSDT & +13 & +13 & +13 & +13 \\
~+Gaunt & +1 & +1 & +1 & +1 \\
\hline
Total & 214 & 243& 248 & 248\\
\hline
\hline
\end{tabular}
\caption{Calculated values of the hyperfine structure constant of the $6P_{3/2}$ state of ${}^{205}$Tl (in MHz) at different levels of theory}
\label{table:5}
\end{table}
The calculated value of the BW correction to the HFS constant of the $6P_{3/2}$ state of ${}^{205}$Tl is $-$16\%. It has the opposite sign with respect to the BW correction to the HFS constant of the $6P_{1/2}$ state (see Table \ref{table:6}).
\begin{table}[ht]
\centering
\begin{tabular}{lrrr}
\hline
\hline
$r_m/r_c^{exp}$ & 1 & 1.1 & 1.11 \\
\hline
$6P_{1/2}$ &~ 0.0070 & 0.0082 &~ 0.0083\\
$6P_{3/2}$ & -0.14 & -0.16 & -0.16\\
\hline
\hline
\end{tabular}
\caption{
Calculated values of the Bohr-Weisskopf correction to the hyperfine structure constants of the $6P_{1/2}$ and $6P_{3/2}$ states of ${}^{205}$Tl for different values of the $r_m/r_c^{exp}$ ratio.
}
\label{table:6}
\end{table}
\begin{table}[]
\centering
\begin{tabular}{lrr}
\hline
\hline
Author, Ref. &~ $6P_{1/2}$ &~ $6P_{3/2}$ \\
\hline
Kozlov et al. \cite{kozlov2001parity} & 21663 & 248 \\
Safronova et al. \cite{safronova2005excitation} & 21390 & 353 \\
M{\aa}rtensson-Pendrill \cite{maartensson1995magnetic} & 20860 & 256 \\
This work & 21488 & 214 \\
\hline
\hline
\end{tabular}
\caption{Total values of hyperfine structure constants in the point nuclear magnetic dipole moment model for ${}^{205}$Tl (in MHz) in comparison with previous studies.}
\label{tableCompare}
\end{table}
The obtained value of the HFS constant is in reasonable agreement with the experimental value 265 MHz \cite{gould1956hfs}. The theoretical uncertainty of the final value can be estimated as 10\%. Note, however, that this corresponds to about 2\% of the total correlation contribution -- compare the final value with the Dirac-Hartree-Fock (DHF) value.
\section{Hyperfine anomaly}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1.0\linewidth]{4.eps}
\end{center}
\caption{
Calculated dependence of the $6P_{1/2}$ hyperfine anomaly ${}^{205}\Delta^{203}$ on the difference of magnetic radii ${}^{205}r_m-{}^{203}r_m$. Solid and dashed horizontal lines indicate the experimental value and its uncertainty; calculated values are given by circles and vertical lines give the value of the magnetic radii difference and its uncertainty.}
\label{figure:4}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1.0\linewidth]{5.eps}
\end{center}
\caption{
Calculated dependence of the $6P_{3/2}$ hyperfine anomaly ${}^{205}\Delta^{203}$ on the difference of magnetic radii ${}^{205}r_m-{}^{203}r_m$. Solid and dashed horizontal lines indicate the experimental value and its uncertainty; calculated values are given by circles and vertical lines give the value of the magnetic radii difference and its uncertainty.}
\label{figure:5}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1.0\linewidth]{6.eps}
\end{center}
\caption{
Calculated dependence of the $7S_{1/2}$ hyperfine anomaly ${}^{205}\Delta^{203}$ on the difference of magnetic radii ${}^{205}r_m-{}^{203}r_m$. Solid and dashed horizontal lines indicate the experimental value and its uncertainty; calculated values are given by circles and vertical lines give the value of the magnetic radii difference and its uncertainty.}
\label{figure:6}
\end{figure}
The magnetic moments of $^{203}$Tl and $^{205}$Tl are known with good accuracy \cite{stone2005table}. The values of the HFS constants of the $6P_{1/2}$, $6P_{3/2}$ and $7S_{1/2}$ states have been measured precisely in Refs.~\cite{lurio1956hfs, gould1956hfs, chen2012absolute}. Thus, the experimental values of the magnetic anomalies ${}^{205}\Delta^{203}$ for these states are also known with high precision. Figs. \ref{figure:4}, \ref{figure:5} and \ref{figure:6} show
the calculated dependence of the anomalies for these states on the difference of magnetic radii ${}^{205}r_m-{}^{203}r_m$. In these calculations the charge radii of $^{203}$Tl and $^{205}$Tl were set to their experimental values.
Calculations were performed within the CCSD(T) method in the MBas basis set.
In Figs. \ref{figure:4}, \ref{figure:5} and \ref{figure:6}, solid and dashed horizontal lines show the experimental value and its uncertainty. From the intersection of the calculated dependence (obtained without treatment of the QED effects and approximated by linear functions) with the horizontal dashed lines, one obtains the difference of magnetic radii ${}^{205}r_m-{}^{203}r_m$ and its uncertainty in the model under consideration.
One can see from Figs. \ref{figure:4} and \ref{figure:5} that the differences extracted from the data for the $6P_{1/2}$ and $6P_{3/2}$ states agree within 10\%, which confirms the applicability of the model under consideration. They are also in good agreement with the difference obtained from the data for the $7S_{1/2}$ state of neutral Tl, as well as from the data for hydrogen-like Tl above --- within the experimental uncertainty for these systems.
\section{Magnetic moments of short lived isotopes}
Magnetic anomalies can be used to determine the nuclear magnetic moment of a short-lived isotope (see, for example, Refs.~\cite{0954-3899-37-11-113101,schmidt2018nuclear,barzakh2012hyperfine}).
For this one should know the nuclear magnetic moment of the stable isotope, as well as the HFS constants ($A[a]$ and $A[b]$) for two electronic states ($a$ and $b$) of both this isotope and the short-lived one. Consider isotopes 1 (${}^{205}$Tl, $I=1/2$) and 2 (${}^{193}$Tl, $I=9/2$). The latter is unstable.
From the experimental data \cite{barzakh2012hyperfine, lurio1956hfs, chen2012absolute} one obtains:
\begin{equation}
{}^1\theta^2[a,b]=
\frac{A_1[a]}{A_2[b]}\frac{A_2[a]}{A_1[b]}-1=
-0.013(7).
\end{equation}
Here $a=7S_{1/2}$, $b=6P_{1/2}$. The calculated value of the ratio of the magnetic anomalies for these states is
${}^1k^2[a,b]={}^{1}\Delta^{2}[a]/{}^{1}\Delta^{2}[b]=3.4(2)$. Such a ratio depends only slightly on the nuclear magnetization distribution model \cite{0954-3899-37-11-113101,schmidt2018nuclear}. Since ${}^1\theta^2[a,b]\approx {}^{1}\Delta^{2}[a]-{}^{1}\Delta^{2}[b]={}^{1}\Delta^{2}[b]\left({}^1k^2[a,b]-1\right)$, the anomaly for state $b$ is ${}^{1}\Delta^{2}[b]\approx-0.005$. One then obtains for the nuclear magnetic moment $\mu_2$ of isotope 2:
\begin{equation}
\mu_2=\mu_1\cdot\frac{A_2[b]}{A_1[b]}\cdot\frac{I_2}{I_1}
\cdot(1+{}^{1}\Delta^{2}[b])=3.84(3).
\end{equation}
Using the same method and the experimental data from Refs.~\cite{barzakh2012hyperfine, lurio1956hfs, chen2012absolute}, one obtains the nuclear magnetic moment of the ${}^{191}$Tl isotope with nuclear spin $I=9/2$: $\mu_{191}=+3.79(2)$.
The obtained values of $\mu_{191}$ and $\mu_{193}$ are in very good agreement with Ref.~\cite{barzakh2012hyperfine}: $\mu_{193}=+3.82(3)$ and $\mu_{191}=+3.78(2)$.
\section{Conclusion}
In the present paper the Bohr-Weisskopf effect has been calculated for the $6P_{1/2}$, $6P_{3/2}$ and $7S_{1/2}$ states of several isotopes of the Tl atom. The uniformly magnetized ball model of the nuclear magnetization distribution has been tested and used.
It was found that correlation effects contribute strongly to the HFS constant of the $6P_{3/2}$ state (they amount to about 470\% of the final value), as well as to the BW effect for this state. The BW correction for the $6P_{3/2}$ state was found to be about $-$16\%. Such a significant contribution makes it possible to test models of the nuclear magnetization distribution.
Combining the obtained theoretical values of the magnetic anomalies with the available experimental data, the nuclear magnetic moments of the short-lived ${}^{191}$Tl and ${}^{193}$Tl isotopes were predicted and found to be in good agreement with the previous study~\cite{barzakh2012hyperfine}.
It was demonstrated that Gaussian basis sets can be used for such calculations. Thus, the applied method can be extended to the calculation of the BW effect in molecules.
For the calculated value of the HFS constant of the $6P_{1/2}$ state, very good agreement with experiment and a small theoretical uncertainty have been obtained. A further improvement can be achieved by treating the contribution of QED effects.
\section{Acknowledgment}
We are grateful to Prof. M. Kozlov, Prof. I. Mitropolsky and Dr. Yu. Demidov for helpful discussions. Electronic structure calculations were performed at the PIK data center of NRC ``Kurchatov Institute'' -- PNPI.
\section{Introduction}
\label{sec:intro}
Keyword detection is like searching for a needle in a haystack: the detector must listen to continuously streaming audio, ignoring nearly all of it, yet still triggering correctly and instantly. In the last few years, with the advent of voice assistants, keyword spotting has become a common way to initiate a conversation with them (e.g. ``Ok Google'', ``Alexa'', or ``Hey Siri''). As the assistant use cases spread through a variety of devices, from mobile phones to home appliances and further into the internet-of-things (IoT), many of them battery powered or with restricted computational capacity, it is important for the keyword spotting system to be both high-quality and computationally efficient.
Neural networks are core to state-of-the-art keyword spotting systems. These solutions, however, are not developed as a single deep neural network (DNN). Instead, they are traditionally composed of different subsystems, independently trained and/or manually designed. For example, a typical system comprises three main components: 1) a signal processing frontend, 2) an acoustic encoder, and 3) a separate decoder. Of those components, it is the last two that make use of DNNs, along with a wide variety of decoding implementations. These range from traditional approaches that make use of a Hidden Markov Model (HMM) to classify acoustic features from a DNN into ``keyword'' and ``background'' (i.e. non-keyword speech and noise) classes \cite{Alexa16, Alexa17, AlexaRaw17, AlexaDelayed18, Alexa18}. Simpler derivatives of that approach perform a temporal integration computation that verifies that the outputs of the acoustic model are high in the right sequence for the target keyword, in order to produce a single detection likelihood score~\cite{Hotwordv1, Agc15, Cascade17, HeySiri17, AlexaCompress16}. Other recent systems make use of CTC-trained DNNs, typically recurrent neural networks (RNNs) \cite{Ctc17}, or even sequence-to-sequence trained models that rely on beam search decoding \cite{Custom17}. This last family of systems is the closest to being considered end-to-end; however, they are generally too computationally complex for many embedded applications.
Optimizing components independently, however, adds complexity and is suboptimal in quality compared to optimizing them jointly. Deployment also suffers from the extra complexity, making it harder to optimize resources (e.g., processing power and memory consumption). The system described in this paper addresses those concerns by learning both the encoder and decoder components in a single deep neural network, jointly optimized to directly produce the detection likelihood score. This system could be trained to subsume the signal processing frontend as well, as in \cite{AlexaRaw17, Raw15}, but it is typically computationally costly to replace highly optimized fast Fourier transform implementations with a neural network of equivalent quality; we consider exploring this in the future. Overall, we find that this system provides state-of-the-art quality across a number of audio and speech conditions compared to a traditional, non end-to-end baseline system described in \cite{Cnn15}. Moreover, the proposed system significantly reduces the resource requirements for deployment, cutting computation and size by over five times compared to the baseline system.
The rest of the paper is organized as follows. In Section \ref{sec:system} we present the architecture of the keyword spotting system; in particular the two main contributions of this work: the neural network topology, and the end-to-end training methodology. Next, in Section \ref{sec:setup} we describe the experimental setup, and the results of our evaluations in Section \ref{sec:results}, where we compare against the baseline approach of \cite{Cnn15}. Finally, we conclude with a discussion of our findings in Section \ref{sec:conclusion}.
\section{End-to-End system}
\label{sec:system}
This paper proposes a new end-to-end keyword spotting system that subsumes both the encoding and decoding components into a single neural network, which can be trained to directly produce an estimate (i.e., a score) of the presence of a keyword in streaming audio. The following two sections cover the efficient memoized neural network topology being utilized, as well as the method to train the end-to-end neural network to directly produce the keyword spotting score.
\subsection{Efficient memoized neural network topology}
\label{svdf}
We make use of a type of neural network layer topology called SVDF (single value decomposition filter), originally introduced in~\cite{Svdf15} to approximate a fully-connected layer with a low-rank approximation. As proposed in~\cite{Svdf15} and depicted in Equation \ref{eq:basic-eqn-rank-1}, the activation $a$ for each node $m$ in the rank-1 SVDF layer at a given inference step $t$ can be interpreted as performing a mix of selectivity in time ($\mathbf{\alpha}^{(m)}$) with selectivity in the feature space ($\mathbf{\beta}^{(m)}$) over a sequence of input vectors $\mathbf{x}_t = [ \mathbf{X}_{t - T}, \cdots , \mathbf{X}_{t}]$, each of size $F$.
\begin{equation}
a^{(m)}_t
=
f\left( \sum_{i=0}^{T - 1} \mathbf{\alpha}^{(m)}_{i} \sum_{j=1}^F \mathbf{\beta}^{(m)}_{j} x_{(t - T + i), j} \right)
\label{eq:basic-eqn-rank-1}
\end{equation}
This is equivalent to performing, on an SVDF layer of $N$ nodes, $N\times T$ 1-D convolutions of the feature filters $\mathbf{\beta}$ (by ``sliding'' each of the $N$ filters on the input feature frames, with a stride of $F$), and then filtering each of the $N$ output vectors (of size $T$) with the time filters $\mathbf{\alpha}$.
\begin{figure}
\centering
\includegraphics[height=180pt]{SVDF.pdf}
\caption{A single node (\emph{m}) in the SVDF layer.}
\label{fig:svdf_node}
\end{figure}
A more general and efficient interpretation, depicted in Figure \ref{fig:svdf_node}, is that the layer processes a single input vector $\mathbf{X}_{t}$ at a time. Thus, for each node $m$, the input $\mathbf{X}_{t}$ goes through the feature filter $\mathbf{\beta}^{(m)}$, and the resulting scalar output is concatenated to those $T-1$ computed in previous inference steps. The memory is initialized to zero during training for the first $T$ inferences. Finally, the time filter $\mathbf{\alpha}^{(m)}$ is applied to them. This is how stateful networks work: the layer is able to memorize the past within its state. Unlike other stateful approaches \cite{Fsmn15} and typical recurrent layers, the SVDF neither feeds the outputs back into the state (memory) nor rewrites the entire state with each inference. Instead, the memory keeps each inference's state isolated from subsequent runs, just pushing new entries and popping old ones based on the memory size $T$ configured for the layer. This also means that stacking SVDF layers extends the receptive field of the network. For example, a DNN with $D$ stacked layers, each with a memory of $T$, takes into account inputs as old as $\mathbf{X}_{t-D\times (T-1)}$. This approach works very well for streaming execution, as in speech, text, and other sequential processing, where we constantly process new inputs from a large, possibly infinite sequence but do not want to attend to all of it. An implementation is available at \cite{TfliteSvdf}.
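To make the stateful inference concrete, the following is a minimal sketch of a rank-1 SVDF layer in Python/NumPy (our own illustration, with random placeholder weights and an assumed ReLU nonlinearity; it is not the implementation of \cite{TfliteSvdf}).
\begin{verbatim}
import numpy as np

class SVDFLayer:
    # Minimal sketch of stateful rank-1 SVDF inference (illustrative only;
    # random placeholder weights, ReLU assumed as the nonlinearity f).
    def __init__(self, num_nodes, feature_dim, memory_size, seed=0):
        rng = np.random.default_rng(seed)
        self.beta = rng.standard_normal((num_nodes, feature_dim))   # feature filters
        self.alpha = rng.standard_normal((num_nodes, memory_size))  # time filters
        self.memory = np.zeros((num_nodes, memory_size))            # zero-initialized state

    def step(self, x_t):
        # Feature filtering of the current input frame: one scalar per node.
        s = self.beta @ x_t
        # Push the new scalars into the memory and pop the oldest ones.
        self.memory = np.concatenate([self.memory[:, 1:], s[:, None]], axis=1)
        # Time filtering over the memorized scalars, then the nonlinearity.
        return np.maximum((self.alpha * self.memory).sum(axis=1), 0.0)
\end{verbatim}
Stacking such layers only requires feeding one layer's output vector as the next layer's input frame, which is what extends the receptive field as discussed above.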
This layer topology offers a number of benefits over other approaches. Compared with the convolutions used in \cite{Cnn15}, it allows finer-grained control of the number of parameters and computations, given that the SVDF is composed of several relatively small filters. This is useful when selecting a tradeoff between quality, size, and computation. Moreover, because of this characteristic, the SVDF allows creating very small networks that outperform other topologies operating at larger granularity (e.g., our first-stage, always-on network has about 13K parameters~\cite{Cascade17}). The SVDF also pairs very well with linear ``bottleneck'' layers to significantly reduce the parameter count, as in \cite{Bottleneck08, Bottleneck12} and more recently in \cite{AlexaCompress16}; and because it allows for creating evenly sized deep networks, such layers can be inserted throughout the network, as in Figure \ref{fig:topology}. Another benefit is that, due to the explicit sizing of the receptive field, it allows fine-grained control over how much of the past to remember. This has resulted in the SVDF outperforming RNN-LSTMs, which do not benefit from, and are potentially hurt by, paying attention to a theoretically infinite past. It also avoids the complex logic needed to reset the state every few seconds, as in \cite{Custom17}.
\subsection{Method to train the end-to-end neural network}
\label{training}
The goal of our end-to-end training is to optimize the network to produce the likelihood score, and to do so as precisely as possible. This means having a high score right where the last bit of the keyword is present in the streaming audio, and not before and particularly not much after (i.e., a ``spiky'' behaviour is desirable). This is important since the system is bound to an operating point defined by a threshold (between $0$ and $1$) chosen to strike a balance between false-accepts and false-rejects, and a smooth likelihood curve would add variability to the firing point. Moreover, any time between the true end of the keyword and the point where the score meets the threshold becomes latency in the system (e.g., the ``assistant'' will be slow to respond) -- a common drawback of CTC-trained RNNs \cite{Ctc15} that we aim to avoid.
\subsubsection{Label generation}
\label{label_gen}
We generate input sequences composed of pairs \texttt{<$\mathbf{X}_{t}$,$c$>}, where $\mathbf{X}$ is a 1D tensor corresponding to log-mel filter-bank energies produced by a front-end as in \cite{Hotwordv1, Svdf15, Cnn15}, and $c$ is the class label (one of $\{0, 1\}$). The annotated audio utterances are first force-aligned, using a large LVCSR system, to break up the components of the keyword \cite{Speech12}. For example, ``ok google'' is broken into: ``ou'', ``k'', ``eI'', ``\texttt{<silence>}'', ``g'', ``u'', ``g'', ``@'', ``l''. Then we assign a label of $1$ to all sequence entries that are part of a true keyword utterance and correspond to the last component of the keyword (``l'' in our ``Ok google'' example). All other entries are assigned a label of $0$, including those that are part of the keyword but are not its last component. See Figure \ref{fig:labeling}. Additionally, we tweak the label generation by adding a fixed number of entries with a label of $1$, starting from the first vector $\mathbf{X}_{t}$ corresponding to the final keyword component (e.g., ``l''), with the intention of balancing the number of negative and positive examples, in the same spirit as \cite{Alexa16}. This proved important for stable training, as otherwise the negative examples overpowered the positive ones. A minimal labeling sketch is given after the figure.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{labeling.pdf}
\caption{Input sequence generation for ``Ok google''.}
\label{fig:labeling}
\end{figure}
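The labeling described above can be sketched as follows (our own illustration: \texttt{alignment} is a list of \texttt{(component, start\_frame, end\_frame)} tuples from the forced aligner, with end frames exclusive, and \texttt{extra\_positives} is an illustrative value for the fixed number of additional positive entries).
\begin{verbatim}
def frame_labels(alignment, last_component="l", extra_positives=5):
    # Assign label 1 to frames of the last keyword component, extended by
    # a fixed number of frames to balance positives vs. negatives; all
    # other frames (including earlier keyword components) get label 0.
    num_frames = max(end for _, _, end in alignment)
    labels = [0] * num_frames
    for comp, start, end in alignment:
        if comp == last_component:
            for t in range(start, min(end + extra_positives, num_frames)):
                labels[t] = 1
    return labels
\end{verbatim}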
\subsubsection{Training recipe}
\label{recipe}
The end-to-end training uses a simple frame-level cross-entropy (CE) loss that, for the feature vector $\mathbf{X}_{t}$, is defined by $\lambda_{t}(\mathbf{W}) = - \log y_{c_t}(\mathbf{X}_{t}, \mathbf{W})$, where $\mathbf{W}$ are the parameters of the network and $y_{i}(\mathbf{X}_{t}, \mathbf{W})$ is the $i$th output of the final softmax. Our training recipe uses asynchronous stochastic gradient descent (ASGD) to produce a single neural network that can be fed streaming input features and produce a detection score. We propose two variants of this recipe:
\textbf{Encoder+decoder}. A two-stage training procedure: first we train an acoustic encoder, as in \cite{Hotwordv1,Svdf15,Cnn15}, and then a decoder on the outputs of the encoder (rather than the filterbank energies) and the labels from \ref{label_gen}. We do this in a single DNN by creating a final topology composed of the encoder and its pre-trained parameters (including the softmax), followed by the decoder. See Figure \ref{fig:topology}. During the second stage of training the encoder's parameters are frozen, such that only the decoder is trained. This recipe is useful for models that tend to overfit to subsets of the entire training dataset.
\textbf{End-to-end}. In this option, we train the DNN end-to-end directly, with the sequences from \ref{label_gen}. The DNN may be of any topology, but we use that of the encoder+decoder, except for the intermediate encoder softmax (now an unnecessary information bottleneck). See Figure \ref{fig:topology}. As with the encoder+decoder recipe, we can also initialize the encoder section with a pre-trained model, and use an adaptation rate in $[0, 1]$ to tune how much the encoder section is adjusted (e.g., a rate of 0 is equivalent to the encoder+decoder recipe); a sketch follows below. This end-to-end pipeline, where all of the topology's parameters are adjusted, tends to outperform the encoder+decoder one, particularly for smaller models, which are less prone to overfitting.
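One way to realize the adaptation rate is to give the encoder parameters a scaled learning rate; the following sketch assumes a PyTorch-style setup (an assumption for illustration only; the actual pipeline uses ASGD as described above).
\begin{verbatim}
import torch

def make_optimizer(encoder, decoder, base_lr=0.01, adaptation_rate=0.5):
    # Encoder parameters learn at a scaled rate: rate 0 freezes the encoder
    # (recovering the encoder+decoder recipe), rate 1 adapts everything.
    return torch.optim.SGD([
        {"params": encoder.parameters(), "lr": base_lr * adaptation_rate},
        {"params": decoder.parameters(), "lr": base_lr},
    ])
\end{verbatim}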
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{topology.pdf}
\caption{End-to-end topology trained to predict the keyword likelihood score. Bottleneck layers reduce parameters and computation. The intermediate softmax is used in encoder+decoder training only.}
\label{fig:topology}
\end{figure}
\section{Experimental setup}
\label{sec:setup}
In order to determine the effectiveness of our approach, we compare against a known keyword spotting system proposed in \cite{Cnn15}. This section describes the setups used in the results section.
\subsection{Front-end}
\label{frontend}
Both setups use the same front-end, which generates 40-dimensional log-mel filter-bank energies out of 30ms windows of streaming audio, with overlaps of 10ms. The front-end can be queried to produce a sequence of contiguous frames centered around the \emph{current} frame $\mathbf{x}_t = [ \mathbf{X}_{t - C_l}, \cdots , \mathbf{X}_{t}, \cdots, \mathbf{X}_{t + C_r}] $. Older frames are said to form the left context $C_l$, and newer frames form the right context $C_r$. Additionally, the sequences can be requested with a given stride $\sigma$.
\subsection{Baseline model setup}
\label{baseline}
Our baseline system (Baseline\_1850K) is taken from \cite{Cnn15}. It consists of a DNN trained to predict subword targets within the keywords. The input to the DNN consists of a sequence with $C_l=30$ frames of left and $C_r=10$ frames of right context, each with a stride of $\sigma=3$. The topology consists of a 1-D convolutional layer with 92 filters (of shape 8x8 and stride 8x8), followed by 3 fully-connected layers with 512 nodes and a rectified linear unit activation each. A final softmax output predicts 9 subword targets (``k'' and ``h'' share a label for ``Ok/Hey Google'' detection), obtained from the same forced-alignment process described in \ref{label_gen}. This results in a baseline DNN containing 1.7M parameters, performing 1.8M multiply-accumulate operations per inference (every 30ms of streaming audio). A keyword spotting score between 0 and 1 is computed by first smoothing the posterior values, averaging them over a sliding window of the previous 100 frames with respect to the current time $t$; the score is then defined as the largest product of the smoothed posteriors in the sliding window, as originally proposed in \cite{Agc15}.
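For concreteness, the smoothing-and-scoring step can be sketched as follows (a rough Python/NumPy illustration of the cited recipe, with illustrative window sizes; the deployed implementation may differ in details such as normalization).
\begin{verbatim}
import numpy as np

def keyword_score(posteriors, w_smooth=30, w_max=100):
    # posteriors: (frames, n_subwords), keyword subword columns only.
    frames, n_sub = posteriors.shape
    smoothed = np.zeros_like(posteriors)
    for t in range(frames):
        lo = max(0, t - w_smooth + 1)
        smoothed[t] = posteriors[lo:t + 1].mean(axis=0)  # moving average
    scores = np.zeros(frames)
    for t in range(frames):
        lo = max(0, t - w_max + 1)
        # Geometric mean of per-subword maxima within the sliding window.
        scores[t] = np.prod(smoothed[lo:t + 1].max(axis=0)) ** (1.0 / n_sub)
    return scores
\end{verbatim}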
\subsection{End-to-end model setup}
\label{end2end}
The end-to-end system (prefix E2E) uses the DNN topology depicted in Figure \ref{fig:topology}, and all SVDF layers are of $rank=1$. We present results with 3 distinct size configurations (infixes 700K, 318K, and 40K), each representing the approximate number of parameters, and the 2 training recipe variants (suffixes 1stage and 2stage), corresponding to end-to-end and encoder+decoder training respectively, as described in \ref{recipe}. The input to all DNNs consists of a sequence with $C_l=1$ frame of left and $C_r=1$ frame of right context, each with a stride of $\sigma=2$. More specifically, the E2E\_700K model uses $N=1280$ nodes in the first 4 SVDF layers, each with a memory $T=8$, with intermediate bottleneck layers each of size $64$; the following 3 SVDF layers have $N=32$ nodes, each with a memory $T=32$. This model performs 350K multiply-accumulate operations per inference (every 20ms of streaming audio). The E2E\_318K model uses $N=576$ nodes in the first 4 SVDF layers, each with a memory $T=8$, with intermediate bottleneck layers each of size $64$; the remaining layers are the same as in E2E\_700K. This model performs 159K multiply-accumulate operations per inference. Finally, the E2E\_40K model uses $N=96$ nodes in the first 4 SVDF layers, each with a memory $T=8$, with intermediate bottleneck layers each of size $32$; the remaining layers are the same as in the other two models. This model performs 20K multiply-accumulate operations per inference.
\subsection{Dataset}
\label{ssec:data}
The training data for all experiments consists of 1 million anonymized hand-transcribed utterances of the keywords ``Ok Google'' and ``Hey Google'', with an even distribution. To improve robustness, we create ``multi-style'' training data by synthetically distorting the utterances, simulating the effect of background noise and reverberation; 8 distorted utterances are created for each original utterance, with noise samples extracted from environmental recordings of everyday events, music, and YouTube videos. Results are reported on four sets representative of various environmental conditions: \emph{Clean non-accented} contains 170K non-accented English utterances of the keywords in ``clean'' conditions, plus 64K samples without the keywords (1K hours); \emph{Clean accented} has 153K English utterances of the keywords with Australian, British, and Indian accents (also in ``clean'' conditions), plus 64K samples without the keywords (1K hours); \emph{High pitched} has 1K high-pitched utterances of the keywords, and 64K samples without them (1K hours); \emph{Query logs} contains 110K keyword and 21K non-keyword utterances, collected from anonymized voice search queries. This last set contains background noises from real usage conditions.
\section{Results}
\label{sec:results}
Our goal is to compare the effectiveness of the proposed approach against the baseline system described in \cite{Cnn15}. Inference is performed in floating point, though unreported experiments with TensorFlow Lite's quantization~\cite{TfliteSvdfHybridQuant} showed no meaningful degradation. We evaluate the false-reject (FR) and false-accept (FA) tradeoff across several end-to-end models of distinct sizes and computational complexities. As can be seen in the Receiver Operating Characteristic (ROC) curves in Figure \ref{fig:roc}, the 2 largest end-to-end models, with 2-stage training, significantly outperform the recognition quality of the much larger and more complex Baseline\_1850K system. More specifically, E2E\_318K\_2stage and E2E\_700K\_2stage show up to 60\% relative FR rate reduction over Baseline\_1850K in most test conditions. Moreover, E2E\_318K\_2stage uses only about 26\% of the computations that Baseline\_1850K uses (after normalizing their execution rates over time), yet still shows significant improvements. We also explore end-to-end models at a size that, as described in \cite{Cascade17}, is small enough (in both size and computation) to be executed continuously with very little power consumption. These 2 models, E2E\_40K\_1stage and E2E\_40K\_2stage, also compare end-to-end training (\emph{1stage}) with encoder+decoder training (\emph{2stage}). The ROC curves show that \emph{1stage} training outperforms \emph{2stage} training in all conditions, and particularly in both ``clean'' environments, where it gets fairly close to the performance of the baseline setup. That is a significant achievement considering that E2E\_40K\_1stage has only 2.3\% of the parameters and performs 3.2\% of the computations of Baseline\_1850K. Table \ref{tab:perfdiff} compares the recognition quality of all setups at a fixed, very low false-accept rate of 0.1 FA per hour on a dataset containing only negative (i.e., non-keyword) utterances; the table thus shows the false-reject rates at that operating point. Here we can appreciate trends similar to those described above: the 2 largest end-to-end models outperform the baseline across all datasets, reducing the FR rate by about 40\% in the clean conditions, and by 20\%--40\% on the other 2 sets depending on the model size. The table also shows how \emph{1stage} outperforms \emph{2stage} for small-sized models, and achieves FR rates similar to Baseline\_1850K in clean conditions.
\begin{table}[h!]
\begin{center}
\caption{False-reject (FR) rates over the four test conditions at the 0.1~FA/h operating point.}
\label{tab:perfdiff}
\begin{tabular}{l|r|r|r|r}
\textbf{Models} & Clean non-acc. & Clean acc. & High pitched & Query logs \\
\hline
Baseline\_1850K & 1.46\% & 2.03\% & 14.41\% & 12.13\% \\
E2E\_318K\_2stage & 0.87\% & 1.27\% & 11.39\% & 9.99\% \\
E2E\_700K\_2stage & 0.87\% & 1.22\% & 8.57\% & 8.90\% \\
E2E\_40K\_2stage & 2.80\% & 5.19\% & 26.54\% & 39.22\% \\
E2E\_40K\_1stage & 1.52\% & 2.09\% & 23.73\% & 35.32\% \\
\end{tabular}
\end{center}
\end{table}
\begin{figure}[htb]
\begin{minipage}[b]{\linewidth}
\centering
\centerline{\includegraphics[height=137pt]{roc_cleanUS.pdf}}
\centerline{(a) Clean non-accented}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{\linewidth}
\centering
\centerline{\includegraphics[height=137pt]{roc_cleanAccented.pdf}}
\centerline{(b) Clean accented}\medskip
\end{minipage}
%
\begin{minipage}[b]{\linewidth}
\centering
\centerline{\includegraphics[height=137pt]{roc_highpitched.pdf}}
\centerline{(c) High pitched voices}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{\linewidth}
\centering
\centerline{\includegraphics[height=137pt]{roc_logs.pdf}}
\centerline{(d) Anonymized query logs}\medskip
\end{minipage}
%
\caption{ROC curves under different conditions.}
\label{fig:roc}
%
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We presented a system for keyword spotting that, by combining an efficient topology and two types of end-to-end training, significantly outperforms previous approaches at a much lower cost in size and computation. We specifically show that it beats the performance of a setup taken from \cite{Cnn15} with models over 5 times smaller, and even approaches the same performance with a model over 40 times smaller. A further benefit of our approach is that it requires nothing other than a front-end and a neural network to perform the detection, and thus it is easier to extend to new keywords and/or fine-tune with new training data. Future work includes exploring other loss functions, as well as generalizing to multi-channel support.
\clearpage
\bibliographystyle{IEEEbib}
\section{Introduction}
The problem of designing robust deterministic Model Predictive Control (MPC) schemes has nowadays many solutions, see for example \cite{MagniBook07,rawbook,LimonEtAl-NMPC09}. However, the proposed approaches are in general computationally very demanding, since they either require the solution of difficult on-line min-max optimization problems, e.g.,~\cite{MagniDeNScatAll03}, or the off-line computation of polytopic robust positive invariant sets, see \cite{mayne2005robust}. In addition, they are conservative, mainly because they (implicitly or explicitly) rely on worst-case approaches. Moreover, when the uncertainties/disturbances are characterized as stochastic processes and constraints must be reformulated in a probabilistic framework \cite{Yaesh2003351,Hu2012477}, worst-case deterministic methods do not take advantage of the available knowledge of the process noise, such as its probability density function, and cannot even guarantee recursive feasibility in case of possibly unbounded disturbances.\\
Starting from the pioneering works \cite{Schwarm99,Li02}, these reasons have motivated the development of MPC algorithms for systems affected by stochastic noise and subject to probabilistic state and/or input constraints. Mainly two classes of algorithms have been developed so far. The first one relies on the randomized, or scenario-based, approach, see e.g., \cite{Batina,Calaf,BlackmoreOnoBektassovWilliams-TR10}, a very general methodology that allows one to consider linear or nonlinear systems affected by noise with general distributions and possibly unbounded and non-convex support. As a main drawback, randomized methods are still computationally very demanding for practical implementations, and their feasibility and convergence properties are difficult to prove.\\
The second approach, referred to in \cite{Cogill} as the probabilistic approximation method, is based on the point-wise reformulation of probabilistic, or expectation, constraints in deterministic terms to be included in the MPC formulation. Interesting intermediate methods have been proposed in \cite{BernardiniBemporad-TAC12}, where a finite number of disturbance realizations are assumed, and in \cite{Korda}, where constraints averaged over time are considered. Among the wide class of probabilistic approximation algorithms, a further distinction can be based on the noise support assumptions, which can be either bounded, e.g., as in \cite{Korda,KouvaritakisCannonRakovicCheng-Automatica10,CannonChengKouvaritakisRakovic-Automatica12},
or unbounded, see for instance \cite{Bitmead,Primbs,Hokaymen,Cogill,CannonKouvaritakisWu-Automatica09,Ono-ACC12}. While recursive feasibility and convergence can be established for bounded disturbances, the more general case of unbounded noise poses more difficulties, and specific solutions and reformulations of these properties have been adopted: for example, in \cite{CannonKouvaritakisWu-Automatica09} the concept of invariance with probability $p$ is used, while in \cite{Ono-ACC12} the definition of probabilistic resolvability is introduced. Also, linear systems with known state have generally been considered, with the notable exceptions of \cite{Bitmead,Hokaymen,CannonChengKouvaritakisRakovic-Automatica12}, where output feedback methods have been proposed.\\
Finally, it must be remarked that some of the mentioned approaches have been successfully applied in many applicative settings, such as building temperature regulation \cite{OldewurtelParisioJonesMorariEtAl-ACC10} and automotive applications \cite{GrayBorrelli-ITSC13,BlackmoreOnoBektassovWilliams-TR10,BichiRipaccioliDiCairanoBernardiniBemporadKolmanovsky-CDC10}.\\
In this paper, an output feedback algorithm for linear discrete-time systems affected by a possibly unbounded additive noise is proposed. In case the noise distribution is unknown, the chance constraints on the inputs and state variables are reformulated by means of the Chebyshev--Cantelli inequality \cite{Cantelli}, as originally proposed in \cite{Locatelli} for the design of decentralized controllers and in \cite{Pala} in the context of MPC. Later, this approach has also been considered in \cite{Cogill,GrayBorrelli-ITSC13}, and used to develop preliminary versions of the algorithm proposed here in \cite{FGMS13_CDC,FGMS14_IFAC}. With respect to \cite{FGMS13_CDC,FGMS14_IFAC}, this paper presents our control approach in a consistent and detailed fashion. In particular, we also address the case where the noise distribution is known (i.e., Gaussian), and we address algorithm implementation aspects, proposing two novel and theoretically well-founded approximated schemes together with full implementation details. The computational load of the algorithm can be made similar to that of a standard stabilizing MPC algorithm with a proper choice of the design parameters. Importantly, the computation of robust positively invariant sets is not required and, in view of its simplicity and lightweight computational requirements, the proposed approach can be applied to medium/large-scale problems. The recursive feasibility of the proposed algorithm is guaranteed by a switching MPC strategy which does not require any relaxation technique, and the convergence of the state to a suitable neighborhood of the origin is proved.\\
The paper is organized as follows. In Section \ref{sec:problem_statement} we first introduce the main control problem, then we define and properly reformulate the probabilistic constraints.
In Section \ref{MPC} we formulate the Stochastic MPC optimization problem and give the general theoretical results. Section \ref{sec:num_implementation} is devoted to implementation issues, while in Section \ref{sec:example} two examples are discussed in detail: the first one is analytic and is aimed at comparing the conservativeness of the algorithm to that of the well-known tube-based approach \cite{mayne2005robust}, while the second one is numerical and allows for a comparison of the different algorithm implementations. Finally, in Section \ref{sec:conclusions} we draw some conclusions. For clarity of exposition, the proofs of the main theoretical results are postponed to the Appendix.\\
\textbf{Notation}. The symbols $\succ$ and $\succeq$ (respectively, $\prec$ and $\preceq$) denote positive definite and positive semidefinite (respectively, negative definite and negative semidefinite) matrices. The point-to-set distance from $\zeta$ to $\mathcal{Z}$ is $\mathrm{dist}(\zeta,\mathcal{Z}):=\inf\{\|\zeta-z\|,z\in\mathcal{Z}\}$.
%
\section{Problem statement}
\label{sec:problem_statement}
\subsection{Stochastic system and probabilistic constraints}
%
Consider the following discrete-time linear system
\begin{equation}
\left\{\begin{array}{l}
x_{t+1}=Ax_t+Bu_t+Fw_t \quad t\geq 0\\
y_{t}=Cx_t+v_t
\end{array}\right .
\label{eq:model}\end{equation}
where
$x_t\in \mathbb{R}^n$
is the state,
$u_t\in\mathbb{R}^m$
is the input,
$y_t\in\mathbb{R}^p$
is the measured output and
$w_t\in\mathbb{R}^{n_w}, v_t\in\mathbb{R}^{p}$
are two independent, zero-mean, white noises with covariance matrices $W\succeq 0$ and $V\succ0$, respectively, and a-priori unbounded support. The pair $(A,C)$ is assumed to be observable, and the pairs $(A,B)$ and $(A,\tilde{F})$ are reachable, where matrix $\tilde{F}$ satisfies $\tilde{F}\tilde{F}^T=FWF^T$.\\
%
Polytopic constraints on the state and input variables of system \eqref{eq:model} are imposed in a probabilistic way, i.e., it is required that, for all $t\geq 0$
\begin{align}\label{eq:prob_constraint_state}
\mathbb{P}\{b_r^Tx_{t}\geq x^{ max}_r\}&\leq p_r^x \quad r=1,\dots, n_r\\
\label{eq:prob_constraint_input}
\mathbb{P}\{c_s^Tu_{t}\geq u^{ max}_s\}&\leq p_s^u \quad s=1,\dots, n_s
\end{align}
where $\mathbb{P}(\phi)$ denotes the probability of $\phi$, $b_r$, $c_s$ are constant vectors, $x_r^{ max}$, $u_s^{ max}$ are bounds for the state and control variables, and $p^x_r, p^u_s$ are design parameters.
It is also assumed that the set of relations $b_r^Tx\leq x^{ max}_r$, $r=1,\dots, n_r$ (respectively, $c_s^Tu\leq u^{ max}_s$, $s=1,\dots, n_s$), defines a convex set $\mathbb{X}$ (respectively, $\mathbb{U}$) containing the origin in its interior.
\subsection{Regulator structure}
For system~\eqref{eq:model}, we want to design a standard regulation scheme made by the state observer
\begin{equation}\label{eq:observer1}
\hat{x}_{t+1}=A\hat{x}_{t}+Bu_t+L_{t}(y_t-C\hat{x}_{t})
\end{equation}
coupled with the feedback control law
\begin{equation}\label{eq:fb_control_law_ideal}
u_{t}=\bar{u}_{t}-K_{t}(\hat{x}_t-\bar{x}_{t})
\end{equation}
where $\bar{x}$ is the state of the nominal model
\begin{equation}\label{eq:mean_value_evolution_freeIC}
\bar{x}_{t+1}=A\bar{x}_{t}+B\bar{u}_{t}
\end{equation}
In \eqref{eq:observer1}, \eqref{eq:fb_control_law_ideal}, the feedforward term $\bar{u}_{t}$ and the gains $L_{t}$, $K_{t}$ are design parameters to be selected to guarantee convergence properties and the fulfillment of the probabilistic constraints \eqref{eq:prob_constraint_state}, \eqref{eq:prob_constraint_input}.\\
Letting
\begin{subequations}
\label{eq:errors}
\begin{align}
e_{t}&=x_{t}-\hat{x}_{t}\label{eq:obs_error}\\
\varepsilon_{t}&=\hat{x}_{t}-\bar{x}_{t}\label{est_error}
\end{align}
\end{subequations}
from \eqref{eq:errors} we obtain that
\begin{equation}\label{eq:error2}
\delta x_{t}=x_{t}-\bar{x}_{t}=
e_{t}+\varepsilon_{t}
\end{equation}
Define also the vector
$\sigma_{t}=\begin{bmatrix}e_{t}^T& \varepsilon_{t}^T\end{bmatrix}^T$
whose dynamics, according to \eqref{eq:model}-\eqref{eq:errors}, is described by
\begin{equation}\label{eq:error_matrix}
\begin{array}{ll}
\sigma_{t+1}=&\Phi_{t} \sigma_{t}+\Psi_{t}
\begin{bmatrix}w_{t}\\v_{t}\end{bmatrix}\end{array}
\end{equation}
where $$\Phi_{t}=
\begin{bmatrix}A-L_{t}C&0\\L_{t}C&A-BK_{t}\end{bmatrix},\,\Psi_{t}=\begin{bmatrix}F&-L_{t}\\0&L_{t}\end{bmatrix}$$
In the following it is assumed that, thanks to a proper initialization, i.e., $\mathbb{E}\left\{\sigma_{0}\right\}=0$, and recalling that the noises $v$ and $w$ are zero-mean, the enlarged state $\sigma_{t}$ of system \eqref{eq:error_matrix} is zero-mean, so that $\bar{x}_{t}=\mathbb{E}\{x_{t}\}$. Then, denoting by
$\Sigma_{t}=\mathbb{E}\left\{
\sigma_{t}\sigma_{t}^T
\right\}$ and by $\Omega=\mathrm{diag}(W,V)$ the covariance matrices of $\sigma_{t}$ and $[w_{t}^T\, v_{t}^T]^T$ respectively, the evolution of $\Sigma_{t}$ is governed by
\begin{align}\label{eq:variance_evolution_error}
&\Sigma_{t+1}=\Phi_{t}\Sigma_{t}\Phi_{t}^T+\Psi_{t}\Omega\Psi_{t}^T
\end{align}
By definition, also the variable $\delta x_{t}$ defined by \eqref{eq:error2} is zero mean and its covariance matrix $X_{t}$ can be derived from $\Sigma_{t}$ as follows
\begin{align}\label{eq:variance_evolution_state}
X_{t}=\mathbb{E}\left\{\delta x_{t} \delta x_{t}^T\right\}
=\begin{bmatrix}I&I\end{bmatrix}\Sigma_{t}
\begin{bmatrix}I\\I\end{bmatrix}
\end{align}
Finally, letting $\delta u_{t}=u_{t}-\bar{u}_{t}=-K_{t}(\hat{x}_{t}-\bar{x}_{t})$, one has $\mathbb{E}\left\{\delta u_{t}\right\}=0$ and also the covariance matrix
$U_{t}=\mathbb{E}\left\{\delta u_{t} \delta u_{t}^T\right\}$ can be obtained from $\Sigma_{t}$ as follows
\begin{align}\label{eq:variance_evolution_input}
U_{t}=&\mathbb{E}\left\{K_{t}\varepsilon_{t} \varepsilon_{t}^TK_{t}^T\right\}=\begin{bmatrix}0&K_{t}\end{bmatrix}\Sigma_{t}
\begin{bmatrix}0\\K_{t}^T\end{bmatrix}
\end{align}
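For illustration, the covariance recursion \eqref{eq:variance_evolution_error} and the extraction of $X_t$ and $U_t$ in \eqref{eq:variance_evolution_state}, \eqref{eq:variance_evolution_input} translate directly into code; the following is a minimal Python/NumPy sketch (our own illustration, with all matrices assumed to be arrays of compatible dimensions).
\begin{verbatim}
import numpy as np

def propagate_sigma(Sigma, A, B, C, F, K, L, W, V):
    # One step of Sigma_{t+1} = Phi Sigma Phi^T + Psi Omega Psi^T.
    n, p, nw = A.shape[0], C.shape[0], F.shape[1]
    Phi = np.block([[A - L @ C, np.zeros((n, n))],
                    [L @ C,     A - B @ K]])
    Psi = np.block([[F,                 -L],
                    [np.zeros((n, nw)),  L]])
    Omega = np.block([[W, np.zeros((nw, p))],
                      [np.zeros((p, nw)), V]])
    return Phi @ Sigma @ Phi.T + Psi @ Omega @ Psi.T

def state_input_cov(Sigma, K):
    # X_t = [I I] Sigma [I; I],  U_t = K Sigma_22 K^T.
    n = Sigma.shape[0] // 2
    II = np.hstack([np.eye(n), np.eye(n)])
    return II @ Sigma @ II.T, K @ Sigma[n:, n:] @ K.T
\end{verbatim}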
\subsection{Reformulation of the probabilistic constraints}
To set up a suitable control algorithm for the design of $\bar{u}_{t}$, $L_{t}$, $K_{t}$, the probabilistic constraints \eqref{eq:prob_constraint_state} and \eqref{eq:prob_constraint_input} are now reformulated as deterministic ones at the price of suitable tightening. To this end, consider, in general, a random variable $z$ with mean value $\bar{z}=\mathbb{E}\{z\}$, variance $Z=\mathbb{E}\{(z-\bar{z})(z-\bar{z})^T\}$, and the chance-constraint
\begin{equation}\mathbb{P}\{h^T z\geq z^{ max}\}\leq p\label{eq:prob_constraint_general}\end{equation}
The following result, based on the Chebyshev - Cantelli inequality \cite{Cantelli}, has been proven in~\cite{Pala}.
\begin{proposition}
\label{prop:Cantelli}
Letting $f(p)=\sqrt{(1-p)/{p}}$, constraint \eqref{eq:prob_constraint_general}
is verified if
\begin{equation}
h^T\bar{z}\leq z^{ max}-\sqrt{h^T Z h}\,f(p)
\label{eq:Cantelli_propGen}
\end{equation}
\end{proposition}
Note that this result can be proved without introducing any specific assumption on the distribution of~$z$. If, on the other hand, $z$ can be assumed to be normally distributed, less conservative constraints can be obtained, as stated in the following result.
\begin{proposition}
\label{prop:gen_distr}
Assume that $z$ is normally distributed. Then, constraint \eqref{eq:prob_constraint_general}
is verified if \eqref{eq:Cantelli_propGen} holds with $f(p)=\mathcal{N}^{-1}(1-p)$ where $\mathcal{N}$ is the cumulative probability function of a Gaussian variable with zero mean and unitary variance.
\end{proposition}
In Propositions \ref{prop:Cantelli} and \ref{prop:gen_distr}, the function $f(p)$ represents the level of constraint tightening on the mean value of $z$ needed to meet the probabilistic constraint \eqref{eq:prob_constraint_general}. In case of unknown distribution (Proposition \ref{prop:Cantelli}) the values of $f(p)$ are significantly larger than in the Gaussian case (e.g., for $p=0.1$ one has $f(p)=3$ in the distribution-free case versus $f(p)\simeq 1.28$ in the Gaussian case). Similar results can be derived for other distributions (e.g., uniform).\\
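As a simple numerical illustration (our own sketch), the two tightening factors can be computed and compared as follows.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def f_tightening(p, gaussian=False):
    if gaussian:
        return norm.ppf(1.0 - p)        # Proposition 2: N^{-1}(1 - p)
    return np.sqrt((1.0 - p) / p)       # Proposition 1: Chebyshev-Cantelli

# For p = 0.1: distribution-free f(p) = 3.00, Gaussian f(p) ~ 1.28.
\end{verbatim}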
In view of Propositions \ref{prop:Cantelli} and \ref{prop:gen_distr}, the chance-constraints \eqref{eq:prob_constraint_state}-\eqref{eq:prob_constraint_input} are verified provided that the following (deterministic) inequalities are satisfied.
\begin{subequations}
\begin{align}
b_r^T\bar{x}_{t}&\leq x_r^{max}-\sqrt{b_r^T X_{t} b_r}f(p^x_r)\label{eq:Cantelli_ineqs_state}\\
c_s^T\bar{u}_{t}&\leq u_s^{max}-\sqrt{c_s^T U_{t} c_s}f(p_s^u)\label{eq:Cantelli_ineqs_input}
\end{align}
\label{eq:Cantelli_ineqs}
\end{subequations}
If the support of the noise terms $w_k$ and $v_k$ is unbounded, the definition of state and control constraints in probabilistic terms is the only way to state feasible control problems. In case of bounded noises, the comparison in terms of conservativeness between the probabilistic and the deterministic frameworks is discussed in the example of Section \ref{app:example_constrs}.
%
%
\section{MPC algorithm: formulation and properties}
\label{MPC}
To formally state the MPC algorithm for the computation of the regulator parameters $\bar{u}_{t}$, $L_{t}$, $K_{t}$, the following notation will be adopted: given a variable $z$ or a matrix $Z$, at any time step $t$ we will denote by $z_{t+k}$ and $Z_{t+k}$, $k\geq 0$, their generic values in the future, while $z_{t+k|t}$ and $Z_{t+k|t}$ will represent their specific values computed based on the knowledge (e.g., measurements) available at time $t$.\\
The main ingredients of the optimization problem are now introduced.
\subsection{Cost function}
Assume we are at time $t$ and denote by $\bar{u}_{t,\dots, t+N-1}=\{\bar{u}_t, \dots, \bar{u}_{t+N-1}\}$ the nominal input sequence over a future prediction horizon of length $N$. Moreover, define the sequences of future control and observer gains $K_{t,\dots, t+N-1}=\{K_t, \dots, K_{t+N-1}\}$ and $L_{t,\dots, t+N-1}=\{L_t, \dots, L_{t+N-1}\}$, and recall that the covariance $\Sigma_{t+k}=\mathbb{E}\left\{\sigma_{t+k}\sigma_{t+k}^T\right\}$ evolves, starting from $\Sigma_{t}$, according to \eqref{eq:variance_evolution_error}.\\
The cost function to be minimized is the sum of two components, the first one ($J_{m}$) accounts for the expected values of the future nominal inputs and states, while the second one ($J_{v}$) is related to the variances of the future errors $e$, $\varepsilon$, and of the future inputs. Specifically, the overall performance index is
\begin{align}\label{eq:JTOT}
J=J_m(\bar{x}_t,\bar{u}_{t,\dots, t+N-1})+J_v(\Sigma_t,K_{t,\dots, t+N-1},L_{t,\dots, t+N-1})
\end{align}
where
\begin{align}
&J_m=\sum_{i=t}^{t+N-1} \| \bar{x}_{i}\|_{Q}^2+\| \bar{u}_{i}\|_{R}^2+\| \bar{x}_{t+N}\|_{S}^2\label{eq:mean_cost_function}\\
&J_v=\mathbb{E}\left\{\sum\limits_{i=t}^{t+N-1} \| x_i-\hat{x}_i \|_{Q_L}^2+\| x_{t+N}-\hat{x}_{t+N} \|_{S_{L}}^2 \right\}+\nonumber\\ &\mathbb{E}\left\{\sum\limits_{i=t}^{t+N-1} \| \hat{x}_i-\bar{x}_i \|_Q^2+\| u_i-\bar{u}_i \|_R^2+\| \hat{x}_{t+N}-\bar{x}_{t+N} \|_{S}^2 \right\} \label{eq:variance_cost_function1}\end{align}
where the positive definite and symmetric weights $Q$, $Q_L$, $S$, and $S_L$ must satisfy the following inequality
\begin{equation}\label{eq:Lyap_S} Q_{T}-S_{T}+\Phi^TS_{T}\Phi\preceq 0 \end{equation}
where $$\Phi=\begin{bmatrix}A-\bar{L}C&0\\\bar{L}C&A-B\bar{K}\end{bmatrix}$$
$Q_{T}=\mathrm{diag}(Q_{L},Q+\bar{K}^TR\bar{K})$, $S_{T}=\mathrm{diag}(S_{L},S)$,
and $\bar{K}$, $\bar{L}$ must be chosen to guarantee that $\Phi$ is asymptotically stable.\\
By means of standard computations, it is possible to write the cost \eqref{eq:variance_cost_function1} as follows
\begin{equation}\label{eq:variance_cost_function}
J_v=\sum_{i=t}^{t+N-1} \mathrm{tr}(Q_{T} \Sigma_{i})+ \mathrm{tr} (S_{T} \Sigma_{t+N})
\end{equation}
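The trace form above translates directly into code; a minimal sketch (our own illustration), assuming the predicted covariance sequence $\{\Sigma_t,\dots,\Sigma_{t+N}\}$ is available as a list of NumPy arrays, is the following.
\begin{verbatim}
import numpy as np

def variance_cost(Sigmas, Q_T, S_T):
    # Running trace costs over Sigma_t..Sigma_{t+N-1}, plus terminal term.
    stage = sum(np.trace(Q_T @ S) for S in Sigmas[:-1])
    return stage + np.trace(S_T @ Sigmas[-1])
\end{verbatim}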
From \eqref{eq:JTOT}-\eqref{eq:variance_cost_function1}, it is apparent that the goal is twofold: to drive the mean $\bar{x}$ to zero by acting on the nominal input component $\bar{u}_{t,\dots, t+N-1}$, and to minimize the variance $\Sigma$ by acting on the gains $K_{t,\dots, t+N-1}$ and $L_{t,\dots, t+N-1}$. In addition, the pair $(\bar{x}_{t},\Sigma_t)$ must be considered as a further argument of the MPC optimization, as discussed later, to guarantee recursive feasibility.
\subsection{Terminal constraints}
As usual in stabilizing MPC, see e.g. \cite{Mayne00}, some terminal constraints must be included into the problem formulation. In our setup, the mean $\bar{x}_{t+N}$ and the variance $\Sigma_{t+N}$ at the end of the prediction horizon must satisfy
\begin{align}
\bar{x}_{t+N}&\in\bar{\mathbb{X}}_F\label{eq:term_constraint_mean}\\
\Sigma_{t+N}&\preceq \bar{\Sigma}\label{eq:term_constraint_variance}
\end{align}
where $\bar{\mathbb{X}}_F$ is a positively invariant set (see \cite{Gilbert}) such that
\begin{align}\label{eq:inv_terminal}
(A-B\bar{K})\bar{x}&\in\bar{\mathbb{X}}_F \quad &\forall \bar{x}\in \bar{\mathbb{X}}_F
\end{align}
while $\bar{\Sigma}$ is the steady-state solution of the Lyapunov equation \eqref{eq:variance_evolution_error}, i.e.,
\begin{align}
\bar{\Sigma}=&
\Phi
\bar{\Sigma}
\Phi^T+\Psi\bar{\Omega}\Psi^T
\label{eq:Riccati_1}
\end{align}
where
$\Psi=\begin{bmatrix}F&-\bar{L}\\0 &\bar{L} \end{bmatrix}$
and $\bar{\Omega}=\mathrm{diag}(\bar{W},\bar{V})$ is built by considering (arbitrary) noise variances $\bar{W}\succeq W$ and $\bar{V}\succeq V$.
In addition, and consistently with \eqref{eq:Cantelli_ineqs}, the following coupling conditions must be verified.
\begin{subequations}
\label{eq:linear_constraint_finalc}
\begin{align}
b_r^T\bar{x}&\leq x_r^{max}-\sqrt{b_r^T \bar{X} b_r}f(p^x_r) \label{eq:linear_constraint_state_finalc}\\
-c_s^T\bar{K}\bar{x}&\leq u_s^{max}-\sqrt{c_s^T \bar{U} c_s}f(p_s^u) \label{eq:linear_constraint_input_finalc}
\end{align}
\end{subequations}
for all $r=1,\dots, n_r$, $s=1,\dots, n_s$, and for all $\bar{x}\in\bar{\mathbb{X}}_F$, where
\begin{subequations}
\label{eq:bar_def}
\begin{align}
\bar{X}&=\begin{bmatrix}I&I\end{bmatrix}\bar{\Sigma}\begin{bmatrix}I\\I\end{bmatrix}\label{eq:Xbar_def}\\
\bar{U}&=\begin{bmatrix}0&\bar{K}\end{bmatrix}\bar{\Sigma}\label{eq:Ubar_def}
\begin{bmatrix}0\\\bar{K}^T\end{bmatrix}
\end{align}\end{subequations}
%
It is worth remarking that the choice of $\bar{\Omega}$ is subject to a tradeoff. In fact, large variances $\bar{W}$ and $\bar{V}$ result in a large $\bar{\Sigma}$ (and, in view of \eqref{eq:bar_def}, large $\bar{X}$ and $\bar{U}$). This enlarges the terminal constraint \eqref{eq:term_constraint_variance} but, on the other hand, reduces the size of the terminal set $\bar{\mathbb{X}}_F$ compatible with \eqref{eq:linear_constraint_finalc}.
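From an implementation standpoint, $\bar{\Sigma}$ in \eqref{eq:Riccati_1} solves a standard discrete Lyapunov equation and can be computed, for instance, with SciPy; a minimal sketch (our own illustration, with $\Phi$, $\Psi$, $\bar{\Omega}$ given as NumPy arrays) follows.
\begin{verbatim}
from scipy.linalg import solve_discrete_lyapunov

def terminal_covariance(Phi, Psi, Omega_bar):
    # solve_discrete_lyapunov solves X = Phi X Phi^T + Q, which is
    # exactly the steady-state equation for Sigma_bar above.
    return solve_discrete_lyapunov(Phi, Psi @ Omega_bar @ Psi.T)
\end{verbatim}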
%
\subsection{Statement of the stochastic MPC (S-MPC) problem}
%
The formulation of the main S-MPC problem requires a preliminary discussion concerning the initialization. In principle, in order to use the most recent information available on the state, at any time instant it would be natural to set the current value of the nominal state $\bar{x}_{t|t}$ to $\hat x_{t}$ and the covariance $\Sigma_{t|t}$ to $\mathrm{diag}(\Sigma_{11,t|t-1},0)$, where $\Sigma_{11,t|t-1}$ is the covariance of the state prediction error $e$ obtained using the observer~\eqref{eq:observer1}. However, since we do not exclude the possibility of unbounded disturbances, in some cases this choice could lead to infeasible optimization problems. On the other hand, in view of the terminal constraints \eqref{eq:term_constraint_mean}, \eqref{eq:term_constraint_variance}, it is quite easy to see that recursive feasibility is guaranteed provided that $\bar{x}$ is updated according to the prediction equation \eqref{eq:mean_value_evolution_freeIC}, which corresponds to the variance update given by \eqref{eq:variance_evolution_error}. These considerations motivate the choice of treating the initial condition $(\bar{x}_t,\Sigma_t)$ as a free variable, to be selected by the control algorithm according to the following alternative strategies.\\
\textbf{Strategy 1} Reset of the initial state: $\bar{x}_{t|t}=\hat x_{t}$, $\Sigma_{t|t}=\mathrm{diag}(\Sigma_{11,t|t-1},0)$.\\
\textbf{Strategy 2} Prediction: $\bar{x}_{t|t}=\bar{x}_{t|t-1}$, $\Sigma_{t|t}=\Sigma_{t|t-1}$.\\
%
%
The S-MPC problem can now be stated.\\\\
\textbf{S-MPC problem: }at any time instant $t$ solve
$$\min_{\bar{x}_{t},\Sigma_{t},\bar{u}_{t, \dots, t+N-1},
K_{t,\dots, t+N-1},L_{t,\dots, t+N-1}} J$$
where $J$ is defined in \eqref{eq:JTOT}, \eqref{eq:mean_cost_function}, \eqref{eq:variance_cost_function1}, subject to
\begin{itemize}
\item[-] the dynamics~\eqref{eq:mean_value_evolution_freeIC} and~\eqref{eq:variance_evolution_error};
\item[-] constraints~\eqref{eq:Cantelli_ineqs} for all $k=0,\dots,N-1$;
\item[-] the initialization constraint, corresponding to the choice between Strategies 1 and 2, i.e.,
\begin{equation}\label{eq:reset_constraint}
(\bar{x}_{t},\Sigma_{t})\in\{(\hat x_t,\mathrm{diag}(\Sigma_{11,t|t-1},0)),(\bar{x}_{t|t-1},\Sigma_{t|t-1})\}
\end{equation}
\item[-] the terminal constraints~\eqref{eq:term_constraint_mean}, \eqref{eq:term_constraint_variance}.\hfill$\square$
\end{itemize}
Denoting by
$\bar{u}_{t,\dots, t+N-1|t}=\{\bar{u}_{t|t},\dots, \bar{u}_{t+N-1|t}\}$,
$K_{t,\dots, t+N-1|t}=\{K_{t|t},\dots,$\break
$K_{t+N-1|t}\}$,
$L_{t,\dots, t+N-1|t}=$
$\{L_{t|t},\dots, L_{t+N-1|t}\}$, and
($\bar{x}_{t|t},\Sigma_{t|t}$)
the optimal solution of the S-MPC problem, the feedback control law actually used is then given by~\eqref{eq:fb_control_law_ideal} with
$\bar{u}_{t}=\bar{u}_{t|t}$, $K_{t}=K_{t|t}$, and the state observation evolves as in~\eqref{eq:observer1} with $L_{t}=L_{t|t}$.\\
We define the S-MPC problem feasibility set as
\begin{center}
$\Xi:=\{(\bar{x}_0,\Sigma_0):\exists \bar{u}_{0,\dots, N-1},K_{0,\dots, N-1},L_{0,\dots, N-1}$
such that~\eqref{eq:mean_value_evolution_freeIC},~\eqref{eq:variance_evolution_error}, and \eqref{eq:Cantelli_ineqs} hold for all $k=0,\dots,N-1$ and \eqref{eq:term_constraint_mean}, \eqref{eq:term_constraint_variance} are verified\}\end{center}
Some comments are in order.
\setlength{\leftmargini}{0.5em}
\begin{itemize}
\item[-]At the initial time $t=0$, the algorithm must be initialized by setting $\bar{x}_{0|0}=\hat{x}_{0}$ and $\Sigma_{0|0}=\mathrm{diag}(\Sigma_{11,0},0)$. In view of this, feasibility at time $t=0$ amounts to $(\hat{x}_0,\Sigma_{0|0})\in\Xi$.
\item[-] The binary choice between Strategies 1 and 2 requires, in principle, the solution of two optimization problems at any time instant. However, the following sequential procedure can be adopted to reduce the average computational burden (see the sketch below): the optimization problem corresponding to Strategy 1 is solved first and, if it is infeasible, Strategy 2 must be adopted. On the contrary, if it is feasible, it is possible to compare the resulting optimal cost with the value of the cost obtained using the sequences $\{\bar{u}_{t|t-1},\dots, \bar{u}_{t+N-2|t-1}, -\bar{K}\bar{x}_{t+N-1|t}\}$, $\{K_{t|t-1},\dots, K_{t+N-2|t-1}, \bar{K}\}$, $\{L_{t|t-1},\dots,$ $L_{t+N-2|t-1}, \bar{L}\}$. If the optimal cost with Strategy 1 is lower, Strategy 1 can be used without solving the MPC problem for Strategy 2; otherwise, the problem corresponding to Strategy 2 is solved and adopted. This does not guarantee optimality, but the convergence properties of the method stated in the result below are retained and the computational effort is reduced.
\end{itemize}
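The sequential procedure referenced above can be sketched as follows; \texttt{solve} and \texttt{candidate\_cost} are hypothetical routines returning, respectively, (feasibility, cost, solution) of the S-MPC problem for a given initialization and the cost of the shifted candidate sequences completed with the terminal gains.
\begin{verbatim}
def smpc_initialization(xhat_t, Sigma_reset, xbar_pred, Sigma_pred,
                        solve, candidate_cost):
    # Strategy 1 (reset) first; fall back to Strategy 2 (prediction).
    feas1, cost1, sol1 = solve(xhat_t, Sigma_reset)
    if not feas1:
        return solve(xbar_pred, Sigma_pred)[2]
    # If Strategy 1 beats the (always feasible) shifted candidate,
    # adopt it without solving the Strategy 2 problem.
    if cost1 <= candidate_cost(xbar_pred, Sigma_pred):
        return sol1
    return solve(xbar_pred, Sigma_pred)[2]
\end{verbatim}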
%
Now we are in the position to state the main result concerning the convergence properties of the algorithm.
\begin{thm}\label{thm:main}
If, at $t=0$, the S-MPC problem admits a solution, the optimization problem is recursively feasible and the state and input probabilistic constraints \eqref{eq:prob_constraint_state} and \eqref{eq:prob_constraint_input} are satisfied for all $t\geq 0$. Furthermore, if there exists $\rho\in(0,1)$ such that the noise variance $\Omega$ verifies
\begin{align}
\frac{(N+\frac{\beta}{\alpha})}{\alpha}\mathrm{tr}(S_T\Psi \Omega \Psi^T)&<\min(\rho\bar{\sigma}^2,\rho\lambda_{min}(\bar{\Sigma}))\label{eq:cons_conds_on_W}
\end{align}
%
where $\bar{\sigma}$ is the maximum radius of a ball, centered at the origin, included in $\bar{\mathbb{X}}_F$, and
\begin{subequations}
\begin{align}
\alpha&=\min\{\lambda_{min}(Q),\mathrm{tr}\{Q^{-1}+Q_L^{-1}\}^{-1}\}\\
\beta&=\max\{\lambda_{max}(S),\mathrm{tr}\{S_T\}\}
\end{align}
\label{eq:lambda_def}
\end{subequations}
then, as $t\rightarrow +\infty$\\
%
\begin{align}\mathrm{dist}(\|\bar{x}_t\|^2+\mathrm{tr}\{\Sigma_{t|t}\},[0,\frac{1}{\alpha}(N+\frac{\beta}{\alpha})\,\mathrm{tr}(S_T\Psi \Omega \Psi^T)])\rightarrow 0\label{eq:thm_stat}\end{align}\hfill$\square$
\end{thm}
Note that, as expected, as $\Omega$ becomes smaller and smaller, the asymptotic values of $\|\bar{x}_t\|$ and $\mathrm{tr}\{\Sigma_{t|t}\}$ also tend to zero.
%
\section{Implementation issues}
\label{sec:num_implementation}
The main difficulty in the solution of the S-MPC problem is due to the nonlinear constraints \eqref{eq:Cantelli_ineqs} and to the nonlinear dependence of the covariance evolution, see~\eqref{eq:variance_evolution_error}, on $K_{t,\dots, t+N-1},L_{t,\dots, t+N-1}$. This second problem can be avoided in the state feedback case, see \cite{FGMS13_CDC}, where a reformulation based on linear matrix inequalities (LMIs) can be readily obtained. For the output feedback case considered here, two possible solutions are described in the following.\\
Moreover, in Section \ref{sec:inputs} we briefly describe some possible ways of coping with additional hard (deterministic) constraints on the input variables $u_t$.
\subsection{Approximation of S-MPC for allowing a solution with LMIs}
A solution based on an approximation of S-MPC characterized solely by linear constraints is now presented.
First define $A^D=\sqrt{2}A,B^D=\sqrt{2}B,C^D=\sqrt{2}C$, and $V^D=2V$ and let the auxiliary gain matrices $\bar{K}$ and $\bar{L}$ be selected according to the following assumption.
\begin{assumption}
\label{ass:KandL}
The gains $\bar{K}$ and $\bar{L}$ are computed as the steady-state gains of the LQG regulator for the system $(A^D,B^D,C^D)$, with state and control weights $Q$ and $R$, and noise covariances $\bar{W}\succeq W$ and $\bar{V}\succeq V^D$.
\end{assumption}
Note that, if a gain matrix $\bar{K}$ (respectively $\bar{L}$) is stabilizing for $(A^D-B^D\bar{K})=\sqrt{2}(A-B\bar{K})$ (respectively $(A^D-\bar{L}C^D)=\sqrt{2}(A-\bar{L}C)$), it is also stabilizing for $(A-B\bar{K})$ (respectively $(A-\bar{L}C)$), i.e., for the original system. The following preliminary result can be stated.
\begin{lemma}
\label{lemma:bound_var_main}
Define $A^D_{L_t}=A^D-L_tC^D$, $A^D_{K_t}=A^D-B^DK_t$, the block diagonal matrix $\Sigma^D_t=\mathrm{diag}(\Sigma_{11,t}^D,\Sigma_{22,t}^D)$, $\Sigma_{11,t}^D\in \mathbb{R}^{n\times n}$, $\Sigma_{22,t}^D\in \mathbb{R}^{n\times n}$ and the update equations
\begin{subequations}
\begin{align}
\Sigma_{11,t+1}^D=&A^D_{L_t}\Sigma_{11,t}^D(A^D_{L_t})^T+FWF^T+L_t V^D L_t^T\label{eq:sigmaD1}\\ \Sigma_{22,t+1}^D=&A^D_{K_t}\Sigma_{22,t}^D(A^D_{K_t})^T+L_tC^D\Sigma_{11,t}^DC^{D\ T}L_t^T\nonumber\\
&+L_t V^D L_t^T\label{eq:sigmaD2}
\end{align}
\label{eq:SigmaDupdate}
\end{subequations}
Then\\
I) $\Sigma^D_t\succeq \Sigma_t$ implies that $\Sigma^D_{t+1}=\mathrm{diag}(\Sigma_{11,t+1}^D,\Sigma_{22,t+1}^D)\succeq \Sigma_{t+1}$.\\
II) The following inequalities can be rewritten as LMIs
\begin{subequations}
\begin{align} \Sigma_{11,t+1}^D\succeq &A^D_{L_t}\Sigma_{11,t}^D(A^D_{L_t})^T+FWF^T+L_t V^D L_t^T\label{eq:sigmaD1LMI}\\ \Sigma_{22,t+1}^D \succeq & A^D_{K_t}\Sigma_{22,t}^D(A^D_{K_t})^T+L_tC^D\Sigma_{11,t}^DC^{D\ T}L_t^T\nonumber\\
&+L_t V^D L_t^T\label{eq:sigmaD2LMI}
\end{align}
\label{eq:SigmaDupdateLMI}
\end{subequations}
\end{lemma}
Based on Lemma~\ref{lemma:bound_var_main}-II, we can reformulate the original problem so that the covariance matrix $\Sigma^{D}$ is used instead of $\Sigma$. Accordingly, the update equation~\eqref{eq:variance_evolution_error} is replaced by \eqref{eq:SigmaDupdate}, and the S-MPC problem is recast as an LMI problem (see Appendix \ref{app:LMI}).\\
The inequalities \eqref{eq:Cantelli_ineqs} depend nonlinearly on the covariance matrices $X_{t}$ and $U_{t}$. It is possible to prove that \eqref{eq:Cantelli_ineqs} are satisfied if
\begin{subequations}\label{eq:Cantelli_ineqs_lin}
\begin{align}
b_r^T\bar{x}_{t}&\leq
(1-0.5\alpha_x)x_r^{ max}-\frac{b_r^T X_{t}b_r}{2\alpha_x x_r^{ max}} f(p_r^x)^2 \label{eq:linear_constraint_state}\\
c_s^T\bar{u}_{t}&\leq (1-0.5\alpha_u)u_s^{ max}-\frac{c_s^TU_{t}c_s}{2\alpha_u u_s^{ max}} f(p_s^u)^2\label{eq:linear_constraint_input}
\end{align}\end{subequations}
with $r=1,\dots, n_r$ and $s=1,\dots, n_s$, where $\alpha_x\in(0, 1]$ and $\alpha_u\in(0, 1]$ are free design parameters. Indeed, \eqref{eq:Cantelli_ineqs_lin} implies \eqref{eq:Cantelli_ineqs} thanks to the elementary bound $\sqrt{q}f\leq \frac{f^2 q}{2\gamma}+\frac{\gamma}{2}$, valid for any $\gamma>0$, applied with $\gamma=\alpha_x x_r^{max}$ and $\gamma=\alpha_u u_s^{max}$, respectively. Also, note that
$X_{t}\preceq\begin{bmatrix}
I&I
\end{bmatrix}\Sigma_{t}^D\begin{bmatrix}
I\\I
\end{bmatrix}= \Sigma_{11,t}^D+\Sigma_{22,t}^D$ and that $U_{t}\preceq\begin{bmatrix}
0&K_{t}
\end{bmatrix}\Sigma_{t}^D\begin{bmatrix}
0\\K_{t}^T
\end{bmatrix}=K_{t}\Sigma_{22,t}^DK_{t}^T$
so that, defining $X_{t}^D=\Sigma_{11,t}^D+ \Sigma_{22,t}^D$ and $U_{t}^D=K_{t}\Sigma_{22,t}^DK_{t}^T$,~\eqref{eq:Cantelli_ineqs_lin} can be written as follows
\begin{subequations}\label{eq:Cantelli_ineqs_linL}
\begin{align}
b_r^T\bar{x}_{t}&\leq
(1-0.5\alpha_x)x_r^{ max}-\frac{b_r^TX_{t}^D b_r}{2\alpha_x x_r^{ max}} f(p_r^x)^2\label{eq:linear_constraint_stateL}\\
c_s^T\bar{u}_{t}&\leq (1-0.5\alpha_u)u_s^{ max}-\frac{c_s^TU_{t}^D c_s}{2\alpha_u u_s^{ max}} f(p_s^u)^2\label{eq:linear_constraint_inputL}
\end{align}\end{subequations}
Note that the reformulation of~\eqref{eq:Cantelli_ineqs} into \eqref{eq:Cantelli_ineqs_linL} comes at the price of an additional constraint tightening. For example, on the right-hand side of \eqref{eq:linear_constraint_stateL}, $x_r^{max}$ is replaced by $(1-0.5\alpha_x)x_r^{max}$, which significantly reduces the size of the constraint set. Note that the parameter $\alpha_x$ cannot be reduced at will, since it also appears in the denominator of the second additive term.\\
In view of Assumption~\ref{ass:KandL} and resorting to the separation principle, it is possible to show \cite{glad2000control} that the solution $\bar{\Sigma}^D$ to the steady-state equation
\begin{align}
\bar{\Sigma}^D=&
\Phi^D
\bar{\Sigma}^D
(\Phi^D)^T+\Psi\bar{\Omega}\Psi^T
\label{eq:Riccati_1L}
\end{align}
is block diagonal, i.e., $\bar{\Sigma}^D=\mathrm{diag}(\bar{\Sigma}^D_{11},\bar{\Sigma}_{22}^D)$,
where
$$\Phi^D=\begin{bmatrix}A^D-\bar{L}C^D&0\\\bar{L}C^D&A^D-B^D\bar{K}\end{bmatrix}$$
The terminal constraint \eqref{eq:term_constraint_variance} must be transformed into $\Sigma_{t+N}^D\preceq \bar{\Sigma}^D$,
which corresponds to setting
\begin{equation}
\begin{array}{lcllcl}
\Sigma_{11,t+N}^D&\preceq \bar{\Sigma}^D_{11},\quad&
\Sigma_{22,t+N}^D&\preceq \bar{\Sigma}^D_{22}
\end{array}
\label{eq:term_constraints_varianceL}
\end{equation}
Defining $\bar{X}^D=\bar{\Sigma}^D_{11}+\bar{\Sigma}^D_{22}$ and $\bar{U}^D=\bar{K}\bar{\Sigma}^D_{22}\bar{K}^T$, the terminal set condition \eqref{eq:linear_constraint_finalc} must now be reformulated as
\begin{subequations}
\begin{align}
b_r^T\bar{x}&\leq (1-0.5\alpha_x)x_r^{ max}-\frac{b_r^T\bar{X}^D b_r}{2\alpha_x x_r^{ max}} f(p_r^x)^2 \label{eq:linear_constraint_state_finalL}\\
-c_s^T\bar{K}\bar{x}&\leq (1-0.5\alpha_u)u_s^{ max}-\frac{c_s^T \bar{U}^Dc_s}{2\alpha_u u_s^{ max}} f(p_s^u)^2 \label{eq:linear_constraint_input_finalL}
\end{align}
\end{subequations}
for all $r=1,\dots, n_r$, $s=1,\dots, n_s$, and for all $\bar{x}\in\bar{\mathbb{X}}_F$.\\
Also $J_v$ must be reformulated. Indeed
\begin{equation}
\begin{array}{ll}
J_v\leq J_v^D&=\sum\limits_{i=t}^{t+N-1} \mathrm{tr}\left\{Q_L\Sigma_{11,i}^D+Q\Sigma_{22,i}^D+RK_i\Sigma_{22,i}^DK_i^T\right\}\\
&+\mathrm{tr}\left\{S_L\Sigma_{11,t+N}^D+S\Sigma_{22,t+N}^D\right\}
\end{array}
\label{eq:variance_cost_functionL}
\end{equation}
where the terminal weights $S$ and $S_L$ must now satisfy the following Lyapunov-like inequalities
\begin{equation}
\begin{array}{l}
(\bar{A}^D_K)^T S \bar{A}^D_K-S+Q+\bar{K}^TR\bar{K}\preceq 0\\
(\bar{A}^D_L)^T S_L \bar{A}^D_L-S_L+Q_L+(C^D)^T\bar{L}^T S \bar{L}C^D\preceq 0
\end{array}
\label{eq:Lyap_S_L}
\end{equation}
where $\bar{A}^D_K=A^D-B^D\bar{K}$ and $\bar{A}^D_L=A^D-\bar{L}C^D$. It is now possible to formally state the S-MPCl problem.\\\\
\textbf{S-MPCl problem: }at any time instant $t$ solve
$$\min_{\bar{x}_{t},\Sigma^D_{11,t},\Sigma^D_{22,t},\bar{u}_{t, \dots, t+N-1},K_{t,\dots, t+N-1},L_{t,\dots, t+N-1}} J$$
where $J$ is defined in \eqref{eq:JTOT}, \eqref{eq:mean_cost_function}, \eqref{eq:variance_cost_functionL}, subject to
\begin{itemize}
\item[-] the dynamics~\eqref{eq:mean_value_evolution_freeIC} and~\eqref{eq:SigmaDupdate};
\item[-] the linear constraints~\eqref{eq:Cantelli_ineqs_linL} for all $k=0,\dots,N-1$;
\item[-] the initialization constraint, corresponding to the choice between Strategies 1 and 2, i.e., $(\bar{x}_{t},\Sigma^D_{11,t},\Sigma_{22,t}^D)\in\{(\hat x_t,\Sigma^D_{11,t|t-1},0),(\bar{x}_{t|t-1},\Sigma^D_{11,t|t-1},\Sigma_{22,t|t-1}^D)\}$
\item[-] the terminal constraints~\eqref{eq:term_constraint_mean},~\eqref{eq:term_constraints_varianceL}.
\end{itemize}
\hfill$\square$\\
The following corollary follows from Theorem~\ref{thm:main}.
\begin{corollary}\label{cor:LMI_soluz}
If, at time $t=0$, the S-MPCl problem admits a solution, the optimization problem is recursively feasible and the state and input probabilistic constraints \eqref{eq:prob_constraint_state} and \eqref{eq:prob_constraint_input} are satisfied for all $t\geq 0$. Furthermore, if there exists $\rho\in(0,1)$ such that the noise variance $\Omega^D=\mathrm{diag}(W,V^D)$ verifies
\begin{align}
\frac{(N+\frac{\beta}{\alpha})}{\alpha}\mathrm{tr}(S_T\Psi \Omega^D \Psi^T)&<\min(\rho\bar{\sigma}^2,\rho\lambda_{min}(\bar{\Sigma}^D))\label{eq:cons_conds_on_WL}
\end{align}
then, as $t\rightarrow +\infty$
$$\mathrm{dist}(\|\bar{x}_t\|^2+\mathrm{tr}\{\Sigma_{t|t}^D\},[0,\frac{1}{\alpha}(N+\frac{\beta}{\alpha})\,\mathrm{tr}(S_T\Psi \Omega^D \Psi^T)])\rightarrow 0$$
\hfill$\square$
\end{corollary}
\subsection{Approximation of S-MPC with constant gains}
\label{sec:num_implementation_constant}
The solution presented in this section is characterized by great simplicity and consists of setting $L_t=\bar{L}$ and $K_t=\bar{K}$ for all $t\geq 0$. In this case, the value of $\Sigma_{t+k}$ (and therefore of $X_{t+k}$ and $U_{t+k}$) can be directly computed for any $k>0$ by means of~\eqref{eq:variance_evolution_error} as soon as $\Sigma_t$ is given. As a byproduct, the nonlinearity of the constraints \eqref{eq:Cantelli_ineqs} does not cause implementation problems. Therefore, this solution has a twofold advantage: first, it is simple and requires an extremely lightweight implementation; second, it allows for the use of the less conservative nonlinear constraint formulations. In this simplified framework, the following S-MPCc problem can be stated.\\\\
\textbf{S-MPCc problem: }at any time instant $t$ solve
$$\min_{\bar{x}_{t},\Sigma_{t},\bar{u}_{t, \dots, t+N-1}} J$$
where $J$ is defined in \eqref{eq:JTOT}, \eqref{eq:mean_cost_function}, \eqref{eq:variance_cost_function1}, subject to
\begin{itemize}
\item[-] the dynamics~\eqref{eq:mean_value_evolution_freeIC}, with $K_t=\bar{K}$, and
\begin{equation}
\Sigma_{t+1}=\Phi\Sigma_{t}\Phi^T+\Psi\Omega\Psi^T
\end{equation}
\item[-] the constraints~\eqref{eq:Cantelli_ineqs} for all $k=0,\dots,N-1$;
\item[-] the initialization constraint \eqref{eq:reset_constraint};
\item[-] the terminal constraints~\eqref{eq:term_constraint_mean}, \eqref{eq:term_constraint_variance}.
\end{itemize}
\hfill$\square$\\
An additional remark is due. The term $J_v$ in~\eqref{eq:variance_cost_function} depends not only on the control and observer gain sequences $K_{t,\dots, t+N-1}$,
$L_{t,\dots, t+N-1}$, but also on the initial condition $\Sigma_t$. Therefore, it cannot be discarded in this simplified formulation.\\
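To make the lightweight implementation concrete, the following Python sketch (with hypothetical matrices $\Phi$, $\Psi$, $\Omega$, $c$, chosen only for illustration and not taken from the example of Section \ref{sec:example}) propagates the covariance recursion offline and evaluates, at each step, one standard form of the Chebyshev--Cantelli tightening of a linear state constraint, $c^T\bar{x}\leq x^{max}-\sqrt{\frac{1-p}{p}\,c^T\Sigma c}$. It is a sketch of the two ingredients that make S-MPCc cheap, not of the full optimization problem.
\begin{verbatim}
# Minimal sketch: offline covariance propagation and Cantelli-based
# constraint tightening for the constant-gain scheme (illustrative
# matrices only, not taken from the paper's example).
import numpy as np

Phi = np.array([[0.9, 0.2], [0.0, 0.8]])  # closed-loop matrix (hypothetical)
Psi = np.eye(2)                           # noise-injection matrix
Omega = 0.01 * np.eye(2)                  # noise covariance

c = np.array([0.0, 1.0])   # constraint direction
x_max = 2.0                # constraint bound
p = 0.1                    # admitted violation probability

Sigma = np.zeros((2, 2))   # initial covariance
for k in range(10):
    # tightened bound: c^T xbar <= x_max - sqrt((1-p)/p * c^T Sigma c)
    tight = x_max - np.sqrt((1 - p) / p * (c @ Sigma @ c))
    print(f"k={k}: tightened bound on c^T xbar = {tight:.4f}")
    # covariance update: Sigma <- Phi Sigma Phi^T + Psi Omega Psi^T
    Sigma = Phi @ Sigma @ Phi.T + Psi @ Omega @ Psi.T
\end{verbatim}
Since the gains are fixed, this recursion can be run once offline, and only the resulting tightened linear constraints enter the online optimization.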
The following corollary can be derived from Theorem~\ref{thm:main}.
\begin{corollary}\label{cor:const_gains}
If, at time $t=0$, the S-MPCc problem admits a solution, the optimization problem is recursively feasible and the state and input probabilistic constraints \eqref{eq:prob_constraint_state} and \eqref{eq:prob_constraint_input} are satisfied for all $t\geq 0$. Furthermore, if there exists $\rho\in(0,1)$ such that the noise variance $\Omega$ verifies~\eqref{eq:cons_conds_on_W}, then, as $t\rightarrow +\infty$
$$\mathrm{dist}(\|\bar{x}_t\|^2+\mathrm{tr}\{\Sigma_{t|t}\},[0,\frac{1}{\alpha}(N+\frac{\beta}{\alpha})\,\mathrm{tr}(S_T\Psi \Omega \Psi^T)])\rightarrow 0$$
\hfill$\square$
\end{corollary}
\subsection{Boundedness of the input variables}
\label{sec:inputs}
The S-MPC scheme described in the previous sections does not guarantee the satisfaction of hard constraints on the input variables. In practice, however, the input variables may be required to be bounded, i.e., subject to
\begin{align}
Hu_t\leq \mathbf{1}
\label{eq:input_constr}
\end{align}
where $H\in\mathbb{R}^{n_H\times m}$ is a design matrix and $\mathbf{1}$ is a vector of dimension $n_H$ whose entries are equal to $1$.
Three possible approaches are proposed to account for these inequalities.\\
1) Inequalities \eqref{eq:input_constr} can be stated as additional probabilistic constraints~\eqref{eq:prob_constraint_input} with small violation probabilities $p_s^u$. This solution, although not guaranteeing satisfaction of~\eqref{eq:input_constr} with probability 1, is simple and easy to apply.\\
2) If the S-MPCc scheme is used, define the gain matrix $\bar{K}$ in such a way that $A-B\bar{K}$ is asymptotically stable and, at the same time, $H\bar{K}=0$. From \eqref{eq:fb_control_law_ideal}, it follows that
$Hu_t=H\bar{u}_t+H\bar{K}(\hat{x}_t-\bar{x}_t)=H\bar{u}_t$. Therefore, to verify \eqref{eq:input_constr} it is sufficient to include in the problem formulation the deterministic constraint $H\bar{u}_t\leq \mathbf{1}$.\\
3) In an S-MPCc scheme, if probabilistic constraints on the input variables are absent, we can replace \eqref{eq:fb_control_law_ideal} with $u_t=\bar{u}_t$ and set $H\bar{u}_t\leq \mathbf{1}$ in the S-MPC optimization problem to verify \eqref{eq:input_constr}. If we also define $\hat{u}_t=\bar{u}_t-\bar{K}(\hat{x}_t-\bar{x}_t)$ as the input to equation \eqref{eq:observer1}, the dynamics of the variable $\sigma_t$ is given by \eqref{eq:error_matrix} with
$$\Phi_t=\begin{bmatrix}A-\bar{L}C&B\bar{K}\\
\bar{L}C&A-B\bar{K}\end{bmatrix}$$
and the arguments follow similarly to those proposed in the paper. It is worth mentioning, however, that matrix $\Phi_t$ must be asymptotically stable, which requires asymptotic stability of $A$.
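As a concrete illustration of approach 2, the following sketch (with hypothetical matrices, and assuming the reduced pair $(A,BN)$ is stabilizable) parameterises the gain as $\bar{K}=N\Theta$, where the columns of $N$ span the null space of $H$, so that $H\bar{K}=HN\Theta=0$ by construction; $\Theta$ is obtained from a standard discrete-time LQR design on the reduced pair.
\begin{verbatim}
# Minimal sketch of approach 2: design Kbar with H*Kbar = 0 by
# restricting the feedback to the null space of H (hypothetical
# matrices; assumes the reduced pair (A, B@N) is stabilizable).
import numpy as np
from scipy.linalg import null_space, solve_discrete_are

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5, 0.0], [1.0, 0.5]])    # two inputs
H = np.array([[1.0, 0.0]])                # hard bound acts on input 1

N = null_space(H)                          # columns span {u : H u = 0}
BN = B @ N                                 # reduced input matrix
Q, R = np.eye(2), np.eye(N.shape[1])
P = solve_discrete_are(A, BN, Q, R)        # DARE of the reduced problem
Theta = np.linalg.solve(R + BN.T @ P @ BN, BN.T @ P @ A)
Kbar = N @ Theta                           # H @ Kbar = 0 by construction

print("||H Kbar|| =", np.linalg.norm(H @ Kbar))                 # ~ 0
print("spectral radius of A - B Kbar =",
      max(abs(np.linalg.eigvals(A - B @ Kbar))))                # < 1
\end{verbatim}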
\section{Examples}
\label{sec:example}
In this section, a comparison between the characteristics of the proposed method and the well-known robust tube-based MPC is first discussed. Then, the approximations described in Section \ref{sec:num_implementation} are evaluated with reference to a numerical example.
\subsection{Simple analytic example: comparison between the probabilistic and the deterministic robust MPC}
\label{app:example_constrs}
Consider the scalar system
$$x_{t+1}=a x_t+u_t+w_t$$
where $0<a<1$, $w\in[-w_{max},w_{max}]$, $w_{max}>0$, and the measurable state is constrained as follows
\begin{align}x_t\leq x_{max}\label{eq:ex_det_constr}
\end{align}
The limitations imposed by the deterministic robust MPC algorithm developed in \cite{mayne2005robust} and by the probabilistic (state-feedback) method described in this paper are now compared.
For both algorithms, the control law $u_t=\bar{u}_t$ is considered, where $\bar{u}$ is the input of the nominal/average system $\bar{x}_{t+1}=a\bar{x}_{t}+\bar{u}_{t}$ with state $\bar{x}$. Note that this control law is equivalent to \eqref{eq:fb_control_law_ideal}, where for simplicity it has been set $K_{t}=0$.\\
In the probabilistic approach, we allow the constraint \eqref{eq:ex_det_constr} to be violated with probability $p$, i.e.,
\begin{align}\mathbb{P}\{x\geq x_{ max}\}\leq p\label{eq:ex_prob_constr}
\end{align}
To verify \eqref{eq:ex_det_constr} and \eqref{eq:ex_prob_constr}, the tightened constraint $\bar{x}_k\leq x_{max}-\Delta x$ must be fulfilled in both approaches. In the case of \cite{mayne2005robust}, $\Delta x =\Delta x_{RPI}=\sum_{i=0}^{+\infty}a^iw_{max}=\frac{1}{1-a} w_{max}$, while in the probabilistic framework, having defined $w$ as a stochastic process with zero mean and variance $W$, $\Delta x=\Delta x_{S}(p) =\sqrt{X(1-p)/p}$, where $X$ is the steady-state variance satisfying the algebraic equation $X=a^2X+W$, i.e., $X=W/(1-a^2)$. Notably, $W$ takes different values depending upon the noise distribution.\\
It follows that the deterministic tightened constraint is more conservative than the probabilistic one whenever $\Delta x_{S}(p)<\Delta x_{RPI}$, i.e.,
\begin{equation}p>\frac{(1-a)^2}{b(1-a^2)+(1-a)^2}\label{eq:ex_pbound}\end{equation}
Consider now the distributions depicted in Figure \ref{fig:distrs} with $W=w_{max}^2/b$, where\\
Case A) $b=3$ for uniform distribution;\\
Case B) $b=18$ for triangular distribution;\\
Case C) $b=25$ for truncated Gaussian distribution.\\
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{distrs}
\caption{Distributions: uniform (case A, solid line), triangular (case B, dashed line), truncated Gaussian (case C, dotted line).}
\label{fig:distrs}
\end{figure}
Setting, for example, $a=0.9$, condition \eqref{eq:ex_pbound} is verified for $p>0.0172$ in case A), $p>0.0029$ in case B), and $p>0.0021$ in case C). Note that, although formally truncated, the distribution in case C) can be well approximated by a non-truncated Gaussian: if this information were available, one could use $\Delta x_S(p)=\sqrt{X}\,\mathcal{N}^{-1}(1-p)$ for constraint tightening, and in this case $\Delta x_{S}(p)<\Delta x_{RPI}$ would be verified for $p>1-\mathcal{N}\left(\frac{\sqrt{b(1-a^2)}}{1-a}\right)\simeq 0$.
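As a quick numerical check, the following snippet evaluates the threshold \eqref{eq:ex_pbound} for the three cases:
\begin{verbatim}
# Evaluate the bound (eq:ex_pbound) for a = 0.9 and the three values of b.
a = 0.9
for label, b in [("A (uniform)", 3), ("B (triangular)", 18),
                 ("C (truncated Gaussian)", 25)]:
    p_min = (1 - a)**2 / (b * (1 - a**2) + (1 - a)**2)
    print(f"case {label}: p > {p_min:.4f}")
# prints 0.0172, 0.0029, 0.0021, matching the values quoted above
\end{verbatim}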
\subsection{Simulation example}
The example shown in this section is inspired by \cite{mayne2005robust}. We take
$$A=\begin{bmatrix}1&1\\0&1\end{bmatrix}, B=\begin{bmatrix}0.5\\1\end{bmatrix}$$
$F=I_2$, and $C=I_2$. We assume that a Gaussian noise affects the system, with $W=0.01I_2$ and $V=10^{-4}I_2$. The chance-constraints are $\mathbb{P}\{x_{2}\geq 2\}\leq 0.1$, $\mathbb{P}\{u\geq 1\}\leq 0.1$, and $\mathbb{P}\{-u\geq 1\}\leq 0.1$. In \eqref{eq:JTOT}, \eqref{eq:mean_cost_function}, and \eqref{eq:variance_cost_function} we set $Q_L=Q=I_2$, $R=0.01$, and $N=9$.\\
In Figure \ref{fig:sets} we compare the feasible sets obtained with the different methods presented in Section \ref{sec:num_implementation}, under different assumptions concerning the noise (namely S-MPCc (1), S-MPCc (2), S-MPCl (1), S-MPCl (2), where (1) denotes the case of Gaussian distribution and (2) denotes the case when the distribution is unknown). Evidently, in view of the linearization of the constraints (see the discussion after \eqref{eq:Cantelli_ineqs_linL}), the S-MPCl algorithm is more conservative than S-MPCc. On the other hand, concerning the size of the obtained feasibility set, in this case the use of the Chebyshev--Cantelli inequality does not cause a dramatic degradation in terms of conservativeness.\\
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{sets}
\caption{Plots of the feasibility sets for S-MPCc (1), S-MPCc (2), S-MPCl (1), S-MPCl (2)}
\label{fig:sets}
\end{figure}
A 200-run Monte Carlo simulation campaign has been carried out to test the probabilistic properties of the algorithm, with initial conditions $(5,-1.5)$. The fact that the control and estimation gains are free variables improves the transient behaviour of the state responses in the case of S-MPCl and reduces the variance of the dynamic state response (at the price of a more reactive input response) with respect to the case when S-MPCc is used. For example, the maximum variance of $x_1(k)$ (resp. of $x_2(k)$) is about $0.33$ (resp. $0.036$) in the case of S-MPCc (1) and (2), while it is about $0.25$ (resp. $0.035$) in the case of S-MPCl (1) and (2). On the other hand, the maximum variance of $u(k)$ is about $0.006$ in the case of S-MPCc, while it is $0.008$ in the case of S-MPCl.
\section{Conclusions}
\label{sec:conclusions}
The main strengths of the proposed probabilistic MPC algorithm lie in its simplicity and in its lightweight computational load, both in the offline design phase and in the online implementation. This allows for the application of the S-MPC scheme to medium/large-scale problems and to general systems affected by general disturbances.\\
Future work will focus on the use of the proposed scheme in challenging control problems, such as the control of micro-grids in the presence of renewable stochastic energy sources. The application of the algorithm to linear time-varying systems is also envisaged, while its extension to distributed implementations is currently underway.
\section*{Acknowledgements}
We are indebted to Bruno Picasso for fruitful discussions and suggestions.
\section{Introduction}
The AdS/CFT correspondence \cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj}
provides a controlled environment to study the physics of a strongly-coupled conformal field theory (CFT)
on the boundary by dealing with a weakly-coupled gravitational system in the Anti-de Sitter (AdS) bulk.
In particular, it gives a very useful semiclassical picture of entanglement
in strongly-coupled systems \cite{Ryu:2006bv,Hubeny:2007xt,Rangamani:2016dms},
by relating the entanglement entropy of a subsystem in a CFT
to the area of an extremal surface anchored at the boundary
of the subregion in the AdS dual. Holographic entanglement entropy is also important to understand black holes: for example,
at finite temperature and in the limit of large subregions, the entanglement entropy is dominated
by the thermodynamical contribution, and the Bekenstein-Hawking entropy \cite{Bekenstein:1973ur,Hawking:1974sw}
is recovered.
The AdS/CFT correspondence provides several setups to investigate the thermalisation of
out of equilibrium systems. Among these, quenches are conceptually simple theoretical settings, describing the evolution of a system triggered
by a sudden injection of energy or a change of coupling constants.
In the case of global quenches, the perturbation is spatially homogeneous
and, in the bulk, it corresponds to the formation of a black hole.
The evolution of the entanglement entropy has been studied
in several examples of quench protocols, both on the CFT
\cite{Calabrese:2005in} and on the gravity side
\cite{Hubeny:2007xt,AbajoArrastia:2010yt,Albash:2010mv,Balasubramanian:2010ce,Balasubramanian:2011ur,
Buchel:2012gw,Buchel:2013lla,Auzzi:2013pca,Buchel:2014gta,Liu:2013iza,Liu:2013qca}.
While the simplest features of the spread of entanglement can be described by a model
with free quasiparticles \cite{Calabrese:2005in}, a more detailed understanding
needs to take into account the role of interactions \cite{Casini:2015zua}.
A common feature of global quenches is that the energy is
injected in the whole space. Once some perturbation
in the entanglement entropy is detected, there is no clear way
to discriminate the point where it originated. Local quenches allow for a more transparent
analysis of the spread of quantum entanglement, because the
initial perturbation is localised in a finite region of space.
Moreover, local quenches can be realised in condensed matter systems
such as cold atoms \cite{Langen:2013,Meinert:2013} and they may provide a setup in which
entanglement entropy might be measured experimentally \cite{Klich:2008un,Cardy:2011zz}.
In CFT, local quenches can be modelled by joining
two initially decoupled field theories \cite{Calabrese:2007mtj,Calabrese:2016xau}
and then evolving with a time-translation invariant Hamiltonian.
Another approach in field theory is to consider excited states
obtained by acting with local operators on the vacuum \cite{Nozaki:2014hna,Caputa:2014vaa,Nozaki:2014uaa,Asplund:2014coa}.
In holography, the latter kind of local quenches can be described
by a free falling particle-like object in AdS \cite{Nozaki:2013wia}.
The topic of local quenches was studied by many authors
both on the CFT and on the gravity side: for example, mutual information
was considered in \cite{Asplund:2013zba} and finite temperature aspects
were investigated in \cite{Caputa:2014eta,Rangamani:2015agy,David:2016pzn}.
Bulk quantum corrections were recently studied in \cite{Agon:2020fqs}.
Local quenches obtained by splitting an initial CFT into two disconnected pieces
were considered in \cite{Shimaji:2018czt}.
In principle, the arbitrariness of choosing the falling particle on the gravity side corresponds to
several choices of local quench protocols on the field theory side.
In a local quench, it is interesting to understand which aspects of the physics are universal
and which ones depend on the details of the quenching protocol.
This raises the question of how the choice of the falling particle in AdS
affects the physics of the local quench of the boundary theory.
Two natural candidates which are free of singularities
are black holes (BH) or solitons. The black hole case was studied by several authors,
see e.g. \cite{Nozaki:2013wia,Jahn:2017xsg,Ageev:2020acl}.
In this paper we will explore the case in which the falling particle
is a soliton in AdS$_4$, focusing on the case of the 't Hooft-Polyakov monopole \cite{tHooft:1974kcl}.
Monopole solutions in global AdS$_4$ have been considered by several authors,
starting from \cite{Lugo:1999fm,Lugo:1999ai}.
AdS monopole wall configurations have been studied in \cite{Bolognesi:2010nb,Sutcliffe:2011sr,Bolognesi:2012pi,Kumar:2020hif}.
Holographic phase transitions for AdS monopoles have
been investigated in \cite{Lugo:2010qq,Lugo:2010ch,Giordano:2015vsa,Miyashita:2016aqq}.
In this paper, we will consider the theoretical setting introduced in \cite{Esposito:2017qpj}.
In this situation, specialising to a multitrace deformation,
the monopole in global AdS is dual to a theory with spontaneous symmetry breaking.
A previous study of such a model was performed numerically.
In this work, we find an approximate analytic solution for the winding-one monopole,
which includes the first-order backreaction on the metric.
Applying the change of variables introduced in \cite{Horowitz:1999gf}, we map
the time-independent global AdS$_4$ solution
to a falling monopole configuration in the Poincar\'e patch.
On the field theory side, this is dual to a
local quench in a perturbed CFT, induced by the injection
of a condensate which breaks some of the global $SU(2)$ symmetries of the theory.
Outside the quench, the global symmetry of the CFT
remains unbroken.
To investigate the field theory dual of the falling monopole,
we compute the expectation values of various local operators.
Depending on the choice of boundary conditions for the scalar,
several interpretations are possible on the CFT side.
For Dirichlet or Neumann boundary conditions, the falling monopole
is dual to a CFT deformed by a time-dependent source, which performs a non-zero external
work on the system.
For a particular choice of the multitrace deformation,
given in eq. (\ref{multitrace-speciale}),
the monopole is dual to a theory with a time-independent Hamiltonian.
In this case, the expectation value of the energy-momentum
tensor has the same functional form as the one in the background of a falling black hole \cite{Nozaki:2013wia}.
In other words, the energy density of the quench is not sensitive to the presence of the condensate.
To further characterise the field theory dual of the falling monopole,
we perturbatively compute the entanglement entropy for spherical regions.
Let us denote by $\Delta S$ the difference of entropy between the excited state and the vacuum.
We find a rather different behaviour for $\Delta S$
compared to the case of the falling black hole:
for the monopole quench, $\Delta S$
for a region centered
at the origin is always negative,
while in the BH case $\Delta S$ is positive.
The negative sign of $\Delta S$ for the monopole quench
is consistent with the expectation that the
formation of bound states, which are responsible for the condensate at the core of the quench,
corresponds to a decrease of the number of degrees of freedom \cite{Albash:2012pd}.
The paper is organised as follows. In section \ref{sect:monopole-global-AdS}
we consider a static monopole solution in global AdS and we find an analytical
solution in the regime of small backreaction.
In section \ref{sect:falling-monopole} we apply the change of variables
introduced in \cite{Horowitz:1999gf} to the global AdS monopole.
This trick transforms the global AdS static solution
to a falling monopole in the Poincar\'e patch, which
provides the holographic dual of the local quench. In section \ref{sect:boundary-interpretation}
we compute the expectation value of some local operators, including the energy-momentum tensor.
In section \ref{sect:entanglement-entropy} we study the entanglement entropy for various subsystem geometries.
We conclude in section \ref{sect:conclusions}. Some technical details
are discussed in appendices.
\section{A static monopole in global AdS }
\label{sect:monopole-global-AdS}
We consider the same theoretical setting as in \cite{Esposito:2017qpj}
which, in global AdS, is dual to a boundary theory with a spontaneously-broken $SU(2)$ global symmetry.
The action of the model is:
\begin{equation}
S=\int d^4 x \sqrt{-g} \left[ \frac{1}{16 \pi \, G} \left( R -2 \Lambda \right) + \mathcal{L}_M \right] \, ,
\end{equation}
where $ \mathcal{L}_M$ is the matter lagrangian
\begin{equation}
\mathcal{L}_M =
- \frac14 F_{\mu \nu}^a F^{a \, \mu \nu} - \frac12 D_\mu \phi^a D^\mu \phi^a - \frac{ m_\phi^2}{2} (\phi^a \phi^a) \, .
\label{Lmatter}
\end{equation}
We choose the cosmological constant and the scalar mass as follows
\begin{equation}
\Lambda=-\frac{3}{L^2} \, , \qquad m^2_\phi=-\frac{2}{L^2} \, ,
\label{Lambda-e-mphi}
\end{equation}
where $L$ is the AdS radius.
In eq. (\ref{Lmatter}), $F_{\mu\nu}=F_{\mu\nu}^a \frac{\sigma_a}{2}$ denotes
the non-abelian field strength of the $SU(2)$ gauge field $A_\mu=A_\mu^a \frac{\sigma_a}{2}$, i.e.
\begin{equation}
F_{\mu\nu}^a=\partial_\mu A^a_\nu - \partial_\nu A^a_\mu+ e \, \epsilon^{abc} A^b_\mu A^c_\nu \, ,
\end{equation}
with $e$ the Yang-Mills coupling.
The covariant derivative acting on the adjoint scalar is
\begin{equation}
D_\mu \phi_a = \partial_\mu \phi_a + e \, \epsilon_{abc} A_\mu^b \phi^c \, .
\end{equation}
The equations of motion are:
\begin{eqnarray}
&& D^\mu F^{a}_{\mu \nu}
- e \, \epsilon^{abc} \phi^b D_\nu \phi^c = 0 \, ,
\qquad
g^{\mu \nu} D_\mu D_\nu \phi^a
- m_\phi^2 \phi^a=0 \, ,
\nonumber \\
&& R_{\mu \nu}- \frac12 R g_{\mu \nu} + \Lambda g_{\mu \nu} =8 \pi G \, T_{\mu \nu} \, ,
\end{eqnarray}
where $D_\mu$ denotes the combination
of the gravitational and $SU(2)$ gauge covariant derivatives,
and $T_{\mu \nu}$ is the bulk energy-momentum tensor
\begin{equation}
T_{\mu \nu} =
D_\mu \phi^a D_\nu \phi^a +F_{a \mu \alpha} F_{a \nu}^{\,\,\,\,\, \alpha} + g_{\mu \nu} \mathcal{L}_M \, .
\label{bulk_T}
\end{equation}
We first consider the monopole in a global AdS$_4$ background, with metric
\begin{equation}
ds^2= L^2 \left( -(1+r^2) d \tau^2 +\frac{dr^2}{1+r^2} +
r^2 (d \theta^2+\sin^2 \theta d \varphi^2) \right) \, .
\label{global-AdS-1}
\end{equation}
At large $r$ the field $\phi^a$ has the following expansion
\begin{equation}
\phi^a = \alpha^a \frac{1}{r^{\Delta_1}}+ \beta^a \frac{1}{r^{\Delta_2}} + \dots \, ,
\end{equation}
where $\Delta_{1,2}$ are the dimensions of the sources and vacuum expectation values (VEV)
of the global $SU(2)$ triplet of
operators $\mathcal{O}^a$ which are dual to the scalar triplet $\phi^a$.
For our choice of mass, see eq. (\ref{Lambda-e-mphi}), the dimensions are
\begin{equation}
\Delta_1=1 \, , \qquad \Delta_2=2
\end{equation}
and both the $\alpha^a$ and the $\beta^a$ modes are normalisable.
For this reason, we can choose among different possible boundary interpretations
of the source and VEV\footnote{The subscript in the operator $\mathcal{O}^a$ refers to its dimension.}:
\begin{itemize}
\item the Dirichlet quantisation, where ${\alpha}^a$ corresponds to the source
and ${\beta}^a$ to the VEV
\begin{equation}
J_D^a = {\alpha}^a \, , \qquad \langle \mathcal{O}_2^a \rangle = \beta^a \, .
\end{equation}
\item the Neumann quantisation,
where $-{\beta}^a$ corresponds to the source
and ${\alpha}^a$ to the VEV
\begin{equation}
J_N^a=- {\beta}^a \, , \qquad \langle \mathcal{O}_1^a \rangle = {\alpha}^a \, .
\end{equation}
\item
the multitrace deformation \cite{Witten:2001ua,Berkooz:2002ug,Papadimitriou:2007sj,Faulkner:2010gj,Caldarelli:2016nni},
where $\langle \mathcal{O}_1^a \rangle = {\alpha}^a$ and
the boundary dual is deformed by the action term
\begin{equation}
S_{\mathcal{F}}= \int d^3 x \sqrt{-h} \, [ J_{\mathcal{F}}^a \, {\alpha}^a+\mathcal{F}({\alpha}^a) ]\, , \qquad
J_{\mathcal{F}}^a= -{{\beta}}^a - \frac{ \partial \mathcal{F}}{\partial {\alpha}^a}\, ,
\end{equation}
where $\mathcal{F}$ is an arbitrary function.
Imposing $J_{\mathcal{F}}^a=0$, in order to consider an isolated system, we find the boundary condition
\begin{equation}
{\beta}^a=- \frac{\partial \mathcal{F}}{\partial {\alpha}^a} \, .
\label{zero-source}
\end{equation}
\end{itemize}
If we use either Dirichlet or Neumann quantisation,
there is no non-trivial monopole solution with zero boundary scalar sources.
Multitrace deformations, instead, allow finding a monopole solution with
a zero boundary source (which satisfies eq. (\ref{zero-source}) for a suitable $\mathcal{F}$),
thus in a situation compatible
with spontaneous symmetry breaking.
\subsection{Monopole solution in the probe limit}
Let us first consider the zero backreaction limit $G \rightarrow 0$.
The monopole solution can be built by a generalisation of 't Hooft-Polyakov ansatz in global AdS$_4$
(see e.g. \cite{Lugo:1999fm,Lugo:1999ai,Esposito:2017qpj}):
\begin{equation}
\phi^a=\frac{1}{L} H(r) n^a \, , \qquad
A^a_l= F(r) r \, \epsilon^{a i k} n^k \partial_l \, n^i \, ,
\label{monopole-ansatz}
\end{equation}
where $x^l=(r,\theta,\varphi)$ and $n^k$ is the unit vector on the sphere $S^2$
\begin{equation}
n^k=(\sin \theta \cos \varphi, \sin \theta \sin \varphi, \cos \theta) \, .
\label{direzione-enne}
\end{equation}
The resulting equations of motion are shown in appendix \ref{Appe-eqs-erre}.
The regularity of the solution at small $r$ requires that
both $H(r)$ and $F(r)$ approach zero linearly in $r$.
On the other hand, at $r \rightarrow \infty$, the choice of boundary conditions
depends on the physics we want to describe on the boundary.
Such a choice is determined in terms of the coefficients $(\alpha_H,\beta_H,\alpha_F,\beta_F)$
specified in the expansion of the scalar and gauge fields near the boundary:
\begin{equation}
H(r) = \frac{\alpha_H}{r}+\frac{\beta_H}{r^2} + \dots \, ,
\qquad
F(r) = \frac{\alpha_F}{r}+\frac{\beta_F}{r^2} + \dots \, .
\label{boundarty-condition-F-H}
\end{equation}
We choose $\alpha_F=0$ in order to describe a theory
with spontaneous breaking of the $SU(2)$ global symmetry, as in \cite{Esposito:2017qpj}.
Instead, in order to get a background which is different from empty AdS,
we have to look for solutions where
$\alpha_H$ and $\beta_H$ are generically non-vanishing.
Once $\alpha_H$ is fixed, $\beta_H$ is determined by the requirement
that the solution is smooth.
In appendix \ref{flux-appendix} we compute the monopole magnetic flux,
which is independent of the boundary conditions expressed in eq. (\ref{boundarty-condition-F-H}).
An exact solution to eqs. (\ref{sistema-H-F-senza-backreaction}) can be found at the leading order in $\alpha_H$:
\begin{eqnarray}
H(r) &=& \frac{\alpha_H}{r} \, \left[ 1-\frac{\tan^{-1} r}{r} \right] \, ,
\nonumber \\
F(r) &=& \frac{ e \, \alpha _H^2}{16 r^3} \, \left[ \pi ^2 r^2-4 \left(r^2+1\right) (\tan ^{-1} r )^2-\left(\pi
^2-4\right) r \tan ^{-1} r \right] \, .
\label{H-F-analitico}
\end{eqnarray}
Such a solution entails the following coefficients
\begin{equation}
\label{beta-H-analitico}
\beta_H=-\frac{\pi}{2} \alpha_H \, , \qquad
\beta_F=e \, \alpha_H^2 \, \frac{12 \pi -\pi^3}{32} \, .
\end{equation}
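As a consistency check, the coefficients in eq. (\ref{beta-H-analitico}) can be recovered by expanding the analytic solution (\ref{H-F-analitico}) near the boundary with a computer algebra system; a minimal sympy sketch, expanding in the boundary variable $u=1/r$:
\begin{verbatim}
# Check the boundary expansion of the analytic probe-limit solution.
import sympy as sp

r, aH, e = sp.symbols('r alpha_H e', positive=True)
u = sp.symbols('u', positive=True)            # boundary variable u = 1/r

H = aH/r*(1 - sp.atan(r)/r)
F = e*aH**2/(16*r**3)*(sp.pi**2*r**2 - 4*(r**2 + 1)*sp.atan(r)**2
                       - (sp.pi**2 - 4)*r*sp.atan(r))

print(sp.series(H.subs(r, 1/u), u, 0, 3))
# alpha_H*u - pi*alpha_H*u**2/2 + ...        ->  beta_H = -pi*alpha_H/2
print(sp.series(F.subs(r, 1/u), u, 0, 3))
# e*alpha_H**2*(12*pi - pi**3)/32*u**2 + ... ->  beta_F, with alpha_F = 0
\end{verbatim}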
At higher order in $\alpha_H$, eqs. (\ref{sistema-H-F-senza-backreaction})
can be solved numerically. To this purpose, it is convenient
to use the compact variable $\psi$ defined by
\begin{equation}
r=\tan \psi \, .
\end{equation}
The equations of motion in the variable $\psi$ can be found in appendix \ref{Appe-eqs-psi}.
An example of numerical solution is shown in figure \ref{monopole-profiles-ads}.
A plot of $(\alpha_H,\beta_H)$ for various numerical solutions is shown in figure \ref{alfa-beta}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.52]{monop-ads-H.pdf} \qquad
\includegraphics[scale=0.52]{monop-ads-F.pdf}
\caption{
Numerical solutions for $H(\psi)$ and $F(\psi)$ are shown in black
(the values $e=1$, $\alpha_H=1$ have been used).
As a comparison, the analytical approximations (\ref{H-F-analitico})
are shown in blue.
}
\label{monopole-profiles-ads}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{alfa-beta.pdf}
\caption{ Plot of the relation between the coefficients $\alpha_H$ and $\beta_H$
for different values of $e$. At small $\alpha_H$, eq. (\ref{beta-H-analitico}) is satisfied.
}
\label{alfa-beta}
\end{center}
\end{figure}
\subsection{Monopole backreaction}
We now introduce the monopole backreaction, modelled by the metric
\begin{equation}
ds^2= L^2 \left( -(1+r^2) h(r) g(r) d\tau^2 +\frac{h(r)}{g(r)} \frac{dr^2}{1+r^2}
+ r^2 (d \theta^2+\sin^2 \theta d \varphi^2) \right) \, .
\label{monopole-backreaction}
\end{equation}
In order to recover asymptotically global AdS, we impose the following boundary conditions at large $r$
\begin{equation}
\lim_{r \rightarrow \infty} h=\lim_{r \rightarrow \infty} g=1 \, .
\end{equation}
The full set of equations of motion in this background is given in appendix \ref{Appe-eqs-erre}. \\
The asymptotic form of the equations of motion fixes the following large $r$ expansions
\begin{equation}
h(r) = 1 + \frac{h_2}{r^2}+\frac{h_3}{r^3} + O(1/r^4) \, ,
\qquad
g(r) = 1 + \frac{g_2}{r^2}+\frac{g_3}{r^3} + O(1/r^4) \, ,
\label{hg-expansion}
\end{equation}
with
\begin{equation}
g_2=-h_2=\frac{2 \pi G \alpha_H^2 }{L^2} \, , \qquad h_3= -\frac{16 \pi G}{3 L^2} \alpha_H \beta_H \, .
\end{equation}
The unfixed parameter $g_3$ can be found by requiring that the solution is smooth.
At the leading order in $\alpha_H$, the $H(r)$ and $F(r)$ solutions
are still given by eq. (\ref{H-F-analitico}). The leading-order backreaction on the metric
can be solved analytically too, giving:
\begin{equation}
h(r)=1 + \epsilon \, h_\epsilon + O(\epsilon^2) \, , \qquad
g(r)=1 + \epsilon \, g_\epsilon + O(\epsilon^2) \, , \qquad
\epsilon=\frac{\pi \, G \, \alpha_H^2 }{L^2} \, ,
\label{epsilon-definition}
\end{equation}
where
\begin{eqnarray}
\label{h-g-analitico}
h_\epsilon &=& \pi ^2 -\frac{4}{r^2} - \frac{2}{r^2+1}
-4 \frac{2 \left(r^2-1\right) r \tan^{-1} r + \left(r^4+1\right) (\tan^{-1} r)^2 }{r^4} \, , \nonumber \\
\nonumber
g_\epsilon &=& \pi ^2 + \frac{1}{r^2} -
\frac{ 2 r \tan^{-1} r+3 }{ r^2} \left(1-\frac{2}{1+r^2} \right) \nonumber \\
&&
-2 ( \tan^{-1} r ) \frac{ 2 \left(r^4-1\right) \tan^{-1} r +r \left(3 r^2+4\right) }{ r^4}
\, .
\end{eqnarray}
These solutions set
\begin{equation}
g_3=-\frac{10 \, \pi^2 \, G \, \alpha_H^2 }{3 \, L^2} \, .
\label{g3-analitico}
\end{equation}
The profile functions at higher order in $\alpha_H$ are again accessible by numerically solving the equations of motion. As in the probe limit, it is convenient to introduce the variable $\psi =\tan^{-1} r$, getting the equations of motion shown in appendix \ref{Appe-eqs-psi}.
A comparison between the numerical and the analytical solutions at small $\alpha_H$
is shown in figure \ref{monopole-profiles-ads-back}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.52]{monop-ads-back-h.pdf} \qquad
\includegraphics[scale=0.52]{monop-ads-back-g.pdf}
\caption{
Numerical solutions for the metric functions $h(\psi)$ and $g(\psi)$
are shown in black (the values $e=1$, $\alpha_H=1$, $L=1$ and $G=0.1$ have been used).
As a comparison, the analytical approximations (\ref{h-g-analitico})
are shown in blue.
}
\label{monopole-profiles-ads-back}
\end{center}
\end{figure}
\section{A falling monopole in Poincar\'e patch}
\label{sect:falling-monopole}
The gravity dual of local quench in a CFT can be realised by
considering a falling particle in AdS \cite{Nozaki:2013wia}.
To this purpose, a nice trick was introduced in \cite{Horowitz:1999gf}.
The idea is to start from a spherically symmetric geometry
in global AdS, and to transform it to a time-dependent
Poincar\'e AdS geometry by performing a change of variables.
The Poincar\'e AdS$_4$ metric with coordinates $(t,z,x,{\varphi})$ is
\begin{equation}
ds^2=L^2 \left( \frac{dz^2 -dt^2 +d x^2 +x^2 d {\varphi}^2}{z^2} \right) \, .
\label{poincare-AdS}
\end{equation}
The metric in eq. (\ref{poincare-AdS}) and the global AdS metric
in eq. (\ref{global-AdS-1}) can be mapped into each other
via the coordinate transformations
\begin{eqnarray}
\label{A-changing}
\sqrt{1+ r^2} \cos \tau &=& \frac{A^2+ z^2+ x^2-t^2 }{2 A \, z} \, , \qquad
\sqrt{1+ r^2} \sin \tau = \frac{ t}{z} \, ,
\nonumber \\
r \, \sin \theta &=& \frac{ x}{z} \, , \qquad r \, \cos \theta = \frac{ z^2+x^2-t^2 - A^2 }{2 A \, z} \, ,
\end{eqnarray}
leaving the angular coordinate ${\varphi}$ unchanged. \\
These transformations can be inverted as follows:
\begin{eqnarray}
\label{A-changing-inverse}
r &=& \frac{\sqrt{A^4 + 2 A^2 (t^2+ x^2-z^2) + (z^2 + x^2 -t^2)^2 } }{2 A \, z} \, ,
\nonumber \\
\tau &=& \tan^{-1} \left( \frac{2 A \, t}{ z^2+x^2-t^2 + A^2} \right) \, ,
\nonumber \\
\theta &=& \tan^{-1} \left( \frac{2 A \, x}{ z^2+x^2-t^2 - A^2 } \right) \, .
\end{eqnarray}
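As a sanity check, the following snippet verifies numerically that the two sets of transformations are mutually consistent at a generic bulk point (the two-argument arctangent is used to select the correct branch):
\begin{verbatim}
# Round-trip check of the global AdS <-> Poincare coordinate maps.
import numpy as np

A = 1.0
t, z, x = 0.7, 0.4, 1.3        # an arbitrary Poincare-patch point (z > 0)

r = np.sqrt(A**4 + 2*A**2*(t**2 + x**2 - z**2)
            + (z**2 + x**2 - t**2)**2)/(2*A*z)
tau = np.arctan2(2*A*t, z**2 + x**2 - t**2 + A**2)
theta = np.arctan2(2*A*x, z**2 + x**2 - t**2 - A**2)

# residuals of the direct map; all four should vanish
print(np.sqrt(1 + r**2)*np.cos(tau) - (A**2 + z**2 + x**2 - t**2)/(2*A*z))
print(np.sqrt(1 + r**2)*np.sin(tau) - t/z)
print(r*np.sin(theta) - x/z)
print(r*np.cos(theta) - (z**2 + x**2 - t**2 - A**2)/(2*A*z))
\end{verbatim}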
The change of variables in eq. (\ref{A-changing}) maps
a configuration with a static particle in the center of global AdS
to a falling particle in the Poincar\'e patch, that can be used to model a local quench.
We will apply this method to the monopole solution we discussed in section
\ref{sect:monopole-global-AdS}.
The holographic quench is symmetric under time reversal $t \to -t$:
for $t<0$ the monopole is approaching the boundary, while for $t>0$
it moves in the direction of the bulk interior.
Physically, we can think of the initial condition at $t=0$ as
the initial out-of-equilibrium state, which can be prepared
in the dual conformal field theory by some
appropriate operator insertion.
The position of the monopole center, corresponding to $r=0$ in global AdS,
in the Poincar\'e patch is time-dependent and follows the curve
\begin{equation}
x=0 \, , \qquad z = \sqrt{t^2 +A^2} \, .
\label{traiettoria-monopolo}
\end{equation}
In the approximation in which the monopole is a pointlike particle,
eq. (\ref{traiettoria-monopolo}) can be interpreted as the trajectory of the monopole.
From the gravity side, the parameter $A$ can be interpreted as the initial position along the $z$-direction
of the free-falling monopole.
From the CFT perspective, the parameter $A$ fixes the size of the local quench.
\subsection{Bulk energy density of the falling monopole}
\label{subsec-energy density and flux}
One may be tempted to imagine the monopole as a pointlike particle
which is falling along the trajectory in eq. (\ref{traiettoria-monopolo}).
To check this intuition, it is natural to consider
the bulk energy-momentum tensor (\ref{bulk_T}).
Working in the limit of negligible monopole backreaction,
we perform the coordinate change in eq. (\ref{A-changing-inverse})
\begin{equation}
x^{\mu} = \left( \tau \, , r \, , \theta \, , \varphi \right) \rightarrow x'^{\mu} = \left( t \, , z \, , x \, , \varphi \right) \, .
\end{equation}
The energy-momentum tensor in Poincar\'e patch is given by
\begin{equation}
T_{\alpha \beta}' \left( x' \right) = \frac{\partial x^{\mu}}{\partial x'^{\alpha}}
\frac{\partial x^{\nu}}{\partial x'^{\beta}} \, T_{\mu \nu}\left( x \right) \, .
\end{equation}
To properly normalise the energy-momentum tensor, we introduce the vierbein $e^{\mu}_m$ such that
\begin{equation}
T'_{mn} \left( x' \right) = e^\mu_m e^\nu_n \, T'_{\mu \nu} \left( x' \right) \, ,
\qquad
e^\mu_m \, e^\nu_n \, g'_{\mu \nu} = \eta_{mn} \, ,
\end{equation}
where $g'_{\mu \nu}$ and $\eta_{mn}$ are the Poincar\'e AdS and the Minkowski
metric tensors, respectively. In particular, we choose\footnote{In this section,
the Minkowski indices $m,n$ take the values $0,1,2,3$,
while the curved spacetime indices are $t,z,x,{\varphi}$.}
\begin{equation}
e^\mu_0 = \left( \frac{z}{L} \, , 0 \, , 0 \, , 0 \right) \, , \qquad e^\mu_1 = \left( 0 \, , \frac{z}{L} \, , 0 \, , 0 \right) \, , \qquad e^\mu_2 = \left( 0 \, , 0 \, , \frac{z}{L} \, , 0 \right) \, .
\end{equation}
The energy density as measured in such an orthonormal frame is
\begin{equation}
\rho = T'^{00} = \frac{z^2}{L^2} \, T'_{tt} \, ,
\end{equation}
and the components of the Poynting vector $\vec{s}=(s_z, s_x,s_{\varphi})$ are
\begin{equation}
s_z = T'^{01} = - \frac{z^2}{L^2} \, T'_{tz} \, , \qquad s_x = T'^{02} = - \frac{z^2}{L^2} \, T'_{tx} \, , \qquad s_{\varphi}= 0 \, .
\end{equation}
In figure \ref{EM00_monopole-plot} and \ref{EMpointing_monopole-plot}
we show the numerical results for the energy density and the energy flux into the bulk at fixed time.
\begin{figure}
\begin{center}
\includegraphics[scale=0.45]{en_density_t0}
\qquad
\includegraphics[scale=0.45]{en_density_t3}
\qquad
\includegraphics[scale=0.45]{en_density_t5}
\qquad
\includegraphics[scale=0.45]{en_density_t7}
\caption{Contour lines of constant energy density for fixed time.
The monopole center is represented by the black spot.
The numerical values $A=1$ and $L=1$ have been chosen. }
\label{EM00_monopole-plot}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.45]{pointing_TOT}
\caption{Direction of the bulk Poynting vector for fixed time.
The numerical values $A=1$ and $L=1$ have been chosen.}
\label{EMpointing_monopole-plot}
\end{center}
\end{figure}
The pictures clearly illustrate how the energy density,
initially localised near the AdS boundary, spreads into the bulk.
The energy distribution resembles that of a pointlike particle only at early times,
while at late times the energy is spread along a spherical wavefront.
At $t=0$ all the components of the Poynting vector vanish,
implying that there is no energy flux at initial time.
\section{Expectation values of local operators}
\label{sect:boundary-interpretation}
To understand the physical interpretation on the boundary CFT,
it is useful to study the expectation values of some important local
operators. In particular, in this section we will focus on expectation values
of scalar operators, global $SU(2)$ currents and energy-momentum tensor.
The details depend on the boundary conditions chosen
for the scalar field $\phi^a$. In general, the boundary energy-momentum tensor $T_{mn}$
is not conserved because the external sources perform work on the system.
With a particular choice of multitrace deformation, see eq. (\ref{multitrace-speciale}),
the system is isolated and $T_{mn}$ is conserved.
\subsection{Boundary conditions for the scalar}
In the AdS/CFT correspondence, the asymptotic behaviour of the bulk fields
is dual to the source and expectation values of operators in the CFT.
For this reason, we will focus on the asymptotics of the scalar field $\phi^a$
near the boundary.
In global AdS, the direction $n^a$ of $\phi^a$ in the internal $SU(2)$ space is
given by eq. (\ref{direzione-enne}).
In Poincar\'e patch, by performing the coordinate transformation in eq. (\ref{A-changing-inverse})
we find that, nearby the boundary at $z=0$,
\begin{equation}
n^a=\frac{1}{\omega^{1/2}} \, \left(-2 A x \cos (\varphi ),-2 A x \sin (\varphi ),A^2+t^2-x^2\right)
+O(z^2) \, ,
\label{enne-poincare}
\end{equation}
where, for convenience, we introduce the quantity $\omega(x,t)$
that appears in many subsequent expressions
\begin{equation}
\omega(x,t)=A^4+2 A^2 \left(t^2+x^2\right)+\left(t^2-x^2\right)^2 \, .
\label{omega}
\end{equation}
The core of the quench can be thought of as localised at
\begin{equation}
x=\sqrt{t^2 + A^2} \, ,
\label{cono-luce-largo}
\end{equation}
which, at large $t$, coincides to a good approximation with the lightcone of the origin, $x=t$.
For the value in eq. (\ref{cono-luce-largo}),
the adjoint scalar field points in the direction $n=n^a \sigma^a$ given by
\begin{equation}
n=- (\sigma_1 \cos {\varphi} + \sigma_2 \sin {\varphi}) \, .
\end{equation}
The scalar points along the $\sigma_3$ direction inside the lightcone,
and along $-\sigma_3$ outside the lightcone, see figure \ref{direction-fig}.
As we will see later, at large $t$, the absolute value of the
scalar field is peaked on $x$ given by eq. (\ref{cono-luce-largo}),
and is almost zero both inside and outside the lightcone.
The configuration at $t=0$ resembles a baby skyrmion,
with the field pointing along $\sigma_3$ in the core and along $-\sigma_3$ far away
(strictly speaking, it is not a skyrmion because the VEV tends to zero at infinity).
As time increases, this configuration expands along the lightcone.
At large time we end up with two regions of vacuum
(inside and outside lightcone) separated by an expanding
shell of energy.
\begin{figure}
\begin{center}
\includegraphics[scale=0.52]{scalar-direction.pdf}
\caption{ Value of $n^3$ as a function of $(t,x)$ for $A=1$.
Negative values of the radial cylindrical coordinate $x$ correspond to $\varphi \to - \varphi$.
}
\label{direction-fig}
\end{center}
\end{figure}
In order to extract the sources and the expectation values of the local operator
triggering the quench, it is useful to expand the change of variables in eq. (\ref{A-changing-inverse})
near the boundary.
The global AdS radial coordinate reads
\begin{equation}
r=\frac{a}{z} +O(z)\, , \qquad
a= \frac{ \omega^{1/2 }}{2 A} \, .
\label{erre-vs-zeta}
\end{equation}
By means of eq. (\ref{erre-vs-zeta}),
we obtain the boundary expansion of $H(r)$
\begin{equation}
H=\frac{\alpha_H}{r} + \frac{\beta_H}{r^2}+O(r^{-3}) =
\tilde{\alpha}_H z + \tilde{\beta}_H z^2 + O(z^3) \, ,
\label{H-poincare}
\end{equation}
where
\begin{equation}
\tilde{\alpha}_H = \frac{\alpha_H}{a} = \alpha_H \, \frac{2 A}{\omega^{1/2}} \, ,
\qquad
\tilde{\beta}_H = \frac{\beta_H}{a^2} = \beta_H \, \frac{4 A^2}{ \omega} \, .
\label{alfa-e-beta-tilde}
\end{equation}
A plot of $ \tilde{\alpha}_H $ and $ \tilde{\beta}_H $ is shown in figure \ref{alpha-beta-tilda}.
It is interesting to note that
\begin{equation}
\frac{\tilde{\beta}_H}{\tilde{\alpha}_H^2}=\frac{\beta_H}{\alpha_H^2} = \kappa \, ,
\label{alfa-e-beta-tilde-rel}
\end{equation}
where $\kappa$ is a constant. In the limit of small backreaction, from eq. (\ref{beta-H-analitico})
we find
\begin{equation}
\kappa=-\frac{\pi}{ 2 \alpha_H} \, .
\end{equation}
\begin{figure}
\begin{center}
\includegraphics[scale=0.52]{alpha-tilde-H.pdf} \qquad
\includegraphics[scale=0.52]{beta-tilde-H.pdf}
\caption{ The quantities ${\tilde{\alpha}}_H$ (left) and ${\tilde{\beta}}_H$ (right) as a function of $(t,x)$.
Here we set $A=1$, $\alpha_H=1$ and we use the relations in eq. (\ref{beta-H-analitico}),
valid for small backreaction.
}
\label{alpha-beta-tilda}
\end{center}
\end{figure}
Combining eqs. (\ref{enne-poincare})
and (\ref{H-poincare}),
the expansion of $\phi^a$ near the Poincar\'e patch boundary is
\begin{equation}
\phi^a= \frac{H(z)}{L} n^a =\frac{1}{L} \left( {\tilde{\alpha}}^a \, z + {\tilde{\beta}}^a \, z^2 + O(z^3) \right) \, , \qquad
{\tilde{\alpha}}^a=n^a \tilde{\alpha}_H \, , \qquad
{\tilde{\beta}}^a=n^a \tilde{\beta}_H \, .
\end{equation}
As for the global AdS case, we can consider several quantisations for the scalar field $\phi^a$:
\begin{itemize}
\item the Dirichlet condition, where $\tilde{\alpha}_H$ corresponds to the source
and $\tilde{\beta}_H$ to the VEV
\begin{equation}
J_D^a = \tilde{\alpha}^a \, , \qquad \langle \mathcal{O}_2^a \rangle = \tilde{\beta}^a \, .
\end{equation}
\item the Neumann condition, where $-\tilde{\beta}_H$ corresponds to the source
and $\tilde{\alpha}_H$ to the VEV
\begin{equation}
J_N^a= - \tilde{\beta}^a \, , \qquad \langle \mathcal{O}_1^a \rangle = \tilde{\alpha}^a \, .
\end{equation}
\item
the multitrace deformation, where
the boundary dual is deformed by the action term
\begin{equation}
S_{\mathcal{F}}= \int d^3 x \sqrt{-h} \, [ J_{\mathcal{F}}^a \, \tilde{\alpha}^a+\mathcal{F}(\tilde{\alpha}^a) ]\, , \qquad
J_{\mathcal{F}}^a= -{\tilde{\beta}}^a - \frac{ \partial \mathcal{F}}{\partial \tilde{\alpha}^a}\, ,
\end{equation}
and $\langle \mathcal{O}_1^a \rangle = \tilde{\alpha}^a$.
\end{itemize}
All these boundary conditions correspond, in general,
to a monopole in the presence of external time-dependent sources.
Among such possible choices of boundary conditions, it is
interesting to consider the multitrace deformation with
\begin{equation}
\mathcal{F}_\kappa(\tilde{\alpha}^a) = -\frac{\kappa}{3} \left( \tilde{\alpha}^a \tilde{\alpha}^a \right)^{3/2}
=-\frac{\kappa}{3} \tilde{\alpha}_H^3 \, .
\label{multitrace-speciale}
\end{equation}
In this case, the monopole is a solution with a vanishing source, because it satisfies
\begin{equation}
\tilde{\beta}^a=- \frac{\partial \mathcal{F}}{\partial \tilde{\alpha}^a} \, ,
\label{zero-source-poincare}
\end{equation}
as can be checked from eq. (\ref{alfa-e-beta-tilde-rel}).
\subsection{The boundary global currents}
Our monopole ansatz in global AdS is given by eq.
(\ref{monopole-ansatz}), with boundary conditions in
eq. (\ref{boundarty-condition-F-H}) and with $\alpha_F=0$.
As a consequence, we deduce that in Poincar\'e patch the gauge field
$A^a_\mu$ vanishes at the boundary $z=0$.
In other words, if the sources for the global symmetries are set to zero in global AdS,
they also vanish after the change of coordinates leading to the Poincar\'e patch.
From the order-$z$ terms in the boundary expansion of $A^a_\mu$ we can extract the expectation values of the three currents $J^a_l$
\begin{eqnarray}
\langle J^1_l \rangle&=& \frac{8 A^2 \beta _F}{\omega^{3/2}}
\left( t x \sin (\varphi ),-\frac{1}{2} \sin (\varphi )
\left(A^2+t^2+x^2\right),-\frac{1}{2} x \cos (\varphi )
\left(A^2+t^2-x^2\right) \right) \, , \nonumber \\
%
\langle J^2_l \rangle &=& \frac{8 A^2 \beta _F}{\omega^{3/2}}
\left( -t x \cos (\varphi ),\frac{1}{2} \cos (\varphi )
\left(A^2+t^2+x^2\right),-\frac{1}{2} x \sin (\varphi )
\left(A^2+t^2-x^2\right)\right) \, ,\nonumber \\
\langle J^3_l \rangle &=& \frac{8 A^2 \beta _F}{\omega^{3/2}}
\left( 0,0,-A x^2 \right) \, ,
\end{eqnarray}
where $x^l=(t,x,\varphi)$ are the boundary spacetime coordinates,
and $a=1,2,3$ is the $SU(2)$ global index.
A plot of the charge density $J^2_t$ is shown in figure \ref{density-fig}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.52]{density.pdf}
\caption{Charge density of the second component of isospin $J^2_t$ as a function of $(t,x)$
for ${\varphi}=0$ (positive $x$) and ${\varphi}= \pi$ (negative $x$).
We set $A=1$, $\alpha_H=1$ and $e=1$ and we use the relations in eq. (\ref{beta-H-analitico}),
valid for small backreaction.
The peak and the pit correspond to global charges of positive and negative sign, which are pulled apart
from each other by the quench.
}
\label{density-fig}
\end{center}
\end{figure}
It is interesting to compare the direction in the $SU(2)$ space
of the current expectation value with the direction of the scalar expectation value $n^a$.
We find that the expectation value of the global current is always orthogonal to the direction of the scalar
expectation value in the isospin space
\begin{equation}
\langle J^a_m \rangle \, n^a = 0 \, .
\end{equation}
Hence the quench breaks all three generators of the global symmetry group.
This holds only on top of the ``lightcone'' in eq. (\ref{cono-luce-largo});
inside and outside this surface
both the scalar and the current expectation values tend to zero
and the $SU(2)$ global symmetry of the boundary theory is unbroken.
\subsection{The boundary energy-momentum tensor}
For illustrative purposes, we will compare the result with the one obtained for
a quench modelled by a falling black hole, studied in \cite{Nozaki:2013wia}.
In this case the metric is
\begin{equation}
ds^2= L^2 \left( -\left( 1+r^2-\frac{M}{r} \right) d\tau^2 +\frac{dr^2}{1+r^2-\frac{M}{r}}
+ r^2 (d \theta^2+\sin^2 \theta d \varphi^2) \right) \, ,
\label{metric-BH}
\end{equation}
where $M$ is a dimensionless parameter proportional to the black hole mass.
In order to find the quench background, we apply the change of variables in eq. (\ref{A-changing}).
Then, to extract the energy-momentum tensor with the method in \cite{Balasubramanian:1999re},
it is convenient to pass to Fefferman-Graham (FG) coordinates. Details of the calculation are
in appendix \ref{appe-energy-momentum-tensor:BH}, where
expressions for all the components of the energy-momentum tensor can also be found, see eq. (\ref{T-cono-luce-BH}).
In particular, the energy density is
\begin{equation}
T_{tt}^{(BH)}=\frac{A^3 L^2 M }{\pi \, G } \,\,
\frac{ \omega + 6 t^2 x^2 } { \omega^{5/2}} \, .
\end{equation}
The energy-momentum tensor is conserved and traceless, and
the total energy is
\begin{equation}
\mathcal{E}^{(BH)}= \frac{ M \, L^2 }{ 2 \, G A } \, .
\end{equation}
In the falling monopole case a more careful discussion is needed, since
the boundary energy-momentum tensor depends on the details of the boundary conditions of the bulk fields.
We first focus on Dirichlet boundary conditions \cite{deHaro:2000vlm}.
Starting from the backreacted metric in eq. (\ref{monopole-backreaction}),
we apply the change of variables in eq. (\ref{A-changing}).
The intermediate metric expression is rather cumbersome, so in appendix \ref{appe-energy-momentum-tensor:D} we only specify the coordinate expansion that puts it in the FG form. All the non-vanishing components of the energy-momentum tensor $T_{mn}^{(D)}$ obtained from such a metric are given in eq. (\ref{T-cono-luce-D}).
The energy density is:
\begin{eqnarray}
T_{tt}^{(D)}&=&\frac{A^3 }{3 \pi G \, \omega^{5/2}}
\left[ 48 \pi G \alpha _H \beta _H \, x^2 t^2
+ (8 \pi G \alpha _H \beta _H-3 g_3 L^2 ) (\omega + 6 t^2 x^2)
\right] \nonumber \\
&=&
2 \pi \alpha_H^2 A^3 \, \frac{ \omega + 2 t^2 x^2 }{ \omega^{5/2}} \, ,
\end{eqnarray}
where in the second line we have inserted the analytic approximations for small backreaction in eqs. (\ref{beta-H-analitico})
and (\ref{g3-analitico}).
In this limit, the total energy is
\begin{equation}
\mathcal{E}^{(D)}= \frac{ \pi^2 \alpha_H^2 }{A} \left( 1- \frac{ 2}{3 } \frac{t^2}{A^2 +t^2}\right) \, .
\end{equation}
The energy is a decreasing function of time, meaning that the Dirichlet boundary
conditions absorb energy from the bulk.
The non-conservation of energy motivates the investigation of a different quantisation.
A change of the quantisation conditions shifts $T_{mn}^{(D)}$ by finite terms (see e.g. \cite{Caldarelli:2016nni}).
Here we specialise to a class of multitrace deformations that do not break the $SU(2)$
global symmetry.
Assuming that
\begin{equation}
\tilde{\alpha}^a= n^a \tilde{\alpha}_H \, , \qquad
\tilde{\beta}^a= n^a \tilde{\beta}_H \, ,
\end{equation}
which is true for the monopole, we can
write the source as
\begin{equation}
J_{\mathcal{F}}^a = n^a \, J_{\mathcal{F}} \, .
\end{equation}
As a function $\mathcal{F}$ parameterising the multitrace deformation we choose
\begin{equation}
\mathcal{F}(\tilde{\alpha}^a)=\mathcal{F}(\tilde{\alpha}^a \tilde{\alpha}^a)=\mathcal{F}(\tilde{\alpha}_H) \, .
\end{equation}
The current can be written in terms of $\tilde{\alpha}_H,\tilde{\beta}_H$ as follows
\begin{equation}
J_{\mathcal{F}}=-\tilde{\beta}_H - \mathcal{F}'(\tilde{\alpha}_H) \, .
\end{equation}
The energy-momentum tensor (see appendix \ref{appe-energy-momentum-tensor:multitrace} for further details) is
\begin{equation}
T^{(\mathcal{F})}_{ij}= T_{ij}^{(D)}+\eta_{ij} [\mathcal{F}({\tilde{\alpha}}_H) -{\tilde{\alpha}}_H {\tilde{\beta}}_H - \mathcal{F}'({\tilde{\alpha}}_H) {\tilde{\alpha}}_H ] \, .
\end{equation}
Note that this result also applies to the Neumann conditions, that can be seen as a multitrace deformation with $\mathcal{F}=0$.
If we instead specialise to $\mathcal{F}=\mathcal{F}_\kappa$, see eq. (\ref{multitrace-speciale}),
the external source is zero and the energy-momentum tensor is conserved.
Moreover, an explicit computation reveals that the energy-momentum tensor
has the same functional form as the one for the falling BH:
\begin{equation}
T^{(\kappa)}_{ij}= \frac{16 \pi G \alpha_H \beta_H-3 L^2 g_3}{3 L^2 M} \, T_{ij}^{(BH)}
\, .
\label{Tkappa-monopole}
\end{equation}
Using the analytic values for small backreaction of $\beta_H$ and $g_3$ in
eqs. (\ref{beta-H-analitico}) and (\ref{g3-analitico}), we find
\begin{equation}
T^{(\kappa)}_{ij}=\frac{2 \pi^2 G \, \alpha_H^2}{3 L^2 M} \, T_{ij}^{(BH)} \, , \qquad
T_{tt}^{(\kappa)} =
\frac{2 \pi \alpha_H^2 A^3}{3 } \,
\frac{ \omega + 6 t^2 x^2 } { \omega^{5/2}} \, .
\end{equation}
The total energy is
\begin{equation}
\mathcal{E}^{(\kappa)}=\frac{ \pi^2 \alpha_H^2 }{3 A } \, .
\end{equation}
As apparent from eq. (\ref{Tkappa-monopole}), the energy-momentum tensor
is not a precise enough probe to distinguish between a falling monopole and a falling
black hole in the bulk. In the next section, we will see that the entanglement entropy
of these two falling objects behaves instead in radically different ways.
\section{Holographic entanglement entropy}
\label{sect:entanglement-entropy}
In this section we will study the effect of
the leading-order backreaction on holographic entanglement entropy.
It is useful to take $\epsilon$, defined in eq. (\ref{epsilon-definition}), as
an expansion parameter. In the asymptotically global AdS case, the metric at the leading order in $\epsilon$ is
\begin{equation}
ds^2= L^2 \left( -(1+r^2) \left[ 1+ \epsilon \, (h_\epsilon + g_\epsilon) \right] d\tau^2 + \left[ 1+ \epsilon \, (h_\epsilon - g_\epsilon ) \right] \frac{dr^2}{1+r^2}
+ r^2 (d \theta^2+\sin^2 \theta d \varphi^2) \right) \, ,
\label{monopole-backreaction-epsilon}
\end{equation}
where $h_\epsilon$ and $g_\epsilon$ are given by eq. (\ref{h-g-analitico}).
We will be interested in the evolution of entanglement entropy for the quench
in Poincar\'e patch, so we apply the change of variables in eq. (\ref{A-changing-inverse}),
obtaining a time-dependent background.
The metric tensor can be written as follows
\begin{equation}
g_{\mu \nu}=g_{\mu \nu}^{(0)} + \epsilon \, g_{\mu \nu}^{(1)} + O(\epsilon^2) \, , \qquad \epsilon=\frac{\pi G \, \alpha_H^2 }{L^2} \, .
\end{equation}
Given a codimension-two surface $x^{\mu} \left( y^\alpha \right)$ parameterised with coordinates $y^\alpha = \left( y^1, y^2 \right) $, the induced metric is
\begin{equation}
G_{\alpha \beta} = \frac{\partial x^\mu}{\partial y^\alpha} \frac{\partial x^\nu}{\partial y^\beta} g_{\mu \nu} \, .
\end{equation}
Such an induced metric
can also be expanded as a power series in $\epsilon$
\begin{equation}
\label{Gk}
G_{\alpha \beta}=G_{\alpha \beta}^{(0)} + \epsilon \, G_{\alpha \beta}^{(1)} + O(\epsilon^2) \, , \qquad
G_{\alpha \beta}^{(k)} = \frac{\partial x^\mu}{\partial y^\alpha} \frac{\partial x^\nu}{\partial y^\beta} g_{\mu \nu}^{(k)} \, , \qquad k=0,1 \, .
\end{equation}
We can compute the change of area of the Ryu-Takayanagi (RT) surface at the leading order in $\epsilon$, as in \cite{Nozaki:2013wia}.
To this purpose, we can expand the determinant of the metric in the area functional.
The first order term of this expansion is
\begin{equation}
\Delta \mathcal{A}=\frac{\epsilon }{2} \int d^2 y \, \sqrt{G^{(0)}} \, {\rm Tr} \left[ G^{(1)} (G^{(0)})^{-1} \right] \, .
\label{Delta-area}
\end{equation}
It is important to note that, at first order, it is enough to work with the unperturbed RT surface $x^{\mu} \left( y^\alpha \right)$, which considerably simplifies the computation.
The difference in entropy between the excited state and the vacuum at the leading order
is proportional to eq. (\ref{Delta-area})
\begin{equation}
\Delta S = \frac{\Delta \mathcal{A}}{4 G} \, .
\end{equation}
We will apply this procedure to various examples of subregions.
\subsection{Disk centered at the origin}
We take as a boundary subregion a disk of radius $l$ centered at $x=0$ and
lying at constant time $t$. The RT surface in unperturbed Poincar\'e patch of AdS$_4$ is the half sphere
\begin{equation}
z=\sqrt{l^2-x^2} \, .
\label{minimal-surface}
\end{equation}
From eqs. (\ref{Delta-area}) and (\ref{monopole-backreaction-epsilon}) we obtain
\begin{equation}
\label{entropy-centrata}
\Delta S(l,t) = \frac{\pi^2 \, \alpha_H^2}{4} \frac{1}{l} \int_0^l
\frac{ \left(h_{\epsilon }- g_{\epsilon } \right) x^3 }
{ \left(l^2-x^2\right)^{3/2}} \,
\frac{\omega(l,t) }{ \left(A^2-l^2+t^2\right)^2+4 A^2 x^2 }
\, dx \, ,
\end{equation}
where $\omega$ is defined in eq. (\ref{omega}). The functions $h_{\epsilon}$ and $g_{\epsilon}$ depend on the variable $r$, which
on top of the RT surface reads
\begin{equation}
\label{erre-RT-centrata}
r= \frac{ \sqrt{\left(A^2-l^2+t^2\right)^2+4 A^2 x^2 } }{2 A \sqrt{l^2-x^2} } \, .
\end{equation}
For the entropy, the $A$ dependence can be completely reabsorbed
by the following rescaling of the quantities $l$, $x$ and $t$
\begin{equation}
l \to \frac{l}{A} \, , \qquad x \to \frac{x}{A} \, , \qquad t \to \frac{t}{A} \, .
\label{rescale-1}
\end{equation}
For this reason, the numerical analysis has been performed for $A=1$ without loss of generality.
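A minimal Python sketch of this numerical evaluation (for $A=\alpha_H=1$) is the following; the substitution $x=l\sin s$ removes the integrable endpoint singularity of the integrand at $x=l$:
\begin{verbatim}
# Numerical evaluation of Delta S in eq. (entropy-centrata), A = alpha_H = 1.
import numpy as np
from scipy.integrate import quad

A, alpha_H = 1.0, 1.0

def h_minus_g(r):
    # h_epsilon - g_epsilon from eq. (h-g-analitico)
    at = np.arctan(r)
    h = (np.pi**2 - 4/r**2 - 2/(r**2 + 1)
         - 4*(2*(r**2 - 1)*r*at + (r**4 + 1)*at**2)/r**4)
    g = (np.pi**2 + 1/r**2 - (2*r*at + 3)/r**2*(1 - 2/(1 + r**2))
         - 2*at*(2*(r**4 - 1)*at + r*(3*r**2 + 4))/r**4)
    return h - g

def delta_S(l, t):
    omega = A**4 + 2*A**2*(t**2 + l**2) + (t**2 - l**2)**2
    def integrand(s):                      # substitution x = l*sin(s)
        x = l*np.sin(s)
        d = (A**2 - l**2 + t**2)**2 + 4*A**2*x**2
        r = np.sqrt(d)/(2*A*l*np.cos(s))   # eq. (erre-RT-centrata)
        return h_minus_g(r)*l*np.sin(s)**3/np.cos(s)**2*omega/d
    val, _ = quad(integrand, 0, np.pi/2)
    return np.pi**2*alpha_H**2/(4*l)*val

t = 0.5
print(delta_S(np.sqrt(A**2 + t**2), t))    # about -0.658*pi^2/4 = -1.62
\end{verbatim}
The last line evaluates $\Delta S$ at $l=l_0=\sqrt{t^2+A^2}$, anticipating the exact value in eq. (\ref{Delta-S-zero}) below.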
Numerical results are shown in figure \ref{entropy-plot}.
We find that $\Delta S$ is always negative,
meaning that the perturbed entanglement entropy is always smaller than the vacuum value.
We can think of the quench as a region of
spacetime where a condensate (which breaks a global symmetry on the boundary, as in holographic
superconductors \cite{Hartnoll:2008vx}) is localised. A lower entropy
fits with the intuition that some degrees of freedom have condensed \cite{Albash:2012pd}
and so there should be fewer of them compared to the vacuum
(which has zero scalar expectation value).
\begin{figure}
\begin{center}
\includegraphics[scale=0.45]{entropy-time-dep}
\qquad
\includegraphics[scale=0.45]{entropy-l-dep}
\caption{
Left: time dependence of $\Delta S$ for a spherical subregion with fixed radius $l$ centered at the origin of the quench.
Right: dependence of $\Delta S$ at fixed $t$ as a function of $l$. The numerical values $\alpha_H=1$, $A=1$ are used.
}
\label{entropy-plot}
\end{center}
\end{figure}
Analytic results can be found in some regimes.
Near the boundary, $r \rightarrow + \infty$, we can use the expansion
\begin{equation}
h_{\epsilon } = -\frac{2}{r^2} + \dots \, , \qquad
g_{\epsilon } = \frac{2}{r^2} + \dots \, .
\label{expansion-h-and-g}
\end{equation}
Since the minimal $r$ on the RT surface is given by eq. (\ref{erre-RT-centrata})
with $x=0$, this expansion will be valid in the whole integration region in eq. (\ref{entropy-centrata}) in the regime
\begin{equation}
{| A^2 + t^2 -l^2 |}\gg 2 l A\, .
\label{regime-approssimato}
\end{equation}
Equation (\ref{entropy-centrata}) can then be evaluated explicitly
\begin{equation}
\Delta S = - \pi^2 \, \alpha_H^2 \, \left[ \left( \frac{A l}{ \sqrt{\omega(l,t)}} + \frac{ \sqrt{\omega(l,t)}}{4 A l}\right)
\tanh^{-1} \left( \frac{2 A l}{\sqrt{\omega(l,t)}} \right) -\frac12
\right] \, .
\end{equation}
We can specialise the approximation in eq. (\ref{regime-approssimato})
to the following situations:
\begin{itemize}
\item
small $l \ll A$
\begin{equation}
\Delta S = - \frac{8}{3} \pi^2 \, \alpha_H^2 \, \frac{A^2 \, l^2}{(A^2+t^2)^2} \, .
\end{equation}
\item large $t \gg A$ and $t \gg l$
\begin{equation}
\Delta S = - \frac{8}{3} \pi^2 \, \alpha_H^2 \, \frac{ l^2 \, A^2}{t^4} \, .
\end{equation}
\item $t=0$ and $l \gg A$
\begin{equation}
\Delta S= - \frac{8}{3} \pi^2 \, \alpha_H^2 \, \frac{A^2}{l^2} \, .
\end{equation}
\end{itemize}
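As a consistency check, the three limits above follow from expanding the closed-form expression for small $u = 2Al/\sqrt{\omega}$. A minimal sympy sketch of this expansion (our own check, not part of the original derivation):
\begin{verbatim}
import sympy as sp

u, aH = sp.symbols('u alpha_H', positive=True)  # u = 2*A*l/sqrt(omega)

# Closed-form Delta S written in terms of u alone:
# A*l/sqrt(omega) = u/2 and sqrt(omega)/(4*A*l) = 1/(2*u)
dS = -sp.pi**2 * aH**2 * ((u/2 + 1/(2*u)) * sp.atanh(u) - sp.Rational(1, 2))

# Leading small-u behaviour: -(2/3) pi^2 alpha_H^2 u^2,
# i.e. Delta S ~ -(8/3) pi^2 alpha_H^2 (A l)^2 / omega
print(sp.series(dS, u, 0, 4))
\end{verbatim}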
It is useful to note that, for given $(l,t)$, the minimal surfaces in eq. (\ref{minimal-surface}) in the Poincar\'e patch
are mapped by eq. (\ref{A-changing-inverse}) to constant $\tau$ surfaces in global AdS.
These surfaces are attached at $r \to \infty$ to a circle
with constant $\theta=\theta_0$, where
\begin{equation}
\theta_0(l,t)=\tan^{-1} \left( \frac{2 A \, l}{l^2 -t^2-A^2} \right) \, ,
\label{theta-zero}
\end{equation}
which corresponds to a parallel on the $S^2$ boundary.
This shows that $\Delta S(l,t)$ is a function only
of the combination in eq. (\ref{theta-zero}).
The RT surfaces with
\begin{equation}
l= l_0 = \sqrt{t^2 + A^2}
\end{equation}
correspond to a parallel with $\theta_0 = \pi/2$.
These surfaces are special, because they lie on the equator of $S^2$.
Due to symmetry, we conclude that the RT surface at $l=l_0$
has either the maximal or the minimal $\Delta S$.
For the monopole case, we know that $\Delta S$ is negative
and close to zero for $t \to 0$ and large $l$.
So we expect that $l=l_0$ is a minimum of $\Delta S$,
as also confirmed by the numerics in figure \ref{entropy-plot}.
For $l=l_0$, $\Delta S$ can be computed exactly:
\begin{equation}
\Delta S_0 = \Delta S (l_0) = \frac{\pi^2 \, \alpha_H^2}{4} \int_0^{\infty}
\left(h_{\epsilon }- g_{\epsilon } \right) \frac{r}{\sqrt{1+r^2}}
\, dr = - \Upsilon \frac{\pi^2 \, \alpha_H^2}{4} \, ,
\label{Delta-S-zero}
\end{equation}
where
\begin{equation}
\Upsilon=6 \pi -12 -8 \pi \, \beta(2) +14 \, \zeta(3) \approx 0.658 \, .
\end{equation}
In this expression, $\beta(2)\approx 0.916$ is the Catalan constant
and $\zeta$ is the Riemann zeta function.
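This value can be checked numerically in a few lines; a minimal sketch with the mpmath library:
\begin{verbatim}
from mpmath import mp, pi, catalan, zeta

mp.dps = 15
Upsilon = 6*pi - 12 - 8*pi*catalan + 14*zeta(3)
print(Upsilon)  # ~0.6576
\end{verbatim}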
Summarising, the entropy of the disk with radius $l_0=\sqrt{t^2+A^2}$
remains constant as a function of time $t$ and equal to the minimum $\Delta S_0$. This can be heuristically justified as follows.
At large $t$, the bound from causality on the speed of entanglement propagation is saturated:
$\Delta S$, which originated at $t=0$ from a region nearby $x=0$, spreads at the speed of light.
At small $t$, the speed of propagation is lower, because the matter of the quench itself has zero velocity at $t \to 0$:
entanglement spreads with the matter.
\subsection{Translated disk}
For convenience, we introduce
\begin{equation}
\vec{x} = \left( x_1, x_2 \right)= \left( x \cos {\varphi}, x \sin {\varphi} \right)\, .
\end{equation}
We now consider as subregion a disk of radius $l$ centered at $\left( x_1, x_2 \right) = \left( \xi, 0 \right)$
and lying at constant time $t$. The corresponding RT surface in the unperturbed Poincar\'e patch of AdS$_4$ is the translated half-sphere
\begin{equation}
z = \sqrt{l^2 - \left( x_1 - \xi \right)^2 - x_2^2} \, .
\end{equation}
In appendix \ref{appe:entanglement-entropy-trans-disk} we write the explicit integral for the
first-order correction to the
holographic entanglement entropy.
It is convenient to rescale spatial and time coordinates as in eq. (\ref{rescale-1}), with $\xi \to \xi/A$ as well.
Numerical results can be obtained for arbitrary radius $l$, see figure \ref{trans_disk-plot}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.45]{disk_down}
\qquad
\includegraphics[scale=0.45]{disk_up}
\qquad
\includegraphics[scale=0.45]{disk_trans_l_vari}
\caption{
Left and right: Time dependence of $\Delta S$ for a disk-shaped subregion of radius $l=5$ centered
at $\left( x_1, x_2 \right) = \left( \xi, 0 \right)$ for different values of $\xi$. For large $\xi$, the maximum is reached for $t \approx \xi$.
Bottom: Plot of $\Delta S$ as a function of $l$ for a translated disk-shaped subregion, for various values of $t=\xi$.
Numerical values $\alpha_H=1$, $A=1$ have been fixed.
}
\label{trans_disk-plot}
\end{center}
\end{figure}
In the regime of small $l$, the RT surface stays at large $r$, so we can use the expansion
(\ref{expansion-h-and-g})
and a compact expression can be found
\begin{equation}
\Delta S(l,t,\xi)
= - \frac{8}{3} \pi^2 \, \alpha_H^2 \, \frac{A^2 \, l^2}{A^4+2 A^2 \left(\xi ^2+t^2\right)+\left(t^2-\xi ^2\right)^2} \, .
\label{Delta-S-small-disk}
\end{equation}
This shows that $\Delta S$ is always negative for disks with small radius $l$.
The entanglement entropy of small disks is then dominated by the negative contribution
due to the scalar condensation.
At large $l \gg \xi$, the subregion is, to a good approximation, a
disk centered at $\xi \approx 0$, and so, from the results of the previous
section, we expect a negative $\Delta S$.
For intermediate $l$, the quantity $\Delta S$ can become positive,
see figure \ref{trans_disk-plot}.
In this regime we can interpret the positive contribution to $\Delta S$ as due to
quasiparticles.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\draw [->] (-4,0)--(4,0) node [at end, right] {$x_1$};
\draw [->] (0,0)--(0,4) node [at end, right] {$t$};
\draw[<->] (-0.02,-0.25)--(0.48,-0.25);
\node at (0.2,-0.6) {$A$};
\draw[red, very thick] (-0.5,0)--(0.5,0);
\draw[blue, very thick] (1.5,2.5)--(3.5,2.5);
\draw[<->] (2.52,2.75)--(3.48,2.75);
\node at (3,3.1) {$l$};
\draw[dashed] (2.5,0)--(2.5,2.5);
\node at (2.5,-0.3) {$\xi$};
\draw[->,red,thick] (0,0)--(2.5,2.5);
\draw[->,red,thick] (0,0)--(-2.5,2.5);
\draw[->,red,thick] (0.45,0)--(2.95,2.5);
\draw[->,red,thick] (0.45,0)--(-1.95,2.5);
\draw[->,red,thick] (-0.45,0)--(-2.95,2.5);
\draw[->,red,thick] (-0.45,0)--(2.05,2.5);
\end{tikzpicture}
\end{center}
\caption{In the quasiparticle model, the quench creates EPR pairs of entangled quasiparticles which
subsequently propagate without interactions. When just one of the quasiparticles belonging to an EPR pair
is inside the blue region, the entanglement entropy of the region increases.}
\label{quasi-particles-figure}
\end{figure}
Free quasiparticles provide a simple model of
entanglement propagation \cite{Calabrese:2005in}.
In this picture, the quench is assumed to create many copies of
Einstein-Podolsky-Rosen (EPR)
pairs, which then propagate without interactions,
see figure \ref{quasi-particles-figure}.
When just one of the entangled particles in an EPR pair
is inside a given region, there is a positive contribution to the entanglement entropy.
This model can reproduce several aspects of the spread of entanglement
in global and local quenches. Models with interacting quasiparticles
have also been studied \cite{Casini:2015zua}.
In all these models, the contribution of the excitations
to the entanglement entropy is always positive.
In the monopole quench, there is also
a negative contribution to the entanglement entropy due to the scalar
condensate. In general, we expect that there is a competition
between the quasiparticle and the condensate contribution,
which is responsible for the change of sign of the entanglement
entropy of the translated disk region.
\subsection{Half-plane region}
We take as a boundary subregion the half-plane $x_1 \geq 0$ at constant time $t$.
The unperturbed RT surface is the bulk surface at $x_1=0$ and constant time $t$. A convenient choice of coordinates on the surface is
\begin{equation}
y^{\alpha} = \left( z, x_2 \right) \, .
\end{equation}
Details of the calculations are in appendix \ref{appe:entanglement-entropy-half-plane}.
From the closed-form expression, we deduce that the entropy variation $\Delta S$ is a function of $t/A$.
Numerical results are shown in figure \ref{halfplane-plot}.
For $t=0$, the entropy is given by $\Delta S_0$ in eq. (\ref{Delta-S-zero}).
This is because, due to the change of variables in eq. (\ref{A-changing-inverse}),
the $t=0$ plane with $x_1=0$ (which corresponds to ${\varphi}=\pm \pi/2$)
is mapped in global AdS to a disk with $\tau=0$ and constant ${\varphi}=\pm \pi/2$.
Then, an explicit computation easily leads to the same entropy as in (\ref{Delta-S-zero}).
At large $t$, from the analysis in appendix \ref{appe:entanglement-entropy-half-plane}, we find
that $\Delta S$ scales in a linear way with time, i.e.
\begin{equation}
\Delta S = K \, \alpha_H^2 \, \frac{ t}{A} \, , \qquad K \approx 0.636 \, .
\label{large-time-half-plane}
\end{equation}
An exact expression for $K$ is given in eq. (\ref{K-expression}).
This is consistent
with numerical results, shown in figure \ref{halfplane-plot}.
We emphasise that this result is valid only in the regime
where we can trust our perturbative calculation in the parameter $\epsilon$.
At very large $t$, we expect that large backreaction effects spoil
the result in eq. (\ref{large-time-half-plane}).
A linear behaviour at large $t$ is also realised for the perturbative entropy
in the case of a falling black hole in AdS$_4$ \cite{Jahn:2017xsg}.
The numerical plot in figure \ref{halfplane-plot} is consistent
with both the $t=0$ and large $t$ analytical calculations.
At $t=0$, $\Delta S$ is negative, in agreement with the expectation
that the condensate decreases the entanglement entropy.
Immediately after $t=0$, the quantity $\Delta S$ enters a linear growth regime
and becomes positive around $t \approx 2 A$.
This linear behaviour is similar to that of the black hole quench,
and we expect that it is due to the contribution of quasiparticles.
\begin{figure}
\begin{center}
\includegraphics[scale=0.45]{halfplane_2}
\caption{
Time dependence of $\Delta S$ for the half-plane subregion.
The numerical values $\alpha_H=1$, $A=1$ have been chosen.
}
\label{halfplane-plot}
\end{center}
\end{figure}
\subsection{Comparison with the BH quench}
In the case of a falling black hole in AdS$_4$, the perturbative entropy
for a centered disk is
\begin{equation}
\Delta S^{(BH)}=\frac{\pi M L^2}{4 G \, A l}\left(\frac{l^{4}-2 l^{2} t^{2}
+\left(A^{2}+t^{2}\right)^{2}}{\sqrt{l^{4}+2 l^{2}\left(A^{2}
-t^{2}\right)+\left(A^{2}+t^{2}\right)^{2}}}-\left|t^{2}+A^{2}-l^{2}\right|\right) \, ,
\end{equation}
where $M$ is the mass parameter, as defined in eq. (\ref{metric-BH}).
In this case, $\Delta S^{(BH)}$ is always positive
and has a maximum for $l=l_0=\sqrt{t^2+A^2}$.
The First Law of Entanglement Entropy (FLEE) \cite{Bhattacharya:2012mi}
is valid in the regime of small $l$:
\begin{equation}
\Delta S= \frac{\Delta E}{ T_E} \, , \qquad T_E=\frac{2}{ \pi l} \, ,
\label{FLEE}
\end{equation}
where $T_E$ is the entanglement temperature and $\Delta E$ is the energy.
FLEE is generically invalidated in the case
of backgrounds with scalars, because the Fefferman-Graham expansion
of the metric does not start at order $z^d$ \cite{Blanco:2013joa,OBannon:2016exv}, where $d$
is the dimension of the spacetime boundary.
Indeed, as can be checked from eq. (\ref{Delta-S-small-disk}), FLEE
is not satisfied for the monopole quench background.
The behaviour of $\Delta S$ for small $l$ in eq. (\ref{Delta-S-small-disk}) is rather different from the FLEE regime. In particular:
\begin{itemize}
\item $\Delta S$ is negative;
\item the quantity $\Delta S$ scales as $l^2$, and not as $l^3$ as predicted by FLEE;
\item there is no choice of boundary conditions for which
the energy $\Delta E$ is proportional to $\Delta S$.
\end{itemize}
The FLEE can be derived from
the notion of relative entropy \cite{Blanco:2013joa,Rangamani:2016dms},
which is a quantity that measures how far a density matrix $\rho$ is from a reference density matrix $\sigma$:
\begin{equation}
S(\rho || \sigma) = {\rm Tr} (\rho \log \rho) -{\rm Tr} (\rho \log \sigma) \, .
\end{equation}
As a general property, $S(\rho || \sigma)$ is non-negative and it
vanishes if and only if $\rho= \sigma$.
The relative entropy can be written as
\begin{equation}
S(\rho || \sigma) = \Delta \langle \mathcal{K}_\sigma \rangle - \Delta S \, ,
\end{equation}
where $\mathcal{K}_\sigma$ is the modular Hamiltonian of the density matrix $\sigma$
\begin{equation}
\mathcal{K}_\sigma = - \log \sigma \, .
\end{equation}
From positivity of relative entropy we get the relation
\begin{equation}
\Delta \langle \mathcal{K}_\sigma \rangle \geq \Delta S \, .
\label{relativeE-ineq}
\end{equation}
The modular Hamiltonian operator $\mathcal{K}_\sigma$ for a spherical domain with radius $l$ in the vacuum state of a $d$-dimensional CFT
can be expressed \cite{Casini:2011kv} in terms of the energy momentum operator as follows
\begin{equation}
\mathcal{K}_\sigma= 2 \pi \int_{\rm sphere } d^{d-1} \vec{x} \, \,\, \frac{l^2 -x^2}{2 l} T_{tt}(\vec{x}) \, ,
\end{equation}
where $x=|\vec{x}|$.
In the limit of small spherical subregion and in $d=3$, we find
\begin{equation}
\Delta \langle \mathcal{K}_\sigma \rangle = \frac{\pi l}{2}\Delta E \, ,
\end{equation}
so the FLEE in eq. (\ref{FLEE}) follows from
the saturation of the inequality (\ref{relativeE-ineq}), i.e.
\begin{equation}
\Delta \langle \mathcal{K}_\sigma \rangle = \Delta S \, .
\end{equation}
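For a constant energy density in $d=3$, the small-sphere relation $\Delta \langle \mathcal{K}_\sigma \rangle = \frac{\pi l}{2}\Delta E$ can be verified directly; a minimal sympy sketch (our own check):
\begin{verbatim}
import sympy as sp

x, l, eps = sp.symbols('x l epsilon', positive=True)

# d=3: disk of radius l, constant energy density T_tt = eps,
# with the area element d^2x = 2*pi*x dx for the radial integral.
K = 2*sp.pi*sp.integrate((l**2 - x**2)/(2*l)*eps*2*sp.pi*x, (x, 0, l))
E = eps*sp.pi*l**2           # total energy inside the disk
print(sp.simplify(K/E))      # -> pi*l/2
\end{verbatim}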
If we consider two nearby density matrices, i.e.
\begin{equation}
\sigma=\rho_0 \, , \qquad \rho=\rho_0 + \varepsilon \rho_1+ \mathcal{O}(\varepsilon^2) \, ,
\end{equation}
where $\varepsilon$ is an expansion parameter,
the relative entropy scales with $\varepsilon$ as follows
\begin{equation}
S(\rho || \sigma) = \mathcal{O}(\varepsilon^2) \, .
\end{equation}
Since the $ \mathcal{O}(\varepsilon)$ contribution to relative entropy vanishes,
there is a general expectation \cite{Blanco:2013joa} that
for small deformations eq. (\ref{relativeE-ineq}) is saturated.
However, the question of whether the FLEE is satisfied in quantum field theory is subtle:
there the density matrix $\rho$ is infinite-dimensional,
and so it is not clear, in principle, when a perturbation may be considered small.
\section{Conclusions}
\label{sect:conclusions}
In this paper, we studied a magnetic monopole solution in the
static global AdS$_4$ setup introduced in \cite{Esposito:2017qpj}.
The boundary conditions of the monopole are
specified by the parameter $\alpha_H$, which in the multitrace quantisation
is proportional to the VEV of the scalar operator in the boundary CFT.
We found an approximate analytic solution for the monopole
in the regime of small $\alpha_H$ which includes the leading-order backreaction on the metric.
By using the map introduced in \cite{Horowitz:1999gf},
the static monopole in global AdS$_4$ is mapped into a falling monopole in the Poincar\'e patch.
This bulk configuration is dual to a local quench on the CFT side.
The expectation values of local operators depend on the choice of the boundary conditions.
With Dirichlet or Neumann conditions, the falling monopole is dual to a field theory
with a time-dependent source. With the special choice of multitrace deformation
in eq. (\ref{multitrace-speciale}), the monopole is dual to a field theory with zero sources.
In this case, there is no energy injection and the form of the energy-momentum
tensor is the same as the one of a falling black hole \cite{Nozaki:2013wia}.
The behaviour of entanglement entropy is instead rather
different compared to the case of the falling black hole.
For spherical regions centered on the local quench,
the perturbed entanglement entropy is always less than the vacuum value, i.e. $\Delta S \leq 0$.
This is consistent with the presence of a condensate
at the core of the local quench \cite{Albash:2012pd}.
In the case of a spherical region not centered at the origin,
there is a competition in $\Delta S$ between
the negative
contribution from the condensate and the
positive one due to quasiparticles \cite{Calabrese:2005in}.
Depending on the radius $l$ and on the distance $\xi$ from the origin of the spherical region,
$\Delta S$ can be positive or negative, see figure \ref{trans_disk-plot}.
In the case of the half-plane region,
the negative contribution to $\Delta S$ due to the condensate wins
at early times, while the positive contribution due to quasiparticles
dominates at late times (see figure \ref{halfplane-plot}).
For a quench dual to a falling black hole,
the First Law of Entanglement Entropy (FLEE) \cite{Bhattacharya:2012mi}
is satisfied for small subregions.
For the quench dual to the monopole, we find that the FLEE is not satisfied.
This is a feature shared with other AdS backgrounds which involve the backreaction
of scalar bulk fields, see \cite{Blanco:2013joa,OBannon:2016exv}.
On the field theory side, the violation of the FLEE
comes from the non-saturation of the inequality in eq. (\ref{relativeE-ineq}).
It is still an open question whether a given deformation
obeys the FLEE \cite{OBannon:2016exv}
in a quantum field theory. It would be interesting to further
investigate the FLEE in non-equilibrium systems, in order to
understand its general validity conditions.
Analytical soliton solutions which include backreaction
are quite rare in AdS spacetime.
The monopole solution found in this paper
can be the starting point for several further investigations.
In particular, it would be interesting to study
more general solitonic objects in AdS.
For instance, vortex strings in AdS were considered by many authors
\cite{Albash:2009iq,Montull:2009fe,Keranen:2009re,Domenech:2010nf,Iqbal:2011bf,Dias:2013bwa,Maeda:2009vf,Tallarita:2019czh,Tallarita:2019amp}.
A static configuration in Poincar\'e patch with a monopole attached to a vortex
string should also be possible, as proposed in \cite{Dias:2013bwa}:
the vortex string pulls the monopole and can counterbalance
the gravitational force that makes it fall towards the center of AdS.
It would be interesting to find explicit solutions for these objects and to investigate
their field theory duals.
Another possible direction is the study of holographic complexity \cite{Susskind:2014rva,Stanford:2014jda,Brown:2015bva}.
Quantum computational complexity is a recent quantum information entry in the holographic dictionary,
motivated by the desire to understand the growth of the Einstein-Rosen bridge
inside the event horizon of black holes.
The complexity of several examples of global and local quenches
has been studied by many authors, e.g. \cite{Moosa:2017yvt,Chapman:2018dem,Chapman:2018lsv,Chen:2018mcc,Auzzi:2019mah,DiGiulio:2021oal,Ageev:2018nye,Ageev:2019fxn,DiGiulio:2021noo}.
It would be interesting to investigate complexity for quenches dual to a falling monopole.
This analysis may give us useful insights to understand the impact of condensates
on quantum complexity.
\section*{Acknowledgments}
We are grateful to Stefano Bolognesi for useful discussions.
N.Z. acknowledges the Ermenegildo Zegna Group for financial support.
\section*{Appendix}
\addtocontents{toc}{\protect\setcounter{tocdepth}{1}}
\section{Introduction}
\label{sec:intro} Narrow-Line Seyfert 1 (NLS1) galaxies are a relatively peculiar subclass of active galactic
nuclei (AGNs), characterized by optical spectra with narrow permitted lines (FWHM(H$\beta$) $<$
2000 km s$^{-1}$), a ratio [O III]$\lambda$5007$/$H$\beta$ $<$ 3, and a Fe II bump (e.g., Pogge 2000).
NLS1s are also interesting for the low masses of their central black holes and their high accretion rates. NLS1s are
generally radio quiet; only a small percentage of them are radio-loud ($<7\%$; Komossa et al. 2006). So far, four
NLS1s have been confirmed as GeV emission sources by Fermi/LAT; these should host relativistic jets, as predicted by
Yuan et al. (2008).
PMN J0948+0022 (z = 0.5846) is the first NLS1 detected in the GeV band by Fermi/LAT. Its inverted radio
spectrum indicates the presence of a relativistic jet viewed at small angles (Zhou et al. 2003), similar to
the properties of blazars. Significant variability of PMN J0948+0022 has been detected, especially in the GeV band
(Abdo et al. 2009; Foschini et al. 2012). It is found that the observed luminosity variations are usually
accompanied by a shift of the peak frequencies of the SEDs, similar to some GeV-TeV BL Lacs (Zhang et al. 2012).
The abundant broadband observational data provide an opportunity to study the physical mechanism of the spectral
variations of PMN J0948+0022.
\begin{figure}
\centering
\includegraphics[angle=0,scale=0.35]{f1.eps}
\caption{{\em Panel a}---Observed SEDs ({\em scattered data points}) with model fitting ({\em lines}) for PMN
J0948+0022. The four SEDs are marked as states ``a" ({\em blue points and thick solid line}), ``b" ({\em magenta
points and dashed line}), ``c" ({\em red points and thin solid line}), and ``d" ({\em green points and dotted
line}), respectively. {\em Panels b, c}---$L_{\rm c}$ as a function of $\nu_{\rm c}$ in both the observer ({\em
Panel b}) and co-moving ({\em Panel c}) frames.}
\end{figure}
\section{SED Selection and Modelling }
\label{sec:using} We compile the observed broadband SEDs of PMN J0948+0022 from the literature. The four available SEDs,
as shown in Figure 1(a), are labeled as SEDs ``a", ``b", ``c", and ``d" according to the peak luminosity of the IC
bump. The data of SEDs ``a" and ``b" are from Foschini et al. (2012) and were obtained
from observations on 2010 July 8 and 2011 October 9--12, respectively. The SEDs ``c" and ``d" of this
source are taken from observations on 2009 June 14 and 2009 May 5 (Abdo et al. 2009).
The broadband SEDs of PMN J0948+0022 are similar to those of typical FSRQs, and thus the $\gamma$-ray emission should
be dominated by jet emission. We use the syn+SSC+EC model to fit its SEDs because the contributions of the
external field photons from the broad line region (BLR) need to be considered. The total luminosity of the BLR
is calculated using the luminosity of its emission lines (Zhou et al. 2003) with equation (1) given in Celotti
et al. (1997). The size of the BLR is calculated using the BLR luminosity with equation (23) in Liu \& Bai
(2006). The energy density of the BLR measured in the co-moving frame is $U^{'}_{\rm
BLR}=6.76\times10^{-3}\Gamma^{2}$ erg cm$^{-3}$, where we take $\Gamma\sim\delta$. The minimum variability
timescale is taken as $\Delta t=12$ hr. The details of the model and the strategy for constraining its
parameters can be found in Zhang et al. (2012).
\begin{figure}
\centering
\includegraphics[angle=0,scale=0.35]{f3.eps} \caption{$L_{\rm c}$ as functions of $B$, $\delta$, and $\gamma_{\rm b}$ in both the observer ({\em
three top panels}) and co-moving ({\em three bottom panels}) frames.}
\end{figure}
The SEDs of PMN J0948+0022 are well explained with the syn+SSC+EC model, as shown in Figure 1(a). The EC component of the SEDs for PMN J0948+0022 presents a
further constraint on $\delta$, and thus makes a tighter constraint on $\delta$ and $B$ than that for BL Lacs in
Zhang et al. (2012). The fitting parameters of the SEDs for PMN J0948+0022 are also tightly clustered: the
magnetic field strength $B$ ranges from $2.24\pm0.11$ G to $3.34\pm0.50$ G, the Doppler factor from $12.4\pm0.8$
to $23.0\pm0.8$, and the break Lorentz factor of electrons from $91\pm3$ to $107\pm9$.
\section{Spectral Variation of IC Bump}
\label{sec:frontmatter}
The broadband SEDs of PMN J0948+0022 are dominated by the EC process, and there is significant variability in the
GeV band. The peak luminosity ($L_{\rm c}$) as a function of the peak frequency ($\nu_{\rm c}$) of the IC bump in
both the observer and co-moving frames is shown in Figures 1(b) and 1(c). A tentative correlation between the peak
luminosity and peak frequency is found in both frames, i.e., $L_{\rm c} \propto
\nu_{\rm c}^{(2.12\pm0.83)}$ with $r = 0.87$ (Pearson correlation coefficient) and $p=0.13$ (chance
probability), and $L^{'}_{\rm c}\propto \nu^{'(1.05\pm0.21)}_{\rm c}$ with $r = 0.96$ and $p=0.04$,
respectively, indicating that the luminosity variations of the IC bump are accompanied by a spectral shift.
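The quoted correlations amount to linear fits in log-log space; a minimal scipy sketch, with hypothetical peak values standing in for the fitted ones:
\begin{verbatim}
import numpy as np
from scipy import stats

# Hypothetical (log10 nu_c [Hz], log10 L_c [erg/s]) peak values for the
# four SEDs; the actual values come from the syn+SSC+EC fits.
log_nu = np.array([21.8, 22.0, 22.1, 22.3])
log_L = np.array([46.0, 46.5, 46.4, 47.0])

slope, intercept, r, p, stderr = stats.linregress(log_nu, log_L)
print(f"L_c ~ nu_c^({slope:.2f} +/- {stderr:.2f}), r = {r:.2f}, p = {p:.2f}")
\end{verbatim}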
To investigate the possible physical reason of this phenomenon, we show the IC peak luminosity as functions of
$B$, $\delta$, and $\gamma_{\rm b}$ in both the observer and co-moving frames in Figure 2. It can be found that:
(1) Both $L_{\rm c}$ and $L^{'}_{\rm c}$ are anti-correlated with $B$. The Pearson correlation analysis and the
best linear fits yield $L_{\rm c} \propto B^{(-1.09\pm0.28)}$ with $r=-0.94$, $p=0.06$ and $L^{'}_{\rm c}\propto
B^{(-0.37\pm0.09)}$ with $r=-0.94$, $p=0.06$. (2) $L_{\rm c}$ seems to be correlated with $\delta$, with $r=0.93$
and $p=0.07$. (3) No correlation of $\gamma_{\rm b}$ with either $L_{\rm c}$ or $L^{'}_{\rm c}$ is found, which
may be due to the uncertainties of the synchrotron radiation peak of the SEDs. These facts indicate that the
spectral variations of the IC peak for PMN J0948+0022 may be attributed to the variations of $\delta$ and $B$, similar to the results of a typical FSRQ 3C 279 (Zhang et al. 2013).
The significant variations of the IC peak for PMN J0948+0022 in GeV band are dominated by
the EC process. The energy density of the external photon field would be magnified by $\Gamma^2$ and the energy
of the seed photons would be magnified by $\Gamma$ due to the motion of the emitting regions, hence a small
variation of $\delta$ would result in significant variations of $\nu_{\rm c}$ and $L_{\rm c}$. As mentioned
above, $B$ is also anti-correlated with $L_{\rm c}$ in both the observer and co-moving frames. This indicates that the
variations of $B$ between different states, which are accompanied by variations of
$\delta$, might be linked to variations of some intrinsic physical parameters of the central black hole, such
as the disk accretion rate or the corona (Zhang et al. 2013). Instabilities of the corona or the disk accretion rate may
result in variations of the jet physical conditions and hence of the jet emission.
\section{Conclusion}
The SEDs observed at four epochs for PMN J0948+0022, which can be explained well with the syn+SSC+EC model, are
compiled from the literature to investigate its spectral variation. A tentative correlation between the peak
luminosity and peak frequency of its IC bumps is found, indicating that a higher GeV luminosity corresponds to a
harder spectrum for the emission in the GeV band, similar to the properties of some blazars. The SEDs of PMN
J0948+0022 are dominated by the EC bumps and thus the magnification of the external photon field by the bulk
motion of the radiation regions is an essential reason for the spectral variation. The variations of $B$ and
$\delta$ for PMN J0948+0022 between different states may be produced by the instabilities of the corona or the
disk accretion rate.
\section{Acknowledgments}
This work is supported by the National Basic Research Program (973 Programme) of China (Grant 2009CB824800), the
National Natural Science Foundation of China (Grants 11078008, 11025313, 11133002, 10725313), Guangxi Science Foundation (2011GXNSFB018063, 2010GXNSFC013011), and Key Laboratory for the Structure and Evolution of Celestial Objects of Chinese Academy of Sciences.
\subsection{Room specific prompt creation}
\label{ap: prompt}
When building the room type codebook, we create prompts by filling the following template.
\begin{align*}
\text{A [room type] with [obj 1], ... and [obj n]}.
\end{align*}
where [room type] is a room label annotated in the Matterport3D dataset~\cite{Matterport3D} with manual spelling completion, such as mapping ``l" to ``living room" and ``u" to ``utility room". [obj 1] ... [obj n] are high-frequency object words that co-occur with specific room labels in the training instructions of the REVERIE dataset. A co-occurrence frequency above 0.8 is considered high; this threshold ensures diversity and limits the number of candidates. For instance, we create the prompt ``A dining room with table and chairs" for the room type ``dining room" and ``A bathroom with towel and mirror" for the room type ``bathroom".
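A minimal sketch of the template filling (the object lists shown are illustrative):
\begin{verbatim}
# Hypothetical mapping from room label to its high-frequency
# co-occurring objects (frequency > 0.8 in REVERIE training data).
room_objects = {
    "dining room": ["table", "chairs"],
    "bathroom": ["towel", "mirror"],
}

def make_prompt(room_type, objs):
    if len(objs) > 1:
        obj_str = ", ".join(objs[:-1]) + " and " + objs[-1]
    else:
        obj_str = objs[0]
    return "A {} with {}.".format(room_type, obj_str)

for room, objs in room_objects.items():
    print(make_prompt(room, objs))
# -> A dining room with table and chairs.
# -> A bathroom with towel and mirror.
\end{verbatim}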
\end{multicols}
\end{appendices}
\section{Methodology}
\input{sec3-0-method}
\input{sec3-1-baseline}
\input{sec3-3-method}
\input{sec3-4-method}
\input{sec3-5-method}
\input{sec3-6-method}
\input{sec4-exps}
\input{sec5-results}
\section{Conclusion}
In this work, to enhance the environmental understanding of an autonomous agent in the Embodied Referring Expression Grounding task, we have proposed a Layout Learner and Goal Dreamer. These two modules effectively introduce visual common sense, in our case via an image generation model, into the decision process. Extensive experiments and case studies show the effectiveness of our designed modules. We hope our work inspires further studies that include visual commonsense resources in autonomous agent design.
\section{Acknowledgements}
This research received funding from the Flanders AI Impuls Programme - FLAIR and from the European Research Council Advanced Grant 788506.
\section{Introduction}
In recent years, embodied AI has matured. In particular, many works~\cite{vlnduet,hao2020towards,majumdar2020improving,reinforced,song2022one,vlnce_topo,georgakis2022cm2} have shown promising results in Vision-and-Language Navigation (VLN)~\cite{room2room,vlnce}. In VLN, an agent is required to reach a destination following a fine-grained natural language instruction that provides detailed step-by-step information along the path, for example ``Walk forward and take a right turn. Enter the bedroom and stop at the bedside table". However, in real-world applications and human-machine interactions, it is tedious for people to give such detailed step-by-step instructions. Instead, a high-level instruction only describing the destination, such as ``Go to the bedroom and clean the picture on the wall.", is more usual.
In this paper, we target
such high-level instruction-guided tasks. Specifically, we focus on the Embodied Referring Expression Grounding task~\cite{reverie,zhu2021soon}. In this task, an agent receives a high-level instruction referring to a remote object, and it needs to explore the environment and localize the target object. When given a high-level instruction,
we humans tend to imagine what the scene of the destination looks like. Moreover, we can efficiently navigate to the target room, even in previously unseen environments, by exploiting commonsense knowledge about the layout of the environment.
However, for an autonomous agent, generalization to unseen environments still remains challenging.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{imgs/tessar.jpeg}
\caption{The agent is required to navigate and find the mentioned object in the environment. Based on acquired commonsense knowledge, the agent correctly identifies the current and surrounding room types. Based on the imagination of the destination, it correctly chooses the unexplored right yellow dot as the next step to take.}
\label{fig: tessar}
\end{figure}
Inspired by how humans make decisions when receiving high-level instructions in unseen environments,
as shown in Fig.~\ref{fig: tessar}, we design an agent that can identify the room type of current and neighboring navigable areas based on a room type codebook and previous states. On top of that, it learns to combine this information with goal imagination to jointly infer the most probable moving direction.
Thus, we propose two modules named \textbf{Layout Learner} and \textbf{Goal Dreamer} to achieve this goal.
In our model, the agent stores trajectory information by building a real-time topological map, where nodes represent either visited or unexplored but partially visible areas. The constructed topological map can be seen as a long-term scene memory. At each time step, the node representation is updated by moving the agent to the current node and receiving new observations.
The \textbf{Layout Learner} module learns to infer the layout distribution of the environment with a room-type codebook constructed using a large-scale pre-trained text-to-image model, GLIDE~\cite{glide} in our case. The codebook design helps the model to leverage high-level visual commonsense knowledge of room types and boosts performance in layout learning. This prediction is updated at each time step, allowing the agent to correct its prediction when more areas are explored. In the \textbf{Goal Dreamer} module, we encourage the agent to imagine the destination beforehand by generating a set of images with the text-to-image model GLIDE. This imagination prior aids accurate action prediction. Cross-attention between topological node representations and imagination features is computed, and its output helps the agent make a decision at each time step.
In summary, the contributions of our paper are threefold:
\begin{itemize}
\item {We propose a \textbf{Layout Learner} which leverages the visual commonsense knowledge from a room-type codebook generated using the GLIDE model. It not only helps the agent to implicitly learn the environment layout distribution but also to better generalize to unseen environments.}
\item {The novel \textbf{Goal Dreamer} module equips the agent with the ability to make action decisions based on the imagination of the unseen destination.
This module
further boosts the action prediction accuracy.}
\item {Analyzing different codebook room types shows that visual descriptors of the room concept generalize better than textual descriptions and classification heads. This indicates that, at least in this embodied AI task, visual features are more informative.}
\end{itemize}
\section{Related work}
\subsubsection{Embodied Referring Expression Grounding.} In the Embodied Referring Expression Grounding task~\cite{reverie,zhu2021soon},
many prior works focus on adapting multimodal pre-trained networks to the reinforcement learning pipeline of navigation~\cite{reverie,hop} or introducing pretraining strategies for good generalization ~\cite{Qiao2022HOP,Hong_2021_CVPR}. Some recent breakthroughs come from including on-the-fly construction of a topological map and a trajectory memory as done in VLN-DUET~\cite{vlnduet}. Previous models only consider the observed history when predicting the next step. Different from them, we design a novel model to imagine future destinations while constructing the topological map.
\subsubsection{Map-based Navigation.} In general language-guided navigation tasks, online map construction gains increasing attention (e.g., ~\cite{chaplot2020object,chaplot2020learning,irshad2022sasra}). A metric map contains full semantic details in the observed space and precise information about navigable areas.
Recent works
focus on improving subgoal identification~\cite{min2021film,blukis2022persistent,song2022one} and path-language alignment~\cite{wang2022find}. However, online metric map construction is inefficient during large-scale training, and its quality suffers from sensor noise in real-world applications. Other studies focus on topological maps~\cite{nrns,vlnce_topo,vlnduet}, which provide a sparser map representation and good backtracking properties.
We use topological maps as the agent's memory.
Our agent learns layout-aware topological node embeddings that are driven by the prediction of room type as the auxiliary task, pushing it to include commonsense knowledge of typical layouts in the representation.
\subsubsection{Visual Common Sense Knowledge.} Generally speaking, visual common sense refers to knowledge that frequently appears in a day-to-day visual world. It can take the form of a hand-crafted knowledge graph such as Wikidata~\cite{vrandevcic2014wikidata} and ConceptNet~\cite{liu2004conceptnet}, or it can be extracted from a language model~\cite{acl2022-holm}. However, the knowledge captured in these resources is usually abstract and hard to align with objects mentioned in free-form language.
Moreover, if, for instance, you would like to know what a living room looks like, then several images of different living rooms will form a more vivid description than its word definition.
Existing work~\cite{xiaomi@concept}
tries to boost the agent's performance using an external knowledge graph in the Grounding Remote Referring Expressions task.
Inspired by the recent use of prompts
~\cite{petroni2019lama,brown2020gpt3} to extract knowledge from large-scale pre-trained language models (PLM)~\cite{devlin2019bert,brown2020gpt3,kojima2022large}, we consider pre-trained text-to-image models~\cite{glide,ramesh2022dalle2} as our visual commonsense resources.
Fine-tuning a pre-trained vision-language model has been used in multimodal tasks~\cite{lu2019vilbert,su2019vl}.
However,
considering the explicit usage of prompted images as visual common sense for downstream tasks is novel. Pathdreamer~\cite{koh2021pathdreamer} proposes a model that predicts future observations given a sequence of previous panoramas along the path. It is applied in a VLN setting requiring detailed instructions for path sequence scoring. Our work studies the role of general visual commonsense knowledge and focuses on room-level imagination and destination imagination when dealing with high-level instructions. The experiments show that, on the one hand, including visual commonsense knowledge essentially improves task performance. On the other hand, visual common sense performs better than text labels on both environmental layout prediction and destination estimation.
\subsection{Overview}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\textwidth]{imgs/model.png}
\end{center}
\caption{The model architecture of our Layout-aware Dreamer (LAD). Our model predicts the room type of all nodes of the topological graph; for simplicity, we only show the predictions of several nodes here. The center part is the baseline model, which takes the topological graph and instruction as inputs and dynamically fuses the local and global branch action decisions to predict the next action. The dashed boxes show our proposed Layout Learner and Goal Dreamer.}
\label{fig: layout}
\end{figure*}
\textbf{Task Setup.}
In Embodied Referring Expression Grounding task~\cite{reverie,zhu2021soon}, an agent is spawned at an initial position in an indoor environment. The environment is represented as an undirected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ stands for navigable nodes and $\mathcal{E}$ denotes connectivity edges. The agent is required to correctly localize a remote target object described by a high-level language instruction. Specifically, at the start position of each episode, the agent receives a concise high-level natural language instruction $\mathcal{X}=<w_1, w_2,..., w_L>$, where $L$ is the length of the instruction, and $w_i$ represents the $i$th word token.
The panoramic view $\mathcal{V}_t=\{v_{t,i}\}_{i=1}^{36}$ of the agent location at time step $t$ is represented by $36$ images which are the observations of the agent with
12 heading angles and 3 elevations. At each time step $t$,
the agent has access to the state information $S_t$ of its current location consisting of panoramic view $\mathcal{V}_t$,
and $M$ neighboring navigable nodes $A_{t} = [a_{t,1}, ..., a_{t,M}]$ with only a single view for each of these, namely the view observable from the current node. These single views of neighboring nodes form $\mathcal{N}_t=[v_{t,1}, ..., v_{t,M}]$.
Then the agent is required to take a sequence of actions $<a_0, ... a_t, ...>$ to reach the goal location and ground the object specified by the instruction in the observations. The possible actions $a_{t}$ at each time step $t$ are either selecting one of the navigable nodes $A_t$ or stopping at the current location denoted by $a_{t,0}$.
\subsection{Base architecture}
Our architecture is built on the basis of the VLN-DUET~\cite{vlnduet} model, which is the previous state-of-the-art model on the REVERIE dataset. In the following paragraphs, we briefly describe several important components of this base architecture, including topological graph construction, the global and local cross-attention modules with minimal changes. For more details, we refer the reader to~\citet{vlnduet}.
\subsubsection{Topological Graph.} The base model gradually builds a topological graph $\mathcal{G}_t = \{\boldsymbol v,\boldsymbol e \mid \boldsymbol v \subseteq \mathcal{V}, \boldsymbol e \subseteq \mathcal{E} \}$ to represent the environment at time step $t$. The graph contains three types of nodes: (1) visited nodes; (2) navigable nodes; and (3) the current node. In Fig.~\ref{fig: layout}, they are denoted by a blue circle, yellow circle, and double blue circle, respectively. Both visited nodes and the current node have been explored, and the agent has access to their panoramic views. The navigable nodes are unexplored and can only be partially observed from other visited nodes. When navigating to a node $a_{t,0}$ at time step $t$, the agent extracts panoramic image features $\mathcal{R}_t$ from its panoramic view $\mathcal{V}_t$ and object features $\mathcal{O}_t$ from the provided object bounding boxes. The model then uses a multi-layer transformer with self-attention to model the relations between the image features $\mathcal{R}_t$ and the object features $\mathcal{O}_t$. The fused features $\hat{\mathcal{R}}_t$ and $\hat{\mathcal{O}}_t$ are treated as the local visual features of node $a_{t,0}$. While exploring the environment, the agent updates the node visual representations as follows: (1) for the current node, the representation is updated by concatenating and average pooling the local features $\hat{\mathcal{R}}_t$ and $\hat{\mathcal{O}}_t$; (2) as unvisited nodes can be partially observed from different directions of multiple visited nodes, the average of all image features of the partial views is taken as their representation; (3) the features of visited nodes remain unchanged. The final representation of a node is the sum of its location embedding, step embedding and visual embedding. The location embedding of a node is formed by the concatenation of the Euclidean distance, heading and elevation angles relative to the current node. The step embedding embeds the last visited time step of each visited node, and time step zero is set for unvisited nodes.
\noindent\textbf{Language Encoder.} We use a multi-layer transformer encoder~\cite{transformer} to encode the natural language instruction $\mathcal{X}$. Following the convention, we feed the sum of the token embedding, position embedding and token type embedding into the transformer encoder, and the output denoted as $\mathcal{T}$ is taken as language features.
\noindent\textbf{Global Node Self-Attention.} Different from the VLN-DUET model, to enable each node to perceive global environment information without being influenced by the language information, we conduct one more graph-aware self-attention (GASA) \cite{vlnduet} over node embeddings $\mathcal{H}_t$ of graph $\mathcal{G}_t$ before they interact with the word embeddings. For simplicity, we use the same symbol $\mathcal{H}_t$ to denote the encoded graph node embeddings.
\noindent\textbf{Cross Graph Encoder.} Following the work of~\citet{devlin2019bert}, we use a multimodal transformer~\cite{lu2019vilbert} to model both the global and local graph-language relation. We name the global and local graph-language cross-attention models (Global Cross GA and Local GA) as global branch and local branch, respectively. For the global branch, we perform cross-attention of node embeddings $\mathcal{H}_t$ over language features $\mathcal{T}$, while only the current node and its neighboring navigable nodes are used to compute the cross-attention in the local branch. We feed the outputs of the global branch to the \text{Layout Learner} for layout prediction. In addition, both the global and local branch outputs are further used to make the navigation decision, as shown in Fig.~\ref{fig: layout}.
\begin{align*}
\mathcal{\tilde{H}}_t^{(glo)} &= \text{Cross-Attn}(\mathcal{H}_t, \mathcal{T}, \mathcal{T}) \tag{1} \\
\mathcal{\tilde{H}}_t^{(loc)} &= \text{Cross-Attn}(\{\mathcal{H}_t(\mathcal{A}_t), \mathcal{H}_t(a_{t,0})\}, \mathcal{T}, \mathcal{T}) \tag{2}
\end{align*}
where $\text{Cross-Attn}(query, key, value)$
is a multi-layer transformer decoder, and $\mathcal{A}_t$ stands for neighbouring navigable nodes of the current node $a_{t,0}$. $\mathcal{H}_t(\cdot)$ represents extracting corresponding rows from $\mathcal{H}_t$ by node indices.
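A minimal PyTorch sketch of one such cross-attention layer (a single layer with illustrative dimensions; the actual model stacks several layers with feed-forward sublayers):
\begin{verbatim}
import torch
import torch.nn as nn

class CrossAttn(nn.Module):
    # One cross-attention block: queries attend to keys/values (eqs. 1-2).
    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query, key, value):
        out, _ = self.attn(query, key, value)
        return self.norm(query + out)  # residual + layer norm

# Global branch: all N graph nodes attend to the L instruction tokens.
H_t = torch.randn(1, 12, 768)  # node embeddings (batch, N, dim)
T = torch.randn(1, 20, 768)    # language features (batch, L, dim)
H_glo = CrossAttn(768)(H_t, T, T)
print(H_glo.shape)             # torch.Size([1, 12, 768])
\end{verbatim}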
\subsection{Layout Learner}
This module aims to learn both the implicit environment layout distribution and visual commonsense knowledge of the room type, which is achieved through an auxiliary layout prediction task with our room type codebook. This auxiliary task is not used directly at inference time. The main purpose of having it during training is learning representations to capture this information, which in turn improves global action prediction.
\noindent\textbf{Building Room Type Codebook.}
We fetch room type labels from the MatterPort point-wise semantic annotations which contain 30 distinct room types.
We then select the large-scale pre-trained text-to-image generation model GLIDE~\cite{glide} as a visual commonsense resource. To better fit the embodied grounding task, we create prompts $P_{room}$ based not only on the room type label but also on high-frequency objects from the referring expressions in the training set. Specifically, when building the room type codebook, we create prompts by filling in the following template.
\begin{align*}
\text{A [room type] with [obj 1], ... and [obj n]}.
\end{align*}
where [room type] is a room label annotated in the Matterport3D dataset~\cite{Matterport3D} with manual spelling completion, such as mapping ``l" to ``living room" and ``u" to ``utility room". [obj 1] ... [obj n] are high-frequency object words that co-occur with specific room labels in the training instructions of the REVERIE dataset. A co-occurrence frequency above 0.8 is considered high; this threshold ensures diversity and limits the number of candidates. For instance, we create the prompt ``A dining room with table and chairs" for the room type ``dining room" and ``A bathroom with towel and mirror" for the room type ``bathroom". For each room type, we generate a hundred images and select $S$ representative ones in the pre-trained CLIP feature space (i.e., the image closest to each clustering center after applying a K-Means clustering algorithm). An example is shown in Fig.~\ref{fig: glide room}. Our selection strategy guarantees the diversity of the generated images, i.e., rooms from various perspectives and lighting conditions. The visual features of the selected images for the different room types form the room type codebook $E_{room}\in \mathbb{R}^{K \times S \times 765}$, where $K$ is the total number of room types and $S$ represents the number of images for each room type. This codebook represents a commonsense knowledge base with visual descriptions of what a room should look like.
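A minimal sketch of the representative-image selection (data and feature dimension are illustrative; the real features come from the pre-trained CLIP vision encoder):
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

# CLIP features of the 100 GLIDE images generated for one room type
# (random placeholders here; the feature dimension is illustrative).
feats = np.random.randn(100, 768).astype(np.float32)
S = 5  # representatives kept per room type

km = KMeans(n_clusters=S, n_init=10).fit(feats)
# For each cluster, keep the image closest to its centroid.
rows = [np.argmin(np.linalg.norm(feats - c, axis=1))
        for c in km.cluster_centers_]
print(rows)  # indices of the S representative images
\end{verbatim}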
\noindent\textbf{Environment Layout Prediction.} Layout information is critical for indoor navigation, especially when the agent only receives high-level instructions describing the goal positions, such as ``Go to the kitchen and pick up the mug beside the sink". This module equips the agent with both the capability of learning room-to-room correlations in the environment layout and a generalized room type representation. With the help of the visual commonsense knowledge of the rooms in the room type codebook, we perform layout prediction.
We compute the similarity score between node representations $\mathcal{\tilde{H}}^{(glo)}_t$ and image features $E_{room}$ in the room type codebook and further use this score to predict the room type of each node in the graph $\mathcal{G}_t$. The predicted room type logits are supervised with ground truth labels $\mathcal{C}_t$.
\begin{align*}
\hat{\mathcal{C}_{t}^{i}} &= \sum_{j=1}^{S}{\mathcal{\tilde{H}}^{(glo)}_t E_{room (i,j)}} \tag{3} \\
\mathcal{L}_t^{\text{(LP)}} &=\text{CrossEntropy}(\hat{\mathcal{C}_t},\mathcal{C}_t) \tag{4}
\end{align*}
where $S$ is the number of images in the room type codebook for each room type, and $\hat{\mathcal{C}}_t^i$ represents the predicted score of $i$th room type. We use $\hat{\mathcal{C}_t}$ to denote the predicted score distribution of a node, thus $\hat{\mathcal{C}_t} = [\hat{\mathcal{C}_t^0},\cdots,\hat{\mathcal{C}_t^{K}}]$.
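A minimal sketch of eqs. (3)-(4) with illustrative shapes (the similarity here is a plain dot product, matching eq. (3)):
\begin{verbatim}
import torch
import torch.nn.functional as F

N, K, S, D = 12, 30, 5, 768       # nodes, room types, images/room, dim
H_glo = torch.randn(N, D)         # global-branch node embeddings
E_room = torch.randn(K, S, D)     # room type codebook features

# Eq. (3): sum node/image similarity scores over the S images per type.
logits = torch.einsum('nd,ksd->nks', H_glo, E_room).sum(dim=-1)  # (N, K)

# Eq. (4): cross-entropy against ground-truth room labels per node.
C_gt = torch.randint(0, K, (N,))
loss_lp = F.cross_entropy(logits, C_gt)
print(loss_lp.item())
\end{verbatim}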
\subsection{Goal Dreamer}
A navigation agent without a global map can be short-sighted. We design a long-horizon value function to guide the agent toward the imagined destination. For each instruction, we prompt five images from GLIDE~\cite{glide} as the imagination of the destination. Three examples are shown in Fig.~\ref{fig: glide des}. Imagination features $E^{(im)}$ are extracted from the pre-trained CLIP vision encoder~\cite{clip}. Then at each time step $t$, we attend the topological global node embeddings $\mathcal{\tilde{H}}_t^{(glo)}$ to $E^{(im)}$ through a cross-attention layer~\cite{transformer}.
\begin{align*}
\mathcal{\hat{H}}_t^{(glo)} = \text{Cross-Attn}(\mathcal{\tilde{H}}_t^{(glo)}, E^{(im)}, E^{(im)}) \tag{5}
\end{align*}
The hidden state $\mathcal{\hat{H}}_t^{(glo)}$ learned by the Goal Dreamer is projected by a linear feed-forward network (FFN)\footnote{FFNs in this paper are independent without parameter sharing.} to predict the probability distribution of the next action step over all navigable but not visited nodes.
\begin{align*}
Pr_{t}^{(im)} = \text{Softmax}(\text{FFN}(\mathcal{\hat{H}}_t^{(glo)}))\tag{6}
\end{align*}
We supervise this distribution $Pr_{t}^{(im)}$ in the warmup stage of the training (see next Section) with the ground truth next action $\mathcal{A}_{gt}$.
\begin{align*}
\mathcal{L}_t^{(D)} &= \text{CrossEntropy}(Pr_{t}^{(im)},\mathcal{A}_{gt}) \tag{7}
\end{align*}
Optimizing $Pr_{t}^{(im)}$ guides the learning of latent features
$\mathcal{\hat{H}}_t^{(glo)}$. $\mathcal{\hat{H}}_t^{(glo)}$ will be fused with global logits in the final decision process as described in the following section.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{imgs/room_glide_example.jpeg}
\caption{Prompted examples of the room codebook. }
\label{fig: glide room}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.95\linewidth]{imgs/ins2img.png}
\caption{Images of the destination
generated by GLIDE
based on the given instruction.}
\label{fig: glide des}
\end{figure}
\subsection{Decision Maker}
\noindent\textbf{Action Prediction}.
We follow the work of~\citet{vlnduet} to predict the next action to be performed in both the global and local branches and dynamically fuse their results to enable the agent to backtrack to previous unvisited nodes.
\begin{align*}
Pr_{t}^{\text{(floc)}} = \text{DynamicFuse}(\mathcal{\tilde{H}}_t^{(loc)}, \mathcal{\tilde{H}}_t^{(glo)}) \tag{8}
\end{align*}
The proposed goal dreamer module equips the agent with the capability of learning guidance
towards the target goal, hence we further fuse the goal dreamer's latent features $\mathcal{\hat{H}}_t^{(glo)}$ with global results weighted by a learnable $\lambda_t$. The $\lambda_t$ is node-specific; thus we apply a feed-forward network (FFN) to predict these weights conditioned on node representations.
\begin{align*}
\lambda_t &= \text{FFN}([\mathcal{\tilde{H}}_t^{(glo)};\mathcal{\hat{H}}_t^{(glo)}]) \tag{9}
\end{align*}
The fused action distribution is formulated as:
\begin{align*}
Pr_{t}^{\text{(fgd)}} &= (1-\lambda_t) * \text{FFN}(\mathcal{\tilde{H}}_t^{(glo)}) \\
&+ \lambda_t * \text{FFN}(\mathcal{\hat{H}}_t^{(glo)}) \tag{10}
\end{align*}
The objective for supervising the whole decision procedure with the ground-truth node $\mathcal{A}_{gt}$ of the next time step is:
\begin{align*}
Pr_t^{\text{(DSAP)}} &= Pr_{t}^{\text{(floc)}} + Pr_{t}^{\text{(fgd)}} \\
\mathcal{L}_t^{\text{(DSAP)}} &= \text{CrossEntropy}(Pr_t^{\text{(DSAP)}} ,\mathcal{A}_{gt}) \tag{11}
\end{align*}
where $Pr_t^{\text{(DSAP)}}$ is the estimated single-step action prediction (DSAP) distribution over all nodes. The next action is thus predicted not only from the global-local fusion proposed in previous work but also from our goal-dreaming branch.
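The fusion of Eqs. 9--10 can be sketched as follows; constraining $\lambda_t$ to $[0,1]$ with a sigmoid is our assumption, and all names are illustrative.
\begin{verbatim}
import torch
import torch.nn as nn

class FusedActionHead(nn.Module):
    """Sketch of the node-wise fusion of Eqs. 9-10."""

    def __init__(self, dim=512):
        super().__init__()
        self.lambda_ffn = nn.Sequential(nn.Linear(2 * dim, 1),
                                        nn.Sigmoid())
        self.glo_head = nn.Linear(dim, 1)  # logits from global branch
        self.gd_head = nn.Linear(dim, 1)   # logits from goal dreamer

    def forward(self, h_glo, h_hat):
        # h_glo, h_hat: (num_nodes, dim)
        lam = self.lambda_ffn(torch.cat([h_glo, h_hat], dim=-1))  # Eq. 9
        logits = ((1 - lam) * self.glo_head(h_glo)
                  + lam * self.gd_head(h_hat))                    # Eq. 10
        return logits.squeeze(-1)
\end{verbatim}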
\noindent\textbf{Object Grounding}.
We simply treat object grounding as a classification task and use an FFN to generate a score for each object in $\mathcal{O}_t$ at the current node. We then supervise these scores with the annotated ground-truth object $\mathcal{O}_{gt}$.
\begin{align*}
\hat{\mathcal{O}_t} &= \text{FFN}(\mathcal{O}_t) \\
\mathcal{L}_t^{\text{(OG)}} &= \text{CrossEntropy}(\hat{\mathcal{O}_t}, \mathcal{O}_{gt}) \tag{12}
\end{align*}
\subsection{Training and Inference}
\label{sec: training}
\textbf{Warmup stage.} Previous research~\cite{history,vlnduet,episodic,hao2020towards} has shown that warming up the model with auxiliary supervised or self-supervised learning tasks can significantly boost the performance of a transformer-based VLN agent. We warm up our model with five auxiliary tasks: three tasks common in vision-and-language navigation, namely masked language modeling (MLM)~\cite{devlin2019bert}, masked region classification (MRC)~\cite{lu2019vilbert}, and object grounding (OG)~\cite{hop} when object annotations exist; and two new tasks, namely layout prediction (LP) and single action prediction with the dreamer (DSAP), introduced in the \text{Layout Learner} and \text{Goal Dreamer} sections, respectively. In \text{LP}, the agent predicts the room type of each node in the topological graph at each time step, aiming to model the room-to-room transitions of the environment; its objective $\mathcal{L}_t^{\text{(LP)}}$ is given in Eq. 4. To encourage goal-oriented exploration, \text{DSAP} uses the output of the goal dreamer to predict the next action, with the objectives $\mathcal{L}_t^{\text{(D)}}$ (Eq. 7) and $\mathcal{L}_t^{\text{(DSAP)}}$ (Eq. 11).
The training objective of the warmup stage is as follows:
\begin{align*}
\mathcal{L}^{\text{WP}}_t = &\mathcal{L}_t^{\text{(MLM)}} +\mathcal{L}_t^{\text{(MRC)}} + \mathcal{L}_t^{\text{(OG)}} + \\
& \mathcal{L}_t^{\text{(LP)}} + \mathcal{L}_t^{\text{(D)}} + \mathcal{L}_t^{\text{(DSAP)}} \tag{13}
\end{align*}
\noindent\textbf{Imitation Learning and Inference.} We use the imitation learning method DAgger~\cite{dagger} to further train the agent. During training, we use the connectivity graph $\mathcal{G}$ of the environment to select, as the next target node, the navigable node on the shortest path from the current node to the destination. We then use this target node to supervise the trajectory sampled from the current policy at each iteration, as sketched below.
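Schematically, one DAgger-style training episode proceeds as follows; the environment interface (\texttt{env.reset}, \texttt{env.step}, \texttt{env.shortest\_path\_next\_node}) is an illustrative stand-in, not the actual simulator API.
\begin{verbatim}
import torch
import torch.nn.functional as F

def dagger_episode(env, policy, optimizer):
    """One DAgger-style training episode (schematic)."""
    obs, done, losses = env.reset(), False, []
    while not done:
        logits = policy(obs)                    # scores over candidates
        expert = env.shortest_path_next_node()  # oracle target index
        losses.append(F.cross_entropy(logits.unsqueeze(0),
                                      torch.tensor([expert])))
        # Follow the sampled (not the expert) action, as in DAgger.
        action = torch.distributions.Categorical(logits=logits).sample()
        obs, done = env.step(action.item())
    loss = torch.stack(losses).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
\end{verbatim}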
The training objective here
is:
\begin{align*}
\mathcal{L}^{\text{IL}}_t = & \mathcal{L}_t^{\text{(OG)}} + \mathcal{L}_t^{\text{(LP)}} + \mathcal{L}_t^{\text{(DSAP)}} \tag{14}
\end{align*}
During inference, our agent builds the topological map on the fly and selects the action with the largest probability. If the agent decides to backtrack to a previously observed but unexplored node, the classical Dijkstra algorithm~\cite{dijkstra1959note} is used to plan the shortest path from the current node to that target node. The agent stops either when a stop action is predicted at the current location or when it exceeds the maximum number of action steps. When the agent stops, it selects the object with the maximum object prediction score.
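For reference, a self-contained sketch of this shortest-path planning step over the agent-built topological map is given below; the graph representation is an assumption.
\begin{verbatim}
import heapq

def backtrack_path(graph, start, goal):
    """Dijkstra's algorithm on the topological map.

    graph: dict mapping a node id to a list of (neighbor, edge_length)
    pairs; returns the node sequence from start to goal (assumed
    reachable).
    """
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
\end{verbatim}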
\section{Experiments}
\subsection{Datasets}
Because the navigation task is characterized by realistic high-level instructions, we conduct experiments and evaluate our agent on the embodied goal-oriented benchmarks REVERIE~\cite{reverie} and SOON~\cite{song2022one}.
REVERIE dataset: The dataset is split into a training set, a validation seen set, a validation unseen set, and a test set. The environments in the validation unseen and test sets do not appear in the training set, while all environments in the validation seen set have been fully or partially observed during training. The average instruction length is $18$ words. The dataset also provides object bounding boxes for each panorama, and the lengths of the ground-truth paths range from $4$ to $7$ steps.
SOON dataset: This dataset has a data split similar to REVERIE. The only difference is that it adds a validation split with seen instructions, which contains the same instructions in the same houses but with different starting positions. Instructions in SOON contain $47$ words on average, and the ground-truth paths range from $2$ to $21$ steps, with $9.5$ steps on average. Since the SOON dataset does not provide bounding boxes for object grounding, we use an existing object detector~\cite{anderson2018bottom} to generate candidate bounding boxes.
\subsection{Evaluation Metrics}
\noindent\textbf{Navigation Metrics.} Following previous work~\cite{vlnduet,evaluation}, we evaluate the navigation performance of our agent using standard metrics: Trajectory Length (TL), the average path length in meters; Success Rate (SR), the ratio of paths where the agent stops less than $3$ meters away from the target location; Oracle SR (OSR), which counts a trajectory as successful if it ever passes by the target location; and SR weighted by inverse Path Length (SPL).
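For completeness, SPL is computed as in the standard evaluation protocol~\cite{evaluation}:
\begin{align*}
\text{SPL} = \frac{1}{N}\sum_{i=1}^{N} S_i\,\frac{\ell_i}{\max(p_i,\ell_i)}~,
\end{align*}
where $N$ is the number of episodes, $S_i$ indicates success in episode $i$, $\ell_i$ is the shortest-path distance from the start to the goal, and $p_i$ is the length of the agent's path.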
\noindent\textbf{Object Grounding Metrics.} Following~\citet{reverie}, we use Remote Grounding Success (RGS), the ratio of successfully executed instructions, and RGS weighted by inverse Path Length (RGSPL).
\subsection{Implementation Details}
The model is trained for 100k iterations with a batch size of 32 for single action prediction, and for 50k iterations with a batch size of 8 for imitation learning with DAgger~\cite{dagger}. We optimize both phases with the AdamW~\cite{adamw} optimizer, using learning rates of 5e-5 and 1e-5, respectively. Two frozen models are used for data preprocessing: GLIDE~\cite{glide} to generate the room codebook and the imagined destinations, and CLIP~\cite{clip} for image feature extraction.
The whole training procedure takes two days on a single NVIDIA P100 GPU.
\section{Results}
\begin{table*}[t]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{l|cccc|cc||cccc|cc||cccc|cc}
\hline
\multirow{3}*{Methods} & \multicolumn{6}{c}{Val-seen} & \multicolumn{6}{c}{Val-unseen} & \multicolumn{6}{c}{Test-unseen} \\
~ & \multicolumn{4}{c}{Navigation} & \multicolumn{2}{c}{Grounding} &\multicolumn{4}{c}{Navigation} &
\multicolumn{2}{c}{Grounding} &\multicolumn{4}{c}{Navigation} &
\multicolumn{2}{c}{Grounding} \\
\hline
~ & TL $\downarrow$ & OSR$\uparrow$ & SR$\uparrow$ & SPL$\uparrow$ & RGS$\uparrow$ & RGSPL$\uparrow$ & TL $\downarrow$ & OSR$\uparrow$ & SR$\uparrow$ & SPL$\uparrow$ & RGS$\uparrow$ & RGSPL$\uparrow$ & TL $\downarrow$ & OSR$\uparrow$ & SR$\uparrow$ & SPL$\uparrow$ & RGS$\uparrow$ & RGSPL$\uparrow$ \\
\hline
RCM~\cite{reinforced} & 10.70 & 29.44 & 23.33 & 21.82 & 16.23 & 15.36 & 11.98 & 14.23 & 9.29 & 6.97 & 4.89 & 3.89 & 10.60 & 11.68 & 7.84 & 6.67 & 3.67 & 3.14 \\
SelfMonitor~\cite{ma2019self} & 7.54 & 42.29 & 41.25 & 39.61 & 30.07 & 28.98 & 9.07 & 11.28 & 8.15 & 6.44 & 4.54 & 3.61 & 9.23 & 8.39 & 5.80 & 4.53 & 3.10 & 2.39 \\
REVERIE~\cite{reverie} & 16.35 & 55.17 & 50.53 & 45.50 & 31.97 & 29.66 & 45.28 & 28.20 & 14.40 & 7.19 & 7.84 & 4.67 & 39.05 & 30.63 & 19.88 & 11.61 & 11.28 & 6.08 \\
CKR~\cite{gao2021room} & 12.16 & 61.91 & 57.27 & 53.57 & 39.07 & - & 26.26 & 31.44 & 19.14 & 11.84 & 11.45& - & 22.46 & 30.40 & 22.00 & 14.25 & 11.60 & - \\
SIA~\cite{hop} & 13.61 & 65.85 & 61.91 & 57.08 & 45.96 & 42.65 & 41.53 & 44.67 & 31.53 & 16.28 & 22.41 & 11.56 & 48.61 & 44.56 & 30.8 & 14.85 & 19.02 & 9.20 \\
VLN-DUET~\cite{vlnduet} & 13.86 & \textbf{73.86} & \textbf{71.75} & \textbf{63.94} & \textbf{57.41} & \textbf{51.14} & 22.11 & 51.07 & 46.98 & 33.73 & 32.15 & 22.60 & 21.30 & 56.91 & 52.51 & 36.06 & 31.88 & 22.06 \\
\hline
\textbf{LAD (Ours)} & 16.74 & 71.68 & 69.22 & 57.44 & 52.92 & 43.46 & 26.39 & \textbf{63.96} & \textbf{57.00} & \textbf{37.92} & \textbf{37.80} & \textbf{24.59} & 25.87 & \textbf{62.02} & \textbf{56.53} & \textbf{37.8} & \textbf{35.31} & \textbf{23.38} \\
\hline
\end{tabular}
}
\caption{Results on the REVERIE dataset compared to existing models, including the current state-of-the-art model VLN-DUET.
}
\label{tab: main table}
\end{table*}
\subsection{Comparisons to the state of the art}
\noindent\textbf{Results on REVERIE.}
In Table~\ref{tab: main table}, we compare our model with prior work in four categories: (1) imitation learning + reinforcement learning models: RCM~\cite{reinforced} and SIA~\cite{hop}; (2) supervised models: SelfMonitor~\cite{ma2019self} and REVERIE~\cite{reverie}; (3) imitation learning with an external knowledge graph: CKR~\cite{gao2021room}; and (4) imitation learning with topological memory: VLN-DUET~\cite{vlnduet}.
Our model outperforms all of the above by a large margin in the challenging unseen environments. Notably, it surpasses the previous state of the art \text{VLN-DUET} by approximately $10\%$ (\text{SR}) and $5\%$ (\text{RGS}) on the val-unseen split. On the test split, our model beats VLN-DUET with improvements of $4.02\%$ in \text{SR} and $3.43\%$ in \text{RGS}. These results demonstrate that the proposed \text{LAD} generalizes better to unseen environments, which is critical for real-world applications.
\noindent\textbf{Results on SOON.} Table~\ref{tab: soon} compares our proposed LAD with other models, including the state-of-the-art VLN-DUET. LAD significantly outperforms VLN-DUET across all evaluation metrics on the challenging test unseen split; in particular, it improves \text{SR} and \text{SPL} by $6.15\%$ and $6.4\%$, respectively. This result clearly shows the effectiveness of the proposed \text{Layout Learner} and \text{Goal Dreamer} modules.
\subsection{Ablation Studies}
We verify the effectiveness of our key contributions via an ablation study on the REVERIE dataset.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{imgs/cropped_rt.jpeg}
\caption{Room type beliefs are updated during exploration. The blue double circle denotes the current location; a yellow circle, an unexplored but visible node; a blue circle, a visited node; the red line shows the agent's trajectory. Each column contains bird's-eye and egocentric views of the current agent state.}
\label{fig: room change}
\end{figure*}
\begin{table}[H]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{c|c|ccccc}
\hline
Split & Methods & TL $\downarrow$ & OSR $\uparrow$ & SR $\uparrow$ & SPL $\uparrow$ & RGSPL $\uparrow$ \\
\hline
\multirow{3}*{\makecell[c]{Val \\ Unseen}} & GBE~\cite{zhu2021soon} & 28.96 & 28.54 & 19.52 & 13.34 & 1.16 \\
~ & VLN-DUET~\cite{vlnduet} & 36.20 & \textbf{50.91} & 36.28 & 22.58 & 3.75 \\
~ & LAD & 32.32 & 50.59 & \textbf{40.24} & \textbf{29.44} & \textbf{4.20} \\
\hline
\multirow{3}*{\makecell[c]{Test \\ Unseen}} & GBE~\cite{zhu2021soon} & 27.88 & 21.45 & 12.90 & 9.23 & 0.45 \\
~ & VLN-DUET~\cite{vlnduet} & 41.83 & 43.00 & 33.44 & 21.42 & 4.17 \\
~ & LAD & 35.71 & \textbf{45.80} & \textbf{39.59} &\textbf{27.82} & \textbf{7.08} \\
\hline
\end{tabular}
}
\caption{Results of our LAD model on the SOON dataset compared with the results of other state-of-the-art models.
}
\label{tab: soon}
\end{table}
\noindent\textbf{Are the Layout Learner and Goal Dreamer helpful? } We first verify the contributions of the \text{Layout Learner} and the \text{Goal Dreamer}. For a fair comparison, we re-implement VLN-DUET~\cite{vlnduet} with the visual representations replaced by CLIP features, and report the results in row 1 of Table~\ref{tab: abla_table}. The performance boost of this re-implementation over the VLN-DUET results in Table~\ref{tab: main table} indicates that CLIP features are more suitable for vision-and-language tasks than the original ImageNet ViT features. Comparing row $2$ with row $1$, it is clear that integrating the \text{Layout Learner} into the baseline improves performance across all evaluation metrics, which supports our assumption that layout information is vital in high-level instruction-following tasks. One might notice that in row $3$ the \text{Goal Dreamer} module boosts performance in \text{SR}, \text{OSR}, \text{SPL}, and \text{RGSPL}, but slightly harms \text{RGS}.
A lower \text{RGS} but higher \text{RGSPL} shows that the model with the \text{Goal Dreamer} takes fewer steps to reach the goal, i.e., it conducts more effective goal-oriented exploration, which supports our assumption.
\begin{table}[t]
\resizebox{\linewidth}{!}{
\begin{tabular}{ccc|ccccc}
\hline
Baseline & Layout Learner & Goal Dreamer & OSR$\uparrow$ & SR$\uparrow$ & SPL$\uparrow$ & RGS$\uparrow$ & RGSPL$\uparrow$ \\
\hline
\checkmark& & & 58.68 & 52.34 & 34.45 & 35.02 & 22.87 \\
\checkmark & \checkmark & & 63.90 & 56.04 & 37.66 & 37.06 & 24.58 \\
\checkmark & & \checkmark & 61.03 & 53.45 & 37.41 & 34.34 & 24.03 \\
\checkmark & \checkmark & \checkmark & 63.96 & 57.00 & 37.92 & 37.80 & 24.59 \\
\hline
\end{tabular}
}
\caption{Comparisons of baseline model and baseline with our proposed modules (Layout Learner and Goal Dreamer).}
\label{tab: abla_table}
\end{table}
\noindent\textbf{Visual or textual common sense?}
In this work, we use several images to represent a commonsense concept.
In this experiment, we study whether visual descriptors of room types lead to better generalization than directly using a classification label or a textual description when training an agent.
In the first row of Table~\ref{tb: codeboook}, we show the results of directly replacing the visual codebook module with a room label classification head. This causes a $3\%$ drop in both navigation and grounding success rates, indicating that a single room type classification head is insufficient for learning good latent features of room concepts.
We further compare the visual codebook with a textual codebook. Since we use text prompts to generate the multiple room images of our visual room codebook, encoded with the CLIP~\cite{clip} visual encoder, for a fair comparison we encode the same text prompts into a textual codebook using the CLIP text encoder. We then replace the visual codebook in our model with the textual one and re-train the whole model. As shown in Table~\ref{tb: codeboook}, the textual codebook leads to a $5.62\%$ drop in navigation success rate (SR) and a $5.58\%$ drop in remote grounding success rate (RGS). This indicates that visual descriptors of commonsense room concepts are informative and easier for an autonomous agent to follow.
\begin{table}[t]
\resizebox{\linewidth}{!}{
\begin{tabular}{ccc|ccccc}
\hline
FFN & Text & Visual & OSR$\uparrow$ & SR$\uparrow$ & SPL$\uparrow$ & RGS$\uparrow$ & RGSPL$\uparrow$ \\
\hline
\checkmark & & &58.76 &53.73 & 35.22 &34.68 & 23.58 \\
&\checkmark & & 56.06 & 51.38 & 35.57 & 32.38 & 22.05 \\
& & \checkmark & 63.96 & 57.00 & 37.92 & 37.80 & 24.59 \\
\hline
\end{tabular}
}
\caption{Codebook type comparison: visual room codebook versus textual room codebook and direct classification head.}
\label{tb: codeboook}
\end{table}
\noindent\textbf{Could the room type prediction be corrected while exploring more?}
In this section, we examine a predicted trajectory. As shown in Fig.~\ref{fig: room change}, the incorrect room type prediction for node $\alpha$ is corrected after the room is explored. At time step $t$, the observation contains only chairs, so the room type of node $\alpha$ is predicted to be an office. Upon entering the room at time step $t+1$, the table and television indicate that this room is more likely a living room. After obtaining another view from a different viewpoint, the room type of node $\alpha$ is correctly recognized as a bedroom. Since the instruction asks to find the pillow inside the bedroom, the agent can correctly track its progress with the help of room type recognition and successfully execute the instruction. This indicates that the ability to correct earlier beliefs benefits the layout understanding of the environment and, in turn, positively influences the action decision process. We also assess the room type correction ability quantitatively: Fig.~\ref{fig: room acc} shows the room type recognition accuracy w.r.t. time step $t$ on the validation unseen set of the REVERIE dataset. The accuracy increases as the environment is explored further.
We also observe that the overall accuracy of room type recognition is still not satisfactory, for which we see two main reasons: first, the room types defined in Matterport3D are ambiguous, e.g., there is no well-defined difference between a family room and a living room; second, many rooms have no clear boundary (no doors) in the visual input, so it is hard to distinguish connected rooms from the observations. These ambiguities call for softer labels during learning, which is also a reason why using images as a commonsense resource performs better than textual descriptors or a linear classification head, as seen in Table~\ref{tb: codeboook}.
\section{Limitations and future work}
In this paper, we have described our findings on including room type prediction and destination imagination in the Embodied Referring Expression Grounding task, but several limitations still require further study.
\noindent\textbf{Imagination is not dynamic}
and is only conditioned on the given instruction. Including observations and dynamically modifying the imagination with a trainable generation module could help fully exploit the knowledge gained during exploration; this knowledge could guide the imagination model to generate destination images in a style similar to the environment. It is also possible to follow the ideas of PathDreamer~\cite{koh2021pathdreamer} and Dreamer~\cite{hafner2019dream,hafner2020dreamer2}, which generate a sequence of hidden future states based on the history to enhance reinforcement-learning models.
\noindent\textbf{Constant number of generated visual features.} Due to the long generation time and storage consumption, we generate only five images as goal imaginations. Diversity could be increased by generating more images; a better sampling strategy for the visual room codebook construction and the destination imagination could then be designed, such as randomly picking a set of images from the generated pool. Since we have observed overfitting in the later stages of training, injecting randomness in this way could further improve the generalization of the model.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{imgs/unseen_rt_acc.png}
\caption{Room recognition accuracy of the validation unseen set of the REVERIE dataset.}
\label{fig: room acc}
\end{figure}
\section{Introduction}
For the last few decades, we have seen tremendous developments in knot theory, a subject where diverse areas of mathematics and physics interact in beautiful ways. The interplay between mathematics and physics involving knot theory was triggered by the seminal paper of Witten \cite{Witten:1988hf}, which shows that Chern-Simons theory provides a natural framework to study link invariants. In particular, the expectation value of a Wilson loop along a link $\cal L$ in $S^3$ gives a topological invariant of the link depending on the representation of the gauge group. For a representation $R$ of the $SU(2)$ gauge group, the invariant corresponds to a colored Jones polynomial $J_R({\cal L};q)$. Similarly, one can relate an $SU(N)$ invariant with representation $R$ to a colored HOMFLY invariant $P_R({\cal L};a,q)$.
While the systematic procedure to compute $SU(N)$ invariants in $S^3$ was investigated in \cite{Kaul:1991vt, Kaul:1992rs, RamaDevi:1992dh}, it is very difficult to carry out explicit computations in general.
Even in mathematics, although the definition \cite{morton1993invariants,lin2010hecke} of colored HOMFLY polynomials has been provided, explicit calculations for non-trivial knots and links are far from being under control.
Nevertheless, there have been spectacular developments in the computation of colored HOMFLY polynomials in recent years.
For torus knots and links, the HOMFLY invariants colored by arbitrary representations can, in principle, be computed by using the generalizations \cite{lin2010hecke,Stevan:2010jh,Brini:2011wi} of the Rosso-Jones formulae \cite{Rosso:1993vn}. In addition, Kawagoe has recently formulated a mathematically rigorous procedure based on the linear skein theory to calculate HOMFLY invariants colored by symmetric representations for some non-torus knots and links \cite{Kawagoe:2012bt}. Furthermore, explicit closed formulae for the colored HOMFLY polynomials $P_{[n]}({\cal K};a,q)$ with symmetric representations ($R=\raisebox{-.1cm}{\includegraphics[width=1.4cm]{symmetric}}$) were provided for the $(2,2p+1)$-torus knots \cite{Fuji:2012pm} and the twist knots \cite{Itoyama:2012fq,Nawata:2012pg,Kawagoe:2012bt}.
In this paper, we shall demonstrate the computations of the HOMFLY polynomials colored by symmetric representations in the framework of Chern-Simons theory.
Exploiting the connection between Chern-Simons theory and the two-dimensional Wess-Zumino-Novikov-Witten (WZNW) model, the prescription to evaluate expectation values of Wilson loops was formulated entirely in terms of the fusion and braid operations on conformal blocks of the WZNW model \cite{Kaul:1991vt, Kaul:1992rs, RamaDevi:1992dh}. The procedure therefore inevitably involves the $SU(N)$ quantum Racah coefficients (the quantum $6j$-symbols for $U_q(\ensuremath{\mathfrak{sl}}_N)$), which makes explicit computations hard. The first step in this direction was made in \cite{Zodinmawia:2011ud}: using the properties that the $SU(N)$ quantum Racah coefficients must obey, the explicit expressions involving the first few symmetric representations were determined. This result, as well as the closed formulae for the twist knots, motivated us to explore a closed-form expression for the $SU(N)$ quantum Racah coefficients. We succeeded in writing such an expression for multiplicity-free representations \cite{Nawata:2013ppa}, which enables us to compute the colored HOMFLY polynomials carrying symmetric representations. To consider knots and links more complicated than the ones treated in \cite{Zodinmawia:2011ud}, we make use of the TQFT method developed in \cite{Kaul:1991vt, Kaul:1992rs}.
With this method, the expressions for the twist knots, the Whitehead links, the twist links and the Borromean rings \cite{Kawagoe:2012bt,Nawata:2012pg,Gukov:2013} have been reproduced up to 4 boxes. Beyond these classes of knots and links, the validity of our procedure is confirmed by the complete agreement with the results obtained in \cite{Itoyama:2012qt, Itoyama:2012re}. Furthermore, the explicit evaluations of multi-colored link invariants shed new light on the general properties of colored HOMFLY invariants of links and provide meaningful implications for homological invariants of links.
The plan of the paper is as follows. In \S \ref{sec:CS}, we briefly review $U(N)$ Chern-Simons theory. In particular, we present the
list of building blocks and the corresponding states which are necessary for calculations
of colored HOMFLY polynomials. In \S \ref{sec:knots}, we compute the colored HOMFLY polynomials of seven-crossing knots and ten-crossing thick knots.
In \S \ref{sec:links}, multi-colored HOMFLY invariants for two-component and three-component links are presented. We summarize and list several open problems in \S \ref{sec:conclusion}. For convenience, we explicitly give the $SU(N)$ quantum Racah coefficients for some representations in Appendix \ref{sec:fusion}. Finally, we should mention that a Mathematica file with the colored HOMFLY invariants whose expressions are too lengthy for the main text is attached to the arXiv page as an ancillary file.
\section{Invariants of knots and links in Chern-Simons theory}\label{sec:CS}
We shall briefly discuss the aspects of $U(N)$ Chern-Simons theory necessary for computing invariants of framed knots and links; we refer the reader to \cite{Kaul:1991vt, Kaul:1992rs,RamaDevi:1992dh} for more details. The action for $U(N)\simeq U(1)\times SU(N)$ Chern-Simons theory is given by
\begin{equation*}
S=\frac{k_1}{4 \pi} \int_{S^3} B \wedge dB+
\frac{k}{4 \pi} \int_{S^3} {\rm Tr}\left(A \wedge dA+ \frac{2}{3}A \wedge A \wedge A \right)~,
\end{equation*}
where $B$ is the $U(1)$ gauge connection and $A$ is the $SU(N)$ matrix-valued gauge connection, with Chern-Simons couplings (also referred to as Chern-Simons levels) $k_1$ and $k$, respectively.
The Wilson loop observable for an arbitrary framed link $\mathcal{L}$ made up of $s$ components $\{\mathcal{K}_{\beta}\}$, with framing numbers $f_{\beta}$, is the trace of the holonomies along the components ${\cal K}_{\beta}$:
\begin{equation*}
W_{(R_1,n_1),(R_2,n_2),\ldots
(R_s,n_s)
}[{\cal L}] = \prod_{\beta=1}^{s}{\rm Tr}_{R_{\beta}}
U^A[{\cal K}_{\beta}]
{\rm Tr}_{n_{\beta}}U^B[{\cal K}_{\beta}]~,
\end{equation*}
where $U^A[{\cal K}_{\beta}]=P[\exp \oint_{{\cal K}_{\beta}}A]$ denotes the holonomy of the gauge field $A$ around the component knot ${\cal K}_{\beta}$, carrying a representation $R_{\beta}$, of the $s$-component link, and $n_{\beta}$ is the $U(1)$ charge carried by ${\cal K}_{\beta}$. Note that the framing number $f_{\beta}$ for the component knot ${\cal K}_\beta$ is the difference between the total number of left-handed crossings and that of right-handed crossings.
The expectation values of these Wilson loop operators are the framed link invariants:
\begin{equation}\label{feynman}
V_{R_1,\ldots R_s}^{\{SU(N)\}}[{
\cal L}]
V_{n_1,\ldots ,n_s}^{\{U(1)\}}[{\cal L}]
=
\langle W_{(R_1,n_1),\ldots,(R_s,n_s)}[{\cal L}]\rangle=
\frac{\int[{\cal D}B][{\cal D}A] e^{iS}W_{(R_1,n_1),\ldots ,(R_s,n_s)}
[{\cal L}]}{\int [{\cal D}B][{\cal D}A] e^{iS}}~.
\end{equation}
The $SU(N)$ invariants will be rational functions in the variable $q=\exp\left(\frac{2 \pi i}{k+N}\right)$ with the following choice of $U(1)$ charge and coupling $k_1$ \cite{Marino:2001re, Borhade:2003cu}:
\begin{equation*}
n_{\beta}=\frac{\ell^{(\beta)}}{\sqrt N} ~;~k_1=k+N~,
\end{equation*}
where $\ell^{(\beta)}$ is the total number of boxes in the Young tableau of the representation $R_{\beta}$.
The $U(1)$ invariant involves only the linking numbers $\{{\rm Lk}_{\alpha \beta}\}$ between the component knots and the framing numbers $\{f_{\beta}\}$ of each component knot. That is,
{\small
\begin{equation}
V_{\frac{\ell^{(1)}}{\sqrt N},\ldots ,\frac{\ell^{(s)}}{\sqrt N}}^{\{U(1)\}}[
{\cal L}]
=(-1)^{\sum_{\beta} \ell^{(\beta)} f_{\beta}}
\exp\left(\frac{i \pi}{k+N}\sum_{\beta=1}^s
\frac{(\ell^{(\beta)})^2 f_{\beta}}{N}\right) \exp\left( \frac{i \pi}{k+N} \sum_{\alpha \neq \beta}
\frac{\ell^{(\alpha)}\ell^{(\beta)} {\rm Lk}_
{\alpha \beta}}{N} \right)~. \label{u1}
\end{equation}
}
Although the expectation values of Wilson loops \eqref{feynman} involve infinite-dimensional functional integrals, one can obtain $SU(N)$ invariants non-perturbatively by utilizing the relation between $SU(N)$ Chern-Simons theory and the $SU(N)_k$
WZNW model \cite{Witten:1988hf}. The path integral of Chern-Simons theory on a three-manifold with boundary defines an element in the quantum Hilbert space on the boundary, which is isomorphic to the space of conformal blocks of the WZNW model. Using this fact, the evaluations of the expectation values of Wilson loops can be reduced to the braiding and fusion operations on conformal blocks once a link diagram is appropriately drawn in $S^3$ \cite{Kaul:1991vt, Kaul:1992rs, RamaDevi:1992dh}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=.5]{conformal_bases}
\caption{Two bases for four-point conformal blocks}
\label{figs:conf}
\end{figure}
The Chern-Simons functional integral over a three-ball with a four-punctured $S^2$ boundary is given by a state in the Hilbert space spanned by
four-point conformal blocks. There are two different bases for four-point conformal blocks as shown in Figure \ref{figs:conf}
where the internal representations satisfy the fusion rules $t \in (R_1 \otimes R_2) \cap ( R_3 \otimes R_4)$
and
$s \in (R_2 \otimes \overline R_3) \cap (\overline R_1 \otimes R_4)$.
The conformal block $\vert \phi_t(R_1,R_2,\overline R_3, \overline R_4) \rangle$ is suitable for the braiding operators $b_1^{(\pm)}$ and $b_3^{(\pm)}$, where $b_i$ denotes a right-handed half-twist, or braiding, between the $i^{th}$ and the $(i+1)^{th}$ strands. Here the superscripts $(+)$ and $(-)$ denote the braidings of two strands with parallel and anti-parallel orientations, respectively. Similarly, braiding the middle two strands with the operator $b_2^{(\pm)}$ requires the conformal block $\vert \hat {\phi}_s(R_1,R_2,\overline R_3,\overline R_4) \rangle$. In other words, these states are eigenstates of the braiding operators
\begin{eqnarray*}
~b_1^
{(\pm)}\vert \phi_t(R_1,R_2,\overline R_3, \overline R_4) \rangle
&=&
\lambda_t^{(\pm)}(R_1, R_2)
\vert \phi_t(R_2,R_1,\overline R_3,\overline R_4) \rangle~,\\
b_2^{(\pm)} \vert \hat {\phi}_s(R_1,R_2,\overline R_3,\overline R_4) \rangle
&=&
\lambda_s^{(\pm)}(R_2,\overline R_3)
\vert \hat {\phi}_s(R_1,\overline R_3,R_2,\overline R_4) \rangle
~,\\
b_3^{(\pm)}
\vert \phi_t(R_1,R_2,\overline R_3,\overline R_4) \rangle&=&
\lambda_t^{(\pm)}(\overline R_3,\overline R_4)\vert \phi_t(R_1,R_2,\overline R_4,\overline R_3) \rangle~,
\end{eqnarray*}
where the braiding eigenvalues
$\lambda_t^{(\pm)}(R_1,R_2)$ in the vertical framing are
\begin{equation*}
\lambda_t^{(\pm)}(R_1,R_2)=\epsilon^{(\pm)}_{t;R_1,R_2} \left(q^{\frac{C_{R_1}+C_{R_2}-C_{R_t}}{2}}\right)^{\pm 1}~,\label{brev}
\end{equation*}
where $\epsilon^{(\pm)}_{t;R_1,R_2}=\pm 1$ (see (3.9) in \cite{Zodinmawia:2011ud}). Here the quadratic Casimir of the representation $R$ is given by
\begin{equation*}
C_R= \kappa_R - \frac{\ell^2}{2N}~,~\kappa_R=\frac{1}{2}[N\ell+\ell+\sum_i (\ell_i^2-2i\ell_i)]~, \label{casi}
\end{equation*}
where $\ell_i$ is the number of boxes in the $i^{\rm th}$ row of the Young tableau corresponding to the representation $R$, and $\ell$ is the total number of boxes.
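For instance, for the rank-$n$ symmetric representation $R=[n]$ one has $\ell=\ell_1=n$, so that
\begin{equation*}
\kappa_{[n]}=\frac{1}{2}\left[Nn+n+n^{2}-2n\right]=\frac{n(N+n-1)}{2}~,\qquad C_{[n]}=\frac{n(N+n-1)}{2}-\frac{n^{2}}{2N}~,
\end{equation*}
which for $n=1$ reproduces the familiar value $C_{[1]}=\frac{N^{2}-1}{2N}$ for the fundamental representation.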
The two bases in Figure \ref{figs:conf} are related by a fusion matrix $a_{ts}$ as follows:
\begin{equation*}
\vert \phi_t(R_1,R_2,\overline R_3, \overline R_4) \rangle= a_{ts}\!\left[\begin{footnotesize}\begin{array}{cc}R_1&R_2 \\\overline R_3 &\overline R_4 \end{array}\end{footnotesize}\right]
\vert \hat {\phi}_s(R_1,R_2,\overline R_3,\overline R_4) \rangle~. \label{dual}
\end{equation*}
The fusion matrix is determined by the $SU(N)$ quantum Racah coefficients. We have obtained these coefficients for a few representations in \cite{Zodinmawia:2011ud} and recently for symmetric representations in \cite{Nawata:2013ppa}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{building_blocks1}
\caption{Fundamental building blocks}
\label{figs:blocks1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{building_blocks2}
\caption{Composite building blocks}
\label{figs:blocks2}
\end{figure}
In order to write the explicit polynomial form of the $SU(N)$ invariants for many knots, we will require the states corresponding to the Chern-Simons functional integral over three-balls with several four-punctured $S^2$ boundaries. We therefore present these states, which serve as the necessary building blocks for the knots and links of \S \ref{sec:knots} and \S\ref{sec:links}.
Inside a three-ball with two $S^2$ boundaries, we have a four-strand braid with braid word ${\cal B}$, shown as $v_1$ in Figure \ref{figs:blocks1}. We call the corresponding state $v_1$; its form in terms of four-point conformal blocks of the WZNW model is
\begin{equation*}
v_1=\sum_{l\in(R_{1}\otimes R_{2})\cap(R_{3}\otimes R_{4})}\left\lbrace\mathcal{B}\,|\phi_l (R_{1},R_{2},\overline{R}_{3},\overline{R}_{4})\rangle\right\rbrace^{(1)}~{|\phi_l(R_{1},R_{2},\overline{R}_{3},\overline{R}_{4})\rangle}^{(2)}~,
\end{equation*}
where the superscripts outside the four-point conformal blocks denote the boundaries (1) and (2) indicated in Figure \ref{figs:blocks1}. For the simplest three-balls with a single $S^2$ boundary, the states $v_2$ and $v_3$ are
\begin{equation*}
v_2={|\phi_{0}(R_{1},\overline{R}_{1},R_{2},R_{2})\rangle}^{(1)}~,~
v_3=|\hat{\phi}_{0}(R_{1},\overline{R}_{2},R_{2},\overline{R}_{1})\rangle^{(1)}~,
\end{equation*}
where the subscript $0$ in $\phi_0$ represents the singlet representation. This procedure can be generalized to three-balls with more than one $S^2$ boundary. For definiteness, we first write the state $v_4$ for three $S^2$ boundaries and then generalize to the state $v_r$ for $r$ $S^2$ boundaries:
\begin{eqnarray*}
v_4&=&\sum_{l\in(R_{1}\otimes\overline{R}_{1})\cap(R_{2}\otimes\overline{R}_{2})\cap(R_{3}\otimes\overline{R}_{3})}\frac{1}{\epsilon_{l}\sqrt{\dim_{q}l}}~
{|\phi_{l}(\overline{R}_{1},R_{1},\overline{R}_{2},R_{2})\rangle}^{(1)}\\
~~~&~&~~~~~~~~~~~~~~~~~~{|\phi_{l}(\overline{R}_{2},R_{2},\overline{R}_{3},R_{3})\rangle}^{(2)}{|\phi_{l}(\overline{R}_{3},R_{3},\overline{R}_{1},R_{1})\rangle}^{(3)}~,\\
v_{r}&=&\sum_{l}\frac{1}{\left(\epsilon_{l}\sqrt{\dim_{q}l}\right)^{r-2}}~|\phi_{l}(\overline{R}_{1},R_{1},\overline{R}_{2},R_{2})\rangle^{(1)}\ldots ~
|\phi_{l}(\overline{R}_{r},R_{r},\overline{R}_{1},R_{1})\rangle^{(r)}~.
\end{eqnarray*}
Here $\epsilon_l\equiv \epsilon_l^{R_1,\overline R_1}=\pm 1$ (see (3.1) in \cite{Zodinmawia:2011ud}).
Using these fundamental building blocks, we can obtain states for three-balls with two $S^2$ boundaries in Figure \ref{figs:blocks2}, which we call composite building blocks.
For example, the state $v_5$ for two $S^2$ boundaries can be viewed as the gluing of appropriate oppositely oriented boundaries of $v_1$, $v_2$ and $v_4$, as shown. In the equivalent diagram for $v_5$, we have indicated $\overline 3$ on an $S^2$ boundary to denote that it is oppositely oriented to the $S^2$ boundary numbered $3$.
Gluing along two oppositely oriented $S^2$ boundaries amounts to taking the inner product of the states corresponding to those boundaries. For example, gluing along the $S^2$ boundaries $3$ and $\overline{3}$ results in
\begin{equation*}
~^{(\overline 3)}{\langle \phi_{l}(R_3, \overline{R}_{3},R_1,\overline{R}_{1})\vert}
{|\phi_{x}(\overline{R}_{3},R_{3},\overline{R}_{1},R_{1})\rangle}^{(3)}=\delta_{lx}~.
\end{equation*}
Writing the states $v_1$, $v_2$ and $v_4$ and taking the appropriate inner products, we obtain the state $v_5$ as
\begin{eqnarray*}
v_{5}&=&\sum_{l,r}\frac{1}{\epsilon_{l} \sqrt{\dim_{q}l}}~\epsilon_{r}^{\overline{R}_1,R_3}\sqrt{\dim_{q}r}~(\lambda_{r}^{(-)}(\overline{R}_{1},R_{3})){}^{2m}~a_{lr}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & \overline{R}_{1}\\
R_{3} & \overline{R}_{3}
\end{array}\end{footnotesize}\right] \\
~&~&~\times {|\phi_{l}(\overline{R}_{1},R_{1},\overline{R}_{2},R_{2})\rangle}^{(1)}~{|\phi_{l}(R_{3},\overline{R}_{3},R_{2},\overline{R}_{2})\rangle}^{(2)}~.
\end{eqnarray*}
The state $v_6$ is similar to the state $v_5$, but it involves an odd number of braidings:
\begin{eqnarray*}
v_{6}&=&\sum_{l,r}\frac{1}{\epsilon_{l}^{\overline{R}_1,R_3}\sqrt{\dim_{q}l}}~\epsilon_{r}^{R_1,R_3}\sqrt{\dim_{q}r}(\lambda_{r}^{(+)}(R_{1},R_{3}))^{-(2m+1)}~a_{rl}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & R_{3}\\
\overline{R}_{1} & \overline{R}_{3}
\end{array}\end{footnotesize}\right] \\
~&~&\times |\phi_{\overline{l}}(R_{1},\overline{R}_{3},\overline{R}_{2},R_{2})\rangle^{(1)}~|{\phi_{l}(R_{3},\overline{R}_{1},R_{2},\overline{R}_{2})\rangle}^{(2)}~.
\end{eqnarray*}
The state $v_7$ can be obtained by gluing $v_1$, $v_5$, and again $v_1$, with appropriate braid words ${\cal B}$ in both copies of $v_1$, as shown in the equivalent diagram:
\begin{eqnarray*}
v_{7}&=&\sum_{l,r,x,y}\frac{1}{\epsilon_{l}\sqrt{\dim_{q}l}}~\epsilon_{r}^{\overline{R}_1,R_3}\sqrt{\dim_{q}r}~(\lambda_{r}^{(-)}(\overline{R}_{1},R_{3}))^{2m}a_{lr}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & \overline{R}_{1}\\
R_{3} & \overline{R}_{3}
\end{array}\end{footnotesize}\right]\\
~&~&\times a_{lx}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & \overline{R}_{1}\\
\overline{R}_{3} & R_{3}
\end{array}\end{footnotesize}\right]a_{ly}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R}_{2} & R_{2}\\
R_{3} & \overline{R}_{3}
\end{array}\end{footnotesize}\right]|{\phi_{x}(\overline{R}_{3},\overline{R}_{1},R_{1},R_{3})\rangle}^{(1)}~{|\phi_{y}(R_{3},R_{2},\overline{R}_{2},\overline{R}_{3})\rangle}^{(2)} ~.
\end{eqnarray*}
The state $v_8$ is almost the same as the state $v_7$ except for an odd number instead of an even number of braidings:
\begin{eqnarray*}
v_{8}&=&\sum_{l,r,x,y}\frac{1}{\epsilon_{l}^{\overline{R}_1,R_3}\sqrt{\dim_{q}l}}~\epsilon_{r}^{R_1,R_3}\sqrt{\dim_{q}r}~(\lambda_{r}^{(+)}(R_{1},R_{3}))^{-(2m+1)}\!a_{rl}\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & R_{3}\\
\overline{R}_{1} & \overline{R}_{3}
\end{array}\end{footnotesize}\right] \cr
&&\times a_{x\overline{l}}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R}_{2} & R_{1}\\
\overline{R}_{3} & R_{2}
\end{array}\end{footnotesize}\right]a_{yl}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{2} & R_{3}\\
\overline{R}_{1} & \overline{R}_{2}
\end{array}\end{footnotesize}\right]|{\phi_{x}(\overline{R}_{2},R_{1},\overline{R}_{3},R_{2})\rangle}^{(1)}~{|\phi_{y}(R_{2},R_{3},\overline{R}_{1},\overline{R}_{2})\rangle}^{(2)}~.
\end{eqnarray*}
The equivalent diagram for $v_9$ in Figure \ref{figs:blocks2} determines the state as
\begin{eqnarray*}
v_{9}&=&\sum_{l,x,y,z}\frac{1}{\epsilon_{l}^{\overline{R}_1,R_2}\sqrt{\dim_{q}l}}~\epsilon_{z}^{\overline R_1,R_2}\sqrt{\dim_{q}z}~a_{xl}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & \overline{R}_{1}\\
R_{2} & \overline{R}_{2}
\end{array}\end{footnotesize}\right]a_{yx}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{2} & R_{1}\\
\overline{R}_{1} & \overline{R}_{2}
\end{array}\end{footnotesize}\right]\\
&&\times a_{zy}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R}_{1} & R_{2}\\
R_{1} & \overline{R}_{2}
\end{array}\end{footnotesize}\right]\lambda_{x}^{(-)}(R_{1},\overline{R}_{1})~\lambda_{y}^{(+)}(R_{1},R_{2})~\lambda_{z}^{(-)}(\overline{R}_{1},R_{2})~{|\phi_{l}(\overline{R}_{1},R_{2},R_{1},\overline{R}_{2})\rangle}^{(1)} \\
&~&\times {|\phi_{\overline{l}}(R_{1},\overline{R}_{2},\overline{R}_{1},R_{2})\rangle}^{(2)}~.
\end{eqnarray*}
To get the state $v_{10}$, we glue the state $v_1$ with braid word ${\cal B}=b_2^{(+)} \{b_3^{(-)}\}^{-1}$ to the state $v_9$:
\begin{eqnarray*}
v_{10}&=&\sum_{l,x,y,z}\frac{1}{\epsilon_{l}^{\overline{R}_1,R_2}\sqrt{\dim_{q}l}}~\epsilon_{z}^{\overline{R}_1,R_2}\sqrt{\dim_{q}z}~a_{xl}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & \overline{R}_{1}\\
R_{2} & \overline{R}_{2}
\end{array}\end{footnotesize}\right]a_{yx}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{2} & R_{1}\\
\overline{R}_{1} & \overline{R}_{2}
\end{array}\end{footnotesize}\right]\\
~&~&\times a_{zy}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R}_{1} & R_{2}\\
R_{1} & \overline{R}_{2}
\end{array}\end{footnotesize}\right]a_{sl}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & \overline{R}_{1}\\
R_{2} & \overline{R}_{2}
\end{array}\end{footnotesize}\right]a_{tl}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & \overline{R}_{1}\\
R_{2} & \overline{R}_{2}
\end{array}\end{footnotesize}\right]\lambda_{x}^{(-)}(R_{1},\overline{R}_{1})~\lambda_{y}^{(+)}(R_{1},R_{2})\\
~&~&\times \lambda_{z}^{(-)}(\overline{R}_{1},R_{2})~{|\phi_{s}(\overline{R}_{1},R_{1},\overline{R}_{2},R_{2})\rangle}^{(1)}~{|\phi_{t}(R_{1},\overline{R}_{1},R_{2},\overline{R}_{2})\rangle}^{(2)} ~.
\end{eqnarray*}
Our main aim is to redraw many knots and links in $S^3$ using these building blocks so that the invariant involves only multiplicity-free Racah coefficients.
For instance, see Figure~6 and Figure~7 in \cite{Ramadevi:1993hu}
where equivalent diagrams of the knots $\bf 9_{42}$
and $\bf 10_{71}$ are drawn.
As an example, we will demonstrate the evaluation of the Chern-Simons invariant for
the knot $\bf {10_{152}}$ by using these building blocks.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{glue10_152}
\caption{The knot $\bf{10_{152}}$ in gluing of three-balls.}
\label{figs:glue}
\end{figure}
This knot can be viewed as a gluing of five three-balls, as shown in Figure \ref{figs:glue}. Using the states for the fundamental and composite building blocks, we can directly write the states corresponding to the three-balls $\{p_i\}$ ($i=1,\cdots,5$) as follows:
\begin{eqnarray*}
p_{1}&=&\sum_{s_1,u}\epsilon_{u}^{R,\overline{R}}\sqrt{\dim_{q}u}~a_{s_1u}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]~(\lambda_{u}^{(-)}(R,\overline{R}))^{2}~{|\phi_{s_1}(\overline{R},R,\overline{R},R)\rangle}^{(\overline{1})} ~,\\
p_{2}&=&\sum_l\frac{1}{\epsilon_{l}^{R,\overline{R}}\sqrt{\dim_{q}l}}~{|\phi_{l}(R,\overline{R},R,\overline{R})\rangle}^{(1)}~{|\phi_{l}(R,\overline{R},R,\overline{R})\rangle}^{(2)}~{|\phi_{l}(\overline{R},R,\overline{R},R)\rangle}^{(3)} ~,\\
p_{3}&=&\sum_{s_2,v}\epsilon_{v}^{R,R}\sqrt{\dim_{q}v}~a_{s_2v}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~\lambda_{s_2}^{(-)}(R,\overline{R})~(\lambda_{v}^{(+)}(R,R))^{3}~{|\phi_{s_2}(\overline{R},R,\overline{R},R)\rangle}^{(\overline{2})} ~, \\
p_{4}&=&\sum_{l_1,r,x,y}\frac{1}{\epsilon_{l_1}^{R,\overline{R}}\sqrt{\dim_{q}l_1}}~\epsilon_{r}^{R,R}\sqrt{\dim_{q}r}(\lambda_{r}^{(+)}(R,R))^{3}\!a_{rl_1}\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right] \\
&&\times a_{xl_1}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]a_{yl_1}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R} & \overline{R}\\
R & R
\end{array}\end{footnotesize}\right]{|\phi_{x}(R,\overline{R},R,\overline{R})\rangle}^{(\overline{3})}~{|\phi_{y}(\overline{R},\overline{R},R,R)\rangle}^{(4)} ~,\\
p_{5}&=&\sum_{s_3,z}\epsilon_{z}^{R,\overline{R}}\sqrt{\dim_{q}z}~a_{s_3z}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~\lambda_{s_3}^{(+)}(R,R)~\lambda_{z}^{(-)}(R,\overline{R})~{|\phi_{s_3}(R,R,\overline{R},\overline{R})\rangle} ^{(\overline{4})} ~.
\end{eqnarray*}
One can obtain the $SU(N)$ invariant by gluing all the three-balls together, which amounts to taking appropriate inner products of the above five states:
\begin{flalign*}
V_{R}^{\{SU(N)\}}[{\bf 10_{152}}]= & \sum_{l,l_1,r,u,v,x,y,z}\frac{1}{\epsilon_{l}^{R,\overline{R}}\sqrt{\dim_{q}l}~\epsilon_{l_1}^{R,\overline{R}}\sqrt{\dim_{q}l_1}}~\epsilon_{z}^{R,\overline{R}}\sqrt{\dim_{q}z}&\\
& \times \epsilon_{u}^{R,\overline{R}}\sqrt{\dim_{q}u}~\epsilon_{r}^{R,R}\sqrt{\dim_{q}r}~\epsilon_{v}^{R,R}\sqrt{\dim_{q}v}&\\
&\times a_{rl_1}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~a_{ll_1}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]~a_{yl_1}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~a_{yz}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~a_{lu}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]& \\
& \times a_{vl}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~(\lambda_{r}^{(+)}(R,R))^{3}~\lambda_{y}^{(+)}(R,R)~\lambda_{z}^{(-)}(R,\overline{R})&\\
& \times(\lambda_{u}^{(-)}(R,\overline{R}))^{2}~\lambda_{l}^{(-)}(R,\overline{R})~(\lambda_{v}^{(+)}(R,R))^{3}.\end{flalign*}
The framing number for the knot $\bf 10_{152}$ as drawn in Figure \ref{figs:glue} is $f=-11$, giving the $U(1)$ invariant \eqref{u1}:
\begin{equation*}
V_{R}^{\{U(1)\}}[{\bf 10_{152}}]= q^{-\frac{11 \ell^2}{2N}}~.
\end{equation*}
To adjust the framing number to zero, we introduce an additional twist with framing number $-f$ into the knot. This additional twist leads to multiplication by a factor $q^{-f C_R}$, giving the unreduced HOMFLY polynomial
\begin{eqnarray*}
\overline P_R({\bf 10_{152}};a=q^N,q)= q^{11 C_R}V_{R}^{\{U(1)\}}[{\bf 10_{152}}] V_R^{\{SU(N)\}}[{\bf 10_{152}}] ~.
\end{eqnarray*}
Note that the factor $q^{- f C_R}$ can be incorporated as a framing correction in the
vertical framing braiding eigenvalues:
\begin{equation*}
\hat {\lambda}^{(+)}_s(R,R)= q^{C_R}\lambda^{(+)}_s(R,R)~,~ \hat {\lambda}^{(-)}_s(R,\overline{R})= q^{C_R}\lambda^{(-)}_s(R,\overline{R})~,\label{sframe}
\end{equation*}
where $\hat {\lambda}$ denotes the standard-framing eigenvalues, which will be used in the explicit computations of the colored HOMFLY polynomials of knots.
Let us conclude this section with the definition of the reduced HOMFLY polynomial. The reduced colored HOMFLY polynomial of a knot $\cal K$ is given by
\begin{eqnarray*}
P_R({\cal K};a,q)=\overline P_R({\cal K};a,q)/\overline P_R(\bigcirc;a,q)~.
\end{eqnarray*}
The unknot factor carrying the rank-$n$ symmetric representation is
\begin{eqnarray*}
\overline P_{[n]}(\bigcirc;a,q)=\frac{q^{n/2}(a;q)_n}{a^{n/2}(q;q)_n}~,
\end{eqnarray*}
where we denote the $q$-Pochhammer symbols by $(z;q)_{k}=\prod_{j=0}^{k-1} (1-zq^j)$.
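As a simple check, for $n=1$ this gives
\begin{eqnarray*}
\overline P_{[1]}(\bigcirc;a,q)=\frac{q^{1/2}(1-a)}{a^{1/2}(1-q)}=\frac{a^{1/2}-a^{-1/2}}{q^{1/2}-q^{-1/2}}~,
\end{eqnarray*}
which at $a=q^N$ reduces to the quantum dimension $[N]=\frac{q^{N/2}-q^{-N/2}}{q^{1/2}-q^{-1/2}}$ of the fundamental representation, as it should.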
\section{Colored HOMFLY polynomials for knots}\label{sec:knots}
In this section, we shall demonstrate computations of the colored HOMFLY polynomials of knots.
The closed-form expressions of the colored HOMFLY polynomials for all symmetric representations are known for the $(2,2p+1)$-torus knots \cite{Fuji:2012pm} and the twist knots \cite{Itoyama:2012fq,Nawata:2012pg,Kawagoe:2012bt}. In addition, we have verified the results in \cite{Itoyama:2012re,Itoyama:2012qt} for the colored HOMFLY polynomials of the knots ${\bf 6_2}$, ${\bf 6_3}$, ${\bf 7_3}$ and ${\bf 7_5}$ up to 4 boxes. Hence, we present the $[3]$-colored HOMFLY polynomials for the remaining seven-crossing knots in \S \ref{sec:seven}. (The $[2]$-colored HOMFLY polynomials are collected in \cite{Zodinmawia:2011ud}.) In each figure, we redraw the left diagram from the Rolfsen table into the right diagram, to which we apply the method of \S\ref{sec:CS}.
In \S \ref{sec:thick}, we shall compute the colored HOMFLY polynomials of thick knots \cite{Dunfield:2005si}.
If all the generators of the HOMFLY homology of a given knot have the same $\delta$-grading, the knot is called homologically thin (see \cite{Dunfield:2005si} for more detail); otherwise, it is called homologically thick. For a thick knot, the colored HOMFLY polynomial is crucial information for obtaining the homological invariant, since it is not clear that the homological invariant obeys the exponential growth property. For the $(3,4)$-torus knot ($\bf 8_{19}$) and the knot $\bf 9_{42}$, the $[2]$-colored superpolynomials are given in \cite{Gukov:2011ry}. In addition, the colored HOMFLY polynomials of the knot $\bf 10_{139}$ are given up to 4 boxes in \cite{Itoyama:2012re}. The evaluation of the colored HOMFLY polynomials for these knots is beyond the scope of the method provided in \cite{Kawagoe:2012bt}. We have verified the results for the knots $\bf 9_{42}$ and $\bf 10_{139}$ using our approach.
Here we present the invariants for the 10-crossing thick knots except the knot $\bf 10_{161}$\footnote{Since the knot $\bf 10_{161}$ can be written as a three-strand knot, its colored HOMFLY polynomials can be obtained by the method of \cite{Itoyama:2012re}.}.
Before going into details, let us fix the notation. In this paper, we use the skein relation
\begin{equation*} a^{1/2}P_{L_+}-a^{-1/2}P_{L_-}=(q^{1/2}-q^{-1/2})P_{L_0}~,
\end{equation*} for the uncolored HOMFLY polynomial $P({\cal K};a,q)=P_{[1]}({\cal K};a,q)$, normalized so that $P(\bigcirc;a,q)=1$.
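As a quick illustration, consider a single crossing whose two resolutions $L_\pm$ are both the unknot, while $L_0$ is the two-component unlink; the skein relation then gives $a^{1/2}-a^{-1/2}=(q^{1/2}-q^{-1/2})\,P(\bigcirc \sqcup \bigcirc;a,q)$, i.e.
\begin{equation*}
P(\bigcirc \sqcup \bigcirc;a,q)=\frac{a^{1/2}-a^{-1/2}}{q^{1/2}-q^{-1/2}}~,
\end{equation*}
which is consistent with the unreduced unknot invariant $\overline P_{[1]}(\bigcirc;a,q)$ given at the end of \S \ref{sec:CS}.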
To show the colored HOMFLY polynomials concisely, we use the following convention.
{\bf Example}
\begin{eqnarray*}
&&f(a,q)\left( \begin{array}{ccccccccccc} 9 & 10 & 11 & 12 \\ 5 & 6 & 7 & 8 \\ 1 & 2 & 3 & 4 \end{array} \right)\\
&=&f(a,q)\Big[(1+2q+3q^2+4q^3)+a(5+6q+7q^2+8q^3)+a^2(9+10q+11q^2+12q^3)\Big]\end{eqnarray*}
In the matrix, the $q$-degree increases along the horizontal axis and the $a$-degree along the vertical axis.
\subsection{Seven-crossing knots}\label{sec:seven}
\subsubsection{$7_4$ knot}
\begin{figure}[h]
\centering{\includegraphics[scale=1]{7_4knot}}
\caption{$\bf{7_{4}}$ knot}
\end{figure}
\begin{eqnarray*}
P_{R}({\bf 7_{4}};\,a,q) &= & \frac{1}{\dim_qR}\sum_{s,t,s^{\prime},u,v}\epsilon_{s}^{R,R}\,\sqrt{\dim_{q}s}\,\epsilon_{v}^{R,R}\,\sqrt{\dim_{q}v}\,\hat{\lambda}_{s}^{(+)}(R,\, R)\, a_{ts}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R} & R\\
R & \overline{R}
\end{array}\end{footnotesize}\right]\\
& &\times (\hat{\lambda}_{t}^{(-)}(R,\,\overline{R}))^{2}\, a_{ts^{\prime}}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
\overline{R} & R
\end{array}\end{footnotesize}\right]\,\hat{\lambda}_{s^{\prime}}^{(+)}(\overline{R},\,\overline{R})\, a_{us^{\prime}}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
\overline{R} & R
\end{array}\end{footnotesize}\right]\\
& &\times (\hat{\lambda}_{u}^{(-)}(R,\,\overline{R}))^{2}\, a_{uv}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R} & R\\
R & \overline{R}
\end{array}\end{footnotesize}\right]\,\hat\lambda_{v}^{(+)}(R,\, R).
\end{eqnarray*}
\begin{eqnarray*}
&&P_{[3]}({\bf 7_4}; a,q) =\tfrac{a^3}{ q^{3}} \times \\
&&\begin{tiny}
\left( \begin{array}{cccccccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 & -1 & -2 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 3 & -2 & -2 & -2 & 4 & 3 & 1 & -1 & -1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 2 & 4 & 3 & -5 & -5 & -3 & 6 & 4 & -1 & -3 & -2 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & -4 & -1 & 2 & 6 & 2 & -9 & -9 & -3 & 8 & 5 & -3 & -4 & -2 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & -1 & -7 & -2 & 5 & 12 & 4 & -11 & -11 & -1 & 11 & 7 & -4 & -4 & -2 & 3 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 3 & 1 & -9 & -6 & 4 & 18 & 8 & -12 & -15 & -1 & 13 & 8 & -4 & -4 & -2 & 4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 2 & 3 & -6 & -9 & -3 & 15 & 14 & -8 & -17 & -6 & 12 & 9 & -4 & -4 & -2 & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 0 & -4 & -6 & 4 & 12 & 2 & -10 & -10 & 6 & 8 & 0 & -4 & -2 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & -2 & -1 & 2 & 3 & 0 & -6 & 0 & 3 & 2 & -1 & -2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{tiny}\end{eqnarray*}
\subsubsection{$7_6$ knot}
\begin{figure}[h]
\centering{\includegraphics[scale=1]{7_6knot}}
\caption{$\bf{7_{6}}$ knot}
\end{figure}
\begin{align*}
P_{R}({\bf 7_{6}};\,a,q)= & \frac{1}{\dim_qR}\sum_{s,t,s^{\prime},u,v}\epsilon_{s}^{R,\overline{R}}\,\sqrt{\dim_{q}s}\,\epsilon_{v}^{\overline{R},R}\,\sqrt{\dim_{q}v}\,(\hat{\lambda}_{s}^{(-)}(R,\,\overline{R}))^{-2}\, a_{ts}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R} & R\\
\overline{R} & R
\end{array}\end{footnotesize}\right]&\cr
&\times (\hat{\lambda}_{t}^{(-)}(\overline{R},\, R))^{2}\, a_{ts^{\prime}}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R} & R\\
\overline{R} & R
\end{array}\end{footnotesize}\right]\,(\hat{\lambda}_{s^{\prime}}^{(-)}(R,\,\overline{R}))^{-1}\, a_{us^{\prime}}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R} & \overline{R}\\
R & R
\end{array}\end{footnotesize}\right]&\cr
&\times (\hat{\lambda}_{u}^{(+)}(\overline{R},\,\overline{R}))^{-1}\, a_{uv}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R} & \overline{R}\\
R & R
\end{array}\end{footnotesize}\right]\,(\hat{\lambda}_{v}^{(-)}(\overline{R},\, R))^{-1}.&
\end{align*}
\begin{eqnarray*}
&&P_{[3]}({\bf 7_6}; a,q) =\tfrac{1}{a^9 q^{17}} \times \\
&&\begin{tiny}
\left( \begin{array}{cccccccccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 & 0 & 1 & 1 & -1 & -2 & 1 & 2 & 0 & -1 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 & -3 & 1 & 3 & 0 & -5 & -3 & 4 & 5 & -2 & -4 & -2 & 3 & 1 & 0 & -1 \\ 0 & 0 & 0 & 0 & 0 & 1 & -1 & -1 & 3 & 2 & -2 & -5 & 6 & 10 & -3 & -12 & -3 & 14 & 10 & -7 & -10 & -1 & 8 & 3 & -2 & -2 & 0 & 1 \\ 0 & 0 & -1 & 2 & -2 & -3 & 2 & 4 & -5 & -13 & 5 & 19 & -3 & -26 & -14 & 24 & 21 & -11 & -23 & -4 & 14 & 6 & -5 & -4 & 0 & 2 & -1 & 0 \\ 0 & 2 & -1 & -2 & 4 & 7 & -1 & -14 & 4 & 26 & 9 & -25 & -24 & 21 & 34 & 0 & -24 & -12 & 13 & 11 & -1 & -4 & -1 & 2 & 0 & 0 & 0 & 0 \\ -1 & -3 & 2 & 2 & -2 & -15 & -2 & 16 & 10 & -20 & -27 & 5 & 26 & 6 & -16 & -16 & 4 & 6 & 2 & -3 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 2 & 2 & -6 & 1 & 12 & 10 & -6 & -15 & 3 & 16 & 8 & -4 & -8 & 2 & 2 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & -3 & -2 & 3 & 1 & -3 & -9 & -1 & 3 & 3 & -2 & -3 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 2 & 1 & 1 & -2 & 1 & 1 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{tiny}\end{eqnarray*}
\subsubsection{$7_7$ knot}
\begin{figure}[h]
\centering{\includegraphics[scale=1]{7_7knot}}
\caption{$\bf{7_{7}}$ knot}
\end{figure}
\begin{flalign*}
P_{R}({\bf 7_{7}};\,a,q) = & \frac{1}{\dim_qR}\sum_{s,t,s^{\prime},u,v,w,x}\epsilon_{s}^{R,R}\,\sqrt{\dim_{q}s}\,\epsilon_{x}^{R,R}\,\sqrt{\dim_{q}x}\,(\hat{\lambda}_{s}^{(+)}(R,\, R))\, a_{ts}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R} & R\\
R & \overline{R}
\end{array}\end{footnotesize}\right]&\\
&\times (\hat{\lambda}_{t}^{(-)}(\overline{R},\, R))\, a_{ts^{\prime}}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]\,(\hat{\lambda}_{s^{\prime}}^{(-)}(\overline{R},\, R))^{-1}\, a_{us^{\prime}}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]&\\
&\times (\hat{\lambda}_{\overline{u}}^{(+)}(\overline{R},\,\overline{R}))^{-1}\, a_{uv}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]\,(\hat{\lambda}_{v}^{(-)}(R,\,\overline{R}))^{-1}a_{wv}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]&\\
&\times \hat{\lambda}_{w}^{(-)}(R,\overline{R})\, a_{wx}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R} & R\\
R & \overline{R}
\end{array}\end{footnotesize}\right]\,\hat{\lambda}_{x}^{(+)}(R,R).
\end{flalign*}
\begin{eqnarray*}
&&P_{[3]}({\bf 7_7}; a,q) =\tfrac{1}{a^3 q^{13}} \times \\
&&\begin{tiny}
\left( \begin{array}{cccccccccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2 & 0 & 0 & 2 & -2 & -2 & -2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 0 & -4 & 2 & 6 & 8 & -3 & -4 & 0 & 5 & 4 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2 & 0 & 0 & 4 & -6 & -12 & -2 & 11 & 9 & -13 & -18 & -3 & 7 & 5 & -6 & -4 & -2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & -4 & 0 & 6 & 13 & -8 & -19 & 2 & 30 & 22 & -16 & -25 & 4 & 20 & 13 & -7 & -4 & 1 & 4 & 1 \\ 0 & 0 & 0 & 0 & -2 & 2 & 2 & -6 & -7 & 7 & 20 & -9 & -38 & -10 & 36 & 35 & -25 & -42 & -4 & 25 & 13 & -13 & -10 & 1 & 3 & 0 & -2 & 0 \\ 0 & 1 & -2 & 2 & 3 & -3 & -9 & 4 & 24 & 4 & -37 & -23 & 29 & 51 & -11 & -43 & -14 & 26 & 19 & -9 & -10 & 2 & 4 & 1 & -2 & 1 & 0 & 0 \\ -1 & 1 & 2 & -2 & -6 & 0 & 16 & 7 & -22 & -22 & 11 & 38 & 3 & -27 & -17 & 12 & 16 & -3 & -7 & -1 & 2 & 1 & -1 & 0 & 0 & 0 & 0 & 0 \\ 1 & -1 & -2 & -1 & 7 & 4 & -9 & -11 & 3 & 18 & 3 & -11 & -9 & 4 & 7 & -1 & -2 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 2 & 1 & -2 & -3 & 0 & 6 & 0 & -3 & -2 & 1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{tiny}\end{eqnarray*}
\subsection{Thick knots}\label{sec:thick}
\subsubsection{$10_{124}$ knot}
Note that the knot $\bf 10_{124}$ is the $(3,5)$-torus knot.
\begin{figure}[h]
\centering{\includegraphics[scale=1]{10--124}}
\caption{$\bf{10_{124}}$ knot}
\end{figure}
\begin{eqnarray*}
P_R({\bf 10_{124}};a,q) &= & \tfrac{1}{\dim_qR}\sum_{l,r,x,y,z}\frac{1}{\epsilon_{l}^{R,\overline{R}}\sqrt{\dim_{q}l}}~\epsilon_{r}^{R,R}\sqrt{\dim_{q}r}~\epsilon_{x}^{R,R}\sqrt{\dim_{q}x} \\
& & \times \epsilon_{z}^{R,R}\sqrt{\dim_{q}z}~a_{rl}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]a_{xl}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]a_{yl}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]a_{zy}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]\\
& &\times (\hat{\lambda}_{r}^{(+)}(R,R))^{-2}~(\hat{\lambda}_{x}^{(+)}(R,R))^{-5}~(\hat{\lambda}_{y}^{(-)}(R,\overline{R}))^{-1}~(\hat{\lambda}_{z}^{(+)}(R,R))^{-2}\end{eqnarray*}
\begin{align*}
&P_{[2]}({\bf 10_{124}}; a,q)= \tfrac{1}{a^{12}q^{16}} \times\\
&\begin{footnotesize}
\left( \begin{array}{ccccccccccccccccccccccccc} 1 & 0 & 1 & 2 & 2 & 2 & 4 & 2 & 4 & 4 & 3 & 3 & 4 & 2 & 3 & 3 & 2 & 1 & 2 & 1 & 1 & 1 & 0 & 0 & 1 \\ -1 & -2 & -2 & -4 & -6 & -6 & -8 & -9 & -8 & -9 & -9 & -8 & -8 & -7 & -5 & -5 & -5 & -3 & -2 & -2 & -1 & -1 & -1 & 0 & 0 \\ 0 & 2 & 2 & 4 & 5 & 7 & 7 & 9 & 8 & 9 & 7 & 7 & 6 & 6 & 4 & 3 & 2 & 2 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & -2 & -2 & -3 & -4 & -3 & -3 & -4 & -3 & -2 & -2 & -1 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{footnotesize}
\end{align*}
\subsubsection{$10_{128}$ knot}
\begin{figure}[h]
\centering{\includegraphics[scale=1]{10--128}}
\caption{$\bf{10_{128}}$ knot}
\end{figure}
\begin{flalign*}
P_R({\bf 10_{128}};a,q) = & \sum_{l,r,x,y,x^{\prime},y^{\prime}}\frac{1}{\epsilon_{l}^{R,\overline{R}}\sqrt{\dim_{q}l}}~\epsilon_{r}^{R,\overline{R}}\sqrt{\dim_{q}r}~\epsilon_{x}^{R,\overline{R}}\sqrt{\dim_{q}x}&\\
&\times \epsilon_{x^{\prime}}^{R,\overline{R}}\sqrt{\dim_{q}x^{\prime}}~a_{rl}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]a_{yl}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]a_{y^{\prime}l}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]a_{yx}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]&\\
& \times a_{y^{\prime}x^{\prime}}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~(\hat{\lambda}_{r}^{(-)}(R,\overline{R}))^{-2}~(\hat{\lambda}_{y}^{(+)}(R,R))^{-2}~(\hat{\lambda}_{x}^{(-)}(R,\overline{R}))^{-1}~& \\
&\times (\hat{\lambda}_{y^{\prime}}^{(+)}(R,R))^{-2}~ (\hat{\lambda}_{x^{\prime}}^{(-)}(R,\overline{R}))^{-3}\end{flalign*}
\begin{eqnarray*}
&&P_{[2]}({\bf 10_{128}}; a,q)=\\
&& \tfrac{1}{a^{12}q^{14}} \times\left( \begin{array}{ccccccccccccccccccccc} 0 & 0 & 1 & -1 & 0 & 2 & 0 & -1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & -1 & 0 & 2 & -1 & -1 & 1 \\ 0 & 1 & -1 & -1 & 2 & 1 & -2 & -1 & 3 & 1 & -1 & 1 & 1 & 1 & 0 & 0 & 3 & 0 & -2 & 1 & 1 \\ 1 & -1 & -1 & 2 & 1 & -2 & -2 & 1 & 1 & -3 & -2 & -1 & -1 & -2 & -1 & 1 & -1 & -2 & 0 & 0 & 0 \\ -1 & -2 & 0 & 0 & -3 & -4 & 1 & 1 & -3 & 0 & 0 & -1 & 0 & 0 & 1 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 2 & 2 & 2 & 0 & 3 & 4 & 1 & 1 & 2 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & -2 & -1 & 0 & -1 & -1 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{eqnarray*}
\subsubsection{$10_{132}$ knot}
\begin{figure}[h]
\centering{\includegraphics[scale=1]{10--132}}
\caption{$\bf{10_{132}}$ knot}
\end{figure}
\begin{flalign*}
P_R({\bf 10_{132}};a,q) = & \sum_{l,r,x,y,x^{\prime},y^{\prime}}\frac{1}{\epsilon_{l}^{R,\overline{R}}\sqrt{\dim_{q}l}}~\epsilon_{r}^{R,R}\sqrt{\dim_{q}r}~\epsilon_{x}^{R,R}\sqrt{\dim_{q}x} & \\
& \times \epsilon_{x^{\prime}}^{R,\overline{R}}\sqrt{\dim_{q}x^{\prime}}~a_{rl}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]a_{yl}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]a_{y^{\prime}l}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]a_{xy}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]&\\
& \times a_{y^{\prime}x^{\prime}}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~(\hat{\lambda}_{r}^{(+)}(R,R))^{2}~(\hat{\lambda}_{y}^{(-)}(R,\overline{R}))^{3}~(\hat{\lambda}_{x}^{(+)}(R,R))^{2} & \\
& \times (\hat{\lambda}_{y^{\prime}}^{(+)}(R,R))^{-2}~(\hat{\lambda}_{x^{\prime}}^{(-)}(R,\overline{R}))^{-1}\end{flalign*}
\begin{eqnarray*}
&&P_{[2]}({\bf 10_{132}}; a,q)=\tfrac{a}{q^{4}}
\left( \begin{array}{ccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & -1 & -1 & 0 & -1 & -2 & -1 & -2 & -2 & 0 & -1 & -1 \\ 0 & 0 & 1 & -1 & -1 & 2 & 1 & -1 & 2 & 1 & 1 & 2 & 1 & 0 & 1 \\ 0 & 1 & 0 & -2 & 2 & 2 & -2 & 0 & 1 & -1 & 0 & 0 & -1 & 0 & 0 \\ 1 & 0 & -2 & 1 & 2 & -2 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 2 & -1 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{eqnarray*}
\subsubsection{$10_{136}$ knot}
\begin{figure}[h]
\centering{\includegraphics[scale=1]{10--136}}
\caption{$\bf{10_{136}}$ knot}
\end{figure}
\begin{flalign*}
P_R({\bf 10_{136}};a,q) = & \sum_{l,l_{1},r,x,y,z}\frac{1}{\epsilon_{l}^{R,\overline{R}}\sqrt{\dim_{q}l}}~\epsilon_{x}^{R,R}\sqrt{\dim_{q}x}~\epsilon_{r}^{R,\overline{R}}\sqrt{\dim_{q}r} &\\
&\times \epsilon_{z}^{R,\overline{R}}\sqrt{\dim_{q}z}~a_{xl}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]a_{rl_{1}}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]a_{ll_{1}}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]a_{yl}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right] &\\
& \times a_{yz}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~(\hat{\lambda}_{x}^{(+)}(R,R))^{-2}~(\hat{\lambda}_{r}^{(-)}(R,\overline{R}))^{-2}~(\hat{\lambda}_{l_{1}}^{(-)}(R,\overline{R}))^{2} &\\
&\times (\hat{\lambda}_{y}^{(+)}(R,R))^{2} ~(\hat{\lambda}_{z}^{(-)}(R,\overline{R}))^{-3}\end{flalign*}
\begin{eqnarray*}
&&P_{[2]}({\bf 10_{136}}; a,q)=\tfrac{1}{a^4q^6}\left( \begin{array}{cccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 & 0 & 2 & -1 & -1 & 1 & 0 \\ 0 & 0 & 0 & -1 & 0 & 1 & -2 & -1 & 2 & -2 & -2 & 2 & 0 & -1 \\ 0 & 1 & 0 & 0 & 2 & 2 & 0 & 2 & 0 & 0 & 3 & 0 & -1 & 1 \\ -1 & 0 & -2 & -3 & 1 & -1 & -4 & -2 & 0 & 0 & -1 & -1 & 0 & 0 \\ 2 & 0 & 0 & 4 & 4 & -1 & 0 & 2 & 2 & 0 & 0 & 0 & 0 & 0 \\ -1 & -2 & 1 & 0 & -3 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{eqnarray*}
\subsubsection{$10_{145}$ knot}
\begin{figure}[h]
\centering{\includegraphics[scale=1]{10--145}}
\caption{$\bf{10_{145}}$ knot}
\end{figure}
\begin{flalign*}
P_{R}({\bf 10_{145}};a,q)= & \sum_{l,r,u,v,x,y,z}\frac{1}{\epsilon_{l}^{R,R}\sqrt{\dim_{q}l}}~\epsilon_{z}^{R,R}\sqrt{\dim_{q}z}~\epsilon_{u}^{R,\overline{R}}\sqrt{\dim_{q}u}&\\
& \times\,\epsilon_{r}^{R,R}\sqrt{\dim_{q}r}~a_{lx}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~a_{yx}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]~a_{zy}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~a_{lu}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]&\\
& \times a_{lv}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~a_{rv}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~(\hat{\lambda}_{x}^{(-)}(R,\overline{R}))^{-1}~\hat{\lambda}_{y}^{(-)}(R,\overline{R})~ \hat{\lambda}_{z}^{(+)}(R,R)&\\
& \times(\hat{\lambda}_{u}^{(-)}(R,\overline{R}))^{3}~(\hat{\lambda}_{v}^{(-)}(R,\overline{R}))^2~(\hat{\lambda}_{r}^{(+)}(R,R))^{2}\end{flalign*}
\begin{eqnarray*}
&&P_{[2]}({\bf 10_{145}}; a,q)=\tfrac{a^4}{q^4}\left( \begin{array}{ccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 1 & -1 & -1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 2 & 0 & -1 & 1 \\ 0 & 0 & 0 & 0 & 0 & -1 & -2 & 1 & 0 & -3 & 1 & 0 & -2 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & -2 & 2 & 2 & -2 & 2 & 2 & -1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & -3 & 1 & 2 & -3 & 0 & 1 & -2 & -1 & 0 & 0 & 0 \\ 1 & 0 & 0 & -1 & 0 & 2 & -1 & 0 & 2 & 0 & 0 & 0 & 1 & 0 & 0 \\ \end{array} \right)
\end{eqnarray*}
\subsubsection{$10_{152}$ knot}
\begin{figure}[h]
\centering{\includegraphics[scale=1]{10--152}}
\caption{$\bf{10_{152}}$ knot}
\end{figure}
\begin{flalign*}
P_{R}({\bf 10_{152}};a,q)= & \sum_{l,l_1,r,u,v,x,y,z}\frac{1}{\epsilon_{l}^{R,\overline{R}}\sqrt{\dim_{q}l}~\epsilon_{l_1}^{R,\overline{R}}\sqrt{\dim_{q}l_1}}~\epsilon_{z}^{R,\overline{R}}\sqrt{\dim_{q}z}&\\
& \times \epsilon_{u}^{R,\overline{R}}\sqrt{\dim_{q}u}~\epsilon_{r}^{R,R}\sqrt{\dim_{q}r}~\epsilon_{v}^{R,R}\sqrt{\dim_{q}v}&\\
&\times a_{rl_1}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~a_{ll_1}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]~a_{yl_1}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~a_{yz}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~a_{lu}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]& \\
& \times a_{vl}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~(\hat{\lambda}_{r}^{(+)}(R,R))^{3}~\hat{\lambda}_{y}^{(+)}(R,R)~\hat{\lambda}_{z}^{(-)}(R,\overline{R})&\\
& \times(\hat{\lambda}_{u}^{(-)}(R,\overline{R}))^{2}~\hat{\lambda}_{l}^{(-)}(R,\overline{R})~(\hat{\lambda}_{v}^{(+)}(R,R))^{3}\end{flalign*}
\begin{eqnarray*}
&&P_{[2]}({\bf 10_{152}}; a,q)=\tfrac{1}{a^{12}q^{16}}\times\\
&&\begin{footnotesize}\left( \begin{array}{ccccccccccccccccccccccccc} 1 & 0 & 1 & 3 & 2 & 2 & 6 & 2 & 6 & 5 & 2 & 7 & 6 & 0 & 5 & 6 & 2 & 0 & 3 & 2 & 1 & 1 & 0 & 0 & 1 \\ -1 & -3 & -2 & -5 & -9 & -8 & -11 & -11 & -13 & -16 & -9 & -11 & -17 & -10 & -4 & -8 & -10 & -4 & -1 & -3 & -2 & -1 & -1 & 0 & 0 \\ 0 & 3 & 3 & 6 & 7 & 10 & 14 & 12 & 9 & 20 & 13 & 6 & 12 & 13 & 6 & 4 & 3 & 4 & 2 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -2 & -4 & -3 & -3 & -9 & -8 & -1 & -7 & -10 & -3 & -2 & -3 & -3 & -2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 3 & -1 & 0 & 5 & 0 & -2 & 3 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{footnotesize}\end{eqnarray*}
\subsubsection{$10_{153}$ knot}
\begin{figure}[h]
\centering{\includegraphics[scale=1]{10--153}}
\caption{$\bf{10_{153}}$ knot}
\end{figure}
\begin{flalign*}
P_{R}({\bf 10_{153}};a,q) = & \sum_{l,l_1,v,u,x,y,z,r} \frac{1}{\epsilon_{l}^{R,\overline{R}}\sqrt{\dim_{q}l}~\epsilon_{l_1}^{R,\overline{R}}\sqrt{\dim_{q}l_1}}~\epsilon_{z}^{R,\overline{R}}\sqrt{\dim_{q}z}\\
& \times \epsilon_{v}^{R,\overline{R}}\sqrt{\dim_{q}v}~\epsilon_{u}^{R,R}\sqrt{\dim_{q}u}~\epsilon_{r}^{R,R}\sqrt{\dim_{q}r}&\\
& \times a_{lx}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]~a_{yx}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~a_{yz}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~a_{ll_1}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]~a_{lv}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]&\\
& \times a_{rl_1}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~a_{ul_1}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~(\hat{\lambda}_{x}^{(-)}(R,\overline{R}))^{-1}~(\hat{\lambda}_{y}^{(+)}(R,R))^{-1}~(\hat{\lambda}_{z}^{(-)}(R,\overline{R}))^{-1}&\\
&\times (\hat{\lambda}_{r}^{(+)}(R,R))^{2}~(\hat{\lambda}_{v}^{(-)}(R,\overline{R}))^{-2}~(\hat{\lambda}_{u}^{(+)}(R,R))^{3}\end{flalign*}
\begin{eqnarray*}
&&P_{[2]}({\bf 10_{153}}; a,q)=\tfrac{1}{a^4q^9}\times\\
&&\begin{small}
\left( \begin{array}{cccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 & -1 & 0 & -2 & -5 & -2 & -2 & -6 & -3 & -2 & -4 & -2 & -2 & -2 & 0 & -1 & -1 \\ 0 & 1 & -1 & 0 & 3 & 2 & -1 & 5 & 5 & 3 & 5 & 6 & 2 & 3 & 3 & 2 & 2 & 1 & 0 & 1 \\ 1 & 0 & -1 & 3 & 2 & -3 & 1 & 2 & -4 & 0 & 1 & -4 & -1 & 0 & -2 & 0 & 0 & -1 & 0 & 0 \\ -1 & -1 & 0 & -1 & -2 & -2 & 0 & -2 & -1 & 1 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & -1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & -1 & -1 & 2 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{small}
\end{eqnarray*}
\subsubsection{$10_{154}$ knot}
\begin{figure}[h]
\centering{\includegraphics[scale=1]{10--154}}
\caption{$\bf{10_{154}}$ knot}
\end{figure}
\begin{flalign*}
P_{R}({\bf 10_{154}};a,q)= & \sum_{l,l_1,r,s,u,v,w,x,y,z}\frac{1}{\epsilon_{l}^{R,\overline{R}}\sqrt{\dim_{q}l}~\epsilon_{l_1}^{R,\overline{R}}\sqrt{\dim_{q}l_1}}~\epsilon_{z}^{R,\overline{R}}\sqrt{\dim_{q}z}\\
& \times \epsilon_{s}^{R,\overline{R}}\sqrt{\dim_{q}s}~\epsilon_{w}^{R,\overline{R}}\sqrt{\dim_{q}w}~\epsilon_{r}^{R,\overline{R}}\sqrt{\dim_{q}r}&\\
& \times a_{lx}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]~ a_{yx}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~a_{yz}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~a_{ll_1}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]~a_{ls}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]& \\
&\times a_{rl_1}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]~a_{l_1u}\!\left[\begin{footnotesize}\begin{array}{cc}
R & \overline{R}\\
R & \overline{R}
\end{array}\end{footnotesize}\right]~a_{vu}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]
a_{vw}\!\left[\begin{footnotesize}\begin{array}{cc}
R & R\\
\overline{R} & \overline{R}
\end{array}\end{footnotesize}\right]~(\hat{\lambda}_{x}^{(-)}(R,\overline{R}))^{-1}\\
& \times (\hat{\lambda}_{y}^{(+)}(R,R))^{-1}~(\hat{\lambda}_{z}^{(-)}(R,\overline{R}))^{-1}(\hat{\lambda}_{r}^{(-)}(R,\overline{R}))^{-2}~(\hat{\lambda}_{s}^{(-)}(R,\overline{R}))^{-2}&\\
& \times (\hat{\lambda}_{u}^{(-)}(R,\overline{R}))^{-1}~(\hat{\lambda}_{v}^{(+)}(R,R))^{-1}~(\hat{\lambda}_{w}^{(-)}(R,\overline{R}))^{-1}\end{flalign*}
\begin{eqnarray*}
&&P_{[2]}({\bf 10_{154}}; a,q)=\tfrac{a^6}{q^6}\begin{small}
\left( \begin{array}{ccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2 & 0 & 2 & -2 & -2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -4 & 2 & 4 & -3 & -2 & 3 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 3 & 1 & 0 & 1 & 4 & 5 & -1 & -2 & 3 & 1 & -1 \\ 0 & 0 & 0 & 0 & -1 & -2 & 0 & 3 & -2 & -6 & -1 & 2 & -1 & -2 & -2 & 0 & 1 & 0 & -1 \\ 0 & 0 & 0 & 0 & -3 & -1 & 5 & -2 & -7 & 1 & 2 & -3 & -1 & -2 & -3 & 1 & -1 & -2 & 0 \\ 1 & 0 & 0 & 0 & 0 & 3 & 2 & -3 & 2 & 4 & -1 & 1 & 3 & -1 & 2 & 2 & 0 & 0 & 1 \\ \end{array} \right)
\end{small}\end{eqnarray*}
\section{Colored HOMFLY invariants for links}\label{sec:links}
In this section, we compute the colored HOMFLY invariants of links. First of all, we should emphasize that these invariants are no longer polynomials, but \emph{rational functions} in the variables $(a,q)$. In addition, the colored HOMFLY invariants of links depend crucially on the orientation of each component of a link. For each link in this section, we choose the orientation presented in the Knot Atlas \cite{KnotAtlas}. In \cite{Gukov:2013}, the cyclotomic expansions of the colored HOMFLY invariants are given for the twist links, including the Whitehead link $\bf 5_1^2$ and the link $\bf 7_3^2$, as well as for the Borromean rings $\bf 6_3^3$. Therefore, we treat two-component links with six and seven crossings\footnote{We would like to thank Andrey Morozov for pointing out our mistake in the previous version: the colored HOMFLY invariants of the links ${\bf 7_4^2}$ and ${\bf 7_5^2}$ are not symmetric under the exchange of the two colors.}
in \S\ref{sec:two}. We have not succeeded in computing the invariants of the link $\bf 7_6^2$ by this method\footnote{Since the link $\bf 7_6^2$ can be written as a three-strand link, the colored HOMFLY invariants can be obtained by the method in \cite{Itoyama:2012re}.}. In \S\ref{sec:three}, we consider three-component links including the links $\bf 6_1^3$ and $\bf 6_3^3$.
In \S\ref{sec:two}, every unreduced colored HOMFLY invariant ${\overline P}_{([n_1],[ n_2])}({\cal L};a,q)$ contains the unknot factor ${\overline P}_{[n_{\max}] }(\bigcirc ;a,q)$ colored by the highest rank $n_{\max}=\max(n_1,n_2)$. Furthermore, one can observe that it includes the factor $(a;q)_{n_{\max}}/(q;q)_{n_1}(q;q)_{n_2}$. If we normalize by
\begin{eqnarray}\label{Laurent2}
\frac{(q;q)_{n_1}(q;q)_{n_2}}{(a;q)_{n_{\max}}}{\overline P}_{([n_1],[n_2])}({\cal L};a,q)~,
\end{eqnarray}
then it becomes a Laurent polynomial in the variables $(a,q)$. Interestingly, these Laurent polynomials satisfy the exponential growth property (the property satisfied by special polynomials) \cite{DuninBarkowski:2011yx,Zhu:2012tm,Fuji:2012pi}
\begin{eqnarray*}
&&{\rm lim}_{q\to1}\frac{(q;q)_{kn_1}(q;q)_{kn_2}}{(a;q)_{kn_{\max}}}{\overline P}_{([k n_1],[k n_2])}({\cal L};a,q)=\left[{\rm lim}_{q\to1}\frac{(q;q)_{n_1}(q;q)_{n_2}}{(a;q)_{n_{\max}}}{\overline P}_{([n_1],[n_2])}({\cal L};a,q)\right]^k~,\end{eqnarray*}
where ${\rm gcd}(n_1,n_2)=1$ and $k\in \ensuremath{\mathbb{Z}}_{\ge0}$. In fact, the forms of the Laurent polynomials \eqref{Laurent2} strongly suggest an interpretation at the homological level.
For instance, it is easy to see that the difference between the $([1],[3])$-color invariant
and the $([1],[4])$-color invariant in the matrix-form expressions below is just a shift in $q$-degree. At higher ranks, though cancellations between coefficients make this shift less apparent, it is not difficult to see that only a shift in $q$-degree is involved when one increases the rank of the larger color. The homological interpretations of link invariants will be given in a separate paper \cite{Gukov:2013}.
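As a concrete consistency check of the exponential growth property, the matrix data listed below can be fed into a computer algebra system. The following \texttt{sympy} sketch is a minimal check, not a general implementation; it assumes the reading convention that the entry in row $i$, column $j$ of a coefficient matrix contributes $a^{i}q^{j}$, with the whole matrix multiplied by the displayed prefactor. Under this convention, the $([1],[1])$- and $([2],[2])$-colored invariants of the link $\bf 6_2^2$ below realize the property with $n_1=n_2=1$ and $k=2$: both sides reduce to $(1-a)^2/a^6$ as $q\to1$.
\begin{verbatim}
# Minimal sketch: exponential growth property for the link 6_2^2,
# with colors ([1],[1]) and ([2],[2]), i.e. k = 2.  Assumed convention:
# entry (i, j) of a coefficient matrix contributes a**i * q**j, and the
# whole matrix is multiplied by the displayed prefactor.
from sympy import symbols, expand, simplify

a, q = symbols('a q')

def poly(rows, prefactor):
    """sum_{i,j} rows[i][j] * a**i * q**j, times the prefactor."""
    return expand(prefactor * sum(c * a**i * q**j
                                  for i, row in enumerate(rows)
                                  for j, c in enumerate(row)))

# (q;q)_1^2/(a;q)_1 * Pbar_([1],[1]) reduces to M11/(a^4 q):
# the (1-a) and (1-q)^2 factors of the prefactor cancel.
M11 = [[-1, 2, -2, 2, -1],
       [-1, 2, -3, 2, -1],
       [ 0, 1, -1, 1,  0]]
P11 = poly(M11, 1/(a**4*q))

# (q;q)_2^2/(a;q)_2 * Pbar_([2],[2]) reduces to M22/(a^8 q^7).
M22 = [[ 0, 0, 1,-2, 0, 3,-4, 1, 5,-5,-2, 5,-1,-2, 1],
       [ 0, 1,-2, 0, 5,-6,-2,10,-5,-7, 7, 2,-4, 0, 1],
       [ 1,-2, 0, 5,-5,-3,10,-3,-7, 5, 2,-2, 0, 0, 0],
       [-1, 0, 2,-3,-2, 5,-1,-4, 2, 1,-1, 0, 0, 0, 0],
       [ 0, 1,-1, 0, 2,-1,-1, 1, 0, 0, 0, 0, 0, 0, 0]]
P22 = poly(M22, 1/(a**8*q**7))

# Exponential growth at q -> 1: both sides equal (1-a)^2/a^6.
assert simplify(P22.subs(q, 1) - (P11.subs(q, 1))**2) == 0
\end{verbatim}
Note that after the normalization \eqref{Laurent2} the rational prefactors cancel entirely, leaving the pure monomials $1/(a^4 q)$ and $1/(a^8 q^7)$; this is precisely the statement that \eqref{Laurent2} is a Laurent polynomial.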
Similarly, for a three-component link, a colored HOMFLY invariant ${\overline P}_{([n_1],[n_2],[n_3])}({\cal L};a,q)$ contains the unknot factor ${\overline P}_{[n_{\max}]}(\bigcirc ;a,q)$ colored by the highest rank $n_{\max}=\max(n_1,n_2,n_3)$. In addition, it includes the factor $(a;q)_{n_{\max}}/(q;q)_{n_1}(q;q)_{n_2}(q;q)_{n_3}$. If we normalize by
\begin{eqnarray*}\label{Laurent}
\frac{(q;q)_{n_1}(q;q)_{n_2}(q;q)_{n_3}}{(a;q)_{n_{\max}}}{\overline P}_{([n_1],[n_2],[n_3])}({\cal L};a,q)~,
\end{eqnarray*}
then it becomes a Laurent polynomial, which obeys
the exponential growth property
\begin{eqnarray*}
&&{\rm lim}_{q\to1}\frac{(q;q)_{kn_1}(q;q)_{kn_2}(q;q)_{kn_3}}{(a;q)_{kn_{\max}}}{\overline P}_{([k n_1],[k n_2],[k n_3])}({\cal L};a,q)\\
&=&\left[{\rm lim}_{q\to1}\frac{(q;q)_{n_1}(q;q)_{n_2}(q;q)_{n_3}}{(a;q)_{n_{\max}}}{\overline P}_{([n_1],[n_2],[n_3])}({\cal L};a,q)\right]^k~,
\end{eqnarray*}
where ${\rm gcd}(n_1,n_2,n_3)=1$ and $k\in \ensuremath{\mathbb{Z}}_{\ge0}$.
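As an illustration of this normalization, take $(n_1,n_2,n_3)=(1,1,2)$, so that $n_{\max}=2$; unfolding the $q$-Pochhammer symbols $(x;q)_n=\prod_{j=0}^{n-1}(1-xq^{j})$, the normalizing factor reads
\begin{eqnarray*}
\frac{(q;q)_{1}(q;q)_{1}(q;q)_{2}}{(a;q)_{2}}=\frac{(1-q)^{3}(1-q^{2})}{(1-a)(1-aq)}~,
\end{eqnarray*}
so that, just as in the two-component case, the denominator $(a;q)_{n_{\max}}$ cancels against the factor $(a;q)_{n_{\max}}$ contained in the unreduced invariant.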
\subsection{Two-component links}\label{sec:two}
\subsubsection{$6_2^2$ link}
\begin{figure}[h]
\centering{\includegraphics[scale=1]{6_2link}}
\caption{$\bf{6_2^2}$ link}
\end{figure}
\begin{flalign*}
\overline{P}_{(R_{1},R_{2})}({\bf 6_{2}^2};\,a,q) = & q^{\frac{3{\ell}^{(1)}{\ell}^{(2)}}{N}}\sum_{s,t,s^{\prime}}\epsilon_{s}^{R_{1},R_{2}}\,\sqrt{\dim_{q}s}\,\epsilon_{s^{\prime}}^{\overline{R}_{1},\overline{R}_{2}}\,\sqrt{\dim_{q}s^{\prime}}\,(\lambda_{s}^{(+)}(R_{1},\, R_{2}))^{-3}\,&\\
&\times a_{t\overline{s}}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{2} & \overline{R}_{1}\\
\overline{R}_{2} & R_{1}
\end{array}\end{footnotesize}\right]~(\lambda_{\overline{t}}^{(-)}(\overline{R}_{1},\, R_{2}))^{-2}\, a_{t\overline{s^{\prime}}}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R}_{1} & R_{2}\\
R_{1} & \overline{R}_{2}
\end{array}\end{footnotesize}\right]\,(\lambda_{s^{\prime}}^{(+)}(\overline{R}_{1},\,\overline{R}_{2}))^{-1}.\end{flalign*}
The colored HOMFLY invariants of the link $\bf 6_2^2$ are symmetric under the interchange of the two colors.
\begin{itemize}
\item{$\overline P_{([1],[1])}({\bf 6_2^2}; a,q) =$}
\begin{eqnarray*}
\tfrac{(1-a) }{a^{4}q(1-q)^2 }\left( \begin{array}{ccccc} -1 & 2 & -2 & 2 & -1 \\ -1 & 2 & -3 & 2 & -1 \\ 0 & 1 & -1 & 1 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([1],[2])}({\bf 6_2^2}; a,q) =\overline P_{([2],[1])}({\bf 6_2^2}; a,q) =$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q) }{a^{9/2}q^{7/2}(1-q)^2 (1-q^2) } \left( \begin{array}{cccccccc} 0 & -1 & 1 & 1 & -2 & 1 & 1 & -1 \\ -1 & 0 & 2 & -2 & -1 & 2 & -1 & 0 \\ 0 & 1 & 0 & -1 & 1 & 0 & 0 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([1],[3])}({\bf 6_2^2}; a,q) =\overline P_{([3],[1])}({\bf 6_2^2}; a,q) =$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q) (1-a q^2)}{a^5q^6(1-q)^2 (1-q^2) (1-q^3)}\left( \begin{array}{ccccccccccc} 0 & 0 & -1 & 1 & 0 & 1 & -2 & 1 & 0 & 1 & -1 \\ -1 & 0 & 0 & 2 & -2 & 0 & -1 & 2 & -1 & 0 & 0 \\ 0 & 1 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([1],[4])}({\bf 6_2^2}; a,q) =\overline P_{([4],[1])}({\bf 6_2^2}; a,q) =$}
\begin{eqnarray*}
\tfrac{ (1-a) (1-a q) (1-a q^2) (1-a q^3)}{a^{{11}/{2}} q^{17/2}(1-q)^2 (1-q^2) \left(1-q^3\right)(1-q^4) }\left( \begin{array}{cccccccccccccc} 0 & 0 & 0 & -1 & 1 & 0 & 0 & 1 & -2 & 1 & 0 & 0 & 1 & -1 \\ -1 & 0 & 0 & 0 & 2 & -2 & 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([2],[2])}({\bf 6_2^2}; a,q) =$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q)}{a^{8}q^{7}(1-q)^2 (1-q^2)^2 }\begin{small}\left( \begin{array}{ccccccccccccccc} 0 & 0 & 1 & -2 & 0 & 3 & -4 & 1 & 5 & -5 & -2 & 5 & -1 & -2 & 1 \\ 0 & 1 & -2 & 0 & 5 & -6 & -2 & 10 & -5 & -7 & 7 & 2 & -4 & 0 & 1 \\ 1 & -2 & 0 & 5 & -5 & -3 & 10 & -3 & -7 & 5 & 2 & -2 & 0 & 0 & 0 \\ -1 & 0 & 2 & -3 & -2 & 5 & -1 & -4 & 2 & 1 & -1 & 0 & 0 & 0 & 0 \\ 0 & 1 & -1 & 0 & 2 & -1 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{small}
\end{eqnarray*}
\item{$\overline P_{([2],[3])}({\bf 6_2^2}; a,q) =\overline P_{([3],[2])}({\bf 6_2^2}; a,q) =$}
\begin{eqnarray*}
&&\tfrac{(1-a) (1-a q) (1-a q^2)}{a^{{17}/{2}}q^{25/2}(1-q)^2 (1-q^2)^2 (1-q^3) }\times\\
&&\begin{small}
\left( \begin{array}{ccccccccccccccccccccc} 0 & 0 & 0 & 0 & 1 & -1 & -2 & 2 & 2 & -2 & -3 & 2 & 5 & -2 & -5 & 1 & 3 & 1 & -2 & -1 & 1 \\ 0 & 0 & 1 & 0 & -3 & 0 & 5 & 1 & -7 & -3 & 8 & 5 & -7 & -6 & 4 & 5 & -1 & -3 & 0 & 1 & 0 \\ 1 & 0 & -2 & -1 & 3 & 4 & -4 & -6 & 4 & 7 & -1 & -7 & -1 & 5 & 1 & -2 & 0 & 0 & 0 & 0 & 0 \\ -1 & -1 & 1 & 2 & -1 & -4 & 0 & 4 & 1 & -3 & -2 & 2 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & -1 & 0 & 1 & 1 & -1 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{small}\end{eqnarray*}
\item{$\overline P_{([2],[4])}({\bf 6_2^2}; a,q) =\overline P_{([4],[2])}({\bf 6_2^2}; a,q) =$}
\begin{eqnarray*}
&&\tfrac{(1-a) (1-a q) (1-a q^2)(1-a q^3)}{a^{9}q^{18} (1-q)^2 (1-q^2)^2 (1-q^3) (1-q^4) }\times\\
&&\begin{footnotesize}
\left( \begin{array}{ccccccccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 & -1 & 0 & 1 & 2 & -2 & -1 & -1 & 1 & 4 & -1 & -2 & -2 & 0 & 3 & 0 & 0 & -1 & -1 & 1 \\ 0 & 0 & 0 & 1 & 0 & -1 & -2 & 0 & 4 & 1 & -1 & -5 & -2 & 5 & 3 & 2 & -5 & -5 & 3 & 2 & 2 & 0 & -3 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & -2 & -1 & 2 & 2 & 3 & -4 & -4 & 1 & 2 & 6 & -2 & -4 & -1 & -1 & 4 & 1 & -2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & -1 & 0 & 1 & 2 & -1 & -2 & -2 & 0 & 4 & 0 & -1 & -1 & -2 & 2 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & -1 & 0 & 1 & 0 & 1 & -1 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{footnotesize}\end{eqnarray*}
\end{itemize}
\subsubsection{$6_3^2$ link}
\begin{figure}[h]
\centering{\includegraphics[scale=1]{6_3link}}
\caption{$\bf{6_3^2}$ link}
\end{figure}
\begin{flalign*}
\overline{P}_{(R_{1},R_{2})}({\bf 6_{3}^2};\,a,q) = & q^{(-2C_{R_{2}}-\tfrac{2{\ell}^{(1)}{\ell}^{(2)}}{N})}\sum_{s,t,s^{\prime},u,v}\epsilon_{s}^{R_{1},R_{2}}\,\sqrt{\dim_{q}s}\,\epsilon_{v}^{\overline{R}_{1},\overline{R}_{2}}\,\sqrt{\dim_{q}v}\,\lambda_{s}^{(+)}(R_{1},\, R_{2})&\\
&\times a_{ts}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R}_{1} & R_{2}\\
R_{1} & \overline{R}_{2}
\end{array}\end{footnotesize}\right]\,\lambda_{\overline{t}}^{(-)}(R_{1},\,\overline{R}_{2})\, a_{ts^{\prime}}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R}_{1} & R_{2}\\
\overline{R}_{2} & R_{1}
\end{array}\end{footnotesize}\right]\,(\lambda_{s^{\prime}}^{(-)}(R_{2},\,\overline{R}_{2}))^{-2}&\\
&\times a_{us^{\prime}}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R}_{1} & R_{2}\\
\overline{R}_{2} & R_{1}
\end{array}\end{footnotesize}\right]\,\lambda_{u}^{(-)}(\overline{R}_{1},\, R_{2})\, a_{uv}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{2} & \overline{R}_{1}\\
\overline{R}_{2} & R_{1}
\end{array}\end{footnotesize}\right]\,\lambda_{v}^{(+)}(\overline{R}_{1},\,\overline{R}_{2}).
\end{flalign*}
The colored HOMFLY invariants of the link $\bf 6_3^2$ are symmetric under the interchange of the two colors.
\begin{itemize}
\item{$\overline P_{([1],[1])}({\bf 6_3^2}; a,q) =$}
\begin{eqnarray*}
\tfrac{(1-a) }{a q(1-q)^2 }\left( \begin{array}{ccccc} 0 & 0 & -1 & 0 & 0 \\ 0 & 2 & -3 & 2 & 0 \\ -1 & 3 & -4 & 3 & -1 \\ 0 & 1 & -2 & 1 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([1],[2])}({\bf 6_3^2}; a,q) =\overline P_{([2],[1])}({\bf 6_3^2}; a,q) =$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q) }{a^{3/2} q^{1/2}(1-q)^2 (1-q^2)}\left( \begin{array}{ccccccc} 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 2 & -2 & -1 & 2 & 0 \\ -1 & 2 & 0 & -3 & 2 & 1 & -1 \\ 0 & 1 & -1 & -1 & 1 & 0 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([1],[3])}({\bf 6_3^2}; a,q) =\overline P_{([3],[1])}({\bf 6_3^2}; a,q) =$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q) (1-a q^2)}{a^2 (1-q)^2 (1-q^2) (1-q^3)}\left( \begin{array}{ccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 2 & -2 & 0 & -1 & 2 & 0 \\ -1 & 2 & -1 & 1 & -3 & 2 & 0 & 1 & -1 \\ 0 & 1 & -1 & 0 & -1 & 1 & 0 & 0 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([1],[4])}({\bf 6_3^2}; a,q) =\overline P_{([4],[1])}({\bf 6_3^2}; a,q) =$}
\begin{eqnarray*}
\tfrac{q^{1/2}(1-a) (1-a q) (1-a q^2) (1-a q^3)}{a^{{5}/{2}} (1-q)^2 (1-q^2) \left(1-q^3\right)(1-q^4) }
\left( \begin{array}{ccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2 & -2 & 0 & 0 & -1 & 2 & 0 \\ -1 & 2 & -1 & 0 & 1 & -3 & 2 & 0 & 0 & 1 & -1 \\ 0 & 1 & -1 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([2],[2])}({\bf 6_3^2}; a,q) =$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q)}{a^{2}q^{3}(1-q)^2 (1-q^2)^2}\begin{small}\left( \begin{array}{ccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2 & 0 & 3 & -1 & -2 & 0 \\ 0 & 0 & 0 & 0 & 1 & 2 & -6 & -1 & 10 & -3 & -6 & 3 & 1 \\ 0 & 0 & -2 & 3 & 5 & -12 & -2 & 17 & -5 & -10 & 6 & 2 & -2 \\ 1 & -3 & 1 & 9 & -11 & -8 & 19 & -2 & -13 & 7 & 2 & -3 & 1 \\ -1 & 1 & 4 & -5 & -5 & 9 & 1 & -7 & 2 & 2 & -1 & 0 & 0 \\ 0 & 1 & -2 & -1 & 4 & -1 & -2 & 1 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{small}
\end{eqnarray*}
\item{$\overline P_{([2],[3])}({\bf 6_3^2}; a,q) =\overline P_{([3],[2])}({\bf 6_3^2}; a,q) =$}
\begin{eqnarray*}
&& \tfrac{(1-a) (1-a q) (1-a q^2)}{a^{5/2}q^{5/2}(1-q)^2 (1-q^2)^2 (1-q^3) }\times\\
&&\begin{small}
\left( \begin{array}{ccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2 & 0 & 2 & 1 & -1 & -2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & -4 & -4 & 4 & 6 & 0 & -6 & -2 & 3 & 1 \\ 0 & 0 & 0 & -2 & 2 & 5 & -3 & -9 & -1 & 12 & 6 & -9 & -7 & 3 & 5 & 0 & -2 \\ 1 & -2 & -2 & 6 & 4 & -8 & -10 & 6 & 15 & -2 & -12 & -2 & 6 & 3 & -3 & -1 & 1 \\ -1 & 0 & 4 & 1 & -7 & -4 & 7 & 7 & -4 & -7 & 1 & 4 & 0 & -1 & 0 & 0 & 0 \\ 0 & 1 & -1 & -2 & 1 & 2 & 1 & -2 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{small}
\end{eqnarray*}
\item{$\overline P_{([2],[4])}({\bf 6_3^2}; a,q) =\overline P_{([4],[2])}({\bf 6_3^2}; a,q) =$}
\begin{eqnarray*}
&& \tfrac{(1-a) (1-a q) (1-a q^2)(1-a q^3)}{a^{3}q^{2}(1-q)^2 (1-q^2)^2 (1-q^3) (1-q^4) }\times\\
&&\begin{small}
\left( \begin{array}{ccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2 & 0 & 2 & 0 & 1 & -1 & -2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & -4 & -2 & 1 & 1 & 6 & -1 & -3 & -2 & -2 & 3 & 1 \\ 0 & 0 & 0 & 0 & -2 & 2 & 4 & -3 & -1 & -5 & -2 & 9 & 3 & 1 & -5 & -6 & 2 & 2 & 3 & 0 & -2 \\ 1 & -2 & -1 & 3 & 1 & 3 & -5 & -7 & 2 & 3 & 9 & 0 & -7 & -3 & -2 & 5 & 2 & -1 & -1 & -1 & 1 \\ -1 & 0 & 3 & 1 & -2 & -4 & -2 & 4 & 4 & 2 & -3 & -5 & 0 & 2 & 2 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 1 & -1 & -1 & 0 & 0 & 2 & 0 & 0 & -1 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{small}\end{eqnarray*}
\item{$\overline P_{([3],[3])}({\bf 6_3^2}; a,q)=$}
\begin{eqnarray*}
&& \tfrac{(1-a) (1-a q) (1-a q^2)}{a^{3}q^{7}(1-q)^2 (1-q^2)^2 (1-q^3)^2 }\times\\
&&\begin{tiny}
\left( \begin{array}{cccccccccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & -3 & 1 & 1 & 2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -2 & 0 & 6 & 2 & -3 & -10 & 1 & 5 & 5 & -2 & -3 & -1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & -1 & -9 & 1 & 14 & 10 & -9 & -24 & 1 & 17 & 11 & -6 & -11 & 1 & 2 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -2 & 6 & 5 & -4 & -22 & -2 & 30 & 24 & -19 & -43 & -1 & 33 & 19 & -13 & -20 & 3 & 7 & 4 & -3 & -1 \\ 0 & 0 & 0 & 0 & 2 & -3 & -5 & 5 & 15 & 2 & -35 & -18 & 36 & 48 & -14 & -64 & -13 & 44 & 32 & -17 & -29 & 4 & 12 & 3 & -5 & -2 & 2 & 0 \\ 0 & -1 & 3 & -1 & -6 & -1 & 14 & 14 & -26 & -32 & 16 & 54 & 12 & -58 & -33 & 31 & 41 & -7 & -31 & 0 & 12 & 4 & -5 & -2 & 3 & -1 & 0 & 0 \\ 1 & -1 & -3 & 0 & 7 & 8 & -12 & -18 & 3 & 28 & 15 & -27 & -24 & 8 & 26 & 4 & -17 & -5 & 5 & 5 & -2 & -2 & 1 & 0 & 0 & 0 & 0 & 0 \\ -1 & 1 & 2 & 2 & -6 & -6 & 4 & 11 & 4 & -13 & -7 & 4 & 10 & 0 & -6 & -1 & 1 & 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & -2 & -1 & 2 & 3 & 0 & -6 & 0 & 3 & 2 & -1 & -2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{tiny}\end{eqnarray*}
\end{itemize}
\subsubsection{$7_1^2$ link}
\begin{figure}[h]
\centering{\includegraphics[scale=1]{7_1link}}
\caption{$\bf{7_1^2}$ link}
\end{figure}
\begin{flalign*}
\overline{P}_{(R_{1},R_{2})}({\bf 7_{1}^2};\,a,q) = & q^{\left(-C_{R_{2}}+\tfrac{{\ell}^{(1)}{\ell}^{(2)}}{N}\right)}\sum_{s,t,s^{\prime},u,v}\epsilon_{s}^{R_{1},R_{2}}\,\sqrt{\dim_{q}s}\,\epsilon_{v}^{\overline{R}_{1},R_{2}}\,\sqrt{\dim_{q}v}\,(\lambda_{s}^{(+)}(R_{1},\, R_{2}))&\\
&\times a_{ts}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R}_{1} & R_{2}\\
R_{1} & \overline{R}_{2}
\end{array}\end{footnotesize}\right]\,\lambda_{\overline{t}}^{(-)}(R_{1},\,\overline{R}_{2})\, a_{ts^{\prime}}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R}_{1} & R_{2}\\
\overline{R}_{2} & R_{1}
\end{array}\end{footnotesize}\right]\,(\lambda_{s^{\prime}}^{(-)}(R_{2},\,\overline{R}_{2}))^{-1}&\\
&\times a_{us^{\prime}}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R}_{1} & \overline{R}_{2}\\
R_{2} & R_{1}
\end{array}\end{footnotesize}\right]\,(\lambda_{u}^{(+)}(\overline{R}_{1},\,\overline{R}_{2}))^{-3}\, a_{uv}\!\left[\begin{footnotesize}\begin{array}{cc}
\overline{R}_{2} & \overline{R}_{1}\\
R_{2} & R_{1}
\end{array}\end{footnotesize}\right]\,(\lambda_{v}^{(-)}(\overline{R}_{1},\, R_{2}))^{-1}.
\end{flalign*}
The colored HOMFLY invariants of the link $\bf 7_1^2$ are symmetric under the interchange of the two colors.
\begin{itemize}
\item{$\overline P_{([1],[1])}({\bf 7_1^2}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a) }{a^{3}q^{2}(1-q)^2 }
\left( \begin{array}{ccccccc} 0 & -1 & 1 & -1 & 1 & -1 & 0 \\ 1 & -2 & 3 & -3 & 3 & -2 & 1 \\ 0 & -1 & 2 & -2 & 2 & -1 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([1],[2])}({\bf 7_1^2}; a,q)=\overline P_{([2],[1])}({\bf 7_1^2}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q) }{a^{7/2}q^{7/2}(1-q)^2 (1-q^2) }\left( \begin{array}{cccccccccc} 0 & 0 & -1 & 0 & 1 & -1 & 0 & 1 & -1 & 0 \\ 1 & -1 & -1 & 3 & -1 & -2 & 3 & -1 & -1 & 1 \\ 0 & -1 & 1 & 1 & -2 & 1 & 1 & -1 & 0 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([1],[3])}({\bf 7_1^2}; a,q)=\overline P_{([3],[1])}({\bf 7_1^2}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q) (1-a q^2)}{a^{4}q^{5}(1-q)^2 (1-q^2) (1-q^3)}\left( \begin{array}{ccccccccccccc} 0 & 0 & 0 & -1 & 0 & 0 & 1 & -1 & 0 & 0 & 1 & -1 & 0 \\ 1 & -1 & 0 & -1 & 3 & -1 & 0 & -2 & 3 & -1 & 0 & -1 & 1 \\ 0 & -1 & 1 & 0 & 1 & -2 & 1 & 0 & 1 & -1 & 0 & 0 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([1],[4])}({\bf 7_1^2}; a,q)=\overline P_{([4],[1])}({\bf 7_1^2}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q) (1-a q^2) (1-a q^3)}{a^{9/2} q^{13/2} (1-q)^2 (1-q^2) \left(1-q^3\right)(1-q^4) }
\begin{small}
\left( \begin{array}{cccccccccccccccc} 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 1 & -1 & 0 & 0 & 0 & 1 & -1 & 0 \\ 1 & -1 & 0 & 0 & -1 & 3 & -1 & 0 & 0 & -2 & 3 & -1 & 0 & 0 & -1 & 1 \\ 0 & -1 & 1 & 0 & 0 & 1 & -2 & 1 & 0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{small}
\end{eqnarray*}
\item{$\overline P_{([2],[2])}({\bf 7_1^2}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q) }{a^{6}q^{8}(1-q)^2 (1-q^2)^2 }\begin{footnotesize}\left( \begin{array}{ccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 1 & -1 & 0 & 1 & -1 & 1 & 1 & -2 & 0 & 2 & -1 & -1 & 1 & 0 \\ 0 & 0 & -1 & 1 & 1 & -3 & 1 & 1 & -3 & 3 & 1 & -6 & 2 & 5 & -4 & -2 & 3 & 0 & -1 \\ 1 & -2 & 1 & 3 & -5 & 2 & 3 & -7 & 6 & 5 & -12 & 2 & 11 & -7 & -5 & 6 & 0 & -2 & 1 \\ -1 & 1 & 2 & -4 & 2 & 3 & -8 & 5 & 7 & -11 & -1 & 10 & -3 & -5 & 3 & 1 & -1 & 0 & 0 \\ 0 & 1 & -2 & 0 & 3 & -4 & 1 & 5 & -5 & -2 & 5 & -1 & -2 & 1 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{footnotesize}\end{eqnarray*}
\item{$\overline P_{([2],[3])}({\bf 7_1^2}; a,q)=\overline P_{([3],[2])}({\bf 7_1^2}; a,q)=$}
\begin{eqnarray*}
&&\tfrac{(1-a) (1-a q) (1-a q^2)}{a^{13/2}q^{23/2}(1-q)^2 (1-q^2)^2 (1-q^3) }\times\\
&&\begin{footnotesize}
\left( \begin{array}{ccccccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & -1 & 0 & 1 & 0 & -1 & 0 & 2 & 0 & -2 & 0 & 1 & 1 & -1 & -1 & 1 & 0 \\ 0 & 0 & 0 & -1 & 0 & 2 & 0 & -3 & -1 & 3 & 1 & -4 & -1 & 5 & 1 & -5 & -3 & 4 & 4 & -3 & -3 & 1 & 2 & 0 & -1 \\ 1 & -1 & -2 & 3 & 3 & -4 & -4 & 4 & 6 & -6 & -7 & 8 & 8 & -6 & -10 & 3 & 11 & -1 & -8 & -1 & 4 & 2 & -2 & -1 & 1 \\ -1 & 0 & 3 & 0 & -5 & 0 & 7 & 0 & -9 & -1 & 11 & 3 & -11 & -5 & 8 & 6 & -4 & -5 & 1 & 3 & 0 & -1 & 0 & 0 & 0 \\ 0 & 1 & -1 & -2 & 2 & 2 & -2 & -3 & 2 & 5 & -2 & -5 & 1 & 3 & 1 & -2 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{footnotesize}\end{eqnarray*}
\item{$\overline P_{([2],[4])}({\bf 7_1^2}; a,q)=\overline P_{([4],[2])}({\bf 7_1^2}; a,q)=$}
\begin{eqnarray*}
&& \tfrac{(1-a) (1-a q) (1-a q^2)(1-a q^3)}{a^{7}q^{15}(1-q)^2 (1-q^2)^2 (1-q^3)(1-q^4) }\times\\
&&\begin{tiny}
\left( \begin{array}{ccccccccccccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & -1 & 0 & 1 & 0 & 0 & -1 & 0 & 1 & 1 & 0 & -2 & 0 & 1 & 0 & 1 & -1 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 & 1 & 1 & 0 & -3 & -1 & 1 & 2 & 1 & -3 & -2 & 1 & 4 & 1 & -4 & -2 & -1 & 3 & 4 & -3 & -2 & 0 & 0 & 2 & 0 & -1 \\ 1 & -1 & -1 & 0 & 2 & 3 & -3 & -3 & -1 & 3 & 5 & -2 & -5 & -4 & 6 & 7 & -2 & -4 & -7 & 2 & 9 & 0 & -2 & -4 & -2 & 4 & 1 & 0 & -1 & -1 & 1 \\ -1 & 0 & 2 & 1 & -1 & -4 & 0 & 4 & 3 & -1 & -7 & -1 & 4 & 6 & 2 & -9 & -4 & 3 & 4 & 4 & -3 & -4 & 0 & 1 & 2 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 1 & -1 & -1 & 0 & 1 & 2 & -2 & -1 & -1 & 1 & 4 & -1 & -2 & -2 & 0 & 3 & 0 & 0 & -1 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{tiny}\end{eqnarray*}
\end{itemize}
\subsubsection{$7_2^2$ link}
\begin{figure}[h]
\centering{\includegraphics[scale=1]{7_2-link}}
\caption{$\bf{ 7_2^2}$ link}
\end{figure}
\begin{flalign*}
\overline{P}_{(R_{1},R_{2})}({\bf 7_{2}^2};\,a,q)=&q^{\left(-C_{R_{2}}+\tfrac{{\ell}^{(1)}{\ell}^{(2)}}{N}\right)}\sum_{s,t,s^{\prime},u,v}\epsilon_{s}^{\overline{R}_{1},R_{2}}\,\sqrt{\dim_{q}s}\,\epsilon_{v}^{R_{1},R_{2}}\,\sqrt{\dim_{q}v}\,(\lambda_{s}^{(-)}(\overline{R}{}_{1},\, R_{2}))^{2}& \\
& \times a_{ts}\left[\begin{array}{cc}
R_{1} & \overline{R}_{1}\\
R_{2} & \overline{R}_{2}
\end{array}\right]\,(\lambda_{t}^{(-)}(R_{2},\,\overline{R}_{2}))^{-1}\, a_{s^{\prime}t}\left[\begin{array}{cc}
R_{1} & R_{2}\\
\overline{R}_{2} & \overline{R}_{1}
\end{array}\right]\,(\lambda_{s^{\prime}}^{(+)}(R_{1},\, R_{2}))^{-1} & \\
& \times a_{s^{\prime}u}\left[\begin{array}{cc}
R_{1} & R_{2}\\
\overline{R}_{1} & \overline{R}{}_{2}
\end{array}\right]\,(\lambda_{u}^{(-)}(\overline{R}_{1},\, R_{2}))^{-2}\, a_{vu}\left[\begin{array}{cc}
R_{1} & R_{2}\\
\overline{R}_{1} & \overline{R}_{2}
\end{array}\right]\,(\lambda_{v}^{(+)}(R_{1},\, R_{2}))^{-1} \end{flalign*}
The colored HOMFLY invariants of the link $\bf 7_2^2$ are symmetric under the interchange of the two colors.
\begin{itemize}
\item{$\overline P_{([1],[1])}({\bf 7_2^2}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a) }{a^3 q(1-q)^2}\left( \begin{array}{ccccc} 0 & -1 & 2 & -1 & 0 \\ 1 & -4 & 5 & -4 & 1 \\ 1 & -3 & 5 & -3 & 1 \\ 0 & -1 & 2 & -1 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([1],[2])}({\bf 7_2^2}; a,q)=\overline P_{([2],[1])}({\bf 7_2^2}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q) }{a^{7/2}q^{5/2} (1-q)^2 (1-q^2) }\left( \begin{array}{cccccccc} 0 & 0 & 0 & -1 & 1 & 1 & -1 & 0 \\ 0 & 1 & -3 & 0 & 4 & -3 & -1 & 1 \\ 1 & -1 & -2 & 4 & 0 & -2 & 1 & 0 \\ 0 & -1 & 1 & 1 & -1 & 0 & 0 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([1],[3])}({\bf 7_2^2}; a,q)=\overline P_{([3],[1])}({\bf 7_2^2}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q) (1-a q^2)}{a^4 q^4 (1-q)^2 (1-q^2) (1-q^3)}\left( \begin{array}{ccccccccccc} 0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -3 & 1 & -1 & 4 & -3 & 0 & -1 & 1 \\ 1 & -1 & 0 & -2 & 4 & -1 & 1 & -2 & 1 & 0 & 0 \\ 0 & -1 & 1 & 0 & 1 & -1 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([1],[4])}({\bf 7_2^2}; a,q)=\overline P_{([4],[1])}({\bf 7_2^2}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q) (1-a q^2) (1-a q^3)}{a^{{9}/{2}} q^{{11}/{2}} (1-q)^2 (1-q^2) \left(1-q^3\right)(1-q^4) }
\begin{small}
\left( \begin{array}{cccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 1 & -3 & 1 & 0 & -1 & 4 & -3 & 0 & 0 & -1 & 1 \\ 1 & -1 & 0 & 0 & -2 & 4 & -1 & 0 & 1 & -2 & 1 & 0 & 0 & 0 \\ 0 & -1 & 1 & 0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{small}
\end{eqnarray*}
\item{$\overline P_{([2],[2])}({\bf 7_2^2}; a,q)=$}
\begin{eqnarray*}
&& \tfrac{(1-a) (1-a q)}{a^{6}q^{6}(1-q)^2 (1-q^2)^2 }\times\\
&&\begin{footnotesize}
\left( \begin{array}{ccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -2 & -1 & 4 & -1 & -2 & 1 & 0 \\ 0 & 0 & 0 & 0 & -1 & 2 & 3 & -8 & -1 & 12 & -4 & -8 & 4 & 2 & -1 \\ 0 & 0 & 1 & -4 & 2 & 12 & -17 & -8 & 29 & -6 & -20 & 11 & 5 & -4 & 0 \\ 0 & 1 & -4 & 1 & 13 & -15 & -15 & 29 & 2 & -25 & 7 & 9 & -5 & -1 & 1 \\ 1 & -3 & 1 & 10 & -11 & -11 & 21 & 2 & -16 & 4 & 5 & -2 & 0 & 0 & 0 \\ -1 & 1 & 4 & -6 & -5 & 11 & 1 & -8 & 2 & 2 & -1 & 0 & 0 & 0 & 0 \\ 0 & 1 & -2 & -1 & 4 & -1 & -2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{footnotesize}
\end{eqnarray*}
\item{$\overline P_{([2],[3])}({\bf 7_2^2}; a,q)=\overline P_{([3],[2])}({\bf 7_2^2}; a,q)=$}
\begin{eqnarray*}
&& \tfrac{(1-a) (1-a q) (1-a q^2)}{a^{{13}/{2}}q^{{19}/2}(1-q)^2 (1-q^2)^2 (1-q^3) }\times\\
&&\begin{footnotesize}
\left( \begin{array}{ccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 & -2 & 1 & 2 & 1 & -2 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & 4 & -2 & -7 & -1 & 9 & 5 & -7 & -6 & 2 & 4 & 0 & -1 \\ 0 & 0 & 0 & 0 & 1 & -3 & -2 & 10 & 4 & -14 & -12 & 13 & 21 & -7 & -19 & 0 & 10 & 4 & -4 & -2 & 1 \\ 0 & 0 & 1 & -2 & -4 & 5 & 11 & -6 & -21 & -1 & 25 & 10 & -19 & -14 & 8 & 10 & -2 & -4 & 0 & 1 & 0 \\ 1 & -1 & -3 & 2 & 7 & 2 & -13 & -9 & 14 & 13 & -5 & -13 & -1 & 8 & 1 & -2 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 3 & 2 & -5 & -6 & 4 & 8 & 0 & -6 & -2 & 3 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & -1 & -2 & 1 & 2 & 1 & -2 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{footnotesize}\end{eqnarray*}
\item{$\overline P_{([2],[4])}({\bf 7_2^2}; a,q)=\overline P_{([4],[2])}({\bf 7_2^2}; a,q)=$}
\begin{eqnarray*}
&& \tfrac{(1-a) (1-a q) (1-a q^2)(1-a q^3)}{a^{7}q^{13}(1-q)^2 (1-q^2)^2 (1-q^3) (1-q^4) }\times\\
&&\begin{tiny}
\left( \begin{array}{ccccccccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 & -1 & 0 & 0 & 2 & 0 & 0 & -1 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & 3 & -1 & -2 & -4 & -1 & 6 & 3 & 1 & -4 & -5 & 1 & 2 & 2 & 0 & -1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & -3 & -1 & 6 & 2 & 2 & -10 & -8 & 6 & 6 & 13 & -4 & -13 & -2 & 0 & 7 & 3 & -2 & -2 & -1 & 1 \\ 0 & 0 & 0 & 1 & -2 & -2 & 1 & 3 & 8 & -5 & -11 & -5 & 0 & 17 & 7 & -7 & -8 & -9 & 5 & 7 & 1 & -1 & -3 & 0 & 1 & 0 & 0 \\ 1 & -1 & -1 & -1 & 1 & 6 & 0 & -2 & -8 & -6 & 10 & 6 & 4 & -3 & -9 & -1 & 2 & 4 & 1 & -2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 2 & 1 & 1 & -4 & -4 & 2 & 2 & 5 & 0 & -4 & -1 & -1 & 2 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & -1 & -1 & 0 & 0 & 2 & 0 & 0 & -1 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{tiny}\end{eqnarray*}
\end{itemize}
\subsubsection{$7_4^2$ link}
\begin{figure}[h]
\centering{\includegraphics[scale=1]{7_4_2link}}
\caption{$\bf{7_4^2}$ link}
\end{figure}
\begin{flalign*}
\overline{P}_{(R_{1},R_{2})}({\bf 7_{4}^{2}};\,a,q) = & q^{-3C_{R_1}}\sum_{l,r,u,v,x,y}\frac{1}{\epsilon_{l}^{R_{1},\overline{R}_{1}}\sqrt{\dim_{q}l}}~\epsilon_{r}^{R_{1},R_{2}}\sqrt{\dim_{q}r}~\epsilon_{u}^{R_{1},\overline{R}_{1}}&\\
& \times \sqrt{\dim_{q}u}~\epsilon_{v}^{R_{1},R_{2}}\sqrt{\dim_{q}v}~ a_{rl}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & R_{2}\\
\overline{R}_{2} & \overline{R}_{1}
\end{array}\end{footnotesize}\right]a_{xl}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & R_{1}\\
\overline{R}_{1} & \overline{R}_{1}
\end{array}\end{footnotesize}\right]~a_{ly}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & \overline{R}_{1}\\
R_{2} & \overline{R}_{2}
\end{array}\end{footnotesize}\right]&\\
& \times a_{xu}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & R_{1}\\
\overline{R}_{1} & \overline{R}_{1}
\end{array}\end{footnotesize}\right]a_{vy}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & R_{2}\\
\overline{R}_{1} & \overline{R}_{2}
\end{array}\end{footnotesize}\right]~(\lambda_{r}^{(+)}(R_{1},R_{2}))^{-2}~(\lambda_{x}^{(+)}(R_{1},R_{1}))^{-2}&\\
&\times(\lambda_{u}^{(-)}(R_{1},\overline{R}_{1}))^{-1}~\lambda_{y}^{(-)}(\overline{R}_{1},R_{2})~\lambda_{v}^{(+)}(R_{1},R_{2})\end{flalign*}
\begin{itemize}
\item{$\overline P_{([1],[1])}({\bf 7_4^2}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a)}{a^{3}q^{2}(1-q)^2}\left( \begin{array}{ccccccc} 0 & -1 & 1 & -2 & 1 & -1 & 0 \\ 1 & -2 & 4 & -3 & 4 & -2 & 1 \\ 0 & -1 & 2 & -3 & 2 & -1 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([1],[2])}({\bf 7_4^2}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a)(1-a q) }{a^{7/2}q^{5/2}(1-q)^2(1-q^2)}\left( \begin{array}{ccccccccc} 0 & 0 & -1 & 0 & 0 & -1 & 1 & -1 & 0 \\ 1 & -1 & 0 & 2 & -1 & 2 & 0 & -1 & 1 \\ 0 & -1 & 1 & 0 & -1 & 1 & -1 & 0 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([1],[3])}({\bf 7_4^2}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a)(1-a q) (1-a q^2) }{a^{4} q^{3}(1-q)^2(1-q^2)(1-q^3)}
\left( \begin{array}{ccccccccccc} 0 & 0 & 0 & -1 & 0 & -1 & 1 & -1 & 1 & -1 & 0 \\ 1 & -1 & 1 & -2 & 3 & -1 & 3 & -2 & 1 & -1 & 1 \\ 0 & -1 & 1 & -1 & 2 & -2 & 1 & -1 & 0 & 0 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([1],[4])}({\bf 7_4^2}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a)(1-a q) (1-a q^2) \left(1-a q^3\right) }{a^{9/2} q^{7/2} (1-q)^2(1-q^2)(1-q^3)(1-q^4) }
\left( \begin{array}{ccccccccccccc} 0 & 0 & 0 & 0 & -1 & 0 & -1 & 0 & 1 & -1 & 1 & -1 & 0 \\ 1 & -1 & 1 & -1 & -1 & 3 & -1 & 3 & -1 & -1 & 1 & -1 & 1 \\ 0 & -1 & 1 & -1 & 1 & 1 & -2 & 1 & -1 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([2],[1])}({\bf 7_4^2}; a,q)=$}
\begin{eqnarray*}
&& \tfrac{(1-a) (1-a q)}{a^{11/2}q^{11/2}(1-q)^2 (1-q^2) }\begin{footnotesize}\left( \begin{array}{ccccccccccccc} 0 & 0 & -1 & 0 & 0 & -2 & 1 & 0 & -2 & 0 & 1 & -1 & 0 \\ 1 & -1 & 1 & 3 & -3 & 3 & 4 & -3 & 0 & 4 & -1 & -1 & 1 \\ -1 & 0 & 2 & -4 & -1 & 4 & -4 & -2 & 2 & 0 & -1 & 0 & 0 \\ 0 & 1 & -1 & -1 & 3 & -1 & -1 & 1 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{footnotesize}
\end{eqnarray*}
\item{$\overline P_{([2],[2])}({\bf 7_4^2}; a,q)=$}
\begin{eqnarray*}
&& \tfrac{(1-a) (1-a q)}{a^{6}q^{8}(1-q)^2 (1-q^2)^2 }\times\\
&&\begin{footnotesize}\left( \begin{array}{ccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 1 & -1 & 1 & 2 & -2 & 1 & 3 & -3 & 0 & 3 & -1 & -1 & 1 & 0 \\ 0 & 0 & -1 & 1 & 0 & -4 & 2 & 0 & -7 & 3 & 2 & -9 & 1 & 6 & -5 & -3 & 3 & 0 & -1 \\ 1 & -2 & 2 & 3 & -6 & 5 & 5 & -10 & 10 & 9 & -15 & 3 & 15 & -8 & -5 & 7 & 0 & -2 & 1 \\ -1 & 1 & 1 & -5 & 4 & 3 & -12 & 6 & 9 & -15 & -2 & 12 & -4 & -6 & 3 & 1 & -1 & 0 & 0 \\ 0 & 1 & -2 & 1 & 3 & -6 & 2 & 7 & -7 & -2 & 6 & -1 & -2 & 1 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{footnotesize}
\end{eqnarray*}
\item{$\overline P_{([2],[3])}({\bf 7_4^2}; a,q)=$}
\begin{eqnarray*}
&& \tfrac{(1-a) (1-a q) (1-a q^2)}{a^{{13}/{2}}q^{{19}/{2}}(1-q)^2 (1-q^2)^2 (1-q^3) }\times\\
&&\begin{footnotesize}
\left( \begin{array}{ccccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & -2 & 1 & 2 & -1 & -1 & 1 & 0 \\ 0 & 0 & 0 & -1 & 0 & 1 & -1 & -1 & -1 & -2 & -2 & 0 & 2 & -3 & -5 & 1 & 4 & -1 & -4 & 0 & 2 & 0 & -1 \\ 1 & -1 & -1 & 3 & 0 & -1 & 2 & 0 & -2 & 1 & 9 & 3 & -9 & -2 & 11 & 3 & -6 & -3 & 4 & 3 & -2 & -1 & 1 \\ -1 & 0 & 2 & -1 & -2 & 2 & -1 & -4 & 2 & 6 & -3 & -10 & 1 & 9 & -1 & -7 & 0 & 3 & 0 & -1 & 0 & 0 & 0 \\ 0 & 1 & -1 & -1 & 2 & -1 & -1 & 2 & 1 & -1 & -3 & 2 & 3 & -2 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{footnotesize}\end{eqnarray*}
\item{$\overline P_{([2],[4])}({\bf 7_4^2}; a,q)=$}
\begin{eqnarray*}
&& \tfrac{(1-a) (1-a q) (1-a q^2) (1-a q^3) }{a^7q^{11}(1-q)^2 (1-q^2)^2 (1-q^3) (1-q^4)}\times\\
&&\begin{footnotesize}
\left( \begin{array}{ccccccccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 2 & 0 & -1 & 1 & -1 & 0 & 2 & -1 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 1 & -1 & -2 & -4 & -1 & 2 & -1 & 0 & -2 & -4 & 2 & 2 & -2 & 0 & -1 & -1 & 2 & 0 & -1 \\ 1 & -1 & 0 & 0 & 0 & 2 & 1 & 2 & -4 & -3 & 4 & 3 & 6 & 0 & -4 & 0 & 3 & 3 & 1 & -3 & -1 & 2 & 1 & 1 & -1 & -1 & 1 \\ -1 & 0 & 1 & 0 & 1 & -1 & -1 & -3 & -1 & 6 & 0 & -2 & -2 & -4 & 1 & 3 & 0 & -2 & -2 & 0 & 2 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 1 & -1 & 0 & 0 & -1 & 1 & 0 & 2 & -1 & -2 & 2 & -1 & 0 & 2 & -1 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{footnotesize}\end{eqnarray*}
\item{$\overline P_{([3],[1])}({\bf 7_4^2}; a,q)=$}
\begin{eqnarray*}
&& \tfrac{(1-a) (1-a q) (1-a q^2) }{a^8q^{12}(1-q)^2 (1-q^2) (1-q^3) }\times\\
&&\begin{footnotesize}\left( \begin{array}{cccccccccccccccccccccc} 0 & 0 & 0 & 0 & -1 & 0 & -1 & 0 & -2 & 1 & -1 & -1 & -2 & 1 & 0 & 0 & -2 & 0 & 0 & 1 & -1 & 0 \\ 0 & 1 & -1 & 2 & 0 & 3 & -2 & 5 & 2 & 3 & -4 & 4 & 3 & 4 & -3 & 0 & 0 & 4 & -1 & 0 & -1 & 1 \\ -1 & 0 & 0 & 0 & -4 & -1 & 1 & 0 & -6 & -4 & 1 & 3 & -3 & -3 & -2 & 2 & 0 & 0 & -1 & 0 & 0 & 0 \\ 1 & 0 & 0 & -2 & 4 & 1 & 1 & -4 & 3 & 2 & 2 & -2 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 1 & 0 & 1 & -3 & 1 & 0 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{footnotesize}\end{eqnarray*}
\item{$\overline P_{([3],[2])}({\bf 7_4^2}; a,q)=$}
\begin{eqnarray*}
&& \tfrac{(1-a) (1-a q) (1-a q^2)}{a^{17/2}q^{31/2}(1-q)^2 (1-q^2)^2 (1-q^3) }\times\\
&&\begin{tiny}
\left( \begin{array}{cccccccccccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 2 & 2 & -2 & -1 & 2 & 3 & -1 & -2 & 0 & 2 & 1 & -1 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 & 1 & -2 & -2 & 0 & -1 & -5 & -4 & 3 & 1 & -9 & -7 & 4 & 6 & -4 & -9 & -2 & 6 & 3 & -4 & -4 & 1 & 2 & 0 & -1 \\ 0 & 1 & -1 & 0 & 3 & -1 & -1 & 4 & 5 & -3 & -4 & 13 & 13 & -7 & -12 & 10 & 20 & 2 & -15 & -5 & 11 & 12 & -4 & -8 & 0 & 5 & 2 & -2 & -1 & 1 \\ -1 & 0 & 2 & -2 & -4 & 2 & 6 & -5 & -14 & 4 & 16 & -6 & -24 & -5 & 20 & 9 & -18 & -14 & 5 & 11 & 1 & -7 & -4 & 2 & 2 & 0 & -1 & 0 & 0 & 0 \\ 1 & 0 & -2 & 0 & 5 & 1 & -9 & -1 & 13 & 6 & -13 & -10 & 13 & 13 & -5 & -11 & 0 & 7 & 3 & -2 & -2 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 1 & 2 & -3 & -2 & 3 & 4 & -3 & -7 & 3 & 6 & -1 & -4 & -1 & 2 & 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{tiny}\end{eqnarray*}
\item{$\overline P_{([4],[1])}({\bf 7_4^2}; a,q)=$}
\begin{eqnarray*}
&& \tfrac{(1-a) (1-a q) (1-a q^2) (1-a q^3) }{a^{21/2}q^{43/2}(1-q)^2 (1-q^2) (1-q^3) (1-q^4)}\times\\
&&\begin{tiny}
\left( \begin{array}{cccccccccccccccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & -1 & -1 & 0 & -2 & 0 & -1 & -2 & -1 & -1 & 0 & 0 & -2 & -1 & -2 & 1 & 0 & 0 & 0 & -2 & 0 & 0 & 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 1 & -1 & 2 & 1 & 0 & 4 & 0 & 5 & 4 & 1 & 3 & 1 & 5 & 7 & 2 & 2 & -3 & 3 & 4 & 3 & 4 & -3 & 0 & 0 & 0 & 4 & -1 & 0 & 0 & -1 & 1 \\ 0 & -1 & 0 & 0 & -2 & 0 & -5 & -1 & 0 & -5 & -3 & -9 & -4 & 0 & -1 & -2 & -9 & -6 & -2 & 0 & 4 & -4 & -2 & -3 & -2 & 2 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & -1 & 1 & 3 & 2 & 4 & -3 & 2 & 3 & 5 & 7 & -1 & -1 & 0 & 1 & 6 & 1 & 1 & -1 & -1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 & 2 & -4 & -1 & -1 & -1 & 4 & -3 & -1 & -2 & -2 & 2 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & -1 & 0 & 0 & -1 & 3 & -1 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{tiny}\end{eqnarray*}
\end{itemize}
\subsubsection{$7_5^2$ link}
\begin{figure}[h]
\centering{\includegraphics[scale=1]{7_5_2-link}}
\caption{$\bf{7_5^2}$ link}
\end{figure}
\begin{flalign*}
\overline{P}_{(R_{1},R_{2})}({\bf 7_{5}^{2}};\, a,q) = &q^{3C_{R_1}+\frac{2{\ell}^{(1)}{\ell}^{(2)}}{N}} \sum_{l,u,v,x,y,z}\frac{1}{\epsilon_{l}^{R_{1},\overline{R}_{1}}\sqrt{\dim_{q}l}}~\epsilon_{y}^{R_{1},\overline{R}_{1}}\sqrt{\dim_{q}y}~\epsilon_{u}^{\overline{R}_{1},R_{2}}&\\
& \times \sqrt{\dim_{q}u}~\epsilon_{v}^{\overline{R}_{1},R_{2}}\sqrt{\dim_{q}v}~a_{lx}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & \overline{R}_{1}\\
R_{1} & \overline{R}_{1}
\end{array}\end{footnotesize}\right]~a_{zx}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & R_{1}\\
\overline{R}_{1} & \overline{R}_{1}
\end{array}\end{footnotesize}\right]~a_{zy}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & R_{1}\\
\overline{R}_{1} & \overline{R}_{1}
\end{array}\end{footnotesize}\right]&\\
& \times a_{lu}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & \overline{R}_{1}\\
R_2 & \overline{R}_{2}
\end{array}\end{footnotesize}\right]~a_{lv}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & \overline{R}_{1}\\
{R}_{2} & \overline{R}_{2}
\end{array}\end{footnotesize}\right]~\lambda_{x}^{(-)}(R_{1},\overline{R}_{1})~\lambda_{z}^{(+)}(R_1,R_1)~\lambda_{y}^{(-)}(R_{1},\overline{R}_{1})&\\
&\times (\lambda_{u}^{(-)}(\overline{R}_{1},R_{2}))^{2}~(\lambda_{v}^{(-)}(\overline{R}_{1},R_{2}))^{2}\end{flalign*}
\begin{itemize}
\item{$\overline P_{([1],[1])}({\bf 7_5^2}; a,q)=$}
\begin{eqnarray*}
\tfrac{a(1-a)}{q(1-q)^2}\left( \begin{array}{ccccc} 0 & 0 & 1 & 0 & 0 \\ 0 & -3 & 3 & -3 & 0 \\ 2 & -4 & 6 & -4 & 2 \\ 1 & -3 & 4 & -3 & 1 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([1],[2])}({\bf 7_5^2}; a,q)=$}
\begin{eqnarray*}
\tfrac{a^{1/2}(1-a)(1-a q) }{q^{1/2}(1-q)^2(1-q^2)}\left( \begin{array}{ccccccc} 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & -2 & 1 & 1 & -3 & 0 \\ 1 & -1 & -1 & 4 & -2 & -1 & 2 \\ 1 & -2 & 1 & 1 & -2 & 1 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([1],[3])}({\bf 7_5^2}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a)(1-a q) (1-a q^2) }{(1-q)^2(1-q^2)(1-q^3)}
\left( \begin{array}{ccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -2 & 2 & -1 & 1 & -3 & 0 \\ 1 & -2 & 2 & -2 & 4 & -3 & 1 & -1 & 2 \\ 1 & -2 & 2 & -2 & 2 & -2 & 1 & 0 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([1],[4])}({\bf 7_5^2}; a,q)=$}
\begin{eqnarray*}
\tfrac{q^{{1}/{2}}(1-a)(1-a q) (1-a q^2) (1-a q^3) }{a^{{1}/{2}} (1-q)^2(1-q^2)(1-q^3)(1-q^4) }
\left( \begin{array}{ccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -2 & 2 & 0 & -1 & 1 & -3 & 0 \\ 1 & -2 & 1 & 1 & -2 & 4 & -3 & 0 & 1 & -1 & 2 \\ 1 & -2 & 2 & -1 & -1 & 2 & -2 & 1 & 0 & 0 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline P_{([2],[1])}({\bf 7_5^2}; a,q)=$}
\begin{eqnarray*}
\begin{footnotesize}\tfrac{a^{5/2}(1-a)(1-a q) }{q^{5/2}(1-q)^2(1-q^2)}
\left( \begin{array}{ccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 3 & -2 & 0 & 3 & 0 \\ 0 & 0 & 0 & -3 & 0 & 4 & -5 & -4 & 4 & -2 & -2 \\ 0 & 2 & -3 & -1 & 7 & -2 & -5 & 6 & 0 & -2 & 2 \\ 1 & -1 & -2 & 3 & 1 & -4 & 2 & 1 & -2 & 1 & 0 \\ \end{array} \right)
\end{footnotesize}
\end{eqnarray*}
\item{$\overline P_{([2],[2])}({\bf 7_5^2}; a,q)=$}
\begin{eqnarray*}
\tfrac{a^{2}(1-a) (1-a q) }{q^{2}(1-q)^2 (1-q^2)^2 }\begin{footnotesize}\left( \begin{array}{ccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -3 & -1 & 3 & -2 & -3 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 3 & 4 & -9 & 1 & 15 & -4 & -6 & 7 & 2 \\ 0 & 0 & 0 & -1 & -4 & 7 & 6 & -22 & 0 & 24 & -14 & -14 & 11 & 0 & -5 \\ 0 & 1 & -1 & -7 & 11 & 12 & -28 & -1 & 34 & -16 & -16 & 18 & 0 & -6 & 3 \\ 2 & -1 & -9 & 9 & 15 & -25 & -6 & 31 & -12 & -16 & 15 & 0 & -5 & 2 & 0 \\ 1 & -3 & 0 & 9 & -8 & -8 & 15 & -2 & -10 & 7 & 1 & -3 & 1 & 0 & 0 \\ \end{array} \right)
\end{footnotesize}
\end{eqnarray*}
\item{$\overline P_{([2],[3])}({\bf 7_5^2}; a,q)=$}
\begin{eqnarray*}
&&\tfrac{a^{{3}/{2}}(1-a) (1-a q) (1-a q^2)}{q^{{3}/{2}}(1-q)^2 (1-q^2)^2 (1-q^3) }\times\\
&&\begin{footnotesize}
\left( \begin{array}{ccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2 & -1 & 1 & 1 & -2 & -3 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 4 & -3 & -5 & 4 & 8 & 2 & -6 & -1 & 7 & 2 \\ 0 & 0 & 0 & 0 & 0 & -3 & 1 & 7 & -1 & -12 & -5 & 13 & 8 & -13 & -10 & 5 & 6 & -3 & -5 \\ 0 & 0 & 2 & -4 & -5 & 10 & 9 & -10 & -16 & 7 & 22 & -5 & -17 & 2 & 11 & 2 & -6 & -1 & 3 \\ 1 & 1 & -5 & -3 & 11 & 6 & -15 & -9 & 14 & 11 & -11 & -10 & 8 & 6 & -5 & -2 & 2 & 0 & 0 \\ 1 & -2 & -2 & 6 & 1 & -7 & -1 & 5 & 3 & -5 & -2 & 5 & -1 & -2 & 1 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{footnotesize}\end{eqnarray*}
\item{$\overline P_{([2],[4])}({\bf 7_5^2}; a,q)=$}
\begin{eqnarray*}
&& \tfrac{a(1-a) (1-a q) (1-a q^2) (1-a q^3) }{q(1-q)^2 (1-q^2)^2 (1-q^3) (1-q^4)}\times\\
&&\begin{footnotesize}
\left( \begin{array}{ccccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2 & 0 & 1 & -1 & 1 & -2 & -3 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & -2 & -1 & -1 & 3 & 6 & -2 & 0 & -1 & -1 & 7 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 & -2 & 1 & 3 & 0 & -1 & -7 & 0 & 5 & 1 & 3 & -9 & -5 & 4 & 0 & 3 & -3 & -5 \\ 0 & 0 & 1 & -1 & -3 & 0 & 6 & 3 & -6 & -2 & -3 & 2 & 11 & -4 & -4 & -2 & 0 & 8 & -1 & -2 & -1 & -1 & 3 \\ 1 & 0 & -3 & 0 & 2 & 3 & 2 & -8 & -1 & 4 & -1 & 6 & -4 & -5 & 4 & 0 & 2 & -2 & -2 & 2 & 0 & 0 & 0 \\ 1 & -2 & -1 & 4 & -1 & -1 & 0 & -2 & 2 & 0 & 1 & 0 & -3 & 3 & 0 & -2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{footnotesize}\end{eqnarray*}
\item{$\overline P_{([3],[1])}({\bf 7_5^2}; a,q)=$}
\begin{eqnarray*}
&& \tfrac{a^{4}(1-a) (1-a q) (1-a q^2) }{q^4(1-q)^2 (1-q^2) (1-q^3)}\times\\
&&\begin{footnotesize}
\left( \begin{array}{cccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & -3 & 2 & -1 & 0 & -3 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 3 & 1 & 0 & -3 & 6 & 3 & 4 & -3 & 2 & 2 & 2 \\ 0 & 0 & 0 & 0 & 0 & -3 & 0 & 1 & 3 & -4 & -7 & -1 & 3 & 1 & -6 & -4 & 1 & -1 & -1 & -2 \\ 0 & 0 & 2 & -3 & 0 & -1 & 7 & -1 & -2 & -5 & 6 & 3 & 1 & -5 & 4 & 0 & 2 & -2 & 2 & 0 \\ 1 & -1 & 0 & -2 & 3 & 0 & 1 & -4 & 1 & 1 & 2 & -3 & 1 & -1 & 2 & -2 & 1 & 0 & 0 & 0 \\ \end{array} \right)
\end{footnotesize}\end{eqnarray*}
\item{$\overline P_{([3],[2])}({\bf 7_5^2}; a,q)=$}
\begin{eqnarray*}
&& \tfrac{a^{7/2}(1-a) (1-a q) (1-a q^2) }{q^{7/2}(1-q)^2 (1-q^2)^2 (1-q^3) }\times\\
&&\begin{tiny}
\left( \begin{array}{cccccccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 3 & 1 & -2 & 0 & 2 & 3 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -3 & -4 & -2 & 7 & 2 & -11 & -10 & 1 & 6 & -2 & -7 & -2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 3 & 5 & -2 & -11 & -1 & 22 & 15 & -13 & -18 & 9 & 23 & 5 & -9 & 0 & 5 & 5 \\ 0 & 0 & 0 & 0 & 0 & 0 & -1 & -4 & 4 & 10 & -1 & -23 & -15 & 26 & 28 & -19 & -38 & -2 & 30 & 5 & -21 & -9 & 5 & 4 & -4 & -3 \\ 0 & 0 & 0 & 1 & -1 & -6 & 0 & 16 & 10 & -23 & -28 & 19 & 44 & -2 & -44 & -13 & 35 & 17 & -18 & -12 & 9 & 8 & -4 & -3 & 3 & 0 \\ 0 & 2 & 0 & -6 & -4 & 10 & 15 & -8 & -29 & -2 & 34 & 17 & -29 & -25 & 19 & 21 & -9 & -14 & 4 & 8 & -3 & -3 & 2 & 0 & 0 & 0 \\ 1 & -1 & -3 & 1 & 6 & 3 & -9 & -9 & 8 & 14 & -4 & -15 & 1 & 11 & 1 & -7 & -1 & 5 & -1 & -2 & 1 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{tiny}\end{eqnarray*}
\item{$\overline P_{([4],[1])}({\bf 7_5^2}; a,q)=$}
\begin{eqnarray*}
&& \tfrac{a^{11/2}(1-a) (1-a q) (1-a q^2) (1-a q^3) }{q^{11/2}(1-q)^2 (1-q^2) (1-q^3) (1-q^4)}\times\\
&&\begin{tiny}
\left( \begin{array}{cccccccccccccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 3 & -2 & 1 & 1 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & -4 & -1 & -1 & -1 & 2 & -7 & -4 & -3 & -5 & 3 & -3 & -2 & -2 & -2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 3 & 1 & 3 & -3 & 0 & 3 & 5 & 11 & 1 & 1 & 2 & 0 & 11 & 4 & 4 & 2 & 0 & 4 & 1 & 2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -3 & 0 & 1 & 0 & 4 & -5 & -6 & -4 & -3 & 6 & 0 & -4 & -8 & -8 & 1 & -2 & 0 & -4 & -7 & 1 & -4 & 0 & -1 & -2 & 0 \\ 0 & 0 & 0 & 2 & -3 & 0 & 0 & -1 & 7 & -1 & -1 & -2 & -5 & 6 & 3 & 5 & -2 & -4 & 2 & 0 & 6 & 1 & -3 & 4 & -2 & 2 & 2 & -2 & 2 & 0 & 0 & 0 \\ 1 & -1 & 0 & 0 & -2 & 3 & 0 & 0 & 1 & -4 & 1 & 0 & 2 & 1 & -2 & 0 & -2 & 2 & 1 & -2 & 2 & -2 & 0 & 2 & -2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{tiny}\end{eqnarray*}
\end{itemize}
\subsection{Three-component links}\label{sec:three}
\subsubsection{$6_1^3$ link}
\begin{figure}[h]
\centering{\includegraphics[scale=.8]{61_3link}}
\caption{${\bf 6_1^3}$ link}
\end{figure}
\begin{eqnarray*}
\overline{P}_{(R_{1},R_{2},R_{3})}({\bf 6_{1}^{3}};\, a,q) & = &q^{-\frac{\ell^{(1)}\ell^{(2)}}{N}-\frac{\ell^{(2)}\ell^{(3)}}{N}-\frac{\ell^{(1)}\ell^{(3)}}{N}} \sum_{l,x,y,z}\frac{1}{\epsilon_{l}^{R_{1},\overline{R}_{1}}\sqrt{\dim_{q}l}}\epsilon_{x}^{\overline{R}_{1},R_{2}}\sqrt{\dim_{q}x}\,\\
& & \times \epsilon_{y}^{\overline{R}_{2},R_{3}}\sqrt{\dim_{q}y}\,\epsilon_{z}^{\overline{R}_{1},R_{3}}\sqrt{\dim_{q}z}\, a_{lx}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & \overline{R}_{1}\\
R_{2} & \overline{R}_{2}
\end{array}\end{footnotesize}\right]\, a_{ly}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{2} & \overline{R}_{2}\\
R_{3} & \overline{R}_{3}
\end{array}\end{footnotesize}\right]\,\\
& & \times a_{lz}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & \overline{R}_{1}\\
R_{3} & \overline{R}_{3}
\end{array}\end{footnotesize}\right](\lambda_{x}^{(-)}(\overline{R}_{1},R_{2}))^{2}\,(\lambda_{y}^{(-)}(\overline{R}_{2},R_{3}))^{2}\,(\lambda_{z}^{(-)}(\overline{R}_{1},R_{3}))^{2}\end{eqnarray*}
The colored HOMFLY invariants of the link $\bf 6_{1}^{3}$ are symmetric under permutations over the representations $(R_1,R_2,R_3)$.
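For instance, this symmetry implies $\overline{P}_{([2],[1],[1])}({\bf 6_{1}^{3}};a,q)=\overline{P}_{([1],[2],[1])}({\bf 6_{1}^{3}};a,q)=\overline{P}_{([1],[1],[2])}({\bf 6_{1}^{3}};a,q)$, so only one representative of each multiset of colors is listed below.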
\begin{itemize}
\item{$\overline{P}_{([1],[1],[1])}({\bf 6_{1}^{3}}; a,q)=$}
\begin{eqnarray*}
\tfrac{a^{1/2}(1-a)}{q^{1/2} (1-q)^3}\begin{small}
\left( \begin{array}{ccccc} 0 & 0 & 1 & 0 & 0 \\ 0 & -3 & 4 & -3 & 0 \\ 2 & -5 & 7 & -5 & 2 \\ 1 & -4 & 6 & -4 & 1 \\ \end{array} \right)\end{small}
\end{eqnarray*}
\item{$\overline{P}_{([1],[1],[2])}({\bf 6_{1}^{3}}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q) }{ (1-q)^3 (1-q^2)}
\begin{small}\left( \begin{array}{ccccccc} 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & -2 & 1 & 2 & -3 & 0 \\ 1 & -1 & -2 & 5 & -2 & -2 & 2 \\ 1 & -3 & 2 & 2 & -3 & 1 & 0 \\ \end{array} \right)\end{small}
\end{eqnarray*}
\item{$\overline{P}_{([1],[1],[3])}({\bf 6_{1}^{3}}; a,q)=$}
\begin{eqnarray*}
\tfrac{q^{1/2} (1-a) (1-a q) (1-a q^2)}{a^{1/2} (1-q)^3(1-q^2) (1-q^3)}
\begin{small}\left( \begin{array}{ccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -2 & 2 & -1 & 2 & -3 & 0 \\ 1 & -2 & 2 & -3 & 5 & -3 & 1 & -2 & 2 \\ 1 & -3 & 3 & -2 & 3 & -3 & 1 & 0 & 0 \\ \end{array} \right)\end{small}
\end{eqnarray*}
\item{$\overline{P}_{([1],[2],[2])}({\bf 6_{1}^{3}}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q)}{(1-q)^3 (1-q^2)^2}
\begin{small}\left( \begin{array}{ccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 2 & -3 & 0 & 3 & 0 \\ 0 & 0 & 0 & -3 & 3 & 4 & -8 & -1 & 6 & -2 & -2 \\ 0 & 2 & -5 & 1 & 10 & -9 & -5 & 10 & -2 & -3 & 2 \\ 1 & -2 & -2 & 7 & -2 & -7 & 6 & 1 & -3 & 1 & 0 \\ \end{array} \right)\end{small}
\end{eqnarray*}
\item{$\overline{P}_{([1],[2],[3])}({\bf 6_{1}^{3}}; a,q)=$}
\begin{eqnarray*}
\tfrac{q^{1/2} (1-a) (1-a q) (1-a q^2) }{a^{1/2} (1-q)^3 (1-q^2)^2 (1-q^3)}
\begin{small}\left( \begin{array}{cccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & -1 & -1 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & -2 & 1 & 3 & 0 & -5 & -2 & 4 & 2 & -2 & -2 \\ 0 & 1 & -1 & -3 & 3 & 5 & -3 & -6 & 1 & 6 & 0 & -3 & -1 & 2 \\ 1 & -2 & -1 & 3 & 2 & -3 & -3 & 2 & 3 & -1 & -2 & 1 & 0 & 0 \\ \end{array} \right)\end{small}
\end{eqnarray*}
\item{$\overline{P}_{([1],[3],[3])}({\bf 6_{1}^{3}}; a,q)=$}
\begin{eqnarray*}
&&\tfrac{q^{1/2}(1-a) (1-a q) (1-a q^2) }{ a^{1/2}(1-q)^3 (1-q^2)^2 (1-q^3)^2}\\
&&\begin{small}\left( \begin{array}{cccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & -2 & 3 & -1 & 0 & -3 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & -3 & -1 & -2 & 8 & 0 & 0 & -5 & 2 & 2 & 2 \\ 0 & 0 & 0 & 0 & 0 & -3 & 3 & 4 & -1 & -7 & -6 & 11 & 4 & -2 & -9 & 0 & 5 & 0 & -1 & -2 \\ 0 & 0 & 2 & -5 & 1 & 3 & 7 & -7 & -10 & 4 & 11 & 2 & -9 & -5 & 7 & 1 & 0 & -3 & 2 & 0 \\ 1 & -2 & 0 & -1 & 6 & -1 & -5 & -3 & 4 & 6 & -3 & -5 & 2 & 1 & 2 & -3 & 1 & 0 & 0 & 0 \\ \end{array} \right)\end{small}
\end{eqnarray*}
\item{$\overline{P}_{([2],[2],[2])}({\bf 6_{1}^{3}}; a,q)=$}
\begin{eqnarray*}
\tfrac{a(1-a) (1-a q) }{q(1-q)^3 (1-q^2)^3}
\begin{footnotesize}\left( \begin{array}{ccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -3 & 0 & 4 & -2 & -3 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 3 & 2 & -11 & 3 & 16 & -8 & -8 & 7 & 2 \\ 0 & 0 & 0 & -1 & -3 & 9 & 3 & -25 & 8 & 28 & -20 & -14 & 15 & 1 & -5 \\ 0 & 1 & -2 & -5 & 15 & 4 & -35 & 12 & 38 & -28 & -17 & 23 & -1 & -7 & 3 \\ 2 & -3 & -9 & 18 & 11 & -41 & 7 & 43 & -27 & -18 & 22 & -1 & -6 & 2 & 0 \\ 1 & -4 & 2 & 12 & -17 & -8 & 28 & -8 & -17 & 12 & 2 & -4 & 1 & 0 & 0 \\ \end{array} \right)\end{footnotesize}
\end{eqnarray*}
\item{$\overline{P}_{([2],[2],[3])}({\bf 6_{1}^{3}}; a,q)=$}
\begin{eqnarray*}
&&\tfrac{a^{1/2}(1-a) (1-a q) (1-a q^2) }{q^{1/2} (1-q)^3 (1-q^2)^3 (1-q^3)}\times\\
&&\begin{footnotesize}\left( \begin{array}{ccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2 & -1 & 2 & 2 & -2 & -3 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 4 & -5 & -6 & 5 & 9 & 2 & -10 & -3 & 7 & 2 \\ 0 & 0 & 0 & 0 & 0 & -3 & 2 & 8 & -3 & -14 & -4 & 19 & 11 & -17 & -13 & 7 & 10 & -2 & -5 \\ 0 & 0 & 2 & -5 & -3 & 13 & 6 & -16 & -18 & 15 & 29 & -11 & -25 & 2 & 15 & 4 & -8 & -2 & 3 \\ 1 & 0 & -6 & 0 & 16 & 1 & -25 & -6 & 27 & 13 & -22 & -15 & 14 & 10 & -7 & -3 & 2 & 0 & 0 \\ 1 & -3 & -1 & 10 & -3 & -13 & 3 & 12 & 3 & -13 & -3 & 10 & -1 & -3 & 1 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{footnotesize}
\end{eqnarray*}
\item{$\overline{P}_{([2],[3],[3])}({\bf 6_{1}^{3}}; a,q)=$}
\begin{eqnarray*}
&&\tfrac{a^{1/2}(1-a) (1-a q) (1-a q^2) }{q^{1/2} (1-q)^3 (1-q^2)^3 (1-q^3)^2}\times\\
&&\begin{tiny}\left( \begin{array}{cccccccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 0 & -3 & 0 & 2 & 3 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -3 & -1 & 2 & 7 & -1 & -12 & -5 & 6 & 8 & -2 & -7 & -2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 3 & 2 & -9 & -7 & 10 & 21 & -2 & -27 & -10 & 19 & 20 & -4 & -15 & -1 & 5 & 5 \\ 0 & 0 & 0 & 0 & 0 & 0 & -1 & -3 & 9 & 7 & -16 & -22 & 12 & 44 & 3 & -49 & -25 & 30 & 37 & -10 & -28 & -3 & 11 & 6 & -4 & -3 \\ 0 & 0 & 0 & 1 & -2 & -6 & 9 & 18 & -12 & -39 & -2 & 59 & 29 & -56 & -51 & 28 & 56 & -3 & -39 & -7 & 17 & 8 & -6 & -4 & 3 & 0 \\ 0 & 2 & -2 & -8 & 3 & 21 & 5 & -37 & -27 & 43 & 52 & -27 & -64 & 0 & 56 & 15 & -33 & -16 & 13 & 11 & -5 & -4 & 2 & 0 & 0 & 0 \\ 1 & -2 & -3 & 5 & 9 & -5 & -22 & 2 & 30 & 9 & -28 & -22 & 22 & 21 & -10 & -14 & 1 & 9 & -1 & -3 & 1 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{tiny}
\end{eqnarray*}
\end{itemize}
\subsubsection{$ 6_3^3$ link}
\begin{figure}[h]
\centering{\includegraphics[scale=.8]{torus33}}
\caption{${\bf 6_3^3}$ link}
\end{figure}
\begin{eqnarray*}
\overline{P}_{(R_{1},R_{2},R_{3})}({\bf 6_3^3};\, a,q) & = &q^{\frac{{\ell}^{(1)}{\ell}^{(2)}}{N}+\frac{{\ell}^{(2)}{\ell}^{(3)}}{N}-\frac{{\ell}^{(1)}{\ell}^{(3)}}{N}} \sum_{l,x,y,z}\frac{1}{\epsilon_{l}^{R_{1},\overline{R}_{1}}\sqrt{\dim_{q}l}}\epsilon_{x}^{\overline{R}_{1},R_{2}}\sqrt{\dim_{q}x}\,\\
& & \times\epsilon_{y}^{R_{2},R_{3}}\sqrt{\dim_{q}y}\,\epsilon_{z}^{R_{1},R_{3}}\sqrt{\dim_{q}z}\, a_{lx}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & \overline{R}_{1}\\
R_{2} & \overline{R}_{2}
\end{array}\end{footnotesize}\right]\, a_{yl}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{2} & R_{3}\\
\overline{R}_{3} & \overline{R}_{2}
\end{array}\end{footnotesize}\right]\,\\
& & \times a_{zl}\!\left[\begin{footnotesize}\begin{array}{cc}
R_{1} & R_{3}\\
\overline{R}_{3} & \overline{R}_{1}
\end{array}\end{footnotesize}\right](\lambda_{x}^{(-)}(\overline{R}_{1},R_{2}))^{-2}\,(\lambda_{y}^{(+)}(R_{2},R_{3}))^{-2}\,(\lambda_{z}^{(+)}(R_{1},R_{3}))^{2}\end{eqnarray*}
The colored HOMFLY invariants of the link ${\bf 6_3^3}$ are symmetric under the interchange of the representations $R_1$ and $R_3$.
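For instance, $\overline{P}_{([1],[1],[2])}({\bf 6_3^3};a,q)=\overline{P}_{([2],[1],[1])}({\bf 6_3^3};a,q)$, as recorded below, whereas the middle color $R_2$ plays a distinguished role, so invariants such as $\overline{P}_{([1],[2],[1])}({\bf 6_3^3};a,q)$ are listed separately.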
\begin{itemize}
\item{$\overline{P}_{([1],[1],[1])}({\bf 6_3^3}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a)}{a^{5/2}q^{1/2} (1-q)^3}
\left( \begin{array}{ccccc} 0 & 2 & -3 & 2 & 0 \\ -1 & 1 & -2 & 1 & -1 \\ 0 & 1 & -1 & 1 & 0 \\ \end{array} \right)
\end{eqnarray*}
\item{$\overline{P}_{([1],[1],[2])}({\bf 6_3^3}; a,q)=\overline{P}_{([2],[1],[1])}({\bf 6_3^3}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q) }{a^{3} q (1-q)^3 (1-q^2)}\begin{small}\left( \begin{array}{ccccccc} 0 & 0 & 2 & -2 & 0 & 1 & 0 \\ -1 & 1 & 0 & -2 & 0 & 1 & -1 \\ 0 & 1 & -1 & 0 & 1 & 0 & 0 \\ \end{array} \right)\end{small}
\end{eqnarray*}
\item{$\overline{P}_{([1],[1],[3])}({\bf 6_3^3}; a,q)= \overline{P}_{([3],[1],[1])}({\bf 6_3^3}; a,q)=$}
\begin{eqnarray*}
\begin{small} \tfrac{(1-a) (1-a q) (1-a q^2)}{a^{7/2} q^{3/2} (1-q)^3 (1-q^2) (1-q^3)}\left( \begin{array}{ccccccccc} 0 & 0 & 0 & 2 & -2 & 1 & -1 & 1 & 0 \\ -1 & 1 & 0 & 0 & -2 & 0 & 0 & 1 & -1 \\ 0 & 1 & -1 & 0 & 0 & 1 & 0 & 0 & 0 \\ \end{array} \right)
\end{small}
\end{eqnarray*}
\item{$\overline{P}_{([1],[2],[1])}({\bf 6_3^3}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q) }{a^3 q^2(1-q)^3 (1-q^2)}
\begin{small}\left( \begin{array}{cccccc} 1 & -1 & 1 & 0 & -1 & 1 \\ -1 & 0 & 0 & -1 & 1 & -1 \\ 0 & 1 & -1 & 1 & 0 & 0 \\ \end{array} \right)\end{small}
\end{eqnarray*}
\item{$\overline{P}_{([1],[2],[2])}({\bf 6_3^3}; a,q) = \overline{P}_{([2],[2],[1])}({\bf 6_3^3}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q) }{a^{7/2}q^{7/2} (1-q)^3 (1-q^2)^2}\begin{small}\left( \begin{array}{cccccccccc} 0 & 0 & -2 & 2 & 2 & -5 & 1 & 3 & -2 & 0 \\ 1 & -1 & 1 & 3 & -3 & -1 & 4 & -1 & -1 & 1 \\ -1 & 0 & 1 & -2 & -1 & 1 & 0 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{small}
\end{eqnarray*}
\item{$\overline{P}_{([1],[2],[3])}({\bf 6_3^3}; a,q) = \overline{P}_{([3],[2],[1])}({\bf 6_3^3}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q) (1-a q^2) }{a^4 q^5 (1-q)^3 (1-q^2)^2 (1-q^3)}\begin{small}\left( \begin{array}{ccccccccccccc} 0 & 0 & 0 & -2 & 1 & 2 & 0 & -3 & -1 & 3 & 0 & -1 & 0 \\ 1 & -1 & 0 & 1 & 3 & -1 & -3 & 1 & 2 & 1 & -1 & -1 & 1 \\ -1 & 0 & 1 & 0 & -2 & -1 & 0 & 1 & 0 & -1 & 0 & 0 & 0 \\ 0 & 1 & -1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{small}\end{eqnarray*}
\item{$\overline{P}_{([1],[3],[1])}({\bf 6_3^3}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a q) (1-a) (1-a q^2)}{a^{7/2} q^{7/2} (1-q)^3 (1-q^2) (1-q^3)}\begin{small}
\left( \begin{array}{cccccccc} 1 & 0 & -1 & 1 & -1 & 2 & -2 & 1 \\ -1 & 0 & -1 & 1 & -1 & 1 & -1 & 0 \\ 0 & 1 & -1 & 1 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{small}\end{eqnarray*}
\item{$\overline{P}_{([1],[3],[2])}({\bf 6_3^3}; a,q) = \overline{P}_{([2],[3],[1])}({\bf 6_3^3}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a)(1-a q) (1-a q^2) }{a^4 q^4 (1-q)^3 (1-q^2)^2 (1-q^3)}\begin{small}
\left( \begin{array}{cccccccccccc} 0 & -1 & 1 & -1 & 0 & 1 & 0 & -1 & -1 & 1 & 1 & -1 \\ 1 & 0 & 0 & 1 & 1 & -1 & -1 & 1 & 2 & -1 & -1 & 1 \\ -1 & 0 & 0 & 0 & -2 & 0 & 1 & 0 & -1 & 0 & 0 & 0 \\ 0 & 1 & -1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{small}\end{eqnarray*}
\item{$\overline{P}_{([1],[3],[3])}({\bf 6_3^3}; a,q) = \overline{P}_{([3],[3],[1])}({\bf 6_3^3}; a,q)=$}
\begin{equation*}\tfrac{(1-a) (1-a q) (1-a q^2)}{a^{9/2}q^{17/2} (1-q)^3 (1-q^2)^2 (1-q^3)^2} \begin{footnotesize}
\left( \begin{array}{cccccccccccccccccc} 0 & 0 & 0 & 0 & 2 & -2 & -1 & 1 & 3 & 0 & -5 & 0 & 4 & 1 & -1 & -3 & 2 & 0 \\ 0 & -1 & 1 & -1 & -2 & -1 & 3 & 2 & -4 & -4 & 2 & 4 & 0 & -4 & 0 & 1 & 1 & -1 \\ 1 & 0 & 0 & 0 & 2 & 3 & -2 & -1 & -1 & 4 & 1 & -1 & -1 & 0 & 1 & 0 & 0 & 0 \\ -1 & 0 & 0 & 1 & -2 & -1 & -1 & 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & -1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{footnotesize}\end{equation*}
\item{$\overline{P}_{([2],[1],[2])}({\bf 6_3^3}; a,q)=$}
\begin{eqnarray*}
\begin{small}\tfrac{(1-a) (1-a q) }{a^{7/2} q^{3/2} (1-q)^3 (1-q^2)^2}
\left( \begin{array}{cccccccccc} 0 & 0 & 0 & 0 & 0 & -2 & 1 & 2 & -2 & 0 \\ 0 & 0 & 1 & 1 & -3 & 1 & 5 & -3 & -1 & 2 \\ -1 & 1 & 1 & -3 & 0 & 2 & -2 & -1 & 1 & -1 \\ 0 & 1 & -1 & -1 & 2 & 0 & -1 & 1 & 0 & 0 \\ \end{array} \right)\end{small}\end{eqnarray*}
\item{$\overline{P}_{([2],[1],[3])}({\bf 6_3^3}; a,q) = \overline{P}_{([3],[1],[2])}({\bf 6_3^3}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q) (1-a q^2) }{a^4 q^2 (1-q)^3 (1-q^2)^2 (1-q^3)}\begin{small}
\left( \begin{array}{ccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2 & 1 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & 1 & -3 & 0 & 3 & 2 & -1 & -2 & 1 & 1 \\ -1 & 1 & 1 & -1 & -2 & 0 & 2 & 0 & -2 & -1 & 0 & 1 & -1 \\ 0 & 1 & -1 & -1 & 1 & 1 & 0 & -1 & 0 & 1 & 0 & 0 & 0 \\ \end{array} \right)\end{small}\end{eqnarray*}
\item{$\overline{P}_{([2],[2],[2])}({\bf 6_3^3}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q) }{a^5 q^4 (1-q)^3(1-q^2)^3}\begin{small}
\left( \begin{array}{cccccccccccccc} 0 & 0 & 0 & 0 & 0 & 3 & -3 & -4 & 7 & 1 & -5 & 1 & 1 & 0 \\ 0 & 0 & -1 & -1 & 2 & 0 & -5 & 1 & 4 & -3 & -2 & 2 & 0 & -1 \\ 1 & -1 & 0 & 3 & 0 & -2 & 2 & 2 & 0 & 0 & 0 & 0 & 1 & 0 \\ -1 & 0 & 2 & -2 & -2 & 2 & -1 & -2 & 1 & 0 & -1 & 0 & 0 & 0 \\ 0 & 1 & -1 & -1 & 2 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{small}\end{eqnarray*}
\item{$\overline{P}_{([2],[2],[3])}({\bf 6_3^3}; a,q) = \overline{P}_{([3],[2],[2])}({\bf 6_3^3}; a,q)=$}
\begin{equation*}\tfrac{(1-a) (1-a q) (1-a q^2)}{a^{11/2}q^{11/2} (1-q)^3 (1-q^2)^3 (1-q^3)}\begin{footnotesize}
\left( \begin{array}{ccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 3 & -2 & -4 & 3 & 2 & 1 & -2 & -2 & 2 & 0 \\ 0 & 0 & 0 & -1 & -1 & 2 & 2 & -4 & -5 & 2 & 6 & 0 & -6 & -2 & 4 & 1 & -2 \\ 1 & -1 & -1 & 2 & 2 & 0 & -3 & -1 & 6 & 3 & -3 & -3 & 2 & 3 & -1 & -1 & 1 \\ -1 & 0 & 2 & 0 & -3 & -1 & 2 & 1 & -2 & -2 & 0 & 1 & 0 & -1 & 0 & 0 & 0 \\ 0 & 1 & -1 & -1 & 1 & 1 & 0 & -1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{footnotesize}\end{equation*}
\item{$\overline{P}_{([2],[3],[2])}({\bf 6_3^3}; a,q)=$}
\begin{eqnarray*}
\tfrac{(1-a) (1-a q) (1-a q^2) }{a^{11/2}q^{15/2} (1-q)^3 (1-q^2)^3 (1-q^3)}\begin{small}
\left( \begin{array}{ccccccccccccccccc} 0 & 0 & 0 & 0 & 2 & -1 & -4 & 4 & 3 & -3 & -3 & 1 & 5 & -2 & -3 & 2 & 0 \\ 0 & -1 & 1 & 0 & -4 & 0 & 4 & 1 & -5 & -3 & 4 & 2 & -2 & -2 & 1 & 1 & -1 \\ 1 & 0 & -1 & 1 & 2 & 2 & -1 & -2 & 2 & 1 & 1 & 0 & -1 & 0 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 & -2 & -1 & 1 & -1 & -1 & 1 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & -1 & -1 & 2 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{small}\end{eqnarray*}
\item{$\overline{P}_{([2],[3],[3])}({\bf 6_3^3}; a,q)=\overline{P}_{([3],[3],[2])}({\bf 6_3^3}; a,q) =$}
\begin{eqnarray*}
&&\tfrac{(1-a) (1-a q) (1-a q^2) }{a^6q^{10}(1-q)^3 (1-q^2)^3 (1-q^3)}\times \\
&&\begin{footnotesize}
\left( \begin{array}{ccccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -3 & 2 & 4 & 0 & -7 & -4 & 9 & 5 & -5 & -6 & 1 & 5 & -1 & -1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & -2 & 0 & 3 & 4 & -3 & -7 & 3 & 8 & 2 & -7 & -4 & 5 & 3 & -1 & -2 & 0 & 1 \\ 0 & -1 & 1 & 0 & -2 & -2 & 1 & 2 & -2 & -6 & 0 & 3 & 2 & -4 & -3 & 2 & 1 & -1 & -1 & 0 & 1 & -1 & 0 \\ 1 & 0 & -1 & 0 & 2 & 2 & 0 & -2 & 2 & 3 & 2 & -1 & -2 & 2 & 2 & 0 & -1 & 0 & 1 & 0 & 0 & 0 & 0 \\ -1 & 0 & 1 & 1 & -2 & -2 & 1 & 1 & -1 & -2 & -1 & 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & -1 & -1 & 1 & 1 & 0 & -1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right) \end{footnotesize}\end{eqnarray*}
\item{$\overline{P}_{([3],[1],[3])}({\bf 6_3^3}; a,q)=$}
\begin{eqnarray*}
&& \tfrac{(1-a) (1-a q) (1-a q^2) }{a^{9/2} q^{5/2}(1-q)^3 (1-q^2)^2 (1-q^3)^2}\times\\
&&\begin{footnotesize}
\left( \begin{array}{cccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & -1 & 0 & -2 & 2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & 1 & 2 & -1 & -3 & -3 & 3 & 1 & 0 & -2 \\ 0 & 0 & 0 & 1 & 0 & 0 & -3 & 1 & 4 & 2 & -2 & -4 & 3 & 4 & 0 & -1 & -1 & 2 \\ -1 & 1 & 1 & 0 & -3 & -1 & 3 & 2 & -2 & -3 & 0 & 2 & -1 & -1 & -1 & 1 & -1 & 0 \\ 0 & 1 & -1 & -1 & 0 & 2 & 1 & -2 & -1 & 1 & 1 & 0 & -1 & 1 & 0 & 0 & 0 & 0 \\ \end{array} \right)
\end{footnotesize}\end{eqnarray*}
\item{$\overline{P}_{([3],[2],[3])}({\bf 6_3^3}; a,q)=$}
\begin{eqnarray*}
&& \tfrac{(1-a) (1-a q) (1-a q^2) }{a^6 q^7 (1-q)^3(1-q^2)^3 (1-q^3)^2}\times\\
&&\begin{footnotesize}
\left( \begin{array}{ccccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -3 & 1 & 4 & 1 & -4 & -4 & 3 & 3 & -1 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & -5 & -1 & 7 & 7 & -4 & -11 & 2 & 9 & 2 & -4 & -3 & 2 & 1 \\ 0 & 0 & 0 & -1 & 0 & 0 & 2 & 0 & -4 & -4 & 1 & 6 & 0 & -8 & -4 & 3 & 4 & -2 & -3 & 0 & 1 & 0 & -1 \\ 1 & -1 & -1 & 1 & 3 & 0 & -3 & -2 & 3 & 4 & 2 & -2 & -1 & 1 & 3 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ -1 & 0 & 2 & 1 & -3 & -3 & 2 & 4 & -1 & -4 & -1 & 2 & 0 & -2 & -1 & 1 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & -1 & -1 & 0 & 2 & 1 & -2 & -1 & 1 & 1 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{footnotesize}\end{eqnarray*}
\item{$\overline{P}_{([3],[3],[3])}({\bf 6_3^3}; a,q)=$}
\begin{eqnarray*}
&&\tfrac{(1-a) (1-a q) (1-a q^2) }{a^{15/2} q^{23/2} (1-q)^3 (1-q^2)^3 (1-q^3)^3} \times\\
&&\begin{tiny}
\left( \begin{array}{ccccccccccccccccccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 4 & -3 & -4 & -2 & 7 & 8 & -7 & -7 & -2 & 7 & 5 & -4 & -1 & -2 & 2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & -1 & 3 & 2 & -3 & -9 & -2 & 11 & 9 & -7 & -15 & -1 & 12 & 6 & -6 & -7 & 2 & 3 & 1 & -2 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & -1 & 1 & 3 & 3 & 1 & -4 & -3 & 6 & 9 & 4 & -11 & -6 & 5 & 10 & 1 & -6 & -2 & 3 & 1 & 0 & -1 & 1 \\ 0 & -1 & 1 & 0 & -1 & -2 & 0 & 2 & 0 & -3 & -4 & -4 & 1 & 1 & 2 & -5 & -6 & -1 & 4 & 2 & -3 & -4 & 1 & 1 & 1 & -2 & 0 & 0 & 0 \\ 1 & 0 & -1 & -1 & 2 & 3 & -1 & -2 & 0 & 3 & 2 & 1 & 3 & 1 & -1 & -1 & 2 & 4 & 0 & -1 & -1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 1 & 2 & -2 & -3 & 0 & 3 & 1 & -3 & -2 & 1 & 0 & -1 & -2 & 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & -1 & -1 & 0 & 2 & 1 & -2 & -1 & 1 & 1 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right)\end{tiny}\end{eqnarray*}
\end{itemize}
\section{Conclusions}\label{sec:conclusion}
In this paper, we evaluate colored HOMFLY invariants carrying symmetric representations of various non-torus knots and links by using the multiplicity-free $SU(N)$ quantum Racah coefficients \cite{Nawata:2013ppa} in the context of Chern-Simons theory. This method provides a powerful tool for carrying out explicit computations of colored HOMFLY invariants.
From the observation in \S\ref{sec:links}, we predict the following properties of multi-colored HOMFLY invariants of links.
For an $s$-component link $\cal L$, an unreduced colored HOMFLY invariant ${\overline P}_{([n_1],\cdots,[n_s])}({\cal L};a,q)$
contains the unknot factor ${\overline P}_{[n_{\rm max}]}(\bigcirc;a,q)$ colored by the highest rank $n_{\rm max}=\max (n_1,\cdots,n_s)$.
Therefore, it is reasonable to define the reduced colored HOMFLY invariants ${P}_{([n_1],\cdots,[n_s])}({\cal L};a,q)$ by
\begin{equation*}
{P}_{([n_1],\cdots,[n_s])}({\cal L};a,q)={\overline P}_{([n_1],\cdots,[n_s])}({\cal L};a,q)/{\overline P}_{[n_{\rm max}]}(\bigcirc;a,q)
\end{equation*}
for symmetric representations. Furthermore, if we normalize by
\begin{eqnarray*}
\frac{1}{(a;q)_{n_{\max}}}\left[\prod_{i=1}^s (q;q)_{n_i}\right]{\overline P}_{([n_1],\cdots,[n_s])}({\cal L};a,q)~,
\end{eqnarray*}
then it becomes a Laurent polynomial with respect to the variables $(a,q)$. Moreover, the Laurent polynomials obey
the exponential growth property
\begin{eqnarray*}
&&{\rm lim}_{q\to1}\frac{1}{(a;q)_{kn_{\max}}}\left[\prod_{i=1}^s (q;q)_{kn_i}\right]{\overline P}_{([kn_1],\cdots,[kn_s])}({\cal L};a,q)\\
&=&\left[{\rm lim}_{q\to1}\frac{1}{(a;q)_{n_{\max}}}\left[\prod_{i=1}^s (q;q)_{n_i}\right]{\overline P}_{([n_1],\cdots,[n_s])}({\cal L};a,q)\right]^k
\end{eqnarray*}
where ${\rm gcd}(n_1,\cdots,n_s)=1$.
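As a concrete illustration, using only the data tabulated above and assuming the standard convention $(x;q)_n=\prod_{i=0}^{n-1}(1-xq^{i})$, take ${\cal L}={\bf 6_1^3}$ with colors $([1],[1],[2])$, so that $n_{\rm max}=2$ and the reduced invariant is ${P}_{([1],[1],[2])}({\bf 6_1^3};a,q)={\overline P}_{([1],[1],[2])}({\bf 6_1^3};a,q)/{\overline P}_{[2]}(\bigcirc;a,q)$. The normalization factor reads
\begin{eqnarray*}
\frac{(q;q)_{1}^{2}\,(q;q)_{2}}{(a;q)_{2}}=\frac{(1-q)^{3}(1-q^{2})}{(1-a)(1-aq)}\,,
\end{eqnarray*}
which exactly cancels the prefactor of $\overline{P}_{([1],[1],[2])}({\bf 6_1^3};a,q)$ displayed above, leaving the integer-coefficient polynomial encoded by the corresponding matrix, manifestly a Laurent polynomial in $(a,q)$. Similarly, for $s=2$ and $(n_1,n_2)=(1,2)$, so that ${\rm gcd}(n_1,n_2)=1$, the growth property with $k=2$ relates the $q\to1$ limit of the normalized $\overline{P}_{([2],[4])}$ to the square of that of the normalized $\overline{P}_{([1],[2])}$; both invariants are tabulated above for the link ${\bf 7_5^2}$.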
A natural direction for further study is to categorify these invariants. In particular, colored HOMFLY homologies for thick knots are known only for the knots $\bf 8_{19}$ and $\bf 9_{42}$ \cite{Gukov:2011ry}. To distinguish generic properties from particular ones that may hold accidentally for simple knots, it is important to obtain explicit expressions for the colored HOMFLY homologies of the ten-crossing thick knots in \S\ref{sec:thick}. The colored HOMFLY homology for links will be studied in \cite{Gukov:2013}.
Although the closed-form expression \cite{Nawata:2013ppa} of the multiplicity-free $SU(N)$ quantum Racah coefficients extends the scope for calculations of colored HOMFLY polynomials to some extent, we have not succeeded in obtaining the invariants for the knot ${\bf 10_{161}}$ and the link ${\bf 7_6^2}$. In addition, the information about colored HOMFLY polynomials beyond symmetric representations is very limited \cite{Anokhina:2012rm}. To deal with more complicated links and non-symmetric representations, further study of the $SU(N)$ quantum Racah coefficients with multiplicity structure is required.
We hope to report on these issues in the future.
\section*{Acknowledgement}
The authors would like to thank Andrei Mironov, Alexei Morozov and Andrey Morozov for sharing Maple files. S.N. is indebted to Petr Dunin-Barkowski, Sergei Gukov, Kenichi Kawagoe, Alexei Sleptsov, Marko Sto$\check{\text{s}}$i$\acute{\text{c}}$, Piotr Su{\l}kowski and Miguel Tierz for valuable discussions and correspondence. In addition, S.N. is grateful to IIT Bombay for its warm hospitality. S.N. and Z. would like to thank Indian String Meeting 2012 at Puri for providing a stimulating academic environment. The work of S.N. is partially supported by the ERC Advanced
Grant no.~246974, \textit{``Supersymmetry: a window to non-perturbative physics''}.
\titlespacing*\section{0pt}{14pt plus 2pt minus 2pt}{5pt plus 2pt minus 2pt}
\titlespacing*\subsection{0pt}{4pt plus 1pt minus 1pt}{5pt plus 2pt minus 2pt }
\titlespacing*\subsubsection{0pt}{4pt plus 1pt minus 1pt}{5pt plus 2pt minus 2pt}
\renewcommand\bibsection{%
\section*{References}%
\markboth{\MakeUppercase{\refname}}{\MakeUppercase{\refname}}%
}%
\usepackage{chngcntr}
\usepackage{apptools}
\makeatletter
\def\mythanks#1{%
\protected@xdef \@thanks {\@thanks \protect \footnotetext [\the \c@footnote ]{#1}}%
}
\makeatother
\bibliographystyle{ecta}
\onehalfspacing
\def\boxit#1{\vbox{\hrule\hbox{\vrule\kern6pt
\vbox{\kern6pt#1\kern6pt}\kern6pt\vrule}\hrule}}
\def\claudia#1{\vskip 2mm\boxit{\vskip 2mm{\color{blue}\bf#1} {\color{orange}\bf -- CN\vskip 2mm}}\vskip 2mm}
\usepackage{pgfplots}
\usepgfplotslibrary{dateplot,fillbetween}
\definecolor{darkbrown}{RGB}{139, 69, 19}
\definecolor{darkred}{RGB}{180,0, 10}
\definecolor{darkorange}{RGB}{255,140,0}
\newcommand{\mathbb{E}}{\mathbb{E}}
\newcommand{\mathbb{V}}{\mathbb{V}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathbb{N}}{\mathbb{N}}
\newcommand{\mathbb{P}}{\mathbb{P}}
\newcommand{\mathcal{C}}{\mathcal{C}}
\newcommand{\1}[1]{\mathbf{1}\{#1\}}
\newcommand{\textrm{Var}}{\textrm{Var}}
\newcommand{\textrm{Cor}}{\textrm{Cor}}
\newcommand{\textrm{Cov}}{\textrm{Cov}}
\newcommand{\textrm{supp}}{\textrm{supp}}
\newcommand{\textrm{range}}{\textrm{range}}
\newcommand{\textrm{sign}}{\textrm{sign}}
\DeclareMathOperator*{\plim}{plim}
\newcommand{\widehat}{\widehat}
\newcommand{\widetilde}{\widetilde}
\newcommand{\mathcal}{\mathcal}
\newcommand{\mathbb}{\mathbb}
\newcommand{\textrm{arg\,min}}{\textrm{arg\,min}}
\newcommand{\textrm{arg\,max}}{\textrm{arg\,max}}
\newcommand{\textnormal{SR}}{\textnormal{SR}}
\newcommand{1\hspace{-0,85ex}1}{1\hspace{-0,85ex}1}
\newcommand{\pi_{\textnormal{DF}}}{\pi_{\textnormal{DF}}}
\newcommand{\pi_{\textnormal{CO}}}{\pi_{\textnormal{CO}}}
\newcommand{\pi_{\textnormal{AT}}}{\pi_{\textnormal{AT}}}
\newcommand{\pi_{\textnormal{NT}}}{\pi_{\textnormal{NT}}}
\newcommand{\pi_{\textnormal{d}}}{\pi_{\textnormal{d}}}
\newcommand{\textnormal{LATE}_{\textnormal{CO}}}{\textnormal{LATE}_{\textnormal{CO}}}
\newcommand{\Delta_{\textnormal{CO}}}{\Delta_{\textnormal{CO}}}
\newcommand{\underline{\pi}_{\textnormal{CO}}}{\underline{\pi}_{\textnormal{CO}}}
\newcommand{\overline{\pi}_{\textnormal{CO}}}{\overline{\pi}_{\textnormal{CO}}}
\newcommand{\underline{\pi}_{DF}}{\underline{\pi}_{DF}}
\newcommand{\overline{\pi}_{DF}}{\overline{\pi}_{DF}}
\newcommand{\widehat{\underline{\pi}}_{DF}}{\widehat{\underline{\pi}}_{DF}}
\newcommand{\widehat{\overline{\pi}}_{DF}}{\widehat{\overline{\pi}}_{DF}}
\newcommand{F_{Y_1^{CO}}}{F_{Y_1^{CO}}}
\newcommand{F_{Y_0^{CO}}}{F_{Y_0^{CO}}}
\newcommand{f_{Y_1^{CO}}}{f_{Y_1^{CO}}}
\newcommand{f_{Y_0^{CO}}}{f_{Y_0^{CO}}}
\newcommand{F_{Y_d^{CO}}}{F_{Y_d^{CO}}}
\newcommand{f_{Y_d^{CO}}}{f_{Y_d^{CO}}}
\newcommand{F_{Y_1^{DF}}}{F_{Y_1^{DF}}}
\newcommand{F_{Y_0^{DF}}}{F_{Y_0^{DF}}}
\newcommand{f_{Y_1^{DF}}}{f_{Y_1^{DF}}}
\newcommand{f_{Y_0^{DF}}}{f_{Y_0^{DF}}}
\newcommand{F_{Y_d^{DF}}}{F_{Y_d^{DF}}}
\newcommand{f_{Y_d^{DF}}}{f_{Y_d^{DF}}}
\newcommand{F_{Y_1^{AT}}}{F_{Y_1^{AT}}}
\newcommand{f_{Y_1^{AT}}}{f_{Y_1^{AT}}}
\newcommand{F_{Y_0^{NT}}}{F_{Y_0^{NT}}}
\newcommand{f_{Y_0^{NT}}}{f_{Y_0^{NT}}}
\newcommand{q_{\textnormal{d(1-d)}}}{q_{\textnormal{d(1-d)}}}
\newcommand{Q_{\textnormal{d(1-d)}}}{Q_{\textnormal{d(1-d)}}}
\newcommand{Q_{\textnormal{dd}}}{Q_{\textnormal{dd}}}
\newcommand{q_{\textnormal{dd}}}{q_{\textnormal{dd}}}
\newcommand{c_{\textnormal{min}}}{c_{\textnormal{min}}}
\newcommand{c_{\textnormal{max}}}{c_{\textnormal{max}}}
\newcommand{\overline{F}_{Y_1^{CO}}}{\overline{F}_{Y_1^{CO}}}
\newcommand{\overline{F}_{Y_0^{CO}}}{\overline{F}_{Y_0^{CO}}}
\newcommand{\underline{F}_{Y_1^{CO}}}{\underline{F}_{Y_1^{CO}}}
\newcommand{\underline{F}_{Y_0^{CO}}}{\underline{F}_{Y_0^{CO}}}
\newcommand{\overline{F}_{Y_d^{CO}}}{\overline{F}_{Y_d^{CO}}}
\newcommand{\underline{F}_{Y_d^{CO}}}{\underline{F}_{Y_d^{CO}}}
\newcommand{\overline{F}_{Y_1^{DF}}}{\overline{F}_{Y_1^{DF}}}
\newcommand{\overline{F}_{Y_0^{DF}}}{\overline{F}_{Y_0^{DF}}}
\newcommand{\underline{F}_{Y_1^{DF}}}{\underline{F}_{Y_1^{DF}}}
\newcommand{\underline{F}_{Y_0^{DF}}}{\underline{F}_{Y_0^{DF}}}
\newcommand{\overline{F}_{Y_d^{DF}}}{\overline{F}_{Y_d^{DF}}}
\newcommand{\underline{F}_{Y_d^{DF}}}{\underline{F}_{Y_d^{DF}}}
\newcommand{y, \pdf, \delta}{y, \pi_{\textnormal{DF}}, \delta}
\newcommand{P_{dd}}{P_{dd}}
\newcommand{P_{d(1-d)}}{P_{d(1-d)}}
\newcommand{G_d^{\sup}}{G_d^{\sup}}
\newcommand{G_d^{\inf}}{G_d^{\inf}}
\newcommand{\underline{\delta}}{\underline{\delta}}
\newcommand{\overline{\delta}}{\overline{\delta}}
\newcommand{\underline{\delta_b}}{\underline{\delta_b}}
\newcommand{\overline{\delta_b}}{\overline{\delta_b}}
\newcommand{\underline{\delta_{qs}}}{\underline{\delta_{qs}}}
\newcommand{\overline{\delta_{qs}}}{\overline{\delta_{qs}}}
\newcommand{\underline{\lambda}}{\underline{\lambda}}
\newcommand{\overline{\lambda}}{\overline{\lambda}}
\newcommand{\underline{y}}{\underline{y}}
\newcommand{\overline{y}}{\overline{y}}
\pgfmathdeclarefunction{gauss}{2}{%
\pgfmathparse{1/(#2*sqrt(2*pi))*exp(-((x-#1)^2)/(2*#2^2))}%
}
\def\cdf(#1)(#2)(#3){0.5*(1+(erf((#1-#2)/(#3*sqrt(2)))))}%
\tikzset{
declare function={
normcdf(\x,\m,\s)=1/(1 + exp(-0.07056*((\x-\m)/\s)^3 - 1.5976*(\x-\m)/\s));}
}
\begin{document}
\selectlanguage{english}
\begin{titlepage}
\newpage
\thispagestyle{empty}
\vspace{-2cm}
\begin{verbatim}
\end{verbatim}
\begin{center}
\Large \scshape{ Sensitivity of LATE Estimates to}\\
\scshape{Violations of the Monotonicity Assumption}
\end{center}
\vspace{0.5cm}
\begin{flushleft}
\begin{center}
\large Claudia Noack\footnote[$\star$]{This version: June 10, 2021. Department of Economics, University of Mannheim; E-mail: claudia.noack@gess.uni-mannheim.de. Website: claudianoack.github.io. I am grateful to my advisor Christoph Rothe for invaluable support on this project. I am thankful for insightful discussions with Tim Armstrong, Matthew Masten, Yoshiyasu Rai, and Ed Vytlacil. Furthermore, I thank Michael Dobrew, Jasmin Fliegner, Martin Huber, Paul Goldsmith-Pinkham, Sacha Kapoor, Lukas Laffers, Tomasz Olma, Vitor Possebom, Alexandre Poirier, Jonathan Roth, Pedro Sant'Anna, Konrad Stahl, Matthias Stelter, Jörg Stoye, Philipp Wangner, and seminar and conference participants at the University of Mannheim, University of Heidelberg, University of Bonn, University of Oxford, Yale University, ESEM 2019, ESWC 2020, IAAE 2019, and the 1st International Ph.D. Conference at Erasmus University Rotterdam for helpful comments and suggestions. I gratefully acknowledge financial support by the European Research Council (ERC) through grant SH1-77202.} \\
\vspace{0.75cm}
\end{center}
\end{flushleft}
\begin{abstract}
\noindent
In this paper, we develop a method to assess the sensitivity of local average treatment effect estimates to potential violations of the monotonicity assumption of Imbens and Angrist~(1994). We parameterize the degree to which monotonicity is violated using two sensitivity parameters: the first one determines the share of defiers in the population, and the second one measures differences in the distributions of outcomes between compliers and defiers. For each pair of values of these sensitivity parameters, we derive sharp bounds on the outcome distributions of compliers in the first-order stochastic dominance sense. We identify the robust region, that is, the set of all values of the sensitivity parameters for which a given empirical conclusion, e.g., that the local average treatment effect is positive, is valid.
Researchers can assess the credibility of their conclusion by evaluating whether all the plausible sensitivity parameters lie in the robust region. We obtain confidence sets for the robust region through a bootstrap procedure and illustrate the sensitivity analysis in an empirical application. We also extend this framework to analyze treatment effects of the entire population.
\end{abstract}
\setcounter{page}{0}\clearpage
\end{titlepage}
\pagenumbering{arabic}
\onehalfspacing
\section{Introduction} \label{sectionintroduction}
The local average treatment effect (LATE) framework is used for instrumental variable analysis in setups with heterogeneous treatment effects \citep{imbens1994identification}. We consider settings with a binary instrumental variable and a binary treatment variable. The Wald estimand then equals the treatment effect of \textit{compliers}, individuals whose treatment status is influenced by the instrument, given the well-known classical LATE assumptions: monotonicity, independence, and relevance. Monotonicity states that the effect of the instrument on the treatment decision is monotone across all units. In the canonical example, in which the instrument encourages units to take up the treatment, monotonicity rules out the existence of \textit{defiers}, i.e., units that receive the treatment only if the instrument discourages them. Researchers might question the validity of this assumption in empirical applications. In these settings, the local treatment effect estimates might be biased and might lead researchers to draw incorrect conclusions about the true treatment effect.
As an example of a setup in which monotonicity could plausibly be violated, consider the study of \cite{angristevens1998}, who analyze the effect of having a third child on the labor market outcomes of mothers. As the decision to have a third child is endogenous, the authors use a dummy for whether the first two children are of the same sex as an instrument. The underlying reasoning is that some parents would only decide to have a third child if their first two children were of the same sex; these parents are compliers. The monotonicity assumption seems questionable in this setting, as parents who have a strong preference for one specific sex might act as defiers. Consider, for example, parents who want to have at least two boys and whose first child is a boy.
Contrary to the incentive given by the instrument, they have two children if their second child is a boy, and three children if their second child is a girl. As the monotonicity assumption might be questionable in this example, one can question the validity of empirical conclusions drawn from the classical LATE analysis.\footnote{The other LATE assumptions seem plausible here. As the sex of a child is determined by nature, and as arguably only the number of children and not their sex influences the labor market outcome of mothers, the independence assumption seems to be satisfied. The relevance assumption is testable.}
In this paper, we provide a framework to evaluate the sensitivity of treatment effect estimates to a potential violation of the monotonicity assumption. As noted in \cite{AngristImbensRubin1996}, a violation of the monotonicity assumption always has two dimensions: The first dimension is the heterogeneous effect of the instrumental variable on the treatment variable, the presence of defiers. The second dimension is the heterogeneous effect of the treatment variable on the outcome variable, the outcome heterogeneity between defiers and compliers. We parameterize the degree to which monotonicity is violated along these two dimensions.
We parameterize the existence of defiers by their population size and the outcome heterogeneity by the Kolmogorov-Smirnov norm, which bounds the difference between the cumulative distribution functions of compliers and defiers. For each pair of values of these two sensitivity parameters, we identify sharp bounds on the outcome distribution of compliers in a first-order stochastic dominance sense. These bounds also imply sharp bounds on various treatment effects, e.g., the average treatment effect or quantile treatment effects of compliers.
Our analysis proceeds in two steps.
In a first step, we identify the \textit{sensitivity region}. The sensitivity region is the set of sensitivity parameters for which there exists a data generating process that is consistent with our model assumptions and implies both the observed probabilities and the sensitivity parameters. Since sensitivity parameters lying in the complement of the sensitivity region are not compatible with our model, we do not analyze them further. In deriving the sensitivity region, we also derive sharp bounds on the population size of defiers.
In a second step, we identify the \textit{robust region}, which is the set of sensitivity parameters that imply treatment effects that are consistent with a particular empirical conclusion; for instance, the treatment effect of compliers has a specific sign or a particular order of magnitude.\footnote{See \cite{masten2020} for a detailed exposition of this approach.} Parameters lying in the complement of the robust region, the \textit{nonrobust region}, imply treatment effects that are not, or may not be, consistent with the given empirical conclusion. The robust region and the nonrobust region are separated from each other by the \textit{breakdown frontier}, following the terminology of \cite{masten2020}. For each population size of defiers, the breakdown frontier identifies the weakest assumption about outcome heterogeneity, which is necessary to be imposed to imply treatment effects being consistent with the particular empirical conclusion under consideration.
This framework can be used in the following ways. First, by evaluating the size of the sensitivity region, one can determine the plausibility of the model. If this set is empty, the model is refuted, which implies that even if one would allow for an arbitrary violation of the monotonicity assumption, at least one of the model assumptions has to be violated. Second, researchers can analyze the sensitivity of their estimates with respect to the degree to which the monotonicity assumption is violated by varying the sensitivity parameters within the sensitivity region. Third, by evaluating the plausibility of the parameters within the robust region, researchers can assess the sign or the order of magnitude of the treatment effect.
While being transparent about the imposed assumptions, they might still arrive at a particular empirical conclusion of interest in a credible way.
Fourth, one can assess to which degree monotonicity has to be violated to overturn a particular empirical conclusion.
Within our framework, researchers can use their economic insights about the analyzed situation to judge the severity of a violation of monotonicity.
While the main focus of this paper lies on treatment effects of compliers, we also show how this framework can be exploited to analyze treatment effects of the entire population. Under further support assumptions on the outcome variable and for given sensitivity parameters, the average treatment effect of the entire population is partially identified, which complements known results in the literature \citep[see][]{kitagawa2021identification, Balke97, machado2019instrumental, kamat2018identifying}.
Since the analytic expressions of the sensitivity and robust regions are rather complicated and difficult to interpret, we provide simplified analytical expressions of these regions for a binary outcome.
To construct confidence sets for both the sensitivity and the robust region, we show that both regions are determined through mappings of some underlying parameters. These mappings are not Hadamard-differentiable, and inference methods relying on standard Delta method arguments are therefore not applicable. We show how to construct smooth mappings that bound the parameters of interest. This construction leads to mappings for which standard Delta method arguments are applicable, and we use the nonparametric bootstrap to construct valid confidence sets for the parameters of interest. With a binary outcome variable, the mappings resulting in the sensitivity and robust region are considerably simpler. Therefore, we can use a generalized Delta method to show asymptotic distributional results and apply a bootstrap procedure to construct asymptotically valid confidence sets.
We show in a Monte Carlo study that our proposed inference method has good finite sample properties. We further apply our method to the setup studied by \cite{angristevens1998} introduced above. We show that relatively strong assumptions on either the population size of the defiers or the outcome heterogeneity have to be imposed to preserve the sign of the estimated treatment effect.
This result demonstrates that the monotonicity assumption is key in the local treatment effect framework.
The remainder of this paper is structured as follows: A literature review follows, and Section~\ref{sectionsetup} illustrates the setup in a simplified setting. Section~\ref{sectionsensitivitypara} introduces the sensitivity parameters and Section~\ref{sec_identification_bounds} derives sharp bounds on the distribution functions of compliers.
The main sensitivity analysis is presented in Section~\ref{sectionsensitivtyanalysis}. Section~\ref{sectionextension} discusses extensions and Section~\ref{sectioninference} derives estimation and inference results. Section~\ref{sectionsimulations} contains a simulation study and Section~\ref{sectionempirical} an empirical example. Section~\ref{sectionconclusion} concludes. All proofs and additional materials are deferred to the appendix.
\subsection*{Literature} This paper relates to several strands of the literature. First, it contributes to the growing strand of the literature that considers sensitivity analysis in various applications. These applications include, among many others, violations of parametric assumptions, violations of moment conditions, and multiple examples within the treatment effect literature
\citep[see, among others,][]{armstrong2021sensitivity, mukhin2018sensitivity, christensen2019counterfactual, kitamura2013robustness, bonhomme2018minimizing, bonhomme2019posterior, andrews2017measuring, andrews2020informativeness, andrews2020model, andrews2020transparency, rambachan2020honest, conley2012plausibly, imbens2003, chen2015sens}.
This paper is closely related to the literature on breakdown points of \cite{HorowitzManski1995, imbens2003, KleinSantos2013, stoye2005, stoye2010partial}, and especially to \cite{masten2020, masten2021salvaging}. These papers consider several assumptions in the treatment effect literature, but not the monotonicity assumption.
Second, it is related to the literature on the local average treatment effect framework, which is formally introduced in \cite{imbens1994identification} and further developed in \cite{vytlacil2002independence}. Several papers consider violations of the monotonicity assumption through different types of assumptions.
\cite{Balke97, machado2019instrumental, hubermellace2010, manski1990, huber2015testing, huber2015testmonotonicity} consider a binary and \cite{kitagawa2021identification} a continuous outcome variable and partially identify the average treatment effect.
\cite{small2017instrumental, manskipepper2000,dahl2017s, hubermellace2010} propose alternative assumptions on the data generating process, which are strictly weaker than monotonicity, and obtain bounds on various treatment effects. \cite{richardson2010analysis} consider a binary outcome variable and derive bounds on outcome distributions for a given population size of always takers. We do not restrict attention to a binary outcome variable, and we introduce a second parameter to also bound outcome heterogeneity. We then consider these sensitivity parameters within the framework of a breakdown frontier.
\cite{de2017tolerating} shows that in the presence of defiers, under certain assumptions, the Wald estimand still identifies a convex combination of causal treatment effects of only a subpopulation of compliers. In a policy context, the treatment effect of compliers might be of particular interest because the treatment status of compliers is most likely to change with a small policy change. However, the same reasoning does not necessarily apply to a subpopulation of compliers.
\cite{Klein2010} evaluates the sensitivity of the treatment effect of compliers to random departures from monotonicity. \cite{FioriniStevens2014} give examples of analyzing the sensitivity of estimates to the monotonicity assumption, and \cite{Huber2014} considers a violation of monotonicity in a specific example. These papers do not provide sharp identification results for the treatment effect of compliers in the presence of defiers, nor do they derive the robust region. A violation of the monotonicity assumption with a non-binary instrumental variable is considered, and alternative assumptions and testing procedures are proposed, in \cite{mogstad2019identification, frandsen2019judging, norris2020effects}.
This paper contributes to this literature by presenting an effective tool to analyze the severity of a potential violation of the monotonicity assumption. It thus gives applied researchers a new tool to evaluate the robustness of their estimates to a violation of the monotonicity assumption, and their estimates may thereby gain credibility.
Our proposed inference procedure builds on seminal work about Delta methods for non-differentiable mappings by \cite{Shapiro1991, fansantos2016, dumbgen1993nondifferentiable, Hong2016}, and it further exploits ideas of smoothing population parameters by \cite{masten2020, chernozhukov2010quantile, haile2003}.
\section{Setup}\label{sectionsetup}
\subsection{Model of the Local Average Treatment Effect}
We observe the distribution of the random variables $(Y, D, Z)$, where $Y$ is the outcome of interest; $D$ is the actual treatment status, with $D=1$ if the person is treated and $D=0$ otherwise; and $Z$ is the instrument, with $Z=1$ if the person is assigned to treatment and $Z=0$ otherwise. We assume that each unit has potential outcomes $Y_0$ in the absence and $Y_1$ in the presence of treatment, and potential treatment status $D_1$ when assigned to treatment and $D_0$ when not assigned to treatment. The observed and potential outcomes are related by $Y=D Y_1 + (1-D)Y_0$, and observed and potential treatment status by $D=Z D_1 + (1-Z)D_0$.
Based on the effect of the instrument on the treatment status, we distinguish four different groups: compliers that are only treated if they are assigned to treatment (CO); defiers that are only treated if they are not assigned to treatment (DF); always takers that are independently of the instrument always treated (AT), and never takers that are never treated (NT).
We denote the population sizes of the respective groups by $\pi_{\textnormal{AT}}$, $\pi_{\textnormal{NT}}$, $\pi_{\textnormal{CO}}$, and $\pi_{\textnormal{DF}}$. We denote by $Y_d^{T}$ the potential outcome variable of group $T \in \{AT, NT, CO, DF\}$ under treatment status $d$. To simplify the notation, we write $Y_d^{dT}$ for the potential outcome variable of always takers if $d=1$ and otherwise of never takers, and similarly $\pi_{dT}$ for the respective population size.
We denote the outcome distribution of a variable $Y$ by $F_Y$, its density function, if it exists, by $f_Y$, and its support by $\mathbb{Y}$.\footnote{Throughout the paper, we implicitly assume that all necessary moments of all random variables for the parameter of interest exist; for instance, if we consider the local average treatment effect, we assume $Y_d^{T}$ has first moments for all $d\in\{0,1\}$ and $T \in \{CO, DF, AT, NT\}$.}
The key parameters of interest in this analysis are treatment effects of compliers. We denote the average treatment effect of compliers by\footnote{Similarly, the average treatment effect of defiers is denoted by $\Delta_{DF}=\mathbb{E}[Y_1-Y_0 |D_0=1, \; D_1=0]$.} $$\Delta_{CO}=\mathbb{E}[Y_1-Y_0 | D_0=0, \; D_1=1]. $$
Throughout the paper, we assume that $\mathbb P(D=1|Z=1) \geq \mathbb P (D=1|Z=0)$ without loss of generality, and we impose the following identifying assumptions.
\begin{assumption}
\label{assumptionLATE} The instrument satisfies $(Y_{1}, Y_{0}, D_1, D_0) \perp Z$ (Independence), and $\mathbb P(D=1|Z=1)>\mathbb P(D=1|Z=0)$ (Relevance).
\end{assumption}
We refer to \cite{AngristImbensRubin1996} for an extensive discussion of these assumptions.
\subsection{Illustration of the Sensitivity Analysis}
In this section, we illustrate the sensitivity analysis in a very simplified framework, where we introduce the sensitivity parameters, the sensitivity region, and the robust region. In contrast to our main sensitivity analysis in Sections~\ref{sectionsensitivitypara}--\ref{sectionsensitivtyanalysis}, we do not consider any sharp identification results in this illustration.
\subsubsection{Sensitivity Parameter Space}
In the presence of defiers, \cite{AngristImbensRubin1996} show that the average treatment effect of compliers is not point identified. The Wald estimand, $\beta^{IV} =\textrm{Cov}(Y,Z) / \textrm{Cov}(D,Z)$, equals a weighted difference of the average treatment effect of compliers and defiers:
\begin{align}
\beta^{IV} = \frac{1}{\pi_{\textnormal{CO}} -\pi_{\textnormal{DF}}} \left(\pi_{\textnormal{CO}} \Delta_{CO} - \pi_{\textnormal{DF}} \Delta_{DF} \right).\label{equationidentificationaverage}
\end{align}
Three parameters in equation~\eqref{equationidentificationaverage} are in general not identified: the population size of defiers $\pi_{\textnormal{DF}}$, the treatment effect of compliers $\Delta_{CO}$ and of defiers $\Delta_{DF}$.\footnote{Clearly, if either $\pi_{\textnormal{DF}}=0$, implying the absence of defiers, or $\Delta_{CO}= \Delta_{DF}$, implying that compliers and defiers have the same average treatment effect, the treatment effect $\Delta_{CO}$ is still point identified.}
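To see how these unidentified quantities interact, consider a purely hypothetical numerical illustration (the values are not taken from any data set): suppose $\pi_{\textnormal{CO}}=0.4$, $\pi_{\textnormal{DF}}=0.1$, $\Delta_{CO}=1$, and $\Delta_{DF}=-1$. Equation~\eqref{equationidentificationaverage} then yields
\begin{align*}
\beta^{IV} = \frac{1}{0.4-0.1}\left(0.4 \cdot 1 - 0.1 \cdot (-1)\right) = \frac{0.5}{0.3} = \frac{5}{3} \approx 1.67,
\end{align*}
so the Wald estimand overstates the treatment effect of compliers, $\Delta_{CO}=1$, even though defiers make up only ten percent of the population.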
To bound the average treatment effect of compliers, we introduce two sensitivity parameters. The first one determines the population size of defiers, and the second one outcome heterogeneity between compliers and defiers. These two parameters measure the degree to which monotonicity is violated and represent the two dimensions of heterogeneity: (i) heterogeneous effects of the instrument on the treatment status and (ii) heterogeneous effects of the treatment on the outcome.
The heterogeneous effect of the instrument on the treatment status is parameterized, in the simplest way, by the population size of defiers
\begin{align}
\pi_{\textnormal{DF}} =\mathbb P(D_0=1 \text{ and } D_1=0 ). \label{equ_pdf}
\end{align}
A larger sensitivity parameter $\pi_{\textnormal{DF}}$ implies a more severe violation of monotonicity. It is clear that, for a given population size of defiers, $\pi_{\textnormal{DF}}$, the population sizes of the other groups are point identified. In our analysis, these population sizes are, therefore, functions of the sensitivity parameter $\pi_{\textnormal{DF}}$, but we leave this dependence implicit.\footnote{It follows from the definitions of the groups and our assumptions that ${\pi_{\textnormal{AT}}=\mathbb{P}(D=1|Z=0)-\pi_{\textnormal{DF}}}$, $\pi_{\textnormal{NT}}=\mathbb{P}(D=0|Z=1)-\pi_{\textnormal{DF}} $ and $\pi_{\textnormal{CO}}=\mathbb{P}(D=1|Z=1)-\mathbb{P}(D=1|Z=0)+\pi_{\textnormal{DF}}$.}
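In the hypothetical illustration above, with observed propensities $\mathbb{P}(D=1|Z=1)=0.6$ and $\mathbb{P}(D=1|Z=0)=0.3$, the value $\pi_{\textnormal{DF}}=0.1$ pins down the remaining group sizes as $\pi_{\textnormal{AT}}=0.3-0.1=0.2$, $\pi_{\textnormal{NT}}=0.4-0.1=0.3$, and $\pi_{\textnormal{CO}}=0.6-0.3+0.1=0.4$, which indeed sum to one together with $\pi_{\textnormal{DF}}$.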
We parameterize the second dimension of heterogeneity by the sensitivity parameter $\delta_a$, which equals the absolute difference between the average treatment effects of the two groups
\begin{equation*}
\delta_a= | \Delta_{CO} - \Delta_{DF}|.
\end{equation*}
A larger sensitivity parameter $\delta_a$ implies a more severe violation of monotonicity.
\subsubsection{Sensitivity Region and Robust Region}
The \textit{sensitivity region} is the set of sensitivity parameters which do not violate our model assumptions. For instance, a sensitivity parameter ${\pi_{\textnormal{DF}}\geq0.5}$ would violate our model assumptions as the relevance assumption implies that ${\pi_{\textnormal{CO}} > \pi_{\textnormal{DF}}}$. Therefore, such a sensitivity parameter does not lie within our sensitivity region, which is identified without imposing any additional assumptions.
In this illustrative example, we simplify the derivation and say that the sensitivity region is trivially given by $$\textnormal{SR}_a= [0,0.5) \times \mathbb{R}_+.$$
In our main sensitivity analysis, this set, however, is nontrivial and can be empty. In this case, the model is rejected, implying that even though the monotonicity assumption may be violated, at least one of the other model assumptions has to be violated as well.
Even though the treatment effect of compliers is generally not point identified if $\pi_{\textnormal{DF}}>0$, using~\eqref{equationidentificationaverage}, it~is partially identified for any given pair of sensitivity parameters $(\pi_{\textnormal{DF}}, \delta_a)$~by $$ \Delta_{CO} \in \left[ \beta^{IV}- \frac{\pi_{\textnormal{DF}}}{\pi_{\textnormal{CO}}-\pi_{\textnormal{DF}}} \delta_a, \; \beta^{IV}+\frac{\pi_{\textnormal{DF}}}{\pi_{\textnormal{CO}} - \pi_{\textnormal{DF}}} \delta_a \right].$$
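Continuing the hypothetical numbers from above, $\delta_a=|\Delta_{CO}-\Delta_{DF}|=|1-(-1)|=2$, and the identified set for the pair $(\pi_{\textnormal{DF}},\delta_a)=(0.1,2)$ is
\begin{align*}
\Delta_{CO} \in \left[\tfrac{5}{3} - \tfrac{0.1}{0.3}\cdot 2,\; \tfrac{5}{3} + \tfrac{0.1}{0.3}\cdot 2\right] = \left[1,\; \tfrac{7}{3}\right],
\end{align*}
which contains the true value $\Delta_{CO}=1$ at its lower endpoint.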
In a typical sensitivity analysis, researchers now consider different values of the sensitivity parameters to evaluate the identified sets of the parameter of interest and to evaluate the robustness of the LATE estimates to a potential violation of monotonicity.
However, in many empirical applications, the interest does not lie in the precise treatment effect but in its sign or in its order of magnitude. It is, therefore, natural to start with the empirical conclusion of interest and to ask which sensitivity parameters imply treatment effects that are consistent with this conclusion. This approach is formalized by the breakdown frontier \citep[see, e.g.,][]{KleinSantos2013, masten2020}.
We now consider the empirical conclusion that $\Delta_{CO} \geq \mu$, and we assume that ${\beta^{IV} \geq \mu}$.\footnote{If $\beta^{IV} \leq \mu$, the robust region for the conclusion that $\Delta_{CO} \geq \mu$ is empty.}
Under our model assumptions and for a given value of the population size of defiers $\pi_{\textnormal{DF}}$, the breakdown point determines the largest value of outcome heterogeneity $\delta_a$ that implies treatment effects consistent with our empirical conclusion of interest. Specifically, for any $\pi_{\textnormal{DF}} \in (0,0.5)$, the breakdown point is given by
$$BP_a(\pi_{\textnormal{DF}}) =\frac{\pi_{\textnormal{CO}}-\pi_{\textnormal{DF}}}{\pi_{\textnormal{DF}}} (\beta^{IV} - \mu).$$
The breakdown frontier (BF) is the set of all breakdown points and the robust region (RR) is the set of all sensitivity parameters that are consistent with the empirical conclusion of interest. They are respectively given by
\begin{align*}
BF_a= \left\lbrace (\pi_{\textnormal{DF}} , BP_a(\pi_{\textnormal{DF}})) \in \textnormal{SR}_a \right\rbrace \quad \text{and} \quad RR_a= \left\lbrace (\pi_{\textnormal{DF}} , \delta_a) \in \textnormal{SR}_a: \delta_a \leq BP_a(\pi_{\textnormal{DF}}) \right\rbrace.
\end{align*}
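As a sketch of how these objects can be computed, the following snippet evaluates $BP_a$ and checks membership in $RR_a$ for the illustrative example; it treats $\pi_{\textnormal{CO}}$ as a given input (recall that it is point identified for each $\pi_{\textnormal{DF}}$), and all numerical values are placeholders.
\begin{verbatim}
# Breakdown point BP_a(pi_df) and membership in the robust region RR_a
# for the illustrative example; inputs are placeholders.
def bp_a(beta_iv, mu, pi_co, pi_df):
    return (pi_co - pi_df) / pi_df * (beta_iv - mu)

def in_rr_a(beta_iv, mu, pi_co, pi_df, delta_a):
    return 0 < pi_df < 0.5 and delta_a <= bp_a(beta_iv, mu, pi_co, pi_df)

print(bp_a(beta_iv=0.10, mu=0.0, pi_co=0.35, pi_df=0.05))  # 0.6
print(in_rr_a(0.10, 0.0, 0.35, pi_df=0.05, delta_a=0.20))  # True
\end{verbatim}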
\begin{figure}
\centering
\resizebox{0.45\linewidth}{!}{\input{Graphics/average_identification}}
\caption[Illustration of sensitivity and robust region I.]{Illustration of Sensitivity and Robust Region. The non-shaded area represents the sensitivity region. $[\underline{\pi}_{DF}, \overline{\pi}_{DF}]$ represents bounds on the population size of defiers.}\label{figurelates}
\end{figure}
The nonrobust region is the complement of the robust region within the sensitivity region. It contains sensitivity parameters that may or may not be consistent with the empirical conclusion. Due to the functional form of the breakdown frontier, the nonrobust region is a convex set in this example. An illustrative example of this setup is shown in Figure~\ref{figurelates}.
In this simple example, neither the sensitivity region nor the robust region is sharp.
For example, if the outcome is binary, then the difference between the complier and defier treatment effects is clearly bounded. Similarly, the nonrobust region might be substantially reduced by taking the actually observed outcomes into account.\footnote{To give a concrete example, assume that all treated units have a realized outcome of 1 and all nontreated units have a realized outcome of 0. Then it is clear that the treatment effect of compliers is point identified and equals one.}
This reasoning means that even though a parameter pair may lie within the sensitivity region, it might not imply a well-defined data generating process that is consistent with the model assumptions and the observed probabilities. Similarly, even though a parameter pair may lie within the nonrobust region, it might be robust. Empirical conclusions that can be drawn from this analysis might, therefore, not be very informative. Consequently, we improve upon this framework in the remainder of this paper.
\section{Sensitivity Parameters}\label{sectionsensitivitypara}
In this section, we introduce two sensitivity parameters that are interpretable and imply bounds on the outcome distributions of compliers so that the parameter of interest is partially identified. They allow us to consider a trade-off between the strength of the imposed assumption and the size of the identified set.
To derive the sensitivity parameters, we consider the following function:
\begin{gather*}
G_d(y) = \frac{\textrm{Cov}(Z, \1{Y\leq y}\1{D=d})}{\textrm{Cov}(Z,\1{D=d})},
\end{gather*}
for $d \in \{0,1\}$. In the absence of defiers, $G_d(y)$ is the cumulative distribution function of compliers under treatment status $d$. In the presence of defiers, it holds analogously to \eqref{equationidentificationaverage} that \begin{align}\label{equation_gd}
G_d(y)&= \frac{1}{\pi_{\textnormal{CO}}-\pi_{\textnormal{DF}}} \left(\pi_{\textnormal{CO}} F_{Y_d^{CO}}(y) - \pi_{\textnormal{DF}} F_{Y_d^{DF}}(y)\right).
\end{align}
The outcome distributions of compliers are thus identified up to the population size of defiers and the heterogeneity between the outcome distributions of compliers and defiers.
We introduce two sensitivity parameters to parameterize these two dimensions. First, the presence of defiers is parameterized by the population size of defiers $\pi_{\textnormal{DF}}$, defined in \eqref{equ_pdf}. Second, outcome heterogeneity is represented by $\delta$, which bounds the maximal difference between the cumulative distribution functions of the outcomes of compliers and defiers in the Kolmogorov-Smirnov (KS) norm
\begin{equation*}
\max_{d \in \{0, 1 \}} \; \underset{y \in \mathbb{Y}}{\sup} \{|F_{Y_d^{CO}}(y) - F_{Y_d^{DF}}(y)|\} = \delta,
\end{equation*}
where $\delta \in [0,1]$. Without a restriction on $\delta$, the outcome distributions can be arbitrarily different. If $\delta=0$, the outcome distributions are restricted the most as both distribution functions coincide. A larger value of the parameter $\delta$ implies a more severe violation of monotonicity.
There are clearly many different possibilities for how heterogeneity between distribution functions can be specified. In this paper, we choose the Kolmogorov-Smirnov norm, as it leads to tractable analytical solutions of the bounds on the compliers outcome distribution. More importantly, this parameterization is simple enough to be interpretable in an empirical conclusion.
A similar parameterization is chosen in \cite{KleinSantos2013} in a different context.\footnote{Since the parameterization of $\delta$ is weak on the tails of the distributions, the bounds on the tails are likely to be uninformative. Imposing a \textit{weighted} KS assumption, that penalizes deviations at the tails of the two distributions more, would overcome this issue but would also lead to less tractable results.}
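For intuition, the KS distance between two empirical distribution functions can be computed as follows; this is a self-contained sketch, and the two normal samples serve only as an illustration.
\begin{verbatim}
import numpy as np

# Kolmogorov-Smirnov distance between two empirical CDFs; the supremum
# over the real line is attained at the pooled sample points.
def ks_distance(sample_a, sample_b):
    grid = np.sort(np.concatenate([sample_a, sample_b]))
    F_a = np.searchsorted(np.sort(sample_a), grid, side="right") / len(sample_a)
    F_b = np.searchsorted(np.sort(sample_b), grid, side="right") / len(sample_b)
    return np.max(np.abs(F_a - F_b))

rng = np.random.default_rng(0)
print(ks_distance(rng.normal(0.0, 1.0, 1000), rng.normal(0.3, 1.0, 1000)))
\end{verbatim}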
\section{Partial Identification of Distribution Functions}\label{sec_identification_bounds}
Since our main sensitivity analysis exploits bounds on parameters defined by the distribution function $F_{Y_d^{CO}}$ for $d \in \{0, 1\}$, we bound this distribution function for a given sensitivity parameter pair $(\pi_{\textnormal{DF}}, \delta)$ in this section. We illustrate the derivation of the bounds in the subsequent subsections, and the main result is stated in Section~\ref{sec_mainresult_sharpbounds}.
\subsection{Preliminaries}
\subsubsection{Identification Strategy}
Our goal is to obtain sharp lower and upper bounds of the distribution function $F_{Y_d^{CO}}$ in a first-order stochastic dominance sense. That is, we derive analytical characterizations of the distribution functions $\underline{F}_{Y_d^{CO}}$ and $ \overline{F}_{Y_d^{CO}}$ that are feasible candidates
for $F_{Y_d^{CO}}$, in the sense that they are compatible with the imposed sensitivity parameters, our assumptions, and the population distributions of observable probabilities. They are further such that $ \underline{F}_{Y_d^{CO}}(y) \leq F_{Y_d^{CO}}(y) \leq \overline{F}_{Y_d^{CO}}(y), $ for all $y \in \mathbb Y$.
The identification strategy for deriving such sharp bounds $\underline{F}_{Y_d^{CO}}$ and $\overline{F}_{Y_d^{CO}}$ is based on the premise that any candidate for $F_{Y_d^{CO}}$ also implies functions for $F_{Y_d^{dT}}$ and $F_{Y_d^{DF}}$. A candidate for $F_{Y_d^{CO}}$ is therefore feasible only if the implied functions for $F_{Y_d^{dT}}$ and $F_{Y_d^{DF}}$ are indeed distribution functions.
The explicit analytical characterization of these sharp bounds illustrates the effect of the sensitivity parameters on the bounds, and more importantly, it implies sharp bounds on a variety of treatment effects of interest, e.g., the average treatment effect of compliers \citep[][Lemma 1]{stoye2010partial}.\footnote{The explicit characterization also allows the inference procedure to be based on $\underline{F}_{Y_d^{CO}}$ and $\overline{F}_{Y_d^{CO}}$.}
\subsubsection{Notation} We here collect the notation used in the following subsections. Let $d,s \in \{0,1\}$ and $y \in \mathbb Y$. Let the differences in population sizes of compliers and defiers be denoted by $\pi_{\Delta}=\pi_{\textnormal{CO}}-\pi_{\textnormal{DF}}$.
Let $Q_{ds}(y) \equiv \mathbb{P}(Y\leq y, D=d|Z=s)$ be the observed joint distribution of $Y$ and $D$. We further let, for $\mathscr{B}$ denoting the Borel $\sigma$-algebra,
\begin{align*}
&\widetilde G_d^+(y)=\sup_{B \in \mathscr{B}} \{ \mathbb{P}(Y \in B, Y\leq y, D=d|Z=d) - \mathbb{P}(Y \in B, Y\leq y, D=d |Z=1-d) \},
\end{align*}
and $G_d^+(y)=\frac{1}{\pi_{\textnormal{CO}}} \widetilde G_d^+(y)$.
Our sensitivity analysis is based on the following observed underlying parameters
\begin{align}
\theta= \left(Q_{11},Q_{10}, Q_{01},Q_{00}, \widetilde G_1^+, \widetilde G_0^+\right).\label{eq_underlying_parameters}
\end{align}
\subsection{Preliminary Bounds}\label{sec_simple_bounds_maintext} To illustrate the identification argument, we first derive preliminary bounds on the distribution function $F_{Y_d^{CO}}$, which are not necessarily sharp in general.
Based on the law of total probability and our assumptions, the probability function $Q_{dd}$ is a weighted average of the distribution functions $F_{Y_d^{CO}}$ and $F_{Y_d^{dT}}$, specifically $Q_{dd}(y)=\pi_{\textnormal{CO}} F_{Y_d^{CO}}(y)+ \pi_{\textnormal{d}} F_{Y_d^{dT}}(y) $.
Any feasible distribution function of $F_{Y_d^{CO}}$ has to imply a function $F_{Y_d^{dT}}$ that is a distribution function. Exploiting this argument and using our sensitivity parameter $\pi_{\textnormal{DF}}$, it follows that
\begin{align}
\frac{1}{\pi_{\textnormal{CO}}} \left( Q_{dd}(y)- \pi_{\textnormal{d}} \right) \leq F_{Y_d^{CO}}(y) & \leq \frac{1}{\pi_{\textnormal{CO}}} Q_{dd}(y). \label{equ_otheroutcomes_compliers_always_takers}
\end{align}
These bounds correspond to the extreme scenarios where compliers have the highest or the lowest outcomes compared to always and never takers.
Using the same argument for defiers and the definition of $G_d(y)$ in \eqref{equation_gd}, it further follows that
\begin{align}
\frac{ \pi_{\Delta}}{\pi_{\textnormal{CO}}} G_d(y) \leq F_{Y_d^{CO}}(y) & \leq \frac{1}{\pi_{\textnormal{CO}}} \left( \pi_{\Delta} G_d(y) + \pi_{\textnormal{DF}} \right). \label{equ_otheroutcomes_compliers_defiers}
\end{align}
We now consider the second sensitivity parameter $\delta$. Based on the definition of $G_d(y)$ in \eqref{equation_gd}, we conclude that any feasible candidate of $F_{Y_d^{CO}}$ also has to satisfy that
\begin{align}
G_d(y)- \frac{\pi_{\textnormal{DF}}}{\pi_{\Delta}} \delta \leq F_{Y_d^{CO}}(y) \leq G_d(y) + \frac{\pi_{\textnormal{DF}}}{\pi_{\Delta}} \delta. \label{equ_restriction based_on_senstivityparamter}
\end{align}
Since the function $G_d$ is not necessarily nondecreasing in $y$ for all $y \in \mathbb Y$, bounds on the distribution function $F_{Y_d^{CO}}$ based on \eqref{equ_otheroutcomes_compliers_defiers} and \eqref{equ_restriction based_on_senstivityparamter} have to take this into account.
We therefore directly consider bounds on $F_{Y_d^{CO}}$ that employ this information.
To be precise, for the lower bound, we consider \eqref{equ_otheroutcomes_compliers_defiers} and \eqref{equ_restriction based_on_senstivityparamter} with $G_d$ replaced by its smallest nondecreasing upper envelope; for the upper bound, we replace $G_d$ by its greatest nondecreasing lower envelope.\footnote{We give an illustration of this derivation in Appendix~\ref{sec_illustraion_bounds}.}
Following this reasoning and taking \eqref{equ_otheroutcomes_compliers_always_takers}-\eqref{equ_restriction based_on_senstivityparamter} into account, the lower bound is given by
\begin{align}
\underline{H}_{Y_d^{CO}}(y,\pi_{\textnormal{DF}},\delta)= \max\{0, \frac{1}{ \pi_{\textnormal{CO}} } (Q_{dd} (y)- \pi_d) ,\; \frac{\pi_{\Delta}}{\pi_{\textnormal{CO}}} \sup_{\tilde y \leq y} G_d (\tilde y), \; \sup_{\tilde y \leq y} G_d(\tilde y) - \frac{\pi_{\textnormal{DF}}}{\pi_{\Delta}} \delta \}, \label{simpleboundslower}
\end{align}
and the upper bound by
\begin{align}
\overline{H}_{Y_d^{CO}}(y,\pi_{\textnormal{DF}},\delta)= \min\{ 1, \frac{1}{\pi_{\textnormal{CO}}} Q_{dd}(y), \frac{1}{\pi_{\textnormal{CO}}} (\pi_{\Delta}\inf_{\tilde y \geq y} G_d(\tilde y)+\pi_{\textnormal{DF}}), \inf_{\tilde y \geq y} G_d(\tilde y)+ \frac{\pi_{\textnormal{DF}}}{\pi_{\Delta}} \delta \}.\label{simpleboundsupper}
\end{align}
Any value outside of these bounds is clearly incompatible with the distribution of $(Y,D,Z)$ and our assumptions.
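On a discretized outcome grid, the envelopes in \eqref{simpleboundslower} and \eqref{simpleboundsupper} are running maxima and minima, so the preliminary bounds are straightforward to evaluate. The following sketch assumes that grid evaluations of $Q_{dd}$ and $G_d$ are available; the toy linear inputs and all parameter values are ours and purely illustrative.
\begin{verbatim}
import numpy as np

# Grid version of the preliminary bounds; all inputs are placeholders.
def preliminary_bounds(Q_dd, G_d, pi_co, pi_df, pi_d, delta):
    pi_delta = pi_co - pi_df
    G_sup = np.maximum.accumulate(G_d)              # sup over y' <= y
    G_inf = np.minimum.accumulate(G_d[::-1])[::-1]  # inf over y' >= y
    lower = np.maximum.reduce([
        np.zeros_like(Q_dd),
        (Q_dd - pi_d) / pi_co,
        pi_delta / pi_co * G_sup,
        G_sup - pi_df / pi_delta * delta,
    ])
    upper = np.minimum.reduce([
        np.ones_like(Q_dd),
        Q_dd / pi_co,
        (pi_delta * G_inf + pi_df) / pi_co,
        G_inf + pi_df / pi_delta * delta,
    ])
    return lower, upper

y = np.linspace(0.0, 1.0, 101)
lo, up = preliminary_bounds(Q_dd=0.65 * y, G_d=y,
                            pi_co=0.35, pi_df=0.05, pi_d=0.30, delta=0.2)
\end{verbatim}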
To illustrate the effect of our sensitivity parameters, we consider the width of these bounds for any fixed $y \in \mathbb Y$ as a function of $(\pi_{\textnormal{DF}}, \delta)$, that is\footnote{This comparison is helpful as the qualitative size of the width of the bounds on the distribution functions is related to the width of the identified set of many parameters of interest, e.g., the LATE.}
$
\overline{H}_{Y_d^{CO}}(y,\pi_{\textnormal{DF}},\delta) - \underline{H}_{Y_d^{CO}}(y,\pi_{\textnormal{DF}},\delta).
$
The width is weakly increasing in the sensitivity parameter $\delta$, which implies that a larger violation of monotonicity leads to a larger identified set.
However, the effect of the sensitivity parameter $\pi_{\textnormal{DF}}$ on this width can be both negative and positive depending on the specific underlying parameters $\theta$.
For example, we note that $F_{Y_d^{CO}}$ is point identified if either $\pi_{\textnormal{DF}}=0$ or $\pi_d=0$, that is, in the absence of defiers or in the absence of always or never takers.
Heuristically speaking, the parameter $\pi_{\textnormal{DF}}$, therefore, trades off the identification power gained from the non-existence of defiers and the non-existence of always or never takers.
The functions $\underline{H}_{Y_d^{CO}}$ and $\overline{H}_{Y_d^{CO}}$ clearly bound $F_{Y_d^{CO}}$ in a first-order stochastic dominance sense. However, since they do not guarantee that the implied functions for $F_{Y_d^{dT}}$ and $F_{Y_d^{DF}}$ are nondecreasing, they are not necessarily feasible candidates for $F_{Y_d^{CO}}$.
To give an intuition for this result, and for the sake of argument, we now assume that all outcome variables are continuously distributed. We consider $\underline{H}_{Y_d^{CO}}$, and we assume that the restriction based on the outcome heterogeneity parameter $\delta$ is the binding one, i.e.,
$
\underline{H}_{Y_d^{CO}}(y)= G_d(y) - \frac{\pi_{\textnormal{DF}}}{\pi_{\Delta}} \delta.
$
This bound does not guarantee that the implied density of the always takers is nonnegative.
Specifically, the density of the lower bound is $
g_d(y) = (q_{dd}(y)-q_{d(1-d)}(y))/\pi_{\Delta},
$ whereas to guarantee that the density function $f_{Y_d^{dT}}$ does not take any negative value, any feasible candidate of $f_{Y_d^{CO}}$ has to satisfy that
\begin{align}
f_{Y_d^{CO}}(y) \leq \frac{q_{dd}(y)}{\pi_{\textnormal{CO}}} \label{equ_at_nondecreasing}
\end{align}
for all $y \in \mathbb Y$.\footnote{To be precise, one can consider the case $q_{d(1-d)}(y)=0$; since $\pi_\Delta =\pi_{\textnormal{CO}}- \pi_{\textnormal{DF}} \leq \pi_{\textnormal{CO}}$, the claim follows.}
A similar restriction as \eqref{equ_at_nondecreasing} can be derived for defiers such that any feasible candidate of the density $f_{Y_d^{CO}}(y)$ has to also satisfy that, for all $y \in \mathbb Y$,
\begin{align}
f_{Y_d^{CO}}(y) \geq \frac{\pi_\Delta}{\pi_{\textnormal{CO}}} \max \{ g_d(y), 0\}. \label{equ_df_nondecreasing}
\end{align}
Based on this argument, we construct
our final bounds, $\underline{F}_{Y_d^{CO}}$ and $\overline{F}_{Y_d^{CO}}$. Specifically, the distribution function $\underline{F}_{Y_d^{CO}}$ is dominated by $\underline{H}_{Y_d^{CO}}$ in a first-order stochastic dominance sense, and the distribution function $\overline{F}_{Y_d^{CO}}$ dominates $\overline{H}_{Y_d^{CO}}$ in a first-order stochastic dominance sense, and they both carefully take into account the reasoning of \eqref{equ_df_nondecreasing}~and~\eqref{equ_at_nondecreasing}.
In Appendix~\ref{sec_proof_theoremdist}, we show that these distribution functions both bound the distribution function $F_{Y_d^{CO}}$ and are feasible candidates.
\subsection{Identification Result}\label{sec_mainresult_sharpbounds} We first provide the analytical expressions of the bounds in the following. The lower bound of the distribution functions $F_{Y_d^{CO}}$ is given by
\begin{align}
\underline{F}_{Y_d^{CO}}(y,& \pi_{\textnormal{DF}},\delta) = \frac{1}{\pi_{\textnormal{CO}}} Q_{dd}(y) \label{boundslower} \\
& \hfill - \frac{1}{\pi_{\textnormal{CO}}} \inf_{\tilde{y} \geq y} \left( Q_{dd}(\tilde{y}) - \left(\pi_{\Delta} G_d^+(\tilde{y}) - \inf_{\widehat{y} \leq \tilde{y} } \left( \pi_{\Delta} G_d^+(\widehat{y}) - \pi_{\textnormal{CO}} \underline{H}_{Y_d^{CO}}(\widehat y, \pi_{\textnormal{DF}},\delta) \right) \right) \right),\notag
\end{align}
and similarly the upper bound by
\begin{align}
\overline{F}_{Y_d^{CO}} (y, & \pi_{\textnormal{DF}},\delta) = \frac{ \pi_{\Delta} }{\pi_{\textnormal{CO}}} G_d^+(y) \label{boundsupper}\\
& \hfill -\frac{1}{\pi_{\textnormal{CO}}} \sup_{\tilde{y}\geq y} \left( \pi_{\Delta} G_d^+(\tilde{y}) - \left( Q_{dd}(\tilde{y})- \sup_{\widehat{y} \leq \tilde{y}} \left( Q_{dd}(\widehat{y})- \pi_{\textnormal{CO}} \overline{H}_{Y_d^{CO}}(\widehat y, \pi_{\textnormal{DF}},\delta) \right)\right) \right). \notag
\end{align}
Based on the derivation above, Theorem~\ref{theoremdist} summarizes the result.
\begin{theorem} \label{theoremdist}
Suppose that Assumption~\ref{assumptionLATE} holds, and the data generating process is compatible with the sensitivity parameters $(\pi_{\textnormal{DF}}, \delta)$. Then, it holds that
\begin{align*}
\underline{F}_{Y_d^{CO}}(y,\pi_{\textnormal{DF}},\delta) \leq F_{Y^{CO}_d}(y) \leq \overline{F}_{Y_d^{CO}}(y,\pi_{\textnormal{DF}},\delta),
\end{align*}
for $d \in \{0,1\}$ and for all $y \in \mathbb Y$.
Moreover, there exist data generating processes which are compatible with the above assumptions such that the outcome distribution of compliers equals either $\underline{F}_{Y_d^{CO}}(y,\pi_{\textnormal{DF}},\delta)$, $\overline{F}_{Y_d^{CO}}(y,\pi_{\textnormal{DF}},\delta)$, or any convex combination of these bounds.
\end{theorem}
Theorem~\ref{theoremdist} shows not only that the proposed bounds are valid but also that without imposing further assumptions, the bounds cannot be tightened in a first-order stochastic dominance sense.\footnote{As the derived bounds are rather complicated, we propose simpler bounds for each of our sensitivity parameters in Appendix~\ref{sec_simplified_bounds}. These bounds are possibly conservative. We explain how to evaluate in an empirical setting whether they are close to the sharp bounds derived in this section.}
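To make the nested optimizations in \eqref{boundslower} and \eqref{boundsupper} concrete, the following sketch transcribes them on a grid, where the inner and outer suprema and infima become running extrema. It assumes grid evaluations of $Q_{dd}$, $G_d^+$, and the preliminary bounds are available; all names are ours.
\begin{verbatim}
import numpy as np

def inf_from_right(a):  # inf over y' >= y on a grid
    return np.minimum.accumulate(a[::-1])[::-1]

def sup_from_right(a):  # sup over y' >= y on a grid
    return np.maximum.accumulate(a[::-1])[::-1]

# Grid transcription of the sharp bounds displayed above.
def sharp_bounds(Q_dd, G_plus, H_lo, H_up, pi_co, pi_df):
    pi_delta = pi_co - pi_df
    inner_lo = np.minimum.accumulate(pi_delta * G_plus - pi_co * H_lo)
    F_lo = (Q_dd
            - inf_from_right(Q_dd - (pi_delta * G_plus - inner_lo))) / pi_co
    inner_up = np.maximum.accumulate(Q_dd - pi_co * H_up)
    F_up = (pi_delta * G_plus
            - sup_from_right(pi_delta * G_plus - (Q_dd - inner_up))) / pi_co
    return F_lo, F_up
\end{verbatim}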
\begin{remark}
Theorem~\ref{theoremdist} clearly does not imply that all distribution functions that are bounded by $\underline{F}_{Y_d^{CO}}$ and $\overline{F}_{Y_d^{CO}}$ are feasible candidates for the distribution function $F_{Y^{CO}_d}$. The reason is that such functions do not necessarily imply nondecreasing distribution functions for the other groups. Since we are not interested in the distribution functions themselves but in parameters defined through the bounds, this result is sufficient to derive sharp bounds on the sensitivity and robust region for empirical conclusions about these parameters.
\end{remark}
\begin{remark}
The parameter of interest is often not only the average treatment effect but also, e.g., a quantile or distributional treatment effect. As Theorem~\ref{theoremdist} bounds the entire outcome distribution functions of compliers in a first-order stochastic dominance sense, bounds on these treatment effects are identified as well and are sharp for many relevant parameters.
We present them in Appendix~\ref{sec_additional_treatment_effects_appendix}.
\end{remark}
\begin{remark}
In empirical applications, researchers also often have access to pre-intervention covariates. In Appendix~\ref{sec_additional_covariates}, we show how these covariates can be exploited to reduce the size of the identified set of the distribution function $F_{Y_d^{CO}}$. These covariates can then be used to tighten the sensitivity and to enlarge the robust regions.
\end{remark}
\section{Sensitivity Analysis}\label{sectionsensitivtyanalysis}
We present our main sensitivity analysis in this section.
\subsection{Sensitivity Region}\label{sub_sec::sr}
We derive the sensitivity region, which is the set of sensitivity parameter pairs for which a feasible candidate of the distribution function $F_{Y_d^{CO}}$ exists. Sensitivity parameters that lie in the complement of this set refute the model, and we, therefore, do not consider them further.\footnote{\cite{masten2021salvaging} denote the complement of the sensitivity region the falsification region.}
\subsubsection{Population Size of Defiers}
We show that the population size of defiers is partially identified. We denote an upper bound by
\begin{align}
\overline{\pi}_{DF} = \min\{\mathbb{P} (D=1|Z=0), \;\mathbb{P} (D=0|Z=1) \}.\label{equ_maximal_pdf}
\end{align}
The first element of the minimum is the sum of the population sizes of always takers and defiers; the second, of never takers and defiers. The population size of defiers is clearly smaller than both of these quantities.
The lower bound on the population size of defiers is denoted by
\begin{align}
\underline{\pi}_{DF}= & \max_{s \in \{0,1\}} \{ \sup_{B \in \mathscr{B}} \{\mathbb{P}(Y \in B, D=s|Z=1-s) - \mathbb{P}(Y \in B, D=s |Z=s) \} \}.\label{equa::minimalpdf}
\end{align}
The supremum is taken over the differences in the population distributions of defiers and compliers, which bounds the population size of defiers from below. Proposition~\ref{theoremlambda} shows that these bounds are sharp.\footnote{\cite{richardson2010analysis} present sharp bounds on $\pi_{\textnormal{DF}}$ for a binary outcome variable.}
\begin{proposition} \label{theoremlambda}
Suppose Assumption~\ref{assumptionLATE} holds. Then the population size of defiers, $\pi_{\textnormal{DF}}$, is bounded by $[\underline{\pi}_{DF}, \overline{\pi}_{DF}]$. Moreover, there exist data generating processes which are compatible with the above assumptions such that the population size of defiers equals any value within these bounds. Thus, the bounds are sharp.
\end{proposition}
If the lower bound on the population size of defiers is greater than zero, $\underline{\pi}_{DF} > 0$, at least one of the classical LATE assumptions, including monotonicity, is violated.
This reasoning aligns with the result of \cite{Kitagawa2015}, who shows that $\underline{\pi}_{DF} = 0$ is necessary and sufficient for the LATE assumptions to be consistent with the observed data.
However, if the bounds are incompatible, i.e., $\underline{\pi}_{DF}>\overline{\pi}_{DF}$, the sensitivity region is empty. This implies that even if one allows for a violation of monotonicity, at least one of the other model assumptions must be violated as well.
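Sample analogues of the bounds \eqref{equ_maximal_pdf} and \eqref{equa::minimalpdf} are straightforward to compute once the outcome is discretized, since the supremum over Borel sets then reduces to summing positive parts of cell-probability differences. A minimal sketch, where y, d, z are NumPy data arrays and the binning choice is ours:
\begin{verbatim}
import numpy as np

# Sample analogues of the bounds on pi_DF; y, d, z are data arrays.
def pi_df_bounds(y, d, z, bins=20):
    edges = np.histogram_bin_edges(y, bins=bins)
    def joint(s, zval):  # estimated P(Y in bin, D=s | Z=zval), per bin
        in_z = z == zval
        h, _ = np.histogram(y[in_z & (d == s)], bins=edges)
        return h / in_z.sum()
    upper = min((d[z == 0] == 1).mean(), (d[z == 1] == 0).mean())
    lower = max(np.maximum(joint(s, 1 - s) - joint(s, s), 0.0).sum()
                for s in (0, 1))
    return lower, upper
\end{verbatim}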
\subsubsection{Outcome Heterogeneity}
We now consider the sensitivity parameter $\delta$.
Based on Theorem~\ref{theoremdist}, we can bound the sensitivity parameter $\delta$ from below and from above for a given value of the sensitivity parameter $\pi_{\textnormal{DF}}$.
A given pair of sensitivity parameters $(\pi_{\textnormal{DF}}, \delta)$ is refuted if the implied lower and upper bounds, $\underline{F}_{Y_d^{CO}}$ and $\overline{F}_{Y_d^{CO}}$, cross, so that there does not exist a feasible candidate for the distribution function $F_{Y_d^{CO}}$ that is compatible with these sensitivity parameters. The domain of the sensitivity parameter $\delta$ is bounded from below by
\begin{align}
\underline{\delta}(\pi_{\textnormal{DF}}) =\max_{d \in\{0,1\}} \inf \{\delta: \inf_{y \in \mathbb Y} \left( \overline{F}_{Y_d^{CO}}(y, \pi_{\textnormal{DF}}, \delta)-\underline{F}_{Y_d^{CO}}(y, \pi_{\textnormal{DF}}, \delta) \right) \geq 0 \}.\label{equation_delta_min}
\end{align}
The feasible set of the sensitivity parameter $\delta$ is also bounded from above. The bounds $\underline{F}_{Y_d^{CO}}$ and $\overline{F}_{Y_d^{CO}}$ imply bounds on the distribution function $F_{Y_d^{DF}}$, where the largest value of the Kolmogorov-Smirnov norm between $F_{Y_d^{CO}}$ and $F_{Y_d^{DF}}$ is achieved at $\delta=1$. It follows that there does not exist a feasible candidate for $F_{Y_d^{CO}}$ whose implied outcome heterogeneity exceeds this value.
We denote the upper bounds by
\begin{align}
\overline{\delta}(\pi_{\textnormal{DF}}) = \max_{d \in \{0,1\} } \sup_{y \in \mathbb Y} & \left\lbrace | \overline{F}_{Y_d^{CO}}(y, \pi_{\textnormal{DF}}, 1) - \overline{F}_{Y_d^{DF}}(y,\pi_{\textnormal{DF}}, 1) |, \right. \notag\\
& \qquad \qquad \left. | \underline{F}_{Y_d^{CO}}(y,\pi_{\textnormal{DF}}, 1) - \underline{F}_{Y_d^{DF}}(y,\pi_{\textnormal{DF}}, 1) | \right\rbrace. \label{equation_delta_max}
\end{align}
By the reasoning of Theorem~\ref{theoremdist}, these bounds are sharp, and any convex combination of these bounds is feasible as well.
It follows that our sensitivity region is given by
\begin{align}
SR= \{(\pi_{\textnormal{DF}}, \delta): \pi_{\textnormal{DF}} \in [\underline{\pi}_{DF}, \overline{\pi}_{DF}] \text{ and } \underline{\delta}(\pi_{\textnormal{DF}}) \leq \delta \leq \overline{\delta}(\pi_{\textnormal{DF}}) \}. \label{equ_sensitvityregion_main}
\end{align}
\subsection{Robust Region}\label{sub_sec::rr}
We now derive the robust region for the empirical conclusion that $\Delta_{CO}\geq \mu$.\footnote{In Appendix~\ref{sec_additional_treatment_effects_appendix}, we also consider treatment effects other than the average treatment effect of compliers. Sensitivity and robust regions for empirical conclusions about these parameters can then be derived based on the reasoning of this section.}
To simplify the presentation, we assume in the following that the sensitivity region is nonempty and that $\Delta_{CO}(\underline{\pi}_{DF}, \underline{\delta}(\underline{\pi}_{DF}))\geq \mu$.\footnote{If $\Delta_{CO}(\underline{\pi}_{DF}, \underline{\delta}(\underline{\pi}_{DF}))< \mu$, the robust region is empty.}
By first-order stochastic dominance of the distribution functions $\underline{F}_{Y_d^{CO}}$ and $\overline{F}_{Y_d^{CO}}$, we can construct sharp bounds on many treatment effect parameters that depend on these distribution functions \citep[see Lemma 1 in][]{stoye2010partial}. Specifically, let
\begin{align}
\underline{\Delta}_{CO}(\pi_{\textnormal{DF}}, \delta)& = \int_{\mathbb Y} y \; d\overline{F}_{Y_1^{CO}}(y,\pi_{\textnormal{DF}},\delta) - \int_{\mathbb Y} y\; d\underline{F}_{Y_0^{CO}}(y,\pi_{\textnormal{DF}},\delta) \label{equ_LATE_lowerbound} \\
\overline{\Delta}_{CO}(\pi_{\textnormal{DF}}, \delta) & = \int_{\mathbb Y} y \; d\underline{F}_{Y_1^{CO}}(y,\pi_{\textnormal{DF}},\delta) - \int_{\mathbb Y} y \; d\overline{F}_{Y_0^{CO}}(y,\pi_{\textnormal{DF}},\delta). \label{equ_LATE_upperbound}
\end{align}
\begin{corollary}\label{coro_treatment_effect} Suppose that Assumption~\ref{assumptionLATE} holds, and the data generating process is compatible with the sensitivity parameters $(\pi_{\textnormal{DF}}, \delta)$. Then, the average treatment effect of compliers, $\Delta_{CO}$, is bounded by $ [\underline{\Delta}_{CO}(\pi_{\textnormal{DF}}, \delta) , \overline{\Delta}_{CO}(\pi_{\textnormal{DF}}, \delta) ]$.
Moreover, there exist data generating processes which are compatible with the above assumptions such that the average treatment effect of compliers equals any value within these bounds. Thus, the bounds are sharp.
\end{corollary}
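Given the CDF bounds evaluated on a grid, the bounds \eqref{equ_LATE_lowerbound} and \eqref{equ_LATE_upperbound} reduce to Riemann-Stieltjes sums; a minimal sketch:
\begin{verbatim}
import numpy as np

# Mean implied by a CDF evaluated on a grid (Riemann-Stieltjes sum).
def mean_from_cdf(y_grid, F):
    dF = np.diff(np.concatenate([[0.0], F]))
    return float(np.sum(y_grid * dF))

# Sharp ATE bounds: the lower bound pairs the upper CDF bound for Y_1
# (stochastically smallest) with the lower CDF bound for Y_0.
def late_bounds(y_grid, F1_lo, F1_up, F0_lo, F0_up):
    lower = mean_from_cdf(y_grid, F1_up) - mean_from_cdf(y_grid, F0_lo)
    upper = mean_from_cdf(y_grid, F1_lo) - mean_from_cdf(y_grid, F0_up)
    return lower, upper
\end{verbatim}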
For a given sensitivity parameter $\pi_{\textnormal{DF}}$, we now consider the breakdown point given by
$$BP(\pi_{\textnormal{DF}} )=\sup \{\delta: (\pi_{\textnormal{DF}}, \delta) \in \textnormal{SR} \text{ and }\underline{ \Delta}_{CO}(\pi_{\textnormal{DF}},\delta) \geq\mu\}. $$
It identifies the weakest assumption on outcome heterogeneity between compliers and defiers under which the empirical conclusion holds. The breakdown point, as a function of the sensitivity parameter $\pi_{\textnormal{DF}}$, is not necessarily decreasing in the population size of defiers, as the bounds on the outcome distribution of compliers can become tighter as $\pi_{\textnormal{DF}}$ increases (see the discussion in Section~\ref{sec_simple_bounds_maintext}).
The breakdown frontier of the average treatment effect is the boundary of the robust region and is given by the set of all breakdown points
\begin{align}
BF & = \{ (\pi_{\textnormal{DF}}, \delta) \in \textnormal{SR}:\delta=BP(\pi_{\textnormal{DF}})\}. \label{equation_BF}
\end{align}
The robust region of the empirical conclusion that $\Delta_{CO}\geq \mu$ is characterized by
\begin{align}
RR & =\{(\pi_{\textnormal{DF}}, \delta) \in \textnormal{SR}: \delta \leq BP(\pi_{\textnormal{DF}}) \}. \label{equation_robustregion}
\end{align}
The nonrobust region, that is, the complement of the robust region within the sensitivity region, contains pairs of sensitivity parameters that may or may not imply treatment effects consistent with the empirical conclusion.
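Numerically, the breakdown point can be found by a simple bisection in $\delta$, exploiting that $\underline{\Delta}_{CO}(\pi_{\textnormal{DF}}, \cdot)$ is weakly decreasing (the width of the bounds is weakly increasing in $\delta$, as discussed in Section~\ref{sec_simple_bounds_maintext}). In the following sketch, lower_ate is a user-supplied routine evaluating $\underline{\Delta}_{CO}$:
\begin{verbatim}
# Breakdown point BP(pi_df): largest delta with
# lower_ate(pi_df, delta) >= mu; lower_ate is weakly decreasing in
# delta. For brevity, the cap delta <= delta_bar(pi_df) from the
# sensitivity region is omitted.
def breakdown_point(lower_ate, mu, pi_df, lo=0.0, hi=1.0, tol=1e-6):
    if lower_ate(pi_df, lo) < mu:
        return None  # conclusion not robust at this pi_df
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lower_ate(pi_df, mid) >= mu:
            lo = mid
        else:
            hi = mid
    return lo
\end{verbatim}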
Figure~\ref{iden_figurelqtes} illustrates one example of the sensitivity and robust region.\footnote{We refer to Sections~\ref{sectionsetup}~and~\ref{sectionempirical} for a discussion of how these sets can be used in an empirical setting.}
\begin{figure}[!h]
\centering
\resizebox{0.4\linewidth}{!}{\input{Graphics/quantile_identification}}
\caption[Illustration of sensitivity and robust region II.]{Sensitivity and Robust Region. Non-shaded region represents sensitivity region.}\label{iden_figurelqtes}
\end{figure}
\section{Extensions}\label{sectionextension}
In this section, we show how our framework can be exploited to draw empirical conclusions about other population parameters, and how it simplifies if the outcome variable is binary.
\subsection{Treatment Effects for other Populations}\label{sec_additional_treatment_effects}
To show how empirical questions about treatment effects for the entire population can be analyzed, we exploit the fact that the proof of Theorem~\ref{theoremdist} provides sharp bounds for all groups in a first-order stochastic dominance sense. For $d \in \{0,1\}$, let the lower bound be denoted by
\begin{align*}
\underline F_{Y_d} (y, \pi_{\textnormal{DF}}, \delta) & = \pi_{\textnormal{CO}} \cdot \underline F_{Y_d^{CO}} (y,\pi_{\textnormal{DF}} , \delta) + Q_{d(1-d)}(y),
\end{align*}
and the upper bound by
\begin{align*}
\overline F_{Y_d} (y, \pi_{\textnormal{DF}}, \delta) & =\pi_d + \pi_{\textnormal{CO}} \cdot \overline F_{Y_d^{CO}} (y,\pi_{\textnormal{DF}} , \delta) + Q_{d(1-d)}(y).
\end{align*}
\begin{proposition}\label{prop_extensions_average}
Suppose the instrument satisfies Assumption~\ref{assumptionLATE}, and the data generating process is compatible with the sensitivity parameters $(\pi_{\textnormal{DF}}, \delta)$. Then, it holds that
$$ \underline F_{Y_d} (y, \pi_{\textnormal{DF}}, \delta) \leq F_{Y_d} (y) \leq \overline F_{Y_d} (y, \pi_{\textnormal{DF}}, \delta)$$
for $d \in \{0,1\}$ and for all $y \in \mathbb Y$.
Moreover, there exist data generating processes which are compatible with the above assumptions such that the potential outcome distributions equal either $\overline{F}_{Y_d}(y,\pi_{\textnormal{DF}},\delta)$, $\underline{F}_{Y_d}(y,\pi_{\textnormal{DF}},\delta)$, or any convex combination of these bounds.
\end{proposition}
As the data do not contain any information about the distribution functions $F_{Y_0^{AT}}$ and $F_{Y_1^{NT}}$, the bounds $ \underline F_{Y_d}$ and $\overline F_{Y_d}$ are such that the respective probability mass is shifted to the extremes of the support $\mathbb Y$.
To interpret these bounds, for any $y$ in the interior of $\mathbb Y$, we consider the difference
$$\overline F_{Y_d} (y, \pi_{\textnormal{DF}}, \delta) - \underline F_{Y_d} (y, \pi_{\textnormal{DF}}, \delta)
= \pi_d.$$
The sensitivity parameter $\delta$ does not affect the bounds on the outcome distribution of the entire population, as it only governs how the observed outcome probability mass is divided between the groups.
However, the size of the bounds decreases with the population size of defiers, $\pi_{\textnormal{DF}}$, as the population size of always and never takers $\pi_{\textnormal{d}}$ decreases with $\pi_{\textnormal{DF}}$. This is intuitive: as $\pi_{\textnormal{d}}$ decreases, the observed outcome probability mass represents a larger share of the population under consideration.
This result aligns with the results of \cite{kitagawa2021identification} and \cite{kamat2018identifying}, who show that imposing monotonicity (i.e., $\pi_{\textnormal{DF}}=0$) does not imply a smaller identified set for the average treatment effect of the entire population if the LATE assumptions are not violated.
Based on the bounds presented in Proposition~\ref{prop_extensions_average}, we can now derive a sensitivity analysis similar to the one presented in Section~\ref{sectionsensitivtyanalysis}. However, to derive informative results about the average treatment effect of the entire population, we would have to impose that the outcome is bounded, as otherwise this average treatment effect is not identified in general.
The sensitivity analysis of this paper is based on the premise that the treatment effect of compliers is the object of interest. However, if the parameter of interest is the treatment effect of the entire population, one might then be willing to impose assumptions not only on outcome heterogeneity between compliers and defiers but also between other groups. To be precise, we can replace the sensitivity parameter $\delta $ by $\delta_p\in [0,1]$ such that
$$ \max_{d \in \{0,1\}} \sup_{y \in \mathbb Y} | F_{Y_d^{T}} (y) -F_{Y_d^{T'}} (y)| \leq \delta_p \quad \forall \; T,T' \in \{ AT, NT, CO, DF\}. $$
Using similar arguments as in the proof of Theorem~\ref{theoremdist}, one can then derive sharp bounds on the outcome distribution functions of the entire population and then conduct a sensitivity analysis similar to the one described in Section~\ref{sectionsensitivtyanalysis}.
Empirical conclusions drawn on this parameterization might be substantially more informative.
\subsection{Binary Outcome Variable}\label{sec:binaryoutcomemodel}
In many empirical applications, the outcome of interest is binary. The results of Sections~\ref{sec_identification_bounds}~and~\ref{sectionsensitivtyanalysis} remain valid in this case, but we show in this section that the bounds simplify substantially, making them easier to apply. Let $P_d^{T}= \mathbb P (Y_d^T=1)$ denote the probability that the random variable $Y_d^T$ equals one, and let the conditional joint probability of the outcome and the treatment status be given by $P_{ds}=\mathbb P(Y=1, D=d|Z=s)$. We denote the underlying parameters by $\theta_b=(P_{11},P_{10},P_{01}, P_{00}, P_0, P_1) \in [0,1]^6$.
Following the same arguments as above,
the sensitivity and robust region depend on the marginal outcome distributions of the compliers.
The presence of defiers is again parameterized by $\pi_{\textnormal{DF}}$, and the parameter of outcome heterogeneity simplifies to
$$ \delta_b= \max_{d \in \{0,1\}} |P_d^{CO}-P_d^{DF}|. $$
The outcome probabilities of compliers are bounded from below by
\begin{align}
\underline{P}_d^{CO}( \pi_{\textnormal{DF}},\delta_b) & = \max \left\lbrace 0, \frac{P_{dd} - \pi_{\textnormal{d}} }{\pi_{\textnormal{CO}}}, \frac{P_{dd} - P_{d(1-d)} }{\pi_{\textnormal{CO}}}, \frac{P_{dd} - P_{d(1-d)} - \pi_{\textnormal{DF}} \delta_b }{\pi_{\Delta} }\right\rbrace, \label{equ_lower_bound_binary}
\end{align}
and from above by
\begin{align}
\overline{P}_d^{CO}(\pi_{\textnormal{DF}},\delta_b)&= \min \left\lbrace 1, \frac{P_{dd} }{\pi_{\textnormal{CO}}},
\frac{P_{dd} - P_{d(1-d)} +\pi_{\textnormal{DF}} }{\pi_{\textnormal{CO}} }, \frac{P_{dd} - P_{d(1-d)} + \pi_{\textnormal{DF}} \delta_b }{\pi_{\Delta} } \right\rbrace . \label{equ_upper_bound_binary}
\end{align}
\begin{corollary}\label{extensions_corollarybinary}
Suppose Assumption~\ref{assumptionLATE} holds, and the data generating process is compatible with the sensitivity parameters $(\pi_{\textnormal{DF}}, \delta_b)$. The outcome probabilities of compliers are bounded by
$ \underline P_d^{CO} \leq P_d^{CO} \leq \overline P_d^{CO}$.
Moreover, there exist data generating processes which are compatible with the above assumptions such that the outcome probabilities of compliers equal any value within these bounds. Thus, the bounds are sharp.
\end{corollary}
The interpretation of the width of these bounds follows the same reasoning as in Section~\ref{sec_simple_bounds_maintext}.
The lower bound of the population size of defiers simplifies to
\begin{align*}
\underline{\pi}_{DF}= \max_{d \in \{0,1\}} & \left\lbrace \sum_{y=0}^1 \max \{0, \mathbb P (Y=y, D=d|Z=1-d) - \mathbb P (Y=y, D=d|Z=d ) \} \right\rbrace.
\end{align*}
The upper bound on $\pi_{\textnormal{DF}}$ cannot be simplified further and is given by \eqref{equ_maximal_pdf}.
The lower bound on outcome heterogeneity is given by
\begin{align*}
\underline{\delta}_b(\pi_{\textnormal{DF}}) = \frac{\underline{\pi}_{DF}}{\pi_{\textnormal{DF}} }.
\end{align*}
The lower bound on the sensitivity parameter $\delta$ decreases with the population size of defiers.
The upper bound on the sensitivity parameter $\delta$ is given by the maximal difference between the outcome probabilities of compliers and defiers
\begin{gather*}\overline{\delta}_b(\pi_{\textnormal{DF}}) = \max_{d \in \{0,1\}} \max \{ |\underline P_d^{CO}(\pi_{\textnormal{DF}} , 1) - \underline P_d^{DF}(\pi_{\textnormal{DF}} , 1)| ,|\overline P_d^{CO}(\pi_{\textnormal{DF}} , 1) - \overline P_d^{DF}(\pi_{\textnormal{DF}} , 1)| \}.
\end{gather*}
The sensitivity parameter space is given by
$$
\textnormal{SR}_b=\{ (\pi_{\textnormal{DF}} , \delta_b) \in [\underline{\pi}_{DF}, \overline{\pi}_{DF}] \times [0,1]: \underline{\delta}_b(\pi_{\textnormal{DF}}) \leq \delta_b \leq \overline{\delta}_b(\pi_{\textnormal{DF}}) \},
$$
and the robust region for the claim $\Delta_{CO}\geq \mu$, if $\underline P_1^{CO}(\underline{\pi}_{DF} , \underline{\delta}_b) - \overline P_0^{CO}(\underline{\pi}_{DF} , \underline{\delta}_b) \geq \mu$, is given by
\begin{align*}
RR_b= \{ (\pi_{\textnormal{DF}} , \delta_b) \in \textnormal{SR}_b: \underline P_1^{CO}(\pi_{\textnormal{DF}} , \delta_b) - \overline P_0^{CO}(\pi_{\textnormal{DF}} , \delta_b) \geq \mu\}.
\end{align*}
Using the simple algebraic structure of the bounds on the outcome probabilities, a closed-form expression for both the robust and the sensitivity region can be derived. As this expression is rather lengthy without providing much intuition, we state it in Appendix~\ref{appendixproofextensions_corollarybinary}.
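For instance, a direct numerical evaluation of \eqref{equ_lower_bound_binary} and \eqref{equ_upper_bound_binary} and of the robust-region condition takes only a few lines. In this sketch, $\pi_{\textnormal{CO}}$ and $\pi_{\textnormal{d}}$ are passed as inputs; in practice they would be computed from $\pi_{\textnormal{DF}}$ and the observed choice probabilities.
\begin{verbatim}
# Binary-outcome bounds on the compliers' outcome probabilities and a
# check of the robust region for the conclusion Delta_CO >= mu.
def p_co_bounds(P_dd, P_dmd, pi_co, pi_df, pi_d, delta_b):
    pi_delta = pi_co - pi_df
    lo = max(0.0, (P_dd - pi_d) / pi_co, (P_dd - P_dmd) / pi_co,
             (P_dd - P_dmd - pi_df * delta_b) / pi_delta)
    up = min(1.0, P_dd / pi_co, (P_dd - P_dmd + pi_df) / pi_co,
             (P_dd - P_dmd + pi_df * delta_b) / pi_delta)
    return lo, up

def robust(P11, P10, P01, P00, pi_co, pi_df, pi_1, pi_0, delta_b, mu):
    lo1, _ = p_co_bounds(P11, P10, pi_co, pi_df, pi_1, delta_b)
    _, up0 = p_co_bounds(P00, P01, pi_co, pi_df, pi_0, delta_b)
    return lo1 - up0 >= mu
\end{verbatim}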
\section{Estimation and Inference} \label{sectioninference}
Even though the contribution of this paper is the derivation of the sensitivity and robust region for a particular empirical conclusion, we consider some methods for estimation and inference of these two regions. While the technical details are deferred to Appendix~\ref{appendixestimationandinference}, in this section, we sketch the main issues of conducting inference in this setting and our proposed solutions. To simplify the exposition, we consider the setting of a continuous and a binary outcome variable, but our method is not restricted to these distributions.
Throughout this section, we assume that we have access to the data $\{(Y^z_{i},D^z_{i})\}_{i=1}^{n_z}$ for $z\in \{0,1\}$ that are independent and identically distributed according to the distribution of $(Y,D)$ conditionally on $Z=z$ with support $\mathbb{Y}\times \{0,1\}$. We denote this distribution by $(Y^z, D^z)$ and we let $n=n_0+n_1$, where $n_0/n$ converges to a nonzero constant as $n\rightarrow \infty$.\footnote{We discuss this assumption in Assumption~\ref{assumption_inference_sampling_distribution}.}
\subsection{Estimation}
To construct estimators of the sensitivity and robust region for a particular empirical conclusion, we note that the identification arguments for these regions are constructive. It follows from Section~\ref{sectionsensitivtyanalysis} that the boundaries of both regions are identified by the following mapping,\footnote{The signs of the components of the mapping~$\phi(\theta, \pi_{\textnormal{DF}})$ simplify the subsequent analysis.}
\begin{align}
\phi(\theta, \pi_{\textnormal{DF}}) = (\underline{\pi}_{DF}, \, - \overline{\pi}_{DF}, \, \underline{\delta}(\pi_{\textnormal{DF}}), \, - \overline{\delta}( \pi_{\textnormal{DF}}), \, BP(\pi_{\textnormal{DF}})), \label{equ_mapping}
\end{align} which is evaluated at the sensitivity parameter $\pi_{\textnormal{DF}} \in \left[0,0.5\right)$ and the underlying parameters $\theta$ defined in \eqref{eq_underlying_parameters}.
Estimating the sensitivity and robust region is then equivalent to estimating this mapping.
To do so, we consider
estimates of the underlying parameters $\theta$ that are simply obtained by replacing unknown population quantities by their corresponding nonparametric sample counterparts and by standard nonparametric kernel methods.
We denote the estimates of $\theta$ by $\widehat \theta$. Point estimates of the mapping $\phi(\theta, \pi_{\textnormal{DF}})$ can then be derived by simple plug-in methods. We defer a detailed description to Appendix~\ref{appendixestimation}.
\subsection{Goal of Inference}\label{sec:goalofinference}
We propose to construct confidence sets for the sensitivity and robust region such that the confidence set for the sensitivity region is an outer confidence set and the one for the robust region is an inner confidence set.\footnote{Considering inner confidence sets for the robust region follows \cite{masten2020}.} These confidence sets should therefore
jointly satisfy, with probability approaching the confidence level $1-\alpha$, that (i) any sensitivity parameter pair of the sensitivity region lies within the confidence set for the sensitivity region and (ii) no parameter pair of the nonrobust region lies within the confidence set for the robust region.\footnote{To give one more interpretation of the confidence sets, using the language of hypothesis testing: a sensitivity parameter pair $(\pi_{\textnormal{DF}}, \delta)$ does not lie in the confidence set for the sensitivity region only if we can reject that it belongs to the sensitivity region with confidence level $1-\alpha$. Conversely, $(\pi_{\textnormal{DF}}, \delta)$ lies in the confidence set for the robust region only if we can reject that it is nonrobust with confidence level $1-\alpha$. The confidence sets are constructed so that the hypothesis tests are valid uniformly in the sensitivity parameter space.}
Let $\widehat{\textnormal{SR}}_{L}$ and $\widehat{RR}_{L}$ denote two sets of sensitivity parameters. They satisfy the described condition if
\begin{align}
{\underset{n \rightarrow \infty}{\lim} \; \mathbb{P}( \textnormal{SR} \subseteq \widehat{\textnormal{SR}}_{L} \text{ and } \widehat{RR}_{L}(\textnormal{SR}) \subseteq RR(\textnormal{SR}) ) \geq 1-\alpha}\label{equ_goal_of_inference}.
\end{align}
Based on the definition of the mapping $\phi (\theta, \pi_{\textnormal{DF}}) $, it therefore suffices to construct lower confidence bands for the components of $\phi (\theta, \pi_{\textnormal{DF}}) $, as functions of $\pi_{\textnormal{DF}}$, that are jointly valid.\footnote{Throughout this section, we consider confidence sets that are uniformly valid in the sensitivity parameter space, but not necessarily in the distribution of the underlying parameters $\theta$.} That is, we need to find a function $\phi_L (\widehat \theta, \pi_{\textnormal{DF}})$ that is componentwise a uniform lower bound of $\phi ( \theta, \pi_{\textnormal{DF}}) $ in $\pi_{\textnormal{DF}}$, so that\footnote{We verify this equivalence in Appendix~\ref{sec_preliminariesforconfidencesets}.}
\begin{align}
\lim_{n \rightarrow \infty} \mathbb P \left(\max_{ 1 \leq l \leq 5 } \sup_{ \pi_{\textnormal{DF}} \in [0,0.5) } e_l^\top (\phi_L (\widehat \theta, \pi_{\textnormal{DF}}) - \phi ( \theta, \pi_{\textnormal{DF}})) \leq 0\right) \geq 1- \alpha, \label{equ_goal_of_indeference_2}
\end{align}
where $e_l$ is the $l$-th unit vector.\footnote{Conservative confidence sets for only the average treatment effect of compliers at specific values of $(\pi_{\textnormal{DF}}, \delta)$ directly follow from the presented procedure. To obtain nonconservative confidence sets, one can follow the literature on partially identified parameters \citep[see, e.g.,][]{imbens2004confidence}.}
\subsection{Inference for a Continuous Outcome Variable}
We analyze the distribution of $\phi(\widehat \theta, \pi_{\textnormal{DF}})$ in order to construct confidence sets for the mapping $\phi(\theta, \pi_{\textnormal{DF}})$.\footnote{We want to emphasize that this procedure is valid for a fixed distribution. In particular, we do not consider settings of weak instruments or data generating processes which are such that the robust region becomes empty.}
Under regularity assumptions presented in Appendix~\ref{sec_appendix_assumption}, the estimators of the underlying parameters, $\widehat \theta$, converge at rate $\sqrt{n}$ to a tight Gaussian process.
Since the mapping $\phi$ is not Hadamard-differentiable, as it involves minima, maxima, suprema, and infima of random functions, standard Delta method arguments do not apply in this setting \citep[see][]{fansantos2016}. We propose a method to construct confidence sets that are asymptotically conservative but valid in the sense of \eqref{equ_goal_of_inference}. It is based on ideas of population smoothing that have been suggested by, e.g., \cite{haile2003, chernozhukov2010quantile, masten2020}.
Instead of working with the mapping $\phi$, which identifies the sensitivity and robust region, we construct a smoothed mapping, $\phi_\kappa$, which yields valid bounds on both regions. The smoothed mapping $\phi_\kappa$ is indexed by a fixed smoothing parameter $\kappa\in \mathbb N$ and is differentiable, so that the standard functional Delta method can be applied and its asymptotic distribution can be studied by standard methods.
The mapping $\phi_\kappa$ is further such that it yields an outer set of the sensitivity region and an inner set of the robust region. This reasoning implies that confidence sets of the smooth mappings $\phi_\kappa$, which are valid in the sense of \eqref{equ_goal_of_inference}, are also valid for the mapping $\phi$.
In finite samples, the choice of the smoothing parameter $\kappa$ comprises the trade-off of constructing conservative confidence sets and better finite sample approximations of the underlying distributions.
Suppose the smoothing parameter $\kappa$ is small. In that case, the smoothed sensitivity and robust region are very similar to the original regions, but the finite-sample distribution of $\phi_\kappa(\widehat \theta, \pi_{\textnormal{DF}})$ might not be well approximated by its asymptotic distribution.
Vice versa, suppose the smoothing parameter $\kappa$ is large. The finite-sample distribution of $\phi_\kappa(\widehat \theta, \pi_{\textnormal{DF}})$ might then be well approximated by its asymptotic distribution, but the smoothed sensitivity and robust region are conservative relative to the original regions.
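To illustrate the role of $\kappa$, consider a generic smooth approximation of the maximum; this logsumexp device is only an illustration of the idea of population smoothing and need not coincide with the exact construction in Appendix~\ref{sec_population_smoothing}.
\begin{verbatim}
import numpy as np

# Smooth (differentiable) surrogate for max; as kappa grows, the
# approximation error shrinks, mirroring the trade-off in the text.
def smooth_max(x, kappa):
    x = np.asarray(x, dtype=float)
    return np.log(np.sum(np.exp(kappa * x))) / kappa

vals = [0.10, 0.40, 0.35]
for kappa in (5, 50, 500):
    print(kappa, smooth_max(vals, kappa))  # tends to max(vals) = 0.40
\end{verbatim}
Being differentiable, such a surrogate restores the applicability of the functional Delta method at the cost of a controlled approximation bias.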
In Appendix~\ref{sec_population_smoothing}, we show how the smoothed mappings can be constructed. It then follows that plug-in estimators of the smoothed mappings converge at rate $\sqrt{n}$ to a Gaussian process by standard functional Delta method arguments.
The covariance structure of this process is, in general, rather complicated and tedious to estimate. We, therefore, apply the nonparametric bootstrap to simulate its distribution. Consistency of this bootstrap procedure follows from arguments of \cite{fansantos2016}.
In Appendix~\ref{appendixestimationandinference}, we show how to construct the confidence sets based on the described procedure and that they achieve the outlined goal \eqref{equ_goal_of_inference}.
\subsection{Inference for a Binary Outcome Variable}
Following the discussion of the binary outcome model in Section~\ref{sec:binaryoutcomemodel}, the mapping yielding the sensitivity and the robust region for a particular conclusion for a binary outcome variable is given by\footnote{Its precise definition follows from Section~\ref{sec:binaryoutcomemodel} and Appendix~\ref{sec:appendix:inference_binary}.} $$\phi_b(\theta_b, \pi_{\textnormal{DF}})= (\underline{\pi}_{\text{DF,b}}, -\overline{\pi}_{\text{DF,b}}, - \overline{\delta}_b( \pi_{\textnormal{DF}}), BP_b(\pi_{\textnormal{DF}})).$$ The interpretation of $\phi_b$ follows the one for a continuously distributed outcome variable, and in principle, we could apply the same inference procedure as described above. However, the mapping $\phi_b(\theta_b, \pi_{\textnormal{DF}})$ is substantially simpler than the mapping $\phi(\theta, \pi_{\textnormal{DF}})$, so that we can apply more classical inference procedures to obtain confidence sets in the sense of \eqref{equ_goal_of_inference}; in particular, we follow ideas of \cite{masten2020} and the literature on moment inequalities \citep[see, e.g.,][]{andrews2010inference}.
Under standard sampling assumptions, it follows that the estimators of the underlying parameters are jointly $\sqrt{n}$-asymptotically normally distributed (see Appendix~\ref{sec:appendix:inference_binary}). The mapping $\phi_b(\theta_b, \pi_{\textnormal{DF}})$ is clearly not Hadamard-differentiable, as it consists of minima and maxima of random functions. Standard Delta method arguments are therefore not applicable here either. Valid confidence sets could be obtained by projection arguments, which, however, are known to be conservative in general.
We show instead that the mapping $\phi_b$ is Hadamard directionally differentiable in the direction of $\theta$ when evaluated at finitely many points $\{ \pi_{\textnormal{DF}}^k\}_{k=1}^K$. Using generalized Delta method arguments, the estimator of the mapping $\phi_b$ converges to a tight random process, which is a continuous transformation of a Gaussian process, indexed by the finite set $\{\pi_{\textnormal{DF}}^k\}_{k=1}^K$. As this limiting distribution is rather complicated, we do not base our inference procedure directly on it; instead, one can choose among various modified bootstrap methods to simulate this distribution, e.g., subsampling or the numerical Delta method \citep[see][]{dumbgen1993nondifferentiable, Hong2016}. In this paper, we follow a bootstrap method based on ideas from the moment inequality literature \citep[see, e.g., ][]{andrews2010inference, bugni2010bootstrap}, and we explain the procedure in detail in Appendix~\ref{sec:appendix:inference_binary}.
Based on this bootstrap procedure, we can construct valid lower confidence sets for $\phi_b$ indexed at the finite set of sensitivity parameters $\{\pi_{\textnormal{DF}}^k\}_{k=1}^K$. Using these confidence sets and exploiting the functional form of $\phi_b$, we then obtain lower confidence sets for the estimator of the mapping $\phi_b$, which are uniformly valid in $\pi_{\textnormal{DF}}$. We state these arguments precisely in Appendix~\ref{sec:appendix:inference_binary} and show that these confidence sets are asymptotically valid in the sense of our goal of inference \eqref{equ_goal_of_inference}.
\section{Simulations}\label{sectionsimulations}
\subsection{Setup}
We study the finite sample performance of the proposed estimators of the sensitivity and robust regions through a Monte Carlo study. We consider different data generating processes with varying degrees of violations of monotonicity, implying different sizes and shapes of both the sensitivity and robust regions. Specifically, we consider the population sizes $(\pi_{\textnormal{CO}}, \pi_{\textnormal{DF}}) \in \{ (0.35, 0.05) , (0.25,0.15) \}$, where $\pi_{\textnormal{NT}}=\pi_{\textnormal{AT}} =0.3.$
We set $\mathbb{P}(Z=1)=0.5$ and we generate the outcome by
\begin{align*}
Y^{CO}_1& \sim \mathcal{B}( 1,0.5+\Delta_{CO}) & Y^{DF}_1& \sim \mathcal{B}( 1,0.5+\Delta_{DF}) &
Y^{AT}_1, Y^{NT}_0 , Y^{DF}_0, Y^{CO}_0& \sim \mathcal{B}(1, 0.5),
\end{align*}
where $\Delta_{CO} \in \{0.3,0.1\}$, $\Delta_{DF} \in \{0,-0.3\}$, and $\mathcal{B}( 1,p)$ denotes the Bernoulli distribution with parameter $p$.
The sensitivity region is nonempty as the data generating process satisfies our model assumptions.
We consider the empirical conclusion of a positive treatment effect of compliers, so that the robust region is nonempty in each of the data generating processes as the Wald estimand is positive. The bootstrap procedure requires choosing the tuning parameter $\eta$, which is explained in Appendix~\ref{sec:appendix:inference_binary}. We consider the values of $\eta$ given by $\{0.2, 0.5,1,1.5,2\}/\sqrt{n}$. The results are based on 10,000 Monte Carlo draws.
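A sketch of one draw from this data generating process (the group and outcome construction follows the display above; all names are ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# One draw from the simulation DGP: groups CO, DF, AT, NT with the
# stated shares; treated compliers and defiers have shifted outcomes.
def draw(n, pi_co, pi_df, d_co, d_df, pi_at=0.3, pi_nt=0.3):
    z = rng.binomial(1, 0.5, n)
    g = rng.choice(4, n, p=[pi_co, pi_df, pi_at, pi_nt])  # 0:CO 1:DF 2:AT 3:NT
    d = np.where(g == 0, z,
        np.where(g == 1, 1 - z, np.where(g == 2, 1, 0)))
    p_treated = np.where(g == 0, 0.5 + d_co,
                np.where(g == 1, 0.5 + d_df, 0.5))
    y = rng.binomial(1, np.where(d == 1, p_treated, 0.5))
    return y, d, z

y, d, z = draw(10_000, pi_co=0.35, pi_df=0.05, d_co=0.3, d_df=0.0)
\end{verbatim}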
\subsection{Simulation Results}
Table~\ref{table_mc_sim} shows the simulated rates at which the confidence sets cover the population sensitivity region and exclude the nonrobust region for the different data generating processes and choices of the tuning parameter. Our considered choice of tuning parameters implies that the simulated coverage of our confidence sets is close to the nominal level in most data generating processes.
These results illustrate that the confidence sets perform reasonably well in finite samples.
\begin{table}
\centering
\caption{Simulated coverage rates of the sensitivity and robust region for a positive treatment effect. }\label{table_mc_sim}
\begin{tabular}{cccrrrrr}
\toprule
$\pi_{\textnormal{CO}}$ & $\Delta_{CO}$ & $\Delta_{DF}$ & $\eta=0.2$ & $ \eta=0.5$ & $ \eta=1$ & $\eta=1.5$ & $\eta=2$ \\
\midrule
\rule{0pt}{3ex}
\multirow{4}{*}{0.35} & 0.3 & 0 & 99.1 & 97.8 & 95.1 & 91.3 & 90.8 \\
& 0.3 & -0.3 & 96.9 & 94.3 & 91.2 & 92.5 & 91.6 \\
& 0.1 & 0 & 99.3 & 98.5 & 96.0 & 92.9 & 91.2 \\
& 0.1 & -0.3 & 99.3 & 98.5 & 95.6 & 93.1 & 91.3 \\
\rule{0pt}{3ex}
\multirow{4}{*}{0.25} & 0.3 & 0 & 98.9 & 98.1 & 95.2 & 92.4 & 91.2 \\
& 0.3 & -0.3 & 99.1 & 98.0 & 94.3 & 92.9 & 91.4 \\
& 0.1 & 0 & 99.3 & 98.6 & 96.0 & 93.5 & 91.2 \\
& 0.1 & -0.3 & 99.0 & 97.7 & 94.3 & 92.9 & 90.9 \\
\bottomrule
\end{tabular}
\begin{flushleft}
\footnotesize{The data generating process and the expressions follow the description of the text. Results are based on 10,000 Monte Carlo draws.}
\end{flushleft}
\end{table}
\section{Empirical Application}\label{sectionempirical}
To illustrate our proposed framework, we apply this sensitivity analysis to data from \cite{angristevens1998}, who analyze the effect of having a third child on the labor market outcomes of mothers. We show that even small violations of the monotonicity assumption may have a large impact on the robustness of the estimated treatment effects, to the point that even the sign of the treatment effects may be indeterminate. The same-sex instrument of \cite{angristevens1998} arguably satisfies Assumption~\ref{assumptionLATE}. The independence assumption is plausible by the following reasoning: the sex of a child is determined by nature, and arguably only the number of children, not their sex, influences the labor market outcome. The relevance assumption is testable. However, monotonicity might be violated. We apply the proposed sensitivity analysis to evaluate the robustness of the estimated treatment effects to a potential violation of monotonicity in this setting. For simplicity, we focus on two outcome variables: the labor market participation of mothers and their annual wage.\footnote{The annual wage is a continuously distributed variable with a point mass at zero.} The binary outcome captures the extensive margin, and the continuous outcome a mix of the extensive and intensive margins.
We use the same data as \cite{angristevens1998}.\footnote{The data are taken from Joshua D. Angrist's website, www.economics.mit.edu/faculty/angrist, and stem from 1980. The sample is restricted to white women aged 20-36 with at least two children who had their first child at the age of 19-25.} The sample size is 211,983. The point estimate of the difference between the population sizes of compliers and defiers is 0.06.
\subsection{Sensitivity Analysis for Binary Outcome Variable }
We consider the labor market participation of mothers as the outcome variable. The Wald estimate is given by $-0.13$. Figure~\ref{figurelates_example} illustrates the 95\% confidence set for the sensitivity and the robust region for the claim that the treatment effect of compliers is negative. The formal definition of these confidence sets is given in Section~\ref{sec:goalofinference}.
\begin{figure}
\centering
\input{Graphics/binary_outcome_angrist_evans_res}
\caption[Application - Confidence sets for the sensitivity and robust region I.]{
Confidence sets for the sensitivity and robust region for a negative treatment effect of compliers.
The confidence level is 95\%. The treatment effect of compliers is the effect of having a third child on the labor market participation of mothers complying with the same-sex instrument. The black lines bound the sensitivity region, and the red line indicates the boundary of the robust region. The population size of defiers is on the horizontal axis, and the outcome heterogeneity between compliers and defiers on the vertical axis. \label{figurelates_example}}
\end{figure}
In this example, a (conservative) 95\% confidence set for $\pi_{\textnormal{DF}}$ is given by $[0, 0.37]$. Following the literature, one can therefore not conclude that monotonicity is violated in this example \citep[see for a comparison, e.g., ][]{small2017instrumental}.
The sensitivity parameter pairs below the red line represent the robust region, which is the estimated set of sensitivity parameters implying a negative treatment effect.
This figure shows that concerns about the validity of the monotonicity assumption have to be taken seriously. Since $BP(0.37)$ is almost zero,
the conclusion of a negative treatment effect cannot be drawn without imposing further assumptions on the data generating process.
If the population size of defiers increases, the breakdown frontier declines relatively steeply, and thus the robust region is rather small. This implies that relatively strong assumptions on the outcome distributions of compliers and defiers have to be imposed to conclude that the treatment effect is negative in the presence of defiers. In contrast, if the population size of defiers is small, it is not necessary to impose strong assumptions about heterogeneity in the outcome variables to conclude that the effect is negative.
This example shows that without imposing any assumptions on the data generating process, only non-informative conclusions can be drawn, because the population size of defiers is hardly restricted and its upper bound is arguably implausibly high.\footnote{
To interpret these numbers, we note that the upper bound is a rather conservative estimate. If roughly 37\% of the population were defiers, then approximately 43\% of the population would be compliers. This reasoning implies that roughly 80\% of the population would base their decision to have a third child on the sex composition of the first two children.}
One might therefore be willing to impose further assumptions to arrive at more informative results, and we show how one could plausibly proceed. These assumptions serve only as an example, and they obviously have to be adapted to the situation under analysis. We adopt the approach of \cite{de2017tolerating}. One of the most essential inherently unknown quantities of interest is the population size of defiers. Imposing a smaller upper bound on this quantity based on economic reasoning allows us to derive sharper results. Based on a survey conducted in the US, \cite{de2017tolerating} argues that 5\% is a conservative upper bound on the population size of defiers in this setting.
If one is willing to impose this assumption, one would still have to assume that the difference between the outcome distributions of compliers and defiers in the Kolmogorov-Smirnov norm is less than 0.05, which is a quite strong assumption. Therefore, we would conclude that the treatment effect is not robust to a potential violation of monotonicity in this specific example.
\subsection{Sensitivity Analysis for Continuous Outcome Variable}
We now consider the annual log income of the mothers. This variable has a point mass at zero, representing all women who do not work, but is otherwise continuously distributed. The Wald estimate is given by $-1.23$. If the monotonicity assumption were not violated, this estimate would imply that having a third child reduces the annual log wage of mothers by $1.23$. Figure~\ref{figureapplication:cont} shows the corresponding 95\% confidence sets for the sensitivity and the robust region.
The same line of interpretation applies as in the case of a binary outcome variable. Without imposing any assumption on the population size of defiers, the empirical conclusion of a negative treatment effect is not robust to a potential violation of monotonicity. However, applying the same reasoning as above and imposing 5\% as an upper bound on the population size of defiers, the empirical conclusion becomes robust to a potential violation of monotonicity.
\begin{figure}
\centering
\resizebox{0.8\textwidth}{!}{ \input{Graphics/application_pic_continuous}}
\caption[Application - Confidence sets for the sensitivity and robust region II.]{Confidence sets for the sensitivity and robust region for a negative treatment effect of compliers. The confidence level is 95\%. The treatment effect of compliers is the effect of having a third child on the annual log wage of mothers complying with the same-sex instrument. The black lines bound the sensitivity region, and the red line indicates the boundary of the robust region.
\label{figureapplication:cont}}
\end{figure}
To conclude, this sensitivity analysis is informative, as one can identify the sign and the order of magnitude of the treatment effects by imposing further assumptions. These assumptions are substantially weaker than the monotonicity assumption, so that the estimates gain credibility.
\section{Conclusion} \label{sectionconclusion}
The local average treatment effect framework is popular for evaluating heterogeneous treatment effects in settings with endogenous treatment decisions and instrumental variables. In some empirical settings, one might doubt the validity of one of its key identifying assumptions, the monotonicity assumption. Conducting a sensitivity analysis of the estimates in these settings improves the reliability of the results. This paper therefore proposes a new framework that allows researchers to assess the robustness of treatment effect estimates to a potential violation of monotonicity. It parameterizes a violation of monotonicity by two components: the presence of defiers and the heterogeneity between defiers and compliers. The former is represented by the population size of defiers, and the latter by a bound in the Kolmogorov-Smirnov norm on the difference between the outcome distributions of the two groups. Based on these two parameters, we derive sharp identified sets for the average treatment effect of compliers, and for any other group under further mild support assumptions on the outcome variable. These identification results allow us to determine the set of sensitivity parameters for which the implied treatment effects are consistent with the empirical conclusion.
The empirical example of the same-sex instrument of \cite{angristevens1998} underlines the importance of the validity of the monotonicity assumption, as small violations of monotonicity may already lead to uninformative results.
\section{Introduction}
Dynamic contrast enhanced MRI (DCE-MRI) of the liver is widely used for detecting hepatic lesions and for distinguishing malignant from benign lesions. However, such images often suffer from motion artifacts due to unpredictable respiration, dyspnea, or mismatches in k-space caused by rapid injection of the contrast agent \cite{motosugi2015investigation}\cite{davenport2013comparison}. In DCE-MRI, a series of T1-weighted MR images is obtained after the intravenous injection of a gadolinium-based MR contrast agent, such as gadoxetic acid. However, acquiring appropriate data sets for DCE arterial phase MR images is difficult due to the limited scan time available in the first pass of the contrast agent. Furthermore, it has been reported that transient dyspnea can be caused by gadoxetic acid at a non-negligible frequency \cite{motosugi2015investigation}\cite{davenport2013comparison}, which results in degraded image quality due to respiratory motion-related artifacts such as blurring and ghosting \cite{stadler2007artifacts}. In particular, coherent ghosting originating from the anterior abdominal wall decreases the diagnostic performance of the images \cite{chavhan2013abdominal}.
Recently, many strategies have been proposed to avoid motion artifacts in DCE-MRI. Of these, fast acquisition strategies using compressed sensing may provide the simplest way to avoid motion artifacts in the liver \cite{vasanawala2010improved}\cite{zhang2014clinical}\cite{jaimes2016strategies}. Compressed sensing is an acquisition and reconstruction technique based on the sparsity of the signal, in which k-space undersampling results in a shorter scan time. Zhang et al. demonstrated that DCE-MRI with a high acceleration factor of 7.2 using compressed sensing provides significantly better image quality than conventional parallel imaging \cite{zhang2014clinical}. Another approach is data acquisition without breath holding (the free-breathing method) using respiratory triggering, which is an effective technique to reduce motion artifacts in patients who are unable to suspend their respiration \cite{vasanawala2010navigated}\cite{chavhan2013abdominal}. In these approaches, sequence acquisitions are triggered based on respiratory tracings or navigator echoes, which typically provide a one-dimensional projection of the abdominal images. Vasanawala et al. found that the image quality in acquisitions with navigator echoes under free-breathing conditions is significantly improved \cite{vasanawala2010navigated}. Although triggering-based approaches successfully reduce motion artifacts, it is not possible to appropriately time arterial phase image acquisition due to the long scan times required to acquire an entire dataset. In addition, mis-triggers often occur in patients with unstable respiration, which causes artifacts and blurring in the images. Recently, a radial trajectory acquisition method with compressed sensing was proposed \cite{feng2014golden}\cite{feng2016xd}, which enables high-temporal-resolution imaging without breath holding in DCE-MRI. However, the image quality of radial acquisition without breath holding is worse than that with breath holding, even though the clinical usefulness of radial trajectory acquisition has been demonstrated in many papers \cite{chandarana2011free}\cite{chandarana2013free}\cite{chandarana2014free}.
Post-processing artifact reduction techniques using deep learning approaches have also been proposed. Deep learning, which is used in complex non-linear processing applications, is a machine learning technique that relies on a neural network with a large number of hidden layers. Han et al. proposed a denoising algorithm using a multi-resolution convolutional network called “U-Net” to remove the streak artifacts induced in images obtained via radial acquisition \cite{han2018deep}. In addition, aliasing artifact reduction has been demonstrated in several papers as an alternative to compressed sensing reconstruction \cite{lee2017deep}\cite{yang2018dagan}\cite{hyun2018deep}. The results of several feasibility studies of motion artifact reduction in the brain \cite{Karsten2018ismrm}\cite{Patricia2018ismrm}\cite{Kamlesh2018ismrm}, abdomen \cite{Daiki2018ismrm}, and cervical spine \cite{Hongpyo2018ismrm} have also been reported. Although these post-processing techniques have been studied extensively, no study has yet demonstrated practical artifact reduction in DCE-MRI of the liver.
In this study, a motion artifact reduction method was developed based on a convolutional network (MARC) for DCE-MRI of the liver that removes motion artifacts from input MR images. Both simulations and experiments were conducted to demonstrate the validity of the proposed algorithm.
\section{Methods}
\subsection{Network architecture}
In this paper, a patch-wise motion artifact reduction method based on a convolutional neural network with multi-channel images (MARC) is proposed, as shown in Fig.~\ref{fig_network}. The network is based on the one originally proposed by Zhang et al. for Gaussian denoising, JPEG deblocking, and super-resolution of natural images \cite{zhang2017beyond}. Patch-wise training has the advantages of extracting large training datasets from a limited number of images and of efficient memory usage on host PCs and GPUs. A residual learning approach was adopted to achieve effective training of the network \cite{he2016deep}. The network relies on two-dimensional convolutions, batch normalizations, and rectified linear units (ReLU) to extract the artifact components from images with artifacts. To utilize the structural similarity of the multi-contrast images, a seven-channel patch image with varying contrast was used as input to the network. In the first layer, 64 filters with a kernel size of 3$\times$3$\times$7, followed by a ReLU activation, were adopted to facilitate the non-linear operation. The number of convolution layers $N_{conv}$ was determined as described in the Analysis subsection. Sixty-four filters with a kernel size of 3$\times$3$\times$64 were used for the second to the penultimate layers, and in the last layer, seven filters with a kernel size of 3$\times$3$\times$64 were used. Finally, a seven-channel image was predicted as the output of the network. The total number of parameters was 268,423. Artifact-reduced images could then be generated by subtracting the predicted image from the input.
\begin{figure}[bt]
\centering
\includegraphics[width=12cm]{Network}
\caption{Architecture of the proposed convolutional neural network, consisting of two-dimensional convolutions, batch normalizations, and ReLU activations. The network predicts the artifact component of an input dataset. The number of convolution layers in the network was determined by a simulation-based method.}
\label{fig_network}
\end{figure}
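A minimal Keras sketch of this architecture is given below. The layer counts, kernel sizes, and channel numbers follow the description above; details such as the padding mode are assumptions of the sketch rather than specifications from the text.
\begin{verbatim}
from tensorflow.keras import layers, models

def build_marc(n_conv=7, n_filters=64, patch=48, channels=7):
    # Predicts the artifact component of a 7-channel patch;
    # subtracting the prediction from the input yields the cleaned patch.
    inp = layers.Input(shape=(patch, patch, channels))
    x = layers.Conv2D(n_filters, 3, padding="same",
                      activation="relu")(inp)
    for _ in range(n_conv - 2):  # intermediate conv + BN + ReLU blocks
        x = layers.Conv2D(n_filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    out = layers.Conv2D(channels, 3, padding="same")(x)  # residual output
    return models.Model(inp, out)
\end{verbatim}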
\subsection{Imaging}
Following Institutional Review Board approval, patient studies were conducted. This study retrospectively included 31 patients (M/F, mean age 59, range 34–79 y.o.) who underwent DCE-MRI of the liver at our institution. MR images were acquired using a 3T MR750 system (GE Healthcare, Waukesha, WI); a whole-body coil and a 32-channel torso array were used for RF transmission and reception, and self-calibrated parallel imaging (ARC) was used with an acceleration factor of 2 $\times$ 2. A three-dimensional (3D) T1-weighted spoiled gradient echo sequence with a dual-echo bipolar readout and variable-density Cartesian undersampling (DISCO: differential subsampling with Cartesian ordering) was used for the acquisition \cite{saranathan2012differential}, along with an elliptical-centric trajectory with pseudo-randomized sorting in ky-kz. A Dixon-based reconstruction method was used to suppress fat signals \cite{reeder2004multicoil}. A total of seven temporal phase images, including one pre-contrast and six arterial phases, were obtained using gadolinium contrast with end-expiration breath-holdings of 10 and 21 s. The standard dose (0.025 mmol/kg) of contrast agent (EOB Primovist, Bayer Healthcare, Osaka, Japan) was injected at a rate of 1 ml/s, followed by a 20-mL saline flush using a power injector. The arterial phase scan was started 30 s after the start of the injection. The acquired k-space datasets were reconstructed using a view-sharing approach between the phases and a two-point Dixon method to separate the water and fat components. The following imaging parameters were used: flip angle = 12$^\circ$, receiver bandwidth = $\pm$167 kHz, TR = 3.9 ms, TE = 1.1/2.2 ms, acquisition matrix size = 320 $\times$ 192, FOV = 340 $\times$ 340 mm$^2$, number of slices = 56, slice thickness = 3.6 mm. The acquired images were cropped to a matrix size of 320 $\times$ 280 after zero-filling to 320 $\times$ 320.
\subsection{Respiration-induced noise simulation}\label{sec_resp_sim}
A respiration-induced artifact was simulated by adding simulated errors to the k-space datasets generated from the magnitude-only images. Generally, a breath-holding failure causes phase errors in the k-space, which result in an artifact along the phase-encoding direction. In this study, for simplicity, rigid motion along the anterior-posterior direction was assumed, as shown in Fig.~\ref{fig_motion}. In this case, the phase error was induced in the phase-encoding direction and was proportional to the motion shift. Motion during readout can be neglected because the readout is completed on a millisecond timescale. Then, the in-phase and out-of-phase MR signals with phase error $\phi$ can be expressed as follows:
\begin{eqnarray}
S'_I(k_x, k_y) &=& S_I (k_x, k_y) e^{-j\phi(k_y)}\\
S'_O(k_x, k_y) &=& S_O (k_x, k_y) e^{-j\phi(k_y)},
\end{eqnarray}
where $S_I$ and $S_O$ are the in-phase and out-of-phase signals, respectively, without the phase error; $S'_I$ and $S'_O$ are the corresponding signals with the phase error, and $k_x$, $k_y$ represent the k-space ($-\pi < k_x < \pi$, $-\pi < k_y < \pi$) in the readout and the phase-encoding directions, respectively. Finally, k-space of the water signal ($S_W$) with the phase error can be expressed as follows:
\begin{eqnarray}
S_W &=& \frac{S'_I+S'_O}{2}\\
&=& \frac{S_I+S_O}{2}e^{-j\phi(k_y)}\\
&=& \mathcal{F}[I_W]e^{-j\phi(k_y)},
\end{eqnarray}
where $\mathcal{F}$ is the Fourier operator, and $I_W$ denotes the water image. It is clear from the above equation that the artifact simulation can be implemented by simply adding the phase error components to the k-space of the water image. In this study, the k-space datasets were generated from magnitude-only water images. To simulate the background $B_0$ inhomogeneity, the magnitude images were multiplied by $B_0$ distributions derived from polynomial functions up to the third order. The coefficients for the functions were determined randomly so that the peak-to-peak value of the distribution was within $\pm5$ ppm ($\pm4.4$ radians).
To generate motion artifacts in the MR images, we used two kinds of phase error patterns: periodic and random. Generally, severe coherent ghosting artifacts are observed along the phase-encoding direction. Although several factors generate artifacts in the acquired images during DCE-MRI, including respiratory and voluntary motion, pulsatile arterial flow, view-sharing failure, and unfolding failure \cite{stadler2007artifacts}\cite{arena1995mr}, the artifact from the abdominal wall along the phase-encoding direction is the most recognizable. In the case of centric-order acquisitions, phase mismatching in the k-space results in high-frequency, coherent ghosting. An error pattern based on a simple sine wave with random frequency, phase, and duration was used to simulate the ghosting artifact. It was assumed that motion oscillations caused by breath-hold failures occurred after a delay as the scan time proceeded. The phase error can be expressed as follows:
\[
\phi(k_y) = \left\{ \begin{array}{ll}
0 & (|k_y| < k_{y0} ) \\
2\pi \frac{k_y \Delta \sin(\alpha k_y + \beta)}{N} & (otherwise),
\end{array} \right.
\]
where $\Delta$ denotes the magnitude of the motion, $\alpha$ is the frequency of the sine wave, $\beta$ is the phase of the sine wave, and $k_{y0}$ ($0 < k_{y0} < \pi$) is the delay before the onset of the phase error. In this study, the values of $\Delta$ (from 0 to 20 pixels, which equals 2.4-2.6 cm depending on the FOV), $\alpha$ (from 0.1 to 5 Hz), $\beta$ (from 0 to $\pi /4$), and $k_{y0}$ (from $\pi/10$ to $\pi/2$) were selected randomly. The frequency $\alpha$ was determined such that it covered the normal respiratory frequency for adults and elderly adults, which is generally within 0.2-0.7 Hz \cite{rodriguez2013normal}. In addition to the periodic noise, a random phase error pattern was used to simulate non-periodic irregular motion, as follows. First, the number of phase-encoding lines containing a phase error was randomly determined as between 10 and 50\% of all phase-encoding lines, excluding the center region of the k-space ($-\pi /10 < k_{y} < \pi /10$). Then, the magnitude of the error was determined randomly line-by-line in the same manner as for the periodic noise.
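The periodic error model can be applied to a magnitude image as in the sketch below, which transcribes the equations above; the Fourier transform is implemented with an FFT, and the centre of k-space is left error-free:
\begin{verbatim}
import numpy as np

def add_periodic_phase_error(img, delta, alpha, beta, ky0_frac):
    # img: 2-D magnitude water image (readout x phase-encoding)
    # delta: motion amplitude in pixels; alpha, beta: sine freq./phase
    # ky0_frac: fraction of k-space around the centre left error-free
    n_ro, n_pe = img.shape
    ky = np.linspace(-np.pi, np.pi, n_pe)   # phase-encoding axis
    phi = 2 * np.pi * ky * delta * np.sin(alpha * ky + beta) / n_pe
    phi[np.abs(ky) < ky0_frac * np.pi] = 0.0  # delay before motion onset
    k = np.fft.fftshift(np.fft.fft2(img))     # k-space of the water image
    k = k * np.exp(-1j * phi)[np.newaxis, :]  # error along PE direction
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))
\end{verbatim}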
\begin{figure}[bt]
\centering
\includegraphics[width=11cm]{Motion_Noise}
\caption{(Left) Example of a simulation of the respiratory motion artifact, produced by adding phase errors along the phase-encoding direction in k-space. (Right) The k-space and image datasets before and after adding the simulated phase errors.}
\label{fig_motion}
\end{figure}
\subsection{Network Training}
The processing was implemented in MATLAB 2018b on a workstation running Ubuntu 16.04 LTS with an Intel Xeon CPU E5-2630, 128 GB DDR3 RAM, and an NVIDIA Quadro P5000 graphics card.
The data processing sequence used in this study is summarized in Fig.~\ref{fig_dataset}. Training datasets containing artifact and residual patches were generated using multi-phase magnitude-only reference images (RO $\times$ PE $\times$ SL $\times$ Phase: 320 $\times$ 280 $\times$ 56 $\times$ 7) acquired from six patients, selected by a radiologist from among the 26 patients in the study. The radiologist confirmed that all reference images were successfully acquired without motion artifacts. From the multi-phase slices (320 $\times$ 280 $\times$ 7) of these images, 125,273 patches of size 48 $\times$ 48 $\times$ 7 were randomly cropped. Patches that contained only background signal were removed from the training datasets. Images with motion artifacts (artifact images) were generated from the reference images, as explained in the previous subsection. Artifact patches, which were used as inputs to MARC, were cropped from the artifact images in the same manner as the reference patches. Finally, residual patches, which were used as the output of the network, were generated by subtracting the reference patches from the artifact patches. All patches were normalized by dividing them by the maximum value of the artifact images.
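A minimal sketch of this patch generation is shown below; the background criterion (\texttt{bg\_thresh}) is an assumption of the sketch, as the text does not specify how background-only patches were detected:
\begin{verbatim}
import numpy as np

def make_patches(ref_vol, art_vol, n=1000, size=48, bg_thresh=0.05,
                 rng=None):
    # ref_vol, art_vol: arrays of shape (RO, PE, 7); paired reference
    # and artifact images. Returns (input, residual) patch arrays,
    # normalized by the maximum value of the artifact images.
    rng = np.random.default_rng() if rng is None else rng
    scale = art_vol.max()
    xs, ys = [], []
    while len(xs) < n:
        i = rng.integers(0, ref_vol.shape[0] - size)
        j = rng.integers(0, ref_vol.shape[1] - size)
        ref = ref_vol[i:i + size, j:j + size, :] / scale
        art = art_vol[i:i + size, j:j + size, :] / scale
        if ref.mean() < bg_thresh:  # skip background-only patches
            continue
        xs.append(art)
        ys.append(art - ref)        # residual patch = training target
    return np.stack(xs), np.stack(ys)
\end{verbatim}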
Network training was performed using Keras with a TensorFlow backend (Google, Mountain View, CA), and the network was optimized using the Adam algorithm with a learning rate of 0.001. The optimization was conducted with mini-batches of 64 patches. A total of 100 epochs, with an early-stopping patience of 10 epochs, were completed for convergence purposes. The L1 loss function was used because the residual components between the artifact patches and the outputs were assumed to be sparse:
\begin{equation}
Loss(I_{art}, I_{out}) = \frac{1}{N}\sum_i^N \| I_{art} - I_{out} \|_1,
\end{equation}
where $I_{art}$ represents the artifact patches, $I_{out}$ represents the outputs predicted using MARC, and $N$ is the number of data points. Validation of the L1 loss was performed using K-fold cross-validation (K = 5).
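The training configuration can be sketched in Keras as follows, assuming the \texttt{build\_marc} function from the architecture sketch and patch arrays \texttt{x\_train}, \texttt{y\_train}, \texttt{x\_val}, and \texttt{y\_val} produced by the data processing step:
\begin{verbatim}
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import Adam

model = build_marc()  # network sketch from the architecture section
model.compile(optimizer=Adam(learning_rate=0.001),
              loss="mae")  # mean absolute error, i.e. the L1 loss
history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    batch_size=64, epochs=100,
                    callbacks=[EarlyStopping(patience=10)])
\end{verbatim}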
The $N_{conv}$ used in the network was determined by maximizing the structural similarity (SSIM) index between the reference and artifact-reduced patches of the validation datasets. Here, the SSIM index is a quality metric used for measuring the similarity between two images, and is defined as follows:
\begin{equation}
SSIM(I_{ref}, I_{den}) = \frac{(2\mu_{ref} \mu_{den}+c_1)(2\sigma_{ref,den}+c_2)}{(\mu_{ref}^2 + \mu_{den}^2 + c_1)(\sigma_{ref}^2 + \sigma_{den}^2+c_2)},
\end{equation}
where $I_{ref}$ and $I_{den}$ are the reference and artifact-reduced patches, $\mu$ is the mean intensity, $\sigma$ denotes the standard deviation, and $c_1$ and $c_2$ are constants. In this study, the values of $c_1$ and $c_2$ were set as described in \cite{wang2004image}.
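In practice, the SSIM index can be computed with an off-the-shelf implementation, for example with scikit-image, whose default constants correspond to those of \cite{wang2004image}:
\begin{verbatim}
from skimage.metrics import structural_similarity

# ref_patch, den_patch: 2-D reference and artifact-reduced patches,
# here assumed to be scaled to [0, 1] so that data_range=1.0.
ssim_value = structural_similarity(ref_patch, den_patch, data_range=1.0)
\end{verbatim}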
The L1 loss after 100-epoch training was plotted against the number of patients used for training to investigate the relationship between the size of the training dataset and the training performance. The average sample size for one patient was 11,333 patches: 7,916 training patches and 3,417 validation patches. A validation dataset of 37,582 patches used for these trainings was generated from 11 patients.
\begin{figure}[bt]
\centering
\includegraphics[width=12cm]{Dataset}
\caption{Data processing for the training. The artifact images were simulated from the reference images. Residual images were calculated by subtracting the reference patches from the artifact patches. A total of 125,273 patches were generated by randomly cropping small images from the original images.}
\label{fig_dataset}
\end{figure}
\subsection{Analysis}\label{sec_anal}
To demonstrate the performance of MARC in reducing artifacts in DCE-MR images acquired during unsuccessful breath holding, the following experiments were conducted using the data from the 20 remaining patients in the study. To identify biases in the intensities and the liver-to-aorta contrast between the reference and artifact-reduced images, a Bland–Altman analysis, which plots the difference between the two images versus their average, was used, in which the intensities were obtained from the central slice in each phase. The Bland–Altman analysis for the intensities was conducted in subgroups of high (mean intensity $\geq$ 0.46) and low (mean intensity < 0.46) intensity; for convenience, half of the maximum mean intensity (0.46) was used as the threshold. The mean signal intensities of the liver and aorta were measured by manually placing regions-of-interest (ROIs) on the MR images, and the ROI of the liver was carefully located in the right lobe to exclude vessels. The same ROIs were applied to all other phases of the images. The quality of the images before and after applying MARC was visually evaluated by a radiologist (M.K.) with three years of experience in abdominal radiology, who was not told whether each image came from before or after MARC was applied. The radiologist evaluated the images using a 5-point scale based on the significance of the artifacts (1 = no artifact; 2 = mild artifacts; 3 = moderate artifacts; 4 = severe artifacts; 5 = non-diagnostic). The scores were compared statistically using the Wilcoxon signed-rank test. To confirm the validity of the anatomical structure after applying MARC, the artifact-reduced images in the arterial phase were compared with images without motion artifacts, which were obtained from separate MR examinations performed 71 days apart in the same patients. The same sequence and imaging parameters were used for the acquisition.
\section{Results}
The mean and standard deviation ($\sigma$) of the SSIM index between the reference and artifact-reduced images are plotted against $N_{conv}$ in Fig.~\ref{fig_SSIM} (a). The results show that networks with an $N_{conv}$ of more than four exhibited a better SSIM index, while networks with an $N_{conv}$ of four or below had a poor SSIM index. In this study, an $N_{conv}$ of seven was adopted in the experiments, as this value maximized the SSIM index (mean: 0.87, $\sigma$: 0.05). The training was successfully terminated by early stopping at 70 epochs, as shown in Fig.~\ref{fig_SSIM} (b). Figure~\ref{fig_SSIM} (c) shows the training and validation losses, and the sample size, plotted against the number of patients used for training. The results imply that stable convergence was achieved when the number of patients was three or more, whereas training with fewer patients gave inappropriate convergence. The features extracted by the trained network at the 1st, 4th, and 8th intermediate layers for a specific input and output are shown in Fig.~\ref{fig_Features}. Higher-frequency, ghosting-like patterns were extracted from the input in the 8th layer.
Figures \ref{fig_BAplot} (a) and (b) show the Bland–Altman plots of the intensities and the liver-to-aorta contrast ratios between the reference and artifact-reduced images. The differences in the intensities between the two images (mean difference = 0.01 (95\% CI, -0.05-0.04) for mean intensity < 0.46 and mean difference = -0.05 (95\% CI, -0.19-0.01) for mean intensity $\geq$ 0.46) were heterogeneously distributed, depending on the mean intensity. The intensities of the artifact-reduced images were lower than those of the references by approximately 15\% on average in the areas with high signal intensity, as shown in Fig.~\ref{fig_BAplot} (a). The Bland–Altman plot of the liver-to-aorta contrast ratio (Fig.~\ref{fig_BAplot} (b)) showed no systematic errors in contrast between the two images.
The image quality of the artifact-reduced images (mean (SD) score = 2.7 (0.77)) was significantly better (P < 0.05) than that of the original images (mean (SD) score = 3.2 (0.63)), and the respiratory motion-related artifacts (Fig.~\ref{fig_results}, top row) were reduced by applying MARC (Fig.~\ref{fig_results}, bottom row). The middle row in Fig.~\ref{fig_results} shows the residual components extracted from the input images.
The images with and without breath-hold failure are shown in Fig.~\ref{fig_Comparison} (a, b). The motion artifact in Fig.~\ref{fig_Comparison} (b) was partially reduced by using MARC, as shown in Fig.~\ref{fig_Comparison} (c). This result indicates that there was no loss of critical anatomical details and no additional blurring, although a moderate artifact on the right lobe remained.
\begin{figure}[bt]
\centering
\includegraphics[width=7cm]{SSIM}
\caption{(a) SSIM changes depending on the number of layers ($N_{conv}$). The highest SSIM (0.89) was obtained with an $N_{conv}$ of 7. (b) The L1 loss decreased in both the training and validation datasets as the number of epochs increased. No further decrease was visually observed after 70 epochs, and the training was terminated by early stopping at 80 epochs. Error bars on the validation loss represent the standard deviation for K-fold cross-validation. (c) Validation loss, training loss, and sample size plotted against the number of patients. Smaller losses were observed as the sample size and the number of patients increased.}
\label{fig_SSIM}
\end{figure}
\begin{figure}[bt]
\centering
\includegraphics[width=11cm]{Features}
\caption{Features extracted from the 1st, 4th, and 8th layers of the developed network for a specific input and output. Low- and high-frequency components were observed in the lower layers. On the other hand, an artifact-like pattern was extracted in the higher layer.}
\label{fig_Features}
\end{figure}
\begin{figure}[bt]
\centering
\includegraphics[width=11cm]{BA_plot}
\caption{Bland–Altman plots for (a) the intensities and (b) the liver-to-aorta contrast ratio between the reference and artifact-reduced images in the validation dataset. The mean difference in the intensities was 0.01 (95\% CI, -0.05-0.04) in areas with a mean intensity of < 0.46 and -0.05 (95\% CI, -0.19-0.01) in areas with a mean intensity of $\geq$ 0.46. The mean difference in the contrast ratio was 0.00 (95\% CI, -0.02-0.02). These results indicate that there were no systematic errors in the contrast ratios, whereas the intensities of the artifact-reduced images were lower than those of the reference images due to the effect of artifact reduction, especially in areas with high signal intensities.}
\label{fig_BAplot}
\end{figure}
\begin{figure}[bt]
\centering
\includegraphics[width=12cm]{Filter_Results}
\caption{Examples of artifact reduction with MARC for a patient from the validation dataset. The motion artifacts in the images (upper row) were reduced (lower row) by using MARC. The residual components are shown in the middle row.}
\label{fig_results}
\end{figure}
\begin{figure}[bt]
\centering
\includegraphics[width=12cm]{Comparison}
\caption{(a, b) MR images in the arterial phase with and without breath-hold failure. (c) The artifact-reduced version of (b). The images were acquired in different studies with the same imaging parameters.}
\label{fig_Comparison}
\end{figure}
\section{Discussion}
In this paper, an algorithm to reduce motion-related artifacts after data acquisition was developed using a deep convolutional network, which extracts the artifact components from local multi-channel patch images. The network was trained using reference MR images acquired with appropriate breath-holding, with noisy images generated by adding phase errors to the reference images. The number of convolution layers in the network was semi-optimized in the simulation. Once trained, the network was applied to MR images of patients who had failed to hold their breath during data acquisition. The results of the experimental studies demonstrate that MARC successfully extracted the residual components of the images and reduced the motion artifacts and blurring. To our knowledge, no study has previously demonstrated blind artifact reduction in abdominal imaging, although many motion correction algorithms based on navigator echoes or respiratory signals have been proposed \cite{vasanawala2010navigated}\cite{brau2006generalized}\cite{cheng2012nonrigid}. Those approaches require additional RF pulses and/or longer scan times to fill k-space, whereas MARC enables motion artifact reduction without sequence modification or additional scan time. The processing time for one slice was 4 ms, resulting in about 650 ms for all slices of one patient. This computational cost is acceptable for practical clinical use.
In MRI of the liver, DCE-MRI is mandatory in order to identify hypervascular lesions, including hepatocellular carcinoma \cite{tang2017evidence}\cite{chen2016added}, and to distinguish malignant from benign lesions. At present, almost all DCE-MR images of the liver are acquired with a 3D gradient echo sequence due to its high spatial resolution and fast acquisition time within a single breath hold. Despite recent advances in imaging techniques that improve the image quality \cite{yang2016sparse}\cite{ogasawara2017image}, it remains difficult to acquire uniformly high quality DCE-MRI images without respiratory motion-related artifacts. In terms of reducing motion artifacts, the unpredictability of patients is the biggest challenge to overcome, as the patients who will fail to hold their breath are not known in advance. One advantage of the proposed MARC algorithm is that it is able to reduce the magnitude of artifacts in images that have been already acquired, which will have a significant impact on the efficacy of clinical MR.
In the current study, an optimal $N_{conv}$ of seven was selected based on the SSIM indexes of the reference images and the artifact-reduced images after applying MARC. The low SSIM index observed for small values of $N_{conv}$ was thought to be due to the difficulty of modeling the features of the input datasets with only a small number of layers. On the other hand, a slight decrease in the SSIM index was observed for $N_{conv} > 12$. This result implies that overfitting of the network occurred when too many layers were used. To overcome this problem, a larger number of training datasets and/or regularization and optimization of a more complicated network would be required.
Several other network architectures have been proposed for the denoising of MR images. For example, U-Net \cite{ronneberger2015u}, which consists of downsampling and upsampling layers with skip connections, is a widely used fully convolutional network for the segmentation \cite{dalmics2017using}, reconstruction, and denoising \cite{yu2017deep} of medical images. This architecture, which was originally designed for biomedical image segmentation, uses multi-resolution features to implement segmentation with high localization accuracy. Most of the artifacts observed in MR images, such as motion, aliasing, or streak artifacts, are distributed globally in the image domain because the noise and errors contaminate the k-space domain. Because U-Net has a large receptive field, such artifacts can be removed effectively using global structural information. Generative adversarial networks (GANs) \cite{goodfellow2014generative}, which comprise two networks, called the generator and the discriminator, are another promising approach for denoising MR images. Yang et al. proposed a network to remove aliasing artifacts in compressed sensing MRI using a GAN-based network with a U-Net generator \cite{yang2018dagan}. We used patched images instead of full-size images because it was difficult to implement appropriate training with a limited number of datasets, as well as owing to computational limitations. We believe this approach is reasonable because the pattern of the respiratory motion artifact looks similar in every patch, even though the artifact is distributed globally. Although this should be studied further, our results suggest that MARC trained on patches can be generalized to full-size images. Recently, the AUtomated TransfOrm by Manifold APproximation (AUTOMAP) method, which uses fully connected and convolution layers, has been proposed for MRI reconstruction \cite{zhu2018image}. AUTOMAP directly transforms the domain from k-space to image space, and thus enables highly flexible reconstruction for arbitrary k-space trajectories. Three-dimensional CNNs, which are network architectures for 3D images \cite{kamnitsas2017efficient}\cite{chen2018efficient}, are also promising. However, these networks require a large number of parameters, large memory on GPUs and host computers, and long computational times for training and hyperparameter tuning. Therefore, it remains challenging to apply these approaches in practical applications. These network architectures may also be combined to achieve higher spatial and temporal resolution. It is anticipated that further studies will be conducted on the use of deep learning strategies in MRI.
The limitations of the current study were as follows. First, the clinical significance was not fully assessed. While the image quality appeared to improve in almost all cases, it will be necessary to confirm that no anatomical or pathological details are removed by MARC before this approach can be applied clinically. Second, simple centric acquisition ordering was assumed when generating the training datasets, which means that MARC can only be applied to a limited set of sequences. Additional training will be necessary before MARC can be generalized to more pulse sequences. In addition, more realistic simulations could further improve our algorithm, because the noise simulation in this study was based on the assumption that ghosting originates from simple rigid motion. Moreover, the artifacts were simulated in k-space data generated from images intended for clinical use; simulation using the original k-space data may yield different results. Further research is needed to reveal which approach is more appropriate for artifact simulation.
Research on the diagnostic performance of deep learning-based filters has not been performed sufficiently, in spite of the considerable effort spent on the development of such algorithms. Our approach can only add structures and texture to the input images based on information learned from the training datasets; therefore, no essential information is added by MARC, although the image quality based on visual assessment was improved. However, even non-essential improvement may help non-expert or inexperienced readers to find lesions in the images. Further research on diagnostic performance will be required to demonstrate its clinical usefulness.
\section{Conclusion}
In this study, a deep learning-based network was developed to remove motion artifacts from DCE-MRI images of the liver. The results of the experiments showed that the proposed network effectively removed motion artifacts from the images. These results indicate that deep learning-based networks have the potential to remove even unpredictable motion artifacts from images.
\section{Introduction}
Pandemics have caused death and destruction several times throughout human history, with large and lasting impacts on society: for example, the Black Death in 14\textsuperscript{th} century Europe \citep{herlihy_black_1997}, the Spanish flu in 1918-1920 \citep{johnson_updating_2002}, HIV in the 1980s \citep{pope_transmission_2003}, and H1N1 in 2009 \citep{trifonov_origin_2009}. In 2019 and 2020, policymakers have struggled to design effective policy responses to the COVID-19 pandemic, as authorities have resorted to unprecedented and widespread use of temporary emergency measures such as lockdowns, mass quarantines, and other ``social distancing'' measures, even though the human and economic consequences of these \textit{ad-hoc} interventions are poorly understood. It is clear that more research is needed on effective policy intervention during pandemics, and on how to mitigate their human and economic impacts \citep{bauch_covid-19_2020}.
Therefore, we incorporate a model of disease transmission into a model of economic growth, considering that policymakers can implement temporary policies that simultaneously slow the spread of the disease and lower the economic output. We then select model parameters so as to represent the global economy under the impact of COVID-19, and perform numerical simulations of various policies to provide insights into the trade-offs between short- and long-term human and economic outcomes. The policy simulations will help policymakers understand how altering the starting time, intensity, and duration of the policy intervention can impact the outcomes, and contribute to the understanding of how to design effective policies to confront rapidly spreading pandemics.
A growing body of literature has been devoted to studying the economic impacts of social distancing measures and the design of emergency policies during pandemics. For instance, \cite{eichenbaum_macroeconomics_2020} constructed a mathematical model and ran numerical experiments that resulted in important insights for policymakers during a pandemic, although the study abstracts from forces that affect long-term economic development. \cite{andersson_optimal_2020} use a similar framework to study policy responses and the trade-off between output and health during a pandemic, operating with a welfare function that only includes the period of the epidemic, which therefore also does not account for lasting impacts of the pandemic on the economy. Our study is similar to these studies, although our model includes capital stock and population growth, which enables us to look at possible long-term effects beyond the duration of the pandemic. \cite{guan_global_2020} consider the global economic impacts of the COVID-19 pandemic in a model with multiple countries and productive sectors, suggesting that the economic impacts propagate through supply chains, and that earlier, stricter, and shorter lockdowns will minimise the economic damages. \cite{acemoglu_optimal_2020} include a simplified evaluation of economic loss into an SIR-type epidemiological model that distinguishes between age groups, and find that lockdown policies that are tailored to the different age groups are more efficient. \cite{la_torre_optimal_2020} model the social costs of an epidemic, in which a social planner must choose to allocate tax revenues between prevention and treatment, showing that the optimal allocation depends on the infectivity rate of the epidemic. \cite{alvarez_simple_2020} studied the optimal lockdown policy in a linear economy, where each fatality incurs a cost.
However, even before the onset of the COVID-19 pandemic, several studies had focused on the determinants and incentives of social distancing during epidemics \citep{fenichel_economic_2013, perrings_merging_2014, kleczkowski_spontaneous_2015, toxvaerd_equilibrium_2020, quaas_social_2020}. The models in these studies mainly apply economic insights to improve epidemiological models by examining and modelling individual contact decisions.
On the relationship between epidemics and economic growth, \cite{herlihy_black_1997} and \cite{hansen_malthus_2002} argue that the increased mortality caused by the Black Death resulted in great economic damage by decreasing labour supply, which induced the substitution of labour for capital and triggered an economic modernisation that eventually lead to greater economic growth. \cite{delfino_positive_2000}, however, argue that there are causal links in both directions in the interaction between the economy and disease transmission, and proposed a model that combines disease transmission into a model of economic growth, although the model is not used to explore temporary policy interventions during specific rapidly-spreading pandemics. \cite{bonds_poverty_2010}, \cite{ngonghala_poverty_2014} and \cite{ngonghala_general_2017} show that poverty traps\footnote{That is, in this particular case, that a greater disease burden leads to more poverty, and more poverty leads to a greater disease burden, so as to create a self-perpetuating effect.} frequently arise when combining models of infectious diseases and of economic development, and that these may help explain the differences in economic development between countries. \cite{goenka_infectious_2010}, \cite{goenka_infectious_2014} and \cite{goenka_infectious_2019} also integrate disease transmission directly into an economic growth model, and allow investments in health -- building \textit{health capital} -- to affect the epidemiological parameters. Optimal investments in health and the accumulation of health capital, however, are different from designing temporary policies during a pandemic. Most of these studies are based on disease models that allow the individual to contract the disease multiple times, which is consistent with some diseases that are common in the developing world, such as malaria or dengue fever, but may not be applicable to COVID-19. Furthermore, most of these models do not include disease-related mortality, one of the main channels through which serious pandemics affect the economy \citep{hansen_malthus_2002}. These studies, however, show that integrating disease transmission in economic growth models can result in multiple steady states, and we are therefore careful to choose an approach that is not sensitive to this: we use our model to run numerical experiments, solving the numerical optimisation problem using backwards induction.
The macroeconomic impact of the HIV/AIDS epidemic has also received much research attention, although many studies assume the disease transmission is exogenous \citep{cuddington_modeling_1993, cuddington_further_1993, cuddington_assessing_1994, haacker_modeling_2002, arndt_hiv-aids_2003, cuesta_how_2010}. \cite{azomahou_hivaids_2016} allow the mortality rate to depend on health expenditure, but the disease transmission still remains exogenous to their model. \cite{bell_macroeconomics_2009} studied government investments in health and education in an overlapping generations model of HIV/AIDS in Africa. However, the HIV/AIDS epidemic and the COVID-19 pandemic are so different that we do not expect the insights from these studies to transfer directly to the COVID-19 pandemic.
The COVID-19 pandemic has shown that many authorities are prepared to implement strict and dramatic emergency policies at short notice in order to slow down the spread of a serious pandemic. However, our understanding of the economic and human consequences of such measures is still incipient. At the same time, implementing such policies is delicate, and, if done improperly, authorities could damage the economy whilst still failing to lower the transmission rate of the disease. There is therefore an urgent need for research that develops guidelines for the use of these emergency measures, and to help policymakers understand their impacts and consequences over time.
We contribute to the study of the efficiency, impacts, and consequences of temporary emergency measures during a pandemic by incorporating a policy parameter into a model that integrates disease transmission dynamics and economic growth. Our model provides a theoretical framework for understanding the impact of emergency policies on the trajectory of the pandemic as well as the main economic variables, in light of their mutual interactions. To gain a deeper understanding of the emergency policies, we select parameter values that are consistent with the global economy under the impact of COVID-19, and numerically simulate a large number of scenarios for possible emergency policies. Using this simulation-based approach, we investigate the impact of altering the starting date of policy intervention, the intensity of the policy intervention, and the duration of the policy intervention. Altering the simulated policies along these three dimensions provides insights into the impact of the emergency measures on the trajectory of the pandemic and the development of the main economic variables. These insights could help policymakers design effective emergency responses to pandemics.
Our work differs from earlier studies in some important aspects. Principally, the period of interest in our study extends beyond the duration of the pandemic. Therefore, our model includes the dynamics of capital accumulation, population growth and pandemic deaths, which have not been jointly considered in previous studies. These components are important to study the impact of the pandemic on economic growth beyond the short- and medium term. Other novel aspects of our model include a relationship between the reduction in economic output and the reduction in the infection rate in the short run, a mortality rate that depends on the infection rate, and explicitly modelled excess costs of hospital admissions due to the pandemic. Although some of these features are included in previous models, they have not yet been combined into a single comprehensive framework. In addition, our study contains some early estimates of the economic impacts of COVID-19 on Europe's five largest economies, constructed by analysing changes in real-time and high-frequency data on electricity demand. We also make our data and custom computer code freely available, which will hopefully be useful to the research community and contribute to future developments in the area.
In addition to this introduction, this paper consists of three sections. The following section presents a mathematical framework that incorporates a model of disease transmission into a model of economic growth, shows how model parameter values were chosen to fit the model to the global economy during the COVID-19 pandemic, and details the numerical experiments that were performed. In the third section, we present, interpret, and discuss the results of the numerical experiments and their implications. In the final section, we summarise the main findings of the study.
\section{Pandemic in an Economic Growth Model}
Here we detail the integration of an epidemiological model into a neoclassical model of economic growth -- known as one of the ``workhorses'' of modern macroeconomics \citep{acemoglu_introduction_2011}. We first modify the SIR (Susceptible-Infected-Recovered) model of the spread of an infection, pioneered by \cite{kermack_contribution_1927}, then incorporate it into a model setup similar to the classical Ramsey-Cass-Koopmans model in discrete time. We then explain how we select functional forms and parameters to represent the global economy and the global spread of COVID-19, before outlining a set of numerical experiments designed to give insight into the economic and epidemiological impacts of varying the starting time, intensity and duration of the policy interventions.
\subsection{Incorporating Pandemic Dynamics into a Neoclassical Model of Economic Growth}
Here we combine an epidemiological model with a model of economic growth, concentrating on three main bridges between the models. First, we assume that the spread of a pandemic reduces the labour force, since infected or deceased individuals will not work, and this reduces economic output. Second, we assume that society incurs additional direct costs, for instance due to the hospitalisation of infected individuals, and these costs must be covered with output that would otherwise have been consumed or invested. Third, we assume that governments may, through policy, simultaneously impact both the spread of the pandemic and the efficiency of economic production. These interactions between the spread of the pandemic and economic growth are the main focus of our model, which jointly represents the dynamics of the spread of the pandemic and the dynamics of economic growth.
The SIR model \citep{kermack_contribution_1927, brauer_mathematical_2012} is a simple Markov model of how an infection spreads in a population over time. This model divides a population ($N$) into three categories: Susceptible ($S$), Infected ($I$), and Recovered ($R$). In each period, the number of susceptible individuals who become infected is the product of the susceptible population, the number of individuals who are already infected, and an infection rate $b$. A given proportion ($r$) of the infected individuals also recovers in each period.
To incorporate the SIR model into a model of economic growth, we make two adaptations to the basic SIR model. First, we introduce a distinction between recovered individuals ($R$) and deceased individuals ($D$), since recovered individuals will re-enter the labour force whereas deceased individuals will not: each period, infected individuals will recover at a rate $r$ and pass away at a rate $m$. Second, instead of considering the population to be of a fixed size, we allow the population to grow over time. Population growth is usually negligible at the timescale of interest for models of epidemics or pandemics, but it is significant in the timescales of economic growth. Therefore, we introduce a logistic model for population growth, and new individuals will be added to the number of susceptible individuals each period. Using two parameters, $a_1$ and $a_2$, to describe the population growth, we can describe the spread of the pandemic in the population as follows:
\begin{align}
\label{eq:popgrowth}
N_{t+1} &= a_1 N_t + a_2 N_t^2 - m I_t \\
\label{eq:susceptible}
S_{t+1} &= S_t + (a_1-1)N_t + a_2 N_t^2 - b S_t I_t \\
\label{eq:infected}
I_{t+1} &= I_t + b S_t I_t - r I_t - m I_t \\
\label{eq:recovere}
R_{t+1} &= R_t + r I_t \\
\label{eq:deceased}
D_{t+1} &= D_t + m I_t.
\end{align}
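For concreteness, one daily update of equations \eqref{eq:popgrowth} through \eqref{eq:deceased} can be transcribed directly (parameter values are placeholders):
\begin{verbatim}
def epidemic_step(N, S, I, R, D, a1, a2, b, r, m):
    # One daily update of the modified SIR model with
    # logistic population growth and disease-related mortality.
    births = (a1 - 1) * N + a2 * N**2  # net new individuals, excl. deaths
    new_infections = b * S * I
    N_next = a1 * N + a2 * N**2 - m * I
    S_next = S + births - new_infections
    I_next = I + new_infections - r * I - m * I
    R_next = R + r * I
    D_next = D + m * I
    return N_next, S_next, I_next, R_next, D_next
\end{verbatim}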
Many variations of the basic SIR model already exist, and it would be possible to incorporate more complex dynamics into the epidemiological model. However, this model will be sufficient for our current purposes.
The model of economic growth assumes that a representative household chooses what quantity of economic output ($Y$) to consume ($C$) or save (invest) each period in order to maximise an infinite sum of discounted utility, represented by a logarithmic utility function. Output is produced by combining labour and capital ($K$) using a technology represented by a Cobb-Douglas production function with constant returns to scale and total factor productivity $A_t$. However, we allow pandemic policy, represented by $p$, to reduce the total output, and furthermore assume that only susceptible and recovered individuals are included in the labour force:
\begin{gather}
Y_t = (1-p) A_t K_t^\alpha (S_t+R_t)^{1-\alpha},
\end{gather}
in which $\alpha$ represents the output elasticity of capital, and total factor productivity $A_t$ grows at a constant rate $g$:
\begin{gather}
A_{t+1} = (1+g)A_t.
\end{gather}
Assuming that physical capital ($K_t$) depreciates at a rate of $\delta$ from one period to the next and that the pandemic causes direct costs $H(\cdot)$ to society, the capital stock accumulates according to the following transition equation:
\begin{gather}
\label{eq:capitalaccumulation}
K_{t+1} = (1-\delta) K_t + Y_t - C_t - H(\cdot).
\end{gather}
Given a utility discount factor $\beta$, we assume that a benevolent social planner chooses an infinite stream of consumption $\{ C_t \}_{t=0}^\infty$ to optimise the discounted sum of logarithmic utility, solving the following maximisation problem:
\begin{gather*}
\max_{\{ C_t \}_{t=0}^\infty } \sum_{t=0}^\infty \beta^t N_t \ln{\left( \frac{C_t}{N_t} \right) },
\end{gather*}
while respecting the restrictions represented by equations \eqref{eq:popgrowth} through \eqref{eq:capitalaccumulation}. Since our period of interest is actually finite, this maximisation problem can be solved numerically by backwards induction, provided that the terminal period is chosen so far in the future that it will not interfere with the period of interest\footnote{The implementation of the model, \emph{Macroeconomic-Epidemiological Emergency Policy Intervention Simulator}, is available at \url{https://github.com/iantrotter/ME3PI-SIM}.}.
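To illustrate how the epidemiological and economic blocks interact, the sketch below simulates the model forward under a given policy path; for brevity it replaces the planner's optimal consumption choice with a fixed savings rate and assumes direct costs of the simple form $H(\cdot) = h I_t$, reusing \texttt{epidemic\_step} from the sketch above:
\begin{verbatim}
def simulate(T, state0, params, policy, savings_rate=0.25):
    # policy(t) -> p in [0, 1]: intensity of intervention on day t.
    # savings_rate is a simplification standing in for the planner's
    # optimal consumption choice; H(I) = h * I is an assumed cost form.
    N, S, I, R, D, K, A = state0
    a1, a2, b, r, m, alpha, delta, g, h = params
    path = []
    for t in range(T):
        p = policy(t)
        Y = (1 - p) * A * K**alpha * (S + R)**(1 - alpha)  # output
        C = (1 - savings_rate) * Y
        K = (1 - delta) * K + Y - C - h * I  # capital transition
        A = (1 + g) * A                      # technological progress
        N, S, I, R, D = epidemic_step(N, S, I, R, D, a1, a2, b, r, m)
        path.append((Y, C, K, N, S, I, R, D))
    return path
\end{verbatim}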
\subsection{Representing the Global Economy and COVID-19}
The model presented in the previous subsection relies on parameters for population dynamics, the spread of a pandemic, production of economic output, and the accumulation of physical capital. In order to perform computational experiments, we need to determine realistic numerical values for these, as well as initial values for the state variables.
One fundamental issue that we must address is that studies of epidemics and of economic growth usually consider different timescales: whereas the spread of an epidemic is usually analysed at a daily or weekly timescale, economic growth is usually studied at an annual, or even a decennial, timescale. To reconcile these differences, we choose a daily timescale for our model. A daily timescale is natural for the epidemiological component, since pandemics spread rapidly and their health effects pass almost entirely within a short timeframe. It is, however, an unusual choice for a model of economic growth, as capital accumulation, technological progress, and population growth are almost negligible from one day to the next. During a pandemic, daily movements of individuals in and out of the labour force can have a large impact on economic production, and indirectly on the accumulation of capital, and a daily resolution is needed to capture these effects adequately. Parameter values for both the economic and the epidemiological components are therefore chosen to represent a daily timescale.
\paragraph{Population Growth} The parameters for the logistic population growth model, $a_1$ and $a_2$, were selected by first estimating a linear regression model on annual global population data from the World Bank between 1960 and 2018\footnote{Available at \url{https://data.worldbank.org/indicator/SP.POP.TOTL}, accessed on 2020-05-04}. The estimation results are shown in Table \ref{tab:pop}, and the regression coefficients $a_1^y$ and $a_2^y$ -- representing the parameters of an annual model -- were converted into their corresponding daily values by calculating:
\begin{gather*}
a_1 = 1 + \frac{a^y_1-1}{365} \quad\quad a_2 = \frac{a^y_2}{365}.
\end{gather*}
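For instance, as an illustrative helper (ours, not taken from the estimation code):
\begin{verbatim}
def annual_to_daily(a1_y, a2_y, days_per_year=365):
    """Convert annual logistic-growth coefficients to daily ones."""
    return 1 + (a1_y - 1) / days_per_year, a2_y / days_per_year
\end{verbatim}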
The fitted values of the population model are shown in Panel (a) of Figure \ref{fig:calibration}, and appear to match the historical data for global population closely.
\input{population_model_parameters.tex}
\paragraph{Capital Stock} We imputed the global physical capital stock by combining annual data on global gross physical capital formation from the World Bank\footnote{Available at \url{http://api.worldbank.org/v2/en/indicator/NE.GDI.TOTL.KD}, accessed on 2020-05-04.} with an assumed physical capital depreciation rate of $\delta = 4.46\%$. This depreciation rate corresponds to the median value of the national depreciation rates for 2017 listed in the Penn World Tables 9.1\footnote{Available for download at \url{www.ggdc.net/pwt}, accessed on 2020-05-04.}, whose distribution is shown in Panel (b) of Figure \ref{fig:calibration}. The resulting estimated level of global physical capital stock from 1990 to 2019 is shown in Panel (c) of Figure \ref{fig:calibration}.
\paragraph{Production} Following \cite{nordhaus_optimal_1992}, we set the output elasticity of capital to $\alpha = 0.3$. We then combine annual data of global output from the World Bank between 1990 and 2018\footnote{Available at \url{http://api.worldbank.org/v2/en/indicator/NY.GDP.MKTP.PP.KD}, accessed on 2020-05-04.} with the global population and the imputed global stock of physical capital, in order to estimate the total factor productivity, $A_t$, and its growth rate, $g$. This gives an annual total factor productivity growth rate of around $1.3\%$, with a corresponding daily growth rate of around $g=3.55 \times 10^{-5}$ over the period. The modelled global production is shown in Panel (d) of Figure \ref{fig:calibration}, and fits the observed data relatively well, although the modelled production level slightly overestimates global production at some points.
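The imputation of total factor productivity amounts to inverting the production function; a sketch, assuming aligned annual arrays for output, capital, and the labour force:
\begin{verbatim}
import numpy as np

def tfp_series(Y, K, L, alpha=0.3):
    """Back out A_t = Y_t / (K_t**alpha * L_t**(1 - alpha)) and the
    average annual growth rate of A_t from annual series."""
    A = Y / (K**alpha * L**(1 - alpha))
    g_annual = np.exp(np.mean(np.diff(np.log(A)))) - 1
    return A, g_annual
\end{verbatim}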
\paragraph{Utility} We select a utility discount factor corresponding to an annual discount rate of $\rho=8\%$. This discount rate allows the simulated investment from the model to match the observed gross physical capital formation in the period between 1990 and 2010, as shown in Panel (d) of Figure \ref{fig:backtest}. Although this discount rate appears somewhat high, it is not unreasonable if we take into consideration that the model represents the global economy, and that large parts of the global population consist of low-income households with high discount rates.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Fig_Calibration.pdf}
\caption{Calibration of the economic parameters. Panel (a): Global population and one-step-ahead predicted population from the fitted Gordon-Schaefer population growth model. Panel (b): National capital depreciation rates from the Penn World Tables 9.1 for 2017, with the dashed black line marking the median value $\delta=4.46\%$. Panel (c): Imputed values for daily global physical capital stock. Panel (d): Daily modelled gross world product, using a Cobb-Douglas production function with imputed daily capital stock, interpolated values for daily global population, output elasticity of capital set to 0.3, a growth rate for total factor productivity corresponding to 0.014 annually, and initial total factor productivity set to match actual production in 1990.}
\label{fig:calibration}
\end{figure}
\paragraph{Excess Direct Pandemic Costs} A pandemic directly causes additional costs to society, which are captured by the function $H(\cdot)$ in our mathematical model. To model these costs, we look to the literature estimating the excess hospital admission costs for a recent similar pandemic, the H1N1 pandemic in 2009: for Spain \citep{galante_health_2012}, Greece \citep{zarogoulidis_health_2012}, Australia and New Zealand \citep{higgins_critical_2011}, New Zealand \citep{wilson_national_2012}, and the United Kingdom \citep{lau_excess_2019}. Figure \ref{fig:costs} shows the direct hospitalisation costs attributed to the H1N1 pandemic in various countries, along with the number of hospital admissions. Based on these previous cost estimates for the H1N1 pandemic, we use a flat cost of $u = 5,722$ USD per hospital admission (see Table \ref{tab:direct_costs}), corresponding to the solid red line in Figure \ref{fig:costs}. Although we assume a flat cost per admission, it may be more reasonable in other contexts -- for instance when applying the model to specific regions -- to consider a cost function with an increasing marginal cost: as hospital capacity becomes constrained in the short run during a surge of admissions, one could expect the unit cost to increase. However, in the context of our global model, we do not distinguish between the regions in which the cases occur, and therefore cannot accurately capture such a saturation effect. Therefore, we choose a direct cost function that is simply linear in the number of hospital admissions.
We assume that $h=14.7\%$ of the confirmed infected cases will be admitted to hospital\footnote{This hospitalisation rate corresponds to the median of US state-level hospitalisation rates reported in the daily COVID-19 reports from the Center for Systems Science and Engineering at Johns Hopkins University, on the 10\textsuperscript{th} of May 2020, available at \url{https://github.com/CSSEGISandData/COVID-19}.}, and our direct cost function is given by:
\begin{gather*}
H_t = u h b S_t I_t.
\end{gather*}
\input{direct_cost_model_parameters}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Fig_Direct_Costs.pdf}
\caption{Direct costs of the H1N1 pandemic, based on data compiled by \cite{lau_excess_2019}.}
\label{fig:costs}
\end{figure}
\paragraph{Infection, Recovery and Mortality Rates} To estimate the recovery and mortality rates, $r$ and $m$, we solve the transition equations for the number of recovered $R_t$ (equation \eqref{eq:recovered}) and the number of deceased $D_t$ (equation \eqref{eq:deceased}) for their respective parameters:
\begin{gather*}
r = \frac{R_{t+1}-R_t}{I_t}, \quad m = \frac{D_{t+1} - D_t}{I_t}.
\end{gather*}
Using daily data for the number of confirmed, recovered and deceased cases, made available by Johns Hopkins University\footnote{Available at \url{https://github.com/CSSEGISandData/COVID-19}, accessed on 2020-05-06.}, we can calculate the recovery and mortality rates for each day, as shown in the bottom two rows of Figure \ref{fig:epidemicparameters}.
To estimate the infection rate, $b$, we solve equation \eqref{eq:infected} for the parameter $b$:
\begin{gather*}
b = \frac{I_{t+1} - (1-r-m)I_t}{S_t I_t}.
\end{gather*}
Taking into account that $I_t$ in the model refers to the number of active cases, whereas the data report the accumulated number of cases, and using the population growth model to help estimate the number of susceptible individuals, we calculate the daily infection rates, shown in the top row of Figure \ref{fig:epidemicparameters}. As the infection rate $b$ varies over time, we choose a relatively high infection rate to represent the infection rate in the absence of policy intervention, $b_0 = 2.041\times 10^{-11}$, which equals the upper quartile (75\%) of the observed infection rates. As we simulate different intervention policies, this base infection rate $b_0$ will be modified.
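A minimal sketch of this back-calculation, assuming aligned daily arrays of cumulative confirmed, recovered, and deceased cases and an estimated susceptible series (guards against division by zero omitted):
\begin{verbatim}
import numpy as np

def daily_rates(confirmed, recovered, deceased, S):
    """Back out daily r, m and b from cumulative case series, using
    I_t = confirmed - recovered - deceased and the SIR transition
    equations solved for the respective parameters."""
    I = confirmed - recovered - deceased        # active cases
    r = np.diff(recovered) / I[:-1]
    m = np.diff(deceased) / I[:-1]
    b = (np.diff(I) + (r + m) * I[:-1]) / (S[:-1] * I[:-1])
    return r, m, b
\end{verbatim}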
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Fig_Parameters_TimeHist.pdf}
\caption{Calibration of the parameters for the SIR model. Panel (a) and (b): the development and distribution of the SIR infection rate, $b$. Panel (c) and (d): the development and distribution of the SIR recovery rate, $r$. Panel (e) and (f): the development and distribution of the SIR mortality rate, $m$.}
\label{fig:epidemicparameters}
\end{figure}
We notice from Figure \ref{fig:epidemicparameters} that the mortality rate appears to rise and fall together with the infection rate\footnote{\cite{alvarez_simple_2020} have also made this observation, and included the effect in their model.}. This might reflect that a lack of capacity in the health system increases the mortality rate, which is therefore a feature that we would like to capture. We therefore estimate $m$ as a function of $b$:
\begin{gather*}
m = k_1 b^{k_2},
\end{gather*}
in which $k_1$ and $k_2$ are constants. Table \ref{tab:mortality} shows the regression for estimating the parameters $k_1$ and $k_2$, and the fitted function is shown in Figure \ref{fig:mortalityrate}, together with the observed values of daily infection and mortality rates.
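Since $\ln m = \ln k_1 + k_2 \ln b$, the estimation amounts to ordinary least squares on logarithms; a sketch:
\begin{verbatim}
import numpy as np

def fit_power_law(b, m):
    """Estimate k1 and k2 in m = k1 * b**k2 by OLS on logs."""
    mask = (b > 0) & (m > 0)
    k2, log_k1 = np.polyfit(np.log(b[mask]), np.log(m[mask]), 1)
    return np.exp(log_k1), k2
\end{verbatim}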
\input{mortality_rate_model_parameters}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Fig_Mortality_x_Infection.pdf}
\caption{Global mortality rate and infection rate.}
\label{fig:mortalityrate}
\end{figure}
For the recovery rate in the simulations, we select the median of the daily recovery rates calculated from the data, $r=0.02099$, which implies that slightly over 2\% of infected individuals recover from one day to the next. The infection rate, $b$, however, will be determined individually for each scenario, and will reflect the pandemic policy simulated in each of the scenarios. The mortality rate $m$ will be determined by the infection rate $b$, according to the relationship between them we estimated earlier.
\paragraph{The Production-Infection Trade-Off} Our model assumes that pandemic policy mainly impacts the spread of the pandemic through manipulating the infection rate $b$, and mainly affects economic growth through the production of economic output, $Y_t$. Our model contains a single parameter, $p$, to represent pandemic policy, which directly represents the shortfall in global production. In order to analyse the trade-off between production and the infection rate, however, we must establish how the infection rate, $b$, is impacted by the policy parameter $p$ -- that is, we must quantify how the infection rate responds to foregone production.
We expect the relationship between the infection rate, $b$, and the GDP shortfall, $p$, to exhibit two specific characteristics. Firstly, we expect a reduction in the infection rate as the GDP shortfall increases, because we assume pandemic policies are designed to reduce the infection rate, with the shortfall in economic production arising as a side-effect. Secondly, we expect the reduction in the infection rate to be greater at first, because we expect measures to be enacted in order from more to less effective, and from less to more disruptive. That is, the infection rate reductions exhibit a form of decreasing returns in the GDP shortfall. We suggest that the percentage reduction in the infection rate, $\Delta b(\%)$, responds to the percentage reduction in GDP, $\Delta \text{GDP}(\%)$, as follows:
\begin{gather*}
\Delta b(\%) = q_1 \Delta \text{GDP}(\%)^{q_2},
\end{gather*}
in which $q_1$ and $q_2$ are constants, and $q_2 \in (0,1)$.
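In the simulations this relationship maps the policy parameter into an infection rate; a sketch (the percentage convention of the inputs must match the one used in the estimation, and we cap the implied reduction at 100\%):
\begin{verbatim}
def infection_rate(p_pct, b0, q1, q2):
    """Infection rate implied by a GDP shortfall of p_pct per cent,
    via the reduction Delta_b(%) = q1 * Delta_GDP(%)**q2."""
    reduction_pct = min(q1 * p_pct**q2, 100.0)
    return b0 * (1 - reduction_pct / 100.0)
\end{verbatim}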
The shortfall in production associated with each infection rate is, however, not straightforward to estimate. Firstly, GDP data are published several months late, which means that a whole pandemic might have long since passed by the time the GDP data are published. Secondly, GDP data are generally aggregated into months, quarters or years, which makes them difficult to associate with a particular infection rate, which can vary greatly over these timeframes. Therefore, we infer the daily reduction in economic production from the shortfall in electricity consumption: electricity consumption data for many countries are available near real-time -- often at sub-hourly resolution -- and electricity consumption is known to correlate well with economic activity (see, for instance, \cite{trotter_climate_2016} and \cite{rodriguez_climate_2019}).
Data on electricity demand (load) is available for many European countries from the ENTSO-E Transparency Platform\footnote{Available at \url{https://transparency.entsoe.eu/}, accessed on 2020-05-13.}, and we utilise data from Europe's five biggest economies, which have all been significantly impacted by COVID-19: France, Germany, Italy, Spain, and the United Kingdom. To measure the daily \textit{shortfall} in electricity consumption, however, we must compare the observed electricity consumption to what it would have been under normal conditions. Therefore, we first need to create a counterfactual representing the electricity consumption under normal conditions. We use the automated forecasting procedure by \cite{taylor_forecasting_2017} to calibrate models on the daily national electricity consumption (load) data from 2015 until March 1, 2020. This period does not include the main impacts of the pandemic, such that forecasts from the models for the period from March 1, 2020 to May 10, 2020 can act as counterfactuals -- how electricity consumption would have been expected to develop under normal conditions. These counterfactuals may then be compared to the observed values in the same period, and allows us to calculate the daily electricity consumption shortfall in terms of percentages.
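A minimal sketch of the counterfactual construction, assuming a dataframe \texttt{load\_df} with columns \texttt{ds} (date) and \texttt{y} (daily national load); the automated procedure of \cite{taylor_forecasting_2017} is distributed as the \texttt{prophet} package (older releases as \texttt{fbprophet}):
\begin{verbatim}
from prophet import Prophet

history = load_df[load_df["ds"] < "2020-03-01"]   # pre-pandemic data
model = Prophet(yearly_seasonality=True, weekly_seasonality=True)
model.fit(history)

future = model.make_future_dataframe(periods=70)  # through early May
forecast = model.predict(future)[["ds", "yhat"]]  # counterfactual load

# Daily shortfall in per cent: observed vs. counterfactual
merged = forecast.merge(load_df, on="ds", how="inner")
merged["shortfall_pct"] = 100 * (merged["yhat"] - merged["y"]) \
                          / merged["yhat"]
\end{verbatim}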
The relationship between electricity consumption and production has been the subject of many studies (often as variants of the ``income elasticity of electricity consumption''), and, synthesising these studies into a useful heuristic, we assume that a 1\% decrease in electricity consumption is associated with a 1.5\% reduction in GDP. The estimated GDP shortfalls for France, Germany, Italy, Spain and the United Kingdom are illustrated in Panel (a) of Figure \ref{fig:prodinfecttime}, which shows a clear increase in the GDP shortfall throughout March 2020, and a stable shortfall of around 10\%-20\% throughout April and the start of May 2020, appearing to correspond closely to the lockdown periods of these countries. Panel (b) of Figure \ref{fig:prodinfecttime} shows the reduction in the infection rate for the five countries over the same time period, with the base period for the infection rate taken to be the first seven days of March, and it is clear that the infection rate has decreased as the GDP shortfall has increased, which is consistent with our expectations. The scatter plot of the estimated GDP shortfall and the reduction in infection rates, shown in Figure \ref{fig:prodinfectscatter}, shows a clear relationship between infection rate reductions and GDP shortfall. The red line in Figure \ref{fig:prodinfectscatter} shows the model, with constants estimated as in Table \ref{tab:infxgdp}. The model fits the observations well, and the estimated values for the constants, $q_1$ and $q_2$, conform to our expectations.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Fig_Production_and_Infection_Timeseries.pdf}
\caption{Weekly estimated GDP shortfall for France, Germany, Italy, Spain, and the United Kingdom. The GDP shortfall is based on the difference between actual and expected electricity consumption. The average weekly infection rate is based on the number of confirmed cases.}
\label{fig:prodinfecttime}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Fig_Production_x_Infection_Scatter.pdf}
\caption{Weekly estimated GDP shortfalls for France, Germany, Italy, Spain, and the United Kingdom, based on the difference between actual and expected electricity consumption. The average weekly infection rate is based on the number of confirmed cases. The red line represents the model estimated on the data.}
\label{fig:prodinfectscatter}
\end{figure}
\input{infection_gdp_regression}
\paragraph{} Table \ref{tab:parameters} summarises the chosen values for the parameters in the model. Having defined parameter values such that the model represents the global economy under the impact of COVID-19, we can now define scenarios that can be simulated numerically, and provide insight into the impact of policy on both economic growth and the spread of the pandemic.
\begin{table}
\centering
\caption{Parameter values.}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{llr}
\toprule
\textbf{Parameter} & \textbf{Description} & \textbf{Value} \\
\midrule
$a_1$, $a_2$ & Logistic population growth (annual) & 1.028, -2.282$\times$10$^{-12}$ \\
$\delta$ & Capital depreciation rate (annual) & 4.46\% \\
$\alpha$ & Output elasticity of capital & 0.3 \\
$g$ & Growth rate of total factor productivity (annual) & 1.3\% \\
$\rho$ & Utility discount rate (annual) & 8\% \\
$u$ & Cost per hospital admission & 5,722 USD \\
$h$ & Hospital admissions per confirmed case & 14.7\% \\
$r$ & Daily recovery rate per active infection & 2.1\% \\
$b_0$ & Base infection rate (no intervention) & $2.041 \times 10^{-11}$ \\
$k_1, k_2$ & Mortality rate parameters, $m = k_1 b^{k_2}$ & 12.561, 0.717 \\
$q_1, q_2$ & Infection rate parameters, $\Delta b(\%) = q_1 \Delta\text{GDP}(\%)^{q_2}$ & 3.677, 0.238 \\
\bottomrule
\end{tabular}
}
\label{tab:parameters}
\end{table}
\subsection{Policy Experiments}
To have a basis for comparison, we first simulate two baseline scenarios: the \textit{No Pandemic} scenario, in which no pandemic occurs, and the \textit{No Intervention} scenario, in which the pandemic occurs with no direct intervention ($p=0$). Comparing the remaining scenarios to the first baseline scenario (\textit{No Pandemic}) will help us understand the impact of the pandemic. In addition, the baseline scenario will provide the initial conditions for population and capital stock at the start of the pandemic, as observations are not yet available. Comparing the remaining scenarios to the second baseline scenario (\textit{No Intervention}) will help us understand the impact of the simulated policy intervention. The initial values for these simulations are shown in Table \ref{tab:baselines}.
Having established the baseline scenarios, we run a series of simulations to investigate three fundamental aspects of the policy intervention. First, we alter the timing of the start of the intervention, to explore the advantages and disadvantages of starting the intervention early or late. Second, we alter the intensity of the intervention, to investigate the differences in the impacts between light and severe interventions. And, finally, we alter the duration of the intervention. That is, by running numerical experiments that vary the policy interventions in commencement, intensity and duration, we answer three fundamental policy questions: ``When?'', ``How much?'', and ``For how long?''. When taken together, these experiments will provide insight into the economic and health impacts of varying policies along these three dimensions, and highlight the trade-offs that policymakers must consider:
\begin{description}
\item[When to intervene?] Holding the intervention intensity and duration fixed at 10\% and 26 weeks, we run simulations altering the start of the policy intervention between April 09, 2020, and July 02, 2020.
\item[How much?] Holding the starting date of the intervention fixed at March 12, 2020 -- the date when the WHO declared COVID-19 to be a pandemic -- and the duration fixed at 26 weeks, we alter the intensity of the intervention from 5\% to 25\%, in steps of 10 percentage points.
\item[For how long?] Keeping the starting date of the intervention fixed at March 12, 2020, and the intervention intensity fixed at 10\%, we alter the duration of the intervention between 4 weeks and 76 weeks.
\end{description}
The initial values used in all these simulations are the same as in the \textit{No Intervention} scenario, specified in Table \ref{tab:baselines}. Taken together, these three sequences of simulations will provide important and actionable insights into the impacts of policy intervention on both economic growth and on the spread of the pandemic that will help policymakers understand the relevant trade-offs.
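Schematically, the three experiment series amount to three loops over a single driver; \texttt{simulate} below is a hypothetical wrapper around the calibrated model, assumed to return the simulated trajectories:
\begin{verbatim}
import datetime as dt

base = dt.date(2020, 3, 12)  # WHO pandemic declaration

# simulate(...) is a hypothetical driver, assumed defined elsewhere.
when = [simulate(start=base + dt.timedelta(weeks=w),
                 intensity=0.10, duration_weeks=26)
        for w in range(4, 17, 2)]                   # vary start date
how_much = [simulate(start=base, intensity=p, duration_weeks=26)
            for p in (0.05, 0.15, 0.25)]            # vary intensity
how_long = [simulate(start=base, intensity=0.10, duration_weeks=d)
            for d in (4, 28, 52, 76)]               # vary duration
\end{verbatim}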
\begin{table}
\centering
\caption{Parameters used for the baseline scenarios.}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{llrrrrrrr}
\toprule
\textbf{Scenario} & \textbf{Start date} & $N_0$ & $I_0$ & $R_0$ & $D_0$ & $b_0$ & $A_0$ & $K_0$ \\
\midrule
\textbf{No Pandemic} & 2019-01-01 & 7.634$\times 10^{9}$ & 0 & 0 & 0& 0 & 1.880 & 2.775$\times 10^{14}$ \\
\textbf{No Intervention} & 2020-01-22 & 7.718$\times 10^9$ & 510 & 28 & 17 & 2.041$\times 10^{-11}$ & 1.906 & 2.827$\times 10^{14}$ \\
\bottomrule
\end{tabular}}
\label{tab:baselines}
\end{table}
\section{Results and Discussion}
\subsection{Backtest 1990-2010}
Before presenting the main simulation results, we first present the results of a backtest, shown in Figure \ref{fig:backtest}; the model captures the main features of the observed historical data. Although the backtest in this case is not an out-of-sample test, due to lack of data, it provides strong support for the economic components of the model.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Fig_Backtest.pdf}
\caption{Model backtest results. Panel (a): The simulated daily development of the global physical capital stock and the daily imputed global physical capital stock. Panel (b): Global daily simulated and observed population. Panel (c): Simulated and observed daily gross world production. Panel (d): Simulated and observed daily global gross physical capital formation.}
\label{fig:backtest}
\end{figure}
\subsection{Baseline Scenarios}
The results of simulating the baseline scenarios -- \textit{No Pandemic} and \textit{No Intervention} -- are shown in Figure \ref{fig:baselines}. As expected, the \textit{No Pandemic} scenario is characterised by steady economic growth, and no infected or deceased individuals. The \textit{No Intervention} scenario, however, shows a large and abrupt drop of around 45\% in production during the first half of 2020, as the pandemic spreads through the population. The number of active infections peaks in mid-June, 2020. As the pandemic subsides, a large proportion of the labour force never returns, as mortalities reach 1.75 billion people, and production recovers only to 85\% of its pre-pandemic value before 2021. Although growth in economic production resumes after the pandemic, production remains 20\%-25\% below the production in the \textit{No Pandemic} scenario until the end of the simulation in 2030. In summary, the \textit{No Intervention} scenario shows substantial loss of human life, as well as a lasting and significant negative impact on production, and we expect that shrewd policy intervention could partially mitigate these impacts.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Fig_Baselines.pdf}
\caption{Simulation results for the baseline scenarios.}
\label{fig:baselines}
\end{figure}
In the following, we run model simulations to gain insight into when to start the policy intervention, to what degree to intervene, and for how long the intervention should last.
\subsection{When to Intervene?}
We first run a series of simulations to examine the question of when a possible policy intervention should start. In this series of simulations, the intervention intensity is held fixed at 10\% (that is, the intervention causes a 10\% decline in production), and the duration of the intervention is held fixed at 26 weeks. Multiple simulations are run with differing starting dates for the policy intervention. This series of simulations is shown in Figure \ref{fig:startdates}, with three possible starting dates for the policy intervention: April 9, May 21, and July 2.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Fig_StartDates.pdf}
\caption{Simulation results when varying the starting date of the policy intervention.}
\label{fig:startdates}
\end{figure}
Examining Panel (f) of Figure \ref{fig:startdates}, we note that intervening on July 2 allows the pandemic to spread almost identically to the \textit{No Intervention} scenario -- that is, July 2 is too late for effective intervention because the peak in active infections has passed, most of the damage is already done, and the pandemic is decelerating by itself. However, by intervening on July 2 when the number of active infections is near its highest, many mortalities are avoided, and the human and economic damage is somewhat lower than in the \textit{No Intervention} scenario. Further, we note that intervening on April 9 does not appear to significantly alter the course of the pandemic or mitigate its effects -- intervening so early in the pandemic only serves to delay the main wave of infections.
The intervention starting on May 21 -- about one month before the peak of the \textit{No Intervention} scenario -- appears to be the most effective of our simulations, both considering the economic impacts and the final mortality rate. May 21 appears to be just before the inflection point of the \textit{No Intervention} scenario, and the number of infections is growing at its highest rate. Between the three simulated scenarios, this is by far the preferred option.
It seems that timing the policy intervention is of great importance to mitigate both the human and the economic impacts. Although we do not believe that the exact dates hold for the COVID-19 pandemic in particular, these simulations lead us to interesting insights: policy intervention appears to be most effective when the number of active infections is approaching its inflection point, and is growing at its highest rate. An intervention that is too early will only serve to delay the critical phase of the pandemic, and an intervention after the peak has occurred will obviously do nothing to lower the peak. Although it may be difficult to know beforehand when a pandemic will enter its critical phase, the timing of the policy intervention is of paramount importance, and our results suggest that authorities should implement emergency policies only when the disease is sufficiently spread. However, we expect the exact specification of \textit{sufficiently spread} to vary significantly from place to place, depending on local conditions.
This finding contradicts the claims of \cite{guan_global_2020}, who argue that interventions should be ``earlier, stricter and shorter'': instead, our results show that starting the intervention before the disease is sufficiently spread will either simply delay the critical phase of the pandemic (if the intervention indeed is kept ``short''), or prolong the intervention (if the intervention is extended). The study by \cite{eichenbaum_macroeconomics_2020} focuses on ``starting too late'' and ``ending too early'', yet our results suggest that policymakers also need to avoid starting \emph{too} early.
\subsection{How Much Intervention?}
In this series of simulations, we keep the starting date of the policy intervention fixed at March 12 and the duration of the policy intervention fixed at 26 weeks, whilst varying the intervention intensity. We simulate policies that reduce production by 5\%, 15\%, and 25\%, and the simulation results are shown in Figure \ref{fig:degrees}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Fig_Degrees.pdf}
\caption{Simulation results when varying the intensity of the policy intervention.}
\label{fig:degrees}
\end{figure}
The three simulations with different intervention intensities, shown in Figure \ref{fig:degrees}, suggest that varying the intensity of the intervention mainly alters the timing of the pandemic, but does little to mitigate the economic and human impacts: apart from a delay in the main phase of the pandemic, most variables behave similarly to the \textit{No Intervention} scenario.
This indifference between the intensities of the interventions is likely related to the relationship we identified between the GDP shortfall and the infection rate reduction, as shown in Figure \ref{fig:prodinfectscatter}. The returns diminish strongly: even an intervention of low intensity (5\%) already reduces the infection rate substantially (by around 60\%), and additional measures have progressively smaller effects on the infection rate.
Essentially, the intensity of the intervention -- above a certain minimum level -- appears to be less important than the timing of the intervention. Again, this finding contradicts the study of \cite{guan_global_2020}: our results indicate that intervention should perhaps be \emph{less} strict, as the intervention intensity faces strong diminishing returns.
\subsection{For How Long to Intervene?}
To analyse the impact of the duration of the policy intervention, we vary the duration of the policy intervention, whilst maintaining the intervention intensity fixed at 10\% and the starting date fixed at March 12. Figure \ref{fig:durations} shows the results of simulating intervention durations of 4 weeks, 28 weeks, 52 weeks and 76 weeks. It is clear from the figure that the duration of the policy intervention can have a large impact on the trajectory of the pandemic, and its human and economic aftermath.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Fig_Durations.pdf}
\caption{Simulation results when varying the duration of the policy intervention.}
\label{fig:durations}
\end{figure}
From Figure \ref{fig:durations}, it seems that policies with longer durations clearly lead to lower human and economic impacts, with a 76-week duration -- the longest of our simulations -- showing a dramatically lower number of total mortalities, as well as a much quicker post-pandemic recovery of production. The key appears to be that 76 weeks is sufficient to include the peak in active infections of the trajectory determined by the reduced infection rate. This observation suggests that policies with a lower intensity would require a shorter duration, whereas policies with a higher intensity would require a longer duration -- something which may, at first, appear counter-intuitive.
Our results partially support the findings of \cite{eichenbaum_macroeconomics_2020}, who warn of ending the intervention ``too early'', but contradict the claims of \cite{guan_global_2020} that intervention should be ``short'' -- although the duration of the intervention should naturally be as short as possible. Our results suggest that stricter policies will require longer durations.
\subsection{Considerations, Limitations, and Concerns}
We have tried to make sensible modelling choices in this study, but, like all models, our model is a simplification that focuses only on certain aspects and ignores others. The simulation results should not be understood \emph{literally}: the intention of our model has never been to provide numerically accurate predictions, but to generate insights into the impacts of policy interventions by analysing the dynamics of the system as a whole. Although the insights from the numerical simulations can contribute to improving policy interventions during pandemics, it is important to appreciate the limitations of our model and results.
We are concerned about the quality of the parameters used for the epidemiological part of the model: there is a great deal of doubt and uncertainty about the quality of the official datasets on the spread of the pandemic -- that is, the numbers of confirmed cases, recovered cases and mortalities. There is a general sense that the number of confirmed cases is not representative of the number of infections, as testing is severely lacking in many regions. The lack of testing also affects the number of mortalities attributed to the pandemic, and the number of recovered cases. Although we have used the available data without much discrimination, we share the concerns of many other researchers as to the quality of these data.
It is also unusual for a model of economic growth to operate at a daily resolution. We do not think this directly invalidates our resulting insights, although it means that the parameter values may appear unusual to researchers and practitioners, and that special care must be taken in the interpretation of the results. An alternative would be to develop the model in continuous time, which might be more familiar to some. However, in that case it would be necessary to discretise the model later for performing numerical experiments -- the model would, in the end, be the same, so presenting the model directly in discrete time appears to be the simpler alternative.
The parameter values were chosen for the model to represent the global economy and the global spread of COVID-19. There are, however, large differences between regions of the world. For instance, the five countries used for estimating the economic impact of policy measures -- France, Germany, Italy, Spain, and the United Kingdom, which were chosen for their data availability -- are probably not entirely representative of the rest of the world. There is also no global central government that implements global policy, and our insights are therefore not directly applicable by any specific authority. The purpose of the study, however, was not to generate recommendations for specific actions, but to generate insights into the impacts and trade-offs that policy interventions must consider. Regional, national and local policy can differ from ``global'' policy -- and probably should, as policy can be optimised to local conditions -- but the insights on intervention timing, intensity and duration may nevertheless be useful at these levels also. It would be possible to adapt the model for use at regional, national or local scales. In this case, we would recommend replacing the linear admission costs with a specification that allows for increasing marginal costs of admissions, which might better reflect increasing costs in the short run due to capacity saturation.
For calibrating the model, we estimated the economic impacts using an estimate of the shortfall in electricity consumption. Although we believe this approach is valid, it is difficult to assess the accuracy of the economic impact estimates. The approach based on electricity consumption could also be complemented by other near-real-time, high-resolution data sources that are believed to correspond well to economic activity, such as satellite observations, mobile phone movement and activity data, and urban traffic data. However, these data are not usually as widely available as electricity consumption data.
The model does not incorporate any demographic heterogeneity. Since some pandemics appear to affect people with certain demographic characteristics differently, this may bias the results. For instance, the mortality rate of COVID-19 appears to differ greatly between old and young people: if the disease has a greater impact on groups that were not originally included in the work force anyway, the model could exaggerate the economic impact of the pandemic by disregarding demographic heterogeneity. We do not believe this affects the main insights derived from our model simulations, since we do not think it substantially alters the dynamics of the system. However, it would certainly be an issue for the ``predictive accuracy'' of the simulations.
Since the model is deterministic, agents in the model have perfect foresight from the very start of the simulation. This is, naturally, not true in the real world, in which there are large uncertainties about future developments. This gives the model agents an unrealistic ability to plan for the future, and the economic portion of the model should therefore be considered an optimistic path. Another detail that also may positively bias the outcomes, is that the model does not include structural damages -- such as bankruptcies, institutional change, changing habits, and so forth -- and affords the model agents much more flexibility than economic agents may have in reality, where they may be facing additional restrictions.
Finally, we only simulated very simple policies, for the purpose of understanding the impact of altering the policy in a very specific way. For instance, superior policies can be constructed relatively easily by allowing the intensity of the intervention to vary during the pandemic. Our examples, in which starting date, intensity and duration are fixed, serve only for illustration and for understanding some of the dimensions of policy intervention.
We reiterate that our purpose has not been to provide numerically accurate predictions -- nor the means to generate accurate predictions -- of the evolution of the COVID-19 pandemic or the global economy. We have only explored particular aspects of effective policy responses to a pandemic, using a very high-level and theoretical approach, and it is with this in mind that our results are most appropriately appreciated. Our research does not aim to offer specific guidance for world authorities on the handling of the COVID-19 pandemic, but to analyse how a pandemic interacts with the global economy and thus help establish a set of general guidelines.
\section{Conclusion}
We have presented a mathematical model for the joint evolution of the economy and a pandemic, based on incorporating the dynamics of the SIR model that describes the spread of epidemics into a neoclassical economic growth model framework. This model is subsequently adapted to represent the global economy under the impact of the COVID-19 pandemic by selecting appropriate functional forms and parameter values. The model includes a parameter that represents policy, by which economic production can be lowered in exchange for a reduced infection rate.
Using the calibrated model, we simulate the joint evolution of the economy and the pandemic for a series of policy assumptions in order to discover the most effective timing of a policy intervention, the most effective intensity of the intervention, and how long the intervention should last.
Our experiments suggest that it is most effective to start the policy intervention slightly before the number of confirmed cases grows at its highest rate -- that is, to wait until the disease is sufficiently spread. Not only does this help lower the peak in active infections, it also reduces the economic impact and the number of mortalities. Starting too early can delay the pandemic, but does not otherwise significantly alter its course, whereas starting after the peak in active infections can obviously not impact the peak.
Furthermore, altering the intensity of the intervention does not appear to greatly influence the evolution of the pandemic nor the economy, other than causing minor delays. We ascribe the lack of effect to the concave relationship that we estimated between intervention intensity and infection rate reduction: a large reduction in the infection rate can be achieved by sacrificing a modest proportion of economic production, with strong decreasing returns thereafter. Our estimates suggest that a 60\% reduction in the infection rate can be achieved by sacrificing only 5\% of production, whereas a 70\% reduction in the infection rate could be achieved for a 10\% reduction in production.
Altering the duration of the intervention showed that interventions with a longer duration lead to significantly lower mortalities and a quicker post-pandemic recovery in economic production. The key observation is that the policy must include the peak of the \textit{new} path set out by the reduced infection rate: in short, policy intervention should last until the peak has passed. Therefore -- somewhat counter-intuitively -- stricter policies should last longer, and less strict policies should last shorter.
Although the scenarios we present are not necessarily numerically accurate as predictions -- mostly due to generalisations made for modelling purposes, large regional variations, and large uncertainties in the parameters -- our conclusions are based mainly on the dynamics revealed by the policy experiments, and not specifically on their numerical values. As such, we hope that our model can serve as a tool for enhancing our understanding of the design of effective policies against the spread of pandemics, and that our insights can contribute to this discussion and provide general guidelines for policymakers.
\bibliographystyle{elsarticle-harv}
\section{Introduction}
\label{intr}
One of many achievements of the Density Functional Theory \cite{pr_136_B864,pr_140_A1133} (DFT) is its ability to provide an accurate total energy of an interacting many-electron system as a function of external fields, such as the electrostatic field generated by the nuclei in a solid. Naturally, it is important to be able to find an arrangement of nuclei in a solid (crystal structure) which corresponds to the minimal total energy of the whole system (electrons plus nuclei). Another important field of interest is the study of the response of a solid when its nuclei are displaced slightly from their equilibrium positions. In both situations, the ability to evaluate accurate derivatives of the total energy with respect to the atomic positions (forces) represents an important tool. In the context of an equilibrium structure search (geometry optimization), the availability of forces helps greatly, providing the directions in which atoms should be moved in order to reach their equilibrium positions. In the context of small deviations from equilibrium (phonons), the availability of forces allows one to use the finite displacement method \cite{epl_32_729,prb_64_045123,prb_78_134106} to calculate phonon frequencies without employing the more technically involved linear response approach \cite{rmp_73_515}.
In a family of methods based on the Augmented Plane Waves \cite{pr_51_846} (APW) basis set, such as the Linearized Augmented Plane Waves (LAPW, \cite{prb_12_3060}), the formulation of how one can evaluate atomic forces was given by Yu et al. \cite{prb_43_6411}. The work by Yu et al. demonstrated that breaking the space into non-overlapping muffin-tin (MT) spheres and the so-called interstitial region (IR), which is an important attribute of the APW family of methods, leads to an additional contribution to the atomic forces, the Pulay term \cite{mp_17_197}. Since then, a few enhancements have been introduced, such as the inclusion of additional surface terms when one uses basis sets with discontinuities across the MT boundaries (for instance in the APW+lo basis set, \cite{prb_64_195134}). Also, Kl\"{u}ppelberg et al. \cite{prb_91_035105} presented a refinement of the approach by carefully taking into account the tails of the high-energy core states as well as small discontinuities in wave functions, density, and potential at the MT spheres. Independently, Soler and Williams formulated their variant of the LAPW method \cite{prb_40_1560,prb_42_9728} with perfectly continuous basis functions, as well as the algorithm for force evaluation within their approach. Their construction certainly has some advantages, but the complications related to the fact that inside the MT spheres one has to deal with momentum-independent functions as well as with momentum-dependent plane waves make it inconvenient, especially if one is interested in advanced methods going beyond DFT, such as the GW approximation.
One of the limitations of the existing formulations of atomic force evaluation is that they are based on the non-relativistic Kohn-Sham equations. However, in materials where elements from the far end of the periodic table are present, one has to use a fully relativistic approach based on the Dirac-Kohn-Sham (DKS) theory \cite{prb_7_1912,jpc_11_L943,jpc_12_2977,jpc_12_L845}. Therefore, the principal goal of this work is to remove the above-mentioned limitation. The derivation of the expression for the forces goes closely along the lines paved in the previous works by Yu et al. and by Kl\"{u}ppelberg et al., but with the DKS equations as the background theory. Whereas our derivation is directly relevant to the fully relativistic theory, we specifically point out in the text where the differences from the non-relativistic theory enter. Throughout the paper the atomic units are used, with Rydbergs as units of energy.
\section{General derivation of the atomic force expression}
\label{gen}
The force $\mathbf{F}_{t}$ exerted on the atom positioned at $\mathbf{t}$ is defined as the derivative of the free energy $F$ of a solid: $\mathbf{F}_{t}=-\frac{d F}{d\mathbf{t}}$. Thus, it is convenient to begin by writing down an expression for the free energy which corresponds to a specific level of theory. In the context of a joint description of the relativistic and magnetic effects within the Relativistic Density Functional Theory (RDFT), the corresponding expression was developed in the works by Rajagopal, Callaway, Vosko and Ramana \cite{prb_7_1912,jpc_11_L943,jpc_12_2977,jpc_12_L845}. The principal equations of this theory are briefly recapitulated here for convenience. In RDFT, the free energy of a solid with electronic density
$n(\mathbf{r})$ and magnetization density $\mathbf{m}(\mathbf{r})$ can be written as the following:
\begin{eqnarray}\label{etot}
F &= -T\sum_{\mathbf{k}\lambda}\ln (1+e^{-(\epsilon^{\mathbf{k}}_{\lambda}-\mu)/T})+\mu N
\nonumber \\&-\int_{\Omega_{0}} d \mathbf{r} [n(\mathbf{r})V_{eff}(\mathbf{r})+\mathbf{m}(\mathbf{r})\cdot
\mathbf{B}_{eff}(\mathbf{r})]\nonumber\\&+\int_{\Omega_{0}} d\mathbf{r} [n(\mathbf{r})V_{ext}(\mathbf{r})+\mathbf{m}(\mathbf{r})\cdot
\mathbf{B}_{ext}(\mathbf{r})] \nonumber\\&+ \int_{\Omega_{0}} d \mathbf{r}
\int_{\Omega} d \mathbf{r'} \frac{n(\mathbf{r})n(\mathbf{r'})}{|\mathbf{r}-\mathbf{r'}|}\nonumber\\&
+\int_{\Omega_{0}} d\mathbf{r} n(\mathbf{r})\epsilon_{xc}[n(\mathbf{r}),\mathbf{m}(\mathbf{r})]+E_{nn},
\end{eqnarray}
where $T$ stands for the temperature, the sum runs over the Brillouin zone points $\mathbf{k}$ and band indexes $\lambda$, $\epsilon^{\mathbf{k}}_{\lambda}$ is the band energy, $\mu$ is the chemical potential, and $N$ is the total number of electrons in the unit cell. In the integrals, $\Omega_{0}$ is the volume of the primitive unit cell and $\Omega$ is the volume of the whole solid.
Effective scalar potential $V_{eff}(\mathbf{r})$ is a sum of an external scalar field $V_{ext}(\mathbf{r})$ and induced fields (Hartree (electrostatic) $V_{H}(\mathbf{r})=2 \int d\mathbf{r'}
\frac{n(\mathbf{r'})}{|\mathbf{r} - \mathbf{r'}|}$ and exchange-correlation $V_{xc}(\mathbf{r})=\frac{\delta
E_{xc} [n(\mathbf{r}),\mathbf{m}(\mathbf{r})]}{\delta n(\mathbf{r})}$):
\begin{equation} \label{veff}
V_{eff}(\mathbf{r})=V_{ext}(\mathbf{r})+V_{H}(\mathbf{r}) + V_{xc}(\mathbf{r}),
\end{equation}
whereas the effective magnetic field $\mathbf{B}_{eff}(\mathbf{r})$ represents a sum of external $\mathbf{B}_{ext}(\mathbf{r})$ and induced $\mathbf{B}_{xc}(\mathbf{r})=\frac{\delta E_{xc}
[n(\mathbf{r}),\mathbf{m}(\mathbf{r})]}{\delta \mathbf{m}(\mathbf{r})}$ magnetic fields:
\begin{equation} \label{beff}
\mathbf{B}_{eff}(\mathbf{r})=\mathbf{B}_{ext}(\mathbf{r})+ \mathbf{B}_{xc}(\mathbf{r}).
\end{equation}
$E_{xc}$ in the above formulae stands for the exchange-correlation energy which is a functional of $n(\mathbf{r})$ and $\mathbf{m}(\mathbf{r})$ : $\int_{\Omega_{0}} d\mathbf{r} n(\mathbf{r})\epsilon_{xc}[n(\mathbf{r}),\mathbf{m}(\mathbf{r})]$. $E_{nn}$ in (\ref{etot}) is the nuclear-nuclear electrostatic interaction energy.
One-electron energies $\epsilon^{\mathbf{k}}_{\lambda}$ are the eigenvalues of the following equations (Dirac-Kohn-Sham equations):
\begin{equation} \label{ksh}
\left(\hat{K}+V_{eff}(\mathbf{r})+ \beta
\widetilde{\boldsymbol{\sigma}} \cdot \mathbf{B}_{eff}(\mathbf{r})\right) \Psi_{\lambda}^{\mathbf{k}}(\mathbf{r})=\epsilon^{\mathbf{k}}_{\lambda}
\Psi_{\lambda}^{\mathbf{k}}(\mathbf{r}),
\end{equation}
where $\Psi_{\lambda}^{\mathbf{k}}(\mathbf{r})$ stands for the Bloch periodic band function. The kinetic energy operator $\hat{K}$ has the Dirac form (electron rest energy has been subtracted):
\begin{equation} \label{hkin}
\hat{K}=c \boldsymbol{\alpha} \cdot \mathbf{p} +(\beta-I)
\frac{c^2}{2},
\end{equation}
and $\widetilde{\boldsymbol{\sigma}}$ are the $4 \times 4$ matrices, combined from the Pauli matrices $\boldsymbol{\sigma}$:
\begin{equation}
\label{Pauli}
\widetilde{\boldsymbol{\sigma}}= \left(
\begin{array}{cc}
\boldsymbol{\sigma} & 0 \\
0 & \boldsymbol{\sigma}
\end{array} \right).
\end{equation}
$c$ in equation (\ref{hkin}) is the speed of light ($c=274.074$ in our unit system), $\mathbf{p}$ is the momentum operator ($\equiv -i \nabla$), $\boldsymbol{\alpha}$ and $\beta$ are the Dirac matrices in the standard representation, and $I$ is the unit $4 \times 4$ matrix.
Finally, with the electron energies and the band state functions available, the electronic and magnetization densities are defined as follows:
\begin{equation} \label{dens}
n(\mathbf{r})=\sum_{\mathbf{k}\lambda}f^{\mathbf{k}}_{\lambda}
\Psi_{\lambda}^{^{\dag}\mathbf{k}}(\mathbf{r}) \Psi_{\lambda}^{\mathbf{k}}(\mathbf{r}),
\end{equation}
and
\begin{equation} \label{magn}
\mathbf{m}(\mathbf{r})=\sum_{\mathbf{k}\lambda}f^{\mathbf{k}}_{\lambda}
\Psi_{\lambda}^{^{\dag}\mathbf{k}}(\mathbf{r}) \beta
\widetilde{\boldsymbol{\sigma}} \Psi_{\lambda}^{\mathbf{k}}(\mathbf{r}),
\end{equation}
with $f^{\mathbf{k}}_{\lambda}$ being the Fermi-Dirac distribution function ($f^{\mathbf{k}}_{\lambda}=\frac{1}{1+e^{(\epsilon^{\mathbf{k}}_{\lambda}-\mu)/T}}$).
We now differentiate Eq. (\ref{etot}) term by term. For the first and second terms on the right-hand side one gets:
\begin{eqnarray} \label{dif12}
-\frac{d}{d\mathbf{t}}&\left(-T\sum_{\mathbf{k}\lambda}\ln (1+e^{-(\epsilon^{\mathbf{k}}_{\lambda}-\mu)/T})+\mu N\right)
\nonumber\\=&-\sum_{\mathbf{k}\lambda}\frac{1}{1+e^{(\epsilon^{\mathbf{k}}_{\lambda}-\mu)/T}}
\left(\frac{d \epsilon^{\mathbf{k}}_{\lambda}}{d\mathbf{t}}-\frac{d \mu}{d\mathbf{t}}\right)-N\frac{d \mu}{d\mathbf{t}}
=-\sum_{\mathbf{k}\lambda}f^{\mathbf{k}}_{\lambda}\frac{d \epsilon^{\mathbf{k}}_{\lambda}}{d\mathbf{t}}.
\end{eqnarray}
The terms from the third to the sixth on the right-hand side of (\ref{etot}) are represented by integrals over the unit cell. In APW-related methods, this technically means the sum of the integrals over the non-overlapping MT spheres and over the interstitial region (IR). As the authors of \cite{prb_91_035105} pointed out, the integrals should be differentiated with care; namely, the change of the integration domain when an atom (and its muffin-tin sphere) moves should be taken into account. The generic differentiation formula obtained in \cite{prb_91_035105} is the following:
\begin{eqnarray} \label{int_gen}
\frac{d}{d\mathbf{t}}\int_{\Omega_{0}} d\mathbf{r}f(\mathbf{r})=\int_{\Omega_{0}}d\mathbf{r}\frac{d f(\mathbf{r})}{d\mathbf{t}}+\int_{S_{t}}d\mathbf{S}
[f^{MT}(\mathbf{r})-f^{IR}(\mathbf{r})],
\end{eqnarray}
where the surface integral is taken over the MT sphere of atom $t$. $d\mathbf{S}=\mathbf{e}dS$, and $\mathbf{e}=\frac{\mathbf{r}-\mathbf{t}}{|\mathbf{r}-\mathbf{t}|}$ denotes the normal vector on the MT sphere of atom $t$ that points into the interstitial region. $f^{MT}(\mathbf{r})$ and $f^{IR}(\mathbf{r})$ distinguish between the MT and the IR representations of the function $f$.
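The surface term in (\ref{int_gen}) is the Leibniz (Reynolds transport) rule applied separately to the two integration domains: when atom $t$ is displaced, its MT sphere sweeps through the interstitial region, which loses the same volume, so that
\begin{eqnarray*}
\frac{d}{d\mathbf{t}}\int_{MT_{t}}d\mathbf{r}f^{MT}(\mathbf{r})&=&\int_{MT_{t}}d\mathbf{r}\frac{d f^{MT}(\mathbf{r})}{d\mathbf{t}}+\int_{S_{t}}d\mathbf{S}
f^{MT}(\mathbf{r}),\\
\frac{d}{d\mathbf{t}}\int_{IR}d\mathbf{r}f^{IR}(\mathbf{r})&=&\int_{IR}d\mathbf{r}\frac{d f^{IR}(\mathbf{r})}{d\mathbf{t}}-\int_{S_{t}}d\mathbf{S}
f^{IR}(\mathbf{r}),
\end{eqnarray*}
while the boundaries of the MT spheres of the atoms that are not displaced do not move; summing the two contributions recovers (\ref{int_gen}). Let us now apply the generic formula (\ref{int_gen}) to the integrals in (\ref{etot}):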
\begin{eqnarray} \label{dif3}
-\frac{d}{d\mathbf{t}}&\left(-\int_{\Omega_{0}} d \mathbf{r} [n(\mathbf{r})V_{eff}(\mathbf{r})+\mathbf{m}(\mathbf{r})\cdot
\mathbf{B}_{eff}(\mathbf{r})]\right)\nonumber\\&=\int_{\Omega_{0}} d \mathbf{r} [\frac{d n(\mathbf{r})}{d\mathbf{t}}V_{eff}(\mathbf{r})+n(\mathbf{r})\frac{d V_{eff}(\mathbf{r})}{d\mathbf{t}}\nonumber\\&+\frac{d \mathbf{m}(\mathbf{r})}{d\mathbf{t}}\cdot
\mathbf{B}_{eff}(\mathbf{r})+\mathbf{m}(\mathbf{r})\cdot
\frac{d\mathbf{B}_{xc}(\mathbf{r})}{d\mathbf{t}}]\nonumber\\&+\int_{S_{t}}d\mathbf{S}
[n^{MT}(\mathbf{r})V^{MT}_{eff}(\mathbf{r})-n^{IR}(\mathbf{r})V^{IR}_{eff}(\mathbf{r})]\nonumber\\&+\int_{S_{t}}d\mathbf{S}
[\mathbf{m}^{MT}(\mathbf{r})\cdot \mathbf{B}^{MT}_{eff}(\mathbf{r})-\mathbf{m}^{IR}(\mathbf{r})\cdot \mathbf{B}^{IR}_{eff}(\mathbf{r})],
\end{eqnarray}
where we have assumed that only the induced magnetic field depends on the position of atom $t$.
\begin{eqnarray} \label{dif4}
-\frac{d}{d\mathbf{t}}&\left(\int_{\Omega_{0}} d \mathbf{r} [n(\mathbf{r})V_{ext}(\mathbf{r})+\mathbf{m}(\mathbf{r})\cdot
\mathbf{B}_{ext}(\mathbf{r})]\right)\nonumber\\&=-\int_{\Omega_{0}} d \mathbf{r} [\frac{d n(\mathbf{r})}{d\mathbf{t}}V_{ext}(\mathbf{r})+n(\mathbf{r})\frac{d V_{ext}(\mathbf{r})}{d\mathbf{t}}+\frac{d \mathbf{m}(\mathbf{r})}{d\mathbf{t}}\cdot
\mathbf{B}_{ext}(\mathbf{r})]\nonumber\\&-\int_{S_{t}}d\mathbf{S}
[n^{MT}(\mathbf{r})V^{MT}_{ext}(\mathbf{r})-n^{IR}(\mathbf{r})V^{IR}_{ext}(\mathbf{r})]\nonumber\\&-\int_{S_{t}}d\mathbf{S}
[\mathbf{m}^{MT}(\mathbf{r})\cdot \mathbf{B}^{MT}_{ext}(\mathbf{r})-\mathbf{m}^{IR}(\mathbf{r})\cdot \mathbf{B}^{IR}_{ext}(\mathbf{r})],
\end{eqnarray}
\begin{eqnarray} \label{dif5}
-\frac{d}{d\mathbf{t}}&\left(\int_{\Omega_{0}} d \mathbf{r}
\int_{\Omega} d \mathbf{r'} \frac{n(\mathbf{r})n(\mathbf{r'})}{|\mathbf{r}-\mathbf{r'}|}\right)=-\int_{\Omega_{0}} d \mathbf{r} \frac{d n(\mathbf{r})}{d\mathbf{t}}V_{H}(\mathbf{r})\nonumber\\&-\int_{S_{t}}d\mathbf{S}
[n^{MT}(\mathbf{r})V^{MT}_{H}(\mathbf{r})-n^{IR}(\mathbf{r})V^{IR}_{H}(\mathbf{r})],
\end{eqnarray}
\begin{eqnarray} \label{dif6}
-\frac{d}{d\mathbf{t}}&\left(\int_{\Omega_{0}} d\mathbf{r} n(\mathbf{r})\epsilon_{xc}[n(\mathbf{r}),\mathbf{m}(\mathbf{r})]\right)\nonumber\\&=-\int_{\Omega_{0}} d \mathbf{r} [\frac{d n(\mathbf{r})}{d\mathbf{t}}V_{xc}(\mathbf{r})+\frac{d \mathbf{m}(\mathbf{r})}{d\mathbf{t}}\cdot
\mathbf{B}_{xc}(\mathbf{r})]\nonumber\\&-\int_{S_{t}}d\mathbf{S}
[n^{MT}(\mathbf{r})\epsilon^{MT}_{xc}(\mathbf{r})-n^{IR}(\mathbf{r})\epsilon^{IR}_{xc}(\mathbf{r})].
\end{eqnarray}
Collecting all derivatives together and assuming self-consistency (i.e. equations (\ref{veff}) and (\ref{beff}) are met) we obtain the following force:
\begin{eqnarray} \label{forc1}
\mathbf{F}_{t}=&-\sum_{\mathbf{k}\lambda}f^{\mathbf{k}}_{\lambda}\frac{d \epsilon^{\mathbf{k}}_{\lambda}}{d\mathbf{t}}+\int_{\Omega_{0}} d \mathbf{r} [n(\mathbf{r})\frac{d V_{eff}(\mathbf{r})}{d\mathbf{t}}+\mathbf{m}(\mathbf{r})\cdot
\frac{d\mathbf{B}_{xc}(\mathbf{r})}{d\mathbf{t}}]\nonumber\\&+\int_{S_{t}}d\mathbf{S}
[n^{MT}(\mathbf{r})V^{MT}_{xc}(\mathbf{r})-n^{IR}(\mathbf{r})V^{IR}_{xc}(\mathbf{r})]\nonumber\\&
+\int_{S_{t}}d\mathbf{S}
[\mathbf{m}^{MT}(\mathbf{r})\cdot \mathbf{B}^{MT}_{xc}(\mathbf{r})-\mathbf{m}^{IR}(\mathbf{r})\cdot \mathbf{B}^{IR}_{xc}(\mathbf{r})]\nonumber\\&-\int_{S_{t}}d\mathbf{S}
[n^{MT}(\mathbf{r})\epsilon^{MT}_{xc}(\mathbf{r})-n^{IR}(\mathbf{r})\epsilon^{IR}_{xc}(\mathbf{r})]\nonumber\\&+\mathbf{F}^{HF}_{t},
\end{eqnarray}
where the Hellmann-Feynman force has been introduced:
\begin{equation}\label{hell1}
\mathbf{F}^{HF}_{t}=-\int_{\Omega_{0}} d \mathbf{r} n(\mathbf{r})\frac{d V_{ext}(\mathbf{r})}{d\mathbf{t}}-\frac{d}{d\mathbf{t}} E_{nn}.
\end{equation}
The Hellmann-Feynman force is proportional to the gradient of the full electrostatic potential at the center of atom $t$ (excluding the field from its own nucleus) \cite{prb_43_6411}:
\begin{eqnarray}\label{hell2}
\mathbf{F}^{HF}_{t}&=2Z_{t}\frac{\partial}{\partial\mathbf{t}} \left(\int_{\Omega} d \mathbf{r} \frac{n(\mathbf{r})}{|\mathbf{t}-\mathbf{r}|}-\sum'_{\mathbf{R}\mathbf{t}'}\frac{Z_{t'}}{|\mathbf{t}-\mathbf{t}'-\mathbf{R}|}\right)\nonumber\\&=-Z_{t}\nabla V'_{el-stat}(\mathbf{r})\bigg|_{\mathbf{r}\rightarrow \mathbf{t}},
\end{eqnarray}
where $Z_{t}$ is the nuclear charge of atom $t$, the integration in the first expression on the right hand side is performed over the whole solid, and the sum is taken over all unit cells (indexed here by the translation vector $\mathbf{R}$) and over all atoms in the unit cell (atom $t$ in the central unit cell is excluded from the sum). This representation makes the evaluation of the Hellmann-Feynman term straightforward.
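As a side note, the Hellmann-Feynman theorem underlying (\ref{hell1}) is easy to verify numerically on a toy model. The following Python sketch is an illustration only (not part of the FlapwMBPT implementation); the random symmetric matrices stand in for $H(\lambda)$ and $dH/d\lambda$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2  # H(0)
B = rng.standard_normal((n, n)); B = (B + B.T) / 2  # dH/d(lambda)

def e0(lam):
    # lowest eigenvalue of H(lambda) = A + lambda*B
    return np.linalg.eigvalsh(A + lam * B)[0]

lam, h = 0.3, 1e-6
de_num = (e0(lam + h) - e0(lam - h)) / (2 * h)

w, V = np.linalg.eigh(A + lam * B)
psi = V[:, 0]                    # normalized ground state
de_hf = psi @ B @ psi            # <psi| dH/d(lambda) |psi>
print(de_num, de_hf)             # the two values agree
\end{verbatim}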
In order to bring the remaining terms of (\ref{forc1}) to a form convenient for evaluation, one has to consider the derivative of the one-electron energies. This is done in the next section. Let us also point out that the derivation performed up to this point is quite generic with respect to the degree of inclusion of relativistic effects. The only formal difference is that we use vectors of the magnetization and the magnetic field, as is usually done in spin-polarized RDFT, instead of the spin-up and spin-down quantities used in non-relativistic spin-polarized DFT.
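Before moving on, the origin of the surface terms in (\ref{dif3})-(\ref{dif6}) can be seen in a one-dimensional toy model: an integral whose integrand switches between an ``MT'' and an ``IR'' representation at boundaries that move with $t$. The Python sketch below (the particular functions are arbitrary stand-ins) checks the derivative of such an integral against the volume term plus the boundary jumps:
\begin{verbatim}
import numpy as np

R, L = 0.7, 4.0                  # "MT radius" and half-width of the cell

def trap(y, x):                  # simple trapezoidal quadrature
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def f_mt(x, t):                  # representation inside |x - t| < R
    return np.cos(3.0 * (x - t)) + 1.5

def f_ir(x):                     # interstitial representation
    return np.exp(-0.3 * x**2)

def F(t, n=40001):               # integral over the whole "cell"
    xi = np.linspace(t - R, t + R, n)
    xl = np.linspace(-L, t - R, n)
    xr = np.linspace(t + R, L, n)
    return trap(f_mt(xi, t), xi) + trap(f_ir(xl), xl) + trap(f_ir(xr), xr)

t0, h = 0.3, 1e-4
dF_num = (F(t0 + h) - F(t0 - h)) / (2 * h)

xi = np.linspace(t0 - R, t0 + R, 40001)
vol = trap(3.0 * np.sin(3.0 * (xi - t0)), xi)   # d f_mt / dt inside
surf = (f_mt(t0 + R, t0) - f_ir(t0 + R)) \
     - (f_mt(t0 - R, t0) - f_ir(t0 - R))        # jumps at moving boundaries
print(dF_num, vol + surf)                       # the two values coincide
\end{verbatim}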
\section{Specifics of differentiation of the Dirac-Kohn-Sham eigenvalues}
\label{dkse}
Differentiation of the Kohn-Sham (Dirac-Kohn-Sham) eigenvalues with respect to atomic positions is rather involved. In order to keep the derivation as clear as possible, we proceed in a step by step fashion. Essentially, the derivation is very similar to the ones performed by Yu et al. \cite{prb_43_6411} and by Kl\"{u}ppelberg et al. \cite{prb_91_035105}. We repeat all the steps here to make it clear where the fully relativistic formalism enters and where the formulae are independent of the formalism (relativistic or non-relativistic). We consider the derivatives of the valence and core states separately, beginning with the valence states.
As a first step, we show explicitly that only the derivatives of the basis functions enter the expression for the forces, but not the derivatives of the coefficients. This can be done generically, without specifying the basis set or the level of relativistic effects. In methods which use non-orthogonal basis sets, the eigenvalues can be found as the ratio of the expectation values of the hamiltonian and overlap matrices:
\begin{equation}\label{eig}
\epsilon=\frac{\sum_{ij}A^{*}_{i}H_{ij}A_{j}}{\sum_{ij}A^{*}_{i}O_{ij}A_{j}},
\end{equation}
where the sums run over the basis set indexes and $A_{i}$ are the expansion coefficients. Using a generic differentiation, which we denote by a prime, and the normalization $\sum_{ij}A^{*}_{i}O_{ij}A_{j}=1$, we obtain:
\begin{eqnarray}\label{deig}
\epsilon'&=\sum_{ij}\left(A'^{*}_{i}H_{ij}A_{j}+A^{*}_{i}H'_{ij}A_{j}+A^{*}_{i}H_{ij}A'_{j}\right)\nonumber\\&-\epsilon \sum_{ij}\left(A'^{*}_{i}O_{ij}A_{j}+A^{*}_{i}O'_{ij}A_{j}+A^{*}_{i}O_{ij}A'_{j}\right)\nonumber\\&=\sum_{ij}\left(A'^{*}_{i}[H_{ij}-\epsilon O_{ij}]A_{j}+A^{*}_{i}[H'_{ij}-\epsilon O'_{ij}]A_{j}+A^{*}_{i}[H_{ij}-\epsilon O_{ij}]A'_{j}\right)\nonumber\\&=\sum_{ij}A^{*}_{i}[H'_{ij}-\epsilon O'_{ij}]A_{j},
\end{eqnarray}
where we have used the fact that the matrix equations are solved numerically exactly (for instance, $\sum_{j}[H_{ij}-\epsilon O_{ij}]A_{j}$ is zero to machine precision). From (\ref{deig}) it is obvious that we have to differentiate only the matrix elements, but not the coefficients.
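The identity (\ref{deig}) is straightforward to verify numerically. The sketch below (Python with scipy; the matrices are random stand-ins for $H$, $O$ and their derivatives, with the overlap kept positive definite) compares a finite difference of the lowest generalized eigenvalue with $\sum_{ij}A^{*}_{i}[H'_{ij}-\epsilon O'_{ij}]A_{j}$:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 8
sym = lambda M: (M + M.T) / 2
H0, H1 = sym(rng.standard_normal((n, n))), sym(rng.standard_normal((n, n)))
S = rng.standard_normal((n, n))
O0 = S @ S.T + n * np.eye(n)        # positive definite overlap
O1 = 0.1 * sym(rng.standard_normal((n, n)))

def lowest(lam):
    # eigh normalizes the eigenvectors so that a^T O a = 1
    w, V = eigh(H0 + lam * H1, O0 + lam * O1)
    return w[0], V[:, 0]

lam, h = 0.2, 1e-6
de_num = (lowest(lam + h)[0] - lowest(lam - h)[0]) / (2 * h)

e, a = lowest(lam)
de_formula = a @ (H1 - e * O1) @ a  # only matrix-element derivatives enter
print(de_num, de_formula)
\end{verbatim}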
Before proceeding further, let us briefly specify the basis functions (or their combinations) which we are using. As has become common practice in APW-based calculations \cite{cpc_184_2670,cpc_220_230,arx_2012_04992}, we use a generic combination of an augmentation function (APW or LAPW) and local orbitals of different kinds. As local orbitals, we use the so-called 'lo' orbitals, which have a discontinuity in their small component (in their derivative in the non-relativistic formulation) at the MT sphere boundaries. They are used in combination with APW augmentation \cite{prb_64_195134} to improve the variational flexibility of the basis set. The next type of local orbital is the so-called High Derivative Local Orbital (HDLO) \cite{prb_74_045104,cpc_184_2670,cpc_220_230}, which can be used in combination with LAPW or APW+lo to further enhance the accuracy of the basis set in the range of energies corresponding to the valence bands. Finally, the so-called High Energy Local Orbitals (HELO's, \cite{cpc_184_2670,cpc_220_230}) can be included in a basis set to describe semicore states or high energy states in the conduction band range of energies.
Hamiltonian and overlap matrix elements are represented by volume integrals over all MT spheres in the unit cell and over the interstitial region. For basis functions with discontinuities at the MT surfaces (for instance, if the APW+lo combination is used), the matrix elements of the hamiltonian include surface correction terms, as specified in Ref. \cite{prb_64_195134} for the non-relativistic case and in \cite{arx_2012_04992} for the fully relativistic case. The recipe (\ref{int_gen}) is applied for the differentiation when the integration domain changes. Still using generic indexes for the basis set, but specifying the band index and the \textbf{k}-point (i.e. using $f^{\mathbf{k}}_{i}$ as generic basis functions and $\epsilon^{\mathbf{k}}_{\lambda}$ as eigenvalues), as well as the specific form (\ref{ksh}) of the Dirac-Kohn-Sham hamiltonian $H_{DKS}$, we obtain:
\begin{eqnarray}\label{dks}
\hspace*{-2cm}
\frac{d H^{\mathbf{k}}_{ij}}{d\mathbf{t}}-\epsilon^{\mathbf{k}}_{\lambda}\frac{d O^{\mathbf{k}}_{ij}}{d\mathbf{t}}&=
P^{\mathbf{k}\lambda t}_{ij}\nonumber\\&+\int_{\Omega_{0}} d\mathbf{r}
f^{^{\dagger}\mathbf{k}}_{i}(\mathbf{r})[\frac{d V_{eff}(\mathbf{r})}{d\mathbf{t}}+\beta
\widetilde{\boldsymbol{\sigma}} \cdot \frac{d\mathbf{B}_{eff}(\mathbf{r})}{d\mathbf{t}}]f^{\mathbf{k}}_{j}(\mathbf{r})\nonumber\\&\hspace*{-3cm}+\int_{S_{t}}d\mathbf{S}
\left(f^{^{\dagger}\mathbf{k}(MT)}_{i}(\mathbf{r})[H_{DKS}-\epsilon^{\mathbf{k}}_{\lambda}]f^{\mathbf{k}(MT)}_{j}(\mathbf{r})-f^{^{\dagger}\mathbf{k}(IR)}_{i}(\mathbf{r})[H_{DKS}-\epsilon^{\mathbf{k}}_{\lambda}]f^{\mathbf{k}(IR)}_{j}(\mathbf{r})\right),
\end{eqnarray}
where the terms which will later contribute to the Pulay force have been collected into the quantity $P^{\mathbf{k}\lambda t}_{ij}$:
\begin{eqnarray}\label{pulay1}
P^{\mathbf{k}\lambda t}_{ij}&=\int_{\Omega_{0}} d\mathbf{r}
\frac{d f^{^{\dagger}\mathbf{k}}_{i}(\mathbf{r})}{d\mathbf{t}}[H_{DKS}-\epsilon^{\mathbf{k}}_{\lambda}]f^{\mathbf{k}}_{j}(\mathbf{r})\nonumber\\&+\int_{\Omega_{0}} d\mathbf{r}
f^{^{\dagger}\mathbf{k}}_{i}(\mathbf{r})[H_{DKS}-\epsilon^{\mathbf{k}}_{\lambda}]\frac{d f^{\mathbf{k}}_{j}(\mathbf{r})}{d\mathbf{t}}+\frac{d}{d\mathbf{t}}S^{\mathbf{k}(DISC)}_{ij}.
\end{eqnarray}
The derivatives of the terms which appear in the hamiltonian when some of the basis functions have discontinuities were denoted as $\frac{d}{d\mathbf{t}}S^{\mathbf{k}(DISC)}_{ij}$. We do not specify them here because they will be combined with the other terms that depend explicitly on the atomic positions, in the same way as they were combined in the derivation of the matrix elements of the hamiltonian \cite{arx_2012_04992}.
At this point of the derivation we have to take into account the differences between basis functions of augmentation type (APW and LAPW) and local basis functions (lo, HDLO, and HELO). Also, taking the derivatives requires the explicit form of the quantities themselves. So, in order to avoid repeating the rather lengthy derivation of the basis functions and matrix elements given in \cite{arx_2012_04992}, we ask the reader to keep that paper at hand for quick reference (we will refer to its equations as (I-???), with '???' the equation number). Keeping this in mind, let us proceed with the formal differentiation.
For the augmentation functions defined in (I-12,39), the derivative is nonzero only in the MT sphere of atom $t$:
\begin{equation} \label{dbasmt}
\frac{d}{d\mathbf{t}}\Pi^{\mathbf{k}}_{\mathbf{G}s}(\mathbf{r})= i(\mathbf{k}+\mathbf{G})\Pi^{\mathbf{k}}_{\mathbf{G}s}(\mathbf{r})-\nabla \Pi^{\mathbf{k}}_{\mathbf{G}s}(\mathbf{r}),
\end{equation}
where the first term comes from the augmentation constraints and the second from the dependence of the radial functions on the atomic position. The derivative of the local functions (I-46) also has two terms, stemming from a formal Bloch factor and from the same position dependence:
\begin{equation} \label{dbasloc}
\frac{d}{d\mathbf{t}}\Lambda^{\mathbf{k}}_{tnil\mu}(\mathbf{r})= i\mathbf{k}\Lambda^{\mathbf{k}}_{tnil\mu}(\mathbf{r})-\nabla \Lambda^{\mathbf{k}}_{tnil\mu}(\mathbf{r}).
\end{equation}
Let us first consider the contribution of the gradient terms (which is generic) from (\ref{dbasmt}) and (\ref{dbasloc}) into the quantity $P^{\mathbf{k}\lambda t}_{ij}$ in (\ref{pulay1}):
\begin{eqnarray}\label{pulay2}
\hspace*{-2cm}
-\int_{\Omega_{t}} d\mathbf{r}
\nabla &f^{^{\dagger}\mathbf{k}}_{i}(\mathbf{r})[H_{DKS}-\epsilon^{\mathbf{k}}_{\lambda}]f^{\mathbf{k}}_{j}(\mathbf{r})-\int_{\Omega_{t}} d\mathbf{r}
f^{^{\dagger}\mathbf{k}}_{i}(\mathbf{r})[H_{DKS}-\epsilon^{\mathbf{k}}_{\lambda}]\nabla f^{\mathbf{k}}_{j}(\mathbf{r})\nonumber\\&=-\int_{\Omega_{t}} d\mathbf{r}
\nabla \left(f^{^{\dagger}\mathbf{k}}_{i}(\mathbf{r})[\hat{K}-\epsilon^{\mathbf{k}}_{\lambda}]f^{\mathbf{k}}_{j}(\mathbf{r})\right)-\int_{\Omega_{t}} d\mathbf{r}
V_{eff}(\mathbf{r})\nabla [f^{^{\dagger}\mathbf{k}}_{i}(\mathbf{r})f^{\mathbf{k}}_{j}(\mathbf{r})]\nonumber\\&-\int_{\Omega_{t}} d\mathbf{r}
\nabla [f^{^{\dagger}\mathbf{k}}_{i}(\mathbf{r})\beta
\widetilde{\boldsymbol{\sigma}} f^{\mathbf{k}}_{j}(\mathbf{r})]\cdot\mathbf{B}_{eff}(\mathbf{r})\nonumber\\&=-\int_{S_{t}} \mathbf{e} dS
f^{^{\dagger}\mathbf{k}(MT)}_{i}(\mathbf{r})[\hat{K}-\epsilon^{\mathbf{k}}_{\lambda}]f^{\mathbf{k}(MT)}_{j}(\mathbf{r})-\int_{\Omega_{t}} d\mathbf{r}
V_{eff}(\mathbf{r})\nabla [f^{^{\dagger}\mathbf{k}}_{i}(\mathbf{r})f^{\mathbf{k}}_{j}(\mathbf{r})]\nonumber\\&-\int_{\Omega_{t}} d\mathbf{r}
\nabla [f^{^{\dagger}\mathbf{k}}_{i}(\mathbf{r})\beta
\widetilde{\boldsymbol{\sigma}} f^{\mathbf{k}}_{j}(\mathbf{r})]\cdot\mathbf{B}_{eff}(\mathbf{r}).
\end{eqnarray}
Let us now consider the contribution from the augmentation parts of the derivatives in (\ref{dbasmt}) and (\ref{dbasloc}). In this case, however, it is easier to take the derivative of the final matrix element: one then automatically includes the derivatives of the discontinuities, because the corresponding contributions to the matrix elements have exactly the same structure of explicit dependence on the atomic positions as the volume integral contributions \cite{arx_2012_04992}. Distinguishing the cases of the matrix elements between two augmentation functions (AA) specified in (I-60,61,68), between a local and an augmentation function (BA) specified in (I-63,64,69), and between two local functions (BB, I-66,67,70), one obtains the corresponding contribution to the quantity (\ref{pulay1}):
\begin{eqnarray}\label{dhmtaa}
\hspace*{-2cm}
i(\mathbf{G}'-\mathbf{G})&\Big[F^{t}_{\mathbf{G'-G}}\sum_{il}D^{\mathbf{k}il}_{\mathbf{G}s;\mathbf{G}'s'}[\overline{h}^{til}_{\mathbf{GG}'}-\epsilon^{\mathbf{k}}_{\lambda}\overline{o}^{til}_{\mathbf{GG}'}]\nonumber\\&+\sum_{il\mu
;i'l'\mu'}\sum_{(ww')=1}^{N^{t}_{l}}y^{^{*}(w)\mathbf{k}}_{til\mu;\mathbf{G}s}
y^{(w')\mathbf{k}}_{ti'l'\mu';\mathbf{G'}s'} \int_{\Omega_{t}}
R^{^{\dagger}(w)t}_{il\mu}(\mathbf{r}) \hat{H}_{NMT}
R^{(w')t}_{i'l'\mu'}(\mathbf{r}) d \mathbf{r}\Big]
\end{eqnarray}
for the AA type, and
\begin{eqnarray}\label{dhmtba}
i\mathbf{G}'\Big[&F^{t}_{\mathbf{G'-G}}\sum_{il}D^{\mathbf{k}il}_{\mathbf{G}s;\mathbf{G}'s'}[\overline{h}^{til}_{\mathbf{GG}'}-\epsilon^{\mathbf{k}}_{\lambda}\overline{o}^{til}_{\mathbf{GG}'}]\nonumber\\&+\sum_{i'l'\mu'}\sum_{w'=1}^{N^{t}_{l}}
y^{(w')\mathbf{k}}_{ti'l'\mu';\mathbf{G'}s'} \int_{\Omega_{t}}
R^{^{\dag}(LOC)t}_{nil\mu}(\mathbf{r}) \hat{H}_{NMT}
R^{(w')t}_{i'l'\mu'}(\mathbf{r}) d \mathbf{r}\Big]
\end{eqnarray}
for the BA type. The derivatives of the matrix elements of BB type are equal to zero. The above expressions (\ref{dhmtaa}) and (\ref{dhmtba}) comprise a matrix with indexes running over the whole basis set. Anticipating a convolution of this matrix with the variational coefficients (see Eq. (\ref{deig})), it is convenient to denote this convolution as $\mathbf{C}^{\mathbf{k}t}_{\lambda}$ for future use. The equations (\ref{dhmtaa}) and (\ref{dhmtba}) are where most of the differences between the fully relativistic and the non-relativistic formulations are concentrated. While it is not the goal of this work to give a comprehensive account of all levels of relativistic effects, it is helpful to know where the differences are located. In particular, if one needs to recover the non-relativistic equations, the quantities $D^{\mathbf{k}il}_{\mathbf{G}s;\mathbf{G}'s'}$, $\overline{h}^{til}_{\mathbf{GG}'}$, and $\overline{o}^{til}_{\mathbf{GG}'}$, which are defined in (I-59,60,61) for the fully relativistic case, have to be replaced with their non-relativistic analogues.
Now it is time to perform the Brillouin zone and band index sums in the basis set convolution of the expression (\ref{dks}) and, correspondingly, to evaluate the first term on the right hand side of (\ref{forc1}):
\begin{eqnarray}\label{dksS}
\hspace*{-2cm}
-\sum_{\mathbf{k}\lambda}f^{\mathbf{k}}_{\lambda}\frac{d \epsilon^{\mathbf{k}}_{\lambda}}{d\mathbf{t}}&=-\sum_{\mathbf{k}\lambda}f^{\mathbf{k}}_{\lambda}\sum_{ij}A^{^{*}\mathbf{k}}_{i\lambda}\Big[\frac{d H^{\mathbf{k}}_{ij}}{d\mathbf{t}}-\epsilon^{\mathbf{k}}_{\lambda}\frac{d O^{\mathbf{k}}_{ij}}{d\mathbf{t}}\Big]A^{\mathbf{k}}_{j\lambda}\nonumber\\&=
\mathbf{F}^{Pulay}_{t,val}-\int_{\Omega_{0}} d\mathbf{r}
[n_{val}(\mathbf{r})\frac{d V_{eff}(\mathbf{r})}{d\mathbf{t}}+\mathbf{m}_{val}(\mathbf{r})\cdot \frac{d\mathbf{B}_{eff}(\mathbf{r})}{d\mathbf{t}}]\nonumber\\&\hspace*{-3cm}-\sum_{\mathbf{k}\lambda}f^{\mathbf{k}}_{\lambda}\int_{S_{t}}d\mathbf{S}
\Big[\Psi^{^{\dagger}\mathbf{k}(MT)}_{\lambda}(\mathbf{r})[H_{DKS}-\epsilon^{\mathbf{k}}_{\lambda}]\Psi^{\mathbf{k}(MT)}_{\lambda}(\mathbf{r})-\Psi^{^{\dagger}\mathbf{k}(IR)}_{\lambda}(\mathbf{r})[H_{DKS}-\epsilon^{\mathbf{k}}_{\lambda}]\Psi^{\mathbf{k}(IR)}_{\lambda}(\mathbf{r})\Big],
\end{eqnarray}
with the valence Pulay force
\begin{eqnarray}\label{pulay3}
\mathbf{F}^{Pulay}_{t,val}&=-\sum_{\mathbf{k}\lambda}f^{\mathbf{k}}_{\lambda}\mathbf{C}^{\mathbf{k}t}_{\lambda}
+\sum_{\mathbf{k}\lambda}f^{\mathbf{k}}_{\lambda}\int_{S_{t}} d\mathbf{S}
\Psi^{^{\dagger}\mathbf{k}(MT)}_{\lambda}(\mathbf{r})[\hat{K}-\epsilon^{\mathbf{k}}_{\lambda}]\Psi^{\mathbf{k}(MT)}_{\lambda}(\mathbf{r})\nonumber\\&+\int_{\Omega_{t}} d\mathbf{r}
V_{eff}(\mathbf{r})\nabla n_{val}(\mathbf{r})+\int_{\Omega_{t}} d\mathbf{r}
\nabla [\mathbf{m}_{val}(\mathbf{r})]\cdot\mathbf{B}_{eff}(\mathbf{r}).
\end{eqnarray}
For the core states we can formally repeat all the steps performed above for the valence states, with a number of simplifications. The simplifications are related to the following two facts: i) each core state is an exact solution of the Dirac-Kohn-Sham equation for a spherically symmetric potential, as opposed to an expansion in a basis set for the valence levels; ii) core states are strictly confined inside the corresponding MT sphere, with zero values and derivatives at the boundary. As a result, all surface terms related to the augmentation or the discontinuities disappear. Equations (\ref{dksS}) and (\ref{pulay3}) for the core states therefore simplify to the following:
\begin{eqnarray}\label{dksSc}
\hspace*{-2cm}
-\sum_{c}\frac{d \epsilon_{c}}{d\mathbf{t}}&=
\mathbf{F}^{Pulay}_{t,cor}-\int_{\Omega_{0}} d\mathbf{r}
[n_{cor}(\mathbf{r})\frac{d V_{eff}(\mathbf{r})}{d\mathbf{t}}+\mathbf{m}_{cor}(\mathbf{r})\cdot \frac{d\mathbf{B}_{eff}(\mathbf{r})}{d\mathbf{t}}],
\end{eqnarray}
with $c$ running over the core states of atom $t$ and with the core Pulay force
\begin{eqnarray}\label{pulay3c}
\mathbf{F}^{Pulay}_{t,cor}=\int_{\Omega_{t}} d\mathbf{r}
V_{eff}(\mathbf{r})\nabla n_{cor}(\mathbf{r})+\int_{\Omega_{t}} d\mathbf{r}
\nabla [\mathbf{m}_{cor}(\mathbf{r})]\cdot\mathbf{B}_{eff}(\mathbf{r}).
\end{eqnarray}
Finally, we can include the contributions from the eigenvalue derivatives (\ref{dksS}) and (\ref{dksSc}) into the general force equation (\ref{forc1}) to finish the derivation:
\begin{eqnarray} \label{forc2}
\mathbf{F}_{t}=\mathbf{F}^{HF}_{t}+\mathbf{F}^{Pulay}_{t,cor}+\mathbf{F}^{Pulay}_{t,val}+\mathbf{F}^{Surf}_{t,kin}+\mathbf{F}^{Surf}_{t,other},
\end{eqnarray}
where we have made the following definitions:
\begin{eqnarray} \label{forc3}
\mathbf{F}^{Surf}_{t,kin}=-\sum_{\mathbf{k}\lambda}f^{\mathbf{k}}_{\lambda}\int_{S_{t}}d\mathbf{S}&
\Big[\Psi^{^{\dagger}\mathbf{k}(MT)}_{\lambda}(\mathbf{r})[\hat{K}-\epsilon^{\mathbf{k}}_{\lambda}]\Psi^{\mathbf{k}(MT)}_{\lambda}(\mathbf{r})\nonumber\\&-\Psi^{^{\dagger}\mathbf{k}(IR)}_{\lambda}(\mathbf{r})[\hat{K}-\epsilon^{\mathbf{k}}_{\lambda}]\Psi^{\mathbf{k}(IR)}_{\lambda}(\mathbf{r})\Big],
\end{eqnarray}
\begin{eqnarray} \label{forc4}
\mathbf{F}^{Surf}_{t,other}=&-\int_{S_{t}}d\mathbf{S}
[n^{MT}(\mathbf{r})V^{MT}_{eff}(\mathbf{r})-n^{IR}(\mathbf{r})V^{IR}_{eff}(\mathbf{r})]\nonumber\\&-\int_{S_{t}}d\mathbf{S}
[\mathbf{m}^{MT}(\mathbf{r})\cdot \mathbf{B}^{MT}_{eff}(\mathbf{r})-\mathbf{m}^{IR}(\mathbf{r})\cdot \mathbf{B}^{IR}_{eff}(\mathbf{r})]\nonumber\\&+\int_{S_{t}}d\mathbf{S}
[n^{MT}(\mathbf{r})V^{MT}_{xc}(\mathbf{r})-n^{IR}(\mathbf{r})V^{IR}_{xc}(\mathbf{r})]\nonumber\\&
+\int_{S_{t}}d\mathbf{S}
[\mathbf{m}^{MT}(\mathbf{r})\cdot \mathbf{B}^{MT}_{xc}(\mathbf{r})-\mathbf{m}^{IR}(\mathbf{r})\cdot \mathbf{B}^{IR}_{xc}(\mathbf{r})]\nonumber\\&-\int_{S_{t}}d\mathbf{S}
[n^{MT}(\mathbf{r})\epsilon^{MT}_{xc}(\mathbf{r})-n^{IR}(\mathbf{r})\epsilon^{IR}_{xc}(\mathbf{r})].
\end{eqnarray}
\section{Performance tests}
\label{conv}
\begin{table}[t]
\caption{Structural parameters of the solids considered in this work. The parameters correspond to the equilibrium geometries with zero forces. The change in atomic positions when we evaluate the forces is specified for each case later.} \label{list_s}
\begin{center}
\begin{tabular}{@{}c c c c c c c} &Space&&&&Wyckoff&$R_{MT}$\\
Solid &group&a(\AA)&b(\AA)&c(\AA)&positions&(a$_{B}$)\\
\hline\hline
$\alpha$-U&63 &2.854 &5.869 &4.955&0;0.1025;0.25 &2.602333\\
PuCoGa$_{5}$&123 &4.2354 & &6.7939&Pu: 0;0;0&Pu, Ga(1): 2.829752\\
& & & &&Co: 0;0;1/2 &Co, Ga(4): 2.34805\\
& & & &&Ga(4): 0;1/2;0.3086 &\\
& & & &&Ga(1): 1/2;1/2;0 &\\
FePt&123 &2.7248 & &3.78&Fe: 0;0;0&Fe, Pt: 2.55\\
& & & &&Pt: 1/2;1/2;1/2 &\\
\end{tabular}
\end{center}
\end{table}
\begin{table}[b]
\caption{Principal set up parameters of the studied solids.} \label{setup_s}
\begin{center}
\begin{tabular}{@{}c c c c c c} &Core&&$L_{max}$&$L_{max}$&\\
Solid &states&Semicore&$\Psi/\rho,V$&APW+lo+HDLO & $RK_{max}$ \\
\hline\hline
$\alpha$-U&[Kr]4d,4f&5s,6s,5p,6p,5d&12/8&3&12.0 \\
PuCoGa$_{5}$&Pu: [Kr]4d,4f,5s&Pu: 6s,5p,6p,5d&Pu: 12/10&Pu: 3&9.0 \\
& Co: [Ne]&Co: 3s,3p&Co: 10/10&Co: 2& \\
& Ga: [Ne]&Ga: 3s,3p,3d&Ga: 10/10&Ga: 2& \\
FePt&Fe: [Ne]&Fe: 3s,3p&10/10&Fe: 2&12.0 \\
& Pt: [Kr]&Pt: 5s,5p,4d,4f&&Pt: 3& \\
\end{tabular}
\end{center}
\end{table}
This section presents the results of the calculations. In order to make the presentation more compact, the principal structural parameters of the studied solids have been collected in Table \ref{list_s} and the most important setup parameters in Table \ref{setup_s}. The APW type of plane wave augmentation was used for the ``physically relevant'' orbital momenta, which roughly correspond to the shells that have electrons in a free atom. This type of augmentation was accompanied by the addition of two local orbitals (lo and HDLO) in order to enhance the variational freedom. For higher orbital momenta, the LAPW type of augmentation was applied. This separation of the augmentation strategy into APW+lo and LAPW was suggested in Ref. \cite{prb_64_195134}. The additional use of HDLO's was advocated in \cite{cpc_184_2670,cpc_220_230} and, in the context of fully relativistic calculations, in \cite{arx_2012_04992}. High Energy Local Orbitals (HELO's) were used for the ``physically relevant'' orbital momenta, but their effect on the calculated values of the forces was rather small. The radii of the muffin-tin spheres were selected to be the largest allowed without overlapping; in the cases of competing sizes the ratio was 1:1. All results presented below correspond to the fully relativistic approach (FRA). A few tests performed with the simplified relativistic approach (SRA, \cite{arx_2012_04992}) have shown very little difference from the FRA. All calculations have been performed at the electronic temperature $T=300\,$K. The exchange-correlation functional corresponded to the local density approximation (LDA) as parametrized in \cite{prb_45_13244}.
\begin{table}[t]
\caption{Calculated $\alpha$-U free energy (Ry), forces (mRy/a$_{B}$), and forces evaluated by numerical differentiation of the free energy, for the different Brillouin zone samplings. Forces were evaluated for the structure with atomic positions $\mathbf{t}=\pm 0.13\mathbf{B}+1/4\mathbf{C}$ which is slightly perturbed from the equilibrium one with atomic positions $\mathbf{t}=\pm 0.1025\mathbf{B}+1/4\mathbf{C}$. The free energy corresponding to the perturbed structure is denoted as F(0) in the table. In order to evaluate the forces numerically, two additional small distortions relative to the already perturbed structure were considered. Atomic positions for these distorted structures were $\mathbf{t}=\pm (0.13\pm\Delta)\mathbf{B}+1/4\mathbf{C}$ with $\Delta=0.0005$. Free energies are given relative to the constant -112276 Ry.} \label{U_conv}
\begin{center}
\begin{tabular}{@{}c c c c c c c} Number of & & & & & Numerical& Mismatch \\
\textbf{k}-points & F($-\Delta$)& F(0)& F($+\Delta$) & Force & force & (\%) \\
\hline\hline
144 & -0.4468887 &-0.4463283 &-0.4457661 & -50.6287 & -50.6095 & -0.04 \\
384 & -0.4466470 &-0.4460889 &-0.4455300 & -50.3776 & -50.3570 & -0.04 \\
700 & -0.4465695 &-0.4460026 &-0.4454351 & -51.1774 & -51.1415 & -0.07 \\
1152 & -0.4466360 &-0.4460739 &-0.4455112 & -50.7377 & -50.7087 & -0.06 \\
2560 & -0.4465665 &-0.4460026 &-0.4454378 & -50.9115 & -50.8845 & -0.05 \\
\end{tabular}
\end{center}
\end{table}
A special remark concerns the core states. As the authors of Ref. \cite{prb_91_035105} stress, core states which are not exactly confined inside their MT spheres may affect the calculated forces noticeably. Such core states were allowed in \cite{prb_91_035105} to extend beyond their MT spheres into the interstitial region and into other MT spheres, with a subsequent correction of the calculated forces via a plane wave expansion of their tails. This approach allows one to minimize the size of the matrices, as only valence states need to be described by the basis set. The price, however, is the increased complexity of the core state treatment. Another way to handle the ``shallow'' core states is to include them in the list of the semicore states. In this case the size of the matrices increases slightly, but the strict confinement of the remaining (``deep'') core states inside their MT spheres makes the algorithm simpler, which is especially important when one builds approaches of higher complexity (like the GW approximation) on top of the DFT code. This approach is adopted in the FlapwMBPT code.
\begin{table}[b]
\caption{Calculated free energy (Ry), forces (mRy/a$_{B}$), and numerical forces for PuCoGa$_{5}$ and ferromagnet FePt. Forces were evaluated for the structures with Pu and Pt atoms shifted from their equilibrium positions: $\mathbf{t}_{Pu}=0.02\mathbf{C}$ and $\mathbf{t}_{Pt}=1/2\mathbf{A}+1/2\mathbf{B}+0.52\mathbf{C}$ correspondingly. Free energy corresponding to these perturbed structures is denoted as F(0) in the table. In order to evaluate the forces numerically, two additional small distortions relative to already perturbed structure were considered. Plutonium positions for these distorted structures were $\mathbf{t}_{Pu}=(0.02\pm\Delta)\mathbf{C}$ with $\Delta=0.0004$, and platinum positions were: $\mathbf{t}_{Pt}=1/2\mathbf{A}+1/2\mathbf{B}+(0.52\pm\Delta)\mathbf{C}$ with $\Delta=0.0005$. Values of forces are given for Pu and Pt atoms correspondingly. Total number of \textbf{k}-points in the Brillouin zone was 486 and 6000 for PuCoGa$_{5}$ and FePt correspondingly. Free energies are given relative to the constants -81550 Ry for PuCoGa$_{5}$ and -39414 Ry for FePt.} \label{others}
\begin{center}
\begin{tabular}{@{}c c c c c c c} & & & & & Numerical& Mismatch \\
Solid & F($-\Delta$)& F(0)& F($+\Delta$) & Force & force & (\%) \\
\hline\hline
PuCoGa$_{5}$ &-0.9305437 &-0.9303248 &-0.9301012 &-43.0625&-43.0829 & 0.05 \\
FePt &-0.5907798 &-0.5907130 &-0.5906452 & -18.756 & -18.843 &0.46 \\
\end{tabular}
\end{center}
\end{table}
The principal results of this work, demonstrating the accuracy of the calculated forces, are collected in Table \ref{U_conv} (for $\alpha$-uranium) and in Table \ref{others} (for PuCoGa$_{5}$ and FePt). The tables also include the free energies which were used for the numerical evaluation of the forces. For the numerical differentiation we used the three point formula $F'(0)=\frac{F(\Delta)-F(-\Delta)}{2\Delta}$, with $\Delta$ specified in Tables \ref{U_conv} and \ref{others}. Let us first discuss $\alpha$-uranium. As one can see from Table \ref{U_conv}, the deviation of the calculated forces from the numerical ones is very small (about 0.05\%), which demonstrates the high accuracy of the implementation. It is interesting that the deviation is essentially independent of the sampling of the Brillouin zone: when the number of \textbf{k}-points increases, the forces and the numerical forces change slightly, but their difference remains almost constant. This fact supports the robustness of the implementation. One has to mention that the forces evaluated by numerical differentiation are not exact. Not only do they depend on the step $\Delta$ in the above formula (though this dependence was rather small in all cases considered in this work), but the free energies corresponding to the shifts by $+\Delta$ and $-\Delta$ are subject to different numerical errors. For instance, the MT radii can depend on $\Delta$ (as they did in this work). Thus, the comparison of the directly and numerically evaluated forces should not be considered as a test of the directly evaluated forces against the numerical ones but, rather, as a test of the consistency of the algorithms involved in both the energies and the forces.
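For completeness, the numerical forces can be reproduced directly from the tabulated free energies. The following Python snippet is an illustration; the unit conversion and the factor accounting for the symmetric displacement of the two U atoms reflect our reading of the setup described in the caption of Table \ref{U_conv}:
\begin{verbatim}
BOHR = 0.5291772                  # Angstrom per Bohr radius
b = 5.869 / BOHR                  # lattice parameter b of alpha-U in a_B
delta = 0.0005                    # distortion in fractional units of B

rows = {144: (-0.4468887, -0.4457661),   # k-points: (F(-d), F(+d)),
        2560: (-0.4465665, -0.4454378)}  # relative to -112276 Ry

for k, (fm, fp) in rows.items():
    dE = (fp - fm) / (2 * delta)  # Ry per unit fractional shift
    # the two U atoms sit at +/- t and move in opposite directions,
    # so each moves by delta*b; force per atom in mRy/a_B:
    force = -dE / (2 * b) * 1000
    print(k, round(force, 4))     # about -50.61 and -50.88, cf. the table
\end{verbatim}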
\begin{table}[t]
\caption{Calculated components of the forces exerted on all atoms. The forces correspond to the structures distorted from equilibrium as described in Tables \ref{U_conv} and \ref{others}. The surface (kinetic) term is the contribution to the force from the discontinuity of the kinetic energy at the MT surface, as specified in Eq. (\ref{forc3}). Surface (other) includes the contributions from all other discontinuities, as specified in Eq. (\ref{forc4}). The group of four gallium atoms (in the undistorted structure) becomes split into two groups (2 atoms each) with slightly different forces, which are separated by a slash in the table.} \label{compon}
\begin{center}
\begin{tabular}{@{}c c c c c c c c}
Structure &$\alpha$-U &\multicolumn{4}{c}{PuCoGa$_{5}$} &\multicolumn{2}{c}{FePt} \\
Atom & U&Pu&Co & Ga(4) & Ga(1) &Fe &Pt \\
\hline\hline
Hellmann-Feynman &399.284 &-996.116 &-89.242 &-381.61/24.874&67.072 &-20.333 &28.551 \\
Pulay(core) &-440.654 &954.318 & 64.177 &242.442/-2.174&-38.228 &28.868 &-46.512 \\
Pulay(valence) &-8.712 &-9.682 &29.136 &134.506/-0.482&-27.138 &25.793 &-14.979 \\
Surface(kinetic) &-0.883 &8.311 &-2.408 &14.172/-12.201&0.734 &-15.713 &14.208 \\
Surface(other) &0.054 &-0.006 &-0.0004 &0.0005/-0.0003&-0.0009 &0.086 &-0.024 \\
Total &-50.912 &-43.063 &1.663 &9.51/10.016&2.44 &18.702 &-18.756 \\
Sum of totals (drift) & 0 &\multicolumn{4}{c}{-0.020} &\multicolumn{2}{c}{-0.054} \\
\end{tabular}
\end{center}
\end{table}
Whereas the accuracy of the basis set used in the calculations for $\alpha$-U was specially studied in our previous work \cite{arx_2012_04992}, the basis sets used in the calculations for PuCoGa$_{5}$ and FePt have not been specifically tuned to reach very high accuracy. This, most likely, explains the slightly bigger mismatch between the directly and numerically evaluated forces in these two cases. Nevertheless, the mismatch is small (about 0.5\%) and acceptable in most situations. It demonstrates that the algorithm of the force evaluation is sufficiently accurate not only when one sort of atom is present ($\alpha$-U), but also in materials with different atoms (PuCoGa$_{5}$) and in materials with long range magnetic order (FePt).
Finally, Table \ref{compon} presents the components of the forces for all solids studied in this work. The first interesting observation is that the Hellmann-Feynman and Pulay (core) contributions are by far the biggest components (especially for actinide atoms), and they cancel each other to a considerable degree. Both of them come from the inner part of the MT spheres, stressing the importance of a correct numerical description in that area of the unit cell. The second observation is that the kinetic surface term prevails considerably over all other surface terms. This fact essentially supports the approximation adopted in Ref. \cite{prb_43_6411}, where only the kinetic operator discontinuity was taken into account. A careful analysis of all other discontinuities performed by the authors of Ref. \cite{prb_91_035105} has shown, however, the importance of these additional terms in enhancing the accuracy of the calculated forces. Thus, the other surface contributions were kept in this work and, as one can see, they are not negligible despite their relative smallness.
\section*{Conclusions}
\label{concl}
In conclusion, a formulation of the evaluation of atomic forces in the framework of relativistic density functional theory was given. It is formulated for the APW/LAPW family of basis sets with a flexible inclusion of different kinds of local orbitals (lo, HDLO, HELO). The method has been implemented in the computer code FlapwMBPT and successfully applied to the evaluation of atomic forces in $\alpha$-U, PuCoGa$_{5}$, and FePt. The formulation of the force evaluation in the fully relativistic framework opens an opportunity to study, for instance, the phonon spectra in actinide materials with greater reliability than was previously available with scalar-relativistic approaches. It can also increase the efficiency of the calculations. For example, a recent successful study of the phonon spectra in $\alpha$-plutonium \cite{sr_9_18682} used the small-displacement method \cite{cpc_180_2622} and numerical differentiation of the total energies for the force evaluation. Such a study could be done more easily with the direct evaluation of the forces.
\section*{Acknowledgments}
\label{acknow}
This work was supported by the U.S. Department of Energy, Office of Science, Basic
Energy Sciences as a part of the Computational Materials Science Program.
\bibliographystyle{elsarticle-num}
\section{Introduction}
Deep Neural Networks (DNNs) have shown remarkable success in many computer vision tasks. Despite the high performance achieved by server-based DNNs powered by cutting-edge parallel computing hardware, most state-of-the-art architectures are not yet ready to be deployed on mobile devices due to limitations on computational capacity, memory, and power.
To address this problem, many network compression and acceleration methods have been proposed. Pruning based methods \cite{han2015learning,He_2017_ICCV,Liu2017learning,Luo2017ThiNetAF} explore the sparsity in weights and filters. Quantization based methods \cite{han2015learning,Zhou2016Incremental,Courbariaux2016BinaryNet,rastegari2016xnor,Xu2018DeepNN} reduce the bit-width of network parameters. Low-rank decomposition \cite{Denton2014Exploiting,jaderberg2014speeding,Guo2018Network,Wen2017Coordinating,Alvarez2017Compression} minimizes the channel-wise and spatial redundancy by decomposing the original network into a compact one with low-rank layers. In addition, efficient architectures \cite{Sandler2018MobileNetV2,Ma2018ShuffleNet} are carefully designed to facilitate mobile deployment of deep neural networks. Different from precedent works, this paper proposes a novel approach to design low-rank networks.
Low-rank networks can be trained directly from scratch. However, it is difficult to obtain satisfactory results for several reasons.
(1)~\textit{Low capacity:}
compared with the original full rank network, the capacity of a low-rank network is limited, which causes difficulties in optimizing its performances.
(2)~\textit{Deep structure:}
low-rank decomposition typically doubles the number of layers in a network. The additional layers make numerical optimization much more vulnerable to gradients explosion and/or vanishing.
(3)~\textit{Heuristic rank selection:}
the rank of decomposed network is often chosen as a hyperparameter based on pre-trained
networks; this may not be the optimal rank for the network trained from scratch.
Alternatively, several previous works \cite{zhang2016accelerating,Guo2018Network,jaderberg2014speeding} attempted to decompose pre-trained models in order to get initial low-rank networks. However, the heuristically imposed low-rank could incur huge accuracy loss and network retraining is needed to recover the performance of the original network as much as possible. Some attempts were made to use sparsity regularization \cite{Wen2017Coordinating,chen2015compressing} to constrain the network into a low-rank space. Though sparsity regularization reduces the error incurred by decomposition to some extent, performance still degrades rapidly when compression rate increases.
This paper is an extension of \cite{xu2018trained}. We propose a new method, namely Trained Rank Pruning (TRP), for training low-rank networks. We embed the low-rank decomposition into the training process by gradually pushing the weight distribution of a well functioning network into a low-rank form, where all parameters of the original network are kept and optimized to maintain its capacity. We also propose a nuclear norm regularization, optimized by stochastic sub-gradient descent, that further constrains the weights in a low-rank space to boost the TRP.
The proposed solution is illustrated in Fig.~\ref{fig.1}.
Overall, our contributions are summarized below.
\begin{enumerate}
\setlength\itemsep{-0em}
\item A new training method called the TRP is presented by explicitly embedding the low-rank decomposition into the network training;
\item A nuclear regularization is optimized by stochastic sub-gradient descent to boost the performance of the TRP;
\item Inference acceleration is improved and the approximation accuracy loss is reduced in both channel-wise and spatial-wise decomposition methods.
\end{enumerate}
\begin{figure*}[h]
\centering
\includegraphics[width=5.5in]{framework.pdf}
\caption{The training of TRP consists of two parts as illustrated in (a) and (b). (a) one normal iteration with forward-backward broadcast and weight update. (b) one training iteration inserted by TRP, where the low-rank approximation is first applied on filters before convolution. During backward propagation, the gradients are directly added on low-rank filters and the original weights are substituted by updated low-rank filters. (b) is applied once every $m$ iterations (\textit{i.e.} when gradient update iteration $t=zm, z=0, 1, 2, \cdots$), otherwise (a) is applied.}\label{fig.1}
\end{figure*}
\section{Related Works}
A lot of works have been proposed to accelerate the inference of deep neural networks. Briefly, they can be grouped into three main categories: quantization, pruning, and low-rank decomposition.
\textbf{Quantization} Weight quantization methods include training a quantized model from scratch \cite{chen2015compressing,Courbariaux2016BinaryNet,rastegari2016xnor} or converting a pre-trained model into quantized representation \cite{Zhou2016Incremental,Han2015DeepCC,Xu2018DeepNN}. The quantized weight representation includes binary value \cite{rastegari2016xnor,Courbariaux2016BinaryNet} or hash buckets \cite{chen2015compressing}.
Note that our method is inspired by the scheme of combining quantization with the training process, \textit{i.e.}, we embed the low-rank decomposition into the training process to explicitly guide the parameters to a low-rank form.
\textbf{Pruning} Non-structured and structured sparsity are introduced by pruning. \cite{han2015learning} proposes to prune unimportant connections between neural units with small weights in a pre-trained CNN. \cite{Wen2016LearningSS} utilizes group Lasso strategy to learn the structure sparsity of networks. \cite{Liu2017learning} adopts a similar strategy by explicitly imposing scaling factors on each channel to measure the importance of each connection and dropping those with small weights. In \cite{He_2017_ICCV}, the pruning problem is formulated as a data recovery problem. Pre-trained filters are re-weighted by minimizing a data recovery objective function. Channels with smaller weight are pruned. \cite{Luo2017ThiNetAF} heuristically selects filters using change of next layer's output as a criterion.
\textbf{Low-rank decomposition} Original models are decomposed into compact ones with more lightweight layers. \cite{jaderberg2014speeding} considers both the spatial-wise and channel-wise redundancy and proposes decomposing a filter into two cascaded asymmetric filters. \cite{zhang2016accelerating} further assumes the feature map lies in a low-rank subspace and decomposes the convolution filter into a $k\times k$ filter followed by $1\times 1$ filters via SVD. \cite{Guo2018Network} exploits the low-rank assumption of convolution filters and decomposes a regular convolution into several depth-wise and point-wise convolution structures. Although these works achieved notable performance in network compression, all of them are based on the low-rank assumption. When this assumption is not completely satisfied, large prediction error may occur.
Alternatively, some other works \cite{Wen2017Coordinating,Alvarez2017Compression} implicitly utilize sparsity regularization to direct the neural network training process towards a low-rank representation. Our work is similar to this low-rank regularization method. However, in addition to appending an implicit regularization during training, we impose an explicit low-rank constraint in our training process and prove that our approach pushes the weight distribution into a low-rank form quite effectively.
\section{Methodology}
\subsection{Preliminaries}
Formally, the convolution filters in a layer can be denoted by a tensor $W \in \mathbb{R}^{n \times c \times k_w \times k_h}$, where $ n $ and $ c $ are the number of filters and input channels, $ k_h $ and $ k_w $ are the height and width of the filters. An input of the convolution layer $ F_i \in \mathbb{R}^{c \times x \times y}$ generates an output as $ F_o = W * F_i $. Channel-wise correlation \cite{zhang2016accelerating} and spatial-wise correlation \cite{jaderberg2014speeding} are explored to approximate convolution filters in a low-rank space. In this paper, we focus on these two decomposition schemes.
However, unlike the previous works, we propose a new training scheme TRP to obtain a low-rank network without re-training after decomposition.
\subsection{Trained Rank Pruning}
Trained Rank Pruning (TRP) is motivated by the strategies of training quantized nets.
One of the gradient update schemes to train quantized networks from scratch \cite{Li2017TrainingQN} is
\begin{equation}\label{equ.1}
w^{t+1}=Q(w^t-\alpha \triangledown f(w^t))
\end{equation}
where $Q(\cdot)$ is the quantization function and $w^t$ denotes the parameters in the $t$-th iteration. The parameters are re-quantized by $Q(\cdot)$ after each gradient update.
In contrast, we propose a simple yet effective training scheme, called Trained Rank Pruning (TRP), which operates in a periodic fashion:
\begin{equation}\label{equ.2}
\begin{split}
W^{t+1}&=\left \{
\begin{array}{c}
W^t - \alpha \triangledown f(W^t) \quad t \% m \neq 0 \\
T^z - \alpha \triangledown f(T^z) \quad t \% m = 0
\end{array} \right. \\
T^{z}&=\mathcal{D}(W^{t}), \quad z = t / m
\end{split}
\end{equation}
where $\mathcal{D}({\cdot})$ is a low-rank tensor approximation operator, $\alpha$ is the learning rate, $t$ indexes the iteration, $z$ counts the applications of the operator $\mathcal{D}$, and $m$ is the period of the low-rank approximation.
At first glance, the TRP looks very simple. An immediate concern arises: can these iterations guarantee that the rank of the parameters converges and, more importantly, does not increase as they are updated in this way? A positive answer (see Theorem 2) given in our theoretical analysis certifies the legitimacy of this algorithm.
For network quantization, if the gradients are smaller than the quantization step, the gradient information is totally lost and becomes zero. This does not happen in TRP, because the low-rank operator is applied to the weight tensor. Furthermore, we apply the low-rank approximation only every $m$ SGD iterations, which saves training time to a large extent. As illustrated in Fig.~\ref{fig.1}, every $m$ iterations we perform the low-rank approximation on the original filters, and the gradients are updated on the resulting low-rank form. Otherwise, the network is updated via normal SGD.
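A minimal sketch of the schedule in Eq.~(\ref{equ.2}) in a PyTorch-style training loop may make the procedure concrete. This is an illustration only, not our actual implementation; \texttt{model}, \texttt{loader}, \texttt{loss\_fn}, \texttt{optimizer} and \texttt{tsvd\_4d} are placeholders, the last one standing for the TSVD operator $\mathcal{D}$ sketched after Eq.~(\ref{equ.4}):
\begin{verbatim}
import torch

def train_trp(model, loader, loss_fn, optimizer, m=20, e=0.02, epochs=1):
    t = 0
    for _ in range(epochs):
        for x, y in loader:
            if t % m == 0:
                # replace each conv weight W^t by its TSVD approximation
                # T^z, so that the gradient is taken at the low-rank point
                with torch.no_grad():
                    for layer in model.modules():
                        if isinstance(layer, torch.nn.Conv2d):
                            layer.weight.copy_(tsvd_4d(layer.weight, e))
            loss = loss_fn(model(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()   # updates the (low-rank) weights in place
            t += 1
\end{verbatim}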
Our training scheme can be combined with any low-rank operator. In this work, we choose the low-rank techniques proposed in \cite{jaderberg2014speeding} and \cite{zhang2016accelerating}, both of which transform the 4-dimensional filters into a 2D matrix and then apply the truncated singular value decomposition (TSVD). The SVD of the matrix ${W}^t$ can be written as:
\begin{equation}\label{equ.3}
W^t=\sum_{i=1}^{rank(W^t)}\sigma_i\cdot U_i\cdot (V_i)^T
\end{equation}
where $\sigma_i$ are the singular values of $W^t$ with $\sigma_1\geq \sigma_2 \geq \cdots \geq \sigma_{rank(W^t)}$, and $U_i$ and $V_i$ are the singular vectors. The parameterized TSVD($W^t;e$) finds the smallest integer $k$ such that
\begin{equation}\label{equ.4}
\sum_{j=k+1}^{rank(W^t)}(\sigma_j)^2 \leq \quad e \sum_{i=1}^{rank(W^t)}(\sigma_i)^2
\end{equation}
where $e \in (0, 1)$ is a pre-defined hyper-parameter, the energy-pruning ratio.
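For reference, the parameterized TSVD is only a few lines of code. A numpy sketch operating on the reshaped 2D matrix (the 4D-to-2D reshaping of \cite{zhang2016accelerating,jaderberg2014speeding} is assumed to have been applied already) could read:
\begin{verbatim}
import numpy as np

def tsvd(W, e):
    """Drop the largest set of trailing singular values whose summed
    squared energy stays within e times the total energy (Eq. 4)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    tail = np.cumsum((s ** 2)[::-1])      # tail[d-1]: energy of d smallest
    d = int(np.searchsorted(tail, e * (s ** 2).sum(), side='right'))
    k = len(s) - d                        # rank that is kept
    return (U[:, :k] * s[:k]) @ Vt[:k], k

W = np.random.randn(64, 576)              # e.g. a reshaped 64x64x3x3 filter
W_lr, k = tsvd(W, 0.02)
print(k, np.linalg.matrix_rank(W_lr))     # the kept rank
\end{verbatim}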
After truncating the last $n-k$ singular values, we transform the low-rank 2D matrix back to a 4D tensor. Compared with directly training low-rank structures from scratch, the proposed TRP has the following advantages.
(1) Unlike the works \cite{zhang2016accelerating,jaderberg2014speeding}, which update the decomposed filters independently of the network training, we update the network directly on the original 4D shape of the decomposed parameters, which enables joint network decomposition and training while preserving the discriminative capacity as much as possible.
(2) Since the gradient update is performed based on the original network structure, there will be no exploding and vanishing gradients problems caused by additional layers.
(3) The rank of each layer is automatically selected during training. In section \ref{sec:thm} we prove a theorem certifying that the rank of the network weights converges and does not increase.
\subsection{Nuclear Norm Regularization}
The nuclear norm is widely used in matrix completion problems. Recently, it has been introduced to constrain the network into a low-rank space during the training process \cite{Alvarez2017Compression}.
\begin{equation}\label{equ.5}
\min \left\{ f\left(x; w\right)+\lambda \sum_{l=1}^L||W_l||_* \right\}
\end{equation}
where $f(\cdot)$ is the objective loss function and the nuclear norm $||W_l||_*$ is defined as $||W_l||_*=\sum_{i=1}^{rank(W_l)}\sigma_l^i$, with $\sigma_l^i$ the singular values of $W_l$. $\lambda$ is a hyper-parameter setting the influence of the nuclear norm. In \cite{Alvarez2017Compression} the proximity operator is applied in each layer independently to solve Eq.~(\ref{equ.5}). However, the proximity operator is split from the training process and does not consider the coupling between layers.
In this paper, we utilize stochastic sub-gradient descent \cite{Avron2012EfficientAP} to optimize the nuclear norm regularization during the training process. Let $W=U\Sigma V^T$ be the SVD of $W$ and let $U_{tru}, V_{tru}$ be $U, V$ truncated to the first $rank(W)$ columns; then $U_{tru}V_{tru}^T$ is a sub-gradient of $||W||_*$ \cite{watson1992characterization}. Thus, the sub-gradient of Eq.~(\ref{equ.5}) in a layer is
\begin{equation}\label{equ.6}
\triangledown f+\lambda U_{tru}V_{tru}^T
\end{equation}
The nuclear norm and the loss function are optimized simultaneously during the training of the networks, and this regularization can further be combined with the proposed TRP.
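In code, the sub-gradient (\ref{equ.6}) amounts to one SVD per layer. A numpy sketch (illustrative only; \texttt{grad\_f} stands for the back-propagated gradient $\triangledown f$ of the layer):
\begin{verbatim}
import numpy as np

def nuclear_subgradient(W, tol=1e-8):
    # U_tru V_tru^T, with U and V truncated to the numerical rank of W
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    r = int((s > tol * s[0]).sum())
    return U[:, :r] @ Vt[:r]

def sgd_step(W, grad_f, lr=1e-2, lam=3e-4):
    # one stochastic sub-gradient step on f + lambda * ||W||_*
    return W - lr * (grad_f + lam * nuclear_subgradient(W))
\end{verbatim}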
\subsection{Theoretic Analysis}\label{sec:thm}
In this section, we analyze the rank convergence of TRP from the perspective of matrix perturbation theory \cite{stewart1990matrix}. We prove that the rank in TRP is monotonically non-increasing, \textit{i.e.,} the model gradually converges to a sparser one.
Let $A$ be an $m\times n$ matrix; without loss of generality, $m\geq n$.
$\Sigma = diag\left(\sigma_1,\cdots,\sigma_n\right)$ with $\sigma_1\geq \sigma_2\geq\cdots\geq\sigma_n$ is the diagonal matrix composed of all singular values of $A$.
Let $\widetilde{A}=A+E$ be a perturbation of $A$, where $E$ is the noise matrix, and let
$\widetilde{\Sigma} = diag\left(\widetilde{\sigma}_1,\cdots,\widetilde{\sigma}_n\right)$ with $\widetilde{\sigma}_1 \geq \widetilde{\sigma}_2 \geq \cdots \geq \widetilde{\sigma}_n$ collect the singular values of $\widetilde{A}$.
The basic perturbation bounds for the singular values of a matrix are given by
\begin{theorem}\label{theorem1}
Mirsky's theorem \cite{mirsky1960symmetric}:
\begin{equation}\label{equ.9}
\sqrt{\sum_{i}|\widetilde{\sigma}_i-\sigma_i|^2}\leq||E||_F\\
\end{equation}
\end{theorem}
where $||\cdot||_F$ is the Frobenius norm. The following corollary can then be inferred from Theorem \ref{theorem1}:
\begin{corollary}\label{corollary1}
Let $B$ be any $m\times n$ matrix of rank not greater than $k$, \textit{i.e.} the singular values of B can be denoted by $\varphi_1 \geq \cdots \geq \varphi_k \geq 0$ and $\varphi_{k+1}=\cdots=\varphi_n=0$. Then
\begin{equation}\label{equ.10}
||B-A||_F \geq \sqrt{\sum_{i=1}^{n}|\varphi_i-\sigma_i|^2}\geq \sqrt{\sum_{j=k+1}^{n}\sigma_{j}^2}\\
\end{equation}
\end{corollary}
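Both bounds are easy to check numerically; for instance, the following Python lines (with random matrices standing in for $A$ and $E$) verify Mirsky's inequality:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((12, 8))
E = 0.05 * rng.standard_normal((12, 8))
s  = np.linalg.svd(A, compute_uv=False)      # sigma_i, descending
st = np.linalg.svd(A + E, compute_uv=False)  # perturbed singular values
print(np.sqrt(((st - s) ** 2).sum()),
      np.linalg.norm(E))                     # lhs <= ||E||_F always holds
\end{verbatim}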
Below, we analyze the training procedure of the proposed TRP. Note that all $W$ below have been reshaped into 2D matrices. In terms of Eq.~(\ref{equ.2}), the training process between two successive TSVD operations can be rewritten as Eq.~(\ref{equ.11}):
\begin{equation}\label{equ.11}
\begin{split}
W^t&=T^z=TSVD(W^t;e)\\
W^{t+m} &= T^z-\alpha \Sigma_{i=0}^{m-1}\triangledown f(W^{t+i})\\
T^{z+1}&=TSVD(W^{t+m};e)
\end{split}
\end{equation}
where $W^t$ is the weight matrix in the $t$-th iteration, $T^z$ is the weight matrix after applying TSVD to $W^t$, $\triangledown f(W^{t+i})$ is the gradient back-propagated during the $(t+i)$-th iteration, and $e\in\left(0,1\right)$ is the predefined energy threshold. Then we have the following theorem.
\begin{theorem}\label{thm:2}
Assume that $||\alpha \triangledown f||_F$ has an upper bound G, if $G<\frac{\sqrt{e}}{m}||W^{t+m}||_F$, then $rank(T^z)\geq rank(T^{z+1})$.
\end{theorem}
\begin{proof}
We denote by $\sigma^t_{j}$ and $\sigma^{t+m}_{j}$ the singular values of $W^t$ and $W^{t+m}$, respectively. At the $t$-th iteration, given the energy ratio threshold $e$, the TSVD operation finds the singular value index $k \in [0, n-1]$ such that:
\begin{equation}\label{equ.12}
\begin{split}
\sum_{j=k+1}^{n}\left(\sigma^{t}_{j}\right)^2&<e||W^t||_F^2\\
\sum_{j=k}^{n}\left(\sigma^t_{j}\right)^2&\geq e||W^t||_F^2\\
\end{split}
\end{equation}
In terms of Eq.~(\ref{equ.12}), $T^z$ is a rank-$k$ matrix, i.e., the last $n-k$ singular values of $T^z$ are equal to $0$. According to Corollary \ref{corollary1}, we can derive that:
\begin{equation}\label{equ.13}
\begin{split}
||W^{t+m}-T^z||_F& = ||\alpha \sum_{i=0}^{m-1}\triangledown f^{t+i}||_F\\
&\geq \sqrt{\sum_{j=k+1}^{n}\left(\sigma^{t+m}_{j}\right)^2} \\
\end{split}
\end{equation}
Given the assumption $G < \frac{\sqrt{e}}{m} ||W^{t+m}||_F $, we can get:
\begin{equation}\label{equ.14}
\begin{split}
\frac{\sqrt{\sum_{j=k+1}^{n}\left(\sigma^{t+m}_{j}\right)^2}}{||W^{t+m}||_F}&\leq \frac{||\alpha \sum_{i=0}^{m-1}\triangledown f^{t+i}||_F}{||W^{t+m}||_F}\\
&\leq \frac{\sum_{i=0}^{m-1}||\alpha \triangledown f^{t+i}||_F}{||W^{t+m}||_F}\\
&\leq \frac{mG}{||W^{t+m}||_F}< \sqrt{e} \\
\end{split}
\end{equation}
Eq.~(\ref{equ.14}) indicates that, since the perturbations of the singular values are bounded by the parameter gradients, a properly selected TSVD energy ratio threshold $e$ guarantees the following: if $n-k$ singular values were pruned by the previous TSVD iteration, then before the next TSVD the energy of the last $n-k$ singular values is still less than the pre-defined threshold $e$. Thus the next TSVD keeps the number of pruned singular values or drops more of them in order to satisfy the criterion in Eq.~(\ref{equ.12}); consequently, a weight matrix with a lower or equal rank is obtained, \textit{i.e.} $rank(T^{z})\geq rank(T^{z+1})$. We further confirm this analysis of the rank distribution variation in Section \ref{sec:exp}.
\end{proof}
\begin{table}[htb]
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Model & Top 1 ($\%$) & Speed up\\
\hline\hline
R-20 (baseline)&91.74&1.00$\times$\\
\hline
R-20 (TRP1)&90.12&1.97$\times$\\
R-20 (TRP1+Nu)&\textbf{90.50}&\textbf{2.17$\times$}\\
R-20 (\cite{zhang2016accelerating})&88.13&1.41$\times$\\
\hline
R-20 (TRP2)&90.13&2.66$\times$\\
R-20 (TRP2+Nu)&\textbf{90.62}&\textbf{2.84$\times$}\\
R-20 (\cite{jaderberg2014speeding})&89.49&1.66$\times$\\
\hline
\hline
R-56 (baseline)&93.14&1.00$\times$\\
\hline
R-56 (TRP1)&\textbf{92.77}&2.31$\times$\\
R-56 (TRP1+Nu)&91.85&\textbf{4.48$\times$}\\
R-56 (\cite{zhang2016accelerating})&91.56&2.10$\times$\\
\hline
R-56 (TRP2)&\textbf{92.63}&2.43$\times$\\
R-56 (TRP2+Nu)&91.62&\textbf{4.51$\times$}\\
R-56 (\cite{jaderberg2014speeding})&91.59&2.10$\times$\\
\hline
R-56 \cite{He_2017_ICCV}&91.80&2.00$\times$\\
R-56 \cite{li2016pruning}&91.60&2.00$\times$\\
\hline
\end{tabular}
\end{center}
\caption{Experimental results on CIFAR-10. ``R-'' indicates ResNet.}\label{tab1}\vspace*{-0.4cm}
\end{table}
\section{Experiments}\label{sec:exp}
\subsection{Datasets and Baseline}
We evaluate the performance of the TRP scheme on two common datasets, CIFAR-10 \cite{AlexCifar10} and ImageNet \cite{Deng2009ImageNetAL}. The CIFAR-10 dataset consists of colored natural images with $32\times 32$ resolution in 10 classes. The ImageNet dataset consists of 1000 classes of images for the recognition task.
For both datasets, we adopt ResNet \cite{He2016DeepRL} as our baseline model since it is widely used in different vision tasks. We use ResNet-20 and ResNet-56 for CIFAR-10 and ResNet-18 and ResNet-50 for ImageNet. As the evaluation metric, we adopt top-1 accuracy on CIFAR-10 and top-1, top-5 accuracy on ImageNet. To measure the acceleration performance, we compute the FLOPs ratio between the baseline and the decomposed models to obtain the final speed-up rate. Wall-clock CPU and GPU time is also compared. Apart from the basic decomposition methods, we compare the performance with other state-of-the-art acceleration algorithms~\cite{He_2017_ICCV,li2016pruning,Luo2017ThiNetAF,Zhou_2019_ICCV}.
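The speed-up metric is a plain FLOPs ratio. As an illustration (our sketch of the channel-wise scheme of \cite{zhang2016accelerating}, in which a $k\times k$ convolution of rank $r$ is replaced by a $k\times k$ convolution with $r$ output channels followed by a $1\times 1$ convolution), the per-layer ratio can be computed as follows:
\begin{verbatim}
def conv_flops(n, c, k, h, w):
    # multiply-accumulates of an n-filter k x k convolution on c input
    # channels producing an h x w output map
    return n * c * k * k * h * w

def channel_decomp_speedup(n, c, k, h, w, r):
    full = conv_flops(n, c, k, h, w)
    dec = conv_flops(r, c, k, h, w) + conv_flops(n, r, 1, h, w)
    return full / dec

print(channel_decomp_speedup(n=64, c=64, k=3, h=32, w=32, r=16))  # 3.6
\end{verbatim}
The network-level speed-up quoted in the tables is the analogous ratio accumulated over all decomposed layers.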
\begin{table}[htb]
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Method &Top1($\%$)& Top5($\%$)& Speed up\\
\hline\hline
Baseline&69.10&88.94&1.00$\times$\\
\hline
TRP1 &\textbf{65.46}&\textbf{86.48} &1.81$\times$\\
TRP1+Nu&65.39&86.37&\textbf{2.23$\times$}\\
\cite{zhang2016accelerating}\footnotemark[1] &-& 83.69&1.39$\times$ \\
\cite{zhang2016accelerating}&63.10&84.44&1.41$\times$\\
\hline
TRP2 &\textbf{65.51}&\textbf{86.74} &$2.60\times$\\
TRP2+Nu&65.34&86.61&\textbf{3.18}$\times$\\
\cite{jaderberg2014speeding}&62.80&83.72&2.00$\times$\\
\hline
\end{tabular}
\end{center}
\caption{Results of ResNet-18 on ImageNet. }\label{tab2}\vspace*{-0.2cm}
\end{table}
\begin{table}[htb]
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Method &Top1($\%$)& Top5($\%$) & Speed up\\
\hline\hline
Baseline&75.90&92.70&1.00$\times$\\
\hline
TRP1+Nu&72.69&91.41&\textbf{2.30}$\times$\\
TRP1&\textbf{74.06}&\textbf{92.07}&1.80$\times$\\
\cite{zhang2016accelerating}&71.80&90.2&1.50$\times$\\
\cite{He_2017_ICCV}&-&90.80&2.00$\times$\\
\cite{Luo2017ThiNetAF}&72.04&90.67&1.58$\times$\\
\cite{luo2018thinet}&72.03&90.99&2.26$\times$\\
\cite{Zhou_2019_ICCV}&71.50&90.20&2.30$\times$\\
\hline
\end{tabular}
\end{center}
\caption{Results of ResNet-50 on ImageNet.}\label{tab3}\vspace*{-0.4cm}
\end{table}
\subsection{Implementation Details}
We implement our TRP scheme on NVIDIA 1080 Ti GPUs. For training on CIFAR-10, we start with a base learning rate of $0.1$, train for 164 epochs, and divide the rate by $10$ at the $82$-nd and $122$-nd epochs. For ImageNet, we directly finetune the model with the TRP scheme from the pre-trained baseline with a learning rate of $0.0001$ for 10 epochs. We adopt the SGD solver to update the weights, with weight decay $10^{-4}$ and momentum $0.9$. The accuracy improvement enabled by data-dependent decomposition vanishes after fine-tuning, so we simply adopt the retrained data-independent decomposition as our basic methods.
\footnotetext[1]{the implementation of \cite{Guo2018Network}}
\subsection{Results on CIFAR-10}
\textbf{Settings.} Experiments on both channel-wise decomposition (TRP1) and spatial-wise decomposition (TRP2) are considered. The TSVD energy threshold in TRP and TRP+Nu is $0.02$, and the nuclear norm weight $\lambda$ is set to $0.0003$. We decompose both the $1\times 1$ and $3\times 3$ layers in ResNet-56.
\textbf{Results.} As shown in Table~\ref{tab1}, for both spatial-wise and channel-wise decomposition, the proposed TRP outperforms the basic methods~\cite{zhang2016accelerating,jaderberg2014speeding} on ResNet-20 and ResNet-56. Results become even better when nuclear regularization is used. For example, in the channel-wise decomposition (TRP1) of ResNet-56, TRP combined with nuclear regularization achieves more than twice the speed-up of \cite{zhang2016accelerating} with the same accuracy drop. TRP also outperforms filter pruning~\cite{li2016pruning} and channel pruning~\cite{He_2017_ICCV}. The channel-decomposed TRP-trained ResNet-56 achieves $92.77\%$ accuracy with $2.31\times$ acceleration, compared to $91.80\%$ for \cite{He_2017_ICCV} and $91.60\%$ for \cite{li2016pruning}. With nuclear regularization, our methods approximately double the acceleration rate of \cite{He_2017_ICCV} and \cite{li2016pruning} with higher accuracy.
\begin{figure}[t]
\centering
\includegraphics[width=3in]{heatmap.pdf}\\
\caption{Visualization of rank selection, taken from the res3-1-2 convolution layer in ResNet-20 trained on CIFAR-10.}\label{fig.5}
\vspace*{-0.4cm}
\end{figure}
\subsection{Results on ImageNet}
\textbf{Settings.} We choose ResNet-18 and ResNet-50 as our baseline models. The TSVD energy threshold $e$ is set to 0.005, and the nuclear norm weight $\lambda$ is 0.0003 for both ResNet-18 and ResNet-50. We decompose both the $3\times 3$ and $1\times 1$ convolution layers in ResNet-50. TRP1 denotes channel-wise decomposition and TRP2 spatial-wise decomposition.
\begin{figure*}[htb]
\centering
\subfigure[Channel-wise decomposition]{
\includegraphics[width=2.8in]{nc2.pdf}}
\subfigure[Spatial-wise decomposition]{
\includegraphics[width=2.8in]{vh2.pdf}}
\caption{Ablation study on ResNet-20. Basic methods are data-independent decomposition methods (channel or spatial) with finetuning.}
\label{fig.6}\vspace*{-0.4cm}
\end{figure*}
\textbf{Results.} The results on ImageNet are shown in Table~\ref{tab2} and Table~\ref{tab3}. For ResNet-18, our method outperforms the basic methods~\cite{zhang2016accelerating,jaderberg2014speeding}. For example, in the channel-wise decomposition, TRP obtains a 1.81$\times$ speed-up with 86.48\% Top5 accuracy on ImageNet, outperforming both the data-driven~\cite{zhang2016accelerating}\textsuperscript{1} and data-independent~\cite{zhang2016accelerating} variants by a large margin. Nuclear regularization further increases the speed-up at the same accuracy.
For ResNet-50, to better validate the effectiveness of our method, we also compare TRP with pruning-based methods. With a $1.80\times$ speed-up, our decomposed ResNet-50 obtains $74.06\%$ Top1 and $92.07\%$ Top5 accuracy, which is much higher than \cite{Luo2017ThiNetAF}. TRP achieves $2.23\times$ acceleration, higher than \cite{He_2017_ICCV}, at the same $1.4\%$ Top5 degradation. Moreover, at the same $2.30\times$ acceleration rate, our performance is better than \cite{Zhou_2019_ICCV}.
\subsection{Rank Variation}
To analyze the variation of the rank distribution during training, we further conduct an experiment on the CIFAR-10 dataset with ResNet-20 and extract the weights from the \emph{res3-1-2} convolution layer under channel-wise decomposition in our TRP scheme. After each TSVD, we compute the normalized energy ratio $ER(i)$ for each singular value $\sigma_i$ as in Eq.~(\ref{eq:energy}).
\begin{equation}\label{eq:energy}
ER(i) = \frac{\sigma_i^2}{\sum_{j=1}^{\mathrm{rank}(T^z)}\sigma_j^2}
\end{equation}
We record a total of $40$ TSVD iterations with period $m=20$, which corresponds to $800$ training iterations; the energy threshold $e$ is pre-defined as $0.05$.
We then visualize the variation of $ER$ in Fig.~\ref{fig.5}. During training, we observe that the theoretical bound value $\max_t\frac{mG}{||W^t||_F} \approx 0.092 < \sqrt{e} \approx 0.223$, which indicates that our basic assumption in Theorem~\ref{thm:2} always holds during the initial training stage.
This phenomenon is also reflected in Fig.~\ref{fig.5}: at the beginning, the energy distribution is almost uniform over the singular values; the number of dropped singular values increases after each TSVD iteration,
and the energy becomes increasingly concentrated in the singular values with smaller indices. Finally, the rank distribution converges to a point where the smallest retained energy ratio exactly reaches the threshold $e$, and TSVD cuts no further singular values.
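For reference, a short sketch of the normalized energy ratio in Eq.~(\ref{eq:energy}) follows; for a random (untrained) matrix the energy is spread over many singular values, consistent with the nearly uniform initial pattern in Fig.~\ref{fig.5}. The matrix size is illustrative.
\begin{verbatim}
import numpy as np

def energy_ratio(s):
    # Normalized energy ratio ER(i) for a vector of singular values s.
    return s**2 / np.sum(s**2)

W = np.random.default_rng(1).normal(size=(64, 32))
s = np.linalg.svd(W, compute_uv=False)
print(np.round(energy_ratio(s)[:5], 3))  # energy spread over many values
\end{verbatim}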
\subsection{Ablation Study}
In order to show the effectiveness of the different components of our method, we compare four training schemes: the basic methods \cite{zhang2016accelerating,jaderberg2014speeding}, the basic methods combined with nuclear norm regularization, TRP, and TRP combined with nuclear norm regularization. The results are shown in Fig.~\ref{fig.6}. We make the following observations:
(1) \emph{Nuclear norm regularization.} When combined with nuclear norm regularization, the basic methods improve by a large margin. Since nuclear norm regularization constrains the filters to a low-rank space, the loss caused by TSVD is smaller than for the basic methods.
(2) \emph{Trained rank pruning.} As depicted in Fig.~\ref{fig.6}, as the speed-up rate increases, the performance of the basic methods, with or without nuclear norm regularization, degrades sharply, whereas the proposed TRP degrades much more slowly. This indicates that, by reusing the capacity of the network, TRP learns better low-rank feature representations than the basic methods. The gain of nuclear norm regularization on top of TRP is smaller than on the basic methods because TRP has already driven the parameters into a low-rank space by embedding TSVD in the training process.
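The nuclear norm term can be optimized with a sub-gradient step: for $W=U\Sigma V^{\prime}$ with strictly positive singular values, $UV^{\prime}$ is a sub-gradient of $\|W\|_*$. A minimal NumPy sketch of the combined update follows, with illustrative values; it is not the actual implementation.
\begin{verbatim}
import numpy as np

def nuclear_subgrad(W):
    # U @ Vt is a sub-gradient of the nuclear norm ||W||_* at W
    # (exact when all singular values are strictly positive).
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ Vt

lam, lr = 3e-4, 1e-3
W = np.random.default_rng(2).normal(size=(64, 32))
task_grad = np.zeros_like(W)          # stand-in for the loss gradient
W -= lr * (task_grad + lam * nuclear_subgrad(W))
\end{verbatim}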
\begin{table}[htb]
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
Model & GPU time (ms)&CPU time (ms)\\
\hline\hline
Baseline&0.45&118.02\\
\hline
TRP1+Nu (channel)&0.33&64.75\\
\hline
TRP2+Nu (spatial)&0.31&49.88\\
\hline
\end{tabular}
\end{center}
\caption{Actual inference time per image on ResNet-18.}\label{tab4}
\end{table}
\vspace*{-0.2cm}
\subsection{Runtime Speed up of Decomposed Networks}
We further evaluate the actual runtime speed-up of the compressed network, as shown in Table \ref{tab4}. The experiment is conducted on a platform with one NVIDIA 1080Ti GPU and a Xeon E5-2630 CPU. The models used are the original ResNet-18 and the models decomposed by TRP1+Nu and TRP2+Nu.
From the results, we observe that our TRP scheme achieves more pronounced acceleration on the CPU. Overall, spatial-wise decomposition combined with our TRP+Nu scheme performs best. Because cuDNN is not optimized for $1\times 3$ and $3\times1$ kernels, the actual speed-up of spatial-wise decomposition on the GPU is less pronounced than the reduction in FLOPs would suggest.
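As a hedged illustration of how such per-image latencies can be measured, the sketch below times a stand-in forward function on the CPU after a warm-up phase; timing a GPU model would additionally require synchronizing the device before reading the clock. All sizes are illustrative.
\begin{verbatim}
import time
import numpy as np

def time_per_image(forward, x, warmup=10, reps=100):
    # Average wall-clock latency (ms) of a single-input forward pass.
    for _ in range(warmup):
        forward(x)
    t0 = time.perf_counter()
    for _ in range(reps):
        forward(x)
    return (time.perf_counter() - t0) / reps * 1e3

W = np.random.default_rng(3).normal(size=(512, 512))
x = np.random.default_rng(4).normal(size=512)
print(time_per_image(lambda v: W @ v, x))
\end{verbatim}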
\section{Conclusion}
In this paper, we proposed Trained Rank Pruning (TRP), a new scheme for training low-rank networks. It leverages the capacity and structure of the original network by embedding the low-rank approximation in the training process. Furthermore, we propose a nuclear norm regularization, optimized by stochastic sub-gradient descent, to boost TRP. The proposed TRP can be combined with any low-rank decomposition method. On the CIFAR-10 and ImageNet datasets, we have shown that our methods outperform the basic methods and other pruning-based methods for both channel-wise and spatial-wise decomposition.
\clearpage
\bibliographystyle{named}
\section{Introduction}
In longitudinal studies, measurements from the same individuals (units) are repeatedly taken over time. However, individuals may be lost to follow-up or may not show up at some of the planned measurement occasions, leading to attrition (also referred to as \emph{dropout}) and intermittent missingness, respectively. \citet{rub1976} provides a well-known taxonomy for mechanisms that generate incomplete sequences. If the probability of a missing response depends on neither the observed nor the missing responses, conditional on the observed covariates, the data are said to be missing completely at random (MCAR). Data are missing at random (MAR) if, conditional on the observed data (both covariates and responses), the missingness does not depend on the unobserved responses. When the previous assumptions do not hold, that is, when, conditional on the observed data, the mechanism leading to missing data still depends on the unobserved responses, data are referred to as missing not at random (MNAR). In the context of likelihood inference, when the parameters in the measurement and in the missingness processes are distinct, processes leading either to MCAR or MAR data may be ignored; when either the parameter spaces are not distinct or the missing data process is MNAR, missing data are non-ignorable (NI). Only when the ignorability property is satisfied can standard (likelihood) methods be used to obtain consistent parameter estimates. Otherwise, some form of joint modeling of the longitudinal measurements and the missingness process is required. See \citet{litrub2002} for a comprehensive review of the topic.
For this purpose, in the following, we will focus on the class of Random Coefficient Based Dropout Models \citep[RCBDMs - ][]{Little1995}.
In this framework, separate (conditional) models are built for the two partially observed processes, and the link between them is due to sharing common or dependent individual- (and possibly outcome-) specific random coefficients. The model structure is completed by assuming that the random coefficients are drawn from a given probability distribution.
Obviously, a choice is needed to define such a distribution and, over the past years, the literature has focused on both parametric and nonparametric specifications. Frequently, the random coefficients are assumed to be Gaussian \citep[e.g.][]{ver2002, gao2004}, but this assumption has been questioned by several authors, see e.g. \citet{sch1999}, since the resulting inference can be sensitive to it, especially in the case of short longitudinal sequences. For this reason, \citet{alf2009} proposed leaving the random coefficient distribution unspecified, defining a semi-parametric model where the longitudinal and the dropout processes are linked through dependent (discrete) random coefficients. \cite{tso2009} suggested following a similar approach for handling intermittent, potentially non-ignorable, missing data.
A similar approach to deal with longitudinal Gaussian data subject to missingness was proposed by \citet{Beunc2008}, where a finite mixture of mixed effect regression models for the longitudinal and the dropout processes was discussed.
Further generalizations in the shared parameter model framework were proposed by \citet{cre2011}, who discussed an approach based on \emph{partially} shared individual (and outcome) specific random coefficients, and by \citet{bart2015} who extended standard latent Markov models to handle potentially informative dropout, via shared discrete random coefficients.
In the present paper, the association structure between the measurement and the dropout processes is based on a random coefficient distribution which is left completely unspecified and estimated through a discrete distribution, leading to a (bi-dimensional) finite mixture model. The adopted bi-dimensional structure allows the bivariate distribution for the random coefficients to reduce to the product of the corresponding marginals when the dropout mechanism is ignorable. Therefore, a peculiar feature of the proposed modeling approach, when compared to standard finite mixture models, is that the MNAR specification properly nests the MAR/MCAR ones, which allows a straightforward (local) sensitivity analysis. We propose to explore the sensitivity of the parameter estimates in the longitudinal model to the assumptions on the non-ignorability of the dropout process by developing an appropriate version of the so-called \emph{index of sensitivity to non-ignorability} (ISNI) introduced by \cite{trox2004} and \cite{ma2005}, considering different perturbation scenarios.
The structure of the paper is as follows. In section \ref{sec:2} we introduce the motivating application, the Leiden 85+ study, entailing the dynamics of cognitive functioning in the elderly. Section \ref{sec:3} discusses general random coefficient based dropout models, while our proposal is detailed in section \ref{sec:4}. Sections \ref{sec:5} and \ref{sec:6} detail the proposed EM algorithm for maximum likelihood estimation of the model parameters and the index of local sensitivity we propose. Section \ref{sec:7} presents the application of the proposed model to data from the motivating example, under both MAR and MNAR assumptions, together with the results of the sensitivity analysis. The last section contains concluding remarks.
\section{Motivating example: Leiden 85+ data}
\label{sec:2}
The motivating data come from the Leiden 85+ study, a retrospective study of 705 inhabitants of Leiden (the Netherlands) who reached the age of 85 years between September 1997 and September 1999. The study aimed at identifying demographic and genetic determinants of the dynamics of cognitive functioning in the elderly. Several covariates collected at the beginning of the study were considered: gender {(female is the reference category)}, educational status, distinguishing between primary {(reference category)} and higher education, and plasma Apolipoprotein E (APOE) genotype. The educational level was determined by the number of years each subject went to school, with primary education corresponding to less than 7 years of schooling. As regards the APOE genotype, the three largest groups were considered: $\epsilon2,\epsilon3$, and $\epsilon 4$. The latter allele is known to be linked to an increased risk of dementia, whereas $\epsilon 2$ allele carriers are relatively protected. Only 541 subjects present complete covariate information and will be considered in the following.
Study participants were visited yearly until the age of $90$ at their place of residence and face-to-face interviews were conducted through a questionnaire whose items are designed to assess orientation, attention, language skills and the ability to perform simple actions. The Mini Mental State Examination index, in the following MMSE \citep{fol1975}, is obtained by summing the scores on the items of the questionnaire designed to assess potential cognitive impairment. The observed values are integers ranging between $0$ and $30$ (maximum total score).
A number of enrolled subjects dropout prematurely, because of poor health conditions or death. In Table \ref{tab1}, we report the total number of available measures for each follow-up visit. Also, we report the number (and the percentage) of participants who leave the study between the current and the subsequent occasion, distinguishing between those who dropout and those who die.
As can be seen, less than half of the study participants ($49\%$) present complete longitudinal sequences, mainly because of death ($44\%$ of the subjects died during the follow-up).
\begin{table}[h]
\begin{center}
\caption{Available measures per follow-up visit and number (percentage) of subjects leaving the study between subsequent occasions due to poor health conditions or death}
\label{tab1}
\vspace{1mm}
\begin{tabular}{l c c c c} \hline
Follow-up age & Total & Complete (\%) & Do not participate (\%) & Die (\%) \\ \hline
85-86 & 541 & 484 (89.46) & 9 (1.66) & 48 (8.87) \\
86-87 & 484 & 422 (87.19) & 3 (0.62) & 59 (12.19) \\
87-88 & 422 & 373 (88.39) & 2 (0.47) & 47 (11.14)\\
88-89 & 373 & 318 (85.25) & 6 (1.61) & 49 (13.14) \\
89-90 & 318 & 266 (83.65) & 15 (4.72) & 37 (11.63) \\
\hline
Total & 541 & 266 (49.17) & 35 (6.47) & 240 (44.36) \\
\hline
\end{tabular}
\end{center}
\end{table}
With the aim of understanding how the MMSE score evolves over time, we show in Figure \ref{fig:mean} the corresponding overall mean value across follow-up visits. We also represent the evolution of the mean MMSE stratified by participation in the study (completer, dropout/death before the next occasion). Clearly, cognitive functioning levels in individuals who die are much lower than in subjects who drop out for other reasons or who participate until the end of the study. The same figure also reports the transform $\log[1+(30-MMSE)]$, which is inversely related to the MMSE score but will be used in the following as it avoids the well-known ceiling and floor effects that typically affect this kind of index.
\begin{figure}[h]
\centering
\subfloat[]{{\includegraphics[width=6.5cm]{MeanMMSE2} }}%
\quad
\subfloat[]{{\includegraphics[width=6.5cm]{MeanLogMMSE2} }}%
\caption{Mean MMSE (a) and mean $\log[1+(30-MMSE)]$ (b) over time stratified by subjects' participation to the study.}
\label{fig:mean}
\centering
\end{figure}
A further empirical finding worth noting is that, while the decline in cognitive functioning (as measured by the MMSE score) over time seems to be (at least approximately) constant across groups defined by dropout patterns, the differential participation in the study leads to a different slope when the overall mean score is considered. This finding highlights a potential dependence between the evolution of the MMSE score over time and the dropout process, which may bias parameter estimates and the corresponding inference. In the next sections, we introduce a bi-dimensional finite mixture model for the analysis of longitudinal data subject to potentially non-ignorable dropout.
\section{Random coefficient-based dropout models}
\label{sec:3}
Let $Y_{it}$ represent a set of longitudinal measures recorded on $i=1,\ldots,n,$ subjects at time occasions $t=1,\ldots,T$, and let $\mathbf{x}_{it}=(x_{it1},\ldots,x_{itp})^{\prime}$ denote the corresponding $p$-dimensional vector of observed covariates. Let us assume that, conditional on a $q$-dimensional set of individual-specific random coefficients ${\bf b}_{i}$, the observed responses $y_{it}$ are independent realizations of a random variable with density in the Exponential Family. The canonical parameter $\theta_{it}$ that indexes the density is specified according to the following regression model:
\begin{equation*}
\theta_{it}=\mathbf{x}_{it}^{\prime} \boldsymbol{\beta} + \mathbf{m}_{it}^{\prime}\mathbf{b}_i.
\label{modlong}
\end{equation*}
The terms $\mathbf{b}_i$, $i=1,\ldots,n$, are used to model the effects of unobserved, individual-specific, time-invariant heterogeneity common to each lower-level unit (measurement occasion) within the same $i$-th upper-level unit (individual). Furthermore, $\boldsymbol{\beta}$ is a $p$-dimensional vector of regression parameters that are assumed to be constant across individuals. The covariates whose effects (are assumed to) vary across individuals are collected in the design vector $\mathbf{m}_{it}=(m_{it1},\ldots,m_{itq})^{\prime}$, which represents a proper/improper subset of $\mathbf{x}_{it}$. For identifiability purposes, standard assumptions on the random coefficient vector are
$$
\textrm{E}({\bf b}_{i})={\bf 0}, \quad \textrm{Cov}({\bf b}_{i})={\bf D}, \quad i=1,\dots,n.
$$
Experience in longitudinal data modeling suggests that a potential major issue when dealing with such a kind of studies is represented by missing data. That is, some individuals enrolled in the study do not reach its end and, therefore, only partially participate in the planned sequence of measurement occasions.
In this framework, let $\mathbf{R}_i$ denote the missing data indicator vector, with generic element $R_{it}=1$ if the $i$-th unit drops out at any point in the window $(t-1,t)$, and $R_{it}=0$ otherwise. As remarked above, we consider a discrete time structure for the study and the time to dropout; however, the following arguments apply, with a limited number of changes, to continuous-time survival processes as well. We assume that, once a person drops out, he/she is out forever (attrition). Therefore, if the designed completion time is denoted by $T$, each participant provides $T_i\leq T$ available measures.
To describe the (potential) dependence between the primary (longitudinal) and the secondary (dropout) processes, we may introduce an explicit model for the dropout mechanism, conditional on a set of dropout specific covariates, say $\mathbf{v}_i$, and (a subset of) the random coefficients in the longitudinal response model:
\begin{equation}
h(\mathbf{r}_i \mid \mathbf{v}_i,\mathbf{y}_i,\mathbf{b}^{\ast}_i) = h(\mathbf{r}_i \mid \mathbf{v}_i,\mathbf{b}^{\ast}_i)= \prod_{t=1}^{\min(T, T_i+1)} h(r_{it} \mid \mathbf{v}_i,\mathbf{b}^{\ast}_i), \qquad i=1,\ldots,n.
\label{drop}
\end{equation}
The distribution is indexed by a canonical parameter defined via the regression model:
\[
\phi_{it}=\mathbf{v}_{it}^\prime\boldsymbol{\gamma}+\mathbf{d}_{it}^\prime\mathbf{b}^{\ast}_i
\]
where ${\bf b}_{i}^{\ast}={\bf C} {\bf b}_{i}$, $i=1,\dots,n$, and $\bf C$ is a binary $q_{1}$-dimensional matrix ($q_1 \leq q$), with at most a one in each row.
These models are usually referred to as shared (random) coefficient models; see \cite{wu1988} and \cite{wu1989} for early developments in the field. As may be evinced from equation \eqref{drop}, the assumption of this class of models is that the longitudinal response and the dropout indicator are independent conditional on the individual-specific random coefficients. According to this (local independence) assumption, the joint density of the observed longitudinal responses and the missingness indicator can be specified as
\begin{eqnarray*}
f({\bf y}_{i}, {\bf r}_{i} \mid \mathbf{X}_{i},\mathbf{V}_i)&=&\int f({\bf y}_{i}, {\bf r}_{i} \mid \mathbf{X}_{i},\mathbf{V}_i, \mathbf b_i) dG(\mathbf{b}_i) = \nonumber \\
&=& \int \left[ \prod_{t=1}^{T_{i}} f(y_{it} \mid \mathbf{x}_{it},\mathbf{b}_i) \prod_{t=1}^{\min(T, T_i+1)} h(r_{it} \mid \mathbf{v}_{it}, \mathbf{b}_i) \right]dG(\mathbf{b}_i),
\label{joint}
\end{eqnarray*}
where $G(\cdot)$ represents the random coefficient distribution, often referred to as the \emph{mixing} distribution. Dependence between the measurement and the missingness processes, if any, is completely accounted for by the latent effects, which are also used to describe unobserved, individual-specific heterogeneity in each of the two (univariate) profiles.
As can be easily noticed, this modeling structure entails perfect correlation between (subsets of) the random coefficients in the two equations, and this may not be sufficiently general. As an alternative, we may consider equation-specific random coefficients. In this context, while the random terms describe univariate heterogeneity and overdispersion, their joint distribution models the association between the random coefficients in the two equations and, therefore, between the longitudinal and the missing data process (on the link function scale). \cite{ait2003} discussed such an alternative parameterization, referring to it as the \emph{correlated} random effect model. To avoid any confusion with the estimator proposed by \cite{cham1984}, we will refer to it as the \emph{dependent} random coefficient model. When compared to \emph{shared} random coefficient models, this approach avoids unit correlation between the random terms in the two equations and, therefore, represents a more flexible, albeit still not fully general, approach.
Let $\mathbf{b}_i=(\mathbf{b}_{i1},\mathbf{b}_{i2})$ denote a set of individual- and outcome-specific random coefficients. Based on the standard local independence assumption, the joint density for the couple $\left({\bf Y}_{i}, {\bf R}_{i}\right)$ can be factorized as follows:
\begin{equation}
f({\bf y}_{i}, {\bf r}_{i} \mid \mathbf{X}_{i},\mathbf{V}_i)=\int \left[ \prod_{t=1}^{T_{i}} f(y_{it}|\mathbf{x}_{it},\mathbf{b}_{i1})\prod_{t=1}^{\min(T, T_i+1)} h(r_{it}|\mathbf{v}_{it},\mathbf{b}_{i2}) \right] dG(\mathbf{b}_{i1},\mathbf{b}_{i2}).
\label{joint_corr}
\end{equation}
A different approach to dependent random coefficient models may be defined according to the general scheme proposed by \cite{cre2011}, where common, partially shared and independent (outcome-specific) random coefficients are considered in the measurement and the dropout process. This approach leads to a particular case of dependent random coefficients where, however, the observed and the missing part of the longitudinal response do not generally come from the same distribution.
\subsection{The random coefficient distribution}
When dealing with dependent random coefficient models, a common assumption is that outcome-specific random coefficients are iid Gaussian variates.
According to \cite{wan2001}, \cite{son2002}, \cite{tsi2004}, \cite{nehu2011a}, and \cite{nehu2011b}, the choice of the random effect distribution may not have a great impact on parameter estimates, except in extreme cases, e.g. when the \emph{true} underlying distribution is discrete. In this perspective, a major role is played by the length of the individual sequences: when all subjects have a relatively large number of repeated measurements, the effects of misspecifying the random effect distribution on model parameter estimates become minimal; see the discussion in \cite{riz2008}, who designed a simulation study to investigate the effects that a misspecification of the random coefficient distribution may have on parameter estimates and corresponding standard errors when a shared parameter model is considered. The authors showed that, as the number of repeated measurements per individual grows, the effect of misspecifying the random coefficient distribution vanishes for certain parameter estimates. These results are motivated by explicit reference to theoretical results in \citet{car2000}. In several contexts, however, the follow-up may be short (e.g. in clinical studies) and individual sequences may include only limited information on the random coefficients. In these cases, assumptions on such a distribution may play a crucial role. As noticed by \cite{tso2009}, the choice of an \emph{appropriate} distribution is generally difficult for at least three reasons; see also \citet{alf2009}. First, there is often little information about unobservables in the data, and any assumption is difficult to justify by looking at the observed data only. Second, when high-dimensional random coefficients are considered, the use of a parametric multivariate distribution imposing the same shape on every dimension can be restrictive. Last, a potential dependence of the random coefficients on omitted covariates induces heterogeneity that may be hardly captured by parametric assumptions. In studies where subjects have few measurements, the choice of the random coefficient distribution may therefore be extremely important.
With the aim of proposing a generally applicable approach, \cite{tso2009} considered a semi-parametric approach with shared (random) parameters to analyze continuous longitudinal responses while adjusting for non-monotone missingness. Along the same lines, \cite{alf2009} discussed a model for longitudinal binary responses subject to dropout, where dependence is described via outcome-specific, dependent random coefficients.
According to these finite mixture-based approaches and starting from equation \eqref{joint_corr}, we may write the observed data log-likelihood function as follows:
\begin{align}
\ell(\boldsymbol\Phi, \boldsymbol\Psi, \boldsymbol{\pi}) & = \sum_{i=1}^n \log \left\lbrace \sum_{k=1}^K f(\mathbf{y}_i \mid \mathbf{X}_i,\boldsymbol{\zeta}_{1k})
h(\mathbf{r}_i \mid \mathbf{V}_i,\boldsymbol{\zeta}_{2k})\pi_k \right\rbrace
\nonumber \\
& = \sum_{i=1}^n \log \left\lbrace \sum_{k=1}^K \left[\prod_{t = 1}^{T_i} f({y}_{it} \mid \mathbf{X}_i,\boldsymbol{\zeta}_{1k})
\prod_{t=1}^{\min(T, T_i +1)} h({r}_{it} \mid \mathbf{V}_i,\boldsymbol{\zeta}_{2k})\pi_k \right]\right\rbrace,
\label{eq:log-likelihood}
\end{align}
where $\boldsymbol\Phi=(\boldsymbol{\beta}, \boldsymbol{\zeta}_{11}, \dots,\boldsymbol{\zeta}_{1K})$ and $\boldsymbol\Psi=(\boldsymbol{\gamma}, \boldsymbol{\zeta}_{21}, \dots,\boldsymbol{\zeta}_{2K})$, with $\boldsymbol{\zeta}_{1k}$ and $\boldsymbol{\zeta}_{2k}$ denoting the vectors of discrete random coefficients in the longitudinal and in the missingness process, respectively.
Last, $\boldsymbol{\pi}=(\pi_{1},\dots,\pi_{K})$, with $\pi_k=\Pr({\bf b}_{i}=\boldsymbol{\zeta}_k)=\Pr\left({\bf b}_{i1}=\boldsymbol{\zeta}_{1k},{\bf b}_{i2}=\boldsymbol{\zeta}_{2k}\right)$ identifies the joint probability of (multivariate) locations $\boldsymbol{\zeta}_k=\left(\boldsymbol{\zeta}_{1k},\boldsymbol{\zeta}_{2k}\right)$, $k=1,\ldots,K$.
It is worth noticing that, in the equation above, ${\bf y}_{i}$ refers to the observed individual sequence, say $\mathbf{y}_i^{o}$. Under the model assumptions and due to the local independence between responses coming from the same sample unit, missing data, say $\mathbf{y}_i^{m}$, can be directly integrated out from the joint density of all longitudinal responses, say $f(\mathbf{y}_i^{o}, \mathbf{y}_i^{m} \mid \mathbf{x}_i, \boldsymbol{\zeta}_{k})$, and this leads to the log-likelihood function in equation \eqref{eq:log-likelihood}.
The use of finite mixtures has several significant advantages over parametric approaches. Among others, the EM algorithm for ML estimation is computationally efficient and the discrete nature of the estimates may help classify subjects into disjoint components that may be interpreted as clusters of individuals characterized by homogeneous values of model parameters. However, as we may notice by looking at expression in equation (\ref{eq:log-likelihood}), the latent variables used to account for individual (outcome-specific) departures from homogeneity are intrinsically uni-dimensional. That is, while the locations may differ across profiles, the number of locations ($K$) and the prior probabilities ($\pi_k$) are common to all profiles. This clearly reflects the standard \emph{unidimensionality} assumption in latent class models.
\section{A bi-dimensional finite mixture approach}
\label{sec:4}
Although the modeling approach described above is quite general and flexible, a clear drawback is related to the non-separability of the association structure between the random coefficients in the longitudinal and the missing data profiles.
Moreover, even if it can be easily shown that the likelihood in equation (\ref{eq:log-likelihood}) corresponds to an MNAR model in Rubin's taxonomy \citep{litrub2002}, it is also quite clear that it does not reduce to the (corresponding) MAR model, except in very particular cases (e.g. when $K=1$, or when either $\boldsymbol{\zeta}_{1k}$ or $\boldsymbol{\zeta}_{2k}$ is constant over $k=1,\dots,K$). This makes the analysis of sensitivity to modeling assumptions difficult to carry out and, therefore, makes the application scope of such a modeling approach somewhat narrow.
Based on the considerations above, and with the aim of enhancing model flexibility, we suggest following an approach similar to that proposed by \cite{alf2012}. That is, we consider outcome-specific sets of discrete random coefficients for the longitudinal and the missingness outcome, where each margin is characterized by a (possibly) different number of components. The components in the two margins are then joined by a full (bi-dimensional) matrix containing the masses associated with the pairs $(g,\ell)$, where the first index refers to the components in the longitudinal response profile and the second to the components in the dropout process.
To introduce our proposal, let $\tbf b_i = (\tbf b_{i1}, \tbf b_{i2})$ denote the vector of individual random coefficients associated with the $i$-th subject, $i=1, \dots, n$.
Let us assume that the vector of individual-specific random coefficients $\tbf b_{i1}$ influences the longitudinal data process and follows a discrete distribution defined on $K_1$ distinct support points $\{\b \zeta_{11}, \dots, \b \zeta_{1K_1}\}$ with masses $\pi_{g\star} = \Pr(\tbf b_{i1} = \b\zeta_{1g})$.
Similarly, let us assume that the vector of random coefficients $\tbf b_{i2}$ influences the missing data process and follows a discrete distribution with $K_2$ distinct support points $\{\b \zeta_{21}, \dots, \b \zeta_{2K_2}\}$ with masses $\pi_{\star l} = \Pr(\tbf b_{i2} = \b\zeta_{2l})$.
That is, we assume that
\[
\tbf b_{i1} \sim \sum_{g = 1}^{K_1} \pi_{g\star} \, \delta(\b \zeta_{1g}), \quad \quad \tbf b_{i2} \sim \sum_{\ell = 1}^{K_2} \pi_{\star \ell} \, \delta(\b \zeta_{2\ell}),
\]
where $\delta(a)$ denotes a degenerate distribution placing unit mass at $a$.
To complete the modeling approach we propose, we introduce a joint distribution for the random coefficients, associating a mass $\pi_{g\ell}=\Pr({\bf b}_{i1}=\boldsymbol{\zeta}_{1g}, {\bf b}_{i2}=\boldsymbol{\zeta}_{2\ell})$ to each couple of locations $(\boldsymbol{\zeta}_{1g}, \boldsymbol{\zeta}_{2\ell})$, entailing the longitudinal response and the dropout process, respectively.
Obviously, masses $\pi_{g\star}$ and $\pi_{\star \ell}$ in the univariate profiles are obtained by properly marginalizing $\pi_{g\ell}$:
\[
\pi_{g\star} = \sum_{\ell = 1}^{K_2} \pi_{g\ell}, \quad \quad
\pi_{\star \ell} = \sum_{g = 1}^{K_1} \pi_{g\ell}.
\]
Under the proposed model specification, the likelihood in equation (\ref{eq:log-likelihood}) can be written as follows:
\begin{equation}
\ell(\boldsymbol\Phi, \boldsymbol\Psi, \boldsymbol{\pi}) = \sum_{i=1}^n \log \left\lbrace \sum_{g=1}^{K_1} \sum_{\ell=1}^{K_2} \left[f(\mathbf{y}_i \mid \mathbf{x}_i,\boldsymbol{\zeta}_{1g})h(\mathbf{r}_i \mid \mathbf{v}_i,\boldsymbol{\zeta}_{2\ell})\right] \pi_{g\ell} \right\rbrace.
\label{eq:log-likelihooddouble}
\end{equation}
Using this approach, the marginals control for heterogeneity in the univariate profiles, while the matrix of joint probabilities $\pi_{g\ell}$ describes the association between the latent effects in the two sub-models.
The proposed specification could be considered as a standard finite mixture with $K = K_{1} \times K_{2}$ components, where each of the $K_{1}$ locations in the first profile pairs with each of the $K_{2}$ locations in the second profile.
However, when compared to a standard finite mixture model, the proposed specification provides a more flexible (albeit more complex) representation for the random coefficient distribution. Also, by looking at equation \eqref{eq:log-likelihooddouble}, it is immediately clear that the MNAR model directly reduces to its M(C)AR counterpart when $\pi_{g\ell}=\pi_{g\star} \pi_{\star \ell}$, for $g=1,\dots,K_{1}$ and $\ell=1,\dots,K_{2}$. As we stressed before, this is not true in the case of equation \eqref{eq:log-likelihood}.
Considering a logit transform for the joint masses $\pi_{g\ell}$, we may write
\begin{equation}\label{pi:equation}
\xi_{g \ell}=\log \left(\frac{\pi_{g \ell}}{\pi_{K_{1} K_{2}}}\right) = \alpha_{g \star} + \alpha_{\star \ell} + \lambda_{g \ell},
\end{equation}
where
$\alpha_{g \star} = \log \left({\pi_{g \star}}/{\pi_{K_{1} \star}}\right), \alpha_{\star \ell} = \log \left({\pi_{\star \ell}}/{\pi_{\star K_{2}}}\right),$
and $\lambda_{g \ell}$ provides a measure of the departure from the independence model.
That is, if $\lambda_{g \ell}=0$ for all $(g, \ell) \in \left\{1,\dots,K_{1}\right\} \times \left\{1,\dots,K_{2}\right\}$, then
\[
\log \left(\frac{\pi_{g \ell}}{\pi_{K_{1} K_{2}}}\right) = \alpha_{g \star} + \alpha_{\star \ell}=\log \left(\frac{\pi_{g \star}}{\pi_{K_{1} \star}}\right)+\log \left(\frac{\pi_{\star \ell}}{\pi_{\star K_{2}}}\right).
\]
This corresponds to the independence between the random coefficients in the two equations, and, as a by-product, to the independence between the longitudinal and the dropout process.
Therefore, the vector $\boldsymbol{\lambda} = (\lambda_{11}, \dots, \lambda_{K_1K_2})$ can be formally considered as a \textit{sensitivity} parameter vector, since when $\boldsymbol{\lambda}=\b 0$ the proposed MNAR model reduces to the corresponding M(C)AR model.
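A small numerical check of this parameterization (in Python, with illustrative values for the $\alpha$ terms): with $\boldsymbol{\lambda}=\mathbf{0}$, the joint masses obtained from the logits factorize into the product of their marginals, as claimed above.
\begin{verbatim}
import numpy as np

def joint_masses(alpha1, alpha2, lam):
    # Masses pi[g, l] from logits xi[g, l] = alpha_g + alpha_l + lam[g, l].
    xi = alpha1[:, None] + alpha2[None, :] + lam
    pi = np.exp(xi)
    return pi / pi.sum()

alpha1 = np.array([0.3, -0.2, 0.0])  # K1 = 3, last logit fixed at 0
alpha2 = np.array([0.5, 0.0])        # K2 = 2
pi0 = joint_masses(alpha1, alpha2, np.zeros((3, 2)))
marg = np.outer(pi0.sum(axis=1), pi0.sum(axis=0))
print(np.allclose(pi0, marg))        # True: M(C)AR factorization
\end{verbatim}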
It is worth noticing that the proposed approach has some connections with the model discussed by \citet{Beunc2008}, where parametric shared random coefficients for the longitudinal response and the dropout indicator are joined by means of a (second-level) finite mixture. In fact, according to Theorem 1 in \cite{dun2009}, the elements of any $K_{1} \times K_{2}$ probability matrix $\mbox{\boldmath$\Pi$} \in \mathcal{M}_{K_{1}K_{2}}$, can be decomposed as:
\begin{equation}
\pi_{g\ell}=\sum_{h=1}^{M} \tau_{h} \pi_{g \star \mid h} \pi_{\star \ell \mid h},
\label{eq:pidecomp}
\end{equation}
for an appropriate choice of $M$ and under the following constraints: \begin{eqnarray*}
\sum_{h} \tau_{h}=\sum_{g} \pi_{g\star \mid h}=\sum_{\ell} \pi_{\star \ell \mid
h}=\sum_{g} \sum_{\ell} \pi_{g\ell}=1.
\end{eqnarray*}
Under this parameterization of the random coefficient distribution, the association between the locations $\boldsymbol{\zeta}_{1g}$ and $\boldsymbol{\zeta}_{2\ell}$, $g=1,\dots,K_{1}$, $\ell=1,\dots,K_{2}$, is modeled via the masses $\pi_{g \star \mid h}$ and $\pi_{\star \ell \mid h}$, which vary with the upper-level (latent) class $h=1,\dots,M$.
That is, the random coefficients $\mathbf{b}_{i1}$ and $\mathbf{b}_{i2}$, $i=1,\dots,n$, are assumed to be independent conditional on the $h$-th (upper-level) latent class, $h=1,\dots,M$. Moreover, in that approach the mean and covariance matrix of the profile-specific random coefficient distributions may vary with the second-level component, while in the approach we propose the second-level structure is just a particular way to join the two profiles and, therefore, to control for the dependence between the outcome-specific random coefficients.
\section{ML parameter estimation}
\label{sec:5}
Let us start by assuming that the data vector is composed of an observable part $(\mathbf{y}_{i}, \mathbf{r}_i)$ and an unobservable part $\mathbf{z}_{i}=(z_{i11},\dots,z_{ig\ell}, \dots, z_{iK_1K_2})$. Let us further assume that the random variable $\mathbf{z}_{i}$ has a multinomial distribution, with parameter $\pi_{g\ell}$ denoting the probability of the $g$-th component in the first profile and the $\ell$-th component in the second, for $g=1, \dots, K_1$ and $\ell = 1, \dots, K_2$.
Let $\boldsymbol{\Upsilon} = \left\{\boldsymbol\Phi, \boldsymbol\Psi, \boldsymbol{\pi}\right\}$ denote the vector of all (free) model parameters, where, as before, $\boldsymbol{\Phi} = (\boldsymbol{\beta}, \boldsymbol{\zeta}_{11}, \dots, \boldsymbol{\zeta}_{1K_1})$ and $\boldsymbol \Psi = (\boldsymbol{\gamma}, \boldsymbol{\zeta}_{21}, \dots, \boldsymbol{\zeta}_{2K_2})$ collect the parameters for the longitudinal and the missing data model, respectively, and $\b \pi= (\pi_{11}, \dots, \pi_{K_1 K_2})$.
Based on the modeling assumptions introduced so far, the complete data likelihood function is given by
\begin{eqnarray*}
L_{c}(\boldsymbol{\Upsilon})& = & \prod_{i=1}^{n}\prod_{g=1}^{K_{1}}\prod_{\ell=1}^{K_{2}}\left\{f(\mathbf{
y}_{i},\mathbf{r}_i\mid z_{ig\ell} = 1) \pi_{g\ell}\right\}^{z_{ig\ell}}
\\
& =& \prod_{i=1}^{n}\prod_{g=1}^{K_{1}}\prod_{\ell=1}^{K_{2}}\left\{
\left[ \prod_{t=1}^{T_{i}} f(y_{it}\mid z_{ig\ell}=1 )
\prod_{t=1}^{\min(T, T_{i}+1)} h(r_{it}\mid z_{ig\ell}=1)\right]\pi_{g\ell} \right\}^{z_{ig\ell}}.
\end{eqnarray*}
To derive parameter estimates, we can exploit an extended EM algorithm which, as usual, alternates two separate steps. In the ($r$-th iteration of the) E-step, we compute the posterior expectation of the complete data log-likelihood, conditional on the observed data $(\mathbf{y}_{i}, \mathbf{r}_i)$ and the current parameter estimates $\hat{\boldsymbol{\Upsilon}}^{(r-1)}$. This translates into the computation of the posterior probability of component membership, $w_{ig\ell}$, defined as the posterior expectation of the random variable $z_{ig\ell}$.
In the M-step, we maximize the expected complete-data log-likelihood with respect to model parameters. Clearly, for the finite mixture probabilities $\pi_{g\ell}$, estimation is based upon the constraint $\sum_{g=1}^{K_1}\sum_{\ell=1}^{K_2} \pi_{g\ell} = 1$. As a result, the following score functions are obtained:
\begin{eqnarray*}
\label{eq:score1}
S_{c}(\boldsymbol{\Phi})&=& \sum\limits_{i=1}^{n} \frac{\partial }{\partial {\boldsymbol \Phi}} \sum\limits_{g=1}^{K_{1}} \sum\limits_{\ell=1}^{K_{2}} w_{ig\ell}^{(r)} \left[\log(f_{ig\ell}) + \log(\pi_{g\ell})
\right]=
\sum\limits_{i=1}^{n} \frac{\partial }{\partial {\boldsymbol \Phi}} \sum\limits_{g=1}^{K_{1}} w_{ig\star}^{(r)}\left[\log(f_{i1g})\right], \\
\label{eq:score2}
S_{c}(\boldsymbol{\Psi})&=&\sum\limits_{i=1}^{n}\frac{\partial }{\partial {\boldsymbol \Psi}} \sum\limits_{g=1}^{K_{1}}\sum\limits_{\ell=1}^{K_{2}} w_{ig\ell}^{(r)}\left[\log(f_{ig\ell}) + \log(\pi_{g\ell})
\right]=
\sum\limits_{i=1}^{n} \frac{\partial }{\partial {\boldsymbol \Psi}}\sum\limits_{\ell=1}^{K_{2}} w_{i\star \ell}^{(r)} \left[\log(f_{i2\ell})\right], \\
\label{eq:score3}
S_{c}({\pi}_{g\ell})&=&\frac{\partial }{\partial {\pi}_{g\ell}} \left[\sum\limits_{i=1}^{n} \sum_{g=1}^{K_{1}} \sum_{\ell=1}^{K_{2}} w_{ig\ell}^{(r)} \log(\pi_{g\ell}) - \kappa \left(
\sum_{g=1}^{K_{1}} \sum_{\ell=1}^{K_{2}} \pi_{g\ell} -1
\right)\right].
\end{eqnarray*}
In the equations above, $f_{ig\ell} = f(\textbf y_{i}, \textbf r_i \mid z_{ig\ell}=1)$, while $w_{ig\star}^{(r)}$ and $w_{i\star \ell}^{(r)}$ denote the marginals of the posterior probability $w_{ig\ell}^{(r)}$, and $f_{i1g}$ and $f_{i2\ell}$ denote the factors of the joint density $f_{ig\ell}$ corresponding to the longitudinal and the dropout process, respectively.
As it is typical in finite mixture models, equation (\ref{eq:score3}) can be solved analytically to give the updates
\[
\hat \pi_{g\ell}^{(r)} = \frac{\sum_{i=1}^n w_{ig\ell}^{(r)}}{n},
\]
while the remaining model parameters may be updated using standard Newton-type algorithms. The E- and M-steps of the algorithm are alternated until convergence, that is, until the (relative) difference between two subsequent log-likelihood values is smaller than a given $\varepsilon > 0$. Given that this criterion may indicate lack of progress rather than true convergence, see e.g. \cite{karl2001}, and that the log-likelihood may present multiple local maxima, we suggest starting the algorithm from several different points. In all the following analyses, we used $B=50$ starting points.
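For concreteness, a minimal sketch of the E-step and of the closed-form M-step update for the masses $\pi_{g\ell}$ follows; the uniform draws are stand-ins for the component-specific likelihood values $f_{i1g}$ and $f_{i2\ell}$, and the sizes are illustrative.
\begin{verbatim}
import numpy as np

def em_step(f1, f2, pi):
    # f1[i, g]: likelihood of subject i's longitudinal part, component g
    # f2[i, l]: likelihood of subject i's dropout part, component l
    # E-step: posterior membership w[i, g, l] under local independence
    w = f1[:, :, None] * f2[:, None, :] * pi[None, :, :]
    w /= w.sum(axis=(1, 2), keepdims=True)
    # M-step for the masses: average posterior membership
    return w, w.sum(axis=0) / w.shape[0]

rng = np.random.default_rng(0)
f1, f2 = rng.uniform(size=(4, 3)), rng.uniform(size=(4, 2))
pi = np.full((3, 2), 1 / 6)          # K1 = 3, K2 = 2
w, pi_new = em_step(f1, f2, pi)
print(round(pi_new.sum(), 6))        # 1.0
\end{verbatim}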
Also, as is typically done when dealing with finite mixtures, the numbers of locations $K_1$ and $K_2$ are treated as fixed and known. The algorithm is run for varying $(K_1, K_2)$ combinations and the optimal solution is chosen via standard model selection techniques, such as the AIC \citep{Akaike1973} or the BIC \citep{Schwarz1978}.
Standard errors for model parameter estimates are obtained at convergence of the EM algorithm by the standard sandwich formula \citep{White1980, Royall1986}. This leads to the following estimate of the covariance matrix of model parameters:
\begin{align*}
\label{eq:swd}
\widehat{\mbox{Cov}}(\hat{\boldsymbol{\Upsilon}}) = \tbf I_o(\hat{\boldsymbol{\Upsilon}})^{-1}\, \widehat{\mbox{Cov}(\tbf S)}\,
\tbf I_o(\hat{\boldsymbol{\Upsilon}})^{-1},
\end{align*}
where $\tbf I_o(\hat{\boldsymbol{\Upsilon}})$ denotes the observed information matrix, computed via Oakes' formula \citep{oak1999}. Furthermore, $\tbf S$ denotes the score vector evaluated at $\hat{\boldsymbol{\Upsilon}}$, and $\widehat{\mbox{Cov}(\tbf S)} = \sum_{i=1}^n \tbf S_i(\hat{\boldsymbol{\Upsilon}}) \tbf S_i^\prime(\hat{\boldsymbol{\Upsilon}})$ is the estimate of the covariance matrix of the score function $\mbox{Cov}(\tbf S)$, with $\tbf S_i$ being the contribution of the $i$-th subject to the score vector.
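A minimal sketch of this sandwich estimator, with stand-in values for the observed information matrix and the individual score contributions:
\begin{verbatim}
import numpy as np

def sandwich_cov(I_obs, scores):
    # I^{-1} (sum_i S_i S_i') I^{-1}, one row of `scores` per subject.
    B = scores.T @ scores
    I_inv = np.linalg.inv(I_obs)
    return I_inv @ B @ I_inv

rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 3)) / 10   # stand-in score contributions
I_obs = 50.0 * np.eye(3)                  # stand-in observed information
se = np.sqrt(np.diag(sandwich_cov(I_obs, scores)))
print(np.round(se, 4))
\end{verbatim}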
\section{Sensitivity analysis: definition of the index} \label{sec:6}
The proposed bi-dimensional finite mixture model allows us to account for possible effects of non-ignorable dropout on the primary outcome of interest. However, as highlighted by \cite{mol2008}, for every MNAR model there is a corresponding MAR counterpart that produces exactly the same fit to the observed data. This is because the MNAR model is fitted using the observed data only, implicitly assuming that the distribution of the missing responses is identical to that of the observed ones. Further, the dependence between the longitudinal response (observed and missing) and the dropout indicator modeled via the proposed specification is just one of several possible choices.
Therefore, rather than relying on a single (possibly misspecified) MNAR model, and in order to evaluate how the maximum likelihood estimates of the longitudinal model parameters are influenced by the hypotheses on the dropout mechanism, a sensitivity analysis is always recommended.
In this perspective, most of the available proposals focus on Selection or Pattern Mixture Model specifications \citep{Little1995}, while few proposals are available for shared random coefficient models. A notable exception is the proposal by \cite{cre2010}. Here, the authors considered a sensitivity parameter in the model and studied how model parameter estimates vary when the sensitivity parameter is forced to move away from zero.
Looking at \emph{local} sensitivity, \cite{trox2004} developed an index of local sensitivity to non-ignorability (ISNI) via a first-order Taylor expansion, with the aim of describing the ``geometry'' of the likelihood surface in a neighborhood of a MAR solution.
Such an index was further extended by \cite{ma2005}, \cite{Xie2004}, and \cite{Xie2008} to deal with the general case of $q$-dimensional ($q >1$) non-ignorability parameters, by considering an $L_{2}$ norm to summarize the impact of a unit change in their elements.
An $L_{1}$ norm was instead considered by \cite{Xie2012}, while \cite{Gao2016} further extended the ISNI definition by considering a higher order Taylor expansion.
In the context of joint models for longitudinal responses and (continuous) time to event data, \cite{viv2014} proposed a relative index based on the ratio between the ISNI and a measure of its variability under the MAR assumption.
Due to the peculiarities of the proposed model specification, we proceed as follows to define an index of sensitivity to non-ignorability.
As before, let $\boldsymbol{\lambda} = (\lambda_{11}, \dots, \lambda_{K_1K_2})$ denote the vector of non ignorability parameters and let $\boldsymbol{\lambda}={\bf 0}$ correspond to a MAR model. Also, let $\boldsymbol{\xi} = (\xi_{11}, \dots, \xi_{K_1K_2})$ denote the vector of all logit transforms defined in equation \eqref{pi:equation} and let $\boldsymbol{\xi}_0$ correspond to a MAR model. That is, $\boldsymbol{\xi}_0$ has elements
\[
\xi_{g\ell} = \alpha_{g\star} + \alpha_{\star \ell}, \quad g=1, \dots, K_1, \quad \ell=1, \dots, K_2.
\]
Both vectors $\boldsymbol{\lambda}$ and $\boldsymbol{\xi}$ may be interchangeably considered as non-ignorability parameters in the proposed model specification, but to be coherent with the definition of the index, we will use $\boldsymbol{\lambda}$ in the following.
Last, let us denote by $\hat{\boldsymbol \Phi}(\boldsymbol{\lambda})$ the maximum likelihood estimate for model parameters in the longitudinal data model, conditional on a given value for the sensitivity parameters $\boldsymbol{\lambda}$.
The \emph{index of sensitivity to non-ignorability} may be derived as
\begin{equation}
\label{eq:ISNIbase}
ISNI_{\boldsymbol\Phi}=\left.\frac{\partial \hat{\boldsymbol\Phi}(\boldsymbol{\lambda})} {\partial \boldsymbol{\lambda}} \right|_{\bl \Phi(\bl 0)} \simeq - \left(\left.\frac{\partial^{2} \ell(\boldsymbol\Phi, \boldsymbol\Psi, \boldsymbol{\pi})}{\partial \boldsymbol\Phi \boldsymbol\Phi^{\prime}}\right|_{\bl \Phi(\bl 0)}\right)^{-1} \left. \frac{\partial^{2} \ell(\boldsymbol\Phi, \boldsymbol\Psi, \boldsymbol{\pi})}{\partial \boldsymbol\Phi \boldsymbol{\lambda}}\right|_{\bl \Phi(\bl 0)}.
\end{equation}
Based on the equation above, it is clear that the \textit{ISNI} measures the displacement of model parameter estimates from their MAR counterpart, in the direction of $\boldsymbol{\lambda}$, when we move far from $\boldsymbol{\lambda} = \b 0$.
Following arguments similar to those detailed by \cite{Xie2008}, it can be shown that the following first-order approximation holds:
\begin{align*}
\label{eq:ISNIapprox}
\hat{\boldsymbol\Phi}(\boldsymbol{\lambda}) \simeq \hat{\boldsymbol\Phi}({\bf 0})+ISNI_{\boldsymbol\Phi}\,\boldsymbol{\lambda};
\end{align*}
that is, the ISNI may also be interpreted as the linear impact that changes in the elements of $\boldsymbol{\lambda}$ have on $\hat{\boldsymbol\Phi}$.
It is worth highlighting that $ISNI_{\boldsymbol\Phi}$ is a matrix with $D$ rows and $(K_{1}-1)(K_{2}-1)$ columns, representing the effect that each element of $\boldsymbol{\lambda}$ has on the $D$ elements of $\boldsymbol\Phi$. That is, the proposed formulation of the index leads to a matrix rather than a scalar or a vector as in the original formulations.
In this respect, to derive a global measure of local sensitivity for the parameter estimate $\hat \Phi_d$, $d =1, \dots, D$, when moving away from the MAR assumption, a proper summary of the corresponding row of the \textit{ISNI} matrix, say $ISNI_{\Phi_d}$, needs to be computed.
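In practice, the two blocks of second derivatives in equation \eqref{eq:ISNIbase} can be approximated by finite differences of the (profile) log-likelihood around the MAR fit. The sketch below does this for a generic log-likelihood function; the quadratic toy function at the end, for which the index equals one, only serves as a check. This is a numerical sketch, not the estimation code used in the application.
\begin{verbatim}
import numpy as np

def isni(loglik, phi_hat, lam_dim, h=1e-4):
    # ISNI = -H^{-1} C, where H is the Hessian of loglik w.r.t. phi and
    # C holds the cross derivatives w.r.t. (phi, lambda), at lambda = 0.
    d = len(phi_hat)
    lam0 = np.zeros(lam_dim)
    e = lambda k, n: h * np.eye(n)[k]
    H, C = np.zeros((d, d)), np.zeros((d, lam_dim))
    for a in range(d):
        for b in range(d):
            H[a, b] = (loglik(phi_hat + e(a, d) + e(b, d), lam0)
                       - loglik(phi_hat + e(a, d), lam0)
                       - loglik(phi_hat + e(b, d), lam0)
                       + loglik(phi_hat, lam0)) / h**2
        for b in range(lam_dim):
            C[a, b] = (loglik(phi_hat + e(a, d), lam0 + e(b, lam_dim))
                       - loglik(phi_hat + e(a, d), lam0)
                       - loglik(phi_hat, lam0 + e(b, lam_dim))
                       + loglik(phi_hat, lam0)) / h**2
    return -np.linalg.solve(H, C)

toy = lambda phi, lam: -0.5 * phi[0]**2 + phi[0] * lam[0]
print(np.round(isni(toy, np.array([0.0]), 1), 3))  # [[1.]]
\end{verbatim}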
\section{Analysis of the Leiden 85+ data}
\label{sec:7}
In this section, the bi-dimensional finite mixture model is applied to the analysis of the Leiden 85+ study data. We aim at understanding the effects of a number of covariates on the dynamics of cognitive functioning in the elderly, while controlling for potential bias in the parameter estimates due to possibly non-ignorable dropout.
First, we describe the available covariates in section \ref{sec_preliminary} and summarize the sample in terms of the demographic and genetic characteristics of the individuals participating in the study. Afterwards, we analyze the joint effect of these factors on the dynamics of the (transformed) MMSE score; results are reported in sections \ref{sec_MARmodel}--\ref{sec_MNARmodel}. Last, in section \ref{sec_isni}, a sensitivity analysis is performed to give insight into changes in parameter estimates when moving away from the MAR assumption. Two scenarios are investigated and the results reported.
\subsection{Preliminary analysis}\label{sec_preliminary}
We start the analysis by summarizing, in Table \ref{tabII}, the individual features of the sample of subjects participating in the Leiden 85+ study, both in terms of covariates and of MMSE scores, conditional on the observed participation pattern. That is, we distinguish between individuals who completed the study and those who did not. As highlighted before, subjects with incomplete information are likely to leave the study because of poor health conditions, and this raises the question of whether an analysis based on the observed data only may lead to biased results.
By looking at the overall results, we may observe that $64.88\%$ of the sample has a low level of education, and females represent $66.73\%$ of the whole sample. As regards the \textit{APOE} genotype, the most represented category is, as expected, $APOE_{33}$ $(58.96\%)$, followed by $APOE_{34-44}$ $(21.08\%)$ and $APOE_{22-23}$ $(17.74\%)$, while only a very small portion of the sample ($2.22\%$) carries $APOE_{24}$.
Last, we may notice that more than half of the study participants ($50.83\%$) leave the study before the scheduled end. This proportion is relatively higher for participants with low level of education ($52.71\%$), for males ($58.89\%$), and for those in the $APOE_{34-44}$ group ($61.40\%$).
\begin{table}[htb]
\caption{Leiden 85+ Study: demographic and genetic characteristics of participants}
\label{tabII}
\begin{center}
\begin{tabular}{l c c c }
\hline
Variable & Total & Completed (\%) & Did not complete (\%) \\ \hline
\textbf{Gender } & & & \\
Male & 180 (33.27) & 74 (41.11) & 106 (58.89) \\
Female & 361 (66.73) & 192 (53.19) & 169 (46.81) \\
\textbf{Education} & & & \\
Primary & 351 (64.88) & 166 (47.29) & 185 (52.71) \\
Secondary & 190 (35.12) & 100 (52.63) & 90 (47.37) \\
\textbf{APO-E} & & & \\
22-23 & 96 (17.74) & 54 (56.25) & 42 (43.75) \\
24 & 12 (2.22) & 6 (50) & 6 (50) \\
33 & 319 (58.96) & 162 (50.78) & 157 (49.22) \\
34-44 & 114 (21.08) & 44 (38.60) & 70 (61.40) \\
\hline
Total & 541 (100) & 266 (49.17) & 275 (50.83) \\
\end{tabular}
\end{center}
\end{table}
Figure \ref{fig:plot_cov} reports the evolution of the mean MMSE over time, stratified by the available covariates. Cognitive impairment is higher for males than for females, even if the differences seem to decrease with age, maybe due to a direct age effect or to differential dropout by gender (Figure \ref{fig:plot_cov}a). By looking at Figure \ref{fig:plot_cov}b, we may also observe that participants with higher education are less cognitively impaired at the beginning of the study, and this difference persists over the analyzed time window. Rather than only a direct effect of education, this may suggest that differential socio-economic status is associated with differential levels of education. Last, lower MMSE scores are observed for $APOE_{34-44}$, that is, when allele $\epsilon4$, which is deemed to be a risk factor for dementia, is present. The irregular pattern for $APOE_{24}$ may be due to the small sample size of this group (Figure \ref{fig:plot_cov}c).
\begin{center}
\begin{figure}[h!]
\caption{Leiden 85+ Study: mean MMSE score stratified by age and by gender (a), educational level (b), and APOE genotype (c)}
\centerline{\includegraphics[scale=0.7]{plot_cov.pdf}}
\label{fig:plot_cov}
\end{figure}
\end{center}
\subsection{The MAR model}\label{sec_MARmodel}
We start by estimating a MAR model, based on the assumption of independence between the longitudinal and the dropout process. In terms of equation (\ref{eq:log-likelihooddouble}), this is obtained by assuming $\pi_{g\ell}=\pi_{g\star}\pi_{\star \ell}$, for $g=1,\dots,K_{1}$ and $\ell=1,\dots,K_{2}$. Alternatively, it can be derived by fixing $\boldsymbol{\lambda}={\bf 0}$ in equation (\ref{pi:equation}) or $M=1$ in equation (\ref{eq:pidecomp}). To gain insight into the effects of demographic and genetic features on the individual dynamics of the MMSE score, we consider the following model specification:
\begin{align*}
\left\{
\begin{array}{ccc}
Y_{it} \mid \textbf x_{it},b_{i1} \sim {\rm N}(\mu_{it}, \sigma^{2}) \\
R_{it} \mid \textbf v_{it},b_{i2} \sim {\rm Bin}(1, \phi_{it})
\end{array}
\right.
\end{align*}
The canonical parameters are defined by the following regression models:
\begin{eqnarray*}
\mu_{it}&=&(\beta_{0}+b_{i1})+\beta_{1}\, (Age_{it}-85)+\beta_{2}\, Gender_{i}+\beta_{3}\, Educ_{it}+ \\&+&\beta_{4}\, APOE_{22-23}+\beta_{5}\, APOE_{24}+\beta_{6}\, APOE_{34-44}, \\[2mm]
{\rm logit}(\phi_{it})&=&(\gamma_{0}+b_{i2})+\gamma_{1}\, (Age_{it}-85)+\gamma_{2}\, Gender_{i}+\gamma_{3}\, Educ_{it}+\\
&+&\gamma_{4}\, APOE_{22-23}+\gamma_{5}\, APOE_{24}+\gamma_{6}\, APOE_{34-44}.
\end{eqnarray*}
As regards the response variable, the transform $Y_{it} = \log[1+ (30 - \mbox{MMSE}_{it})]$ was adopted as it is nearly optimal in a Box-Cox sense.
Both a parametric and a semi-parametric specification of the random coefficient distribution were considered. In the former case, Gaussian random effects were inserted into the linear predictors for the longitudinal response and the dropout indicator. In the latter case, for each margin, the algorithm was run for a varying number of locations and the solution corresponding to the lowest BIC was retained, leading to $K_1 = 5$ and $K_2 = 3$ components for the longitudinal and the dropout process, respectively. Estimated parameters, together with the corresponding standard errors, are reported in Table \ref{tabmar}.
\begin{table}
\caption{Leiden 85+ Study: MAR models. Maximum likelihood estimates, standard errors, log-likelihood, and BIC value}
\label{tabmar}
\begin{center}
\begin{tabular}{l|l r r r r }
\hline
Process & & \multicolumn{2}{c}{Semi-parametric}& \multicolumn{2}{c}{Parametric} \\
& Variable & Coeff. & Std. Err. & Coeff. & Std. Err. \\ \hline\hline
& \textit{Intercept} & 1.686 & & 1.792 & 0.050 \\
& \textit{Age} & 0.090 & 0.008 & 0.089 & 0.005 \\
& \textit{Gender} & -0.137 & 0.042 & -0.085 & 0.066 \\
& \textit{Educ} & -0.317 & 0.068 & -0.623 & 0.065 \\
Y & $APOE_{22-23}$ & 0.062 & 0.072 & 0.056 & 0.083 \\
& $APOE_{24}$ & -0.105 & 0.062 & 0.096 & 0.211 \\
& $APOE_{34-44}$ & 0.347 & 0.060 & 0.369 & 0.079 \\
& $\sigma_{y}$ & 0.402 & & 0.398 & \\
& $\sigma_{b_{1}}$& 0.696 & & 0.684 &
\\ \hline
& \textit{Intercept} & -11.475 & & -3.877 & 0.520 \\
& \textit{Age} & 2.758 & 0.417 & 0.526 & 0.131 \\
& \textit{Gender} & 0.559 & 0.467 & 0.656 & 0.218 \\
& \textit{Educ} & -2.162 & 0.772 & -0.486 & 0.212 \\
R & $APOE_{22-23}$ & 0.476 & 0.409 & -0.246 & 0.252 \\
& $APOE_{24}$ & -0.026 & 0.939 & 0.131 & 0.618 \\
& $APOE_{34-44}$ & 0.805 & 0.461 & 0.565 & 0.237 \\
& $\sigma_{b_{2}}$& 5.393 & & 1.525 & \\ \hline
& $\log L$ & -2685.32 & & -2732.84 & \\
& BIC & 5534.26 & & 5572.67 & \\ \hline
\end{tabular}
\end{center}
\end{table}
By looking at the results, a few findings are worth discussing. First, the estimates obtained via the parametric and the semi-parametric approach are quite similar for the longitudinal process. That is, $\log\left[1+(30-MMSE)\right]$ increases (and the MMSE decreases) with age. A significant gender effect can be observed, with males being less impaired (on average) than females. Furthermore, a strong protective effect seems to be linked to socio-economic status in early life, as may be deduced from the significant negative effect of higher educational levels. Table \ref{tabmar} also highlights that $APOE_{34-44}$ represents a strong risk factor, with a positive estimate on the adopted response scale and, therefore, a negative effect on the MMSE. Only a few differences emerge when comparing the estimates obtained under the parametric and the semi-parametric approach for the longitudinal data process. In particular, these differences concern the gender effect, which is not significant in the parametric model, and the effect of higher education, which is much stronger under the parametric specification. These differences may possibly be due to the discrete nature of the random effect distribution in the semi-parametric case, which may lead to partial aliasing with the time-constant covariates.
When the dropout process is considered, we may observe that the results are \emph{qualitatively} the same, but the size of parameter estimates is quite different. This could be due, at least partially, to the different scale of the estimated random coefficient distribution, with $\sigma_{b_{2}}=5.393$ and $\sigma_{b_2} = 1.525$ in the semi-parametric and in the parametric model, respectively. As it is clear, in the semi-parametric case, the estimated intercepts are quite higher than those that can be predicted by a Gaussian distribution and this leads to inflated effects for the set of observed covariates as well.
However, the estimated dropout probabilities resulting from the semi-parametric and the parametric models are very close to each other, except for a few extreme cases which are better recovered by the semi-parametric model.
\subsection{The MNAR model}\label{sec_MNARmodel}
To provide further insight into the effect of demographic and genetic factors on the MMSE dynamics, while allowing for the potential non-ignorability of the dropout process, we fitted both a uni-dimensional and a bi-dimensional finite mixture model. For the former approach, we ran the estimation algorithm for $K = 1, \dots, 10$ and retained the optimal solution according to the BIC index.
This corresponds to a model with $K = 5$ components.
Similarly, for the proposed bi-dimensional finite mixture model, we ran the algorithm for $K_1 = 1, \dots, 10$ and $K_2 = 1, \dots, 5$ components and, as before, retained as the optimal solution the one with the lowest BIC, namely $K_1 = 5$ and $K_2=3$ components for the longitudinal and the dropout process, respectively.
This result is clearly coherent with that obtained by marginally modeling the longitudinal response and the dropout indicator.
Parameter estimates and the corresponding standard errors for both model specifications are reported in Table \ref{tabmnar}.
\begin{table}
\caption{Leiden 85+ Study: MNAR models. Maximum likelihood estimates, standard errors, log-likelihood, and BIC value}\label{tabmnar}
\begin{center}
\begin{tabular}{l|l r r r r }
\hline
Process & & \multicolumn{2}{c}{Semipar. ``Uni-dim.''} & \multicolumn{2}{c}{Semipar. ``Bi-dim.''} \\
& Variable & Coeff. & Std. Err. & Coeff. & Std. Err. \\ \hline\hline
& Intercept & 1.682 & & 1.687 & \\
& Age & 0.094 & 0.007 & 0.094 & 0.007 \\
& Gender & -0.129 & 0.048 & -0.135 & 0.039 \\
Y & Educ & -0.31 & 0.051 & -0.317 & 0.050 \\
& APOE$_{22-23}$ & 0.091 & 0.061 & 0.086 & 0.058 \\
& APOE$_{24}$ & -0.098 & 0.055 & -0.099 & 0.056 \\
& APOE$_{34-44}$ & 0.345 & 0.050 & 0.344 & 0.051 \\
& $\sigma_{y}$ & 0.402 & & 0.402 & \\
& $\sigma_{b_{1}}$& 0.701 & & 0.699 & \\
\hline
& Intercept & -3.361 & & -10.767 & \\
& Age & 0.367 & 0.037 & 2.406 & 0.384 \\
& Gender & 0.504 & 0.147 & 1.061 & 0.850 \\
R & Educ & -0.200 & 0.151 & -1.646 & 0.530 \\
& APOE$_{22-23}$ & -0.090 & 0.199 & 0.481 & 1.090 \\
& APOE$_{24}$ & -0.148 & 0.508 & -0.334 & 0.647 \\
& APOE$_{34-44}$ & 0.541 & 0.174 & 1.365 & 0.745 \\
& $\sigma_{b_{2}}$& 0.577 & & 4.891 & \\
& $\sigma_{b_{1},b_{2}}$ & 0.349 & & 0.985 & \\
& $\rho_{b_{1},b_{2}}$ & 0.863 & & 0.288 & \\
\hline \hline
& $\log L$ & -2686.902 & & -2660.391 & \\
& BIC & 5537.433 & & 5534.758 & \\ \hline
\end{tabular}
\end{center}
\end{table}
When looking at the estimated parameters for the longitudinal data process and at their significance (left columns in the table), we may conclude that the estimates are coherent with those obtained in the MAR analysis, with only a small departure for the effects of age and gender. Males and patients with higher education tend to be less cognitively impaired than the rest of the sample, while subjects carrying $\epsilon4$ alleles, that is, in category $APOE_{34-44}$, present a steeper increase in the observed response, i.e. a steeper decline in MMSE values. Focusing on the dropout process, we may observe that age, gender and $APOE_{34-44}$ are all positively related to an increased dropout probability. That is, older men carrying $\epsilon4$ alleles are more likely to leave the study prematurely than younger women carrying $\epsilon3$ alleles.
Comparing the estimates obtained under the uni- and the bi-dimensional finite mixture model, the above results seem to hold regardless of the chosen model specification. The only remarkable difference is in the estimated magnitude of the effects for the dropout process and for the random coefficient distribution. For the bi-dimensional finite mixture model, we may observe a stronger impact of the covariates on the dropout probability. However, as for the MAR model described in section \ref{sec_MARmodel}, this result is likely due to the estimated scale, with an intercept value that is much lower under the bi-dimensional specification than under the uni-dimensional one.
Further, under the uni-dimensional model specification, the Gaussian process for the longitudinal response may have a much higher impact on the likelihood function than the Bernoulli process for the dropout indicator. As a result, the estimates of the component-specific locations and of the corresponding variability in the dropout equation differ substantially between the uni-dimensional and the bi-dimensional model. In the uni-dimensional model, the estimated correlation is quite high due to the reduced variability of the random coefficients in the dropout equation, while it is substantially lower in the bi-dimensional case.
We also report in Table \ref{tabprob} the estimated random intercepts for the longitudinal and the dropout process, together with the corresponding conditional distribution, i.e. $\pi_{\ell \mid g} = \Pr(b_{i2}=\zeta_{2\ell} \mid b_{i1}=\zeta_{1g})$. When focusing on the estimated locations in the longitudinal data process, that is $\zeta_{1g}$, we may observe higher cognitive impairment when moving from the first to the last mixture component. On the other hand, for the dropout process, the estimated locations $\zeta_{2\ell}$ suggest that higher components correspond to a higher chance of dropping out of the study.
When looking at the estimated conditional probabilities, we may observe a link between lower (higher) values of $\zeta_{1g}$ and lower (higher) values of $\zeta_{2\ell}$. That is, participants with better cognitive functioning (i.e. with lower response values) are usually characterized by a lower probability of dropping out of the study. On the contrary, cognitively impaired participants (i.e. with higher response values) present a higher chance of dropping out prematurely, even if there is still some overlap between the second and the third component of the dropout profile.
\begin{table}[h!]
\caption{Maximum likelihood estimates and conditional distribution for the random parameters}\label{tabprob}
\centering
\begin{tabular}{l | c c c | c}
& \multicolumn{3}{c}{$\zeta_{2\ell}$} & \\ \hline
$\zeta_{1g}$ & -15.053 & -8.701 & -3.378 & \\ \hline\hline
0.519 & 0.865 & 0.090 & 0.045 & 1 \\
1.065 & 0.585 & 0.170 & 0.245 & 1 \\
1.681 & 0.573 & 0.227 & 0.199 & 1 \\
2.297 & 0.467 & 0.289 & 0.244 & 1 \\
2.905 & 0.144 & 0.364 & 0.492 & 1 \\ \hline
Tot. & 0.528 & 0.229 & 0.243 & 1
\end{tabular}
\end{table}
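As a minimal sketch of how the quantities in Table \ref{tabprob} relate to each other, the conditional probabilities $\pi_{\ell \mid g}$ and the marginal ``Tot.'' row follow from the joint probability matrix over the $K_1 \times K_2$ support points by simple normalisation; the joint matrix below is illustrative, not the fitted one.
\begin{verbatim}
import numpy as np

# Illustrative joint matrix (rows: zeta_{1g}; columns: zeta_{2l});
# these are NOT the fitted values.
pi_joint = np.array([[0.17, 0.02, 0.01],
                     [0.12, 0.03, 0.05],
                     [0.11, 0.05, 0.04],
                     [0.09, 0.06, 0.05],
                     [0.03, 0.07, 0.10]])

pi_g = pi_joint.sum(axis=1)                   # marginal over zeta_{1g}
pi_l_given_g = pi_joint / pi_g[:, None]       # rows sum to one, as in the table
pi_l = pi_joint.sum(axis=0) / pi_joint.sum()  # the marginal "Tot." row
\end{verbatim}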
Looking at the parameter estimates obtained through the MNAR model approach, we may observe a certain degree of correlation between the random effects in the two equations. This suggests a potentially non-ignorable dropout process affecting the longitudinal outcome. However, such an influence cannot be formally tested, as we may fit the proposed model to the observed data only and derive estimates on the basis of strong assumptions on the behavior of the missing responses. Therefore, it is of interest to verify how assumptions on the missing data mechanism influence parameter estimates.
\subsection{Sensitivity analysis: results}\label{sec_isni}
To investigate the robustness of inference with respect to the assumptions on the missing data mechanism, we computed the matrix $ISNI_{\boldsymbol\Phi}$ according to the formulas in equation \eqref{eq:ISNIbase}.
For each model parameter estimate $\hat \Phi_d$, we derived global measures of its sensitivity to the MAR assumption by computing the norm, the minimum and the maximum of $\lvert ISNI_{\hat \Phi_d}\rvert$, and their ratios to the corresponding standard error estimates from the MAR model.
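The summaries reported in Table \ref{tabISNI} can be sketched as follows; the ISNI matrix and standard errors are placeholders, as the sketch is only meant to show how each column of the table is derived.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
isni = rng.normal(scale=0.01, size=(12, 8))  # rows: parameters, cols: lambda_{gl}
se = np.full(12, 0.07)                       # MAR standard errors (placeholder)

norm_isni = np.linalg.norm(isni, axis=1)     # column "norm(ISNI)"
min_abs = np.abs(isni).min(axis=1)           # column "min|ISNI|"
max_abs = np.abs(isni).max(axis=1)           # column "max|ISNI|"
standardised = np.column_stack(
    [norm_isni / se, min_abs / se, max_abs / se])
\end{verbatim}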
\begin{table}[h!]
\caption{MAR model estimates: ISNI norm, minimum and maximum (in absolute values), and ratio to the corresponding standard error.}\label{tabISNI}
\centering
\scalebox{0.85}{
\begin{tabular}{l | c c c c c c c}
\hline
Variable & se & norm$(ISNI)$ & norm$(ISNI)$/se & min$\lvert ISNI\rvert$ & min$\lvert ISNI\rvert$/se & max$\lvert ISNI\rvert$ & max$\lvert ISNI\rvert$/se \\ \hline
$\boldsymbol{\zeta}_{11}$ & 0.117 & 0.0414 & 0.354 & 0.0014 & 0.012 & 0.0204 & 0.174 \\
$\boldsymbol{\zeta}_{12}$ & 0.074 & 0.0580 & 0.784 & 0.0016 & 0.022 & 0.0303 & 0.409 \\
$\boldsymbol{\zeta}_{13}$ & 0.074 & 0.044 & 0.595 & 0.0002 & 0.003 & 0.0255 & 0.345 \\
$\boldsymbol{\zeta}_{14}$ & 0.083 & 0.1044 & 1.258 & 0.0005 & 0.006 & 0.0527 & 0.635 \\
$\boldsymbol{\zeta}_{15}$ & 0.071 & 0.0088 & 0.124 & 0.0009 & 0.013 & 0.0045 & 0.063 \\
\textit{Age} & 0.008 & 0.0089 & 1.113 & 0.0001 & 0.013 & 0.0054 & 0.675 \\
\textit{Gender} & 0.042 & 0.0058 & 0.138 & 0.0003 & 0.007 & 0.0028 & 0.067 \\
\textit{Educ} & 0.068 & 0.0075 & 0.110 & 0.0001 & 0.001 & 0.004 & 0.059 \\
$APOE_{22-23}$ & 0.072 & 0.0111 & 0.154 & 0.0001 & 0.001 & 0.0074 & 0.103 \\
$APOE_{24}$ & 0.062 & 0.0123 & 0.198 & 0.0005 & 0.008 & 0.0051 & 0.082 \\
$APOE_{34-44}$ & 0.06 & 0.012 & 0.200 & 0.0009 & 0.015 & 0.0061 & 0.102 \\
$\sigma_{y}$ & 0.194 & 0.1123 & 0.579 & 0.0001 & 0.001 & 0.0824 & 0.425 \\
\hline
\end{tabular}
}
\end{table}
By looking at the results reported in Table \ref{tabISNI}, we may observe that, as far as fixed model parameters are concerned, the global indexes computed to investigate how estimates vary when moving away from the MAR assumption are all quite close to zero. The only remarkable exception is the \textit{age} variable: in this case, the \textit{ISNI} takes slightly higher values, and this is particularly evident when focusing on the standardized statistics. Higher \textit{ISNI} values may also be observed for the random intercepts. However, this is an expected result, since these parameters are (indirectly) connected to the missingness process.
To further study the potential impact that assumptions on the missing data generating mechanism may have on the parameters of interest, we may analyze how changes in the $\boldsymbol{\lambda}$ parameters affect the vector $\hat{\boldsymbol\Phi}$. In this respect, we considered the following two scenarios.
\begin{itemize}
\item[Scenario 1]
We simulated $B=1000$ values for each element in $\boldsymbol{\lambda}$ from a uniform distribution, $\lambda_{g \ell}(b) \sim {\rm U}(-3,3)$ for $g=1,\dots,K_{1}-1$ and $\ell=1,\dots,K_{2}-1$. Then, based on the simulated values, we computed
\[
\hat{\boldsymbol\Phi}(b)=\hat{\boldsymbol\Phi}({\bf 0})+ISNI_{\boldsymbol\Phi}\,\boldsymbol{\lambda}(b).
\]
\item[Scenario 2]
We simulated $B=1000$ values for a scale constant $c$ from a uniform distribution, $c(b) \sim {\rm U}(-3,3)$. Then, based on the simulated values, we computed
\[
\xi_{g \ell}(b) =\xi_{g \ell}({\bf 0}) + c(b)\, \hat{\lambda}_{g \ell}, \qquad g=1,\dots,K_{1}-1, \quad \ell=1,\dots,K_{2}-1,
\]
where $\hat{\lambda}_{g\ell}$ denotes the maximum likelihood estimate of $\lambda_{g\ell}$ under the MNAR model. This scenario allows us to consider perturbations of the component-specific masses while preserving the overall dependence structure estimated through the proposed MNAR model. That is, it allows us to link changes in the longitudinal model parameters to increasing (respectively decreasing) correlation between the random coefficients in the two profiles of interest; a code sketch of both perturbation schemes is given after this list. The corresponding (approximate) parameter estimates are computed as
\[
\hat{\boldsymbol\Phi}(b)=\hat{\boldsymbol\Phi}({\bf 0})+ISNI_{\boldsymbol\Phi}\,\boldsymbol{\lambda}(b),
\]
where $\boldsymbol{\lambda}(b)=c(b) \hat{\boldsymbol{\lambda}}$.
\end{itemize}
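The following Python sketch summarises both perturbation schemes; all arrays (the ISNI matrix, the MAR estimates, and $\hat{\boldsymbol{\lambda}}$) are placeholders standing in for the fitted quantities.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
d, q, B = 12, 8, 1000                 # parameters, lambda elements, replicates
isni_phi = rng.normal(scale=0.01, size=(d, q))  # placeholder ISNI matrix
phi_hat0 = rng.normal(size=d)                   # MAR estimates (placeholder)
lam_hat = rng.normal(size=q)                    # MNAR estimate of lambda

# Scenario 1: independent U(-3, 3) perturbations of each lambda_{gl}
lam1 = rng.uniform(-3, 3, size=(B, q))
phi1 = phi_hat0 + lam1 @ isni_phi.T             # B first-order approximations

# Scenario 2: rescale the estimated lambda by a uniform constant c(b)
c = rng.uniform(-3, 3, size=(B, 1))
phi2 = phi_hat0 + (c * lam_hat) @ isni_phi.T
\end{verbatim}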
The first scenario is designed to study the general sensitivity of the parameter estimates, that is, to analyze how they vary under random changes of $\boldsymbol{\lambda}$ in any direction. The second scenario starts from the estimated pattern of dependence between the random intercepts in the longitudinal and the missing data models, and aims to gain insight into the changes in parameter estimates that would be observed if the correlation increased (in absolute value) with respect to the estimated one.
Figures \ref{FigureScenario1} and \ref{FigureScenario2} report the parameter estimates derived under Scenario 1 and Scenario 2, respectively. The red line and the grey bands in each graph correspond to the point and $95\%$ interval estimates of the model parameters under the MAR assumption.
Focusing on Figure \ref{FigureScenario1} (Scenario 1), it can be easily observed that only the parameter associated with the \textit{age} variable is slightly sensitive to changes in the assumptions about the ignorability of the dropout process.
All the other estimates remain quite constant and, overall, within the corresponding $95\%$ MAR confidence interval.
No particular pattern of dependence/correlation between the random coefficients can be linked to points outside the interval for the estimated effect of \textit{age}.
Rather, we observed that strong \emph{local} changes in the random coefficient probability matrix may cause positive (respectively negative) changes in the \textit{age} effect; in particular, changes in the upper-left or intermediate-right components, that is, components with low values of both random coefficients (first case) or with high values of $\zeta_{2\ell}$ and intermediate values of $\zeta_{1g}$ (second case).
Overall, the relative frequency of points within the corresponding MAR confidence interval is equal to $0.737$, which suggests a certain sensitivity to assumptions regarding the ignorability of the dropout process, even though the estimates always remain within a reasonable set.
\begin{figure}
\caption{Leiden 85+ Study: Sensitivity analysis according to Scenario 1}
\centerline{\includegraphics[scale=0.55]{Sensitivity.pdf}}
\label{FigureScenario1}
\end{figure}
\begin{figure}
\caption{Leiden 85+ Study: Sensitivity analysis according to Scenario 2}
\centerline{\includegraphics[scale=0.55]{Sensitivity2.pdf}}
\label{FigureScenario2}
\end{figure}
When focusing on Figure \ref{FigureScenario2} (Scenario 2), we may observe that changes in the parameter estimates are more clearly linked to the correlation between the random effects in the two profiles. As in the former scenario, a slight sensitivity to departures from the MAR assumption is observed for the \textit{age} variable only.
In this case, the relative frequency of points within the corresponding MAR confidence interval for the \textit{age} effect is equal to $0.851$, suggesting a lower sensitivity to assumptions on the ignorability of the dropout process than under Scenario 1. High positive correlation between the random coefficients leads to MAR estimates that are lower than the corresponding MNAR counterparts, while high negative correlation leads to MAR estimates that tend to be higher than the MNAR counterparts.
The proposed approach for sensitivity analysis can be seen as a particular version of the local influence diagnostics developed in the context of regression models to check for influential observations via perturbations of individual-specific weights; see e.g. \cite{Jans2003} and \cite{rakh2016,rakh2017} for more recent developments. Here, rather than perturbing individual observations, we perturb the weights associated with the group of subjects allocated to a given component. Obviously, a \emph{global} influence approach could be adopted as well, for example by looking at the \emph{mean score} approach detailed in \cite{whit2017}.
\section{Conclusions}
\label{sec:8}
We defined a random coefficient based dropout model where the association between the longitudinal and the dropout process is modeled through discrete, outcome-specific, latent effects. A bi-dimensional representation for the random coefficient distribution was used, allowing a (possibly) different number of locations in each margin, with a full probability matrix connecting the locations in one margin to those in the other. The main advantage of this flexible representation is that the resulting MNAR model properly nests the corresponding ignorable (MAR) model. This allows us to perform a (local) sensitivity analysis, based on the ISNI index, to check how model parameter estimates change as we move away from the MAR assumption.
The data application showed good robustness of all model parameter estimates. A slight sensitivity to assumptions on the missing data generating mechanism was only observed for the \textit{age} effect, which, however, always remains within a reasonable set.
\section*{Acknowledgement}
We gratefully acknowledge Dr Ton de Craen and Dr Rudi Westendorp of the Leiden University Medical Centre for kindly providing the analyzed data.
\section{Introduction}
\label{intro}
Feedback from high-mass stars (i.e. OB stars with M$_{\star}$ $\geq$ 8 M$_{\sun}$) is fundamental to the shaping of the visible Universe. From the moment star formation begins, stellar feedback commences, injecting energy and momentum into the natal environment. This feedback can both hinder and facilitate star formation; negative feedback restrains or can even terminate star formation, whereas positive feedback acts to increase the star formation rate and/or efficiency. Many different physical mechanisms contribute to feedback by varying degrees, each depending on a variety of factors (e.g. initial conditions), resulting in an intricate and interdependent series of processes. In a recent review on this topic, \citet{2014prpl.conf..243K} groups feedback processes into three main categories: momentum feedback (e.g. protostellar outflows and radiation pressure); ``explosive'' feedback (e.g. stellar winds, photoionising radiation, and supernovae); and thermal feedback (e.g. non-ionising radiation).
Stellar feedback encompasses many astrophysical processes, moderating star formation from stellar scales ($\ll$ 1 pc) to cosmological kpc-scales (e.g. driving Galactic outflows; \citealt{2011ApJ...735...66M,2016MNRAS.456.3432G}).
Despite our growing knowledge of these processes, the overarching interplay between them remains uncertain. Observationally, limited spatial resolution makes it difficult to disentangle the effects of feedback mechanisms which operate simultaneously. Other additional factors, such as the role of magnetic fields (for which the strength and orientation are difficult to measure) and feedback from surrounding low-mass stars, complicate the process further. Moreover, limited observations of the earliest stages of high-mass star formation means that the large samples needed for a robust statistical analysis are lacking. With observatories like ALMA and the EVLA, which have sufficient angular resolution to resolve and detect individually forming high-mass stars, our understanding is continually improving.
Meanwhile, in the past few decades, there have been considerable efforts attempting to simulate the vast range of stellar feedback effects. It has been clearly demonstrated that without feedback, simulations fail to replicate the galaxies that we observe in the Universe today (e.g. \citealt{1996ApJS..105...19K,1999MNRAS.310.1087S,2000MNRAS.319..168C,2003MNRAS.339..312S,2009MNRAS.396.2332K,2011MNRAS.413.2741G,2012ARA&A..50..531K,2012MNRAS.423.1726S,2014MNRAS.445..581H,2017MNRAS.466.3293P}) and often produce galaxies that are much more massive than observed.
Consequently, simulations have looked to feedback for answers, with promising results. Yet it remains a great challenge to create a model which includes all feedback processes over a vast range of scales. Often only one or two types of feedback are included (e.g. \citealt{2010ApJ...713.1120K,2011ApJ...735...49M,2013MNRAS.430..234D,2013ApJ...770...25A,2013ApJ...776....1K,2014ApJ...788...14P,2015ApJ...801...33T,2015MNRAS.454.2691M,2016ApJ...824...79A,2017ApJ...841...82B,2017ApJ...836..204N}).
In order to improve simulations and implement more feedback effects, better understanding of the relevant physical processes is needed. This will help to provide the observational constraints needed for parameterising the simulations.
\subsection
[{H II regions}]
{H\,{\sevensize II} regions}
The study of \ion{H}{II} regions can allow us to explore how high-mass stars impact their environment via the aforementioned feedback mechanisms. \ion{H}{II} regions are bright in the radio regime, particularly with radio recombination lines (RRLs) and thermal bremsstrahlung, both clear diagnostics of high-mass star formation. See \citet{2017arXiv171105275H} for a review on synthetic observation studies, particularly regarding feedback and the global structure of \ion{H}{II} regions.
Predominantly, the study of \ion{H}{II} regions has focused on surveys examining morphologies, sizes and densities (e.g. \citealt{2006AJ....131.2525H,2012PASP..124..939H,2013MNRAS.431.1752U,2013MNRAS.435..400U,2017A&A...602A..37K,2017A&A...603A..33G}).
In terms of morphology, ultracompact (UC; $\leq 0.1$ pc) \ion{H}{II} regions can be categorised as either spherical, cometary, core-halo, shell, or irregular \citep{1989ApJS...69..831W}. \citet{2005ApJ...624L.101D} modified the classification scheme to also include bipolar morphologies.
\citet{1989ApJS...69..831W} found that too many \ion{UCH}{II} regions are observed given their short apparent lifetimes; this discrepancy is known as the `lifetime problem'.
\citet{2010ApJ...719..831P} propose a solution based on their synthetic radio continuum observations of young high-mass star formation regions. They found that \ion{H}{II} regions `flicker' as they grow, a result of a fluctuating accretion flow around the high-mass star (fragmentation-induced starvation; \citealt{2010ApJ...711.1017P}, hereafter \citetalias{2010ApJ...711.1017P}). This is a possible resolution to the lifetime problem since the young \ion{H}{II} regions shrink and grow rapidly as they evolve. Short (several year) variations in the flux density of the high-mass star forming region Sgr B2 have been observed \citep{2014ApJ...781L..36D,2015ApJ...815..123D}, which the authors attribute to this `flickering'.
There have also been detailed studies on the kinematics of \ion{H}{II} regions on (proto)stellar scales ($\lesssim$ 10,000 AU). This includes the accretion of ionised material onto forming high-mass stars (e.g. \citealt{1988ApJ...324..920K,2008ApJ...678L.109K,2005ApJ...624L..49S,2006ApJ...637..850K,2002ApJ...568..754K,2003ApJ...599.1196K,2007ApJ...666..976K,2008ApJ...674L..33G,2017arXiv171204735K}); the gravitational collapse and rotation of turbulent molecular clouds (e.g. \citealt{2000ApJ...535..887K,2009ApJ...703.1308K}); ionised outflows (e.g. \citealt{1994ApJ...428..670D,2013A&A...556A.107K,2016ApJ...818...52T}); and the rotation of ionised gas on stellar scales (e.g. \citealt{1994ApJ...428..324R,2008ApJ...681..350S}).
However, to date, fewer studies have been devoted to measuring the ionised gas kinematics of \ion{H}{II} regions on cloud scales ($\sim$ 0.1 pc), pertinent to understanding the effect of feedback of high-mass stars on their natal clouds. Most studies have focused on understanding the kinematics of cometary \ion{H}{II} regions in order to deduce which model (e.g. bow shock or champagne) applies. This is often done via the analysis of velocity ranges or gradients of the ionised gas (e.g. \citealt{1996ApJ...464..272L,1999MNRAS.305..701L,2017MNRAS.465.4219V}).
Intriguingly, G34.3+0.2, G45.07+0.13 and Sgr B2 I and H all show ranges in velocity (from 10-35 km s$^{-1}$), perpendicular to the axis of symmetry of the cometary \ion{H}{II} region \citep{1986ApJ...309..553G,1990ApJ...351..538G,1994ApJ...432..648G,2014A&A...563A..39I}.
Kinematic studies of \ion{H}{II} regions across all morphological types could present new interpretations on our understanding of feedback. For example, bipolar \ion{H}{II} regions could provide an insight to early feedback, since they typically exist at earlier evolutionary stages when ionisation has only just begun (e.g. \citealt{2010ApJ...721..222B}). Newly ionised material flows outwards with velocities up to 30 km s$^{-1}$ \citep{2015A&A...582A...1D} and neutral material (usually in the form of a molecular disc) lies perpendicular to the outflows, often showing signs of accretion towards a central (proto)star. When viewed approximately edge-on, the \ion{H}{II} region appears as bipolar. Velocity gradients within the ionised gas will typically correspond to infall, outflow, rotation or a combination, which will be influenced by the viewing angle. This can have different implications for feedback, depending on which motion truly occurs.
In this paper, we present observations of a young, bipolar \ion{H}{II} region, G316.81--0.06. Our results show a velocity gradient in the ionised gas at 0.1 pc scales, perpendicular to the bipolar axis. In conjunction with the \citetalias{2010ApJ...711.1017P} simulations, we aim to understand the origin of the velocity structure in the ionised gas and its relation to feedback.
Section \ref{data} describes the observations and simulations in more detail, followed by the data analysis in \S\, \ref{method}. The results and discussion are in \S\S\, \ref{results} and \ref{dis}, concluding with a summary in \S\, \ref{summary}.
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{fig1_3}
\caption{Multi-wavelength images of G316.81--0.06. Top and bottom-right: \textit{Spitzer} GLIMPSE/MIPSGAL image in the 3.6 and 8.0 micron IRAC bands and the 24.0 micron MIPS band \citep{2003PASP..115..953B,2009PASP..121..213C,2009PASP..121...76C,2015AJ....149...64G,2012ascl.soft06002L}. The dashed black arrows indicate an outflow in the direction of the two north-south MIR bubbles.
Bottom-left: GLIMPSE image in 3.6, 4.5, and 8.0 micron IRAC bands \citep{2003PASP..115..953B,2009PASP..121..213C}. Red and green contours show the 35-GHz continuum \citep{2009MNRAS.399..861L} and integrated NH$_3$(1,1) \citep{1997MNRAS.291..261W} respectively. Two separate \ion{H}{II} regions are labelled Region 1 and 2 accordingly. Masers from \citet{2010MNRAS.406.1487B} are depicted as circles: blue (hydroxyl); black (water); purple (methanol). Dashed white arrows indicate the direction of an ionised outflow, aligned with the 35-GHz continuum of Region 1 and the ``green fuzzy''.}
\label{fig:image}
\end{figure*}
\section{Data}
\label{data}
\subsection{Observations}
\label{sec:obs}
Figure \ref{fig:image} illustrates multi-wavelength images of G316.81--0.06, located 2.6 kpc away in the Galactic Disc (\citealt{2011MNRAS.417.2500G}; note that this is a newer distance estimate as opposed to the measurement of 2.7 kpc used in previous literature). Various authors have discussed the kinematic distance ambiguity in relation to this source \citep{1981A&A...102..225S,2006MNRAS.366.1096B,2014A&A...569A.125H}, and conclude it is at the near kinematic distance.
The top infrared (IR) image of Figure \ref{fig:image}
is a \textit{Spitzer} GLIMPSE/MIPSGAL image in the 3.6 and 8.0 micron IRAC bands and the 24.0 micron MIPS band \citep{2003PASP..115..953B,2009PASP..121..213C,2009PASP..121...76C,2015AJ....149...64G,2012ascl.soft06002L}. On the bottom-right, a close-up of G316.81--0.06 from the same GLIMPSE/MIPSGAL image is shown.
The region is enlarged further in the bottom-left; a mid-infrared (MIR; 3.6, 4.5, and 8.0 microns) GLIMPSE image.
On large scales, strong absorption features run roughly SE--NW in both IR images, i.e. infrared dark clouds (IRDCs; e.g. \citealt{1998ApJ...494L.199E}).
Emission features (MIR bright bubbles) are seen to the north and south, with one distinct and bright MIR central source found at the apex of these two bubbles.
Using the \textit{Australia Telescope Compact Array (ATCA)}, two radio continuum sources classified as \ion{UCH}{ii} regions in \citet{1997MNRAS.291..261W,1998MNRAS.301..640W} are overlaid with red contours (bottom-left image) showing the 35-GHz continuum \citep{2009MNRAS.399..861L}. The left-hand source (Region 1) shows two distinct lobes elongated roughly NE-SW. The continuum data were taken in addition to the H70$\upalpha$ RRL with a compact antenna configuration (12.5$\arcsec$ angular resolution) and thus spatial filtering is not a major issue (see \citealt{2009MNRAS.399..861L} for further details).
Region 1 has many more significant features. Numerous masers (hydroxyl, class II methanol, and water) have been detected, see Appendix \ref{tab:masers} for a complete list. For clarity, only the masers listed by \citet{2010MNRAS.406.1487B} are marked (bottom-left of Figure \ref{fig:image}): blue, purple, and black circles are hydroxyl, class II methanol, and water masers respectively.
Ammonia emission (green contours; \citealt{1997MNRAS.291..261W}) coincides with the three masers, and NH$_3$ (1,1) shows a clear inverse P-Cygni profile towards the cm-continuum source which extends eastwards from Region 1 and peaks towards the IRDC \citep{2007MNRAS.379..535L}.
Other features include 4.5 $\umu$m excess emission, i.e. a ``green fuzzy" (otherwise known as an extended green object, EGO; \citealt{2008AJ....136.2391C}) in the MIR (bottom-left of Figure \ref{fig:image}; \citealt{2007prpl.conf..165B,2009ApJS..184..366B}).
Overall, G316.81--0.06 is a very complex region, affected by contributions from multiple feedback mechanisms.
We interpret the aforementioned features as follows:
(a) Two MIR bright sources are two separate \ion{H}{II} regions (Regions 1 and 2) -- formed from the IRDC filament -- which drive the MIR bubbles. It appears as though these cavities have been driven by an older outflow (indicated by the black dashed arrows; bottom-right of Figure \ref{fig:image}) in a north-south direction, perpendicular to the elongated ammonia emission.
(b) Masers indicate youth (class II 6.7-GHz maser emission suggests a possible age of 10-45 kyr; \citealt{2010MNRAS.401.2219B}).
(c) The inverse P-Cygni profile implies infall towards Region 1.
(d) We infer a more recent outflow from the presence of the ``green fuzzy" \citep{2009ApJS..181..360C}. In combination with the elongated 35-GHz continuum, the outflow appears to be bipolar, possibly in the form of an ionised jet (indicated by the white dashed arrows; bottom-left of Figure \ref{fig:image}).
In contrast, Region 2 lacks masers and ammonia emission. This implies that Region 2 is older than Region 1, as concluded by \citet{2007MNRAS.379..535L}.
\begin{figure*}
\centering
\includegraphics[width=0.495\textwidth]{numdens_new_0002}
\includegraphics[width=0.495\textwidth]{numdens_new_0009}
\includegraphics[width=0.495\textwidth]{numdens_new_0075}
\includegraphics[width=0.495\textwidth]{numdens_new_0127}
\caption{Snapshot density slices through the simulations of \citetalias{2010ApJ...711.1017P} depicting the stages prior to the formation of an \ion{H}{II} region in the xy-plane. The time-steps shown reflect four initial evolutionary stages of the simulation occurring at 614.0, 624.3, 652.7, and 668.2 kyr. The arrows are velocity vectors and the white points are sink particles.}
\label{bubbles}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.495\textwidth]{numdens_new_0330}
\includegraphics[width=0.495\textwidth]{numdens_new_0360}
\includegraphics[width=0.495\textwidth]{numdens_new_0383}
\caption{Snapshot density slices through the simulations of \citetalias{2010ApJ...711.1017P} after the formation of an \ion{H}{II} region in the xy-plane. The time-steps reflect later evolutionary stages which occur at 730.4, 739.2, and 746.3 kyr. The arrows depict velocity vectors and the white points are sink particles. The thin white border marks the boundary of 90\% ionisation fraction.}
\label{finalbubbles}
\end{figure}
\subsection{Numerical Simulations}
\label{sims}
To help interpret our data we looked for numerical simulations of young \ion{H}{II} regions which match Region 1 as closely as possible. As described below, we identified the simulations of \citetalias{2010ApJ...711.1017P} as having similar global properties to G316.81--0.06, so we focus on comparing our data to these simulations. The \citetalias{2010ApJ...711.1017P} simulations have not been fine-tuned to the observations.
Given our limited knowledge of the G316.81--0.06 region's history, and with only a single observational snapshot of the region's evolution, it is impossible to know how closely the initial conditions of the simulations are matched to the progenitor gas cloud of G316.81--0.06. For that reason our approach is to try and identify general trends in the evolution of the simulations in the hope that the underlying physical mechanisms driving this evolution will be applicable to the largest number of real \ion{H}{II} regions. We avoid focusing on detailed comparison of observations to individual (hyper-/ultra-compact) \ion{H}{II} regions around forming stars in the simulation, for which the evolution is much more stochastic.
Whilst the comparisons between our observations and the simulations of \citetalias{2010ApJ...711.1017P} provide a good foundation for further analysis, it would be beneficial to make additional comparisons with a suite of simulations. Such simulations could be fine-tuned to match the observations and model the formation and evolution of \ion{H}{II} regions with the same global properties and in the same environment, with a range of different initial conditions. However, this is outside the scope of the paper.
The hydrodynamical simulations of \citetalias{2010ApJ...711.1017P} describe the gravitational collapse of a rotating molecular gas cloud to form a cluster of massive stars. The 3D model takes into account heating by ionising and non-ionising radiation using an adapted \textsc{flash} code \citep{2000ApJS..131..273F}.
The synthetic RRL maps of the simulation data are produced using \textsc{radmc-3d} \citep{2012ascl.soft02015D} as described in \citet{2012MNRAS.425.2352P}. Both local thermodynamic equilibrium (LTE) and non-LTE simulations of H70$\upalpha$ emission are run for a total of $0.75$ Myr. We use RRL data corresponding to 715.3, 724.7, 730.4, 739.2, and 746.3 kyr, by which times an ionised bubble has already emerged.
Summarising \citet{2010ApJ...711.1017P,2010ApJ...725..134P}, the simulated box has a diameter of $3.89$ pc with a resolution of $98$ AU. The initial cloud mass is $1000$ M$_{\sun}$, with an initial temperature of $30$ K, and core density $3.85 \times 10^{3}$ cm$^{-3}$. Beyond the flat inner region of the cloud ($0.5$ pc radius), the density drops as $r^{-3/2}$. The initial velocities are pure solid-body rotation without turbulence, with angular velocity $1.5 \times 10^{-14}$ s$^{-1}$, and a ratio of rotational energy to gravitational energy, $\beta = 0.05$.
Sink particles (of radius 590 AU) form when the local density exceeds the critical density, $\rho_{\text{crit}} = 2.12 \times 10^{8}$ cm$^{-3}$, and the region within the sink radius, $r_{\text{sink}} = 590$ AU, is gravitationally bound and collapsing. The sink particles accrete overdense gas that is gravitationally bound, above the threshold density, and within an accretion radius. The accretion rate varies with time and is different for each sink particle.
Within the first 10$^5$ years after the formation of the first sink particle, the original star has accreted 8 M$_{\sun}$ and many new sink particles have formed. Over the next $3 \times 10^{5}$ yr, the initial three sink particles reach masses of $10$-$20$ M$_{\sun}$, and no star exceeds a mass of $25$ M$_{\sun}$ overall.
Figures \ref{bubbles} and \ref{finalbubbles} show density slices of the simulation, for the last $100$ kyr. The vectors indicate velocity and the white points represent sink particles. Figure \ref{bubbles} shows four snapshots equivalent to the initial evolutionary stages before the \ion{H}{ii} region forms, occurring at $614.0$, $624.3$, $652.7$ and $668.2$ kyr. Initially, the cloud looks square as a consequence of not including turbulence in the initial conditions and the use of a grid-based code. The central rarefaction, and surrounding dense, ring-like structure may be a result of the cloud undergoing a rotational bounce (i.e. when the core --- formed after the collapse of a rotating cloud --- continually accretes from the envelope and then expands due to rotation and the increased gas pressure gradient, resulting in a ring at the cloud's centre; \citealt{2003MNRAS.340...91C}).
Figure \ref{finalbubbles} shows the snapshots at the final stages of the run after the \ion{H}{ii} region has formed, at $730.4$, $739.2$, and $746.3$ kyr. The thin white border encloses a region that has surpassed a 90\% ionisation fraction.
\subsection{Observations and Simulations Compared}
It is difficult to make an exact comparison between the observations and simulations, since we cannot observe the gas cloud of G316.81--0.06 at its initial stages.
\citet{1996A&AS..118..191J} calculated the density and cloud mass of G316.81--0.06 in their multi-transition CS study, finding a mass of 1060 M$_{\sun}$ and number density $10^{4}$ cm$^{-3}$ which is in excellent agreement with the simulations\footnote{The author also identifies a velocity gradient across the CS core. Unfortunately, the value is not specified.}.
We can also estimate the mass and size of the region from the IRDC.
\citet{2017MNRAS.470.1462L} calculated the mass of the IRDC in G316.752+0.004, a region which encompasses G316.81--0.06. They found a mass of $1.5 \times 10^4$ M$_{\sun}$; however, their distance to the IRDC is highly uncertain. Of the two distances they derive, they adopt the farther distance of $9.8$ kpc, as opposed to the nearer distance of $2.6$ kpc (which is also the distance used here). Using the nearer distance estimate, the mass of the IRDC is $\sim1150$ M$_{\sun}$, which is also in agreement with the initial molecular mass of the simulated cloud of \citetalias{2010ApJ...711.1017P} ($1000$ M$_{\sun}$).
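The rescaling follows from the $d^{2}$ dependence of flux-derived masses:
\[
M_{2.6\,\rm kpc} \approx 1.5\times10^{4}\,{\rm M}_{\sun} \left(\frac{2.6\,\rm kpc}{9.8\,\rm kpc}\right)^{2} \approx 1.1\times10^{3}\,{\rm M}_{\sun}.
\]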
Assuming a distance of $2.6$ kpc, the masses of the observed and simulated clouds are similar.
The sizes of the observed and simulated regions are also similar. The area encompassing both \ion{H}{ii} regions within G316.81--0.06 is $\sim0.9$ pc in diameter, although we realise that the IRDC from which the \ion{H}{ii} regions formed is certainly larger than this. The initial central condensed structure at 500 kyr of the simulations is $1.3$ pc in diameter.
From this, we infer that the density of the observations and simulations will also be on the same order of magnitude, in agreement with the aforementioned result of \citet{1996A&AS..118..191J}. Given the similarity between the mass and size of the \ion{H}{ii} regions we conclude that it is reasonable to compare the observations to the simulations (bearing in mind the caveats in \S\, \ref{sims}).
\section{Data Analysis}
\label{method}
The data analysis was performed using the Common Astronomy Software Applications (\textsc{casa}; \citealt{2007ASPC..376..127M}) package and the Semi-automated multi-COmponent Universal Spectral-line fitting Engine (\textsc{scouse}; \citealt{2016MNRAS.457.2675H}). \textsc{casa} was used to calculate 2$^\text{nd}$ moment maps (velocity dispersion, $\upsigma$), and Gaussians were fit to the spectra using \textsc{scouse} in order to determine centroid velocity (v$_0$).
The input parameters for \textsc{scouse} fitting are found in Table \ref{tab:scouse}, with parameter names following \citet{2016MNRAS.457.2675H}.
\begin{table}
\caption{\textsc{scouse} input. Parameter names \newline according to \citet{2016MNRAS.457.2675H}.}
\label{tab:scouse}
\begin{tabular}{lcc}
\hline
Parameter & Observations & Simulations \\
\hline
$R_{\text{SAA}}$ & 0.$\degr$001 & 0.$\degr$0003 \\
RMS (K) & 0.02 & 0.06 \\
$\sigma_{\text{rms}}$ (K) & 3.0 & 3.0 \\
$T_1$ & 5.0 & 5.0 \\
$T_2$ & 2.5 & 2.5 \\
$T_3$ & 2.5 & 1.7 \\
$T_4$ & 1.0 & 1.0 \\
$T_5$ & 0.5 & 0.5\\
$v_{\text{res}}$ (km s$^{-1}$) & 0.5 & 1.56\\
\hline
\end{tabular}
\end{table}
\subsection{Observations}
We used the H70$\upalpha$ RRL spectra taken and reduced by \citet{2009MNRAS.399..861L} for the data analysis. With \textsc{casa}, the 2$^\text{nd}$ moment map was created between velocities -9.3 and -70.7 km s$^{-1}$ including only pixels above 26 mJy beam$^{-1}$ in order to optimally exclude the weaker, second velocity component (see below).
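The moment calculation itself amounts to an intensity-weighted velocity dispersion; the following Python sketch mimics the thresholded \textsc{casa} computation on a placeholder cube.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
vel = np.linspace(-70.7, -9.3, 64)        # velocity axis, km/s
cube = rng.random((64, 32, 32))           # (channel, y, x) cube, Jy/beam
cube[cube < 0.026] = 0.0                  # keep only pixels above 26 mJy/beam

w = cube.sum(axis=0)                      # integrated intensity
v0 = (cube * vel[:, None, None]).sum(axis=0) / w             # 1st moment
mom2 = np.sqrt((cube * (vel[:, None, None] - v0)**2).sum(axis=0) / w)
\end{verbatim}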
We found that towards the south-west of Region 1, the spectra contain an additional component which is broader (by 14.8\%) and less intense (by 73.5\%) than the primary component. Inspection of the data cubes shows that this emission is offset both in velocity and spatially, and we conclude that it is unassociated with the ionised gas of Region 1.
Figure \ref{two_comp} shows Gaussian fits to both components, identified with \textsc{scouse}. Where possible, the contribution of the secondary component was excluded from further analysis, as we are interested in Region 1. At locations where the secondary component is much weaker, it became difficult to distinguish between the two components. This means that we cannot create FWHM maps reliably, and that the results from the area covering the lowest third of Region 1 must be treated with caution.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{two_comp}
\caption{H70$\upalpha$ spectra of the observational data (Region 1). With \textsc{scouse}, a two-component fit was applied to prevent contamination in further analysis. The primary component (blue) and secondary component (orange) are each fitted by a Gaussian. The combined fit is shown in green.}
\label{two_comp}
\end{figure}
\subsection{Simulations}
In order to compare the simulations and observations more robustly, the units of the H70$\upalpha$ synthetic data were transformed to be consistent with the observations. Intensity was converted from erg s$^{-1}$ cm$^{-2}$ Hz$^{-1}$ ster$^{-1}$ to Jy beam$^{-1}$; physical size was converted to angular size using the distance to G316.81--0.06 (2.6 kpc); and frequency was converted to velocity. Using \textsc{casa}, the continuum was subtracted using \texttt{imcontsub} with the line-free channels: $\leq 6$ and $\geq 54$ (LTE); $\leq 10$ and $\geq 54$ (non-LTE).
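A sketch of these conversions using \textsc{astropy} is given below; the intensity, source size, and frequency values are placeholders, and the adopted H70$\upalpha$ rest frequency ($\approx$18.77 GHz) is our own estimate from the Rydberg formula rather than a value taken from the papers above.
\begin{verbatim}
import numpy as np
import astropy.units as u

# Intensity: erg/s/cm^2/Hz/sr -> Jy/beam, for a 2.5 arcsec Gaussian beam
I = 1e-13 * u.erg / (u.s * u.cm**2 * u.Hz * u.sr)
omega_beam = (np.pi * (2.5 * u.arcsec)**2 / (4 * np.log(2))).to(u.sr)
I_jy_beam = (I * omega_beam).to(u.Jy)

# Physical -> angular size at the adopted distance of 2.6 kpc
theta = ((0.15 * u.pc) / (2.6 * u.kpc)).to(
    u.arcsec, equivalencies=u.dimensionless_angles())

# Frequency -> velocity about the H70alpha rest frequency
nu0 = 18.77 * u.GHz
v = (18.769 * u.GHz).to(u.km / u.s, equivalencies=u.doppler_radio(nu0))
\end{verbatim}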
A major difference between the LTE and non-LTE simulations were narrow absorption lines (LTE) and very bright, compact, and narrow emission lines (non-LTE) which perhaps emulate real maser emission. In non-LTE conditions, RRLs may undergo maser amplification when the line optical depth is negative and its absolute value is greater than the optical depth of the free-free emission (\citealt{2002ASSL..282.....G} and references therein).
The narrow emission dominates over the broad RRL emission, appearing almost as a delta function, which prevents \textsc{scouse} from fitting the non-LTE simulations. Therefore, a mask was applied to remove the majority of the narrow emission; every value greater than 10 mJy beam$^{-1}$ was replaced with the average of the points on either side.
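A minimal sketch of this masking step (with the threshold and replacement rule as described above; the spectrum itself would be a placeholder):
\begin{verbatim}
import numpy as np

def mask_narrow_emission(spec, threshold=0.010):
    # Replace channels above the threshold (in Jy/beam) with the mean
    # of their two neighbouring channels.
    out = spec.copy()
    for i in np.where(spec > threshold)[0]:
        lo, hi = max(i - 1, 0), min(i + 1, len(spec) - 1)
        out[i] = 0.5 * (spec[lo] + spec[hi])
    return out
\end{verbatim}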
The narrow absorption lines (LTE) also made \textsc{scouse} fitting difficult. Significant portions of the broad RRL emission were often missing, making it challenging to fit the overall structure. Therefore, the absorption lines were removed via a RANdom SAmple Consensus (RANSAC; \citealt{Fischler:1981:RSC:358669.358692}) method\footnote{For consistency, RANSAC was also applied to the non-LTE data after the narrow emission lines were removed.}. RANSAC iteratively estimates the parameters of a mathematical model from a set of data and excludes the effects of outliers.
In our case, we selected five points at random (blue circles) along each spectrum (red) to make a Gaussian fit (Figure \ref{ransac}). Using RANSAC, the best fit is the one with the most inliers (points with a residual error of less than 5\%) out of three hundred iterations. Values of the original spectrum lying outside the threshold (5\%) are replaced by values from the new fit, so as not to entirely eradicate the original data. With this method we were able to successfully remove the narrow absorption lines without distorting the data, so that we could then proceed with the \textsc{scouse} fitting. This was only successful for the last three timesteps (730.4, 739.2, and 746.3 kyr). At earlier times (715.3 and 724.7 kyr), for which synthetic H70$\upalpha$ data are also available, the LTE absorption lines are too wide to be accurately removed via the RANSAC method, and thus cannot be successfully fit by \textsc{scouse}.
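A minimal Python sketch of this repair step is given below, assuming a single-Gaussian line model; the \texttt{curve\_fit} initial guesses and the handling of degenerate random draws are our own choices rather than details taken from the analysis.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, a, v0, sigma):
    return a * np.exp(-0.5 * ((v - v0) / sigma)**2)

def ransac_repair(v, spec, n_iter=300, frac=0.05, seed=4):
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        sel = rng.choice(len(v), size=5, replace=False)
        try:
            p, _ = curve_fit(gauss, v[sel], spec[sel],
                             p0=[spec.max(), v[np.argmax(spec)], 5.0])
        except RuntimeError:
            continue                  # degenerate draw, try again
        resid = np.abs(spec - gauss(v, *p)) / np.maximum(np.abs(spec), 1e-10)
        inliers = int((resid < frac).sum())
        if inliers > best_inliers:
            best, best_inliers = p, inliers
    model = gauss(v, *best)
    out = spec.copy()
    bad = np.abs(spec - model) / np.maximum(np.abs(model), 1e-10) > frac
    out[bad] = model[bad]             # overwrite only the outlying channels
    return out
\end{verbatim}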
Finally, Gaussian smoothing was applied to both the LTE and non-LTE synthetic data with a beam size of 2.5 arcsec, using the \texttt{imsmooth} tool in \textsc{casa}. This is the largest beam size possible while still being able to resolve the overall kinematic structure.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{ransac}
\caption{RANSAC example applied to the LTE synthetic data. The narrow absorption lines prevented successful fitting with \textsc{scouse} so they were replaced. The original spectrum (red), the new spectrum (black), and the blue circles represent the five points used to make a Gaussian fit. In this example, this fit was chosen to be the best out of three hundred iterations.}
\label{ransac}
\end{figure}
\section{Results}
\label{results}
\begin{figure}
\includegraphics[width=0.49\textwidth]{scouse_obs_v_beam}
\includegraphics[width=0.49\textwidth]{obs_mom2_nan1}
\caption{Maps of the two \ion{H}{ii} regions within G316.81--0.06: Region 1 (left) and Region 2 (right). Top: centroid velocity map (\textsc{scouse}); bottom: 2$^\text{nd}$ moment map (\textsc{casa}). The beam is shown at the bottom left of each map. Contours of the 35-GHz continuum are overlaid in black \citep{2009MNRAS.399..861L}.}
\label{fig:obs}
\end{figure}
Table \ref{tab:results} contains the ranges in velocity (v$_0$(max)-v$_0$(min)), velocity gradients and maximum velocity dispersions for both the observational and synthetic H70$\upalpha$ RRL data.
Where the observations are concerned, Region 1 is the focus of the study, for it is the youngest \ion{H}{II} region (Figure \ref{fig:obs}). We note that the velocity gradient measured is what we observe along our line of sight, and does not take into account any inclination that may be present. For the simulations (both LTE and non-LTE), we show the results of the final three ages: 730.4, 739.2, and 746.3 kyr in Figures \ref{fig:sims_v} and \ref{sims_mom2}.
\subsection{Observed ionised gas kinematics}
Figure \ref{fig:obs} contains the H70$\upalpha$ centroid velocity and 2$^\text{nd}$ moment maps for the two \ion{H}{II} regions in G316.81--0.06.
Several velocity gradients across each region are visible in the centroid velocity map, and we focus on the younger \ion{H}{II} region, Region 1 (left). Region 1 shows a velocity gradient roughly east-west (across $\sim0.33$ pc) in addition to a less steep gradient north-south (across $\sim0.42$ pc) aligned with the elongation of the 35-GHz continuum and `green fuzzy'.
The 2$^\text{nd}$ moment map shows that in both \ion{H}{ii} regions $\upsigma$ increases towards the centre, and is highest towards the southern end of each region.
\begin{figure*}
\centering
\includegraphics[width=0.43\textwidth]{H70a_LTE_0330_v}
\includegraphics[width=0.43\textwidth]{H70a_nonLTE_0330_v}
\includegraphics[width=0.43\textwidth]{H70a_LTE_0360_v}
\includegraphics[width=0.43\textwidth]{H70a_nonLTE_0360_v}
\includegraphics[width=0.43\textwidth]{H70a_LTE_0383_v}
\includegraphics[width=0.43\textwidth]{H70a_nonLTE_0383_v}
\caption{Centroid velocity of the simulated H70$\upalpha$ data, output by \textsc{scouse}. Left: LTE; right: non-LTE, at ages of 730.4, 739.2, and 746.3 kyr, increasing from top to bottom.}
\label{fig:sims_v}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.43\textwidth]{LTE_0330_mom2}
\includegraphics[width=0.43\textwidth]{nonLTE_0330_mom2}
\includegraphics[width=0.43\textwidth]{LTE_0360_mom2}
\includegraphics[width=0.43\textwidth]{nonLTE_0360_mom2}
\includegraphics[width=0.43\textwidth]{LTE_0383_mom2}
\includegraphics[width=0.43\textwidth]{nonLTE_0383_mom2}
\caption{2$^\text{nd}$ moment maps of the simulated H70$\upalpha$ data output by \textsc{casa}, showing the velocity dispersion. Left: LTE; right: non-LTE, at ages of 730.4, 739.2, and 746.3 kyr, increasing from top to bottom.}
\label{sims_mom2}
\end{figure*}
\subsection{Simulated ionised gas kinematics}
Figures \ref{fig:sims_v} and \ref{sims_mom2} show the centroid velocity and 2$^\text{nd}$ moment maps of the LTE and non-LTE synthetic H70$\upalpha$ data, at the final three ages 730.4, 739.2, and 746.3 kyr. The density slices (Figures \ref{bubbles} and \ref{finalbubbles}) look down on the stars in the xy-plane, i.e. along the outflow axis. For the synthetic H70$\upalpha$ data, we chose a projection perpendicular to the outflow plane (oriented along the xz-plane), to compare with the observations. Although the inclination of the observed Region 1 outflow is not known, Figure \ref{fig:image} shows it is clearly closer to perpendicular than along the line of sight. Any inclination will modify the observed velocities by a factor of order $\sin(\theta)$, where $\theta$ is the angle of inclination.
The velocity structure of the simulated ionised gas on 0.05 pc scales changes insignificantly between the different time steps for either the non-LTE or LTE synthetic maps. Given the similar kinematic structure between the non-LTE and LTE synthetic maps, we conclude non-LTE effects are not important for our analysis and focus on the LTE maps from here on. As this kinematic structure is a robust feature of the simulations, it seems reasonable to compare to the observed ionised gas kinematics.
The morphology of the centroid velocity and 2$^\text{nd}$ moment maps is similar to the observations; velocity gradients are oriented roughly east-west and the velocity dispersion increases towards the centre. A significant difference is that the simulated \ion{H}{ii} region is smaller ($\sim0.15$ pc versus $\sim0.33$ pc), potentially resulting in steeper velocity gradients than observed, due to the conservation of angular momentum.
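As a rough order-of-magnitude check (our own estimate, not taken from \citetalias{2010ApJ...711.1017P}): if the velocity range $\Delta v$ across a region of radius $R$ reflects an approximately conserved specific angular momentum, $j \approx \Delta v\,R$, the line-of-sight gradient scales as
\[
\nabla v \sim \frac{\Delta v}{R} \approx \frac{j}{R^{2}},
\]
so the $\sim$2 times smaller simulated region is expected to show gradients steeper by a factor of a few, consistent with the values in Table \ref{tab:results}.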
\begin{table*}
\caption{Values for the range in centroid velocity, v$_0$, velocity gradient, $\nabla$v$_0$, and maximum velocity dispersion, $\upsigma_{\text{max}}$, for both \ion{H}{ii} regions in G316.81--0.06, in addition to the LTE and non-LTE simulations of \citetalias{2010ApJ...711.1017P} for the final three time-steps.}
\label{tab:results}
\begin{tabular}{lcccccccc}
\hline
& Region 1 & Region 1 & LTE & LTE & LTE & non-LTE & non-LTE & non-LTE \\
Time (kyr) &(E-W)&(N-S)& 730.4 & 739.2 & 746.3 & 730.4 & 739.2 & 746.3\\
\hline
v$_0$ range (km s$^{-1}$) & 15.78$\pm$0.45 & 5.14$\pm$1.11 & 14.59$\pm$0.01& 11.64$\pm$0.01& 12.04$\pm$0.03& 10.49$\pm$0.05& 10.40$\pm$0.12 & 13.54$\pm$0.12 \\
$\nabla$v$_0$ (\kmspc) & 47.81$\pm$3.21 & 12.23$\pm$2.70 & 97.29$\pm$12.97 & 77.64$\pm$10.35 & 80.25$\pm$10.70 & 69.91$\pm$9.33 & 69.35$\pm$9.28 & 90.28$\pm$12.07 \\
$\upsigma_{\text{max}}$ (km s$^{-1}$) &8.1 & 8.1 & 13.1 & 12.1 & 12.1 & 13.0 & 11.8 & 11.8\\
\hline
\end{tabular}
\end{table*}
\section{Discussion}
\label{dis}
As introduced in \S\, \ref{intro}, prior literature looking at the ionised gas kinematics of \ion{H}{II} regions has primarily focused on expansion, accretion, and outflows. There are, however, a small number of \ion{H}{II} regions in the literature which display velocity gradients perpendicular to the outflow axis. For these regions, a common interpretation is that rotation in some form is contributing to the velocity structure.
In \S\, \ref{examples} we summarise previous observations put forward as evidence that rotation is playing a role in shaping the velocity gradient. In \S\, \ref{sim_rotation} we turn to the \citetalias{2010ApJ...711.1017P} simulations to try and uncover the origin of the velocity gradient perpendicular to the outflow axis. Section \ref{rotation} refers back to Region 1, discussing whether the velocity structure signifies rotation and what can be inferred with relation to feedback.
\subsection
[{Postulated evidence for rotation in observed H II regions}]
{Postulated evidence for rotation in observed H\,{\sevensize II} regions}
\label{examples}
\begin{description}
\item \textit{\textbf{G34.3+0.2C}}.
Although G34.3+0.2C is a cometary \ion{H}{II} region, a remarkably strong velocity gradient of $\sim 338$ \kmspc \, has been detected in the H76$\upalpha$ RRL, perpendicular to the axis of symmetry. \citet{1986ApJ...309..553G} infer that this could be caused by a circumstellar disc which formed from the collapse of a rotating protostellar cloud. They suggest that the angular velocity of the cloud was a result of Galactic rotation, and find that the angle of rotation roughly aligns with that of the Galactic plane. \citet{1994ApJ...432..648G} refute this since the surrounding molecular material appears to rotate opposite to the ionised gas. Instead, they suggest that stellar winds from two nearby sources have interacted with the ionised gas to give the observed velocity profile.
\item \textit{\textbf{W49A}}. \citet{1977ApJ...212..664M} present a low spatial resolution study of W49A, and find a velocity range of a few km s$^{-1}$ across the bipolar \ion{H}{II} region. In comparison to the velocities of two massive molecular clouds either side of the \ion{H}{II} region, they conclude that the ionised gas rotates in the middle of them, as the molecular clouds revolve about one another. With higher spatial resolution, \citet{1987Sci...238.1550W} find a 2-pc ring containing at least ten separate \ion{H}{II} regions which they claim is rotating about $5\times10^4$ M$_{\sun}$ of material. They derive an angular velocity of 14.4 \kmspc. With even higher spatial resolution, \citet{1997ApJ...482..307D} find 45 distinct continuum sources, and that the \ion{UCH}{II} regions within the ring do not appear to have ordered motions. However, one \ion{UCH}{II} region in particular (W49A/DD), shows a north-south velocity gradient of a few km s$^{-1}$ which \citet{1997ApJ...482..307D} claim may be caused by the rotation of the ionised gas.
\item \textbf{\textit{K3-50A}}. The bipolar \ion{H}{II} region, K3-50A, shows a steep velocity gradient ($\sim150$ \kmspc) along the axis of continuum emission, indicating the presence of ionised outflows \citep{1994ApJ...428..670D}. There also appears to be an unmentioned perpendicular velocity gradient across the region which we estimate to be $\sim30$ \kmspc\, (see Figure 5a of \citealt{1994ApJ...428..670D}). Further detailed comparisons with the molecular disc have been made \citep{1997ApJ...477..738H} in addition to polarimetry studies \citep{2015MNRAS.453.2622B}. This has provided a unique insight to the influence of magnetic fields and allowed for the construction of a detailed 3D model.
\item \textbf{\textit{NGC 6334A}}.
The velocity gradient of this bipolar \ion{H}{II} region was first detected by \citet{1988BAAS...20.1031R}. \citet{1995ApJ...447..220D} reconfirmed this, finding a gradient of $\sim$75 \kmspc. They inferred that the signature can be attributed to rotation of the ionised gas, originating from a circumstellar disc, and derived a core Keplerian mass of $\sim$200 M$_{\sun}$.
It has also been noted that NGC 6334A, K3-50A, and W49A/A are all alike in terms of their bipolar morphology and the possible presence of ionised outflows \citep{1997ApJ...482..307D}.
\end{description}
\subsection{Origin of the ionised gas velocity structure in the P10 simulations}
\label{sim_rotation}
Since it is difficult for models to take into account all of the different physical mechanisms involved in the ionisation process, one common simplification is to use static high-mass stars. Such simple analytic models tend to show that, for a homogeneous surrounding medium, ionisation occurs isotropically, resulting in no velocity gradient. However, in most simulations it is clear that stars are in motion with respect to each other and to the surrounding gas.
In the \citetalias{2010ApJ...711.1017P} simulations this motion results in a preferred direction of ionisation, downstream of the stellar orbit. We present a simple cartoon (Figure \ref{cartoons}) based on a qualitative examination of the simulated density vector maps (Figures \ref{bubbles} and \ref{finalbubbles}). We find that this can explain the red- and blue-shifted spectra of the observed RRL profile. The cartoon illustrates the evolutionary sequence beginning at the formation of the initial molecular cloud up to the formation of the \ion{H}{ii} region, explained in more detail by the following:
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{cartoons5}
\caption{A cartoon illustrating the kinematic evolution of a young \ion{H}{ii} region. (i) The molecular gas cloud forms with some initial net angular momentum ($\upOmega$, red dashed arrows), with increasing density towards the centre of the cloud (pink). (ii) Stars (black) form at the centre of the molecular cloud when the critical density is surpassed, then drift outwards. The stars orbit about the cloud's centre, tracing the angular momentum of the cloud (black dashed arrows). (iii) A ring (solid red) forms as a result of rotational bounce \citep{2003MNRAS.340...91C}. Stars continually gain mass by accreting the material that gravitationally collects around them, and the higher density ring initiates the formation of new stars. (iv) The centre becomes rarefied; meanwhile, newly ionised material rapidly recombines (also known as flickering). (v) Ionisation dominates over recombination, resulting in an \ion{H}{ii} region (blue). Ionisation is strongest towards the front of the star's path, where the pressure is lowest (white arrows). This appears as blue-shifted spectra when the star travels and ionises towards the observer, and red-shifted when the star travels and ionises away from the observer.
Molecular material collects about the edge of the ionised region as the bubble expands (thick red solid line).}
\label{cartoons}
\end{figure*}
\begin{enumerate}
\item The initial molecular gas cloud has some net angular momentum ($\upOmega$, red dashed arrows), with increasing density towards the centre.
\item Once the local critical density of the gas is surpassed, stars (black) form with a high star formation efficiency at the centre of the cloud. The first star forms at the centre of the potential well then quickly drifts outwards, soon followed by the formation of more stars (on timescales of kyr). These stars all immediately begin to trace the rotation of their natal cloud, about the centre of mass (black dashed arrows).
\item The central region starts to become rarefied and a ring-like structure appears (solid red)\footnote{We note in passing the similarity to the ring of \ion{H}{II} regions in W49A (\S\, \ref{examples}).}, likely a result of rotational bounce \citep{2003MNRAS.340...91C}. Simultaneously, material accumulates about the stars, new stars form within the dense material and, in general, continue to trace the rotation of the molecular cloud. The first star makes approximately one complete revolution by the time the simulation ends (across $\sim120$ kyr), taking into account that the stars are continually moving outwards as they orbit.
\item In the simulations, the rarefied centre is made up of inhomogeneous regions of lower density and lower pressure (shown as solid white for simplicity). Newly ionised material rapidly recombines (also known as flickering; e.g. \citetalias{2010ApJ...711.1017P}; \citealt{2011MNRAS.416.1033G}). The stars continue to orbit about the centre of mass and also interact with each other, with some being flung out of the cloud. For a detailed description of stellar cluster formation in this simulation see \citet{2010ApJ...725..134P}.
\item Multiple high-mass ionising stars create one large ionised bubble (solid blue), also containing lower-mass stars. The thermal pressure created by ionisation heating drives the expansion of the \ion{H}{II} region (white arrows), sweeping the surrounding neutral material into a dense shell (thick solid red line). The thermal pressure of the ionised gas is two orders of magnitude higher than in the molecular gas, and thus the pressure gradient term of the Euler equation dominates over the advection term at the \ion{H}{II} region boundary. Hence, the ionised gas does not trace the rotation of the molecular gas directly. The stars act as mediators: they inherit their angular momentum from the molecular gas out of which they formed, and then generate angular momentum in the ionised gas via a different mechanism (described in more detail below). This is shown by the magnitudes and directions of the velocity arrows of the ionised gas (Figure \ref{finalbubbles}). If the ionised gas had been set into rotation by the surrounding molecular gas, the arrows on either side of the \ion{H}{II} region boundary would always point in the same direction, which is clearly not the case. Furthermore, the varying lengths of the arrows within the \ion{H}{II} region provide evidence for strong dynamical processes inside the \ion{H}{II} region that would destroy any such coherent velocity pattern coming from the boundary.
In fact, it is these dynamical processes that generate the rotational signature in the ionised gas as follows.
Typically, models use an idealised scenario whereby the star is static and ionises isotropically (e.g. \citealt{1978ppim.book.....S}).
However, in the frame of reference where the star is stationary, consider that upstream of the star's path the gas flow in the cloud opposes the direction of ionisation. This inhibits expansion of the ionised gas, as it is continually replenished by neutral material and recombines. Downstream of the star's path, by contrast, the neutral material travels with the direction of ionisation, and the pressure is lowest ahead of the star in comparison to all other directions. Therefore, the expansion occurs predominantly in front of the star as it orbits, i.e. along the path of least resistance.
The velocity of the ionised gas traces the orbit of the stars and gas, and hence we observe red- and blue-shifted spectra along our line of sight in the ionised gas of the simulation.
\end{enumerate}
\subsection
[{G316.81--0.06: a rotating H II region?}]
{G316.81--0.06: a rotating H\,{\sevensize II} region?}
\label{rotation}
We now investigate the possible origin of the velocity structure in G316.81--0.06.
If it is solely due to outflow and/or expansion of ionised gas, the bipolar \ion{H}{II} region would need to be significantly inclined. Since we clearly observe the elongated 35-GHz continuum and `green fuzzy' along the axis of bipolarity, in addition to the shallow velocity gradient north-south, we infer that Region 1 is close to edge-on, i.e.
the line of sight is primarily along the disc plane. While we cannot rule out that the observed velocity structure is caused only by a combination of expansion and outflow, we could not construct a simple model that explains the velocity structure with only these mechanisms.
Therefore, we believe that rotation is the most likely explanation for the velocity gradients.
This is further supported when we compare the ionised gas kinematics in the simulations of \citetalias{2010ApJ...711.1017P} to the observations of Region 1. As shown in \S\, \ref{results}, the morphology is similar in terms of both the centroid velocity and 2$^\text{nd}$ moment maps. We also find that the velocity gradient in the simulations is around a factor of two higher than that of Region 1. This may be because the simulated \ion{H}{II} regions are approximately a factor of two smaller ($\sim0.15$ pc as opposed to $\sim0.33$ pc) and because the inclination of Region 1 may be non-zero.
Further work is needed to explore the significance of rotating gas as opposed to non-rotating gas. In the absence of simulations fine-tuned to match the observations, or higher resolution observations to measure the velocities/proper motions of embedded stars, we have reached the limit of the extent to which we can test this scenario.
However, bearing in mind the caveats discussed in \S\, \ref{sims}, we conclude that the simulations remain a useful tool to aid our understanding of the motions of the ionised gas, especially given that they were not fine-tuned to the observations. Moreover, the unusual velocity gradients naturally emerge from the \citetalias{2010ApJ...711.1017P} simulations, which were not designed to study this effect.
Referring back to the previously postulated explanations from \S\, \ref{examples}, the interpretation of the velocity gradient based on comparison to the \citetalias{2010ApJ...711.1017P} simulations is most similar to the scenario put forward by \citet{1986ApJ...309..553G}. An interesting prediction of this scenario is that, if the initial angular momentum of the cloud is determined by Galactic rotation, the magnitude of rotation will depend on the location in the Galaxy and the orientation of the angular momentum axis with respect to the Galactic plane.
Although not included in the \citetalias{2010ApJ...711.1017P} simulations, the rotation of G316.81--0.06 may instead be driven by the IRDC, clearly seen in Figure \ref{fig:image}. Accretion from the filament would likely induce some net angular momentum onto a central core.
Watkins et al. (in prep.) are currently studying the molecular gas kinematics of G316.81--0.06. Their results will allow for a detailed comparison between the motions of ionised and molecular gas in order to test this scenario.
If either scenario can describe the origin of ionised gas motions in many \ion{H}{ii} regions, similar velocity gradients should also be evident in RRLs for other young (bipolar) \ion{H}{ii} regions. In order to test the scenario of rotation induced by filamentary accretion, comparative studies between the kinematics of \ion{H}{ii} regions and IRDCs are required. Galactic plane surveys, with upcoming and highly sensitive interferometers (e.g. EVLA- and SKA-pathfinders), will provide a high resolution census of all ionised regions in the Milky Way. These \ion{H}{ii} regions will be at different locations in the Galaxy, with different orientations and magnitudes of angular momentum with respect to Earth. This will provide an invaluable test-bed for the earliest and most poorly understood phases of star formation, allowing for the study of RRLs in \ion{H}{II} regions across a large range of ages, sizes, and morphologies.
Future high resolution observational surveys in combination with suites of numerical simulations will also further our understanding of the differing contributing feedback mechanisms at early evolutionary stages and may help to constrain different star/cluster formation scenarios.
For example, the simulations of \citetalias{2010ApJ...711.1017P} can give an idea of which feedback mechanism(s) have an important effect in G316.81--0.06. The simulations include both heating by ionising and non-ionising radiation, where the latter's only effect is to increase the Jeans mass (see discussion in \citealt{2010ApJ...725..134P}). Therefore, all dynamical feedback effects in the simulation are due to photoionisation. This may imply that ionisation pressure is the dominant feedback mechanism required for the formation of a rotating ionised gas bubble, and that radiation pressure and protostellar outflows are not needed to explain the dynamical feedback. Such rotation is potentially present in all \ion{H}{ii} regions and needs to be studied further in other simulations which incorporate different feedback mechanisms.
Although the formation and evolution of galaxies will not be significantly different whether or not the outflowing gas is rotating, the potential to use the ionised gas kinematics as a tracer to identify very young \ion{H}{ii} regions represents an opportunity to understand feedback at the relatively unexplored time/size scales when the stars are just beginning to affect their surroundings on cloud scales.
\section{Summary}
\label{summary}
We have studied a rare example of a young, bipolar \ion{H}{II} region which shows a velocity gradient in the ionised gas, perpendicular to the bipolar continuum axis.
Through comparisons of our H70$\upalpha$ RRL observations with the synthetic data of \citetalias{2010ApJ...711.1017P}, we find that they both share a similar morphology and velocity range along the equivalent axes.
We infer that the velocity gradient of G316.81--0.06 reflects rotation of the ionised gas, and that the simulations demonstrate that this rotation is a direct result of the initial net angular momentum of the natal molecular cloud. Further tests are required to deduce the origin of this angular momentum, whether it is induced by Galactic rotation, filamentary accretion, or some other mechanism. If rotation is a direct result of some initial net angular momentum, this observational signature should be common and routinely observed towards other young \ion{H}{II} regions in upcoming radio surveys (e.g. SKA, SKA-pathfinders, EVLA). Further work is required to establish whether such velocity gradients are a unique diagnostic of rotation.
If rotation is seen to exist in other \ion{H}{II} regions, and we can uncover its true origins, this may help to parameterise the dominant feedback mechanisms at early evolutionary phases, something greatly demanded by numerical studies. This should be achievable through systematic studies of many \ion{H}{II} regions, combined with comparison to a wider range of numerical simulations, likely offering a new window onto this investigation.
\section*{Acknowledgements}
We would like to thank the anonymous referee for their very helpful and constructive comments. This research made use of Astropy \citep{2013A&A...558A..33A}, a community-developed core Python package for Astronomy, and APLpy \citep{2012ascl.soft08017R}, an open-source plotting package for Python. \citet{Anaconda} and \citet{Jupyter} were also used. The GLIMPSE/MIPSGAL image in Figure \ref{fig:image} was created with the help of the ESA/ESO/NASA FITS Liberator \citep{2012ascl.soft06002L}. The cartoon of Figure \ref{cartoons} was created using Inkscape \citep{Inkscape}, the free and open source vector graphics editor. Thanks are also due to Dr. Stuart Lumsden for his very helpful feedback, and Dr. Lee Kelvin for his help with making RGB images.
\bibliographystyle{mnras}
\section{Introduction}
As photons from a distant light source travel through the Universe, their paths are perturbed by the gravitational influence of the large-scale structure. Weak gravitational lensing concerns the small distortions in the images of distant galaxies due to the influence of the intervening mass along the line of sight (see e.g.~\citealt{Kilbinger2009} for a review). In particular, galaxy-galaxy lensing (or simply galaxy-shear) refers to the correlation between foreground (lens) galaxy positions and the tangential component of lensing shear of background (source) galaxies at higher redshifts, which is a measure of the projected, excess mass distribution around the lens galaxies \citep{Bardeen1986}. Extracting useful cosmological or astrophysical information from galaxy-galaxy lensing is complicated by a number of factors. First, one needs to model the relationship between the galaxy density field and the underlying matter field, i.e. galaxy bias \citep{Fry1993}. Second, at small angular separations between lens and source, the signal-to-noise tends to be large, but lensing-galaxy two-point functions become increasingly sensitive to the small-scale matter power spectrum, whose modeling is challenging due to non-linearities and baryonic effects \citep{van2014impact,semboloni2013effect,harnois2015}. Also, galaxy bias may become scale-dependent on those scales (e.g. \citealt{Cresswell_2008}). To sidestep these limitations, several studies in the past have considered the usage of ratios between galaxy-shear two-point functions sharing the same lens sample, also called lensing ratios. This observable cancels out the dependence on the galaxy-matter power spectrum while keeping the sensitivity to the angular diameter distances of both tracer and source galaxies.
Several applications of lensing ratios have been considered in the literature. They were originally proposed in \citet{Jain2003} as a novel way to constrain cosmology from geometrical information only, using ratios of galaxy-shear cross-correlation functions sharing the same lens sample. They envisioned dark energy properties could be constrained using these ratios, in particular the parameter describing the equation of state of dark energy, $w$. \citet{Taylor_2007} proposed applying this technique behind clusters using ratios of individual shear measurements, rather than correlation functions. This revised method was applied to data in \citet{Kitching_2007} using lensing measurements around three galaxy clusters, obtaining weak constraints on $w$. Later, \citet{Taylor_2012} used low-mass systems from the HST Cosmos Survey and was able to detect cosmic acceleration. Other authors developed variants of these initial methods, including \citet{Zhang_2005}, who proposed an approach for both galaxy-shear and shear-shear correlations.
Also, \citet{Bernstein_2004} explored an alternative formalism for implementing the original idea of \citet{Jain2003}, and documented for the first time that the dependence on cosmology was rather weak. They showed that to achieve sensitivity on cosmological parameters, photometric redshifts had to be extremely well characterized, together with the calibration of shear biases, unless they were redshift independent. \citet{Kitching_2008} also discussed systematics affecting shear-ratio in detail, also finding that photometric redshift uncertainties played a prominent role.
Given this dominant dependency on photometric redshift uncertainties, lensing ratios of galaxy-galaxy lensing measurements have been established as a probe to test redshift distributions and redshift-dependent multiplicative biases \citep{Schneider_2016}. Note that in combination with CMB lensing, geometrical lensing ratios can still constrain cosmological parameters (\citealt{Das_Spergel}, \citealt{kitching2015rcslens}, \citealt*{Prat_2019}), but otherwise they have been found to be dominated by redshift uncertainties. Because of that, many studies have used shear ratios to cross-check the redshift distributions of the source sample computed with another method. This is what is known as the ``shear-ratio test'', where ratios of galaxy-galaxy lensing measurements are used to test the redshift distributions of different redshift bins for the corresponding shape catalog. This has been done in several galaxy surveys, such as the Sloan Digital Sky Survey (SDSS), e.g. \citet{Mandelbaum_2005}, where both redshifts and multiplicative shear biases were tested for the first time, the Red-Sequence Cluster Survey (RCS) \citep{Hoekstra_2005} and the Kilo-Degree Survey (KiDS) \citep{Heymans_2012,Hildebrandt_2017,Hildebrandt_2020,Giblin_2020}.
In the Dark Energy Survey (DES) Y1 galaxy-galaxy lensing analysis \citep*{Y1GGL}, geometrical lensing ratios were used to place constraints on the redshift distributions of the source samples and obtained competitive constraints on the mean of the source redshift distributions. This was among the first times the shear-ratio information was used to place constraints instead of just as a diagnostic test. They were also able to constrain multiplicative shear biases. In the current study, we continue this line of work, but generalize this approach in several ways. We develop a novel method that uses lensing ratios as an extra probe to the combination of galaxy-galaxy lensing, cosmic shear and galaxy clustering, usually referred to as 3$\times$2pt. Specifically, we add the \textit{shear-ratio (SR) likelihood}, which uses independent small-scale information, to the usual 3$\times$2pt likelihood. This extra likelihood places constraints on a number of astrophysical parameters, not only those characterizing redshift uncertainties but, importantly, also those characterizing intrinsic alignments and multiplicative shear biases at the same time. By helping to constrain these nuisance parameters, the lensing ratios at small scales provide additional information to obtain tighter cosmological constraints, while still being insensitive to baryonic effects and non-linear galaxy bias.
Using the first three years of observations from DES (Y3 data), we construct a set of ratios of tangential shear measurements of different source redshift bins sharing the same lens bin, for different lens redshift bins. These ratios have the advantage that they can be modeled in the small, non-linear scale regime where we are not able to accurately model the original two-point correlation functions and which is usually discarded in cosmological analyses. This allows us to exploit information from small scales which would have otherwise been disregarded given our inability to model the tangential shear at small scales due to uncertainties in the galaxy bias model, the matter power spectrum, baryonic effects, etc., which cancel out in the ratios. This cancellation happens exactly only in the limit where the lens redshift distribution is infinitely narrow, in which case lensing ratios can be perfectly modeled with geometry only. Instead, if the lens redshift distribution has some finite width, as happens in realistic scenarios such as this work, the cancellation is not exact and the ratios retain some dependence on the lens properties and matter power spectrum, though still much smaller than in the tangential shear signal itself. There are further effects which introduce dependence of shear ratios on parameters other than cosmological distances, such as magnification of the lens galaxies, and the alignment of the source galaxy orientations with the lens galaxy positions due to their physical association, which is usually referred to as Intrinsic Alignments (IA).
There are several different approaches to account for magnification and IA effects on shear ratios. One possible approach is to mitigate these effects in the ratios, e.g.~recently \citet{Unruh_2019} proposed a mitigation strategy for lens magnification effects in the shear-ratio test. Another option is to include these effects in the model, e.g. \citet{Giblin_2020} performed a shear-ratio test on the latest KiDS data set and included non-linear alignment (NLA, \citealt{Hirata2004,Bridle2007}) intrinsic alignment terms in their originally geometrical model. Importantly, they note that SR is indeed very sensitive to the IA model, and they suggest combining SR with other cosmological observables to fully exploit its IA constraining power.
In this work we develop an SR analysis that takes full advantage of the IA dependence of the probe, and combine it with other observables to fully exploit the gains in cosmological constraining power. We do so by describing the ratios using the full tangential shear model, as it is used in the DES Y3 3$\times$2pt analysis, but on smaller scales. In this way, we take into account not only the width of the lens redshift distributions but also lens magnification and intrinsic alignment effects. Moreover, this approach has the advantage of not adding extra computational cost: the $3\times2$pt analysis already requires calculation of the full galaxy-galaxy lensing model for all the scales and source/lens combinations that we use.
Thus, the approach we develop in this work can be thought of as extending the galaxy-galaxy lensing data vector to smaller scales, where most of the signal-to-noise lies, while using the ratio transformation to retain the information we can confidently model. The threshold scale below which we are not able to model DES Y3 tangential shear measurements accurately enough given the current uncertainties has been set at 6 Mpc/$h$ \citep{Krause_2017} for the 3$\times$2pt analysis. The ratios we use in this work only use tangential shear measurements below this threshold, to provide independent information. The small-scale limit of the ratios is set in some cases by the regime of validity of our IA model, and otherwise by the angular range that has been validated for galaxy-galaxy lensing \citep{y3-gglensing}.
In this paper we explore the constraining power of lensing ratios first by themselves and then in combination with other probes such as galaxy clustering, galaxy-galaxy lensing and cosmic shear. We use the same model setup as in the DES Y3 3$\times$2pt cosmological analysis \citep{y3-3x2ptkp}, using the same nuisance parameters including IA, lens and source redshift parameters and multiplicative shear biases. We test this configuration first using simulated data vectors and $N$-body simulations to then apply it to DES Y3 data. We perform a series of tests to validate our fiducial model against different effects which are not included in it such as the impact of baryons, non-linear bias and halo-model contributions, reduced shear and source magnification, among others. In addition, we also test the robustness of the results directly on the data by using two independent lens galaxy samples, the so-called \textsc{redMaGiC} sample \citep{y3-galaxyclustering}
and a magnitude-limited sample, \textsc{MagLim} \citep{y3-2x2maglimforecast}, which demonstrates that the lensing-ratio information is robust against non-linear small-scale information characterizing the galaxy-matter connection. We also use lensing ratios constructed from large-scale information to further validate the small-scale ratios in the data. After thoroughly validating the shear-ratio likelihood by itself (SR), we proceed to combine it with other 2pt functions and study the improvements it provides in the constraints, using first simulated data and then DES Y3 data. We find SR to provide significant improvements in cosmological constraints, especially for the combination with cosmic shear, due to the information SR provides on IA. The DES Y3 cosmic shear results are described in two companion papers \citep*{y3-cosmicshear1,y3-cosmicshear2}, the results from galaxy clustering and galaxy-galaxy lensing in \citet*{y3-2x2ptmagnification, y3-2x2ptbiasmodelling,y3-2x2ptaltlensresults} and the combination of all probes in \citet*{y3-3x2ptkp}.
The paper is organized as follows. Section \ref{sec:data} describes the data sets used in this work. In Section \ref{sec:modeling} we detail the modeling of the ratios and the scheme used to do parameter inference using that model. The ratio measurement procedure is described in Section \ref{sec:measurement}. The validation of the model is presented in Section \ref{sec:model_validation}. In Section \ref{sec:combination} we explore the constraining power of the lensing ratios when combined with other probes using simulated data. Finally, in Section \ref{sec:results}, we apply the methodology to DES Y3 data and present the final results. We summarize and conclude in Section \ref{sec:conclusions}.
\section{Data and simulations}
\label{sec:data}
DES is a photometric survey that covers about one
quarter of the southern sky (5000 sq.~deg.), imaging galaxies in 5 broadband filters ($grizY$) using the Dark Energy Camera \citep{Flaugher2015,DES2016}. In this work we use data from the first three years of observations (from August 2013 to February 2016, hereafter just Y3), which reaches a limiting magnitude ($S/N = 10$) of $\approx 23$ in the $i$-band (with a mean of 4 exposures out of the planned 10 for the full survey), and covers an area of approximately 4100 sq.~deg. The data are processed using the DESDM pipeline presented in \citet{Morganson2018}. For a detailed description of the DES Y3 Gold data sample, see \citet*{y3-gold}. Next we describe the lens and source galaxy samples used in this work. Their corresponding redshift distributions are shown in Figure \ref{fig:nzs}.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{plots/nzs_plot.png}
\caption{(Top panel): Redshift distributions of \textsc{redMaGiC} lens galaxies divided in five redshift bins. The first three redshift bins are used for the shear ratio analysis in this work, while the two highest-redshift ones (in gray) are not used. The $n(z)$s are obtained by stacking individual $p(z)$ distributions for each galaxy, as computed by the \textsc{redMaGiC} algorithm, and validated using clustering cross-correlations in \citet{y3-lenswz}. (Middle panel): Same as above but for the \textsc{MagLim} lens galaxy sample. The redshift distributions come from the DNF (Directional Neighbourhood Fitting) photometric redshift algorithm \citep{de2016dnf,y3-2x2ptaltlensresults}. (Bottom panel): The same, but for the weak lensing source galaxies, using the \textsc{Metacalibration} sample. In this case the redshift distributions come from the SOMPZ and WZ methods, described in \citet*{y3-sompz} and \citet*{y3-sourcewz}. }
\label{fig:nzs}
\end{figure}
\subsection{Lens samples}
In Table~\ref{tab:samples} we include a summary description for each of the lens samples used in this work, with the number of galaxies in each redshift bin, number density, linear galaxy bias values and lens magnification parameters.
\subsubsection{The \textsc{redMaGiC} sample}
One of the lens galaxy samples used in this work is a subset of the DES Y3 Gold Catalog selected by \textsc{redMaGiC} \citep{Rozo2015}, which is an algorithm designed to define a sample of luminous red galaxies (LRGs) with high quality photometric redshift estimates. It selects galaxies above some luminosity threshold based on how well they fit a red sequence template, calibrated using the redMaPPer cluster finder \citep{Rykoff2014,Rykoff2016} and a subset of galaxies with spectroscopically verified redshifts. The cutoff in the goodness of fit to the red sequence is imposed as a function of redshift and adjusted such that a constant comoving number density of galaxies is maintained.
In DES Y3 \textsc{redMaGiC} galaxies are used as a lens sample in the clustering and galaxy-galaxy lensing parts of the 3$\times$2pt cosmological analysis \citep{y3-galaxyclustering, y3-gglensing}. In this work we utilize a subset of the samples used in those analyses, in particular the galaxies with redshifts $z<0.65$, split into three redshift bins (see Figure \ref{fig:nzs}). The redshift calibration of this sample is performed using clustering cross-correlations, and is described in detail in \citet{y3-lenswz}. A catalog of random points for \textsc{redMaGiC} galaxies is generated uniformly over the footprint, and then weights are assigned to \textsc{redMaGiC} galaxies such that spurious correlations with observational systematics are cancelled. The methodology used to assign weights is described in \citet{y3-galaxyclustering}.
\subsubsection{The Magnitude-limited sample}
We use a second lens galaxy selection, which differs from \textsc{redMaGiC} in terms of number density and photometric redshift accuracy: the \textsc{MagLim} sample. In this sample, galaxies are selected with a magnitude cut that evolves linearly with the photometric redshift estimate: $i < a z_{\rm phot} + b$. The optimization of this selection, using the DNF photometric redshift estimates, yields $a=4.0$ and $b=18$. This optimization was performed taking into account the trade-off between number density and photometric redshift accuracy, propagating this to its impact in terms of cosmological constraints obtained from galaxy clustering and galaxy-galaxy lensing in \citet{y3-2x2maglimforecast}. Effectively, this selects brighter galaxies at low redshift while including fainter galaxies as redshift increases. Additionally, we apply a lower cut to remove the most luminous objects, imposing $i > 17.5$. The \textsc{MagLim} sample has a galaxy number density of more than four times that of the \textsc{redMaGiC} sample but the redshift distributions are $\sim30\%$ wider on average. This sample is split into 6 redshift bins, but in this paper we only use the first three of them. The characteristics of these three redshift bins are defined in Table~\ref{tab:samples}. The redshift binning was chosen to minimize the overlap in the redshift distributions, which is also calibrated using clustering redshifts in \citet{y3-lenswz}. \citet{y3-2x2maglimforecast} showed that changing the redshift binning does not impact the cosmological constraints. See also \citet{y3-2x2ptaltlensresults} for more details on this sample.
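For illustration, the \textsc{MagLim} selection can be expressed in a few lines of Python; this is a schematic sketch of the cuts described above (the function name and inputs are ours, not part of the DES pipeline).
\begin{verbatim}
import numpy as np

def maglim_select(i_mag, z_phot, a=4.0, b=18.0, i_bright=17.5):
    """Schematic MagLim cut: i < a * z_phot + b, with the
    bright-end cut i > 17.5 removing the most luminous objects."""
    i_mag = np.asarray(i_mag)
    z_phot = np.asarray(z_phot)
    return (i_mag < a * z_phot + b) & (i_mag > i_bright)
\end{verbatim}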
\begin{table}
\centering
\pmb{\textsc{redMaGiC} lens sample} \\
\vspace{2mm}
\setlength{\tabcolsep}{5pt}
\begin{tabular}{c|c|c|c|c}
\textbf{Redshift bin} & \textbf{$N^i_\text{gal}$} & \textbf{$n^i_\text{gal}$ } & \textbf{$b^i$} & \textbf{$\alpha^i$} \\
\rule{0pt}{3ex}
$0.15 < z < 0.35$ & 330243 & 0.022141 & 1.74 $\pm$ 0.12 & 1.31 \\
$0.35 < z < 0.50$ & 571551 & 0.038319 & 1.82 $\pm$ 0.11 & -0.52 \\
$0.50 < z < 0.65$ & 872611 & 0.058504 & 1.92 $\pm$ 0.11 & 0.34 \\
\end{tabular}
\\
\vspace{5mm}
\pmb{\textsc{MagLim} lens sample} \\
\vspace{2mm}
\begin{tabular}{c|c|c|c|c}
\textbf{Redshift bin} & \textbf{$N^i_\text{gal}$} & \textbf{$n^i_\text{gal}$ } & \textbf{$b^i$} & \textbf{$\alpha^i$} \\
\rule{0pt}{3ex}
$0.20 < z < 0.40$ & 2236473 & 0.1499 & 1.49 $\pm$ 0.10 & 1.21 \\
$0.40 < z < 0.55$ & 1599500 & 0.1072 & 1.69 $\pm$ 0.11 & 1.15 \\
$0.55 < z < 0.70$ & 1627413 & 0.1091 & 1.90 $\pm$ 0.12 & 1.88 \\
\end{tabular}
\\
\vspace{5mm}
\pmb{\textsc{Metacalibration} source sample} \\
\vspace{2mm}
\begin{tabular}{c|c|c|c|c}
\textbf{Redshift bin} & \textbf{$N^j_\text{gal}$} & \textbf{$n^j_\text{gal}$ } & \textbf{$\sigma_\epsilon^j$} & \textbf{$\alpha^j$}\\
\rule{0pt}{3ex}
1 & 24940465 & 1.476 & 0.243 & 0.335 \\
2 & 25280405 & 1.479 & 0.262 & 0.685 \\
3 & 24891859 & 1.484 & 0.259 & 0.993 \\
4 & 25091297 & 1.461 & 0.301 & 1.458 \\
\end{tabular}
\caption{Summary description for each of the samples used in this work. $N_\text{gal}$ is the number of galaxies in each redshift bin, $n_\text{gal}$ is the effective number density in units of gal/arcmin$^{2}$ (including the weights for each sample), $b^i$ is the mean linear galaxy bias from the 3$\times$2pt combination, the \textbf{$\alpha$}'s are the magnification parameters as measured in \citet{y3-2x2ptmagnification} and $\sigma_\epsilon^j$ is the weighted standard deviation of the ellipticity for a single component as computed in \citet*{y3-shapecatalog}.}
\label{tab:samples}
\end{table}
\subsection{Source sample}\label{sec:source_sample}
The DES Y3 source galaxy sample, described in \citet*{y3-shapecatalog}, comprises a subset of the DES Y3 Gold sample. It is based on \textsc{Metacalibration} \citep{Huff2017,Sheldon2017}, which is a method developed to accurately measure weak lensing shear using only the available imaging data, without need for prior information about galaxy properties or calibration from simulations. The method involves distorting the image with a small known shear, and calculating the response of a shear estimator to that applied shear. This technique
can be applied to any shear estimation code provided it fulfills certain requirements. For this work, it has been applied to the \textsc{ngmix} shear pipeline \citep{Sheldon2014}, which fits a Gaussian model simultaneously in the $riz$ bands to measure the ellipticities of the galaxies. The details of this implementation can be found in \citet*{y3-shapecatalog}.
The redshift calibration of the source sample has been performed using the Self Organizing Maps Photometric Redshifts (SOMPZ, \citealt*{y3-sompz}) and clustering cross-correlation (WZ, \citealt*{y3-sourcewz}) methods. The SOMPZ scheme uses information from the DES Deep Fields \citep*{y3-deepfields} and connects it to the wide survey by using the Balrog transfer function \citep*{y3-balrog}. Using the SOMPZ method, the source sample is split into four redshift bins (Figure \ref{fig:nzs}), and the scheme provides a set of source redshift distributions, including the uncertainty from sample variance, flux measurements, etc. The WZ method uses the cross-correlations of the positions of the source sample with the positions of the \textsc{redMaGiC} galaxies, narrowly binned in redshift. For its application, samples are drawn from the posterior distribution of redshift distributions for all bins conditioned on both the SOMPZ photometric data and the WZ clustering data. In addition, validation of the shape catalog uncertainties and the connection to uncertainties in the associated redshift distributions has been developed in detail in \citet{y3-imagesims}, using realistic image simulations. For this work we will employ the shear catalog and use the results from these analyses as priors on source multiplicative biases and redshift calibration.
In Table~\ref{tab:samples} we include the number of galaxies in each redshift bin as well as the number density, shape noise and source magnification parameters.
\subsection{$N$-body simulations}
In this work we use $N$-body simulations to recreate an end-to-end analysis and validate our methodology. For this we use the Buzzard simulations described in Sec.~\ref{sec:buzzard}. We also use the MICE2 simulations to validate our small scale \texttt{Halofit} modeling with a Halo Occupation Distribution (HOD) model. We describe the MICE2 simulation in Sec.~\ref{sec:mice}.
\subsubsection{The Buzzard v2.0 $N$-body simulations}
\label{sec:buzzard}
Buzzard v2.0 \citep{y3-simvalidation} is a suite of 18 simulated galaxy catalogs built on $N$-body lightcone simulations that have been endowed with a number of DES Y3 specific survey characteristics. Each pair of Y3 simulations is produced from a set of three independent $N$-body lightcones with mass resolutions of $3.3\times10^{10},\, 1.6\times10^{11},\, 5.9\times10^{11}\, h^{-1}M_{\odot}$, and simulated volumes of $1.05$, $2.6$ and $4.0\, h^{-3}\, \mathrm{Gpc}^3$. Galaxies are included in these simulations using the \textsc{Addgals} model \citep{Wechsler2021, DeRose2021}. \textsc{Addgals} makes use of the relationship, $P(\delta_{R}|M_r)$, between a local density proxy, $\delta_{R}$, and absolute magnitude $M_r$, measured from a high resolution sub-halo abundance matching (SHAM) model, in order to populate galaxies into these lightcone simulations. This model reproduces the absolute-magnitude-dependent clustering of the SHAM.
The \textsc{Calclens} algorithm is used to ray-trace the simulations, using a spherical-harmonic transform (SHT) based Poisson solver \citep{Becker2013}. An $N_{\rm side}=8192$ \textsc{HealPix} grid is used to perform the SHTs. \textsc{Calclens} computes the lensing distortion tensor at each galaxy position, and uses this quantity to deflect galaxy angular positions, shear galaxy intrinsic ellipticities (including the effects of reduced shear), and magnify photometry and shapes. Convergence tests have shown that resolution effects are negligible in relevant lensing quantities on the scales used for this analysis \citep{DeRose2019}.
We apply a photometric error model based on DES Y3 data estimates in order to add realistic wide field photometric noise to our simulations. A lens galaxy sample is selected from our simulations by applying the \textsc{redMaGiC} galaxy selection with the configuration described in \citet{y3-galaxyclustering}. A weak-lensing source galaxy selection is performed by selecting on PSF-convolved sizes and $i$-band signal-to-noise in a manner that matches the measured non-tomographic source number density in the DES Y3 \textsc{metacalibration} source catalog. SOMPZ redshift estimation is used in the simulations in order to place galaxies into four source redshift bins. The shape noise per redshift bin is also matched to that measured from the \textsc{metacalibration} catalog. Two-point functions are measured in the Buzzard v2.0 simulations with the same code used for the Y3 data. \textsc{metacalibration} responses and inverse variance weights are set equal to 1 for all galaxies, because our simulations do not include these values. Weights for the simulated lens galaxy sample are assigned using the same algorithm used in the DES Y3 data.
\subsubsection{The MICE2 $N$-body simulation}\label{sec:mice}
We use DES-like mock galaxy catalogs from the MICE
simulation suite in this analysis. The MICE Grand Challenge simulation (MICE-GC) is an $N$-body simulation
run in a cube of side-length 3 Gpc/$h$ with $4096^3$ particles using the Gadget-2 code \citep{Springel_2005}, with a mass resolution of $2.93\times 10^{10}\, M_\odot/h$. Halos are identified using a Friends-of-Friends algorithm with linking length 0.2. For further details about this simulation, see \citet{Fosalba2015a}.
These halos are then populated with galaxies using a
hybrid sub-halo abundance matching plus halo occupation distribution (HOD) approach, as detailed in \citet{Carretero2014}. These methods are designed to match
the joint distributions of luminosity, $g - r$ color, and
clustering amplitude observed in SDSS \citep{Zehavi_2005}. The construction of the halo and galaxy catalogs is described in \citet{Crocce2015a}. MICE assumes a flat $\Lambda$CDM cosmological model with $h = 0.7$, $\Omega_m = 0.25$, $\Omega_b = 0.044$ and $\sigma_8 = 0.8$, and it populates one octant of the sky (5156 sq. degrees), which is comparable to the sky area of DES Y3 data.
To validate our small scale \texttt{Halofit} modeling in Sec~\ref{sec:hod}, testing it against an HOD model with parameters measured from MICE2, we use a DES-like lightcone catalog of \textsc{redMaGiC} galaxies matching the properties of DES Y3 data, including lens magnification.
\section{Modeling of the ratios}
\label{sec:modeling}
In this section we describe how we model the ratios of tangential shear measurements and why it is possible to model them to significantly smaller scales than the tangential shear quantity.
\subsection{The idea: Geometrical ratios}
\label{sec:geometrical_model}
When we take ratios of tangential shear measurements around the same lens sample, the dependence on the matter power spectrum and galaxy bias cancels for the most part, canceling exactly if the lens sample is infinitely narrow in redshift. In this approximation the ratios can be modelled independently of scale, and they depend only on the geometry of the Universe. As we will see now, this fact allows us to model ratios of tangential shear measurements down to significantly smaller scales than what is typically used for the tangential shear measurements themselves. For instance, in the case of the DES Y3 3$\times$2pt cosmological analysis, scales below 6 Mpc/$h$ are discarded for the galaxy-galaxy lensing probe due to our inability to accurately model the (non-linear) matter power spectrum, the galaxy bias, baryonic effects, etc. In order to see why these dependencies may cancel out in the ratios, it is useful to first express the tangential shear $\gamma_t$ in terms of the excess surface mass density $\Delta \Sigma$:
\begin{equation}\label{eq:gammat_delta_sigma}
\gamma_t = \frac{\Delta \Sigma}{\Sigma_\mathrm{crit}},
\end{equation}
where the lensing strength $\Sigma_{\mathrm{crit}}^{-1}$ is a geometrical factor that, for a single lens-source pair, depends on the angular diameter distance to the lens $D_{\rm l}$, the source $D_{\rm s}$ and the relative distance between them $D_{\rm ls}$:
\begin{equation}\label{eq:inverse_sigma_crit}
\Sigma_{\mathrm{crit}}^{-1} (z_{\rm l}, z_{\rm s}) = \frac{4\pi G}{c^2} \frac{D_{\rm ls} \, D_{\rm l}}{D_{\rm s}},
\end{equation}
with $\Sigma_{\mathrm{crit}}^{-1}(z_l,z_s)=0$ for $z_s<z_l$, and where $z_l$ and $z_s$ are the lens and source galaxy redshifts, respectively. For a single lens-source pair, Eq.~(\ref{eq:gammat_delta_sigma}) is exact and can be used to see that if one takes the ratio of two tangential shear measurements sharing the same lens with two different sources, $\Delta \Sigma$ cancels since it is a property of the lens only (see \citet{Bartelmann2001} for a review), and we are left with a ratio of geometrical factors:
\begin{equation}\label{eq:ratios_narrow_lens}
\frac{\gamma_{t}^{l,s_i}}{\gamma_{t}^{l,s_j}} = \frac{\Sigma_{\mathrm{crit}}^{-1} (z_l, z_{s_i})}{\Sigma_{\mathrm{crit}}^{-1} (z_l, z_{s_j})}.
\end{equation}
This means that ratios defined in this way will depend on the redshift of the lens and source galaxies, as well as on the cosmological parameters needed to compute each of the angular diameter distances involved in Eq.~(\ref{eq:inverse_sigma_crit}), through the distance–redshift relation.
So far we have only been considering a single lens-source pair. For a tangential shear measurement involving a sample of lens galaxies with redshift distribution $n_l(z)$ and a sample of source galaxies with $n_s(z)$, which may also overlap, we can generalize Eq.~(\ref{eq:ratios_narrow_lens}) by defining an effective $\Sigma^{-1}_{\mathrm{crit}}$ integrating over the corresponding redshift distributions. For a given lens bin $i$ and source bin $j$, it can be expressed as:
\begin{equation}\label{eq:eff_inverse_sigma_crit}
\Sigma_{\mathrm{crit},\mathrm{eff}}^{-1\ i,j} = \int_0^{z_l^\text{max}} dz_l \int_0^{z_s^\text{max}} dz_s \, n_l^i(z_l) \, n_s^j(z_s) \, \Sigma_{\mathrm{crit}}^{-1}(z_l, z_s).
\end{equation}
Then, the generalized version of Eq.~(\ref{eq:ratios_narrow_lens}) becomes:
\begin{equation}\label{eq:ratios_eff_sigma_crit}
\frac{\gamma_{t}^{l,s_i}}{\gamma_{t}^{l,s_j}} \simeq \frac{\Sigma_{\mathrm{crit},\mathrm{eff}}^{-1 \ l,s_i}}{\Sigma_{\mathrm{crit},\mathrm{eff}}^{-1 \ l, s_j}}.
\end{equation}
In this equation it becomes apparent that the main dependency of the ratios is on the redshift distributions of both the lens and the source samples. Eq.~(\ref{eq:ratios_narrow_lens}) is only exact if the lens sample is infinitely narrow in redshift, and a good approximation if the lens sample is narrow \textit{enough}. This approximation is what was used in the DES Y1 shear-ratio analysis \citep*{Y1GGL} to model the ratios. In this work we go one step further and do not use the narrow-lens-bin approximation. Instead, we use a full modeling of the ratios adopting the tangential shear model used in the DES Y3 3$\times$2pt analysis, which includes explicit modeling of other effects such as lens magnification, intrinsic alignments and multiplicative shear biases, which also play a role in the ratios. Next we describe in detail the full modeling of the ratios we use in this work.
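Before doing so, to make the geometrical limit concrete, the following is a minimal numerical sketch of Eqs.~(\ref{eq:eff_inverse_sigma_crit}) and (\ref{eq:ratios_eff_sigma_crit}), assuming a flat $\Lambda$CDM cosmology and toy Gaussian redshift distributions; the grids, $n(z)$s and cosmological parameters are illustrative, not the DES Y3 ones.
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.constants import G, c
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # illustrative cosmology

def inv_sigma_crit(zl, zs):
    """Inverse critical surface density for one lens-source
    pair; zero when the source lies in front of the lens."""
    if zs <= zl:
        return 0.0
    d_l = cosmo.angular_diameter_distance(zl)
    d_s = cosmo.angular_diameter_distance(zs)
    d_ls = cosmo.angular_diameter_distance_z1z2(zl, zs)
    val = 4 * np.pi * G / c**2 * d_ls * d_l / d_s
    return val.to(u.Mpc**2 / u.Msun).value

def eff_inv_sigma_crit(z, n_l, n_s):
    """Effective inverse Sigma_crit: double integral of
    n_l(zl) n_s(zs) Sigma_crit^{-1}(zl, zs) on a uniform grid."""
    dz = z[1] - z[0]
    grid = np.array([[inv_sigma_crit(zl, zs) for zs in z]
                     for zl in z])
    return np.sum(n_l[:, None] * n_s[None, :] * grid) * dz**2

def gaussian_nz(z, mean, sigma):
    n = np.exp(-0.5 * ((z - mean) / sigma) ** 2)
    return n / (np.sum(n) * (z[1] - z[0]))  # unit normalisation

z = np.linspace(0.01, 2.0, 80)
n_lens = gaussian_nz(z, 0.30, 0.05)  # toy lens bin
n_src1 = gaussian_nz(z, 0.60, 0.10)  # toy source bin j
n_src2 = gaussian_nz(z, 1.00, 0.10)  # toy source bin k

# Geometric lensing ratio for the two source bins
ratio = (eff_inv_sigma_crit(z, n_lens, n_src1)
         / eff_inv_sigma_crit(z, n_lens, n_src2))
print(f"geometric shear ratio = {ratio:.3f}")
\end{verbatim}
The ratio returned by this sketch responds to shifts in the input $n(z)$s but is nearly insensitive to reasonable changes in the cosmological parameters, which is the behaviour exploited by the shear-ratio test.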
\subsection{The full model}
\label{sec:model}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{plots/ratios_theory_sim_3x2pt_bestfit_cosmo_IA_bias_11_13_20_cov_nz.pdf}
\caption{Lensing ratios computed with the full model used in this work as a function of scale, evaluated at the best-fit values of the 3$\times$2pt analysis (see Sec.~\ref{sec:model}), compared with the purely geometrical model used in previous shear-ratio analyses to date, which is scale independent (see Sec.~\ref{sec:geometrical_model}). The geometrical component still dominates the modeling of the ratios, but small yet significant deviations appear when comparing with the full model.
The unshaded regions correspond to the ``small scales'' we use in this analysis, which add extra information below the scales used in the 3$\times$2pt cosmological analysis for the galaxy-galaxy lensing probe. The grey shaded regions are not used for the fiducial ratios in this work.}
\label{fig:ratios_scale}
\end{figure}
Ratios of tangential shear measurements are the main probe used in this work. In this section we describe how we model them for our fiducial case, including the integrals of the power spectrum over the lens bin range, without the narrow lens bin approximation, and with contributions from lens magnification and intrinsic alignments.
The model of the ratio for the lens redshift bin $i$ between source redshift bins $j$ and $k$ can be expressed as:
\begin{equation}
\label{eq:ratio}
r^{(l_i,s_j,s_k)} \equiv \left \langle \frac{ \gamma_t^{l_i,s_j} (\theta)}{ \gamma_t^{l_i,s_k} (\theta) } \right \rangle_\theta = \left \langle r^{(l_i,s_j,s_k)} (\theta) \right \rangle_\theta ,
\end{equation}
where the averaging over the different angular bins is performed as detailed in Sec.~\ref{sec:measurement}, in the same way as for the measurement. To model each tangential shear quantity in the ratio, we use exactly the same model used for the galaxy-galaxy lensing probe in the DES Y3 3$\times$2pt cosmological analysis, which we summarize in this section and for which further details can be found in \citet{y3-generalmethods} and \citet{y3-gglensing}. In Fig.~\ref{fig:ratios_scale} we show the full modeling of the ratios as a function of scale, before performing the angular averaging. We compare it with the purely geometrical modeling described in the previous section and find that, even though the geometrical component remains dominant, other contributions become significant for some of the lens-source bin combinations, and thus the full modeling is needed.
The tangential shear two-point correlation function for each angular bin can be expressed as a transformation of the galaxy-matter angular cross-power spectrum $C_{gm}(\ell)$, which in this work we perform using the curved sky projection:
\begin{equation}\label{eq:curved_sky}
\gamma_t^{ij}(\theta) = (1+m^j)\sum_\ell \frac{2\ell+1}{4\pi\ell(\ell+1)} \overline{P^2_\ell}\left(\theta_{\rm min},\theta_{\rm max}\right) \, C^{ij}_{gm, \text{tot}}(\ell),
\end{equation}
for a lens redshift bin $i$ and a source redshift bin $j$, where $\overline{P^2_\ell}\left(\theta_{\rm min},\theta_{\rm max}\right) $ is the bin-averaged associated Legendre polynomial within an angular bin $[\theta_{\rm min},\theta_{\rm max}]$, defined in \citet{y3-gglensing}. $m^j$ are free parameters that account for a multiplicative uncertainty on the shape measurements. The total angular cross-power spectrum $C^{ij}_{gm, \text{tot}}$ in the equation above includes terms from Intrinsic Alignments (IA), lens magnification, and cross terms between the two effects:
\begin{equation}
C^{ij}_{gm, \text{tot}} = C^{ij}_{gm} + C^{ij}_{gm, \text{IA}} + C^{ij}_{gm, \text{lens mag}} + C^{ij}_{gm, \text{IA x lens mag}}.
\end{equation}
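To illustrate the transform in Eq.~(\ref{eq:curved_sky}), the following sketch evaluates the sum at a single angle using the un-averaged associated Legendre polynomial $P_\ell^2$; this is a simplification relative to the bin-averaged $\overline{P^2_\ell}$ used in the actual analysis.
\begin{verbatim}
import numpy as np
from scipy.special import lpmv

def gamma_t_from_cl(theta_rad, cl_tot, m_bias=0.0):
    """Sketch of the curved-sky transform at a single angle.
    cl_tot[ell] is the total galaxy-matter cross-spectrum; the
    sum starts at ell = 2, where P_ell^2 is first defined."""
    ells = np.arange(2, len(cl_tot))
    p2 = lpmv(2, ells, np.cos(theta_rad))  # assoc. Legendre P_l^2
    coef = (2 * ells + 1) / (4 * np.pi * ells * (ells + 1))
    return (1.0 + m_bias) * np.sum(coef * p2 * cl_tot[2:])
\end{verbatim}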
The main angular cross-power spectrum can be written as a projection of the 3D galaxy-matter power spectrum $P_{gm}$, using the Limber approximation \citep{Limber53, Limber_LoVerde2008} and assuming a flat Universe:
\begin{equation}\label{eq:C_gm}
C_{gm}^{ij}(\ell) = \frac{3H^2_0 \Omega_m }{2c^2}
\int d\chi \, N_l^i(\chi) \frac{g^j(\chi)}{a(\chi)\, \chi}
P_{gm}\left(k = \frac{\ell+1/2}{\chi},z(\chi)\right),
\end{equation}
where
\begin{equation}\label{eq:N_l}
N_l^i(\chi) = \frac{n^i_l\,(z-\Delta z^i_l)}{\bar{n}^i_l}\frac{dz}{d\chi},
\end{equation}
with $\Delta z^i_l$ accounting for the uncertainty on the mean redshift of the lens redshift distributions. For the \textsc{MagLim} sample we also marginalize over the width of the lens redshift distributions, introducing the parameters $\sigma_{z_l}^i$, one for each lens redshift bin (see \citealt{y3-2x2ptaltlensresults,y3-lenswz} for additional details about the introduction of the width parameterization). In the equations above, $k$ is the 3D wavenumber, $\ell$ is the 2D multipole moment, $\chi$ is the comoving distance to redshift $z$, $a$ is the scale factor, $n^i_l$ is the lens redshift distribution, $\bar{n}^i_l$ is the mean number density of the lens galaxies and $g(\chi)$ is the lensing efficiency kernel:
\begin{equation}\label{eq:lensing_efficiency}
g(\chi) = \int_\chi^{\chi_\text{lim}} d \chi' N^j_s(\chi') \frac{\chi'- \chi}{\chi'}
\end{equation}
with $N^j_s(\chi')$ being analogously defined for the source galaxies as in Eq.~(\ref{eq:N_l}) for the lens galaxies, introducing the source redshift uncertainty parameters $\Delta z^j_s$. $\chi_\mathrm{lim}$ is the limiting comoving distance of the source galaxy sample. Also, we want to relate the galaxy-matter power spectrum to the matter power spectrum for all the terms above. In our fiducial model we assume that lens galaxies trace the mass distribution following a simple linear biasing model ($\delta_g = b \:\delta_m$). The galaxy-matter power spectrum relates to the matter power spectrum by a multiplicative galaxy bias factor:
\begin{equation} \label{eq:linearbias}
P^{ij}_{gm} = b^{i} P^{ij}_{mm},
\end{equation}
even though the galaxy bias mostly cancels in the lensing ratios. We find the lensing ratios to have significant dependence on the IA and lens magnification terms but almost no sensitivity to the galaxy bias model. We compute the non-linear matter power spectrum $P_{mm}$ using the \citet{Takahashi_2012} version of \texttt{Halofit} and the linear power spectrum with \texttt{CAMB}\footnote{https://camb.info/}.
To compute the theoretical modelling in this study, we use the \textsc{CosmoSIS} framework \citep{Zuntz_2015}.
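As a concrete illustration of Eqs.~(\ref{eq:C_gm})--(\ref{eq:lensing_efficiency}), the following is a minimal sketch of the Limber projection; the kernels and $P_{gm}(k,z)$ are assumed to be supplied by the user (e.g. interpolated from \texttt{CAMB} outputs), uniform grids are assumed, and units are handled with fixed constants rather than a units library for brevity.
\begin{verbatim}
import numpy as np

C_KM_S = 2.998e5  # speed of light in km/s

def lensing_efficiency(chi, n_src_chi):
    """g(chi): integral over chi' > chi of
    N_s(chi') (chi' - chi) / chi' on a uniform grid."""
    dchi = chi[1] - chi[0]
    g = np.zeros_like(chi)
    for i, x in enumerate(chi):
        w = np.clip(chi - x, 0.0, None) / chi
        g[i] = np.sum(n_src_chi * w) * dchi
    return g

def limber_c_gm(ell, chi, n_lens_chi, g_src, a_chi, p_gm,
                h0=70.0, omega_m=0.3):
    """Sketch of the Limber projection. chi is a comoving
    distance grid in Mpc, n_lens_chi the normalised lens kernel
    N_l(chi), g_src = g(chi), a_chi the scale factor a(chi), and
    p_gm(k, chi) a user-supplied galaxy-matter power spectrum."""
    prefac = 1.5 * omega_m * (h0 / C_KM_S) ** 2  # 3 H0^2 Om / 2c^2
    k = (ell + 0.5) / chi                        # Limber wavenumber
    integrand = n_lens_chi * g_src / (a_chi * chi) * p_gm(k, chi)
    return prefac * np.sum(integrand) * (chi[1] - chi[0])
\end{verbatim}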
Below we briefly describe the other terms included in our fiducial model.
\paragraph*{Lens magnification} Lens magnification is the magnification of the lens galaxy sample by the large-scale structure between the lens galaxies and the observer. The lens magnification angular cross-power spectrum can be written as:
\begin{equation}
C^{ij}_{gm, \text{lens mag}} = 2 (\alpha^i -1) \, C_{mm}^{ij}(\ell)
\end{equation}
where $\alpha^i$ is a parameter that depends on the properties of the lens sample and has been measured in \citet{y3-2x2ptmagnification} for the DES Y3 lens samples within the 3$\times$2pt analysis. The measured values can be seen in Table~\ref{tab:samples}. $C_{mm}^{ij}(\ell)$ is the convergence power spectrum between the lens and source distributions, as defined in \citet{y3-2x2ptmagnification}.
\paragraph*{Intrinsic Alignments} The orientation of the source galaxies is correlated with the underlying large-scale structure, and therefore with the lenses tracing this structure. This effect is only present in galaxy-galaxy lensing measurements if the lens and source galaxies overlap in redshift. To take it into account, we employ the TATT (Tidal Alignment and Tidal Torquing) model \citep{Blazek_2019} which is an extension of the NLA (Non-linear alignment) model \citep{Hirata2004}. Then, the IA term is:
\begin{equation}
C_{IA}^{ij}(\ell) = \int d\chi \frac{N_l^i(\chi)\, N_{s}^j(\chi)}{\chi^2}P_{gI}\left(k = \frac{\ell+1/2}{\chi},z(\chi)\right)\,,
\end{equation}
where $P_{gI} = b P_{GI}$, with $b$ being the linear bias of the lens galaxies. $P_{GI}$ is model dependent and in the TATT model is given by:
\begin{align}
P_{GI} = a_1(z) P_{mm} + a_{1\delta}(z) P_{0|0E}
+ a_2 (z) P_{0|E2},
\end{align}
where the full expressions for the power spectra in the second and third terms can be found in \citet{Blazek_2019} (see equations 37--39 and their appendix A). The other parameters are defined as:
\begin{equation}\label{eq:tatt_c1}
a_1(z) = -A_1 \bar{C}_{1} \frac{\rho_{\rm crit}\Omega_{\rm m}}{D(z)} \left(\frac{1+z}{1+z_{0}}\right)^{\eta_1}
\end{equation}
\begin{equation}\label{eq:tatt_c2}
a_2(z) = 5 A_2 \bar{C}_{1} \frac{\rho_{\rm crit}\Omega_{\rm m}}{D^2(z)} \left(\frac{1+z}{1+z_{0}}\right)^{\eta_2}
\end{equation}
\begin{equation}\label{eq:b_ta}
a_{1\delta} (z) = b_{\mathrm{TA}} a_1 (z),
\end{equation}
where $\bar{C}_1$ is a normalisation constant, by convention fixed at a value $\bar{C}_1=5\times10^{-14}M_\odot h^{-2} \mathrm{Mpc}^2$, obtained from SuperCOSMOS (see \citealt{brown02}). The pivot redshift $z_{0}$ is fixed to the value 0.62. Finally, the dimensionless amplitudes $(A_1, A_2)$, the power-law indices $(\eta_1,\eta_2)$ and the $b_{\text{TA}}$ parameter (which accounts for the fact that the shape field is preferentially sampled in overdense regions) are the 5 free parameters of our TATT model.
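For reference, the redshift-dependent TATT amplitudes of Eqs.~(\ref{eq:tatt_c1})--(\ref{eq:b_ta}) can be evaluated as in the sketch below; the linear growth factor $D(z)$ is assumed to be supplied externally, and $\rho_{\rm crit}\bar{C}_1 \approx 0.0134$ is the commonly used dimensionless combination of the two constants.
\begin{verbatim}
def tatt_amplitudes(z, A1, A2, eta1, eta2, b_ta, omega_m,
                    growth, z0=0.62, rho_crit_c1bar=0.0134):
    """Sketch of the TATT amplitudes a1(z), a2(z), a1d(z).
    growth(z) is a user-supplied linear growth factor D(z),
    normalised to D(0) = 1 (an assumption of this sketch)."""
    d = growth(z)
    zfac1 = ((1.0 + z) / (1.0 + z0)) ** eta1
    zfac2 = ((1.0 + z) / (1.0 + z0)) ** eta2
    a1 = -A1 * rho_crit_c1bar * omega_m / d * zfac1
    a2 = 5.0 * A2 * rho_crit_c1bar * omega_m / d**2 * zfac2
    a1d = b_ta * a1   # density-weighted tidal term
    return a1, a2, a1d
\end{verbatim}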
\paragraph*{Lens magnification cross Intrinsic Alignments term} There is also the contribution from the correlation between lens magnification and source intrinsic alignments, which is included in our fiducial model:
\begin{equation}
C_{mI}^{ij}(\ell) = \int d\chi \frac{q_l^i(\chi)\, N_{s}^j(\chi)}{\chi^2}P_{mI}\left(k = \frac{\ell+1/2}{\chi},z(\chi)\right)\,,
\end{equation}
where $P_{mI} = P_{GI}$.
\subsubsection{Parameters of the model}
Next we will describe the different dependencies of the modeling of the ratios. In most cases, such dependencies will be described by parameters in our model (listed below), some of which will have Gaussian priors associated with them.
\begin{itemize}
\item \textbf{Cosmological parameters (6 or 7)}: 6 for $\Lambda$CDM, which are $\Omega_m$, $H_0$, $\Omega_b$, $n_s$, $A_s$ (or $\sigma_8$\footnote{We sample our parameter space with $A_s$, and convert to $\sigma_8$ at each step of the chain to get the posterior of $\sigma_8$. We also use the parameter $S_8$, which is a quantity well constrained by weak lensing data, defined here as $S_8 = \sigma_8(\Omega_m/0.3)^{0.5}$.}) and $\Omega_{\nu} h^2$. For $w$CDM, there is an additional parameter, $w$, that governs the equation of state of dark energy. Also, in our model we are assuming 3 species of massive neutrinos, following \citet{y3-3x2ptkp}, and a flat geometry of the Universe ($\Omega_k=0)$.
\item \textbf{Source redshifts parameters}: In order to characterize the uncertainties, we allow for an independent shift $\Delta z^j$ in each of the measured source redshift distributions. Priors for these parameters have been obtained in \citet*{y3-sompz, y3-sourcewz}. Additional validation with respect to marginalizing over the shape of the source redshift distributions is provided in \citet{y3-hyperrank} using the \texttt{Hyperrank} method.
\item \textbf{Lens redshift parameters}: We allow for independent shifts in the mean redshift of the distributions, $\Delta z^i$, one for each lens redshift bin $i$, as defined in Eq.~(\ref{eq:N_l}). For the \textsc{MagLim} sample there are additional parameters to marginalize over, the widths of the redshift distributions $\sigma_{z^i}$, because the width of the distributions is more uncertain in the \textsc{MagLim} case. Priors for these parameters have been obtained in \citet{y3-lenswz}.
\item \textbf{Multiplicative shear bias parameters}: We allow for a multiplicative change in the shear calibration of the source samples using $m^j$, one for each source redshift bin $j$. Priors for these parameters have been obtained in \citet{y3-imagesims}.
\item \textbf{Lens magnification parameters}: They describe the sign and amplitude of the lens magnification effect. We denote them by $\alpha^i$, one for each lens redshift bin $i$. These parameters have been computed in \citet{y3-2x2ptmagnification} and are fixed in our analysis, as they are in the 3$\times$2pt analysis.
\item \textbf{(Linear) Galaxy bias parameters}: They model the relation between the underlying dark matter density field and the galaxy density field: $b^i$, one for each lens redshift bin $i$, since we assume linear galaxy bias in this analysis.
\item \textbf{Intrinsic Alignment (IA) parameters}: Our fiducial IA model is the TATT model, which has 5 parameters: two amplitudes governing the strength of the alignment for the tidal and the torquing parts, respectively, $a_1$, $a_2$; two parameters modeling the redshift dependence of each amplitude, $\alpha_1$, $\alpha_2$; and $b_{\mathrm{TA}}$, describing the galaxy bias of the source sample.
\end{itemize}
\subsubsection{Different run configurations} \label{sec:model_dv_schemes}
Having listed all the dependencies of the model used to describe the ratios, we note that throughout this paper we will perform different tests using the ratios, freeing different parameters in each case. We consider three main scenarios, and for each one we use different measurements and allow different parameters to vary. The tests will be described in more detail as they appear in the paper, but here we list these distinct scenarios and the modeling choices adopted in each of them:
\begin{enumerate}
\item \textbf{Shear-ratio only (SR)}: In this case, the data vector consists of small-scale shear-ratio measurements only (see Sec.~\ref{sec:small-large-ratios} for the definition of scales used). The model has 19 free parameters for the \textsc{redMaGiC} sample: 3 lens redshift parameters, 4 source redshift parameters, 4 multiplicative shear bias parameters, 3 galaxy bias parameters and 5 IA parameters. For the \textsc{MagLim} sample there are 22 free parameters, with the additional 3 lens redshift parameters describing the width of the distributions. In this case we fix the cosmological parameters since the lensing ratios have been found to be insensitive to cosmology (see Sec.~\ref{sec:cosmology_dependence} for a test of this assumption).
\item \textbf{Large-scale shear-ratio only (LS-SR)}: In this case, the data vector consists of large-scale shear-ratio measurements only (see Sec.~\ref{sec:small-large-ratios} for the definition of scales used). The model (and number of free parameters) is the same as for the small-scale lensing ratios scenario. This setup is only used as validation for the small-scale shear-ratio analysis.
\item \textbf{Shear-ratio + 3$\times$2pt (SR + 3$\times$2)}: In this case, the data vector consists of small-scale shear-ratio measurements and the usual 3$\times$2pt data vector, that is, galaxy clustering, $w(\theta)$, galaxy-galaxy lensing, $\gamma_t(\theta)$, and cosmic shear, $\xi_+(\theta)$, $\xi_-(\theta)$, measurements, each with the corresponding scale cuts applied in the DES Y3 3$\times$2pt cosmological analysis. In this case we use exactly the same model as in the 3$\times$2pt cosmological analysis, freeing all the parameters described above, that is, 29 parameters in total for the \textsc{redMaGiC} sample for $\Lambda$CDM (30 for $w$CDM), and 31 for the \textsc{MagLim} sample for $\Lambda$CDM (32 for $w$CDM). The only difference between this scenario and the 3$\times$2pt one is the addition of the small-scale lensing ratio measurements in the data vector.
\end{enumerate}
\subsection{Parameter inference methodology }
\label{sec:parameter_inference_methodology}
In this work we want to use ratios of small-scale galaxy-galaxy lensing measurements around the same lens bins to constrain redshift uncertainties and other systematics or nuisance parameters of our model, as described above. Next we summarize the methodology we utilize to perform such tasks using Bayesian statistics.
Let us denote the set of measured ratios as $\{ r \}$, and the set of parameters in our model as $\{ M \}$. We want to know the probability of our model parameters given the ratios data. In particular, we are interested in estimating the \textit{posterior} probability distribution function of the parameters of our model given the ratios data, $p(\{ M \}|\{ r \})$. To obtain that posterior probability we use Bayes' theorem, which relates the posterior distribution to the \textit{likelihood}, $p(\{ r \}|\{ M \})$, computed from the model and the data, and the \textit{prior}, $p(\{ M \})$, which encapsulates \textit{a priori} information we may have on the parameters of our model, via the following relation:
\begin{equation}
p(\{ M \}|\{ r \}) \propto p(\{ r \}|\{ M \}) \: p(\{ M \}).
\end{equation}
We will use a given set of priors on the model parameters; some of them will be uniform priors in a certain interval, while others will be Gaussian priors in the cases where we have more information about the given parameters. For SR, we assume a Gaussian likelihood: for a given set of model parameters ($\{ M \}$), we compute the corresponding ratios ($\{r\}_M$) and estimate a $\chi^2$ value between these and the data ratios ($\{r\}$), using a fixed data covariance ($\mathbf{C}$). The logarithm of the SR likelihood then becomes:
\begin{equation}
\mathrm{log} \: \mathcal{L}^{\mathrm{SR}} = \mathrm{log} \: p(\{r\}|\{ M \}) = - \frac{1}{2}\chi^2 \: -\frac{1}{2} \mathrm{log} \: \mathrm{Det} \: \mathbf{C};
\end{equation}
\begin{equation}
\mathrm{with} \; \chi^2 = (\{ r \}-\{ r \}_M)^T \: \mathbf{C}^{-1} \: (\{ r \}-\{ r \}_M).
\end{equation}
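In code, evaluating this likelihood amounts to a few lines. The sketch below is illustrative (our own function names, with a fixed precomputed covariance), not the pipeline implementation:
\begin{verbatim}
import numpy as np

def log_like_sr(r_data, r_model, cov):
    """log L = -chi^2/2 - log(det C)/2 for the measured ratios."""
    diff = r_data - r_model
    chi2 = diff @ np.linalg.solve(cov, diff)  # avoids explicit C^{-1}
    _, logdet = np.linalg.slogdet(cov)        # stable log-determinant
    return -0.5 * chi2 - 0.5 * logdet
\end{verbatim}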
This method will provide constraints on the parameters of our model given the measured ratios on the data and a covariance for them. For the fiducial DES Y3 cosmological analysis, this SR likelihood will be used in combination with the likelihood for other 2pt functions such as cosmic shear, galaxy clustering and galaxy-galaxy lensing. Because SR is independent of the other 2pt measurements (see Sec.~\ref{sec:independence_between_small_large}), the likelihoods can be simply combined:
\begin{equation}\label{eq:combined_likelihood}
\mathrm{log} \: \mathcal{L}^{\mathrm{Total}} = \mathrm{log} \: \mathcal{L}^{\mathrm{SR}} + \mathrm{log} \: \mathcal{L}^{\mathrm{2pt}} .
\end{equation}
The specific details of the parameters and the associated priors used in each test will be described in detail later in the paper, together with the description of the test itself. For MCMC chains, we use \textsc{PolyChord} \citep{polychord} as the fiducial sampler for this paper. We use the following settings for this sampler: \texttt{feedback = 3},
\texttt{fast\_fraction = 0.1}, \texttt{live\_points = 500}, \texttt{num\_repeats=60}, \texttt{tolerance=0.1}, \texttt{boost\_posteriors=10.0} for the chains run on data, and \texttt{live\_points = 250}, \texttt{num\_repeats=30} for chains on simulated data vectors, consistent with \citet{y3-3x2ptkp}.
\section{Measurement and covariance of the ratios}
\label{sec:measurement}
In this Section, we describe the measurement and covariance of the ratios, including the choice of scales we use, and we test the robustness of the estimation. The measurement of the ratios is based on the tangential shear measurements presented and validated in \citet{y3-gglensing}, where several measurement tests are performed on the 2pt measurements, such as testing for B-modes, PSF leakage, observing conditions, scale-dependent responses, among others.
\subsection{Methodology}\label{sec:measurement_methodology}
\subsubsection{Lens-source bin combinations}
In this work we use three lens redshift bins, for both lens galaxy samples, \textsc{redMaGiC} and \textsc{MagLim}, and four source redshift bins, as described in Section \ref{sec:data} and depicted in Figure \ref{fig:nzs}. The DES Y3 3$\times$2pt project uses five and six lens bins (Figure \ref{fig:nzs}) for the two lens samples, respectively. In this work we restrict ourselves to the three lowest redshift lens bins, both because they carry the bulk of the total shear-ratio S/N and because the impact of lens magnification is much stronger for the highest redshift lens bins; given the uncertainty in the parameters calibrating lens magnification, we choose not to be dominated by it, even though we include it in the modeling. Regarding source redshift bins, we use the four bins utilized in the DES Y3 3$\times$2pt project.
From the redshift bins described above, we will construct combinations with a given fixed lens bin and two source bins, denoted by the label $(l_i,s_j,s_k)$, where $s_k$ corresponds to the source bin which will sit in the denominator. Then, for each lens bin we can construct three independent ratios, to make a total set of 9 independent ratios, $\{ r^{(l_i,s_j,s_k)} \}$. Note there is not a unique set of independent ratios one can pick. In this work we choose to include the highest S/N tangential shear measurement in the denominator of all of the ratios since that choice will minimize any potential noise bias. In our case the highest S/N tangential shear measurement corresponds to the highest source bin, i.e.~the fourth one, and hence, for a given lens bin $i$ we will use the following three independent ratios {$\{ r^{(l_i,s_1,s_4)} \}, \{ r^{(l_i,s_2,s_4)} \}, \{ r^{(l_i,s_3,s_4)} \}$}. See also Table~\ref{tab:scale_cuts} for a complete list of the ratio combinations we use in this work.
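For concreteness, these nine labels can be enumerated programmatically (a trivial sketch; the labels are purely bookkeeping):
\begin{verbatim}
# Three lens bins, each paired with source bins 1-3 over the
# highest-S/N source bin (the fourth) in the denominator:
ratios = [(f"l{i}", f"s{j}", "s4")
          for i in (1, 2, 3) for j in (1, 2, 3)]
assert len(ratios) == 9
\end{verbatim}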
\subsubsection{Small and large scale ratios: choice of scales}
\label{sec:small-large-ratios}
\begin{figure}
\centering
\includegraphics[width=0.52\textwidth]{IA_ratios.png}
\caption{Impact of different IA models (different parameter choices for TATT) on all the lensing ratios considered in this work. We find that for ratios whose modeling is close to a pure geometrical model (Figure \ref{fig:ratios_scale}) the impact of IA is negligible. The different lines in the plot have different IA parameters in the ranges: $a_1 = [0.5,1], \ a_2 = [-2, -0.8], \ \alpha_1 = [-2.5,0], \alpha_2 = [-4., -1.2], \ b_{TA} = [0.6, 1.2]$. The grey bands show the size of the data uncertainties on the ratios, for reference.}
\label{fig:ratios_scale_IA}
\end{figure}
When measuring the lensing ratios, we will be interested in two sets of angular scales, which we label as ``small-scale ratios'' and ``large-scale ratios''. Large-scale ratios are defined to use approximately the same angular scales as the galaxy-galaxy lensing probe in the 3$\times$2pt DES Y3 cosmological analysis, and for that we use scales above 8 $h^{-1}$Mpc and up to angular separations of 250 arcmin. In fact, the minimum scale used in the 3$\times$2pt analysis is 6 $h^{-1}$Mpc, but due to the analytic marginalization of small-scale information (see Section 4.2 in \citealt{y3-gglensing} and \citealt{MacCrann_2019}) the scales between 6-8 $h^{-1}$Mpc do not add significant information. Regardless, we use the large-scale ratios purely as validation for the small-scale ratios, as detailed in Section~\ref{sec:results}.
Small-scale ratios are defined using angular measurements below the minimum scale used in the cosmology analysis, i.e. 6 $h^{-1}$Mpc. Small-scale ratios will be our fiducial set of ratios, and next we focus on defining the lower boundary of scales to be used for those ratios. If ratios were purely geometrical, they would be scale-independent, and hence we could use all measured scales to constrain them (the full measured set of scales for galaxy-galaxy lensing in DES Y3 is described in \citealt{y3-gglensing}). However, as we saw in the previous Section, lensing ratios are not purely geometrical but are subject to other, scale-dependent physical effects which need to be modeled accurately, and hence we are restricted to angular scales where the modeling is well characterized. In particular, in some ratio configurations there is enough overlap between lenses and sources to make the ratios sensitive to Intrinsic Alignments (IA), even if this dependence is smaller than for the full tangential shear measurement.
Figure \ref{fig:ratios_scale_IA} shows the impact of different IA models (different parameter choices for TATT) on all the lensing ratios considered. This Figure can be compared to Figure \ref{fig:ratios_scale}, and it is unsurprising to find that the impact of IA is smallest for the ratios that are closest to purely geometrical ratios. There are two cases in which the ratios are insensitive to IA (Figure \ref{fig:ratios_scale_IA}) and well modeled with geometry only (Figure \ref{fig:ratios_scale}), which correspond to the combinations ($l_1, s_3, s_4$) and ($l_2, s_3, s_4$). These ratios involve lens-source combinations with negligible overlap between lenses and sources and are thus not affected by IA. For these geometrical ratio combinations, we predict scale-independent ratios and hence we are able to accurately model the measurements at all scales, down to the minimum angular separation in which we measure the tangential shear, which is 2.5 arcmins \citep{y3-gglensing}.
For the remaining combinations we choose not to include physical scales below 2 $h^{-1}$Mpc, to avoid approaching the 1-halo regime. This decision is driven by the importance of shear ratio in constraining IA and the corresponding requirement to restrict the analysis to the range of validity of our fiducial IA model, the Tidal Alignment Tidal Torquing (TATT) model. This model captures nonlinear IA effects, notably the tidal torquing mechanism relevant for blue/spiral galaxies and the impact of weighting by source galaxy density, which becomes important on scales where clustering is non-negligible. The TATT model is thus significantly more flexible than the frequently-used nonlinear alignment model (NLA), which itself has been shown to accurately describe alignments of red galaxies down to a few Mpc (e.g.\ \citealp{Singh_2015, Blazek_2015}). However, as a perturbative description, the TATT model will not apply on fully nonlinear scales and thus is not considered robust within the 1-halo regime. While this choice of minimum scale is supported by both theoretical expectations and past observational results, we use the analysis restricted to large scales as an additional robustness check. As shown in Figure~\ref{fig:ia_data}, the IA constraints from the large-scale shear ratio information are fully consistent with the fiducial shear ratio constraints, providing further support for our assumption that the TATT model can describe IA down to the minimum scale. In Table~\ref{tab:scale_cuts} we summarize the scale cuts described in this section for each of the ratio combinations. Finally, it is worth noting that the choice of physical scales is the same for the two lens galaxy samples used in this work, but the choice of angular scales varies due to their slightly different redshift distributions.
\begin{table}
\centering
\setlength{\tabcolsep}{5pt}
\begin{tabular}{c|c|c|c}
\pmb{$(l_i,s_j,s_k)$} & \textbf{Scales} & $N_{dp}$ RM & $N_{dp}$ ML \\
\rule{0pt}{3ex}
$(l_1,s_1,s_4)$ & 2 -- 6 $h^{-1}$Mpc & 4 & 5\\
$(l_1,s_2,s_4)$ & 2 -- 6 $h^{-1}$Mpc & 4 & 5\\
$(l_1,s_3,s_4)$ & 2.5 arcmin -- 6 $h^{-1}$Mpc & 10 & 10\\
\rule{0pt}{3ex}
\rule{0pt}{3ex}
$(l_2,s_1,s_4)$ & 2 -- 6 $h^{-1}$Mpc & 4 & 5\\
$(l_2,s_2,s_4)$ & 2 -- 6 $h^{-1}$Mpc & 4 & 5\\
$(l_2,s_3,s_4)$ & 2.5 arcmin -- 6 $h^{-1}$Mpc & 8 & 9\\
\rule{0pt}{3ex}
\rule{0pt}{3ex}
$(l_3,s_1,s_4)$ & 2 -- 6 $h^{-1}$Mpc & 4 & 5\\
$(l_3,s_2,s_4)$ & 2 -- 6 $h^{-1}$Mpc & 4 & 5\\
$(l_3,s_3,s_4)$ & 2 -- 6 $h^{-1}$Mpc & 4 & 5\\
\end{tabular}
\caption{Redshift bin combinations and scales used in this work for the ``small-scale'' lensing ratios probe. $(l_i,s_j,s_k)$ is a label that specifies the lens and source bins considered in the ratio, where $s_k$ is in the denominator. In general, we use scales between 2 -- 6 $h^{-1}$Mpc, except for the combinations which have almost no overlap between lenses and sources, for which we use all scales available (with a lower limit of 2.5 arcmin), since they are dominated by geometry and largely unaffected by IA and magnification effects. $N_{dp}$ is the number of data points remaining after applying our scale cuts, which we show for both lens samples: \textsc{redMaGiC} (RM) and \textsc{MagLim} (ML); the variations in $N_{dp}$ between the two samples come from their slightly different mean redshifts. \label{tab:scale_cuts}}
\end{table}
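The mapping between the physical cuts of Table~\ref{tab:scale_cuts} and angular scales only requires the comoving distance to the lens redshift. A minimal sketch of this conversion, assuming an illustrative cosmology (not the values used in the analysis):
\begin{verbatim}
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # illustrative values only

def physical_to_arcmin(R, z_lens):
    """Angle [arcmin] subtended by R [Mpc/h] at the lens redshift."""
    chi = cosmo.comoving_distance(z_lens).value * cosmo.h  # Mpc/h
    return np.degrees(R / chi) * 60.0

# e.g. the 2 and 6 Mpc/h cuts at a lens redshift of 0.3:
print(physical_to_arcmin(2.0, 0.3), physical_to_arcmin(6.0, 0.3))
\end{verbatim}
This conversion also makes explicit why the angular cuts differ slightly between the two lens samples: their mean redshifts, and hence $\chi(z_{\rm lens})$, differ.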
\subsubsection{Estimation of the ratio}
\label{sec:estimate}
Having defined the set of ratios of galaxy-galaxy lensing measurements to be used, and the set of angular scales to employ in each of them, we now describe the procedure to measure the lensing shear ratios. Let $\gamma_t^{l_i,s_j}(\theta)$ and $\gamma_t^{l_i,s_k}(\theta)$ be two galaxy-galaxy lensing measurements as a function of angular scale ($\theta$) around the same lens bin $l_i$ but from two different source bins, $s_j$ and $s_k$; we want to estimate their ratio. Since the ratios are mostly geometrical, they are predominantly scale independent (Figure \ref{fig:ratios_scale}). We have checked that using ratios as a function of scale does not significantly improve our results (although that may change for future analyses with larger data sets). Therefore, for simplicity, we average over angular scales between our scale cuts in the following way:
\begin{equation}
\label{eq:ratio}
r^{(l_i,s_j,s_k)} \equiv \left \langle \frac{ \gamma_t^{l_i,s_j} (\theta)}{ \gamma_t^{l_i,s_k} (\theta) } \right \rangle_\theta = \left \langle r^{(l_i,s_j,s_k)} (\theta) \right \rangle_\theta ,
\end{equation}
where the average over angular scales, $\left \langle ... \right \rangle_\theta$, includes the corresponding correlations between measurements at different angular scales. We can denote the ratio measurements as a function of scale as vectors, such as $r^{(l_i,s_j,s_k)} (\theta) \equiv \mathbf{r}^{(l_i,s_j,s_k)}$, where $(l_i,s_j,s_k)$ is a label that specifies the lens and source bins considered in the ratio. In order to account for all correlations, we will assume we have a fiducial theoretical model for our lensing measurements, $\tilde{\gamma}_t^{l_i,s_j}(\theta)$ and $\tilde{\gamma}_t^{l_i,s_k}(\theta)$, and a joint covariance for the two measurements as a function of scale, $\mathbf{C}_{\tilde{\gamma}}$, such that
\begin{equation}
\left ( \mathbf{C}_{\tilde{\gamma}} \right )_{m,n}= \mathrm{Cov}[\tilde{\gamma}_t^{l_i,s_j}(\theta_m),\tilde{\gamma}_t^{l_i,s_k}(\theta_n)].
\end{equation}
Now we want to estimate the average ratio of lensing measurements. The ratio is a non-linear transformation, as is clear from Equation (\ref{eq:ratio}). The covariance of the ratio as a function of scale can be estimated as
\begin{equation}
\mathbf{C}_{\mathbf{r}} = \mathbf{J} \: \mathbf{C}_{\tilde{\gamma}} \: \mathbf{J}^T,
\end{equation}
where $\mathbf{J}$ is the Jacobian of the ratio transformation as a function of scale from Equation (\ref{eq:ratio}), $\mathbf{r}^{(l_i,s_j,s_k)}$, and can be computed exactly using the theoretical model for the lensing measurements. Note that $\mathbf{C}_{\tilde{\gamma}}$, $\mathbf{C}_{\mathbf{r}}$ and $\mathbf{J}$ are all computed for a given ratio $(l_i,s_j,s_k)$. Having the covariance for the ratio as a function of scale, the estimate of the mean ratio having minimum variance is given by:
\begin{equation}
\label{eq:ratio_est}
\begin{aligned}
r^{(l_i,s_j,s_k)} &= \sigma_r^2 \left ( \mathbf{D}^T \: \mathbf{C}^{-1}_{\mathbf{r}} \: \mathbf{r}^{(l_i,s_j,s_k)} \right ),\\
\mathrm{with} \; \sigma_r^2 &= \left ( \mathbf{D}^T \: \mathbf{C}^{-1}_{\mathbf{r}} \: \mathbf{D} \right )^{-1}.
\end{aligned}
\end{equation}
Here $\mathbf{D}$ is a design matrix equal to a vector of ones, $[1,...,1]^T$, of the same length as $\mathbf{r}^{(l_i,s_j,s_k)}$ (the number of angular bins considered). Note that the estimator for the ratio in Equation (\ref{eq:ratio_est}) reduces to an inverse variance weighting of the angular bins for a diagonal covariance, and to an unweighted mean in the case of a diagonal covariance with constant diagonal values.
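The full estimator can be condensed into a short sketch (ours, not the pipeline implementation), assuming the joint covariance of the two $\gamma_t$ vectors is available:
\begin{verbatim}
import numpy as np

def estimate_ratio(gt_num, gt_den, cov_joint):
    """Minimum-variance scale-averaged ratio, Eqs. (ratio)-(ratio_est).

    gt_num, gt_den: gamma_t measurements (length n) for bins j, k;
    cov_joint: 2n x 2n joint covariance of [gt_num, gt_den].
    Returns (ratio, sigma_ratio)."""
    n = len(gt_num)
    # Jacobian of r_m = num_m / den_m with respect to [num, den];
    # in practice it is evaluated at the fiducial theory curves.
    J = np.hstack([np.diag(1.0 / gt_den),
                   np.diag(-gt_num / gt_den**2)])
    cov_r = J @ cov_joint @ J.T        # covariance of r(theta)
    r_theta = gt_num / gt_den
    D = np.ones(n)                     # design matrix of ones
    icov_D = np.linalg.solve(cov_r, D)
    sigma2 = 1.0 / (D @ icov_D)
    return sigma2 * (icov_D @ r_theta), np.sqrt(sigma2)
\end{verbatim}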
For the fiducial simulated data in this work, we use the redshift distributions of the \textsc{redMaGiC} lens sample, although the differences between the \textsc{redMaGiC} and \textsc{MagLim} samples are small in the first three lens bins (see Figure \ref{fig:nzs}). Figure \ref{fig:ratios_sim_data} shows values of the fiducial estimated lensing shear ratios for both our simulated data and the real unblinded data. For the simulated case, we show the true values of the ratios, i.e.~those measured directly from the noiseless case, as well as the estimated ratios when noise is included. For the data cases, we show the fiducial set of data ratios used in this work for both \textsc{redMaGiC} and \textsc{MagLim} lens samples, together with the corresponding best-fit model from the full 3$\times$2pt DES Y3 cosmological analyses. The data results will be discussed in detail in Section \ref{sec:results}. Next, we describe the covariance estimate of the ratios and assess the performance of our estimator.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{plots/sr_plot.png}
\caption{\emph{(Upper panel:)} True values of the ratios $\{ r \}$ for our fiducial theory model, together with the estimates of the simulated ratios using the measurement procedure described in \S\ref{sec:estimate} and the uncertainties estimated using the procedure described in \S\ref{sec:cov}. \emph{(Middle panel:)} Measured set of shear ratios and their uncertainties in the \textsc{redMaGiC} data, together with the best-fit model from the 3$\times$2 DES Y3 cosmological analysis of the \textsc{redMaGiC} sample ($\chi^2$/ndf = 11.3/9, $p$-value of 0.26). \emph{(Lower panel:)} Measured set of shear ratios and their uncertainties in the \textsc{MagLim} data, together with the best-fit model from the 3$\times$2 DES Y3 cosmological analysis of the \textsc{MagLim} sample ($\chi^2$/ndf = 18.8/9, $p$-value of 0.03, above the threshold for inconsistencies which we originally set at $p$-value = 0.01 for the DES Y3 analysis). }
\label{fig:ratios_sim_data}
\end{figure}
\subsubsection{Covariance of the ratios}
\label{sec:cov}
We have described above how we compute the ratio of a given pair of galaxy-galaxy lensing measurements. Now, we describe how we compute the covariance between different ratios, from different pairs of lensing measurements. First, we use the fiducial model and theory joint covariance for all the galaxy-galaxy lensing measurements produced and validated in \citet{y3-covariances} to produce $10^5$ covariant realizations of the galaxy-galaxy lensing measurements, drawn from a multivariate Gaussian centered at the fiducial model and with the theoretical galaxy-galaxy lensing covariance.
For each of these $10^5$ realizations of galaxy-galaxy lensing measurements, we measure the set of 9 shear ratios using the procedure described above. That yields $10^5$ realizations of the set of 9 ratios. We use that to compute the 9$\times$9 covariance of the ratios, which is shown in Fig.~\ref{fig:ratios_cov} in Appendix~\ref{sec:appendix_covariance}. The number of realizations we use to produce this covariance ($10^5$) is arbitrary, but we have checked that the results do not vary when using a larger number of realizations.
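Schematically, the covariance construction reads as follows (a sketch reusing the \texttt{estimate\_ratio} function above; the index bookkeeping is illustrative):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def ratio_covariance(gt_fid, cov_gt, ratio_slices, n_real=10**5):
    """gt_fid: fiducial model for the full gamma_t data vector;
    cov_gt: its joint theory covariance;
    ratio_slices: per ratio, index arrays (i_num, i_den, i_joint)."""
    draws = rng.multivariate_normal(gt_fid, cov_gt, size=n_real)
    ratios = np.empty((n_real, len(ratio_slices)))
    for s, (i_num, i_den, i_joint) in enumerate(ratio_slices):
        sub_cov = cov_gt[np.ix_(i_joint, i_joint)]
        for d in range(n_real):
            ratios[d, s], _ = estimate_ratio(draws[d, i_num],
                                             draws[d, i_den], sub_cov)
    return np.cov(ratios, rowvar=False)  # 9 x 9 ratio covariance
\end{verbatim}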
\subsection{Independence between small and large scales}\label{sec:independence_between_small_large}
In this section we discuss why the SR likelihood is independent of the 2pt likelihood. We also discuss the independence of the large-scale ratios defined in Section~\ref{sec:small-large-ratios} with respect to the small-scale ones. That independence will allow us to use the large-scale ratio information as validation of the information we get from small-scales.
The correlation of the SR likelihood with the (3$\times$)2pt likelihood will come mostly from the galaxy-galaxy lensing 2pt measurements. Since we do not leave any gap between the minimum scale used for 2pt measurements (6 $h^{-1}$Mpc) and the small-scale ratios, this could in principle be a concern, since the tangential shear is non-local and therefore receives contributions from physical scales in the galaxy-matter correlation function that are below the scale at which it is measured \citep{Baldauf2010, MacCrann_2019, park2020localizing}. However, for the large-scale 2pt galaxy-galaxy lensing used in the DES Y3 3$\times$2pt analysis, we follow the approach of \citet{MacCrann_2019} and marginalize analytically over the unknown enclosed mass, which effectively removes any correlation with scales smaller than the small-scale limit of 6 $h^{-1}$Mpc, ensuring that the information from the 3$\times$2pt measurements is independent from the small-scale ratios used in this work, which use scales smaller than 6 $h^{-1}$Mpc. We call this procedure ``point-mass marginalization''. This point-mass marginalization scheme significantly increases the uncertainties in galaxy-galaxy lensing measurements around 6-8 $h^{-1}$Mpc (see Figure 8 and Section 4.2 in \citealt{y3-gglensing}). Also, see \citet{MacCrann_2019} and \citet{y3-2x2ptbiasmodelling} for a description of the point-mass marginalization implemented in the 3$\times$2pt analysis.
Regarding the large-scale shear ratios used in this work, we will only use scales larger than 8 $h^{-1}$Mpc, to ensure independence from the small-scale ratios using scales smaller than 6 $h^{-1}$Mpc (for the SR-only chains we do not apply the point-mass marginalization, since the ratios are not sensitive, to first approximation, to any enclosed mass). In order to assess the independence of small and large-scale ratios, we estimate the cross-covariance of the small and large-scale ratios, using again $10^5$ realizations of the galaxy-galaxy lensing measurements and deriving small and large-scale ratios for each of them, using the same procedure as in \S\ref{sec:cov}. We ensure that the corresponding $\Delta \chi^2$ due to including or ignoring the cross-covariance is smaller than 0.25. The reasons for this independence relate to the 2 $h^{-1}$Mpc gap left between small and large scales and to the importance of shape noise at these scales, which helps decorrelate different angular bins.
\subsection{Gaussianity of the SR likelihood}\label{sec:performance_estimator}
Because the shear ratios are a non-linear transformation of the galaxy-galaxy lensing measurements, it is important to test the assumption of Gaussianity in the likelihood. We have a number of realizations $s$ of the lensing measurements, drawn from the theory curves and the corresponding covariance; for each of them we have a set of 9 measured ratios, $\{ r \}_s$, together with a 9$\times$9 covariance for those ratios, which we denote $\mathbf{C}_{\{ r \}}$. Importantly, we also have the set of ratios for the noiseless fiducial model used to generate the realizations, denoted by $\{ r \}_0$. Figure \ref{fig:ratios_sim_data} shows the noiseless (or true) ratios from the model, $\{ r \}_0$, together with the estimated mean and standard deviation of the noisy ratios, $\{ r \}_s$. If the likelihood of observing a given set of noisy ratios given a model, $p(\{ r \}_s|\{ r \}_0)$, is Gaussian and given by the covariance $\mathbf{C}_{\{ r \}}$, then the following quantity:
\begin{equation}
\label{eq:chi2s}
\chi^2_s = (\{ r \}_s-\{ r \}_0)^T \: \mathbf{C}_{\{ r \}}^{-1} \: (\{ r \}_s-\{ r \}_0)
\end{equation}
should follow a chi-squared distribution with a number of degrees of freedom equal to the number of ratios (9 in this case), so $\chi^2_s \sim \chi^2(x,\mathrm{ndf}=9)$. Figure \ref{fig:chi2s} shows the agreement between the distribution of $\chi^2_s$ and the expected chi-squared distribution for ratios at small and large scales. This agreement demonstrates that our likelihood for the ratios is Gaussian and, in conjunction with Figure \ref{fig:ratios_sim_data}, it provides validation that our estimator does not suffer from any significant form of noise bias.
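This check is straightforward to script; a sketch with hypothetical array names, which can then be compared against the expected distribution (e.g.~via a Kolmogorov--Smirnov test):
\begin{verbatim}
import numpy as np
from scipy import stats

def chi2_samples(ratios_s, r0, cov_r):
    """Eq. (chi2s) for each noisy realization of the 9 ratios."""
    diff = ratios_s - r0                 # shape (n_real, 9)
    icov = np.linalg.inv(cov_r)
    return np.einsum('si,ij,sj->s', diff, icov, diff)

# KS test of the sampled chi^2 values against chi^2(ndf=9):
# stats.kstest(chi2_samples(ratios_s, r0, cov_r),
#              stats.chi2(df=9).cdf)
\end{verbatim}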
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{plots/plot_chi2s.png}
\caption{Distribution of $\chi^2_s$ from Equation (\ref{eq:chi2s}), for small and large-scale ratios, compared to a chi-squared distribution with a number of degrees of freedom equal to the number of ratios in $\{ r \}_s$. }
\label{fig:chi2s}
\end{figure}
\section{Validation of the model}
\label{sec:model_validation}
In this section we validate our model for the lensing ratios by exploring the impact of several effects that are not included in our fiducial model but are relevant to galaxy-galaxy lensing measurements at small scales (the corresponding validation for other DES Y3 data vectors is performed in \citealt{y3-generalmethods}). The fiducial model is described in \S\ref{sec:model}. The effects we consider are in some cases explored directly at the theory level (e.g.~by changing the input power spectrum) or using ratios measured in realistic $N$-body simulations. All the tests in this section are performed using noiseless simulated data vectors except for the Buzzard case (in \S\ref{sec:validation_buzzard}), which includes noise. For testing purposes, we will analyze the impact of such effects on the ratios, at both small and large scales, but we will also assess their impact on the derived constraints on our model parameters using the same priors we use in the data, which is the most relevant metric. For that, we will perform MCMC runs as described in \S\ref{sec:parameter_inference_methodology} for the various effects under consideration. The priors and allowed ranges of all parameters in our model are described in Table \ref{tab:model_validation_priors} and aim to mimic the configuration used for the final runs on the data, described in \S\ref{sec:results}. A summary of the resulting constraints for each test is included in Figures \ref{fig:summ1} and \ref{fig:ia_model_validation}, while further details are included in each subsection.
\subsection{Fiducial simulated constraints}
For the purpose of comparison and reference, in the figures of this section we also include constraints from the fiducial SR case, where ratios are constructed directly using the input theory model. As we did in the previous section, we use the redshift distributions of the \textsc{redMaGiC} lens sample for the fiducial simulated ratios, although the differences between the \textsc{redMaGiC} and \textsc{MagLim} samples are small in the first three lens bins (see Figure \ref{fig:nzs}). We have generated our fiducial simulated data vector using the best-fit values of the 3$\times$2pt+SR results for the cosmological parameters and the IA and galaxy bias parameters\footnote{Specifically, we use the values of the first 3$\times$2pt+SR \textsc{redMaGiC} unblinded results and the halo-model covariance evaluated at these values.}.
In addition, the fiducial case allows us to determine what parameters are being constrained using the information coming from the ratios. Figure \ref{fig:summ1} shows the constraints on the parameters corresponding to source redshifts, source multiplicative shear biases and lens redshifts. Due to the strong priors imposed on these parameters, no correlations are observed between them and hence we show the marginalized 1-D posteriors. In the fiducial SR case, the ratios improve the constraints on the parameters corresponding to source redshifts, while shear calibration and lens redshifts are not significantly constrained beyond the priors imposed on those parameters (Table~\ref{tab:model_validation_priors}). In detail, for the four source redshift parameters in Figure \ref{fig:summ1}, the posteriors using the ratios improve the prior constraints on $\Delta z_s$ by 12\%, 25\%, 19\% and 8\% respectively for each bin. Furthermore, the ratios are able to place constraints on some of the intrinsic alignments (IA) parameters of our model, for which we do not place Gaussian priors. For IA, out of the 5 parameters in the model, the ratios are most effective at constraining a degeneracy direction between IA parameters $a_1$ and $a_2$, as in Figure \ref{fig:ia_model_validation} (see \citealt{y3-gglensing} or \citealt*{y3-cosmicshear2} for a full description of the IA model used or Section~\ref{sec:model} in this paper for a summary of the most relevant equations). These IA constraints from SR will become important for constraining cosmological parameters when SR is combined with other probes like cosmic shear (see Section \ref{sec:combination}).
\subsubsection{Large-scale ratios (LS)}
One important test that will be used as a direct model validation in the data is the comparison of model posteriors using ratios from small and large scales. This comparison is interesting because the model for galaxy-galaxy lensing is more robust at large scales, and because small and large scales are uncorrelated since they are dominated by shape noise. Therefore, we can compare our fiducial model constraints coming from small-scale ratios to the corresponding constraints from large scales, and a mismatch between these will point out a potential problem in the modeling of small scales. In Figures \ref{fig:summ1} and \ref{fig:ia_model_validation}, we show the model constraints from large-scale ratios, for reference. We will also perform this test for the results on $N$-body simulations in Section~\ref{sec:validation_buzzard}, and directly on the data in Section~\ref{sec:results:SRonly}.
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{plots/summary_model_validation.pdf}
\caption{Summary of the posteriors on the model parameters corresponding to source redshifts, shear calibration and lens redshifts for different SR-only test runs described in Section \ref{sec:model_validation} and combination runs from Section~\ref{sec:combination}. All the above tests are performed using noiseless simulated data vectors except for the Buzzard ones, which include noise. The coloured bands show the 1$\sigma$ prior on each parameter, while the black error bars show 1$\sigma$ posteriors. }
\label{fig:summ1}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{plots/ia_model_validation.pdf}
\caption{This plot summarizes the posteriors on the two intrinsic alignment model parameters that are constrained by the ratios, for different SR only test runs described in Section \ref{sec:model_validation}, using noiseless simulated data vectors. }
\label{fig:ia_model_validation}
\end{figure}
\begin{table*}
\centering
\addtolength{\tabcolsep}{10pt}
\begin{tabular}{l|c|c}
& \textbf{Range} & \textbf{Prior} \\
\rule{0pt}{3ex}Source redshifts $\Delta z_s^j$ & [-0.1,0.1] & $\mathcal{N}$(0, [0.018,0.015,0.011,0.017]) \\
Shear calibration $m^j$ & [-0.1,0.1] & $\mathcal{N}$(0, [0.0091,0.0078,0.0076,0.0076]) \\
Lens redshifts $\Delta z_l^i$ & [-0.05,0.05] & $\mathcal{N}$(0, [0.004,0.003,0.003]) \\
Galaxy bias $b^i$ & [0.8,3.0] & Uniform \\
IA $a_1, a_2, \alpha_1, \alpha_2$ & [-5,5] & Uniform \\
IA bias $b_{\mathrm{TA}}$ & [0,2] & Uniform
\end{tabular}
\addtolength{\tabcolsep}{-10pt}
\caption{Allowed ranges and priors of the model parameters for the chains run in Sections \ref{sec:model_validation} and \ref{sec:combination}. Indices $i$ in the labels refer to the 3 lens redshift bins, and indices $j$ refer to the 4 source redshift bins, all defined in Section \ref{sec:data}. } \label{tab:model_validation_priors}
\end{table*}
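For reference, the priors of Table~\ref{tab:model_validation_priors} can be transcribed directly into code (a sketch using scipy distributions; the parameter names are ours, not pipeline conventions):
\begin{verbatim}
from scipy import stats

priors = {}
for j, s in enumerate([0.018, 0.015, 0.011, 0.017]):
    priors[f"dz_s{j+1}"] = stats.norm(0.0, s)    # source redshifts
for j, s in enumerate([0.0091, 0.0078, 0.0076, 0.0076]):
    priors[f"m{j+1}"] = stats.norm(0.0, s)       # shear calibration
for i, s in enumerate([0.004, 0.003, 0.003]):
    priors[f"dz_l{i+1}"] = stats.norm(0.0, s)    # lens redshifts
for i in range(3):
    priors[f"b{i+1}"] = stats.uniform(0.8, 2.2)  # bias, U[0.8, 3.0]
for p in ["a1", "a2", "alpha1", "alpha2"]:
    priors[p] = stats.uniform(-5, 10)            # IA, U[-5, 5]
priors["b_TA"] = stats.uniform(0, 2)             # IA bias, U[0, 2]

def log_prior(params):
    """Sum of log-prior densities for a dict of parameter values."""
    return sum(priors[k].logpdf(v) for k, v in params.items())
\end{verbatim}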
\subsection{Baryons and non-linear galaxy bias}\label{sec:baryons_and_nlbias}
Hydrodynamical simulations suggest that baryonic effects, specifically the ejection of gas due to feedback energy from active galactic nuclei (AGN), have an impact on the matter distribution at cosmologically relevant scales \citep{Mead_2015}. Such effects may lead to differences in the galaxy-galaxy lensing observable at the small scales considered in this work. In order to test this effect, we model it by rescaling the non-linear matter power spectrum with the baryonic contamination from OWLS (OverWhelmingly Large Simulations project, \citealt{OWLS, vanDaalen11}) as a function of redshift and scale. Specifically, to obtain the baryonic contamination, we compare the power spectrum from the dark matter-only simulation with the power spectrum from the OWLS AGN simulation, following \citet{y3-generalmethods}.
In addition, non-linear galaxy bias effects would potentially produce differences in the ratios that could be unexplained by our fiducial model. We utilize a model for non-linear galaxy bias that has been calibrated using $N$-body simulations and is described in \citet{Pandey_2020,y3-2x2ptbiasmodelling}. In order to test the impact of these effects on the ratios, we produce a set of simulated galaxy-galaxy lensing data vectors including the effects of baryons and non-linear galaxy bias as described above, and produce the corresponding set of shear ratios. Overall we use the same procedure which is used in \citet{y3-generalmethods} to contaminate the fiducial data vector with these effects and propagate this contamination to the ratios. Then, we derive constraints on our model parameters using this new set of ratios, and we show the results in Figures \ref{fig:summ1} and \ref{fig:ia_model_validation}. In those figures we can see the small impact of these effects in our constraints compared to the fiducial case, confirming that baryonic effects and non-linear galaxy bias do not significantly bias our model constraints from the ratios.
\subsection{Halo Occupation Distribution Model} \label{sec:hod}
The Halo Occupation Distribution (HOD, \citealt{COORAY2002}) model provides a principled way of describing the connection between galaxies and their host dark matter halos, and it is capable of describing small-scale galaxy-galaxy lensing measurements at a higher accuracy than the Halofit approach, which is used in our fiducial model for the shear ratios (described in \S\ref{sec:model}). Next we test the differences between HOD and Halofit in the modeling of the ratios, and assess their importance compared to the uncertainties we characterized in Section \ref{sec:measurement}. We perform two tests to assess the robustness of the fiducial Halofit modeling by comparing it to two HOD scenarios: one showing the effect of HOD modeling, with a fixed HOD for each lens redshift bin, and another including HOD evolution within each lens redshift bin. For these tests, we perform the comparisons to a fiducial model without intrinsic alignments and lens magnification, for simplicity and to isolate the effects of HOD modeling compared to Halofit.
For these tests we use the MICE $N$-body simulation, where a DES-like lightcone catalog of \textsc{redMaGiC} galaxies with spatial depth variations matching DES Y3 data is generated (see Section \ref{sec:mice}). Using this catalog, we measure the mean HOD of the galaxies in the five redshift bins (\S\ref{sec:data}) as well as in higher resolution redshift bins with $\delta z \sim 0.02$. Note that these measurements are done using true redshifts of the galaxies, in order to pick up the true redshift evolution. We use these two measurements to predict the galaxy-galaxy lensing signal using a halo model formalism as described in Appendix~\ref{app:HOD}.
The upper panel of Figure \ref{fig:hod} shows the difference between the simulated ratios using the fiducial model and the simulated ratios obtained using a mean HOD model for each redshift bin, for both small and large scales. As expected, the difference for large scales is negligible ($\Delta \chi^2$ = 0.02), since the fiducial Halofit modeling is known to provide an accurate description of galaxy-galaxy lensing at large scales ($>8$ Mpc/$h$). At small scales (between 2 and 6 Mpc/$h$), we see very small deviations of the HOD simulated ratios compared to the fiducial ones ($\Delta \chi^2$ = 0.22 for 9 data points), which do not significantly alter the constraints on the model parameters when using the HOD-derived ratios. It is worth noting that this is not a trivial test since the effect on the tangential shear itself is very significant on these scales, as can be seen in figure~8 from \citet{y3-gglensing}.
The lower panel of Figure \ref{fig:hod} shows the difference of shear ratios produced by using a mean HOD for each redshift bin and an evolving HOD as obtained by high resolution measurements in MICE. We find a residual $\Delta \chi^2$ of 0.04 at small scales (even smaller at large scales) and hence consistent shear ratio estimates. Given the results shown in Figure \ref{fig:hod}, we conclude that non-linearities introduced by HOD evolution within a tomographic redshift bin will not bias our shear ratio estimates.
It is important to note that the HOD tests described in this section correspond to one of our two lens samples, the \textsc{redMaGiC} sample, and that we do not show the equivalent test for the \textsc{MagLim} lens sample. However, having validated this for one of the lens samples, we will test the consistency between the SR constraints obtained with the two lens samples in Section \ref{sec:results}, and also \citet*{y3-cosmicshear1} performs the same validation test in the combination of SR with cosmic shear.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{plots/hod.png}
\caption{Effects of HOD modeling and HOD evolution on the shear ratios, for both small and large angular scales. The error bars show the ratio uncertainties from the same covariance as used in the data. }
\label{fig:hod}
\end{figure}
\subsection{Lens magnification}
The theoretical modeling of the galaxy-galaxy lensing signal, and hence of the lensing ratios used in this work, includes the effects of lens magnification. In the fiducial case, the lens magnification coefficients are fixed to the ones estimated using the \textsc{Balrog} software \citep{y3-balrog} in \citet{y3-2x2ptmagnification}. Here, we test the effect of letting the lens magnification coefficients be free for the SR analysis. In particular, Figures \ref{fig:summ1} and \ref{fig:ia_model_validation} test the effects of that choice (labelled there as ``Free mag'') on the parameters corresponding to lens and source redshifts, shear calibration and intrinsic alignments. No significant biases are observed, and the derived constraints are comparable to the constraints using fixed lens magnification coefficients.
\subsection{Cosmology dependence}\label{sec:cosmology_dependence}
The lensing ratios themselves have very little sensitivity to cosmology. If they help with cosmological inference, it is because they help constrain some of the nuisance parameters that limit the cosmological constraining power. Here we quantify this dependence and show that it is indeed safe to fix the cosmological parameters when running SR-only chains. Despite this weak dependence, the cosmological parameters are left free whenever the SR likelihood is run together with the 2pt likelihoods (i.e.~when combined with cosmic shear and the other 2pt functions), so even the small sensitivity of the ratios to cosmology is properly handled in those runs.
In Fig.~\ref{fig:red_boosts_cosmo} we show how the lensing ratios change as a function of $\Omega_m$. Our fiducial simulated data vector assumes $\Omega_m \simeq 0.35$ and we show that varying that to $\Omega_m = 0.30$ or to $\Omega_m = 0.40$ has very little impact on the ratios, compared with their uncertainties, yielding $\Delta \chi^2 = 0.03, 0.01$, respectively, for 9 data points.
\subsection{Boost factors and IA}\label{sec:boosts}
Boost factors are the measurement correction needed to account for the impact of lens-source clustering on the redshift distributions. When there is lens-source clustering, lenses and sources tend to be closer in redshift than represented by the mean survey redshift distributions that are an input to our model. This effect is scale dependent, being larger at small scales where the clustering is also larger. See Eq.~(4) of \citet{y3-gglensing} for its definition, related to the tangential shear estimator.
We include boost factors as part of our fiducial measurements as detailed in \citet{y3-gglensing} (see their figure~3 for a plot showing the boost factors). However, since boost factors are more sensitive to some effects which are not included in our modeling, such as source magnification, it is useful to test their impact on the ratios. Another effect that we test by checking the impact of the boost factors on the ratios is the contribution of lens-source clustering to intrinsic alignments. The IA term receives contributions from both the alignment of galaxies and the fact that sources cluster around lenses, leading to an excess number of lens-source pairs. We account for this term using the TATT model, but on the smallest scales, roughly below a few Mpc, the TATT model will not sufficiently capture the non-linear clustering and IA \citep{Blazek_2015}. By checking that the impact of the boost factors on the ratios is small, we are also checking that our fiducial TATT model suffices over the scales we use to construct the ratios. In Fig.~\ref{fig:red_boosts_cosmo} we show the difference in the ratios when including or not the boost factor correction and find it has a small impact on the ratios compared with their uncertainty, with $\Delta \chi^2 = 0.16$.
\subsection{Higher-order lensing effects}\label{sec:higher_order}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{plots/reduced_boosts_cosmo.png}
\caption{Impact of different effects on the lensing ratios, including cosmology dependence (see Sec.~\ref{sec:cosmology_dependence}), boost factors (see Sec.~\ref{sec:boosts}) and reduced shear + source magnification (see Sec.~\ref{sec:higher_order}). All these tests use noiseless simulated data vectors, and the error bars show the ratio uncertainties from the same covariance as used in the data. }
\label{fig:red_boosts_cosmo}
\end{figure}
In this section we test the impact of higher-order lensing effects on our model of the ratios, namely the reduced shear approximation and the neglect of source magnification in our model. In order to do that, we propagate to the ratios the model developed and described in detail in \citet{y3-generalmethods} to include the combination of reduced shear and source magnification effects. This model is computed with the \textsc{CosmoLike} library \citep{Krause_2017} using a tree-level bispectrum that in turn is based on the non-linear power spectrum. For the source magnification coefficients, we use the values computed in \citet{y3-2x2ptmagnification}. In \citet{y3-gglensing}, the reduced shear contamination is illustrated for the tangential shear part. Here, we propagate that model to the lensing ratios, showing the small differences they produce on the ratios in Fig.~\ref{fig:red_boosts_cosmo}, with $\Delta \chi^2 = 0.09$ for 9 data points. The reduced shear contamination alone only produces a $\Delta \chi^2 = 0.02$, and therefore most of the change comes from the source magnification part.
\subsection{Validation using $N$-body sims}\label{sec:validation_buzzard}
In this section we have so far considered different physical effects and tested their impact on the ratios at the theory level, for instance changing the input power spectrum used to generate the galaxy-galaxy lensing estimates. Now, instead, we use the Buzzard realistic $N$-body simulations, described in \S\ref{sec:buzzard}, to measure the lensing signal and the ratios which we then analyze using the fiducial model. These simulations are created to mimic the real DES data and hence they implicitly contain several physical effects that could potentially affect the ratios (e.g.~non-linear galaxy bias or redshift evolution of lens properties). For that reason, they constitute a stringent test on the robustness of our model. In addition, the tests in this part will be subject to noise in the measurement of the lensing signal and the ratios, due to shot noise and shape noise in the lensing sample, as opposed to the tests above which were performed with noiseless theoretical ratios. That measurement noise will also propagate into noisier parameter posteriors.
In Figure \ref{fig:summ1} we include the results of the tests using $N$-body simulations, named SR Buzzard for the fiducial small-scale ratios and SR Buzzard LS for the large-scale SR test. The results are in line with the other tests in this Section, showing the robustness of the SR constraints also on $N$-body simulations (considering the fact that the Buzzard constraints include noise in the measurements, as stated above). In addition, due to the fact that there are no intrinsic alignments in Buzzard, and the fact that lens magnification is not known precisely, we do not show IA or magnification constraints from the Buzzard run.
\section{Combination with other probes and effect on cosmological constraints}
\label{sec:combination}
In the previous section we explored the constraining power of the lensing ratios defined in this work and we validated their usage by demonstrating their robustness against several effects in their modeling. However, in the DES Y3 cosmological analysis, lensing ratios will be used in combination with other probes. For photometric galaxy surveys, the main large-scale structure and weak lensing observables at the two-point level are
galaxy clustering (galaxy-galaxy), galaxy-galaxy lensing (galaxy-shear) and cosmic shear
(shear-shear), which combined are referred to as 3$\times$2pt.
In this section, we will explore the constraining power of ratios when combined with such probes in DES, with the galaxy-galaxy lensing probe using larger scales compared to the lensing ratios.
When used by themselves, lensing ratios have no significant constraining power on cosmological parameters; however, when combined with other probes, they can help constrain cosmology through the constraints they provide on nuisance parameters such as source mean redshifts or intrinsic alignments (IA). Next we show simulated results on the impact of adding SR to the three 2pt functions used in the DES Y3 cosmological analysis. We analyze the improvement in the different nuisance parameters but also directly in cosmological parameters.
\subsection{SR impact on cosmic shear}
Cosmic shear, or simply 1$\times$2pt, measures the correlated distortion in the shapes of distant galaxies due to gravitational lensing by the large-scale structure in the Universe. It is sensitive to both the growth rate and the expansion history of the Universe, and independent of galaxy bias. Here we explore the constraining power of DES Y3 cosmic shear in combination with SR using simulated data. For that, we run MCMC chains where we explore cosmological parameters and the nuisance parameters corresponding to source galaxies, such as intrinsic alignments, source redshift calibration and multiplicative shear biases. Also, when using SR in combination with 1$\times$2pt, we sample over lens redshift calibration and galaxy bias parameters for the three redshift bins included when building the ratios, even though the posteriors on the galaxy bias are unconstrained. We make this choice to be fully consistent with the tests we have performed in the previous section, but the results are consistent if we fix the galaxy bias parameters. For the lens redshift calibration parameters, we use the same priors detailed in the previous section.
The effect of adding SR to 1$\times$2pt is shown in Figure \ref{fig:1x2sim} for cosmological and IA parameters and in Figure \ref{fig:summ1} for the other nuisance parameters. For source redshift parameters, SR improves the constraints of all four source bins, by 9\%, 13\%, 14\% and 2\%, respectively. Most importantly, SR significantly helps improve the constraints on cosmology, by about $25\%$ on $S_8$ and $3\%$ on $\Omega_m$ (see Table \ref{tab:cosmo_sim}). From Figure \ref{fig:1x2sim}, it is apparent that the improvement in cosmology comes mostly from a major improvement in constraining the amplitudes of the IA modeling. The effect on the other IA parameters is shown in Appendix~\ref{sec:app_ia}.
\begin{table}
\centering
\setlength{\tabcolsep}{5pt}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{ccc}
\hline
& $\Delta \Omega_m$ & $\Delta S_8$ \\
\hline
1$\times$2pt & $-0.057^{+0.077}_{-0.038}$ & $-0.005^{+0.026}_{-0.030}$ \\
1$\times$2pt + SR & $-0.050^{+0.078}_{-0.034}$ & $0.002^{+0.024}_{-0.018}$ \\
2$\times$2pt & $0.008^{+0.028}_{-0.046}$ & $-0.019^{+0.044}_{-0.027}$ \\
2$\times$2pt + SR & $-0.002^{+0.035}_{-0.037}$ & $-0.006^{+0.031}_{-0.037}$ \\
3$\times$2pt & $-0.006^{+0.038}_{-0.021}$ & $0.003^{+0.013}_{-0.022}$ \\
3$\times$2pt + SR & $0.011^{+0.018}_{-0.038}$ & $-0.006^{+0.021}_{-0.013}$ \\
\hline
\end{tabular}
\caption{Impact of SR on cosmological constraints using simulated DES Y3 data. The table shows parameter differences with respect to the true values for the simulated data, which are $\Omega_m = 0.350$ and $S_8 = 0.768$. \label{tab:cosmo_sim}}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{plots/1x2_sim.png}
\caption{Simulated likelihood analysis showing the constraints on cosmological parameters $S_8$ and $\Omega_m$ and intrinsic alignments parameters $a_1^{IA}$ and $a_2^{IA}$ from cosmic shear only (1$\times$2pt) and cosmic shear and lensing ratios (1$\times$2pt + SR). }
\label{fig:1x2sim}
\end{figure}
\subsection{SR impact on galaxy clustering and galaxy-galaxy lensing}
The combination of galaxy clustering and galaxy-galaxy lensing, also named 2$\times$2pt, is a powerful observable as it breaks the degeneracies between cosmological parameters and galaxy bias. When using SR in combination with 2$\times$2pt, there is no need to sample over additional parameters, and we use the same priors detailed in the previous section.
When we add SR to the DES Y3 2$\times$2pt combination there is a modest improvement in constraining power for cosmological parameters, by about $4\%$ on $S_8$ and $3\%$ on $\Omega_m$ (see Table \ref{tab:cosmo_sim}). This improvement is smaller than for the cosmic shear case because the 2pt galaxy-galaxy lensing measurements already provide IA information, which makes the change in the IA parameters smaller when we add SR, as shown in Appendix~\ref{sec:app_ia}. In Figure \ref{fig:summ1} we show the impact of adding SR on the other nuisance parameters: for source redshift parameters, SR improves the constraints of the first three source bins by 9\%, 14\% and 4\%, respectively, and there is also a modest improvement in the remaining nuisance parameters.
\subsection{SR impact on 3$\times$2pt}
A powerful and robust way to extract cosmological information from imaging galaxy surveys involves the full combination of the three two-point functions, in what is now the standard in the field, referred to as a 3$\times$2pt analysis. This combination helps constrain systematic effects that influence each probe differently. When using SR in combination with 3$\times$2pt, there is no need to sample over additional parameters, and we use the same priors detailed in the previous section.
The effect of adding SR to the DES Y3 3$\times$2pt analysis is similar to the 2$\times$2pt case. For cosmological parameters, there is an improvement in constraining power of about $3\%$ on $S_8$ and $5\%$ on $\Omega_m$ (see Table \ref{tab:cosmo_sim}). In Figure \ref{fig:summ1} we show the impact on the other nuisance parameters. For example, for source redshift parameters, SR improves the constraints of the second source bin by more than 15\%. The effect on the IA parameters is shown in Appendix~\ref{sec:app_ia}.
\section{Results with the DES Y3 data}
\label{sec:results}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{plots/deltazs_data.pdf}
\caption{Mean source redshift constraints from a shear-ratio only chain (SR), with a flat uninformative prior, in comparison with the results from the combination of the alternative calibration methods of SOMPZ + WZ, and the final combined results of SOMPZ+WZ+SR on data using the \textsc{redMaGiC} sample.}
\label{fig:data_deltaz}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{plots/source_pzs_ls.png}
\caption{Mean source redshift constraints from different shear-ratio (SR) configurations, using the DES Y3 redshift prior (SOMPZ + WZ), comparing the fiducial small-scale constraints with those of the large-scale SR (LS), for the two independent lens galaxy samples, \textsc{redMaGiC} and \textsc{MagLim}.}
\label{fig:data_deltaz_ls}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{plots/ia_data.png}
\caption{Data constraints on the two intrinsic alignment amplitude model parameters from different DES Y3 SR data configurations, comparing the fiducial small-scale constraints with those from the large-scale SR (LS), for the two independent lens galaxy samples, \textsc{redMaGiC} and \textsc{MagLim}. }
\label{fig:ia_data}
\end{figure}
In this section we will present and validate the constraints on model parameters derived from SR in the DES Y3 data sample. We compute SR for our two different lens samples, and for small and large scales. For a given set of ratios, $\{ r \}$, we use the following expression for computing the signal-to-noise:
\begin{equation}
S/N = \sqrt{\{ r \}^T \: \mathbf{C}_{\{ r \}}^{-1} \: \{ r \} - \mathrm{ndf}},
\end{equation}
where ndf is the number of degrees of freedom, which equals the number of ratios (9 in our case), and $\mathbf{C}$ is the covariance described in \S\ref{sec:cov}. Using the data ratios $\{ r \}_s$ (presented in Figure \ref{fig:ratios_sim_data}), we estimate, for the fiducial small-scale ratios, a combined $S/N \sim 84$ for the \textsc{MagLim} sample ($S/N \sim 60$ for the \textsc{redMaGiC} sample), and for large-scale ratios we estimate $S/N \sim 42$ for the \textsc{MagLim} sample ($S/N \sim 38$ for the \textsc{redMaGiC} sample).
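For concreteness, a minimal \texttt{numpy} sketch of this estimator is given below; the function and argument names are ours and merely illustrative, not part of the DES pipeline.
\begin{verbatim}
import numpy as np

def shear_ratio_snr(ratios, cov):
    # S/N = sqrt(r^T C^{-1} r - ndf), with ndf = number of ratios
    r = np.asarray(ratios)
    chi2 = r @ np.linalg.solve(np.asarray(cov), r)
    return np.sqrt(chi2 - r.size)
\end{verbatim}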
We will broadly split the section into two parts: first, we will describe the model parameter constraints from SR alone, specifically by looking at their impact on source redshift and IA parameters, and study their robustness by using two different lens samples (\textsc{redMaGiC} and \textsc{MagLim}) and large-scale ratios for validation. Then, we will study the impact of SR in improving model parameter constraints when combined with other probes such as cosmic shear, galaxy clustering and galaxy-galaxy lensing in the DES Y3 data sample. It is worth pointing out that SR will also be used for correlations between DES data and CMB lensing, although these will not be discussed here (see \citealt{y3-cmblensing1,y3-cmblensing2} for the usage of SR in combination with CMB lensing). For the results in this Section, unless we specifically note that we free some of these priors, we use the DES Y3 priors on the parameters of our model, summarized in Table~\ref{tab:data_priors}.
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{plots/last_fig_cs.pdf}
\caption{Differences in the DES Y3 data constraints on cosmological parameters $S_8$ and $\Omega_m$ with the addition of SR to the cosmic shear measurement ($1\times2$pt). The left panel shows the case with SR using \textsc{redMaGiC} lenses, while the right panel shows the results with SR using the \textsc{MagLim} lens sample. All the contours in the plot have been placed at the origin of the $\Delta \Omega_m$ -- $\Delta S_8$ plane, so that the plot shows only the impact of SR in the size of contours but does not include information on the central values of parameters or shifts between them. The impact of SR is especially relevant for cosmic shear, with improvements in constraining $S_8$ of 31\% for \textsc{redMaGiC} SR and 25\% for \textsc{MagLim} SR. }
\label{fig:cosmo_data}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{plots/ia_3x2.pdf}
\caption{DES Y3 data constraints on the two intrinsic alignment amplitude model parameters from the full combination of probes (3$\times$2pt) with and without the addition of SR, for the \textsc{redMaGiC} and \textsc{MagLim} lens samples. The crossing of the dashed black lines shows the no IA case.}
\label{fig:ia_3x2}
\end{figure}
\subsection{DES Y3 SR-only constraints}\label{sec:results:SRonly}
\begin{table*}
\centering
\addtolength{\tabcolsep}{10pt}
\begin{tabular}{l|c|c}
& \textbf{Range} & \textbf{ Data Priors} \\
\rule{0pt}{3ex}Source redshifts $\Delta z_s^j$ & [-0.1, 0.1] & $\mathcal{N}$(0, [0.018,0.015,0.011,0.017]) \\
Shear calibration $m^j$ & [-0.1, 0.1] & $\mathcal{N}$([-0.0063, -0.0198, -0.0241, -0.0369], [0.0091,0.0078,0.0076,0.0076]) \\
Lens redshifts \textsc{redMaGiC} $\Delta z_l^i$ & [-0.05, 0.05] & $\mathcal{N}$([0.006, 0.001, 0.004], [0.004,0.003,0.003]) \\
Lens redshifts \textsc{MagLim} $\Delta z_l^i$ & [-0.05, 0.05] & $\mathcal{N}$([-0.009, -0.035, -0.005], [0.007,0.011,0.006]) \\
Lens redshifts \textsc{MagLim} $\sigma_{z_l}^i$ & [0.1, 1.9] & $\mathcal{N}$([0.975, 1.306, 0.87], [0.062,0.093,0.054]) \\
Galaxy bias $b^i$ & [0.8, 3.0] & Uniform \\
IA $a_1, a_2, \alpha_1, \alpha_2$ & [-5, 5] & Uniform \\
IA bias TA & [0, 2] & Uniform
\end{tabular}
\addtolength{\tabcolsep}{-10pt}
\caption{Allowed ranges and priors of the model parameters for the DES Y3 data chains run in Section \ref{sec:results}. Indices $i$ in the labels refer to the 3 lens redshift bins, and indices $j$ refer to the 4 source redshift bins, all defined in Section \ref{sec:model}. } \label{tab:data_priors}
\end{table*}
Now we will present and discuss the model parameter constraints from SR in DES Y3. By showing and comparing the constraints from various SR configurations, including ratios from two independent lens samples, we will also assess the robustness of these results. As we demonstrated in \S\ref{sec:model_validation}, SR provides constraints on model parameters corresponding to source redshifts and intrinsic alignments, so we will focus on those for this part.
Figure \ref{fig:data_deltaz} presents the SR constraints on the source redshift parameters of our model using a number of SR configurations. The left panel shows the SR constraints coming from the independent \textsc{redMaGiC} and \textsc{MagLim} galaxy samples, using flat, uninformative priors on the source redshift parameters and the priors described in Table~\ref{tab:data_priors} for the rest of the parameters. In that panel, for comparison, we also include the source redshift prior used in the DES Y3 analysis, which comes from a combination of photometric information (SOMPZ) and clustering redshifts (WZ), and which is presented in detail in \citet*{y3-sompz} and \citet*{y3-sourcewz} and shown here in Table~\ref{tab:data_priors}. At this point we can compute the tension between SR redshift constraints and the redshift prior, for the 4 source redshift bins combined, and we obtain a $0.97\sigma$ tension ($p$-value $= 0.33$) for the \textsc{redMaGiC} SR, and $2.08\sigma$ ($p$-value $= 0.04$) for the \textsc{MagLim} SR (numbers computed following \citealt*{Lemos2020}). Since these $p$-values are above our threshold for consistency ($p$-value $>$ 0.01), the SR constraints are in agreement with the prior and we can proceed to use the redshift prior for the SR likelihoods (see \citealt*{y3-sompz} for a review of the complete DES Y3 weak lensing source calibration, and \citealt{y3-cosmicshear1} for SR consistency checks in combination with cosmic shear). Regarding the mild tension between SR and the redshift prior for the \textsc{MagLim} sample, we refer to \citet{y3-3x2ptkp} for results demonstrating the consistency of the cosmological constraints with and without SR.
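The quoted significances follow the usual two-tailed Gaussian convention for converting $p$-values into $\sigma$ values. A minimal sketch of that conversion (using \texttt{scipy}, and leaving aside the full parameter-difference machinery of \citealt*{Lemos2020}) is:
\begin{verbatim}
from scipy.stats import norm

def pvalue_to_nsigma(p):
    # two-tailed Gaussian-equivalent significance
    return norm.isf(p / 2.0)

# pvalue_to_nsigma(0.33) ~ 0.97; pvalue_to_nsigma(0.04) ~ 2.05
# (small differences w.r.t. the quoted values are rounding)
\end{verbatim}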
The right panel in Figure \ref{fig:data_deltaz} shows the \textsc{redMaGiC} and \textsc{MagLim} SR constraints when using the DES Y3 redshift prior, so we can visualize the improvement that SR brings to the prior redshift constraints. Specifically, for \textsc{redMaGiC} SR, the constraints on the 4 source redshift $\Delta z$ parameters are improved by 11\%, 28\%, 25\% and 14\% with respect to the prior, and for \textsc{MagLim}, by 14\%, 38\%, 25\% and 17\%, respectively for the 4 redshift parameters (the percentage numbers quote the reduction in the width of parameter posteriors compared to the prior). Note that within the DES Y3 3$\times$2pt setup, we do not use the SR information in this way, i.e., by using the redshift prior that comes from the combination of SOMPZ + WZ + SR, but instead we add the shear-ratio likelihood to the 3$\times$2pt likelihood as written in Eq.~(\ref{eq:combined_likelihood}). In this way, the SR information is not only constraining redshifts but also the rest of the parameters of the model, especially the parameters modeling IA.
The agreement between the SR constraints coming from our two independent lens samples, \textsc{redMaGiC} and \textsc{MagLim}, demonstrates the robustness of SR source redshift constraints and provides excellent validation for the methods used in this work. In addition, in Figure \ref{fig:data_deltaz_ls} we show the large-scale SR constraints for both lens samples, compared to the small-scale, fiducial SR constraints. As discussed in Sections \ref{sec:measurement} and \ref{sec:model_validation}, large-scale SR provides independent validation of the small-scale SR constraints. Because of the larger angular scales used in their calculation, they are less sensitive to effects such as non-linear galaxy bias or the impact of baryons (although we have demonstrated that small-scale ratios are also not significantly impacted by these in Sec.~\ref{sec:baryons_and_nlbias}). At this point we can again compute the tension between fiducial and large-scale SR, and we obtain a $0.1\sigma$ tension for the \textsc{redMaGiC} case, and $0.3\sigma$ for \textsc{MagLim} (numbers computed following \citealt*{Lemos2020}). This agreement between the fiducial small-scale SR and the large-scale versions, again for two independent lens galaxy samples, provides additional evidence of the robustness of the results in this work.
In addition to the source redshift parameters, the other parameters that are significantly constrained by SR are the amplitudes of the IA model, $a_1^{\mathrm{IA}}$ and $a_2^{\mathrm{IA}}$ (see \S \ref{sec:model} for a description). Importantly, such constraints have a strong impact in tightening cosmological constraints when combined with other probes, such as cosmic shear (see Figure \ref{fig:1x2sim} and \citealt{y3-cosmicshear1}). Figure \ref{fig:ia_data} shows the IA amplitude constraints from \textsc{redMaGiC} and \textsc{MagLim} SR, both using small-scales (fiducial) and using large-scale (LS) SR as validation. The agreement between these constraints demonstrates the robustness of the IA SR constraints, which play an important role when combined with cosmic shear and other 2pt functions.
\subsection{Impact of SR on $1,2,3 \times 2$pt in the DES Y3 cosmological analysis}
The SR methods described in this work are part of the fiducial DES Y3 cosmological analysis, and hence the SR measurements are used as an additional likelihood to the other 2pt functions. In this part we will describe the impact of adding the SR likelihood in constraining our cosmological model when combined with other probes such as cosmic shear, galaxy clustering and galaxy-galaxy lensing. We will do so by comparing the cosmological constraints with and without SR and describing the resulting gains in constraining power. Note that we will focus on the gains from the combination with SR, and we will not present or discuss the cosmological results or their implications. For that presentation and discussion, see the cosmic shear results in two companion papers \citet*{y3-cosmicshear1,y3-cosmicshear2}, the results from galaxy clustering and galaxy-galaxy lensing in \citet*{y3-2x2ptbiasmodelling,y3-2x2ptaltlensresults, y3-2x2ptmagnification} and the combination of all probes in \citet*{y3-3x2ptkp}.
Figure \ref{fig:cosmo_data} shows the impact of SR in constraining cosmological parameters $\Omega_m$ and $S_8$ when combined with cosmic shear data ($1\times2$pt) in the DES Y3 data, for the SR cases with the \textsc{redMaGiC} and \textsc{MagLim} lens samples. The contours in the plot have all been placed at the origin of the $\Delta \Omega_m$ -- $\Delta S_8$ plane, so that the plot shows only the impact of SR in the size of contours but does not include information on the central values. The gain in constraining power from the addition of the SR likelihood in the data is in line with our findings on noiseless simulated data (\S\ref{sec:combination} and Figure \ref{fig:1x2sim}), pointing to the robustness of the simulated analysis in reproducing the DES Y3 data. As in the simulated case, SR is especially important in constraining cosmology from cosmic shear, where it improves the constraints on $S_8$ by 31\% for \textsc{redMaGiC} SR and 25\% for \textsc{MagLim} SR. As explored in Figure \ref{fig:1x2sim}, the improvement comes especially from the ability of SR to place constraints on IA, which then breaks important degeneracies with cosmology in cosmic shear. Given this role of SR as a key component of cosmic shear in constraining IA and cosmology, it is worth exploring the role played by SR in cosmic shear for different models of IA. In this paper we have assumed the fiducial IA model (TATT) for all tests. For a study showing how SR impacts the cosmic shear constraints using different IA models, see the DES Y3 cosmic shear companion papers \citet{y3-cosmicshear1} and \citet*{y3-cosmicshear2}. In summary, we find that SR improves IA constraints from cosmic shear for all IA models. When using TATT (which is a five-parameter IA model) or NLA with redshift evolution (which is a three-parameter IA model), SR significantly helps constrain $S_8$ by breaking degeneracies with IA. For the simplest NLA model without redshift evolution (which is a one-parameter IA model), SR significantly tightens the IA constraints from cosmic shear, but the impact on $S_8$ is reduced due to the milder degeneracies between IA and $S_8$ in that case.
In the combination with the other 2pt functions in the data, galaxy clustering and galaxy-galaxy lensing, the improvement coming from SR is less pronounced, as expected from our simulated analysis, but nonetheless for the full combination of probes (3$\times$2pt) the addition of SR results in the DES Y3 data constraints on $S_8$ being tighter by 10\% for the \textsc{redMaGiC} case and 5\% for \textsc{MagLim} (see also \citealt{y3-3x2ptkp}). The SR improvement on the $3\times2$pt cases is slightly higher than what we found in the simulated case (\S\ref{sec:combination}), which may be due to the fact that the covariance used in the data is different from the simulated case, as it was re-computed at the best-fit cosmology after the $3\times2$pt unblinding (see \citealt{y3-3x2ptkp} for more details).
Besides the impact of SR in cosmological constraints, it is important to stress that SR does significantly impact parameter posteriors on source redshifts and intrinsic alignments in all cases, even in the cases where the improvements in cosmology are mild or negligible. In particular, for the full combination of probes ($3\times2$pt), the cases with SR present tighter posteriors on the second and third source redshift parameters (the ones SR constrains best) by around 14\%, for both \textsc{redMaGiC} and \textsc{MagLim}. In addition, SR does have an impact on the posteriors on IA for the full combination of probes, as can be seen in Figure \ref{fig:ia_3x2}. In that plot, one can see how the addition of SR pulls the IA constraints closer to the no IA case (marked in the plot as a cross of dashed lines), for both lens samples. This is consistent with Figure \ref{fig:ia_data}, where the SR data is shown to be consistent with the case of no IA, although in a degeneracy direction between IA parameters $a_1$ and $a_2$, and it demonstrates the impact of SR in the IA constraints even for the cases where such impact does not translate into a strong impact on cosmological constraints. For a discussion of IA in the context of the $3\times2$pt analysis, see \citet{y3-3x2ptkp}.
\section{Summary and conclusions}
\label{sec:conclusions}
The Dark Energy Survey Y3 3$\times$2pt cosmological analysis, much like other cosmological analyses of photometric galaxy surveys, relies on the combination of three measured 2pt correlation functions, namely galaxy clustering, galaxy-galaxy lensing and cosmic shear. The usage of these measurements to constrain cosmological models, however, is limited to large angular scales because of the uncertainties coming from modeling baryonic effects and galaxy bias. Consequently, a significant amount of information at smaller angular scales typically remains unused in these analyses.
In this work we have developed a method to use small-scale ratios of galaxy-galaxy lensing measurements to place constraints on parameters of our model, particularly those corresponding to source redshift calibration and intrinsic alignments. These ratios of galaxy-galaxy lensing measurements, evaluated around the same lens bins, are also known as lensing or shear ratios (SR). SR has often been used in the past under the assumption that it is a purely geometrical probe. In this work, instead, we use the full modeling of the galaxy-galaxy lensing measurements involved, including the corresponding integration over the power spectrum and the contributions from intrinsic alignments and lens weak lensing magnification. Taking ratios of small-scale galaxy-galaxy lensing measurements sharing the same lens bins reduces their sensitivity to non-linearities in galaxy bias or baryonic effects, but retains crucial and independent information about redshift calibration and intrinsic alignments, which we fully exploit with this approach.
We perform extensive testing of the small-scale shear ratio modeling by characterizing the impact of different effects, such as the inclusion of baryonic physics in the power spectrum, non-linear galaxy biasing, the halo occupation distribution (HOD) modeling and lens magnification. We test the shear ratio constraints on realistic $N$-body simulations of the DES data. We find that shear ratios as defined in this work are not significantly affected by any of those effects. We also use simulated data to study the constraining power of SR given the DES Y3 modeling choices and priors, and find it to be most sensitive to the calibration of source redshift distributions and to the amplitude of intrinsic alignments (IA) in our model. In particular, the sensitivity to IA makes SR very important when combined with other probes such as cosmic shear, and SR can significantly improve the constraints on cosmological parameters by breaking their degeneracies with IA.
The shear ratios presented in this work are utilized as an additional contribution to the likelihood for cosmic shear and the full 3$\times$2pt in the fiducial DES Y3 cosmological analysis. The SR constraints play an important role in improving the constraining power of the analysis. Assuming four source galaxy redshift bins, SR improves the constraints on the mean redshift parameters of those bins by more than 30\% in some bins. For the cosmic shear analysis, presented in detail in two companion papers \citet{y3-cosmicshear1} and \citet*{y3-cosmicshear2}, we find that SR improves the constraints on the amplitude of matter fluctuations $S_8$ by up to 31\%, due to the tightening of redshift posteriors but especially due to the breaking of degeneracies with intrinsic alignments (IA). For the full combination of probes in DES Y3 data, the so-called $3\times2$pt analysis \citep{y3-3x2ptkp}, SR improves the constraints on $S_8$ by up to 10\%. Even for the cases where the improvements in cosmology are mild, SR brings significant and independent information to the characterization of IA and source redshifts. In addition, when adding CMB lensing information to the DES Y3 analysis, \citet{y3-cmblensing1,y3-cmblensing2} find significant improvements with the addition of SR to the cross-correlation between shear and CMB lensing convergence maps, again due to constraints on intrinsic alignments.
One of the main advantages of SR is its weak sensitivity to modeling uncertainties at small scales, compared to the pure galaxy-galaxy lensing measurements. For that reason, for any choice of angular scale cuts in galaxy-galaxy lensing, there will always be smaller angular scales available for SR, and these scales can be used to extract independent information. In addition, SR naturally places constraints on the mean redshift of redshift distributions, complementing other methods (such as clustering cross-correlations) that place constraints on the \textit{shapes} of these distributions. Even more importantly, SR provides redshift calibration even when the redshift distributions do not overlap with the spectroscopic samples used for clustering cross-correlations, providing valuable independent information.
For these reasons, we conclude that SR can become a standard addition to cosmological analyses from imaging surveys using cosmic shear and 3$\times$2-like data. Furthermore, if redshift and intrinsic alignment modeling does not improve as quickly as the quality and quantity of the data increase, then SR may become even more important for cosmological inference than it has been in DES Y3. This scenario seems likely given that source redshift priors did not improve significantly between Y1 and Y3, while the model of intrinsic alignments moved from 3 to 5 parameters from Y1 to Y3, thus becoming more complicated. Therefore, it seems plausible that SR will become an important tool to characterize these two uncertainties in our model, and hence become even more relevant at improving the cosmological constraints in future analyses.
\section*{Acknowledgments}
CS is supported by grants AST-1615555 from the U.S. National Science Foundation, and DE-SC0007901 from the U.S. Department of Energy (DOE). JP is supported by DOE grant DE-SC0021429. Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain,
the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing
Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago,
the Center for Cosmology and Astro-Particle Physics at the Ohio State University,
the Mitchell Institute for Fundamental Physics and Astronomy at Texas A\&M University, Financiadora de Estudos e Projetos,
Funda{\c c}{\~a}o Carlos Chagas Filho de Amparo {\`a} Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cient{\'i}fico e Tecnol{\'o}gico and
the Minist{\'e}rio da Ci{\^e}ncia, Tecnologia e Inova{\c c}{\~a}o, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey.
The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energ{\'e}ticas,
Medioambientales y Tecnol{\'o}gicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh,
the Eidgen{\"o}ssische Technische Hochschule (ETH) Z{\"u}rich,
Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ci{\`e}ncies de l'Espai (IEEC/CSIC),
the Institut de F{\'i}sica d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universit{\"a}t M{\"u}nchen and the associated Excellence Cluster Universe,
the University of Michigan, the National Optical Astronomy Observatory, the University of Nottingham, The Ohio State University, the University of Pennsylvania, the University of Portsmouth,
SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, Texas A\&M University, and the OzDES Membership Consortium.
The DES data management system is supported by the National Science Foundation under Grant Numbers AST-1138766 and AST-1536171.
The DES participants from Spanish institutions are partially supported by MINECO under grants AYA2015-71825, ESP2015-88861, FPA2015-68048, SEV-2012-0234, SEV-2016-0597, and MDM-2015-0509,
some of which include ERDF funds from the European Union. IFAE is partially funded by the CERCA program of the Generalitat de Catalunya.
Research leading to these results has received funding from the European Research
Council under the European Union's Seventh Framework Program (FP7/2007-2013) including ERC grant agreements 240672, 291329, and 306478.
We acknowledge support from the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020.
This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
Based in part on observations at Cerro Tololo Inter-American Observatory,
National Optical Astronomy Observatory, which is operated by the Association of
Universities for Research in Astronomy (AURA) under a cooperative agreement with the National
Science Foundation.
\section*{Acknowledgements}
AA is at the d\'epartement d'informatique de l'ENS, \'Ecole normale sup\'erieure, UMR CNRS 8548, PSL Research University, 75005 Paris, France, and INRIA Sierra project-team. The authors would like to acknowledge support
from the \textit{data science} joint research initiative with the \textit{fonds AXA pour la recherche} and Kamet Ventures. TK acknowledges funding from the CFM-ENS chaire {\em les mod\`eles et sciences des donn\'ees}. Finally, the authors would like to thank Raphael Berthier for fruitful discussions.
\bibliographystyle{apalike}
\section{Additional Numerical Results}
\subsection{Genome assembly experiment (detailed)}\label{ssec:supp-genome-assembly-exp}
Here we provide background about the application of seriation methods for genome assembly and details about our experiment.
We used the {\it E. coli} reads from \citet{Loman15}. They were sequenced with the Oxford Nanopore Technologies (ONT) MinION device. The sequencing experiment is detailed in \url{http://lab.loman.net/2015/09/24/first-sqk-map-006-experiment} where the data is available.
The overlaps between raw reads were computed with minimap2 \citep{li2018minimap2} with the ONT preset.
The similarity matrix was constructed directly from the output of minimap2. For each pair $(i,j)$ of reads where an overlap was found, we set the similarity value to the number of matching bases (and to zero where no overlap is found). The only preprocessing on the matrix is a threshold to remove short overlaps. In practice we set the threshold to the median of the similarity values, {\it i.e.}, we discard the lower half of the overlaps.
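A minimal sketch of this construction is given below; it assumes minimap2's PAF output, whose tenth column holds the number of matching bases, and the function and variable names are ours.
\begin{verbatim}
import numpy as np
from scipy.sparse import coo_matrix

def similarity_from_paf(paf_path, read_index):
    # read_index: dict mapping read names to integers 0..n-1
    rows, cols, vals = [], [], []
    with open(paf_path) as fh:
        for line in fh:
            f = line.split("\t")
            q, t, nmatch = f[0], f[5], int(f[9])
            if q == t:
                continue
            i, j = read_index[q], read_index[t]
            rows += [i, j]; cols += [j, i]; vals += [nmatch] * 2
    n = len(read_index)
    # duplicate (i, j) entries, if any, are summed by coo_matrix
    S = coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
    S.data[S.data < np.median(S.data)] = 0  # drop short overlaps
    S.eliminate_zeros()
    return S
\end{verbatim}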
We then apply our method to the similarity matrix.
The Laplacian embedding is shown in Figure~\ref{subfig:ecoli-3d-embedding}. We used no scaling of the Laplacian, as it corrupted the filamentary structure of the embedding, but we normalized the similarity matrix beforehand with $W \gets D^{-1} W D^{-1}$ as in \citet{coifman2006diffusion}.
The resulting similarity matrix $S$ computed from the embedding in Algorithm~\ref{algo:Recovery_order_filamentary} is disconnected. Then, Algorithm~\ref{alg:spectral} is applied in each connected component, yielding a fragmented assembly with correctly ordered contigs, as shown in Figure~\ref{subfig:ecoli-partial-orderings}.
However, while the new similarity matrix $S$ is disconnected, the input matrix $A$ is connected. The fragmentation happened while ``scanning'' the nearest neighbors from the embedding.
One can therefore merge the ordered contigs using the input matrix $A$ as follows.
For each contig, we check from $A$ if there are non-zero overlaps between reads at the edges of that contig and some reads at the edges of another contig. If so, we merge the two contigs, and repeat the procedure until there is only one contig left (or until there is no more overlaps between edges from any two contigs). This procedure is detailed in Algorithm~\ref{alg:connect_clusters}.
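A simplified sketch of this merging step is shown below; it only checks tail-to-head overlaps (the full procedure also handles reversed contigs), assumes a dense $A$, and uses our own helper names.
\begin{verbatim}
import numpy as np

def merge_contigs(contigs, A, n_edge=10):
    # contigs: list of lists of read indices, each already ordered
    contigs = [list(c) for c in contigs]
    merged = True
    while merged and len(contigs) > 1:
        merged = False
        for a in range(len(contigs)):
            for b in range(a + 1, len(contigs)):
                tail = contigs[a][-n_edge:]
                head = contigs[b][:n_edge]
                if A[np.ix_(tail, head)].sum() > 0:  # edge overlap
                    contigs[a] += contigs.pop(b)
                    merged = True
                    break
            if merged:
                break
    return contigs
\end{verbatim}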
Note that the {\it E. coli} genome is circular, therefore computing the layout should be cast as a \ref{eqn:circ-seriation} problem, as illustrated in Figure~\ref{fig:circular-genome-illustration}. Yet, since $S$ is disconnected, the genome is fragmented into subsequences, and we end up using Algorithm~\ref{alg:spectral} in each connected component, {\it i.e.}, solving an instance of \ref{eqn:seriation} in each contig.
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=.4\textwidth]{images/circularGenomeOverlaps.pdf}
\caption{
Illustration of why the overlap-based similarity matrix of an ideal circular genome should be $\circR$.
}
\label{fig:circular-genome-illustration}
\end{center}
\end{figure}
\begin{figure}[hbt]
\begin{center}
\begin{subfigure}[htb]{0.45\textwidth}
\includegraphics[width=\textwidth]{images/ecoli_3d_embedding.png}
\caption{\kLE{3}}\label{subfig:ecoli-3d-embedding}
\end{subfigure}
\begin{subfigure}[htb]{0.45\textwidth}
\includegraphics[width=\textwidth]{images/ecoli_partial_orderings.pdf}
\caption{partial orderings}\label{subfig:ecoli-partial-orderings}
\end{subfigure}
\caption{
3d Laplacian embedding from {\it E. coli} reads overlap-based similarity matrix (\ref{subfig:ecoli-3d-embedding}), and the orderings found in each connected component of the new similarity matrix created in Algorithm~\ref{algo:Recovery_order_filamentary} (\ref{subfig:ecoli-partial-orderings}) versus the position of the reads within a reference genome obtained by mapping the reads to the reference with minimap2 (all plotted on the same plot for compactness).
The orderings have no absolute direction, {\it i.e.}, $(1,2,\ldots,n)$ and $(n,n-1,\ldots,1)$ are equivalent, which is why the lines in subfigure~\ref{subfig:ecoli-partial-orderings} can be either diagonal or anti-diagonal.
}
\label{fig:ecoli-exp-supp}
\end{center}
\vskip -0.2in
\end{figure}
The experiment can be reproduced with the material on \url{https://github.com/antrec/mdso}, and the parameters easily varied.
Overall, the final ordering found is correct when the threshold on the overlap-based similarity is sufficiently high (in practice, above $\sim40\%$ of the non-zero values).
When the threshold increases or when the number of nearest neighbors $k$ from Algorithm~\ref{algo:Recovery_order_filamentary} decreases, the new similarity matrix $S$ gets more fragmented, but the final ordering remains the same after the merging procedure.
\subsection{Gain over baseline}
In Figure~\ref{fig:exp-main-banded}, each curve is the mean of the Kendall-tau (a score directly interpretable by practitioners) over many different Gaussian random realizations of the noise. The shaded confidence interval represents the area in which the true expectation lies with high probability, not the area in which the score of an experiment with a given noisy similarity would be. As mentioned in the main text, the shaded interval is the standard deviation divided by $\sqrt{n_{\text{exps}}}$, since otherwise the plot was hard to read, as the intervals crossed each other.
Practitioners may use this method in a one-shot fashion ({\it e.g.}, on one particular data set). In that case, it would be more relevant to show directly the standard deviation on the plots, which is the same as what is displayed, but multiplied by 10. Then, the confidence intervals of the baseline and our method would cross each other. However, the standard deviation across all experiments is due to the fact that some instances are harder to solve than others. On the difficult instances, both the baseline and our method perform more poorly than on easy instances.
However, we also computed the gain over the baseline, {\it i.e.}, the difference in score between our method and the baseline, for each experiment, and it is almost always positive: our method almost always beats the baseline even though the confidence intervals cross each other.
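For reference, the per-realization scores behind these curves can be aggregated as in the following sketch (the helper name is ours; the absolute value accounts for orderings being defined up to reversal):
\begin{verbatim}
import numpy as np
from scipy.stats import kendalltau

def mean_kt_and_stderr(true_perm, estimated_perms):
    taus = []
    for p in estimated_perms:
        tau, _ = kendalltau(true_perm, p)
        taus.append(abs(tau))  # orderings defined up to reversal
    taus = np.array(taus)
    return taus.mean(), taus.std(ddof=1) / np.sqrt(len(taus))
\end{verbatim}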
\subsection{Numerical results with KMS matrices}\label{ssec:app-KMS-main-exp}
In Figure~\ref{fig:exp-main-KMS} we show the same plots as in Section~\ref{sec:numerical_results} but with matrices $A$ such that $A_{ij} = e^{-\alpha |i-j|}$, with $\alpha=0.1$ and $n=500$.
\begin{figure}[hbt]
\begin{center}
\begin{subfigure}[htb]{0.45\textwidth}
\includegraphics[width=\textwidth]{images/kendall-tau-vs-noise-for-several-dims-typematrix_LinearStrongDecrease.pdf}
\caption{Linear KMS}\label{subfig:exps-lin-KMS}
\end{subfigure}
\begin{subfigure}[htb]{0.45\textwidth}
\includegraphics[width=\textwidth]{images/kendall-tau-vs-noise-for-several-dims-typematrix_CircularStrongDecrease.pdf}
\caption{Circular KMS}\label{subfig:exps-circ-KMS}
\end{subfigure}
\caption{
K-T scores for Linear (\ref{subfig:exps-lin-KMS}) and Circular (\ref{subfig:exps-circ-KMS}) Seriation for noisy observations of KMS, Toeplitz, matrices, displayed for several values of the dimension parameter of the \kLE{d}.
}
\label{fig:exp-main-KMS}
\end{center}
\vskip -0.2in
\end{figure}
\subsection{Sensitivity to parameter $k$ (number of neighbors)}\label{ssec:app-k-sensitivity}
Here we show how our method performs when we vary the parameter $k$ (number of neighbors at step 4 of Algorithm~\ref{algo:Recovery_order_filamentary}), both for linearly decreasing, banded matrices, $A_{ij} = \max \left( c - |i-j|, 0 \right)$ (as in Section~\ref{sec:numerical_results}), in Figure~\ref{fig:exp-ksensitivity-banded}, and for matrices $A$ such that $A_{ij} = e^{-\alpha |i-j|}$, with $\alpha=0.1$ (Figure~\ref{fig:exp-ksensitivity-KMS}).
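These synthetic matrices are straightforward to generate; a sketch with our own helper names:
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz

def banded_matrix(n, c):
    # A_ij = max(c - |i-j|, 0)
    return toeplitz(np.maximum(c - np.arange(n), 0))

def kms_matrix(n, alpha=0.1):
    # A_ij = exp(-alpha * |i-j|)
    return toeplitz(np.exp(-alpha * np.arange(n)))

def add_noise(A, ampl, seed=0):
    # symmetric Gaussian perturbation of amplitude `ampl`
    N = np.random.default_rng(seed).normal(size=A.shape)
    return A + ampl * (N + N.T) / 2
\end{verbatim}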
\begin{figure}[hbt]
\begin{center}
\begin{subfigure}[htb]{0.45\textwidth}
\includegraphics[width=\textwidth]{images/kendall-tau-vs-noise-for-several-k_nns-typematrix_LinearBanded.pdf}
\caption{Linear Banded}\label{subfig:exps-lin-banded-ksensitivity}
\end{subfigure}
\begin{subfigure}[htb]{0.45\textwidth}
\includegraphics[width=\textwidth]{images/kendall-tau-vs-noise-for-several-k_nns-typematrix_CircularBanded.pdf}
\caption{Circular Banded}\label{subfig:exps-circ-banded-ksensitivity}
\end{subfigure}
\caption{
K-T scores for Linear (\ref{subfig:exps-lin-banded-ksensitivity}) and Circular (\ref{subfig:exps-circ-banded-ksensitivity}) Seriation for noisy observations of banded, Toeplitz, matrices, displayed for several values of the number of nearest neighbors $k$, with a fixed value of the dimension of the \kLE{d}, $d=10$.
}
\label{fig:exp-ksensitivity-banded}
\end{center}
\vskip -0.2in
\end{figure}
\begin{figure}[hbt]
\begin{center}
\begin{subfigure}[htb]{0.45\textwidth}
\includegraphics[width=\textwidth]{images/kendall-tau-vs-noise-for-several-k_nns-typematrix_LinearStrongDecrease.pdf}
\caption{Linear KMS}\label{subfig:exps-lin-KMS-ksensitivity}
\end{subfigure}
\begin{subfigure}[htb]{0.45\textwidth}
\includegraphics[width=\textwidth]{images/kendall-tau-vs-noise-for-several-k_nns-typematrix_CircularStrongDecrease.pdf}
\caption{Circular KMS}\label{subfig:exps-circ-KMS-ksensitivity}
\end{subfigure}
\caption{
K-T scores for Linear (\ref{subfig:exps-lin-KMS-ksensitivity}) and Circular (\ref{subfig:exps-circ-KMS-ksensitivity}) Seriation for noisy observations of KMS, Toeplitz, matrices, displayed for several values of the number of nearest neighbors $k$, with a fixed value of the dimension of the \kLE{d}, $d=10$.
}
\label{fig:exp-ksensitivity-KMS}
\end{center}
\vskip -0.2in
\end{figure}
We observe that the method performs roughly equally well for $k$ in a range from 5 to 20, and that the performance drops when $k$ gets too large, around $k=30$. This can be interpreted as follows. When $k$ is too large, the assumption that the points in the embedding are locally fitted by a line no longer holds.
Note also that in practice, for small values of $k$, {\it e.g.}, $k=5$, the new similarity matrix $S$ can be disconnected, and we have to resort to the merging procedure described in Algorithm~\ref{alg:connect_clusters}.
\subsection{Sensitivity to the normalization of the Laplacian}\label{ssec:scaling-sensitivity}
We performed experiments to compare the performances of the method with the default Laplacian embedding \eqref{eqn:kLE} (red curve in Figure~\ref{fig:exp-appendix-scaling} and~\ref{fig:exp-appendix-scaling-20}) and with two possible normalized embeddings \eqref{eqn:scaled-kLE} (blue and black curve).
We observed that with the default \ref{eqn:kLE}, the performance first increases with $d$, and then collapses when $d$ gets too large.
The CTD scaling (blue) has the same issue, as the first $d$ eigenvalues are roughly of the same magnitude in our settings.
The heuristic scaling \ref{eqn:scaled-kLE} with $\alpha_k = 1/\sqrt{k}$ that damps the higher dimensions yields better results when $d$ increases, with a plateau rather than a collapse when $d$ gets large.
We interpret these results as follows.
With the \eqref{eqn:kLE}, Algorithm \ref{algo:Recovery_order_filamentary}, line \ref{line:find_direction}, treats all dimensions of the embedding equally. However, the curvature of the embedding tends to increase with the dimension (for a $\circR$ matrix, the period of the cosines increases linearly with the dimension). The filamentary structure is less smooth and hence more sensitive to noise in high dimensions, which is why the results are improved by damping the high dimensions (or using a reasonably small value for $d$).
\begin{figure}[hbt]
\begin{center}
\begin{subfigure}[htb]{0.45\textwidth}
\includegraphics[width=\textwidth]{images/kendall-tau-vs-noise-for-several-scalings-typematrix_LinearBanded.pdf}
\caption{Linear Banded}\label{subfig:exps-lin-banded-scaling}
\end{subfigure}
\begin{subfigure}[htb]{0.45\textwidth}
\includegraphics[width=\textwidth]{images/kendall-tau-vs-noise-for-several-scalings-typematrix_CircularBanded.pdf}
\caption{Circular Banded}\label{subfig:exps-circ-banded-scaling}
\end{subfigure}
\caption{
Mean of Kendall-Tau for Linear (\ref{subfig:exps-lin-banded-scaling}) and Circular (\ref{subfig:exps-circ-banded-scaling}) Seriation for noisy observations of banded, Toeplitz, matrices, displayed for several scalings of the Laplacian embedding, with a fixed number of neighbors $k=15$ and number of dimensions $d=10$ in the \kLE{d}.}
\label{fig:exp-appendix-scaling}
\end{center}
\vskip -0.2in
\end{figure}
\begin{figure}[hbt]
\begin{center}
\begin{subfigure}[htb]{0.45\textwidth}
\includegraphics[width=\textwidth]{images/kendall-tau-vs-noise-for-several-scalings-typematrix_LinearBanded-dim_20.pdf}
\caption{Linear Banded}\label{subfig:exps-lin-banded-scaling-20}
\end{subfigure}
\begin{subfigure}[htb]{0.45\textwidth}
\includegraphics[width=\textwidth]{images/kendall-tau-vs-noise-for-several-scalings-typematrix_CircularBanded-dim_20.pdf}
\caption{Circular Banded}\label{subfig:exps-circ-banded-scaling-20}
\end{subfigure}
\caption{
Mean of Kendall-Tau for Linear (\ref{subfig:exps-lin-banded-scaling-20}) and Circular (\ref{subfig:exps-circ-banded-scaling-20}) Seriation for noisy observations of banded, Toeplitz, matrices, displayed for several scalings of the Laplacian embedding, with a fixed number of neighbors $k=15$ and number of dimensions $d=20$ in the \kLE{d}.}
\label{fig:exp-appendix-scaling-20}
\end{center}
\vskip -0.2in
\end{figure}
\subsection{Illustration of Algorithm~\ref{algo:Recovery_order_filamentary}}\label{ssec:illustrations}
Here we provide some visual illustrations of the method with a circular banded matrix.
Given a matrix $A$ (Figure~\ref{subfig:exps-matshow-noisy}), Algorithm~\ref{algo:Recovery_order_filamentary} computes the \kLE{d}. The \kLE{2} is plotted for visualization in Figure~\ref{subfig:exps-embedding-noisy}.
Then, it creates a new matrix $S$ (Figure~\ref{subfig:exps-matshow-clean}) from the local alignment of the points in the \kLE{d}.
Finally, from the new matrix $S$, it computes the \kLE{2} (Figure~\ref{subfig:exps-embedding-clean}), on which it runs the simple method from Algorithm~\ref{alg:circular-2d-ordering}.
Figure~\ref{fig:exp-noisy-illustration} and \ref{fig:exp-clean-illustration} give a qualitative illustration of how the method behaves compared to the basic Algorithm~\ref{alg:circular-2d-ordering}.
\begin{figure}[hbt]
\begin{center}
\begin{subfigure}[htb]{0.35\textwidth}
\includegraphics[width=\textwidth]{images/mat_noisy_circular_ampl_6.png}
\caption{Noisy circular banded matrix $A$}\label{subfig:exps-matshow-noisy}
\end{subfigure}
\begin{subfigure}[htb]{0.45\textwidth}
\includegraphics[width=\textwidth]{images/embedding_noisy_circular_ampl_6.pdf}
\caption{Noisy \kLE{2}}\label{subfig:exps-embedding-noisy}
\end{subfigure}
\caption{
Noisy Circular Banded matrix (\ref{subfig:exps-matshow-noisy}) and associated 2d Laplacian embedding (\ref{subfig:exps-embedding-noisy}).
}
\label{fig:exp-noisy-illustration}
\end{center}
\vskip -0.2in
\end{figure}
\begin{figure}[hbt]
\begin{center}
\begin{subfigure}[htb]{0.35\textwidth}
\includegraphics[width=\textwidth]{images/mat_cleaned_circular_ampl_6.png}
\caption{Matrix $S$ from Algorithm~\ref{algo:Recovery_order_filamentary}}\label{subfig:exps-matshow-clean}
\end{subfigure}
\begin{subfigure}[htb]{0.45\textwidth}
\includegraphics[width=\textwidth]{images/embedding_cleaned_circular_ampl_6.pdf}
\caption{New \kLE{2}}\label{subfig:exps-embedding-clean}
\end{subfigure}
\caption{
Matrix $S$ created through Algorithm~\ref{algo:Recovery_order_filamentary} (\ref{subfig:exps-matshow-clean}), and associated 2d-Laplacian embedding (\ref{subfig:exps-embedding-clean}).
}
\label{fig:exp-clean-illustration}
\end{center}
\vskip -0.2in
\end{figure}
\section{Introduction}
The seriation problem seeks to recover a latent ordering from similarity information. We typically observe a matrix measuring pairwise similarity between a set of $n$ elements and assume they have a serial structure, {\it i.e.}~they can be ordered along a chain where the similarity between elements decreases with their distance within this chain. In practice, we observe a random permutation of this similarity matrix, where the elements are not indexed according to that latent ordering. Seriation then seeks to find that global latent ordering using only (local) pairwise similarity.
Seriation was introduced in archaeology to find the chronological order of a set of graves. Each grave contained artifacts, assumed to be specific to a given time period. The number of common artifacts between two graves defines their similarity, resulting in a chronological ordering where contiguous graves belong to the same time period.
It also has applications in, {\it e.g.}, envelope reduction \citep{Barn95}, bioinformatics \citep{atkins1996physical,cheema2010thread,jones2012anges} and DNA sequencing \citep{Meid98,Garr11,recanati2016spectral}.
In some applications, the latent ordering is circular. For instance, in {\it de novo} genome assembly of bacteria, one has to reorder DNA fragments subsampled from a circular genome.
In biology, a cell evolves according to a cycle: a newborn cell passes through diverse states (growth, DNA replication, {\it etc.}) before dividing into two newborn cells, hence closing the loop. Problems of interest then involve collecting cycle-dependent data on a population of cells at various, unknown stages of the cell-cycle, and trying to order the cells according to their cell-cycle stage.
Such data include gene-expression \citep{liu2017reconstructing}, or DNA 3D conformation data \citep{liu2018unsupervised}.
In planar tomographic reconstruction, the shape of an object is inferred from projections taken at unknown angles between 0 and $2\pi$. Reordering the angles then enables the tomographic reconstruction \citep{coifman2008graph}.
The main structural hypothesis on similarity matrices related to seriation is the concept of $R$-matrix, which we introduce below, together with its circular counterpart.
\begin{definition}\label{def:R-mat}
We say that $A\in\symm_n$ is an R-matrix (or Robinson matrix) iff it is symmetric and satisfies
$A_{i,j} \leq A_{i,j+1}$ and $A_{i+1,j} \leq A_{i,j}$ in the lower triangle, where $1\leq j < i \leq n$.
\end{definition}
\begin{definition}\label{def:circ-R-mat}
We say that $A\in\symm_n$ is a circular R-matrix iff it is symmetric and satisfies, for all $i \in [n]$,
$\left(A_{ij}\right)_{j=1}^{i}$ and $\left(A_{ij}\right)_{i=j}^{n}$ are unimodal: they decrease to a minimum and then increase.
\end{definition}
Here $\symm_n$ is the set of real symmetric matrices of dimension~$n$.
Definition~\ref{def:R-mat} states that when moving away from the diagonal in a given row or column of $A$, the entries are non-increasing, whereas
in Def~\ref{def:circ-R-mat}, the non-increase is followed by a non-decrease. For instance, the proximity matrix of points embedded on a circle follows Def~\ref{def:circ-R-mat}.
Figure~\ref{fig:seriation} displays examples of such matrices.
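Definition~\ref{def:R-mat} can be checked mechanically; the following sketch (our own helper) verifies the two monotonicity conditions on the lower triangle:
\begin{verbatim}
import numpy as np

def is_R_matrix(A, tol=1e-12):
    A = np.asarray(A)
    n = len(A)
    # A[i, j] <= A[i, j+1] for j < i: rows non-decreasing
    # up to the diagonal
    rows_ok = all((np.diff(A[i, :i + 1]) >= -tol).all()
                  for i in range(n))
    # A[i+1, j] <= A[i, j]: columns non-increasing below it
    cols_ok = all((np.diff(A[j:, j]) <= tol).all()
                  for j in range(n))
    return rows_ok and cols_ok
\end{verbatim}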
\begin{figure}[hbt]
\begin{center}
\begin{subfigure}[htb]{0.3\textwidth}
\includegraphics[width=\textwidth]{images/R-mat.pdf}
\caption{R-matrix}\label{subfig:Rmat}
\end{subfigure}
\begin{subfigure}[htb]{0.3\textwidth}
\includegraphics[width=\textwidth]{images/circ-R-mat.pdf}
\caption{circular R-matrix}\label{subfig:circRmat}
\end{subfigure}
\begin{subfigure}[htb]{0.3\textwidth}
\includegraphics[width=\textwidth]{images/R-mat-perm.pdf}
\caption{permuted R-matrix}\label{subfig:2DRperm}
\end{subfigure}
\caption{
From left to right, R-matrix (\ref{subfig:Rmat}), circular R-matrix (\ref{subfig:circRmat}), and a randomly permuted observation of a R-matrix (\ref{subfig:2DRperm}). Seriation seeks to recover (\ref{subfig:Rmat}) from its permuted observation (\ref{subfig:2DRperm}).
}
\label{fig:seriation}
\end{center}
\vskip -0.2in
\end{figure}
In what follows, we write $\cR^n$ (resp., $\circR^n$) the set of R (resp., circular-R) matrices of size $n$, and $\cP_n$ the set of permutations of $n$ elements. A permutation can be represented by a vector $\pi$ (lower case) or a matrix $\Pi \in \{0,1\}^{n \times n}$ (upper case) defined by $\Pi_{ij} = 1$ iff $\pi(i) = j$, and $\pi = \Pi \pi_{Id}$ where $\pi_{Id} = (1, \dots, n)^T$. We refer to both representations by $\cP_n$ and may omit the subscript $n$ whenever the dimension is clear from the context.
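In code, the two representations and the permuted matrix $\PAP$ relate as in the following 0-indexed sketch:
\begin{verbatim}
import numpy as np

def perm_matrix(pi):
    # Pi[i, pi[i]] = 1, so that Pi @ (0, ..., n-1) = pi
    n = len(pi)
    Pi = np.zeros((n, n), dtype=int)
    Pi[np.arange(n), pi] = 1
    return Pi

# Pi A Pi^T has entries A[pi[i], pi[j]], i.e.
# perm_matrix(pi) @ A @ perm_matrix(pi).T == A[np.ix_(pi, pi)]
\end{verbatim}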
We say that $A \in \symm_n$ is pre-$\cR$ (resp., pre-$\circR$) if there exists a permutation $\Pi \in \cP$ such that the matrix $\PAP$ (whose entry $(i,j)$ is $A_{\pi(i),\pi(j)}$) is in $\cR$ (resp., $\circR$). Given such $A$, Seriation seeks to recover this permutation $\Pi$,
\begin{align}
\begin{array}{lllll}
\find & \Pi \in \cP & \st & \PAP \in \cR & \tag{Linear Seriation}\label{eqn:seriation}
\end{array}
\\
\begin{array}{lllll}
\find & \Pi \in \cP & \st & \PAP \in \circR & \tag{Circular Seriation}\label{eqn:circ-seriation}
\end{array}
\end{align}
A widely used method for \ref{eqn:seriation} is a spectral relaxation based on the graph Laplacian of the similarity matrix.
It transposes Spectral Clustering \citep{von2007tutorial} to the case where we wish to infer a latent ordering rather than a latent clustering on the data.
Roughly speaking, both methods embed the elements on a line and associate a coordinate $f_i \in {\mathbb R}$ to each element $i \in [n]$. Spectral clustering addresses a graph-cut problem by grouping these coordinates into two clusters. Spectral ordering \citep{Atkins} addresses \ref{eqn:seriation} by sorting the $f_i$.
Most Spectral Clustering algorithms actually use a Laplacian embedding of dimension $d>1$, denoted \kLE{d} in the following. Latent cluster structure is assumed to be enhanced in the \kLE{d}, and the k-means algorithm \citep{macqueen1967some,hastie2009unsupervised} seamlessly identifies the clusters from the embedding.
In contrast, Spectral Ordering is restricted to $d=1$ by the sorting step (there is no total order relation on ${\mathbb R}^d$ for $d>1$).
Still, the latent linear structure may emerge from the \kLE{d}, if the points are distributed along a curve.
Also, for $d=2$, it may capture the circular structure of the data and allow for solving \ref{eqn:circ-seriation}.
One must then recover a (circular) ordering of points lying in a $1D$ manifold (a curve, or filament) embedded in ${\mathbb R}^d$.
In Section~\ref{sec:related-work}, we review the Spectral Ordering algorithm and the Laplacian Embedding used in Spectral Clustering. We mention graph-walk perspectives on this embedding and how this relates to dimensionality reduction techniques. Finally, we recall how these perspectives relate the discrete Laplacian to continuous Laplacian operators, providing insights about the curve structure of the Laplacian embedding through the spectrum of the limit operators. These asymptotic results were used to infer circular orderings in a tomography application in e.g. \citet{coifman2008graph}.
In Section~\ref{sec:theory}, we evidence the filamentary structure of the Laplacian Embedding, and provide theoretical guarantees about the Laplacian Embedding based method for \ref{eqn:circ-seriation}.
We then propose a method in Section~\ref{sec:results} to leverage the multidimensional Laplacian embedding in the context of \ref{eqn:seriation} and \ref{eqn:circ-seriation}.
We eventually present numerical experiments to illustrate how the spectral method gains in robustness by using a multidimensional Laplacian embedding.
\section{Related Work}\label{sec:related-work}
\subsection{Spectral Ordering for Linear Seriation}
\ref{eqn:seriation} can be addressed with a spectral relaxation of the following combinatorial problem,
\begin{align}
\tag{2-SUM} \label{eqn:2sum}
\begin{array}{llll}
\mbox{minimize} & \sum_{i,j=1}^n
A_{ij} |\pi_i - \pi_j|^2 & \st & \pi \in \cP_n\\
\end{array}
\end{align}
Intuitively, the optimal permutation compensates high $A_{ij}$ values with small $|\pi_i - \pi_j|^2$, thus laying similar elements nearby.
For any $f = \left(f(1),\ldots,f(n)\right)^T \in {\mathbb R}^n$, the objective of \ref{eqn:2sum} can be written as a quadratic form (with simple algebra using the symmetry of $A$, see \citet{von2007tutorial}),
\begin{align}\label{eqn:2sum-is-quadratic}
{\textstyle \sum_{i,j=1}^n} A_{ij} |f(i) - f(j)|^2 = 2\, f^T L_A f
\end{align}
where $L_A \triangleq \mathop{\bf diag}(A\mathbf 1)-A$ is the graph-Laplacian of $A$.
From~\eqref{eqn:2sum-is-quadratic}, $L_A$ is positive-semi-definite for $A$ having non-negative entries, and $\mathbf 1 = (1, \ldots, 1)^T$ is an eigenvector associated to $\lambda_0 = 0$.
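This identity is easy to verify numerically, as in the sketch below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((6, 6)); A = (A + A.T) / 2  # symmetric, >= 0
f = rng.random(6)
L = np.diag(A.sum(axis=1)) - A             # graph Laplacian
lhs = sum(A[i, j] * (f[i] - f[j]) ** 2
          for i in range(6) for j in range(6))
assert np.isclose(lhs, 2 * f @ L @ f)
\end{verbatim}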
The spectral method drops the constraint $\pi \in \cP_n$ in \ref{eqn:2sum} and enforces only norm and orthogonality constraints, $\|\pi\|=1$, $\pi^T \mathbf 1 = 0$, to avoid the trivial solutions $\pi = 0$ and $\pi \propto \mathbf 1$, yielding,
\begin{align}
\tag{Relax. 2-SUM}\label{eqn:2sum-relaxed}
\begin{array}{llll}
\mbox{minimize} & f^T L_A f & \st & \| f \|_2 = 1 \:,\: f^T \mathbf 1 = 0.\\
\end{array}
\end{align}
This is an eigenvalue problem on $L_A$ solved by $f_{1}$, the eigenvector associated to $\lambda_1 \geq 0$, the second smallest eigenvalue of $L_A$. If the graph defined by $A$ is connected (which we assume in what follows), then $\lambda_1 > 0$.
From $f_{1}$, one can recover a permutation by sorting its entries.
The spectral relaxation of \ref{eqn:2sum} is summarized in Algorithm~\ref{alg:spectral}.
For pre-$\cR$ matrices, \ref{eqn:seriation} is equivalent to \ref{eqn:2sum} \citep{Fogel}, and can be solved with Algorithm~\ref{alg:spectral} \citep{Atkins}, as stated in Theorem~\ref{thm:spectral-solves-seriation-preR}.
\begin{algorithm}[H]
\footnotesize
\caption{Spectral ordering \citep{Atkins}}\label{alg:spectral}
\begin{algorithmic} [1]
\REQUIRE Connected similarity matrix $A \in \mathbb{R}^{n \times n}$
\STATE Compute Laplacian $L_A=\mathop{\bf diag}(A\mathbf 1)-A$
\STATE Compute the eigenvector $f_{1}$ associated with the second smallest eigenvalue of $L_A$
\STATE Sort the values of $f_{1}$
\ENSURE Permutation $\sigma : f_{1}({\sigma(1)}) \leq
\ldots \leq f_{1}({\sigma(n)})$
\end{algorithmic}
\end{algorithm}
\begin{theorem}[\cite{Atkins}]\label{thm:spectral-solves-seriation-preR}
If $A \in \symm_n$ is a pre-$\cR$ matrix, then Algorithm~\ref{alg:spectral} recovers a permutation $\Pi \in \cP_n$ such that $\PAP \in \cR^n$, {\it i.e.}, it solves \ref{eqn:seriation}.
\end{theorem}
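A compact implementation of Algorithm~\ref{alg:spectral} with dense linear algebra reads:
\begin{verbatim}
import numpy as np

def spectral_ordering(A):
    # assumes A symmetric, non-negative and connected
    L = np.diag(A.sum(axis=1)) - A
    _, eigvecs = np.linalg.eigh(L)  # ascending eigenvalues
    fiedler = eigvecs[:, 1]         # second smallest eigenvalue
    return np.argsort(fiedler)      # sort the entries of f_1
\end{verbatim}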
\subsection{Laplacian Embedding}
Let $0=\lambda_0 < \lambda_1 \leq \ldots \leq \lambda_{n-1}$,
$\Lambda \triangleq \mathop{\bf diag}\left(\lambda_0, \ldots, \lambda_{n-1} \right)$,
$\Phi = \left( \mathbf 1, f_{1}, \ldots, f_{n-1} \right)$,
be the eigendecomposition of $L_A = \Phi \Lambda \Phi^T$.
Algorithm~\ref{alg:spectral} embeds the data in 1D through the eigenvector $f_{1}$ (\kLE{1}).
For any $d < n$,
$\Phi^{(d)} \triangleq \left( f_{1}, \ldots, f_{d} \right)$
defines a $d$-dimensional embedding (\kLE{d})
\begin{align}\tag{\kLE{d}}\label{eqn:kLE}
{\boldsymbol y}_i = \left(f_{1}(i), f_{2}(i), \ldots, f_{d}(i)\right)^T \in {\mathbb R}^d
, \: \: \: \mbox{for} \: \: \: i=1,\ldots,n.
\end{align}
which solves the following embedding problem,
\begin{align}
\tag{Lap-Emb} \label{eqn:lapl-embed}
\begin{array}{ll}
\mbox{minimize} & \sum_{i,j=1}^n
A_{ij} \|{\boldsymbol y}_i - {\boldsymbol y}_j\|_{2}^2 \\
\st & \tilde{\Phi}=\left( {\boldsymbol y}_{1}^T, \ldots, {\boldsymbol y}_{n}^T \right)^T \in {\mathbb R}^{n \times d} \:,\: \tilde{\Phi}^T \tilde{\Phi} = \mathbf{I}_d \:,\: \tilde{\Phi}^T \mathbf 1_n = {\mathbf 0}_d
\end{array}
\end{align}
Indeed, like in \eqref{eqn:2sum-is-quadratic}, the objective of \ref{eqn:lapl-embed} can be written, up to a constant factor, as $\mathop{\bf Tr} \left( \tilde{\Phi}^T L_A \tilde{\Phi} \right)$ (see \citet{belkin2003laplacian} for a similar derivation).
The \ref{eqn:2sum} intuition still holds: the \kLE{d} lays similar elements nearby, and dissimilar ones apart, in ${\mathbb R}^d$.
Other dimensionality reduction techniques such as Multidimensional scaling (MDS) \citep{kruskal1978multidimensional}, kernel PCA \citep{scholkopf1997kernel}, or Locally Linear Embedding (LLE) \citep{roweis2000nonlinear} could be used as alternatives to embed the data in a way that intuitively preserves the latent ordering. However, guided by the generalization of Algorithm~\ref{alg:spectral} and theoretical results that follow, we restrict ourselves to the Laplacian embedding.
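Computing the \kLE{d} only requires the bottom eigenvectors of $L_A$; a minimal sketch:
\begin{verbatim}
import numpy as np

def laplacian_embedding(A, d):
    # rows of the output are y_i = (f_1(i), ..., f_d(i))
    L = np.diag(A.sum(axis=1)) - A
    _, eigvecs = np.linalg.eigh(L)  # ascending eigenvalues
    return eigvecs[:, 1:d + 1]      # skip the constant vector
\end{verbatim}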
\subsubsection{Normalization and Scaling}
Given the weighted adjacency matrix $W \in \symm_n$ of a graph, its Laplacian reads $L = D - W$, where $D = \mathop{\bf diag}(W\mathbf 1)$ has diagonal entries $d_{i} = \sum_{j=1}^n W_{ij}$ (degree of $i$).
Normalizing $W_{ij}$ by
$\sqrt{d_i d_j}$ or $d_i$
leads to the normalized Laplacians,
\begin{align}
\begin{array}{rll}
\Lsym & = & D^{-1/2} L D^{-1/2} = \mathbf{I} - D^{-1/2} W D^{-1/2}\\
\Lrw & = & D^{-1} L = \mathbf{I} - D^{-1} W \\
\end{array}
\end{align}
They correspond to graph-cut normalization (normalized cut or ratio cut). Moreover,
$\Lrw$ has a Markov chain interpretation, where
a random walker at node $i$ jumps to node $j$ between times $t$ and $t+1$ with transition probability $P_{ij} \triangleq W_{ij}/d_i$.
It has connections with diffusion processes, governed by the heat equation $\frac{\partial \cH_t}{\partial t} = - \Delta \cH_t$, where $\Delta$ is the Laplacian operator, $\cH_t$ the heat kernel, and $t$ is time \citep{qiu2007clustering}.
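Both normalized Laplacians are direct to form; a sketch for a dense $W$:
\begin{verbatim}
import numpy as np

def normalized_laplacians(W):
    d = W.sum(axis=1)                    # node degrees
    L = np.diag(d) - W
    L_sym = L / np.sqrt(np.outer(d, d))  # D^{-1/2} L D^{-1/2}
    L_rw = L / d[:, None]                # D^{-1} L
    return L_sym, L_rw
\end{verbatim}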
These connections lead to diverse Laplacian embeddings backed by theoretical justifications, where the eigenvectors $f^{\text{rw}}_{k}$ of $\Lrw$ are sometimes scaled by decaying weights $\alpha_k$ (thus emphasizing the first eigenvectors),
\begin{align}\tag{\kLE{($\alpha$, d)}}\label{eqn:scaled-kLE}
\tilde{{\boldsymbol y}}_i = \left( \alpha_1 f_{1}^{\text{rw}}(i), \ldots, \alpha_{d} f_{d}^{\text{rw}}(i)\right)^T \in {\mathbb R}^d
, \: \: \: \mbox{for} \: \: \: i=1,\ldots,n.
\end{align}
Laplacian eigenmaps \citep{belkin2003laplacian} is a nonlinear dimensionality reduction technique based on the spectral embedding of $\Lrw$ (\eqref{eqn:scaled-kLE} with $\alpha_k =1$ for all $k$).
Specifically, given points $x_1, \ldots, x_n \in {\mathbb R}^D$, the method computes a heat kernel similarity matrix $W_{ij} = \exp\left(-\| x_i - x_j\|^2/t\right)$ and outputs the first eigenvectors of $\Lrw$ as a lower dimensional embedding.
The choice of the heat kernel is motivated by connections with the heat diffusion process on a manifold, a partial differential equation involving the Laplacian operator.
This method has been successful in many machine learning applications such as semi-supervised classification \citep{belkin2004semi} and search-engine type ranking \citep{zhou2004ranking}. Notably, it provides a global, nonlinear embedding of the points that preserves the local structure.
The commute time distance $\text{CTD}(i,j)$ between two nodes $i$ and $j$ on the graph is the expected time for a random walker to travel from node $i$ to node $j$ and then return.
The full \ref{eqn:scaled-kLE}, with $\alpha_k = (\lambda_k^{\text{rw}})^{-1/2}$ and $d=n-1$, satisfies $\text{CTD}(i,j) \propto \| \tilde{{\boldsymbol y}}_{i} - \tilde{{\boldsymbol y}}_{j} \|^2$. Given the decay of the $\alpha_k$, the \kLE{d} with $d \ll n$ approximately preserves the CTD.
This embedding has been successfully applied to vision tasks, {\it e.g.}, anomaly detection \citep{albano2012euclidean}, image segmentation and motion tracking \citep{qiu2007clustering}.
Another, closely related dimensionality reduction technique is that of diffusion maps \citep{coifman2006diffusion}, where the embedding is derived to preserve diffusion distances, resulting in the \ref{eqn:scaled-kLE}, for $t \geq 0$, $\alpha_{k}(t) = (1 - \lambda_{k}^{\text{rw}})^t$.
\citet{coifman2006diffusion,coifman2008graph} also propose a normalization of the similarity matrix $\tilde{W} \gets D^{-1} W D^{-1}$, to extend the convergence of $\Lrw$ towards the Laplace-Beltrami operator on a curve when the similarity is obtained through a heat kernel on points that are \emph{non-uniformly} sampled along that curve.
Finally, we will use in practice the heuristic scaling $\alpha_k = 1/\sqrt{k}$ to damp high dimensions, as explained in Appendix~\ref{ssec:scaling-sensitivity}.
For a deeper discussion about spectral graph theory and the relations between these methods, see for instance \citet{qiu2007clustering} and \citet{chung2000discrete}.
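To illustrate \ref{eqn:scaled-kLE}, the following sketch (same conventions as above, assuming scipy) computes the eigenvectors of $\Lrw$ via the equivalent symmetric generalized eigenproblem $L f = \lambda D f$ and applies the heuristic damping $\alpha_k = 1/\sqrt{k}$ mentioned above.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def scaled_rw_embedding(W, d):
    # Eigenvectors of L_rw = I - D^{-1} W, obtained from the
    # generalized problem L f = lambda D f, then scaled by 1/sqrt(k).
    D = np.diag(W.sum(axis=1))
    L = D - W
    _, eigvecs = eigh(L, D)                  # ascending eigenvalues
    alpha = 1.0 / np.sqrt(np.arange(1, d + 1))
    return eigvecs[:, 1:d + 1] * alpha       # drop the constant f_0
\end{verbatim}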
\subsection{Link with Continuous Operators}\label{ssec:asymptotic-lap}
In the context of dimensionality reduction, when the data points $x_1, \ldots, x_n \in {\mathbb R}^D$ lie on a manifold $\mathcal{M} \subset {\mathbb R}^D$ of dimension $K \ll D$, the graph Laplacian $L$ of the heat kernel ($W_{ij} = \exp{\left(-\| x_i - x_j\|^2/t\right)}$) used in \citet{belkin2003laplacian} is a discrete approximation of $\Delta_{\mathcal{M}}$, the Laplace-Beltrami operator on $\mathcal{M}$ (a differential operator akin to the Laplace operator, adapted to the local geometry of $\mathcal{M}$).
\citet{singer2006graph} specify the hypothesis on the data and the rate of convergence of $L$ towards $\Delta_{\mathcal{M}}$ when $n$ grows and the heat-kernel bandwidth $t$ shrinks.
\citet{von2005limits} also explore the spectral asymptotics of the spectrum of $L$ to prove consistency of spectral clustering.
This connection with continuous operators gives hints about the Laplacian embedding in some settings of interest for \ref{eqn:seriation} and \ref{eqn:circ-seriation}.
Indeed, consider $n$ points distributed along a curve $\Gamma \subset {\mathbb R}^D$ of length $1$, parameterized by a smooth function $\gamma : {\mathbb R} \rightarrow {\mathbb R}^D$,
$\Gamma = \{ \mathbf{\gamma}(s) \::\: s \in [0,1] \}$, say $x_i = \mathbf{\gamma}(i/n)$. If their similarity measures their proximity along the curve, then the similarity matrix is a circular-R matrix if the curve is closed ($\gamma(0)=\gamma(1)$), and an R matrix otherwise.
\citet{coifman2008graph} motivate a method for \ref{eqn:circ-seriation} with the spectrum of the Laplace-Beltrami operator $\Delta_{\Gamma}$ on $\Gamma$ when $\Gamma$ is a closed curve.
Indeed, $\Delta_{\Gamma}$ is simply the second order derivative with respect to the arc-length $s$, $\Delta_{\Gamma} f(s) = f^{\prime \prime}(s)$ (for $f$ twice continuously differentiable), and its eigenfunctions are given by,
\begin{align}\label{eqn:lapl-beltr-eigenpb}
f^{\prime \prime}(s) = -\lambda f(s).
\end{align}
With periodic boundary conditions, $f(0)=f(1)$, $f^{\prime}(0)=f^{\prime}(1)$, and smoothness assumptions, the first eigenfunction is constant with eigenvalue $\lambda_0=0$, and the remaining are $\left\{ \cos{\left(2 \pi m s \right)}, \: \sin{\left(2 \pi m s \right)} \right\}_{m=1}^{\infty}$, associated to the eigenvalues $\lambda_m = (2\pi m)^2$ of multiplicity 2.
Hence, the \kLE{2}, $\left( f_{1}(i), f_{2}(i) \right) \approx \left( \cos{(2\pi s_i)}, \sin{(2\pi s_i)} \right)$ should approximately lay the points on a circle, allowing for solving \ref{eqn:circ-seriation} \citep{coifman2008graph}.
More generally, the \kLE{2d}, $\left(f_{1}(i), \ldots, f_{2d}(i)\right)^T \approx \left( \cos{(2\pi s_i)}, \sin{(2\pi s_i)}, \ldots, \cos{(2 d \pi s_i)}, \sin{(2 d \pi s_i)} \right)$, approximately lays the points on a closed curve in ${\mathbb R}^{2d}$.
If $\Gamma$ is not closed, we can also find its eigenfunctions. For instance, with Neumann boundary conditions (vanishing derivative at the endpoints, $f^{\prime}(0)=f^{\prime}(1)=0$), the non-trivial eigenfunctions of $\Delta_{\Gamma}$ are $\left\{ \cos{\left(\pi m s \right)}\right\}_{m=1}^{\infty}$, with associated eigenvalues $\lambda_m = (\pi m)^2$ of multiplicity 1.
The \kLE{1} $f_{1}(i) \approx \cos{\left(\pi s_i \right)}$ respects the monotonicity of $i$, which is consistent with Theorem~\ref{thm:spectral-solves-seriation-preR}. \citet{lafon2004diffusion} invoked this asymptotic argument to solve an instance of \ref{eqn:seriation}, but seemed unaware of the existence of Atkins' Algorithm~\ref{alg:spectral}.
Note that here too, the \kLE{d}, $\left(f_{1}(i), \ldots, f_{d}(i)\right)^T \approx \left( \cos{(\pi s_i)}, \ldots, \cos{(d \pi s_i)} \right)$, follows an open curve in ${\mathbb R}^{d}$, with endpoints.
These asymptotic results hint that the Laplacian embedding preserves the latent ordering of data points lying on a curve embedded in ${\mathbb R}^D$.
However, these results are only asymptotic and there is no known guarantee for the \ref{eqn:circ-seriation} problem as there is for \ref{eqn:seriation}.
Also, the curve (sometimes called filamentary structure) stemming from the Laplacian embedding has been observed in more general cases where no hypothesis on a latent representation of the data is made, and the input similarity matrix is taken as is (see, {\it e.g.}, \citet{diaconis2008horseshoes} for a discussion about the horseshoe phenomenon).
\subsection{Ordering points lying on a curve}
Finding the latent ordering of some points lying on (or close to) a curve can also be viewed as an instance of the traveling salesman problem (TSP), for which a plethora of (heuristic or approximation) algorithms exist \citep{reinelt1994traveling,laporte1992traveling}. We can think of this setting as one where the cities to be visited by the salesman are already placed along a single road, thus these TSP instances are easy and may be solved by simple heuristic algorithms.
Existing approaches for \ref{eqn:seriation} and \ref{eqn:circ-seriation} have only used 2D embeddings so far, for simplicity.
\citet{kuntz2001iterative} use the \kLE{2} to find a circular ordering of the data. They use a somewhat exotic TSP heuristic which maps the 2D points onto a pre-defined ``space-filling'' curve, and unrolls the curve through its closed-form inverse to obtain a 1D embedding and sort the points.
\citet{friendly2002corrgrams} uses the angle formed by the first two coordinates of the 2D-MDS embedding of each point, and sorts these angles to perform \ref{eqn:seriation}.
\citet{coifman2008graph} use the \kLE{2} to perform \ref{eqn:circ-seriation} in a tomographic reconstruction setting,
and use a simple algorithm that sorts the inverse tangent of the ratio of the two components ({\it i.e.}, the angle of each point) to reorder the points.
\citet{liu2018unsupervised} use a similar approach to solve \ref{eqn:circ-seriation} in a cell-cycle related problem, but with the 2D embedding given by MDS.
\section{Spectral properties of some (circular) Robinson matrices}\label{sec:theory}
We have claimed that the \kLE{d} enhances the latent ordering of the data, and we now present some theoretical evidence. We adopt a point of view similar to \citet{Atkins}, where the feasibility of \ref{eqn:seriation} relies on structural assumptions on the similarity matrix ($\cR$).
For a subclass $\SCR$ of $\circR$ (set of circular-R matrices), we show that the \kLE{d} lays the points on a closed curve, and that for $d=2$, the elements are embedded on a circle according to their latent circular ordering.
This is a counterpart of Theorem~\ref{thm:spectral-solves-seriation-preR} for \ref{eqn:circ-seriation}. It extends the asymptotic results motivating the approach of \citet{coifman2008graph}, shifting the structural assumptions on the elements (data points lying on a curve embedded in ${\mathbb R}^D$) to assumptions on the raw similarity matrix that can be verified in practice.
Then, we develop a perturbation analysis to bound the deformation of the embedding when the input matrix is in $\SCR$ up to a perturbation.
Finally, we discuss the spectral properties of some (non circular) $\cR$-matrices that shed light on the filamentary structure of their \kLE{d} for $d>1$.
For simplicity, we assume $n \triangleq 2p+1$ odd in the following. The results with $n=2p$ even are relegated to the Appendix, together with technical proofs.
\subsection{Circular Seriation with Symmetric, Circulant matrices}
Let us consider the set $\SCR$ of matrices in $\circR$ that are circulant, in order to have a closed form expression of their spectrum.
A matrix $A \in {\mathbb R}^{n \times n}$ is Toeplitz if its entries are constant along each diagonal, $A_{ij} = b_{(i-j)}$ for a vector of values $b$ of size ${2n-1}$. A symmetric Toeplitz matrix $A$ satisfies $A_{ij} = b_{|i-j|}$, with $b$ of size ${n}$.
In the case of circulant symmetric matrices, we also have that $b_{k} = b_{n-k}$, for $1 \leq k \leq n-1$, thus symmetric circulant matrices are of the form,
\begin{align}
\label{eq:defrsptmat} A\ =\ \left(
\begin{array}{cccccc}
b_0 & b_1 & b_2 & \cdots & b_2 & b_1 \\
b_1 & b_0 & b_1 & \cdots & b_3 & b_2 \\
b_2 & b_1 & b_0 & \cdots & b_4 & b_3 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
b_2 & b_3 & b_4 & \cdots & b_0 & b_1 \\
b_1 & b_2 & b_3 & \cdots & b_1 & b_0
\end{array}\right).
\end{align}
where $b$ is a vector of values of size $p+1$ (recall that $n=2p+1$).
The circular-R assumption (Def~\ref{def:circ-R-mat}) imposes that the sequence $(b_0, \ldots, b_{p})$ is non-increasing.
We thus define the set $\SCR$ of circulant matrices of $\circR$ as follows.
\begin{definition}
A matrix $A \in \symm^n$ is in $\SCR$ iff it verifies $A_{ij}=b_{|i-j|}$ and $b_k = b_{n-k}$ for $1\leq k\leq n-1$, with $(b_k)_{k=0,\ldots,\lfloor n/2 \rfloor}$ a non-increasing sequence.
\end{definition}
The spectrum of symmetric circulant matrices is known \citep{reichel1992eigenvalues,gray2006toeplitz,massey2007distribution}, and for a matrix $A$ of size $n=2p+1$, it is given by,
\begin{align}\label{eqn:spectrum-circ}
\begin{array}{lll}
\vspace{.1cm}
\nu_m &=& b_0 + 2 {\textstyle \sum_{k=1}^{p}} {b_k \cos{ \left(2 \pi k m/n\right)}}\\
\vspace{.1cm}
y^{m, \cos} &= & \frac{1}{\sqrt{n}} \left(1, \cos \left( 2 \pi m / n \right), \ldots, \cos \left( 2 \pi m (n-1) / n \right) \right)\\
\vspace{.1cm}
y^{m, \sin} &= & \frac{1}{\sqrt{n}} \left(0, \sin \left( 2 \pi m / n \right), \ldots, \sin \left( 2 \pi m (n-1) / n \right) \right)~.
\end{array}
\end{align}
For $m = 1,\ldots,p$, $\nu_m$ is an eigenvalue of multiplicity 2 with associated eigenvectors $y^{m, \cos}$,$y^{m, \sin}$.
For any $m$, $(y^{m, \cos}, y^{m, \sin})$ embeds the points on a circle, but for $m>1$, the circle is walked through $m$ times, hence the ordering of the points on the circle does not follow their latent ordering.
The $\nu_m$ from equations~\eqref{eqn:spectrum-circ} are in general not sorted. It is the Robinson property (monotonicity of $(b_k)$) that guarantees that $\nu_1 \geq \nu_m$, for $m \geq 2$, and thus that
the \kLE{2} embeds the points on a circle \emph{that follows the latent ordering} and allows one to recover it by scanning through the unit circle.
This is formalized in Theorem~\ref{th:without_noise}, which is the main result of our paper, proved in Appendix \ref{sec:circular_toeplitz_matrix}.
It provides guarantees in the same form as in Theorem~\ref{thm:spectral-solves-seriation-preR} with the simple Algorithm~\ref{alg:circular-2d-ordering} that sorts the angles, used in \citet{coifman2008graph}.
\begin{algorithm}[H]
\caption{Circular Spectral Ordering \citep{coifman2008graph}}\label{alg:circular-2d-ordering}
\begin{algorithmic} [1]
\REQUIRE Connected similarity matrix $A \in \mathbb{R}^{n \times n}$
\STATE Compute normalized Laplacian $\Lrw_{A}= \mathbf{I} - \left(\mathop{\bf diag}(A\mathbf 1)\right)^{-1}A$
\STATE Compute the two first non-trivial eigenvectors of $\Lrw_A$, $\left(f_{1}, f_{2}\right)$
\STATE Sort the values of $\theta(i) \triangleq \tan^{-1}{\left(f_{2}(i)/f_{1}(i) \right)} + \mathbbm{1}[f_1(i)<0] \pi$
\ENSURE Permutation $\sigma : \theta({\sigma(1)}) \leq
\ldots \leq \theta({\sigma(n)})$
\end{algorithmic}
\end{algorithm}
\begin{theorem}\label{th:without_noise}
Given a permuted observation $\PAP$ ($\Pi \in \cP$) of a matrix $A \in \SCR$, the \kLE{2} maps the items on a circle, equally spaced by angle $2\pi/n$, following the circular ordering in $\Pi$.
Hence, Algorithm~\ref{alg:circular-2d-ordering} recovers a permutation $\Pi \in \cP_n$ such that $\PAP \in \SCR$, {\it i.e.}, it solves \ref{eqn:circ-seriation}.
\end{theorem}
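As a numerical sanity check of Theorem~\ref{th:without_noise}, the following sketch (in Python, assuming numpy and scipy) builds a permuted matrix of $\SCR$ and applies Algorithm~\ref{alg:circular-2d-ordering}: sorting the angles of the \kLE{2} recovers the planted circular ordering up to a shift and a reflection.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

n = 101
k = np.arange(n)
b = np.exp(-0.1 * np.minimum(k, n - k))   # b_k = b_{n-k}, non-increasing
A = b[(k[:, None] - k[None, :]) % n]      # circulant matrix in S-C-R

perm = np.random.default_rng(0).permutation(n)
Ap = A[np.ix_(perm, perm)]                # permuted observation

D = np.diag(Ap.sum(axis=1))
_, V = eigh(D - Ap, D)                    # eigenvectors of L_rw
theta = np.arctan2(V[:, 2], V[:, 1])      # angle of each point of the 2-LE
order = np.argsort(theta)                 # = perm up to shift/reflection
\end{verbatim}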
\subsection{Perturbation analysis}
The spectrum is a continuous function of the matrix.
Let us bound the deformation of the \kLE{2} under a perturbation of the matrix $A$ using
the Davis-Kahan theorem \citep{davis1970rotation}, presented in an accessible form in \citep[Theorem 7]{von2007tutorial}.
We give more detailed results in Appendix~\ref{sec:perturbation_analysis} for a subclass of $\SCR$ (KMS) defined further.
\begin{proposition}[Davis-Kahan]\label{prop:davis_Kahan}
Let $L$ and $\tilde{L} = L + \delta L$ be the Laplacian matrices of $A \in \SCR$ and $A + \delta A \in \symm^n$, respectively, and $V,\tilde{V} \in {\mathbb R}^{n \times 2}$ be the associated \kLE{2} of $L$ and $\tilde{L}$, {\it i.e.}, the concatenation of the two eigenvectors associated to the two smallest non-zero eigenvalues, written $\lambda_1 \leq \lambda_2$ for $L$.
Then, there exists an orthonormal rotation matrix $O$ such that
\begin{eqnarray}\label{eq:perturbation_result}
\frac{\|V-\tilde{V} O\|_F}{\sqrt{n}} \leq \frac{\|\delta A\|_F}{\min(\lambda_1,\lambda_2-\lambda_1)}~.
\end{eqnarray}
\end{proposition}
\subsection{Robinson Toeplitz matrices}
Let us investigate how the latent linear ordering of Toeplitz matrices in $\cR$ translates to the \kLE{d}.
Remark that from Theorem~\ref{thm:spectral-solves-seriation-preR}, the \kLE{1} suffices to solve \ref{eqn:seriation}.
Yet, for perturbed observations of $A \in \cR$, the \kLE{d} may be more robust to the perturbation than the \kLE{1}, as the experiments in~\S\ref{sec:numerical_results} indicate.
\textbf{Tridiagonal Toeplitz matrices}
are defined by $b_0 > b_1 > 0=b_2 = \ldots = b_p $.
For $m=1,\ldots,n$, they have eigenvalues $\nu_m$ with multiplicity 1, associated to the eigenvectors $y^{(m)}$ \citep{trench1985eigenvalue},
\begin{align}\label{eqn:spectrum-tridiag}
\begin{array}{lll}
\nu_m & =& b_0 + 2 b_1 \cos{\left(m \pi / (n+1)\right)}\\
y^{(m)} &=& \left( \sin{\left(m \pi / (n+1)\right)}, \ldots, \sin{\left(m n \pi / (n+1)\right)} \right),~
\end{array}
\end{align}
thus matching the spectrum of the Laplace operator on a curve with endpoints from \S\ref{ssec:asymptotic-lap} (up to a shift).
This type of matrix can indeed be viewed as a limit case with points uniformly sampled on a line with strong similarity decay, leaving only the two nearest neighbors with non-zero similarity.
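A quick numerical check of \eqref{eqn:spectrum-tridiag} (assuming numpy, with the indexing $m=1,\ldots,n$):
\begin{verbatim}
import numpy as np

n, b0, b1 = 12, 2.0, 0.7
A = b0 * np.eye(n) + b1 * (np.eye(n, k=1) + np.eye(n, k=-1))
m = np.arange(1, n + 1)
nu = b0 + 2 * b1 * np.cos(m * np.pi / (n + 1))   # closed-form spectrum
assert np.allclose(np.sort(np.linalg.eigvalsh(A)), np.sort(nu))
\end{verbatim}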
\textbf{Kac-Murdock-Szegö (KMS) matrices}
are defined, for $\alpha > 0$, $\rho = e^{-\alpha}$, by $A_{ij} = b_{|i-j|} = e^{-\alpha |i-j|} = \rho^{|i-j|}$.
For $m=1,\ldots,\lfloor n/2 \rfloor$, there exists $\theta_m \in \left({(m-1)\pi}/{n}, {m\pi}/{n}\right)$, such that $\nu_m$ is a double eigenvalue associated to eigenvectors $y^{m, \cos}$,$y^{m, \sin}$,
\begin{align}\label{eqn:spectrum-KMS}
\begin{array}{lll}
\nu_m &=& \frac{1 - \rho^2}{1 - 2 \rho \cos{\theta_m} + \rho^2}\\
\vspace{.1cm}
y^{m, \cos} &= & \left( \cos{\left( (n-2 r +1)\theta_m /2 \right)} \right)_{r=1}^{n}\\
\vspace{.1cm}
y^{m, \sin} &= & \left( \sin{\left( (n-2 r +1)\theta_m /2 \right)} \right)_{r=1}^{n}~.
\end{array}
\end{align}
\textbf{Linearly decreasing Toeplitz matrices}
defined by $A^{lin}_{ij} = b_{|i-j|} = n - |i-j|$ have spectral properties analogous to those of KMS matrices (trigonometric expression, interlacement, low frequency assigned to largest eigenvalue), with more technical details available in \citet{bunger2014inverses}.
This goes beyond the asymptotic case modeled by tridiagonal matrices.
\textbf{Banded Robinson Toeplitz matrices} typically arise as similarity matrices in DNA sequencing.
Actually, any Robinson Toeplitz matrix becomes banded under a thresholding operation.
Also, fast decaying Robinson matrices such as KMS matrices are almost banded.
There is a rich literature dedicated to the spectrum of generic banded Toeplitz matrices \citep{boeottcher2005spectral,gray2006toeplitz,bottcher2017asymptotics}.
However, it mostly provides asymptotic results on the spectra.
Notably, some results indicate that the eigenvectors of some banded symmetric Toeplitz matrices become, up to a rotation, close to the sinusoidal, almost equi-spaced eigenvectors observed in equations~\eqref{eqn:spectrum-tridiag} and \eqref{eqn:spectrum-KMS} \citep{bottcher2010structure,ekstrom2017eigenvalues}.
\subsection{Spectral properties of the Laplacian}\label{ssec:spectral-prop-lapl}
For circulant matrices $A$, $L_A$ and $A$ have the same eigenvectors since $L_A = \mathop{\bf diag} (A \mathbf 1) - A = c \mathbf{I} - A$, with $c \triangleq \sum_{k=0}^{n-1} b_k$. For general symmetric Toeplitz matrices, this property no longer holds as $c_i = \sum_{j=1}^{n} b_{|i-j|}$ varies with $i$.
Yet, for fast decaying Toeplitz matrices, $c_i$ is almost constant except for $i$ at the edges, namely $i$ close to $1$ or to $n$.
Therefore, the eigenvectors of $L_A$ resemble those of $A$ except for the ``edgy'' entries.
\section{Recovering Ordering on Filamentary Structure}\label{sec:results}
We have seen that (some) similarity matrices $A$ with a latent ordering lead to a filamentary \kLE{d}.
The \kLE{d} integrates local proximity constraints together into a globally consistent embedding. We expect isolated (or uncorrelated) noise on $A$ to be averaged out by the spectral picture.
Therefore, we present Algorithm~\ref{algo:Recovery_order_filamentary}, which redefines the similarity $S_{ij}$ between two items from their proximity within the \kLE{d}. Basically, it fits the points by a line \emph{locally}, in the same spirit as LLE, which makes sense when the data lies along a curve (a one-dimensional manifold) embedded in ${\mathbb R}^d$.
Note that Spectral Ordering (Algorithm~\ref{alg:spectral}) projects all points on a given line (it only looks at the first coordinates $f_{1}(i)$) to reorder them. Our method does so in a local neighborhood, allowing for reordering points on a curve with several oscillations.
We then run the basic Algorithms~\ref{alg:spectral} (or~\ref{alg:circular-2d-ordering} for \ref{eqn:circ-seriation}).
Hence, the \kLE{d} is eventually used to pre-process the similarity matrix.
\setlength{\textfloatsep}{10pt}
\begin{algorithm}[ht]
\footnotesize
\caption{Ordering Recovery on Filamentary Structure in ${\mathbb R}^K$.}\label{algo:Recovery_order_filamentary}
\begin{algorithmic} [1]
\REQUIRE A similarity matrix $A\in\mathcal{S}_n$, a neighborhood size $k \geq 2$, a dimension of the Laplacian Embedding $d$.
\STATE $\Phi =\left( {\boldsymbol y}_{1}^T, \ldots, {\boldsymbol y}_{n}^T \right)^T \in {\mathbb R}^{n \times d}\gets \text{\kLE{d}}(A)$ \hfill $\triangleright$ \text{Compute Laplacian Embedding}\label{line:laplacian_embedding_enhance_filament}
\STATE Initialize $S = \mathbf{I}_n$ \hfill $\triangleright$ \text{New similarity matrix}
\FOR{$i=1,\ldots,n$}
\STATE $V \gets \{j \: : \: j \in k\text{-NN}({\boldsymbol y}_i) \}\cup \{i\}$ \hfill $\triangleright$ \text{find $k$ nearest neighbors of ${\boldsymbol y}_i \in {\mathbb R}^d$} \label{line:find_neighborhood}
\STATE $w \gets \text{LinearFit}(V)$ \hfill $\triangleright$ \text{fit $V$ by a line }
\label{line:find_direction}
\STATE $D_{uv} \gets |w^T ({\boldsymbol y}_u - {\boldsymbol y}_v) |$, for $u,v \in V$. \hfill \text{$\triangleright$ Compute distances on the line}
\STATE $S_{uv} \gets S_{uv} + D_{uv}^{-1}$, for $u,v \in V$. \hfill \text{$\triangleright$ Update similarity} \label{line:update-similarity}
\ENDFOR
\STATE Compute $\sigma^*$ from the matrix $S$ with Algorithm~\ref{alg:spectral} (resp., Algorithm~\ref{alg:circular-2d-ordering}) for a linear (resp., circular) ordering. \label{line:laplacian_embedding_enhance_basic_ordering}
\ENSURE A permutation $\sigma^*$.
\end{algorithmic}
\end{algorithm}
In Algorithm~\ref{algo:Recovery_order_filamentary}, we compute a \kLE{d} in line~\ref{line:laplacian_embedding_enhance_filament} and then a \kLE{1} (resp., a \kLE{2}) for a linear (resp., circular) ordering in line~\ref{line:laplacian_embedding_enhance_basic_ordering}. For a reasonable number of neighbors $k$ in the $k$-NN of line~\ref{line:find_neighborhood} (in practice, $k=10$), the complexity of computing the \kLE{d} dominates Algorithm~\ref{algo:Recovery_order_filamentary}. We shall see in Section~\ref{sec:numerical_results} that our method, while remaining almost as computationally cheap as the base Algorithms~\ref{alg:spectral} and \ref{alg:circular-2d-ordering} (roughly a factor of 2), yields substantial improvements.
In line~\ref{line:update-similarity} we can update the similarity $S_{uv}$ by adding any non-increasing function of the distance $D_{uv}$, {\it e.g.}, $D_{uv}^{-1}$, $\exp{\left(-D_{uv}\right)}$, or $-D_{uv}$ (the latter requires adding an offset to $S$ afterwards to ensure it has non-negative entries; this is what we implemented in practice).
In line~\ref{line:laplacian_embedding_enhance_basic_ordering}, the matrix $S$ needs to be connected in order to use Algorithm~\ref{alg:spectral}, which does not always hold in practice (for low values of $k$, for instance).
In that case, we reorder separately each connected component of $S$ with Algorithm~\ref{alg:spectral}, and then merge the partial orderings into a global ordering by using the input matrix $A$, as detailed in Algorithm~\ref{alg:connect_clusters}, Appendix~\ref{sec:appendix_algo}.
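For concreteness, a minimal sketch of the core loop of Algorithm~\ref{algo:Recovery_order_filamentary} follows (in Python, assuming numpy and scipy). Here LinearFit is instantiated as the leading principal direction of the centered neighborhood, one natural choice among others; the pseudocode above leaves the fitting routine abstract.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def filament_similarity(Y, k=10):
    # Y: (n, d) array whose rows are the d-LE points y_i.
    n = Y.shape[0]
    S = np.eye(n)                          # new similarity matrix
    tree = cKDTree(Y)
    for i in range(n):
        _, V = tree.query(Y[i], k=k + 1)   # k-NN of y_i (includes i)
        centered = Y[V] - Y[V].mean(axis=0)
        w = np.linalg.svd(centered)[2][0]  # direction of the fitted line
        t = Y[V] @ w                       # positions along the line
        for a in range(len(V)):
            for c in range(a + 1, len(V)):
                d_ac = max(abs(t[a] - t[c]), 1e-12)
                S[V[a], V[c]] += 1.0 / d_ac    # update similarity
                S[V[c], V[a]] = S[V[a], V[c]]  # keep S symmetric
    return S
\end{verbatim}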
\section{Numerical Results}\label{sec:numerical_results}
\subsection{Synthetic Experiments}\label{ssec:synthetic-exps}
We performed synthetic experiments with noisy observations of Toeplitz matrices $A$, either linear ($\cR$) or circular ($\SCR$). We added uniform noise to all the entries, with an amplitude parameter $a$ varying between 0 and 5 (the maximum value of the noise is $a \| A \|_F$). The matrices $A$ used are either banded (sparse), with linearly decreasing entries when moving away from the diagonal, or dense, with exponentially decreasing entries (KMS matrices).
We used $n=500$, several values for the parameters $k$ (number of neighbors) and $d$ (dimension of the \kLE{d}), and various scalings of the \kLE{d} (parameter $\alpha$ in~\ref{eqn:scaled-kLE}), yielding similar results (see sensitivity to the number of neighbors $k$ and to the scaling~\ref{eqn:scaled-kLE} in Appendix~\ref{ssec:app-k-sensitivity}).
In a given experiment, the matrix $A$ is randomly permuted with a ground truth permutation $\pi^*$. We report the Kendall-Tau scores between $\pi^*$ and the solution of Algorithm~\ref{algo:Recovery_order_filamentary} for different choices of the dimension $d$, for varying noise amplitude $a$, in Figure~\ref{fig:exp-main-banded}, for banded (circular) matrices.
For the circular case, the ordering is defined up to a shift. To compute a Kendall-Tau score from two permutations describing a circular ordering, we computed the best Kendall-Tau score between the first permutation and all shifts of the second, as detailed in Algorithm~\ref{alg:circular-kendall-tau}.
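For reference, a simple quadratic-time sketch of such a comparison (in Python, assuming numpy and scipy; orderings are given as vectors of positions, and this is our illustration of the idea rather than the exact implementation of Algorithm~\ref{alg:circular-kendall-tau}):
\begin{verbatim}
import numpy as np
from scipy.stats import kendalltau

def circular_kendalltau(pos1, pos2):
    # pos1[i], pos2[i]: positions of item i in two circular orderings.
    # Returns the best Kendall-Tau over all shifts and both directions.
    n = len(pos1)
    best = -1.0
    for p in (np.asarray(pos2), (n - 1 - np.asarray(pos2)) % n):
        for s in range(n):
            best = max(best, kendalltau(pos1, (p + s) % n)[0])
    return best
\end{verbatim}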
The analog results for exponentially decaying (KMS) matrices are given in Appendix~\ref{ssec:app-KMS-main-exp}, Figure~\ref{fig:exp-main-KMS}.
For a given combination of parameters, the scores are averaged over 100 experiments, and the standard deviation divided by $\sqrt{n_{\text{exps}}} = 10$ (for ease of reading) is plotted as a transparent band above and below the curve.
The baseline (in blue) corresponds to the basic spectral method of Algorithm~\ref{alg:spectral} for linear and Algorithm~\ref{alg:circular-2d-ordering} for circular seriation.
Other lines correspond to given choices of the dimension of the \kLE{d}, as written in the legend.
\begin{figure}[hbt]
\begin{center}
\begin{subfigure}[htb]{0.45\textwidth}
\includegraphics[width=\textwidth]{images/kendall-tau-vs-noise-for-several-dims-typematrix_LinearBanded.pdf}
\caption{Linear Banded}\label{subfig:exps-lin-banded}
\end{subfigure}
\begin{subfigure}[htb]{0.45\textwidth}
\includegraphics[width=\textwidth]{images/kendall-tau-vs-noise-for-several-dims-typematrix_CircularBanded.pdf}
\caption{Circular Banded}\label{subfig:exps-circ-banded}
\end{subfigure}
\caption{
Kendall-Tau scores for Linear (\ref{subfig:exps-lin-banded}) and Circular (\ref{subfig:exps-circ-banded}) Seriation for noisy observations of banded Toeplitz matrices, displayed for several values of the dimension parameter $d$ of the \kLE{d}, for a fixed number of neighbors $k=15$.
}
\label{fig:exp-main-banded}
\end{center}
\vskip -0.2in
\end{figure}
We observe that leveraging the additional dimensions of the \kLE{d} unused by the baseline methods Algorithm~\ref{alg:spectral} and~\ref{alg:circular-2d-ordering} substantially improves the robustness of Seriation. For instance, in Figure~\ref{subfig:exps-lin-banded}, the performance of Algorithm~\ref{algo:Recovery_order_filamentary} is almost optimal for a noise amplitude going from 0 to 4, when it falls by a half for Algorithm~\ref{alg:spectral}.
We illustrate the effect of the pre-processing of Algorithm~\ref{algo:Recovery_order_filamentary} in Figures~\ref{fig:exp-noisy-illustration} and~\ref{fig:exp-clean-illustration}, Appendix~\ref{ssec:illustrations}.
\subsection{Genome assembly experiment}
In {\it de novo} genome assembly, a whole DNA strand is reconstructed from randomly sampled sub-fragments (called {\it reads}) whose positions within the genome are unknown. The genome is oversampled so that all parts are covered by multiple reads with high probability.
Overlap-Layout-Consensus (OLC) is a major assembly paradigm based on three main steps.
First, compute the overlaps between all pairs of reads. This provides a similarity matrix $A$, whose entry $(i,j)$ measures how much reads $i$ and $j$ overlap (and is zero if they do not).
Then, determine the layout from the overlap information, that is to say, find an ordering and positioning of the reads that is consistent with the overlap constraints. This step, akin to solving a one-dimensional jigsaw puzzle, is a key step in the assembly process.
Finally, given the tiling of the reads obtained in the layout stage, the consensus step aims at determining the most likely DNA sequence that can be explained by this tiling. It essentially consists in performing multi-sequence alignments.
In the true ordering (corresponding to the sorted reads' positions along the genome), a given read overlaps much with the next one, slightly less with the one after it, and so on, until a point where it has no overlap with the reads that are further away. This makes the read similarity matrix Robinson and roughly band-diagonal (with non-zero values confined to a diagonal band).
Finding the layout of the reads therefore fits the \ref{eqn:seriation} framework (or \ref{eqn:circ-seriation} for circular genomes, as illustrated in Supplementary Figure~\ref{fig:circular-genome-illustration}).
In practice however, there are some repeated sequences (called {\it repeats}) along the genome that induce false positives in the overlap detection tool \citep{Pop04}, resulting in non-zero similarity values outside of (and possibly far away from) the diagonal band. The similarity matrix ordered with the ground truth is then the sum of a Robinson band matrix and a sparse ``noise'' matrix, as in Figure~\ref{subfig:ecoli-sim-mat}.
Because of this sparse ``noise'', the basic spectral Algorithm~\ref{alg:spectral} fails to find the layout, as the quadratic loss appearing in \ref{eqn:2sum} is sensitive to outliers.
\citet{recanati2018robust} tackle this issue by modifying the loss in \ref{eqn:2sum} to make it more robust.
Instead, we show that the simple multi-dimensional extension proposed in Algorithm~\ref{algo:Recovery_order_filamentary} suffices to capture the ordering of the reads despite the repeats.
\begin{figure}[hbt]
\begin{center}
\begin{subfigure}[htb]{0.45\textwidth}
\includegraphics[width=\textwidth]{images/sim_mat_ecoli.png}
\caption{similarity matrix}\label{subfig:ecoli-sim-mat}
\end{subfigure}
\begin{subfigure}[htb]{0.45\textwidth}
\includegraphics[width=\textwidth]{images/ecoli_total_orderings.pdf}
\caption{ordering found}\label{subfig:ecoli-total-ordering}
\end{subfigure}
\caption{
Overlap-based similarity matrix (\ref{subfig:ecoli-sim-mat}) from {\it E. coli} reads, and the ordering found with Algorithm~\ref{algo:Recovery_order_filamentary} (\ref{subfig:ecoli-total-ordering}) versus the position of the reads along the genome, obtained by mapping them to a reference with minimap2.
The genome being circular, the ordering is defined up to a shift, which is why we observe two lines instead of one in (\ref{subfig:ecoli-total-ordering}).
}
\label{fig:ecoli-exp}
\end{center}
\vskip -0.2in
\end{figure}
We used our method to perform the layout of an {\it E. coli} bacterial genome. We used reads from third-generation sequencing, and computed the overlaps with dedicated software, as detailed in Appendix~\ref{ssec:supp-genome-assembly-exp}.
The new similarity matrix $S$ computed from the embedding in Algorithm~\ref{algo:Recovery_order_filamentary} was disconnected, resulting in several connected components instead of one global ordering (see Figure~\ref{subfig:ecoli-partial-orderings}). However, the sub-orderings could be unambiguously merged into one in a simple way described in Algorithm~\ref{alg:connect_clusters}, resulting in the ordering shown in Figure~\ref{subfig:ecoli-total-ordering}.
The Kendall-Tau score between the ordering found and the one obtained by sorting the positions of the reads along the genome (obtained by mapping the reads to a reference with minimap2 \citep{li2018minimap2}) is 99.5\%, using Algorithm~\ref{alg:circular-kendall-tau} to account for the circularity of the genome.
\section{Conclusion}
In this paper, we bring together results that shed light on the filamentary structure of the Laplacian embedding of serial data.
This allows us to tackle~\ref{eqn:seriation} and~\ref{eqn:circ-seriation} in a unifying framework.
Notably, we provide theoretical guarantees for~\ref{eqn:circ-seriation} analogous to those existing for~\ref{eqn:seriation}.
These guarantees make no assumption about the underlying generation of the data matrix, and can be verified {\it a posteriori} by the practitioner.
Then, we propose a simple method to leverage the filamentary structure of the embedding.
It can be seen as a pre-processing of the similarity matrix.
Although the complexity is comparable to the baseline methods, experiments on synthetic and real data indicate that this pre-processing substantially improves robustness to noise.
\section{Proof of Theorem~\ref{th:without_noise}}\label{sec:circular_toeplitz_matrix}
In this Section, we prove Theorem~\ref{th:without_noise}.
There are many technical details, notably the distinction between the cases $n$ even and odd.
The key idea is to compare the sums involved in the eigenvalues of the circulant matrices $A \in \SCR$: each eigenvalue is a sum of the $b_k$ weighted by cosine values. For $\lambda_1$, we roughly have a rearrangement inequality where the ordering of the $b_k$ matches that of the cosines.
For the following eigenvalues, the set of values taken by the cosines is roughly the same, but it does not match the ordering of the $b_k$.
Finally, the eigenvectors of the Laplacian of $A$ are the same as those of $A$ for circulant matrices $A$, as observed in \S\ref{ssec:spectral-prop-lapl}.
We now introduce a few lemmas that will be useful in the proof.
\textbf{Notation.} In the following we denote $z_k^{(m)} \triangleq \cos(2\pi k m /n)$ and $S_p^{(m)} \triangleq \sum_{k=1}^{p}{z_k^{(m)}}$. Let us define $\mathcal{Z}_n = \{\cos(2\pi k/n)~|~k\in\mathbb{N}\}\setminus\{-1;1\}$. Depending on the parity of $n$, we will write $n=2p$ or $n=2p+1$. Hence we always have $p=\big\lfloor\frac{n}{2}\big\rfloor$. Also, when $m$ and $n$ are not coprime, we will write $m=d m^{\prime}$ as well as $n = d n^{\prime}$, with $n^{\prime}$ and $m^{\prime}$ coprime.
\subsection{Properties of sums of cosines.}
The following lemma describes how the partial sums $S_q^{(m)}$ behave for $q=p$ or $q=p-1$, and establishes the symmetry property \eqref{eqn:sum-zk-sym-ineq}.
\begin{lemma}\label{lemma:equality_ending_partial_sum}
For $z_k^{(m)}=\cos(\frac{2\pi k m}{n})$, $n=2p+1$ and any $m=1,\ldots,p$
\begin{eqnarray}\label{eqn:sum-total-zk}
S^{(m)}_{p} \triangleq \sum_{k=1}^{p}{z_k^{(m)}} = -\frac{1}{2}~.
\end{eqnarray}
Also, for $1 \leq q \leq p/2$,
\begin{align}\label{eqn:sum-zk-sym-ineq}
S^{(1)}_{p-q} \geq S^{(1)}_{q}~.
\end{align}
For $n$ and $m\geq 2$ even ($n = 2p$), we have
\begin{align}
S^{(1)}_{p-1-q} &= S^{(1)}_{q}~~\text{for}~~ 1 \leq q \leq (p-1)/2\label{eq:sum_zk_n_m_even}\\
S^{(1)}_{p-1} &= 0 ~~\text{and}~~ S^{(m)}_{p-1} = -1 ~.\label{eq:S_m_p_1}
\end{align}
Finally for $n$ even and $m$ odd we have
\begin{eqnarray}\label{eq:sum_zk_n_even_m_odd}
S_p^{(m)} = S_p^{(1)} = -1~.
\end{eqnarray}
\end{lemma}
\begin{proof}
Let us derive a closed form expression for the cumulative sum $S^{(m)}_{q}$, for any $m,q \in \{1,\ldots,p\}$
\begin{align}\label{eqn:sum-zk-m}
\begin{array}{lll}
S^{(m)}_{q} = \sum_{k=1}^{q}{z_k^{(m)}} &=& \Re\Big(\sum_{k=1}^{q}{e^{\frac{2 i \pi k m}{n}}}\Big)\\
&=& \Re\Big(e^{2 i \pi m/n} \frac{ 1- e^{2 i \pi q m/n} }{ 1- e^{2 i \pi m/n}}\Big)\\
&=& \cos\big(\pi (q+1) m / n\big) \frac{\sin(\pi q m /n)}{\sin(\pi m/n)}~.
\end{array}
\end{align}
Let us prove equation~\eqref{eqn:sum-total-zk} with the latter expression for $q=p$.
Given that $n=2p+1 = 2 (p+1/2)$, we have,
\begin{eqnarray*}
\frac{\pi (p+1) m }{n} = \frac{\pi (p+1/2 + 1/2) m }{2(p+1/2)} = \frac{\pi m }{2} + \frac{\pi m }{2n}, \\
\frac{\pi p m }{n} = \frac{\pi (p+1/2 - 1/2) m }{2(p+1/2)} = \frac{\pi m }{2} - \frac{\pi m }{2n}~.
\end{eqnarray*}
Now, by trigonometric formulas, we have,
\begin{eqnarray*}
\cos{\left(\frac{\pi m }{2} + x \right)}=
\begin{cases}
(-1)^{m/2}\cos{(x)}, & \text{if}\ \text{$m$ is even} \\
(-1)^{(m+1)/2} \sin{(x)}, & \text{if}\ \text{$m$ is odd}
\end{cases}
\end{eqnarray*}
\begin{eqnarray*}
\sin{\left(\frac{\pi m }{2} - x \right)}=
\begin{cases}
(-1)^{(1+m/2)}\sin{(x)}, & \text{if}\ \text{$m$ is even} \\
(-1)^{(m-1)/2} \cos{(x)}, & \text{if}\ \text{$m$ is odd}
\end{cases}
\end{eqnarray*}
It follows that, for any $m$,
\begin{eqnarray*}
\cos{\left(\frac{\pi m }{2} + x \right)} \sin{\left(\frac{\pi m }{2} - x \right)} = - \cos{(x)} \sin{(x)} = - \frac{1}{2} \sin{(2x)}
\end{eqnarray*}
Finally, with $x=\pi m /(2n)$, this formula simplifies the numerator appearing in equation~\eqref{eqn:sum-zk-m} and yields the result in equation~\eqref{eqn:sum-total-zk}.
Let us now prove equation~\eqref{eqn:sum-zk-sym-ineq} with a similar derivation.
Let $f(q) \triangleq \cos\big(\pi (q+1) / n\big) \sin(\pi q /n)$, defined for any real $q \in [1,p/2]$.
We wish to prove $f(p-q) \geq f(q)$ for any integer $q \in \{1,\ldots,\lfloor p/2 \rfloor \}$.
Using $n= 2 (p+1/2)$, we have,
\begin{eqnarray*}
\frac{\pi (p-q+1) }{n} = \frac{\pi (p+1/2 - (q- 1/2)) }{2(p+1/2)} = \frac{\pi }{2} - \frac{\pi (q - 1/2) }{n}, \\
\frac{\pi (p -q) }{n} = \frac{\pi (p+1/2 - (q+1/2)) }{2(p+1/2)} = \frac{\pi }{2} - \frac{\pi (q+1/2) }{n}~.
\end{eqnarray*}
Using $\cos{(\pi/2 - x )} = \sin{(x)}$ and $\sin{(\pi/2 - x )} = \cos{(x)}$, we thus have,
\begin{align}
f(p-q) = \cos\big(\pi (q+1/2) / n\big) \sin(\pi (q-1/2) /n) = f(q-1/2)
\end{align}
To conclude, let us observe that $f(q)$ is non-increasing on $[1,p/2]$.
Informally, the terms $\{z^{(1)}_k\}_{1\leq k \leq q}$ appearing in the partial sums $S^{(1)}_{q}$ are all non-negative for $q \leq p/2$.
Formally, remark that the derivative of $f$, $df/dq (q) = (\pi/n) \cos{\left( \pi (2q + 1)/n \right)}$ is non-negative for $ q \in [1,p/2]$.
Hence, for $q \leq p/2$, $f(q-1/2) \geq f(q)$, which ends the proof of equation~\eqref{eqn:sum-zk-sym-ineq}.
To get the first equality of \eqref{eq:S_m_p_1}, from the exact form in \eqref{eqn:sum-zk-m}, we have ($n=2p$)
\begin{eqnarray*}
S_{p-1}^{(1)} = \cos(\pi p/(2p)) \frac{\sin(\pi (p-1)/n)}{\sin(\pi/n)} = 0~.
\end{eqnarray*}
For the second equality in \eqref{eq:S_m_p_1}, we have ($m=2q$):
\begin{eqnarray*}
S_{p-1}^{(m)} &=& \cos(\pi q) \frac{\sin(\pi q -\pi m/n)}{\sin(\pi m /n)} = (-1)^q \frac{-(-1)^q\sin(\pi m/n)}{\sin(\pi m /n)} = -1~.
\end{eqnarray*}
Finally to get \eqref{eq:sum_zk_n_even_m_odd}, let us write ($n=2p$ and $m$ odd):
\begin{eqnarray*}
S_p^{(m)} &=& (-1)^{m+1}\frac{\cos(\pi (p+1)m/n)}{\sin(\pi m/n)} = (-1)^{m+1}\frac{\cos(\pi m/2 + \pi m/n)}{\sin(\pi m/n)}\\
&=& (-1)^m \sin(\pi m/2) = -1~.
\end{eqnarray*}
\end{proof}
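These identities are easy to check numerically, for instance (assuming numpy):
\begin{verbatim}
import numpy as np

n, p = 11, 5                                  # n = 2p + 1 odd
for m in range(1, p + 1):
    S = np.cos(2 * np.pi * np.arange(1, p + 1) * m / n).sum()
    assert np.isclose(S, -0.5)                # equation (sum-total-zk)

n, p = 10, 5                                  # n = 2p even
for m in range(1, p + 1, 2):                  # m odd
    S = np.cos(2 * np.pi * np.arange(1, p + 1) * m / n).sum()
    assert np.isclose(S, -1.0)                # eq. (sum_zk_n_even_m_odd)
\end{verbatim}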
The following lemma gives an important property of the partial sums of the $z_k^{(m)}$ that is useful when combined with Proposition~\ref{prop:partial_sum_trick}.
\begin{lemma}\label{lemma:cos_partial_sum}
Denote by $z_k^{(m)}=\cos({2\pi k m}/{n})$. Consider first $n = 2p$ and $m$ even. For $m= 1,\ldots, p$ and $q=1,\ldots,p-2$
\begin{eqnarray}\label{eq:partial_sum_domination_sub_case}
S_q^{(1)}=\sum_{k=1}^{q}{z_k^{(1)}} \geq \sum_{k=1}^{q}{z_k^{(m)}}=S_q^{(m)}~.
\end{eqnarray}
Otherwise we have for every $(m,q)\in\{1,\ldots,p\}^2$
\begin{eqnarray}\label{eq:partial_sum_domination}
S_q^{(1)} > S_q^{(m)}~,
\end{eqnarray}
with equality when $q= p$.
\end{lemma}
\begin{proof}
\textbf{Case $m$ and $n$ coprime.} The values of $\big(z_k^{(m)}\big)_{k=1,\ldots,p}$ are all distinct. Indeed, $z_k^{(m)}= z_{k^\prime}^{(m)}$ implies that $n$ divides $k+k^{\prime}$ or $k-k^{\prime}$, which is impossible (the range of $k+k^{\prime}$ is $[2,2p]$) unless $k=k^{\prime}$.
\textbf{Case $m$ and $n$ not coprime.} $m = d m^{\prime}$ and $n = d n^{\prime}$, with $d\geq 3$. In that situation we need to distinguish according to the parity of $n$.
\textbf{Case $n=2p+1$.}
Let us first remark that $\big(z_k^{(1)}\big)_{k=1,\ldots,p}$ takes all values but two ($-1$ and $1$) of the cosines of multiples of the angle $\frac{2\pi}{n}$, {\it i.e.}, $\big(z_k^{(1)}\big)_{k=1,\ldots,p}\subset \mathcal{Z}_n$. Also, $(z_k^{(1)})_{k=1,\ldots,p}$ is non-increasing.
Let us prove \eqref{eq:partial_sum_domination} by distinguishing between the various values of $q$.
\begin{itemize}
\item Consider $q=p-(n^{\prime}-1),\ldots,p$. From
\eqref{eqn:sum-total-zk} in Lemma~\ref{lemma:equality_ending_partial_sum}, we have $S_p^{(1)}=S_p^{(m)}$. The $\big(z_k^{(1)}\big)_k$ are ordered in non-increasing order and the $\big(z_k^{(m)}\big)_{k=p-n^{\prime}+1,\ldots,p}$ take values in $\mathcal{Z}_n\cup\{1\}$ without repetition (repetition would require $k\pm k^\prime \equiv 0 ~[n^\prime]$). Moreover, the partial sums of the $z^{(1)}_k$ starting from the endpoint $p$ are lower than those of any other sequence taking the same or greater values without repetition. Because $1$ is larger than any possible value in $\mathcal{Z}_n$, we hence have
\begin{eqnarray}\label{eq:partial_from_ending}
\sum_{k=q}^{p}{z^{(1)}_k} \leq \sum_{k=q}^{p}{z^{(m)}_k}~\text{for any } q=p-(n^{\prime}-1),\ldots,p~.
\end{eqnarray}
Since $S_q^{(m)}= S_p^{(m)} - \sum_{k= q+1}^{p}{z^{(m)}_k}$, \eqref{eq:partial_from_ending} implies \eqref{eq:partial_sum_domination} for that particular set of $q$.
\item For $q=1,\ldots,n^{\prime}-1$ the argument is of the same type. Indeed, $(z_k^{(1)})_k$ takes the highest values in $\mathcal{Z}_n$ in decreasing order, while $(z_k^{(m)})_k$ also takes its values in $\mathcal{Z}_n$ (because $z_q^{(m)}\neq 1$). This proves \eqref{eq:partial_sum_domination}.
Note that when $n^\prime \geq \frac{p+1}{2}$, \eqref{eq:partial_sum_domination} is then true for all $q$. In the sequel, let us then assume that this is not the case, {\it i.e.}, $n^\prime < \frac{p+1}{2}$.
\item For $q= n^{\prime}-1,\ldots,\big\lfloor \frac{p}{2}\big\rfloor$, the $z_q^{(1)}$ are non-negative. Hence $S_q^{(1)}$ is non-decreasing and lower bounded by $S_{n^{\prime}-1}^{(1)}$.
Also, because $S^{(m)}_{n^\prime}=0$ and $S^{(1)}_{n^\prime-1}\geq S^{(m)}_{k}$ for $k=1,\ldots,n^\prime$, it is true that for all $q$ in the considered set, $S_q^{(m)}$ is upper-bounded by $S_{n^{\prime}-1}^{(1)}$. All in all, this shows \eqref{eq:partial_sum_domination} for these values of $q$.
\item For $q= \big\lfloor \frac{p}{2}\big\rfloor+1 , \ldots , p- n^{\prime}$, we apply \eqref{eqn:sum-zk-sym-ineq} with $q = n^{\prime}$ (and indeed $n^{\prime}\leq\frac{p}{2}$) to get $S_{p-n^{\prime}}^{(1)} \geq S^{(1)}_{n^{\prime}}$.
Because $S_q^{(m)}$ is upper-bounded by $S_{n^{\prime}-1}^{(1)}$, it follows that $S_{p-n^{\prime}}^{(1)} \geq S_{q}^{(m)}$. Finally since $(S_{q}^{(1)})$ is non-increasing for the considered sub-sequence of $q$, \eqref{eq:partial_sum_domination} is true.
\end{itemize}
\textbf{Case $n=2p$.} Here $\big(z_k^{(1)}\big)_{k=1,\ldots,p}$ takes unique values in $\mathcal{Z}_n\cup\{-1\}$. We also need to distinguish according to the parity of $m$.
\begin{itemize}
\item $\big(z_k^{(m)}\big)_{k=1,\ldots,n^{\prime}-1}$ also takes unique values in $\mathcal{Z}_n$. We similarly get \eqref{eq:partial_sum_domination} for $q=1,\ldots,n^{\prime}-1$, and for $q=n^{\prime}$ because $S_{n^{\prime}}^{(m)}=0$.
\item Consider $m$ odd: from \eqref{eq:sum_zk_n_even_m_odd}, $S^{(m)}_p=S^{(1)}_p=-1$, so that we can apply the same reasoning as with $n$ odd to prove \eqref{eq:partial_sum_domination} for $q=p-n^{\prime}+1,\ldots,p$ and $q=1,\ldots,n^{\prime}$. The rest follows from the symmetry property \eqref{eq:sum_zk_n_m_even} of the sequence $(S_q^{(1)})_q$ in Lemma \ref{lemma:equality_ending_partial_sum}.
\item For $m$ and $n$ even, we have $S_{p-1}^{(1)}=0$ and $S_{p-1}^{(m)}=-1$, so that
\begin{eqnarray*}
S_{p-1}^{(1)} \geq S_{p-1}^{(m)}+1~.
\end{eqnarray*}
$S_q^{(1)}\geq S_q^{(m)}$ for $q<p-1$ follows with the same techniques as before.
\end{itemize}
\end{proof}
\subsection{Properties of R-circular Toeplitz matrices.}
The following proposition is a technical tool that will be helpful in proving that the eigenvalues of an R-circular Toeplitz matrix are such that $\nu_1>\nu_m$.
\begin{proposition}\label{prop:partial_sum_trick}
Suppose that for any $k=1,\ldots,q:$
\begin{eqnarray*}
W_k\triangleq\sum_{i=1}^{k}{w_i} \geq \sum_{i=1}^{k}{\tilde{w}_i}\triangleq\tilde{W}_k~,
\end{eqnarray*}
with $(w_i)$ and $(\tilde{w}_i)$ two sequences of reals. Then, if $(b_k)_{k}$ is non-increasing and non-negative, we have
\begin{eqnarray}\label{eq:order_eigen_values}
\sum_{k=1}^{q}{b_k w_k} \geq \sum_{k=1}^{q}{b_k \tilde{w}_k}~.
\end{eqnarray}
\end{proposition}
\begin{proof}
We have
\begin{eqnarray*}
\sum_{k=1}^{q}{b_k w_k} &=& \sum_{k=1}^{q}{b_k (W_k-W_{k-1})}\\
&=& \underbrace{b_q}_{\geq 0} W_q + \sum_{k=1}^{q-1}{\underbrace{(b_k-b_{k+1})}_{\geq 0}W_k}\\
&\geq & b_q \tilde{W}_q + \sum_{k=1}^{q-1}{(b_k - b_{k+1})\tilde{W}_k} = \sum_{k=1}^{q}{b_k\tilde{w}_k}~.
\end{eqnarray*}
\end{proof}
As soon as there exists $k_0\in\{1,\ldots,q\}$ such that
\begin{eqnarray*}
\sum_{i=1}^{k_0}{w_i} > \sum_{i=1}^{k_0}{\tilde{w}_i} ~,
\end{eqnarray*}
then \eqref{eq:order_eigen_values} holds strictly.
The following proposition gives the usual derivation of the eigenvalues and eigenvectors in the R-circular Toeplitz case.
\begin{proposition}\label{prop:eigen_value_vector_circular_Toeplitz_full}
Consider $A$, a circular-R Toeplitz matrix of size $n$.
For $n=2p+1$
\begin{eqnarray}
\nu_m & \triangleq & b_0 + 2\sum_{k=1}^{p}{b_k \cos \left(\frac{2 \pi k m}{n}\right)}~.
\end{eqnarray}
For $m = 1,\ldots,p$, each $\nu_m$ is an eigenvalue of $A$ with multiplicity 2 and associated eigenvectors
\begin{align}\label{eqn:eigvec-circ}
\begin{array}{ll}
\vspace{.1cm}
y^{m, \text{cos}} = & \frac{1}{\sqrt{n}} \left(1, \cos \left( 2 \pi m / n \right), \ldots, \cos \left( 2 \pi m (n-1) / n \right) \right)\\
\vspace{.1cm}
y^{m, \text{sin}} = & \frac{1}{\sqrt{n}} \left(0, \sin \left( 2 \pi m / n \right), \ldots, \sin \left( 2 \pi m (n-1) / n \right) \right)~.
\end{array}
\end{align}
For $n = 2p$
\begin{align}\label{eqn:eigval-circ-even}
\begin{array}{lll}
\nu_m & \triangleq & b_0 + 2\sum_{k=1}^{p-1}{b_k \cos \left(\frac{2 \pi k m}{n}\right)} + b_p \cos \left( \pi m \right)~,
\vspace{.1cm}
\end{array}
\end{align}
where $\nu_{0}$ is still a simple eigenvalue, with $y^{(0)} = \frac{1}{\sqrt{n}} \left(1 ,\ldots, 1\right)~$.
$\nu_{p}$ also is, with $y^{(p)} = \frac{1}{\sqrt{n}} \left(+1,-1, \ldots, +1,-1\right)~$, and there are $p-1$ double eigenvalues, for $m=1,\ldots,p-1$, each associated to the two eigenvectors given in equation~\eqref{eqn:eigvec-circ}.
\end{proposition}
\begin{proof}
Let us compute the spectrum of a circular-R, symmetric, circulant Toeplitz matrix. From \citet{gray2006toeplitz}, the eigenvalues are
\begin{align}
\nu_m = \sum_{k=0}^{n-1}{b_k \rho_m^k}~,
\end{align}
with $\rho_m =\exp(\frac{2 i \pi m}{n})$, and the corresponding eigenvectors are,
\begin{align}
y^{(m)} = \frac{1}{\sqrt{n}} \left(1, e^{-2i\pi m/n},\ldots, e^{-2i\pi m (n-1)/n}\right)~,
\end{align}
for $m=0, \ldots, n-1$.
\textbf{Case $n$ is odd, with $n=2p+1$.}
Using the symmetry assumption $b_k = b_{n-k}$, and the fact that $\rho_m^{n-k} = \rho_m^{n} \rho_m^{-k} = \rho_m^{-k}$, it results in real eigenvalues,
\begin{align}\label{eq:derivation_eigen_values}
\begin{array}{lll}
\vspace{.1cm}
\nu_m & = & b_0 + \sum_{k=1}^{p}{b_k \rho_m^k} + \sum_{k=p+1}^{n-1}{b_k \rho_m^k}\\
\vspace{.1cm}
& = & b_0 + \sum_{k=1}^{p}{b_k \rho_m^k} + \sum_{k=1}^{p}{b_{n-k} \rho_m^{n-k}}\\
\vspace{.1cm}
& = & b_0 + \sum_{k=1}^{p}{b_k (\rho_m^k+\rho_m^{-k})}\\
\vspace{.1cm}
& = & b_0 + 2\sum_{k=1}^{p}{b_k \cos \left(\frac{2 \pi k m}{n}\right)}~.
\end{array}
\end{align}
Observe also that $\nu_{n-m} = \nu_{m}$, for $m = 1, \ldots, n-1$, resulting in $p+1$ real distinct eigenvalues. $\nu_{0}$ is simple, whereas for $m = 1, \ldots, p$, $\nu_{m}$ has multiplicity $2$, with eigenvectors $y^m$ and $y^{n-m}$.
This leads to the two following real eigenvectors, $y^{m, \text{cos}} = \frac{1}{2} (y^m + y^{n-m})$ and $y^{m, \text{sin}}=\frac{1}{2i}(y^m - y^{n-m})$:
\begin{align}
\begin{array}{ll}
\vspace{.1cm}
y^{m, \text{cos}} = & \frac{1}{\sqrt{n}} \left(1, \cos \left( 2 \pi m / n \right), \ldots, \cos \left( 2 \pi m (n-1) / n \right) \right)\\
\vspace{.1cm}
y^{m, \text{sin}} = & \frac{1}{\sqrt{n}} \left(0, \sin \left( 2 \pi m / n \right), \ldots, \sin \left( 2 \pi m (n-1) / n \right) \right)\\
\end{array}
\end{align}
\textbf{Case $n$ is even, with $n=2p$.}
A derivation similar to \eqref{eq:derivation_eigen_values} yields,
\begin{align}
\begin{array}{lll}
\nu_m & = & b_0 + 2\sum_{k=1}^{p-1}{b_k \cos \left(\frac{2 \pi k m}{n}\right)} + b_p \cos \left( \pi m \right)
\vspace{.1cm}
\end{array}
\end{align}
$\nu_{0}$ is still simple, with $y^{(0)} = \frac{1}{\sqrt{n}} \left(1 ,\ldots, 1\right)~$,
$\nu_{p}$ also is, with $y^{(p)} = \frac{1}{\sqrt{n}} \left(+1,-1, \ldots, +1,-1\right)~$, and there are $p-1$ double eigenvalues, for $m=1,\ldots,p-1$, each associated to the two eigenvectors given in equation~\eqref{eqn:eigvec-circ}.
\end{proof}
The following proposition is a crucial property of the eigenvalues of a circular Toeplitz matrix. It ensures that the eigenspace associated with the second smallest eigenvalue of the Laplacian corresponds to the eigenvectors of lowest frequency (one period around the circle). This is paramount to proving that the latent ordering of the data can be recovered from the curve-like embedding.
\begin{proposition}\label{prop:eigen_values_order}
A circular-R, circulant Toeplitz matrix has eigenvalues $(\nu_m)_{m=0,\ldots,p}$ such that $\nu_1 \geq \nu_m$ for all $m=2,\ldots,p$ with $n=2p$ or $n = 2p+1$.
\end{proposition}
\begin{proof}
Since the shape of the eigenvalues changes with the parity of $n$, let us again distinguish the cases.
For $n$ odd, $\nu_1\geq \nu_m$ is equivalent to showing
\begin{eqnarray}
\sum_{k=1}^{p}{b_k \cos(2\pi k /n)} \geq \sum_{k=1}^{p}{b_k \cos(2\pi k m /n)}~.
\end{eqnarray}
This follows by combining Proposition~\ref{prop:partial_sum_trick} with Lemma~\ref{lemma:cos_partial_sum}. The same holds for $n$ even and $m$ odd.
Consider $n$ and $m$ even. We now need to prove that
\begin{eqnarray}\label{eq:eigen_value_even}
2\sum_{k=1}^{p-1}{b_k \cos \left(\frac{2 \pi k }{n}\right)}-b_p\geq 2\sum_{k=1}^{p-1}{b_k \cos \left(\frac{2 \pi k m}{n}\right)}+b_p~.
\end{eqnarray}
From lemma \ref{lemma:cos_partial_sum}, we have that
\begin{eqnarray}
\sum_{k=1}^{q}{z_k^{(1)}} &\geq& \sum_{k=1}^{q}{z_k^{(m)}}~\text{for }q=1,\ldots,p-2\\
\sum_{k=1}^{p-1}{z_k^{(1)}} &\geq& \sum_{k=1}^{p-1}{z_k^{(m)}} +1~.
\end{eqnarray}
Applying proposition \ref{prop:partial_sum_trick} with $w_k = z_k^{(1)}$ and $\tilde{w}_k=z_k^{(m)}$ for $k\leq p-2$ and $\tilde{w}_{p-1} = z_{p-1}^{(m)}+1$, we get
\begin{eqnarray}
\sum_{k=1}^{p-1}{z_k^{(1)}b_k} \geq \sum_{k=1}^{p-1}{b_kz_k^{(m)}} +b_{p-1}\\
2\sum_{k=1}^{p-1}{z_k^{(1)}b_k} \geq 2\sum_{k=1}^{p-1}{b_kz_k^{(m)}} +2b_{p}~.
\end{eqnarray}
The last inequality results from the monotonicity of $(b_k)$ and is equivalent to \eqref{eq:eigen_value_even}. This concludes the proof.
\end{proof}
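Proposition~\ref{prop:eigen_values_order} can also be checked numerically on random non-increasing sequences $(b_k)$ (a sketch assuming numpy):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
for n in (10, 11):                            # n even and odd
    b = np.sort(rng.random(n // 2 + 1))[::-1] # non-increasing (b_k)
    k = np.arange(n)
    c = b[np.minimum(k, n - k)]               # column: c_k = b_{min(k, n-k)}
    nu = np.array([(c * np.cos(2 * np.pi * k * m / n)).sum()
                   for m in range(n // 2 + 1)])
    assert np.all(nu[1] >= nu[2:] - 1e-12)    # nu_1 >= nu_m for m >= 2
\end{verbatim}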
\subsection{Recovering exactly the order.}
Here we provide the proof for Theorem \ref{th:without_noise}.
\begin{theorem}\label{th:proof_without_noise}
Consider the seriation problem on an observed matrix $\Pi S\Pi^T$, where $S$ is an R-circular Toeplitz matrix. Denote by $L$ the associated graph Laplacian. Then the two-dimensional Laplacian spectral embedding (\ref{eqn:lapl-embed} with $d=2$) lays the items ordered and equally spaced on a circle.
\end{theorem}
\begin{proof}
Denote $A= \Pi S\Pi^T$. The unnormalized Laplacian of $A$ is $L \triangleq \text{diag}(A \mathbf 1)-A$. The eigenspace associated to its second smallest eigenvalue corresponds to that of $\nu_1$ in $A$. $A$ and $S$ share the same spectrum. Hence the eigenspace of $\nu_1$ in $A$ is composed of the two vectors $\Pi y^{1,\sin}$ and $\Pi y^{1,\cos}$, whose entries are, up to the factor $1/\sqrt{n}$, $\sin(2\pi \sigma(i)/n)$ and $\cos(2\pi \sigma(i)/n)$.
Denote by $(p_i)_{i=1,\ldots,n}\in\mathbb{R}^2$ the \kLE{2}. Each point is parametrized by
\begin{eqnarray}
p_i = (\cos(2\pi \sigma(i)/n),\sin(2\pi \sigma(i)/n))~,
\end{eqnarray}
where $\sigma$ is the permutation whose matrix is $\Pi$.
\end{proof}
\section{Perturbation analysis}\label{sec:perturbation_analysis}
The purpose of the following is to provide guarantees of robustness to noise, expressed in terms of quantities that we will not attempt to make explicit. In-depth perturbation analyses exist in similar but simpler settings \citep{Fogel}. In particular, linking the performance of the algorithm to the control of the perturbed embedding is much more challenging than with a one-dimensional embedding.
We have performed graph Laplacian re-normalization to make the initial similarity matrix closer to a Toeplitz matrix, although we cannot hope to obtain an exactly Toeplitz matrix. Perturbation analysis thus provides a tool to handle approximately Toeplitz matrices with guarantees on the recovery of the ordering.
\subsection{Davis-Kahan}
We first characterize how much each point of the new embedding deviates from its corresponding point in the rotated initial set of points. A straightforward application of Davis-Kahan provides a bound on the Frobenius norm, which does not directly yield individual information on the deviations.
\begin{proposition}[Davis-Kahan]\label{prop:davis_kahan_circular}
Consider $L$ a graph Laplacian of a R-symmetric-circular Toeplitz matrix $A$. We add a symmetric perturbation matrix $H$ and denote by $\Tilde{A} = A + H$ and $\tilde{L}$ the new similarity matrix and graph Laplacian respectively. Denote by $(p_i)_{i=1,\ldots,n}$ and $(\tilde{p}_i)_{i=1,\ldots,n}$ the \kLE{2} coming from $L$ and $\tilde{L}$ respectively. Then there exists a cyclic permutation $\tau$ of $\{1,\ldots,n\}$ such that
\begin{eqnarray}\label{eq:perturbation_analysis}
\sup_{i=1,\ldots,n} ||p_{\tau(i)} - \tilde{p}_i||_2 \leq \frac{2^{3/2}\min(\sqrt{2}||L_H||_2,||L_H||_F)}{\min(|\lambda_1|,|\lambda_2-\lambda_1|)}~,
\end{eqnarray}
where $\lambda_1<\lambda_2$ are the first non-zero eigenvalues of $L$.
\end{proposition}
\begin{proof}
For a matrix $V\in\mathbb{R}^{n\times d}$, denote by
\begin{eqnarray*}
\big\vert\big\vert V \big\vert\big\vert_{2,\infty} = \sup_{i=1,\ldots,n} \big\vert\big\vert V_i \big\vert\big\vert_2~,
\end{eqnarray*}
where $V_i$ are the rows of $V$.
Because in $\mathbb{R}^n$ we have $||\cdot||_{\infty} \leq ||\cdot||_2$, it follows that
\begin{eqnarray*}
\big\vert\big\vert V \big\vert\big\vert_{2,\infty} &\leq& \big\vert\big\vert \big(||V_i||\big)_{i=1,\ldots,n} \big\vert\big\vert_2 = \sqrt{\sum_{i=1}^{n}{||V_i||_2^2}}\\
&\leq& \big\vert\big\vert V \big\vert\big\vert_{F}~.
\end{eqnarray*}
We apply \citet[Theorem 2]{yu2014useful}, a simpler version of the classical Davis-Kahan theorem \citep{davis1970rotation}, to our perturbed matrix.
Let us denote by $(\lambda_1,\lambda_2)$ the first non-zero eigenvalues of $L$ and by $V$ its associated 2-dimensional eigenspace. Similarly, denote by $\tilde{V}$ the 2-dimensional eigenspace associated to the first non-zero eigenvalues of $\tilde{L}$. There exists a rotation matrix $O\in SO_2(\mathbb{R})$ such that
\begin{eqnarray}
||\tilde{V}-VO||_F \leq \frac{2^{3/2}\min(\sqrt{2}||L_H||_2,||L_H||_F)}{\min(|\lambda_1|,|\lambda_2-\lambda_1|)} ~.
\end{eqnarray}
In particular we have
\begin{eqnarray*}
\big\vert\big\vert \tilde{V} - V O \big\vert\big\vert_{2,\infty} &\leq& \big\vert\big\vert \tilde{V} - V O \big\vert\big\vert_{F} \\
\big\vert\big\vert \tilde{V} - V O \big\vert\big\vert_{2,\infty} &\leq& \frac{2^{3/2}\min(\sqrt{2}||L_H||_2,||L_H||_F)}{\min(|\lambda_1|,|\lambda_2-\lambda_1|)}
\end{eqnarray*}
Finally, because $A$ is an R-symmetric-circular Toeplitz matrix, from Theorem \ref{th:without_noise} the rows of $V$ are $n$ ordered points uniformly spaced on the unit circle. Since applying a rotation amounts to translating the angles of these points on the circle, it follows that there exists a cyclic permutation $\tau$ such that
\begin{eqnarray*}
\sup_{i=1,\ldots,n} ||p_i - \tilde{p}_{\tau(i)}||_2 \leq \frac{2^{3/2}\min(\sqrt{2}||L_H||_2,||L_H||_F)}{\min(|\lambda_1|,|\lambda_2-\lambda_1|)}~.
\end{eqnarray*}
\end{proof}
\subsection{Exact recovery with noise for Algorithm \ref{alg:circular-2d-ordering}}
When all the points remain in a sufficiently small ball around the circle, Algorithm \ref{alg:circular-2d-ordering} can exactly find the ordering. Let us first start with a geometrical lemma quantifying the radius of the balls around each $(\cos(\theta_k),\sin(\theta_k))$ so that they do not intersect.
\begin{lemma}\label{lemma:variation_angle_arcos}
For ${\boldsymbol x}\in\mathbb{R}^2$ and $\theta_k = 2\pi k/n$ for $k\in\mathbb{N}$ such that
\begin{eqnarray}\label{eq:safe_ball}
||{\boldsymbol x} - (\cos(\theta_k),\sin(\theta_k))||_2 \leq \sin(\pi/n)~,
\end{eqnarray}
we have
\begin{eqnarray*}
|\theta_x - \theta_k|\leq \pi/n~,
\end{eqnarray*}
where $\theta_x = \tan^{-1}({\boldsymbol x}_2/{\boldsymbol x}_1) + \mathbbm{1} [{\boldsymbol x}_1<0]\pi$.
\end{lemma}
\begin{proof}
Let ${\boldsymbol x}$ satisfy \eqref{eq:safe_ball}. Let us assume without loss of generality that $\theta_k=0$ and $\theta_x\geq 0$. Assume also that ${\boldsymbol x} = \boldsymbol{e}_1+\sin(\pi/n) {\boldsymbol u}_x$, where ${\boldsymbol u}_x$ is a unit vector. An ${\boldsymbol x}$ for which $\theta_x$ is maximal over these constraints is such that ${\boldsymbol u}_x$ and ${\boldsymbol x}$ are orthogonal.
Parametrize ${\boldsymbol u}_x = (\cos(\gamma),\sin(\gamma))$; because ${\boldsymbol u}_x$ and ${\boldsymbol x}$ are orthogonal, we have $\cos(\gamma) = \sin(-\pi/n)$. Finally, since $\theta_x\geq 0$, it follows that $\gamma = \pi/2 +\pi/n$ and hence, by elementary geometrical arguments, $\theta_x = \pi/n$.
\end{proof}
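As a sanity check (not part of the proof), the bound of Lemma~\ref{lemma:variation_angle_arcos} can be verified numerically. The following Python snippet is a minimal sketch of ours, assuming only \texttt{numpy}; it samples points on the boundary of the safe ball \eqref{eq:safe_ball} and checks the angular deviation.
\begin{verbatim}
import numpy as np

# Sketch: check numerically that points at distance sin(pi/n) from
# (cos(theta_k), sin(theta_k)) have an angle within pi/n of theta_k.
n, k = 12, 3
theta_k = 2.0 * np.pi * k / n
center = np.array([np.cos(theta_k), np.sin(theta_k)])
for gamma in np.linspace(0.0, 2.0 * np.pi, 1000):
    x = center + np.sin(np.pi / n) * np.array([np.cos(gamma),
                                               np.sin(gamma)])
    theta_x = np.arctan2(x[1], x[0]) % (2.0 * np.pi)
    d = np.abs(theta_x - theta_k)
    assert min(d, 2.0 * np.pi - d) <= np.pi / n + 1e-12
\end{verbatim}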
\begin{proposition}[Exact circular recovery under noise in Algorithm \ref{alg:circular-2d-ordering}]
Consider a matrix $\tilde{A} = \Pi^T A \Pi + H$ with $A$ an $R$-circular Toeplitz matrix ($\Pi$ being the permutation matrix associated with $\sigma$) and $H$ a symmetric matrix such that
\begin{eqnarray*}
\min(\sqrt{2}||L_H||_2,||L_H||_F) \leq 2^{-3/2} \sin(\pi/n) \min(|\lambda_1|,|\lambda_2-\lambda_1|)~,
\end{eqnarray*}
where $\lambda_1<\lambda_2$ are the first non-zero eigenvalues of the graph Laplacian of $\Pi^T A \Pi$. Denote by $\hat{\sigma}$ the output of Algorithm \ref{alg:circular-2d-ordering} run with input $\tilde{A}$. Then there exists a cyclic permutation $\tau$ such that
\begin{eqnarray}
\hat{\sigma} = \sigma^{-1}\circ\tau^{-1}~.
\end{eqnarray}
\end{proposition}
\begin{proof}
We have
\begin{eqnarray*}
\Pi^T\tilde{A}\Pi = A + \Pi^T H\Pi~.
\end{eqnarray*}
Let $L$ be the graph Laplacian associated with $A$ and $\tilde{L}$ the one associated with $\tilde{A}$. Denote by $(p_i)_{i=1,\ldots,n}$ and $(\tilde{p}_i)_{i=1,\ldots,n}$ the \kLE{2} coming from $L$ and $\tilde{L}$ respectively, so that $(\tilde{p}_{\sigma^{-1}(i)})_{i=1,\ldots,n}$ is the \kLE{2} coming from the graph Laplacian of $\Pi^T\tilde{A}\Pi$.
Applying Proposition \ref{prop:davis_kahan_circular} to $\Pi^T\tilde{A}\Pi$, there exists a cyclic permutation $\tau$ such that
\begin{eqnarray*}
\sup_{i=1,\ldots,n} ||\tilde{p}_{\sigma^{-1}(i)} - p_{\tau(i)}||_2 < \frac{2^{3/2}\min(\sqrt{2}||L_{H^{\pi}}||_2,||L_{H^{\pi}}||_F)}{\min(|\lambda_1|,|\lambda_2-\lambda_1|)}~,
\end{eqnarray*}
with $H^{\pi} = \Pi^T H\Pi$ and $\lambda_1<\lambda_2$ the first non-zero eigenvalues of the graph Laplacian of $A$.
The graph Laplacian involves the diagonal degree matrix $D_H$; in particular, $D_{H^{\pi}} = \Pi^T D_H \Pi$. For the unnormalized Laplacian, it results in $L_{H^{\pi}} = \Pi^T L_H \Pi$, so that the norms of $L_{H^{\pi}}$ and $L_H$ coincide. We hence have
\begin{eqnarray*}
\sup_{i=1,\ldots,n} ||\tilde{p}_{\sigma^{-1}(i)} - p_{\tau(i)}||_2 & < & \frac{2^{3/2}\min(\sqrt{2}||L_{H}||_2,||L_{H}||_F)}{\min(|\lambda_1|,|\lambda_2-\lambda_1|)}\\
\sup_{i=1,\ldots,n} ||\tilde{p}_{i} - p_{(\tau\circ\sigma)(i)}||_2 & < & \sin(\pi/n)~.
\end{eqnarray*}
From Theorem \ref{th:without_noise}, $p_i = (\cos(2\pi i /n),\sin(2\pi i /n))$ for all $i$. It follows that for any $i$
\begin{eqnarray*}
||\tilde{p}_{i} - (\cos(2\pi (\tau\circ\sigma)(i) /n),\sin(2\pi (\tau\circ\sigma)(i) /n))||_2 & < & \sin(\pi/n)~.
\end{eqnarray*}
Algorithm \ref{alg:circular-2d-ordering} recovers the ordering by sorting the values of
\begin{eqnarray*}
\theta_i = \tan^{-1}(\tilde{p}_i^1/\tilde{p}_i^2) + \mathbf{1}[\tilde{p}_i^1<0]\,\pi~,
\end{eqnarray*}
where $\tilde{p}_i = (\tilde{p}_i^1, \tilde{p}_i^2)$. Applying Lemma \ref{lemma:variation_angle_arcos}:
\begin{eqnarray*}
|\theta_i - 2\pi (\tau\circ\sigma)(i) /n| < \pi/n~~\forall i\in\{1,\ldots,n\},
\end{eqnarray*}
so that
\begin{eqnarray}
\theta_{\sigma^{-1}\circ\tau^{-1}(1)}\leq \cdots \leq \theta_{\sigma^{-1}\circ\tau^{-1}(n)}~.
\end{eqnarray}
Finally, $\hat{\sigma} = \sigma^{-1}\circ\tau^{-1}$.
\end{proof}
\section{Additional Algorithms}\label{sec:appendix_algo}
\subsection{Merging connected components}
The new similarity matrix $S$ computed in Algorithm~\ref{algo:Recovery_order_filamentary} is not necessarily the adjacency matrix of a connected graph, even when the input matrix $A$ is. For instance, when the number of nearest neighbors $k$ is low and the points in the embedding are non-uniformly sampled along a curve, $S$ may have several disjoint connected components (let us say there are $C$ of them in the following).
Still, the baseline Algorithm~\ref{alg:spectral} requires a connected similarity matrix as input.
When $S$ is disconnected, we run Algorithm~\ref{alg:spectral} separately in each of the $C$ components, yielding $C$ sub-orderings instead of a global ordering.
However, since $A$ is connected, we can use the edges of $A$ between the connected components to merge the sub-orderings together.
Specifically, given the $C$ ordered subsequences, we build a meta similarity matrix between them as follows. For each pair of ordered subsequences $(c_i, c_j)$, we check whether the elements in one of the two ends of $c_i$ have edges with those in one of the two ends of $c_j$ in the graph defined by $A$. According to that measure of similarity and to the direction of these meta-edges ({\it i.e.}, whether it is the beginning or the end of $c_i$ and $c_j$ that are similar), we merge together the two subsequences that are the closest to each other. We repeat this operation with the rest of the subsequences and the sequence formed by the latter merge step, until there is only one final sequence, or until the meta similarity between subsequences is zero everywhere.
We formalize this procedure in the greedy Algorithm~\ref{alg:connect_clusters}, which is implemented in the package at \url{https://github.com/antrec/mdso}.
Given $C$ reordered subsequences (one per connected component of $S$) $(c_i)_{i=1,\ldots,C}$ that form a partition of $\{1,\ldots,n\}$, and a window size $h$ that defines the length of the ends we consider ($h$ must be smaller than half the smallest subsequence length), we denote by $c_i^-$ (resp. $c_i^+$) the first (resp. the last) $h$ elements of $c_i$. For any pair $c_i, c_j$, $i \neq j \in \{1, \ldots, C\}$, and any combination of ends $\epsilon, \epsilon^\prime \in \{+,-\}$, $a(c_i^{\epsilon}, c_j^{\epsilon^\prime}) = \sum_{u \in c_i^{\epsilon}, v \in c_j^{\epsilon^\prime}} A_{uv}$ is the similarity between the ends $c_i^{\epsilon}$ and $c_j^{\epsilon^\prime}$.
Also, we define the meta-similarity between $c_i$ and $c_j$ by,
\begin{eqnarray}\label{eqn:meta-sim-merge}
s(c_i,c_j) \triangleq \text{max}(a(c_i^+,c_j^+), a(c_i^+,c_j^-), a(c_i^-,c_j^+), a(c_i^-,c_j^-))~,
\end{eqnarray}
and $(\epsilon_i, \epsilon_j) \in \{+,-\}^2$ the combination of signs where the argmax is realized, {\it i.e.}, such that $s(c_i,c_j) = a(c_i^{\epsilon_i}, c_j^{\epsilon_j})$.
Finally, we will use $\bar{c}_i$ to denote the ordered subsequence $c_i$ read from the end to the beginning, for instance if $c=(1,\ldots, n)$, then $\bar{c} = (n, \ldots, 1)$.
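For concreteness, the meta-similarity \eqref{eqn:meta-sim-merge} can be computed as in the following Python sketch (our own illustration, assuming a dense symmetric \texttt{numpy} array $A$ and subsequences given as lists of indices; the actual implementation in the \texttt{mdso} package may differ).
\begin{verbatim}
import numpy as np

def meta_similarity(A, ci, cj, h):
    """Return s(c_i, c_j) and the pair of ends (eps_i, eps_j)
    realizing the maximum in the meta-similarity definition."""
    ends_i = {'-': ci[:h], '+': ci[-h:]}
    ends_j = {'-': cj[:h], '+': cj[-h:]}
    best, best_eps = -np.inf, None
    for ei, ui in ends_i.items():
        for ej, uj in ends_j.items():
            a = A[np.ix_(ui, uj)].sum()  # a(c_i^{eps}, c_j^{eps'})
            if a > best:
                best, best_eps = a, (ei, ej)
    return best, best_eps
\end{verbatim}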
\begin{algorithm}[H]
\caption{Merging connected components}\label{alg:connect_clusters}
\begin{algorithmic} [1]
\REQUIRE $C$ ordered subsequences forming a partition $P=(c_1,\ldots,c_C)$ of $\{1,\ldots,n\}$, an initial similarity matrix $A$, a neighborhood parameter $h$.
\WHILE{$C > 1$}
\STATE Compute meta-similarity $\tilde{S}$ such that $\tilde{S}_{ij} = s(c_i, c_j)$, and meta-orientation $(\epsilon_i, \epsilon_j)$, for all pairs of subsequences, with Equation~\eqref{eqn:meta-sim-merge}.
\IF{$\tilde{S} = 0$} \STATE break \ENDIF \label{line:break-merge}
\STATE find $(i,j) \in \mathop{\rm argmax} \tilde{S}$, and $(\epsilon_i, \epsilon_j)$ the corresponding orientations.
\IF{$(\epsilon_i, \epsilon_j) = (+,-)$}
\STATE $c^{\text{new}} \gets (c_i, c_j)$
\ELSIF{$(\epsilon_i, \epsilon_j) = (+,+)$}
\STATE $c^{\text{new}} \gets (c_i, \bar{c}_j)$
\ELSIF{$(\epsilon_i, \epsilon_j) = (-,-)$}
\STATE $c^{\text{new}} \gets (\bar{c}_i, c_j)$
\ELSIF{$(\epsilon_i, \epsilon_j) = (-,+)$}
\STATE $c^{\text{new}} \gets (\bar{c}_i, \bar{c}_j)$
\ENDIF
\STATE Remove $c_i$ and $c_j$ from $P$.
\STATE Add $c^{\text{new}}$ to $P$.
\STATE $C \gets C - 1$
\ENDWHILE
\ENSURE Total reordered sequence $c^{\text{final}}$, which is a permutation if $C=1$ or a set of reordered subsequences if the loop broke at line~\ref{line:break-merge}.
\end{algorithmic}
\end{algorithm}
\subsection{Computing Kendall-Tau score between two permutations describing a circular ordering}
Suppose we have data having a circular structure, {\it i.e.}, we have $n$ items that can be laid on a circle such that the higher the similarity between two elements is, the closer they are on the circle.
Then, given an ordering of the points that respects this circular structure ({\it i.e.}, a solution to \ref{eqn:circ-seriation}), we can shift this ordering without affecting the circular structure.
For instance, in Figure~\ref{fig:circular-shift-illustration}, the graph has a $\circR$ affinity matrix whether we use the indexing printed in black (outside the circle), or a shifted version printed in purple (inside the circle).
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=.35\textwidth]{images/circular_shift.pdf}
\caption{
Illustration of the shift-invariance of permutations solving a \ref{eqn:circ-seriation} problem.
}
\label{fig:circular-shift-illustration}
\end{center}
\end{figure}
Therefore, we adapt the Kendall-Tau score between two permutations to the case where the two permutations are compared up to a cyclic shift, with Algorithm~\ref{alg:circular-kendall-tau}.
\begin{algorithm}[H]
\caption{Comparing two permutations defining a circular ordering}\label{alg:circular-kendall-tau}
\begin{algorithmic} [1]
\REQUIRE Two permutation vectors of size $n$, $\sigma = \left(\sigma(1), \ldots, \sigma(n)\right)$ and $\pi = \left(\pi(1), \ldots, \pi(n)\right)$
\FOR{$i=1$ \TO $n$}
\STATE $KT(i) \gets \text{Kendall-Tau}(\sigma, \left(\pi(i), \pi(i+1), \ldots, \pi(n), \pi(1), \ldots, \pi(i-1)\right))$
\ENDFOR
\STATE $\text{best score} \gets \max_{i=1,\ldots,n} KT(i)$
\ENSURE best score
\end{algorithmic}
\end{algorithm}
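A minimal Python sketch of Algorithm~\ref{alg:circular-kendall-tau} is given below (our illustration, using \texttt{scipy.stats.kendalltau}; the function name is ours).
\begin{verbatim}
import numpy as np
from scipy.stats import kendalltau

def circular_kendall_tau(sigma, pi):
    """Kendall-Tau score between two permutations of {0,...,n-1},
    maximized over all cyclic shifts of pi."""
    sigma, pi = np.asarray(sigma), np.asarray(pi)
    scores = [kendalltau(sigma, np.roll(pi, -i))[0]
              for i in range(len(pi))]
    return max(scores)
\end{verbatim}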
\section{Introduction}
\label{sec:intro}
With the detection of the first gravitational wave (GW)
signals GW150914~\cite{Abbott:2016blz} and GW151226~\cite{Abbott:2016nmj}
the era of GW astronomy has begun.
Beside black hole binaries, binary neutron stars (BNS) are one of the expected
sources for future detections with the advanced GW interferometers~\cite{Aasi:2013wya}.
The theoretical modeling of the GW signal is crucial to support future GW
astronomy observation of BNS. BNS mergers are also expected to be
bright in the electromagnetic (EM) spectrum. Possible EM counterparts
of the GW signal are short gamma-ray bursts \cite{Paczynski:1986px,Eichler:1989ve,Soderberg:2006bn},
kilonovae~\cite{Tanvir:2013pia,Yang:2015pha,Jin:2016pnm,Metzger:2016pju} (also referred to as macronovae)
and radio flares~\cite{Nakar:2011cw}. Detailed models of EM counterparts
will help the development of multimessenger astronomy.
Modeling BNS mergers requires covering the entire
parameter space of BNSs, including the stars' rotational (spin) effects.
Although observations suggest that most neutron stars (NSs) in binary systems
have comparable individual masses $\sim 1.35 M_\odot$ and
relatively small spins
\cite{Kiziltan:2013oja,Lattimer:2012nd}, this
conclusion might be biased by the small
number of observed BNS. The BNS parameter space could be much richer,
in particular population synthesis models predict a wider range of masses and
mass ratios~\cite{Dominik:2012kk,Dietrich:2015pxa}.
Recent observations of compact binary systems with
mass ratios of $q \approx 1.3$ suggest that BNSs with a significant
mass asymmetry can exist~\cite{Martinez:2015mya,Lazarus:2016hfu}.
As far as spins are concerned, pulsar data indicate that NSs can have
significant rotation even in binary systems. Some of these NSs in
binaries
approach the rotational frequencies of isolated millisecond pulsars.
For example, the NS in the binary system PSR J1807$-$2500B has a
rotation frequency of $239$~Hz \cite{Lorimer:2008se,Lattimer:2012nd},
and one of the double pulsar components
(PSR J0737$-$3039A) has a rotational
frequency of $44$~Hz \cite{Burgay:2003jj}.
There is also evidence that dynamical capture and exchange interactions involving NSs
are a frequent occurrence in globular clusters~\cite{Verbunt:2013kka};
during this process exotic objects,
such as double millisecond pulsars, might form~\cite{Benacquista:2011kv}.
The only possibility to study the dynamics and waveforms in the time
period shortly before and after the merger of BNS systems is to
perform
numerical relativity (NR) simulations that include general relativistic
hydrodynamics (GRHD). Despite the large progress of the field during
the last 10 years, spin effects in BNS mergers have been investigated
in only a few works. A main reason was the lack of consistent and realistic
initial data for the simulations, a crucial prerequisite for NR evolutions.
General-relativistic quasi-equilibrium configurations of rotating NSs
in circular binary systems can now be computed within the constant
rotational velocity (CRV) approach~\cite{Tichy:2011gw,Tichy:2012rp}.
These data are neither corotational nor irrotational, and permit, for
the first time, the NR-GW modeling
of realistic BNS sources with spins. (See Sec.~I of
\cite{Bernuzzi:2013rza} for a discussion). Alternative NR evolutions
of spinning BNS were presented
in~\cite{Kastaun:2013mv,Tsatsin:2013jca,Kastaun:2014fna,East:2015vix},
but employed constraint violating initial data.
Spinning BNS were also considered with a smooth particle hydrodynamics code
under the assumption of conformal flatness,
e.g.~\cite{Bauswein:2015vxa}.
Evolutions of CRV initial data have been considered
in~\cite{Bernuzzi:2013rza,Dietrich:2015pxa,Tacik:2015tja}.
We have presented the first evolutions covering the last 3 orbits and
the postmerger phase for
BNS systems described by polytropic equations of state
\cite{Bernuzzi:2013rza}. That work proposed an analysis of the
conservative dynamics in terms of gauge-invariant curves of the binding
energy vs. angular momentum and a very preliminary analysis of the
spin effects on the waveform.
In~\cite{Dietrich:2015pxa} we have made significant improvements in
the way we construct CRV initial data, which allows us to investigate
BNS mergers in an extended parameter space, and presented preliminary
evolutions of generic mergers (i.e.~with precession).
Ref.~\cite{Tacik:2015tja} presented an independent implementation of
CRV initial data and preliminary evolutions, but did not cover the
final merger and postmerger phases.
Several important questions remain open.
A detailed understanding of the role of spin interactions will
be fundamental for building analytical models of the inspiral--merger
phase. Thus, it is important to further explore the BNS dynamics with
long simulations and spanning a larger parameter space than previously
considered. The influence of the NS spins on the GW phase evolution
during the last orbits and up to merger is not fully understood but is
very relevant for GW data analysis \cite{Brown:2012qf,Agathos:2015uaa}. Understanding
the spin influence on the merger remnant might be relevant for both GW and
EM observations. Also, the role
of the NSs rotation on the dynamical ejecta
and on the EM counterparts has not been explored.
In this article, we investigate rotational (spin) effects in
multi-orbit BNS merger simulations with
different mass ratios and propose first answers to the questions above.
Our simulations cover $\sim12$ orbits to merger and
postmerger for mass ratios $q=1,1.25,1.5$, two different equations of
state (EOSs), and spin aligned or anti-aligned to the orbital angular momentum.
These simulations are the first of their kind, and will support the
development of analytical models of the GWs and of the EM emission
from merger events.
This paper extends the results of Ref.~\cite{Dietrich:2016hky}
(hereafter Paper I) that was limited to irrotational
configurations and focused on the effect of the mass ratio.
Our goal is to cover a significant part of the BNS
parameter space.
The article is structured as follows:
In Sec.~\ref{sec:methods}, we describe briefly
the numerical methods and some analysis tools.
In Sec.~\ref{sec:config} we present the configurations
employed in this work.
Section~\ref{sec:dynamics} summarizes the dynamics
of the merger process, where
the spin evolution of the individual stars and the energetics during the
inspiral and post-merger are discussed.
In Secs.~\ref{sec:ejecta}--\ref{sec:EM}, the dynamical ejecta,
the GW signal, and possible electromagnetic (EM) counterparts
are studied. We conclude in Sec.~\ref{sec:conclusion}.
Throughout this work we use geometric units, setting $c=G=M_\odot=1$,
though we will sometimes include $M_\odot$ explicitly or quote values
in cgs units for better understanding and astrophysical interpretation.
Spatial indices are denoted by Latin letters running from 1 to 3 and
Greek letters are used for spacetime indices running from 0 to 3.
\section{Simulation methods}
\label{sec:methods}
\subsection{Initial configurations}
Our initial configurations are constructed with the pseudospectral
SGRID code~\cite{Tichy:2006qn,Tichy:2009yr,Tichy:2009zr,Dietrich:2015pxa}.
We use the conformal thin sandwich equations~\cite{Wilson:1995uh,Wilson:1996ty,York:1998hy}
together with the CRV approach~\cite{Tichy:2011gw,Tichy:2012rp} to solve
the constraint equations.
We construct quasi-equilibrium configurations in quasi-circular orbits,
assuming a helical Killing vector. We follow exactly the same setup as
in Paper~I to which we refer for more details.
In order to construct BNS with different spins the approach
of~\cite{Tichy:2009zr,Dietrich:2015pxa} is adopted.
The CRV method does not allow one to prescribe the spin (or the
dimensionless spin) directly; only the rotational part of the
four-velocity can be specified as free data.
We use Eq.~(C3) of Ref.~\cite{Dietrich:2015pxa} to obtain an estimate
for the four-velocity corresponding to a given
dimensionless spin of $\chi=0.1$. Once the rotational velocity is
fixed, we compute a single NS with the same baryonic mass as the one
in the binary and measure its ADM angular momentum. This gives the
dimensionless spin of one component of the binary. The procedure is
repeated for the other component.
For binary configurations in quasi-equilibrium, the described procedure
gives consistent results for the ADM angular momenta of the spinning
and irrotational BNS. In particular, the difference between the
$J_{\rm ADM}$ of the spinning and irrotational BNS is
consistent with the sum of the spin estimates, $\Delta J_{\rm
ADM}\sim(S^A+S^B)$, up to $10^{-2}$; fractional errors are always
$\lesssim 0.3 \%$.
Those small differences might also be caused by small differences
in the initial orbital frequency.
The properties of the initial BNS configuration are summarized in
Tab.~\ref{tab:config}, and discussed in more detail in
Sec. \ref{sec:config}.
\begin{tiny}
\begin{table*}[t]
\centering
\caption{BNS configurations.
The first column defines the configuration name.
The next 9 columns describe the physical properties of the individual stars:
the EOS, the gravitational masses of the individual stars $M^{A,B}$,
the baryonic masses of the individual stars $M_{b}^{A,B}$, the stars' spins
$S^{A,B}$, and dimensionless spins $\chi^{A,B}$.
The last 5 columns define the tidal coupling constant $\kappa_2^T$,
the mass-weighted spin $\chi_{mw}$,
the initial GW frequency $M \omega_{22}^0$,
the ADM-Mass $M_{\rm ADM}$,
and the ADM-angular momentum $J_{\rm ADM}$.}
\setlength{\tabcolsep}{1pt}
\begin{tabular}{c|l|ccccccccc|ccccc}
& Name & EOS & $M^A$ & $M^B$ & $M_b^A$ & $M_b^B$ & $S^A$ & $S^B$ & $\chi^A$ & $\chi^B$ & $\kappa_2^T$ & $\chi_{\rm mw}$ & $M \omega_{22}^0$ & $ M_{ADM}$ & $J_{ADM}$ \\
\hline
\hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{\textbf{$q=1.00$}}}
& ALF2-137137$^{(00)}$ & ALF2 & 1.375008 & 1.375008 & 1.518152 & 1.518152 & 0.0000 & 0.0000 & 0.000 & 0.000 & 125 & 0.000 & 0.0360 & 2.728344 & 8.1200 \\
& ALF2-137137$^{(\uparrow \uparrow)}$ & ALF2 & 1.375516 & 1.375516 & 1.518152 & 1.518152 & 0.1936 & 0.1936 & 0.102 & 0.102 & 125 & 0.102 & 0.0360 & 2.729319 & 8.4811 \\
& ALF2-137137$^{(\uparrow \downarrow)}$ & ALF2 & 1.375516 & 1.375516 & 1.518152 & 1.518152 & 0.1936 & -0.1936 & 0.102 & -0.102 & 125 & 0.000 & 0.0360 & 2.729333 & 8.1240 \\
& ALF2-137137$^{(\uparrow 0)}$ & ALF2 & 1.375516 & 1.375008 & 1.518152 & 1.518152 & 0.1936 & 0.0000 & 0.102 & 0.000 & 125 & 0.051 & 0.0360 & 2.728816 & 8.2997 \\
\hline \multirow{4}{*}{\rotatebox[origin=c]{90}{\textbf{$q=1.00$}}}
& H4-137137$^{(00)}$ & H4 & 1.375006 & 1.375006 & 1.498528 & 1.498528 & 0.0000 & 0.0000 & 0.000 & 0.000 & 188 & 0.000 & 0.0348 & 2.728211 & 8.0934 \\
& H4-137137$^{(\uparrow \uparrow)}$ & H4 & 1.375440 & 1.375440 & 1.498528 & 1.498528 & 0.1892 & 0.1892 & 0.100 & 0.100 & 188 & 0.100 & 0.0348 & 2.729056 & 8.4508 \\
& H4-137137$^{(\uparrow \downarrow)}$ & H4 & 1.375440 & 1.375440 & 1.498528 & 1.498528 & 0.1892 & -0.1892 & 0.100 & -0.100 & 188 & 0.000 & 0.0349 & 2.729067 & 8.0983 \\
& H4-137137$^{(\uparrow 0)}$ & H4 & 1.375440 & 1.375006 & 1.498528 & 1.498528 & 0.1892 & 0.000 & 0.100 & 0.000 & 188 & 0.050 & 0.0348 & 2.728643 & 8.2711 \\
\hline
\hline \multirow{4}{*}{\rotatebox[origin=c]{90}{\textbf{$q=1.25$}}}
& ALF2-122153$^{(00)}$ & ALF2 & 1.527790 & 1.222231 & 1.707041 & 1.334040 & 0.0000 & 0.0000 & 0.000 & 0.000 & 127 & 0.000 & 0.0357 & 2.728212 & 7.9556 \\
& ALF2-122153$^{(\uparrow \uparrow)}$ & ALF2 & 1.528484 & 1.222602 & 1.707041 & 1.334040 & 0.2430 & 0.1521 & 0.104 & 0.102 & 127 & 0.103 & 0.0357 & 2.729255 & 8.3300 \\
& ALF2-122153$^{(\uparrow \downarrow)}$ & ALF2 & 1.528484 & 1.222602 & 1.707041 & 1.334040 & 0.2430 & -0.1521 & 0.104 & -0.102 & 127 & 0.013 & 0.0358 & 2.729256 & 8.0479 \\
& ALF2-122153$^{(\uparrow 0)}$ & ALF2 & 1.528484 & 1.222231 & 1.707041 & 1.334040 & 0.2430 & 0.0000 & 0.104 & 0.000 & 127 & 0.058 & 0.0357 & 2.728907 & 8.1895 \\
\hline \multirow{4}{*}{\rotatebox[origin=c]{90}{\textbf{$q=1.25$}}}
& H4-122153$^{(00)}$ & H4 & 1.527789 & 1.222228 & 1.683352 & 1.318080 & 0.0000 & 0.0000 & 0.000 & 0.000 & 193 & 0.000 & 0.0349 & 2.728675 & 8.0248 \\
& H4-122153$^{(\uparrow \uparrow)}$ & H4 & 1.528365 & 1.222546 & 1.683352 & 1.318080 & 0.2329 & 0.1499 & 0.100 & 0.100 & 193 & 0.100 & 0.0349 & 2.729567 & 8.3899 \\
& H4-122153$^{(\uparrow \downarrow)}$ & H4 & 1.528365 & 1.222546 & 1.683352 & 1.318080 & 0.2329 & -0.1499 & 0.100 & -0.100 & 193 & 0.011 & 0.0349 & 2.729585 & 8.1135 \\
& H4-122153$^{(\uparrow 0)}$ & H4 & 1.528365 & 1.222228 & 1.683352 & 1.318080 & 0.2329 & 0.0000 & 0.100 & 0.00 & 193 & 0.056 & 0.0349 & 2.729250 & 8.2491 \\
\hline
\hline \multirow{4}{*}{\rotatebox[origin=c]{90}{\textbf{$q=1.50$}}}
& ALF2-110165$^{(00)}$ & ALF2 & 1.650015 & 1.100016 & 1.862057 & 1.189870 & 0.0000 & 0.0000 & 0.000 & 0.000 & 133 & 0.000 & 0.0356 & 2.728542 & 7.6852 \\
& ALF2-110165$^{(\uparrow \uparrow)}$ & ALF2 & 1.650924 & 1.100296 & 1.862057 & 1.189870 & 0.2919 & 0.1223 & 0.107 & 0.101 & 133 & 0.105 & 0.0355 & 2.729669 & 8.0732 \\
& ALF2-110165$^{(\uparrow \downarrow)}$ & ALF2 & 1.650924 & 1.100296 & 1.862057 & 1.189870 & 0.2919 & -0.1223 & 0.107 & -0.101 & 133 & 0.024 & 0.0355 & 2.729677 & 7.8475 \\
& ALF2-110165$^{(\uparrow 0)}$ & ALF2 & 1.650924 & 1.100016 & 1.862057 & 1.189870 & 0.2919 & 0.0000 & 0.107 & 0.000 & 133 & 0.064 & 0.0355 & 2.729404 & 7.9599 \\
\hline \multirow{4}{*}{\rotatebox[origin=c]{90}{\textbf{$q=1.50$}}}
& H4-110165$^{(00)}$ & H4 & 1.650017 & 1.100006 & 1.834799 & 1.176579 & 0.0000 & 0.0000 & 0.000 & 0.000 & 209 & 0.000 & 0.0350 & 2.729385 & 7.81991 \\
& H4-110165$^{(\uparrow \uparrow)}$ & H4 & 1.650752 & 1.100242 & 1.834799 & 1.176579 & 0.2745 & 0.1204 & 0.101 & 0.099 & 209 & 0.100 & 0.0350 & 2.730283 & 8.18713 \\
& H4-110165$^{(\uparrow \downarrow)}$ & H4 & 1.650752 & 1.100242 & 1.834799 & 1.176579 & 0.2745 & -0.1204 & 0.101 & -0.099 & 209 & 0.021 & 0.0350 & 2.730267 & 7.96085 \\
& H4-110165$^{(\uparrow 0)}$ & H4 & 1.650752 & 1.100006 & 1.834799 & 1.176579 & 0.2745 & 0.0000 & 0.101 & 0.000 & 209 & 0.061 & 0.0350 & 2.730050 & 8.07357 \\
\hline \hline
\end{tabular}
\label{tab:config}
\end{table*}
\end{tiny}
\subsection{Evolutions}
Dynamical simulations are performed with the BAM
code~\cite{Thierfelder:2011yi,Brugmann:2008zz,Dietrich:2015iva},
employing the Z4c scheme~\cite{Bernuzzi:2009ex,Hilditch:2012fp} and the 1+log and
gamma-driver conditions for the
gauge system~\cite{Bona:1994a,Alcubierre:2002kk,vanMeter:2006vi}.
The GRHD equations are solved in
conservative form by defining Eulerian conservative variables from the
rest-mass density $\rho$, pressure $p$, internal energy $\epsilon$, and
3-velocity $v^i$ with a high-resolution-shock-capturing
method~\cite{Thierfelder:2011yi} based on primitive reconstruction
and the Local-Lax-Friedrichs central scheme for the numerical
fluxes. The GRHD system is closed by an EOS.
We work with two EOSs modeled as piecewise polytropic
fits~\cite{Read:2008iy,Dietrich:2016hky}
and include thermal effects with an additive pressure contribution
$p_{\rm th} = (\Gamma_{\rm th}-1)\rho\epsilon$~\cite{Shibata:2005ss,Bauswein:2010dn}
setting $\Gamma_{\rm th}=1.75$.
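For illustration, a schematic Python version of this EOS closure reads as follows (a sketch under our conventions: \texttt{Ks}, \texttt{Gammas} and the dividing rest-mass densities \texttt{rho\_bounds} are the piecewise-polytropic fit parameters, and \texttt{eps\_th} denotes the thermal part of the specific internal energy).
\begin{verbatim}
import numpy as np

def pressure(rho, eps_th, Ks, Gammas, rho_bounds, Gamma_th=1.75):
    """Cold piecewise-polytropic pressure plus the thermal part
    p_th = (Gamma_th - 1) * rho * eps_th.  Ks and Gammas have
    len(rho_bounds) + 1 entries, with continuity of the cold
    pressure assumed to be encoded in Ks."""
    i = np.searchsorted(rho_bounds, rho)  # polytropic piece index
    return Ks[i] * rho**Gammas[i] + (Gamma_th - 1.0) * rho * eps_th
\end{verbatim}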
The Berger-Oliger algorithm is employed for the time stepping~\cite{Berger:1984zza} and we
make use of an additional refluxing algorithm to enforce mass conservation
across mesh refinement boundaries~\cite{Berger:1989,East:2011aa}
as in previous works~\cite{Dietrich:2014wja,Dietrich:2015iva,Dietrich:2016hky}.
Restriction and prolongation between the refinement levels are performed with
an averaging scheme and a 2nd-order essentially non-oscillatory
scheme, respectively.
We employ the same grid setup as the ``shell'' setup in~Paper~I,
i.e.~the numerical domain is made of a hierarchy of cell-centered nested
Cartesian grids, where the outermost level is substituted by a
multipatch (cubed-sphere) grid~\cite{Ronchi:1996,Thornburg:2004dv,Pollney:2009yz,Hilditch:2012fp}.
In total we have used 4 different grid setups summarized in Tab.~\ref{tab:grid}.
\begin{table}[t]
\centering
\caption{Grid configuration:
name, EOS, finest grid spacing $h_{L-1}$, radial resolution inside the
shells $h_r$, number of points $n$ ($n^{mv}$) in the fixed (moving) levels,
radial point number $n_r$ and azimuthal number of points $n_\theta$ in the shells,
the radius $r_1$ up to which the GRHD equations are solved, and the outer boundary $r_b$.}
\begin{tabular}{l|l|cccccccc}
Name & EOS & $h_{L-1}$ & $h_{r}$ & $n$ & $n^{mv}$ & $n_r$ & $n_\theta$ & $r_1$ & $r_b$ \\
\hline
\hline
R1 & ALF2 & 0.250 & 8.00 & 128 & 64 & 128 & 64 & 572 & 1564 \\
R2 & ALF2 & 0.167 & 5.33 & 192 & 96 & 192 & 96 & 552 & 1555 \\
\hline
R1 & H4 & 0.250 & 8.00 & 128 & 72 & 128 & 64 & 572 & 1564 \\
R2 & H4 & 0.167 & 5.33 & 192 & 108 & 192 & 96 & 552 & 1555 \\
\hline \hline
\end{tabular}
\label{tab:grid}
\end{table}
\subsection{Simulation analysis}
Most of our analysis tools were summarized
in Sec.~III of Paper~I. They include the computation of the ejecta
quantities, the disk masses,
the entropy indicator, the amount of mass transfer during the inspiral,
and the way we extract GWs.
Here, we extend the analysis tools by including a quasi-local measure of the
spin of the NSs. Following a similar approach as in
Ref.~\cite{Tacik:2015tja},
we evaluate the surface integral
\begin{equation}
\label{eq:quasi_local_spin}
S^i \approx \frac{1}{8\pi}
\int_{r_S} \text{d}^2 x \sqrt{\gamma} \left( \gamma^{k j} K_{l k} - \delta_{l}^{j} K \right)
n_{j} \varphi^{li},
\end{equation}
on coordinate spheres with radius $r_S$ around the NSs.
$\varphi^{li} = \epsilon^{l i k } x_k $ defines the approximate rotational Killing vectors
in Cartesian coordinates ($\varphi^{l1},\varphi^{l2},\varphi^{l3}$),
$K_{ij}$ denotes the extrinsic
curvature, $\gamma^{ij}$ the inverse 3-metric,
and $n_i = (x_i-x_i^{\rm NS})/r$ the normal vector with respect to the
center of the NS.
The center is given by the minimum of the lapse inside
the NS\footnote{While
this paper was being written, the authors of~\cite{Kastaun:2016private} implemented
exactly the same method as proposed here to measure the spins of the
single NSs during BNS inspirals. Both implementations
have been compared and give similar results.}.
Differently from \cite{Tacik:2015tja} we do not determine the center of the
coordinate sphere by the maximum density and we do not use comoving coordinates in our
simulations.
Let us discuss the interpretation of
Equation~\eqref{eq:quasi_local_spin}.
For equilibrium rotating NS
spacetimes, Eq.~\eqref{eq:quasi_local_spin} in the limit
$r_S\to\infty$ reproduces the ADM angular momentum of the (isolated) NS,
see Appendix~\ref{app:quasi_local_single}.
In dynamical BNS evolutions, Eq.~\eqref{eq:quasi_local_spin} allows us
to measure the spin evolution and spin direction. We stress that, in
the BNS case, the spin measure has some caveats:
(i) no unambiguous spin definition of a single object inside a binary
system exists in general relativity;
(ii) Eq.~\eqref{eq:quasi_local_spin} is evaluated in the strong-field region although
it is only well defined at spatial infinity;
(iii) the $r_S$ spheres are gauge dependent.
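Despite these caveats, the integral itself is straightforward to evaluate. For illustration, a schematic (not the BAM implementation) evaluation of Eq.~\eqref{eq:quasi_local_spin} on a coordinate sphere could look as follows in Python, where \texttt{gamma\_inv} and \texttt{K} are user-supplied callables returning the inverse 3-metric and the extrinsic curvature at a Cartesian point (e.g.~interpolated from the evolution grid), and where, for simplicity of the sketch, the flat area element replaces $\text{d}^2x\,\sqrt{\gamma}$.
\begin{verbatim}
import numpy as np

def quasi_local_spin(gamma_inv, K, center, rS, nth=64, nph=128):
    eps = np.zeros((3, 3, 3))              # Levi-Civita symbol
    eps[0,1,2] = eps[1,2,0] = eps[2,0,1] = 1.0
    eps[0,2,1] = eps[2,1,0] = eps[1,0,2] = -1.0
    S = np.zeros(3)
    dA = (np.pi / nth) * (2.0 * np.pi / nph) * rS**2
    for th in (np.arange(nth) + 0.5) * np.pi / nth:
        for ph in (np.arange(nph) + 0.5) * 2.0 * np.pi / nph:
            n = np.array([np.sin(th) * np.cos(ph),
                          np.sin(th) * np.sin(ph), np.cos(th)])
            x = center + rS * n            # point on the sphere
            g, Kij = gamma_inv(x), K(x)
            trK = np.einsum('kj,kj->', g, Kij)
            T = np.einsum('kj,lk->lj', g, Kij) - trK * np.eye(3)
            phiv = np.einsum('lik,k->li', eps, x - center)
            S += dA * np.sin(th) * np.einsum('lj,j,li->i',
                                             T, n, phiv)
    return S / (8.0 * np.pi)
\end{verbatim}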
\section{BNS Configurations}
\label{sec:config}
We consider BNS configurations with fixed total mass of
$M=M^A+M^B=2.75M_\odot$, and vary EOS, mass-ratio, and the spins. The
spins are always aligned or antialigned to the orbital angular
momentum.
The EOSs are ALF2 and H4; both support masses of isolated NSs above
$2M_\odot$ and are compatible with current astrophysical constraints.
We vary the mass ratio,
\begin{equation}
q := \frac{M^A}{M^B} \geq1 \ ,
\end{equation}
spanning the values $q=(1.0,1.25,1.5)$.
For every EOS and $q$, we consider four different spin configurations:
\begin{enumerate}
\item[$(00)$] none of the stars is spinning;
\item[$(\uparrow \uparrow)$] both spins are aligned with the orbital momentum;
\item[ $(\uparrow \downarrow)$] the spin of star A is aligned, the other
star spin is anti-aligned;
\item[$(\uparrow 0)$] the spin of star A is aligned to the orbital angular
momentum and the other NS is
considered to be irrotational,
\end{enumerate}
where the dimensionless spin magnitude
\begin{equation}
\chi := \frac{S}{M^2}
\end{equation}
of each star is either $\chi=0$ or $\chi\sim0.1$.
The properties of the considered BNSs are summarized in Tab.~\ref{tab:config}.
A BNS configuration is determined by its EOS, individual masses (or mass
ratio), and the two spins. Focusing on the GWs, we parametrize this
configuration space as follows.
Spin effects are described by the mass-weighted spin combination,
\begin{equation}
\chi_{\rm mw} := \frac{M^A \chi^A + M^B \chi^B}{(M^A+M^B)} \ , \label{eq:chimw}
\end{equation}
which is used for phenomenological waveforms models and during GW searches,
e.g.~\cite{Ajith:2009bn,TheLIGOScientific:2016pea}.
The mass-weighted spin is related to the effective spin $\chi_{\phi}$,
which captures the leading order spin effects of the phase evolution via
\begin{equation}
\chi_{\phi} = \chi_{\rm mw} - \frac{38\nu}{113} (\chi^A + \chi^B)\ ,
\end{equation}
with the symmetric mass ratio $\nu = M^A M^B/ (M^A+M^B)^2$.
For the setups presented here $\chi_{\rm mw} \approx \chi_{\phi}$,
which is the reason why we restrict ourselves to the more commonly used $\chi_{\rm mw}$.
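As a concrete example, the two spin combinations can be evaluated with a few lines of Python (a sketch in our notation):
\begin{verbatim}
def spin_combinations(MA, MB, chiA, chiB):
    M = MA + MB
    nu = MA * MB / M**2                   # symmetric mass ratio
    chi_mw = (MA * chiA + MB * chiB) / M
    chi_phi = chi_mw - 38.0 * nu / 113.0 * (chiA + chiB)
    return chi_mw, chi_phi

# e.g. ALF2-137137 (up,up): chi_mw = 0.102, chi_phi ~ 0.085
print(spin_combinations(1.3755, 1.3755, 0.102, 0.102))
\end{verbatim}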
Most of the NS structure and EOS information is encoded in the
tidal polarizability coefficient~\cite{Bernuzzi:2014kca,Bernuzzi:2015rla}
\begin{equation}
\kappa^T_2 := 2 \left( \frac{q^4}{(1+q)^5} \frac{k_2^A}{C_A^5} +
\frac{q}{(1+q)^5} \frac{k_2^B}{C_B^5}
\right) \
\label{eq:kappa}
\end{equation}
that describes at leading order the NSs' tidal
interactions. $\kappa^T_2$ depends on
the EOS via the quadrupolar dimensionless Love number $k_2$ of isolated spherical
star configurations, e.g.~\cite{Damour:2009vw},
and the compactness $C$ of the irrotational stars (defined as the ratio of the
gravitational mass in isolation to the star's proper radius).
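In practice, Eq.~\eqref{eq:kappa} is straightforward to evaluate; a minimal Python helper (our notation) reads:
\begin{verbatim}
def kappa2T(q, k2A, CA, k2B, CB):
    """Tidal coupling constant kappa_2^T, with q = M^A/M^B >= 1,
    quadrupolar Love numbers k2 and compactnesses C."""
    return 2.0 * (q**4 / (1.0 + q)**5 * k2A / CA**5
                  + q / (1.0 + q)**5 * k2B / CB**5)
\end{verbatim}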
As a further parameter we choose the mass ratio, since
the dynamics of a nonspinning black hole binary is entirely
described by $q$.
The 3D parametrization $(q,\chi_{\rm mw},\kappa_2^T)$ is a (possible) minimal
choice for the description of BNS GWs. The binary total
mass $M$, in particular, scales trivially in absence of tides and
its dependency in the tidal waveform is hidden in the $\kappa^T_2$, to leading order.
It should be noted, however, that $(q,\chi_{\rm mw},\kappa_2^T)$ are
not independent variables and some degeneracies exist ($\kappa_2^T$ and $\chi_{\rm mw}$
depend on $q$, for instance). Furthermore, we note that the intrinsic
NS rotation can also influence tidal effects during the evolution.
In this work, we use for consistency $(q,\chi_{\rm
mw},\kappa_2^T)$ to study the parametric dependency of
quantities other than GWs, such as the ejecta and the EM luminosity.
The $(q,\chi_{\rm mw},\kappa_2^T)$ parameter space coverage of our
work is shown in Fig.~\ref{fig:param}.
In total we consider 24 BNSs. The irrotational configurations
were already presented in~Paper~I, but 36 new
simulations were performed for this paper.
Every configuration is simulated with two
different resolutions R1 and R2, see Tab.~\ref{tab:grid}.
This allows us to place error bars on our results, which will be
conservatively estimated as the difference between the resolutions,
see~\cite{Bernuzzi:2011aq,Bernuzzi:2016pie} for a detailed analysis.
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{fig01.pdf}
\caption{The $(q,\chi_{\rm mw},\kappa_2^T)$ parameter space
coverage.
Different colors refer to the EOS ALF2 (orange) and H4 (green).
Different markers correspond to different spin configurations:
circles $(00)$, triangles pointing down ($\uparrow \downarrow$),
triangles pointing right $(\uparrow 0)$, and
triangles pointing up $(\uparrow \uparrow)$.}
\label{fig:param}
\end{figure}
\section{Dynamics}
\label{sec:dynamics}
\subsection{Qualitative discussion}
\label{sec:Qualitative_analysis}
\begin{figure*}[t]
\includegraphics[width=1\textwidth]{fig02.png}
\caption{Rest-mass density profile inside the orbital plane for simulations employing the H4 EOS and using
the R2 grid setup. The snapshots represent the moments of merger.
The panels refer to (from top to bottom) mass ratios $q=1.00,q=1.25,q=1.50$ and
(from left to right) spin configurations $(00),(\uparrow \downarrow),(\uparrow 0),(\uparrow \uparrow)$.
The rest-mass density $\rho$ is shown on a logarithmic scale from blue to red.
The rest-mass density of unbound material is colored from brown to green.
Most material gets ejected from the tidal tails of the NS
inside the orbital plane. }
\label{fig:2d_rho_H4}
\end{figure*}
Our simulations span $N_\text{orb}\sim10-12$ orbits (20-24 GW cycles)
to merger, the number of orbits increases (decreases) for spin
aligned (antialigned) to the
orbital angular momentum. In this regime, spin effects typically
contribute up to $\Delta N_{Spin}\sim\pm1$ orbits. The spin effect
is comparable to the effect of the EOS variation and of the mass
ratio, $\Delta N_{\rm EOS}\sim\Delta N_{q}\sim \Delta N_{Spin}$. BNS with
stiffer EOS and/or larger $q$ take fewer orbits to merge for a
fixed initial GW
frequency\footnote{Recall that for our configurations the H4 EOS is
stiffer than the ALF2 EOS.}.
Figure~\ref{fig:2d_rho_H4} shows the rest-mass density profile inside the
orbital plane for the configurations employing
the H4 EOS with resolution R2. The snapshots are taken at the moment of merger,
i.e.,~at the time when the amplitude of the GW reaches its maximum.
Although the initial orbital frequency is almost identical for all systems with
the same mass-ratio and EOS,
cf.~Tab.~\ref{tab:config}, the ``orbital phase'' at the moment of
merger differs due to the spin of the individual stars.
In general, if a NS has spin aligned to the orbital angular momentum,
the binary is less bound, leading to a slower phase evolution
with respect to the irrotational case. Conversely, if a NS has anti-aligned spin,
the binary is more bound, leading to a faster phase evolution
and an earlier moment of merger,
i.e.~at lower frequencies (see Sec.~\ref{sec:GW}).
This {\it spin-orbit} (SO) effect has a solid analytical
basis~\cite{Damour:2001tu},
and was already reported in both BBH
simulations~\cite{Campanelli:2006uy} (``orbital hang-up'' effect) and in BNS
setups~\cite{Kastaun:2013mv,Tsatsin:2013jca,Bernuzzi:2013rza,Dietrich:2015pxa}.
In the BNS configurations with $(\uparrow \downarrow)$ and
equal masses ($q=1$) the SO effect is zero at leading order,
cf.~Eq.~\eqref{Hso} and discussion below.
Notably in these cases, the effects of the {\it spin-spin} interactions
(SS) are observed in our simulations. Comparing the irrotational BNS
$(00)$ with the $(\uparrow \downarrow)$ configuration,
the latter has a faster phase evolution, i.e.~merges at lower frequencies.
After merger, the simulations are continued for $\sim 30$~ms.
All the BNS considered in this work form a hypermassive neutron star
(HMNS). The presence of spin
influences the angular momentum of the remnant
HMNS. Configurations with $(\uparrow \uparrow)$, for example,
have additional angular momentum support and the
HMNS has a longer lifetime. Spin effects influence the HMNS's
rotation law and its dynamical evolution [see
Sec.~\ref{sec:GW:postmerger} for a detailed discussion].
Overall, spin effects are observed in the remnant and ejecta, but
better resolved during the early part of the simulations.
\subsection{Spin Evolution}
\label{sec:Spin-Evolution}
The evolution of the quasi-local spin computed by
Eq.~\eqref{eq:quasi_local_spin}, is shown in
Fig.~\ref{fig:quasi_local} for the representative case
H4-137137$^{(\uparrow \downarrow)}$.
We find that, within our uncertainties, the spin magnitudes remain
roughly constant up to the actual collision of the two stars.
When the two stars finally merge, there is a single surface integral,
and Eq.~\eqref{eq:quasi_local_spin} estimates the orbital angular
momentum of the merger remnant.
Our results are consistent with what was observed in~\cite{Tacik:2015tja},
although the latter do not extend to merger.
They are also consistent with BBH simulations in which spins remain roughly constant
up to the formation of a common horizon
\cite{Lovelace:2010ne,Scheel:2014ina,Ossokine:2015vda}.
However, comparing with \cite{Tacik:2015tja} our results have larger
uncertainties, whose origin we discuss in the following.
Fig.~\ref{fig:quasi_local} shows that, during the evolution, the spin
magnitude of the NS with spin aligned to the orbital angular momentum
seems to be larger by $\sim 20\%$ compared to the other. This happens
despite the fact that the rotational velocities (and initial spin
values) are initially of the same magnitude. A similar effect was
shown in~\cite{Tacik:2015tja}, but it is more pronounced in our setup.
We argue this is caused by the fact that we are
using coordinate spheres in a non-comoving
coordinate system. As a result, our setup does not capture accurately the
(approximate) rotational symmetry around each star, the latter being
numerically entangled with the orbital motion.
The spin magnitudes are, consequently, overestimated.
For the same reason we observe a drift of the spin magnitude that
increases as the merger is approached. We believe this effect is partially
related to finite numerical accuracy.
Note finally that for irrotational BNSs we measure a residual spin
$S\sim 10^{-2}$. The value is consistent with the accuracy level of
the initial data, see also our results on isolated stars in
App.~\ref{app:quasi_local_single}.
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{fig03.pdf}
\caption{Quasi-local measurement of the individual spin of the two NSs
for H4-137137$^{(\uparrow \downarrow)}$.
The gray dashed dotted lines in the diagram represent the value
computed from the initial data solver and given in Tab.~\ref{tab:config}.
Different colors refer to different radii of the coordinate spheres.
The increasing quasi-local spin is most likely caused by the choice of
the center of the integration surface and by using a coordinate sphere not taking tidal deformations
into account.}
\label{fig:quasi_local}
\end{figure}
\subsection{Energetics}
\label{sec:dyn:inspiral}
\begin{figure*}[t]
\includegraphics[width=1\textwidth]{fig04.png}
\caption{Binding energy vs.~specific angular momentum curves
$E(\ell)$ for the equal mass configurations (upper panels).
The circles mark the moment of merger for all configurations.
We also include a non-spinning BBH configuration from the public SXS
catalog, see text for more details.
The bottom panels show the individual contributions to the binding energy,
Eq.~\eqref{eq:Eb_ansatz}, we present the $SO$ (green), $S^2$ (orange), ${S_{A}S_{B}}$ (blue)
configurations obtained from the NR data as solid lines. For those
contributions we also include 3.5PN (App.~\ref{app:PN_Ej}) estimates as dashed lines.
The tidal contributions are shown as cyan lines.
We also include the SO contribution from the EOB model of~\cite{Nagar:2015xqa}.
We mark the difference between resolution R2 and R1 as a colored shaded region.
The vertical gray areas correspond to the merger of the ALF2-137137$^{(\uparrow \downarrow)}$ (left)
and the H4-137137$^{(\uparrow \downarrow)}$ configuration (right).}
\label{fig:Ej_H137137}
\end{figure*}
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{fig05.pdf}
\caption{Quantity $\mathcal{E}_{Spin}$ for different EOS and the same mass ratio ($q=1.0$) in the top panel and
for the same EOS (ALF2), but mass ratios $q=1.0$ and $q=1.5$ in the bottom panel.
The colored shaded regions mark the difference between the results for resolution R2 and R1.
Notice that for an unequal mass merger also the $(\uparrow \downarrow)$ configuration
contains SO-contributions and therefore could be included
for $q =1.5$ although the late time behavior
is dominated by the SS-interactions.
The vertical gray areas correspond to the moment of merger
for H4-137137$^{(\uparrow 0)}$ (upper panel) and ALF2-110165$^{(\uparrow \downarrow)}$
(lower panel).
We also include the 3.5PN SO contribution as a dashed line
and EOB $\mathcal{E}_{Spin}$ as present in the
EOB model of~\cite{Nagar:2015xqa} as a dashed dotted line.}
\label{fig:Ebell_H4}
\end{figure}
We now discuss the BNS dynamics at a quantitative level by considering the
gauge invariant curves of the binding energy vs.~orbital angular
momentum~\cite{Damour:2011fu} as well as the binding energy and
angular momentum dependency on the orbital frequency.
The specific binding energy is given by
\begin{equation}
\label{eq:Eb}
E_b = \frac{1}{\nu} \left[ \frac{M_{\rm ADM}(t=0)- \mathcal{E}_{\rm
rad}}{M}-1 \right] \ ,
\end{equation}
where $\mathcal{E}_{\rm rad}$ is the energy emitted via GWs, as
computed from the simulations (cf. Sec.~V of Paper I).
The specific and dimensionless orbital angular momentum is
\begin{equation}
\label{eq:j}
\ell = \frac{ L(t=0) - \mathcal{J}_{\rm rad}} {\nu M^2} \ ,
\end{equation}
where $\mathcal{J}_{\rm rad}$ denotes the angular momentum emitted by GWs.
$L$ is the orbital angular momentum, a quantity that is not directly
accessible in our simulations.
Thus, we approximate $L(t=0)$ by \cite{Bernuzzi:2013rza,Dietrich:2015pxa}
\begin{equation}
L (t=0) = J_{\rm ADM}(t=0) - S^A - S^B,
\end{equation}
where $J_{\rm ADM}(t=0)$ is the ADM-angular momentum and $S^{A,B}$ the
spins of the NSs measured in the
initial data. We further assume that spins are approximately constant
during the evolution, cf.~Sec.~\ref{sec:Spin-Evolution}
and~\cite{Tacik:2015tja}.
In addition to the $E_b(\ell)$ curves we consider the binding energy
and angular momentum as functions of the dimensionless parameter $x = ( M
\Omega)^{2/3}$, where $\Omega$ is the orbital frequency. The latter
can be unambiguously calculated from the simulation as \cite{Bernuzzi:2015rla}
\begin{equation}
\label{eq:MOmega}
M \Omega = \frac{\partial E_b}{\partial \ell} \ .
\end{equation}
This quantity can also be used to characterize the postmerger
dynamics, as we do in Sec.~\ref{sec:dynamics:postmerger}.
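A minimal Python sketch (ours, not the production analysis) of Eqs.~\eqref{eq:Eb}, \eqref{eq:j} and \eqref{eq:MOmega} is the following, where \texttt{Erad} and \texttt{Jrad} are time series of the radiated energy and angular momentum; in practice the curves are smoothed before differentiating.
\begin{verbatim}
import numpy as np

def energetics(Erad, Jrad, M, nu, MADM0, JADM0, SA, SB):
    Eb = ((MADM0 - Erad) / M - 1.0) / nu          # Eq. (Eb)
    L0 = JADM0 - SA - SB   # initial orbital angular momentum
    ell = (L0 - Jrad) / (nu * M**2)               # Eq. (j)
    MOmega = np.gradient(Eb, ell)                 # Eq. (MOmega)
    return Eb, ell, MOmega
\end{verbatim}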
\subsubsection{Energetics: late inspiral--merger}
The $E_b(\ell)$ curves probe in a direct way the conservative dynamics of
the binary~\cite{Damour:2011fu}. In \cite{Bernuzzi:2013rza} we have
proposed a simple way of analyzing energetics during the
inspiral--merger that relies on extracting the individual contributions of the
binary interactions, i.e.~spin-orbit (SO), spin-spin (SS) and tidal
(T). Our new simulations allow us to improve that analysis by
extracting more accurately the SO and SS interaction contributions.
Motivated by the post-Newtonian (PN) formalism and building on
\cite{Bernuzzi:2013rza,Dietrich:2015pxa} we make the additive
ansatz for the binding energy [we omit hereafter the subscript ``$b$'']
\begin{equation}
E \approx E_0 + E_T + E_{SO} + E_{S^2} + E_{{S_{A}S_{B}}}, \label{eq:Eb_ansatz}
\end{equation}
where $E_0$ is an orbital (point-particle) term, $E_T$ the tidal term, $E_{SO}$ the SO term,
$E_{S^2}$ a SS term due to the self-coupling of the spin of each
single star, i.e.~a change of the quadrupole moment due to the intrinsic rotation,
and $E_{{S_{A}S_{B}}}$ an SS interaction term due to the
coupling of the two stars' spins. Each of the above contributions corresponds
to a term in the PN Hamiltonian. At leading order (LO) we have at 1.5PN
\begin{equation}
\label{Hso}
H_{SO} \approx \frac{2 \nu L}{r^3} S_{\rm eff}
\end{equation}
with the effective spin
\begin{equation}
S_{\rm eff} = \left( 1 + \frac{3}{4} \frac{M ^B }{M^A}\right) \bar{S}^A +
\left( 1 + \frac{3}{4}\frac{M^A }{M^B}\right) \bar{S}^B
\end{equation}
and at 2PN
\begin{equation}
\label{Hs2}
H_{S^2} \approx - \frac{\nu}{2 r^3}
\left( \frac{ C_{Q_A} M^B}{M^A} \bar{S}_A^2 + \frac{ C_{Q_B} M^A}{M^B} \bar{S}_B^2 \right),
\end{equation}
and
\begin{equation}
\label{Hsasb}
H_{{S_{A}S_{B}}} \approx - \frac{\nu}{r^3} \bar{S}^A \bar{S}^B,
\end{equation}
with $C_{Q}$ describing the quadrupole deformation due to spin,
e.g.~\cite{Levi:2014sba} and Appendix \ref{app:PN_Ej}, and
$\bar{S}^A = q \chi^A$, $\bar{S}^B=\chi^B/q$.
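For reference, the LO terms \eqref{Hso}--\eqref{Hsasb} can be evaluated as in the following Python sketch (our notation; $C_Q=1$ is the black-hole value):
\begin{verbatim}
def spin_hamiltonian_LO(nu, L, r, q, chiA, chiB,
                        CQA=1.0, CQB=1.0):
    SbarA, SbarB = q * chiA, chiB / q
    Seff = (1.0 + 0.75 / q) * SbarA + (1.0 + 0.75 * q) * SbarB
    H_SO = 2.0 * nu * L / r**3 * Seff
    H_S2 = -nu / (2.0 * r**3) * (CQA / q * SbarA**2
                                 + CQB * q * SbarB**2)
    H_SASB = -nu / r**3 * SbarA * SbarB
    return H_SO, H_S2, H_SASB
\end{verbatim}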
Focusing on the equal mass configurations and applying the ansatz
above to the binding energy of each configuration, we write
\begin{eqnarray}
E^{(00)} & \approx & E_0 + E_T, \\
E^{(\uparrow \downarrow )}& \approx & E_0 + E_T + E^{(\uparrow \downarrow )}_{{S_{A}S_{B}}}
+ E^{(\uparrow \downarrow )}_{S^2}, \\
E^{(\uparrow 0)} & \approx & E_0 + E_T + E^{(\uparrow 0)}_{SO} + E^{(\uparrow 0)}_{S^2}, \\
E^{(\uparrow \uparrow)} & \approx & E_0 + E_T + E^{(\uparrow \uparrow)}_{SO} +
E^{(\uparrow \uparrow)}_{{S_{A}S_{B}}} + E^{(\uparrow \uparrow)}_{S^2}.
\end{eqnarray}
We have omitted the superscripts on the
$E_0$ and $E_T$ contributions since we assume that they are
the same for all setups.
Using the simulation data we extract each contribution as
follows. First, we consider an equal-mass,
non-spinning BBH simulation to provide $E^{(\rm BBH)} \approx E_0$~\footnote{
The BBH $E_b(\ell)$ curve is computed with the SpEC code~\cite{SpEC}
and corresponds to SXS:BBH:0066 of the public catalog,
see also~\cite{Blackman:2015pia}.}. Then, we use the relations
\begin{align}
&E^{(\uparrow \uparrow)}_{SO} \approx 2 E^{(\uparrow 0)}_{SO} \ , \ \
E_{S^2}^{(\uparrow \uparrow)} \approx 2 E_{S^2}^{(\uparrow 0)} , \\
&E_{S^2}^{(\uparrow \uparrow)} \approx E_{S^2}^{(\uparrow \downarrow)}\ , \ \
E_{{S_{A}S_{B}}}^{(\uparrow \uparrow)} \approx - E_{{S_{A}S_{B}}}^{(\uparrow \downarrow)} \ ,
\end{align}
that come from the LO expressions of the PN Hamiltonian above
Eqs.~\eqref{Hso}-\eqref{Hs2}-\eqref{Hsasb} and from the fact that the stars
have the same mass ($M^A=M^B$) and spin magnitudes ($S^A=S^B$).
This way, based on the five different cases, the individual
contributions read
\begin{eqnarray}
E_{T} & \approx & E^{(00)} - E^{\rm BBH} \label{eq:ET_ex} \\
E_{SO}^{(\uparrow \uparrow)} & \approx & - 2 E^{(00)} - E^{(\uparrow \downarrow)} + 4 E^{(\uparrow 0)} - E^{(\uparrow \uparrow)}, \\
E_{S^2}^{(\uparrow \uparrow)} & \approx & E^{(\uparrow \downarrow)} - 2 E^{(\uparrow 0)} + E^{(\uparrow \uparrow)}, \\
E_{{S_{A}S_{B}}}^{(\uparrow \uparrow)} & \approx & E^{(00)} - 2 E^{(\uparrow 0)} + E^{(\uparrow \uparrow)}. \label{eq:ESS_ex}
\end{eqnarray}
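These linear combinations are trivial to apply once the binding-energy curves are interpolated onto a common grid of $\ell$; a minimal Python sketch (ours) reads:
\begin{verbatim}
def spin_tidal_decomposition(E00, Eud, Eu0, Euu, Ebbh):
    """Eqs. (ET_ex)-(ESS_ex); inputs are binding-energy curves
    sampled on a common angular-momentum grid (numpy arrays)."""
    E_T    = E00 - Ebbh
    E_SO   = -2.0 * E00 - Eud + 4.0 * Eu0 - Euu
    E_S2   = Eud - 2.0 * Eu0 + Euu
    E_SASB = E00 - 2.0 * Eu0 + Euu
    return E_T, E_SO, E_S2, E_SASB
\end{verbatim}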
All contributions are shown in Fig.~\ref{fig:Ej_H137137} for the ALF2
EOS (left) and the H4 EOS (right). For comparison we include as a
shaded region the difference between resolutions R1 and R2 for the
individual components. The plot clearly shows the repulsive
(attractive) character of the SO (tidal) interaction and quantifies
each term for a fixed value of the orbital angular
momentum.
The plot indicates that, although poorly resolved, SS
interactions might play a role close to merger. The $E_{S^2}$ terms,
in particular, are rather large for $\ell\lesssim3.6$ and contribute
to the merger dynamics with an effect opposed to the one of the SO
interaction (note the negative sign of $E_{S^2}$ in the
plots). Summing up all the spin effects, we find that spin
contributions are of the same order as tidal effects.
This demonstrates the importance of including spins in
analytical models of BNS.
On top of our numerical results we plot 3.5PN estimates for SO,
and SS interactions as dashed lines (see Appendix \ref{app:PN_Ej} for their explicit
expressions) and the SO effective-one-body (EOB) estimate of
\cite{Nagar:2015xqa} as dot-dashed lines. The SO term extracted from the NR data shows
significant deviations from the EOB analytical
results for $\ell\lesssim3.6-3.7$,
which correspond to GW frequencies of $M\omega_{22}\sim0.073-0.083$
(compare with merger frequencies in Tab.~\ref{tab:GWs}). The EOB model
is closer to the numerical data than the PN model, but
underestimates (in absolute value) the magnitude of the SO term during
the last 2-3 orbits.
The PN description of SO couplings shows deviations already at
$\ell\lesssim3.8$, and it is very inaccurate
for the description of the $S^2$ SS effects\footnote{Note,
however, that the ansatz in \eqref{eq:Eb_ansatz} might be
inaccurate at high frequency, and that our analysis might break down close to
merger. Furthermore, higher-resolution simulations are needed to reduce
the uncertainties in extracting SS contributions.}.
These findings suggest that, already at the level of the Hamiltonian,
more analytical work is needed to describe the very last orbits of BNS.
Interestingly, however, we note that the ``cumulative'' spin
contribution SO+SS can be reasonably approximated by the considered
EOB SO model alone
(for the considered dynamical range). The reason for this might be
that the attractive character of the SS interaction partially
``compensates'' the effect of the missing analytical
information\footnote{Let us also point out that extracting $E_{Spin}$
is affected by smaller numerical uncertainties since only two different
BNS configurations have to be considered,
instead of four setups for $E_{SO}$.}.
Let us consider the sum of all spin contributions, $E_{Spin}$, and
assume it can be formally parametrized as the LO SO interaction (lowest order in spin)
\begin{equation}
E_{Spin} \approx 2 \nu S_{\rm eff} \mathcal{E}_{Spin} \ .
\end{equation}
Consequently, Eq.~\eqref{eq:Eb_ansatz} simplifies to
\begin{equation}
E \approx E_0 + E_T + 2 \nu S_{\rm eff} \mathcal{E}_{Spin} \ , \label{eq:Ebell_ansatzSeff}
\end{equation}
and, by subtracting the non-spinning binding energy curves from the
curves for spinning configurations, we calculate $\mathcal{E}_{Spin}$.
Figure~\ref{fig:Ebell_H4} presents our results. For this analysis we
also include unequal mass configurations for which it was not possible to
extract the individual contributions to the binding energy as done
above for the equal-mass cases. Notice that for an unequal-mass system
the $(\uparrow \downarrow)$ configurations also contain SO interactions
(see the cyan line). In the top panel we compare simulations
for different EOS. The quantity $\mathcal{E}_{Spin}$ is the same for
all simulations independent of the EOS. The bottom panel of
Fig.~\ref{fig:Ebell_H4} shows the effect of the mass ratio on
$\mathcal{E}_{Spin}$, where again up to the merger all estimates
agree. The EOB SO curves for
$\mathcal{E}_{Spin}$ are closer to the NR data in these cases than in
the one presented in Fig.~\ref{fig:Ej_H137137}.
\begin{figure*}[t]
\includegraphics[width=1\textwidth]{fig06.pdf}
\caption{Binding energy (top panels) and specific angular momentum (bottom panels) as a function
of the PN parameter $x$.
The circles mark the moment of merger for all configurations.
We also include a non-spinning BBH configuration from the public SXS
catalog as in Fig.~\ref{fig:Ej_H137137}.
We have applied a Savitzky--Golay filter to $E(x)$ and $\ell(x)$
to reduce numerical noise and eccentricity oscillations. }
\label{fig:E_ell_x}
\end{figure*}
\begin{figure}[t]
\includegraphics[width=0.48\textwidth]{fig07.pdf}
\caption{Individual contributions to the binding energy (top panels) and
specific angular momentum (bottom panels) as a function
of the PN parameter $x=(M \Omega)^{2/3}$ for the equal mass systems employing the H4 EOS.
The colored shaded regions mark the difference between resolution R2 and R1.
The initial oscillations are caused mostly by residual eccentricity effects not fully
removed by the Savitzky--Golay filter applied to $E(x)$ and $\ell(x)$.
We also include 3.5PN estimates for the binding energy
(App.~\ref{app:PN_Ej}) as dashed dotted lines.
The vertical gray line corresponds to the merger point of the
H4-137137$^{(\uparrow \downarrow)}$.}
\label{fig:E_ell_contr}
\end{figure}
We finally discuss the curves $E_b(x)$ and $\ell(x)$, i.e., compare
BNS energetics at fixed orbital frequency $\Omega$. Figure~\ref{fig:E_ell_x}
summarizes the equal mass results for ALF2 EOS (left) and the H4 EOS (right).
The figure shows that, once we consider systems with the
same orbital frequency, tidal contributions to
the binding energy are larger
than spin contributions.
This becomes more visible in Fig.~\ref{fig:E_ell_contr}
for which we have extracted the individual components following
Eq.~\eqref{eq:ET_ex}-\eqref{eq:ESS_ex}.
Figure~\ref{fig:E_ell_contr} also shows that the individual
contributions to $E_b(x)$ and $E_b(\ell)$
have opposite signs. This can be understood by considering
$\ell \propto \Omega r^2$ and $E \propto - r^{-1}$.
Let us first focus on tidal effects comparing a BBH and a BNS system.
Because of the attractive nature of tidal effects
$E_{b,{\rm BBH}}>E_{b,{\rm BNS}}$
(but $|E_{b,{\rm BBH}}|<|E_{b,{\rm BNS}}|$) for fixed angular momentum.
Consequently $\Omega_{\rm BBH} < \Omega_{\rm BNS}$,
which explains the inverse ordering of
$E_b(x)$ and $E_b(\ell)$.
Another approach is to consider
the $\ell(x)$ curves for a fixed frequency
for which $\ell_{\rm BNS} > \ell_{\rm BBH}$ and
$r_{\rm BNS} > r_{\rm BBH}$. Therefore, the system is less bound,
i.e.~$E_{b,\rm BNS} > E_{b,\rm BBH}$,
which is reflected in the $E(x)$ curves.
In analogy it is possible to explain why $E_{SO}$ and other
spin dependent contributions have opposite signs if
$E_b(x)$ and $E_b(\ell)$ are compared.
This also shows that while $E_b(\ell)$ curves can be directly used to
understand the effect of individual components on the conservative dynamics, the
interpretation of $E_b(x)$ is more subtle, but will be useful for
the phase analysis of the system presented in Sec.~\ref{sec:GW}.
\subsubsection{Energetics: postmerger}
\label{sec:dynamics:postmerger}
\begin{figure}[t]
\includegraphics[width=0.52\textwidth]{fig08.pdf}
\caption{Binding energy vs.~specific angular momentum curves after the merger of the two neutron stars for
the H4 setups with mass ratio $q=1.0$ (left) and $q=1.50$ (right).
The merger is marked as a circle for all setups.
The bottom panels present the frequency $M \Omega$ estimated from the
binding energy curves.}
\label{fig:Ebell_postmerger}
\end{figure}
Binding energy vs.~specific angular momentum curves can be used also
to study post-merger dynamics \cite{Bernuzzi:2015rla}. The frequency
$M\Omega = \partial E_b/ \partial \ell$, in particular, gives the
rotation frequency of the HMNS merger remnant, and matches extremely
well half the postmerger GW frequency. Spin effects are clearly
visible at merger \cite{Bernuzzi:2013rza} but also in the postmerger
$\Omega$, especially in cases in
which the merger remnant collapses to a black hole.
In Fig.~\ref{fig:Ebell_postmerger} $E_b(\ell)$ and $M\Omega$
are presented for all configurations employing the H4 EOS.
When the postmerger $E_b(\ell)$ are approximately linear, the
rotational (and emission) frequency $\Omega$ remains steady for
several milliseconds.
The remnants of comparable-mass BNSs collapse to a black hole within the
simulated times (left panels) and $\Omega$ increases continuously
up to the collapse. The continuous evolution of
$\Omega$ (a ``post-merger chirp''
\cite{Bernuzzi:2015rla,Dietrich:2016hky}) is caused by the increasing
compactness and rotational velocity of the remnant. Spins aligned
to orbital angular momentum increase the angular momentum
support of the remnant that, therefore, collapses later in time and at
smaller values of $\ell$. The remnant of configuration
$(\uparrow\downarrow)$ has a very similar dynamics to the one of
$(00)$.
The remnants of $q=1.5$ BNS instead do not collapse during the
simulated time. Interestingly, $\Omega$ shows a sharp jump right after
merger and then remains approximately constant.
The jump is only present in the $q=1.5$ mass ratio setups. It
originates from the secondary star whose core ``falls'' onto
the primary star, after a partial tidal
disruption. Consequently, the rotational frequency of the merger
remnant experiences a rapid increase over a short time.
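For illustration, the following minimal Python sketch shows how such an
$\Omega$ estimate can be extracted in practice from sampled $E_b(\ell)$
data; the file names and column layouts are hypothetical placeholders,
not part of our actual pipeline.
\begin{verbatim}
# Minimal sketch (not our production pipeline): estimate the remnant
# rotation frequency M*Omega = dE_b/d(ell) from sampled binding-energy
# data and compare with half the (2,2) GW frequency.
import numpy as np

t, ell, Eb = np.loadtxt("Eb_vs_ell.dat", unpack=True)  # hypothetical file
MOmega = np.gradient(Eb, ell)      # requires ell to vary monotonically

t_gw, Momega22 = np.loadtxt("omega22.dat", unpack=True)  # hypothetical
half_gw = 0.5 * np.interp(t, t_gw, Momega22)

print("median |dE_b/dl - omega_22/2| =",
      np.median(np.abs(MOmega - half_gw)))
\end{verbatim}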
\section{Ejecta}
\label{sec:ejecta}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{fig09.pdf}
\caption{Mass of the ejecta (first row), kinetic energy of the ejecta (second row),
ejecta velocities inside the plane (third row) and orthogonal to it (fourth row)
as a function of the spin of the configurations.
Green points represent data for the H4-EOS, while orange data points correspond
to ALF2. Left panels refer to mass ratio $q=1.00$ and right panels to mass ratio
$q=1.50$.
The markers characterize the spin of the configurations as in Fig.~\ref{fig:param},
where circles correspond to $(00)$ setups,
down-pointing triangles to ($\uparrow \downarrow$),
right-pointing triangles to ($\uparrow 0$),
and upwards pointing triangles to ($\uparrow \uparrow$).
The spin of the secondary star
influences the amount of ejected material
and the kinetic energy, where aligned spin leads to larger ejecta.
The velocity inside the orbital plane does not depend notably on $q$ or $\chi_{\rm eff}$,
and the velocity perpendicular to the orbital plane decreases for increasing $q$. }
\label{fig:Mej}
\end{figure}
\begin{table*}[t]
\centering
\caption{Ejecta properties. The columns refer to:
the name of the configuration, the mass of the ejecta $M_{\rm ej}$,
the kinetic energy of the ejecta $T_{\rm ej}$,
the average velocity of the ejecta inside the orbital plane $\mean{|v|}_\rho$
(not necessarily pointing along the radial direction),
the average velocity of the ejecta perpendicular to the orbital plane $\mean{|v|}_z$,
and the average of $v^2$ of fluid elements inside the orbital plane
$\mean{\bar{v}}^\rho$ and perpendicular to it $\mean{\bar{v}}^z$,
see~Paper~I for more details.
Results stated in the table refer to resolution R2 and results for
R1 are given in brackets.}
\begin{tabular}{l|cccccc}
Name & $M_{\rm ej} \ [10^{-2}M_\odot] $ & $T_{\rm ej}\ [10^{-4}]$ &
$\mean{|v|}_\rho$ & $\mean{|v|}_z$ & $\mean{\bar{v}}^\rho$ & $\mean{\bar{v}}^z$ \\
\hline
\hline
ALF2-137137$^{00}$ & 0.34 (0.20) & 0.76 (0.22) & 0.17 (0.12) & 0.10 (0.11) & 0.17 (0.12) & 0.22 (0.15) \\
ALF2-137137$^{\uparrow \uparrow}$ & 0.16 (0.09) & 0.18 (0.31) & 0.16 (0.10) & 0.05 (0.08) & 0.16 (0.10) & 0.14 (0.13) \\
ALF2-137137$^{\uparrow \downarrow}$ & 0.41 (0.34) & 0.31 (0.49) & 0.12 (0.15) & 0.07 (0.10) & 0.12 (0.15) & 0.12 (0.16) \\
ALF2-137137$^{\uparrow 0}$ & 0.20 (0.25) & 0.20 (0.20) & 0.13 (0.11) & 0.05 (0.07) & 0.13 (0.11) & 0.13 (0.13) \\
H4-137137$^{00}$ & 0.34 (0.06) & 0.89 (0.10) & 0.19 (0.13) & 0.10 (0.14) & 0.19 (0.13) & 0.23 (0.22) \\
H4-137137$^{\uparrow \uparrow}$ & 0.20 (0.12) & 0.44 (0.23) & 0.15 (0.22) & 0.07 (0.07) & 0.16 (0.24) & 0.21 (0.27) \\
H4-137137$^{\uparrow \downarrow}$ & 0.15 (0.07) & 0.35 (0.12) & 0.16 (0.12) & 0.10 (0.10) & 0.17 (0.12) & 0.23 (0.20) \\
H4-137137$^{\uparrow 0}$ & 0.07 (0.06) & 0.13 (0.11) & 0.17 (0.14) & 0.10 (0.08) & 0.17 (0.14) & 0.22 (0.20) \\
\hline
ALF2-122153$^{00}$ & 0.75 (0.97) & 2.2 (2.1) & 0.17 (0.09) & 0.12 (0.10) & 0.17 (0.09) & 0.23 (0.17) \\
ALF2-122153$^{\uparrow \uparrow}$ & 0.67 (0.63) & 1.4 (1.7) & 0.16 (0.28) & 0.08 (0.06) & 0.16 (0.32) & 0.20 (0.44) \\
ALF2-122153$^{\uparrow \downarrow}$ & 0.45 (0.49) & 0.94 (0.74) & 0.15 (0.14) & 0.11 (0.09) & 0.15 (0.14) & 0.22 (0.18) \\
ALF2-122153$^{\uparrow 0}$ & 0.55 (1.9) & 1.2 (7.8) & 0.16 (0.17) & 0.13 (0.13) & 0.17 (0.18) & 0.21 (0.20) \\
H4-122153$^{00}$ & 0.66 (0.88) & 1.7 (1.7) & 0.18 (0.15) & 0.11 (0.11) & 0.18 (0.16) & 0.22 (0.28) \\
H4-122153$^{\uparrow \uparrow}$ & 0.78 (1.2) & 1.7 (1.6) & 0.18 (0.15) & 0.11 (0.04) & 0.18 (0.15) & 0.22 (0.16) \\
H4-122153$^{\uparrow \downarrow}$ & 0.41 (0.53) & 0.95 (1.1) & 0.17 (0.17) & 0.09 (0.11) & 0.17 (0.18) & 0.20 (0.22) \\
H4-122153$^{\uparrow 0}$ & 0.64 (0.40) & 1.8 (1.4) & 0.18 (0.25) & 0.08 (0.09) & 0.19 (0.28) & 0.22 (0.20) \\
\hline
ALF2-110165$^{00}$ & 2.4 (1.5) & 4.2 (2.1) & 0.17 (0.15) & 0.07 (0.08) & 0.17 (0.15) & 0.18 (0.16) \\
ALF2-110165$^{\uparrow \uparrow}$ & 2.4 (3.4) & 4.2 (6.5) & 0.18 (0.18) & 0.04 (0.07) & 0.18 (0.19) & 0.18 (0.17) \\
ALF2-110165$^{\uparrow \downarrow}$ & 1.1 (0.97) & 2.0 (1.5) & 0.18 (0.17) & 0.05 (0.05) & 0.18 (0.17) & 0.19 (0.18) \\
ALF2-110165$^{\uparrow 0}$ & 1.4 (1.8) & 2.3 (2.5) & 0.18 (0.17) & 0.04 (0.06) & 0.18 (0.17) & 0.19 (0.17) \\
H4-110165$^{00}$ & 1.6 (2.0) & 2.9 (2.9) & 0.17 (0.16) & 0.05 (0.04) & 0.18 (0.16) & 0.17 (0.17) \\
H4-110165$^{\uparrow \uparrow}$ & 2.7 (3.7) & 4.2 (7.1) & 0.17 (0.19) & 0.02 (0.03) & 0.17 (0.19) & 0.15 (0.18) \\
H4-110165$^{\uparrow \downarrow}$ & 0.95 (1.5) & 1.4 (2.5) & 0.17 (0.18) & 0.03 (0.05) & 0.17 (0.18) & 0.17 (0.18) \\
H4-110165$^{\uparrow 0}$ & 1.9 (2.0) & 3.1 (3.1) & 0.17 (0.17) & 0.03 (0.04) & 0.17 (0.17) & 0.18 (0.21) \\
\hline
\hline
\end{tabular}
\label{tab:ejecta}
\end{table*}
In Paper~I we have pointed out that the amount of ejected material depends significantly on the
mass ratio, with the ejecta mass increasing for higher mass ratios
roughly linearly in $q$. In large-$q$ BNSs the mass-ejection from the tidal
tail of the companion (centrifugal effect) dominates
the one originating from the cores' collision and the subsequent
shock-wave. For the same reason, stiffer EOS favor larger mass
ejection over softer EOS.
The effect of the stars' rotation (dimensionless spins $\chi\sim0.1$)
on the dynamical ejecta are sub-dominant with respect to the mass-ratio and,
to some extent, also to varying the EOS.
We find that for configurations with
large mass ratio ($q=1.5$) the amount of ejecta is increasing from
$(\uparrow \downarrow)$ to $(\uparrow 0)$, and to $(\uparrow
\uparrow)$ due to the progressively larger angular momentum in the
tidal tail of the companion.
We also identify spin effects on the unbound material, as
discussed below.
Figure~\ref{fig:Mej} shows the most important ejecta quantities and their
dependence on the spin and the mass ratio.
We report the total ejecta mass $M_{\rm ej}$, the kinetic energy of the ejecta $T_{\rm ej}$,
and the average velocities inside the orbital plane $\mean{|v|}_\rho$ and perpendicular to the orbital
plane $\mean{|v|}_z$ [see Paper~I for more details].
The difference between resolution R2 and R1 is used as an error estimate
and marked as an error bar.
In the left panels results for $q=1.00$ are shown and
results for $q=1.50$ are shown in the right panels.
Different EOSs are colored differently: ALF2 (orange), H4 (green);
and different markers represent the different spin configurations as
in Fig.~\ref{fig:param}.
More details about the ejecta are given in Tab.~\ref{tab:ejecta}.
For all configurations (independent of the spin) the ejecta mass is
larger for larger mass ratios. A similar statement is true for the
kinetic energy of the ejecta (second panel of Fig.~\ref{fig:Mej}).
The EOS variation considered here does not show significant
differences in the ejecta. Mass ejection in $q=1$ BNS
mostly originates from the shock wave that forms during the core collision,
while in $q=1.5$ BNS it mostly originates from the tidal tail.
The influence of the NS spin is smaller than the effect of the mass ratio.
It is most visible for larger ejecta masses, i.e.~the $q=1.25,1.5$ cases, and is
related to the spin of the companion star (less
massive NS). In a Newtonian system,
mass ejection sets in once the fluid velocity
is sufficiently large and the material is not bound by gravitational forces,
i.e., once $v^2 > M_{\rm NS}/R_{\rm NS}$. The velocity of the fluid elements can be
approximated by $v \sim v_{\rm orb} + v_{\omega}$.
The component $v_{\rm orb}$ depends on the orbital motion and is therefore
only indirectly affected by the spins.
The component $v_{\omega} \approx \omega R_{\rm NS}$ is the speed of a
fluid element in the frame moving with the center of the star.
Considering the two configurations $(\uparrow \uparrow)$ and
$(\uparrow \downarrow)$, one can approximate the fluid velocity at the points farthest away from
the center of mass as $v \sim v_{\rm orb} + |v_{\omega}|$ for $(\uparrow \uparrow)$, and
as $v \sim v_{\rm orb} - |v_{\omega}|$ for $(\uparrow \downarrow)$ configurations.
The criterion $v^2 > M_{\rm NS}/R_{\rm NS}$ would be fulfilled for the
former configuration but not fulfilled for the latter. This
observation, although based on a Newtonian description, explains why
for $q\neq1$ the unbound mass increases with increasing $\chi_B$.
The observation that more material can be
ejected for aligned configurations was also reported in~\cite{East:2015yea}
for eccentric encounters of NSBH systems using approximate initial data.
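The following Python snippet illustrates this Newtonian criterion; all
numbers (mass, radius, orbital speed) are purely illustrative
placeholders in geometric units $G=c=1$, not values taken from our
simulations.
\begin{verbatim}
# Newtonian toy estimate (G = c = 1) of the aligned/antialigned asymmetry:
# a surface fluid element counts as unbound once v^2 > M_NS/R_NS, with
# v ~ v_orb +/- |v_omega| at the point farthest from the center of mass.
M_ns = 1.35 * 1.476          # NS mass in km (1 M_sun ~ 1.476 km)
R_ns = 11.0                  # NS radius in km
v_esc2 = M_ns / R_ns         # escape criterion: v^2 > M/R

v_orb = 0.42                 # orbital speed near merger, illustrative
chi = 0.1                    # dimensionless spin
v_omega = chi * M_ns / R_ns  # v_omega ~ omega R_NS ~ chi M/R, schematic

for label, sign in (("aligned", +1), ("antialigned", -1)):
    v2 = (v_orb + sign * v_omega) ** 2
    print(label, "v^2 =", round(v2, 3), "unbound:", v2 > v_esc2)
\end{verbatim}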
\section{Gravitational waves}
\label{sec:GW}
In this section we discuss spin effects on the GW. In
Sec.~\ref{sec:GW:inspiral} we present, for the first time, a GW phase
analysis up to merger that quantifies the contributions of spin and
tidal interaction in the dynamical regime covered by the
simulations.
We find that spin effects contribute to phase differences up to
$\sim5$ radians in the considered dynamical regime (for
$\chi\sim0.1$).
In Sec.~\ref{sec:GW:postmerger} we discuss
the postmerger signal and the main emission
channels.
We find that aligned spin configurations
have a longer lifetime before collapse and therefore influence the
spectral properties of the remnant. However,
resolving spin effects with current simulations
in the power spectral density (PSD) of the GW signal
is not possible.
Our notation follows Sec.~V of Paper I; we
focus on the dominant $(2,2)$ mode of the GW strain. We often use
\begin{equation}
\hat{\omega} := M \omega_{22}
\end{equation}
for the dimensionless and mass-rescaled GW frequency.
GWs are plotted versus the retarded time $u$. The (real parts of the)
waveforms are plotted in Fig.~\ref{fig:GW}, as an overview of the
different signals.
Several important quantities are listed in Tab.~\ref{tab:GWs}.
\begin{figure*}[t]
\includegraphics[width=1\textwidth]{fig10.pdf}
\caption{Gravitational wave signal for all considered configurations employing the R2 resolution.
Top panels (yellow lines) refer to the ALF2 EOS, bottom panels (green lines) refer the H4 EOS.
The mass ratio from top to bottom is: $q=1.00,q=1.25,q=1.50$.
The columns refer to setups: $(00)$,$(\uparrow \downarrow)$,
$(\uparrow 0)$, $(\uparrow \uparrow)$. }
\label{fig:GW}
\end{figure*}
\begin{table*}[t]
\centering
\begin{small}
\caption{Gravitational waveform quantities.
The columns refer to: the name of the configuration, the number of orbits up to merger,
the dimensionless frequency at merger $M \omega_{\rm mrg}$, the merger frequency in kHz,
the dominant frequencies during the postmerger stage $f_1,f_2,f_3$ stated in kHz and extracted
from the $(2,1)$-, $(2,2)$-, and $(3,3)$-modes.
Results stated in the table refer to resolution R2 and results for
R1 are given in brackets. \label{tab:GWs}}
\begin{tabular}{l|ccc|ccc}
Name & $N_\text{orb}$ & $M\omega_{\rm mrg}$ & $f_{\rm mrg}$ & $f_1$ & $f_{2}$ & $f_3$ \\
\hline
\hline
ALF2-137137$^{(00)}$ & 11.5 (11.0) & 0.144 (0.142) & 1.69 (1.67) & 1.55 (1.46) & 2.80 (2.77) & 4.30 (4.06) \\
ALF2-137137$^{(\uparrow \downarrow)}$ & 11.3 (10.9) & 0.141 (0.138) & 1.66 (1.62) & 1.42 (1.36) & 2.77 (2.65) & 4.13 (3.78) \\
ALF2-137137$^{(\uparrow 0)}$ & 11.7 (11.3) & 0.147 (0.142) & 1.73 (1.67) & 1.46 (1.35) & 2.81 (2.63) & 4.08 (3.84) \\
ALF2-137137$^{(\uparrow \uparrow)}$ & 12.0 (11.5) & 0.147 (0.144) & 1.73 (1.69) & 1.45 (1.43) & 2.75 (2.75) & 4.17 (3.99) \\
H4-137137$^{(00)}$ & 10.4 (10.5) & 0.133 (0.127) & 1.56 (1.49) & 1.27 (1.38) & 2.50 (2.58) & 3.74 (3.84) \\
H4-137137$^{(\uparrow \downarrow)}$ & 10.6 (10.3) & 0.128 (0.126) & 1.50 (1.48) & 1.38 (1.37) & 2.58 (2.61) & 4.50 (3.97) \\
H4-137137$^{(\uparrow 0)}$ & 11.0 (10.8) & 0.133 (0.128) & 1.56 (1.50) & 1.28 (1.35) & 2.50 (2.55) & 4.30 (4.23) \\
H4-137137$^{(\uparrow \uparrow)}$ & 11.3 (11.1) & 0.136 (0.134) & 1.59 (1.57) & 1.36 (1.36) & 2.54 (2.51) & - \\
\hline
ALF2-122153$^{(00)}$ & 10.6 (10.1) & 0.133 (0.131) & 1.56 (1.57) & 1.42 (1.44) & 2.72 (2.68) & 4.11 (4.13) \\
ALF2-122153$^{(\uparrow \downarrow)}$ & 10.4 (9.9) & 0.126 (0.123) & 1.48 (1.45) & 1.46 (1.42) & 2.73 (2.74) & 3.86 (4.17) \\
ALF2-122153$^{(\uparrow 0)}$ & 10.9 (10.4) & 0.133 (0.130) & 1.56 (1.53) & 1.45 (1.38) & 2.70 (2.71) & 4.33 (4.16) \\
ALF2-122153$^{(\uparrow \uparrow)}$ & 11.2 (10.7) & 0.135 (0.133) & 1.58 (1.56) & 1.43 (1.40) & 2.75 (2.70) & 4.22 (4.10) \\
H4-122153$^{(00)}$ & 10.7 (10.3) & 0.114 (0.115) & 1.34 (1.35) & 1.28 (1.24) & 2.42 (2.38) & 3.78 (3.70) \\
H4-122153$^{(\uparrow \downarrow)}$ & 10.7 (10.3) & 0.108 (0.106) & 1.27 (1.25) & 1.38 (1.29) & 2.49 (2.47) & 4.26 (3.95) \\
H4-122153$^{(\uparrow 0)}$ & 11.2 (10.9) & 0.112 (0.111) & 1.32 (1.30) & 1.29 (1.28) & 2.51 (2.47) & 4.07 (4.12) \\
H4-122153$^{(\uparrow \uparrow)}$ & 11.6 (11.2) & 0.115 (0.114) & 1.35 (1.34) & 1.27 (1.29) & 2.49 (2.49) & 3.70 (3.79) \\
\hline
ALF2-110165$^{(00)}$ & 9.9 (9.6) & 0.119 (0.118) & 1.40 (1.39) & 1.45 (1.32) & 2.74 (2.74) & 4.17 (4.06) \\
ALF2-110165$^{(\uparrow \downarrow)}$ & 9.7 (9.3) & 0.114 (0.113) & 1.34 (1.33) & 1.44 (1.40) & 2.82 (2.79) & 4.20 (4.24) \\
ALF2-110165$^{(\uparrow 0)}$ & 10.3 (9.7) & 0.118 (0.116) & 1.39 (1.36) & 1.43 (1.33) & 2.83 (2.69) & 4.05 (4.00) \\
ALF2-110165$^{(\uparrow \uparrow)}$ & 10.6 (10.0) & 0.121 (0.120) & 1.42 (1.41) & 1.42 (1.41) & 2.80 (2.80) & 4.18 (4.24) \\
H4-110165$^{(00)}$ & 11.0 (10.7) & 0.100 (0.098) & 1.17 (1.15) & 1.27 (1.24) & 2.48 (2.43) & 3.83 (3.54) \\
H4-110165$^{(\uparrow \downarrow)}$ & 10.6 (10.1) & 0.095 (0.095) & 1.12 (1.12) & 1.29 (1.24) & 2.58 (2.50) & 3.98 (3.80) \\
H4-110165$^{(\uparrow 0)}$ & 11.2 (10.8) & 0.098 (0.097) & 1.15 (1.14) & 1.29 (1.24) & 2.56 (2.53) & 3.98 (3.77) \\
H4-110165$^{(\uparrow \uparrow)}$ & 11.6 (11.2) & 0.100 (0.099) & 1.17 (1.16) & 1.28 (1.27) & 2.54 (2.54) & 3.93 (3.78) \\
\hline
\hline
\end{tabular}
\end{small}
\end{table*}
\subsection{Late-inspiral phasing}
\label{sec:GW:inspiral}
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{fig11.pdf}
\caption{Top panel: $\phi(\hat{\omega})$ accumulated in $\hat{\omega}\in[0.04,0.11]$ for ALF2-137137$^{(00)}$ (blue),
ALF2-137137$^{(\uparrow \uparrow)}$ (red), and a non-spinning,
equal mass BBH setup (black).
Bottom panel: individual contributions $\Delta \phi(\hat{\omega})$.
We include tidal effects for ALF2 (orange) and H4 (green) EOS, as well as
spin effects for ALF2 (blue) and H4 (cyan).
We also include estimates from the EOB models of~\cite{Nagar:2015xqa}
for the spinning contribution and~\cite{Bernuzzi:2014owa} for the tidal contribution
as dashed lines.
\label{fig:Phiw}}
\end{figure}
In order to analyze the phasing of the waves we proceed as follows.
We first fit the quantity $\hat{\omega}(t)$ as described in
App.~\ref{app:omg_fit}, thereby eliminating the oscillation due to the
residual eccentricity in the NR data. We then integrate to obtain
$\phi(t)$ and parametrize $\phi(t(\hat{\omega}))$ to obtain the phase as a function of
the GW frequency. The integration introduces an arbitrary phase shift,
which is set to zero at an initial frequency $\hat{\omega} = 0.04$. The phase
comparison is then restricted to the frequency interval $\hat{\omega} \in
[0.04,0.11]$, which corresponds to physical GW frequencies
$\sim470-1292$~Hz.
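Schematically, this procedure can be summarized by the following Python
sketch; the input file and its layout (time and fitted dimensionless
frequency) are hypothetical stand-ins for the fit of
App.~\ref{app:omg_fit}.
\begin{verbatim}
# Sketch of the phasing analysis described above (assumed data layout).
import numpy as np

t, omega_hat = np.loadtxt("Momega22_fit.dat", unpack=True)  # hypothetical

# phi(t) as the trapezoidal integral of the (dimensionless) frequency
phi = np.concatenate(
    ([0.0], np.cumsum(0.5*(omega_hat[1:] + omega_hat[:-1])*np.diff(t))))

# Restrict to the analysis band and set phi(0.04) = 0; during the
# inspiral omega_hat grows monotonically, so interpolation is safe.
band = (omega_hat >= 0.04) & (omega_hat <= 0.11)
w, p = omega_hat[band], phi[band]
p = p - np.interp(0.04, w, p)

print("phase accumulated in [0.04, 0.11]:", p[-1] - p[0], "rad")
\end{verbatim}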
Figure~\ref{fig:Phiw} summarizes our results. The upper
panel shows the phase of ALF2-137137$^{(00)}$ (blue) and
ALF2-137137$^{(\uparrow \uparrow)}$ (red).
The estimated uncertainty of the data is shown as a shaded region;
note that the error bar is not symmetric. The phase of a non-spinning,
equal mass BBH is included as black curve. The latter is obtained from
the EOB model of~\cite{Nagar:2015xqa}.
In the bottom panel we show the accumulated phase due to spin and
tidal interaction separately. As in the case of the
energetics, we separate the spin and
tidal contributions to the phase by considering the difference between
the $(\uparrow \uparrow)$ and $(00)$ configuration (spin) and the
difference between the $(00)$ and the BBH configuration (tidal).
This analysis shows that tidal effects contribute about $15$ to
$20$ radians of accumulated phase in the considered frequency interval
$\hat{\omega} \in [0.04,0.11]$. This is about 4--5 times the phase
accumulated from $10$~Hz to $\sim$ 470~Hz (i.e.~from infinite
separation up to $\hat{\omega} \sim0.04$) estimated with PN
methods~\cite{Damour:2012yf}. Spin effects for $\chi \sim 0.1$ give an
accumulated phase of $\sim5$ radians on the same frequency interval.
These results are consistent with EOB predictions included as dashed
lines.
Regarding the GW merger frequency (defined as the frequency at the
wave's amplitude peak), Tab.~\ref{tab:GWs} shows that BNS systems employing a stiffer EOS and/or larger
mass ratios have smaller $M\omega_{\rm mrg}$ (cf. Paper I). Spin
interactions shift the merger frequency by $\Delta M\omega\sim\pm0.005$,
where the exact value depends on the mass ratio and EOS.
\subsection{Post-merger spectra}
\label{sec:GW:postmerger}
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{fig12.pdf}
\caption{Spectrogram of the GW signal for simulations with H4 EOS and resolution R2.
The spectrogram only considers the dominant (2,2) mode.
Horizontal blue dashed lines refer to the $f_2$ frequency extracted from
the entire postmerger GW signal. Black dashed lines refer to the frequency of the (2,2)-mode
and white lines refer to the frequency extracted from the binding
energy $\partial_\ell E$ (Fig.~\ref{fig:Ebell_postmerger}).
The spectrograms show the last part of the inspiral signal (left bottom corners of the spectrograms)
and evolution of the HMNS.}
\label{fig:spectra}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{fig13.pdf}
\caption{PSD for the H4-137137 (upper panel) and H4-110165 (lower panel) setups.
We mark the merger frequency with triangles and the postmerger peak frequency $f_2$
as diamonds.}
\label{fig:psd}
\end{figure}
We analyze the GW spectrum of the postmerger waveform by performing a
Fourier transform of the simulation data (cf.~Sec.~V of Paper I).
Figure~\ref{fig:spectra} shows typical spectrograms of the postmerger
GW. The plot highlights the continuum character of the GW frequency,
which is especially evident in the cases in which the merger remnant is close
to collapse. This emission mirrors the dynamics discussed in
Sec. \ref{sec:dynamics:postmerger}. Due to the increasing compactness of
the remnant, the GW frequency increases until the system settles to a
stable state or collapses to a BH.
As shown in~\cite{Bernuzzi:2015opx,Dietrich:2016hky}, however, most energy is
released shortly after the formation of the HMNS. Therefore most
of the power is at a frequency close to the one at the formation of the
merger remnant.
Spin effects are clearly distinguishable in the GW spectrum. For example, the
irrotational configuration H4-137137
evolves faster towards collapse and has a slightly lower frequency during
the postmerger than configuration $(\uparrow\uparrow)$. The frequency
drift in H4-110165 is more prominent in the irrotational $(00)$
configuration than in the $(\uparrow\uparrow)$, indicating the remnant
is closer to the threshold of radial instability (collapse).
The spectrogram plots include a horizontal blue line indicating the
``peak'' frequency $f_2$ extracted from the waveform PSD (see below).
They also include as a white line the dynamical frequency $2 M \Omega = 2 \partial E_b/\partial \ell$
as computed in Sec.~\ref{sec:dynamics:postmerger} and as a black dashed line $M \omega_{22}$.
The two frequencies agree remarkably well with each other, indicating that the emission is
dominated by the non-axisymmetric $m=2$ deformation of the rotating
remnant.
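For reference, a spectrogram of the kind shown in
Fig.~\ref{fig:spectra} can be produced along the following lines; this
is a schematic Python example, and the file name, uniform sampling, and
window parameters are placeholders rather than our production settings.
\begin{verbatim}
# Schematic spectrogram of the (2,2) strain mode.
import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

u, h_re, h_im = np.loadtxt("h22.dat", unpack=True)  # hypothetical file
dt = u[1] - u[0]                                    # assumes uniform sampling

# One-sided spectrogram of the real part of h_22
f, tt, Sxx = spectrogram(h_re, fs=1.0/dt, nperseg=512, noverlap=480)

plt.pcolormesh(tt, f, np.log10(Sxx + 1e-30), shading="auto")
f2 = 2.5   # placeholder value for the f_2 peak frequency
plt.axhline(f2, color="b", ls="--")
plt.xlabel("u"); plt.ylabel("frequency"); plt.show()
\end{verbatim}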
Figure~\ref{fig:psd} shows the spectrum of the signal for two exemplary
cases. Some broad peaks can be identified in the PSD, and we report
for completeness some peak frequencies in Tab.~\ref{tab:GWs}.
As described in Paper~I, the frequencies $f_1,f_2,f_3$
refer to the dominant frequencies of the $(2,1)$,$(2,2)$,$(3,3)$-modes, respectively.
Secondary peaks $f_s$ are also present,
see~\cite{Takami:2014tva,Rezzolla:2016nxn,Bauswein:2015yca,Clark:2015zxa}
for a discussion.
As we discussed in Paper I, the $f_s$ peak at a frequency close to the
merger frequency is basically absent for high-mass-ratio BNSs, while a
secondary peak with a slightly lower frequency than the $f_2$-frequency
becomes visible. Here we find that the secondary peak close to the merger frequency
is enhanced for aligned spin configurations.
In Ref.~\cite{Bernuzzi:2013rza} we reported a shift of the $f_2$
frequency of about $\sim 200$~Hz due to the spin of the NSs.
Those simulations used higher resolutions than the ones presented here,
but were restricted to a simple $\Gamma=2$ EOS.
Ref.~\cite{Bauswein:2015vxa} found that for more realistic EOS but
under the assumption of conformal flatness the frequency shift is smaller.
In our new data we cannot clearly resolve frequency shifts of
$\lesssim200$~Hz, which is then to be considered an upper limit for spin
effects in BNS with the
employed EOSs and spins $\chi\lesssim0.1$ [see also the discussion in
Sec.~\ref{sec:dynamics:postmerger}].
Nevertheless, we find in agreement with~\cite{Bernuzzi:2013rza}
that aligned spin configurations have higher peak frequencies $f_2$.
For our setups the shift is only on the order of $\lesssim 50$~Hz
and thus not resolved properly.
Longer and higher resolved simulations will be needed for a further investigation of the
$f_2$-shift.
\section{EM counterparts}
\label{sec:EM}
Let us now discuss spin effects in possible EM counterparts in the
infrared and radio band generated from the mass ejecta.
As a consequence of the results on the dynamical ejecta, we find
that spin effects are subdominant with respect to mass-ratio effects, and
more relevant the larger the unbound mass is, i.e.~for large $q$.
However, we identify a clear trend: aligned spins
increase the luminosity of the kilonovae and
the fluence of the radio flares
and, therefore, favor the detection of EM counterparts.
As in Paper~I we use the analytical model of \cite{Grossman:2013lqa}
to estimate the peak luminosity, time, and temperature
of the macronovae produced by the ejecta. We also use the model of~\cite{Nakar:2011cw}
to describe the peak fluxes of radio flares. Our results are summarized in Tab.~\ref{tab:EM},
Fig.~\ref{fig:EM1}, and Fig.~\ref{fig:EM2}.
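To give a feeling for the orders of magnitude involved, the following
Python sketch evaluates the standard diffusion-timescale scalings for a
radioactively powered transient; this is \emph{not} the model
of~\cite{Grossman:2013lqa} used for Tab.~\ref{tab:EM}, and the opacity,
heating rate, and ejecta parameters are purely illustrative assumptions.
\begin{verbatim}
# Order-of-magnitude sketch of a radioactively powered transient peak.
import numpy as np

Msun, c, day = 1.989e33, 2.998e10, 86400.0  # cgs units
kappa = 10.0          # opacity [cm^2/g], assumed (lanthanide-rich)
M_ej = 1e-2 * Msun    # ejecta mass, typical of the values found above
v = 0.15 * c          # ejecta velocity

# Peak when photon diffusion time ~ expansion time
t_peak = np.sqrt(kappa * M_ej / (4.0 * np.pi * c * v))
# Assumed r-process heating rate ~ 2e10 (t/day)^{-1.3} erg/(g s)
L_peak = M_ej * 2e10 * (t_peak / day) ** (-1.3)

print(f"t_peak ~ {t_peak/day:.1f} d, L_peak ~ {L_peak:.1e} erg/s")
\end{verbatim}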
As pointed out in previous studies, an increasing mass ratio delays
the luminosity peak of the kilonova by a few days, but leads to an
overall larger peak luminosity. Also, the temperature at peak
luminosity decreases for larger mass ratios. The effect of the spins
is less strong, but because of the larger ejecta mass for systems
for which the secondary star has spin
aligned to the orbital angular momentum
we find a trend towards delayed peaks, increasing
luminosity, and decreasing temperature.
This effect is clearly present for larger mass ratios, see
Fig.~\ref{fig:EM1}.
We present the bolometric luminosities for the
expected kilonova. The lightcurves are computed following the approach
of~\cite{Kawaguchi:2016ana,Dietrich:2016prep3}.
Figure~\ref{fig:lightcurves} shows the bolometric luminosity for
the H4-110165 setups considering different spin configurations.
Because of the larger ejecta mass for the $(\uparrow \uparrow)$ configuration
the bolometric luminosity is larger than for the other setups.
In contrast, when the secondary star has antialigned spin the
bolometric luminosity is about a factor of $\sim 2$ smaller than for
the aligned setup.
Considering the radio flares, we find that systems with a larger mass
ratio are more likely to be detectable than equal mass setups,
Fig.~\ref{fig:EM2}. The fluence and the peak time increase
with increasing mass ratio. Our results suggest that in cases where the less massive star has spin
aligned to the orbital angular momentum the radio fluence increases and the peak occurs at later times.
For a more quantitative analysis higher resolution simulations and
better models estimating the kilonova and radio burst properties are needed.
Note also that, since our simulations do not include microphysics
and the counterpart estimates rely on simplified models, further simulations are needed to check our results.
\begin{table}[t]
\centering
\setlength{\tabcolsep}{0.5pt}
\caption{Electromagnetic Counterparts.
The columns refer to: the name of the configuration,
the time in which the peak in the near infrared occurs $t_{\rm peak}$,
the corresponding peak luminosity $L_{\rm peak}$,
the temperature at this time $T_{\rm peak}$,
the time of peak in the radio band $t_{\rm peak}^{\rm rad}$,
and the corresponding radio fluence.
As in other tables, we present results for R2 and in brackets results for R1.}
\begin{small} \begin{tabular}{l|ccccc}
Name & $t_{\rm peak}$ & $L_{\rm peak}$ & $T_{\rm peak}$ & $t^{\rm rad}_{\rm peak}$ & $F^{\nu {\rm rad}}_{\rm peak}$ \\
& [days] & [$10^{40}\frac{\rm erg}{\rm s}$] & [$10^3$ K] & [years] & [$\mu$Jy] \\
\hline
\hline
ALF2-137137$^{(00)}$ & 2.0 (1.8) & 2.6 (1.9) & 2.5 (2.7) & 6.4 (6.1) & 41 (7) \\
ALF2-137137$^{(\uparrow \downarrow)}$ & 2.7 (2.2) & 2.3 (2.5) & 2.5 (2.5) & 8.4 (6.7) & 8 (20) \\
ALF2-137137$^{(\uparrow 0)}$ & 1.8 (2.1) & 1.8 (1.8) & 2.8 (2.7) & 7.2 (8.1) & 5 (4) \\
ALF2-137137$^{(\uparrow \uparrow)}$ & 1.5 (1.3) & 1.8 (1.2) & 2.8 (3.2) & 5.2 (10.0) & 7 (6) \\
H4-137137$^{(00)}$ & 1.9 (0.9) & 2.8 (1.4) & 2.5 (3.3) & 5.9 (3.5) & 58 (5) \\
H4-137137$^{(\uparrow \downarrow)}$ & 1.4 (1.0) & 2.0 (1.3) & 2.8 (3.3) & 5.0 (4.9) & 19 (4) \\
H4-137137$^{(\uparrow 0)}$ & 0.9 (1.0) & 1.5 (1.3) & 3.2 (3.3) & 3.7 (4.9) & 7 (4) \\
H4-137137$^{(\uparrow \uparrow)}$ & 1.7 (1.1) & 2.0 (2.0) & 2.7 (2.9) & 6.8 (3.3) & 17 (18) \\
\hline
ALF2-122153$^{(00)}$ & 2.9 (4.2) & 3.7 (2.9) & 2.2 (2.2) & 8.0 (17.6) & 139 (46) \\
ALF2-122153$^{(\uparrow \downarrow)}$ & 2.4 (2.7) & 2.8 (2.7) & 2.4 (2.4) & 7.6 (8.4) & 44 (27) \\
ALF2-122153$^{(\uparrow 0)}$ & 2.5 (4.6) & 3.3 (5.2) & 2.3 (1.9) & 6.9 (11.9) & 74 (516) \\
ALF2-122153$^{(\uparrow \uparrow)}$ & 3.0 (2.3) & 3.2 (4.2) & 2.2 (2.2) & 8.9 (4.5) & 64 (215) \\
H4-122153$^{(00)}$ & 2.7 (3.4) & 3.5 (3.5) & 2.2 (2.1) & 7.3 (9.6) & 105 (74) \\
H4-122153$^{(\uparrow \downarrow)}$ & 2.3 (2.5) & 2.8 (3.1) & 2.4 (2.3) & 7.2 (6.9) & 48 (61) \\
H4-122153$^{(\uparrow 0)}$ & 2.8 (1.9) & 3.3 (3.5) & 2.2 (2.3) & 8.3 (4.6) & 99 (149) \\
H4-122153$^{(\uparrow \uparrow)}$ & 3.0 (4.3) & 3.7 (3.5) & 2.1 (2.1) & 7.4 (12.0) & 108 (52) \\
\hline
ALF2-110165$^{(00)}$ & 5.6 (4.6) & 5.0 (4.1) & 1.8 (2.0) & 12.8 (11.4) & 190 (83) \\
ALF2-110165$^{(\uparrow \downarrow)}$ & 3.8 (3.6) & 3.9 (3.6) & 2.0 (2.1) & 9.7 (9.5) & 96 (65) \\
ALF2-110165$^{(\uparrow 0)}$ & 4.3 (4.9) & 4.1 (4.4) & 2.0 (1.9) & 10.6 (11.5) & 106 (104) \\
ALF2-110165$^{(\uparrow \uparrow)}$ & 5.5 (6.5) & 5.0 (6.0) & 1.8 (1.7) & 12.5 (12.9) & 198 (352) \\
H4-110165$^{(00)}$ & 4.6 (5.4) & 4.4 (4.5) & 1.9 (1.9) & 11.2 (12.9) & 133 (109)\\
H4-110165$^{(\uparrow \downarrow)}$ & 3.7 (4.4) & 3.4 (4.3) & 2.1 (2.0) & 10.2 (10.6) & 53 (116) \\
H4-110165$^{(\uparrow 0)}$ & 5.1 (5.3) & 4.5 (4.5) & 1.9 (1.9) & 12.5 (12.7) & 126 (124) \\
H4-110165$^{(\uparrow \uparrow)}$ & 6.2 (6.9) & 5.0 (6.0) & 1.8 (1.7) & 14.5 (14.3) & 160 (352) \\
\hline
\hline
\end{tabular}
\end{small}
\label{tab:EM}
\end{table}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{fig14.pdf}
\caption{Peak time $t_{\rm peak}$ (top panel), peak luminosity $L_{\rm peak}$ (middle panel),
and peak temperature $T_{\rm peak}$ (bottom panel) of macronovae produced
by the BNS mergers considered in this article as a function of the effective spin $\chi_{\rm eff}$.
We mark different EOS with different colors: green (H4) and orange (ALF2).
Different markers refer to different spin configurations, see Fig.~\ref{fig:param}.}
\label{fig:EM1}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{fig15.pdf}
\caption{Peak time in the radio band $t_{\rm peak}^{\rm rad}$ (top panel) and corresponding
radio fluence $F^{\nu {\rm rad}}_{\rm peak}$ (bottom panel),
as a function of the effective spin $\chi_{\rm eff}$.
We mark different EOS with different colors: green (H4) and orange (ALF2).
Different markers refer to different spin configurations, see Fig.~\ref{fig:param}.}
\label{fig:EM2}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{fig16.pdf}
\caption{Bolometric luminosity for the
H4-110165 setups with different spin orientations.
The luminosities are computed following the approach of~\cite{Kawaguchi:2016ana}.}
\label{fig:lightcurves}
\end{figure}
\section{Summary}
\label{sec:conclusion}
In this article we studied the effect of the stars' rotation on the
dynamics of equal- and unequal-mass binary neutron star mergers.
Our analysis provides a basis for future models of spin effects in
gravitational waves and electromagnetic emission.
Combined with Paper I (\cite{Dietrich:2016hky}) this work is one of the most complete
investigations of the binary neutron star parameter space available to date.
Our findings are summarized in what follows.
\paragraph*{Energetics:}
We have considered gauge-invariant binding energy curves for both
fixed orbital angular momentum $E_b(\ell)$ and fixed orbital frequency
$E_b(x)$. The former are useful to understand the effect of individual
terms in the Hamiltonian; the latter are directly linked to the GW
phase analysis (see below).
Our new analysis of the energetics up to merger indicates that,
although the main spin effect is due to spin-orbit (SO) interactions
\cite{Bernuzzi:2013rza}, also spin-spin interaction might play a role in
the very last stage of the merger. In particular, we argue that a
self-coupling of the NS spin
(the $S_A^2$ term of Eq.~\eqref{Hs2})
caused by quadrupole deformation of the star due to its intrinsic rotation
contributes during the last orbits with an
attractive effect opposed to the repulsive effect of
the SO interaction~\cite{Poisson:1997ha}.
This illustrates the importance of including also spin-spin effects in
analytical models of BNS, and poses the challenge of resolving such
effects in NR simulations.
We note that the current best analytical representation of the SO
Hamiltonian (the effective-one-body model, EOB) shows
some significant deviation from the NR data at small separations, see
Fig.~\ref{fig:Ej_H137137}. Curiously, comparing the EOB analytical SO model
with NR data that also include the $S^2$ interaction, we find an
effectively closer agreement between the two, Fig.~\ref{fig:Ebell_H4}.
We have used energetics and the dynamical frequency $\Omega=M^{-1}\partial
E_b/\partial\ell$ also to analyze the postmerger dynamics.
Spin effects are clearly visible in cases in which
the merger remnant collapses to black hole, Fig.~\ref{fig:Ebell_postmerger}.
Spins aligned with the orbital angular momentum increase the angular momentum
support of the remnant; therefore, collapse happens at later times and at
smaller values of $\ell$. Spin effects on the frequency evolution of
more stable merger remnants are small and difficult to resolve.
The $\Omega$ analysis also shows that in large-mass ratio BNS ($q\gtrsim1.5$),
the rotational frequency $\Omega$ has a sharp increase right after merger due
to the collision of the companion's core with the primary star
(Fig.~\ref{fig:Ebell_postmerger} bottom right panel).
This effect went unnoticed in Paper I, where we did
not inspect the energetics.
\paragraph*{Mass ejection:}
Spin effects in dynamical ejecta are clearly observed in unequal mass
BNS (and with large mass ratios, $q\sim1.3-1.5$), in which mass
ejection originates from the tidal tail of the companion.
Spin aligned with the orbital angular momentum increases the amount of
ejected mass because the additional angular momentum contributes to
unbinding a larger fraction of fluid elements. This effect is mostly
dependent on the spin of the companion.
\paragraph*{Gravitational Waves:}
We presented the first analysis of spin effects in the GW phase up to
merger. Spin effects contribute to phase differences up to
$\sim5$ radians in the considered dynamical regime (and for
$\chi\sim0.1$).
This dephasing should be compared with the $\sim 20$
radians due to tidal effects accumulated up to the BNS merger (with
respect to a BBH). The hierarchy of these effects on the phase mirrors
what is observed in the energetics $E_b(x)$, Fig.~\ref{fig:E_ell_x}.
Neglecting spin effects would bias
the determination of tidal parameters in GW observations, e.g.
\cite{Agathos:2015uaa}.
Mainly as a consequence of spin-orbit interactions,
the merger GW frequency of CRV BNSs shifts to higher (lower) frequencies
than for irrotational BNSs if a NS has its spin aligned (antialigned) with the
orbital angular momentum.
The Fourier analysis of the postmerger signal indicates that spin
effects are visible in the GW spectrum (cf. Fig.~\ref{fig:psd}
and \ref{fig:spectra}).
Some differences in the GW frequency evolution are observed in cases where
the additional angular momentum support due to aligned spins
stabilizes the merger remnant for a longer time period (cf.~discussion
on spin effects on dynamics).
Our simulations suggest that if a BNS has spin aligned with the orbital angular momentum the
spectrum is slightly shifted to higher frequencies up to $\sim 50$~Hz
(for dimensionless spins $\chi\lesssim0.1$), but more accurate simulations
with longer postmerger evolutions will be needed to resolve the shift properly.
Furthermore, we see an effect of the spin
on the secondary peak frequencies, where aligned configurations
enhance a secondary peak slightly above the merger frequency.
\paragraph*{Electromagnetic counterparts:}
For the considered spin magnitudes the spin effects on the kilonova
and radio flare properties are subdominant with respect to the mass ratio
and the EOS. Spin effects are more prominent for larger ejecta masses, where
spin aligned (antialigned) with the orbital angular momentum increases (decreases)
the luminosity of the kilonova and also increases (decreases) the fluence of the radio flares.
Overall we find that aligned spin BNS in combination with larger
mass ratios favor bright electromagnetic counterparts.
\begin{acknowledgments}
It is a pleasure to thank Bernd Br\"ugmann, Roland Haas, Tanja
Hinderer, Nathan~K.~Johnson-McDaniel, Harald Pfeiffer, Jan Steinhoff, Justin Vines for helpful discussions.
We are very thankful to Alessandro Nagar for providing us with EOB
$E_b(\ell)$ curves and to Serguei Ossokine for providing us with the
NR black holes binary $E_b(\ell)$.
We thank David Radice for comments improving the manuscript.
%
W.T. was supported by the National Science Foundation under grant
PHY-1305387.
%
Computations were performed on SuperMUC at the LRZ (Munich) under
the project number pr48pu, Jureca (J\"ulich)
under the project number HPO21, Stampede
(Texas, XSEDE allocation - TG-PHY140019),
Marconi (ISCRA-B) under the project number HP10BMAB71.
\end{acknowledgments}
\section{Introduction to Sparsity}
One of the core features of Hawking radiation \cite{HawkingEffect1,HawkingEffect2} is the similarity of its emission spectrum to that of a grey body, hence that of a black body. While educational and edifying, this comparison omits one very important feature: This radiation is \emph{sparse}. Technically, this concept is encoded in a low density of states \cite{KieferDecoherenceHawkingRad,BHradNonclass}. Strictly speaking, and as evidenced by Don Page's work \cite{Page1,Page2,PageThesis,Page3}, this feature has been known since the beginning, yet it is often glossed over. A subsequent focus on high-temperature regimes can be considered partly responsible for this \cite{OliensisHill84,BHEvapJets,NatureBHgamma,BHPhotospheresPage,BremsstrahlungBH,MG12PageMacGibbonsCarr}. Four years ago, a heuristic method to bring this forgotten feature more to the forefront was introduced \cite{HawkFlux1,HawkFlux2,SparsityNumerical} --- simply called \emph{sparsity} $\eta$. This concept was originally applied to $3+1$-dimensional black holes (corresponding to the solutions of Schwarzschild, Kerr, Reissner--Nordström, and \enquote{dirty black holes}), but soon found application also in higher dimensions \cite{HodNdim}, in phenomenological quantum gravity extensions \cite{SparsityBackreaction}, and in the context of the influence of generalised uncertainty principles on Hawking evaporation \cite{SparsityAna,OngGUPSparsity}.
Let us quickly introduce the concept: Sparsity $\eta$ is a measure to estimate the density of states of radiation. For this, one compares a localisation time scale $\tau_{\text{loc}}$ of an emitted particle with a time scale $\tau_{\text{gap}}$ characterising the time between subsequent emission events. It is worth emphasising the indefinite article here: Different choices can be made for both time-scales, though their numerical values will not differ by much. The easiest way to choose the time scale $\tau_{\text{gap}}$ is given by the inverse of the integrated number flux density $\ensuremath{\operatorname{d}}\! \Upgamma_n$ of the radiation, and we will adhere only to this choice throughout the letter. Hence:
\begin{equation}
\tau_{\text{gap}} = \ed{\Upgamma_n}.
\end{equation}
For the localisation time scale $\tau_{\text{loc}}$, however, the identification of a \enquote{simplest} choice is less obvious. This is due to the fact that for most spectra (in our context the Planck spectrum) peak and average frequencies do not agree, nor are they the same when comparing number density spectrum $\Upgamma_n$ and energy density spectrum $\Upgamma_E$. We encode this in the following way:
\begin{equation}
\tau_{\text{loc}} = \ed{\nu_{c,q,s}} = \frac{2\pi}{\omega_{c,q,s}},
\end{equation}
where the index $c \in \{ \text{avg.}, \text{peak}\}$ indicates how the physical \underline{q}uantity $q$ (associated to a unique frequency by appropriate multiplication with natural constants $\hbar,k_\text{B},c,G$) is \underline{c}alculated, and $s$ determines the \underline{s}pectrum we consider\footnote{Only in the definition of $\tau_{\text{loc}}$ did we employ the frequency, not the angular frequency, as it makes for a more conservative estimate of sparsity. This was suggested by an anonymous referee of \cite{HawkFlux1}.}. As our $\tau_{\text{gap}}$ is fixed, the sparsities
\begin{equation}\label{eq:defsparsity}
\eta_{c,q,s} \mathrel{\mathop:}= \frac{\tau_{\text{gap}}}{\tau_{\text{loc}}}
\end{equation}
inherit the freedom (\textit{i.e.}, the indices) of $\tau_{\text{loc}}$. The method of calculating the sparsities is always the same: First, calculate the quantity $q$ associated to the spectrum $s$. Second, find a corresponding angular frequency $\omega_{c,q,s}$. Here, the choice of the quantity is of relevance --- one should keep to quantities which can be related in a straightforward manner to a frequency. Third, and last, calculate the sparsity
\begin{equation}
\eta_{c,q,s} = \frac{\omega_{c,q,s}}{2\pi \, \Upgamma_n}.
\end{equation}
Let us see this in action on two less trivial examples: Here, the frequency is obtained from either the average wavelength $\lambda$ or the average period $\tau$ of emitted particles for the number spectrum. In order to compare the frequency corresponding to the average wavelength or to the average period of emitted particles with $1/\Upgamma_n$ as it appears in the sparsity definition~\eqref{eq:defsparsity}, one has to take their inverses before multiplying with the appropriate factors of the speed of light $c$ and Planck's constant $\hbar$. In the resulting expression, $\Upgamma_n$ cancels and one arrives at the convenient expressions
\begin{equation}
\eta_{\text{avg.},\tau,n} = \ed{\int \frac{2\pi \hbar}{E} \ensuremath{\operatorname{d}}\! \Upgamma_n}, \qquad \eta_{\text{avg.},\lambda,n} = \ed{\int \frac{2\pi}{c k} \ensuremath{\operatorname{d}}\! \Upgamma_n},
\end{equation}
where $E$ is the energy of the emitted particle (equivalent to its angular frequency, as $E=\hbar \omega$), and $k$ its wave number. Note that both integrals are identical \emph{in the case of massless particles} --- in this case we simply call both the same, $\eta_{\text{binned}}$. A convenient bonus of this result is the fact that different emission processes can be considered \enquote{happening in parallel}, with the relevant notion borrowed from circuit analysis, that is
\begin{equation}\label{eq:binning}
\ed{\eta_{\text{tot}}} = \sum_{\text{channel}~i} \ed{\eta_i}
\end{equation}
for these two sparsities (and others reducing to a simple inverse of a single integral). They also lend themselves nicely to an interpretation as \enquote{binned} or \enquote{bolometric} sparsity measures, as they can be understood as dividing up the emission spectrum into infinitesimal bins.
Much more common, however, is an expression not amenable to this property. For example, if one calculates the peak frequency (\emph{i.e.}, the peak energy) of the number spectrum, the resulting sparsity
\begin{equation}
\eta_{\text{peak},E,n} = \frac{\omega_{\text{peak},E,n}}{2\pi \Upgamma_n}
\end{equation}
involves finding the zeroes of the first derivative of $\Upgamma_n$ w.r.t. energy/frequency. However, the peak frequencies will usually not be given explicitly, as only in rare special cases can they be found analytically exactly.
This is an opportune moment to describe the spectra under consideration in more detail. We shall consider spectra of the form
\begin{equation}
\ensuremath{\operatorname{d}}\!\Upgamma_n = \frac{g}{(2\pi)^D} \frac{c \hat{k}\cdot\hat{n}}{\exp\kl{\frac{\sqrt{m^2 c^4+\hbar^2 k^2 c^2}}{k_\text{B} T} - \tilde\mu} + s} \ensuremath{\operatorname{d}}\!^D k \ensuremath{\operatorname{d}}\! A
\end{equation}
and
\begin{equation}
\ensuremath{\operatorname{d}}\!\Upgamma_E = \frac{g}{(2\pi)^D} \frac{c \sqrt{m^2 c^4+\hbar^2 k^2 c^2} \; \hat{k}\cdot\hat{n}}{\exp\kl{\frac{\sqrt{m^2 c^4+\hbar^2 k^2 c^2}}{k_\text{B} T} - \tilde\mu} + s} \ensuremath{\operatorname{d}}\!^D k \ensuremath{\operatorname{d}}\! A,
\end{equation}
where $g$ is the (possibly dimension-dependent) degeneracy factor of the emitted particles (more below in section~\ref{sec:massless}), $m$ their mass, $D$ the number of space dimensions, $T$ the temperature of the radiation, $\tilde\mu$ the chemical potential divided by $k_\text{B} T$ (\emph{i.e.}, the logarithm of the fugacity), $\hat{n}$ the surface normal to the emitting hypersurface $A$, and $s \in \{-1,0,+1\}$ a parameter distinguishing (respectively) between bosons, Maxwell--Boltzmann/classical particles, and fermions. The differential $\ensuremath{\operatorname{d}}\!^D k$ takes on the following form in spherical coordinates:
\begin{align}
&\ensuremath{\operatorname{d}}\!^{D} k\nonumber\\& = k^{D-1} \sin^{D-2} \varphi_1 \cdots \sin \varphi_{D-2} \sin^0 \varphi_{D-1} \ensuremath{\operatorname{d}}\! k \ensuremath{\operatorname{d}}\! \varphi_1 \ensuremath{\operatorname{d}}\!\varphi_{D-1},
\end{align}
where $\varphi_{D-1}\in[0,2\pi)$, $\varphi_i \in [0,\pi)$, if $i \in \{2,\dots,D-2\}$, and $\varphi_{1} \in [0,\frac{\pi}{2})$ (at least for our future integration steps). In many cases, the term $\hat{k} \cdot \hat{n}$ seems to be forgotten in higher dimensions (we will not mention the guilty parties) --- even though without it, it will not be possible to correctly link these spectra to the $3+1$ dimensional case and its Stefan--Boltzmann law.
Regarding our earlier-mentioned peak frequencies: in $3+1$ dimensions, the peak frequencies of the classical massive particle's number and energy spectra can be found in terms of cubics --- but even these are neither useful nor enlightening in most situations. For massless particles, on the other hand, the result is in all dimensions expressible in terms of the Lambert~W-function:
\begin{subequations}\label{eq:peaks}
\begin{align}
&m=0:\nonumber\\
&\omega_{\text{peak},E,E} = \frac{k_\text{B} T}{\hbar}\kl{D+W(sD e^{\tilde\mu-D})},\\
& \omega_{\text{peak},E,n} = \frac{k_\text{B} T}{\hbar}\kl{D-1+W(s(D-1) e^{\tilde\mu-D+1})}.
\end{align}
\end{subequations}
In the following we will apply these methods to Tangherlini black holes. The sparsities we will calculate in this letter are: $\eta_{\text{peak},E,n}$, $\eta_{\text{peak},E,E}$, $\eta_{\text{avg.},E,n}$, $\eta_{\text{avg.},\tau,n}$, and $\eta_{\text{avg.},\lambda,n}$. The latter two being the same in the massless case, they are relabelled as $\eta_{\text{binned}}$ in that case.
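The peak frequencies of equations~\eqref{eq:peaks} can be evaluated and
cross-checked numerically with a few lines of Python; the sketch below
uses the principal branch of $W$ (which is the relevant one for the
parameter ranges considered here) and compares with a direct
maximization of the spectrum.
\begin{verbatim}
# Evaluate the Lambert-W peak formulas and cross-check by maximizing
# x**a/(exp(x - mu) + s) directly, with x in units of k_B T/hbar.
import numpy as np
from scipy.special import lambertw
from scipy.optimize import minimize_scalar

def peak_closed_form(a, s, mu=0.0):
    # x_peak = a + W(s a e^(mu - a)); a = D-1 (number), a = D (energy)
    return a + lambertw(s * a * np.exp(mu - a)).real

def peak_numerical(a, s, mu=0.0):
    f = lambda x: -x**a / (np.exp(x - mu) + s)
    return minimize_scalar(f, bounds=(1e-3, 50.0), method="bounded").x

for D in (3, 5, 9):
    a = D - 1  # number spectrum; s = -1, 0, +1: bosons, classical, fermions
    for s in (-1, 0, +1):
        print(D, s, peak_closed_form(a, s), peak_numerical(a, s))
\end{verbatim}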
\section{Preliminaries for Tangherlini Black Holes}
The Tangherlini black hole \cite{Tangherlini,FrolovZelnikov2011} is the higher dimensional generalization of the Schwarzschild black hole; it is the $D+1$-dimensional, spherically symmetric vacuum solution. The metric has the form
\begin{align}\label{eq:Tangherlini}
\ensuremath{\operatorname{d}}\! s^2 =& -\kl{1-\kl{\frac{r_\text{H}}{r}}^{D-2}}\ensuremath{\operatorname{d}}\! t^2 + \kl{1-\kl{\frac{r_\text{H}}{r}}^{D-2}}^{-1}\ensuremath{\operatorname{d}}\! r^2 \nonumber\\&+ r^2 \ensuremath{\operatorname{d}}\! \Omega_{D-1}^2,
\end{align}
where $\ensuremath{\operatorname{d}}\!\Omega_{D-1}^2$ is the differential solid angle, and
\begin{equation}
r_\text{H} = \sqrt[D-2]{\frac{8 \Gamma(\frac{D}{2})GM/c^2}{(D-1)\pi^{\nicefrac{(D-2)}{2}}}}
\end{equation}
is the $D+1$-dimensional Schwarzschild radius, $G$ the (dimension-dependent) gravitational constant, and $M$ the mass of the black hole. A $\Gamma$ without the indices indicating number or energy densities simply refers to the $\Gamma$-function. In passing, we note that the uniqueness theorems for black holes hold (without further assumptions) only in $3+1$ dimensions \cite{Papantonopoulos2009,NumBlackSaturns,QGHolo}, related to a more complex notion of angular momenta. This somewhat justifies our focus on higher dimensional, non-rotating black holes even though some solutions are explicitly known, like the Myers--Perry solution \cite{MyersPerry}.
The surface area of the horizon becomes
\begin{equation}
A_{\text{H}} = 2 \frac{\pi^{D/2}}{\Gamma(D/2)} r_{\text{H}}^{D-1},
\end{equation}
while the corresponding Hawking temperature is
\begin{equation}
T_\text{H} = \frac{D-2}{4\pi r_\text{H}}\frac{\hbar c}{k_\text{B}}.
\end{equation}
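As a quick consistency check, for $D=3$ these expressions reduce to the
familiar Schwarzschild values,
\begin{equation}
r_\text{H}\big|_{D=3} = \frac{2GM}{c^2}, \qquad
A_\text{H}\big|_{D=3} = 4\pi r_\text{H}^2, \qquad
T_\text{H}\big|_{D=3} = \frac{\hbar c^3}{8\pi G M k_\text{B}}.
\end{equation}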
Due to the spherical symmetry, (a) the angular and area integrals required for the sparsity calculations can be separated from each other, and (b) the term $\hat{k}\cdot\hat{n}$ evaluates to a simple $\cos\varphi_1$. This factor prevents the integration over the angular variables from simply yielding the area of a hypersphere. Rather, the result is (in all instances to be encountered in the following)
\begin{align}
&\int_0^{2\pi} \ensuremath{\operatorname{d}}\! \varphi_{D-1} \int_{0}^{\pi}\ensuremath{\operatorname{d}}\! \varphi_{D-2} \sin \varphi_{D-2} \times\cdots \nonumber\\
&\times \int_{0}^{\pi}\ensuremath{\operatorname{d}}\! \varphi_2 \sin^{D-3}\varphi_2 \int_{0}^{\frac{\pi}{ 2}} \ensuremath{\operatorname{d}}\! \varphi_1 \cos\varphi_1 \sin^{D-2}\varphi_1\nonumber\\
&\quad = \frac{2\pi}{D-1}\frac{\sqrt{\pi}^{D-3}}{\Gamma(\ed{2}(D-1))}.
\end{align}
\section{Sparsity Results and Comparison with the Literature}
The origin of the sparsity of Hawking evaporation in $3+1$ dimensions can be sought and found in the connection between the size of the horizon and the Hawking temperature. This feature is absent from black bodies --- as long as their temperature can be maintained, they can be made at arbitrary sizes. Put differently, in $3+1$ dimensions and for non-rotating\footnote{Since, as mentioned before, rotating black hole solutions are more subtle in higher dimensions we will limit the discussion to non-rotating ones. As a shorthand, we will from now on assume no rotation.} black holes, the thermal wavelength $\lambda_{\text{thermal}}$ fulfils $\lambda_{\text{thermal}}^2 > A_{\text{H}}$. However, as we will see below, this does not translate to arbitrary dimensions as already shown by Hod in \cite{HodNdim}. Also, the inclusion of rotation would lead to the emergence of super-radiance, further complicating the discussion. However, away from super-radiant regimes one can easily include the parameter $\tilde\mu$ (introduced above) to capture at least charges --- allowing a spherically symmetric solution ---, or with less qualms and more bravado about deviating from spherical symmetry even very small angular momenta.
First, we will reproduce and improve Hod's results on the emission of massless particles, then we shall generalise to massive particles. Due to the length of the results, these will be provided in tables~\ref{tab:massless} and~\ref{tab:massive}. All results will be given in terms of $\nicefrac{\lambda_{\text{thermal}}^{D-1}}{gA}$. Note that in this expression the Tangherlini black hole mass drops out.
\subsection{Warm-up: The Massless Case}\label{sec:massless}
Before starting, it is worth reminding ourselves that we want to be as conservative as possible in our sparsity results: A small sparsity would mean little phenomenological departure from the familiar black body radiation. Hence, we will not consider the area of the horizon to be the relevant area from which the Hawking radiation originates, but rather we will take the capture cross section $\sigma_{\text{capture}}$ for massless particles. This turns out to be \cite{FrolovZelnikov2011}
\begin{equation}
\sigma_{\text{capture}} = \underbrace{\ed{2\sqrt{\pi}}\frac{\Gamma(\nicefrac{D}{2})}{\Gamma(\nicefrac{D+1}{2})} \kl{\frac{D}{D-2}}^{\frac{D-1}{2}} \kl{\frac{D}{2}}^{\frac{D-1}{D-2}}}_{=\mathrel{\mathop:} c_\text{eff}} A_\text{H}.
\end{equation}
The factor $c_\text{eff}$ has been defined for future convenience. This change of area can be motivated and backed with numerical studies highlighting that the renormalised stress-energy tensors of Hawking radiation do not have their maximum at or very close to the horizon but rather a good distance away from it \cite{BHQuantumAtmosphere}.
It is relatively straightforward (though not necessarily notationally easy-going) to manipulate standard integral expressions \cite{GradshteynRyzhik1980,Olver2010a} into the required form. The Boltzmann case (\emph{i.e.}, $s=0$) often requires recognising removable singularities, but apart from this is straightforward to include in these results. This is most apparent in the ubiquitous expressions $\Li{n}(-s)/(-s)$ involving the polylogarithm of order $n$. We have collected the results in table~\ref{tab:massless}. In order to emphasise the dependence of the degeneracy factor $g$ on the dimension, it is written as $g(D)$ in the table.
\begin{table*}
\centering
\begin{tabular}{rcl}
$\eta_{\text{peak},E,n}$ &$=$& $\displaystyle\frac{1}{2\pi (D-2)!} \frac{\Gamma(\frac{D-1}{2})}{\pi^{\nicefrac{(D-3)}{2}}}\frac{(D-1+W((D-1)s e^{\mu-D+1}))}{\frac{\Li{D}(-s e^{\mu})}{(-s)}} \frac{\lambda_\text{thermal}^{D-1}}{g(D) c_\text{eff} A_\text{H}}$\\
$\eta_{\text{peak},E,E}$ &$=$& $\displaystyle\frac{1}{2\pi (D-2)!} \frac{\Gamma(\frac{D-1}{2})}{\pi^{\nicefrac{(D-3)}{2}}}\frac{(D+W(D s e^{\mu-D}))}{\frac{\Li{D}(-s e^{\mu})}{(-s)}} \frac{\lambda_\text{thermal}^{D-1}}{g(D) c_\text{eff} A_\text{H}}$\\
$\eta_{\text{avg.},E,n}$ &$=$& $\displaystyle\frac{D}{2\pi (D-2)!} \frac{\Gamma(\frac{D-1}{2})}{\pi^{\nicefrac{(D-3)}{2}}}\frac{\frac{\Li{D+1}(-s e^{\mu})}{(-s)}}{\kl{\frac{\Li{D}(-s e^{\mu})}{(-s)}}^2} \frac{\lambda_\text{thermal}^{D-1}}{g(D) c_\text{eff} A_\text{H}}$\\
$\eta_{\text{binned}}$ &$=$& $\displaystyle\frac{\Gamma(\nicefrac{(D-1)}{2}) (D-1)}{2\pi\sqrt{\pi}^{D-3} (D-2)!}\frac{1}{\frac{\Li{D-1}(-se^{\mu})}{(-s)}} \frac{\lambda_\text{thermal}^{D-1}}{g(D) c_\text{eff} A_\text{H}}$
\end{tabular}
\caption{Sparsities for emission of massless particles in a $D+1$-dimensional Tangherlini space-time in terms of polylogarithms $\Li{n}(x)$, and Lambert-W functions $W(x)$. $\lambda_{\text{thermal}}$ is the thermal wavelength, $c_\text{eff}$ a correction factor to link capture cross-section with horizon area $A_\text{H}$, and $g(D)$ the particles' degeneracy factor.}
\label{tab:massless}
\end{table*}
These results correctly reproduce the earlier, $3+1$-dimensional results found in \cite{HawkFlux1,MyThesis}. Note that the exact solution of the peak frequencies of equations~\eqref{eq:peaks} has a different asymptotic behaviour for $D\to \infty$ compared to the approximation used in \cite{HodNdim}. This does not influence the general statement much: Sparsity is lost in high dimensions, and Hawking radiation indeed becomes classical and fully comparable to a black body spectrum. However, the exact dimension at which the transition from sparse to non-sparse happens changes. In figure~\ref{fig:ndim} we compare the various sparsities and their dependence on $D$ for massless gravitons as done in \cite{HodNdim}, where $\eta_{\text{Hod}} \approx \frac{e}{8\pi^2}\kl{\frac{4\pi}{D}}^{D+1}$. We can see that the qualitative picture each measure of sparsity draws is universal --- and at least in the massless case this can be inferred from the way numerator and denominator behave in the definition~\eqref{eq:defsparsity}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{ndimsans}
\caption[Plot of Dependence of Sparsity of Gravitons in Dimension D]{A comparison of various sparsities $\eta$ for massless gravitons in $D$ space dimensions. The constant $1$ indicates the transition from sparse to non-sparse.}
\label{fig:ndim}
\end{figure}
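For concreteness, the $\eta_{\text{binned}}$ entry of
table~\ref{tab:massless} is easily evaluated numerically; the following
Python sketch does so for massless gravitons ($s=-1$, $\tilde\mu=0$, so
the polylogarithm reduces to a zeta function), assuming the thermal
wavelength $\lambda_{\text{thermal}} = 2\pi\hbar c/(k_\text{B}T)$, for
which $\lambda_{\text{thermal}}^{D-1}/A_\text{H}$ is independent of the
black hole mass, and using the graviton degeneracy factor quoted below.
\begin{verbatim}
# Sketch: assemble eta_binned of Table 1 for massless gravitons.
import mpmath as mp

def eta_binned_graviton(D):
    D = mp.mpf(D)
    g = (D + 1) * (D - 2) / 2   # graviton degeneracy (see text below)
    # (lambda_thermal)^(D-1) / A_H, independent of the black hole mass
    lam_over_A = ((8 * mp.pi**2 / (D - 2))**(D - 1)
                  * mp.gamma(D/2) / (2 * mp.pi**(D/2)))
    c_eff = (mp.gamma(D/2) / (2 * mp.sqrt(mp.pi) * mp.gamma((D + 1)/2))
             * (D / (D - 2))**((D - 1)/2) * (D / 2)**((D - 1)/(D - 2)))
    prefac = (mp.gamma((D - 1)/2) * (D - 1)
              / (2 * mp.pi * mp.sqrt(mp.pi)**(D - 3) * mp.factorial(D - 2)))
    return prefac / mp.zeta(D - 1) * lam_over_A / (g * c_eff)

for D in range(3, 12):
    print(D, mp.nstr(eta_binned_graviton(D), 4))
\end{verbatim}
This illustrates the rapid loss of sparsity with increasing $D$ seen in
figure~\ref{fig:ndim}.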
This is especially important once one takes into account the fact that we ignore grey body factors throughout our calculation: Their inclusion will push non-sparsity necessarily to even higher dimensions. That even their inclusion will not change the qualitative result, is nonetheless shown by another comparison to the literature: In \cite{KantiNdimBH,CardosoBulkHawking,KantiNdimBH2} numerical analysis was performed to take the effects of grey body factors into account. Even though these analyses were performed without sparsity as such in mind, it is easy to compare how different particle types will behave. As the different degeneracy factors $g$ for massless particles with different spin depend characteristically on the space dimension $D$, let us summarise these for the spins present in the standard model of particle physics plus gravity: $g_\text{scalar} = 1$, $g_{\text{spin~}1/2} = 2^{n-1}$ for $D=2n$ or $D=2n-1$ and assuming Dirac fermions (and counting particles and anti-particles separately), $g_\text{vector} = D-1$, and $g_\text{graviton} = (D+1)(D-2)/2$. These degeneracies depend, however, on the specifics of the higher dimensional physics considered: In brane world models they are for all $D$ the familiar, $3+1$-dimensional ones for emission into the brane \cite{KantiNdimBH2}. Such brane world models are already covered by the present analysis --- up to a dimension-\emph{independent} factor this corresponds to looking at the dimension-dependence of the scalar sparsities.
The $3+1$-dimensional case shows amply \cite{HawkFlux1,SparsityNumerical} that the inclusion of grey body factors drastically changes the sparsity of, for example, gravitons. Even so, as shown in figure~\ref{fig:species}, the simplifications made while deriving our expressions for sparsity still qualitatively reproduce the behaviour of the earlier-mentioned numerical studies. While the order in which different particles change from $\eta>1$ to $\eta<1$ shows minor changes, the overall behaviour is retained, as is the prediction that emitted gravitons become classical radiation first. This then would correspond to a thermal gravitational wave.
\begin{figure}
\includegraphics[width=\columnwidth]{ParticleTypesDim}
\caption[Plot of Dependence of Binned Sparsity of Different Particles in Dimension D]{The binned sparsity $\eta_{\text{binned}}$ for different (massless) particle species.}
\label{fig:species}
\end{figure}
\subsection{Gory Details: The Massive Case}\label{sec:massive}
Before starting the calculations for massive particles, it is a good idea to revisit the effective area $A=c_\text{eff} A_\text{H}$. The capture cross-section underlying this approach changes significantly for massive particles: It becomes dependent on the particle's velocity $\beta$. While for any massless particle $\beta=1$ (in units of $c$), for massive particles this means that the effective capture cross-section diverges to $\infty$ for particles with velocity $\beta=0$. The capture cross-section for massless particles reappears as the limiting case for $\beta\to 1$. Finding a corresponding effective cross-section for any given $\beta$ can still be done analytically in $3+1$ dimensions, but this fails in higher dimensions. On top of this, in higher dimensions stable orbits do not exist \cite[\S7.10.2]{FrolovZelnikov2011}; at least assuming the dynamics of higher dimensional general relativity.
To retain an ansatz for the following calculation, we shall hence assume that the same effective cross-sectional area as for massless particles gives a good approximation for the area from which Hawking radiation originates. On the one hand, in $3+1$ dimensions this seems a good starting point, as we can expect the massless case to be a limiting case for massive particles. An example of this approach is found in \cite{BHQuantumAtmosphere}: the heuristic arguments based on an analogy to the Schwinger effect presented therein cover both massive and massless cases; the additionally studied renormalised stress-energy tensor for massless particles constitutes such a limiting case. On the other hand, the assumption that this carries over in some way to higher dimensions is adopted as a working hypothesis.
With these arguments in place, we can head straight for the integrals involved, only this time with the relation $E^2 = k^2c^2 + m^2c^4$ connecting the momentum $k$ and the energy $E$ of the emitted particle. The strategy is always similar: first simplify the integration by rewriting the integrand as a geometric sum, then integrate by parts until one can make use of the substitution $k=z\cosh x$. This allows employing the identity \cite[3.547.9]{GradshteynRyzhik1980}:
\begin{gather}
\int_{0}^\infty \exp\kl{-\beta\cosh x} \sinh^{2\nu} x\ensuremath{\operatorname{d}}\! x = \nonumber \\
\frac{1}{\sqrt{\pi}}\kl{\frac{2}{\beta}}^\nu \Gamma\kl{\frac{2\nu+1}{2}}K_\nu(\beta),
\end{gather}
valid for $\mathrm{Re}(\beta)>0, \mathrm{Re}(\nu) >-1/2$. The resulting sums of modified Bessel functions of the second kind are the expressions in table~\ref{tab:massive}.
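As a quick sanity check (ours), the identity above can be verified numerically; the following Python sketch, assuming SciPy is available, compares both sides for a few admissible pairs $(\beta,\nu)$.
\begin{verbatim}
# Sketch: numerical check of [GR 3.547.9] for sample (beta, nu) with
# Re(beta) > 0 and Re(nu) > -1/2.
import numpy as np
from scipy.integrate import quad
from scipy.special import kv, gamma

def lhs(beta, nu):
    integrand = lambda x: np.exp(-beta * np.cosh(x)) * np.sinh(x) ** (2 * nu)
    return quad(integrand, 0, np.inf)[0]

def rhs(beta, nu):
    return (2 / beta) ** nu * gamma(nu + 0.5) * kv(nu, beta) / np.sqrt(np.pi)

for beta, nu in [(1.0, 0.5), (2.5, 1.5), (0.7, 2.0)]:
    print(beta, nu, lhs(beta, nu), rhs(beta, nu))  # columns agree
\end{verbatim}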
At first glance, the expressions in table~\ref{tab:massive} seem rather unhelpful for further analysis. This is not quite the case: For example, remembering that for fixed $\nu$
\begin{equation}
K_\nu(\beta) \stackrel{\beta\to \infty}{\sim} \sqrt{\frac{\pi}{2\beta}} e^{-\beta},
\end{equation}
tells us that for high masses sparsity will be regained in any (fixed) dimension. From a phase-space point of view this is what physical intuition would suggest. Likewise, asymptotic expansions for $z\to 0$ regain our earlier, massless results. Similar asymptotic analysis was employed to separate superradiant regimes from genuine Hawking radiation in the analysis of the Kerr space-time in \cite{MyThesis} and \cite{HawkFlux1} (though it involved modified Bessel functions of the \emph{first} kind and required restricting oneself to sparsities fulfilling property~\eqref{eq:binning}).
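The large-$\beta$ asymptotics above is easily illustrated numerically (a sketch of ours, assuming SciPy): the ratio of $K_\nu(\beta)$ to its leading-order approximation tends to $1$.
\begin{verbatim}
# Sketch: K_nu(beta) approaches sqrt(pi/(2 beta)) exp(-beta) for large beta.
import numpy as np
from scipy.special import kv

nu = 2.5
for beta in [5.0, 20.0, 80.0]:
    ratio = kv(nu, beta) / (np.sqrt(np.pi / (2 * beta)) * np.exp(-beta))
    print(beta, ratio)  # ratio -> 1 as beta grows
\end{verbatim}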
\begin{table*}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{rcl}
$\eta_{\text{peak},E,n/E}$ &$=$& $\displaystyle \frac{(D-1)}{\sqrt{\pi}^{D-2} 2^{\nicefrac{D+3}{2}}}\frac{\Gamma\kl{\frac{D-1}{2}}}{\Gamma\kl{\frac{D+2}{2}}}\frac{\omega_{\text{peak},E,n/E}}{z^{\frac{D+1}{2}}} \kle{\displaystyle \sum_{n=0}^{\infty} \frac{(-s)^n e^{(n+1)\tilde\mu}}{(n+1)^{\frac{D-1}{2}}} K_{\nicefrac{D+1}{2}}\kl{(n+1)z}}^{-1} \frac{\lambda_\text{thermal}^{D-1}}{g(D) c_\text{eff} A_\text{H}}$\\
$\eta_{\text{avg.},E,n}$ &$=$& $\displaystyle \frac{D(D-1)}{2^{\nicefrac{(D+3)}{2}} \sqrt{\pi}^{D-2}} \frac{\Gamma\kl{\frac{D-1}{2}}}{\Gamma\kl{\frac{D+2}{2}}} \frac{\displaystyle\sum_{n=0}^{\infty} (-s)^n e^{(n+1)\tilde\mu} \frac{z^{\frac{D+3}{2}}}{(n+1)^{\frac{D-1}{2}}} \kl{K_{\nicefrac{(D-1)}{2}}\kl{(n+1)z} + \frac{D}{(n+1)z} K_{\nicefrac{(D+1)}{2}}\kl{(n+1)z}}}{\displaystyle\kl{\sum_{n=0}^{\infty} (-s)^n \frac{e^{(n+1)\tilde\mu}}{(n+1)^{\frac{D-1}{2}}} z^{\frac{D+1}{2}} K_{\nicefrac{(D+1)}{2}}\kl{(n+1)z} }^2} \frac{\lambda_\text{thermal}^{D-1}}{g(D) c_\text{eff} A_\text{H}}$\\
$\eta_{\text{avg.},\tau,n}$ &$=$& $\displaystyle\frac{D-1}{2 \pi^{\frac{D-2}{2}} z^{\frac{D-1}{2}}} \frac{\Gamma\kl{\frac{D-1}{2}}}{\Gamma\kl{\frac{D}{2}}} \kle{\sum_{n=0}^{\infty}(-s)^n e^{(n+1)\tilde\mu} \kl{\frac{2}{n+1}}^{\frac{D-1}{2}} K_{\nicefrac{(D-1)}{2}}\kl{(n+1)z} }^{-1} \frac{\lambda_\text{thermal}^{D-1}}{g(D) c_\text{eff} A_\text{H}} $\\[0.5cm]
$\eta_{\text{avg.},\lambda,n}$ & $=$ & $\displaystyle\frac{D-1}{(2z)^{D/2}} \kle{ \sum_{n=0}^{\infty} (-s)^n e^{(n+1)\tilde\mu} \kl{\frac{\pi}{n+1}}^{\frac{D-2}{2}} K_{\nicefrac{D}{2}}\kl{(n+1)z} }^{-1}\frac{\lambda_\text{thermal}^{D-1}}{g(D) c_\text{eff} A_\text{H}}$
\end{tabular}
}
\caption{Sparsities for massive particle emission in a $D+1$-dimensional Tangherlini space-time in terms of modified Bessel functions of the second kind $K_\nu(x)$. Here, $z\mathrel{\mathop:}= \frac{mc^2}{k_\text{B} T_\text{H}}$ is a dimensionless mass-parameter, $\lambda_{\text{thermal}}$ is the thermal wavelength, $c_\text{eff}$ a correction factor linking the capture cross-section with the horizon area $A_\text{H}$, and $g(D)$ the particles' degeneracy factor.}
\label{tab:massive}
\end{table*}
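Despite their appearance, the sums in table~\ref{tab:massive} are numerically benign. The following sketch (ours; all prefactors omitted, $s=-1$ for bosons and $\tilde\mu=0$ assumed for simplicity) evaluates the truncated Bessel sum entering $\eta_{\text{avg.},\lambda,n}$ and illustrates its rapid convergence in the truncation order.
\begin{verbatim}
# Sketch: truncated evaluation of the Bessel sum in eta_{avg,lambda,n};
# the terms decay like exp(-(n+1) z), so few terms suffice.
import numpy as np
from scipy.special import kv

def bessel_sum(z, D, s=-1.0, mu=0.0, N=40):
    n = np.arange(N)
    return np.sum((-s) ** n * np.exp((n + 1) * mu)
                  * (np.pi / (n + 1)) ** ((D - 2) / 2)
                  * kv(D / 2, (n + 1) * z))

for N in [1, 5, 40]:
    print(N, bessel_sum(z=0.5, D=5, N=N))  # stabilises quickly in N
\end{verbatim}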
\section{Conclusion}
In this letter, we have provided a generalisation to $D+1$ dimensions of the exact, heuristic, semi-classical results for non-rotating black holes found in \cite{HawkFlux1}, which introduced the concept of sparsity. We have reproduced and improved on the results of \cite{HodNdim}, and shown agreement with previous numerical studies \cite{KantiNdimBH,CardosoBulkHawking,KantiNdimBH2}. This highlights two things: first, it demonstrates the robustness of the heuristic concept of \enquote{sparsity}. Second, this concept provides a quick, simple, and often pedagogical insight into radiation processes, here exhibited for the Hawking radiation of a Tangherlini black hole in $D+1$ space-time dimensions.
Given the prevalence of higher dimensional model building encountered in the quest for quantum gravity, it seems important to have an easy-to-calculate yet predictive physical quantity like sparsity that helps to understand differences between such models. This is particularly true for the prime benchmark that is the Hawking effect: traditionally, differentiation between models relies on the connection between entropy and area, and on how different models vary this more or less severely compared to the Bekenstein--Hawking result. A property of the emitted radiation (like sparsity) instead seems experimentally more readily accessible than entropy or horizon area. Here, we presented the results for models whose dynamics are those of higher dimensional general relativity. Sparsity is, however, more than just a tool of curved space-time quantum field theory and general relativity: other phenomenological approaches involving generalised uncertainty principles \cite{SparsityAna,OngGUPSparsity}, and attempts to model backreaction \cite{SparsityBackreaction}, further illustrate the use of this tool for other dynamics, such as more particle physics inspired extensions (like string theory) might imply.
An obvious extension of the present letter is the analysis of Myers--Perry black holes along the lines of the Kerr analysis in \cite{MyThesis} and \cite{HawkFlux1}; for sparsities amenable to the binning property~\eqref{eq:binning}, even a combined superradiance-mass analysis could be performed based on the present results. Less straightforward would be an extension to other higher-dimensional models whose dynamics are not akin to those of general relativity.
\section*{Acknowledgements}
Part of the research presented here was funded by a Victoria University of Wellington PhD Scholarship. The author would like to thank Finnian Gray, Alexander Van-Brunt, and Matt Visser for many helpful discussions.
\section{Introduction}
Over the last decades, many wavelet procedures have been developed in various statistical frameworks. Yet, in multivariate settings, most of them are based on isotropic wavelet bases. These indeed have the advantage of being as easily tractable as their univariate counterparts since each isotropic wavelet is a tensor product of univariate wavelets coming from the same resolution level. Notable counterexamples are~\cite{DonohoCART},~\cite{Neumann} and~\cite{NeumannVonSachs}, or~\cite{ACF2015} and~\cite{ACFMaxiset}. They underline the usefulness of hyperbolic wavelet bases, where coordinatewise varying resolution levels are allowed, so as to recover a wider range of functions, and in particular functions with anisotropic smoothness.
Much attention has also been paid to the so-called curse of dimensionality. A common way to overcome this problem in statistics is to impose structural assumptions on the function to estimate. In a regression framework, beyond the well-known additive and single-index models, we may cite the work of~\cite{HorowitzMammen}, who propose a spline-based method in an additive model with unknown link function, or the use of ANOVA-like decompositions in~\cite{Ingster} or~\cite{DIT}. Besides, two landmark papers consider a general framework of composite functions, encompassing several classical structural assumptions:~\cite{JLT} propose a kernel-based procedure in the white noise framework, whereas~\cite{BaraudBirgeComposite} propose a general model selection procedure with a wide scope of applications. Finally, Lepski~\cite{LepskiInd} (see also~\cite{RebellesPointwise,RebellesLp}) considers density estimation with adaptation to a possibly multiplicative structure of the density. Meanwhile, in the field of Approximation Theory and Numerical Analysis, a renewed interest in function spaces with dominating mixed smoothness has been growing (see for instance~\cite{DTU}), due, for instance, to their tractability for multivariate integration. Such spaces do not impose any structure, but only that the highest order derivative is a mixed derivative. Surprisingly, in the statistical literature, it seems that only the thresholding-type procedures of~\cite{Neumann} and~\cite{BPenskyPicard} deal with such spaces, either in the white noise framework or in a functional deconvolution model.
In order to fill this gap, this paper is devoted to a new statistical procedure based on wavelet selection from hyperbolic biorthogonal bases. We underline its universality by studying it in a general intensity estimation framework, encompassing many examples of interest such as density, copula density, Poisson intensity or L\'evy jump intensity estimation. We first define a whole collection of linear subspaces, called models, generated by subsets of the dual hyperbolic basis, and a least-squares type criterion adapted to the norm induced by the primal hyperbolic basis. Then we describe a procedure to choose the best model from the data by using a penalized approach similar to~\cite{BBM}. Our procedure satisfies an oracle-type inequality provided the intensity to estimate is bounded. Besides, it reaches the minimax rate up to a constant factor, or up to a logarithmic factor, over a wide range of spaces with dominating mixed smoothness, and this rate is akin to the one we would obtain in a univariate framework. Notice that, contrary to~\cite{Neumann} or~\cite{BPenskyPicard}, we allow for a greater variety of such spaces (of Sobolev, H\"older or Besov type smoothness) and also for spatially nonhomogeneous smoothness. For that purpose, we prove a key result from nonlinear approximation theory, in the spirit of~\cite{BMApprox}, that may be of interest for other types of model selection procedures (see for instance~\cite{BirgeIHP,BaraudHellinger,BaraudBirgeRho}). Depending on the kind of intensity to estimate, different structural assumptions might make sense, some of which have been considered in~\cite{JLT},~\cite{BaraudBirgeComposite},~\cite{LepskiInd},~\cite{RebellesPointwise,RebellesLp}, but not all. We explain in what respect these structural assumptions fall within the scope of estimation under dominating mixed smoothness. Yet, we emphasize that we do not need to impose any structural assumptions on the target function. Thus in some way our method is adaptive at the same time to many structures. Besides, it can be implemented with a computational complexity linear in the sample size, up to logarithmic factors.
The plan of the paper is as follows. In Section~\ref{sec:framework}, we describe the general intensity estimation framework and several examples of interest. In Section~\ref{sec:onemodel}, we define the so-called pyramidal wavelet models and a least-squares type criterion, and provide a detailed account of estimation on a given model. Section~\ref{sec:selection} is devoted to the choice of an adequate penalty so as to perform data-driven model selection. The optimality of the resulting procedure from the minimax point of view is then discussed in Section~\ref{sec:adaptivity}, under mixed smoothness assumptions. The algorithm for implementing our wavelet procedure and an illustrative example are given in Section~\ref{sec:algorithm}. All proofs are postponed to Section~\ref{sec:proofs}. Let us end with a remark about notation. Throughout the paper, $C,C_1,\ldots$ will stand for numerical constants, and $C(\theta), C_1(\theta),\ldots$ for positive reals that only depend on some $\theta.$ Their values are allowed to change from line to line.
\section{Framework and examples}\label{sec:framework}
\subsection{General framework} Let $d\in\mathbb{N},d\geq 2,$ and $Q=\prod_{k=1}^d \left[a_k,b_k\right]$ be a given hyperrectangle in $\mathbb{R}^d$ equipped with its Borel $\sigma-$algebra $\mathcal B(Q)$ and the Lebesgue measure. We denote by $\L^2(Q)$ the space of square integrable functions on $Q,$ equipped with its usual
norm
\begin{equation}\label{eq:usualnorm}
\|t\| = \sqrt{\int_Q t^2(x) \d x}
\end{equation}
and scalar product $\langle . , .\rangle.$ In this article, we are interested in a nonnegative measure on $\mathcal B(Q)$ that admits a bounded density $s$ with respect to the Lebesgue measure, and our aim is to estimate that function $s$ over $Q.$ Given a probability space $(\Omega, \mathcal E,\P),$ we assume that there exists some random measure $M$ defined on $(\Omega, \mathcal E,\P)$, with values in the set of Borel measures on $Q$ such that, for all $A\in\mathcal B(Q),$
\begin{equation}\label{eq:M}
\mathbb{E}\left[M(A)\right] = \langle \BBone_A, s \rangle.
\end{equation}
By classical convergence theorems, this condition implies that, for all nonnegative or bounded measurable functions $t,$
\begin{equation}\label{eq:Mgen}
\mathbb{E}\left[\int_Q t \d M\right] = \langle t, s \rangle.
\end{equation}
We assume that we observe some random measure $\widehat M,$ which is close enough to $M$ in a sense to be made precise later. When $M$ can be observed, we set of course $\widehat M=M.$
\subsection{Examples}\label{sec:examples} Our general framework encompasses several special frameworks of interest, as we shall now show.
\subsubsection{Example 1: density estimation.}\label{sec:density} Given $n\in\mathbb{N}^\star,$ we observe identically distributed random variables $Y_1,\ldots,Y_n$ with common density $s$ with respect to the Lebesgue measure on $Q=\prod_{k=1}^d \left[a_k,b_k\right].$ The observed empirical measure is then given by
$$\widehat M(A)=M(A)=\frac{1}{n}\sum_{i=1}^n \BBone_A(Y_i), \text{ for } A\in\mathcal B(Q),$$
and obviously satisfies~\eqref{eq:M}.
\subsubsection{Example 2: copula density estimation.}\label{sec:copula} Given $n\in\mathbb{N}^\star,$ we observe independent and identically distributed random variables $X_1,\ldots,X_n$ with values in $\mathbb{R}^d.$ For $i=1,\ldots,n$ and $j=1,\ldots,d,$ the $j$-th coordinate $X_{ij}$ of $X_i$ has continuous distribution function $F_j.$ We recall that, from Sklar's Theorem~\cite{Sklar} (see also~\cite{Nelsen}, for instance), there exists a unique distribution function $C$ on $[0,1]^d$ with uniform marginals such that, for all $(x_1,\ldots,x_d)\in \mathbb{R}^d,$
$$\P(X_{i1}\leq x_1,\ldots, X_{id}\leq x_d) = C(F_1(x_1),\ldots,F_d(x_d)).$$
This function $C$ is called the copula of $X_{i1},\ldots,X_{id}.$ We assume that it admits a density $s$ with respect to the Lebesgue measure on $Q=[0,1]^d.$ Since $C$ is the joint distribution function of
the $F_j(X_{1j}), j=1,\ldots,d,$ a random measure satisfying~\eqref{eq:M} is given by
$$M(A)=\frac{1}{n}\sum_{i=1}^n \BBone_A\left(F_1(X_{i1}),\ldots,F_d(X_{id})\right), \text{ for } A\in\mathcal B([0,1]^d).$$
As the marginal distributions $F_j$ are usually unknown, we replace them by the empirical distribution functions $\hat F_{nj},$ where
$$\hat F_{nj}(t)=\frac{1}{n}\sum_{i=1}^n \BBone_{X_{ij}\leq t},$$
and define
$$\widehat M(A)=\frac{1}{n}\sum_{i=1}^n \BBone_A\left(\hat F_{n1}(X_{i1}),\ldots,\hat F_{nd}(X_{id})\right), \text{ for } A\in\mathcal B([0,1]^d).$$
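Computationally, the pseudo-observations $\hat F_{nj}(X_{ij})$ are nothing but coordinatewise ranks divided by $n$; the following minimal Python sketch (ours) makes this explicit.
\begin{verbatim}
# Sketch: pseudo-observations for an (n x d) sample X (rows = observations);
# the double argsort computes, for each entry, its rank within its column,
# which equals n * F_hat_{nj}(X_{ij}) for continuous (a.s. tie-free) data.
import numpy as np

def pseudo_observations(X):
    n = X.shape[0]
    return (np.argsort(np.argsort(X, axis=0), axis=0) + 1) / n

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
U = pseudo_observations(X)  # values in {1/n, ..., 1}, uniform-like marginals
\end{verbatim}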
\subsubsection{Example 3: Poisson intensity estimation.}\label{sec:Poisson} Let us denote by $\text{Vol}_d(Q)$ the Lebesgue measure of $Q=\prod_{k=1}^d \left[a_k,b_k\right].$ We observe a Poisson process $N$ on $Q$ whose mean measure has intensity $\text{Vol}_d(Q)s.$ In other words, for every finite family $(A_k)_{1\leq k\leq K}$ of disjoint measurable subsets of $Q,$ $N(A_1),\ldots,N(A_K)$ are independent Poisson random variables with respective parameters $\text{Vol}_d(Q)\int_{A_1} s,\ldots,\text{Vol}_d(Q)\int_{A_K} s.$
Therefore the empirical measure
$$\widehat M(A)=M(A)=\frac{N(A)}{\text{Vol}_d(Q)}, \text{ for } A\in\mathcal B(Q),$$
does satisfy~\eqref{eq:M}.
We do not assume $s$ to be constant throughout $Q$ so that the Poisson process may be nonhomogeneous.
\subsubsection{Example 4: L\'evy jump intensity estimation (continuous time).}\label{sec:levydensitycont} Let $T$ be a fixed positive real; we observe on $[0,T]$ a L\'evy process $\mathbf X=(X_t)_{t\geq 0}$ with values in $\mathbb{R}^d.$ In other words, $\mathbf X$ is a process starting at $0,$ with stationary and independent increments, which is continuous in probability with c\`adl\`ag trajectories (see for instance~\cite{Bertoin,Sato,ContTankov}). This process may have jumps, whose sizes are ruled by the so-called jump intensity measure or L\'evy measure. An important example of such a process is the compound Poisson process
$$X_t=\sum_{i=1}^{N_t} \xi_i,t\geq 0,$$
where $(N_t)_{t\geq 0}$ is a univariate homogeneous Poisson process, $(\xi_i)_{i\geq 1}$ are i.i.d. with values in $\mathbb{R}^d$ and distribution $\rho$ with no mass at $0,$ and $(N_t)_{t\geq 0}$ and $(\xi_i)_{i\geq 1}$ are independent. In this case, $\rho$ is also the L\'evy measure of $\mathbf X.$
Here, we assume that the L\'evy measure admits a density $f$ with respect to the Lebesgue measure on $\mathbb{R}^d\backslash\{0\}.$ Given some compact hyperrectangle $Q=\prod_{k=1}^d \left[a_k,b_k\right] \subset \mathbb{R}^d\backslash\{0\},$ our aim is to estimate the restriction $s$ of $f$ to $Q.$ For that purpose, we use the observed empirical measure
$$\widehat M(A)=M(A)=\frac{1}{T} \iint\limits_{[0,T]\times A} N(\d t, \d x), \text{ for } A\in\mathcal B(Q).$$
A well-known property of L\'evy processes states that the random measure $N$ defined for $B\in \mathcal B\left([0,+\infty)\times \mathbb{R}^d\backslash\{0\}\right)$ by
$$N(B)= \sharp \{t >0 : (t, X_t-X_{t^-}) \in B\}$$
is a Poisson process with mean measure
\begin{equation*}\label{eq:levymeasure}
\mu(B)=\int\int_B f(x) \d t \d x,
\end{equation*}
so that $M$ satisfies~\eqref{eq:M}.
\subsubsection{Example 5: L\'evy jump intensity estimation (discrete time).}\label{sec:levydensitydisc} The framework is the same as in Example 4, except that $(X_t)_{t\geq 0}$ is not observed. Given some time step $\Delta>0$ and $n\in\mathbb{N}^\star,$ we only have at our disposal the random variables
$$Y_i=X_{i\Delta}- X_{(i-1)\Delta}, i=1,\ldots n.$$
In order to estimate $s$ on $Q,$ we consider the random measure
$$M(A)=\frac{1}{n\Delta} \iint\limits_{[0,n\Delta]\times A} N(\d t, \d x), \text{ for } A\in\mathcal B(Q),$$
which is unobserved, and replaced for estimation purposes with
$$\widehat M(A)=\frac{1}{n \Delta }\sum_{i=1}^n \BBone_A(Y_i), \text{ for } A\in\mathcal B(Q).$$
\section{Estimation on a given pyramidal wavelet model}\label{sec:onemodel}
The first step of our estimation procedure relies on the definition of finite dimensional linear subspaces of $\L^2(Q),$ called models, generated by some finite families of biorthogonal wavelets. We only describe here models for $Q=[0,1]^d.$ For a general hyperrectangle $Q$, the adequate models can be deduced by translation and scaling. We then introduce a least-squares type contrast that allows to define an estimator of $s$ within a given wavelet model.
\subsection{Wavelets on $\L_2([0,1])$}\label{sec:uniwave_assumptions}
We shall first introduce a multiresolution analysis and a wavelet basis for $\L_2([0,1])$ satisfying the same general assumptions as in~\cite{Hochmuth} and~\cite{HochmuthMixed}. Concrete examples of wavelet bases satisfying those assumptions may be found in~\cite{CohenInterval} and~\cite{Dahmen} for instance.
In the sequel, we denote by $\kappa$ some positive constant, that only depends on the choice of the bases. We fix the coarsest resolution level at $j_0 \in\mathbb{N}.$ On the one hand, we assume that the scaling spaces
$$V_j=\text{Vect}\{\phi_\lambda; \lambda \in \Delta_j\}
\text{ and }
V^\star_j=\text{Vect}\{\phi^\star_\lambda; \lambda \in \Delta_j\}, j\geq j_0,$$
satisfy the following hypotheses:
\begin{enumerate}[$S.i)$]
\item (Riesz bases) For all $j\geq j_0$, $\{\phi_\lambda; \lambda \in \Delta_j\}$ are linearly independent functions from $\L_2([0,1])$, so are $\{\phi^\star_\lambda; \lambda \in \Delta_j\}$, and they form Riesz bases of $V_j$ and $V^\star_j$, \textit{i.e.} $\left\|\sum_{\lambda\in\Delta_j} a_\lambda \phi_\lambda\right\| \sim \left(\sum_{\lambda\in\Delta_j} a^2_\lambda \right)^{1/2} \sim \left\|\sum_{\lambda\in\Delta_j} a_\lambda \phi^\star_\lambda\right\|.$
\item (Dimension) There exists some nonnegative integer $B$ such that, for all $j\geq j_0$, $\dim(V_j)=\dim(V^\star_j)=\sharp \Delta_j =2^j+B.$
\item (Nesting) For all $j\geq j_0$, $V_j\subset V_{j+1}$ and $V^\star_j\subset V^\star_{j+1}.$
\item (Density) $\overline{\cup_{j\geq j_0} V_j}=\overline{\cup_{j\geq j_0} V^\star_j}=\L_2([0,1])$.
\item (Biorthogonality) Let $j\geq j_0$, for all $\lambda,\mu\in \Delta_j$, $\langle \phi_\lambda, \phi^\star_\mu\rangle =\delta_{\lambda,\mu}.$
\item (Localization) Let $j\geq j_0$, for all $\lambda\in \Delta_j$, $|\text{Supp}(\phi_\lambda)| \sim |\text{Supp}(\phi^\star_\lambda)| \sim 2^{-j}.$
\item (Almost disjoint supports) For all $ j\geq j_0$ and all $\lambda\in \Delta_j$,
$$\max\left(\sharp\{ \mu\in\Delta_j \text{ s.t. } \text{Supp}(\phi_\lambda) \cap \text{Supp}(\phi_\mu)\neq \varnothing\}, \sharp\{ \mu\in\Delta_j \text{ s.t. } \text{Supp}(\phi^\star_\lambda)\cap \text{Supp}(\phi^\star_\mu)\neq \varnothing\}\right) \leq \kappa.$$
\item (Norms) For all $j\geq j_0$ and all $\lambda\in\Delta_j$, $\|\phi_\lambda\|=\|\phi^\star_\lambda\|=1$ and $\max(\|\phi_\lambda\|_{\infty},\|\phi^\star_\lambda\|_{\infty}) \leq \kappa 2^{j/2}$.
\item (Polynomial reproducibility) The primal scaling spaces are exact of order $N$, \textit{i.e.} for all $j\geq j_0$, $\Pi_{N-1} \subset V_j,$ where $\Pi_{N-1}$ is the set of all polynomial functions with degree $\leq N-1$ over $[0,1].$
\end{enumerate}
On the other hand, the wavelet spaces
$$W_j=\text{Vect}\{\psi_\lambda; \lambda \in \nabla_j\}
\text{ and }
W^\star_j=\text{Vect}\{\psi^\star_\lambda; \lambda \in \nabla_j\}, j\geq j_0+1,$$
fulfill the following conditions:
\begin{enumerate}[$W.i)$]
\item (Riesz bases) The functions $\{\psi_\lambda;\lambda\in\cup_{j\geq j_0+1} \nabla_j\}$ are linearly independent. Together with the $\{\phi_\lambda;\lambda\in\Delta_{j_0}\}$, they form a Riesz basis for $\L_2([0,1])$. The same holds for the $\psi^\star$ and the $\phi^\star$.
\item (Orthogonality) For all $j\geq j_0$, $V_{j+1}=V_j\oplus W_{j+1}$ and $V^\star_{j+1}=V^\star_j\oplus W^\star_{j+1}$, with $V_j\perp W^\star_{j+1}$ and $V^\star_j\perp W_{j+1}.$
\item (Biorthogonality) Let $j\geq j_0$, for all $\lambda,\mu\in \nabla_{j+1}$, $\langle \psi_\lambda, \psi^\star_\mu\rangle =\delta_{\lambda,\mu}.$
\item (Localization) Let $j\geq j_0$, for all $\lambda\in \nabla_{j+1}$, $|\text{Supp}(\psi_\lambda)| \sim |\text{Supp}(\psi^\star_\lambda)| \sim 2^{-j}.$
\item (Almost disjoint supports) For all $ j\geq j_0$ and all $\lambda\in \nabla_{j+1}$,
$$\max\left(\sharp\{ \mu\in\nabla_{j+1} \text{ s.t. } \text{Supp}(\psi_\lambda) \cap \text{Supp}(\psi_\mu)\neq \varnothing\}, \sharp\{ \mu\in\nabla_{j+1} \text{ s.t. } \text{Supp}(\psi^\star_\lambda)\cap \text{Supp}(\psi^\star_\mu)\neq \varnothing\}\right) \leq \kappa.$$
\item (Norms) For all $j\geq j_0$ and all $\lambda\in\nabla_{j+1}$, $\|\psi_\lambda\|=\|\psi^\star_\lambda\|=1$ and $\max(\|\psi_\lambda\|_{\infty},\|\psi^\star_\lambda\|_{\infty}) \leq \kappa 2^{j/2}$.
\item (Fast Wavelet Transform) Let $j\geq j_0$, for all $\lambda\in\nabla_{j+1}$,
$$\sharp\{\mu\in\Delta_{j+1} | \langle \psi_\lambda ,\phi_\mu\rangle \neq 0 \}\leq \kappa$$
and for all $\mu\in\Delta_{j+1}$
$$ |\langle \psi_\lambda ,\phi_\mu\rangle| \leq \kappa.$$
The same holds for the $\psi^\star_\lambda$ and the $\phi^\star_\lambda.$
\end{enumerate}
\bigskip
\textit{Remarks: }
\begin{itemize}
\item These properties imply that any function $f\in\L_2([0,1])$ may be decomposed as
\begin{equation}\label{eq:decompwavelet1}
f=\sum_{\lambda\in \Delta_{j_0}}\langle f,\phi_\lambda \rangle \phi^\star_\lambda + \sum_{j\geq j_0+1} \sum_{\lambda\in \nabla_j} \langle f,\psi_\lambda \rangle \psi^\star_\lambda.
\end{equation}
\item Properties $S.ii)$ and $W.ii)$ imply that $\dim(W_{j+1})=2^j.$
\item Property $W.vii)$ means in particular that, at each resolution level $j$, any wavelet can be represented as a linear combination of scaling functions from the same resolution level, with both the number of components and the amplitude of the coefficients bounded independently of the level.
\end{itemize}
As is well known, contrary to orthogonal bases, biorthogonal bases allow for wavelets that are both symmetric and smooth. Besides, the primal and dual bases usually have different properties. Typically, in decomposition~\eqref{eq:decompwavelet1}, the analysis wavelets $\phi_\lambda$ and $\psi_\lambda$ are the ones with the most vanishing moments, whereas the synthesis wavelets $\phi^\star_\lambda$ and $\psi^\star_\lambda$ are the ones with the greatest smoothness. Yet, we may sometimes need the following smoothness assumptions on the analysis wavelets (not very restrictive in practice), only to bound residual terms due to the replacement of $M$ with $\widehat M.$
\medskip
\noindent
\textit{\textbf{Assumption (L).}
For all $\lambda\in\Delta_{j_0}$, for all $j\geq j_0$ and all $\mu\in\nabla_{j+1},$ $\phi_\lambda$ and $\psi_\mu$ are Lipschitz functions with Lipschitz norms satisfying $\|\phi_\lambda\|_L \leq \kappa 2^{3j_0/2}$ and $\|\psi_\mu\|_L\leq \kappa 2^{3j/2}.$}
\medskip
\noindent
We still refer to~\cite{CohenInterval} and~\cite{Dahmen} for examples of wavelet bases satisfying this additional assumption.
\subsection{Hyperbolic wavelet basis on $\L_2([0,1]^d)$}
In the sequel, for ease of notation, we set $\mathbb{N}_{j_0} =\{j\in\mathbb{N}, j\geq j_0\},$ $\nabla_{j_0}=\Delta_{j_0}$, $W_{j_0}=V_{j_0}$ and $W^\star_{j_0}=V^\star_{j_0},$ and for $\lambda\in\nabla_{j_0}$, $\psi_\lambda=\phi_\lambda$ and $\psi^\star_\lambda=\phi^\star_\lambda.$ Given a biorthogonal basis of $\L_2([0,1])$ chosen according to~\ref{sec:uniwave_assumptions}, we deduce biorthogonal wavelets of $\L_2([0,1]^d)$ by tensor product. More precisely, for $\boldsymbol{j}=(j_1,\ldots,j_d)\in\mathbb{N}_{j_0}^d$, we set $\bm{\nabla_j}=\nabla_{j_1}\times\ldots\times\nabla_{j_d}$ and for all $\bm{\lambda}=(\lambda_1,\ldots,\lambda_d)\in \bm{\nabla_j}$, we define $\Psi_{\bm{\lambda}}(x_1,\ldots,x_d)=\psi_{\lambda_1}(x_1)\ldots\psi_{\lambda_d}(x_d)$ and $\Psi^\star_{\bm{\lambda}}(x_1,\ldots,x_d)=\psi^\star_{\lambda_1}(x_1)\ldots\psi^\star_{\lambda_d}(x_d).$ Contrary to most statistical works based on wavelets, we thus allow for tensor products of univariate wavelets coming from different resolution levels $j_1,\ldots,j_d.$ Writing $\Lambda=\cup_{{\bm j}\in \mathbb{N}_{j_0}^d}\bm\nabla_{\bm j},$ the families $\{\Psi_{\bm{\lambda}};\bm{\lambda}\in \Lambda \}$ and $\{\Psi^\star_{\bm{\lambda}};\bm{\lambda}\in \Lambda\}$ define biorthogonal bases of $\L_2([0,1]^d)$ called biorthogonal hyperbolic bases. Indeed,
$$\L_2([0,1]^d)=\overline{\cup_{j\geq j_0} V_j\otimes \ldots \otimes V_j},$$
and for all $j\geq j_0,$
\begin{align*}
V_j\otimes \ldots \otimes V_j
&=(W_{j_0}\oplus W_{j_0+1}\oplus\ldots\oplus W_j )\otimes\ldots\otimes(W_{j_0}\oplus W_{j_0+1}\oplus\ldots\oplus W_j ) \\
&=\underset{j_0\leq k_1,\ldots,k_d \leq j}\oplus W_{k_1}\otimes\ldots\otimes W_{k_d}.
\end{align*}
In the same way,
$$\L_2([0,1]^d)=\overline{\cup_{j\geq j_0} \underset{j_0\leq k_1,\ldots,k_d \leq j}\oplus W^\star_{k_1}\otimes\ldots\otimes W^\star_{k_d}}.$$
Besides, they induce on $\L_2([0,1]^d)$ the norms
\begin{equation}\label{eq:normbiortho}
\|t\|_{\bm\Psi}=\sqrt{\sum_{\bm{\lambda} \in\Lambda} \langle t,\Psi_{\bm{\lambda}}\rangle ^2}\text{ and }\|t\|_{\bm\Psi^\star}=\sqrt{\sum_{\bm{\lambda} \in\Lambda} \langle t,\Psi^\star_{\bm{\lambda}}\rangle ^2},
\end{equation}
which are both equivalent to $\|.\|,$ with equality when the wavelet basis is orthogonal. It should be noticed that the scalar product derived from $\|.\|_{\bm\Psi},$ for instance, is
\begin{equation}\label{eq:psbiortho}
\langle t,u\rangle_{\bm\Psi}=\sum_{\bm{\lambda} \in\Lambda} \langle t,\Psi_{\bm{\lambda}}\rangle\langle u,\Psi_{\bm{\lambda}}\rangle.
\end{equation}
\subsection{Pyramidal models}
A wavelet basis in dimension 1 has a natural pyramidal structure when the wavelets are grouped according to their resolution level. So does a hyperbolic basis, provided we define a proper notion of resolution level that takes anisotropy into account: for a wavelet $\Psi_{\bm\lambda}$ or $\Psi^\star_{\bm\lambda}$ with $\bm\lambda\in\bm\nabla_{\bm j},$ we define the global resolution level as $|\bm j|:=j_1+\ldots+j_d.$ Thus, the supports of all wavelets corresponding to a given global resolution level $\ell\in\mathbb{N}_{dj_0}$ have a volume of roughly $2^{-\ell}$ but exhibit very different shapes. For all $\ell \in\mathbb{N}_{dj_0},$ we define $\bm J_\ell=\{\bm j \in \mathbb{N}_{j_0}^d/|\bm j|=\ell\},$ and $U\bm{\nabla}( \ell)=\cup_{\bm j \in \bm J_\ell} \bm\nabla_{\bm j} $ the index set for $d$-variate wavelets at resolution level $\ell$.
Given some maximal resolution level $L_\bullet\in\mathbb{N}_{dj_0}$, we define, for all $\ell_1\in\{dj_0+1,\ldots, L_\bullet +1\}$, the family $\mathcal M^\mathcal P_{\ell_1}$ of all sets $m$ of the form
$$m=\left(\bigcup_{\ell=dj_0}^{\ell_1-1} U\bm{\nabla}( \ell) \right)
\cup \left( \bigcup_{k=0}^{L_\bullet-\ell_1} m(\ell_1+k)\right),$$
where, for all $0\leq k \leq L_\bullet-\ell_1$, $m(\ell_1+k)$ may be any subset of $U\bm{\nabla}(\ell_1+k)$ with $N(\ell_1,k)$ elements. Typically, $N(\ell_1,k)$ will be chosen so as to impose some sparsity: it is expected to be smaller than the total number of wavelets at level $\ell_1+k$ and to decrease when the resolution level increases. An adequate choice of $N(\ell_1,k)$ will be proposed in Proposition~\ref{prop:combinatorial}. Thus, choosing a set in $\mathcal M^\mathcal P_{\ell_1}$ amounts to keeping all hyperbolic wavelets at level at most $\ell_1-1,$ but only a few at deeper levels. We set $\mathcal M^\mathcal P=\cup_{\ell_1=dj_0+1}^{L_\bullet +1} \mathcal M^\mathcal P_{\ell_1}$ and define a pyramidal model as any finite dimensional subspace of the form
$$S^{\star}_m=\text{Vect}\{\Psi^\star_{\bm\lambda}; \bm\lambda \in m \}, \text{ for }m \in \mathcal M^\mathcal P.$$ We denote by $D_m$ the dimension of $S^{\star}_m.$
Setting $m_\bullet=\bigcup_{\ell=dj_0}^{L_\bullet} U\bm{\nabla}(\ell),$ we can see that all pyramidal models are included in $S^{\star}_{m_\bullet}.$
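For concreteness, the index sets $\bm J_\ell$ are easy to enumerate; a minimal Python sketch (ours, with the hypothetical helper \texttt{J}):
\begin{verbatim}
# Sketch: all resolution multi-indices j = (j_1,...,j_d) with j_k >= j0
# and global resolution level |j| = j_1 + ... + j_d = ell.
from itertools import product

def J(ell, d, j0):
    return [j for j in product(range(j0, ell + 1), repeat=d) if sum(j) == ell]

print(J(ell=4, d=2, j0=1))  # [(1, 3), (2, 2), (3, 1)]
\end{verbatim}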
\subsection{Least-squares type estimator on a pyramidal model}\label{sec:LSonemodel}
Let us fix some model $m\in\mathcal M^\mathcal P.$ If the random measure $M$ is observed, then we can build a least-squares type estimator $\check s_m^\star$ for $s$ with values in $S^\star_m$ and associated with the norm $\|.\|_{\bm\Psi}$ defined by~\eqref{eq:normbiortho}. Indeed, setting
\begin{equation*}
\gamma(t)=\|t\|_{\bm\Psi}^2 - 2 \sum_{\bm\lambda\in\Lambda} \langle t, \Psi_{\bm\lambda} \rangle\check\beta_{\bm\lambda},
\end{equation*}
where $$\check\beta_{\bm\lambda}=\int_Q \Psi_{\bm\lambda} \d M,$$
we deduce from~\eqref{eq:Mgen} that $s$ minimizes over $t\in\L^2(Q)$
\begin{align*}
\|s-t\|^2_{\bm\Psi} - \|s\|^2_{\bm\Psi}
= \|t\|^2_{\bm\Psi}- 2 \sum_{\bm\lambda\in \Lambda} \langle t,\Psi_{\bm\lambda} \rangle \langle s,\Psi_{\bm\lambda} \rangle
=\mathbb{E}[\gamma(t)],
\end{align*}
so we introduce
$$\check s^\star_m =\argmin{t\in S^\star_m} \gamma (t).$$
For all sequences of reals $(\alpha_{\bm\lambda})_{\bm\lambda \in m},$
\begin{equation}\label{eq:checksm}
\gamma\left(\sum_{\bm\lambda \in m} \alpha_{\bm\lambda} \Psi^\star_{\bm\lambda} \right)
= \sum_{\bm\lambda \in m}\left(\alpha_{\bm\lambda} - \check\beta_{\bm\lambda}\right)^2 - \sum_{\bm\lambda \in m} \check \beta_{\bm\lambda}^2,
\end{equation}
hence
$$\check s^\star_m =\sum_{\bm\lambda \in m} \check \beta_{\bm\lambda} \Psi^\star_{\bm\lambda}.$$
Since we only observe the random measure $\widehat M,$ we consider the pseudo-least-squares contrast
\begin{equation*}
\widehat \gamma(t)=\|t\|_{\bm\Psi}^2 - 2 \sum_{\bm\lambda\in\Lambda} \langle t, \Psi_{\bm\lambda} \rangle{\widehat\beta_{\bm\lambda}},
\end{equation*}
where
$$\widehat\beta_{\bm\lambda}=\int_Q \Psi_{\bm\lambda} \d \widehat M,$$
and we define the best estimator of $s$ within $S^\star_m$ as
$$\widehat s^\star_m = \argmin{t\in S^\star_m} \widehat\gamma (t)= \sum_{\bm\lambda \in m} \widehat \beta_{\bm\lambda} \Psi^\star_{\bm\lambda}.$$
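To fix ideas, here is a minimal sketch (ours) of the empirical coefficients and the resulting estimator in the special case of the orthonormal Haar system on $[0,1]^2$, using genuine wavelet factors only; the general biorthogonal case follows the same pattern.
\begin{verbatim}
# Sketch: empirical coefficients beta_hat = (1/n) sum_i Psi_lambda(Y_i)
# for tensor-product Haar wavelets, a model m being a list of indices
# lambda = (j1, k1, j2, k2).
import numpy as np

def haar(j, k, x):
    # L2-normalised Haar wavelet psi_{j,k} on [0,1]
    t = 2.0 ** j * x - k
    return 2.0 ** (j / 2) * (((0 <= t) & (t < 0.5)).astype(float)
                             - ((0.5 <= t) & (t < 1)).astype(float))

def beta_hat(Y, lam):
    j1, k1, j2, k2 = lam
    return np.mean(haar(j1, k1, Y[:, 0]) * haar(j2, k2, Y[:, 1]))

rng = np.random.default_rng(1)
Y = rng.uniform(size=(1000, 2))  # density s = 1 on the unit square
m = [(0, 0, 0, 0), (1, 0, 0, 0), (0, 0, 1, 1)]
coeffs = {lam: beta_hat(Y, lam) for lam in m}  # all close to 0 here
\end{verbatim}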
\subsection{Quadratic risk on a pyramidal model}\label{sec:QRonemodel}
Let us introduce the orthogonal projection of $s$ on $S^\star_m$ for the norm $\|.\|_{\bm\Psi},$ that is
$$s^\star_m =\sum_{\bm\lambda \in m}\beta_{\bm\lambda}\Psi^\star_{\bm\lambda},$$
where
$$\beta_{\bm\lambda}=\langle \Psi_{\bm\lambda},s \rangle.$$
It follows from~\eqref{eq:Mgen} that $\check \beta_{\bm\lambda}$ is an unbiased estimator for $\beta_{\bm\lambda},$ so that $\check s^\star_m$ is an unbiased estimator for $s^\star_m.$ Thanks to Pythagoras' equality, we recover for $\check s^\star_m$ the usual decomposition
\begin{equation}\label{eq:riskexact}
\mathbb{E}\left[\|s-\check s^\star_m\|^2_{\bm\Psi}\right] = \|s-s^\star_m\|^2_{\bm\Psi} + \sum_{\bm\lambda\in m} \text{\upshape{Var}}(\check \beta_{\bm\lambda}),
\end{equation}
where the first term is a bias term or approximation error and the second term is a variance term or estimation error. When only $\widehat M$ is observed, combining the triangle inequality, the basic inequality~\eqref{eq:basic} and~\eqref{eq:riskexact} easily provides at least an upper-bound akin to~\eqref{eq:riskexact}, up to a residual term.
\begin{prop}\label{prop:onemodel}
For all $\theta >0,$
$$ \mathbb{E}\left[\|s-\hat s^\star_m\|^2_{\bm\Psi}\right]
\leq (1+\theta)\left( \|s- s^\star_m\|^2_{\bm\Psi} + \sum_{\bm\lambda\in m} \text{\upshape{Var}}(\check \beta_{\bm\lambda})\right) + (1+1/\theta)\mathbb{E}\left[ \|\check s^\star_m-\hat s^\star_m\|^2_{\bm \Psi}\right].$$
When $\widehat M=M,$ $\theta$ can be taken equal to 0 and equality holds.
\end{prop}
\noindent
In all the examples introduced in Section~\ref{sec:examples}, we shall verify that the quadratic risk satisfies, for all $\theta>0,$
\begin{equation}\label{eq:upperonemodel}
\mathbb{E}\left[\|s-\hat s^\star_m\|^2_{\bm\Psi}\right]
\leq c_1 \|s- s^\star_m\|^2_{\bm\Psi} + c_2 \frac{\|s\|_\infty D_m}{\bar n} + r_1(\bar n),
\end{equation}
where $\bar n$ describes the amount of available data, and the residual term $r_1(\bar n)$ does not weigh too much upon the estimation rate.
\subsubsection{Example 1: density estimation (continued).} In this framework, the empirical coefficients are of the form
$$\check \beta_{\bm\lambda}=\frac1n\sum_{i=1}^n \Psi_{\bm\lambda} (Y_i).$$
As the wavelets are normalized and $s$ is bounded,
$$ \text{\upshape{Var}}(\check \beta_{\bm\lambda})\leq \frac1n\int_Q\Psi^2_{\bm\lambda} (x) s(x) \d x \leq \frac{\|s\|_\infty}{n},$$
so
$$ \mathbb{E}\left[\|s-\hat s^\star_m\|^2_{\bm\Psi}\right]
\leq \|s- s^\star_m\|^2_{\bm\Psi} + \frac{\|s\|_\infty D_m}{n}.$$
Hence~\eqref{eq:upperonemodel} is satisfied for instance with $\bar n=n,$ $c_1=c_2=1,r_1(n)=0.$
\subsubsection{Example 2: copula density estimation (continued).} In this case,
$$\check \beta_{\bm\lambda}=\frac1n\sum_{i=1}^n \Psi_{\bm\lambda} \left(F_1(X_{i1}),\ldots,F_d(X_{id})\right),$$
while
$$\widehat \beta_{\bm\lambda}=\frac1n\sum_{i=1}^n \Psi_{\bm\lambda} \left(\hat F_{n1}(X_{i1}),\ldots,\hat F_{nd}(X_{id})\right).$$
As in Example 1, $\text{\upshape{Var}}(\check \beta_{\bm\lambda})\leq \|s\|_\infty/ n.$ Besides we prove in Section~\ref{sec:proofscopula} the following upper-bound for the residual terms.
\begin{prop}\label{prop:copulaone} Under Assumption (L), for all $m\in\mathcal M^{\mathcal P},$
$$ \mathbb{E}[\|\check s^\star_m-\hat s^\star_m\|^2] \leq C(\kappa,d)L^{d-1}_\bullet 2^{4L_\bullet} \log(n)/n.$$
\end{prop}
\noindent
Hence choosing $\bar n=n,$ $2^{4L_\bullet}=\sqrt{n}/\log(n),$ $c_1=c_2=2,$ and $r_1(n)=C(\kappa,d)\log( n)^{d-1}/\sqrt{n}$ yields~\eqref{eq:upperonemodel}.
\subsubsection{Example 3: Poisson intensity estimation (continued).} In this case,
$$\check \beta_{\bm\lambda}=\frac{1}{\text{Vol}_d(Q)}\int_Q \Psi_{\bm\lambda} (x) N(\d x).$$
From Campbell's formula,
$$ \text{\upshape{Var}}(\check \beta_{\bm\lambda})= \frac{1}{\text{Vol}_d(Q)} \int_Q \Psi^2_{\bm\lambda} (x) s(x) \d x$$
so
$$ \mathbb{E}\left[\|s-\hat s^\star_m\|^2_{\bm\Psi}\right]
\leq \|s- s^\star_m\|^2_{\bm\Psi} + \frac{\|s\|_\infty D_m}{\bar n},$$
with $\bar n=\text{Vol}_d(Q).$
\subsubsection{Example 4: L\'evy jump intensity estimation with continuous time observations (continued).} In this case,
$$\check \beta_{\bm\lambda}=\frac{1}{T}\iint\limits_{[0,T]\times Q} \Psi_{\bm\lambda} (x) N(\d t, \d x).$$
From Campbell's formula again,
$$ \text{\upshape{Var}}(\check \beta_{\bm\lambda})= \frac{1}{T^2}\iint\limits_{[0,T]\times Q} \Psi^2_{\bm\lambda} (x) s(x)\d t \d x$$
so
$$ \mathbb{E}\left[\|s-\hat s^\star_m\|^2_{\bm\Psi}\right]
\leq \|s- s^\star_m\|^2_{\bm\Psi} + \frac{\|s\|_\infty D_m}{\bar n},$$
with $\bar n=T.$
\subsubsection{Example 5: L\'evy jump intensity estimation with discrete time observations (continued).} In this case, the empirical coefficients and their approximate counterparts are of the form
$$\check \beta_{\bm\lambda}=\frac{1}{n\Delta}\iint\limits_{[0,n\Delta]\times Q} \Psi_{\bm\lambda} (x) N(\d t, \d x)\quad\text{and}\quad \widehat \beta_{\bm\lambda}=\frac{1}{n\Delta}\sum_{i=1}^n \Psi_{\bm\lambda} (X_{i\Delta}-X_{(i-1)\Delta}).$$
We deduce as previously that $ \text{\upshape{Var}}(\check \beta_{\bm\lambda})\leq \|s\|_\infty/\bar n$ with $\bar n=n\Delta.$ Besides we can bound the residual term thanks to the following proposition, proved in Section~\ref{sec:proofslevydisc}.
\begin{prop}\label{prop:disclevyone} Under Assumption (L), for all $m\in\mathcal M^{\mathcal P},$
$$\mathbb{E}[\|\check s^\star_m-\hat s^\star_m\|^2]
\leq 8\frac{\|s\|_\infty D_m}{n\Delta}+ C(\kappa,d,f,Q) L_\bullet^{d-1} \frac{2^{4L_\bullet}n\Delta^3+ 2^{3L_\bullet}\Delta }{n\Delta},$$
provided $\Delta$ is small enough.
\end{prop}
\noindent
Assuming $n\Delta^2$ stays bounded while $n\Delta\rightarrow\infty$ as $n\rightarrow \infty,$ and choosing $2^{4L_\bullet}=n\Delta,$ we deduce that~\eqref{eq:upperonemodel} is satisfied under Assumption (L) with $\bar n=n\Delta, c_1=2, c_2=18,$ and $r_1(\bar n)=C(\kappa,d,f,Q)\log(\bar n)^{d-1}/\bar n.$ Notice that these assumptions on $n$ and $\Delta$ are classical in the so-called framework of high-frequency observations.
\textit{Remark:} Proposition~\ref{prop:disclevyone} extends~\cite{FigueroaDiscrete} to a multivariate model with a complex structure due to the use of hyperbolic wavelets, instead of isotropic ones. Yet, the extension is not so straightforward, so we give a detailed proof in Section~\ref{sec:proofs}.
\section{Wavelet pyramid model selection}\label{sec:selection}
The upper-bound~\eqref{eq:upperonemodel} for the risk on one pyramidal model suggests that a good model should be large enough so that the approximation error is small, and small enough so that the estimation error is small. Without prior knowledge on the function $s$ to estimate, choosing the best pyramidal model is thus impossible. In this section, we describe a data-driven procedure that selects the best pyramidal model from the data, without using any smoothness assumption on $s.$ We provide theoretical results that guarantee the performance of such a procedure. We underline how these properties are linked with the structure of the collection of models.
\subsection{Penalized pyramid selection} When $M$ is observed, we deduce from~\eqref{eq:riskexact} that
\begin{equation*}
\mathbb{E}\left[\|s-\check s^\star_m\|^2_{\bm\Psi}\right] - \|s\|^2_{\bm\Psi} = -\|s^\star_m\|^2_{\bm\Psi} + \sum_{\bm\lambda\in m} \text{\upshape{Var}}(\check \beta_{\bm\lambda})
\end{equation*}
and from~\eqref{eq:checksm} that $\gamma(\check s^\star_m)=-\|\check s^\star_m\|^2_{\bm\Psi}.$ Following the work of~\cite{BBM}, we introduce a penalty function $\text{\upshape{pen}} : \mathcal M^{\mathcal P} \rightarrow \mathbb{R}^+$ and choose a best pyramidal model from the data defined as
$$\hat m^{\mathcal P}=\argmin{m\in\mathcal M^{\mathcal P}}{\left(\hat \gamma(\hat s_m^\star) + \text{\upshape{pen}}(m)\right)}.$$
In order to choose the pyramidal model with smallest quadratic risk, the penalty $\text{\upshape{pen}}(m)$ is expected to behave roughly as the estimation error within model $m.$ We provide such a penalty in the following Section. Our final estimator for $s$ is then
$$\tilde s^{\mathcal P}= \hat s_{\hat m^{\mathcal P}}^\star.$$
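Since $\hat \gamma(\hat s_m^\star)=-\sum_{\bm\lambda \in m} \widehat\beta_{\bm\lambda}^2,$ the selection step only requires the empirical coefficients and the penalty; a minimal sketch (ours, with hypothetical containers \texttt{models}, \texttt{coeffs} and \texttt{pen}):
\begin{verbatim}
# Sketch: penalized model selection; 'models' maps a model label to its
# index set, 'coeffs' maps an index to its empirical coefficient, and
# 'pen' is the penalty function chosen in the next subsection.
def select_model(models, coeffs, pen):
    def criterion(m):
        return -sum(coeffs[lam] ** 2 for lam in models[m]) + pen(m)
    return min(models, key=criterion)
\end{verbatim}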
\subsection{Combinatorial complexity and choice of the penalty function}
As widely exemplified in~\cite{Massart,BGH} for instance, the choice of an adequate penalty depends on the combinatorial complexity of the collection of models, which is measured through the index
\begin{equation}\label{eq:combindex}
\max_{dj_0+1 \leq \ell_1 \leq L_\bullet +1 } \frac{\log\left (\sharp \mathcal M^\mathcal P_{\ell_1}\right)}{D(\ell_1)},
\end{equation}
where $D(\ell_1)$ is the common dimension of all pyramidal models in $\mathcal M^\mathcal P_{\ell_1}.$ Ideally, this index should be upper-bounded independently of the sample size for the resulting model selection procedure to reach the optimal estimation rate. The following proposition describes the combinatorial complexity of the collection of pyramidal models.
\begin{prop}\label{prop:combinatorial} Let $M=2+B/2^{j_0-1}.$ For all $\ell_1\in\{dj_0+1,\ldots, L_\bullet +1\}$ and all $k\in\{0,\ldots,L_\bullet-\ell_1\},$ let
\begin{equation}\label{eq:N}
N(\ell_1,k)=\lfloor 2\sharp U\bm{\nabla}(\ell_1+k)(k+2)^{-(d+2)}2^{-k} M^{-d}\rfloor
\end{equation}
and $D(\ell_1)$ be the common dimension of all models in $\mathcal M^\mathcal P_{\ell_1}.$ There exist positive reals $\kappa_1(d),$ $\kappa_2(j_0,B,d)$ and $\kappa_3(j_0,B,d)$ such that
\begin{equation*}\label{eq:cardinalpart}
\kappa_1(d) (\ell_1-dj_0+d-2)^{d-1}2^{\ell_1} \leq D(\ell_1) \leq \kappa_2(j_0,B,d)(\ell_1-dj_0+d-2)^{d-1}2^{\ell_1}
\end{equation*}
and
\begin{equation*}\label{eq:cardinalsubfamily}
\log\left (\sharp \mathcal M^\mathcal P_{\ell_1}\right) \leq \kappa_3(j_0,B,d) D(\ell_1).
\end{equation*}
\end{prop}
\noindent
We recall that $B$ is defined in Section~\ref{sec:uniwave_assumptions} (Assumption $S.ii)$). Possible values for $\kappa_1,\kappa_2$ and $\kappa_3$ are given in the proof, which is postponed to Section~\ref{sec:proofcombinatorial}. In the same way, we could prove a matching lower-bound for $\log\left (\sharp \mathcal M^\mathcal P_{\ell_1}\right)$ for large enough $\ell_1,$ so that the whole family $\mathcal M^\mathcal P$ contains of the order of $L_\bullet^{d-1} 2^{L_\bullet}$ models. Typically, we will choose $L_\bullet$ such that $2^{L_\bullet}$ is a power of the sample size $\bar n.$ So while $\mathcal M^\mathcal P$ contains at least an exponential number of models, the number of models per dimension is moderate enough for the combinatorial index~\eqref{eq:combindex} to remain bounded.
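For concreteness, the cardinalities entering~\eqref{eq:N} follow from $\sharp\nabla_{j_0}=2^{j_0}+B$ and $\sharp\nabla_j=2^{j-1}$ for $j>j_0$ (Assumption $S.ii)$ and the remark on $W.ii)$); a minimal, self-contained Python sketch (ours):
\begin{verbatim}
# Sketch: #U_nabla(ell) = sum over |j| = ell of prod_k #nabla_{j_k},
# then N(ell1, k) as in (eq:N).
from itertools import product
from math import floor

def card_U(ell, d, j0, B):
    total = 0
    for j in product(range(j0, ell + 1), repeat=d):
        if sum(j) == ell:
            p = 1
            for jk in j:
                p *= (2 ** j0 + B) if jk == j0 else 2 ** (jk - 1)
            total += p
    return total

def N(ell1, k, d, j0, B):
    M = 2 + B / 2 ** (j0 - 1)
    return floor(2 * card_U(ell1 + k, d, j0, B)
                 * (k + 2) ** (-(d + 2)) * 2.0 ** (-k) * M ** (-d))

print(N(ell1=5, k=0, d=2, j0=1, B=0))  # -> 1
\end{verbatim}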
From now on, we assume that~\eqref{eq:N} is satisfied, as well as the following hypotheses. For every subfamily $\mathcal T$ of $S^\star_{m_\bullet},$ let
$$\mathcal Z(\mathcal T)=\sup_{t\in\mathcal T} \left(\int_Q \sum_{\lambda \in m_\bullet} \langle t, \Psi_{\bm\lambda} \rangle \Psi_{\bm \lambda} \d M -\langle s,t\rangle_{\bm \Psi}\right).$$
\medskip
\noindent
\textit{\textbf{Assumption (Conc).} There exist positive reals $\bar n,\kappa'_1,\kappa'_2,\kappa'_3$ such that, for all countable subfamily $\mathcal T$ of $\left\{t\in S^\star_{m_\bullet} | \| t\|_{\bm\Psi}=1\right\}$ satisfying
$$\sup_{t\in\mathcal T}\left\|\sum_{\lambda \in m_\bullet} \langle t, \Psi_{\bm\lambda} \rangle \Psi_{\bm \lambda}\right \|_\infty \leq B(\mathcal T)$$
for some positive constant $B(\mathcal T),$ we have, for all $x>0,$
$$\P\left(\mathcal Z(\mathcal T) \geq \kappa'_1 \mathbb{E}\left[\mathcal Z(\mathcal T)\right] + \sqrt{\kappa'_2 \|s\|_\infty \frac{x}{\bar n}} + \kappa'_3 B(\mathcal T) \frac{x}{\bar n}\right) \leq \exp(-x).$$}
\medskip
\noindent
\textit{\textbf{Assumption (Var).} There exist a nonnegative constant $\kappa'_4$ and a collection of estimators $(\hat \sigma^2_{\bm \lambda})_{{\bm \lambda}\in m_\bullet}$ such that, for all ${\bm \lambda} \in m_\bullet$,
$$\mathbb{E}\left[\hat \sigma^2_{\bm \lambda}\right] \leq \kappa'_4 \max(\|s\|_\infty,1).$$
Besides there exist a nonnegative constant $\kappa'_5,$ a nonnegative function $w$ such that $w(\bar n)/\bar n \xrightarrow[\bar n \to \infty] {} 0$, and a measurable event $\Omega_\sigma$ on which, for all ${\bm \lambda} \in m_\bullet,$
$$\text{\upshape{Var}}(\check \beta_{\bm \lambda}) \leq \kappa'_5 \frac{\max\{\hat \sigma^2_{\bm \lambda},1\}}{\bar n}$$
and such that
$$p_\sigma:=\P(\Omega_{\sigma}^c)\leq \frac{w(\bar n)}{\bar n}.$$}
\medskip
\noindent
\textit{\textbf{Assumption (Rem).} For the same function $w$ as in Assumption \textbf{(Var)} and some nonnegative constant $\kappa'_6,$
$$\mathbb{E}\left[\|\check s^\star_{m} - \hat s^\star_{m}\|^2_{\bm \Psi}\right] \leq \kappa'_6\frac{\|s\|_\infty D_m}{\bar n} + \frac{w(\bar n)}{\bar n}, \text{ for all } {m\subset m_\bullet},$$
and
$$\max\left\{\frac{1}{\bar n (\log(\bar n)/d)^{(d+1)/2}} \sqrt{\mathbb{E}\left[\|\check s^\star_{m_\bullet} - \hat s^\star_{m_\bullet}\|^4_{\bm \Psi}\right]},\sqrt{p_\sigma\mathbb{E}\left[\|\check s^\star_{m_\bullet} - \hat s^\star_{m_\bullet}\|^4_{\bm \Psi}\right]} \right\}\leq \frac{w(\bar n)}{\bar n}.$$
}
\noindent
Assumption \textbf{(Conc)} describes how the random measure $M$ concentrates around the measure to estimate. Assumption \textbf{(Var)} ensures that we can estimate the variance terms $\mathbb{E}\left[\|s^\star_m - \check s^\star_m \|^2_{\bm \Psi}\right]$ over each $m\in\mathcal M^{\mathcal P}.$ Lastly, Assumption \textbf{(Rem)} describes how close $\widehat M$ is to $M.$
\begin{theo}\label{theo:choosepen} Assume that~\eqref{eq:N}, Assumptions \textbf{(Conc)}, \textbf{(Var)} and \textbf{(Rem)} are satisfied, and that $\max(\|s\|_\infty,1) \leq \bar R.$ Choose $L_\bullet$ such that
$$2^{L_\bullet}\leq \frac{\bar n}{\left((\log \bar n)/d\right)^{2d}}$$
and a penalty of the form
$$\text{\upshape{pen}}(m)= \sum_{\lambda\in m}\frac{c_1\hat \sigma^2_{\bm\lambda}+c_2\bar R}{\bar n}, m\in\mathcal M^\mathcal P.$$
If $c_1,c_2$ are positive and large enough, then
$$\mathbb{E}\left[\|s-\tilde s^{\mathcal P}\|^2_{\bm \Psi} \right] \leq C_1 \min_{m\in\mathcal M^{\mathcal P}}\left (\|s-s^\star_m\|^2_{\bm \Psi}+ \frac{\bar R D_m}{\bar n}\right) +C_2 \frac{\max\left\{\|s\|^2_{\bm \Psi},\|s\|_\infty,1\right\}}{\bar n}\left(1+\left(\log (\bar n)/d\right)^{-3(d+1)/2} +w(\bar n)\right)$$
where $C_1$ may depend on $\kappa'_1,\kappa'_2,\kappa'_4,\kappa'_5,\kappa'_6,c_1,c_2$ and $C_2$ may depend on $\kappa'_1,\kappa'_2,\kappa'_3,\kappa'_7,j_0,d.$
\end{theo}
\noindent
In practice, the penalty constants $c_1$ and $c_2$ are calibrated by a simulation study. We may also replace $\bar R$ in the penalty by $\max\{\|\hat s^\star_{m_\bullet}\|_\infty,1\},$ and extend Theorem~\ref{theo:choosepen} to a random $\bar R$ by using arguments similar to~\cite{AkakpoLacour}.
\subsection{Back to the examples}\label{sec:backtoex} First, two general remarks are in order. For $t\in S^\star_{m_\bullet},$ let $f_t= \sum_{\lambda \in m_\bullet} \langle t, \Psi_{\bm\lambda} \rangle \Psi_{\bm \lambda},$ then $\|f_t\|= \|t\|_{\bm \Psi}$ and by~\eqref{eq:Mgen}, for all countable subfamily $\mathcal T$ of $S^\star_{m_\bullet},$
$$\mathcal Z(\mathcal T)=\sup_{t\in\mathcal T} \left(\int_Q f_t \d M -\mathbb{E}\left[\int_Q f_t \d M \right]\right).$$
So Assumption \textbf{(Conc)} usually proceeds from a Talagrand type concentration inequality. Besides, we have seen in Section~\ref{sec:QRonemodel} that in general
$$\max_{\bm\lambda \in m_{\bullet}} \text{\upshape{Var}}(\check\beta_{\bm\lambda})\leq \frac{\|s\|_\infty}{\bar n}.$$ Thus, whenever some upper-bound $R_\infty$ for $\|s\|_\infty$ is known, Assumption \textbf{(Var)} is satisfied with $\hat \sigma^2_{\bm \lambda} = R_\infty$ for all $\bm \lambda\in m_\bullet,$ $\Omega_\sigma=\Omega,$ $w(\bar n)=0,\kappa'_4=\kappa'_5=1.$ One may also estimate each variance term: this is what we propose in the following results, proved in Section~\ref{sec:proofcorollaries}.
\begin{corol}\label{corol:density} In the density estimation framework (see~\ref{sec:density}), let $\bar R\geq \max(\|s\|_\infty,1),$ $2^{L_\bullet}=n{\left((\log n)/d\right)^{-2d}},$
$$\hat \sigma^2_{\bm \lambda}=\frac{1}{n(n-1)}\sum_{i=2}^n\sum_{j=1}^{i-1}\left(\Psi_{\bm\lambda}(Y_i)-\Psi_{\bm\lambda}(Y_j)\right)^2 \text{ for all } \bm\lambda \in m_{\bullet},$$
and
$$\text{\upshape{pen}}(m)= \sum_{\lambda\in m}\frac{c_1\hat \sigma^2_{\bm\lambda}+c_2\bar R}{n}, \text{ for all }m\in\mathcal M^\mathcal P.$$
If $c_1,c_2$ are positive and large enough, then
$$\mathbb{E}\left[\|s-\tilde s^{\mathcal P}\|^2_{\bm \Psi} \right] \leq C_1 \min_{m\in\mathcal M^{\mathcal P}}\left (\|s-s^\star_m\|^2_{\bm \Psi}+ \frac{\bar R D_m}{n}\right) +C_2 \frac{\max\left\{\|s\|^2_{\bm \Psi},\bar R\right\}}{n}$$
where $C_1$ may depend on $\kappa, d,c_1,c_2$ and $C_2$ may depend on $\kappa,j_0,d.$
\end{corol}
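Note that the double sum defining $\hat\sigma^2_{\bm\lambda}$ collapses algebraically to the unbiased sample variance of the $\Psi_{\bm\lambda}(Y_i),$ which is how one would implement it; a minimal sketch (ours):
\begin{verbatim}
# Sketch: v[i] = Psi_lambda(Y_i); then
# (1/(n(n-1))) sum_{j<i} (v_i - v_j)^2 = sum_i (v_i - mean(v))^2 / (n-1).
import numpy as np

def sigma2_hat(v):
    return np.var(v, ddof=1)

v = np.array([0.3, -1.2, 0.7, 2.0])
n = len(v)
literal = sum((v[i] - v[j]) ** 2
              for i in range(1, n) for j in range(i)) / (n * (n - 1))
assert np.isclose(literal, sigma2_hat(v))
\end{verbatim}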
\begin{corol}\label{corol:copula} In the copula density estimation framework (see~\ref{sec:copula}), let $\bar R\geq \max(\|s\|_\infty,1)$ and $2^{L_\bullet}=\min\{n^{1/8}(\log n)^{-1/4},n{\left((\log n)/d\right)^{-2d}}\}.$ For all $\bm\lambda \in m_{\bullet},$ define
$$\hat \sigma^2_{\bm \lambda}=\frac{1}{n(n-1)}\sum_{i=2}^n\sum_{j=1}^{i-1}\left(\Psi_{\bm\lambda}\left(\hat F_{n1}(X_{i1}),\ldots,\hat F_{nd}(X_{id})\right)-\Psi_{\bm\lambda}\left(\hat F_{n1}(X_{j1}),\ldots,\hat F_{nd}(X_{jd})\right)\right)^2,$$
and for all $m\in\mathcal M^\mathcal P,$ let
$$\text{\upshape{pen}}(m)= \sum_{\lambda\in m}\frac{c_1\hat \sigma^2_{\bm\lambda}+c_2\bar R}{n}.$$
Under Assumption (L), and if $c_1,c_2$ are positive and large enough, then
$$\mathbb{E}\left[\|s-\tilde s^{\mathcal P}\|^2_{\bm \Psi} \right] \leq C_1 \min_{m\in\mathcal M^{\mathcal P}}\left (\|s-s^\star_m\|^2_{\bm \Psi}+ \frac{\bar R D_m}{n}\right) +C_2 \max\left\{\|s\|^2_{\bm \Psi},\bar R\right\}\frac{(\log n)^{d-1}}{\sqrt{n}}$$
where $C_1$ may depend on $\kappa, d,c_1,c_2$ and $C_2$ may depend on $\kappa,j_0,d.$
\end{corol}
\begin{corol}\label{corol:Poisson} In the Poisson intensity estimation framework (see~\ref{sec:Poisson}), let $\bar R\geq \max(\|s\|_\infty,1),$ $2^{L_\bullet}=\text{Vol}_d(Q){\left((\log \text{Vol}_d(Q))/d\right)^{-2d}},$
$$\hat \sigma^2_{\bm \lambda}=\frac{1}{\text{Vol}_d(Q)}\int_Q \Psi^2_{\bm\lambda}\d N, \text{ for all } \bm\lambda \in m_{\bullet},$$
and
$$\text{\upshape{pen}}(m)= \sum_{\lambda\in m}\frac{c_1\hat \sigma^2_{\bm\lambda}+c_2\bar R}{\text{Vol}_d(Q)}, \text{ for all }m\in\mathcal M^\mathcal P.$$
If $c_1,c_2$ are positive and large enough, then
$$\mathbb{E}\left[\|s-\tilde s^{\mathcal P}\|^2_{\bm \Psi} \right] \leq C_1 \min_{m\in\mathcal M^{\mathcal P}}\left (\|s-s^\star_m\|^2_{\bm \Psi}+ \frac{\bar R D_m}{\text{Vol}_d(Q)}\right) +C_2 \frac{\max\left\{\|s\|^2_{\bm \Psi},\bar R\right\}}{\text{Vol}_d(Q)}$$
where $C_1$ may depend on $\kappa, d,c_1,c_2$ and $C_2$ may depend on $\kappa,j_0,d.$
\end{corol}
\noindent
\begin{corol}\label{corol:levydensitycont} In the L\'evy jump intensity estimation framework with continuous time observations (see~\ref{sec:levydensitycont}), let $\bar R\geq \max(\|s\|_\infty,1),$ $2^{L_\bullet}=T{\left((\log T)/d\right)^{-2d}},$
$$\hat \sigma^2_{\bm \lambda}=\frac{1}{T} \iint\limits_{[0,T]\times Q} \Psi^2_{\bm\lambda}(x) N(\d t, \d x), \text{ for all } \bm\lambda \in m_{\bullet},$$
and
$$\text{\upshape{pen}}(m)= \sum_{\lambda\in m}\frac{c_1\hat \sigma^2_{\bm\lambda}+c_2\bar R}{T}, \text{ for all }m\in\mathcal M^\mathcal P.$$
If $c_1,c_2$ are positive and large enough, then
$$\mathbb{E}\left[\|s-\tilde s^{\mathcal P}\|^2_{\bm \Psi} \right] \leq C_1 \min_{m\in\mathcal M^{\mathcal P}}\left (\|s-s^\star_m\|^2_{\bm \Psi}+ \frac{\bar R D_m}{T}\right) +C_2 \frac{\max\left\{\|s\|^2_{\bm \Psi},\bar R\right\}}{T}$$
where $C_1$ may depend on $\kappa, d,c_1,c_2$ and $C_2$ may depend on $\kappa,j_0,d.$
\end{corol}
\begin{corol}\label{corol:levydensitydisc} In the L\'evy jump intensity estimation framework with discrete time observations (see~\ref{sec:levydensitydisc}), let $\bar R\geq \max(\|s\|_\infty,1),$ $2^{L_\bullet}=\min\left\{(n\Delta)^{1/4},n\Delta{\left(\log (n\Delta)/d\right)^{-2d}}\right\},$
$$\hat \sigma^2_{\bm \lambda}=\frac{1}{n\Delta} \sum_{i=1}^n \Psi^2_{\bm\lambda}\left(X_{i\Delta}- X_{(i-1)\Delta}\right), \text{ for all } \bm\lambda \in m_{\bullet},$$
and
$$\text{\upshape{pen}}(m)= \sum_{\lambda\in m}\frac{c_1\hat \sigma^2_{\bm\lambda}+c_2\bar R}{n\Delta}, \text{ for all }m\in\mathcal M^\mathcal P.$$
If Assumption (L) is satisfied, if $n\Delta^2$ stays bounded while $n\Delta\rightarrow\infty$ as $n\rightarrow \infty,$ and if $c_1,c_2$ are positive and large enough, then
$$\mathbb{E}\left[\|s-\tilde s^{\mathcal P}\|^2_{\bm \Psi} \right] \leq C_1 \min_{m\in\mathcal M^{\mathcal P}}\left (\|s-s^\star_m\|^2_{\bm \Psi}+ \frac{\bar R D_m}{n\Delta}\right) +C_2 \max\left\{\|s\|^2_{\bm \Psi},\bar R\right\}\frac{\log^{d-1}(n\Delta)}{n\Delta}$$
where $C_1$ may depend on $\kappa, d,c_1,c_2$ and $C_2$ may depend on $\kappa,j_0,d,Q,f.$
\end{corol}
\noindent
Corollaries~\ref{corol:Poisson},~\ref{corol:levydensitycont},~\ref{corol:levydensitydisc} extend respectively the works of~\cite{ReynaudPoisson,FigueroaHoudre,UK} to a multivariate framework, with a complex family of models allowing for nonhomogeneous smoothness, and a more refined penalty.
\section{Adaptivity to mixed smoothness}\label{sec:adaptivity}
It remains to compare the performance of our procedure $\tilde s^{\mathcal P}$ to that of other estimators. For that purpose, we derive the estimation rate of $\tilde s^{\mathcal P}$ under smoothness assumptions that induce sparsity on the hyperbolic wavelet coefficients of $s.$ We then compare it to the minimax rate.
\subsection{Function spaces with dominating mixed smoothness} For $\alpha\in\mathbb{N}^\star$ and $1\leq p\leq \infty,$ the mixed Sobolev space with smoothness $\alpha$ measured in the $\L_p-$norm is defined as
$$SW^\alpha_{p,(d)}=\left\{f\in\L_p([0,1]^d) \Bigg| \|f\|_{SW^\alpha_{p,(d)}}:= \sum_{0\leq r_1,\ldots,r_d\leq \alpha} \left\|\frac{\partial^{r_1+\ldots+r_d}f}{\partial^{r_1}x_1\ldots\partial^{r_d}x_d}\right\|_p <\infty\right\},$$
while the classical Sobolev space is
$$W^\alpha_{p,(d)}=\left\{f\in\L_p([0,1]^d) \Bigg| \|f\|_{W^\alpha_{p,(d)}}:= \sum_{0\leq r_1+\ldots+r_d\leq \alpha} \left\|\frac{\partial^{r_1+\ldots+r_d}f}{\partial^{r_1}x_1\ldots\partial^{r_d}x_d}\right\|_p <\infty\right\}.$$
The former contains functions whose highest order derivative is the mixed derivative $\partial^{d\alpha}f /{\partial^\alpha x_1\ldots\partial^\alpha x_d},$ while the latter contains all derivatives up to global order $d\alpha.$ Both spaces coincide in dimension $d=1,$ and otherwise we have the obvious continuous embeddings
\begin{equation}\label{eq:embedSobol}
W^{d\alpha}_{p,(d)} \hookrightarrow SW^\alpha_{p,(d)} \hookrightarrow W^\alpha_{p,(d)}.
\end{equation}
H\"older and Besov spaces with mixed dominating smoothness may be defined thanks to mixed differences. For $f:[0,1]\rightarrow \mathbb{R},x\in[0,1]$ and $h>0,$
$$\Delta^0_{h}(f,x) =f(x),\quad\Delta^1_{h}(f,x) =f(x+h)-f(x)$$
and more generally, for $r\in\mathbb{N}^\star,$ the $r$-th order univariate difference operator is
$$\Delta^r_{h}=\Delta^1_{h}\circ \Delta^{r-1}_{h},$$ so that
\begin{equation}\label{eq:binomdiff}
\Delta^r_h(f,x)=\sum_{k=0}^r \binom{r}{k} (-1)^{r-k}f(x+kh).
\end{equation}
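\smallskip
\noindent
\textit{Illustration: } the binomial formula~\eqref{eq:binomdiff} is straightforward to evaluate numerically; in the toy Python sketch below (our own code), the sanity check uses the fact that an $r$-th order difference annihilates polynomials of degree smaller than $r.$
\begin{verbatim}
from math import comb

def univariate_difference(f, x, h, r):
    # Delta^r_h(f, x) = sum_{k=0}^{r} C(r, k) (-1)^(r-k) f(x + k h)
    return sum(comb(r, k) * (-1) ** (r - k) * f(x + k * h)
               for k in range(r + 1))

# sanity check: a third-order difference of a quadratic vanishes
assert abs(univariate_difference(lambda t: t ** 2, 0.1, 0.01, 3)) < 1e-12
\end{verbatim}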
Then for $t>0$ the univariate modulus of continuity of order $r$ in $\L_p$ is defined as
$$w_r(f,t)_p=\sup_{0<h<t} \|\Delta^{r}_{h}(f,.)\|_p.$$
For $f:[0,1]^d\rightarrow \mathbb{R},\mathbf x=(x_1,\ldots,x_d)\in[0,1]^d,r\in\mathbb{N}^\star$ and $h_\ell>0,$ we denote by $\Delta^r_{h_\ell,\ell}$ the univariate difference operator applied to the $\ell$-th coordinate while keeping the other ones fixed, so that
$$\Delta^r_{h_\ell,\ell}(f,\mathbf x)=\sum_{k=0}^r \binom{r}{k} (-1)^{r-k}f(x_1,\ldots,x_\ell+kh_\ell,\ldots,x_d).$$
For any subset $\mathbf e$ of $\{1,\ldots,d\}$ and $\mathbf h=(h_1,\ldots,h_d)\in(0,+\infty)^d,$ the $r$-th order mixed difference operator is given by
$$\Delta^{r,\mathbf e}_{\mathbf h}:= \prod_{\ell \in \mathbf e} \Delta^r_{h_\ell,\ell}.$$
For $\mathbf t=(t_1,\ldots,t_d)\in(0,+\infty)^d,$ we set $\mathbf {t_e}=(t_\ell)_{\ell \in \mathbf e},$ and define the mixed modulus of continuity
$$w_r^{\mathbf e}(f,\mathbf {t_e})_p=\sup_{0<h_\ell<t_\ell,\ell \in \mathbf e} \|\Delta^{r,\mathbf e}_{\mathbf h}(f,.)\|_p.$$
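\smallskip
\noindent
\textit{Illustration: } since the univariate operators commute, $\Delta^{r,\mathbf e}_{\mathbf h}$ may be computed by applying them one coordinate at a time; a direct (if inefficient) Python sketch of our own, reusing \texttt{univariate\_difference} above and indexing coordinates from $0$:
\begin{verbatim}
def mixed_difference(f, x, h, r, e):
    # f: R^d -> R (takes a list), x: point, h: steps, e: set of coordinates
    if not e:
        return f(x)
    l = min(e)  # the operators commute, so the order is irrelevant
    def f_diff(y):
        return univariate_difference(
            lambda t: f(y[:l] + [t] + y[l + 1:]), y[l], h[l], r)
    return mixed_difference(f_diff, x, h, r, e - {l})

# first-order mixed difference of x1*x2 equals h1*h2
assert abs(mixed_difference(lambda v: v[0] * v[1],
                            [0.2, 0.3], [0.1, 0.2], 1, {0, 1}) - 0.02) < 1e-12
\end{verbatim}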
For $\alpha>0$ and $0< p\leq \infty,$ the mixed H\"older space $SH^\alpha_{p,(d)}$ is the space of all functions $f:[0,1]^d\rightarrow \mathbb{R}$ such that
$$\|f\|_{SH^\alpha_{p,(d)}}:=\sum_{\mathbf e \subset \{1,\ldots,d\}} \sup_{\mathbf t >0} \prod_{\ell \in \mathbf e}{t_\ell ^{-\alpha}}w_{\lfloor \alpha \rfloor +1}^{\mathbf e}(f,\mathbf {t_e})_p$$
is finite, where by convention the term associated with $\mathbf e=\emptyset$ is $\|f\|_p.$ More generally, for $\alpha>0$ and $0<p,q\leq \infty,$ the mixed Besov space $SB^\alpha_{p,q,(d)}$ is the space of all functions $f:[0,1]^d\rightarrow \mathbb{R}$
such that
$$\|f\|_{SB^\alpha_{p,q,(d)}}:=\sum_{\mathbf e \subset \{1,\ldots,d\}} \left(\int_{(0,1)}\ldots\int_{(0,1)}\left( \prod_{\ell \in \mathbf e}{t_\ell ^{-\alpha}} w_{\lfloor \alpha \rfloor +1}^{\mathbf e}(f,\mathbf {t_e})_p\right)^q \prod_{\ell \in \mathbf e} \frac{\d t_\ell}{t_\ell}\right)^{1/q},$$
is finite, where the $\L_q$-norm is replaced by a sup-norm in case $q=\infty,$ so that $SB^{\alpha}_{p,\infty,(d)}=SH^{\alpha}_{p,(d)}.$
By comparison, the usual Besov space $B^\alpha_{p,q,(d)}$ may be defined as the space of all functions $f\in\L_p([0,1]^d)$ such that
$$\|f\|_{B^\alpha_{p,q,(d)}}:=
\left\{
\begin{array}{ll}
\|f\|_p +\sum_{\ell=1}^d \left(\int_{(0,1)} \left({t_\ell ^{-\alpha}} w_{\lfloor \alpha \rfloor +1}^{\{\ell\}}(f,t_\ell)_p\right)^q \frac{\d t_\ell}{t_\ell}\right)^{1/q}& \mbox{if } 0<q < \infty \\
\|f\|_p +\sum_{\ell=1}^d \sup_{t_\ell>0}{t_\ell ^{-\alpha}} w_{\lfloor \alpha \rfloor +1}^{\{\ell\}}(f,t_\ell)_p & \mbox{if } q=\infty
\end{array}
\right.
$$
is finite. Extending~\eqref{eq:embedSobol}, the recent results of~\cite{NguyenSickelEmbed} confirm that the continuous embeddings
\begin{equation*}
B^{d\alpha}_{p,q,(d)} \hookrightarrow SB^\alpha_{p,q,(d)} \hookrightarrow B^\alpha_{p,q,(d)}
\end{equation*}
hold under fairly general assumptions on $\alpha,p,q,d.$
On the other hand, given $\alpha>0,0<p<\infty, 0<q \leq \infty,$ we define
$$N_{\bm\Psi,\alpha,p,q}(f) =
\left\{
\begin{array}{ll}
\left(\sum_{\ell\geq dj_0} 2^{q\ell(\alpha+1/2-1/p)} \sum_{\bm j\in \bm J_\ell} \left(\sum_{\bm\lambda\in\bm{\nabla_j}} {|\langle f,\Psi_{\bm\lambda}\rangle|^p}\right)^{q/p}\right)^{1/q} & \mbox{if } 0<q < \infty \\
\sup_{\ell\geq dj_0} 2^{\ell(\alpha+1/2-1/p)} \sup_{\bm j\in\bm J_\ell} \left(\sum_{\bm\lambda\in\bm{\nabla_j}} {|\langle f,\Psi_{\bm\lambda}\rangle|^p}\right)^{1/p} & \mbox{if } q=\infty
\end{array}
\right.
$$
and $N_{\bm\Psi,\alpha,\infty,q}$ in the same way by replacing the $\ell_p$-norm with a sup-norm.
Then for $\alpha>0,0<p,q\leq \infty,R>0,$ we denote by $\mathcal {SB}(\alpha,p,q,R)$ the set of all functions $f\in\L_p([0,1]^d)$ such that
$$N_{\bm\Psi,\alpha,p,q}(f)\leq R.$$
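\smallskip
\noindent
\textit{Illustration: } the quantity $N_{\bm\Psi,\alpha,p,q}(f)$ is easy to evaluate from the hyperbolic wavelet coefficients; a toy Python sketch for finite $p,q$ (the data layout, a dictionary mapping each level $\ell$ to the list of coefficient arrays over $\bm\nabla_{\bm j},$ $\bm j\in\bm J_\ell,$ is our own convention):
\begin{verbatim}
import numpy as np

def mixed_besov_seminorm(coeffs, alpha, p, q):
    # N(f)^q = sum_ell 2^(q ell (alpha + 1/2 - 1/p))
    #               * sum_{j in J_ell} (sum_lambda |<f, Psi_lambda>|^p)^(q/p)
    total = 0.0
    for ell, blocks in coeffs.items():
        inner = sum(np.sum(np.abs(b) ** p) ** (q / p) for b in blocks)
        total += 2.0 ** (q * ell * (alpha + 0.5 - 1.0 / p)) * inner
    return total ** (1.0 / q)
\end{verbatim}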
Under appropriate conditions on the smoothness of $\bm\Psi^\star,$ which we will assume to be satisfied in the sequel, the sets $\mathcal {SB}(\alpha,p,q,R)$ may be interpreted as balls with radius $R$ in Besov spaces with dominating mixed smoothness $SB^{\alpha}_{p,q,(d)}$ (see for instance~\cite{SchmeisserTriebel,HochmuthMixed,Heping,DTU}). Mixed Sobolev spaces are not easily characterized in terms of wavelet coefficients, but they satisfy the continuous embeddings
$$ SB^{\alpha}_{p,\min(p,2),(d)} \hookrightarrow SW^{\alpha}_{p,(d)} \hookrightarrow SB^{\alpha}_{p,\max(p,2),(d)}, \text{ for } 1<p<\infty$$
and
$$ SB^{\alpha}_{1,1,(d)} \hookrightarrow SW^{\alpha}_{1,(d)} \hookrightarrow SB^{\alpha}_{1,\infty,(d)}$$
(see~\cite{DTU}, Section 3.3). So, without loss of generality, we shall mostly turn our attention to Besov-H\"older spaces in the sequel.
\subsection{Link with structural assumptions} The following property collects examples of composite functions with mixed dominating smoothness built from lower dimensional functions with classical Sobolev or Besov smoothness. The proof and upper-bounds for the norms of the composite functions are given in Section~\ref{sec:proofcomposite}. An analogous property for (mixed) Sobolev smoothness instead of (mixed) Besov smoothness can be proved straightforwardly.
\begin{prop}\label{prop:composite} Let $\alpha>0$ and $0<p,q\leq \infty.$
\begin{enumerate}[(i)]
\item If $u_1,\ldots,u_d\in B^{\alpha}_{p,q,(1)},$ then $f(\mathbf x)=\sum_{\ell=1}^d u_\ell(x_\ell) \in SB^{\alpha}_{p,q,(d)}.$
\item Let $\mathfrak P$ be some partition of $\{1,\ldots,d\}.$ If, for all $I \in\mathfrak P,u_I \in B^{\alpha_I}_{p,q,(|I|)},$ then $f(\mathbf x)=\prod_{I\in\mathfrak P} u_I(\mathbf x_I)\in SB^{\bar \alpha}_{p,q,(d)}$ where $\bar\alpha=\min_{I\in\mathfrak P} (\alpha_I/|I|).$
\item Let $\alpha\in\mathbb{N}^\star$ and $p>1.$ If $g\in W^{d\alpha}_{\infty,(1)}$ and $u_\ell \in W^\alpha_{p,(1)}$ for $\ell=1,\ldots,d,$ then $f(\mathbf x)=g\left(\sum_{\ell=1}^d u_\ell(x_\ell)\right)\in SW^{\alpha}_{p,(d)}.$
\item If $f\in SB^{\alpha}_{p,q,(d)}$ with $\alpha>1$ and $\partial^d f/\partial x_1\ldots\partial x_d\in\L_p([0,1]^d),$ then $\partial^d f/\partial x_1\ldots\partial x_d \in SB^{\alpha-1}_{p,q,(d)}.$
\item If $f_1$ and $f_2\in SB^{\alpha}_{p,p,(d)}$ where either $1<p\leq \infty$ and $\alpha>1/p,$ or $p=1$ and $\alpha\geq1,$ then the product function $\mathbf x \mapsto f_1(\mathbf x)f_2(\mathbf x)\in SB^{\alpha}_{p,p,(d)}.$
\end{enumerate}
\end{prop}
\noindent Notice that in $(i)$ (resp. $(ii), (iii)$), the assumptions on the component functions $u_\ell,u_I$ or $g$ are not enough to ensure that $f\in B^{d\alpha}_{p,q,(d)}$ (resp. $B^{d\bar\alpha}_{p,q,(d)}, W^{d\alpha}_{p,(d)}$).
\smallskip
\noindent
\textit{Remark: }We believe that a generalization of $(iii)$ to Besov or fractional Sobolev smoothness holds. Yet such a generalization would require refined arguments from Approximation Theory in the spirit of~\cite{BourdaudSickelComposition, MoussaiComposition} which are beyond the scope of this paper.
The structural assumption $(ii)$ may be satisfied in the multivariate density estimation framework~\ref{sec:density} whenever $Y_1=(Y_{11},\ldots,Y_{1d})$ can be split into independent sub-groups of coordinates, and has recently been considered in~\cite{LepskiInd,RebellesPointwise,RebellesLp}. Case $(i)$ and its generalization $(iii)$ may not be directly of use in our multivariate intensity framework, but they will allow us to draw a comparison with~\cite{HorowitzMammen,BaraudBirgeComposite}. Combining $(iii)$ and $(iv)$ is of interest for copula density estimation~\ref{sec:copula}, keeping in mind that Archimedean copulas (see~\cite{Nelsen}, Chapter 4) form a wide nonparametric family of copulas, with densities of the form
$$s(x_1,\ldots,x_d)=(\phi^{-1})'(x_1)\ldots (\phi^{-1})'(x_d) \phi^{(d)}\left(\phi^{-1}(x_1)+\ldots+\phi^{-1}(x_d)\right)$$
provided the generator $\phi$ is smooth enough (see for instance~\cite{McNeilNeslehova}). Combining $(iii), (iv),(v)$ may be of interest for L\'evy intensity estimation in~\ref{sec:levydensitycont} or \ref{sec:levydensitydisc}. Indeed, a popular way to build multivariate L\'evy intensities is based on L\'evy copulas studied in~\cite{KallsenTankov} (see also~\cite{ContTankov}, Chapter 5). The resulting L\'evy intensities then have the form
$$f(x_1,\ldots,x_d)= f_1(x_1)\ldots f_d(x_d) F^{(1,\ldots,1)}(U_1(x_1)+\ldots +U_d(x_d))$$
where $F$ is a so-called L\'evy copula, $F^{(1,\ldots,1)}=\partial^d {F}/ \partial t_1 \ldots \partial t_d$ and $U_\ell(x_\ell)=\int_{x_\ell}^\infty f_\ell(t)\d t.$ Besides, a common form for $F$ is
$$F(x)=\phi\left(\phi^{-1}(x_1)+\ldots+\phi^{-1}(x_d)\right)$$
under appropriate smoothness assumptions on $\phi.$ Lastly, let us emphasize that any linear combination (mixtures for instance) of functions in $SB^{\alpha}_{p,q,(d)}$ inherits the same smoothness. Consequently, mixed dominating smoothness may be thought of as a fully nonparametric surrogate for a wide range of structural assumptions.
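\smallskip
\noindent
\textit{Illustration: } as a sanity check of the Archimedean formula above, a computer algebra system can carry out the differentiations for a concrete generator; here is a SymPy sketch of our own for the Clayton generator $\phi(t)=(1+t)^{-1/\theta}$ (so that $\phi^{-1}(x)=x^{-\theta}-1$) in dimension $d=2.$
\begin{verbatim}
import sympy as sp

x1, x2, t, theta = sp.symbols('x1 x2 t theta', positive=True)
phi = (1 + t) ** (-1 / theta)              # Clayton generator
phi_inv = lambda x: x ** (-theta) - 1      # its inverse

# s(x1,x2) = (phi^-1)'(x1) (phi^-1)'(x2) phi''(phi^-1(x1) + phi^-1(x2))
d_inv = sp.diff(phi_inv(t), t)
s = (d_inv.subs(t, x1) * d_inv.subs(t, x2)
     * sp.diff(phi, t, 2).subs(t, phi_inv(x1) + phi_inv(x2)))
print(sp.simplify(s))
\end{verbatim}
Up to simplification, one recovers the classical Clayton copula density $(1+\theta)(x_1x_2)^{-\theta-1}\left(x_1^{-\theta}+x_2^{-\theta}-1\right)^{-1/\theta-2}.$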
\subsection{Approximation qualities and minimax rate}
We provide in Section~\ref{sec:proofapprox} a constructive proof for the following nonlinear approximation result, in the spirit of~\cite{BBM}.
\begin{theo}\label{theo:approx}
Let $R>0,0<p<\infty, 0<q\leq \infty,\alpha >\max(1/p-1/2,0)$, and $f\in\L_2([0,1]^d)\cap \mathcal {SB}(\alpha,p,q,R).$ Under~\eqref{eq:N}, for all $\ell_1\in\{dj_0+1,\ldots,L_\bullet +1\}$, there exists some model $m_{\ell_1}(f)\in\mathcal M_{\ell_1}^{\mathcal P}$ and some approximation $A(f,\ell_1)\in S^\star_{m_{\ell_1}(f)}$ for $f$ such that
\begin{align*}
&\|f-A(f,\ell_1)\|^2_{\bm\Psi} \\&\leq C(B,j_0,\alpha,p, d) R^2\left( L_\bullet^{2(d-1)(1/2-1/q)_+} 2^{-2L_\bullet(\alpha-(1/p-1/2)_+)}+\ell_1^{2(d-1)(1/2-1/\max(p,q))}2^{-2\alpha \ell_1} \right).
\end{align*}
\end{theo}
\noindent
\textit{Remark:} When $p\geq 2,$ the same kind of result still holds with all $N(\ell_1,k)=0.$ But Assumption~\eqref{eq:N} is really useful when $p<2,$ the so-called non-homogeneous smoothness case.
\medskip
\noindent
The first term in the upper-bound is a linear approximation error by the highest dimensional model $S^\star_{m_\bullet}$ in the collection. As $D_{m_\bullet}$ is of order $L_\bullet^{d-1}2^{L_\bullet},$ we deduce from~\cite{DTU} (Section 4.3) that this first term is optimal over $SB^\alpha_{p,q},$ at least for $1< p <\infty,1\leq q\leq \infty$ and $\alpha>\max(1/p-1/2,0),$ for instance. The second term in the upper-bound is a nonlinear approximation error of $f$ within the model $S^\star_{m_{\ell_1}(f)},$ with dimension $D_{m_{\ell_1}(f)}$ of order $\ell_1^{d-1}2^{\ell_1}.$ So we deduce from~\cite{DTU} (Theorem 7.6) that this second term, which is of order $D_{m_{\ell_1}(f)}^{-2\alpha}(\log D_{m_{\ell_1}(f)} )^{2(d-1)(\alpha+1/2-1/q)},$ is also optimal up to a constant factor over $SB^\alpha_{p,q},$ at least for $1< p <\infty, p\leq q\leq \infty$ and $\alpha>\max(1/p-1/2,0).$ Notice that, under the classical Besov smoothness assumption $f\in\L_2([0,1]^d)\cap B^{\alpha}_{p,q,(d)},$ the best possible approximation rate for $f$ by $D$-dimensional linear subspaces in the $\L_2$-norm would be of order $D^{-2\alpha/d}.$ Thus with a mixed smoothness of order $\alpha$ in dimension $d,$ we recover the same approximation rate as with a classical smoothness of order $d\alpha$ in dimension $d,$ up to a logarithmic factor.
Let us define, for $\alpha,p,q,R,R'>0,$
$$\overline{\mathcal {SB}}(\alpha,p,q,R,R')=\left\{f \in \mathcal {SB}(\alpha,p,q,R) \mid \|f\|_\infty \leq R'\right\}.$$ In the sequel, we use the notation $a\asymp C(\theta) b$ when there exist positive reals $C_1(\theta),C_2(\theta)$ such that $C_1(\theta) b\leq a \leq C_2(\theta) b.$
\begin{corol}\label{corol:upperrate} Assume $L_\bullet$ is large enough. Then, for all $0<p<\infty,0<q\leq \infty,\alpha>(1/p-1/2)_+,R\geq \bar n^{-1},R'>0,$
$$\sup_{s\in\overline{\mathcal {SB}}(\alpha,p,q,R,R')} \mathbb{E}_s \left[\|s-\tilde s^{\mathcal P}\|^2_{\bm\Psi}\right] \leq C(B,d,\alpha,p,R') \left( \left(\log(\bar n R^2)\right)^{(d-1)(\alpha+1/2-1/\max(p,q))} R\bar n^{-\alpha}\right)^{2/(1+2\alpha)}.$$
\end{corol}
\begin{proof} In order to approximately minimize the upper-bound, we choose $\ell_1$ such that
$$ \ell_1^{2(d-1)(1/2-1/\max(p,q))} 2^{-2\alpha \ell_1} R^2 \asymp C(\alpha,p,q,d) \ell_1^{d-1} 2^{\ell_1}/\bar n,$$
that is for instance
$$2^{\ell_1} \asymp C(\alpha,p,q,d) \left( \left(\log(\bar n R^2)\right)^{-2(d-1)/\max(p,q)} (\bar n R^2)\right)^{1/(1+2\alpha)},$$
which yields the announced upper-bound.
\end{proof}
\noindent
Remember that a similar result holds when replacing the $\bm\Psi$-norm by the equivalent $\L_2$-norm. Though unusual, the upper-bound in Corollary~\ref{corol:upperrate} is indeed related to the minimax rate.
\begin{prop} In the density estimation framework, assume $R^2\geq n^{-1},R'>0,p>0, 0<q\leq \infty,$ and either $\alpha>(1/p-1/2)_+$ and $q\geq 2$ or $\alpha>(1/p-1/2)_++1/\min(p,q,2) - 1/\min(p,2),$ then
$$\inf_{\hat s \text{ estimator of } s }\sup_{s\in\overline{\mathcal {SB}}(\alpha,p,q,R,R')} \mathbb{E}_s \left[\|s-\hat s\|^2\right] \asymp C(\alpha,p,q,d)\left( \left(\log(nR^2)\right)^{(d-1)(\alpha+1/2-1/q)} R n^{-\alpha}\right)^{2/(1+2\alpha)}.$$
\end{prop}
\begin{proof}
One may derive from~\cite{DTU} (Theorem 6.20),~\cite{Dung} (proof of Theorem 1) and the link between entropy numbers and Kolmogorov entropy that the Kolmogorov $\epsilon$-entropy of $\mathcal {SB}(\alpha,p,q,R)$ is
$$H_\epsilon(\alpha,p,q,R)=(R/\epsilon)^{1/\alpha} \left(\log(R/\epsilon)\right)^{(d-1)(\alpha+1/2-1/q)/\alpha}.$$
According to~\cite{YangBarron} (Proposition 1), in the density estimation framework, the minimax risk over $\overline{\mathcal {SB}}(\alpha,p,q,R,R')$ is of order $\rho^2_n$ where $\rho^2_n=H_{\rho_n}(\alpha,p,q,R)/n,$ which yields the announced rate.
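To make the last step explicit: neglecting the logarithmic factor first, the fixed-point equation $\rho^2_n=H_{\rho_n}(\alpha,p,q,R)/n$ gives $\rho_n^{2+1/\alpha}\asymp C(\alpha,p,q,d)\, R^{1/\alpha}/n,$ that is $\rho^2_n\asymp C(\alpha,p,q,d)\left(Rn^{-\alpha}\right)^{2/(1+2\alpha)};$ plugging this back into the logarithmic factor yields $\log(R/\rho_n)\asymp C(\alpha)\log(nR^2),$ whence the announced rate.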
\end{proof}
\noindent
Consequently, in the density estimation framework, the penalized pyramid selection procedure is minimax over $\mathcal {SB}(\alpha,p,q,R)$ up to a constant factor if $p\leq q\leq \infty,$ and only up to a logarithmic factor otherwise.
Let us end with some comments about these estimation rates. First, we recall that the minimax rate under the assumption $s\in B^\alpha_{p,q,(d)}$ is of order $n^{-(2\alpha/d)/(1+2\alpha/d)}.$ Thus, under a mixed smoothness assumption of order $\alpha,$ we recover, up to a logarithmic factor, the same rate as with smoothness of order $\alpha$ in dimension 1, which can only be obtained with smoothness of order $d\alpha$ under a classical smoothness assumption in dimension $d$. Besides, under the multiplicative constraint $(ii)$ of Proposition~\ref{prop:composite}, we recover the same rate as~\cite{RebellesLp}, up to a logarithmic factor. And under the generalized additive constraint $(iii)$ of Proposition~\ref{prop:composite}, we recover the same rate as~\cite{BaraudBirgeComposite} (Section 4.3), up to a logarithmic factor. Regarding Neumann's seminal work on estimation under mixed smoothness~\cite{Neumann} (see his Section 3), a first, adaptive, wavelet thresholding estimator is proved there to be optimal up to a logarithmic factor over $SW^{r}_{2,(d)}=SB^{r}_{2,2,(d)},$ and another, nonadaptive one, is proved to be optimal up to a constant over $SB^{r}_{1,\infty,(d)},$ where $r$ is a positive integer. Our procedure thus outperforms~\cite{Neumann} by being at the same time adaptive and minimax optimal up to a constant over these two classes, and many other ones.
\section{Implementing wavelet pyramid selection}\label{sec:algorithm}
We end this paper with a quick overview of practical issues related to wavelet pyramid selection. As we perform selection within a large collection of models, where typically the number of models is exponential in the sample size, we must guarantee that the estimator can still be computed in a reasonable time. Besides, we provide simulation-based examples illustrating the interest of this new method.
\subsection{Algorithm and computational complexity}\label{sec:twostepalgo} Theorem~\ref{theo:choosepen} supports the choice of an additive penalty of the form
$$\text{\upshape{pen}}(m)=\sum_{\bm\lambda\in m} \hat v^2_{\bm\lambda},$$
where detailed expressions for $\hat v^2_{\bm\lambda}$ in several statistical frameworks have been given in Section~\ref{sec:backtoex}.
As $\hat \gamma(\hat s_m^\star)=-\sum_{\bm\lambda\in m} \hat \beta^2_{\bm\lambda},$ the penalized selection procedure amounts to choosing
$$\hat m ^{\mathcal P}=\argmax{m\in\mathcal M^{\mathcal P}}{\text{crit}(m)}$$
where
$$\text{crit}(m)=\sum_{\bm\lambda \in m}(\hat \beta^2_{\bm\lambda}- \hat v_{\bm\lambda}^2).$$
Since each $\hat v^2_{\bm\lambda}$ is roughly an (over)estimate of the variance of $\hat \beta_{\bm\lambda},$ our method, though different from a thresholding procedure, will mainly retain those empirical wavelet coefficients whose squares $\hat \beta^2_{\bm\lambda}$ are significantly larger than their variance.
Remarkably, due to both the structure of the collection of models and of the penalty function, the penalized estimator can be determined without computing all the preliminary estimators $(\hat s^\star_m)_{m\in\mathcal M^{\mathcal P}},$ which makes the computation of $\tilde s^{\mathcal P}$ feasible in practice. Indeed, we can proceed as follows.
\textit{Step 1.} For each $\ell_1\in\{dj_0+1,\ldots,L_\bullet+1\}$, determine
$$\hat m_{\ell_1}=\argmax{m\in\mathcal M_{\ell_1}^{\mathcal P}}{\sum_{\bm\lambda \in m}(\hat \beta^2_{\bm\lambda}- \hat v_{\bm\lambda}^2)}.$$
For that purpose, it is enough, for each $k\in\{0,\ldots,L_\bullet-\ell_1\},$ to
\begin{itemize}
\item compute and sort in decreasing order all the coefficients $(\hat \beta^2_{\bm\lambda}- \hat v_{\bm\lambda}^2)_{\bm\lambda \in U\bm\nabla(\ell_1+k)};$
\item keep the $N(\ell_1,k)$ indices in $U\bm\nabla(\ell_1+k)$ that yield the $N(\ell_1,k)$ greatest such coefficients.
\end{itemize}
\textit{Step 2.} Determine the integer $\hat \ell \in\{dj_0+1,\ldots,L_\bullet+1\}$ such that
$$\hat m_{\hat \ell}=\argmax{dj_0+1\leq \ell_1 \leq L_\bullet+1}{\text{crit}(\hat m_{\ell_1})}.$$
\noindent
The global computational complexity of $\tilde s^{\mathcal P}$ is thus $\mathcal O (\log(L_\bullet) L_\bullet^d 2^{L_\bullet}).$ Typically, we will choose $L_\bullet$ at most of order $\log_2(\bar n)$ so the resulting computational complexity will be at most of order $\mathcal O (\log(\log(\bar n))\log^d(\bar n)\bar n).$
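\smallskip
\noindent
\textit{Illustration: } the two steps above translate almost verbatim into code. The following NumPy sketch (our own illustrative implementation) assumes the gains $\hat\beta^2_{\bm\lambda}-\hat v^2_{\bm\lambda}$ have been precomputed and stored, for each level $\ell,$ in an array \texttt{gain[ell]} indexed by $U\bm\nabla(\ell).$
\begin{verbatim}
import numpy as np

def pyramid_select(gain, levels, N):
    # gain[ell]: array of (beta_hat^2 - v_hat^2) over U nabla(ell)
    # levels = [d*j0 + 1, ..., L_bullet + 1]; N(ell1, k) as in the text
    best_crit, best_model = -np.inf, None
    for ell1 in levels:
        crit, model = 0.0, {}
        for ell in range(levels[0] - 1, ell1):      # levels kept in full
            crit += gain[ell].sum()
            model[ell] = np.arange(len(gain[ell]))
        for k, ell in enumerate(range(ell1, levels[-1])):   # sparse levels
            top = np.argsort(gain[ell])[::-1][:N(ell1, k)]  # largest gains
            crit += gain[ell][top].sum()
            model[ell] = top
        if crit > best_crit:  # Step 2: keep the best ell1
            best_crit, best_model = crit, model
    return best_model
\end{verbatim}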
\subsection{Illustrative examples}
In this section, we study two examples in dimension $d=2$ by using Haar wavelets.
First, in the density estimation framework, we consider an example where the coordinates of $Y_i=(Y_{i1},Y_{i2})$ are independent conditionally on a $K$-way categorical variable $Z,$ so that the density of $Y_i$ may be written as
$$s(x_1,x_2)=\sum_{k=1}^K \pi_k s_{1,k}(x_1)s_{2,k}(x_2),$$
where $\mathbf \pi= (\pi_1,\ldots,\pi_K)$ is the probability vector characterizing the distribution of $Z.$ For a compact interval $I,$ and $a,b>0,$ let us denote by $\beta(I;a,b)$ the Beta density with parameters $a,b$ shifted and rescaled to have support $I,$ and by $\mathcal U(I)$ the uniform density on $I.$ In our example, we take
\begin{itemize}
\item $K=4$ and $\pi=\left(3/5,1/10,1/40,11/40\right);$
\item $s_{1,1}=\beta\left([0,3/5];4,4\right)$ and $s_{2,1}=\beta\left([0,2/5];4,4\right);$
\item $s_{1,2}=\beta\left([2/5,1];100,100\right)$ and $s_{2,2}=\beta\left([2/5,1];20,20\right);$
\item $s_{1,3}=\mathcal U\left([0,1]\right)$ and $s_{2,3}=\mathcal U\left([0,1]\right);$
\item $s_{1,4}=\beta\left([3/5,1];8,4\right)$ and $s_{2,4}=\mathcal U\left([2/5,1]\right).$
\end{itemize}
The resulting mixture density $s$ of $Y_i$ is shown in Figure~\ref{fig:densexample} $(b)$. We choose $2^{L_\bullet}=\left[n/((\log n)/2)^2\right]$ and first compute the least-squares estimator $\hat s^\star_\bullet$ of $s$ on the model $V^\star_{L_\bullet/2}\otimes\ldots \otimes V^\star_{L_\bullet/2},$ which provides the estimator $\hat R=\max\{\|\hat s^\star_\bullet\|_\infty,1\}$ for $\bar R.$ We then use the penalty
$$\text{\upshape{pen}}(m)=\sum_{\bm\lambda \in m} \frac{1.5 \hat \sigma^2_{\bm \lambda}+0.5 \hat R}{n}.$$
For a sample with size $n=2000,$ Figure~\ref{fig:densexample} illustrates how the procedure first selects a rough model $\hat m_{\hat\ell}$ (Figure~\ref{fig:densexample} (c)) and then adds some details wherever needed (Figure~\ref{fig:densexample} (d)). Summing the two yields the pyramid selection estimator $\tilde s^\mathcal P$ (Figure~\ref{fig:densexample} (e)). By way of comparison, we also represent in Figure~\ref{fig:densexample} (f) a widely used estimator: the bivariate Gaussian kernel estimator, with the ``known support'' option, implemented in the MATLAB {\tt ksdensity} function. We observe that, contrary to the kernel density estimator, the pyramid selection estimator indeed recovers the three main modes, and in particular the sharp peak.
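\smallskip
\noindent
\textit{Illustration: } for reproducibility, the mixture can be sampled in a few lines of NumPy/SciPy; the sketch below is our own, with $\mathcal U(I)$ encoded as the shifted Beta density $\beta(I;1,1).$
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def shifted_beta(I, a, b, size):
    lo, hi = I  # Beta(a, b) shifted and rescaled to the interval I
    return lo + (hi - lo) * stats.beta(a, b).rvs(size=size, random_state=rng)

def sample_mixture(n):
    pi = [3/5, 1/10, 1/40, 11/40]
    c1 = [((0, 3/5), 4, 4), ((2/5, 1), 100, 100),
          ((0, 1), 1, 1), ((3/5, 1), 8, 4)]
    c2 = [((0, 2/5), 4, 4), ((2/5, 1), 20, 20),
          ((0, 1), 1, 1), ((2/5, 1), 1, 1)]
    z = rng.choice(4, size=n, p=pi)  # latent categorical variable Z
    x1 = np.concatenate([shifted_beta(*c1[k], (z == k).sum())
                         for k in range(4)])
    x2 = np.concatenate([shifted_beta(*c2[k], (z == k).sum())
                         for k in range(4)])
    return np.column_stack([x1, x2])  # rows are grouped by component
\end{verbatim}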
\begin{figure}[h]
\centering
\includegraphics[scale=0.2]{DensExample2000.png}
\caption{Pyramid selection and standard kernel for an example of mixture of multiplicative densities.}
\label{fig:densexample}
\end{figure}
In the copula density estimation framework, we consider an example where the copula of $X_i=(X_{i1},X_{i2})$ is either a Frank copula or a Clayton copula conditionally on a binary variable $Z.$ More precisely, we consider the mixture copula
$$s(x_1,x_2)=0.5 s_F(x_1,x_2) + 0.5 s_C(x_1,x_2)$$
where $s_F$ is the density of a Frank copula with parameter 4 and $s_C$ is the density of a Clayton copula with parameter 2. These two examples of Archimedean copula densities are shown in Figure~\ref{fig:FCcopula} and the resulting mixture in Figure~\ref{fig:copulaexample} (b). We use the same penalty as in the previous example, adapted, of course, to the copula density estimation framework. We illustrate in Figure~\ref{fig:copulaexample} the pyramid selection procedure on a sample with size $n=2000.$ Though not all theoretical conditions are fully satisfied here, the pyramid selection procedure still provides a reliable estimator.
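\smallskip
\noindent
\textit{Illustration: } both Archimedean densities entering the mixture have well-known closed forms (see for instance~\cite{Nelsen}); the Python sketch below, our own transcription of these standard formulas, evaluates the mixture density.
\begin{verbatim}
import numpy as np

def clayton_density(u, v, theta=2.0):
    # (1+theta) (uv)^(-theta-1) (u^-theta + v^-theta - 1)^(-2-1/theta)
    return ((1 + theta) * (u * v) ** (-theta - 1)
            * (u ** -theta + v ** -theta - 1) ** (-2 - 1 / theta))

def frank_density(u, v, theta=4.0):
    # theta (1-e^-theta) e^(-theta(u+v))
    #   / ((1-e^-theta) - (1-e^(-theta u))(1-e^(-theta v)))^2
    g = lambda t: 1 - np.exp(-theta * t)
    return theta * g(1) * np.exp(-theta * (u + v)) / (g(1) - g(u) * g(v)) ** 2

def mixture_copula_density(u, v):
    return 0.5 * frank_density(u, v, 4.0) + 0.5 * clayton_density(u, v, 2.0)
\end{verbatim}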
\begin{figure}[h]
\includegraphics[scale=0.2]{FCcopula.png}
\caption{Left: Frank copula density with parameter 4; Right: Clayton copula density with parameter 2.}
\label{fig:FCcopula}
\end{figure}
\begin{figure}[h]
\includegraphics[scale=0.2]{CopulaExample.png}
\caption{Pyramid selection for an example of mixture copula density.}
\label{fig:copulaexample}
\end{figure}
In conclusion, these examples suggest that the Haar pyramid selection already provides a useful new estimation procedure. This is most encouraging for pyramid selection based on higher order wavelets, whose full calibration, based on an extensive simulation study in each framework, will be the subject of another work.
\section{Proofs}\label{sec:proofs}
We shall repeatedly use the classical inequality
\begin{equation}\label{eq:basic}
2ab \leq \theta a^2 + \frac1\theta b^2
\end{equation}
for all positive $\theta,a,b.$
\subsection{Proof of Proposition~\ref{prop:copulaone}}\label{sec:proofscopula}
We only have to prove Proposition~\ref{prop:copulaone} for $m_\bullet.$ Indeed, as any pyramidal model $m$ is a subset of $m_\bullet,$ a common upper-bound for the residual terms is
$$ \|\check s^\star_m-\hat s^\star_m\|^2\leq \|\check s^\star_{m_\bullet}-\hat s^\star_{m_\bullet}\|^2.$$
Under Assumption (L), and thanks to assumptions $S.iii)$ and $W.v),$ we have that for all $\bm\lambda\in m_\bullet,$ $\bm x=(x_1,\ldots,x_d)$ and $\bm y=(y_1,\ldots,y_d) \in Q,$
$$|\Psi_{\bm\lambda}(\bm x)-\Psi_{\bm\lambda}(\bm y)| \leq \kappa^d 2^{3L_\bullet/2}\sum_{k=1}^d |x_k-y_k|.$$
According to Massart's version of the Dvoretzky-Kiefer-Wolfowitz inequality (see~\cite{MassartDKW}), for any positive $z,$ and $1\leq k\leq d,$ there exists some event $\Omega_k(z)$ on which $\|\hat F_{nk}-F_k\|_\infty \leq z/\sqrt{n}$ and such that $\P(\Omega^c_k(z))\leq 2\exp(-2z^2).$ Setting $\Omega(z)=\cap_{k=1}^d \Omega_k(z),$ we thus have for all $\bm\lambda\in m_\bullet$
$$|\check \beta_{\bm\lambda}- \widehat\beta_{\bm\lambda}| \leq
\kappa^d 2^{3L_\bullet/2} d(z/\sqrt{n}) \BBone_{\Omega(z)} + \kappa^d 2^{3L_\bullet/2} d \BBone_{\Omega^c(z)},$$
hence
$$ \mathbb{E}[\|\check s^\star_{m_\bullet}-\hat s^\star_{m_\bullet}\|^2] \leq \kappa^{2d} 2^{3L_\bullet} d^2(z^2/n + 2d\exp(-2z^2)) D_{m_\bullet}.$$
Finally, $D_{m_\bullet}$ is of order $L^{d-1}_\bullet 2^{L_\bullet}$ (see Proposition~\ref{prop:combinatorial}), so by choosing $2z^2=\log(n),$
$$ \mathbb{E}[\|\check s^\star_{m_\bullet}-\hat s^\star_{m_\bullet}\|^2] \leq C(\kappa,d)L^{d-1}_\bullet 2^{4L_\bullet} \log(n)/n.$$
\subsection{Proof of Proposition~\ref{prop:disclevyone}}\label{sec:proofslevydisc}
For every bounded measurable function $g,$ let us set $D_\Delta(g)=\mathbb{E}[g(X_\Delta)]/\Delta-\int_Q g s.$ For all $\bm\lambda\in \Lambda,$
\begin{equation}\label{eq:disclevycoeff}
\mathbb{E}\left[\left(\check \beta_{\bm\lambda}-\widehat\beta_{\bm\lambda}\right)^2\right]
\leq 4\left( \text{\upshape{Var}}(\check \beta_{\bm\lambda})+ \text{\upshape{Var}}(\widehat\beta_{\bm\lambda})+ D^2_\Delta(\Psi_{\bm\lambda})\right)
\leq 8\frac{\|s\|_\infty}{n\Delta}+ 4\frac{D_\Delta(\Psi^2_{\bm\lambda})}{n\Delta}+ 4D^2_\Delta(\Psi_{\bm\lambda}).
\end{equation}
We shall bound $D_\Delta(\Psi_{\bm\lambda})$ by using the decomposition of a L\'evy process into a big jump compound Poisson process and an independent small jump L\'evy process. Let us fix $\varepsilon>0$ small enough so that $Q=\prod_{k=1}^d [a_k,b_k]\subset \{\|x\|>\varepsilon\}$ and denote by $(\Sigma,\bm{\mu},\nu)$ the characteristic L\'evy triplet of $\mathbf X=(X_t)_{t\geq 0},$ where $\bm{\mu}$ stands for the drift and $\nu$ is the L\'evy measure, with density $f$ with respect to the Lebesgue measure on $\mathbb{R}^d$ (see Section~\ref{sec:examples}). Then $\mathbf X$ is distributed as $\mathbf X^\varepsilon +\tilde {\mathbf X}^\varepsilon,$ where
$\mathbf X^\varepsilon$ and $\tilde{\mathbf X}^\varepsilon$ are independent L\'evy processes with the following characteristics. First, $\mathbf X^\varepsilon$ is a L\'evy process with characteristic L\'evy triplet $(\Sigma,\bm{\mu}_\varepsilon,\nu_\varepsilon),$ where the drift is
$$\bm{\mu}_\varepsilon= \bm{\mu}-\int_{\varepsilon<\|x\|\leq 1} x\, f(x)\d x$$
and the L\'evy measure is
$$\nu_\varepsilon (\d x)=\BBone_{\|x\|\leq \varepsilon} f(x) \d x.$$
The process $\tilde {\mathbf X}^\varepsilon$ is the compound Poisson process
$$\tilde {X}_t^\varepsilon=\sum_{i=1}^{\tilde N_t} \xi_i,$$
where $\tilde N$ is a homogeneous Poisson process with intensity $\lambda_\varepsilon=\nu(\{\|x\|> \varepsilon\}),$ $(\xi_i)_{i\geq 1}$ are i.i.d. with density $\lambda_\varepsilon^{-1} \BBone_{\|x\|> \varepsilon}f(x),$ and $\tilde N$ and $(\xi_i)_{i\geq 1}$ are independent.
Conditioning by $\tilde N$ and using the aforementioned independence properties yields
$$\frac{\mathbb{E}[\Psi_{\bm\lambda}(X_\Delta)]}{\Delta}=e^{-\lambda_{\varepsilon} \Delta} \frac{\mathbb{E}[\Psi_{\bm\lambda}(X^\varepsilon_\Delta)]}{\Delta} + \lambda_{\varepsilon}e^{-\lambda_{\varepsilon} \Delta} \mathbb{E}[\Psi_{\bm\lambda}(X^\varepsilon_\Delta+\xi_1)]+ \lambda^2_{\varepsilon} \Delta e^{-\lambda_{\varepsilon} \Delta} \sum_{j=0}^\infty\mathbb{E}\left[\Psi_{\bm\lambda}\left(X^\varepsilon_\Delta+\sum_{i=1}^{j+2}\xi_i\right)\right]\frac{(\lambda_{\varepsilon} \Delta)^j}{(j+2)!}.$$
Conditioning by $\xi_1$ and using independence between $X^\varepsilon_\Delta$ and $\xi_1$ then yields
$$\lambda_{\varepsilon} \mathbb{E}[\Psi_{\bm\lambda}(X^\varepsilon_\Delta+\xi_1)]=\int_{\|x\|>\varepsilon} \mathbb{E}[\Psi_{\bm\lambda}(X^\varepsilon_\Delta+x)]f(x)\d x.$$
Writing $\langle \Psi_{\bm\lambda},s\rangle= e^{-\lambda_{\varepsilon} \Delta} \langle \Psi_{\bm\lambda},s\rangle+ (1-e^{-\lambda_{\varepsilon} \Delta})\langle \Psi_{\bm\lambda},s\rangle$ and using $(1-e^{-\lambda_{\varepsilon} \Delta})\leq \lambda_{\varepsilon} \Delta$ leads to
$$|D_\Delta(\Psi_{\bm\lambda})| \leq R_\Delta^{(1)}(\Psi_{\bm\lambda}) +R_\Delta^{(2)}(\Psi_{\bm\lambda}) + R_\Delta^{(3)}(\Psi_{\bm\lambda}) +R_\Delta^{(4)}(\Psi_{\bm\lambda}),$$
where
$$ R_\Delta^{(1)}(\Psi_{\bm\lambda}) = e^{-\lambda_{\varepsilon} \Delta}\frac{\mathbb{E}[\Psi_{\bm\lambda}(X^\varepsilon_\Delta)]}{\Delta}, \quad
R_\Delta^{(2)}(\Psi_{\bm\lambda}) = e^{-\lambda_{\varepsilon} \Delta}\int_{\|x\|>\varepsilon} \left|\mathbb{E}\left[\Psi_{\bm\lambda}(X^\varepsilon_\Delta+x) -\Psi_{\bm\lambda}(x)\right]\right|f(x)\d x,$$
\begin{equation}\label{eq:R3R4}
R_\Delta^{(3)}(\Psi_{\bm\lambda})=\lambda_{\varepsilon} \Delta \|\Psi_{\bm\lambda}\|_1 \|s\|_\infty,\quad
R_\Delta^{(4)}(\Psi_{\bm\lambda})= \lambda_{\varepsilon}^2 \Delta \| \Psi_{\bm\lambda}\|_\infty.
\end{equation}
As $\Psi_{\bm\lambda}$ has compact support $Q,$
$$ R_\Delta^{(1)}(\Psi_{\bm\lambda}) \leq e^{-\lambda_{\varepsilon} \Delta} \| \Psi_{\bm\lambda}\|_\infty \frac{\P(X^\varepsilon_\Delta \in Q)}{\Delta}.$$
Let us denote by $X^\varepsilon_{\Delta,k}$ the $k$-th coordinate of $X^\varepsilon_\Delta$ and by $d_Q$ the maximum over $k=1,\ldots,d$ of the distance from the interval $[a_k,b_k]$ to $0,$ attained for instance at $k=k_0.$
We deduce from the proof of Lemma 2 in~\cite{RW} (see also~\cite{FigueroaHoudre}, equation (3.3)) that there exists $z_0=z_0(\varepsilon)$ such that if $\Delta <d_Q/z_0(\varepsilon),$
$$\P(X^\varepsilon_\Delta \in Q)\leq \P(|X^\varepsilon_{\Delta,k_0}| \geq d_Q)\leq \exp\left((z_0\log(z_0)+u-u\log(u))/(2\varepsilon)\right) \Delta^{d_Q/(2\varepsilon)}$$
so that
\begin{equation}\label{eq:R1}
R_\Delta^{(1)}(\Psi_{\bm\lambda}) \leq C(d_Q,\varepsilon) e^{-\lambda_{\varepsilon} \Delta} \| \Psi_{\bm\lambda}\|_\infty \Delta^{d_Q/(2\varepsilon)-1}.
\end{equation}
Under Assumption $(L),$ $\Psi_{\bm\lambda}$ is Lipschitz on $Q,$ so
$$\left|\Psi_{\bm\lambda}(X^\varepsilon_\Delta+x) -\Psi_{\bm\lambda}(x)\right|\leq \|\Psi_{\bm\lambda}\|_L \|X_\Delta^\varepsilon\|_1\BBone_{\{X_\Delta^\varepsilon +x \in Q\} \cap \{x \in Q\}} +|\Psi_{\bm\lambda}(x)| \BBone_{\{X_\Delta^\varepsilon +x \notin Q\} \cap\{x \in Q\}} + \|\Psi_{\bm\lambda}\|_\infty \BBone_{\{X_\Delta^\varepsilon +x \in Q\} \cap\{x \notin Q\}}.
$$
Besides, as $Q$ is compact and bounded away from the origin, there exist $\delta_Q>0$ and $\rho_Q>0$ such that
$$\{X_\Delta^\varepsilon +x \in Q\} \cap \{x \in Q\} \subset \{\|X_{\Delta}^\varepsilon\| \geq \delta_Q\}$$
and
$$(\{X_\Delta^\varepsilon +x \in Q\} \cap \{x \notin Q\})\cup (\{X_\Delta^\varepsilon +x \notin Q\} \cap\{x \in Q\}) \subset \{\|X_{\Delta}^\varepsilon\| \geq \rho_Q\}.$$
The L\'evy measure of $\mathbf X^\varepsilon$ is compactly supported and satisfies
$$\int \|x\| ^2 \nu_\varepsilon(\d x) = \int_{\|x\| \leq \varepsilon} \|x\| ^2 \nu(\d x) $$
which is finite since $\nu$ is a L\'evy measure (see for instance~\cite{Sato}, Theorem 8.1). So we deduce from~\cite{Millar}, Theorem 2.1, that
\begin{equation*}
\mathbb{E}\left[\|X_\Delta^\varepsilon\|^2\right]\leq C(d,f) \Delta,
\end{equation*}
hence
$$\mathbb{E}\left[ \|X_\Delta^\varepsilon\|_1\BBone_{\|X_{\Delta}^\varepsilon\| \geq \delta_Q} \right] \leq
C(d)\delta_Q^{-1} \mathbb{E}\left[ \|X_\Delta^\varepsilon\|^2\right] \leq C(d,f)\delta_Q^{-1} \Delta
$$
and from Markov inequality
$$ \P(\|X^\varepsilon_\Delta\| \geq \rho_Q) \leq C(d,f) \rho_Q^{-2}\Delta.$$
Finally, fixing $0<\varepsilon<\min(d_Q/4, \inf_{x\in Q} \|x\|),$ we have for all $0<\Delta < \min(d_Q/z_0(\varepsilon), 1)$
\begin{equation}\label{eq:R2}
R_\Delta^{(2)}(\Psi_{\bm\lambda}) \leq C(d,f) e^{-\lambda_{\varepsilon}\Delta} \Delta (\delta_Q^{-1} \lambda_\varepsilon \|\Psi_{\bm\lambda}\|_L + \rho_Q^{-2}\|s\|_\infty \|\Psi_{\bm\lambda}\|_1+ \rho_Q^{-2} \lambda_\varepsilon \|\Psi_{\bm\lambda}\|_\infty ).
\end{equation}
For all $\bm\lambda\in m_\bullet,$
$$\max(\|\Psi_{\bm\lambda}\|_L, \|\Psi_{\bm\lambda}\|_\infty, \|\Psi_{\bm\lambda}\|_1) \leq C(\kappa) 2^{3L_\bullet/2}\quad\text{and}\quad\max(\|\Psi^2_{\bm\lambda}\|_L, \|\Psi^2_{\bm\lambda}\|_\infty, \|\Psi^2_{\bm\lambda}\|_1) \leq C(\kappa) 2^{2L_\bullet},$$
so that combining~\eqref{eq:disclevycoeff},~\eqref{eq:R1},~\eqref{eq:R2} and~\eqref{eq:R3R4} yields
$$\mathbb{E}[\|\check s^\star_m-\hat s^\star_m\|^2]
\leq 8\frac{\|s\|_\infty D_m}{n\Delta}+ C(\kappa,d,f,Q,\varepsilon) L_\bullet^{d-1} \frac{2^{4L_\bullet}n\Delta^3+ 2^{3L_\bullet}\Delta }{n\Delta}.$$
\subsection{Proof of Proposition~\ref{prop:combinatorial}}\label{sec:proofcombinatorial}
Due to hypotheses $S.ii)$ and $W.ii),$ we have for all $j\geq j_0,$
$$2^{j-1}\leq \sharp \nabla_j \leq M 2^{j-1},$$
hence, for all $\bm j\in \mathbb{N}_{j_0}^d,$
\begin{equation*}
(1/2)^d 2^{|\bm j|}\leq \sharp \bm\nabla_{\bm j} \leq (M/2)^d 2^{|\bm j|}.
\end{equation*}
Let us fix $\ell\in\{dj_0,\ldots, L_\bullet\}.$ The number of $d$-tuples $\bm j\in\mathbb{N}_{j_0}^d$ such that $|\bm j|=\ell$ is equal to the number of partitions of the integer $\ell-dj_0$ into $d$ nonnegative integers, hence
\begin{equation*}
\sharp \bm J_\ell=\binom{\ell-dj_0+d-1}{d-1}=\prod_{k=1}^{d-1} \left(1+\frac{\ell-dj_0}{k}\right).
\end{equation*}
The last two displays and the classical upper-bound for binomial coefficient (see for instance~\cite{Massart}, Proposition 2.5) yield
\begin{equation}\label{eq:cardbignabla}
c_0(d) (\ell-dj_0+d-1)^{d-1}2^\ell \leq \sharp U\bm\nabla(\ell) \leq c_1(M,d) (\ell-dj_0+d-1)^{d-1}2^\ell,
\end{equation}
where $c_0(d)=2^{-d}(d-1)^{-(d-1)}$ and $c_1(M,d)=(M/2)^d(e/(d-1))^{d-1}.$
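\smallskip
\noindent
\textit{Illustration: } the exact count of $\sharp\bm J_\ell$ used above is easy to verify numerically; a short Python check of our own:
\begin{verbatim}
from itertools import product
from math import comb

d, j0 = 3, 1
for ell in range(d * j0, d * j0 + 8):
    count = sum(1 for j in product(range(j0, ell + 1), repeat=d)
                if sum(j) == ell)
    assert count == comb(ell - d * j0 + d - 1, d - 1)
\end{verbatim}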
Let us now fix $\ell_1\in\{dj_0+1,\ldots, L_\bullet +1\}.$ Any model $m\in\mathcal M^\mathcal P_{\ell_1}$ satisfies
$$D_m
= \sum_{\ell=dj_0}^{\ell_1-1} \sharp U\bm\nabla(\ell) + \sum_{k=0}^{L_\bullet-\ell_1}N(\ell_1,k).$$
So we obviously have
$$D_m \geq \sharp U\bm\nabla(\ell_1-1) \geq \kappa_1(d)(\ell_1-dj_0+d-2)^{d-1}2^{\ell_1},$$
with $\kappa_1(d)=c_0(d)/2=2^{-(d+1)}(d-1)^{-(d-1)}.$
Besides, with our choice of $N(\ell_1,k),$
\begin{equation*}
D_m \leq c_1(M,d) (\ell_1-dj_0+d-2)^{d-1} \sum_{\ell=dj_0}^{\ell_1-1}2^\ell + 2M^{-d}c_1(M,d) s_1(d) (\ell_1-dj_0+d-2)^{d-1}2^{\ell_1},
\end{equation*}
so that Proposition~\ref{prop:combinatorial} holds with $\kappa_2(d,j_0,B) = c_1(M,d) (1+2M^{-d}c_1(M,d) s_1(d)),$ where
$$s_1(d)=\sum_{k=0}^\infty \frac{(1+k/(d-1))^{d-1}} {(2+k)^{d+2}}.$$
The number of subsets of $\Lambda$ in $\mathcal M^\mathcal P_{\ell_1}$ satisfies
\begin{equation*}
\sharp\mathcal M^\mathcal P_{\ell_1}
= \prod_{k=0}^{L_\bullet-\ell_1} \binom{\sharp U\bm{\nabla}(\ell_1+k)}{N(\ell_1,k)}
\leq \prod_{k=0}^{L_\bullet-\ell_1} \left(\frac{e\: \sharp U\bm{\nabla}(\ell_1+k)}{N(\ell_1,k)}\right)^{N(\ell_1,k)}.
\end{equation*}
For $k\in\{0,\ldots,L_\bullet-\ell_1\},$ let $f(k)=(k+2)^{d+2}2^k M^d/2,$ then $N(\ell_1,k)\leq \sharp U\bm{\nabla}(\ell_1+k)/f(k).$ As the function $x\in [0,U]\mapsto x \log(e U/x)$ is increasing, we deduce
\begin{equation*}
\log (\sharp \mathcal M^\mathcal P_{\ell_1})
\leq D(\ell_1) \sum_{k=0}^{L_\bullet-\ell_1} \frac{\sharp U\bm\nabla(\ell_1+k)}{\sharp U\bm\nabla(\ell_1-1)} \frac{1+\log(f(k))}{f(k)}.
\end{equation*}
Setting
$$s_2=\sum_{k=0}^\infty\frac{1}{(k+2)^3},\quad s_3=\sum_{k=0}^\infty\frac{\log(k+2)}{(k+2)^3},\quad s_4 = \sum_{k=0}^\infty\frac{1}{(k+2)^2},$$ one may take for instance
$\kappa_3(j_0,B,d)=(\log(e/2)+d\log(M))s_2 + (d+2) s_3+\log(2) s_4$ in Proposition~\ref{prop:combinatorial}.
\subsection{Proof of Theorem~\ref{theo:choosepen}}\label{sec:proofpen}
\subsubsection{Notation and preliminary results}
Hyperbolic wavelet bases inherit from the underlying univariate wavelet bases a localization property which can be stated as follows.
\begin{lemm}\label{lemm:localization} Let $\underline D(L_\bullet)=\left(e(L_\bullet-dj_0+d-1)/(d-1)\right)^{d-1} 2^{L_\bullet/2},$ then for all real-valued sequence $(a_{\bm\lambda})_{\bm\lambda\in m_\bullet},$
$$\max\left\{\left\|\sum_{\bm\lambda\in m_\bullet} a_{\bm\lambda} \Psi_{\bm\lambda}\right\|_\infty,\left\|\sum_{\bm\lambda\in m_\bullet} a_{\bm\lambda} \Psi^\star_{\bm\lambda}\right\|_\infty\right\} \leq \kappa'_7 \max_{{\bm\lambda}\in m_\bullet} |a_{\bm\lambda}| \underline D(L_\bullet),$$
where $\kappa'_7=\kappa^{2d} (2+\sqrt{2})$ for instance.
\end{lemm}
\begin{proof} For all $\mathbf x=(x_1,\ldots,x_d)\in[0,1]^d,$ using assumptions $S.vi), S.vii), S.viii), W.iv), W.v), W.vi)$ in Section~\ref{sec:uniwave_assumptions}, we get
\begin{align*}
\left|\sum_{\bm\lambda\in m_\bullet} a_{\bm\lambda} \Psi_{\bm\lambda}\right|
&\leq \max_{\bm\lambda\in m_\bullet} \left| a_{\bm\lambda} \right| \sum_{\ell=dj_0}^{L_\bullet} \sum_{\bm j \in \bm J_\ell} \prod_{k=1}^d \left(\sum_{\lambda_k\in\nabla_{j_k}}|\psi_{\lambda_k}(x_k)|\right)\\
&\leq \kappa^{2d} \max_{\bm\lambda\in m_\bullet} \left| a_{\bm\lambda} \right| \sum_{\ell=dj_0}^{L_\bullet} \sum_{\bm j \in \bm J_\ell} 2^{\ell/2}.
\end{align*}
We deduce from the proof of Proposition~\ref{prop:combinatorial} the upper-bound
$\sharp\bm J_\ell\leq \left(e(L_\bullet-dj_0+d-1)/(d-1)\right)^{d-1}$ which allows us to conclude.
\end{proof}
For all $t\in\L_2([0,1]^d),$ we define
$$\nu(t)=\sum_{\bm\lambda\in\Lambda}\langle t,\Psi_{\bm\lambda} \rangle (\check \beta_{\bm\lambda}-\langle s,\Psi_{\bm\lambda} \rangle),\quad
\nu_R(t)=\sum_{\bm\lambda\in\Lambda}\langle t,\Psi_{\bm\lambda} \rangle (\hat \beta_{\bm\lambda}-\check \beta_{\bm\lambda}),\quad
\hat\nu(t)=\nu(t)+\nu_R(t),$$
and for all $m\in\mathcal M^{\mathcal P},$ we set
$$\chi(m)=\sup_{t\in S^\star_m|\|t\|_{\bm\Psi}=1} \nu(t), \quad \chi_R(m)=\sup_{t\in S^\star_m|\|t\|_{\bm\Psi}=1} \nu_R(t).$$
\begin{lemm}\label{lemm:chi2exp}
For all $m\in\mathcal M^{\mathcal P},$ let $t^\star_m= \sum_{\bm\lambda\in m}(\nu(\Psi^\star_{\bm\lambda})/\chi(m)) \Psi^\star_{\bm\lambda},$ then
$$\chi(m)=\sqrt{\sum_{\bm\lambda\in m}\nu^2(\Psi^\star_{\bm\lambda})}=\|s^\star_m-\check s^\star_m\|_{\bm\Psi}=\nu(t^\star_m),$$
$$\chi_R(m)=\sqrt{\sum_{\bm\lambda\in m}\nu_R^2(\Psi^\star_{\bm\lambda})}=\|\check s^\star_m-\hat s^\star_m\|_{\bm\Psi}.$$
\end{lemm}
\begin{proof} The proof follows from the linearity of $\nu$ and $\nu_R$ and Cauchy-Schwarz inequality.
\end{proof}
\begin{lemm}\label{lemm:chitrunc}Let $\epsilon=\kappa'_2\|s\|_\infty/(\kappa'_3\kappa'_7\underline D(L_\bullet))$ and
$$\Omega_T=\cap_{\bm \lambda\in m_\bullet}\left\{|\nu(\Psi^\star_{\bm\lambda})| \leq \epsilon \right\}.$$
For all $x>0,$ there exists a measurable event $\Omega_m(x)$ on which
$$\chi^2(m) \BBone_{\Omega_T\cap \Omega_\sigma} \leq 2 \kappa'^2_1\kappa'_5 \sum_{\bm\lambda \in m} \frac{\max\{\hat\sigma^2_{\bm\lambda},1\}}{\bar n} + 8 \kappa'_2 \|s\|_\infty\frac{x}{\bar n}$$
and such that $\P(\Omega^c_m(x))\leq \exp(-x).$
\end{lemm}
\begin{proof} We observe that $\chi(m)=\mathcal Z (\mathcal T_m)$ where $\mathcal T_m=\{t\in S^\star_m | \|t\|_{\bm\Psi} =1\}.$ Let us set $z=\sqrt{\kappa'_2\|s\|_\infty x/\bar n}$ and consider a countable and dense subset $\mathcal T'_m$ of $\{t\in S^\star_m | \|t\|_{\bm\Psi} =1, \max_{\bm\lambda\in m} |\langle t,\Psi_{\bm\lambda}\rangle|\leq \epsilon/z\}.$ Thanks to the localization property in Lemma~\ref{lemm:localization},
$$\sup_{t\in\mathcal T'_m}\left\|\sum_{\bm\lambda \in m_\bullet} \langle t, \Psi_{\bm\lambda} \rangle \Psi_{\bm \lambda}\right \|_\infty \leq \frac{\sqrt{\kappa'_2 \|s\|_\infty}}{\kappa'_3\sqrt{x/\bar n}}.$$
So Assumption \textbf{(Conc)} ensures that there exists $\Omega_m(x)$ such that $\P(\Omega^c_m(x))\leq \exp(-x)$ and on which
$$\mathcal Z(\mathcal T'_m) \leq \kappa'_1 \mathbb{E}\left[\mathcal Z(\mathcal T'_m)\right] + 2\sqrt{\kappa'_2 \|s\|_\infty \frac{x}{\bar n}},$$
hence
$$\mathcal Z^2(\mathcal T'_m) \leq 2\kappa'^2_1 \mathbb{E}^2\left[\mathcal Z(\mathcal T'_m)\right] + 8\kappa'_2 \|s\|_\infty \frac{x}{\bar n}.$$
As $\mathcal Z(\mathcal T'_m)\leq \chi(m),$ we obtain by convexity and Lemma~\ref{lemm:chi2exp}
$$\mathbb{E}^2\left[\mathcal Z(\mathcal T'_m)\right]\leq \mathbb{E}\left[\chi^2(m)\right] =\sum_{\bm\lambda \in m} \text{\upshape{Var}}(\check \beta_{\bm\lambda}).$$
On $\Omega_T\cap \{\chi(m) \geq z\},$ $t^\star_m$ given by Lemma~\ref{lemm:chi2exp} satisfies $\sup_{\bm\lambda\in m}|\langle t^\star_m, \Psi_{\bm\lambda}\rangle| \leq \epsilon/z,$ so that $\chi^2(m)=\mathcal Z^2(\mathcal T'_m),$ while on $\Omega_T\cap \{\chi(m) < z\},$ $\chi^2(m) < \kappa'_2\|s\|_\infty x/\bar n.$ The proof then follows from Assumption \textbf{(Var)}.
\end{proof}
\subsubsection{Proof of Theorem~\ref{theo:choosepen}}
Let us fix $m\in\mathcal M^{\mathcal P}.$ From the definition of $\hat m^{\mathcal P}$ and of $\hat s^\star_m,$ we get
$$\hat\gamma(\tilde s^{\mathcal P})+\text{\upshape{pen}}(\hat m^{\mathcal P})\leq \hat\gamma(s^\star_m)+\text{\upshape{pen}}(m).$$
For all $t,u\in\L_2([0,1]^d),$
$$\hat\gamma(t)-\hat\gamma(u)=\|t-s\|^2_{\bm \Psi}-\|u-s\|^2_{\bm \Psi}-2\hat \nu(t-u),$$
so
$$\|s-\tilde s^{\mathcal P}\|^2_{\bm \Psi}\leq \|s-s^\star_m\|^2_{\bm \Psi}+2\hat \nu(\tilde s^{\mathcal P}-s^\star_m)+ \text{\upshape{pen}}(m)-\text{\upshape{pen}}(\hat m^{\mathcal P}).$$
Using the triangle inequality and Inequality~\eqref{eq:basic} with $\theta=1/4$ and $\theta=1,$ we get
\begin{align*}
2\hat \nu(\tilde s^{\mathcal P}-s^\star_m)
&\leq 2\|\tilde s^{\mathcal P}-s^\star_m\|_{\bm \Psi}(\chi(m\cup\hat m^{\mathcal P})+\chi_R(m\cup\hat m^{\mathcal P}))\\
&\leq \frac{1}{2} \|s-\tilde s^{\mathcal P}\|^2_{\bm \Psi} + \frac{1}{2} \|s-s^\star_m\|^2_{\bm \Psi} + 8 \chi^2(m\cup\hat m^{\mathcal P}) + 8\chi^2_R(m\cup\hat m^{\mathcal P}),
\end{align*}
hence
\begin{equation}\label{eq:step1}
\|s-\tilde s^{\mathcal P}\|^2_{\bm \Psi}\leq 3\|s-s^\star_m\|^2_{\bm \Psi}+16\chi^2(m\cup\hat m^{\mathcal P})+2( \text{\upshape{pen}}(m)-\text{\upshape{pen}}(\hat m^{\mathcal P})) + 16\chi^2_R(m\cup\hat m^{\mathcal P}).
\end{equation}
Let us fix $\zeta>0$ and set $\omega=\kappa_3(j_0,B,d)+\log(2)$ and $\Omega_\star(\zeta)=\cap_{m'\in\mathcal M^{\mathcal P}} \Omega_{m\cup m'}(\zeta + \omega D_m).$ We deduce from Lemma~\ref{lemm:chitrunc} that on $\Omega_\star(\zeta)$
\begin{equation}\label{eq:step2}
\chi^2(m\cup \hat m^{\mathcal P}) \BBone_{\Omega_T\cap \Omega_\sigma} \leq 2 \kappa'^2_1\kappa'_5 \sum_{\bm\lambda \in m\cup \hat m^{\mathcal P}}\frac{\max\{\hat\sigma^2_{\bm\lambda},1\}}{\bar n}+ 8 \kappa'_2 \|s\|_\infty\frac{\omega (D_m+D_{\hat m^{\mathcal P}})}{\bar n}+ 8 \kappa'_2 \|s\|_\infty\frac{\zeta}{\bar n}.
\end{equation}
Besides, given Proposition~\ref{prop:combinatorial}, our choice of $\omega$ leads to
$$ \P(\Omega^c_\star(\zeta)) \leq e^{-\zeta} \sum_{\ell=dj_0+1}^{L_\bullet+1} \exp\left(-D(\ell)\left(\omega-\frac{\log(\sharp \mathcal M_\ell^{\mathcal P})}{D(\ell)}\right) \right)\leq e^{-\zeta}.$$
Choosing for instance
$$\text{\upshape{pen}}(m)=c_1\sum_{\bm\lambda \in m} \frac{\hat\sigma^2_{\bm\lambda}}{\bar n} + c_2\frac{\bar R D_m}{\bar n},$$
with $c_1\geq 16 \kappa'^2_1\kappa'_5$ and $c_2\geq 64 \kappa'_2\omega+8\kappa'_6$ and integrating with respect to $\zeta>0,$ we deduce from~\eqref{eq:step1},~\eqref{eq:step2}, Assumption \textbf{(Var)} and Assumption \textbf{(Conc)} that
\begin{equation}\label{eq:step3}
\mathbb{E}\left[\|s-\tilde s^{\mathcal P}\|^2_{\bm \Psi} \BBone_{\Omega_T\cap \Omega_\sigma} \right] \leq 3 \|s-s^\star_m\|^2_{\bm \Psi}+ C \frac{\bar R D_m}{\bar n}
+ 64\kappa'_2\frac{ \|s\|_\infty}{\bar n}+8\frac{w(\bar n)}{\bar n},
\end{equation}
where $C$ may depend on $\kappa'_1,\kappa'_2,\kappa'_4,\kappa'_5,\kappa'_6,c_1,c_2.$
In order to bound $\mathbb{E}\left[\|s-\tilde s^{\mathcal P}\|^2_{\bm \Psi} \BBone_{\Omega^c_T\cup \Omega^c_\sigma} \right],$ we first notice that from the triangle inequality and Lemma~\ref{lemm:chi2exp}
\begin{align*}
\|s-\tilde s^{\mathcal P}\|_{\bm \Psi}
&\leq \|s- s^\star_{\hat m^\mathcal P}\|_{\bm \Psi}+\|s^\star_{\hat m^\mathcal P}-\hat s^\star_{\hat m^\mathcal P}\|_{\bm \Psi}\\
&\leq \|s\|_{\bm \Psi}+\chi(\hat m^{\mathcal P}) +\chi_R(\hat m^{\mathcal P}),
\end{align*}
hence
$$\|s-\tilde s^{\mathcal P}\|^2_{\bm \Psi} \leq 2\|s\|^2_{\bm \Psi}+4\chi^2(m_\bullet) +4\chi_R^2(m_\bullet).$$
Then setting $p_T=\P(\Omega_T^c)$ and $p_\sigma=\P(\Omega_\sigma^c),$ Cauchy-Schwarz inequality entails
\begin{equation*}
\mathbb{E}\left[\|s-\tilde s^{\mathcal P}\|^2_{\bm \Psi} \BBone_{\Omega^c_T\cup \Omega^c_\sigma}\right] \leq 2(p_T+p_\sigma)\|s\|^2_{\bm \Psi}+4\sqrt{p_T+p_\sigma} \left(\sqrt{\mathbb{E}\left[\chi^4(m_\bullet)\right]} +\sqrt{\mathbb{E}\left[\chi_R^4(m_\bullet)\right]}\right).
\end{equation*}
For $\bm\lambda\in m_\bullet,$ we have $\|\Psi^\star_{\bm\lambda}\|_\infty \leq \kappa^d 2^{L_\bullet/2},$ so applying Assumption \textbf{(Conc)} with $\mathcal T= \{\Psi^\star_{\bm\lambda}\}$ and $\mathcal T= \{-\Psi^\star_{\bm\lambda}\},$ we get
$$\P\left(|\nu(\Psi^\star_{\bm\lambda})|\geq \epsilon \right)
\leq 2 \exp\left(-\min\left\{\frac{\bar n \epsilon^2}{4\kappa'_2\|s\|_\infty}, \frac{\bar n \epsilon}{2\kappa'_3 \kappa^d 2^{L_\bullet/2}}\right\}\right).$$
Then setting $\iota= \left(e(L_\bullet-dj_0+d-1)/(d-1)\right)^{d-1},$ Proposition~\ref{prop:combinatorial} yields
$$p_T\leq 2 \iota 2^{L_\bullet} \exp\left(-C\|s\|_\infty \frac{\bar n}{\iota^2 2^{L_\bullet}}\right) \leq \frac{C}{\bar n^2 (\log(\bar n)/d)^{d+1}},$$
where $C$ may depend on $\kappa'_2,\kappa'_3,\kappa'_7,j_0,d.$
Besides, we deduce from Assumption \textbf{(Conc)} and Lemma~\ref{lemm:localization} that, for all $x>0,$
$$\P\left(\chi(m_\bullet) \geq \kappa'_1 \sqrt{\frac{\|s\|_\infty D_{m_\bullet}}{\bar n}} + \sqrt{\kappa'_2 \|s\|_\infty \frac{x}{\bar n}} + \kappa'_3 \kappa'_7\underline D(L_\bullet) \frac{x}{\bar n}\right) \leq \exp(-x).$$
For a nonnegative random variable $U,$ Fubini's theorem implies
$$\mathbb{E}[U^4]=\int_0^\infty 4 x^{3} \P(U\geq x)\d x$$
so
$$\mathbb{E}[\chi^4(m_\bullet)] \leq C \max\left\{\frac{\iota^4 2^{2L_\bullet}}{\bar n^4},\frac{\iota^2 2^{2L_\bullet}}{\bar n^2}\right\}\leq \frac{C}{\left(\log (\bar n)/d\right)^{2(d+1)}}$$
where $C$ may depend on $\kappa'_1,\kappa'_2,\kappa'_3,\kappa'_7,j_0,d.$
Remembering~\eqref{eq:step3}, we conclude that
$$\mathbb{E}\left[\|s-\tilde s^{\mathcal P}\|^2_{\bm \Psi} \right] \leq 3 \|s-s^\star_m\|^2_{\bm \Psi}+ C_1 \frac{\bar R D_m}{\bar n} +C_2 \frac{ \|s\|_\infty}{\bar n}+C_3\max\{\|s\|^2_{\bm \Psi},1\} \left(\frac{1}{\bar n\left(\log (\bar n)/d\right)^{3(d+1)/2}}+\frac{w(\bar n)}{\bar n}\right),$$
where $C_1$ may depend on $\kappa'_1,\kappa'_2,\kappa'_4,\kappa'_5,\kappa'_6,c_1,c_2,$ $C_2$ may depend on $\kappa'_2,$ and $C_3$ may depend on $\kappa'_1,\kappa'_2,\kappa'_3,\kappa'_7,j_0,d.$
\subsection{Proofs of Corollaries~\ref{corol:density} to~\ref{corol:levydensitydisc}}\label{sec:proofcorollaries}
\subsubsection{Proof of Corollary~\ref{corol:density}}
Assumption \textbf{(Conc)} is a straightforward consequence of Talagrand's inequality, as stated for instance in~\cite{Massart} (Inequality $(5.50)$), and is satisfied, whatever $\theta>0,$ for
\begin{equation}\label{ref:concdensity}
\bar n=n, \kappa'_1=1+\theta,\kappa'_2=2,\kappa'_3=(1/3+1/\theta)/2.
\end{equation}
For all $\bm\lambda\in m_\bullet,$ $\hat \sigma^2_{\bm\lambda}$ is an unbiased estimator for $\text{\upshape{Var}}(\Psi_{\bm\lambda}(Y_1)).$ Besides, the existence of $\Omega_\sigma$ follows from Lemma 1 in~\cite{ReynaudRT} with $\gamma=2.$ Thus Assumptions \textbf{(Var)} and \textbf{(Rem)} are satisfied by taking $\kappa'_4=1,\kappa'_5$ that only depends on $\kappa$ and $d,$ $\kappa'_6=0,$ $w(n)=C(\kappa,j_0,d)/\log^{d+1} (n).$
\subsubsection{Proof of Corollary~\ref{corol:copula}} Setting $Y_i=(F_1(X_{i1}),\ldots,F_d(X_{id})),$ $i=1,\ldots,n,$ we recover the previous density estimation framework, so Assumption \textbf{(Conc)} is still satisfied with~\eqref{ref:concdensity}. Setting $\hat Y_i=(\hat F_{n1}(X_{i1}),\ldots,\hat F_{nd}(X_{id})),$ $i=1,\ldots,n,$ and
$$\check \sigma^2_{\bm \lambda}=\frac{1}{n(n-1)}\sum_{i=2}^n\sum_{j=1}^{i-1}\left(\Psi_{\bm\lambda}(Y_i)-\Psi_{\bm\lambda}(Y_j)\right)^2,$$
we observe that, for all $\bm\lambda \in m_{\bullet}$
$$\max\left\{\hat\sigma^2_{\bm\lambda} - 4 \check \sigma^2_{\bm \lambda}, \check \sigma^2_{\bm\lambda} - 4 \hat \sigma^2_{\bm \lambda}\right\}\leq 8 R_{\bm\lambda}(n)$$
where
$$R_{\bm \lambda}(n)=\frac{1}{n(n-1)}\sum_{i=2}^n(i-1)\left(\Psi_{\bm\lambda}(\hat Y_i)-\Psi_{\bm\lambda}(Y_i)\right)^2.$$
Using the same arguments as in the proof of Proposition~\ref{prop:copulaone}, we get for all $\bm\lambda \in m_{\bullet}$ and all $m\subset m_\bullet$
$$\mathbb{E}\left[R_{\bm \lambda}(n)\right] \leq C(\kappa,d) 2^{3L_\bullet} \log(n) /n,$$
$$R_{\bm \lambda}(n)\leq C(\kappa,d) 2^{3L_\bullet} \log(n) /n$$
except on a set with probability smaller than $2d/n,$
$$\mathbb{E}\left[\|\check s^\star_m - \hat s^\star _m\|^2\right] \leq C(\kappa,d,j_0) L_\bullet^{d-1}2^{4 L_\bullet} \log(n)/n,$$
and
$$\sqrt{\mathbb{E}\left[\|\check s^\star_{m_\bullet} - \hat s^\star _{m_\bullet}\|^4\right]} \leq C(\kappa,d,j_0) L_\bullet^{d-1}2^{4 L_\bullet} /\sqrt{n}.$$
Building on the proof of Corollary~\ref{corol:density}, we conclude that Assumptions \textbf{(Var)} and \textbf{(Rem)} are satisfied with $\kappa'_4,\kappa'_5$ that only depend on $\kappa,j_0,d,$ $\kappa'_6=0,$ and $w(n)=\sqrt{n} \log^{d-1}(n).$
\subsubsection{Proof of Corollary~\ref{corol:Poisson}}
Assumption \textbf{(Conc)} is a straightforward consequence of Talagrand's inequality for Poisson processes proved by~\cite{ReynaudPoisson} (Corollary 2), and is satisfied, whatever $\theta>0,$ by
\begin{equation}\label{ref:concpoisson}
\bar n=\text{Vol}_d(Q), \kappa'_1=1+\theta,\kappa'_2=12,\kappa'_3=(1.25+32/\theta).
\end{equation}
For all $\bm\lambda\in m_\bullet,$ $\hat \sigma^2_{\bm\lambda}$ is an unbiased estimator for $\int_Q \Psi^2_{\bm\lambda}s= \text{Vol}_d(Q)\text{\upshape{Var}}(\check \beta_{\bm\lambda}).$ Besides, the existence of $\Omega_\sigma$ follows from Lemma 6.1 in~\cite{ReynaudR}. Thus Assumptions \textbf{(Var)} and \textbf{(Rem)} are satisfied by taking $\kappa'_4=1,\kappa'_5$ that only depends on $\kappa$ and $d,$ $\kappa'_6=0,$ $w(\bar n)=C(\kappa,j_0,d)/\log^{d+1} (\bar n).$
\subsubsection{Proof of Corollary~\ref{corol:levydensitycont}}
The proof is similar to that of Corollary~\ref{corol:Poisson} with $\bar n=T.$
\subsubsection{Proof of Corollary~\ref{corol:levydensitydisc}} Regarding Assumption \textbf{(Conc)}, the proof is similar to that of Corollary~\ref{corol:Poisson} with $\bar n=n\Delta.$ For all $\bm\lambda \in m_{\bullet},$ let
$$\check \sigma^2_{\bm \lambda}=\frac{1}{n\Delta} \iint\limits_{[0,n\Delta]\times Q} \Psi^2_{\bm\lambda}(x) N(\d t, \d x).$$
For any bounded measurable function $g$ on $Q,$ let
$$R(g)= \int_Q g(\d \widehat M-\d M),\quad I(g)=\int_Q g \d M - \mathbb{E}\left[\int_Q g \d M\right],\quad \hat I(g)=\int_Q g \d \widehat M - \mathbb{E}\left[\int_Q g \d \widehat M\right], $$
then
$$R(g)=\hat I(g) - I(g) +D_\Delta(g)$$
where $D_\Delta$ has been defined in the proof of Proposition~\ref{prop:disclevyone}.
Notice that
$$\hat \sigma^2_{\bm \lambda} - \check \sigma^2_{\bm \lambda} = R( \Psi^2_{\bm\lambda})$$
and
$$\|\hat s^\star_{m_\bullet} - \check s^\star_{m_\bullet}\|^2_{\bm\Psi} = \sum_{\bm\lambda\in m_\bullet} R^2(\Psi_{\bm\lambda}).$$
In the course of the proof of Proposition~\ref{prop:disclevyone}, we have shown that, for bounded and Lipschitz functions $g$ on $Q,$
$$\left|D_\Delta(g)\right|\leq C(\lambda_{\varepsilon},\varepsilon,f,Q) \max\left\{\|g\|_1,\|g\|_\infty,\|g\|_L \right\} \Delta$$
provided $\Delta$ and $\varepsilon$ are small enough. Besides, both $\hat I(g)$ and $I(g)$ satisfy Bernstein-type inequalities (as stated in~\cite{Massart}, Proposition 2.9, for the former, and in~\cite{ReynaudPoisson}, Proposition 7, for the latter). Combining all these arguments yields Corollary~\ref{corol:levydensitydisc}.
\subsection{Proof of Proposition~\ref{prop:composite}}\label{sec:proofcomposite}
For $\alpha>0,$ we set $r=\lfloor \alpha \rfloor+1.$
$(i).$ From~\eqref{eq:binomdiff}, it is easy to see that $\Delta^r_{h_\ell,\ell}(f,\mathbf x) = \Delta^r_{h_\ell}(u_\ell,x_\ell).$ Thus $w^{\{\ell\}}_r(f,t_\ell)_p=w_r(u_\ell,t_\ell)_p$ and $w^{\mathbf e}_r(f,\mathbf{t_e})_p=0$ as soon as $\mathbf e \subset \{1,\ldots,d\}$ contains at least two elements. Therefore,
$$\|f\|_{SB^{\alpha}_{p,q,(d)}} \leq C(p) \sum_{\ell=1}^d \|u_\ell\|_{B^{\alpha}_{p,q,(1)}}.$$
$(ii).$ For the sake of readability, we shall detail only two special cases. Let us first deal with the case $f(\mathbf x)=\prod_{\ell=1}^d u_\ell(x_\ell)$ where each $u_\ell\in B^{\alpha}_{p,q,(1)}.$ From~\eqref{eq:binomdiff},
$$\Delta^{r,\mathbf e}_{\mathbf h}(f,\mathbf x)=\prod_{\ell \in \mathbf e} \Delta^r_{h_\ell}(u_\ell,x_\ell) \prod_{\ell \notin \mathbf e} u_\ell(x_\ell),$$
so $$\|f\|_{SB^{\alpha}_{p,q,(d)}} \leq 2^d \prod_{\ell=1}^d \|u_\ell\|_{B^{\alpha}_{p,q,(1)}}.$$
Let us now assume that $d=3$ and that $f(\mathbf x)=u_1(x_1)u_{2,3}(x_2,x_3)$ where $u_1\in B^{\alpha_1}_{p,q,(1)}$ and $u_{2,3}\in B^{\alpha_2}_{p,q,(2)}.$ We set $r_\ell=\lfloor \alpha_\ell \rfloor+1$ for $\ell=1,2,$ and $\bar r = \lfloor \bar\alpha \rfloor+1,$ where $\bar \alpha=\min(\alpha_1,\alpha_2/2).$ For $0<t_1,t_2,t_3<1,$ we easily have, with the convention $\alpha_3=\alpha_2$ and $r_3=r_2,$
$$\|f\|_p=\|u_1\|_p\|u_{2,3}\|_p$$
$$t_1^{-\bar \alpha} w_{\bar r}^{\{1\}}(f,t_1)_p \leq t_1^{-\alpha_1} w_{r_1}(u_1,t_1)_p\|u_{2,3}\|_p$$
$$t_\ell^{-\bar\alpha} w_{\bar r}^{\{\ell\}}(f,t_\ell)_p \leq \|u_1\|_p t_\ell^{-\alpha_\ell} w_{r_\ell}^{\{\ell\}}(u_{2,3},t_\ell)_p, \text{ for } \ell=2,3$$
$$t_1^{-\bar\alpha}t_\ell^{-\bar\alpha} w_{\bar r}^{\{1,\ell\}}(f,t_1,t_\ell)_p \leq t_1^{-\alpha_1}w_{r_1}(u_1,t_1)_p t_\ell^{-\alpha_\ell} w_{r_\ell}^{\{\ell\}}(u_{2,3},t_\ell)_p, \text{ for } \ell=2,3.$$
Besides, we deduce from~\eqref{eq:binomdiff} that
$$\|\Delta^{\bar r}_h(g,.)\|_p \leq C({\bar r},p) \|g\|_p,$$
and as operators $\Delta^{\bar r} _{h_\ell,\ell}$ commute, we have
$$t_2^{-\bar\alpha}t_3^{-\bar\alpha} w_{\bar r}^{\{2,3\}}(f,t_2,t_3)_p \leq C(p,\bar r) \|u_1\|_p t_2^{-\bar\alpha}t_3^{-\bar\alpha} \min\left\{w_{\bar r}^{\{2\}}(u_{2,3},t_2)_p, w_{\bar r}^{\{3\}}(u_{2,3},t_3)_p\right\}.$$
The inequality of arithmetic and geometric means entails that $2t_2^{-\bar\alpha}t_3^{-\bar\alpha}\leq t_2^{-2\bar\alpha}+ t_3^{-2\bar\alpha},$ so
$$t_2^{-\bar\alpha}t_3^{-\bar\alpha} w_{\bar r}^{\{2,3\}}(f,t_2,t_3)_p \leq C(p,\bar r) \|u_1\|_p \left(t_2^{-\alpha_2} w_{r_2}^{\{2\}}(u_{2,3},t_2)_p +t_3^{-\alpha_3} w_{r_2}^{\{3\}}(u_{2,3},t_3)_p\right).$$
In the same way,
$$t_1^{-\bar\alpha}t_2^{-\bar\alpha}t_3^{-\bar\alpha} w_{\bar r}^{\{1,2,3\}}(f,t_1,t_2,t_3)_p \leq C(p,\bar r) t_1^{-\alpha_1} w_{r_1}(u_1,t_1)_p \left(t_2^{-\alpha_2} w_{r_2}^{\{2\}}(u_{2,3},t_2)_p +t_3^{-\alpha_3} w_{r_2}^{\{3\}}(u_{2,3},t_3)_p\right).$$
Consequently,
$$\|f\|_{SB^{\bar\alpha}_{p,q,(d)}} \leq C(p,\bar r) \|u_1\|_{B^{\alpha_1}_{p,q,(1)}} \|u_{2,3}\|_{B^{\alpha_2}_{p,q,(2)}}.$$
$(iii).$ The proof follows from the chain rule for higher order derivatives of a composite function. Notice that for all $1\leq \ell\leq d$ and $1\leq r\leq \alpha-1,$ $u_\ell^{(r)}\in W^{\alpha-r}_{p,(1)},$ with $\alpha-r>1/p,$ so $u_\ell^{(r)}$ is bounded.
$(iv).$ The proof follows from a $d$-variate extension of Theorem 4.1, Inequality (10) in~\cite{Potapov} (see also~\cite{DeVoreLorentz} Chapter 6, Theorem 3.1).
$(v).$ See Theorem 3.10 in~\cite{NguyenSickel}.
\subsection{Proof of Theorem~\ref{theo:approx}}\label{sec:proofapprox}
We recall that for any finite sequence $(a_i)_{i\in I},$ and $0<p_1,p_2<\infty,$
\begin{equation}\label{eq:lqnorms}
\left(\sum_{i\in I} |a_i|^{p_2}\right)^{1/p_2} \leq |I|^{(1/p_2-1/p_1)_+}\left(\sum_{i\in I} |a_i|^{p_1}\right)^{1/p_1}.
\end{equation}
Besides, we have shown in the course of the proof of Proposition~\ref{prop:combinatorial} that
$$\sharp\bm J_\ell \leq c_1(M,d)(\ell-dj_0+d-1)^{d-1}.$$
In the hyperbolic basis, $f$ admits a unique decomposition of the form
$$f=\sum_{\ell=dj_0}^\infty\sum_{\bm\lambda\in U\bm{\nabla}(\ell)} \langle f,\Psi_{\bm\lambda}\rangle \Psi^\star_{\bm\lambda}.$$
Defining
$$f_\bullet=\sum_{\ell=dj_0}^{L_\bullet}\sum_{\bm\lambda\in U\bm{\nabla}(\ell)} \langle f,\Psi_{\bm\lambda}\rangle \Psi^\star_{\bm\lambda},$$
we have, for finite $q>0,$ using the two facts recalled above,
\begin{align*}
\|f-f_\bullet\|^2_{\bm\Psi}
&= \sum_{\ell=L_\bullet+1}^\infty\sum_{\bm j\in \bm J_\ell}\sum_{\bm\lambda\in \bm{\nabla_j}} \langle f,\Psi_{\bm\lambda}\rangle ^2\\
&\leq \sum_{\ell=L_\bullet+1}^\infty \sum_{\bm j\in \bm J_\ell} \left(\sharp \bm{\nabla_j}\right)^{2(1/2-1/p)_+}\left( \sum_{\bm\lambda\in\bm{\nabla_j}} |\langle f,\Psi_{\bm\lambda}\rangle|^p\right)^{2/p}\\
&\leq C( B,d,p) \sum_{\ell=L_\bullet+1}^\infty 2^{2\ell (1/2-1/p)_+} \sum_{\bm j\in \bm J_\ell} \left( \sum_{\bm\lambda\in\bm{\nabla_j}} |\langle f,\Psi_{\bm\lambda}\rangle|^p\right)^{2/p}\\
&\leq C(B,d,p) \sum_{\ell=L_\bullet+1}^\infty 2^{2\ell (1/2-1/p)_+} \sharp\bm J_\ell ^{2(1/2-1/q)_+}\left(\sum_{\bm j\in \bm J_\ell} \left( \sum_{\bm\lambda\in\bm{\nabla_j}} |\langle f,\Psi_{\bm\lambda}\rangle|^p\right)^{q/p}\right)^{2/q}\\
&\leq C(B,d,p) \sum_{\ell=L_\bullet+1}^\infty 2^{2\ell (1/2-1/p)_+} (\ell-dj_0+d-1)^{2(d-1)(1/2-1/q)_+}R^22^{-2\ell(\alpha+1/2-1/p)}\\
&\leq C(B,d,p) R^2\sum_{\ell=L_\bullet+1}^\infty (\ell-dj_0+d-1)^{2(d-1)(1/2-1/q)_+}2^{-2\ell(\alpha-(1/p-1/2)_+)}\\
&\leq C(B,\alpha,p,d)R^2 L_\bullet^{2(d-1)(1/2-1/q)_+} 2^{-2L_\bullet(\alpha-(1/p-1/2)_+)}.
\end{align*}
The case $q=\infty$ can be treated in the same way.
Let us fix $k\in\{0,\ldots,L_\bullet-\ell_1\}$ and define $\bar m (\ell_1+k,f)$ as the subset of $U\bm\nabla(\ell_1+k)$ such that $\left\{|\langle f,\Psi_{\bm{\lambda}}\rangle|; \bm\lambda \in \bar m (\ell_1+k,f)\right\}$ are the $N(\ell_1,k)$ largest elements among $\left\{|\langle f,\Psi_{\bm{\lambda}}\rangle|; \bm\lambda \in U\bm\nabla(\ell_1+k)\right\}$. We then consider the approximation for $f$ given by
$$A(\ell_1,f)=\sum_{\ell=dj_0}^{\ell_1-1}\sum_{\bm\lambda\in U\bm{\nabla}(\ell)} \langle f,\Psi_{\bm\lambda}\rangle \Psi^\star_{\bm\lambda}+
\sum_{k=0}^{L_\bullet-\ell_1}\sum_{\bm\lambda\in\bar m (\ell_1+k,f)} \langle f,\Psi_{\bm\lambda}\rangle \Psi^\star_{\bm\lambda}
$$
and the set
$$m_{\ell_1}(f)=\left(\bigcup_{\ell=dj_0}^{\ell_1-1} U\bm{\nabla}( \ell) \right)
\cup \left( \bigcup_{k=0}^{L_\bullet-\ell_1} \bar m(\ell_1+k,f)\right).$$
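Algorithmically, $A(\ell_1,f)$ is a simple greedy thresholding: keep every coefficient below level $\ell_1$, and at each level $\ell_1+k$ keep only the $N(\ell_1,k)$ largest ones. A minimal Python sketch (ours, with a hypothetical container \texttt{coeffs} mapping each level to its coefficient array) reads:
\begin{verbatim}
# Greedy construction of the index set m_{l1}(f).
import numpy as np

def greedy_index_set(coeffs, dj0, l1, L, N):
    kept = {l: np.arange(len(coeffs[l])) for l in range(dj0, l1)}
    for k in range(L - l1 + 1):
        c = np.abs(coeffs[l1 + k])
        kept[l1 + k] = np.argsort(c)[::-1][:N(l1, k)]  # N(l1,k) largest
    return kept
\end{verbatim}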
Let us first assume that $0<p\leq 2.$ Using Lemma 4.16 in~\cite{Massart} and~\eqref{eq:lqnorms}, we get
\begin{align*}
\|f_\bullet-A(\ell_1,f)\|_{\bm\Psi}^2
&= \sum_{k=0}^{L_\bullet-\ell_1}\sum_{\bm\lambda\in U\bm{\nabla}(\ell_1+k) \backslash \bar m (\ell_1+k,f)} \langle f,\Psi_{\bm\lambda}\rangle ^2\\
&\leq\sum_{k=0}^{L_\bullet-\ell_1}\left(\sum_{\bm\lambda\in U\bm{\nabla}(\ell_1+k)} |\langle f,\Psi_{\bm\lambda}\rangle|^p\right)^{2/p}/(N(\ell_1,k)+1)^{2(1/p-1/2)}\\
&\leq\sum_{k=0}^{L_\bullet-\ell_1}\sharp \bm J_{\ell_1+k}^{2(1/p-1/q)_+}\left(\sum_{\bm j\in \bm J_{\ell_1+k}}\left(\sum_{\bm\lambda\in \bm{\nabla_j}} |\langle f,\Psi_{\bm\lambda}\rangle|^p\right)^{q/p}\right)^{2/q}/(N(\ell_1,k)+1)^{2(1/p-1/2)}.
\end{align*}
Besides, it follows from~\eqref{eq:N} that
$$ N(\ell_1,k)+1 \geq 2 M^{-d}2^{-d}(d-1)^{-(d-1)} (\ell_1+k-dj_0+d-1)^{d-1} 2^{\ell_1} (k+2)^{-(d+2)}.$$
Therefore
\begin{equation*}
\|f_\bullet-A(\ell_1,f)\|_{\bm\Psi}^2
\leq C(\alpha,p,d)R^2 (\ell_1-dj_0+d-1)^{2(d-1)(1/2-1/\max(p,q))}2^{-2\alpha \ell_1} .
\end{equation*}
In the case $p\geq 2,$ the same kind of upper bound follows from
$$\|f_\bullet-A(\ell_1,f)\|_{\bm\Psi}^2
\leq
\sum_{k=0}^{L_\bullet-\ell_1}\left(\sharp U\bm\nabla(\ell_1+k)\right)^{2(1/2-1/p)} \left(\sum_{\bm\lambda\in U\bm{\nabla}(\ell_1+k)} |\langle f,\Psi_{\bm\lambda}\rangle|^p\right)^{2/p}.$$
Finally,
$$ \|f-A(\ell_1,f)\|_{\bm\Psi}^2=\|f-f_\bullet\|_{\bm\Psi}^2+\|f_\bullet-A(\ell_1,f)\|_{\bm\Psi}^2,$$
since $f-f_\bullet$ and $f_\bullet-A(\ell_1,f)$ involve disjoint sets of coefficients,
which completes the proof.
\bibliographystyle{alpha}
\section{Introduction}
The derivative is one of the most important topics not only in mathematics, but also in physics, chemistry, economics and engineering. Every standard Calculus course provides a variety of exercises for students to learn how to apply the concept of the derivative. The types of problems range from finding an equation of the tangent line to the application of differentials and advanced curve sketching. Usually, these exercises rely heavily on such differentiation techniques as the Product, Quotient and Chain Rules, and Implicit and Logarithmic Differentiation \cite{Stewart2012}. The definition of the derivative is hardly ever applied after the first few classes, and its use is seldom motivated.
Like many other topics in undergraduate mathematics, the derivative has given rise to many misconceptions \cite{Muzangwa2012}, \cite{Gur2007}, \cite{Li2006}. Just when students seem to have learned how to use the differentiation rules for the most essential functions, applications of the derivative bring new issues. A common student error of determining the domain of the derivative from its formula is discussed in \cite{Rivera2013}, where some interesting examples of derivatives defined at points where the functions themselves are undefined are also provided. However, the hunt for misconceptions takes another twist for derivatives that are undefined at points where the functions are in fact defined.
The expression for the derivative of a function obtained using differentiation techniques does not necessarily contain any information about the existence or the value of the derivative at the points where that expression is undefined. In this article we discuss a type of continuous function whose derivative expression is undefined at a certain point while the derivative itself exists there. We show how relying on the formula for the derivative when searching for a horizontal tangent line of a function leads to a false conclusion and, consequently, to a missed solution. We also provide a simple methodological treatment of similar functions suitable for the classroom.
\section{Calculating the Derivative}
In order to illustrate how deceptive the expression for the derivative can be to a student's eye, let us consider the following problem.
\vspace{12pt}
\fbox{\begin{minipage}{5.25in}
\begin{center}
\begin{minipage}{5.0in}
\vspace{10pt}
\emph{Problem}
\vspace{10pt}
Differentiate the function $f\left(x\right)=\sqrt[3]{x}\sin{\left(x^2\right)}$. For which values of $x$ from the interval $\left[-1,1\right]$ does the graph of $f\left(x\right)$ have a horizontal tangent?
\vspace{10pt}
\end{minipage}
\end{center}
\end{minipage}}
\vspace{12pt}
Problems with similar formulations can be found in many Calculus books \cite{Stewart2012}, \cite{Larson2010}, \cite{Thomas2009}. Following the common procedure, let us find the expression for the derivative of the function $f\left(x\right)$ by applying the Product Rule:
\begin{eqnarray}
f'\left(x\right) &=& \left(\sqrt[3]{x}\right)'\sin{\left(x^2\right)}+\left(\sin{\left(x^2\right)}\right)'\sqrt[3]{x} \notag \\ &=& \frac{1}{3\sqrt[3]{x^2}}\sin{\left(x^2\right)}+2x\cos{\left(x^2\right)}\sqrt[3]{x} \notag \\ &=& \frac{6x^2\cos{\left(x^2\right)}+\sin{\left(x^2\right)}}{3\sqrt[3]{x^2}} \label{DerivativeExpression}
\end{eqnarray}
Similar to \cite{Stewart2012}, we find the values of $x$ where the derivative $f'\left(x\right)$ is equal to zero:
\begin{equation}
6x^2\cos{\left(x^2\right)}+\sin{\left(x^2\right)} = 0
\label{DerivativeEqualZero}
\end{equation}
The expression for the derivative (\ref{DerivativeExpression}) is not defined at $x=0$. For every other value of $x$ in $\left[-1,1\right]$ we have $0<x^2\leq 1$, so both terms on the left-hand side of (\ref{DerivativeEqualZero}) are positive and their sum cannot vanish. Hence, we conclude that the function $f\left(x\right)$ does not have horizontal tangent lines on the interval $\left[-1,1\right]$.
However, a closer look at the graph of the function $f\left(x\right)$ seems to point at a different result: there is a horizontal tangent at $x=0$ (see Figure \ref{fig:FunctionGraph}).
First, note that the function $f\left(x\right)$ is defined at $x=0$. In order to verify whether it has a horizontal tangent at this point, let us find the derivative of the function $f\left(x\right)$ at $x=0$ using the definition:
\begin{eqnarray}
f'\left(0\right) &=& \lim_{h\rightarrow0}{\frac{f\left(0+h\right)-f\left(0\right)}{h}} \notag \\
&=& \lim_{h\rightarrow0}{\frac{\sqrt[3]{h}\sin{\left(h^2\right)}}{h}} \notag \\
&=& \lim_{h\rightarrow0}{\left(\sqrt[3]{h} \cdot {h} \cdot \frac{\sin{\left(h^2\right)}}{h^2}\right)} \notag \\
&=& \lim_{h\rightarrow0}{\sqrt[3]{h}} \cdot \lim_{h\rightarrow0}{h} \cdot \lim_{h\rightarrow0}{\frac{\sin{\left(h^2\right)}}{h^2}} \notag \\
&=& 0 \cdot 0 \cdot 1 = 0 \notag
\end{eqnarray}
since each of the limits above exists. We see that, indeed, the function $f\left(x\right)$ possesses a horizontal tangent line at the point $x=0$.
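This computation is easily confirmed with a computer algebra system. In the SymPy sketch below (ours, for illustration), the substitution $h=t^3$ with $t$ real sidesteps the branch issues of symbolic cube roots:
\begin{verbatim}
# Check f'(0) = 0 from the definition, f(x) = cbrt(x)*sin(x^2).
import sympy as sp

t = sp.symbols('t', real=True)
quotient = t * sp.sin(t**6) / t**3   # (f(0+h)-f(0))/h with h = t**3
print(sp.limit(quotient, t, 0, dir='+-'))   # prints 0
\end{verbatim}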
\section{Closer Look at the Expression for the Derivative}
What is the problem with the standard procedure proposed by many textbooks and repeated in every Calculus class? The explanation lies in the following fact: the expression for the derivative of a function carries no information as to whether the function is differentiable at the points where that expression is undefined. As is pointed out in \cite{Rivera2013}, the domain of the derivative is determined \emph{a priori} and therefore should not be obtained from the formula for the derivative itself.
In the example above, the Product Rule for derivatives requires the existence of the derivatives of both factors at the point of interest. Since the function $\sqrt[3]{x}$ is not differentiable at zero, the Product Rule cannot be applied there.
In order to see what exactly happens when we apply the Product Rule, let us find the expression for the derivative using the definition of the derivative:
\begin{eqnarray}
f'\left(x\right) &=& \lim_{h\rightarrow0}{\frac{f\left(x+h\right)-f\left(x\right)}{h}} \notag \\
&=& \lim_{h\rightarrow0}{\frac{\sqrt[3]{x+h}\sin{\left(x+h\right)^2}-\sqrt[3]{x}\sin{\left(x^2\right)}}{h}} \notag \\
&=& \lim_{h\rightarrow0}{\frac{\left(\sqrt[3]{x+h}-\sqrt[3]{x}\right)}{h}\sin{\left(x^2\right)}} + \notag \\
&& \lim_{h\rightarrow0}{\frac{\left(\sin{\left(x+h\right)^2}-\sin{\left(x^2\right)}\right)}{h}\sqrt[3]{x+h}} \notag \\
&=& \lim_{h\rightarrow0}{\frac{\sqrt[3]{x+h}-\sqrt[3]{x}}{h}} \cdot \lim_{h\rightarrow0}{\sin{\left(x^2\right)}} + \notag \\&& \lim_{h\rightarrow0}{\frac{\sin{\left(x+h\right)^2}-\sin{\left(x^2\right)}}{h}} \cdot \lim_{h\rightarrow0}{\sqrt[3]{x+h}} \notag \\
&=& \frac{1}{3\sqrt[3]{x^2}} \cdot \sin{\left(x^2\right)}+2x\cos{\left(x^2\right)} \cdot \sqrt[3]{x} \notag
\end{eqnarray}
which seems to be identical to the expression (\ref{DerivativeExpression}).
Students are expected to develop the skill of deriving similar results and to know how to find the derivative of a function using only the definition of the derivative. But how `legal' are the performed operations?
\begin{figure}[H]
\begin{center}
\includegraphics[width=6.0in]{sin.pdf}
\vspace{.1in}
\caption{Graph of the function $f\left(x\right)=\sqrt[3]{x}\sin{\left(x^2\right)}$}
\label{fig:FunctionGraph}
\end{center}
\end{figure}
Let us consider each of the following limits:
\begin{eqnarray*}
&& \lim_{h\rightarrow0}{\frac{\sqrt[3]{x+h}-\sqrt[3]{x}}{h}} \notag \\
&& \lim_{h\rightarrow0}{\sin{\left(x^2\right)}}\notag \\
&& \lim_{h\rightarrow0}{\frac{\sin{\left(x+h\right)^2}-\sin{\left(x^2\right)}}{h}}\notag \\
&& \lim_{h\rightarrow0}{\sqrt[3]{x+h}}.
\end{eqnarray*}
The last three limits exist for all real values of the variable $x$. However, the first limit does not exist when $x=0$. Indeed,
\begin{equation*}
\lim_{h\rightarrow0}{\frac{\sqrt[3]{0+h}-\sqrt[3]{0}}{h}} = \lim_{h\rightarrow0}{\frac{1}{\sqrt[3]{h^2}}} = + \infty
\end{equation*}
This implies that the Product and Sum Laws for limits cannot be applied, and therefore this step is not justified in the case $x=0$. When the derivation is performed, we automatically assume the conditions under which the Product Law for limits can be applied, i.e.\ that both limits being multiplied exist. It is not hard to see that in our case these conditions are equivalent to $x\neq0$. This is precisely why the expression for the derivative (\ref{DerivativeExpression}) already carries the implicit assumption that it is valid only for values of $x$ different from zero.
Note that in the case $x=0$ the application of the Product and Sum Laws for limits is not necessary, since the term $\left(\sqrt[3]{x+h}-\sqrt[3]{x}\right)\sin{\left(x^2\right)}$ vanishes.
The correct expression for the derivative of the function $f\left(x\right)$ is therefore the following:
\begin{equation*}
f'\left(x\right) =
\begin{cases}
\frac{6x^2\cos{\left(x^2\right)}+\sin{\left(x^2\right)}}{3\sqrt[3]{x^2}}, & \mbox{if } x \neq 0 \\
0, & \mbox{if } x = 0
\end{cases}
\end{equation*}
The expression for the derivative of a function provides the correct value of the derivative only for those values of the independent variable for which the expression is defined; it tells us nothing about the existence or the value of the derivative where the expression is undefined. Indeed, let us consider the function
\begin{equation*}
g\left(x\right) = {\sqrt[3]{x}}\cos{\left(x^2\right)}
\end{equation*}
and the expression for its derivative $g'\left(x\right)$
\begin{equation*}
g'\left(x\right) = \frac{\cos{\left(x^2\right)}-6x^2\sin{\left(x^2\right)}}{3\sqrt[3]{x^2}}
\end{equation*}
Similar to the previous example, the expression for the derivative is undefined at $x=0$. Nonetheless, it can be shown that $g\left(x\right)$ is not differentiable at $x=0$ (see Figure \ref{fig:GFunction}). We have thus exhibited two visually similar functions: both have expressions for their derivatives that are undefined at zero, yet one of the functions possesses a derivative there while the other does not.
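The same SymPy check, applied to $g$, confirms the contrast: the defining limit now diverges.
\begin{verbatim}
# Difference quotient of g(x) = cbrt(x)*cos(x^2) at 0, with h = t**3.
import sympy as sp

t = sp.symbols('t', real=True)
quotient = t * sp.cos(t**6) / t**3
print(sp.limit(quotient, t, 0, dir='+-'))  # prints oo: g'(0) does not exist
\end{verbatim}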
\section{Methodological Remarks}
Unfortunately, many functions similar to the ones discussed above exist, and they can arise in a variety of typical Calculus problems: finding the points where the tangent line is horizontal, finding an equation of the tangent and normal lines to a curve at a given point, the use of differentials, and graph sketching. Relying only on the expression for the derivative to determine its value at the undefined points may lead to a missed solution (as in the example discussed above) or to completely false interpretations (as in the case of curve sketching).
As discussed above, the expression for the derivative provides no information on the existence or the value of the derivative where the expression itself is undefined. Here we present a methodology for the analysis of this type of function.
Let $f\left(x\right)$ be the function of interest and $f'\left(x\right)$ the expression for its derivative, undefined at some point $x_{0}$. In order to find out whether $f\left(x\right)$ is differentiable at $x_{0}$, we suggest following these steps:
\begin{enumerate}
\item Check if the function $f\left(x\right)$ itself is defined at the point $x_{0}$. If $f\left(x\right)$ is undefined at $x_{0}$, then it is not differentiable at $x_{0}$. If $f\left(x\right)$ is defined at $x_{0}$, then proceed to the next step.
\item Identify the basic functions used in the formula for $f\left(x\right)$ that are themselves defined at the point $x_{0}$ but whose derivatives are not (such as, for example, the root functions).
\item Find the derivative of the function $f\left(x\right)$ at the point $x_{0}$ using the definition.
\end{enumerate}
The importance of the first step comes from the fact that most students tend to pay little attention to analyzing a function's domain when asked to investigate its derivative. Formally, the second step can be skipped; however, it gives students insight into which part of the function presents a problem and teaches them to identify similar cases in the future. The difficulty of accomplishing the third step depends on the form of the function and can sometimes be tedious. Nevertheless, it allows students to apply previously obtained skills and encourages review of the material.
\begin{figure}[H]
\begin{center}
\includegraphics[width=6.0in]{cos.pdf}
\vspace{.1in}
\caption{Graph of the function $g\left(x\right)=\sqrt[3]{x}\cos{\left(x^2\right)}$}
\label{fig:GFunction}
\end{center}
\end{figure}
\section{Conclusion}
We discussed the misconception that the expression for the derivative of a function contains information as to whether the function is differentiable at the points where the expression is undefined. As an example, we considered a typical Calculus problem of looking for the horizontal tangent lines of a function. We showed how searching only for the values that make the expression for the derivative equal to zero leads to a missed solution: even though the expression for the derivative is undefined at a point, the function may still possess a derivative there. We also provided an example of a function whose derivative expression is similarly undefined at a point, but which is in fact not differentiable there. Finally, we presented a methodological treatment of such functions based on applying the definition of the derivative, which can be used in the classroom.
\section{Introduction}
\label{sec:Introduction}
Binary black holes (BBH) are among the most important sources of
gravitational waves for upcoming gravitational-wave detectors like
Advanced LIGO~\cite{TheLIGOScientific:2014jea} and
Virgo~\cite{Accadia:2011zzc}. Accurate predictions of the
gravitational waveforms emitted by such systems are important for
detection of gravitational waves and for parameter estimation of any
detected binary~\cite{Abbott:2007}. When either black hole carries
spin that is \emph{not} aligned with the orbital angular momentum,
there is an exchange of angular momentum between the components of the
system, leading to complicated dynamical behavior.
Figure~\ref{fig:precessionCones} exhibits the directions of the
various angular momenta in several simulations described in this
paper. This behavior is imprinted on the emitted
waveforms~\cite{Apostolatos1994, PekowskyEtAl:2013, BoyleEtAl:2014},
making them more feature-rich than waveforms from aligned-spin BBH
systems or non-spinning BBH systems. In order to model the waveforms
accurately, then, we need to understand the dynamics.
The orbital-phase evolution of an inspiraling binary, the precession
of the orbital angular momentum and the black-hole spins, and the
emitted gravitational waveforms can be modeled with post-Newtonian
theory~\cite{lrr-2014-2}, a perturbative solution of Einstein's
equations in powers of $v/c$, the ratio of the velocity of the black
holes to the speed of light. Such post-Newtonian waveforms play an
important role in the waveform modeling for ground-based
interferometric gravitational-wave detectors (see,
e.g.,~\cite{Ohme:2011rm}). For non-spinning and aligned-spin BBH,
however, the loss of accuracy of the post-Newtonian phase evolution in
the late inspiral has been identified as one of the dominant
limitations of waveform modeling~\cite{Damour:2010, Boyle:2011dy,
OhmeEtAl:2011, MacDonald:2011ne, MacDonald:2012mp, Nitz:2013mxa}.
\begin{figure*}
\includegraphics[width=0.98\linewidth,trim=12 24 12 35,clip=true]{precessionCones}
\caption{Precession cones of the six primary precessing simulations
considered here, as computed by NR and PN. Shown are the paths
traced on the unit sphere by the normal to the orbital plane
$\ensuremath{\hat{\ell}}$ and the spin-directions $\hat{\chi}_{1,2}$. The thick
lines represent the NR data, with the filled circles indicating the
start of the NR simulations. The lines connecting the NR data to
the origin are drawn to help visualize the precession-cones. The PN
data, plotted with thin lines, lie on the scale of this figure
almost precisely on top of the NR data. (The PN data was
constructed using the Taylor T4 approximant matched at frequency
$m\Omega_m=0.021067$, with a matching interval width
$\delta\Omega=0.1\Omega_m$.) }
\label{fig:precessionCones}
\end{figure*}
Precessing waveform models (e.g.,~\cite{Hannam:2013oca,
Taracchini:2013rva, Pan:2013rra, BoyleEtAl:2014}) depend on the
orbital phase evolution and the precession dynamics. Therefore, it is
important to quantify the accuracy of the post-Newtonian approximation
for modeling the precession dynamics itself, and the orbital-phase
evolution of precessing binaries. Recently, the SXS collaboration has
published numerical-relativity solutions to the full Einstein
equations for precessing BBH systems~\cite{PhysRevLett.111.241104}.
These simulations cover $\gtrsim 30$ orbits and up to two precession
cycles. Therefore, they offer a novel opportunity to systematically
quantify the accuracy of the post-Newtonian precession equations, the
topic of this paper.
In this paper, we develop a new technique to match the initial
conditions of post-Newtonian dynamics to a numerical relativity
simulation. We then use this technique to study the level of
agreement between the post-Newtonian precession equations and the
numerical simulations. The agreement is remarkably good: post-Newtonian
theory usually reproduces the directions of the orbital angular momentum
and of the spin axes in the numerical simulations to better than $1$
degree. We also investigate nutation effects on the orbital
time-scale that are imprinted both in the orbital angular momentum and
the spin directions. For the orbital angular momentum, NR and PN
yield very similar nutation features, whereas for the spin direction,
nutation is qualitatively different in PN and the investigated NR
simulations. Considering the orbital-phase evolution, we find that
the disagreement between the post-Newtonian orbital phase and the
numerical-relativity simulations is comparable to that in the aligned-spin case. This
implies that the orbital phase evolution will remain an important
limitation for post-Newtonian waveforms even in the precessing case.
Finally, we study the convergence with post-Newtonian order of the
precession equations, and establish very regular and fast convergence,
in contrast to post-Newtonian orbital phasing.
This paper is organized as follows: Section~\ref{sec:Methodology}
describes the post-Newtonian expressions utilized, the numerical
simulations, how we compare PN and NR systems with each other, and how
we determine suitable ``best-fitting'' PN parameters for a comparison
with a given NR simulation. Section~\ref{sec:comparison} presents our
results, starting with a comparison of the precession dynamics
in Sec.~\ref{sec:PrecessionComparisons}, and continuing with an
investigation in the accuracy of the orbital phasing in
Sec.~\ref{sec:OrbitalPhaseComparisons}. The following two sections
study the convergence of the PN precession equations and the impact of
ambiguous choices when dealing with incompletely known spin-terms in
the PN orbital phasing. Section~\ref{sec:tech_cons}, finally, is
devoted to some technical numerical aspects, including an
investigation into the importance of the gauge conditions used for the
NR runs. We close with a discussion in Sec.~\ref{sec:discussion}.
The appendices collect the precise post-Newtonian expressions we use
and additional useful formulae about quaternions.
\section{Methodology}
\label{sec:Methodology}
\subsection{Post-Newtonian Theory}
\label{sec:PostNewtonianTheory}
Post-Newtonian (PN) theory is an approximation to General Relativity
in the weak-field, slow-motion regime, characterized by the small
parameter $\epsilon \sim (v/c)^{2} \sim \frac{Gm}{rc^2}$, where $m$,
$v$, and $r$ denote the characteristic mass, velocity, and size of the
source, $c$ is the speed of light, and $G$ is Newton's gravitational
constant. For the rest of this paper, the source is always a binary
black-hole system with total mass $m$, relative velocity $v$ and
separation $r$, and we use units where $G=c=1$.
Restricting attention to quasi-spherical binaries in the
adiabatic limit, the local dynamics of the source can be split into
two parts: the evolution of the orbital frequency, and the precession
of the orbital plane and the spins. The leading-order precessional
effects~\cite{Barker:1975ae} and spin contributions to the evolution
of the orbital frequency~\cite{kidder93,Kidder:1995zr} enter
post-Newtonian dynamics at the 1.5~PN order (i.e., $\epsilon^{3/2}$)
for spin-orbit effects, and 2~PN order for spin-spin effects. We also
include non-spin terms to 3.5~PN order \cite{lrr-2014-2}, the
spin-orbit terms to 4~PN order \cite{Marsat:2013caa}, spin-spin terms
to 2~PN order \cite{Kidder:1995zr}\footnote{ During the
preparation of this manuscript, the 3 PN spin-spin contributions
to the flux and binding energy were completed in
\cite{Bohe:2015ana}. These terms are not used in the analysis
presented here.}. For the precession equations, we include the
spin-orbit contributions to next-to-next-to-leading order,
corresponding to 3.5~PN \cite{BoheEtAl:2013}. The spin-spin terms are
included at 2~PN order\footnote{The investigation of the
effects of spin-spin terms at higher PN~orders (see
e.g. \cite{Hartung:2011ea,PR08a,Levi:2014sba} and references
therein), and terms which are higher order in spin (e.g cubic spin
terms) \cite{Marsat:2014xea,Levi:2014gsa} is left for future
work.}.
\subsubsection{Orbital dynamics}
\label{sec:OrbitalEvolution}
Following earlier work (e.g., Ref.~\cite{Kidder:1995zr}) we describe the
precessing BH binary by the evolution of the orthonormal triad
$(\hat{n},\hat{\lambda},\ensuremath{\hat{\ell}})$, as indicated in
Fig.~\ref{fig:orbitalDefns}: $\hat{n}$ denotes the unit separation
vector between the two compact objects, $\ensuremath{\hat{\ell}}$ is the normal to the
orbital plane and $\ensuremath{\hat{\lambda}}=\ensuremath{\hat{\ell}}\times \ensuremath{\hat{n}}$ completes the
triad. This triad is time-dependent, and
is related to the constant inertial triad
$(\hat{x},\hat{y},\hat{z})$ by a time-dependent rotation $R_{f}$, as
indicated in Fig.~\ref{fig:orbitalDefns}. The rotation $R_f$ will play
an important role in Sec.~\ref{CharacterizingPrecessionByRotors}. The
orbital triad obeys the following
equations:
\begin{subequations}
\label{eq:TriadEvolution}
\begin{align}
\frac{d\ensuremath{\hat{\ell}}}{dt} &= \varpi\ensuremath{\hat{n}}\times\ensuremath{\hat{\ell}} \label{eq:ellHatEv},\\
\frac{d\hat{n}}{dt} &= \Omega\hat{\lambda},\label{eq:n-evolution}\\
\frac{d\hat{\lambda}}{dt} &= -\Omega\hat{n} + \varpi\ensuremath{\hat{\ell}}. \label{eq:lambEv}
\end{align}
\end{subequations}
Here, $\Omega$ is the instantaneous orbital frequency and
$\varpi$ is the precession frequency of the orbital
plane.
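For orientation, Eqs.~\eqref{eq:TriadEvolution} are straightforward to integrate numerically once $\Omega$ and $\varpi$ are supplied; the following schematic Python sketch (ours, with constant placeholder frequencies standing in for the PN series of Appendix~\ref{ap:PN}) illustrates the structure:
\begin{verbatim}
# Integrate the triad equations; Omega and varpi are toy placeholders.
import numpy as np
from scipy.integrate import solve_ivp

Omega, varpi = 0.02, 1e-4   # orbital / precession frequencies (toy values)

def rhs(t, y):
    n, lam, ell = y[:3], y[3:6], y[6:]
    return np.concatenate([Omega * lam,                # dn/dt
                           -Omega * n + varpi * ell,   # dlambda/dt
                           varpi * np.cross(n, ell)])  # dell/dt

sol = solve_ivp(rhs, (0.0, 5000.0), np.eye(3).ravel(), rtol=1e-10)
\end{verbatim}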
\begin{figure}
\includegraphics[width=0.96\linewidth]{orbitalDefns}
\caption{Vectors describing the orbital dynamics of the system. The
yellow plane denotes the orbital plane. $\Rf(t)$ is the rotor
that rotates the coordinate triad $(\hat{x}, \hat{y}, \hat{z})$
into the orbital triad $(\ensuremath{\hat{n}},\ensuremath{\hat{\lambda}},\ensuremath{\hat{\ell}})$.}
\label{fig:orbitalDefns}
\end{figure}
The dimensionless spin vectors $\vec{\chi}_i = \vec{S}_i/m^2_i$ also
obey precession equations:
\begin{subequations}
\label{eq:SpinsEv}
\begin{align}
\frac{d\vec{\chi}_{1}}{dt} &= \vec{\Omega}_{1}\times \vec{\chi}_{1},
\\
\frac{d\vec{\chi}_{2}}{dt} &= \vec{\Omega}_{2}\times \vec{\chi}_{2}.
\end{align}
\end{subequations}
The precession frequencies $\vec{\Omega}_{1,2},\ \varpi$ are series in
the PN expansion parameter $\epsilon$; their explicit form is given in
Appendix~\ref{ap:PN}.
The evolution of the orbital frequency is derived from energy balance:
\begin{equation}
\label{eq:eBalance}
\frac{dE}{dt} = -\mathcal{F},
\end{equation}
where $E$ is the energy of the binary and $\mathcal{F}$ is the
gravitational-wave flux. $E$ and $\mathcal{F}$ are PN series depending
on the orbital frequency $\Omega$, the vector $\ensuremath{\hat{\ell}}$, and the BH
spins $\vec{\chi}_{1},\ \vec{\chi}_{2}$. Their explicit formulas are
given in Appendix~\ref{ap:PN}. In terms of
$x\equiv(m\Omega)^{2/3} \sim \epsilon$, Eq.~\eqref{eq:eBalance}
becomes:
\begin{equation}
\label{eq:OmEv}
\frac{dx}{dt} = -\frac{\mathcal{F}}{dE/dx},
\end{equation}
where the right-hand side is a ratio of two PN series.
There are several well-known ways of solving Eq.~\eqref{eq:OmEv},
which lead to different treatment of uncontrolled higher-order PN
terms---referred to as the Taylor T1 through T5
approximants~\cite{Damour:2000zb,Ajith:2011ec}. The T2 and T3
approximants cannot be applied to general precessing systems; we
therefore exclude them from this work. We now briefly review the
remaining approximants, which will be used throughout this
work.\footnote{See, e.g., Ref.~\cite{Boyle2007} for a more complete
description of approximants T1 through T4.} The most
straightforward approach is to evaluate the numerator and denominator
of Eq.~\eqref{eq:OmEv} and then solve the resulting ordinary
differential equation numerically, which is the Taylor T1
approximant. Another approach is to re-expand the ratio
$\mathcal{F}/(dE/dx)$ in a new power series in $x$, and then truncate at the
appropriate order. This gives the Taylor T4 approximant. Finally, one
can expand the \emph{inverse of the right-hand-side of
Eq.~\eqref{eq:OmEv}} in a new power series in $x$, truncate it at the
appropriate order, and then substitute the inverse of the truncated
series into the right-hand side in Eq.~\eqref{eq:OmEv}. This last
approach, known as the Taylor T5 approximant~\cite{Ajith:2011ec}, has
been introduced fairly recently.
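Schematically, the three approximants differ only in how this ratio is manipulated. The SymPy fragment below (our illustration, with symbolic coefficients through 2~PN, overall prefactors dropped, and $v=\sqrt{x}$ so that half-integer PN orders become integer powers) makes the distinction concrete:
\begin{verbatim}
# Taylor T1/T4/T5 from the same E and F, in the variable v = sqrt(x).
import sympy as sp

v = sp.symbols('v', positive=True)
a2, a3, a4, b2, b3, b4 = sp.symbols('a2 a3 a4 b2 b3 b4')
x = v**2
E = -sp.Rational(1, 2) * x * (1 + a2*x + a3*v**3 + a4*x**2)
F = x**5 * (1 + b2*x + b3*v**3 + b4*x**2)
rhs = -F / (sp.diff(E, v) / (2*v))   # dx/dt; note dE/dx = E'(v)/(2v)

T1 = rhs                                             # keep the ratio
T4 = (rhs / x**5).series(v, 0, 5).removeO() * x**5   # re-expand, truncate
T5 = x**5 / (x**5 / rhs).series(v, 0, 5).removeO()   # expand the inverse
\end{verbatim}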
\begin{table*}
\caption{ \label{tbl:Parameters}Numerical
relativity simulations utilized here. SXS ID refers to the
simulation number in Ref.~\cite{PhysRevLett.111.241104}, $q=m_1/m_2$ is the mass ratio,
$\vec{\chi}_{1,2}$ are the dimensionless spins, given in
coordinates where $\ensuremath{\hat{n}}(t=0) = \hat{x}$,
$\ensuremath{\hat{\ell}}(t=0)=\hat{z}$. $D_{0}$, $\Omega_{0}$ and $e$ are the initial
coordinate separation, the initial orbital
frequency, and the orbital eccentricity, respectively.
The first block lists the precessing runs utilized, where
$\vec{\chi}_{1,r}=(-0.18,-0.0479,-0.0378)$ and $\vec{\chi}_{2,r}=
(-0.0675,0.0779,-0.357)$.
The second block indicates 31 further precessing simulations used
in Fig.~\ref{fig:AngleLRandom31}, and the last block lists the
aligned spin systems for orbital phase comparisons.
}
\begin{ruledtabular}
\begin{tabular}{@{}llllllllll@{}}
Name & SXS ID & $q$ & $\vec{\chi}_{1}$ &
$\vec{\chi}_{2}$ & $D_{0}/M$ & $m\Omega_{0}$ & $e$ \\
\hline
q1\_0.5x & 0003 & 1.0 & (0.5,0.0,0) & (0,0,0) & 19 &
0.01128 & 0.003 \\
q1.5\_0.5x & 0017 & 1.5 & (0.5,0,0) & (0,0,0) & 16 &
0.01443 & $<2\times10^{-4}$ \\
q3\_0.5x & 0034 & 3.0 & (0.5,0,0) & (0,0,0) & 14 &
0.01743 & $<2\times10^{-4}$ \\
q5\_0.5x & & 5.0 & (0.5,0,0) & (0,0,0) & 15 &
0.01579 & 0.002 \\
q1\_two\_spins & 0163 & 1.0 & (0.52,0,-0.3) & (0.52,0,0.3)
& 15.3 & 0.01510 & 0.003 \\
q1.97\_random & 0146 & 1.97
& $\vec{\chi}_{1,r}$
& $\vec{\chi}_{2,r}$
& 15 & 0.01585 & $<10^{-4}$ \\
\\[-0.95em]
\hline
\\[-.95em]
31 random runs & 115--145 & $[1,2]$ & $\chi_1\le 0.5$ & $\chi_2\le 0.5$ &
15 & $\approx 0.0159$ & $[10^{-4}, 10^{-3}]$ \\
\\[-0.95em]
\hline\\[-.95em]
\verb|q1_0.5z| & 0005 & 1.0 & (0,0,0.5) & (0,0,0) & 19 &
0.01217 & 0.0003\\
\verb|q1_-0.5z| & 0004 & 1.0 & (0,0,-0.5) & (0,0,0) & 19 &
0.01131 & 0.0004\\
\verb|q1.5_0.5z| & 0013 & 1.5 & (0,0,0.5) &
(0,0,0) & 16 & 0.01438& 0.00014\\
\verb|q1.5_-0.5z| & 0012 & 1.5 & (0,0,-0.5) &
(0,0,0) & 16 & 0.01449 & 0.00007\\
\verb|q3_0.5z| & 0031 & 3.0 & (0,0,0.5) & (0,0,0)& 14 &
0.01734 & $<10^{-4}$ \\
\verb|q3_-0.5z| & 0038 & 3.0 & (0,0,-0.5) & (0,0,0) & 14 &
0.01756 & $<10^{-4}$ \\
\verb|q5_0.5z| & 0061 & 5.0 & (0,0,0.5) & (0,0,0) & 15 &
0.01570 & 0.004 \\
\verb|q5_-0.5z| & 0060 & 5.0 & (0,0,-0.5) & (0,0,0)& 15 &
0.01591 & 0.003 \\
\verb|q8_0.5z|& 0065 & 8.0 & (0,0,0.5) & (0,0,0) & 13 &
0.01922 & 0.004\\
\verb|q8_-0.5z| & 0064 & 8.0 & (0,0,-0.5) & (0,0,0) & 13 &
0.01954 & 0.0005 \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\subsubsection{Handling of spin terms}
\label{sec:PNOrders}
When constructing Taylor approximants that include the re-expansion of
the energy balance equation, the handling of spin terms becomes
important. In particular, terms of quadratic and higher order in
spins, such as $(\vec{S}_{i})^{2}$, appear in the evolution of
the orbital frequency at 3 PN and higher orders. These terms arise
from lower-order effects and represent incomplete information, since the
corresponding terms are unknown in the original power series for
the binding energy $E$ and the flux $\mathcal{F}$,
\begin{align}
\label{eq:EExpansion}
E(x) &= -\frac{1}{2}\,m\nu x\,\left(1+\sum_{k=2}a_{k}x^{k/2}\right), \\
\mathcal{F}(x) &= \frac{32}{5}\nu^{2}x^{5}\left(1+\sum_{k=2}
b_{k}x^{k/2}\right),
\end{align}
where $m=m_1+m_2$ and $\nu = m_1 m_2/m^2$, and
$m_{1,2}$ are the individual masses.
In these expansions, the spin-squared terms come in at 2 PN order and
thus appear in $a_4$ and $b_{4}$, cf. Eqs.~\eqref{eq:Energy-a4}
and~\eqref{eq:Flux-b4}. Then, in the re-expansion series of Taylor
T4,
\begin{equation}
\label{eq:rexpand}
S\equiv -\frac{\mathcal{F}}{dE/dx} = \frac{64 \nu}{5 m} x^5(1+\sum_{k=2}s_{k}x^{k/2}),
\end{equation}
the coefficients $s_k$ can be recursively determined, e.g.
\begin{align}
\label{eq:rexpandsol}
s_{4} &= b_{4}-3a_{4}-2s_{2}a_{2}, \\
s_{6} &= b_{6}-(4a_{6}+3s_{2}a_{4}+\frac{5}{2}s_{3}a_{3}+2s_{4}a_{2}).
\end{align}
Thus, the spin-squared terms in $a_4$ and $b_4$ will induce
spin-squared terms at 3PN order in $s_{6}$. The analogous conclusion
holds for Taylor T5. These spin-squared terms are incomplete as the
corresponding terms in the binding energy and flux (i.e. in $a_6$ and
$b_6$) are not known.
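The recursion is easy to verify symbolically; for instance, the following self-contained SymPy check (ours; the $(1+k/2)a_k$ coefficients in the denominator come from $dE/dx$) confirms the $s_4$ relation:
\begin{verbatim}
# Verify s4 = b4 - 3*a4 - 2*s2*a2 by direct series re-expansion.
import sympy as sp

v = sp.symbols('v', positive=True)   # v = sqrt(x)
a2, a3, a4, b2, b3, b4 = sp.symbols('a2 a3 a4 b2 b3 b4')
num = 1 + b2*v**2 + b3*v**3 + b4*v**4                        # from F
den = 1 + 2*a2*v**2 + sp.Rational(5, 2)*a3*v**3 + 3*a4*v**4  # from dE/dx
s = sp.series(num / den, v, 0, 5).removeO().expand()
s2, s4 = s.coeff(v, 2), s.coeff(v, 4)
print(sp.simplify(s4 - (b4 - 3*a4 - 2*s2*a2)))               # prints 0
\end{verbatim}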
This re-expansion has been handled in several ways in the
literature. For example, Nitz et~al.~\cite{Nitz:2013mxa} include only
terms which are linear in spin beyond 2 PN order. On the other hand,
Santamar\'ia et~al.~\cite{Santamaria:2010yb} keep \emph{all} terms in
spin arising from known terms in $E$ and $\mathcal{F}$. In the present
work, we also keep all terms up to 3.5 PN order, which is the highest
order to which non-spin terms are completely known. Similarly, we
include all terms when computing the precession frequency (see
Appendix~\ref{ap:orbitalEvolution}). We investigate the impact of different
spin-truncation choices in Sec.~\ref{sec:ChangeSpinTruncation}, along
with the impact of partially known 4 PN spin terms.
\subsection{Numerical Relativity Simulations}
\label{sec:NR}
To characterize the effectiveness of PN theory in reproducing NR
results, we have selected a subset of 16 simulations from the SXS
waveform catalog described in
Ref.~\cite{PhysRevLett.111.241104}.\footnote{The waveform and orbital
data are publicly available at
\url{https://www.black-holes.org/waveforms/}.} Our primary results
are based on six precessing simulations and a further ten
non-precessing ones for cross-comparisons. To check for systematic
effects, we use a further 31 precessing simulations with random
mass-ratios and spins. The parameters of these runs are given in Table
\ref{tbl:Parameters}. They were chosen to represent various degrees of
complexity in the dynamics: (i) precessing versus non-precessing
simulations, the latter with spins parallel or anti-parallel to
$\ensuremath{\hat{\ell}}$; (ii) one versus two spinning black holes; (iii) coverage of
mass ratio from $q=1$ to $q=8$; (iv) long simulations that cover more
than a precession cycle; and (v) a variety of orientations of
$\hat{\chi}_{1},\hat{\chi}_{2},\ensuremath{\hat{\ell}}$.
Figure~\ref{fig:precessionCones} shows the precession cones of the
normal to the orbital plane and the spins for the six primary
precessing cases in Table~\ref{tbl:Parameters}. The PN data were
computed using the Taylor T4 3.5 PN approximant.
The simulations from the catalog listed in Table~\ref{tbl:Parameters}
were run with numerical methods similar to~\cite{Buchman:2012dw}. A
generalized harmonic evolution
system~\cite{Friedrich1985,Garfinkle2002,Pretorius2005c,Lindblom2006}
is employed, and the gauge is determined by gauge source functions
$H_a$. During the inspiral phase of the simulations considered here,
$H_a$ is kept constant in the co-moving frame,
cf.~\cite{Scheel2009,Chu2009,Boyle2007}. About 1.5 orbits before
merger, the gauge is changed to damped harmonic
gauge~\cite{Lindblom2009c,Szilagyi:2009qz,Choptuik:2009ww}. This
gauge change happens outside the focus of the comparisons presented
here.
The simulation q5\_0.5x analyzed here is a re-run of the SXS
simulation SXS:BBH:0058 from Ref.~\cite{PhysRevLett.111.241104}. We
performed this re-run for two reasons: First, SXS:BBH:0058 changes to
damped harmonic gauge in the middle of the inspiral, rather than close
to merger as all other cases considered in this work. Second,
SXS:BBH:0058 uses an unsatisfactorily low numerical resolution during
the calculation of the black hole spins. Both these choices leave
noticeable imprints on the data from SXS:BBH:0058, and the re-run
q5\_0.5x allows us to quantify the impact of these deficiencies. We
discuss these effects in detail in Secs.~\ref{sec:Gauge}
and~\ref{sec:AH-resolution}. The re-run q5\_0.5x analyzed here is
performed with improved numerical techniques. Most importantly,
damped harmonic gauge is used essentially from the start of the
simulation, $t\gtrsim 100M$. The simulation q5\_0.5x also benefits
from improved adaptive mesh refinement~\cite{Szilagyi2014} and
improved methods for controlling the shape and size of the excision
boundaries; the latter methods are described in Sec.II.B. of
Ref.~\cite{Scheel:2014ina}.
We have performed convergence tests for some of the simulations;
Sec.~\ref{sec:tech_cons} will demonstrate with
Fig.~\ref{fig:convergenceWithLev} that numerical truncation error is
unimportant for the comparisons presented here.
\subsection{Characterizing Precession}
\label{CharacterizingPrecessionByRotors}
The symmetries of non-precessing systems greatly simplify the problem
of understanding the motion of the binary. In a non-precessing
system, the spin vectors are essentially constant, and two of the
rotational degrees of freedom are eliminated in the binary's orbital
elements. Assuming quasi-circular orbits, the entire system can be
described by the orbital phase $\Phi$, which can be defined as the
angle between $\ensuremath{\hat{n}}$ and $\hat{x}$. In post-Newtonian theory the
separation between the black holes can be derived from $d\Phi/dt$.
Thus comparison between post-Newtonian and numerical orbits, for
example, reduces entirely to the comparison between $\Phi_{\text{PN}}$
and $\Phi_{\text{NR}}$~\cite{Buonanno-Cook-Pretorius:2007, Boyle2007}.
For precessing systems, on the other hand, the concept of an orbital
phase is insufficient; $\Phi$ could be thought of as just one of
the three Euler angles. We saw in Sec.~\ref{sec:OrbitalEvolution}
that the orbital dynamics of a precessing system can be fairly
complex, involving the triad $(\ensuremath{\hat{n}}, \ensuremath{\hat{\lambda}}, \ensuremath{\hat{\ell}})$ (or
equivalently the frame rotor $\Rf$) as well as the two spin vectors
$\vec{\chi}_{1}$ and $\vec{\chi}_{2}$---each of which is, of course,
time dependent. When comparing post-Newtonian and numerical results,
we need to measure differences between each of these quantities in
their respective systems.
To compare the positions and velocities of the black holes themselves,
we can condense the information about the triads into the quaternion
quantity~\cite{Boyle:2013}
\begin{equation}
R_{\Delta} \coloneqq \Rf^{\text{PN}}\, \Rbarf^{\text{NR}}~,
\end{equation}
which represents the rotation needed to align the PN frame with the NR
frame. This is a geometrically meaningful measure of the relative
difference between two frames. We can reduce this to a single real
number by taking the magnitude of the logarithm of this quantity,
defining the angle\footnote{More explanation of these expressions,
along with relevant formulas for calculating their values, can be
found in Appendix~\ref{sec:UsefulQuaternionFormulas}.}
\begin{equation}
\label{eq:PhaseDifference}
\Phi_{\Delta} \coloneqq 2 \left\lvert \log R_{\Delta} \right\rvert~.
\end{equation}
This measure has various useful qualities. It is invariant, in
the sense that any basis frame used to define $\Rf^{\text{PN}}$ and $\Rf^{\text{NR}}$
will result in the same value of $\Phi_{\Delta}$. It conveniently
distills the information about the difference between the frames
into a single value, but is also non-degenerate in the sense that
$\Phi_{\Delta} = 0$ if and only if the frames are identical. It
also reduces precisely to $\Phi_{\text{PN}} - \Phi_{\text{NR}}$ for
non-precessing systems; for precessing systems it also incorporates
contributions from the relative orientations of the orbital
planes.%
\footnote{It is interesting to note that any attempt to define
the orbital phases of precessing systems separately, and then
compare them as some $\Phi_{B} - \Phi_{A}$, is either ill defined or
degenerate---as shown in
Appendix~\ref{sec:InadequacyOfSeparatePhases}. This does not
mean that it is impossible to define such phases, but at best they
will be degenerate; multiple angles would be needed to represent
the full dynamics.}
Despite these useful features of $\Phi_{\Delta}$, it may sometimes be
interesting to use different measures, to extract individual
components of the binary evolution. For example,
Eq.~\eqref{eq:ellHatEv} describes the precession of the orbital plane.
When comparing this precession for two approaches, a more informative
quantity than $\Phi_{\Delta}$ is simply the angle between the
$\ensuremath{\hat{\ell}}$ vectors in the two systems:
\begin{equation}
\label{eq:LAngle}
\angle L = \cos^{-1}\left(\ensuremath{\hat{\ell}}^{\rm PN}\cdot\ensuremath{\hat{\ell}}^{\rm NR}\right).
\end{equation}
Similarly, we will be interested in understanding the evolution of the
spin vectors, as given in Eqs.~\eqref{eq:SpinsEv}. For this purpose,
we define the angles between the spin vectors:
\begin{subequations}
\label{eq:ChiAngles}
\begin{align}
\angle \chi_{1} &= \cos^{-1}\left(\hat{\chi}_{1}^{\rm PN}\cdot\hat{\chi}_{1}^{\rm NR}\right), \\
\angle \chi_{2} &=
\cos^{-1}\left(\hat{\chi}_{2}^{\rm PN}\cdot\hat{\chi}_{2}^{\rm NR}\right).
\end{align}
\end{subequations}
We will use all four of these angles below to compare the
post-Newtonian and numerical orbital elements.
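For illustration, these diagnostics amount to only a few lines of code; the sketch below (ours, with quaternions represented as plain $(w,x,y,z)$ arrays, and with the equivalent rotors $R$ and $-R$ identified) evaluates $\Phi_\Delta$ and the vector angles:
\begin{verbatim}
# Phi_Delta = 2*|log(R_PN * conj(R_NR))| and angles between unit vectors.
import numpy as np

def quat_mul(p, q):
    w1, v1 = p[0], p[1:]
    w2, v2 = q[0], q[1:]
    return np.concatenate(([w1*w2 - v1 @ v2],
                           w1*v2 + w2*v1 + np.cross(v1, v2)))

def phi_delta(R_pn, R_nr):
    d = quat_mul(R_pn, R_nr * np.array([1.0, -1, -1, -1]))  # R_Delta
    # For a unit quaternion (cos t, sin t * u), |log| = t:
    return 2.0 * np.arctan2(np.linalg.norm(d[1:]), abs(d[0]))

def angle(u, v):   # e.g. the angle between ell_PN and ell_NR
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))
\end{verbatim}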
\begin{figure}
\includegraphics[width=0.96\linewidth]{averagingProcedureExampleQ197} \\[10pt]
\includegraphics[width=0.96\linewidth]{averagingProcedureExampleQ5}
\caption{Examples of the averaging procedure and error estimates
employed for all comparisons. Shown here are q1.97\_random and
q5\_0.5x. PN evolutions were performed with the Taylor T1
approximant. The thin blue lines show all the PN-NR matching
intervals.
\label{fig:avProcedure}
}
\end{figure}
\subsection{Matching Post-Newtonian to Numerical Relativity}
\label{sec:matching}
When comparing PN theory to NR results, it is important to ensure that
the initial conditions used in both cases represent the same physical
situation. We choose a particular orbital frequency $\Omega_{m}$ and
use the NR data to convert it to a time $t_{m}$. To initialize a PN
evolution at $t_{m}$, we need to specify
\begin{gather}
q,\chi_{1},\chi_{2}, \label{eq:conserved}\\
\ensuremath{\hat{\ell}},\ensuremath{\hat{n}},\hat{\chi}_{1},\hat{\chi}_{2}, \label{eq:orientation} \\
\Omega. \label{eq:sep}
\end{gather}
The quantities \eqref{eq:conserved} are conserved during the PN
evolution. The quantities \eqref{eq:orientation} determine the
orientation of the binary and its spins relative to the inertial
triad $(\hat{x},\hat{y},\hat{z})$. The orbital frequency $\Omega$ in
Eq.~\eqref{eq:sep}, finally, parametrizes the separation of the binary
at $t_{m}$. The simplest approach is to initialize the PN evolution
from the respective quantities in the initial data of the NR
evolution. This would neglect initial transients in NR data as in,
e.g., Fig.~1 of Ref.~\cite{Chu2009}. These transients affect the
masses and spins of the black holes, so any further PN-NR comparisons
would be comparing slightly different physical configurations. The NR
transients decay away within the first orbit of the NR simulation, so
one can consider initializing the PN evolution from NR at a time after
the NR run has settled down. However, the generally non-zero (albeit
very small) orbital eccentricity in the NR simulation can lead to
systematic errors in the subsequent comparison as pointed out in
Ref.~\cite{Boyle2007}.
Therefore, we use time-averaged quantities evaluated after the initial
transients have vanished. In particular, given a numerical relativity
simulation, we set the PN variables listed in Eq.~\eqref{eq:conserved}
to their numerical relativity values after junk radiation has
propagated away.
The remaining nine quantities Eqs.~\eqref{eq:orientation}
and~\eqref{eq:sep} must satisfy the constraint $\ensuremath{\hat{\ell}} \cdot
\ensuremath{\hat{n}}\equiv 0$. We determine them with constrained minimization by
first choosing an orbital frequency interval $[\Omega_m
-\delta\Omega/2, \Omega_m+\delta\Omega/2]$ of width $\delta\Omega$.
Computing the corresponding time interval $[t_{i},t_{f}]$ in the NR
simulation, we define the time average of any quantity $Q$ by
\begin{equation}
\langle Q\rangle=\frac{1}{t_f-t_i}\;\int_{t_i}^{t_f} Q\, dt.
\end{equation}
Using these averages, we construct the objective functional $\cal S$
as
\begin{equation}
\label{eq:ObjectiveFunction}
{\cal S}=\langle(\angle L)^{2}\rangle+\langle(\angle
\chi_{1})^{2}\rangle+\langle(\angle \chi_{2})^{2}\rangle + \langle(\Delta\Omega)^{2}\rangle
\end{equation}
where $\Delta\Omega = (\Omega_{\rm PN}-\Omega_{\rm NR})/\Omega_{\rm
NR}$. When the magnitude of a black-hole spin is below $10^{-5}$, the
corresponding term is dropped from
Eq.~\eqref{eq:ObjectiveFunction}. The objective functional is then
minimized using the SLSQP algorithm~\cite{Kraft:1988, pyopt-paper} to
allow for constrained minimization. In
Eq.~(\ref{eq:ObjectiveFunction}) we use equal weights for each term;
other choices of the weights do not change the qualitative picture
that we present.
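A toy version of this fit, reduced to just $\ensuremath{\hat{\ell}}$ and $\ensuremath{\hat{n}}$ with made-up target directions (the production code fits all nine quantities against the time averages), shows the SLSQP mechanics:
\begin{verbatim}
# Toy constrained fit: unit vectors ell, n with ell . n = 0.
import numpy as np
from scipy.optimize import minimize

ell_nr = np.array([0.02, 0.01, 1.0]); ell_nr /= np.linalg.norm(ell_nr)
n_nr = np.array([1.0, 0.0, -0.02]);   n_nr /= np.linalg.norm(n_nr)

def objective(p):
    ell, n = p[:3], p[3:]
    return (np.arccos(np.clip(ell @ ell_nr, -1, 1))**2
            + np.arccos(np.clip(n @ n_nr, -1, 1))**2)

cons = [{'type': 'eq', 'fun': lambda p: p[:3] @ p[3:]},        # ell . n
        {'type': 'eq', 'fun': lambda p: p[:3] @ p[:3] - 1.0},  # |ell| = 1
        {'type': 'eq', 'fun': lambda p: p[3:] @ p[3:] - 1.0}]  # |n| = 1
p0 = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
fit = minimize(objective, p0, method='SLSQP', constraints=cons)
\end{verbatim}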
The frequency interval $[\Omega_m\pm \delta\Omega/2]$ is chosen based
on several considerations. First, it lies after the junk radiation
has propagated away. Second, it is made wide enough so that any
residual eccentricity effects average out. Finally, we would like to
match PN and NR as early as possible. But since we want to compare
various cases to each other, the lowest possible matching frequency
will be limited by the shortest NR run (case q8\_-0.5z). Within these
constraints, we choose several matching intervals, in order to
estimate the impact of the choice of matching interval on our eventual
results. Specifically, we use three matching frequencies
\begin{equation}
\label{eq:Omega_m}
m\Omega_m\in \{ 0.021067,0.021264,0.021461 \},
\end{equation}
and employ four different matching windows for each, namely
\begin{equation}
\label{eq:deltaOmega}
\delta\Omega/\Omega_m\in \{0.06,0.08,0.1,0.12\}.
\end{equation}
These frequencies correspond approximately to a range of 10 to 27
orbits before merger, depending on the parameters of the binary, with
the lower limit for the case q1.0\_-0.5x and the upper for q8.0\_0.5x.
Matching at multiple frequencies and frequency windows allows an
estimate of the error in the matching and also ensures that the
results are not sensitive to the matching interval being used. In
this article, we generally report results that are averaged over the
12 PN-NR comparisons performed with the different matching intervals.
We report error bars ranging from the smallest to the largest result
among the 12 matching intervals. As examples,
Fig.~\ref{fig:avProcedure} shows $\Phi_{\Delta}$ as a function of time
to merger $t_{\rm merge}$ for the cases q1.97\_random and q5\_0.5x for
all the matching frequencies and intervals, as well as the average
result and an estimate of the error. Here $t_{\rm merge}$ is the time
in the NR simulation when the common horizon is detected.
\section{Results}
\label{sec:comparison}
\subsection{Precession Comparisons}
\label{sec:PrecessionComparisons}
We apply the matching procedure of Sec.~\ref{sec:matching} to the
precessing NR simulations in Table \ref{tbl:Parameters}. PN--NR
matching is always performed at the frequencies given by
Eq.~\eqref{eq:Omega_m} which are the lowest feasible orbital
frequencies across all cases in Table \ref{tbl:Parameters}.
Figure~\ref{fig:precessionCones} shows the precession cones for the
normal to the orbital plane $\ensuremath{\hat{\ell}}$ and the spins
$\hat{\chi}_{1,2}$. As
time progresses, $\ensuremath{\hat{\ell}}$ and $\hat{\chi}_{1,2}$ undergo precession
and nutation, and the precession cone widens due to the emission of
gravitational radiation. Qualitatively, the PN results seem to follow
the NR results well, until close to merger.
\begin{figure}
\includegraphics[width=0.98\linewidth]{AngleL2cases}
\caption{\label{fig:2casesLAngles}
Angle $\angle L$ by which $\ensuremath{\hat{\ell}}^{\rm PN}(t)$ differs from
$\ensuremath{\hat{\ell}}^{\rm NR}(t)$ for the configuration q1\_0.5x (red lines) and
q5\_0.5x (black lines). $\angle L \le 0.2^{\circ}$ except very
close to merger. In each case, the PN predictions based on
different PN approximants are shown in different line styles.
Shown is the point-wise average of 12 $\angle L(t)$ curves, i.e. the
thick red line of Fig.~\ref{fig:avProcedure}. The thin horizontal
lines show the widest edges of the PN matching intervals.}
\end{figure}
We now turn to a quantitative analysis of the precession dynamics,
establishing first that the choice of Taylor approximant is of minor
importance for the precession dynamics. We match PN dynamics to the
NR simulations q5\_0.5x and q1\_0.5x for the Taylor approximants T1,
T4 and T5. We then compute the angles $\angle L$ and $\angle
\chi_{1}$. Figure~\ref{fig:2casesLAngles} shows the resulting $\angle
L$. During most of the inspiral, we find $\angle L$ of order a few
$10^{-3}$ radians, increasing to $\sim 0.1$ radians during the last
$1000M$ before merger. Thus the direction of the normal to the
orbital plane is reproduced well by PN theory. This result is
virtually independent of the Taylor approximant, suggesting that the
choice of approximant only weakly influences how well the PN precession
equations track the motion of the orbital plane. In other words,
precession dynamics does not depend on details of orbital phasing like
the unmodeled higher-order terms in which the Taylor approximants
differ from each other.
Turning to the spin direction $\hat{\chi}_{1}$, we compute the angle
$\angle \chi_{1}$ between $\hat{\chi}_{1}^{\rm NR}(t)$ and
$\hat\chi_{1}^{\rm PN}(t) $ and plot the result in
Fig.~\ref{fig:2casesSpinAngles}. While Fig.~\ref{fig:2casesSpinAngles}
looks busy, the first conclusion is that $\angle \chi_{1}$ is quite
small, $\lesssim 0.01$ rad, through most of the inspiral, and rises
somewhat close to merger.
\begin{figure}
\includegraphics[width=0.98\linewidth]{AngleS2cases}
\caption{Angle $\angle \chi_{1}$ by which $\vec{\chi}_{1}^{\rm PN}(t)$
differs from $\vec{\chi}_{1}^{\rm NR}(t)$ for the configuration
q1\_0.5x (red lines) and q5\_0.5x (black lines). In each case,
the PN predictions based on different PN approximants are shown in
different line styles. The thin horizontal lines show the widest
edges of the PN matching intervals.}
\label{fig:2casesSpinAngles}
\end{figure}
The pronounced short-period oscillations of $\angle\chi_1$ in
Fig.~\ref{fig:2casesSpinAngles} are caused by differences between
PN-nutation features and NR-nutation features. To better understand
the nutation features and their impact on the angle $\angle \chi_{1}$,
we remove nutation features by filtering out all frequencies
comparable to the orbital frequency. This is possible because the
precession frequency is much smaller than the nutation frequency. The
filtering is performed with a 3rd order, bi-directional low pass
Butterworth filter~\cite{paarmann2001design} with a fixed cutoff
frequency chosen to be lower than the nutation frequency at the start
of the inspiral. Due to the nature of the filtering, the resulting
averaged spin suffers from edge effects, which affect approximately
the first and last $1000M$ of the inspiral. Furthermore, the precession
frequency close to merger becomes comparable to the nutation frequency
at the start of the simulation, so the filtering is no longer
faithful in this region. Therefore, we only use the ``averaged'' spins
where such features are absent.
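A minimal version of this orbit-averaging step (ours, assuming a uniformly sampled spin time series; the cutoff frequency is a free choice subject to the criteria above) could read:
\begin{verbatim}
# Zero-phase low-pass filtering of the spin: 3rd-order Butterworth,
# applied forward and backward (scipy's filtfilt).
import numpy as np
from scipy.signal import butter, filtfilt

def orbit_average(chi, dt, f_cutoff):
    b, a = butter(3, f_cutoff, btype='low', fs=1.0/dt)
    smoothed = filtfilt(b, a, chi, axis=0)   # chi has shape (N, 3)
    # Re-normalize (our choice) so the average stays a unit vector:
    return smoothed / np.linalg.norm(smoothed, axis=1, keepdims=True)
\end{verbatim}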
Applying this smoothing procedure to both $\hat\chi_1^{\rm PN}$ and
$\hat\chi_1^{\rm NR}$ for the run q5\_0.5x, we compute the angle
$\angle\tilde\chi_1$ between the averaged spin vectors,
$\tilde\chi_1^{\rm PN}$ and $\tilde\chi_1^{\rm NR}$. This angle is
plotted in Fig.~\ref{fig:AngleSmoothedS2cases}\footnote{To illustrate
edge effects of the Butterworth filter,
Fig.~\ref{fig:AngleSmoothedS2cases} includes the early and late time
periods where the filter affects $\angle\tilde\chi_1$.}, which shows
results only for the Taylor T1 approximant and for a single
matching interval, specified by $m\Omega_m=0.0210597$ and
$\delta\Omega/\Omega_m=0.1$. The orbit-averaged spin directions
$\tilde\chi_1^{\rm NR/PN}$ agree significantly better with each other
than the non-averaged ones (cf. the black line in
Fig.~\ref{fig:AngleSmoothedS2cases}, which is duplicated from
Fig.~\ref{fig:2casesSpinAngles}). In fact, the orbit-averaged spin
precession agrees between NR and PN as well as the orbital
angular-momentum precession does, cf.\ Fig.~\ref{fig:2casesLAngles}. Thus, the
difference in the spin dynamics is dominated by the nutation features,
with the orbit-averaged spin dynamics agreeing well between PN and NR.
\begin{figure}
\includegraphics[width=0.98\linewidth]{AngleSSmoothQ5}
\caption{ \label{fig:AngleSmoothedS2cases} Angle $\angle
\tilde{\chi}_{1}$ between the ``orbit-averaged'' spins for the
configuration q5\_0.5x. The non-orbit-averaged difference
$\angle\chi_{1}$ (cf. Fig.~\ref{fig:2casesSpinAngles}) is shown
for comparison. Shown is one matching interval as indicated by
the thin horizontal line. }
\end{figure}
\begin{figure}
\includegraphics[width=0.98\linewidth]{ProjectionSpinsQ5}
\caption{ \label{fig:ProjectionSpinsQ5} The projection of
$\hat{\chi}_{1}^{\rm NR}$ and $\hat{\chi}_{1}^{\rm PN}$ onto the
$\hat{e}_{2}-\hat{e}_{3}$ plane described in the text for case
q5\_0.5x. The system is shown in the interval
$t-t_{\rm merge}\in[-6662,-1556]$. The PN data show variations
essentially only along the $\hat{e}_{3}$ axis. Meanwhile, the NR data
show variations in the $\hat{e}_{2}$ and $\hat{e}_{3}$ directions of
comparable magnitude. The solid
symbols (black diamond for NR, red square for PN) indicate the
data at the start of the plotted interval, chosen such that
$\hat\chi_1\cdot\hat n$ is maximal---i.e., where the spin
projection into the orbital plane is parallel to $\hat n$. The
subsequent four open symbols (blue diamonds for NR, green squares
for PN) indicate the positions 1/8, 1/4, 3/8, and 1/2 of
an orbit later. }
\end{figure}
\begin{figure}
\includegraphics[width=0.96\linewidth]{AngleEllSmoothQ5} \\[10pt]
\includegraphics[width=\linewidth]{ProjectionEllHatQ5}
\caption{ \label{fig:ProjectionEllHatQ5} Characterization of
nutation effects of the orbital angular momentum. {\bf Top}:
angle $\angle \tilde{L}$ between the ``averaged'' $\ensuremath{\hat{\ell}}$ in PN
and NR for the configuration q5\_0.5x (thick red line). $\angle
L$ is shown in thin black line for comparison (cf.
Fig.~\ref{fig:AngleSmoothedS2cases}). The thin blue line shows
$\angle (\ensuremath{\hat{\ell}},\tilde{\ell})$ between the raw and the
averaged signal. Note that it is larger than both $\angle L$ and
$\angle \tilde{L}$. {\bf Bottom}: the projection of $\ensuremath{\hat{\ell}}^{\rm
NR}$ (gray) and $\ensuremath{\hat{\ell}}^{\rm PN}$ (red) onto the
$\hat{e}_{2}-\hat{e}_{3}$ plane described in the text for case
q5\_0.5x (cf. Fig.~\ref{fig:ProjectionSpinsQ5}). The system is
shown in the interval $[-6662,-1556]$. Both PN and NR show the
same behavior, in contrast to the behavior of the spin in
Fig.~\ref{fig:ProjectionSpinsQ5}. The PN-NR matching interval is
indicated by the horizontal line in the top panel. }
\end{figure}
To characterize the nutation features in the spin vectors, we
introduce a coordinate system which is specially adapted to
highlighting nutation effects. The idea is to visualize nutation with
respect to the averaged spin vector $\tilde\chi$. We compute the
time-derivative $\dot{\tilde{\chi}}$ numerically. Assuming that the
``averaged'' spin is undergoing pure precession, so that
$\tilde{\chi}\cdot\dot{\tilde{\chi}} = 0$, we define a new coordinate
system $(\hat{e}_{1},\hat{e}_{2},\hat{e}_{3})$ by
$\hat{e}_{1}=\tilde{\chi},\hat{e}_{2}=\dot{\tilde{\chi}}/|\dot{\tilde{\chi}}|,
\hat{e}_{3} = \hat{e}_1\times \hat{e}_{2} $. The spin is now
projected onto the $\hat{e}_{2}-\hat{e}_{3}$ plane, thus showing the
motion of the spin in a frame ``coprecessing'' with the averaged
spin. This allows us to approximately decouple precession and nutation
and compare them separately between PN and NR.
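In practice this frame construction is a few lines of linear algebra; a sketch (ours, assuming the raw spin \texttt{chi} and its filtered average \texttt{chi\_avg} are uniformly sampled $(N,3)$ arrays) is:
\begin{verbatim}
# Project the spin onto the (e2, e3) plane of the coprecessing frame:
# e1 = chi_avg, e2 = d(chi_avg)/dt (normalized), e3 = e1 x e2.
import numpy as np

def nutation_projection(chi, chi_avg, dt):
    d_avg = np.gradient(chi_avg, dt, axis=0)
    e1 = chi_avg / np.linalg.norm(chi_avg, axis=1, keepdims=True)
    e2 = d_avg / np.linalg.norm(d_avg, axis=1, keepdims=True)
    e3 = np.cross(e1, e2)
    return np.stack([(chi * e2).sum(axis=1),   # component along e2
                     (chi * e3).sum(axis=1)], axis=1)
\end{verbatim}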
Figure~\ref{fig:ProjectionSpinsQ5} plots the projection of the spins
$\chi_1^{\rm NR}$ and $\chi_1^{\rm PN}$ onto their respective ``orbit
averaged'' $\hat{e}_{2}-\hat{e}_{3}$ planes. We see that the behaviors
of the NR spin and the PN spin are qualitatively different: For this
single-spin system, the PN spin essentially changes only in the
$\hat{e}_3$ direction (i.e., orthogonal to its average motion
$\dot{\tilde{\chi}}^{\rm PN}$). In contrast, the NR spin undergoes
elliptical motion with the excursion along its $\hat e_2$ axis (i.e.,
along the direction of the average motion) about several times larger
than the oscillations along $\hat e_{3}$. The symbols plotted in
Fig.~\ref{fig:ProjectionSpinsQ5} reveal that each of the elliptic
``orbits'' corresponds approximately to half an orbit of the binary,
consistent with the interpretation of this motion as nutation. The
features exhibited in Fig.~\ref{fig:ProjectionSpinsQ5} are similar
across all the single-spinning precessing cases considered in this
work. The small variations in spin direction exhibited in
Fig.~\ref{fig:ProjectionSpinsQ5} are orders of magnitude smaller than
the parameter-estimation accuracy of LIGO, e.g.~\cite{Veitch:2014wba},
and so we do not expect these nutation features to have a
significant impact on GW data analysis.
\begin{figure*}
\includegraphics[width=0.48\linewidth]{AngleLAll} \hfill
\includegraphics[width=0.48\linewidth]{AngleLAllVsPhi} \\[0.2cm]
\includegraphics[width=0.48\linewidth]{AngleSAllVsPhi} \hfill
\includegraphics[width=0.48\linewidth]{AngleSSmoothVsPhi}
\caption{ \label{fig:AngleLAll} Comparison of orbital plane and spin
    precession for the six primary precessing NR simulations. {\bf
Top Left}: $\angle L$ as a function of time to merger. {\bf Top
right}: $\angle L$ as a function of \emph{orbital phase}. {\bf
Bottom left}: $\angle \chi_{1}$ as a function of orbital
phase. {\bf Bottom right}: $\angle \tilde{\chi}_{1}$ between the
averaged spins. All data plotted are averages over 12 matching
intervals, cf. Fig.~\ref{fig:avProcedure}, utilizing the Taylor T4
PN approximant. The thin horizontal lines in the top left panel
show the widest edges of the PN matching intervals.}
\end{figure*}
Let us now apply our nutation analysis to the orbital angular momentum
directions $\ensuremath{\hat{\ell}}$. Analogous to the spin, we compute averages
$\tilde\ell^{\rm NR}$ and $\tilde\ell^{\rm PN}$, and compute the angle
between the directions of the averages, $\angle\tilde
L=\angle\left(\tilde\ell^{\rm PN}, \tilde\ell^{\rm NR}\right)$. This
angle---plotted in the top panel of
Fig.~\ref{fig:ProjectionEllHatQ5}---agrees very well with the
difference $\angle L$ that was computed without orbit-averaging. This
indicates that the nutation features of $\hat\ell$ agree between NR
and PN. The top panel of Fig.~\ref{fig:ProjectionEllHatQ5} also plots the
angle between the raw $\hat\ell^{\rm NR}$ and the averaged
$\tilde\ell^{\rm NR}$, i.e. the opening angle of the nutation
oscillations. As is apparent in Fig.~\ref{fig:ProjectionEllHatQ5},
the angle between $\hat\ell^{\rm NR}$ and $\tilde\ell^{\rm NR}$ is
about 10 times larger than the difference between NR and PN ($\angle
L$ or $\angle \tilde L$), confirming that the
nutation features are resolved well above the level of the PN--NR disagreement. The lower panel of
Fig.~\ref{fig:ProjectionEllHatQ5} shows the projection of $\hat\ell$
orthogonal to the direction of the average $\tilde\ell$. In contrast
to the spins shown in Fig.~\ref{fig:ProjectionSpinsQ5}, the nutation
behavior of $\ensuremath{\hat{\ell}}$ is in close agreement between NR and PN: For
both, $\ensuremath{\hat{\ell}}$ precesses in a circle around $\tilde\ell$, with
identical period, phasing, and with almost identical amplitude. We
also point out that the shape of the nutation features differs between
$\ensuremath{\hat{\ell}}$ and $\hat{\chi}_{1}$: $\ensuremath{\hat{\ell}}$ circles twice per orbit
around its average $\tilde\ell$, on an almost perfect circle with
equal amplitude in the $\hat e_2$ and $\hat e_3$ directions.
We now extend our precession dynamics analysis to the remaining five
primary precessing NR simulations listed in
Table~\ref{tbl:Parameters}. The top left panel of Figure
\ref{fig:AngleLAll} shows $\angle L$. The difference in the direction
of the normal to the orbital plane is small; generally $\angle
L\lesssim 10^{-2}$ radians, except close to merger. Thus it is evident
that the trends seen in Fig.~\ref{fig:2casesLAngles} for $\angle L$
hold across all the precessing cases. To make this behavior clearer,
we parameterize the inspiral using the orbital phase instead of time,
by plotting the angles versus the orbital phase in the NR simulation,
as shown in the top right panel of Fig.~\ref{fig:AngleLAll}.
Thus, until a few orbits before merger, PN represents the precession and
nutation of the orbital plane well.
The bottom left panel of Fig.~\ref{fig:AngleLAll} establishes
qualitatively good agreement for $\angle\chi_1$, with slightly higher
values than $\angle L$. As already illustrated in
Fig.~\ref{fig:AngleSmoothedS2cases}, nutation features dominate the
difference. Averaging away the nutation features, we plot the angle
$\angle \tilde{\chi}_{1}$ between the smoothed spins in the bottom
right panel of Fig.~\ref{fig:AngleLAll}, where the behavior of $\angle
\tilde{\chi}_{1}$ is very similar to that of $\angle L$. This confirms that
the main disagreement between PN and NR spin dynamics comes from
nutation features, and suggests that the secular precession of the
spins is well captured across all cases, whereas the nutation of the
spins is not.
\begin{figure}
\centering
\includegraphics[width=0.96\linewidth]{AngleLRandom31}
\caption{ \label{fig:AngleLRandom31}
  $\angle L$ for 31 additional precessing configurations with
  arbitrarily oriented spins, as well as the case q1.97\_random. Here
  $q\in[1,2],\ \chi_{1,2}\le 0.5$. For all cases, $\angle L <
0.5^{\circ}$ throughout most of the inspiral.
All data plotted
are averages over 12 matching intervals,
cf. Fig.~\ref{fig:avProcedure}.}
\end{figure}
All configurations considered so far except q1.97\_random have
$\vec{S}\cdot\ensuremath{\hat{\ell}}=0$ at the start of the simulations, where $\vec
S=\vec S_1+\vec S_2$ is the total spin angular momentum of the
system. When $\vec{S}\cdot\ensuremath{\hat{\ell}}=0$, several terms in PN equations
vanish, in particular the spin orbit terms in the expansions of the
binding energy, the flux and the orbital precession frequency, see
Eqs.~\eqref{eq:EExpansion}, \eqref{eq:FExpansion},
and~\eqref{eq:varpi} in Appendix A.
To verify whether $\vec{S}\cdot\ensuremath{\hat{\ell}}=0$ introduces a bias to our
analysis, we perform our comparison on an additional set of 31
binaries with randomly oriented spins. These binaries have mass ratio
$1\le q \le 2$, spin magnitudes $0\le\chi_{1,2}\le0.5$, and correspond
to cases SXS:BBH:0115--SXS:BBH:0146 in the SXS
catalog. Fig.~\ref{fig:AngleLRandom31} plots $\angle L$ for these
additional 31 PN-NR comparisons in gray, with q1.97\_random
highlighted in orange. The disagreement between PN and NR is similarly
small in all of these cases, leading us to conclude that our results
are robust in this region of the parameter space.
\subsection{Orbital Phase Comparisons}
\label{sec:OrbitalPhaseComparisons}
\begin{figure}
\centering
\includegraphics[width=0.98\linewidth]{massRatioDepLinear}
\caption{\label{fig:massRatioDep} $\Phi_{\Delta}$ as a function of
mass ratio for BBH systems with $\chi_1=0.5$, and spin direction
aligned (top), orthogonal (middle), and anti-aligned (bottom) with
the orbital angular momentum. For clarity, the
aligned/anti-aligned data are offset by $+0.5$ and $-0.5$,
respectively, with the thin horizontal black lines indicating zero
for each set of curves. Plotted is $\Phi_\Delta$ averaged over
the 12 matching intervals, cf. Fig.~\ref{fig:avProcedure}, and for
three different Taylor approximants.}
\end{figure}
Along with the precession quantities described above, the orbital
phase plays a key role in constructing PN waveforms. We use
$\Phi_{\Delta}$, a geometrically invariant angle that reduces to the
orbital phase difference for non-precessing binaries
(cf. Sec.~\ref{CharacterizingPrecessionByRotors}), to characterize
phasing effects. We focus on single spin systems with mass-ratios
from 1 to 8, where the more massive black hole carries a spin of
$\chi_1=0.5$, and where the spin is aligned or anti-aligned with the
orbital angular momentum, or where the spin is initially tangent to
the orbital plane. We match all NR simulations to post-Newtonian
inspiral dynamics as described in Sec.~\ref{sec:matching}, using the
12 matching intervals specified in Eqs.~\eqref{eq:Omega_m}
and~\eqref{eq:deltaOmega}. We then compute the phase difference
$\Phi_\Delta$ at the time at which the NR simulation reaches orbital
frequency $m\Omega=0.03$.
The results are presented in Fig.~\ref{fig:massRatioDep}, grouped
based on the initial orientation of the spins: aligned, anti-aligned,
and in the initial orbital plane. For aligned runs, there are clear
trends for Taylor T1 and T5 approximants: for T1, differences decrease
with increasing mass ratio (at least up to $q=8$); for T5, differences
increase. For Taylor T4, the phase difference $\Phi_\Delta$ has a
minimum and there is an overall increase for higher mass ratios. For
anti-aligned runs, Taylor T5 shows the same trends as for the aligned
spins. Taylor T4 and T1 behaviors, however, have reversed: T4
demonstrates a clear increasing trend with mass ratio, whereas T1
passes through a minimum with overall increases for higher mass
ratios. Our results are also qualitatively consistent with the results
described in \cite{HannamEtAl:2010} as we find that for equal mass
binaries, the Taylor T4 approximant performs better than the Taylor T1
approximant (both for aligned and anti-aligned spins).
For the in-plane precessing runs, we see clear trends for all 3
approximants: Taylor T4 and T5 both show increasing differences with
increasing mass ratio, and T1 shows decreasing differences. These
trends for precessing binaries are consistent with previous work on
non-spinning binaries~\cite{MacDonald:2012mp}, which is expected since
for $\vec{S}\cdot\ensuremath{\hat{\ell}}=0$ many of the same terms in the binding energy
and flux vanish as for non-spinning binaries. Overall, we find that
for different orientations and mass ratios, no one Taylor approximant
performs better than the rest, as expected if the differences between
the approximants arise from different treatment of higher-order terms.
\subsection{Convergence with PN order}
\label{sec:PNConvergence}
\begin{figure}
\centering
\includegraphics[width=0.96\linewidth, trim=0 26 0 0]{q3_PrecessionConv}\\
\includegraphics[width=0.96\linewidth]{q3_PrecessionConvSpin}
\caption{Comparison of PN-NR precession dynamics when the expansion
order of the PN precession equations is varied. Shown is the case
q3\_0.5x. The top panel shows the precession of the orbital plane,
and the bottom panel of the spin $\hat\chi_1$ (without and with
averaging). All data shown are averages over 12 matching
intervals, cf. Fig.~\ref{fig:avProcedure}.}
\label{fig:q3PrecConv}
\end{figure}
So far all comparisons were performed using all available
post-Newtonian information. It is also instructive to consider
behavior at different PN order, as this reveals the convergence
properties of the PN series, and allows estimates of how accurate
higher order PN expressions might be.
The precession frequency $\varpi$, given in Eq.~\eqref{eq:varpi}, is a
product of series in the frequency parameter $x$. We multiply out
this product, and truncate it at various PN orders from leading order
(corresponding to 1.5PN) through next-to-next-to-leading order
(corresponding to 3.5PN). Similarly, the spin precession frequencies
$\vec\Omega_{1,2}$ in Eqs.~\eqref{eq:SpinsEv}
and~\eqref{eq:OmegaSpinsEv} are power series in $x$. We truncate the
power series for $\vec\Omega_{1,2}$ in the same fashion as the power
series for $\varpi$, but keep the orbital phase evolution at 3.5PN
order, where we use the TaylorT4 prescription to implement the energy
flux balance. For different precession-truncation orders, we match
the PN dynamics to the NR simulations with the same techniques and at
the same matching frequencies as in the preceding sections.
When applied to the NR simulation q3\_0.5x, we obtain the results
shown in Fig.~\ref{fig:q3PrecConv}. This figure shows clearly that
with increasing PN order in the precession equations, PN precession
dynamics tracks the NR simulation more and more accurately. When only
the leading order terms of the precession equations are included
(1.5PN order), $\angle L$ and $\angle\chi_1$ are $\approx 0.1$~rad; at
3.5PN order this difference drops by nearly two orders of magnitude.
\begin{figure}
\centering
\includegraphics[width=0.96\linewidth,trim=0 22 0 0,clip=true]{ConvPrecEllHat}\\
\includegraphics[width=0.96\linewidth]{ConvPrecSpin}
\caption{Convergence of the PN precession equations for all cases in
Table \ref{tbl:Parameters}. The evolution was done with the Taylor
T4 approximant at 3.5 PN order. The leading order spin-orbit
correction is at 1.5 PN order and the spin-squared corrections
appear at 2 PN order. Each data point is the average $\angle L$
over PN-NR comparisons performed using 12 matching intervals,
cf. Fig.~\ref{fig:avProcedure}, with error bars showing the maximal
and minimal $\angle L$ and $\angle \chi_1$ of the 12 fits. }
\label{fig:PrecPNOrder}
\end{figure}
We repeat this comparison for our six main precessing cases from
Table~\ref{tbl:Parameters}. The results are shown in
Fig.~\ref{fig:PrecPNOrder}. It is evident that for \emph{all cases}
$\angle L $ decreases with increasing order in the precession
equations, with almost two orders of magnitude improvement between
the leading-order and next-to-next-to-leading-order truncations. A similar
trend is seen in the convergence of the spin angle $\angle\chi_{1}$
shown in the bottom panel of Fig.~\ref{fig:PrecPNOrder}. The angle
decreases with PN order almost monotonically for all cases except
q1.0\_twospins. However, this exception is an artifact of picking
a particular matching point at $m\Omega=0.03$: as can be seen from the
bottom panel of Fig.~\ref{fig:q3PrecConv}, $\angle \chi_{1}$ shows
large oscillations, and it is a coincidence that the matching point
happens to be in a ``trough'' of $\angle \chi_{1}$.
So far we have varied the PN order of the precession equations, while
keeping the orbital frequency evolution at 3.5PN order. Let us now
investigate the opposite case: varying the PN order of the orbital
frequency and monitoring its impact on the orbital phase evolution.
We keep the PN order of the precession equations at 3.5PN, and match
PN with different orders of the orbital frequency evolution (and
TaylorT4 energy-balance prescription) to the NR simulations. We then
evaluate $\Phi_\Delta$ (a quantity that reduces to the orbital phase
difference in cases where the latter is unambiguously defined) at the
time at which the NR simulation reaches the frequency $m\Omega=0.03$.
We examine our six primary precessing runs,
and also the aligned-spin and anti-aligned spin binaries listed in
Table~\ref{tbl:Parameters}.
When the spin is initially in the orbital plane, as seen in the top
panel of Fig.~\ref{fig:q3_OmegaConv}, the overall trend is a
non-monotonic error decrease with PN order, with spikes at 1 and 2.5
PN orders as has been seen previously with non-spinning
binaries~\cite{Boyle2007}. All of the aligned cases show a large
improvement at 1.5 PN order, associated with the leading order
spin-orbit contribution. The phase differences then spike at 2 and 2.5
PN orders and then decrease at 3 PN order. Finally, different cases
show different results at 3.5 PN order, with some showing decreased
differences while for others the differences increase.
For the anti-aligned cases the picture is similar to the precessing cases,
with spikes at 1 and 2.5 PN orders and monotonic improvement
thereafter. The main difference from the precessing cases is the magnitude
of the phase differences, which is larger by a factor of $\sim 5$ at
3.5 PN order for the anti-aligned cases (see for example
q1.5\_s0.5x\_0).
These results suggest that convergence of the orbital phase evolution
depends sensitively on the exact parameters of the system under study.
Further investigation of the parameter space is warranted.
\begin{figure}
\includegraphics[width=0.96\linewidth,trim=0 26 0 0]{d14_q3_ConvOmegaPN}\\
\includegraphics[width=0.96\linewidth,trim=0 26 0 0]{omegaConvergenceAligned}\\
\includegraphics[width=0.96\linewidth]{omegaConvergenceAntiAligned}
\caption{Convergence of the Taylor T4 approximant with PN
order. Shown are all cases from Table \ref{tbl:Parameters}. {\bf
Top}: all precessing cases. {\bf Middle}: aligned spin
cases. {\bf Bottom}: anti-aligned spin cases. Each data point
shown is averaged over PN-NR comparison with 12 matching
intervals, cf. Fig.~\ref{fig:avProcedure}. Error bars are
omitted for clarity, but would be of similar size to those in
Fig.~\ref{fig:spinTruncationSummary}.}
\label{fig:q3_OmegaConv}
\end{figure}
\subsection{Impact of PN spin truncation}
\label{sec:ChangeSpinTruncation}
\begin{figure}
\includegraphics[width=0.96\linewidth]{spinTruncation}
\caption{Impact of different choices for spin truncation on orbital
phase difference $\Phi_{\Delta}$, as a function of mass ratio.
The lines are labeled by the truncation types, as explained in the
text. The upper panel shows all cases for which the spins are
aligned with the orbital angular momentum; the lower panel shows
the anti-aligned cases.}
\label{fig:spinTruncationSummary}
\end{figure}
As mentioned in Sec.~\ref{sec:PNOrders}, post-Newtonian expansions are
not fully known to the same orders for spin and non-spin terms. Thus,
for example, the expression for flux $\mathcal{F}$ is complete to 3.5
PN order for non-spinning systems, but spinning systems may involve
unknown terms at 2.5 PN order; a similar statement holds for $dE/dx$.
This means that when the ratio in Eq.~\eqref{eq:OmEv}, $\mathcal{F} /
(dE/dx)$, is re-expanded as in the T4 approximant, known terms will
mix with unknown terms. It is not clear, \textit{a priori}, how such
terms should be handled when truncating that re-expanded series.
Here we examine the effects of different truncation strategies. We
focus on the Taylor T4 approximant while considering various possible
truncations of the re-expanded form of $\mathcal{F} / (dE/dx)$. We
denote these possibilities by the orders of (1) the truncation of
non-spin terms, (2) the truncation of spin-linear terms, and (3) the
truncation of spin-quadratic terms. Thus, for example, in the case
where we keep non-spin terms to 3.5 PN order, keep spin-linear terms
to 2.5 PN order, and keep spin-quadratic terms only to 2.0 PN order,
we write (3.5, 2.5, 2.0). We consider the following five
possibilities:\\
\hspace*{1em}(i)\,\, (3.5, 3.5, 3.5)\\
\hspace*{1em}(ii)\, (3.5, 4.0, 4.0)\\
\hspace*{1em}(iii) (3.5, 2.5, 2.0)\\
\hspace*{1em}(iv) (3.5, 3.5, 2.0)\\
\hspace*{1em}(v)\, (3.5, 4.0, 2.0).
To increase the impact of the spin-orbit terms, we examine aligned
and anti-aligned cases from Table~\ref{tbl:Parameters}, with results
presented in Fig.~\ref{fig:spinTruncationSummary}. For aligned
cases, no one choice of spin truncation results in small differences
across all mass ratios. All choices of spin truncation excepting
(3.5, 4.0, 4.0) have increasing errors with increasing mass ratio.
Truncating spin corrections at 2.5 PN order (3.5, 2.5, 2.0)
consistently results in the worst matches. On the other hand, we
find that, for anti-aligned runs, adding higher order terms always
improves the match, keeping all terms yields the best result, and
all choices of truncation give errors which are monotonically
increasing with mass ratio. Overall, anti-aligned cases have larger
values of $\Phi_{\Delta}$ when compared to aligned cases with the same mass
ratios. This result is consistent with findings by Nitz
et~al.~\cite{Nitz:2013mxa} for comparisons between TaylorT4 and
EOBNRv1 approximants.
\subsection{Further numerical considerations}
\label{sec:tech_cons}
\subsubsection{Numerical truncation error}
Still to be addressed is the effect of the numerical resolution of the NR
simulations used in the present work. The simulation q1\_twospins is
available at four different resolutions labeled N1, N2, N3 and N4. We
match each of these four numerical resolutions with the Taylor T4
approximant, and plot the resulting phase differences $\Phi_\Delta$ in
Fig.~\ref{fig:convergenceWithLev} as the data with symbols and
error bars (recall that the error bars are obtained from the 12
different matching regions we use, cf. Fig.~\ref{fig:avProcedure}).
All four numerical resolutions yield essentially the same
$\Phi_\Delta$. We furthermore match the three lowest numerical
resolutions against the highest numerical resolution N4 and compute
the phase difference $\Phi_\Delta$. The top panel of Figure
\ref{fig:convergenceWithLev} shows $\Phi_{\Delta}$ computed with
these 4 different numerical resolutions. All the curves lie on top of
each other and the differences between them are well within the
uncertainties due to the matching procedure. The bottom panel shows
the differences in $\Phi_{\Delta}$ between the highest resolution and
all others. Throughout most of the inspiral, the difference is $\sim
10 \%$. Similar behavior is observed in other cases where multiple
resolutions of NR simulations are available. We therefore conclude
that the effects of varying numerical resolution do not impact our
analysis.
\begin{figure}
\includegraphics[width=0.96\linewidth,trim=0 26 0 0]{convergenceWithLev}
\includegraphics[width=0.96\linewidth]{convergenceWithLevAngleOmega}
\caption{Convergence test with the numerical resolution of the NR
simulation q1\_twospins. {\bf Top panel}: $\Phi_\Delta$
with comparisons done at different resolutions. All the curves
lie within uncertainties due to the matching procedure,
    indicating that numerical truncation error is not important
    in this comparison. The differences between each curve and the
    highest resolution are of order 15\% and are within the matching
uncertainties. {\bf Bottom panel}: $\angle L$ with comparisons
done at all the resolutions. The curves lie within the matching
uncertainties.}
\label{fig:convergenceWithLev}
\end{figure}
\subsubsection{Numerical gauge change}
\label{sec:Gauge}
\begin{figure}
\centering
\includegraphics[width=0.98\linewidth]{q5_Gauge}
\caption{Gauge change during the numerical simulation q5\_0.5x. The
solid curves represent the recent re-run of q5\_0.5x that is
analyzed in the rest of this paper. The dashed curves represent
an earlier run SXS:BBH:0058 which changes the gauge at $t-t_{\rm
merge}\approx -3200M$. {\bf Top}: behavior of the orbital
frequency $m\Omega$ in evolution with (dashed curve) and without
gauge change (solid curve). {\bf Bottom:} $\Phi_{\Delta}$ for
all Taylor approximants. To avoid matching during the gauge
change, the matching was done with $m\Omega_{c}=0.017$. }
\label{fig:gaugeProblems}
\end{figure}
The simulation SXS:BBH:0058 in the SXS catalog uses identical BBH
parameters to q5\_0.5x, but suffers from two deficiencies,
exploration of which will provide some additional insights. First,
the switch from generalized harmonic gauge with \emph{fixed}
gauge-source functions~\cite{Boyle2007} to \emph{dynamical}
gauge-source functions~\cite{Lindblom2009c,Szilagyi:2009qz} happens
near the middle of the inspiral, rather than close to merger as for
the other simulations considered. This will give us an opportunity to
investigate the impact of such a gauge change, the topic of this
subsection. Second, this simulation also used too low a resolution in
the computation of the black hole spin during the inspiral, which we
will discuss in the next subsection. We emphasize that the
comparisons presented above did not utilize SXS:BBH:0058, but rather a
re-run with improved technology. We use SXS:BBH:0058 in this section
to explore the effects of its deficiencies.
While the difference between PN and NR gauges does not strongly impact
the nature of the matching results, a gauge change performed during
some of the runs \emph{does} result in unphysical behavior of physical
quantities such as the orbital frequency. Figure
\ref{fig:gaugeProblems} demonstrates this for case q5\_0.5x. The old
run SXS:BBH:0058 with the gauge change exhibits a bump in the orbital
frequency (top panel), which is not present in the re-run (solid
curve). When matching both the old and the new run to PN, and
computing the phase difference $\Phi_\Delta$, the old run exhibits a
nearly discontinuous change in $\Phi_{\Delta}$ (bottom panel, dashed
curves) while no such discontinuity is apparent in the re-run.
\begin{figure}
\hfill \includegraphics[scale=0.985775, trim=0 26 0 0]{q5SpinComparison}\\
\hfill \includegraphics[trim=0 26 0 0]{q5SpinDirectionComparison}\\
\hfill\includegraphics[scale=1]{q5PNComparison}
\caption{{\bf Top}: the magnitude of the spin as a function of time in
  the original run (black) and the new run (blue), as well as the value
  computed with the procedure described in the text (orange). {\bf
  Middle}: angles between the spins and normals to the orbital
  plane (thin curves) and their averaged values (bold curves) for the
  original run and the re-run. {\bf Bottom}: $\angle
  \tilde{\chi}_{1}$ and $\angle \tilde{\ell}$ for both the old run and
the re-run (the data of this panel are averaged over 12 matching
intervals, cf. Fig.~\ref{fig:avProcedure}). To avoid matching during
the gauge change, the matching was done with $m\Omega_{c}=0.017$.}
\label{fig:q5SpinComparison}
\end{figure}
\subsubsection{Problems in quasi-local quantities}
\label{sec:AH-resolution}
Computation of the quasi-local spin involves the solution of an
eigenvalue problem on the apparent horizon followed by an integration
over the apparent horizon,
cf.~\cite{Lovelace2008,OwenThesis,Cook2007}. In the simulations
q1.0\_0.5x, q1.5\_0.5x and q3.0\_0.5x and in SXS:BBH:0058
(corresponding to q5\_0.5x), too low numerical resolution was used for
these two steps. While the evolution itself is acceptable, the
extracted spin shows unphysical features. Most importantly, the
reported spin magnitude is not constant, but varies by several per
cent. Figure~\ref{fig:q5SpinComparison} shows, as an example, $\chi_1$
from SXS:BBH:0058. For $t-t_{\rm merge}\le -3200M$ oscillations are
clearly visible. These oscillations vanish at $t-t_{\rm merge}\approx
-3200M$, coincident with a switch to damped harmonic gauge
(cf. Sec.~\ref{sec:Gauge}). Similar oscillations in q3\_0.5x disappear
when the resolution of the spin computation is manually increased
about 1/3 through the inspiral, without changing the evolution gauge.
Our new re-run q5\_0.5x (using damped harmonic gauge throughout), also
reports a clean $\chi_1$, cf. Fig.~\ref{fig:q5SpinComparison}. Thus,
we conclude that the unphysical variations in the spin magnitude are
only present if \emph{both} the resolution of the spin computation is
low, and the old gauge conditions of constant $H_a$ are employed.
The NR spin magnitude is used to initialize the PN spin magnitude,
cf. Eq.~\eqref{eq:conserved}. Therefore, an error in the calculation
of the NR spin would compromise our comparison with PN. For the
affected runs, we correct the spin reported by the quasi-local spin
computation by first finding all maxima of the spin-magnitude $\chi$
between $500M$ and $2000M$ after the start of the numerical
simulation. We then take the average value of $\chi$ at those maxima
as the corrected spin-magnitude of the NR simulation.
Figure~\ref{fig:q5SpinComparison} shows the original run SXS:BBH:0058 as well
as the re-run q5\_0.5x described in Sec.~\ref{sec:Gauge}. It is evident that this
procedure produces a spin value which is very close to the spin in the
rerun where the problematic behavior is no longer present. Thus, we
adopt it for the three cases where an oscillation in the spin
magnitude is present.
The nutation features shown in Fig.~\ref{fig:ProjectionSpinsQ5} are
qualitatively similar for all our simulations, independent of
the resolution of the spin computation and the evolution gauge. When the spin
is inaccurately measured, the nutation trajectory picks up extra
modulations, which are small on the scale of
Fig.~\ref{fig:ProjectionSpinsQ5} and do not alter the qualitative
behavior.
The lower two panels of Fig.~\ref{fig:q5SpinComparison} quantify the
impact of inaccurate spin measurement on the precession-dynamics
comparisons performed in this paper: The middle panel shows the
differences between the spin directions in the original 0058 run and
our re-run q5\_0.5x. The spin directions differ by as much as 0.01
radians. However, as the lower panel shows, this difference can
mostly be absorbed by the PN matching, so that $\angle\chi_1$ and
$\angle L$ are of similar magnitude, about $10^{-3}$ radians.
\section{Discussion}
\label{sec:discussion}
We have presented an algorithm for matching PN precession dynamics to
NR simulations which uses constrained minimization. Using this
algorithm, we perform a systematic comparison between PN and NR for
precessing binary black hole systems. The focus of the comparison is
black hole dynamics only, and we defer discussion of waveforms to
future work. By employing our matching procedure, we find excellent
agreement between PN and NR for the precession and nutation of the
orbital plane. The normals to the orbital plane generally lie within
$10^{-2}$ radians, cf. Fig.~\ref{fig:AngleLAll}. Moreover, nutation
features on the orbital time-scale also agree well between NR and PN,
cf. Fig.~\ref{fig:ProjectionEllHatQ5}.
For the black hole spin direction, the results are less uniform. The
NR spin direction $\hat\chi_1^{\rm NR}$ shows nutation features that
are qualitatively different than the PN nutation features,
cf. Fig.~\ref{fig:ProjectionSpinsQ5}. The disagreement in nutation
dominates the agreement of $\hat \chi_1^{\rm NR}$ with
$\hat\chi_1^{\rm PN}$; averaging away the nutation features
substantially improves agreement,
cf. Fig.~\ref{fig:AngleSmoothedS2cases}. The orbit-averaged
spin directions agree with PN to the same extent that the $\hat\ell$
direction does (with and without orbit averaging),
cf.~Fig.~\ref{fig:AngleLAll}.
Turning to the convergence properties of PN, we have performed PN-NR
comparisons at different PN order of the precession equations. For
both orbital angular momentum $\hat\ell$ and the spin direction
$\hat\chi_1$, we observe that the convergence of the PN results toward
NR is fast and nearly universally monotonic, cf.
Fig.~\ref{fig:PrecPNOrder}. At the highest PN orders, the spin
results might be dominated by the difference in nutation features
between PN and NR.
The good agreement between PN and NR precession dynamics is promising
news for gravitational wave modeling. Precessing waveform models
often rely on the post-Newtonian precession equations,
e.g.~\cite{Arun:2008kb,Hannam:2013oca}. Our results indicate that the
PN precession equations are well suited to model the precessing frame,
thus reducing the problem of modeling precessing waveforms to the
modeling of orbital phasing only.
The accuracy of the PN orbital phase evolution, unfortunately, does
not improve for precessing systems. Rather, orbital phasing errors are
comparable between non-precessing and precessing configurations,
cf. Fig.~\ref{fig:q3_OmegaConv}. Moreover, depending on mass ratio
and spins, some Taylor approximants match the NR data particularly
well, whereas others give substantially larger phase differences,
cf. Fig.~\ref{fig:massRatioDep}. This confirms previous
work~\cite{Damour:2010,Hannam:2010,Santamaria:2010yb,MacDonald:2012mp,MacDonald:2011ne}
that the PN truncation error of the phase evolution is important for
waveform modeling.
We have also examined the effects of including
partially known spin contributions to the evolution of the orbital
frequency for the Taylor T4 approximant. For aligned runs, including
such incomplete information usually improves the match, but the
results are still sensitive to the mass ratio of the binary (top panel
of Fig~\ref{fig:spinTruncationSummary}). For anti-aligned runs, it
appears that incomplete information always improves the agreement of
the phasing between PN and NR (bottom panel of
Fig~\ref{fig:spinTruncationSummary}).
In this work we compare gauge-dependent quantities, and thus must
examine the impact of gauge choices on the conclusions listed above.
We consider it likely that the different nutation features of
$\hat\chi_1$ are determined by different gauge choices. We have also
seen that different NR gauges lead to measurably different evolutions
of $\hat\chi$, $\hat\ell$, and the phasing,
cf. Figs.~\ref{fig:gaugeProblems} and~\ref{fig:q5SpinComparison}. We
expect, however, that our conclusions are fairly robust to the
gauge ambiguities for two reasons. First, in the matched PN-NR
comparison, the impact of gauge differences is quite small, cf. lowest
panel of Fig.~\ref{fig:q5SpinComparison}. Second, the near universal,
monotonic, and quick convergence of the precession dynamics with
precession PN order visible in Fig.~\ref{fig:PrecPNOrder} would not be
realized if the comparison were dominated by gauge effects. Instead,
we would expect PN to converge to a solution {\em different} from the
NR data.
\begin{acknowledgments}
We thank Kipp Cannon, Francois Foucart, Prayush Kumar, Abdul Mrou\'e
and Aaron Zimmerman for useful discussions. Calculations were
performed with the {\tt SpEC}-code~\cite{SpECwebsite}. We
gratefully acknowledge support from NSERC of Canada, from the Canada
Research Chairs Program, and from the Canadian Institute for
Advanced Research. We further gratefully acknowledge support from
the Sherman Fairchild Foundation; from NSF Grants PHY-1306125 and
AST-1333129 at Cornell; and from NSF Grants No. PHY-1440083 and
AST-1333520 at Caltech. Calculations were performed at the GPC
supercomputer at the SciNet HPC Consortium~\cite{scinet}; SciNet is
funded by: the Canada Foundation for Innovation (CFI) under the
auspices of Compute Canada; the Government of Ontario; Ontario
Research Fund (ORF) -- Research Excellence; and the University of
Toronto. Further computations were performed on the Zwicky cluster
at Caltech, which is supported by the Sherman Fairchild Foundation
and by NSF award PHY-0960291; and on the NSF XSEDE network under
  grant TG-PHY990007N.
\end{acknowledgments}
\section{Introduction}
As natural language understanding of sentences and short documents continues to improve, there has been growing interest in tackling longer-form documents such as academic papers \citep{Ren2014,Bhagavatula2018}, novels \citep{Iyyer2016} and screenplays \citep{Gorinski2018}.
Analyses of such documents can take place at multiple levels, e.g. identifying both document-level labels (such as genre) and narrative trajectories (how do levels of humor and romance vary over the course of a romantic comedy?).
However, one of the key challenges for these tasks is that the signal-to-noise ratio over lengthy texts is generally low (as indicated by the performance of such models on curated datasets like NarrativeQA \citep{Kocisky2018}), making it difficult to apply end-to-end neural network solutions that have recently achieved state-of-the-art on other tasks \citep{Barrault2019,Williams2018,Wang2019}.
Instead, models either rely on a) a \emph{pipeline} that provides a battery of syntactic and semantic information from which to craft features (e.g., the BookNLP pipeline \citep{Bamman2014} for literary text, graph-based features \citep{Gorinski2015} for movie scripts, or outputs from a discourse parser \citep{Ji2017} for text categorization) and/or b) the \emph{linguistic intuitions} of the model designer to select features relevant to the task at hand (e.g., rather than ingesting the entire text, \citet{Bhagavatula2018} only consider certain subsections like the title and abstract of an academic publication).
While there is much to recommend these approaches, end-to-end neural modeling offers several key advantages: in particular, it obviates the need for auxiliary feature-generating models, minimizes the risk of error propagation, and offers improved generalization across large-scale corpora.
This work explores how models can leverage the inherent structure of a document class to facilitate an end-to-end approach.
Here, we focus on screenplays, investigating whether we can effectively extract key information by first segmenting them into scenes, and then further exploiting the structural regularities within each scene.
With an average of \textgreater 20k tokens per script in our evaluation corpus, extracting salient aspects is far from trivial.
Through a series of carefully controlled experiments, we show that a structure-aware approach significantly improves document classification by effectively collating sparsely distributed information.
Further, this method produces both document- and scene-level embeddings, which can be used downstream to visualize narrative trajectories of interest (e.g., the prominence of various themes across the script).
The overarching strategy of this work is to incorporate structural priors as biases into the \emph{architecture} of the neural network model itself (e.g., \citet{Socher2013}, \citet{Strubell2018}, \emph{inter alia}).
The methods we propose can readily generalize to any long-form text with an exploitable internal structure, including novels (chapters), theatrical plays (scenes), chat logs (turn-taking), online games (levels/rounds/gameplay events), and academic texts (sections and subsections).
The paper is organized as follows: In \S\ref{sec:script_structure}, we detail how a script can be formally decomposed into scenes, and each scene can be further decomposed into granular elements with distinct discourse functions. \S\ref{sec:hse} elaborates on how this structure can be effectively leveraged with a proposed encoder based on hierarchical attention \citep{Yang2016}. In \S\ref{sec:tag_prediction}, the predictive performance of the hierarchical encoder is validated on two multi-label tag prediction tasks, one of which rigorously establishes the utility of modeling structure at multiple granularities (i.e., at the level of line, scene, and script).
Notably, while the resulting scene-encoded representation is useful for prediction tasks, it is not amenable to easy interpretation or examination.
To shed further light on encoded document representation, in \S\ref{sec:unsup_dict_learn}, we propose an unsupervised interpretability module that can be attached to an encoder of any complexity.
\S\ref{sec:qual_analysis} outlines our application of this module to the scene encoder, and the resulting visualizations of the screenplay, which neatly illustrate how plot elements vary over the course of the narrative arc.
\S\ref{sec:related} draws connections to related work, before concluding.
\section{Script Structure}
\label{sec:script_structure}
\begin{figure}[!t]
\begin{center}
\includegraphics[height=0.15\textheight, width=0.9\textwidth]{./pulp_fiction_scene.pdf}
\end{center}
\caption{A portion of the screenplay for \emph{Pulp Fiction}, annotated with the common scene components.}
\vspace{-0.3cm}
\label{fig:example_screenplay}
\end{figure}
Movie and television scripts, also known as screenplays, are traditionally segmented into \emph{scenes}, with a rough rule of thumb that each scene lasts about a minute on-screen.
A scene is not necessarily a distinct narrative unit (narrative units most often comprise sequences of several consecutive scenes), but is constituted by a piece of continuous action at a single location.
\begin{table}[!h]
\begin{center}
\scriptsize
\begin{tabular}{p{1.2cm}p{12pt}p{12pt}p{0.5cm}p{1cm}p{1.8cm}}
Title& Line& Scene& Type& Character& Text \\\hline
Pulp Fiction& 204& 4& Scene & & {EXT. APART..} \\
Pulp Fiction& 205& 4& Action & & {Vincent and Jules.} \\
Pulp Fiction& 206& 4& Action & & {We TRACK...} \\
Pulp Fiction& 207& 4& Dial. & VINCENT & What's her name?\\
Pulp Fiction& 208& 4& Dial. & JULES & Mia. \\
Pulp Fiction& 209& 4& Dial. & VINCENT & How did... \\
\end{tabular}
\end{center}
\vspace{-0.3cm}
\caption{Post-processed version of Fig.~\ref{fig:example_screenplay}.}
\label{tab:processed_script}
\end{table}
Fig. \ref{fig:example_screenplay} contains a segment of a scene from the screenplay for the movie \emph{Pulp Fiction}, a 1994 American film.
These segments tend to follow a standard format.
Each scene starts with a scene heading or ``slug line" that briefly describes the scene setting, followed by a sequence of statements.
Screenwriters typically use formatting to distinguish between dialogue and action statements \citep{Argentini1998}.
The first kind contains lines of dialogue and identifies the character who utters them either on- or off-screen (the latter is often indicated with `(V.O.)' for voice-over).
Occasionally, parentheticals are used to include special instructions for how an utterance should be delivered by the character.
Action statements, on the other hand, are all non-dialogue constituents of the screenplay ``often used by the screenwriter to describe character actions, camera movement, appearance, and other details" \citep{Pavel2015}.
In this work, we consider action and dialogue statements, as well as character identities for each dialogue segment, and ignore slug lines and parentheticals.
\section{Hierarchical Scene Encoders}
\label{sec:hse}
Given the size of a movie script, it is computationally infeasible to treat these screenplays as single blocks of text to be ingested by a recurrent encoder.
Instead, we propose a hierarchical encoder that mirrors the standard structure of a screenplay (\S\ref{sec:script_structure}) -- a sequence of scenes, each of which is in turn an interwoven sequence of action and dialogue statements.
The encoder is three-tiered, as illustrated in Fig. \ref{fig:model_schematic} and processes the text of a script as follows.
\subsection{Model Architecture}
\label{sec:model_architecture}
First, an \textbf{action-statement encoder} transforms the sequence of words in an action statement (represented by their pretrained word embeddings) into an action statement embedding.
Next, an \textbf{action-scene encoder} transforms the chronological sequence of action statement embeddings within a scene into an action scene embedding.
Analogously, a \textbf{dialogue-statement encoder} and a \textbf{dialogue-scene encoder} are used to obtain dialogue statement embeddings and aggregate them into dialogue scene embeddings.
To evaluate the effect of character information, characters with at least one dialogue statement in a given scene are represented by an individual character embedding (these are randomly initialized and estimated during model training), and a scene-level character embedding is constructed by averaging the embeddings of all the characters in the scene\footnote{We only take into account characters at the \emph{scene} level i.e., we do not associate characters with each dialogue statement, leaving this addition to future work.}.
Finally, the action, dialogue and scene-level character embeddings for each scene are concatenated into a single scene embedding.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.9\textwidth,keepaspectratio,trim={0 3.5cm 11.5cm 0.5cm}]{./model_schematic.pdf}
\end{center}
\caption{The architecture of our script encoder, largely following the structure in Fig. \ref{fig:example_screenplay}.}
\label{fig:model_schematic}
\end{figure}
\vspace{-0.25cm}
Scene-level predictions or analyses can then be obtained by feeding the scene embeddings into a subsequent module of the neural architecture, e.g. a feedforward layer can be used for supervised tagging tasks.
Alternatively, if a single representation of the entire screenplay is required, a final \textbf{script encoder} is used to transform the sequence of scene embeddings for a script into a single script embedding.
A key assumption underlying the model is that action and dialogue statements -- as instances of written narrative and spoken language respectively -- are distinct categories of text and must therefore be processed separately.
We evaluate this assumption in the tag classification experiments (\S\ref{sec:tag_prediction}).
\subsection{Encoders}
\label{sec:encoders}
The proposed model incorporates strong inductive biases regarding the overall structure of input documents.
In addition, each of the aforementioned encoders in \S\ref{sec:model_architecture} can be specified in multiple ways, and we evaluate three different instantiations of the encoder components:
\begin{enumerate}
\item \textbf{Sequential (GRU):} A bidirectional GRU \citep{Bahdanau2015} is used to encode the temporal sequence of inputs (of words, statements or scenes).
Given a sequence of input embeddings $\mathbf{e}_1, \dots, \mathbf{e}_T$ for a sequence of length $T$, we obtain GRU outputs $\mathbf{c}_1, \dots, \mathbf{c}_T$, and use $\mathbf{c}_T$ as the recurrent encoder's final output.
Other sequential encoders could also be used as alternatives.
\item \textbf{Sequential with Attention (GRU + Attn):} Attention \citep{Bahdanau2015} can be used to combine sequential outputs $\mathbf{c}_1, \dots, \mathbf{c}_T$, providing a mechanism for more or less informative inputs to be filtered accordingly.
We calculate attention weights using a parametrized vector $\mathbf{p}$ of the same dimensionality as the GRU outputs \citep{Sukhbaatar2015,Yang2016}:
\begin{align*}
\alpha_i = \frac{\mathbf{p}^T \mathbf{c}_i}{\Sigma_{j=1}^{T}\mathbf{p}^T \mathbf{c}_j}
\end{align*}
These weights are used to compute the final output of the encoder as:
\begin{align*}
\mathbf{c} = \Sigma_{i=1}^{T} \alpha_i \mathbf{c}_i
\end{align*}
Other encoders with attention could be used as alternatives to this formulation.
\item \textbf{Bag-of-Embeddings with Attention (BoE + Attn):} Another option is to disregard the sequential encoding and simply compute an attention-weighted average of the inputs to the encoder as follows:
\begin{align*}
\alpha_i &= \frac{\mathbf{p}^T \mathbf{e}_i}{\Sigma_{j=1}^{T}\mathbf{p}^T \mathbf{e}_j}\\
\mathbf{c} &= \Sigma_{i=1}^{T} \alpha_i \mathbf{e}_i
\end{align*}
This encoder stands in contrast to a bag-of-embeddings (BoE) encoder which computes a simple average of its inputs.
While defining a far more constrained function space than recurrent encoders, BoE and BoE + Attn representations have the advantage of being interpretable (in the sense that the encoder's output is in the same space as the input word embeddings).
We leverage this property in \S\ref{sec:unsup_dict_learn} where we develop an interpretability layer on top of the encoder outputs.
\end{enumerate}
\subsection{Loss for Tag Classification}
\label{sec:classifier_loss}
The final script embedding is passed into a feedforward classifier (FFNN).
As both supervised learning tasks in our evaluation are multi-label classification problems, we use a variant of a simple multi-label one-versus-rest loss, where correlations among tags are ignored.
The tag sets have high cardinalities and the fractions of positive samples are inconsistent across tags (Table \ref{tab:tag_cardinality} in \ref{sec:add_stats}); this motivates us to train the model with a reweighted loss function:
\begin{align}
L(y,z) &= -\tfrac{1}{NL} \Sigma_{i=1}^{N} \Sigma_{j=1}^{L} \big[ y_{ij} \log\sigma(z_{ij}) \nonumber \\
&\quad + \lambda_j (1 - y_{ij}) \log(1 - \sigma(z_{ij})) \big] \label{eq:loss_fn}
\end{align}
where $N$ is the number of samples, $L$ is the number of tag labels, $y \in \{0, 1\}$ is the tag label, $z$ is the output of the FFNN, $\sigma$ is the sigmoid function, and $\lambda_j$ is the ratio of positive to negative samples (precomputed over the entire training set, since the development set is too small to tune this parameter) for the tag label indexed by $j$.
With this loss function, we account for label imbalance without using separate thresholds for each tag tuned on the validation set.
\section{Interpreting Scene Embeddings}
\label{sec:unsup_dict_learn}
As the complexity of learning methods used to encode sentences and documents has increased, so has the need to understand the properties of the encoded representations.
Probing-based methods \citep{Linzen2016, Conneau2018} are used to gauge the information captured in an embedding by evaluating its performance on downstream classification tasks, either with manually collected annotations \citep{Shi2016} or carefully selected self-supervised proxies \citep{Adi2016}.
In our case, it is laborious and expensive to collect such annotations at the scene level (requiring domain experts), and the proxy evaluation tasks proposed in the literature do not probe the narrative properties we wish to surface.
Instead, we take inspiration from \citet{Iyyer2016} and learn a \textbf{scene descriptor model} that can be trained without relying on any such annotations.
Using a dictionary learning perspective \citep{Olshausen1997}, the model learns to represent each scene embedding as a weighted mixture of various topics estimated over the entire corpus.
It thus acts as an ``interpretability layer" that can be applied over the scene encoder.
This model class is similar in spirit to dynamic topic models \citep{Blei2006}, with the added advantage of producing topics that are both more coherent and more interpretable than those generated by LDA \citep{He2017unsupervised,Mitcheltree2018}.
\subsection{Scene Descriptor Model}
The model has three main components: a \textbf{scene encoder}, a set of topics or \textbf{descriptors} that form the ``basis elements" used to describe an interpretable scene, and a \textbf{predictor} that predicts weights over descriptors for a given scene embedding.
The scene encoder uses the text of a given scene $s_t$ to produce a corresponding scene embedding $\mathbf{v}_t$.
This encoder can take any form -- from an extractor that derives a hand-crafted feature set from the scene text, as in \citet{Gorinski2018}, to an instantiation of the scene encoder in \S\ref{sec:hse}.
\vspace{-0.25cm}
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.75\textwidth,keepaspectratio,trim={1cm 0 6cm 0}]{./descriptors_schematic.pdf}
\end{center}
\caption{A pictorial representation of the scene descriptor model.}
\label{fig:descriptor_schematic}
\end{figure}
\vspace{-0.3cm}
To probe the contents of scene embedding $\mathbf{v}_t$, we compute the descriptor-based representation $\mathbf{w}_t \in \mathbb{R}^d$ in terms of a descriptor matrix $\mathbf{R} \in \mathbb{R}^{k \times d}$, where $k$ is the number of topics or descriptors:
\begin{align}
\mathbf{o}_t &= \textrm{softmax}(f(\mathbf{v}_t)) \label{eq:predictor} \\
\mathbf{w}_t &= \mathbf{R}^T \mathbf{o}_t \nonumber
\end{align}
where $\mathbf{o}_t \in \mathbb{R}^k$ is the weight (probability) vector over $k$ descriptors and $f(\mathbf{v}_t)$ is a predictor (illustrated by the leftmost pipeline in Fig. \ref{fig:descriptor_schematic}) which converts $\mathbf{v}_t$ into $\mathbf{o}_t$.
Two variants are $f = \textrm{FFNN}(\mathbf{v}_t)$ and $f = \textrm{FFNN}([\mathbf{v}_t; \mathbf{o}_{t-1}])$ (concatenation); we use the former in \S\ref{sec:qual_analysis}.
Furthermore, we can incorporate additional recurrence into the model by modifying Eq.~\ref{eq:predictor} to add the previous state:
\begin{align}
\mathbf{o}_t = (1-\alpha)\cdot\textrm{FFNN}([\mathbf{v}_t; \mathbf{o}_{t-1}]) + \alpha\cdot\mathbf{o}_{t-1} \label{eq:recurrent_predictor}
\end{align}
where $\alpha \in [0,1]$ controls how strongly the previous descriptor weights persist from scene to scene.
\subsection{Reconstruction Task}
We wish to minimize the reconstruction error between two scene representations: (1) the descriptor-based embedding $\mathbf{w}_t$ which depends on the scene embedding $\mathbf{v}_t$, and (2) an attention-weighted bag-of-words embedding for $s_t$.
This ensures that the computed descriptor weights are indicative of the scene's actual content (specifically portions of its text that indicate attributes of interest such as genre, plot, and mood).
We use a \texttt{BoE+Attn} scene encoder (\S\ref{sec:encoders}) pretrained on the tag classification task (bottom right of Fig. \ref{fig:descriptor_schematic}), which yields a vector $\mathbf{u}_t \in \mathbb{R}^d$ for scene $s_t$.
The scene descriptor model is then trained using a hinge loss objective \citep{Weston2011} to minimize the reconstruction error between $\mathbf{w}_t$ and $\mathbf{u}_t$, with an additional orthogonality constraint on $\mathbf{R}$ to encourage semantically distinct descriptors:
\begin{equation}
L = \Sigma_{j=1}^{n} \max(0, 1 - \mathbf{w}_t^T \mathbf{u}_t + \mathbf{w}_t^T \mathbf{u}_j ) + \lambda \| \mathbf{RR}^T - \mathbf{I} \|_2
\end{equation}
where $\mathbf{u}_1 \dots \mathbf{u}_n$ are $n$ negative samples selected from other scenes in the same screenplay.
The motivation for using the output of a \texttt{BoE+Attn} scene encoder is that $\mathbf{w}_t$ (and therefore the rows in $\mathbf{R}$) lies in the same space as the input word embeddings.
Thus, a given descriptor can be semantically interpreted by querying its nearest neighbors in the word embedding space.
The predicted descriptor weights for a scene $s_t$ can be obtained by running a forward pass through the model.
\section{Evaluation}
\label{sec:evaluation}
We evaluate the proposed script encoder architecture and its variants through two supervised multi-label tag prediction tasks, and a qualitative analysis based on extracting descriptor trajectories in an unsupervised setting.
\subsection{Datasets}
\label{sec:datasets}
Our evaluation is based on the ScriptBase-J corpus, released by \citet{Gorinski2018}.\footnote{https://github.com/EdinburghNLP/scriptbase}
In this corpus, each movie is associated with a set of expert-curated tags that range across 6 tag attributes: mood, plot, genre, attitude, place, and flag (Table~\ref{tab:tag_examples}); in addition to evaluating on these tags, we also used an internal dataset, where the same movies were hand-labeled by in-house domain experts across 3 tag attributes: genre, plot, and mood.
The tag taxonomies between these two datasets are distinct (Table \ref{tab:tag_cardinality}).
ScriptBase-J was used both to directly compare our approach with an implementation of the multilabel encoder architecture in \citet{Gorinski2018} and to provide an open-source evaluation standard.
\subsubsection*{Script Preprocessing}
As in \citet{Pavel2015}, we leveraged the standard screenplay format \citep{Argentini1998} to extract a structured representation of the scripts (relevant formatting cues included capitalization and tab-spacing; see Fig. \ref{fig:example_screenplay} and Table \ref{tab:processed_script} for an example).
Filtering erroneously processed scripts removed 6\% of the corpus, resulting in 857 scripts total.
We set aside 20\% (172 scripts) for heldout evaluation; the remainder was used for training.
The average number of tokens per script is around 23k; additional statistics are shown in Table \ref{tab:token_stats}.
Next, we split extremely long scenes into smaller ones, capping the maximum number of lines in a scene (across both action and dialogue) at 60 (keeping within GPU memory limits).
For the vocabulary, a minimum frequency threshold of 5 occurrences across the script corpus was used.
The number of samples (scripts) per tag value ranges from high (e.g., for some genre tags) to low (for most plot and mood tags) in both datasets (\S\ref{sec:add_stats}); this, coupled with the high tag cardinality of each attribute, motivates the need for the reweighted loss in Eq.~\ref{eq:loss_fn}.
\subsection{Experimental Setup}
All inputs to the hierarchical scene encoder are 100-dimensional GloVe embeddings \cite{Pennington2014}.\footnote{Using richer contextual word representations will improve performance, but is orthogonal to the purpose of this work.}
Our sequential models are biGRUs with a single 50-dimensional hidden layer in each direction, resulting in 100-dimensional outputs.
The attention model is parametrized by a 100-dimensional vector $\mathbf{p}$; BoE models naturally output 100-dimensional representations, and character embeddings are 10-dimensional.
The output of the script encoder is passed through a linear layer with a sigmoid activation function and binarized by thresholding at 0.5.
One simplification in our experiments is to utilize the same encoder \emph{type} for all encoders described in \S\ref{sec:model_architecture}.
However, it is conceivable that different encoder types might perform better at different tiers of the architecture: e.g. scene aggregation can be done in a permutation-invariant manner, since narratives are interwoven and scenes may not be truly sequential.
We implement the script encoder on top of AllenNLP \citep{Gardner2017} and PyTorch \citep{Paszke2019}, and all experiments were conducted on an AWS \texttt{p2.8xlarge} machine.
We use the Adam optimizer with an initial learning rate of $5e^{-3}$, clip gradients at a maximum norm of 5, and do not use dropout.
The model is trained for a maximum of 20 epochs to maximize average precision score, and with early stopping in place if the validation metric does not improve for 5 epochs.
\subsection{Tag Prediction Experiments}
\label{sec:tag_prediction}
ScriptBase-J also comes with ``loglines'', or short, 1-2 sentence human-crafted summaries of the movie's plot and mood (see Table \ref{tab:tag_examples}).
A model trained on these summaries can be expected to provide a reasonable baseline for tag prediction, since human summarization is likely to pick out relevant parts of the text for this task.
The loglines model is a bidirectional GRU with inputs of size 100 (GloVe embeddings) and hidden units of size 50 in each direction, whose output feeds into a linear classifier.\footnote{We tried both with and without attention and found the variant without attention to give slightly better results.}
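A sketch of this baseline (the no-attention variant, per the footnote):
\begin{verbatim}
import torch
import torch.nn as nn

class LoglinesBaseline(nn.Module):
    def __init__(self, n_tags, emb_dim=100, hidden=50):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hidden, bidirectional=True,
                          batch_first=True)
        self.clf = nn.Linear(2 * hidden, n_tags)

    def forward(self, glove_embeddings):   # (batch, seq_len, 100)
        _, h = self.gru(glove_embeddings)  # final states: (2, batch, 50)
        return self.clf(torch.cat([h[0], h[1]], dim=-1))
\end{verbatim}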
\begin{table}[!h]
\vspace{-0.4cm}
\begin{tabular}{lrrr}
{\textbf{Model}} & {\textbf{Genre}} & {\textbf{Plot}} & {\textbf{Mood}} \\\hline
Loglines & 49.9 (0.8) & 12.7 (0.9) & 17.5 (0.2) \\
\hline
\multicolumn{4}{l}{\textit{Comparing encoder variations:}}\\
BoE & 49.0 (1.1) & 8.3 (0.6) & 12.9 (0.7) \\
BoE + Attn & 51.9 (2.3) & 11.3 (0.4) & 16.3 (0.6) \\
GRU & 57.9 (1.9) & 13.0 (1.3) & 19.1 (1.0) \\
GRU + Attn & 60.5 (2.0) & {\bf 15.2} (0.4) & {\bf 22.9} (1.4) \\
\hline
\multicolumn{4}{l}{\textit{Variants on GRU + Attn for action \& dialog:}} \\
+ Chars & {\bf 62.5} (0.7) & 11.7 (0.3) & 18.2 (0.3) \\
- Action & 60.5 (2.9) & 13.5 (1.4) & 20.0 (1.2) \\
- Dialogue & 60.5 (0.6) & 13.4 (1.7) & 19.1 (1.4) \\
2-tier & 61.3 (2.3) & 13.7 (1.7) & 20.6 (1.2) \\
HAN & 61.5 (0.6) & 14.2 (1.7) & 20.7 (1.4)
\end{tabular}
\caption{Investigation of the effects of different architectural (BoE +/- Attn, GRU +/- Attn) and structural choices on a tag prediction task, using an internally tagged dataset: F-1 scores with sample standard deviation in parentheses. Across the 3 tag attributes we find that modeling sentential and scene-level structure helps, and attention helps extract representations more salient to the task at hand.}
\label{tab:content_exps}
\vspace{-0.2cm}
\end{table}
Table \ref{tab:content_exps} contains results for the tag prediction task on our internally-tagged dataset.
First, a set of models trained using action and dialogue inputs are used to evaluate the architectural choices in \S\ref{sec:model_architecture}.
We find that modeling recurrence at the sentential and scene levels, and using attention to select relevant words or scenes, help considerably and are necessary for robust improvement over the loglines baseline (see the first five rows in Table \ref{tab:content_exps}).
Next, we assess the effect that various structural elements of a screenplay have on classification performance. Notably, the difficulty of the prediction task is directly related to the set size of the tag attribute: higher-cardinality tag attributes with correlated tag values (like plot and mood) are significantly more difficult to predict than lower-cardinality tags with more discriminable values (like genre).
We find that adding character information to the best-performing GRU + Attn model (\texttt{+Char}) improves prediction of genre, while using both dialogue and action statements improves performance on plot and mood, compared to using only one or the other.
We also evaluate (1) a \texttt{2-tier} variant of the \texttt{GRU+Attn} model without action/dialogue-statement encoders (i.e., all action statements are concatenated into a single sequence of words and passed into the action-scene encoder, and similarly with dialogue) and (2) a variant similar to \citet{Yang2016} (\texttt{HAN}) that does not distinguish between action and dialogue (i.e., all statements in the text of a scene are encoded using a statement encoder and statement embeddings are passed to a single scene encoder, the output of which is passed into the script encoder).
Both models perform slightly better than \texttt{GRU+Attn} on genre, but worse on plot and mood, showing that for more difficult prediction tasks, it helps to incorporate hierarchy and to distinguish action and dialogue statements.
\begin{table}[!h]
\vspace{-0.2cm}
\begin{center}
\begin{tabular}{l|rr}
\textbf{Tag} & \textbf{G\&L} & \textbf{HSE} \\
\hline
Attitude & 72.6 & 70.1 \\
Flag & 52.5 & 52.6 \\
Genre & 55.1 & 42.5 \\
Mood & 45.5 & 51.2 \\
Place & 57.7 & 29.1 \\
Plot & 34.6 & 34.5
\end{tabular}
\end{center}
\vspace{-0.4cm}
\caption{F-1 scores on ScriptBase-J provided tag set, comparing \citet{Gorinski2018}'s approach to ours.}
\label{tab:jinni}
\end{table}
For the results in Table \ref{tab:jinni}, we compared the \texttt{GRU+Attn} configuration in Table \ref{tab:content_exps} (\texttt{HSE}) with an implementation of \citet{Gorinski2018} (\texttt{G\&L}) that was run on the previous train-test split.
\texttt{G\&L} contains a number of handcrafted lexical, graph-based, and interactive features that were designed for optimal performance for screenplay analysis.
In contrast, \texttt{HSE} directly encodes standard screenplay structure into a neural network architecture, and is an alternative, arguably more lightweight way of building a domain-specific textual representation.
Our results are comparable, with the exception of ``place'', which can often be identified deterministically from scene headings.
\subsection{Similarity-based F-1}
\label{sec:similarity_scoring}
Results in Tables \ref{tab:content_exps} and \ref{tab:jinni} are stated using standard multi-label F-1 score (one-vs-rest classification evaluation, micro-averaged over each tag attribute), which requires an exact match between predicted and actual tag value to be deemed correct.
However, the characteristics of our tag taxonomies suggest that this measure may not be ideal.
In particular, our human-crafted tag sets have tag attributes with dozens of highly correlated, overlapping values, as well as missing tags not assigned by the annotator.
A standard scoring procedure may underestimate model performance when, e.g., a prediction of ``Crime'' for a target label of ``Heist'' is counted as equivalently wrong as a prediction of ``Romance'' (Table \ref{tab:tag_similarity} in \ref{sec:add_stats}).
One way to deal with such tag sets is to leverage a similarity-based scoring procedure (see \citet{Maynard2006} for related approaches).
Such a measure takes into account the latent relationships among tags via \emph{similarity thresholding}, wherein a prediction is counted as correct if it is within a certain distance of the target.
In particular, we treat a prediction as correct based on the percentile of its similarity to the actual label.
The percentile cutoff can be varied to illustrate how estimated model performance varies as a function of the degree of ``enforced" similarity between target and prediction.
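A minimal sketch of this procedure, assuming a precomputed tag-to-tag similarity matrix (the symmetric matching below is one reasonable realisation, not necessarily the exact one we use):
\begin{verbatim}
import numpy as np

def similarity_f1(pred, gold, sim, percentile=90):
    # sim: symmetric (n_tags, n_tags) matrix; pred/gold: lists of tag ids
    cutoff = np.percentile(sim, percentile)
    hit = lambda a, others: any(a == b or sim[a, b] >= cutoff
                                for b in others)
    tp_p = sum(hit(p, gold) for p in pred)   # predictions matched
    tp_g = sum(hit(g, pred) for g in gold)   # gold labels recovered
    prec = tp_p / max(len(pred), 1)
    rec = tp_g / max(len(gold), 1)
    return 0.0 if prec + rec == 0 else 2 * prec * rec / (prec + rec)
\end{verbatim}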
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\textwidth,keepaspectratio,trim={1cm 1cm 0cm 0cm}]{sim_performance.png}
\caption{F1 score of various tag attributes as a function of the similarity threshold percentile.}
\label{fig:sim_improvements}
\end{figure}
In Fig.~\ref{fig:sim_improvements} we examine how our results might vary if we adopted a similarity-based scoring procedure, by re-evaluating the \texttt{GRU + Attn} model outputs (row 5 in Table \ref{tab:content_exps}) with this evaluation metric.
When the similarity percentile cutoff equals 100, the result is identical to the standard F-1 score.
Even decreasing the cutoff to the 90\textsuperscript{th} percentile shows striking improvements for high-cardinality attributes (180\% for \texttt{mood} and 250\% for \texttt{plot}).
Leveraging a similarity-based scoring procedure for complex tag taxonomies may yield results that more accurately reflect human perception of the model's performance \citep{Maynard2006}.
\subsection{Qualitative Scene-level Analysis}
\label{sec:qual_analysis}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.75\paperwidth,keepaspectratio]{trajectories.png}
\caption{Descriptor trajectories for \emph{Pearl Harbor}, \emph{Pretty Woman}, and \emph{Pulp Fiction}. The $y$-axis is a smoothed and rescaled descriptor weight, i.e., $\mathbf{o}_t$ in Eq.~\ref{eq:predictor}. Events: (A) the attack on Pearl Harbor begins; (B) rising tension at the equestrian club; (C) confrontation at the pawn shop. Word clusters corresponding to each descriptor are in Table \ref{tab:clusters}.}
\label{fig:descriptor_trajectories}
\end{figure*}
To extract narrative trajectories with the scene descriptor model, we compared the three model variants in \S\ref{sec:model_architecture} for the choice of scene encoder and found that while attention aids the creation of interpretable descriptors (in line with previous work), sequential and non-sequential models produce similarly interpretable clusters -- thus, we use the \texttt{BoE+Attn} model.
Similar to \citet{Iyyer2016}, we limit the input vocabulary for both \texttt{BoE+Attn} encoders to words occurring in at least 50 movies (7.3\% of the training set), excluding the 500 most frequent words.
The number of descriptors $k$ is set to 25 to allow for a wide range of topics while keeping manual examination feasible.
Descriptors are initialized either randomly \citep{Glorot2010} or with the centroids of a $k$-means clustering of the input word embeddings.
For the predictor, $f$ is a two-layer FFNN with ReLU activations and a softmax final layer that transforms $\mathbf{v}_t$ (from the scene encoder) into a 100-dimensional intermediate state and then into $\mathbf{o}_t$.
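Under these dimensions, the predictor is, schematically:
\begin{verbatim}
import torch.nn as nn

predictor = nn.Sequential(   # f: v_t -> o_t
    nn.Linear(100, 100),     # scene embedding -> intermediate state
    nn.ReLU(),
    nn.Linear(100, 25),      # k = 25 descriptors
    nn.Softmax(dim=-1),      # o_t: weights over the descriptors
)
\end{verbatim}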
Further modeling choices are evaluated using the semantic coherence metric \citep{Mimno2011}, which assesses the quality of word clusters induced by topic modeling algorithms.
These choices include: the presence of recurrence in the predictor (i.e., toggling between Eqns. \ref{eq:predictor} and \ref{eq:recurrent_predictor}, with $\alpha=0.5$) and the value of hyperparameter $\lambda$.
While the $k$-means initialized descriptors score slightly higher on semantic coherence, they are qualitatively quite similar to the initial centroids and do not reflect the corpus as well as the randomly initialized version.
We also find that incorporating recurrence and setting $\lambda=10$ (tuned using simple grid search) yield the highest coherence.
The outputs of the scene descriptor model are shown in Table \ref{tab:clusters} and Figure \ref{fig:descriptor_trajectories}.
Table \ref{tab:clusters} presents five example descriptors, each identified by representative words closest to them in the word embedding space, with their topic names manually annotated.
Figure \ref{fig:descriptor_trajectories} presents the corresponding narrative trajectories of a subset of these descriptors over the course of three sample screenplays, \emph{Pretty Woman}, \emph{Pulp Fiction}, and \emph{Pearl Harbor}, using a \emph{streamgraph} \citep{Byron2008}.
The descriptor weight $\mathbf{o}_t$ (Eq.~\ref{eq:predictor}) as a function of scene order is rescaled and smoothed, with the width of a region at a given scene indicating the weight value.
A critical event for each screenplay is indicated by a letter on each trajectory.
A qualitative analysis of such events indicates general alignment between scripts and their topic trajectories, and the potential applicability of this method to identifying significant moments in long-form documents.
\begin{table}[h!]
\small
\vspace{-0.2cm}
\begin{center}
\begin{tabular}{p{40pt}|l}
\textbf{Topic} & \textbf{Words} \\
\hline
Violence & fires blazes explosions grenade blasts \\
Residential & loft terrace courtyard foyer apartments \\
Military & leadership army victorious commanding elected \\
Vehicles & suv automobile wagon sedan cars \\
Geography & sand slope winds sloping cliffs
\end{tabular}
\end{center}
\caption{\footnotesize Examples of retrieved descriptors. Trajectories for ``Violence", ``Military", and ``Residential" are shown in Fig. \ref{fig:descriptor_trajectories}.}
\label{tab:clusters}
\vspace{-0.2cm}
\end{table}
\section{Related Work}
\label{sec:related}
Computational narrative analysis of large texts has been explored in a number of contexts \citep{Mani2012} and for a number of years \citep{Lehnert1981}.
More recent work has analyzed narrative from a plot \citep{Chambers2008,Goyal2010} and character \citep{Elsner2012,Bamman2014} perspective.
While movie narratives have received attention \citep{Bamman2013,Chaturvedi2018,Kar2018}, the computational analysis of entire screenplays has not been as common.
Notably, \citet{Gorinski2015} introduced a summarization method that takes into account an entire script at a time, extracting graph-based features that summarize the key scene sequences.
\citet{Gorinski2018} then build on top of this work, crafting additional features for use in a specially-designed multi-label encoder.
Our work suggests an orthogonal approach -- our automatically learned scene representations offer an alternative to their feature-engineered inputs.
\citet{Gorinski2018} emphasize the difficulty of their tag prediction task, which we find in our tasks as well.
One possibility we consider is that at least some of this difficulty owes not to the length or richness of the text per se, but rather to the complexity of the tag taxonomy.
The pattern of results we obtain from a similarity-based scoring measure offers a significantly brighter picture of model performance, and suggests more broadly that the standard multilabel F1 measure may not be appropriate for complex, human-crafted tag sets \citep{Maynard2006}.
Nevertheless, dealing with long-form text remains a significant challenge.
One possible solution is to infer richer representations of latent structure by using a structured attention mechanism \citep{Liu2018}, which might highlight key dependencies between scenes in a script.
Another method could be to define auxiliary tasks as in \citet{Jiang2018} to encourage better selection and memorization.
Lastly, sparse versions of the softmax function \citep{Martins2016} can be used to enforce the notion that salient information for downstream tasks is sparsely distributed across the screenplay.
\section{Conclusion}
In this work, we propose and evaluate various neural network architectures for learning fixed-dimensional representations of full-length film scripts.
We hypothesize that designing the network to mimic the documents' internal structure will boost performance. Experiments conducted on two tag prediction tasks provide evidence in favour of this hypothesis, confirming the benefits of (1) using hierarchical attention-based models and (2) incorporating distinctions between different kinds of scene components directly into the model.
Additionally, as a means of exploring the information contained within scene-level embeddings, we presented an unsupervised technique for bootstrapping ``scene descriptors" and visualizing their trajectories through the screenplay.
For future work, we plan to investigate richer ways of incorporating character identities into the model.
For example, character embeddings could be used to analyze character archetypes across different movies.
A persona-based characterization of the screenplay would provide a complementary view to the plot-based analysis elucidated here.
Finally, as noted at the outset, our structure-aware methods are fundamentally generalizable, and can be adapted to natural language understanding across virtually any domain in which structure can be extracted, including books, technical reports, and online chat logs, among others.
\section{Introduction}\label{sec.introduction}
In a world where technology pervades both businesses and homes, communications have become essential. Because information is centralized and data is stored in the cloud, a communications failure (especially in a business environment) can cause chaos. If this happens in a town or city rather than in a single enterprise, the consequences can be much more serious, especially if it is due to a natural disaster or some other kind of catastrophe.
In a communications outage, the goal is to establish a temporary communications point that provides at least basic connectivity, so that control of the situation can be maintained.
Unlike centralized communications, decentralized or distributed communications enable devices in a network to communicate directly, without requiring a central device. Between these two topologies there is also an intermediate case called hybrid (see Fig.~\ref{networktopologies}).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{nettypes.png}
\caption{Network topologies.}
\label{networktopologies}
\end{figure}
In order to establish a communications point in a place where communications have broken down, or where a temporary communication point is desired, this paper proposes a MANET (Mobile Ad Hoc Network) based on robots, which could adopt the acronym R-MANET (Robotic Mobile Ad Hoc Network) or simply RANET (Robotic Ad Hoc Network). The robots are in charge of forming a multi-hop network \cite{ref8} that ultimately provides a communications point at the location of the final nodes (robots).
Robotics has increasingly entered homes, to the point that we have kitchen robots \cite{ref20} and robots that clean the floor for us \cite{ref19}. The latter, robotic vacuum cleaners, have several advantages, such as automatically returning to the battery charging station when necessary, so they are also being used for tasks unrelated to cleaning. The proposal therefore uses vacuum robots as network nodes.
In this work, the security of the proposed system is an essential component. We address the authentication of the different network nodes, verifying that each of them is authorized to join the network. In addition, a data encryption scheme is proposed, so that communications are secure and information is protected from sender to receiver. An IDentity-Based SignCryption (IBSC) scheme \cite{refIDBS} provides confidentiality, authenticity, and integrity, both among peers and between the base station and the robots.
The following section discusses a series of works close to the proposal. Section 3 explains the proposed system in more detail, and Section 4 details the components of each robot-based network node. Section 5 presents the security mechanisms used in the proposed system. Finally, a conclusion closes the paper.
\section{Related works}\label{sec.relatedworks}
One of the fields in which decentralised communications can be very important is the IoT (Internet of Things) \cite{ref1}. More and more devices are connected, and it can be very useful for them to form a Wireless Sensor Network (WSN). Often, this type of network is the solution for sending data to the cloud when a device has no direct Internet connection but can forward the information through neighbouring devices until it reaches one with an available connection. In \cite{ref11}, different multi-hop routing protocols for ad hoc wireless networks are studied and compared. In this type of network, it is important to measure transmission rates between nodes so that the optimal path for sending information quickly and efficiently can be chosen \cite{ref12}.
In the IoT, different works are based on the use of the Raspberry Pi for data collection through sensors, forming a WSAN (Wireless Sensor and Actuator Network) in which the devices build a MANET for data transmission among themselves \cite{ref9}. Another device used in these sensor networks is the Arduino microcontroller, as in \cite{ref10}, which forms a network of sensors for temperature monitoring to help air conditioning systems operate more efficiently.
The increase in the computing capacity of smartphones has also made them useful for decentralized networking. Bluetooth is one of the technologies chosen to form MANETs between mobile devices \cite{ref13}. These devices are also used to create VANETs that allow the exchange of information between vehicles on the road \cite{ref14}, so that they can share useful information such as traffic status. The ability to obtain data through built-in sensors means that smartphones can be mounted on a mobile platform and moved to a location to collect data \cite{ref15}, sometimes forming a MANET to deploy several devices at once.
MANET networks are also mounted on drones, such as in \cite{ref2}, where these networks are used to increase the coverage range of the remote control.
Apart from the tasks for which they were designed \cite{ref5}, iRobot Roomba robots are able to perform other actions thanks to their Open Interface, through which commands can be sent. This, in addition to their low cost, has led to their use for different jobs \cite{ref3}. In \cite{ref4}, the authors control an iRobot robot using an external keyboard and voice commands. In \cite{ref6}, a camera is placed on a Roomba robot to compare the captured video with a previous recording and detect abandoned objects in the examined area. Moreover, by applying fuzzy logic to these robots \cite{ref7}, it is possible to develop navigation systems that compute more optimal routes while avoiding obstacles.
In a decentralised network, security is very important to take into account. Two aspects to study are the form of authentication in the network and the protection of the data sent between nodes until it reaches its final destination. Some authors propose models for the authentication and auto-configuration of nodes in MANETs \cite{ref16}, and even ways of preventing impersonation attacks using a double authentication factor \cite{ref17}. Apart from this, \cite{ref18} studies different cryptographic protocols for use in MANETs in terms of energy consumption, which is important in mobile devices, concluding that the AES standard offers the best performance among the algorithms analyzed.
\section{Proposal description}\label{sec.proposaldescription}
The system proposed in this paper is a MANET in which the network nodes are robots. The aim is to place the nodes in the positions needed to form a network that establishes a communications point at the end point. This communications point provides a voice and/or data connection in an area that for some reason is out of communication and where communications with other places are needed. Using robots as network nodes allows each node to be moved remotely to the desired location, which can be important if the area is too dangerous for people to transport equipment.
To start the deployment of the robots that will form the network, the area must first be studied and the number of nodes needed calculated. This calculation considers the maximum range of the wireless technology used for the connection between nodes; note that theoretical maximum ranges can decrease considerably in practice. The technology chosen to deploy the network is Wi-Fi \cite{ref21}, which has a theoretical maximum range of 100 meters. If, for example, we wanted to establish a communications point 500 meters from the starting area, in theory we would need a network of 5 nodes to reach the desired area, plus the base station located at the starting point, from which the signal is transmitted and the network is controlled. The diagram in Fig.~\ref{network} shows the general idea of the proposal.
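The node count follows directly from the hop range; as a trivial sketch of the calculation (theoretical ranges only, as noted above):
\begin{verbatim}
import math

def nodes_needed(distance_m, hop_range_m=100):
    # one robot per Wi-Fi hop; the base station is counted separately
    return math.ceil(distance_m / hop_range_m)

print(nodes_needed(500))   # -> 5 robots, plus the base station
\end{verbatim}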
\begin{figure}[htbp]
\centering
\includegraphics[width=0.489\textwidth]{network2.png}
\caption{Network scheme.}
\label{network}
\end{figure}
The system proposed to deploy a temporary communication point through the use of robots is based on different functionalities: remote control of robots, streaming, object tracking and wireless network deployment.
First of all, there is the remote control of the robots. This functionality allows each node of the network to be moved remotely to a previously decided point. From the base station it is possible to select a robot within the network and send it movement commands in real time.
Secondly, we have audio and video streaming \cite{ref22}. To move the robots remotely, it is necessary to know where they are at any given moment, so this second functionality is essential. Thanks to streaming, it is possible to visualize the environment in which the robot is located while it is remotely controlled from the base station. In addition, audio is included in the stream in case it is useful to listen to the robot's surroundings. To implement this functionality, the base station hosts a streaming server to which each robot sends its audio and video signal. The RTSP protocol \cite{ref23} is used to send this signal from the robots to the streaming server, and the RTMP protocol \cite{ref24} is used to deliver the video from the server to the base station equipment. In both cases, port 1935 is used (see Fig.~\ref{streaming}).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.489\textwidth]{streaming3.png}
\caption{Streaming scheme.}
\label{streaming}
\end{figure}
To support these two functions, a web application is incorporated in the base station. It shows one block per robot, containing the video received through streaming together with buttons for sending movement commands to that robot.
Third is object tracking \cite{ref25}, which is used for the most interesting part of the proposal. Directing each robot remotely to the desired location can be a tedious task (especially if the number of nodes is large), so we propose that by controlling only one of the robots, the rest follow it autonomously. For this, a master node is selected and the rest act as its slaves. Each robot has a QR code on the back that identifies it within the network. Each slave robot performs image recognition, looking for QR codes in the images captured by its camera, and checks whether a detected QR code corresponds to the robot it has been programmed to follow. The flowchart in Fig.~\ref{trackingflowchart} shows this process.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.25\textwidth]{trackingflowchart.png}
\caption{Flowchart of robot tracking.}
\label{trackingflowchart}
\end{figure}
Once a slave robot has detected the QR code of the robot to be followed, it performs calculations to establish the appropriate movement. A proximity sensor is used, so that if the robot is more than a certain distance away, it starts following its predecessor. In addition, depending on the area of the image in which the QR code of the front node is detected, it moves in a certain direction (see Fig.~\ref{qrdetection}).
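A minimal sketch of this decision logic is shown below. The pyzbar library, the distance threshold, and the image-thirds rule are illustrative assumptions; the paper fixes no particular implementation.
\begin{verbatim}
from pyzbar.pyzbar import decode   # assumed QR decoding library

FOLLOW_DISTANCE_CM = 40            # assumed proximity threshold

def movement_command(frame, leader_id, distance_cm):
    # frame: numpy image from the onboard camera
    for qr in decode(frame):
        if qr.data.decode() != leader_id:
            continue                     # some other robot's code
        if distance_cm <= FOLLOW_DISTANCE_CM:
            return "stop"                # close enough to the leader
        centre_x = qr.rect.left + qr.rect.width / 2
        third = frame.shape[1] / 3       # split the image into thirds
        if centre_x < third:
            return "turn_left"
        if centre_x > 2 * third:
            return "turn_right"
        return "forward"
    return "search"                      # leader not visible: rotate
\end{verbatim}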
\begin{figure}[htbp]
\centering
\includegraphics[width=0.48\textwidth]{tracking2.jpg}
\caption{QR detection.}
\label{qrdetection}
\end{figure}
The last functionality is responsible for deploying a wireless network at each node. Each robot receives the Wi-Fi signal of its predecessor, so that it can reach the base station by hopping through the network, and in turn deploys a new Wi-Fi network so that the next node can connect to it, maintaining a connection chain between the base station and the end node.
\section{Node description}\label{sec.nodedescription}
So far, the proposed system for deploying a RANET has been described, but the technologies and elements of each network node, and the type of robot used, have not been specified.
Each node is built around an iRobot Roomba \cite{ref31} vacuum robot connected to a Raspberry Pi, these two forming the core of the node. The Raspberry Pi sends commands to the robot and is also responsible for communications with the base station and the other nodes of the network.
As can be seen in the connection diagram (see Fig.~\ref{node}), the Raspberry Pi is the central element of the node: all other elements are connected to it through its USB ports, GPIO pins, or special-purpose ports.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.489\textwidth]{node.png}
\caption{Node wiring diagram.}
\label{node}
\end{figure}
First, there is the camera, which is connected through a dedicated port on the Raspberry Pi. Second, the USB ports are used to connect a second Wi-Fi adapter and a microphone for audio capture. Third, the GPIO pins connect an ultrasonic sensor for obstacle detection (with a resistor that adjusts the voltage delivered to the Raspberry Pi) and the Roomba robot itself. To connect the robot, a resistive voltage divider \cite{ref32} is placed on the Roomba transmit pin to reduce the 5V it supplies to at most 3.3V, the maximum tolerated by the Raspberry Pi receive pin:
$$ V_{out} = \frac{R_{2}}{R_{1} + R_{2}} \cdot V_{in} $$
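For instance (component values given only as an illustration), with $R_1 = 5.1\,\mathrm{k}\Omega$ and $R_2 = 10\,\mathrm{k}\Omega$, $V_{out} = \frac{10}{15.1} \cdot 5\,\mathrm{V} \approx 3.3\,\mathrm{V}$, within the limit of the Raspberry Pi pin.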
Finally, there is the Raspberry Pi's power supply. Taking advantage of the fact that the Roomba can power external devices, its battery is used to avoid incorporating an external one, with a voltage regulator converting the 14.4V provided by the Roomba battery to the 5V that the Raspberry Pi needs.
In addition to the Wi-Fi interface that the Raspberry Pi incorporates, a second interface is added through a USB port to extend the network received by the first interface: one interface operates in the usual client mode, and the second is configured in Ad Hoc mode. For simpler configuration of the network interfaces, and to allow external devices to connect to the wireless networks created by the nodes, a single class B network \cite{ref33} is configured, which allows the connection of 65534 devices (discounting the base station and the broadcast address), whether network nodes or end devices. Note that each network node uses 2 IP addresses, as it has two wireless cards, whereas each end device uses a single address. IP addressing is carried out automatically by a DHCP server in the base station: a range of addresses is reserved for the network nodes, and all other addresses are assigned automatically.
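As an illustration, a possible base-station DHCP configuration in dnsmasq syntax is shown below; the concrete class B network (172.16.0.0/16) and the reserved block are assumptions, since the paper does not fix them.
\begin{verbatim}
# /etc/dnsmasq.conf on the base station (illustrative values)
interface=wlan0
# hand out most of the class B network automatically, 12 h leases
dhcp-range=172.16.1.1,172.16.255.254,255.255.0.0,12h
# 172.16.0.2 - 172.16.0.254 are kept outside the pool, statically
# assigned to the two wireless interfaces of each robot node
\end{verbatim}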
\section{Security}\label{sec.security}
The security of communications is a crucial point of the system.
An ID-Based SignCryption (IBSC) scheme is used to achieve secure communications.
This kind of cryptosystem is a combination of ID-Based Encryption (IBE) and ID-Based Signature (IBS), in which all shared messages are encrypted and signed.
The proposed scheme simplifies key management because no public key infrastructure needs to be defined.
Apart from this, the scheme has low computational complexity and is efficient in terms of memory.
In this RANET system, the identification of nodes is a crucial point.
Each robot has a QR code on the back that identifies it with a unique ID; this is the public identity used for the IBSC.
First of all, the mathematical tools used are described.
Let $(G, +)$ and $(V, \cdot)$ be two cyclic groups of the same prime order $q$.
$P$ is a generator of $G$, and there is a bilinear pairing map $\hat{e} : G \times G \rightarrow V$
satisfying the following conditions:
\begin{itemize}
\item Bilinear: $ \forall P, Q \in G$ and $\forall a, b \in \mathbb{Z}$, $\hat{e}(aP, bQ)
= \hat{e}(P, Q)^{ab} $
\item Non-degenerate: $ \exists P_1, P_2 \in G $ such that $\hat{e}(P_1,P_2) \neq 1$. This means that if $P$ is a generator of $G$, then $\hat{e}(P,P)$ is a generator of $V$.
\item Computability: there exists an algorithm to compute $\hat{e}(P,Q), \forall P,Q \in G$
\end{itemize}
\par Note that some hash functions, denoted as follows, are also needed: $H_1: \{0, 1\}^* \rightarrow G^*$, $H_2 : \{0, 1\}^* \rightarrow \mathbb{Z}^*_q$, $H_3 : V \rightarrow \{0, 1\}^n$,
where $n$ is the size of the message.
In addition, $a \xleftarrow{r} N$ denotes the random selection of an element $a$ from a set $N$,
$a \leftarrow b$ denotes the assignment of the value $b$ to $a$, and $||$ denotes concatenation.
The steps of the signcryption scheme are the following (a minimal code sketch follows the list):
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item \textbf{SETUP:} This step is executed at the base station to establish the basic parameters and to generate the base station public key ($pk_{bs}$) and secret key ($sk_{bs}$). To achieve this, a prime value $q$ is generated based on some private data $k \in \mathbb{Z}$, and the system selects two groups $G$ and $V$ of order $q$ and a bilinear pairing map $\hat{e}: G \times G \rightarrow V$. $P \in G$ is selected randomly, and the hash functions $H_1$, $H_2$ and $H_3$ are also chosen.
$$ sk_{bs} \stackrel{r}{\leftarrow} \mathbb{Z}^*_q $$
$$ pk_{bs} \leftarrow sk_{bs} \cdot P $$
\item \textbf{EXTRACT ($ID_r$):} In this step, the key pair of each robot is generated from its ID.
The robot $r$ with identity $ID_r$ sends its information to the base station.
The public key $Q_{ID_r} \in G$ and the secret key $S_{ID_r} \in G$ are computed using $sk_{bs}$.
This key exchange is performed in a secure way.
$$ Q_{ID_r} \leftarrow H_1(ID_r) $$
$$ S_{ID_r} \leftarrow sk_{bs} \cdot Q_{ID_r} $$
\item \textbf{SIGNCRYPTION ($S_{ID_{ra}}$, $ID_{rb}$, $m$):} If a robot $A$ with identity $ID_{ra}$ sends a message $m \in\{0,1\}^n$ to the robot $B$ with identity
$ID_{rb}$, the information is encrypted and signed. The receiver's public key is derived from $ID_{rb}$; the message is then signed with $S_{ID_{ra}}$ and encrypted under $Q_{ID_{rb}}$, giving as a result $\sigma$, a tuple of three components $(c, T, U)$.
$$ Q_{ID_{rb}} \leftarrow H_1(ID_{rb}) $$
$$ x \stackrel{r}{\leftarrow}\mathbb{Z}^*_q$$
$$ T \leftarrow x \cdot P$$
$$ r \leftarrow H_2(T || m)$$
$$ W \leftarrow x \cdot pk_{bs}$$
$$ U \leftarrow r \cdot S_{ID_{ra}} + W $$
$$ y \leftarrow \hat{e}(W, Q_{ID_{rb}}) $$
$$ k \leftarrow H_3(y)$$
$$ c \leftarrow k \oplus m$$
$$ \sigma \leftarrow (c, T, U)$$
\item \textbf{UNSIGNCRYPTION ($ID_{ra}$, $S_{ID_{rb}}$, $\sigma$):} The robot $B$ receives $\sigma$ and parses it.
If everything is correct, the message $m \in \{0,1\}^n$ is returned; otherwise, if there is a problem with
the signature or the encryption of $m$, $\bot$ is returned. The sender's public key is
derived from $ID_{ra}$, and the message is then decrypted with $S_{ID_{rb}}$.
$$ Q_{ID_{ra}} \leftarrow H_1(ID_{ra}) $$
$$ \mathrm{parse} \quad \sigma \quad \mathrm{as} \quad (c, T, U) $$
$$ y \leftarrow \hat{e}(S_{ID_{rb}}, T)$$
$$ k \leftarrow H_3(y) $$
$$ m \leftarrow k \oplus c $$
$$ r \leftarrow H_2(T || m) $$
Verification:
$$\hat{e}(U, P) == \hat{e}(Q_{ID_{ra}}, pk_{bs} )^r \cdot \hat{e}(T, pk_{bs})$$
Note: if the verification is successful, $m$ is returned; otherwise $\bot$ is returned.
\end{itemize}
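To make the scheme concrete, the listing below sketches the four steps in Python with the Charm-Crypto library. The paper names no implementation, so the curve (\texttt{SS512}, a symmetric pairing), the byte-level realisations of $H_2$ and $H_3$, and all function names are assumptions. Note that Charm uses multiplicative notation, so the scalar products $x \cdot P$ above appear as \texttt{P ** x}.
\begin{verbatim}
import hashlib
from charm.toolbox.pairinggroup import PairingGroup, ZR, G1, pair

group = PairingGroup('SS512')      # symmetric pairing (assumption)
P = group.random(G1)               # SETUP: public generator of G
sk_bs = group.random(ZR)           # base station secret key
pk_bs = P ** sk_bs                 # base station public key

def H1(identity):                  # {0,1}* -> G*
    return group.hash(identity, G1)

def H2(T, m):                      # {0,1}* -> Z_q*
    d = hashlib.sha256(group.serialize(T) + m).digest()
    return group.init(ZR, int.from_bytes(d, 'big'))

def H3(y, n):                      # V -> {0,1}^n, realised as n bytes
    return hashlib.shake_256(group.serialize(y)).digest(n)

def xor(a, b):
    return bytes(u ^ v for u, v in zip(a, b))

def extract(identity):             # EXTRACT, run at the base station
    Q = H1(identity)
    return Q, Q ** sk_bs           # (Q_ID, S_ID)

def signcrypt(S_a, id_b, m):       # m: message as bytes
    Q_b = H1(id_b)
    x = group.random(ZR)
    T = P ** x
    r = H2(T, m)
    W = pk_bs ** x
    U = (S_a ** r) * W
    c = xor(H3(pair(W, Q_b), len(m)), m)
    return c, T, U                 # sigma = (c, T, U)

def unsigncrypt(id_a, S_b, sigma):
    c, T, U = sigma
    m = xor(H3(pair(S_b, T), len(c)), c)
    r = H2(T, m)
    if pair(U, P) == (pair(H1(id_a), pk_bs) ** r) * pair(T, pk_bs):
        return m
    return None                    # verification failed (bottom)
\end{verbatim}
For example, the base station runs \texttt{extract} for each robot's QR identity; robot $A$ then calls \texttt{signcrypt} with its secret key and $B$'s identity, and $B$ recovers the message with \texttt{unsigncrypt}.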
\section{Conclusion}\label{sec.conclusion}
MANETs allow different devices to connect and exchange information in a decentralized way, with behavior similar to that of a P2P network. Their main characteristic is that the nodes that form them are in motion. In a MANET built from robots, the ability to move nodes remotely is added: it is not necessary to travel to the points where the nodes will be located, since they can be moved remotely thanks to streamed video. In addition, the proposal incorporates object tracking, which allows the rest of the robots to follow autonomously when one of them is moved.
Using the Raspberry Pi as the primary device of each network node gives the system many possibilities. A device running a Linux operating system makes it possible to add services for various purposes and to develop applications useful to system users. The USB ports and GPIO pins allow different types of sensors for data acquisition, or even actuators that interact with the environment, to be added. The low power consumption of these devices makes it possible to use a small battery as a power source and to run for a long time before the battery is drained.
In the proposed system, Wi-Fi is used as the wireless technology in order to take advantage of the interface that the Raspberry Pi incorporates, but any other wireless technology with an interface that can be connected to the Raspberry Pi could be used, potentially allowing a greater coverage range.
The incorporation of security mechanisms provides added value to the system, ensuring that communications of client devices connected to the network are secure. Using identity-based cryptography, we provide encryption of information and authentication by signature, with low computational complexity.
Apart from the functionality for which this system has been designed, it can be used to perform video surveillance tasks using the camera and microphone incorporated in each of the network nodes, and can be moved thanks to robots to view images from different areas. This can be quite useful in hard-to-reach areas, or where people's physical integrity may be at risk.
To improve object tracking, the static QR code on the back of each robot could in the future be replaced by a small screen, allowing the code to be changed from time to time. This brings extra security to the system: a code becomes obsolete within minutes, preventing anyone from moving the robots to another location using a QR code that is too old.
\section*{Acknowledgments}
Research supported by the Spanish Ministry of Economy and Competitiveness, TESIS2015010102, the European FEDER Fund, and the CajaCanarias Foundation, under Projects TEC2014-54110-R, RTC-2014-1648-8, MTM2015-69138-REDT and DIG02-INSITU.
\section{Introduction}\label{sec.introduction}
In a world in which technology has been incorporated, both in business and homes, communications have become essential. Due to the centralization of information and storage of data in the cloud, a failure in communications (especially in the business environment) can cause chaos. If this happens in a population nucleus instead of an enterprise, the consequences can be much more serious, especially if it is due to a natural disaster or some kind of catastrophe.
When there is a communications outage situation, it is about finding a solution, establishing a temporary communications point that allows, at least, to have a connection point that allows basic communications to maintain control of the situation.
Unlike centralized communications, decentralized or distributed communications enable direct communication between devices in a network. This means that in this type of communications it is not necessary to have a central device so that the different devices in the network can communicate. Between these two topologies, there is also an intermediate case called hybrid (See Fig.~\ref{networktopologies}).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{nettypes.png}
\caption{Network topologies.}
\label{networktopologies}
\end{figure}
In order to establish a communications point in a place where communications have broken down or a temporary communication point is desired, this paper proposes a MANET (Mobile Ad Hoc Network) based on the use of robots that could adopt the acronym R-MANET (Robotic Mobile Ad Hoc Network) or directly RANET (Robotic Ad Hoc Network). The robots will be in charge of forming a multi-hop \cite{ref8} network to finally provide a communications point to the place where the network final nodes (robots) are established.
Robotics has increasingly gone into homes, to the point that we have kitchen robots \cite{ref20}, or robots that clean the floor \cite{ref19} for us. The latter, vacuum cleaners have several advantages such as the possibility of returning automatically to the battery charging station when necessary, so they are being used for other tasks that are not related to cleaning. The proposal therefore uses vacuum robots as network nodes.
In this work, the security of the proposed system is an essential component. We work in the form of authentication of the different network nodes, verifying that each of them are authorized by the network. In addition, a way of encrypting the data is proposed, so that communications are secure and the information will be protected from the sender to the receiver. An IDentity-Based Signcryption (IDBS) \cite{refIDBS} is used for communication confidentiality, authenticity and integrity, both among peers, and between the central station and robots.
The following section discusses a series of works close to the proposal made. Section 3 explains the proposed system in more detail and section 4 also details the components of each robot-based network node. Section 5 shows the security mechanisms used in the proposed system. Finally, a conclusion close the paper.
\section{Related works}\label{sec.relatedworks}
One of the fields in which decentralised communications can be very important is in IoT \cite{ref1} (Internet of Things). More and more devices are connected, and it may be really interesting that between them they form a Wireless Sensor Network (WSN). Many times, this type of network can be the solution for sending data to the cloud when the device does not have a direct Internet connection, but jumping between devices is able to send the information to one with an Internet connection available. In \cite{ref11}, different Multi-Hop routing protocols for Ad Hoc wireless networks are studied and compared. In this type of network, it is important to measure transmission rates between nodes, so that you can choose the optimal way to send information more quickly and efficiently \cite{ref12}.
In IoT, different works are based on the use of Raspberry Pi for data collection through sensors that form WSAN (Wireless Sensor and Actuator Network), which form a MANET between the different devices for data transmission between them \cite{ref9}. Another device used in these sensor networks is the Arduino microcontroller, as in \cite{ref10}, which form a network of sensors for temperature monitoring and to help air conditioning systems operate more efficiently.
The increase in computing capacity of smartphones has also made them useful for decentralized networking. Bluetooth is one of the technologies chosen to form MANET networks between mobile devices \cite{ref13}. These devices are also used to create VANETs that allow the exchange of information between vehicles on the road \cite{ref14}, so that they can share some useful information such as traffic status. The ability to obtain data through built-in sensors means that smartphones can be mounted on a device to move it to one location and obtain data \cite{ref15}, sometimes forming a MANET to deploy several devices at once.
MANET networks are also mounted on drones, such as in \cite{ref2}, where these networks are used to increase the coverage range of the remote control.
The iRobot Roomba robots apart from the tasks for which they have been designed \cite{ref5}, are able to perform other actions thanks to the Open Interface that it has, through which it is possible to send commands. This, in addition to the low cost of these robots, has caused them to be used for different jobs \cite{ref3}. In \cite{ref4}, the authors have worked on the control of an iRobot robot using an external keyboard and voice commands. In \cite{ref6}, a camera is placed on a Roomba robot to compare the captured video with one previously recorded to detect if there are abandoned objects in the examined area. Moreover, by applying fuzzy logic to these robots \cite{ref7} it is possible to develop navigation systems that allow them to obtain more optimal routes while are avoiding obstacles in their path.
In a decentralised network, it is very important to take into account its security. Two of the aspects to be studied are the form of authentication in the network and the protection of the data that is sent between the different nodes until reaching the final destination. Some authors propose models for authentication and auto-configuration of nodes in MANET networks \cite{ref16} and even on preventing impersonation attacks using double authentication factor \cite{ref17}. Apart from this, \cite{ref18} studies different cryptographic protocols for use in MANETs at the level of energy consumption, which is important in mobile devices, concluding that the AES standard is the one that produces the best performance among the different algorithms analyzed.
\section{Proposal description}\label{sec.proposaldescription}
The system proposed in this paper is a MANET in which the different network nodes are robots. The aim is to place the different nodes in the necessary positions to form a network in which a communications point is established at the end point. This communication point will provide the possibility of having a voice and/or data connection in an area that for some reason is out of communication, and where communications with other places are needed. The use of robots as network nodes allows each node to be moved remotely to the desired location, which can be important if it is a dangerous area for people to transport equipment.
To start with the deployment of the robots that will form the network, first you must study the area and calculate the number of nodes needed to extend the network. This calculation is made considering the maximum range of wireless technology used for the connection between the nodes. It should be noted that in general, the theoretical maximum ranges in practice can decrease considerably. The technology chosen to deploy the network is Wi-Fi \cite{ref21}, which has a maximum range of 100 meters. If, for example, we wanted to establish a communications point 500 meters from the starting area, the theory tells us that we would have to form a network with 5 nodes to reach the desired area. To this we must add the base station located at the starting point, from which the signal is transmitted and the network is controlled. The diagram in Fig.~\ref{network} shows the general idea of the proposal.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.489\textwidth]{network2.png}
\caption{Network scheme.}
\label{network}
\end{figure}
The system proposed to deploy a temporary communication point through the use of robots is based on different functionalities: remote control of robots, streaming, object tracking and wireless network deployment.
First of all, there is the remote control of robots. This functionality allows each node of the network to be moved remotely to the point where it has been previously decided. From the base station it is possible to select a robot within the network and send it real-time movement commands.
Secondly, we have audio and video streaming \cite{ref22}. To be able to move the robots remotely, it is necessary to know where they are at any given moment, so this second functionality is essential. Thanks to streaming, it is possible to visualize the environment in which the robot is located, at the same time that it is remotely controlled from the base station. In addition, audio is included in the streaming in case it would be useful to listen to the sound of the environment in which it is moving. To carry out this functionality, the base station must have a streaming server to which each robot will send the audio and video signal. The RTSP protocol \cite{ref23} is used to send this signal from the robots to the streaming server, and the RTMP protocol \cite{ref24} is used for receiving the video from the server to the base station equipment. In both cases, port 1935 is used (See Fig.~\ref{streaming}).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.489\textwidth]{streaming3.png}
\caption{Streaming scheme.}
\label{streaming}
\end{figure}
To perform these two functions, a web application is incorporated in the base station. It shows different blocks, and each one of the blocks refers to a robot, showing in them the video received through streaming together with buttons that allow sending movement orders to each robot.
Third is object tracking \cite{ref25}, which is used for the most interesting part of the proposal. Directing each robot remotely to the desired location can be a tedious task (especially if the number of nodes is too large), so it is proposed that by controlling only one of the robots the rest will be able to follow it autonomously. For this, a master node will be selected and the rest will be slaves of it. Each robot has a QR code on the back that identifies it within the network. Each slave robot has to do image recognition, looking for any QR code in the images captured by your camera. This is when the robot has to check if the detected QR code corresponds with the robot you have programmed to follow. The flowchart of Fig.~\ref{trackingflowchart} shows this process.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.25\textwidth]{trackingflowchart.png}
\caption{Flowchart of robot tracking.}
\label{trackingflowchart}
\end{figure}
Once a slave robot has detected the QR code of the robot to be followed, it performs calculations to establish the adequate movement. To do this, a proximity sensor is used, so that if the robot is more than a certain distance away, it has to start following the predecessor robot. In addition, depending on the area of the image in which the QR of the front node is detected, it will move in a certain direction (See Fig.~\ref{qrdetection}).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.48\textwidth]{tracking2.jpg}
\caption{QR detection.}
\label{qrdetection}
\end{figure}
The latest functionality is responsible for deploying a wireless network at each node of the network. Each robot receives the Wi-Fi signal from its predecessor, so that it can have a connection to the base station by jumping between the different nodes in the network. In turn, they need to deploy a new Wi-Fi network so that the next node can connect to it to maintain a connection chain between the base station and the end node.
\section{Node description}\label{sec.nodedescription}
So far, the proposed system for deploying a RANET has been described, but the technologies and elements available to each network node have not been defined. It has also not been specified what type of robot is used.
Each node is built around an iRobot Roomba \cite{ref31} vacuum robot, which is connected to a Raspberry Pi, these two forming the core of the node. The Raspberry Pi is connected to the robot to send commands to it and is also responsible for establishing communications with the base station and the other nodes of the network.
As can be seen in the connection diagram of the different elements (See Fig.~\ref{node}), the Raspberry Pi is the central element of the node (all elements are connected to it thanks to the USB ports that it incorporates, GPIO pins or ports for specific use).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.489\textwidth]{node.png}
\caption{Node wiring diagram.}
\label{node}
\end{figure}
First of all there is the camera, which is connected through a specific port that incorporates the Raspberry Pi. Second are the USB ports, which are used to connect a second Wi-Fi adapter and a microphone for audio reception. Thirdly, an ultrasonic sensor is connected to the GPIO pins for detecting obstacles (adding a resistance that adjusts the tension provided on the Raspberry Pi) and the Roomba robot. To connect the robot, a resistive voltage divider \cite{ref32} is incorporated in the Roomba transmission pin to reduce the 5V supplied by it to a maximum of 3.3V, which allows the Raspberry Pi reception port.
$$ V_{out} = \frac{R_{2}}{R_{1} + R_{2}} \cdot V_{in} $$
And finally there is the connection of the Raspberry Pi's power supply. Taking advantage of the fact that the Roomba is able to provide power to external devices, the battery is used to avoid having to incorporate an external one, using a voltage regulator that converts the 14.4V that provides the Roomba battery to the 5V that Raspberry Pi needs.
In addition to the Wi-Fi interface that Raspberry Pi incorporates, a second interface has been added through the USB port to extend the network that gets the first interface, so that an interface is used in the usual reception mode, and the second is configured in Ad Hoc mode. For simpler configuration of network interfaces, as well as to allow external devices to connect to wireless networks created by different network nodes, a single class B network \cite{ref33} will be configured, which allows the connection of 65534 devices (discounting the base station and broadcast address), among which may be network nodes, or end devices. It should be noted that each network node will use 2 IP addresses, as it has two wireless cards; instead, each end device will use a single IP address. IP addressing of devices connected to the network is carried out automatically thanks to the incorporation of a DHCP server in the base station. To do this, a range of addresses is reserved for the network nodes, and all other addresses are assigned automatically.
\section{Security}\label{sec.security}
The security of communications is a crucial point of the system.
An ID-Based Signcryption scheme (IBSC) is used in order to achieve secure communications.
This kind of cryptosystem is a combination of ID-Based Encryption (IBE) and ID-Based Signature (IBS), where all the shared messages are encrypted and signed.
This proposed scheme offers the advantage of simplifying management of keys by not having to define a public key infrastructure.
A part of this, this scheme has a low computational complexity and it is efficient in terms of memory.
In this RANET system, the identification of nodes is a crucial point.
Each robot has a QR code on the back that identifies it with a unique ID, this is the public identification used for the IBSC.
First of all, some mathematical tools used is described.
Let consider $(G, +)$ and $(V, \cdot)$ be two cyclic groups of the same prime order $q$.
$P$ is a generator of $G$ and there is a bilinear map paring $\hat{e} : G \times G \rightarrow V$
satisfying the following conditions:
\begin{itemize}
\item Bilinear: $ \forall P, Q \in G$ and $\forall a, b \in \mathbb{Z}$, $\hat{e}(aP, bQ)
= \hat{e}(P, Q)^{ab} $
\item Non-degenerate: $ \exists P_1, P_2 \in G $ that $\hat{e}(P_1,P_2) \neq 1$. This means if $P$ is generator of $G$, then $\hat{e}(P,P)$ is a generator of $Q$.
\item Computability: there exists an algorithm to compute $\hat{e}(P,Q), \forall P,Q \in G$
\end{itemize}
\par Note, some hash functions denoted as follows are also needed: $H_1: \{0, 1\}^* \rightarrow G^*, H_2 : \{0, 1\}^* \rightarrow \mathbb{Z}^*_q, H_3 : \mathbb{Z}^*_q \rightarrow \{0, 1\}^n$,
where the size of the message is defined by $n$.
Apart of this, $a \xleftarrow{r} N$ denotes a selection of an element $a$ randomly from a set $N$,
$a \leftarrow b$ stands the assignation of the value $a$ to $b$ and $||$ is used for concatenation.
The steps used in this signcryption scheme are the following:
\begin{itemize}
\renewcommand{\labelitemi}{$\bullet$}
\item \textbf{SETUP:} This step is execute in the base station to the establishment of the basic parameters and to the generation of the base station public key ($pk_{bs}$) and the base station secret key ($sk_{bs}$). To achieve this, a prime value $q$ is generated based on some private data $k \in \mathbb{Z}$ and the system select two groups $G$ and $V$ of order $q$ and a bilinear pairing map $\hat{e}: G \times G \rightarrow V$. $P \in G$ is selected randomly and the hash functions $H_1$, $H_2$ and $H_3$ are also chosen.
$$ pk_{bs} \stackrel{r}{\leftarrow} \mathbb{Z}^*_q $$
$$ sk_{bs} \leftarrow pk_{bs} \cdot P $$
\item \textbf{EXTRACT ($ID_r$):} In this step, the keys for each robot are generated based on its ID.
The robot $r$ with identity $ID_r$ sends its information to the base station.
The public key $Q_{ID_r} \in G$ and the secret key $S_{ID_r} \in G$ are calculated taking into account $sk_{bs}$.
This exchange of key data is performed in a secure way:
$$ Q_{ID_r} \leftarrow H_1(ID_r) $$
$$ S_{ID_r} \leftarrow sk_{bs} \cdot Q_{ID_r} $$
\item \textbf{SIGNCRYPTION ($S_{ID_{ra}}$, $ID_{rb}$, $m$):} If a robot $A$ with identity $ID_{ra}$ sends a message $m \in\{0,1\}^n$ to the robot $B$ with identity
$ID_{rb}$, the information will be encrypted and signed. The receiver's public key is generated from $ID_{rb}$, and then the message is signed with $S_{ID_{ra}}$ and encrypted with $Q_{ID_{rb}}$, yielding $\sigma$, a tuple of three components $(c, T, U)$.
$$ Q_{ID_{rb}} \leftarrow H_1(ID_{rb}) $$
$$ x \stackrel{r}{\leftarrow}\mathbb{Z}^*_q$$
$$ T \leftarrow x \cdot P$$
$$ r \leftarrow H_2(T || m)$$
$$ W \leftarrow x \cdot pk_{bs}$$
$$ U \leftarrow r \cdot S_{ID_{ra}} + W $$
$$ y \leftarrow \hat{e}(W, Q_{ID_{rb}}) $$
$$ k \leftarrow H_3(y)$$
$$ c \leftarrow k \oplus m$$
$$ \sigma \leftarrow (c, T, U)$$
\item \textbf{UNSIGNCRYPTION ($ID_{ra}$, $S_{ID_{rb}}$, $\sigma$):} The robot $B$ receives $\sigma$ and parses the information.
If everything is right, the message $m \in \{0,1\}^n$ is returned. Otherwise, if there is a problem with
the signature or with the encryption of $m$, $\bot$ is returned. The sender's public key is
generated taking into account $ID_{ra}$, and then the message is decrypted with $S_{ID_{rb}}$.
$$ Q_{ID_{ra}} \leftarrow H_1(ID_{ra}) $$
$$ \text{parse} \quad \sigma \quad \text{as} \quad (c, T, U) $$
$$ y \leftarrow \hat{e}(S_{ID_{rb}}, T)$$
$$ k \leftarrow H_3(y) $$
$$ m \leftarrow k \oplus c $$
$$ r \leftarrow H_2(T || m) $$
Verification:
$$\hat{e}(U, P) == \hat{e}(Q_{ID_{ra}}, pk_{bs} )^r \cdot \hat{e}(T, pk_{bs})$$
Note: if the verification is successful $m$ is returned, otherwise $\bot$ is returned.
\end{itemize}
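To make the data flow of the scheme concrete, the following Python sketch mirrors the SIGNCRYPTION and UNSIGNCRYPTION steps above. It is only an illustrative outline: the pairing-group operations (\texttt{hash\_to\_G}, \texttt{scalar\_mul}, \texttt{add}, \texttt{pair} and the encoders) are hypothetical placeholders that a real implementation would take from a pairing library, and SHA-256 is used here as a stand-in for $H_2$ and $H_3$.
\begin{verbatim}
import hashlib, secrets

def H2(data: bytes, q: int) -> int:
    # H2 : {0,1}* -> Z_q^* (SHA-256 reduced mod q; illustration only)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q or 1

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# 'group' is a hypothetical pairing-group object providing:
#   hash_to_G(id)    H1 : {0,1}* -> G*
#   scalar_mul(a,P)  a*P in G          add(P,Q)  P+Q in G
#   pair(P,Q)        e(P,Q) in V       V_mul / V_pow  operations in V
#   encode / encode_V  serialization of G / V elements to bytes

def signcrypt(group, P, pk_bs, S_ID_a, ID_b, m, q):
    Q_ID_b = group.hash_to_G(ID_b)          # receiver public key
    x = secrets.randbelow(q - 1) + 1        # x <-r- Z_q^*
    T = group.scalar_mul(x, P)
    r = H2(group.encode(T) + m, q)
    W = group.scalar_mul(x, pk_bs)
    U = group.add(group.scalar_mul(r, S_ID_a), W)
    y = group.pair(W, Q_ID_b)
    k = hashlib.sha256(group.encode_V(y)).digest()[:len(m)]  # H3(y),
    return (xor_bytes(k, m), T, U)          # assumes len(m) <= 32 here

def unsigncrypt(group, P, pk_bs, S_ID_b, ID_a, sigma, q):
    c, T, U = sigma
    Q_ID_a = group.hash_to_G(ID_a)
    y = group.pair(S_ID_b, T)               # equals e(W, Q_ID_b)
    k = hashlib.sha256(group.encode_V(y)).digest()[:len(c)]  # k = H3(y)
    m = xor_bytes(k, c)
    r = H2(group.encode(T) + m, q)
    lhs = group.pair(U, P)
    rhs = group.V_mul(group.V_pow(group.pair(Q_ID_a, pk_bs), r),
                      group.pair(T, pk_bs))
    return m if lhs == rhs else None        # None plays the role of bottom
\end{verbatim}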
\section{Conclusion}\label{sec.conclusion}
MANET networks allow different devices to connect and exchange information in a decentralized way. The behavior of such networks is similar to that of a P2P network. The main characteristic of this kind of network is that the nodes that form it are in motion. In a MANET network with robots, the ability to move nodes remotely is added. In this way it is not necessary to travel to the different points where the nodes will be located; instead, they can be moved remotely thanks to streaming video visualization. In addition, the proposal incorporates object tracking, which allows the remaining robots to follow the first one autonomously when it is moved.
Using the Raspberry Pi as the primary device for each network node gives the system many possibilities. Having a device that runs a Linux operating system makes it possible to add different services and to develop applications that may be useful for system users. The USB ports and GPIO pins provide the possibility of adding different types of sensors for data acquisition, or even actuators that can interact with the environment. The low power consumption of these devices makes it possible to use a small battery as a power source and allows long operation before the battery is drained.
In the proposed system, Wi-Fi is used as the wireless technology in order to take advantage of the interface that the Raspberry Pi incorporates, but any other wireless technology whose interface can be connected to the Raspberry Pi could be used, potentially allowing a greater coverage range.
The incorporation of security mechanisms provides added value to the system, ensuring that the communications of client devices connected to the generated network are secure. Using identity-based cryptography, we provide both encryption of the information and authentication via signatures, with low computational complexity.
Apart from the functionality for which this system has been designed, it can be used to perform video-surveillance tasks using the camera and microphone incorporated in each of the network nodes, which can be moved thanks to the robots to view images from different areas. This can be quite useful in hard-to-reach areas, or where people's physical integrity may be at risk.
To improve object tracking, in this case based on QR codes, the design could be improved in the future by including a small screen on the back of each robot. In this way, the permanent QR code is replaced by a screen that allows the code to be changed from time to time. This change brings extra security to the system, since every few minutes the code in use would become obsolete, thus preventing anyone from moving the robots to another location using an outdated QR code.
\section*{Acknowledgments}
Research supported by the Spanish Ministry of Economy and Competitiveness, TESIS2015010102, the European FEDER Fund, and the CajaCanarias Foundation, under Projects TEC2014-54110-R, RTC-2014-1648-8, MTM2015-69138-REDT and DIG02-INSITU.
\section{EXTRA-SPECTRAL COMPONENTS}
Most of the prompt emission spectra of gamma-ray bursts (GRBs)
can be described by the well-known Band function \citep{ban93};
the photon number spectrum
$\propto \varepsilon^{\alpha}$
below $\varepsilon_{\rm p}$,
and $\propto \varepsilon^{\beta}$ above it.
The spectral peak energy $\varepsilon_{\rm p}$
is usually seen in the MeV range.
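For reference, a minimal NumPy sketch of the Band function is given below; the default parameter values are illustrative, not fits to any particular burst, and the normalization pivot is absorbed into $A$.
\begin{verbatim}
import numpy as np

def band(E, A=1.0, alpha=-1.0, beta=-2.3, E_peak=0.5):
    """Band photon spectrum N(E); E and E_peak in MeV.
    Below E_b = (alpha - beta)*E0 (with E0 = E_peak/(2+alpha)) the
    spectrum is a cut-off power law of index alpha; above E_b it is
    a pure power law of index beta, matched continuously at E_b."""
    E = np.asarray(E, dtype=float)
    E0 = E_peak / (2.0 + alpha)
    Eb = (alpha - beta) * E0
    low = A * E**alpha * np.exp(-E / E0)
    high = A * Eb**(alpha - beta) * np.exp(beta - alpha) * E**beta
    return np.where(E < Eb, low, high)
\end{verbatim}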
One of the main scientific targets for the {\it Fermi} satellite
was to investigate whether the spectral shape is consistent
with the Band function even in the GeV band.
{\it Fermi} detected GeV photons from
several very bright bursts ($E_{\rm iso}>10^{54}$ erg) such as
GRB 080916C \citep{916C}, GRB 090902B \citep{902B}, and
GRB 090926A \citep{926A}.
In such bursts the onset of the GeV emission is delayed with respect to
the MeV emission.
Some of them also have an extra spectral component above a
few GeV, distinct and additional to the usual Band function.
Interestingly, GRB 090902B and GRB 090510 \citep{abd09b,ack10} show
a further, soft excess feature below $\sim 20$ keV,
which is consistent with a continuation of the GeV power-law component.
While such spectra may be explained by the early onset of the afterglow
\citep{ghi10,kum10}, here we pursue the possibility of internal-shock origins.
\section{HADRONIC MODELS}
If the spectral excesses in GeV and keV bands have
the same origin, such a wide photon-energy range may imply
the cascade emission due to hadrons.
If GRBs accelerate ultra-high-energy protons,
synchrotron and inverse Compton (IC) emission from
an electron-positron pair cascade
triggered by photopion interactions of
the protons with low-energy photons \citep{boe98,gup07,asa07}
can reproduce power-law photon spectra
as seen in {\it Fermi}-LAT GRBs.
Through Monte Carlo simulations,
Asano et al. \citep{asa09b} have shown that
a proton luminosity much larger than the gamma-ray luminosity,
$L_{\rm p}>10^{55}$ erg $\mbox{s}^{-1}$, is required
to produce the extra spectral component in GRB 090510 (see Fig.\ref{f1}).
Namely, the efficiency of photopion production is very low.
In this case, the spectrum of the GeV component
is very hard, with photon index $\sim -1.6$,
which requires an IC contribution from the secondary pairs.
The prominent IC component leads to
a weaker magnetic field.
This entails a lower maximum energy of protons,
and hence lower photopion production efficiency.
This is why the required proton luminosity
in GRB 090510 is so large.
\begin{figure*}[t]
\includegraphics[width=70mm]{fA1.eps}
\caption{Simulated spectra with hadronic models for GRB 090510.
The assumed ratio of the magnetic energy density
to the energy density of the Band-component photons is $10^{-3}$,
and the required proton luminosity
is 200 times the luminosity of the Band component.}
\label{f1}
\end{figure*}
\begin{figure*}[t]
\includegraphics[width=70mm]{fA2.eps}
\caption{Simulated spectra with hadronic models for GRB 090902B.
The assumed ratio of the magnetic energy density
to the energy density of the Band-component photons is $1.0$, and
the required proton luminosity
is 3 times the luminosity of the Band component.}
\label{f2}
\end{figure*}
The assumed bulk Lorentz factor in Fig.\ref{f1} is 1500,
and the emission radius is $R=10^{14}$ cm.
If we adopt a smaller value of $\Gamma$,
the pion production efficiency would increase as
$t_{\rm exp}/t_\pi \propto R^{-1} \Gamma^{-2}$.
However, we should note that there is a lower limit to $\Gamma$,
which is required to
make the source optically thin to GeV photons.
Given the photon luminosity and spectral shape,
this minimum Lorentz factor can be estimated as shown
in the online supporting materials
in Abdo et al. (2009) \citep{916C}.
In order to lower $\Gamma$, we have to increase
the emission radius $R$.
The lower limit of the Lorentz factor
$\propto R^{1/(\beta-3)}$ does not decrease drastically
(since $\beta \simeq -3$, $\Gamma_{\rm min} \propto R^{-1/6}$).
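The combined effect of these two scalings can be illustrated numerically; in the short sketch below (normalizations arbitrary, only the $R$-dependence matters), keeping $\Gamma$ at its minimum allowed value makes the net photopion efficiency fall as $R^{-2/3}$, so enlarging the emission radius does not improve the efficiency.
\begin{verbatim}
# Net scaling of the photopion efficiency when Gamma sits at its
# minimum allowed value (normalizations arbitrary; beta ~ -3):
beta = -3.0
for R in (1e13, 1e14, 1e15):                # emission radius [cm]
    gmin = (R / 1e14) ** (1.0 / (beta - 3.0))     # Gamma_min ~ R^{-1/6}
    eff = (R / 1e14) ** (-1.0) * gmin ** (-2.0)   # t_exp/t_pi ~ R^-1 G^-2
    print(f"R = {R:.0e} cm: Gamma_min ratio = {gmin:.2f}, "
          f"efficiency ratio = {eff:.2f}")        # eff ~ R^{-2/3}
\end{verbatim}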
The required large luminosity is rather favorable
for the GRB-UHECR scenario, but it imposes stringent requirements
on the energy budget of the central engine.
On the other hand, GRB 090902B is encouraging for the hadronic model
because of its flat spectrum (photon index $\sim -2$)
\citep{asa10}.
The Band component in this burst
has an atypically narrow energy distribution
as shown in Fig. \ref{f2},
which may imply the photospheric emission \citep{ryd10}.
The hadronic cascade emission can well reproduce the
observed flat spectra including
the soft excess feature below 50 keV
(model parameters: $R=10^{14}$ cm, $\Gamma=1300$).
Owing to the flat spectral shape of the extra component,
no IC component is required.
We can adopt a strong magnetic field,
which enhances the photomeson production efficiency.
In this case the flux of the extra component
is relatively low compared to the Band component,
which also decreases the required proton luminosity.
Therefore, the necessary nonthermal proton luminosity is
not excessive, being only comparable to the Band-component luminosity.
\begin{figure*}[t]
\centering
\includegraphics[width=65mm]{fA3.eps}
\caption{The SSC-model fluence obtained from our
time-dependent simulation (assuming $z=1$).
The dashed line neglects absorption due to EBL.} \label{f3}
\end{figure*}
\section{LEPTONIC MODELS}
As Corsi et al. \citep{cor10} discussed, the GeV emission may be due to IC
emission from internal dissipation regions.
However, it seems difficult to explain spectral excesses
in both keV and GeV bands by IC emission.
Recently, we carried out time-dependent simulations
of photon emissions with leptonic models \citep{asa11}.
In our simulations,
as the photon energy density increases with time
because of synchrotron emission,
the SSC component gradually grows and dominates
the photon field later.
This late growth of the IC component has been
observed also in the simulations of Bo\v{s}njak et al. (2009)
\citep{bos09}.
The resultant lightcurves show delayed onset
of GeV emission, but the delay timescale would be within
the approximate timescale of the keV-MeV pulse width.
However, a delay longer than the pulse timescale,
such as that observed in GRB 080916C,
is not explained by this effect alone.
As shown in Fig. \ref{f3}
(model parameters: $R=6 \times 10^{15}$ cm,
$\Gamma=1000$, $B=100$ G,
$E_{\rm iso}=10^{54}$ erg,
$\varepsilon_{\rm e,min}=11.3$ GeV), the model spectrum
obtained from our simulations reproduces
both the low and high energy excesses.
When the magnetic field is weak enough,
even at the end of the electron injection,
the cooled electrons can be still relativistic.
Such cooled electrons
continue emitting synchrotron photons.
The cooling due to IC gradually becomes inefficient as
the seed photon density decreases.
Such late synchrotron emission can yield
the low-energy excess, while IC emission
makes a GeV extra component.
\begin{figure*}[t]
\centering
\includegraphics[width=80mm]{fA4.eps}
\caption{The lightcurves for the external IC model.
In the MeV band the external photons from the inner region dominate,
while the GeV and eV emissions originate from electrons accelerated
in the outer dissipation region.} \label{f4}
\end{figure*}
Another possible model is the external IC emissions \citep{tom09,tom10},
which can explain the delayed onset of GeV emission.
The spatial separation between the source of the external photons
and the site of the internal shock region corresponds
to the delay timescale.
We also carried out a simulation for this model
assuming an external emission $L_{\rm seed}=10^{53}~{\rm erg}~{\rm s}^{-1}$
with the Band function:
$\varepsilon_{\rm peak}\simeq 1$ MeV in the observer frame,
$\alpha=-0.6$, and $\beta=-2.6$.
The parameters for the internal shock
are similar to those in Fig.\ref{f3} as $R=6 \times 10^{15}$ cm,
$\Gamma=1000$, $B=100$ G,
$E_{\rm iso}=3 \times 10^{53}$ erg, except for
$\varepsilon_{\rm e,min}=50.6$ MeV.
Our simulation reproduces the delayed onset of GeV emission,
GeV extra component, and softening of the Band component.
Since the external photon field in the rest frame of the emission region
is highly anisotropic, the marginally high-latitude emission contributes
the most to the flux.
Thus, the simulated GeV lightcurve shows a larger delayed onset
and a longer tail than those of the usual leptonic models (Fig.\ref{f4}).
\section{IMPLICATION}
The hadronic models usually require a proton energy
much larger than that of the observed gamma-rays,
except for some examples such as GRB 090902B.
Some may consider
that the leptonic SSC models seem rather reasonable to
reproduce both the GeV and keV excesses.
However, the lower magnetic field and high Lorentz factor,
required to make the source optically thin for GeV
photons generated via IC emission,
lead to a very high minimum Lorentz factor ($\gamma_{\rm e,min}
\sim 10^4$)
for the random motion of accelerated electrons.
If all electrons are accelerated in internal shock regions,
such a high value may be impossible.
Thus, the fraction of accelerated electrons should be
small in leptonic models to explain {\it Fermi}-LAT GRBs.
On the contrary, the minimum Lorentz factor of
electrons $\gamma_{\rm e,min}$
in the external IC model should be small
to adjust the energy of scattered photons.
Given the total energy, such a small $\gamma_{\rm e,min}$
implies a large number of electrons.
If the numbers of electrons and protons are the same,
the proton energy becomes fairly large
($\sim 1.9 \times 10^{54}$ erg in the case of Fig.\ref{f4},
while the luminosity of the Band component
$L_{\rm seed}=10^{53}~{\rm erg}~{\rm s}^{-1}$).
To enhance the emission efficiency,
electron-positron plasma should be introduced in such models.
Despite the ability of our straightforward simulations to reproduce various observed
properties of the GRB prompt emission,
the discrepancy with the low-energy index $\alpha$ remains unexplained.
The injected electrons cool via synchrotron radiation,
and distribute below $\gamma_{\rm e,min}$ with
a power-law index $-2$.
The resultant photon index becomes $\alpha \sim -1.5$,
while the observed typical values are $-1.0$ or harder.
The low-energy photon index affects
the photomeson production efficiency.
We have also tried to resolve this problem \citep{asa09c,mur11}
by considering continuous acceleration/heating due to magnetic turbulence
induced via various types of instabilities,
such as the Richtmyer-Meshkov instability \citep{ino11}.
The acceleration/heating balances with the synchrotron cooling,
so the observed low-energy spectral index is naturally explained.
Such effect should be included in the time-dependent code
to reproduce the global shape of the GRB prompt spectra
from eV to GeV.
In particular, electron injection due to hadronic processes
and subsequent acceleration by turbulence may
explain the very high $\gamma_{\rm e,min}$ and the
GeV emission.
We plan to develop the time-dependent code
shown here to treat hadronic processes or
continuous acceleration/heating.
Note that the results for the hadronic models shown here
were calculated based on the steady state approximation.
We will carry out simulations for
various situations involving dissipative photospheres
and internal or external dissipation or shock regions.
Moreover, the code will be useful for simulating emissions
of other high-energy sources, such as active galactic nuclei,
supernova remnants, and clusters of galaxies.
\begin{acknowledgments}
The series of our studies introduced here are
partially supported by Grants-in-Aid for Scientific Research
No.22740117 (KA), No.22540278 (SI),
and No.21540259 (TT) from the Ministry of Education,
Culture, Sports, Science and Technology (MEXT) of Japan.
\end{acknowledgments}
\bigskip
\subsection{Datasets}
\begin{table}[t]
\centering
\begin{tabular}{@{}C{1.6cm}C{1.9cm}C{1.9cm}C{1.9cm}@{}}
\toprule[0.015in]
Dataset & No augmentation & SpecAugment~\cite{park2019specaugment} & SapAugment \\ \midrule
\multicolumn{4}{l}{\hspace{-0.1in}LibriSpeech 100h}\\\hline
~~test-clean & $12.0\pm0.30$ & $10.6\pm0.20$ & {$\mathbf {8.3\pm0.05}$} \\
~~test-other &$30.9\pm0.35$ & $24.5\pm0.10$ & $ \mathbf {21.5\pm0.10}$ \\ \midrule
\multicolumn{4}{l}{\hspace{-0.1in}LibriSpeech 960h}\\\hline
~~test-clean & $4.4\pm0.05$ & $3.9\pm0.10$ & $\mathbf {3.5\pm0.05}$\\
~~test-other & $11.5\pm0.10$ & $9.4\pm0.10$ & $\mathbf {8.5\pm0.10}$\\
%
\bottomrule
\end{tabular}
\caption{Comparison of SapAugment against baselines on the LibriSpeech dataset. All numbers are percentage word error rate (WER) (lower is better). }
\label{tab:sota}
\end{table}
\begin{table}[t]
\begin{tabular}{@{}L{5.7cm}|C{0.97cm}C{0.97cm}@{}}
\toprule[0.015in]
Augmentation/policy & test-clean & test-other\\ \midrule
No aug. & $12.0$ & $30.9$\\
CutMix aug. without policy & $11.2$ & $26.6$\\
SamplePairing aug. without policy & $10.2$ & $26.3$\\
All aug. without policy & $9.6$ & $22.3$\\% \midrule
All aug. with selection policy & $9.4$ & $22.2$\\
All aug. with sample-adaptive policy & $9.3$ & $21.9$\\
SapAugment: All aug. with sample-adaptive and selection policies & $\mathbf{8.3}$ & $\mathbf{21.5}$\\
\bottomrule
\end{tabular}
\caption{Ablation study of SapAugment on LibriSpeech 100h training dataset.
All numbers are percentage word error rate (WER).
First row is the baseline WER without any augmentation.
Rows $2$-$4$ apply various combinations of augmentations without any learned policy.
Row $5$ applies a constant policy without considering the training loss value to determine the amount of augmentation.
Row $6$ applies a sample-adaptive policy but does not include the policy selection probability, i.e. all the augmentations are always applied, only their magnitude is controlled by the policy using sample loss value.
Last row is SapAugment using all augmentations and both policies.}
\label{tab:ablation}
\end{table}
To evaluate our method, we use the LibriSpeech~\cite{panayotov2015librispeech} dataset that contains $1,000$ hours of speech from public domain audio books.
Following the standard protocol on this dataset, we train acoustic models by using both the full training dataset (LibriSpeech 960h), and the subset with clean, US accent speech only (LibriSpeech 100h).
Results are reported on the two official test sets - (1)~test-clean contains clean speech, and (2)~test-other contains noisy speech.
\subsection{Implementation}
For the network architecture of an end-to-end acoustic model, we choose the Transformer \cite{vaswani2017attention} because of its efficiency in training.
We utilize the Transformer acoustic model implemented in ESPNet \cite{watanabe2018espnet}.
Following the original implementation in \cite{watanabe2018espnet}, we use the ``small" configuration of Transformer for LibriSpeech 100h, and the ``large" configuration for LibriSpeech 960h.
All of the model checkpoints are saved during the training process, and the final model is generated by averaging the $5$ models with the highest validation accuracy.
For the acoustic feature, we apply $80$-dimensional filter bank features.
\subsection{Evaluation}
In Table~\ref{tab:sota}, we compare the proposed SapAugment to the baseline model without any augmentation, and the model trained with SpecAugment~\cite{park2019specaugment}, the current state-of-the-art augmentation method in ASR.
For a fair comparison, we ran all experiments using the same ESPNet model on all baselines and report results without the language model to isolate the improvements from data augmentation.
The results show that the proposed SapAugment outperforms SpecAugment -- up to $21\%$ relative WER reduction with LibriSpeech 100h and $10\%$ relative WER reduction with LibriSpeech 960h.
The improvement in WER is more significant on the smaller dataset (LibriSpeech 100h) because augmentations reduce overfitting, which is more severe for a small dataset.
\subsection{Ablation Study}
To study the contribution of each component of SapAugment, we present an ablation study on the LibriSpeech 100h dataset in Table~\ref{tab:ablation}.
From this study, we make the following observations:
First, the proposed two augmentations in the raw speech domain help improve the performance compared to no augmentation.
Second, learning a policy gives better results than no policy.
Third, learning both the selection policy and the sample-adaptive policy is better than either one of them.
\subsection{Augmentation methods used in SapAugment}
SapAugment combines $N$ augmentation methods by jointly learning $N$ policies, $f_{s_1,a_1}, \dots, f_{s_N, a_N}$ and their selection probabilities, $p_1, \dots, p_N$.
In this work, we combine $5$ augmentation methods, $3$ in the feature (log mel spectrogram) domain and $2$ in raw speech domain.
\subsubsection{Feature domain augmentations}
\begin{itemize}
\item \emph{Time masking} and \emph{frequency masking} were originally introduced in SpecAugment \cite{park2019specaugment}.
The two methods simply mask out segments of sizes $m_t$ and $m_f$ along the time and the frequency axes, respectively, in log mel spectrogram feature.
In our implementation, the masked region is filled with the average values along the masked dimensions.
For both masking schemes, the number of masks is kept constant ($4$ in our experiments).
\item \emph{Time stretching}~\cite{nguyen2020improving, ko2015audio} re-samples the feature sequence along the time axis by a fixed ratio $\rho$, effectively changing the speaking rate in the signal.
Given speech features for a sequence of $T$ frames, $\mathbf f_0, \dots, \mathbf f_{T-1}$, time stretching generates $(1+\rho) T $ features, $\mathbf f'_0, \dots, \mathbf f'_{(1+\rho)T-1}$, such that $\mathbf f'_i = \mathbf f_{\lfloor i/(1+\rho) \rfloor}, i=0,1,..., \lfloor (1+\rho)T \rfloor -1$.
The stretching ratio $\rho$ is sampled from a uniform distribution $U(-\rho_0, \rho_0)$.
\end{itemize}
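A minimal NumPy sketch of this time-stretching rule follows; the boundary clip is a small safety assumption against floating-point rounding.
\begin{verbatim}
import numpy as np

def time_stretch(feats, rho0=0.4):
    # feats: (T, F) log-mel features; rho ~ U(-rho0, rho0)
    rho = np.random.uniform(-rho0, rho0)
    T = feats.shape[0]
    new_T = int(np.floor((1.0 + rho) * T))
    idx = np.floor(np.arange(new_T) / (1.0 + rho)).astype(int)
    idx = np.clip(idx, 0, T - 1)     # guard against boundary rounding
    return feats[idx]                # f'_i = f_floor(i/(1+rho))
\end{verbatim}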
\subsubsection{Raw speech domain augmentations}
\label{subsec: raw speech augmentations}
\begin{itemize}
\item \emph{SamplePairing} (SP)~\cite{samplepairs}, originally developed for image augmentation, mixes the $i^{\text{th}}$ input, $\boldsymbol x_i$, with another speech sample $\boldsymbol x_j$ from the training set:
$\boldsymbol x_i' = (1-\lambda_{sp}) \boldsymbol x_i + \lambda_{sp} \boldsymbol x_j,$
where $\boldsymbol x_i'$ is the augmented speech and $\lambda_{sp}$ is the combination weight.
Since the duration of $\boldsymbol x_i$ and $\boldsymbol x_j$ may differ, we pad (by repeating) or clip $\boldsymbol x_j$ as needed.
\item \emph{CutMix}~\cite{yun2019cutmix}, originally proposed for images, replaces part of an input speech $\boldsymbol x_i$ by segments from another training sample $\boldsymbol x_j$:
\begin{equation*}
\boldsymbol x_i[t_{i,k}:t_{i,k} + w] \leftarrow \boldsymbol x_j[t_{j,k}:t_{j,k} + w], k=1,2,...,N_{cm},
\end{equation*}
where $N_{cm}$ is the total number of replaced segments (set to $6$ in our experiments), and $w$ is the size of the segments.
Unlike the original CutMix for image augmentation, we do not change the label information of the perturbed sample.
\end{itemize}
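Both raw-speech augmentations can be sketched directly from the formulas above; the repeat-padding via \texttt{np.resize} and the uniform random placement of segments are implementation assumptions.
\begin{verbatim}
import numpy as np

def sample_pairing(x_i, x_j, lam_sp):
    # x'_i = (1 - lam_sp) * x_i + lam_sp * x_j
    x_j = np.resize(x_j, len(x_i))   # repeat-pad or clip to match length
    return (1.0 - lam_sp) * x_i + lam_sp * x_j

def cutmix(x_i, x_j, w, n_cm=6):
    # replace n_cm segments of length w in x_i by segments from x_j;
    # assumes w is smaller than both signal lengths
    out = x_i.copy()
    n = min(len(x_i), len(x_j))
    for _ in range(n_cm):
        t_i = np.random.randint(0, n - w)
        t_j = np.random.randint(0, n - w)
        out[t_i:t_i + w] = x_j[t_j:t_j + w]
    return out
\end{verbatim}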
Each augmentation method contains one parameter to control the augmentation magnitude, as listed in Table~\ref{tab:magnitude_control}.
The value of $\lambda \in [0, 1]$ from the policy is mapped to this augmentation parameter by a simple $1$-d affine transformation, as shown in the last column of Table~\ref{tab:magnitude_control}; a small sketch of these maps follows the table.
\begin{table}[t]
\centering
\begin{tabular}{@{}C{2.9cm}|C{1.9cm}|C{2.78cm}@{}}
\toprule[0.015in]
aug. parameter & range & mapping from $\lambda$ \\ \midrule
$m_t$ (time masking) & \{2,3,4,5,6\} & $m_t^*= \lfloor 2+4\lambda \rfloor$ \\
$m_f$ (freq. masking ) & \{2,3,4,5,6\} & $m_f^*= \lfloor 2+4\lambda \rfloor $ \\
$\rho_0$ (time stretching) & {[}0.2,0.6{]} & $\rho_0^*= 0.2+0.4\lambda$ \\
$\lambda_{sp}$ (SamplePairing) & {[}0,0.1{]} & $\lambda_{sp}^*=0.1\lambda$ \\
$w$ (CutMix) & {[}1600, 4800{]} & $w^*=1600+3200\lambda$ \\ \bottomrule
\end{tabular}
\caption{The augmentation parameter values, chosen from the above range, determine the amount of augmentation.
For a given input sample, the learned policy chooses one of the values in the range above.
For CutMix, the range of $w$ correspond to $[0.1, 0.3]$ seconds at $16$kHz sampling rate of audio.
}
\label{tab:magnitude_control}
\end{table}
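The affine maps in Table~\ref{tab:magnitude_control} amount to one line per parameter, as in the following sketch (for non-negative values, Python's \texttt{int()} truncation coincides with the floor in the table):
\begin{verbatim}
def map_lambda(lam):
    # lam in [0, 1] -> per-augmentation magnitude parameters
    return {
        "m_t":    int(2 + 4 * lam),        # time-mask size
        "m_f":    int(2 + 4 * lam),        # frequency-mask size
        "rho_0":  0.2 + 0.4 * lam,         # time-stretch range
        "lam_sp": 0.1 * lam,               # SamplePairing weight
        "w":      int(1600 + 3200 * lam),  # CutMix width (samples)
    }
\end{verbatim}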
\subsection{Policy search}
The training of the SapAugment framework is formulated as finding a policy to select and adapt augmentation parameters that maximizes the validation accuracy.
The policy search space is the set of policy hyper-parameters for all the augmentations, $\boldsymbol v_p = [s_1,a_1,s_2,a_2,...,s_N, a_N]$, and their selection probabilities, $\boldsymbol p = [p_1, \dots, p_N]$, where $N$ is the number of augmentation methods.
We use Bayesian optimization \cite{ginsbourger2008multi}, which uses a Gaussian process to model the relation between $(\boldsymbol v_p, \boldsymbol p)$ and the validation accuracy.
To further accelerate the search process, we utilize the constant-liar strategy to parallelize the suggestion step of Bayesian optimization.
Please refer to Algorithm~$2$ of \cite{ginsbourger2008multi} for more details.
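A hedged sketch of this parallel search with \texttt{scikit-optimize} is shown below; the search-space bounds and the \texttt{train\_and\_validate} routine are placeholder assumptions for illustration, not the exact configuration of our experiments.
\begin{verbatim}
from skopt import Optimizer

N_AUG = 5
# (s_k, a_k) per augmentation, then the selection probabilities p_k:
space = [(0.1, 10.0)] * (2 * N_AUG) + [(0.0, 1.0)] * N_AUG
opt = Optimizer(space, base_estimator="GP")

for _ in range(10):                     # search iterations
    # "cl_min" is scikit-optimize's constant-liar strategy
    batch = opt.ask(n_points=4, strategy="cl_min")
    accs = [train_and_validate(x) for x in batch]  # user-provided
    opt.tell(batch, [-a for a in accs])            # skopt minimizes
\end{verbatim}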
\section{Introduction}
\label{sec: intro}
\input{intro.tex}
\section{Related Works}
\input{related_works.tex}
\section{Sample Adaptive Data Augmentation}
\input{method.tex}
\section{Experiments}
\label{sec:experiment}
\input{experiments.tex}
\section{Conclusion}
\input{conclusion.tex}
\section{Introduction}
\label{sec:intro}
Medical image diagnosis with convolutional neural networks (CNNs) is still challenging due to the semantic ambiguity of pathologies \cite{liang2020atlas}.
For example, lung nodules and non-nodules can share a strong visual similarity, which can lead to high false-positive rates, from 51\% to 83.2\%, according to experts' inspection \cite{harsono2020lung}.
As such, the ability of CNNs to quantify uncertainty has recently been identified as key for their application in clinical routine to assist clinicians' decision making and gain trust
\cite{amodei2016concrete}.
Recently, approximating the posterior of a CNN using dropout and MC samples provides a simple approach to estimate uncertainty \cite{gal2016dropout}, and has been applied for several medical image diagnosis tasks.
As for classification tasks, example works include the modeling of uncertainty for diabetic retinopathy diagnosis from fundus images \cite{leibig2017leveraging} and lesion detection from knee MRI \cite{pedoia20193d}.
As for segmentation tasks, uncertainty estimation has been applied to localize lung nodules \cite{ozdemir2017propagating}, brain sclerosis lesion \cite{nair2020exploring}, brain tumor \cite{wang2019aleatoric}, \textit{etc.}.
Moreover, performance boost has been observed from the above studies by utilizing uncertainty as a filter for false positive predictions besides the probability thresholding.
Despite the aforementioned work, only limited studies have explored the uncertainty estimation for detection task.
The main challenge is that, while MC samples from segmentation and classification are naturally well aligned, bounding box samples from the detection task are spatially diverse, and must be associated before aggregation \cite{miller2019evaluating}.
One existing solution is to derive pixel-level uncertainty from segmentation, and then aggregate pixels of a connected region with a log-sum operation for instance-level uncertainty \cite{nair2020exploring}.
However, the method cannot optimize uncertainty in an end-to-end manner, and the log-sum aggregation has potential stability issues.
Another solution is to define bounding box clustering strategies for merging box samples of a same object, \textit{e.g.} affinity-clustering \cite{miller2019evaluating} and enhanced Non-Maximum Suppression (NMS) \cite{he2019bounding} as proposed for estimating boundary uncertainty from natural photos.
However, these methods require extra handcrafted clustering parameters, and the optimal values can depend on the specific task.
\begin{figure*}[ht!]
\begin{center}
\includegraphics[width=0.75\linewidth]{figs/overview.pdf}
\vspace*{-\baselineskip}
\end{center}
\caption{Model overview. (a) a single-scale multi-level pyramid CNN with dropout is used as the detector. (b) During training, bounding box predictions of probability, predictive variance, and location parameters are trained directly against the ground truth. (c) During inference, MC samples of bounding boxes at each pyramid level are first aggregated in-place to obtain MC variances, which are then averaged with the predictive variances to give the uncertainty estimate.}
\label{fig:overview}
\vspace*{-\baselineskip}
\end{figure*}
Different from previous work, we propose to estimate instance-level uncertainty directly from a single-scale multi-level detector in an end-to-end manner.
Specifically, since object occlusion is rare in medical analysis, a single-scale bounding box prediction is generated at each pyramid level, which enables the simple clustering of multiple MC samples without the need for alignment.
Two types of uncertainty measure, \textit{i.e.} predictive variance and MC sample variance, are studied.
Our experiments show that: (\textit{i}) a combination of both types of uncertainty leads to best performance, and (\textit{ii}) using uncertainty as a bounding box inclusion criteria besides probability allows superior operating points.
\vspace{-3mm}%
\section{methods}
\vspace{-0.2cm}
Figure \ref{fig:overview} shows the overview of our method.
We develop a 2.5D single-scale multi-level pyramid CNN with dropout to predict bounding boxes of pathologies with attributes of probability and uncertainty from an input image.
During training, box-wise probability, predictive variance and location parameters are directly trained with ground-truth supervision.
During testing, an unseen image is passed through the model $T$ times to estimate MC variance.
With the single-scale structure, both MC and predictive variances of bounding boxes can be simply averaged in-place for aggregation.
\vspace{-0.2cm}
\subsection{Single-Scale Multi-Level Detector}
\label{sec:det}
A Feature Pyramid Network (FPN) similar to \cite{lin2017focal} is applied to extract multi-scale features as shown in Figure \ref{fig:overview}(a).
Following existing medical detectors \cite{zhang2018sequentialsegnet,khosravan2018s4nd,liang2020oralcam}, the input volumes are designed to be stacked 2D slices of a 3D image for model efficiency, referred to as 2.5D.
Following \cite{liu2016ssd}, bounding box predictions are generated from the multi-level feature maps for label probability $P$ (Figure \ref{fig:overview}(1)) and location/size deltas $\omega$ (Figure \ref{fig:overview}(2)).
Different from detectors for natural scenes, only one base scale (aspect ratio and size) for bounding boxes is defined at each level, considering that pathologies in the medical domain tend to be more regular in shape.
The model can also be viewed as a multi-level extension of S4ND \cite{khosravan2018s4nd}, a CNN detector with a single-scale bounding box design for nodule detection.
The base bounding box sizes at different levels can be designed to be sufficient to fit all the expected pathology sizes in a target task.
\vspace{-0.2cm}
\subsection{Monte Carlo Variance}
\vspace{-0.1cm}
Previous studies show that minimizing the cross-entropy loss of a network with dropout applied after certain layers enables capturing the posterior distribution of the network weights \cite{gal2016dropout}.
We follow the method by enabling dropout operations during both training and testing stages.
For each inference, $T$ forward passes of a target volume are conducted, which results in $T$ sets of bounding box MC samples with $P$ and $\omega$ at each pyramid level (Figure \ref{fig:overview}(3)).
As such, MC variance $V_{mc}$ can be defined as the variance of the $P$ for all associated MC samples of a bounding box, and is used as a measure of an instance-level uncertainty.
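A minimal PyTorch sketch of this test-time MC sampling follows; it assumes the model outputs per-box probabilities and that dropout is implemented with \texttt{nn.Dropout} modules.
\begin{verbatim}
import torch

def enable_mc_dropout(model):
    # keep dropout stochastic at test time, rest of model in eval mode
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()

@torch.no_grad()
def mc_forward(model, volume, T=10):
    enable_mc_dropout(model)
    probs = torch.stack([model(volume) for _ in range(T)])  # (T, n_boxes)
    return probs.mean(0), probs.var(0, unbiased=False)      # P, V_mc
\end{verbatim}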
\vspace{-0.3cm}
\subsection{Predictive Variance}
\vspace{-0.1cm}
During training, the weights of the network are also trained to directly estimate a bounding-box-wise variance $V_{pred}$ (Figure \ref{fig:overview}(4)) by following the method of \cite{kendall2017uncertainties}.
Specifically, by assuming that the classification logits $\mathbf{z}$ are corrupted by a Gaussian noise with variance $\boldsymbol{\sigma}$ at each bounding box prediction, the weight updates during back-propagation encourage the network to learn the variance estimates without having explicit labels for them. Defining the corrupted output $\mathbf{z}_{i,t} = \mathbf{z}_i + \boldsymbol{\sigma}_i\times\boldsymbol{\epsilon}_t,\boldsymbol{\epsilon}_t \sim \mathcal{N}(\mathbf{0},\mathbf{I})$, the loss function can be written as $\mathcal{L}_{cls} = -\sum_{i}\log\frac{1}{T'}\sum_{t} \mathrm{softmax}(\mathbf{z}_{i,t})$,
where the $T'$ is the number of times for MC integration.
Since the method models object ambiguity during the optimization, it has been shown to enable improved performance \cite{nair2020exploring,kendall2017uncertainties}.
During the inference, each MC sample of a bounding box prediction comes with a predictive variance $V_{pred}$, and is used as another uncertainty estimation.
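A sketch of this loss in PyTorch is given below; the interface (a per-box log-variance head and integer class targets) is an assumption about the surrounding code, not a prescribed design.
\begin{verbatim}
import torch
import torch.nn.functional as F

def attenuated_cls_loss(logits, log_var, targets, T_prime=10):
    # corrupt logits with N(0, sigma^2) noise and average the softmax
    # likelihood over T' Monte Carlo draws (Kendall & Gal style)
    sigma = torch.exp(0.5 * log_var)
    probs = sum(F.softmax(logits + sigma * torch.randn_like(logits), -1)
                for _ in range(T_prime)) / T_prime
    nll = -torch.log(probs.gather(-1, targets.unsqueeze(-1)) + 1e-8)
    return nll.mean()
\end{verbatim}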
\begin{figure*}[ht!]
\begin{center}
\vspace*{-\baselineskip}
\includegraphics[width=\linewidth,height=7.5cm]{figs/new_results.pdf}
\vspace*{-\baselineskip}
\end{center}
\caption{(a) Estimated uncertainty ($V_{avg}$) distribution of nodule and non-nodule detections. (b) CPM curves under different uncertainty thresholds. (c) F1 score under different probability and uncertainty thresholds. (d) Examples of nodule detections. (e) Examples of non-nodule detections.}
\label{fig:threshold}
\vspace*{-\baselineskip}
\end{figure*}
\vspace{-0.3cm}%
\subsection{MC Sample Aggregation}
All the bounding box samples from the multiple MC inferences at each pyramid level are aggregated in-place as shown in Figure \ref{fig:overview}(5).
Specifically, the probability $P$ and location parameters $\omega$ of a bounding box at any grid point can be obtained by averaging those of all the MC samples at the same location.
The MC variance of a bounding box can be calculated as $V_{mc} = (\sum_{t}\mathbf{P}_t^2)/T - (\sum_{t}\mathbf{P}_t/T)^2$, where $P_t$ represents the predicted probability of a MC sample at the location.
The predictive variance of a bounding box can be represented as $V_{pred} = \sum_t\mathrm{softmax}(\boldsymbol{\sigma}_t^2)/T$, where $\boldsymbol{\sigma}_t$ is the predictive variance output for each MC sample.
The aggregated bounding boxes among different pyramid levels are then post-processed with regular NMS for removing overlaps.
The final variance for a predicted bounding box is taken as the average of its predictive variance and MC variance, represented as $V_{avg} = (V_{mc} + V_{pred}) / 2$.
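The in-place aggregation reduces to simple array operations, as in the following sketch; it assumes the per-pass outputs are already aligned on the same grid (which the single-scale design guarantees) and that the predictive variances have already been softmax-normalized.
\begin{verbatim}
import numpy as np

def aggregate_in_place(P, var_pred):
    # P, var_pred: (T, n_boxes) per-pass probability / predictive variance
    T = P.shape[0]
    P_mean = P.mean(axis=0)
    V_mc = (P**2).sum(axis=0) / T - (P.sum(axis=0) / T) ** 2
    V_pred = var_pred.mean(axis=0)
    V_avg = 0.5 * (V_mc + V_pred)
    return P_mean, V_avg
\end{verbatim}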
\vspace{-0.3cm}
\section{Experiments}
\vspace{-0.2cm}
\subsection{Dataset and Metrics}
We evaluated the proposed method for the lung nodule detection task on LUNA16 dataset~\cite{setio2017validation},
where 200/30/30 scans were randomly selected for training, validation, and testing.
As suggested by the original challenge, we employed the Competition Performance Metric (CPM) to evaluate the model performance, which is defined as the mean of sensitivities at the average false positives (FPs) of 0.125, 0.25, 0.5, 1, 2, 4, and 8 per scan.
Moreover, we also utilized the F1 score as a measure of nodule detection performance to study the effect of using different probability and uncertainty thresholds.
\vspace{-0.3cm}
\subsection{Implementation}
\vspace{-0.2cm}
We set the size of an input patch to be 228$\times$228$\times$7.
The base sizes of bounding box predictions are set to be 8$\times$8$\times$7, 16$\times$16$\times$7, 32$\times$32$\times$7, and 64$\times$64$\times$7 at the four pyramid levels in order to fit most nodules.
We set a dropout rate of 0.1 for all dropout layers, and the number of MC samples $T$ to be 10 for inference.
All volumes from LUNA16 were re-sampled to a voxel spacing of 0.7mm$\times$0.7mm$\times$1.25mm, and were clipped to the intensity range of $[-1200, 800]$, as is common practice for the lung nodule detection problem\footnote{LUNA16 winner: \url{shorturl.at/coqI7} and another work\cite{jaeger2020retina} for LIDC.}.
Intensive augmentations of scaling, rotation, shifting, and random noise were applied during training. All models are trained from scratch for 200 epochs using Adam optimizer at the initial learning rate of $10^{-4}$. The baseline network design closely follows \cite{jaeger2020retina}.
\vspace*{-\baselineskip}
\begin{table}[ht!]
\centering
\caption{Performance comparison between different models. M1-3 are our networks with different variance potentials, while Unet1-3 are segmentation-based detection models.}
\label{tab:main_results}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{@{}c|c|c|c|c@{}}
\toprule
Methods & V$_{mc}$ & V$_{pred}$ & Loss Function & CPM($\%$) \\ \midrule\midrule
Liao et al.~\cite{liao2019evaluate} & $\times$& $\times$ & -- & 83.4 \\
Zhu et al.~\cite{zhu2018deeplung} & $\times$& $\times$ & -- & 84.2 \\
Li et al.~\cite{li2019deepseed:} &$\times$ & $\times$ & -- & 86.2
\\\midrule
Unet1 & $\times$ & $\times$ & $\mathcal{L}_{\mathrm{CE}}+\mathcal{L}_{\mathrm{Dice}}$ & 78.78 -- \\
Unet2 & \checkmark & $\times$ & $\mathcal{L}_{\mathrm{CE}}+\mathcal{L}_{\mathrm{Dice}}$ & 80.00 $\uparrow$ \\
Unet3 & \checkmark & \checkmark & $\mathcal{L}_{\mathrm{cls}}+\mathcal{L}_{\mathrm{Dice}}$ & 80.13 $\boldsymbol{\uparrow}$ \\\midrule
M1 & $\times$ & $\times$ & $\mathcal{L}_{\mathrm{CE}}+\mathcal{L}_{\mathrm{SmoothL1}}$ & 84.57 -- \\
M2 & \checkmark & $\times$ & $\mathcal{L}_{\mathrm{CE}}+\mathcal{L}_{\mathrm{SmoothL1}}$ & 87.14 $\uparrow$ \\
M3 & \textbf{\checkmark} & \checkmark & $\mathcal{L}_{cls}+\mathcal{L}_{\mathrm{SmoothL1}}$ & 88.86 $\boldsymbol{\uparrow}$ \\
M3$_{\eta=0.456}$ & \checkmark & \checkmark & $\mathcal{L}_{cls}+\mathcal{L}_{\mathrm{SmoothL1}}$ & \textbf{89.52} $\boldsymbol{\uparrow}$
\\ \bottomrule
\end{tabular}%
}
\vspace*{-\baselineskip}
\end{table}
\vspace{-0.3cm}
\subsection{Results and Discussion}
\vspace{-0.1cm}%
\subsubsection{Performance Comparison}
\vspace{-0.1cm}
Table~\ref{tab:main_results} shows the performance of different models.
Multiple recent detection-based networks without uncertainty estimation \cite{liao2019evaluate,setio2016pulmonary,zhu2018deeplung,li2019deepseed:} are used as baseline models for comparison.
Unet1-3 are segmentation-based models that draw detection output by merging connected pixels by following the strategy of \cite{nair2020exploring}: Unet1 exploits the nnUNet \cite{isensee2019nnu-net:} without modeling uncertainty, Unet2 extends Unet1 with MC variance, and Unet3 models both MC and predictive variance.
M1-3 are our detection models for enabling bounding-box-level uncertainty: M1 is the base detector without uncertainty measure, while M2 has MC variance and M3 is the full model that enables both MC and predictive variance.
We can see that our models (M1-3) achieve comparable results with existing detection-based methods.
Specifically, MC inference improves CPM by 2.57\% with dropout layers incorporated (M1$\rightarrow$M2), and $V_{pred}$ further improves CPM by 1.72\% by introducing the predictive variance into the optimization process.
Moreover, we empirically set an uncertainty threshold of 0.456 (the 96th percentile), determined by a search on the validation set, to filter out bounding box predictions with high uncertainty. The extra filter further boosts the model performance by 0.66\% in CPM (M3$\rightarrow$M3$_{\eta=0.456}$).
By comparing our method with uncertainty estimation from merging segmentation results (Unet1-3), we can see that deriving bounding-box-wise uncertainty in an end-to-end manner enables higher CPM boosts.
\vspace{-0.3cm}
\subsubsection{Exploiting Uncertainty Information and Case Study}
Figure \ref{fig:threshold}(a) shows the distribution of the estimated uncertainty (using a combination of $V_{pred}$ and $V_{mc}$) of positive detections and negative detections.
We can see that bounding boxes for non-nodules have a generally higher uncertainty level than nodules, which indicates that the uncertainty estimation can be used to improve model performance by filtering out false positive findings.
In Figure \ref{fig:threshold}(b), we study the model performance under different uncertainty thresholds on the testing set: the CPM score of the model first increases mainly due to the filtering out of false positive detections; and then it decreases possibly because certain true positive findings are mis-removed.
We also observe that using a combination of $V_{pred}$ and $V_{mc}$ leads to the highest CPM, which indicates that the two types of variance complement each other for improved performance.
We further plot the model performance as F1 score under different values of the uncertainty and probability thresholds in Figure \ref{fig:threshold}(c), which also confirms that using both parameters as thresholds leads to the optimal performance.
In the real application, the uncertainty and probability thresholds can be tuned together to meet the specific precision and recall requirements of a task.
Figure \ref{fig:threshold}(d,e) visualizes example bounding-box detections with the estimated $p$, $V_{avg}$, $V_{pred}$, and $V_{mc}$.
Specifically, Figure \ref{fig:threshold}(d) includes true positive detections with outputs of high probability and low averaged uncertainty, where the model is confident about the prediction; meanwhile, Figure \ref{fig:threshold}(e) includes non-nodule detections with high estimated probability but high uncertainty, which are correctly filtered out by our model when using uncertainty as a threshold.
Moreover, cases show that predictive and MC variances can capture different uncertainties and complement each other (Figure \ref{fig:threshold}(1,2)), which also validates our method of combining both types of variance.
\vspace{-0.4cm}%
\section{Conclusion}
\vspace{-0.2cm}
In this work, we propose to estimate instance-level uncertainty in an end-to-end manner with a single-scale multi-level detection network.
Two types of uncertainty measures, \textit{i.e.}, predictive variance and MC variance, are studied.
Experimental results prove that the incorporation of uncertainty estimation improves model performance, can act as filters for false positive detections, and can be used with probability as thresholds for setting superior operating points.
\vspace{-0.2cm}
\section{Acknowledgments}
\vspace{-0.2cm}
The authors declare that they have no conflicts of interest.
\vspace{-0.2cm}
\section{Compliance with Ethical Standards}
\vspace{-0.3cm}
This research study was conducted retrospectively using human subject data made available in open access by LUNA16\footnote{https://luna16.grand-challenge.org/}. The related paper is \cite{setio2017validation}. Ethical approval was not required as confirmed by the license attached with the open access data.
\vspace{-0.3cm}
\bibliographystyle{IEEEbib}
\section{Introduction}
\label{intro}
With smartphones, mobile applications and expanded connectivity, people are consuming more video content than ever. That increase in mobile video consumption is taking a toll on the internet, which has seen data traffic increase many-fold in a few years. However, the high temporal variability of on-demand video services leaves the network underutilized during off-peak hours. Utilizing the channel resources by employing low-cost cache memory to store contents at the user end during off-peak times is an effective way to ease the network traffic congestion in peak hours. In the conventional caching scheme, the users' demands are met by filling the caches during the placement phase (in off-peak hours without knowing the user demands) and by transmitting the remaining requested file contents in the uncoded form during the delivery phase (in the peak hours after knowing the user demands). In coded caching, introduced in \cite{MaN}, it was shown that by employing coding in the delivery phase, among the requested contents, it is possible to bring down the network load further compared to the conventional caching scheme. In \cite{MaN}, Maddah-Ali and Niesen consider a single server broadcast network, where the server is connected to $K$ users through an error-free shared link. The server has a library of $N$ files, and the user caches have a capacity of $M$ files, where $M\leq N$ (since each user is equipped with a dedicated cache memory, we refer to this network as the dedicated cache network). During the placement phase, the server stores file contents (equivalent to $M$ files) in the caches without knowing the user demands. In the delivery phase, each user requests a single file from the server. In the delivery phase, the server makes coded transmission so that the users can decode their demanded files by making use of the cache contents and the coded transmission. The goal of the coded caching problem is to jointly design the placement and the delivery phases such that the rate (the size of transmission) in the delivery phase is minimized. The scheme in \cite{MaN} achieved a rate $K(1-M/N)/(1+KM/N)$, which is later shown to be the optimal rate under uncoded placement \cite{YMA,WTP}. In \cite{CFL}, it was shown that by employing coding among the contents stored in the placement phase in addition to the coding in the delivery phase, a better rate-memory trade-off could be achieved. By using coded placement, further improvement in the rate was obtained by the schemes in \cite{AmG,TiC,Vil1,ZhT,Vil2}. The works in \cite{STC,WLG,WBW,GhR}, came up with different converse bounds for the optimal rate-memory trade-off of the coded caching scheme for the dedicated cache network.
Motivated by different practical scenarios, coded caching was extended to various network models, including multi-access networks \cite{HKD}, shared cache networks \cite{PUE} etc. In the multi-access coded caching (MACC) scheme proposed in \cite{HKD}, the number of users, $K$ was kept to be the same as the number of caches. Further, each user was allowed to access $r<K$ neighbouring caches in a cyclic wrap-around fashion. The coded caching scheme under the cyclic-wrap model was studied in \cite{ReK1,SPE,ReK2,MaR,SaR,CWLZC}. Those works proposed different achievable schemes, all restricted to uncoded cache placement. In \cite{NaR1}, the authors proposed a coded caching scheme with an MDS code-based coded placement technique and achieved a better rate-memory trade-off in the low memory regime compared to the schemes with uncoded placement. Further, by deriving an information-theoretic converse, the scheme in \cite{NaR1} was shown to be optimal in \cite{NaR2}. The MACC problem (in the cyclic wrap-around network) was studied by incorporating security and privacy constraints in \cite{NaR3,NaR4}. The first work that considered a multi-access network with more users than the caches is \cite{KMR}. The authors established a connection between MACC problem and design theory and obtained classes of MACC schemes from resolvable designs. The works in \cite{MKR1,DaR} also relied on design theory to obtain MACC schemes.
In this paper, we consider the combinatorial multi-access network model introduced in \cite{MKR2}, which consists of $C$ caches and $K=\binom{C}{r}$ users, where $r$ is the number of caches accessed by a user. The combinatorial MACC scheme presented in \cite{MKR2} achieves the rate $R=\binom{C}{t+r}/\binom{C}{t}$ at cache memory $M=Nt/C$, where $t\in\{1,2,\dots,C\}$. The optimality of that scheme under uncoded placement was shown in \cite{BrE}. In addition to that, in \cite{BrE}, the system model in \cite{MKR2} was generalized to a setting where more than one user can access the same set of $r$ caches. In \cite{ChR}, the authors addressed the combinatorial MACC problem with privacy and security constraints.
\begin{figure}[t]
\captionsetup{justification = centering}
\captionsetup{font=small,labelfont=small}
\begin{center}
\captionsetup{justification = centering}
\includegraphics[width = 0.9\columnwidth]{network.eps}
\caption{$(C,r,M,N)$ Combinatorial multi-access Network. }
\label{network}
\end{center}
\end{figure}
\subsection{Contributions}
In this paper, we study the combinatorial multi-access setting introduced in \cite{MKR2} and make the following technical contributions:
\begin{itemize}
\item In \cite{MKR2}, the authors proposed a combinatorial MACC scheme that achieves the optimal rate under uncoded placement. However, in the scheme in \cite{MKR2}, a user gets multiple copies of the same subfile from the different caches that it accesses. In order to remove that redundancy, we introduce an MDS code-based coded placement technique. The coded subfiles transmitted in the delivery phase of our proposed scheme (Scheme 1) remain the same as the transmissions in the scheme in \cite{MKR2}. Thereby, our proposed scheme in Theorem \ref{thm:scheme1} achieves the same rate as the scheme in \cite{MKR2} at a smaller cache memory value. For every $M>N/C$, Scheme 1 outperforms the scheme in \cite{MKR2} in terms of the rate achieved.
\item For $0\leq M\leq N/C$, Scheme 1 has the same rate as the scheme in \cite{MKR2}. Thus we present another achievability result for the combinatorial MACC scheme in Theorem \ref{thm:scheme2}. The new coding scheme (Scheme 2) achieves the rate $N-CM$ for $0\leq M\leq \frac{N-\binom{C-1}{r}}{C}$, which is strictly less than the rate of the optimal scheme with uncoded placement when the number of files with the server is no more than the number of users in the system. In effect, by employing coding in the placement phase, we bring the rate down further (except at $M=N/C$) compared to the optimal scheme under uncoded placement.
\item We derive an information-theoretic lower bound on the optimal rate-memory trade-off of the combinatorial MACC scheme (Theorem \ref{lowerbound}). While deriving the lower bound, we do not impose any restriction on the placement phase. Thus, the lower bound is applicable even if the scheme employs coding in the placement phase. By using the derived lower bound, we show that Scheme 1 is optimal for higher values of $M$, specifically when $\frac{M}{N}\geq \frac{\binom{C}{r}-1}{\binom{C}{r}}$ (Theorem \ref{optimality1}). Further, we show that Scheme 2 is optimal when $N\leq \binom{C}{r}$ (Theorem \ref{optimality2}).
\end{itemize}
\subsection{Notations}
For a positive integer $n$, $[n]$ denotes the set $ \left\{1,2,\hdots,n\right\}$. For two positive integers $a,b$ such that $a\leq b$, $[a:b] = \{a,a+1,\hdots,b\}$. For two non-negative integers $n,m$, we have $\binom{n}{m} =\frac{n!}{m!(n-m)!}$, if $n\geq m$, and $\binom{n}{m}=0$ if $n<m$. Uppercase letters (excluding the letters $C,K,M,N$ and $R$) are used to denote random variables (the letters $C,K,M,N$ and $R$ are reserved for denoting system parameters). The set of random variables $\{V_a,V_{a+1},\dots,V_b\}$ is denoted as $V_{[a:b]}$. Calligraphic letters are used to denote sets. Further, for a set of positive integers $\mathcal{I}$, $V_{\mathcal{I}}$ represents a set of random variables indexed by the elements in $\mathcal{I}$ (for instance, $V_{\{2,4,5\}}=\{V_2,V_4,V_5\}$). Bold lowercase letters represent vectors, and bold uppercase letters represent matrices. The identity matrix of size $n\times n$ is denoted as $\mathbf{I}_n$. Further, $\mathbb{F}_q$ represents the finite field of size $q$. Finally, for a real number $z$, $(z)^+=\max(0,z)$.
The rest of the paper is organized as follows. Section \ref{system} describes the system model, and Section \ref{prelims} describes useful preliminaries. The main results on the achievability and converse are presented in Section \ref{main}. The proofs of the main results are given in Section \ref{proofs}.
\section{System Model and Problem Formulation}
\label{system}
The system model as shown in Fig. \ref{network} consists of a central server with a library of $N$ independent files, $W_{[1:N]}\triangleq \{W_1,W_2,\dots,W_N\}$, each of size 1 unit (we assume that 1 unit of a file is constituted by $f$ symbols from the finite field of size $q$)\footnote{We assume that $q$ is such that all MDS codes considered in this work exist over $\mathbb{F}_q$.}. There are $C$ caches, each with capacity $M$ units ($0\leq M\leq N$), i.e., each cache can store contents equivalent to $M$ files. There are $K$ users in the system, and each of them can access $r$ out of the $C$ caches, where $r<C$. We assume that there exists a user corresponding to any choice of $r$ caches from the total $C$ caches. Thus the number of users in the system is $K=\binom{C}{r}$. Further, each user is denoted with a set $\mathcal{U}$, where $\mathcal{U}\subset [C]$ such that $|\mathcal{U}|=r$. That is, a user is indexed with the set of caches that it accesses. In other words, user $\mathcal{U}$ has access to all the caches in the set $\mathcal{U}$. A system under the aforementioned setting is called the $(C,r,M,N)$ combinatorial multi-access network. The coded caching scheme under this model was introduced in \cite{MKR2}.
The $(C,r,M,N)$ combinatorial MACC scheme works in two phases, the placement phase and the delivery phase. In the placement phase, the server populates the caches with the file contents. The cache placement is done without knowing the demands of the users. The placement can be either coded or uncoded. By uncoded placement, we mean that files are split into subfiles and kept in the caches as such, while coded placement means that coded combinations of the subfiles are allowed to be kept in the caches. The number of subfiles into which a file is split is termed the subpacketization number. The contents stored in cache $c$, $c\in [C]$, is denoted as $Z_c$. In the delivery phase, user $\mathcal{U}$ requests file $W_{d_\mathcal{U}}$ from the server, where $d_\mathcal{U}\in [N]$, and the demand vector $\mathbf{d} = (d_\mathcal{U}:\mathcal{U}\subset [C],|\mathcal{U}|=r)$. In the demand vector, we arrange the users (subsets of size $r$) in the lexicographic order. Corresponding to the demand vector, the server makes a transmission $X$ of size $R$ units. We assume that the broadcast channel from the server to the users is error-free. The non-negative real number $R$ is said to be the rate of transmission. Finally, user $\mathcal{U}$ should be able to decode $W_{d_\mathcal{U}}$ using transmission $X$ and the accessible cache contents $Z_c,c\in \mathcal{U}$. That is, for $\mathcal{U}\subset [C]$ such that $|\mathcal{U}|=r$
\begin{equation}
\label{decodability}
H(W_{d_\mathcal{U}}|Z_\mathcal{U},X)=0,
\end{equation}
where $Z_\mathcal{U} =\{Z_c: c\in \mathcal{U}\}$.
\begin{defn}
For the $(C,r,M,N)$ combinatorial MACC scheme, a rate $R$ is said to be achievable if the scheme satisfies \eqref{decodability} with a rate less than or equal to $R$ for every possible demand vector. Further, the optimal rate-memory trade-off is
\begin{equation}
R^*(M) =\inf \{R: R \text{ is achievable}\}.
\end{equation}
\end{defn}
\section{Preliminaries}
\label{prelims}
In this section, we discuss the preliminaries required for describing the coding scheme as well as the converse bound.
\subsection{Review of the combinatorial MACC scheme in \cite{MKR2}}
In the sequel, we describe the combinatorial MACC scheme presented in \cite{MKR2}. Here onwards, we refer to the scheme in \cite{MKR2} as the \textit{MKR scheme}.
In the placement phase, the server divides each file into $\binom{C}{t}$ non-overlapping subfiles of equal size, where $t\triangleq CM/N\in [C]$. The subfiles are indexed with $t$-sized subsets of $[C]$. Therefore, we have $W_n = \{W_{n,\mathcal{T}}:\mathcal{T}\subseteq [C],|\mathcal{T}|=t\}$ for all $n\in [N]$. The server fills cache $c$ as follows:
\begin{equation}
Z_c = \left\{W_{n,\mathcal{T}}:\mathcal{T}\ni c, \mathcal{T}\subseteq [C],|\mathcal{T}|=t,n\in [N]\right \}
\end{equation}
for every $c\in [C]$. According to the above placement, the server stores $\binom{C-1}{t-1}$ subfiles of every file in each cache. Thus, we have $M/N =\binom{C-1}{t-1}/\binom{C}{t}=t/C$. Now, assume that user $\mathcal{U}$ demands file $W_{d_\mathcal{U}}$ from the server. In the delivery phase, the server makes coded transmissions corresponding to every $\mathcal{S}\subseteq [C]$ such that $|\mathcal{S}|=t+r$. The server transmission is as follows:
\begin{equation}
\bigoplus_{\substack{\mathcal{U}\subseteq \mathcal{S}\\|\mathcal{U}|=r}} W_{d_\mathcal{U},\mathcal{S}\backslash \mathcal{U}}.
\end{equation}
Therefore, the number of coded subfiles transmitted is $\binom{C}{t+r}$, where each coded subfile is $1/\binom{C}{t}$ of a file in size. Thus the rate of transmission is $\binom{C}{t+r}/\binom{C}{t}$.
Notice that user $\mathcal{U}$ gets subfile $W_{n,\mathcal{T}}$ for all $n\in [N]$ from the cache placement if $|\mathcal{U}\cap \mathcal{T}|\neq 0$. Now, let us see how the user gets subfile $W_{d_\mathcal{U},\mathcal{T}}$ if $|\mathcal{U}\cap \mathcal{T}|= 0$. Consider the transmission corresponding to the $(t+r)$-sized set $\mathcal{S}=\mathcal{U}\cup \mathcal{T}$. In the coded message
\begin{equation*}
\bigoplus_{\substack{\mathcal{U}\subseteq \mathcal{S}\\|\mathcal{U}|=r}} W_{d_\mathcal{U},\mathcal{S}\backslash \mathcal{U}}=W_{d_\mathcal{U},\mathcal{T}}\oplus \bigoplus_{\substack{\mathcal{U}'\subseteq \mathcal{S}\\|\mathcal{U}'|=r\\\mathcal{U}'\neq \mathcal{U}}} W_{d_{\mathcal{U}'},\mathcal{S}\backslash \mathcal{U}'},
\end{equation*}
user $\mathcal{U}$ has access to $W_{d_{\mathcal{U}'},\mathcal{S}\backslash \mathcal{U}'}$ for every $\mathcal{U}'\neq \mathcal{U}$, since $|\mathcal{U}\cap(\mathcal{S}\backslash \mathcal{U}')|\neq 0$. Therefore, user $\mathcal{U}$ can decode the demanded file $W_{d_\mathcal{U}}$.
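To make the MKR placement and delivery concrete, the following Python sketch (a symbolic bookkeeping exercise of our own, not an implementation of the actual XOR transmissions; the parameters $C=4$, $r=2$, $t=2$ are assumptions) builds the cache index sets $Z_c$ and lists the index pairs $(d_\mathcal{U},\mathcal{S}\backslash\mathcal{U})$ entering each coded message.
\begin{verbatim}
# Sketch of the MKR placement/delivery bookkeeping (C=4, r=2, t=2
# are assumed values; subfiles are tracked symbolically by index).
from itertools import combinations
from math import comb

C, r, t = 4, 2, 2
caches = {c: [T for T in combinations(range(1, C + 1), t) if c in T]
          for c in range(1, C + 1)}       # Z_c: stored subfile indices

users = list(combinations(range(1, C + 1), r))
demand = {U: i + 1 for i, U in enumerate(users)}  # user U wants W_{i+1}

transmissions = []                        # one XOR per (t+r)-subset S
for S in combinations(range(1, C + 1), t + r):
    msg = [(demand[U], tuple(sorted(set(S) - set(U))))
           for U in combinations(S, r)]
    transmissions.append(msg)

print(len(transmissions) / comb(C, t))    # rate = C(C,t+r)/C(C,t) = 1/6
\end{verbatim}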
\subsection{Maximum distance separable (MDS) codes \cite{MaS}}
An $[n,k]$ maximum distance separable (MDS) code is an erasure code that allows recovering the $k$ message/information symbols from any $k$ out of the $n$ coded symbols. Consider a systematic $[n,k]$ MDS code over the finite field $\mathbb{F}_q$ with generator matrix $\mathbf{G}_{k\times n} = [\mathbf{I}_k|\mathbf{P}_{k\times (n-k)}]$. Then, we have
\begin{align*}
[m_1,m_2,\dots,m_k,c_1,c_2,\dots,c_{n-k}]=[m_1,m_2,\dots,m_k]\mathbf{G}
\end{align*}
where the message vector $[m_1,m_2,\dots,m_k]\in \mathbb{F}_q^k$.
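As a concrete illustration of systematic MDS encoding, the following Python sketch (our own; the Cauchy-based parity matrix is one standard way to obtain an MDS generator and is not the construction used later in the proofs, and the prime $p=257$ is an arbitrary assumption) encodes $k$ message symbols into $n$ coded symbols over $\mathbb{F}_p$.
\begin{verbatim}
# Sketch: systematic [n,k] MDS encoding over GF(p). The parity part P
# is a Cauchy matrix, so [I_k | P] generates an MDS code. The prime
# p = 257 and the labels x, y are illustrative assumptions.
p, k, n = 257, 2, 4
x = list(range(k))               # k distinct row labels
y = list(range(k, n))            # n-k distinct column labels
P = [[pow((xi + yj) % p, p - 2, p) for yj in y] for xi in x]

def encode(m):
    """Return [m_1..m_k, c_1..c_{n-k}] = m [I_k | P] over GF(p)."""
    parity = [sum(m[i] * P[i][j] for i in range(k)) % p
              for j in range(n - k)]
    return list(m) + parity

print(encode([10, 20]))          # first k symbols appear in the clear
\end{verbatim}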
\subsection{Han's Inequality \cite{Han}}
To derive a lower bound on $R^*(M)$, we use the following lemma, which gives an inequality on the entropies of subsets of random variables.
\begin{lem}[Han's Inequality \cite{Han}]
\label{Entropy}
Let $V_{[1:m]}=\{V_1,V_2,\dots,V_m\}$ be a set of $m$ random variables. Further, let $\mathcal{A}$ and $\mathcal{B}$ denote subsets of $[1:m]$ such that $|\mathcal{A}|=a$ and $|\mathcal{B}|=b$ with $a\geq b$. Then, we have
\begin{equation}
\frac{1}{\binom{m}{a}}\sum_{\substack{\mathcal{A}\subseteq [1:m] \\ |\mathcal{A}|=a}} \frac{H(V_{\mathcal{A}})}{a}\leq \frac{1}{\binom{m}{b}}\sum_{\substack{\mathcal{B}\subseteq [1:m] \\ |\mathcal{B}|=b}} \frac{H(V_{\mathcal{B}})}{b}
\end{equation}
where $V_{\mathcal{A}}$ and $V_{\mathcal{B}}$ denote the set of random variables indexed by the elements in $\mathcal{A}$ and $\mathcal{B}$, respectively.
\end{lem}
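A quick numerical sanity check of Lemma \ref{Entropy} can be made on a small joint distribution. The sketch below (with an arbitrarily chosen, strictly positive pmf of three binary random variables; all values are our assumptions) verifies the inequality for $a=2$ and $b=1$.
\begin{verbatim}
# Sketch: numerically checking Han's inequality for m = 3 binary
# random variables; the joint pmf below is an arbitrary assumption.
from itertools import combinations
from math import log2, comb

pmf = {(a, b, c): (1/12 if a == b else 1/24)
       for a in (0, 1) for b in (0, 1) for c in (0, 1)}
Z = sum(pmf.values())
pmf = {k: v / Z for k, v in pmf.items()}     # normalize

def H(idx):                                  # entropy of V_idx
    marg = {}
    for outcome, pr in pmf.items():
        key = tuple(outcome[i] for i in idx)
        marg[key] = marg.get(key, 0.0) + pr
    return -sum(q * log2(q) for q in marg.values() if q > 0)

def side(a, m=3):                            # one side of the lemma
    return sum(H(s) for s in combinations(range(m), a)) / (comb(m, a) * a)

print(side(2) <= side(1) + 1e-12)            # True (a = 2 >= b = 1)
\end{verbatim}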
\subsection{Motivating Example}
\label{motex}
Even though the scheme in \cite{MKR2} is optimal under uncoded placement, we show with an example that a further reduction in rate is possible with the help of coded placement.
\begin{figure}[t]
\captionsetup{justification = centering}
\captionsetup{font=small,labelfont=small}
\begin{center}
\captionsetup{justification = centering}
\includegraphics[width = 0.9\columnwidth]{example.eps}
\caption{$(4,2,M,N)$ Combinatorial multi-access Network. }
\label{examplefigure}
\end{center}
\end{figure}
\begin{table*}[h!]
\begin{center}
\resizebox{0.8\linewidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
Cache 1 &Cache 2 &Cache 3 &Cache 4 \\
\hline
\hline
$W_{n,12}; \forall n\in [6]$ & $W_{n,12}; \forall n\in [6]$ & $W_{n,13}; \forall n\in [6]$ & $W_{n,14}; \forall n\in [6]$\\
$W_{n,13}; \forall n\in [6]$ & $W_{n,23}; \forall n\in [6]$ & $W_{n,23}; \forall n\in [6]$ & $W_{n,24}; \forall n\in [6]$\\
$W_{n,14}; \forall n\in [6]$ & $W_{n,24}; \forall n\in [6]$ & $W_{n,34}; \forall n\in [6]$ & $W_{n,34}; \forall n\in [6]$\\
\hline
\end{tabular}
}
\caption{$(C=4,r=2,M=3,N=6)$: Cache contents \cite{MKR2}}
\label{exampleplacement}
\end{center}
\end{table*}
\begin{table*}[ht!]
\begin{center}
\resizebox{0.95\linewidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
Cache 1 &Cache 2 &Cache 3 &Cache 4 \\
\hline
\hline
$W_{n,12}^{(1)}; \forall n\in [6]$ & $W_{n,12}^{(2)}; \forall n\in [6]$ & $W_{n,13}^{(2)}; \forall n\in [6]$ & $W_{n,14}^{(2)}; \forall n\in [6]$\\
$W_{n,13}^{(1)}; \forall n\in [6]$ & $W_{n,23}^{(1)}; \forall n\in [6]$ & $W_{n,23}^{(2)}; \forall n\in [6]$ & $W_{n,24}^{(2)}; \forall n\in [6]$\\
$W_{n,14}^{(1)}; \forall n\in [6]$ & $W_{n,24}^{(1)}; \forall n\in [6]$ & $W_{n,34}^{(1)}; \forall n\in [6]$ & $W_{n,34}^{(2)}; \forall n\in [6]$\\
$W_{n,12}^{(2)}+W_{n,13}^{(2)}; \forall n\in [6]$ & $W_{n,12}^{(1)}+W_{n,23}^{(2)}; \forall n\in [6]$ & $W_{n,13}^{(1)}+W_{n,23}^{(1)}; \forall n\in [6]$ & $W_{n,14}^{(1)}+W_{n,24}^{(1)}; \forall n\in [6]$\\
$W_{n,13}^{(2)}+W_{n,14}^{(2)}; \forall n\in [6]$ & $W_{n,23}^{(2)}+W_{n,24}^{(2)}; \forall n\in [6]$ & $W_{n,23}^{(1)}+W_{n,34}^{(2)}; \forall n\in [6]$ & $W_{n,24}^{(1)}+W_{n,34}^{(1)}; \forall n\in [6]$\\
\hline
\end{tabular}
}
\caption{$(C=4,r=2,M=2.5,N=6)$: Cache contents}
\label{examplecodedplacement}
\end{center}
\end{table*}
Consider the $(C=4,r=2,M=3,N=6)$ combinatorial multi-access network in Fig. \ref{examplefigure}. There are $\binom{4}{2}=6$ users in the system, each denoted with a set $\mathcal{U}$, where $\mathcal{U} \subset [C]$ such that $|\mathcal{U}|=2$. Let us first review the MKR scheme \cite{MKR2}, where the placement is uncoded. In the placement phase, the server divides each file into 6 non-overlapping subfiles of equal size. Further, each subfile is denoted with a set $\mathcal{T}$, where $\mathcal{T}\subset[C]$ such that $|\mathcal{T}|=2$. Thus, we have $W_n =\{W_{n,12},W_{n,13},W_{n,14},W_{n,23},W_{n,24},W_{n,34}\}$ for all $n\in [6]$. Note that, even though the subfile indices are sets, we omit the curly braces for simplicity. The server populates the $k^{\text{th}}$ cache as follows:
\begin{equation*}
Z_k = \{W_{n,\mathcal{T}}: k\in \mathcal{T},\mathcal{T}\subset[C], |\mathcal{T}|=2\}.
\end{equation*}
The contents stored in the caches are given in Table \ref{exampleplacement}.
From the cache placement, user $\mathcal{U}$ has access to all the subfiles (of every file) except those indexed with 2-sized subsets of $[C]\backslash \mathcal{U}$. For example, user $\{12\}$ has access to all the subfiles except subfile $W_{n,34}$.
In the delivery phase, the users reveal their demands. Let $W_{d_\mathcal{U}}$ be the file demanded by user $\mathcal{U}$. Then the server transmits $W_{d_{12},34}+W_{d_{13},24}+W_{d_{14},23}+W_{d_{23},14}+W_{d_{24},13}+W_{d_{34},12}$. The claim is that all the users will get their demanded files from the above placement and delivery phases. For example, user $\{12\}$ has access to all the subfiles except subfile $W_{n,34}$, and it can recover $W_{d_{12},34}$ by subtracting the remaining subfiles from the transmitted message. Similarly, it can be verified that all the users will get their demanded files. Thus the rate achieved is 1/6, which is, in fact, optimal under uncoded placement \cite{BrE}. Notice that every subfile is cached in two caches. For instance, subfile $W_{n,12}$ is cached in cache 1 and cache 2. Thus user $\{12\}$, who accesses those two caches, gets multiple copies of the same subfile. This redundancy leads to the question of whether we can do better than the MKR scheme.
Now, we employ a coded placement technique and achieve the same rate 1/6 at $M=2.5$. The coding scheme is described hereinafter. The server divides each file into 6 subfiles of equal size, and each subfile is indexed with a set $\mathcal{T}$, where $\mathcal{T} \subset [C]$ such that $|\mathcal{T}|=2$. Thus, we have $W_n =\{W_{n,12},W_{n,13},W_{n,14},W_{n,23},W_{n,24},W_{n,34}\}$ for all $n\in [6]$. Further, each subfile is divided into 2 mini-subfiles of half the subfile size each; that is, $W_{n,\mathcal{T}} =\{W_{n,\mathcal{T}}^{(1)},W_{n,\mathcal{T}}^{(2)}\}$ for all subfiles. The cache placement is summarized in Table \ref{examplecodedplacement}. Each cache contains 5 mini-subfiles of every file (3 uncoded and 2 coded mini-subfiles), each of size $1/12^{\text{th}}$ of a file. Thus the size of each cache is $M = 6\times 5/12 = 2.5$ units.
Let $W_{d_\mathcal{U}}$ be the file demanded by user $\mathcal{U}$ in the delivery phase. Then the server transmits $W_{d_{12},34}+W_{d_{13},24}+W_{d_{14},23}+W_{d_{23},14}+W_{d_{24},13}+W_{d_{34},12}$. Notice that this coded message is the same as the transmitted message in the MKR scheme, so the rate achieved is also the same, which is 1/6. Now, it remains to show that the users can decode their demanded files. Since our scheme follows the same delivery policy as the MKR scheme, it is enough to show that a user in our scheme can decode the subfiles that are accessible to the corresponding user (the user accessing the same set of caches) in the MKR scheme. Consider user $\{12\}$, which accesses cache 1 and cache 2. The user gets $W_{n,12}^{(1)}$ from cache 1 and $W_{n,12}^{(2)}$ from cache 2, and thus it obtains the subfile $W_{n,12}=\{W_{n,12}^{(1)},W_{n,12}^{(2)}\}$ for all $n\in [6]$. Using $W_{n,12}^{(2)}$, the user can decode $W_{n,13}^{(2)}$ and $W_{n,14}^{(2)}$ from the coded mini-subfiles in cache 1. Similarly, from cache 2, the mini-subfiles $W_{n,23}^{(2)}$ and $W_{n,24}^{(2)}$ can be decoded using $W_{n,12}^{(1)}$. Together with the uncoded mini-subfiles stored in the two caches, user $\{12\}$ thus obtains the subfiles $W_{n,12},W_{n,13},W_{n,14},W_{n,23}$ and $W_{n,24}$ for every $n\in [6]$. From Table \ref{examplecodedplacement}, it can be verified that a user $\mathcal{U}$ gets all the subfiles $W_{n,\mathcal{T}}$ such that $\mathcal{T}\cap \mathcal{U}\neq \phi$. Thus a user can decode every subfile that is accessible to the corresponding user in the MKR scheme. In essence, by employing a coded placement technique, we could achieve the same rate at a smaller cache memory.
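The chain of decoding steps for user $\{12\}$ can be replayed mechanically. The sketch below is our own illustration: mini-subfiles are plain labels (e.g., \texttt{12a} for $W_{n,12}^{(1)}$), each coded entry of Table \ref{examplecodedplacement} is a two-term XOR, and the loop peels every pair in which exactly one term is already known.
\begin{verbatim}
# Sketch: replaying the decoding of user {12} from Table II. Labels
# like "12a" stand for W_{n,12}^(1); tuples are two-term XORs.
cache1 = ["12a", "13a", "14a", ("12b", "13b"), ("13b", "14b")]
cache2 = ["12b", "23a", "24a", ("12a", "23b"), ("23b", "24b")]

known = {x for x in cache1 + cache2 if isinstance(x, str)}
coded = [x for x in cache1 + cache2 if isinstance(x, tuple)]

changed = True
while changed:             # peel XOR pairs with exactly one known term
    changed = False
    for u, v in coded:
        if (u in known) != (v in known):
            known |= {u, v}
            changed = True

print(sorted(known))       # every mini-subfile of W_{n,12..24} appears
\end{verbatim}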
Next, we show that for the $(C=4,r=2,M=3,N=6)$ coded caching scheme, it is possible to meet all the user demands without any server transmission. In other words, for the $(C=4,r=2,M=3,N=6)$ coded caching scheme, the rate $R=0$ is achievable. In the placement phase, each file is divided into 2 subfiles of equal size, $W_n=\{W_{n,1},W_{n,2}\}$ for all $n\in [6]$. Then $(W_{n,1},W_{n,2})$ is encoded using a generator matrix $\mathbf{G}_{2\times 4}$ of a $[4,2]$ MDS code. Thus, we have the coded subfiles
\begin{equation*}
(C_{n,1},C_{n,2},C_{n,3},C_{n,4}) = (W_{n,1},W_{n,2})\mathbf{G}_{2\times 4}.
\end{equation*}
Then the server fills the caches as follows: $Z_1 = \{C_{n,1};n\in[6]\}$, $Z_2 = \{C_{n,2};n\in[6]\}$, $Z_3 = \{C_{n,3};n\in[6]\}$, and $Z_4 = \{C_{n,4};n\in[6]\}$. From this placement, each user obtains two coded subfiles of every file (one from each cache it accesses). Since any two columns of matrix $\mathbf{G}$ are linearly independent, the users can decode all the files without any further transmission.
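The following sketch (the field $\mathbb{F}_7$ and the particular generator matrix are our illustrative choices, not the paper's) mirrors this $R=0$ placement: each cache holds one coded subfile, and any two of them suffice to recover $(W_{n,1},W_{n,2})$.
\begin{verbatim}
# Sketch of the R = 0 placement for (C,r) = (4,2): any 2 of the 4
# coded subfiles recover the message. GF(7) and G are assumptions.
p = 7
G = [[1, 0, 1, 1],      # [4,2] generator: any two columns are
     [0, 1, 1, 2]]      # linearly independent over GF(7)

def encode(m):          # (C_1,...,C_4) = (m_1, m_2) G
    return [(m[0] * G[0][j] + m[1] * G[1][j]) % p for j in range(4)]

def decode(vals, cols): # invert the 2x2 subsystem picked by cols
    a, b = G[0][cols[0]], G[0][cols[1]]
    c, d = G[1][cols[0]], G[1][cols[1]]
    inv = pow((a * d - b * c) % p, p - 2, p)
    return (((vals[0] * d - vals[1] * c) * inv) % p,
            ((vals[1] * a - vals[0] * b) * inv) % p)

cw = encode([3, 5])
print(decode([cw[2], cw[3]], [2, 3]))  # (3, 5): caches 3 and 4 suffice
\end{verbatim}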
\section{Main Results}
\label{main}
In this section, we present two achievability results for the $(C,r,M,N)$ combinatorial MACC scheme. Further, we derive an information-theoretic lower bound on the optimal rate-memory trade-off, $R^*(M)$.
\begin{thm}
\label{thm:scheme1}
Let $t\in [C-r+1]$. Then, for the $(C,r,M,N)$ combinatorial MACC scheme, the rate $\binom{C}{t+r}/\binom{C}{t}$ is achievable at cache memory
\begin{equation}
\label{thm1eqn}
M=\begin{cases}
N\left(\frac{t}{C}-\frac{1}{\binom{C}{t}}\sum\limits_{i=1}^{r-1}\frac{r-i}{r}\binom{r}{i-1}\binom{C-r}{t-r+i-1}\right) & \text{if } t\geq r,\\
N\left(\frac{t}{C}-\frac{1}{\binom{C}{t}}\sum\limits_{i=1}^{t-1}\frac{t-i}{r}\binom{r}{t-i+1}\binom{C-r}{i-1}\right) & \text{if } t< r.
\end{cases}
\end{equation}
\end{thm}
Proof of Theorem \ref{thm:scheme1} is given in Section \ref{proof:thm1}. \hfill$\blacksquare$
In Section \ref{proof:thm1}, for the $(C,r,M,N)$ combinatorial MACC problem, we explicitly present a coding scheme making use of coded placement. As we have seen in the example in Section \ref{motex}, coding in the placement phase helps to avoid redundancy in the cache contents accessed by a user in the MKR scheme \cite{MKR2}. The parameter $t$ in the MKR scheme represents the number of times the entire server library is duplicated among the caches. So, when $t=1$, there is no duplication in the cached contents, and the performance improvement from coding is available only when $t>1$. Further, the placement phase of the MKR scheme is independent of $r$, whereas our scheme takes $r$ into consideration during the cache placement. Also, our scheme needs to divide each file into $\tilde{r}!\binom{C}{t}$ subfiles, where $\tilde{r}=\min(r,t)$, while the MKR scheme needs a smaller subpacketization of $\binom{C}{t}$.
The MKR scheme achieves the rate $\binom{C}{t+r}/\binom{C}{t}$ at $M=Nt/C$, which is optimal under uncoded placement. However, when $t>1$, we achieve the same rate at a smaller value of $M$ with the help of coding in the placement phase. If $t=1$, we have cache memory $M=N/C$. For $0\leq M\leq N/C$, the scheme in Theorem \ref{thm:scheme1} has the same rate-memory trade-off as the MKR scheme, while for every $M>N/C$, our proposed scheme achieves a strictly smaller rate. In Fig. \ref{plot1}, we plot the rate-memory trade-off in Theorem \ref{thm:scheme1} along with that of the MKR scheme. For $0\leq M\leq 7$ ($t=1$ gives $M=7$), both schemes have the same rate-memory trade-off. However, for $M>7$, our scheme has a strictly lower rate compared to the MKR scheme. It is worth noting that, in order to achieve $R(M)=0$, it is sufficient to have $M=N/r$ in our scheme, whereas the optimal scheme with uncoded placement requires $M=N(C-r+1)/C$ to achieve $R(M)=0$.
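The memory-rate points of Theorem \ref{thm:scheme1} are straightforward to evaluate numerically. The sketch below (our own code, using the parameters of Fig. \ref{plot1}) computes, for each $t$, the memory in \eqref{thm1eqn}, the corresponding MKR memory $Nt/C$, and the common rate.
\begin{verbatim}
# Sketch: evaluating the (M, R) points of Theorem 1 against the MKR
# points for the (8, 3, M, 56) network.
from math import comb

C, r, N = 8, 3, 56

def M_scheme1(t):
    if t >= r:
        s = sum((r - i) / r * comb(r, i - 1) * comb(C - r, t - r + i - 1)
                for i in range(1, r))
    else:
        s = sum((t - i) / r * comb(r, t - i + 1) * comb(C - r, i - 1)
                for i in range(1, t))
    return N * (t / C - s / comb(C, t))

for t in range(1, C - r + 2):
    R = comb(C, t + r) / comb(C, t)    # comb returns 0 when t+r > C
    print(t, round(M_scheme1(t), 3), N * t / C, round(R, 4))
# t = 1 gives M = 7 in both schemes; t = C-r+1 = 6 gives M = N/r, R = 0
\end{verbatim}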
\begin{figure}[t]
\captionsetup{justification = centering}
\captionsetup{font=small,labelfont=small}
\begin{center}
\captionsetup{justification = centering}
\includegraphics[width = \columnwidth]{fig1.eps}
\caption{$(8,3,M,56)$ Combinatorial MACC scheme. A comparison between the MKR scheme and Scheme 1 for $M\geq 6$ }
\label{plot1}
\end{center}
\end{figure}
Next, we present a combinatorial MACC scheme that performs better than the MKR scheme in the lower memory regime. In the following theorem, we present an achievability result for $M\leq (N-\binom{C-1}{r})/C$.
\begin{thm}
\label{thm:scheme2}
For the $(C,r,M,N)$ combinatorial MACC scheme with $N>\binom{C-1}{r}$, the rate $R(M) = N-CM$ is achievable for $0\leq M\leq (N-\binom{C-1}{r})/C$.
\end{thm}
Proof of Theorem \ref{thm:scheme2} is given in Section \ref{proof:thm2}. \hfill$\blacksquare$
When the number of files $N$ is such that $\binom{C-1}{r}<N\leq \binom{C}{r}$, the rate $R(M) = N-CM$ is strictly less than the rate of the MKR scheme for $0\leq M\leq (N-\binom{C-1}{r})/C$. In Fig. \ref{plot3}, we compare the performance of the $(5,3,M,10)$ combinatorial MACC scheme under uncoded and coded placements. The rate-memory trade-off of the optimal scheme under uncoded placement (MKR scheme) is denoted as $R^*_{\text{uncoded}}(M)$, whereas $R_{\text{coded}}(M)$ is obtained from Theorem \ref{thm:scheme1} and Theorem \ref{thm:scheme2}, where the placement is coded.
\begin{figure}[t]
\captionsetup{justification = centering}
\captionsetup{font=small,labelfont=small}
\begin{center}
\captionsetup{justification = centering}
\includegraphics[width = \columnwidth]{fig3.eps}
\caption{$(5,3,M,10)$ Combinatorial MACC scheme: A comparison between uncoded placement and coded placement }
\label{plot3}
\end{center}
\end{figure}
Now, we present a lower bound on the optimal rate-memory trade-off for the $(C,r,M,N)$ combinatorial MACC scheme.
\begin{thm}
\label{lowerbound}
For the $(C,r,M,N)$ combinatorial MACC scheme
\begin{align}
R^*(M) \geq \max_{\substack{s\in \{r,r+1,r+2,\hdots,C\} \\ \ell\in \left\{1,2,\hdots,\ceil{N/\binom{s}{r}}\right\}}} \frac{1}{\ell}&\Big\{N-\frac{\omega_{s,\ell}}{s+\omega_{s,\ell}}\Big(N-\ell\binom{s}{r}\Big)^+ \notag\\- &\Big(N- \ell\binom{C}{r}\Big)^+-sM\Big\} \label{lowereqn}
\end{align}
where $\omega_{s,\ell} = \min\left(C-s,\ \min\left\{i\geq 0: \binom{s+i}{r}\geq \ceil{\frac{N}{\ell}}\right\}\right)$.
\end{thm}
Proof of Theorem \ref{lowerbound} is given in Section \ref{proof:thm3}. \hfill$\blacksquare$
The lower bound in Theorem \ref{lowerbound} has two parameters: $s$, which is related to the number of caches, and $\ell$, which is associated with the number of transmissions. To obtain this lower bound, we consider $s\in [r:C]$ caches and the $\binom{s}{r}$ users who access caches only from those $s$ caches. From $\ceil{{N}/{\binom{s}{r}}}$ transmissions, all the $N$ files can be decoded at those $\binom{s}{r}$ users' end by appropriately choosing the demand vectors. Further, we split the total $\ceil{{N}/{\binom{s}{r}}}$ transmissions into the first $\ell$ transmissions and the remaining $\ceil{{N}/{\binom{s}{r}}}-\ell$ transmissions, and bound the uncertainty in the two cases separately. This bounding technique is similar to the approach used in \cite{STC} and \cite{NaR2}, where lower bounds were derived for the dedicated cache scheme and the MACC scheme with cyclic wrap-around, respectively. For the $(4,2,M,6)$ combinatorial MACC scheme, the improvement in rate using coded placement can be observed from Fig. \ref{plot2}. Also, notice that the rate-memory trade-offs in Theorem \ref{thm:scheme1} and Theorem \ref{thm:scheme2} are optimal for $M\geq 2.5$ and $M\leq 0.75$, respectively.
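For reference, the bound in \eqref{lowereqn} can be evaluated by brute force over $s$ and $\ell$. The Python sketch below is our own; we read the inner minimization in $\omega_{s,\ell}$ as the smallest non-negative integer $i$ with $\binom{s+i}{r}\geq\lceil N/\ell\rceil$, which is an assumption on the notation. It reproduces the value $1/6$ at $M=2.5$ for the $(4,2,M,6)$ network.
\begin{verbatim}
# Sketch: brute-force evaluation of the Theorem 3 lower bound for the
# (4, 2, M, 6) network. Reading of the inner min is our assumption.
from math import comb, ceil

C, r, N = 4, 2, 6

def omega(s, l):
    i = 0
    while comb(s + i, r) < ceil(N / l):
        i += 1
    return min(C - s, i)

def lower_bound(M):
    best = 0.0
    for s in range(r, C + 1):
        for l in range(1, ceil(N / comb(s, r)) + 1):
            w = omega(s, l)
            val = (N - w / (s + w) * max(N - l * comb(s, r), 0)
                     - max(N - l * comb(C, r), 0) - s * M) / l
            best = max(best, val)
    return best

print(lower_bound(2.5))    # 0.1666... = 1/6 at M = 2.5
\end{verbatim}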
\begin{figure}[t]
\captionsetup{justification = centering}
\captionsetup{font=small,labelfont=small}
\begin{center}
\captionsetup{justification = centering}
\includegraphics[width = \columnwidth]{fig2.eps}
\caption{$(4,2,M,6)$ Combinatorial MACC scheme: The MKR scheme, Scheme 1, Scheme 2 and the lower bound }
\label{plot2}
\end{center}
\end{figure}
Using the lower bound in Theorem \ref{lowerbound}, we show the following optimality results. First, we prove that the rate-memory trade-off in Theorem \ref{thm:scheme1} is optimal at a higher memory regime.
\begin{thm}
\label{optimality1}
For the $(C,r,M,N)$ combinatorial MACC scheme
\begin{equation}
R^*(M) = 1-\frac{rM}{N}
\end{equation}
for $\frac{\binom{C}{r}-1}{r\binom{C}{r}}\leq \frac{M}{N}\leq \frac{1}{r}$.
\end{thm}
Proof of Theorem \ref{optimality1} is given in Section \ref{proof:thm4}. \hfill$\blacksquare$
Now, we show that the rate-memory trade-off in Theorem \ref{thm:scheme2} is optimal if the number of users is not more than the number of files.
\begin{thm}
\label{optimality2}
For the $(C,r,M,N)$ combinatorial MACC scheme with $\binom{C-1}{r}<N\leq \binom{C}{r}$, we have
\begin{equation}
R^*(M) = N-CM
\end{equation}
for $0\leq M\leq \frac{N-\binom{C-1}{r}}{C}$.
\end{thm}
Proof of Theorem \ref{optimality2} is given in Section \ref{proof:thm5}. \hfill$\blacksquare$
\section{Proofs}
\label{proofs}
\subsection{Proof of Theorem \ref{thm:scheme1}}
\label{proof:thm1}
In this section, we present a coding scheme (we refer to this scheme as \textit{Scheme 1}) that achieves the rate $\binom{C}{t+r}/\binom{C}{t}$ at cache memory $M$ (corresponding to every $t\in [C-r+1]$) as given in \eqref{thm1eqn}.
Let $t\in [C-r]$ (for $t=C-r+1$, we give a separate scheme at the end of this proof). We consider the cases $t\geq r$ and $t<r$ separately. Let us first consider the former case.
\noindent a1) \textit{Placement phase ($t\geq r$)}: The server divides each file into $\binom{C}{t}$ non-overlapping subfiles of equal size. Each subfile is indexed with a $t$-sized set $\mathcal{T}\subseteq [C]$. We assume $\mathcal{T}$ to be an ordered set; that is, $\mathcal{T} = \{\tau_1,\tau_2,\dots,\tau_i,\dots,\tau_t\}$ with $\tau_1<\tau_2<\dots<\tau_t$. Also, whenever an ordering among the subfiles (indexed with $t$-sized subsets of $[C]$) is required, lexicographic ordering is followed. We have $W_n =\{W_{n,\mathcal{T}}: \mathcal{T}\subseteq [C],|\mathcal{T}|=t\} $ for every $n\in [N]$. Then each subfile is divided into $r!$ mini-subfiles. That is, for a $\mathcal{T}\subseteq [C]$ such that $|\mathcal{T}|=t$, we have $W_{n,\mathcal{T}}=\{W_{n,\mathcal{T}}^j: j\in [r!]\}$. Let $\mathbf{G}$ be a generator matrix of an $[r!t,r!]$ MDS code. For every $n\in [N]$ and for every $\mathcal{T}\subseteq [C]$ such that $|\mathcal{T}|=t$, the server encodes the mini-subfiles $(W_{n,\mathcal{T}}^j:j\in [r!])$ with $\mathbf{G}$ as follows:
\begin{equation*}
(Y_{n,\mathcal{T}}^{1},Y_{n,\mathcal{T}}^{2},\dots,Y_{n,\mathcal{T}}^{r!t}) = (W_{n,\mathcal{T}}^j:j\in [r!])\mathbf{G}.
\end{equation*}
Consider cache $c$, where $c\in [C]$, and a set $\mathcal{T}\subseteq [C]$, $|\mathcal{T}|=t$ such that $\mathcal{T}\ni c$. Now, for a given $t,c\in [C]$, define a function $\phi_c^t:\{\mathcal{T}\subseteq[C]:\mathcal{T}\ni c,|\mathcal{T}|=t\}\rightarrow [t]$. The function $\phi_c^t(\mathcal{T})$ gives the position of $c$ in the ordered set $\mathcal{T}$ of size $t$. If $\mathcal{T} = \{\tau_1,\tau_2,\dots,\tau_i=c,\dots,\tau_t\}$, where $\tau_1<\tau_2<\dots<\tau_t$, then $\phi_c^t(\mathcal{T})=i$.\\
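In code, $\phi_c^t$ is simply the 1-indexed position of $c$ in the sorted set $\mathcal{T}$; a one-line sketch (ours, for illustration):
\begin{verbatim}
# Sketch: the position function phi_c^t(T).
def phi(c, T):
    """1-indexed position of cache c in the ordered t-sized set T."""
    return sorted(T).index(c) + 1

print(phi(3, {1, 3, 6}))   # 2
\end{verbatim}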
\noindent The placement phase consists of $r$ rounds: Round 0, Round 1, $\dots$, Round $r-1$. The content stored in cache $c$ in Round $\alpha$ is denoted as $Z_c^\alpha$. In Round 0, the server fills cache $c$ as
\begin{align*}
Z_c^0 = \Big\{Y_{n,\mathcal{T}}^{\ell_0}&:\ell_0 \in \big[(\phi_c^t(\mathcal{T})-1)(r-1)!+1:\\\phi_c^t(&\mathcal{T})(r-1)!\big],\mathcal{T}\subseteq[C]:\mathcal{T}\ni c,|\mathcal{T}|=t,n\in [N]\Big\}.
\end{align*}
Notice that $(r-1)!\binom{C-1}{t-1}$ coded mini-subfiles (of every file) are placed in each cache in Round 0. Before the subsequent rounds, the server further encodes the already encoded mini-subfiles. Let $\mathbf{G}^{(1)}$ be a systematic generator matrix of a $[2\binom{C-1}{t-1}-\binom{C-r}{t-r},\binom{C-1}{t-1}]$ MDS code, and let $\mathbf{G}^{(1)}=[\mathbf{I}_{\binom{C-1}{t-1}}|\mathbf{P}^{(1)}_{\binom{C-1}{t-1}\times \binom{C-1}{t-1}-\binom{C-r}{t-r}}]$. In Round 1, the server picks the coded mini-subfiles $Y_{n,\mathcal{T}}^{q_\mathcal{T}^1+\ell_1}$ for every set $\mathcal{T}\subseteq[C],|\mathcal{T}|=t$ such that $\mathcal{T}\ni c$, where $q_{\mathcal{T}}^1 = (r-1)!t+\left(\phi_c^t(\mathcal{T})-1\right)(r-2)!$ and $\ell_1\in [(r-2)!]$. Those coded mini-subfiles are encoded with $\mathbf{P}^{(1)}$ as follows:
\begin{align*}
(Q_{n,c}^{\ell_1,1},Q_{n,c}^{\ell_1,2}&,\dots,Q_{n,c}^{\ell_1,\binom{C-1}{t-1}-\binom{C-r}{t-r}}) =\\& \left(Y_{n,\mathcal{T}}^{q_\mathcal{T}^1+\ell_1}: \mathcal{T}\subseteq[C],|\mathcal{T}|=t,\mathcal{T}\ni c\right)\mathbf{P}^{(1)},\\&\hspace{3cm}\forall \ell_1\in [(r-2)!], n\in [N].
\end{align*}
In Round 1, the server places the following content in cache $c$,
\begin{align*}
Z_c^1 =& \Big\{Q_{n,c}^{\ell_1,1},Q_{n,c}^{\ell_1,2},\dots,Q_{n,c}^{\ell_1,\binom{C-1}{t-1}-\binom{C-r}{t-r}}:\ell_1\in [(r-2)!],\\&\hspace{6.5cm}n\in [N]\Big\}.
\end{align*}
In Round 1, $(r-2)!\left(\binom{C-1}{t-1}-\binom{C-r}{t-r}\right)$ ``doubly-encoded" mini-subfiles of every file are kept in each cache. Now let us focus on Round $b$, where $b\in [2:r-1]$. Let $\mathbf{G}^{(b)}$ be a systematic generator matrix of a $[2\binom{C-1}{t-1}-\sum_{i=1}^{b}\binom{r-1}{r-i}\binom{C-r}{t-r+i-1},\binom{C-1}{t-1}]$ MDS code, and let $\mathbf{G}^{(b)}=[\mathbf{I}_{\binom{C-1}{t-1}}|\mathbf{P}^{(b)}_{\binom{C-1}{t-1}\times \binom{C-1}{t-1}-\sum_{i=1}^{b}\binom{r-1}{r-i}\binom{C-r}{t-r+i-1}}]$. In Round $b$, the server picks the coded mini-subfiles $Y_{n,\mathcal{T}}^{q_\mathcal{T}^b+\ell_b}$ for every set $\mathcal{T}\subseteq[C],|\mathcal{T}|=t$ such that $\mathcal{T}\ni c$, where $q_{\mathcal{T}}^b = \frac{r!}{r-b}t+\left(\phi_c^t(\mathcal{T})-1\right)\frac{r!}{(r-b)(r-b+1)}$ and $\ell_b\in \left[\frac{r!}{(r-b)(r-b+1)}\right]$. Those coded mini-subfiles are encoded with $\mathbf{P}^{(b)}$ as follows:
\begin{align*}
(Q_{n,c}^{\ell_b,1},Q_{n,c}^{\ell_b,2},&\dots,Q_{n,c}^{\ell_b,\binom{C-1}{t-1}-\sum_{i=1}^{b}\binom{r-1}{r-i}\binom{C-r}{t-r+i-1}}) =\\& \left(Y_{n,\mathcal{T}}^{q_\mathcal{T}^b+\ell_b}: \mathcal{T}\subseteq[C],|\mathcal{T}|=t,\mathcal{T}\ni c\right)\mathbf{P}^{(b)},\\& \hspace{1cm} \forall \ell_b\in \left[\frac{r!}{(r-b)(r-b+1)}\right], n\in [N].
\end{align*}
In Round $b$, the server places the following content in cache $c$,
\begin{align*}
Z_c^b = \Big\{Q_{n,c}^{\ell_b,1},Q_{n,c}^{\ell_b,2},\dots,Q_{n,c}^{\ell_b,\binom{C-1}{t-1}-\sum_{i=1}^{b}\binom{r-1}{r-i}\binom{C-r}{t-r+i-1}}:\\\ell_b\in \left[\frac{r!}{(r-b)(r-b+1)}\right], n\in [N]\Big\}.
\end{align*}
In Round $b$, $\frac{r!}{(r-b)(r-b+1)}\left(\binom{C-1}{t-1}-\sum_{i=1}^{b}\binom{r-1}{r-i}\binom{C-r}{t-r+i-1}\right)$ doubly-encoded mini-subfiles of each file are kept in each cache. So, the overall content stored in cache $c$ is
\begin{equation*}
Z_c = \bigcup\limits_{\alpha=0}^{r-1} Z_c^\alpha, \hspace{0.2cm}\forall c\in [C].
\end{equation*}
Each coded mini-subfile is $\frac{1}{r!\binom{C}{t}}$ of a file in size. Therefore, the normalized cache memory is
\begin{align}
\frac{M}{N} &= \frac{(r-1)!\binom{C-1}{t-1}}{r!\binom{C}{t}}+\notag\\&\qquad\frac{\sum\limits_{b=1}^{r-1}\frac{r!}{(r-b)(r-b+1)}\left(\binom{C-1}{t-1}-\sum\limits_{i=1}^{b}\binom{r-1}{r-i}\binom{C-r}{t-r+i-1}\right)}{r!\binom{C}{t}}\notag \\
&=\frac{t}{C}-\frac{1}{\binom{C}{t}}\sum\limits_{i=1}^{r-1}\frac{r-i}{r}\binom{r}{i-1}\binom{C-r}{t-r+i-1}\label{mbyn1}.
\end{align}
The calculation of $M/N$ value is elaborated in Appendix \ref{AppendixZ}.
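As a sanity check on \eqref{mbyn1}, the round-by-round count of coded mini-subfiles can be summed numerically and compared with the closed form. The sketch below (our own; the parameters $(C,r)=(8,3)$ are assumptions) does so for the case $t\geq r$.
\begin{verbatim}
# Sketch: numerically verifying that the per-round cache occupancy
# sums to the closed-form M/N for t >= r. (C, r) = (8, 3) assumed.
from math import comb, factorial

C, r = 8, 3
for t in range(r, C - r + 1):
    round0 = factorial(r - 1) * comb(C - 1, t - 1)
    later = sum(factorial(r) // ((r - b) * (r - b + 1))
                * (comb(C - 1, t - 1)
                   - sum(comb(r - 1, r - i) * comb(C - r, t - r + i - 1)
                         for i in range(1, b + 1)))
                for b in range(1, r))
    lhs = (round0 + later) / (factorial(r) * comb(C, t))
    rhs = t / C - sum((r - i) / r * comb(r, i - 1)
                      * comb(C - r, t - r + i - 1)
                      for i in range(1, r)) / comb(C, t)
    print(t, abs(lhs - rhs) < 1e-12)   # True for every t
\end{verbatim}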
\noindent \textit{a2) Placement phase ($t<r$):} The server divides each file into $\binom{C}{t}$ non-overlapping subfiles of equal size. Each subfile is indexed with a $t$-sized set $\mathcal{T}\subseteq [C]$. Thus, we have $W_n =\{W_{n,\mathcal{T}}: \mathcal{T}\subseteq [C],|\mathcal{T}|=t\} $ for every $n\in [N]$. Then each subfile is divided into $t!$ mini-subfiles. That is, for a $\mathcal{T}\subseteq [C]$ such that $|\mathcal{T}|=t$, we have $W_{n,\mathcal{T}}=\{W_{n,\mathcal{T}}^j: j\in [t!]\}$. Let $\mathbf{G}$ be a generator matrix of a $[t!t,t!]$ MDS code. For every $n\in [N]$ and for every $\mathcal{T}\subseteq [C]$ such that $|\mathcal{T}|=t$, the server encodes the mini-subfiles $(W_{n,\mathcal{T}}^j:j\in [t!])$ with $\mathbf{G}$ as follows:
\begin{equation*}
(Y_{n,\mathcal{T}}^{1},Y_{n,\mathcal{T}}^{2},\dots,Y_{n,\mathcal{T}}^{t!t}) = (W_{n,\mathcal{T}}^j:j\in [t!])\mathbf{G}.
\end{equation*}
The placement phase consists of $t$ rounds: Round 0, Round 1, $\dots$, Round $t-1$. The content stored in cache $c$ in Round $\alpha$ is denoted as $Z_c^\alpha$. In Round 0, the server fills cache $c$ as
\begin{align*}
Z_c^0 = \Big\{Y_{n,\mathcal{T}}^{\ell_0}:&\ell_0 \in \left[(\phi_c^t(\mathcal{T})-1)(t-1)!+1:\phi_c^t(\mathcal{T})(t-1)!\right],\\&\mathcal{T}\subseteq[C]:\mathcal{T}\ni c,|\mathcal{T}|=t,n\in [N]\Big\}.
\end{align*}
Notice that $(t-1)!\binom{C-1}{t-1}$ coded mini-subfiles (of every file) are placed in each cache in Round 0. Before the subsequent rounds, the server further encodes the already encoded mini-subfiles. Let $\mathbf{G}^{(1)}$ be a systematic generator matrix of a $[2\binom{C-1}{t-1}-\binom{r-1}{t-1},\binom{C-1}{t-1}]$ MDS code, and let $\mathbf{G}^{(1)}=[\mathbf{I}_{\binom{C-1}{t-1}}|\mathbf{P}^{(1)}_{\binom{C-1}{t-1}\times \binom{C-1}{t-1}-\binom{r-1}{t-1}}]$. In Round 1, the server picks the coded mini-subfiles $Y_{n,\mathcal{T}}^{q_\mathcal{T}^1+\ell_1}$ for every set $\mathcal{T}\subseteq[C],|\mathcal{T}|=t$ such that $\mathcal{T}\ni c$, where $q_{\mathcal{T}}^1 = t!+\left(\phi_c^t(\mathcal{T})-1\right)(t-2)!$ and $\ell_1\in [(t-2)!]$. Those coded mini-subfiles are encoded with $\mathbf{P}^{(1)}$ as follows:
\begin{align*}
(Q_{n,c}^{\ell_1,1},Q_{n,c}^{\ell_1,2},\dots,&Q_{n,c}^{\ell_1,\binom{C-1}{t-1}-\binom{r-1}{t-1}}) =\\& \left(Y_{n,\mathcal{T}}^{q_\mathcal{T}^1+\ell_1}: \mathcal{T}\subseteq[C],|\mathcal{T}|=t,\mathcal{T}\ni c\right)\mathbf{P}^{(1)},\\&\hspace{2cm}\forall \ell_1\in [(t-2)!], n\in [N].
\end{align*}
In Round 1, the server places the following content in cache $c$,
\begin{align*}
Z_c^1 = \Big\{Q_{n,c}^{\ell_1,1},Q_{n,c}^{\ell_1,2}&,\dots,Q_{n,c}^{\ell_1,\binom{C-1}{t-1}-\binom{r-1}{t-1}}:\\&\hspace{2cm}\ell_1\in [(t-2)!], n\in [N]\Big\}.
\end{align*}
In Round 1, $(t-2)!\left(\binom{C-1}{t-1}-\binom{r-1}{t-1}\right)$ doubly-encoded mini-subfiles of every file are kept in each cache. Now let us focus on Round $b$, where $b\in [2:t-1]$. Let $\mathbf{G}^{(b)}$ be a systematic generator matrix of a $[2\binom{C-1}{t-1}-\sum_{i=1}^{b}\binom{r-1}{t-i}\binom{C-r}{i-1},\binom{C-1}{t-1}]$ MDS code, and let $\mathbf{G}^{(b)}=[\mathbf{I}_{\binom{C-1}{t-1}}|\mathbf{P}^{(b)}_{\binom{C-1}{t-1}\times \binom{C-1}{t-1}-\sum_{i=1}^{b}\binom{r-1}{t-i}\binom{C-r}{i-1}}]$. In Round $b$, the server picks the coded mini-subfiles $Y_{n,\mathcal{T}}^{q_\mathcal{T}^b+\ell_b}$ for every set $\mathcal{T}\subseteq[C],|\mathcal{T}|=t$ such that $\mathcal{T}\ni c$, where $q_{\mathcal{T}}^b = \frac{t!}{t-b}t+\left(\phi_c^t(\mathcal{T})-1\right)\frac{t!}{(t-b)(t-b+1)}$ and $\ell_b\in \left[\frac{t!}{(t-b)(t-b+1)}\right]$. Those coded mini-subfiles are encoded with $\mathbf{P}^{(b)}$ as follows:
\begin{align*}
(Q_{n,c}^{\ell_b,1},Q_{n,c}^{\ell_b,2},&\dots,Q_{n,c}^{\ell_b,\binom{C-1}{t-1}-\sum_{i=1}^{b}\binom{r-1}{t-i}\binom{C-r}{i-1}}) =\\& \left(Y_{n,\mathcal{T}}^{q_\mathcal{T}^b+\ell_b}: \mathcal{T}\subseteq[C],|\mathcal{T}|=t,\mathcal{T}\ni c\right)\mathbf{P}^{(b)},\\ & \hspace{1.2cm}\forall \ell_b\in \left[\frac{t!}{(t-b)(t-b+1)}\right], n\in [N].
\end{align*}
In Round $b$, the server places the following content in cache $c$,
\begin{align*}
Z_c^b =& \Big\{Q_{n,c}^{\ell_b,1},Q_{n,c}^{\ell_b,2},\dots,Q_{n,c}^{\ell_b,\binom{C-1}{t-1}-\sum_{i=1}^{b}\binom{r-1}{t-i}\binom{C-r}{i-1}}:\\&\hspace{2.4cm}\ell_b\in \left[\frac{t!}{(t-b)(t-b+1)}\right], n\in [N]\Big\}.
\end{align*}
In Round $b$, $\frac{t!}{(t-b)(t-b+1)}\left(\binom{C-1}{t-1}-\sum_{i=1}^{b}\binom{r-1}{t-i}\binom{C-r}{i-1}\right)$ doubly-encoded mini-subfiles of each file are kept in each cache. So, the overall content stored in cache $c$ is
\begin{equation*}
Z_c = \bigcup\limits_{\alpha=0}^{t-1} Z_c^\alpha, \hspace{0.2cm}\forall c\in [C].
\end{equation*}
Each coded mini-subfile is $\frac{1}{t!\binom{C}{t}}$ of a file in size. Therefore, the normalized cache memory is
\begin{align}
\frac{M}{N} &= \frac{(t-1)!\binom{C-1}{t-1}}{t!\binom{C}{t}}+\notag\\&\hspace{1.5cm}\frac{\sum\limits_{b=1}^{t-1}\frac{t!}{(t-b)(t-b+1)}\left(\binom{C-1}{t-1}-\sum\limits_{i=1}^{b}\binom{r-1}{t-i}\binom{C-r}{i-1}\right)}{t!\binom{C}{t}}\notag\\
&=\frac{t}{C}-\frac{1}{\binom{C}{t}}\sum\limits_{i=1}^{t-1}\frac{t-i}{r}\binom{r}{t-i+1}\binom{C-r}{i-1}\label{mbyn2}.
\end{align}
The detailed calculation of $M/N$ is provided in Appendix \ref{AppendixY}.
\noindent \textit{b) Delivery phase:} The delivery phase does not depend on whether $t\geq r$ or $t<r$. Let $W_{d_\mathcal{U}}$ be the file demanded by user $\mathcal{U}$, where $\mathcal{U}\subseteq [C]$ and $|\mathcal{U}|=r$. During the delivery phase, the server makes a transmission corresponding to every $(t+r)$-sized subsets of $[C]$. The transmission corresponding to a set $\mathcal{S}\subseteq [C]$ such that $|\mathcal{S}|=t+r$ is
\begin{equation*}
\bigoplus_{\substack{\mathcal{U}\subseteq \mathcal{S}\\|\mathcal{U}|=r}} W_{d_\mathcal{U},\mathcal{S}\backslash \mathcal{U}}.
\end{equation*}
Note that, for a given $t\in [C]$, this delivery phase is the same as the delivery phase in the MKR scheme. There are $\binom{C}{t+r}$ subsets of $[C]$ of size $t+r$, and the server makes one transmission corresponding to each of them. Therefore, the number of coded subfiles transmitted in the delivery phase is $\binom{C}{t+r}$. Each subfile is $\frac{1}{\binom{C}{t}}$ of a file in size. Therefore, the rate of transmission is ${\binom{C}{t+r}}/{\binom{C}{t}}$.
Next, we show that by employing the above placement and delivery phases, all the users can decode their demanded files. If, from the coded subfiles placed in the caches, a user can decode all the subfiles that are accessible to the corresponding user in the MKR scheme (by the corresponding user, we mean the user accessing the same set of caches), then the decodability of the demanded files is guaranteed, because the delivery phase of our proposed scheme is the same as that of the MKR scheme. Thus, we show that user $\mathcal{U}$ can decode the subfiles $W_{n,\mathcal{T}}$ such that $\mathcal{U}\cap \mathcal{T}\neq \phi$ from the coded subfiles stored in the accessible caches. To prove that, we consider a user $\mathcal{U}$, let cache $c$ be one of the $r$ caches the user accesses, and show that the user can decode the subfile $W_{n,\mathcal{T}}$ if $\mathcal{T}\ni c$.
\noindent \textit{c1) Decodability ($t\geq r$):} Firstly, note that from any $r!$ coded mini-subfiles out of the $r!t$ coded mini-subfiles $\{Y_{n,\mathcal{T}}^{1},Y_{n,\mathcal{T}}^{2},\dots,Y_{n,\mathcal{T}}^{r!t}\}$, one can decode the $r!$ mini-subfiles of the subfile $W_{n,\mathcal{T}}$; that is, the subfile $W_{n,\mathcal{T}}$ can be completely decoded. Now, consider user $\mathcal{U}$ who has access to cache $c$ for every $c\in \mathcal{U}$. For every $\mathcal{T}\ni c$, cache $c$ contains $(r-1)!$ coded mini-subfiles of the subfile $W_{n,\mathcal{T}}$ from Round 0 of the placement phase. Therefore user $\mathcal{U}$ gets $(r-1)!\ell$ distinct coded mini-subfiles of the subfile $W_{n,\mathcal{T}}$ for all $\mathcal{T}$ such that $|\mathcal{T}\cap \mathcal{U}|=\ell$, where $\ell \in [r]$. Thus the user gets $(r-1)!r=r!$ coded mini-subfiles of the subfile $W_{n,\mathcal{T}}$ for all $\mathcal{T}$ such that $|\mathcal{T}\cap \mathcal{U}|=r$. That means the user can decode the subfile $W_{n,\mathcal{T}}$ if $|\mathcal{T}\cap \mathcal{U}|=r$ from the contents placed in Round 0 alone. In other words, a user can decode $\binom{C-r}{t-r}$ subfiles of every file using the contents placed in Round 0. If the user decodes $W_{n,\mathcal{T}}$, then it knows all the coded mini-subfiles $\{Y_{n,\mathcal{T}}^{1},Y_{n,\mathcal{T}}^{2},\dots,Y_{n,\mathcal{T}}^{r!t}\}$ (they can be regenerated by re-encoding with $\mathbf{G}$). Now, consider the contents placed in Round 1 of the placement phase. In Round 1, the server placed $\binom{C-1}{t-1}-\binom{C-r}{t-r}$ coded combinations of the coded mini-subfiles $\{Y_{n,\mathcal{T}}^{q_\mathcal{T}^1+\ell_1}: \mathcal{T}\subseteq[C],|\mathcal{T}|=t,\mathcal{T}\ni c\}$. From the previous decoding (from the contents stored in Round 0), the user knows some $\binom{C-r}{t-r}$ coded mini-subfiles in $\{Y_{n,\mathcal{T}}^{q_\mathcal{T}^1+\ell_1}: \mathcal{T}\subseteq[C],|\mathcal{T}|=t,\mathcal{T}\ni c\}$. Using those coded mini-subfiles and the doubly-encoded mini-subfiles $\{Q_{n,c}^{\ell_1,1},Q_{n,c}^{\ell_1,2},\dots,Q_{n,c}^{\ell_1,\binom{C-1}{t-1}-\binom{C-r}{t-r}}\}$, the user can find all the coded mini-subfiles $\{Y_{n,\mathcal{T}}^{q_\mathcal{T}^1+\ell_1}: \mathcal{T}\subseteq[C],|\mathcal{T}|=t,\mathcal{T}\ni c\}$. That is, corresponding to every $\ell_1\in [(r-2)!]$, user $\mathcal{U}$ can find a coded mini-subfile of $W_{n,\mathcal{T}}$ if $1\leq |\mathcal{U}\cap \mathcal{T}|\leq r-1$. For $\mathcal{T}$ such that $|\mathcal{U}\cap \mathcal{T}|= r-1$, the user can get $(r-2)!$ distinct coded mini-subfiles of $W_{n,\mathcal{T}}$ from each of the $r-1$ caches indexed with the elements of $\mathcal{U}\cap \mathcal{T}$. Also, the user already has $(r-1)!(r-1)$ coded mini-subfiles of $W_{n,\mathcal{T}}$ that were decoded using the contents stored in Round 0 of the placement phase. That is, the user has $(r-1)!(r-1)+(r-2)!(r-1)=r!$ coded mini-subfiles of the subfile $W_{n,\mathcal{T}}$. Thus the user can decode the subfile $W_{n,\mathcal{T}}$ if $|\mathcal{U}\cap \mathcal{T}|= r-1$ using the contents stored in Round 0 and Round 1 of the placement phase.
\noindent Let us assume that user $\mathcal{U}$ has decoded $W_{n,\mathcal{T}}$ for all $\mathcal{T}$ such that $|\mathcal{U}\cap\mathcal{T}|\geq r-\beta+1$ for some $\beta \in [r-1]$, from the contents stored in Round 0, Round 1, $\dots$, Round $\beta-1$ of the placement phase. Also, assume that the user has decoded $\frac{r!}{r-\beta+1}\gamma$ coded mini-subfiles of the subfile $W_{n,\mathcal{T}}$ for all $\mathcal{T}$ such that $|\mathcal{U}\cap\mathcal{T}|=\gamma$ for every $\gamma \in [r-\beta]$. Then, we prove that the user can decode the subfile $W_{n,\mathcal{T}}$ for all $\mathcal{T}$ such that $|\mathcal{U}\cap\mathcal{T}|\geq r-\beta$. Further, we show that the user can decode $\frac{r!}{r-\beta}\gamma$ coded mini-subfiles of the subfile $W_{n,\mathcal{T}}$ for all $\mathcal{T}$ such that $|\mathcal{U}\cap\mathcal{T}|=\gamma$ for every $\gamma \in [r-\beta-1]$. Note that we have already shown that this is, in fact, true for $\beta=1$. Now, let us consider the contents stored in Round $\beta$ of the placement phase. In Round $\beta$, the server placed $\binom{C-1}{t-1}-\sum_{i=1}^{\beta}\binom{r-1}{r-i}\binom{C-r}{t-r+i-1}$ coded combinations of the coded mini-subfiles $\{Y_{n,\mathcal{T}}^{q_\mathcal{T}^\beta+\ell_\beta}: \mathcal{T}\subseteq[C],|\mathcal{T}|=t,\mathcal{T}\ni c\}$, where $q_{\mathcal{T}}^\beta = \frac{r!}{r-\beta}t+\left(\phi_c^t(\mathcal{T})-1\right)\frac{r!}{(r-\beta)(r-\beta+1)}$ and $\ell_\beta\in \left[\frac{r!}{(r-\beta)(r-\beta+1)}\right]$. We assumed that the user has already decoded the subfile $W_{n,\mathcal{T}}$ for all $\mathcal{T}$ such that $|\mathcal{U}\cap\mathcal{T}|\geq r-\beta+1$. For $\mathcal{U}\subseteq [C]$ with $|\mathcal{U}|=r\leq t$ and $\mathcal{U}\ni c$, the number of sets $\mathcal{T}\subseteq [C]$ such that $|\mathcal{T}|=t$, $\mathcal{T}\ni c$, and $|\mathcal{U}\cap \mathcal{T}|=r-i+1$, $i\in [r-1]$, is $\binom{r-1}{r-i}\binom{C-r}{t-r+i-1}$. Thus the user knows some $\sum_{i=1}^{\beta}\binom{r-1}{r-i}\binom{C-r}{t-r+i-1}$ coded mini-subfiles in $\{Y_{n,\mathcal{T}}^{q_\mathcal{T}^\beta+\ell_\beta}: \mathcal{T}\subseteq[C],|\mathcal{T}|=t,\mathcal{T}\ni c\}$. Using those coded mini-subfiles and the doubly-encoded mini-subfiles $\{Q_{n,c}^{\ell_\beta,1},Q_{n,c}^{\ell_\beta,2},\dots,Q_{n,c}^{\ell_\beta,\binom{C-1}{t-1}-\sum_{i=1}^{\beta}\binom{r-1}{r-i}\binom{C-r}{t-r+i-1}}\}$, the user can find all the coded mini-subfiles $\{Y_{n,\mathcal{T}}^{q_\mathcal{T}^\beta+\ell_\beta}: \mathcal{T}\subseteq[C],|\mathcal{T}|=t,\mathcal{T}\ni c\}$. That is, corresponding to every $\ell_\beta\in \left[\frac{r!}{(r-\beta)(r-\beta+1)}\right]$, user $\mathcal{U}$ can find a coded mini-subfile of $W_{n,\mathcal{T}}$ if $1\leq |\mathcal{U}\cap \mathcal{T}|\leq r-\beta$. According to our assumption, user $\mathcal{U}$ has decoded $\frac{r!}{r-\beta+1}(r-\beta)$ coded mini-subfiles of the subfile $W_{n,\mathcal{T}}$ for all $\mathcal{T}$ such that $|\mathcal{U}\cap\mathcal{T}|=r-\beta$ (by substituting $\gamma=r-\beta$). In addition, the user decodes $\frac{r!}{(r-\beta)(r-\beta+1)}$ coded mini-subfiles of $W_{n,\mathcal{T}}$ from each of the $r-\beta$ accessible caches indexed by the elements of $\mathcal{U}\cap\mathcal{T}$. Therefore, the user has $\frac{r!}{r-\beta+1}(r-\beta)+\frac{r!}{(r-\beta)(r-\beta+1)}(r-\beta)=r!$ coded mini-subfiles of $W_{n,\mathcal{T}}$. Thus, user $\mathcal{U}$ can decode the subfile $W_{n,\mathcal{T}}$ for all $|\mathcal{U}\cap\mathcal{T}|= r-\beta$.
Further, the user has decoded $\frac{r!}{r-\beta+1}\gamma+\frac{r!}{(r-\beta)(r-\beta+1)}\gamma=\frac{r!}{r-\beta}\gamma$ coded mini-subfiles of the subfile $W_{n,\mathcal{T}}$ for all $\mathcal{T}$ such that $|\mathcal{U}\cap\mathcal{T}|=\gamma$ for every $\gamma \in [r-\beta-1]$. By induction on $\beta$, we see that the user can decode the subfile $W_{n,\mathcal{T}}$ whenever $|\mathcal{U}\cap\mathcal{T}|\geq 1$. In other words, user $\mathcal{U}$ can decode every subfile indexed with a set $\mathcal{T}$ such that $\mathcal{U}\cap\mathcal{T}\neq \phi$. That is, from the coded subfiles placed in the caches, a user can decode all the subfiles that are accessible to the corresponding user in the MKR scheme. Therefore, the decodability of the demanded files is guaranteed if $t\geq r$.
\noindent \textit{c2) Decodability ($t< r$):} Each subfile was divided into $t!$ mini-subfiles during the cache placement. Therefore, from any $t!$ coded mini-subfiles out of the $t!t$ coded mini-subfiles $\{Y_{n,\mathcal{T}}^{1},Y_{n,\mathcal{T}}^{2},\dots,Y_{n,\mathcal{T}}^{t!t}\}$, one can decode the $t!$ mini-subfiles of the subfile $W_{n,\mathcal{T}}$, and thus the subfile itself. Consider user $\mathcal{U}$, where $\mathcal{U}\subseteq [C]$ and $|\mathcal{U}|=r$, and let cache $c$ be one of the $r$ caches accessed by user $\mathcal{U}$; note that $c\in \mathcal{U}$. Cache $c$ contains $(t-1)!$ coded mini-subfiles of the subfile $W_{n,\mathcal{T}}$ for all $\mathcal{T}\ni c$ and $n\in [N]$, from Round 0 of the placement phase. Therefore user $\mathcal{U}$ gets $(t-1)!\ell$ distinct coded mini-subfiles of the subfile $W_{n,\mathcal{T}}$ for all $\mathcal{T}$ such that $|\mathcal{T}\cap \mathcal{U}|=\ell$, where $\ell \in [t]$. Thus the user gets $(t-1)!t=t!$ coded mini-subfiles of the subfile $W_{n,\mathcal{T}}$ for all $\mathcal{T}$ such that $|\mathcal{T}\cap \mathcal{U}|=t$. That means the user can decode the subfile $W_{n,\mathcal{T}}$ if $|\mathcal{T}\cap \mathcal{U}|=t$ from the contents placed in Round 0. Thus the user can decode $\binom{r}{t}$ subfiles of every file solely using the contents placed in Round 0; out of those subfiles, $\binom{r-1}{t-1}$ are indexed with a set $\mathcal{T}\ni c$. Now, consider the contents placed in cache $c$ in Round 1 of the placement phase. In Round 1, the server placed $\binom{C-1}{t-1}-\binom{r-1}{t-1}$ coded combinations of the coded mini-subfiles $\{Y_{n,\mathcal{T}}^{q_\mathcal{T}^1+\ell_1}: \mathcal{T}\subseteq[C],|\mathcal{T}|=t,\mathcal{T}\ni c\}$ for every $\ell_1\in [(t-2)!]$. We have already seen that user $\mathcal{U}$ knows some $\binom{r-1}{t-1}$ coded mini-subfiles in $\{Y_{n,\mathcal{T}}^{q_\mathcal{T}^1+\ell_1}: \mathcal{T}\subseteq[C],|\mathcal{T}|=t,\mathcal{T}\ni c\}$. Using those coded mini-subfiles and the doubly-encoded mini-subfiles $\{Q_{n,c}^{\ell_1,1},Q_{n,c}^{\ell_1,2},\dots,Q_{n,c}^{\ell_1,\binom{C-1}{t-1}-\binom{r-1}{t-1}}\}$, the user can find all the coded mini-subfiles $\{Y_{n,\mathcal{T}}^{q_\mathcal{T}^1+\ell_1}: \mathcal{T}\subseteq[C],|\mathcal{T}|=t,\mathcal{T}\ni c\}$. That is, corresponding to every $\ell_1\in [(t-2)!]$, user $\mathcal{U}$ can find a coded mini-subfile of $W_{n,\mathcal{T}}$ if $|\mathcal{U}\cap \mathcal{T}|\geq 1$. For $\mathcal{T}$ such that $|\mathcal{U}\cap \mathcal{T}|= t-1$, the user can get $(t-2)!$ distinct coded mini-subfiles of $W_{n,\mathcal{T}}$ from each of the $t-1$ caches indexed with the elements of $\mathcal{U}\cap \mathcal{T}$. Also, the user already has $(t-1)!(t-1)$ coded mini-subfiles of $W_{n,\mathcal{T}}$ that were decoded using the contents stored in Round 0 of the placement phase. That is, the user has $(t-1)!(t-1)+(t-2)!(t-1)=t!$ coded mini-subfiles of the subfile $W_{n,\mathcal{T}}$. Thus the user can decode the subfile $W_{n,\mathcal{T}}$ if $|\mathcal{U}\cap \mathcal{T}|= t-1$ using the contents stored in Round 0 and Round 1 of the placement phase.
\noindent As in the previous case ($t\geq r$), we prove the decodability of the subfiles by mathematical induction. Let us assume that user $\mathcal{U}$ has decoded $W_{n,\mathcal{T}}$ for all $\mathcal{T}$ such that $|\mathcal{U}\cap\mathcal{T}|\geq t-\beta+1$ for some $\beta \in [t-1]$, from the contents stored in Round 0, Round 1, $\dots$, Round $\beta-1$ of the placement phase. Also, assume that the user has decoded $\frac{t!}{t-\beta+1}\gamma$ coded mini-subfiles of the subfile $W_{n,\mathcal{T}}$ for all $\mathcal{T}$ such that $|\mathcal{U}\cap\mathcal{T}|=\gamma$ for every $\gamma \in [t-\beta]$. Then, we prove that the user can decode the subfile $W_{n,\mathcal{T}}$ for all $\mathcal{T}$ such that $|\mathcal{U}\cap\mathcal{T}|\geq t-\beta$. Further, we show that the user can decode $\frac{t!}{t-\beta}\gamma$ coded mini-subfiles of the subfile $W_{n,\mathcal{T}}$ for all $\mathcal{T}$ such that $|\mathcal{U}\cap\mathcal{T}|=\gamma$ for every $\gamma \in [t-\beta-1]$. Note that we have already shown that this is, in fact, true for $\beta=1$. Now, let us consider the contents stored in Round $\beta$ of the placement phase. In Round $\beta$, the server placed $\binom{C-1}{t-1}-\sum_{i=1}^{\beta}\binom{r-1}{t-i}\binom{C-r}{i-1}$ coded combinations of the coded mini-subfiles $\{Y_{n,\mathcal{T}}^{q_\mathcal{T}^\beta+\ell_\beta}: \mathcal{T}\subseteq[C],|\mathcal{T}|=t,\mathcal{T}\ni c\}$, where $q_{\mathcal{T}}^\beta = \frac{t!}{t-\beta}t+\left(\phi_c^t(\mathcal{T})-1\right)\frac{t!}{(t-\beta)(t-\beta+1)}$ and $\ell_\beta\in \left[\frac{t!}{(t-\beta)(t-\beta+1)}\right]$. We assumed that the user has already decoded the subfile $W_{n,\mathcal{T}}$ for all $\mathcal{T}$ such that $|\mathcal{U}\cap\mathcal{T}|\geq t-\beta+1$. For $\mathcal{U}\subseteq [C]$ with $|\mathcal{U}|=r>t$ and $\mathcal{U}\ni c$, the number of sets $\mathcal{T}\subseteq [C]$ such that $|\mathcal{T}|=t$, $\mathcal{T}\ni c$, and $|\mathcal{U}\cap \mathcal{T}|=t-i+1$, $i\in [t-1]$, is $\binom{r-1}{t-i}\binom{C-r}{i-1}$. Thus the user knows some $\sum_{i=1}^{\beta}\binom{r-1}{t-i}\binom{C-r}{i-1}$ coded mini-subfiles in $\{Y_{n,\mathcal{T}}^{q_\mathcal{T}^\beta+\ell_\beta}: \mathcal{T}\subseteq[C],|\mathcal{T}|=t,\mathcal{T}\ni c\}$. Using those coded mini-subfiles and the doubly-encoded mini-subfiles $\{Q_{n,c}^{\ell_\beta,1},Q_{n,c}^{\ell_\beta,2},\dots,Q_{n,c}^{\ell_\beta,\binom{C-1}{t-1}-\sum_{i=1}^{\beta}\binom{r-1}{t-i}\binom{C-r}{i-1}}\}$, the user can find all the coded mini-subfiles $\{Y_{n,\mathcal{T}}^{q_\mathcal{T}^\beta+\ell_\beta}: \mathcal{T}\subseteq[C],|\mathcal{T}|=t,\mathcal{T}\ni c\}$. That is, corresponding to every $\ell_\beta\in \left[\frac{t!}{(t-\beta)(t-\beta+1)}\right]$, user $\mathcal{U}$ can find a coded mini-subfile of $W_{n,\mathcal{T}}$ if $1\leq |\mathcal{U}\cap \mathcal{T}|\leq t-\beta$. According to our assumption, user $\mathcal{U}$ has decoded $\frac{t!}{t-\beta+1}(t-\beta)$ coded mini-subfiles of the subfile $W_{n,\mathcal{T}}$ for all $\mathcal{T}$ such that $|\mathcal{U}\cap\mathcal{T}|=t-\beta$ (by substituting $\gamma=t-\beta$). In addition, the user decodes $\frac{t!}{(t-\beta)(t-\beta+1)}$ coded mini-subfiles of $W_{n,\mathcal{T}}$ from each of the $t-\beta$ accessible caches indexed by the elements of $\mathcal{U}\cap\mathcal{T}$. Therefore, the user has $\frac{t!}{t-\beta+1}(t-\beta)+\frac{t!}{(t-\beta)(t-\beta+1)}(t-\beta)=t!$ coded mini-subfiles of $W_{n,\mathcal{T}}$. Thus, user $\mathcal{U}$ can decode the subfile $W_{n,\mathcal{T}}$ for all $|\mathcal{U}\cap\mathcal{T}|= t-\beta$.
Further, the user has decoded $\frac{t!}{t-\beta+1}\gamma+\frac{t!}{(t-\beta)(t-\beta+1)}\gamma=\frac{t!}{t-\beta}\gamma$ coded mini-subfiles of the subfile $W_{n,\mathcal{T}}$ for all $\mathcal{T}$ such that $|\mathcal{U}\cap\mathcal{T}|=\gamma$ for every $\gamma \in [t-\beta-1]$. By induction on $\beta$, we see that the user can decode the subfile $W_{n,\mathcal{T}}$ whenever $|\mathcal{U}\cap\mathcal{T}|\geq 1$. In other words, user $\mathcal{U}$ can decode every subfile indexed with a set $\mathcal{T}$ such that $\mathcal{U}\cap\mathcal{T}\neq \phi$. That is, from the coded subfiles placed in the caches, a user can decode all the subfiles that are accessible to the corresponding user in the MKR scheme. Therefore, the decodability of the demanded files is guaranteed if $t<r$ as well.
Finally, we consider the case $t=C-r+1$. The parameter $t=C-r+1$ corresponds to $M=N/r$ (shown in Appendix \ref{AppendixA}). In that case, it is possible to achieve the rate $R(N/r)=0$ by properly employing coding in the placement phase. That is, from the accessible cache contents alone, all the users can decode the entire library of files. The placement is as follows.\\
The server divides each file into $r$ non-overlapping subfiles of equal size. i.e., $W_n = \{W_{n,1},W_{n,2},\dots,W_{n,r}\}$ for all $n\in [N]$. Let $\mathbf{G}_{r\times C}$ be a generator matrix of a $[C,r]$ MDS code. For every $n\in [N]$, the server does the following encoding procedure:
\begin{equation*}
(\tilde{W}_{n,1},\tilde{W}_{n,2},\dots,\tilde{W}_{n,C}) = (W_{n,1},W_{n,2},\dots,W_{n,r})\mathbf{G}.
\end{equation*}
Then the server fills the caches as
\begin{equation*}
Z_c = \{\tilde{W}_{n,c}, \forall n\in [N]\}.
\end{equation*}
Thus user $\mathcal{U}$ gets access to $Z_c$ for every $c\in \mathcal{U}$. That means the user has $r$ distinct coded subfiles $\tilde{W}_{n,c}$, $c\in \mathcal{U}$, for all $n\in [N]$. From those coded subfiles, the user can decode all the files (from any $r$ coded subfiles, the $r$ subfiles of a file can be decoded). This completes the proof of Theorem \ref{thm:scheme1}.\hfill$\blacksquare$
\subsection{Proof of Theorem \ref{thm:scheme2}}
\label{proof:thm2}
In this section, we present a coding scheme, which we refer to as \textit{Scheme 2}. First, we show the achievability of the rate $R(M) = \binom{C-1}{r}$ at $M = (N-\binom{C-1}{r})/C$ by presenting the placement and delivery phases.
\noindent \textit{a) Placement phase:} The server divides each file into $C$ non-overlapping subfiles of equal size. Thus, we have $W_n = \{W_{n,1},W_{n,2},\dots,W_{n,C}\}$ for all $n\in [N]$. Let $\mathbf{G}$ be a systematic generator matrix of a $[2N-\binom{C-1}{r},N]$ MDS code. Thus we can write $\mathbf{G}=[\mathbf{I}_N|\mathbf{P}_{N\times N-\binom{C-1}{r}}]$, where $\mathbf{I}_N$ is the identity matrix of size $N$. Then the server encodes the subfiles $(W_{1,i},W_{2,i},\dots,W_{N,i})$ using $\mathbf{P}_{N\times N-\binom{C-1}{r}}$, which results in $N-\binom{C-1}{r}$ coded subfiles. That is
\begin{align*}
(Q_{1,i},Q_{2,i},&\dots,Q_{N-\binom{C-1}{r},i}) =\\& (W_{1,i},W_{2,i},\dots,W_{N,i})\mathbf{P}_{N\times N-\binom{C-1}{r}}, \hspace{0.2cm} \forall i\in [C].
\end{align*}
Then the server fills the caches as follows:
\begin{equation*}
Z_i = \{Q_{1,i},Q_{2,i},\dots,Q_{N-\binom{C-1}{r},i}\}, \hspace{.2cm} \forall i\in [C].
\end{equation*}
The $i^{\text{th}}$ cache contains $N-\binom{C-1}{r}$ linearly independent coded combinations of the subfiles $(W_{1,i},W_{2,i},\dots,W_{N,i})$. The size of a coded subfile is the same as the size of a subfile, namely $1/C$ of a file. Thus, the total size of a cache is $M = (N-\binom{C-1}{r})/C$ units.
\noindent \textit{b) Delivery phase:} Let $W_{d_\mathcal{U}}$ be the file demanded by user $\mathcal{U}$ in the delivery phase. Let $n(\mathbf{d})$ denote the number of distinct files demanded by the users, i.e., $n(\mathbf{d}) = |\{W_{d_\mathcal{U}}:\mathcal{U}\subseteq [C],|\mathcal{U}|=r\}|$. If $n(\mathbf{d})\leq \binom{C-1}{r}$, then the server simply broadcasts those $n(\mathbf{d})$ files. Now, consider the case where $n(\mathbf{d})>\binom{C-1}{r}$. Let $n_i(\mathbf{d})$ denote the number of distinct files demanded by the users who do not access the $i^{\text{th}}$ cache, i.e.,
$n_i(\mathbf{d}) = |\{W_{d_\mathcal{U}}:\mathcal{U}\subseteq [C]\backslash\{i\},|\mathcal{U}|=r\}|$. Note that, $n_i(\mathbf{d})\leq \binom{C-1}{r}$ for all $i\in [C]$. If $n_i(\mathbf{d})=\binom{C-1}{r}$, then the server transmits $W_{d_\mathcal{U},i}$ for all $\mathcal{U}\subseteq [C]\backslash\{i\},$ such that $|\mathcal{U}|=r$. That is, the server transmits the $i^{\text{th}}$ subfile of the files demanded by the users who are not accessing cache $i$. If $n_i(\mathbf{d})<\binom{C-1}{r}$, then the server transmits the $i^{\text{th}}$ subfile of those distinct $n_i(\mathbf{d})$ files (files demanded by the users who do not access cache $i$) and the $i^{\text{th}}$ subfile of some other $\binom{C-1}{r}-n_i(\mathbf{d})$ files. The same procedure is done for all $i\in[C]$.
Corresponding to every $i\in [C]$, the server transmits $\binom{C-1}{r}$ subfiles, each of size $1/C$ of a file. Thus the rate achieved is $\binom{C-1}{r}$. Now, the claim is that all the users can decode their demanded files completely. When $n(\mathbf{d})\leq \binom{C-1}{r}$, the decodability is trivial, as the demanded files are sent as such by the server. Let us consider the case where $n(\mathbf{d})> \binom{C-1}{r}$. Consider user $\mathcal{U}$ who accesses cache $i$ for all $i\in \mathcal{U}$. The user directly receives $W_{d_\mathcal{U},j}$ for all $j\in [C]\backslash \mathcal{U}$. Further, user $\mathcal{U}$ receives the $i^{\text{th}}$ subfile of some $\binom{C-1}{r}$ files, for every $i\in \mathcal{U}$. Also, for all $i\in \mathcal{U}$, the user has access to $N-\binom{C-1}{r}$ coded subfiles of the subfiles $\{W_{1,i},W_{2,i},\dots,W_{N,i}\}$ from cache $i$. Thus the user can decode $W_{n,i}$ for all $n\in [N]$ and $i\in \mathcal{U}$, since any $N$ columns of $\mathbf{G}$ are linearly independent. Therefore, user $\mathcal{U}$ gets $W_{d_\mathcal{U},k}$ for every $k\in [C]$. Hence, all the users can decode their demanded files.
Further, if $M=0$, the rate $R=N$ is trivially achievable by simply broadcasting all the files. Thus, by memory sharing, the rate $R(M) = N-CM$ is achievable for $0\leq M\leq (N-\binom{C-1}{r})/C$. This completes the proof of Theorem \ref{thm:scheme2}. \hfill$\blacksquare$
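The memory-sharing step admits a one-line check: the line joining the two corner points $(0,N)$ and $\big((N-\binom{C-1}{r})/C,\binom{C-1}{r}\big)$ has slope $-C$, so every point on it satisfies $R=N-CM$. A small sketch (using the $(5,3,M,10)$ parameters of Fig. \ref{plot3}, our choice) illustrates this.
\begin{verbatim}
# Sketch: memory sharing between M = 0 (broadcast everything) and the
# corner point of Scheme 2, for the assumed (5, 3, M, 10) network.
from math import comb

C, r, N = 5, 3, 10
M1, R1 = 0, N                                      # broadcast all files
M2, R2 = (N - comb(C - 1, r)) / C, comb(C - 1, r)  # Scheme 2 corner

def rate(M):
    a = (M - M1) / (M2 - M1)      # fraction of each file placed coded
    return (1 - a) * R1 + a * R2  # time/file sharing of the two schemes

print(rate(0.5), N - C * 0.5)     # both print 7.5: R(M) = N - C M
\end{verbatim}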
\subsection{Proof of Theorem \ref{lowerbound}}
\label{proof:thm3}
The server has $N$ files $W_{[1:N]}$. Assume that the users are arranged in the lexicographic order of the subsets (of $[C]$, of size $r$) by which they are indexed. That is, $\{1,2,\dots,r\}$ comes first in the order, $\{1,2,\dots,r-1,r+1\}$ comes second, and so on. Consider the set of the first $s$ caches, where $s\in [r:C]$. The number of users who access caches only from that set is $\binom{s}{r}$. Let us focus on those $\binom{s}{r}$ users. Consider a demand vector $\mathbf{d}_1$ in which those $\binom{s}{r}$ users request the files $W_1,W_2,\dots,W_{\binom{s}{r}}$. That is, the first user (the user that comes first in the lexicographic ordering among those $\binom{s}{r}$ users) demands $W_1$, the second user demands $W_2$, and so on. The remaining users request arbitrary files from $W_{[1:N]}$, which we are not interested in. Corresponding to $\mathbf{d}_1$, the server makes a transmission $X_1$. From the contents in the first $s$ caches, $Z_{[1:s]}$, and the transmission $X_1$, the considered $\binom{s}{r}$ users can collectively decode the first $\binom{s}{r}$ files (one file at each user). Now, consider a demand vector $\mathbf{d}_2$ in which those $\binom{s}{r}$ users request the files $W_{\binom{s}{r}+1},W_{\binom{s}{r}+2},\dots,W_{2\binom{s}{r}}$, and the transmission $X_2$ corresponding to $\mathbf{d}_2$. From the cache contents $Z_{[1:s]}$ and the transmission $X_2$, those users can collectively decode the files $W_{[\binom{s}{r}+1:2\binom{s}{r}]}$. If we consider $\delta = \Big\lceil\frac{N}{\binom{s}{r}}\Big\rceil$ such demand vectors and the corresponding transmissions, all the $N$ files can be decoded at those $\binom{s}{r}$ users' end. Thus, we have
\begingroup
\allowdisplaybreaks
\begin{subequations}
\begin{align}
N& = H(W_{[1:N]})\leq H(Z_{[1:s]},X_{[1:\delta]})\\&= H(Z_{[1:s]})+H(X_{[1:\delta]}|Z_{[1:s]})\label{lb1}.
\end{align}
\end{subequations}
Consider an integer $\ell \in [1:\delta]$. We can expand \eqref{lb1} as,
\begin{subequations}
\begin{align}
N&\leq sM+H(X_{[1:\ell]}|Z_{[1:s]})+H(X_{[\ell+1:\delta]}|Z_{[1:s]},X_{[1:\ell]})\\
&\leq sM+H(X_{[1:\ell]})+H(X_{[\ell+1:\delta]}|Z_{[1:s]},X_{[1:\ell]},W_{[1:\tilde{N}]})\label{Wls}
\end{align}
\end{subequations}
where $\tilde{N} = \min(N,\ell \binom{s}{r})$. Using the cache contents $Z_{[1:s]}$ and the transmissions $X_{[1:\ell]}$, the files $W_{[1:\tilde{N}]}$ can be decoded; hence \eqref{Wls} follows. Let us define $\omega_{s,\ell} \triangleq \min\left(C-s,\min\left\{i:\binom{s+i}{r}\geq \ceil{\frac{N}{\ell}}\right\}\right)$. We can bound the entropy of the $\ell$ transmissions by $\ell R^*(M)$, since each transmission has rate $R^*(M)$. Thus, we have
\begin{subequations}
\begin{align}
&N\leq sM+\ell R^*(M)+\notag\\&\qquad H(X_{[\ell+1:\delta]},Z_{[s+1:s+\omega_{s,\ell}]}|Z_{[1:s]},X_{[1:\ell]},W_{[1:\tilde{N}]})\label{lRstar}\\
&\leq sM+\ell R^*(M)+\underbrace{H(Z_{[s+1:s+\omega_{s,\ell}]}|Z_{[1:s]},X_{[1:\ell]},W_{[1:\tilde{N}]})}_{\triangleq \mu}\notag\\&\qquad+\underbrace{H(X_{[\ell+1:\delta]}|Z_{[1:s+\omega_{s,\ell}]},X_{[1:\ell]},W_{[1:\tilde{N}]})}_{\triangleq \psi} \label{mupsi}.
\end{align}
\end{subequations}
\endgroup
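To make the quantity $\omega_{s,\ell}$ concrete, consider (as an illustration) $C=6$, $r=2$, $s=2$, $N=10$ and $\ell=1$. Then $\lceil N/\ell\rceil=10$, and the smallest $i$ with $\binom{2+i}{2}\geq 10$ is $i=3$ (since $\binom{4}{2}=6<10=\binom{5}{2}$), so $\omega_{2,1}=\min(6-2,3)=3$. Intuitively, $\omega_{s,\ell}$ is the number of additional caches that must be adjoined to the first $s$ caches before $\ell$ transmissions can serve enough users to exhaust all $N$ files, capped at the $C-s$ caches available.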
Now, we find an upper bound on $\mu$ as follows:
\begin{subequations}
\begin{align}
\mu &= H(Z_{[s+1:s+\omega_{s,\ell}]}|Z_{[1:s]},X_{[1:\ell]},W_{[1:\tilde{N}]})\\
&\leq H(Z_{[s+1:s+\omega_{s,\ell}]}|Z_{[1:s]},W_{[1:\tilde{N}]})\\
&= H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\tilde{N}]})-H(Z_{[1:s]}|W_{[1:\tilde{N}]}).\label{mueqn}
\end{align}
\end{subequations}
By considering any $s$ caches from $[1:s+\omega_{s,\ell}]$, we can write an inequality similar to the one in \eqref{mueqn}. That is,
\begin{equation}
\mu\leq H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\tilde{N}]})-H(Z_{\mathcal{A}}|W_{[1:\tilde{N}]})\label{toavg}
\end{equation}
where $\mathcal{A}\subseteq [s+\omega_{s,\ell}]$, $|\mathcal{A}|=s$, and $Z_{\mathcal{A}}$ denotes the contents stored in the caches indexed by the elements of $\mathcal{A}$. Thus, there are $\binom{s+\omega_{s,\ell}}{s}$ such inequalities. Averaging over all of them, we get
\begin{align}
\mu\leq H(Z_{[1:s+\omega_{s,\ell}]}&|W_{[1:\tilde{N}]})-\notag\\&\frac{1}{\binom{s+\omega_{s,\ell}}{s}}\sum_{\substack{\mathcal{A}\subseteq [s+\omega_{s,\ell}] \\ |\mathcal{A}|=s}} H(Z_{\mathcal{A}}|W_{[1:\tilde{N}]})\label{mubound}.
\end{align}
By applying Lemma \ref{Entropy} to the random variables $Z_{[1:s+\omega_{s,\ell}]}$, we get
\begin{align*}
\frac{1}{s+\omega_{s,\ell}} H(Z_{[1:s+\omega_{s,\ell}]}&|W_{[1:\tilde{N}]})\leq\\& \frac{1}{\binom{s+\omega_{s,\ell}}{s}}\sum_{\substack{\mathcal{A}\subseteq [s+\omega_{s,\ell}] \\ |\mathcal{A}|=s}} \frac{H(Z_{\mathcal{A}}|W_{[1:\tilde{N}]})}{s}.
\end{align*}
Upon rearranging, we have
\begin{align}
\frac{1}{\binom{s+\omega_{s,\ell}}{s}}\sum_{\substack{\mathcal{A}\subseteq [s+\omega_{s,\ell}] \\ |\mathcal{A}|=s}} H(&Z_{\mathcal{A}}|W_{[1:\tilde{N}]})\geq\notag\\& \frac{s}{s+\omega_{s,\ell}} H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\tilde{N}]})\label{mueqn2}.
\end{align}
Substituting \eqref{mueqn2} in \eqref{mubound}, we get
\begingroup
\allowdisplaybreaks
\begin{subequations}
\begin{align*}
\mu &\leq H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\tilde{N}]})-\frac{s}{s+\omega_{s,\ell}} H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\tilde{N}]})\\
&= \frac{\omega_{s,\ell}}{s+\omega_{s,\ell}}H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\tilde{N}]}).
\end{align*}
\end{subequations}
\endgroup
Now, consider two cases: a) if $\tilde{N} = \min(N,\ell \binom{s}{r})=N$, then
\begin{align}
H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\tilde{N}]})=H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:N]})=0\label{mueqn3}
\end{align}
and b) if $\tilde{N} = \min(N,\ell \binom{s}{r})=\ell \binom{s}{r}$, then
\begingroup
\allowdisplaybreaks
\begin{subequations}
\begin{align}
H(Z_{[1:s+\omega_{s,\ell}]}&|W_{[1:\tilde{N}]})=H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\ell\binom{s}{r}]})\\&\leq H(Z_{[1:s+\omega_{s,\ell}]},W_{[\ell\binom{s}{r}+1:N]}|W_{[1:\ell\binom{s}{r}]})\\
&= H(W_{[\ell\binom{s}{r}+1:N]}|W_{[1:\ell\binom{s}{r}]})+\notag\\&\qquad H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:N]})\\
&\leq H(W_{[\ell\binom{s}{r}+1:N]})\leq N-\ell\binom{s}{r}\label{mueqn4}
\end{align}
\end{subequations}
\endgroup
where \eqref{mueqn3} and \eqref{mueqn4} follow from the fact that the cache contents are functions of the files. Therefore, we have
\begin{align}
H(Z_{[1:s+\omega_{s,\ell}]}|W_{[1:\tilde{N}]})\leq \left(N-\ell \binom{s}{r}\right)^+.
\end{align}
Therefore, we have the following upper bound on $\mu$:
\begin{align}
\mu \leq \frac{\omega_{s,\ell}}{s+\omega_{s,\ell}}\left(N-\ell \binom{s}{r}\right)^+.\label{mu}
\end{align}
Now, we find an upper bound on $\psi$, where $\psi = H(X_{[\ell+1:\delta]}|Z_{[1:s+\omega_{s,\ell}]},X_{[1:\ell]},W_{[1:\tilde{N}]})$. \\
We consider two cases: a) if $N\leq \ell\binom{s+\omega_{s,\ell}}{r}$, then it is possible to decode all the files $W_{[1:N]}$ from $Z_{[1:s+\omega_{s,\ell}]}$ and $X_{[1:\ell]}$ by appropriately choosing the demand vectors $\mathbf{d}_1,\mathbf{d}_2,\dots,\mathbf{d}_\delta$. In that case there is no uncertainty in $X_{[\ell+1:\delta]}$; that is, when $N\leq \ell \binom{s+\omega_{s,\ell}}{r}$, we have $\psi = 0$.\\
b) The second case, $N> \ell\binom{s+\omega_{s,\ell}}{r}$, implies that $\omega_{s,\ell}=C-s$. Note that $\omega_{s,\ell}$ is defined such that, using the first $s+\omega_{s,\ell}$ caches and $\ell$ transmissions, it is possible to decode the remaining $N-\ell \binom{s}{r}$ files by appropriately choosing the demands of the remaining $\binom{s+\omega_{s,\ell}}{r}-\binom{s}{r}$ users, provided $N\leq \ell\binom{C}{r}$. That is, $N> \ell\binom{s+\omega_{s,\ell}}{r}$ implies $N> \ell\binom{C}{r}$. Then, we have
\begingroup
\allowdisplaybreaks
\begin{subequations}
\begin{align}
\psi &= H(X_{[\ell+1:\delta]}|Z_{[1:C]},X_{[1:\ell]},W_{[1:\tilde{N}]})\\
&= H(X_{[\ell+1:\delta]}|Z_{[1:C]},X_{[1:\ell]},W_{[1:\ell\binom{C}{r}]})\label{psieqn1}\\
&\leq H(X_{[\ell+1:\delta]},W_{[\ell\binom{C}{r}+1:N]}|Z_{[1:C]},X_{[1:\ell]},W_{[1:\ell\binom{C}{r}]})\\
&\leq H(W_{[\ell\binom{C}{r}+1:N]}|Z_{[1:C]},X_{[1:\ell]},W_{[1:\ell\binom{C}{r}]})\notag\\ &\qquad +H(X_{[\ell+1:\delta]}|Z_{[1:C]},X_{[1:\ell]},W_{[1:N]})\\
&= H(W_{[\ell\binom{C}{r}+1:N]}|Z_{[1:C]},X_{[1:\ell]},W_{[1:\ell\binom{C}{r}]})\label{psieqn2}\\
&\leq H(W_{[\ell\binom{C}{r}+1:N]})= N-\ell\binom{C}{r}.
\end{align}
\end{subequations}
\endgroup
From the cache contents $Z_{[1:C]}$ and the transmissions $X_{[1:\ell]}$, it is possible to decode the files $W_{[1:\ell\binom{C}{r}]}$; hence \eqref{psieqn1} follows. Further, \eqref{psieqn2} follows from the fact that given $W_{[1:N]}$, there is no uncertainty in the transmissions. Thus, we have the following upper bound on $\psi$:
\begin{align}
\psi \leq (N-\ell\binom{C}{r})^+ \label{psi}.
\end{align}
Substituting \eqref{mu} and \eqref{psi} in \eqref{mupsi}, we get
\begin{align*}
N \leq sM+&\ell R^*(M)+\\&\frac{\omega_{s,\ell}}{s+\omega_{s,\ell}}\left(N-\ell\binom{s}{r}\right)^+ + \left(N-\ell\binom{C}{r}\right)^+.
\end{align*}
Upon rearranging the terms and optimizing over all possible values of $s$ and $\ell$, we obtain the following lower bound on $R^*(M)$:
\begin{align*}
&R^*(M) \geq \max_{\substack{s\in \{r,r+1,r+2,\hdots,C\} \\ \ell\in \left\{1,2,\hdots,\ceil{N/\binom{s}{r}}\right\}}} \frac{1}{\ell}\Big\{N-\frac{\omega_{s,\ell}}{s+\omega_{s,\ell}}\left(N-\ell\binom{s}{r}\right)^+\\&\hspace{4.5cm}- \left(N-\ell\binom{C}{r}\right)^+-sM\Big\}
\end{align*}
where $\omega_{s,\ell} = \min\left(C-s,\min\left\{i:\binom{s+i}{r}\geq \ceil{\frac{N}{\ell}}\right\}\right)$. This completes the proof of Theorem \ref{lowerbound}.\hfill $\blacksquare$
\subsection{Proof of Theorem \ref{optimality1}}
\label{proof:thm4}
First, we show that the rate $R(M) = 1-\frac{rM}{N}$ is achievable for $\frac{\binom{C}{r}-1}{r\binom{C}{r}}\leq \frac{M}{N}\leq \frac{1}{r}$. Substituting $t=C-r$ in \eqref{thm1eqn} gives $\frac{M}{N}=\frac{\binom{C}{r}-1}{r\binom{C}{r}}$ (see Appendix \ref{AppendixB}), and $R = 1/\binom{C}{r}$. Similarly, $t=C-r+1$ gives $M/N=1/r$ and $R=0$ (see Appendix \ref{AppendixA}). By memory sharing, the rate $R(M) = 1-\frac{rM}{N}$ is achievable for $\frac{\binom{C}{r}-1}{r\binom{C}{r}}\leq \frac{M}{N}\leq \frac{1}{r}$. Now, to show the converse, let us substitute $s=r$ and $\ell=N$ in \eqref{lowereqn}. This gives
\begin{equation*}
R^*(M)\geq 1-\frac{rM}{N}.
\end{equation*}
Therefore, we can conclude that
\begin{equation*}
R^*(M)= 1-\frac{rM}{N}
\end{equation*}
for $\frac{\binom{C}{r}-1}{r\binom{C}{r}}\leq \frac{M}{N}\leq \frac{1}{r}$. This completes the proof of Theorem \ref{optimality1}. \hfill $\blacksquare$
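As a numerical illustration, take $C=4$ and $r=2$: the trade-off is linear between the corner points $(M/N,R)=(5/12,1/6)$ and $(1/2,0)$, and the expression $1-\frac{rM}{N}$ indeed evaluates to $1-2\cdot\frac{5}{12}=\frac{1}{6}$ at the first point and to $0$ at the second.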
\subsection{Proof of Theorem \ref{optimality2}}
\label{proof:thm5}
From Theorem \ref{thm:scheme2}, we know that for the $(C,r,M,N)$ combinatorial MACC scheme the rate
\begin{equation*}
R(M) = N-CM
\end{equation*}
is achievable when $0\leq M\leq \frac{N-\binom{C-1}{r}}{C}$.
By substituting $s=C$ and $\ell=1$ in \eqref{lowereqn}, we get
\begin{equation*}
R^*(M) \geq N-CM.
\end{equation*}
Thus, we conclude that
\begin{equation*}
R^*(M) = N-CM
\end{equation*}
for $0\leq M\leq \frac{N-\binom{C-1}{r}}{C}$. This completes the proof of Theorem \ref{optimality2}. \hfill$\blacksquare$
\section{Conclusion}
\label{conclusion}
In this work, we presented two coding schemes for the combinatorial MACC setting introduced in \cite{MKR2}. Both schemes employ coding in the placement phase in addition to coded transmissions in the delivery phase. That is, we showed that, with the help of a coded placement phase, it is possible to achieve a rate lower than that of the optimal coded caching scheme under uncoded placement. Finally, we derived an information-theoretic lower bound on the optimal rate-memory trade-off of the combinatorial MACC scheme and showed that the first scheme is optimal in the higher memory regime and that the second scheme is optimal when the number of files at the server is no more than the number of users in the system.
\section*{Acknowledgment}
This work was supported partly by the Science and Engineering Research Board (SERB) of Department of Science and Technology (DST), Government of India, through J.C. Bose National Fellowship to B. Sundar Rajan, and by the Ministry of Human Resource Development (MHRD), Government of India, through Prime Minister’s Research Fellowship (PMRF) to K. K. Krishnan Namboodiri.
\begin{appendices}
\section{}
\label{AppendixZ}
In this section, we calculate the normalized cache memory as a continuation of \eqref{mbyn1}. From \eqref{mbyn1}, we have
\begin{align*}
\frac{M}{N} &= \frac{(r-1)!\binom{C-1}{t-1}}{r!\binom{C}{t}}+\notag\\&\qquad\frac{\sum\limits_{b=1}^{r-1}\frac{r!}{(r-b)(r-b+1)}\left(\binom{C-1}{t-1}-\sum\limits_{i=1}^{b}\binom{r-1}{r-i}\binom{C-r}{t-r+i-1}\right)}{r!\binom{C}{t}}\\
&=\frac{\binom{C-1}{t-1}\left((r-1)!+\sum\limits_{b=1}^{r-1}\frac{r!}{(r-b)(r-b+1)}\right)}{r!\binom{C}{t}}-\\&\hspace{2cm}\frac{\sum\limits_{b=1}^{r-1}\frac{r!}{(r-b)(r-b+1)}\sum\limits_{i=1}^{b}\binom{r-1}{r-i}\binom{C-r}{t-r+i-1}}{r!\binom{C}{t}}.
\end{align*}
By expanding and cancelling the terms, we get
\begin{align*}
\sum_{b=1}^{r-1}\frac{r!}{(r-b)(r-b+1)} &= r!\sum_{b=1}^{r-1}\left(\frac{1}{r-b}- \frac{1}{r-b+1}\right)\\
& = r!\left(1- \frac{1}{r}\right)=(r-1)!(r-1).
\end{align*}
Thus, we have
\begin{align*}
\frac{M}{N}&=\frac{r!\binom{C-1}{t-1}-\sum\limits_{b=1}^{r-1}\frac{r!}{(r-b)(r-b+1)}\sum\limits_{i=1}^{b}\binom{r-1}{r-i}\binom{C-r}{t-r+i-1}}{r!\binom{C}{t}}.
\end{align*}
By changing the order of summation, we get
\begin{align*}
\frac{M}{N}&=\frac{r!\binom{C-1}{t-1}-\sum\limits_{i=1}^{r-1}\binom{r-1}{r-i}\binom{C-r}{t-r+i-1}\sum\limits_{b=i}^{r-1}\frac{r!}{(r-b)(r-b+1)}}{r!\binom{C}{t}}.
\end{align*}
By expanding and cancelling the terms, we get
\begin{align*}
\sum_{b=i}^{r-1}\frac{r!}{(r-b)(r-b+1)} &= r!\sum_{b=i}^{r-1}\left(\frac{1}{r-b}- \frac{1}{r-b+1}\right)\\
= r!&\left(1- \frac{1}{r-i+1}\right)=r!\frac{r-i}{r-i+1}.
\end{align*}
Therefore
\begingroup
\allowdisplaybreaks
\begin{align*}
\frac{M}{N}&=\frac{r!\binom{C-1}{t-1}-r!\sum\limits_{i=1}^{r-1}\frac{r-i}{r-i+1}\binom{r-1}{r-i}\binom{C-r}{t-r+i-1}}{r!\binom{C}{t}}\\
&= \frac{\binom{C-1}{t-1}}{\binom{C}{t}}-\frac{1}{\binom{C}{t}}\sum\limits_{i=1}^{r-1}\frac{r-i}{r-i+1}\binom{r-1}{r-i}\binom{C-r}{t-r+i-1}\\
&=\frac{t}{C}-\frac{1}{\binom{C}{t}}\sum\limits_{i=1}^{r-1}\frac{r-i}{r-i+1}\binom{r-1}{r-i}\binom{C-r}{t-r+i-1}\\
&=\frac{t}{C}-\frac{1}{\binom{C}{t}}\sum\limits_{i=1}^{r-1}\frac{r-i}{r}\binom{r}{i-1}\binom{C-r}{t-r+i-1}.
\end{align*}
\endgroup
\section{}
\label{AppendixY}
In this section, we calculate the normalized cache memory as a continuation of \eqref{mbyn2}. From \eqref{mbyn2}, we have
\begin{align*}
\frac{M}{N} &= \frac{(t-1)!\binom{C-1}{t-1}}{t!\binom{C}{t}}+\\&\hspace{1.5cm}\frac{\sum\limits_{b=1}^{t-1}\frac{t!}{(t-b)(t-b+1)}\left(\binom{C-1}{t-1}-\sum\limits_{i=1}^{b}\binom{r-1}{t-i}\binom{C-r}{i-1}\right)}{t!\binom{C}{t}}\\
&=\frac{\binom{C-1}{t-1}\left((t-1)!+\sum\limits_{b=1}^{t-1}\frac{t!}{(t-b)(t-b+1)}\right)}{t!\binom{C}{t}}-\\&\hspace{1.5cm}\frac{\sum\limits_{b=1}^{t-1}\frac{t!}{(t-b)(t-b+1)}\sum\limits_{i=1}^{b}\binom{r-1}{t-i}\binom{C-r}{i-1}}{t!\binom{C}{t}}.
\end{align*}
By expanding and cancelling the terms, we get
\begin{align*}
\sum_{b=1}^{t-1}\frac{t!}{(t-b)(t-b+1)} &= t!\sum_{b=1}^{t-1}\left(\frac{1}{t-b}- \frac{1}{t-b+1}\right)\\
& = t!\left(1- \frac{1}{t}\right)=(t-1)!(t-1).
\end{align*}
Thus, we have
\begin{align*}
\frac{M}{N}&=\frac{t!\binom{C-1}{t-1}-\sum\limits_{b=1}^{t-1}\frac{t!}{(t-b)(t-b+1)}\sum\limits_{i=1}^{b}\binom{r-1}{t-i}\binom{C-r}{i-1}}{t!\binom{C}{t}}.
\end{align*}
By changing the order of summation, we get
\begin{align*}
\frac{M}{N}&=\frac{t!\binom{C-1}{t-1}-\sum\limits_{i=1}^{t-1}\binom{r-1}{t-i}\binom{C-r}{i-1}\sum\limits_{b=i}^{t-1}\frac{t!}{(t-b)(t-b+1)}}{t!\binom{C}{t}}.
\end{align*}
By expanding and cancelling the terms, we get
\begin{align*}
\sum_{b=i}^{t-1}\frac{t!}{(t-b)(t-b+1)} &= t!\sum_{b=i}^{t-1}\left(\frac{1}{t-b}- \frac{1}{t-b+1}\right)\\
= t!&\left(1- \frac{1}{t-i+1}\right)=t!\frac{t-i}{t-i+1}.
\end{align*}
Therefore
\begin{align*}
\frac{M}{N}&=\frac{t!\binom{C-1}{t-1}-t!\sum\limits_{i=1}^{t-1}\frac{t-i}{t-i+1}\binom{r-1}{t-i}\binom{C-r}{i-1}}{t!\binom{C}{t}}\\
&= \frac{\binom{C-1}{t-1}}{\binom{C}{t}}-\frac{1}{\binom{C}{t}}\sum\limits_{i=1}^{t-1}\frac{t-i}{t-i+1}\binom{r-1}{t-i}\binom{C-r}{i-1}\\
&=\frac{t}{C}-\frac{1}{\binom{C}{t}}\sum\limits_{i=1}^{t-1}\frac{t-i}{r}\binom{r}{t-i+1}\binom{C-r}{i-1}.
\end{align*}
\section{}
\label{AppendixA}
In this section, we show that, if we substitute $t=C-r+1$ in \eqref{thm1eqn}, we get $\frac{M}{N}=\frac{1}{r}$. \\
a) Consider the case $t\geq r$. Then, we have
\begin{align}
\frac{M}{N}&=\frac{t}{C}-\frac{1}{r\binom{C}{r-1}}\sum\limits_{i=1}^{r-1}(r-i)\binom{r}{i-1}\binom{C-r}{r-i}\notag\\
&=\frac{C-r+1}{C}-\frac{1}{r\binom{C}{r-1}}\sum\limits_{i=1}^{r-1}(r-i)\binom{r}{i-1}\binom{C-r}{r-i}\notag.
\end{align}
We can use the following identity to compute the above sum.
\begin{lem}[\cite{Mes}]
\label{lemmavander}
Let $n_1,n_2$ be arbitrary positive integers and let $m$ be a positive integer such that $m\leq n_1+n_2$. Then
\begin{equation*}
\sum\limits_{k_1+k_2=m} k_1\binom{n_1}{k_1}\binom{n_2}{k_2}=\frac{mn_1}{n_1+n_2}\binom{n_1+n_2}{m}
\end{equation*}
where summation ranges over all non-negative integers $k_1$ and $k_2$ such that $k_1\leq n_1$, $k_2\leq n_2$ and $k_1+k_2=m$.
\end{lem}
Lemma \ref{lemmavander} is a generalization of the Vandermonde identity:
\begin{equation}
\label{eqnvander}
\sum\limits_{k_1+k_2=m} \binom{n_1}{k_1}\binom{n_2}{k_2}=\binom{n_1+n_2}{m}.
\end{equation}
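As a quick numerical check, take $n_1=n_2=m=2$: the left-hand side of Lemma \ref{lemmavander} is $0\cdot\binom{2}{0}\binom{2}{2}+1\cdot\binom{2}{1}\binom{2}{1}+2\cdot\binom{2}{2}\binom{2}{0}=6$, matching $\frac{2\cdot 2}{2+2}\binom{4}{2}=6$, while \eqref{eqnvander} reads $1+4+1=6=\binom{4}{2}$.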
From Lemma \ref{lemmavander}, we have
\begin{equation*}
\sum\limits_{i=1}^{r-1}(r-i)\binom{r}{i-1}\binom{C-r}{r-i} = \frac{(r-1)(C-r)\binom{C}{r-1}}{C}.
\end{equation*}
Thus, we get
\begin{align}
\frac{M}{N}&=\frac{C-r+1}{C}-\frac{(r-1)(C-r)\binom{C}{r-1}}{rC\binom{C}{r-1}}\notag\\
&=\frac{C-r+1}{C}-\frac{(r-1)(C-r)}{rC} = \frac{1}{r}\notag.
\end{align}
b) Now, consider the case $t<r$. Then, we have
\begin{align}
\frac{M}{N}&=\frac{t}{C}-\frac{1}{r\binom{C}{t}}\sum\limits_{i=1}^{t-1}(t-i)\binom{r}{t-i+1}\binom{C-r}{i-1}\notag\\
&=\frac{t}{C}-\frac{\frac{rt}{C}\binom{C}{t}-\binom{C}{t}}{r\binom{C}{t}}\label{appeqn20}\\
&=\frac{t}{C}-\frac{rt-C}{rC} = \frac{1}{r}\notag
\end{align}
where \eqref{appeqn20} follows from Lemma \ref{lemmavander} and \eqref{eqnvander}. Thus we have established that $t=C-r+1$ corresponds to $\frac{M}{N} = \frac{1}{r}$.\hfill $\blacksquare$
\section{}
\label{AppendixB}
In this section, we show that $t=C-r$ results in $\frac{M}{N}=\frac{\binom{C}{r}-1}{r\binom{C}{r}}$ in \eqref{thm1eqn}.
\noindent a) Consider the case $t=C-r\geq r$. Then,
\begin{align}
\frac{M}{N}&=\frac{\binom{C-1}{t-1}}{\binom{C}{t}}-\frac{1}{\binom{C}{C-r}}\sum\limits_{i=1}^{r-1}\frac{r-i}{r}\binom{r}{i-1}\binom{C-r}{C-2r+i-1}\notag\\
&=\frac{\binom{C-1}{C-r-1}}{\binom{C}{r}}-\frac{1}{r\binom{C}{r}}\sum\limits_{i=1}^{r-1}(r-i)\binom{r}{i-1}\binom{C-r}{r-i+1}\label{appeqn-1}.
\end{align}
Let us compute the second term in \eqref{appeqn-1}. We have
\begin{align}
\sum\limits_{i=1}^{r-1}(r-i)&\binom{r}{i-1}\binom{C-r}{r-i+1} =\notag\\ &\sum\limits_{i=1}^{r-1}(r-i+1)\binom{r}{i-1}\binom{C-r}{r-i+1}-\notag\\
&\hspace{2cm}\sum\limits_{i=1}^{r-1}\binom{r}{i-1}\binom{C-r}{r-i+1}\label{appeq0}.
\end{align}
Using Lemma \ref{lemmavander} and \eqref{eqnvander}, we get
\begin{align}
\sum\limits_{i=1}^{r-1}&(r-i+1)\binom{r}{i-1}\binom{C-r}{r-i+1}\notag\\ &=\sum\limits_{i=1}^{r+1}(r-i+1)\binom{r}{i-1}\binom{C-r}{r-i+1}-r(C-r)\notag\\
& \hspace{3cm}=\frac{r(C-r)}{C}\binom{C}{r}-r(C-r) \label{appeq1}
\end{align}
where \eqref{appeq1} follows from Lemma \ref{lemmavander}. Similarly, by using \eqref{eqnvander}, we get
\begin{align}
\sum\limits_{i=1}^{r-1}\binom{r}{i-1}\binom{C-r}{r-i+1}=\binom{C}{r}-r(C-r)-1.\label{appeq2}
\end{align}
By substituting \eqref{appeq1} and \eqref{appeq2} in \eqref{appeqn-1}, we get
\begin{align}
\frac{M}{N}&=\frac{r\binom{C-1}{C-r-1}-\frac{r(C-r)}{C}\binom{C}{r}+\binom{C}{r}-1}{r\binom{C}{r}}\notag\\
&=\frac{\binom{C}{r}-1}{r\binom{C}{r}}\label{appeq3}.
\end{align}
\noindent b) Consider the case $t<r$. Then, we have
\begin{align}
\frac{M}{N}&=\frac{\binom{C-1}{t-1}}{\binom{C}{t}}-\frac{1}{\binom{C}{C-r}}\sum\limits_{i=1}^{t-1}\frac{t-i}{r}\binom{r}{t-i+1}\binom{C-r}{i-1}\notag\\
&=\frac{\binom{C-1}{C-r-1}}{\binom{C}{r}}-\frac{1}{r\binom{C}{r}}\sum\limits_{i=1}^{t-1}(t-i)\binom{r}{t-i+1}\binom{C-r}{i-1}\label{appeqn9}.
\end{align}
Let us compute the second term in \eqref{appeqn9} by using Lemma \ref{lemmavander} and \eqref{eqnvander}. We have
\begin{align}
\sum\limits_{i=1}^{t-1}(t-i)&\binom{r}{t-i+1}\binom{C-r}{i-1} =\notag\\ &\sum\limits_{i=1}^{t-1}(t-i+1)\binom{r}{t-i+1}\binom{C-r}{i-1}-\notag\\
&\hspace{2cm}\sum\limits_{i=1}^{t-1}\binom{r}{t-i+1}\binom{C-r}{i-1}\notag\\
&=\frac{r(C-r)}{C}\binom{C}{r}-\binom{C}{r}+1.\label{appeq11}
\end{align}
Substituting \eqref{appeq11} in \eqref{appeqn9}, we get,
\begin{equation*}
\frac{M}{N} = \frac{\binom{C}{r}-1}{r\binom{C}{r}}.
\end{equation*}
Thus we have established that $t=C-r$ corresponds to $\frac{M}{N} = \frac{\binom{C}{r}-1}{r\binom{C}{r}}$.
\end{appendices}
\section{Introduction}
In 1964, H.\ Bass \cite{bass} showed that if $R$ is a ring and $H$ a subgroup of the general linear group $\operatorname{GL}_n(R)$, then
\begin{equation*}
H\text{ is normalised by }\operatorname{E}_n(R)~\Leftrightarrow~ \exists \text{ ideal }I:~\operatorname{E}_n(R,I)\subseteq H\subseteq \operatorname{C}_n(R,I)
\end{equation*}
provided $n$ is large enough with respect to the stable rank of $R$. Here $\operatorname{E}_n(R)$ denotes the elementary subgroup, $\operatorname{E}_n(R,I)$ the relative elementary subgroup of level $I$ and $\operatorname{C}_n(R,I)$ the full congruence subgroup of level $I$ (cf. \cite{preusser2}). Bass's result, which is one of the central points in the structure theory of general linear groups, is known as the {\it Sandwich Classification Theorem}. In the 1970s and 80s, the validity of this theorem was extended by J.\ Wilson \cite{wilson}, I.\ Golubchik \cite{golubchik}, L.\ Vaserstein \cite{vaserstein, vaserstein_ban, vaserstein_neum} and others. It holds true, for example, if $R$ is almost commutative (i.e. finitely generated as a module over its center) and $n\geq 3$.
It follows from the Sandwich Classification Theorem that if $R$ is a commutative ring, $n\geq 3$, $\sigma\in \operatorname{GL}_n(R)$, $i\neq j$ and $k\neq l$, then the elementary transvection $t_{kl}(\sigma_{ij})$ can be expressed as a finite product of $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
In 1960, J.\ Brenner \cite{brenner} showed that if $R=\mathbb{Z}$, then there is a bound for the number of factors needed for such an expression of $t_{kl}(\sigma_{ij})$, but until recently there was not much hope to prove such a result for arbitrary commutative rings $R$.
However, in 2018 the author found an explicit expression of $t_{kl}(\sigma_{ij})$ (where $\sigma\in \operatorname{GL}_n(R)$, $R$ is a commutative ring, $n\geq 3$, $i\neq j$, $k\neq l$) as a product of $8$ elementary conjugates of $\sigma$ and $\sigma^{-1}$, yielding a very short proof of the Sandwich Classification Theorem for commutative rings \cite{preusser2}. Similar results were obtained for the even- and odd-dimensional orthogonal groups $\operatorname{O}_{2n}(R)$ and $\operatorname{O}_{2n+1}(R)$, and the even- and odd-dimensional unitary groups $\operatorname{U}_{2n}(R,\Lambda)$ and $\operatorname{U}_{2n+1}(R,\Delta)$ where $R$ is a commutative ring and $n\geq 3$ \cite{preusser2, preusser3}.
In order to show the normality of the elementary subgroup $\operatorname{E}_n(R)$ in $\operatorname{GL}_n(R)$, one has to prove that $\operatorname{E}_n(R)^\sigma\subseteq \operatorname{E}_n(R)$ for any $\sigma\in\operatorname{GL}_n(R)$. {\it Decomposition of unipotents} provides explicit formulae expressing the generators $t_{kl}(x)^\sigma~(k\neq l, x\in R)$ of $\operatorname{E}_n(R)^{\sigma}$ as products of elementary transvections, see \cite{stepanov-vavilov}. In order to show the Sandwich Classification Theorem for commutative rings, one has to prove that $\operatorname{E}_n(I)\subseteq \sigma^{\operatorname{E}_n(R)}$ for any $\sigma\in\operatorname{GL}_n(R)$ where $I$ denotes the ideal generated by the nondiagonal entries of $\sigma$ and all differences of diagonal entries. The paper \cite{preusser2} provides explicit formulae expressing the generators $t_{kl}(x\sigma_{ij}),t_{kl}(x(\sigma_{ii}-\sigma_{jj}))~(k\neq l,i\neq j,x\in R)$ of $\operatorname{E}_n(I)$ as products of $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$. N.\ Vavilov considered the papers \cite{preusser2, preusser3} ``the first major advance in the direction of what can be dubbed the {\it reverse decomposition of unipotents}'', see \cite{vavilov}.
In this paper we obtain bounds for the reverse decomposition of unipotents over different classes of noncommutative rings.
Pick a $\sigma\in \operatorname{GL}_n(R)$. Our goal is to express the matrix $t_{kl}(\sigma_{ij})$, where $k\neq l$ and $i\neq j$, as a product of $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$. The idea is to start with $\sigma$ and then apply successively operations of types (a) and (b) below until one arrives at $t_{kl}(\sigma_{ij})$.
\begin{enumerate}[(a)]
\item $\operatorname{GL}_n(R)\rightarrow \operatorname{GL}_n(R),\tau\mapsto \tau^\xi$ where $\xi\in \operatorname{E}_n(R)$.
\item $\operatorname{GL}_n(R)\rightarrow \operatorname{GL}_n(R),\tau\mapsto [\tau,\xi]$ where $\xi\in \operatorname{E}_n(R)$.
\end{enumerate}
If $\tau$ lies in $\sigma^{\operatorname{E}_n(R)}$ and $\rho$ is obtained from $\tau$ by applying an operation of type (a) or (b), then clearly $\rho$ again lies in $\sigma^{\operatorname{E}_n(R)}$.
In this paper we also use operations of type (c) below.
\begin{enumerate}[(c)]
\setcounter{enumi}{3}
\item $\operatorname{GL}_n(R)\times\operatorname{GL}_n(R)\rightarrow \operatorname{GL}_n(R)\times\operatorname{GL}_n(R),(\tau_1,\tau_2)\mapsto ([\tau_1^{-1},\xi],[\xi,\tau_2])$ where $\xi\in \operatorname{E}_n(R)$.
\end{enumerate}
We show in Section 4 that if the product $\tau_1\tau_2$ lies in $\sigma^{\operatorname{E}_n(R)}$ where $\tau_1\in\operatorname{E}_n(R)$, and $(\rho_1, \rho_2)$ is obtained from $(\tau_1,\tau_2)$ by applying an operation of type (c), then the product $\rho_1\rho_2$ again lies in $\sigma^{\operatorname{E}_n(R)}$.
The rest of the paper is organised as follows. In Section 2 we recall some standard notation which is used throughout the paper. In Section 3 we recall the definitions of the general linear group $\operatorname{GL}_n(R)$ and its elementary subgroup $\operatorname{E}_n(R)$. In Section 4, we introduce the operations (c) above in the abstract context of groups. Then we use these operations to obtain results for general linear groups which are crucial for the following sections. In Section 5 we show how \cite[Theorem 12]{preusser2} can be derived from the results of Section 4. In Sections 6--10 we obtain bounds for the reverse decomposition of unipotents over von Neumann regular rings, Banach algebras, rings satisfying a stable range condition, rings with Euclidean algorithm and almost commutative rings. In the last section we list some open problems.
\section{Notation}
$\mathbb{N}$ denotes the set of positive integers. If $G$ is a group and $g,h\in G$, we let $g^h:=h^{-1}gh$, and $[g,h]:=ghg^{-1}h^{-1}$. If $g\in G$ and $H$ is a subgroup of $G$, we denote by $g^{H}$ the subgroup of $G$ generated by the set $\{g^h\mid h\in H\}$. By a ring we mean an associative ring with $1\neq 0$. By an ideal of a ring we mean a two-sided ideal.
Throughout the paper $R$ denotes a ring and $n$ a positive integer greater than $2$. We denote by $^n\!R$ the set of all rows $u=(u_1,\dots,u_n)$ with entries in $R$ and by $R^n$ the set of all columns $v=(v_1,\dots,v_n)^t$ with entries in $R$. Furthermore, the set of all $n\times n$ matrices over $R$ is denoted by $\operatorname{M}_n(R)$. The identity matrix in $\operatorname{M}_n(R)$ is denoted by $e$ or $e_{n\times n}$ and the matrix with a one at position $(i,j)$ and zeros elsewhere is denoted by $e^{ij}$. If $\sigma\in \operatorname{M}_{n}(R)$, we denote the entry of $\sigma$ at position $(i,j)$ by $\sigma_{ij}$. We denote the $i$-th row of $\sigma$ by $\sigma_{i*}$ and its $j$-th column by $\sigma_{*j}$. If $\sigma\in \operatorname{M}_n(R)$ is invertible, we denote the entry
of $\sigma^{-1}$ at position $(i,j)$ by $\sigma'_{ij}$, the $i$-th row of $\sigma^{-1}$ by $\sigma'_{i*}$ and the $j$-th column of $\sigma^{-1}$ by $\sigma'_{*j}$.
\section{The general linear group and its elementary subgroup}
\begin{definition}
The group $\operatorname{GL}_{n}(R)$ consisting of all invertible elements of $\operatorname{M}_n(R)$ is called the {\it general linear group} of degree $n$ over $R$.
\end{definition}
\begin{definition}
Let $x\in R$ and $i,j\in\{1,\dots,n\}$ such that $i\neq j$. Then the matrix $t_{ij}(x):=e+xe^{ij}$ is called an {\it elementary transvection}. The subgroup $\operatorname{E}_n(R)$ of $\operatorname{GL}_n(R)$ generated by the elementary transvections is called the {\it elementary subgroup}.
\end{definition}
\begin{lemma}\label{lemelrel}
The relations
\begin{align*}
t_{ij}(x)t_{ij}(y)&=t_{ij}(x+y), \tag{R1}\\
[t_{ij}(x),t_{hk}(y)]&=e \tag{R2}\text{ and}\\
[t_{ij}(x),t_{jk}(y)]&=t_{ik}(xy) \tag{R3}
\end{align*}
hold where $i\neq k, j\neq h$ in $(R2)$ and $i\neq k$ in $(R3)$.
\end{lemma}
\begin{proof}
Straightforward computation.
\end{proof}
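For instance, the computation behind $(R3)$ runs as follows. Using $e^{ij}e^{jk}=e^{ik}$ and $e^{pq}e^{rs}=0$ for $q\neq r$, we get
\[t_{ij}(x)t_{jk}(y)=(e+xe^{ij})(e+ye^{jk})=e+xe^{ij}+ye^{jk}+xye^{ik},\]
and multiplying on the right by $t_{ij}(-x)$ and then by $t_{jk}(-y)$ removes the terms $xe^{ij}$ and $ye^{jk}$; all arising cross terms vanish since $i\neq j$, $j\neq k$ and $i\neq k$, leaving $t_{ik}(xy)$.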
\begin{definition}\label{defp}
Let $i,j\in\{1,\dots,n\}$ such that $i\neq j$. Then the matrix $p_{ij}:=e+e^{ij}-e^{ji}-e^{ii}-e^{jj}=t_{ij}(1)t_{ji}(-1)t_{ij}(1)\in \operatorname{E}_{n}(R)$ is called a {\it generalised permutation matrix}. It is easy to show that $p_{ij}^{-1}=p_{ji}$. The subgroup of $\operatorname{E}_n(R)$ generated by the generalised permutation matrices is denoted by $\operatorname{P}_n(R)$.
\end{definition}
\begin{lemma}\label{lemp}
Let $\sigma\in \operatorname{GL}_n(R)$. Further let $x\in R$ and $i,j,k,l\in\{1,\dots,n\}$ such that $i\neq j$ and $k\neq l$. Then there are $\tau,\rho\in \operatorname{P}_n(R)$ such that $(\sigma^{\tau})_{kl}=\sigma_{ij}$ and $t_{kl}(x)^{\rho}=t_{ij}(x)$.
\end{lemma}
\begin{proof}
Easy exercise.
\end{proof}
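By way of illustration (with $n=3$), conjugation by $p_{12}$ moves the position $(1,3)$ to $(2,3)$:
\[t_{13}(x)^{p_{12}}=p_{21}(e+xe^{13})p_{12}=e+x\,p_{21}e^{13}p_{12}=e+xe^{23}=t_{23}(x),\]
where $p_{21}e^{13}p_{12}=e^{23}$ is checked directly from Definition \ref{defp}.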
\section{The key results}
\subsection{Simultaneous reduction in groups}
In this subsection $G$ denotes a group.
\begin{definition}
Let $(a_1,b_1), (a_2,b_2)\in G\times G$. If there is a $g\in G$ such that
\[a_2=[a_1^{-1},g]\text{ and }b_2=[g,b_1],\]
then we write $(a_1,b_1)\xrightarrow{g} (a_2,b_2)$. In this case $(a_1,b_1)$ is called {\it reducible to $(a_2,b_2)$ by $g$}.
\end{definition}
\begin{definition}
If $(a_1,b_1),\dots, (a_{n+1},b_{n+1})\in G\times G$ and $g_1,\dots,g_{n}\in G$ are such that
\[(a_1,b_1) \xrightarrow{g_1} (a_2,b_2)\xrightarrow{g_2}\dots \xrightarrow{g_{n}} (a_{n+1},b_{n+1}),\]
then we write $(a_1,b_1) \xrightarrow{g_1,\dots,g_{n}}(a_{n+1},b_{n+1})$. In this case $(a_1,b_1)$ is called {\it reducible to $(a_{n+1},b_{n+1})$ by $g_1,\dots,g_n$}.
\end{definition}
Let $H$ be a subgroup of $G$. If $g\in G$ and $h\in H$, then we call $g^h$ an {\it $H$-conjugate} of $g$.
\begin{lemma}\label{lemredux}
Let $(a_1,b_1),(a_2,b_2)\in G\times G$. If $(a_1,b_1)\xrightarrow{g_1,\dots,g_n}(a_2,b_2)$ for some $g_1,\dots,g_n\in G$, then $a_2b_2$ is a product of $2^n$ $H$-conjugates of $a_1b_1$ and $(a_1b_1)^{-1}$ where $H$ is the subgroup of $G$ generated by $\{a_1,g_1,\dots, g_n\}$.
\end{lemma}
\begin{proof}
Assume that $n=1$. Then
\[a_2b_2=[a_1^{-1},g_1][g_1,b_1]=(a_1b_1)^{g_1^{-1}a_1}\cdot((a_1b_1)^{-1})^{a_1}.\]
Hence $a_2b_2$ is a product of two $H$-conjugates of $a_1b_1$ and $(a_1b_1)^{-1}$. The general case follows by induction (note that $a_1,\dots,a_n\in H$).
\end{proof}
\subsection{Application to general linear groups}
\begin{proposition}\label{propkey}
Let $\sigma\in \operatorname{GL}_n(R)$. Let $x_1,\dots,x_{n},y\in R$ such that $y\sum\limits_{p=1}^{n}\sigma_{1p}x_p=0$. Then the following holds.
\begin{enumerate}[(i)]
\itemsep0pt
\item Suppose that $x_n=0$, $y=1$. Then for any $k\neq l$ and $a,b\in R$, the elementary transvection $t_{kl}(ax_1b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\item Suppose that $x_n=1$, $y=1$. Then for any $k\neq l$ and $a,b\in R$, the elementary transvection $t_{kl}(ax_1b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\item Suppose that $x_1=1$. Then for any $k\neq l$ and $a,b\in R$, the elementary transvection $t_{kl}(ayb)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}[(i)]
\item Set $\tau:=\prod\limits_{p=1}^{n-1}t_{pn}(x_p)$. Clearly $(\sigma\tau^{-1})_{1*}=\sigma_{1*}$ and hence $(\sigma\tau^{-1}\sigma^{-1})_{1*}=e_{1*}$. A straightforward computation shows
\[(\tau,\sigma\tau^{-1}\sigma^{-1})\xrightarrow{t_{21}(a),t_{n1}(-b)}(t_{21}(ax_1b), e).\]
It follows from Lemma \ref{lemredux} that $t_{21}(ax_1b)$ is a product of $4$ $\operatorname{E}_n(R)$-conjugates of $[\tau,\sigma]$ and $[\tau,\sigma]^{-1}$. Thus $t_{21}(ax_1b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$. Assertion (i) follows now from Lemma \ref{lemp}.
\item Set $\tau:=\prod\limits_{p=1}^{n-1}t_{pn}(x_p)$. Clearly $(\sigma\tau)_{1n}=0$. A straightforward computation shows that
\[(\tau,\tau^{-1}\sigma^{-1})\xrightarrow{t_{21}(a),t_{n1}(-b),t_{n2}(1)}(t_{n1}(ax_1b), e).\]
It follows from Lemma \ref{lemredux} that $t_{n1}(ax_1b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$. Assertion (ii) follows now from Lemma \ref{lemp}.
\item Clearly $t_{n1}(y)^{\sigma}=e+\sigma'_{*n}y\sigma_{1*}$. Set $\tau:=\prod\limits_{p=2}^{n}t_{p1}(x_p)$. One checks easily that $(t_{n1}(y)^{\sigma\tau})_{*1}=e_{*1}$. A straightforward computation shows that
\[(t_{n1}(-y),t_{n1}(y)^{\sigma\tau})\xrightarrow{t_{12}(b),t_{1n}(-a)}(t_{12}(ayb), e).\]
It follows from Lemma \ref{lemredux} that $t_{12}(ayb)$ is a product of $4$ $\operatorname{E}_n(R)$-conjugates of $t_{n1}(-y)t_{n1}(y)^{\sigma\tau}=(\sigma^{-1})^{\tau t_{n1}(y)}\cdot \sigma^\tau$ and its inverse. Thus $t_{12}(ayb)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$. Assertion (iii) follows now from Lemma \ref{lemp}.
\end{enumerate}
\end{proof}
\begin{corollary}\label{corkeyB}
Let $\sigma\in \operatorname{GL}_n(R)$ such that $\sigma_{1n}=0$. Then for any $2\leq j\leq n$, $k\neq l$ and $a,b\in R$, the elementary transvection $t_{kl}(a\sigma'_{1j}b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{corollary}
\begin{proof}
Clearly we have $\sum\limits_{p=1}^{n-1}\sigma_{1p}\sigma'_{pj}=0$ for any $2\leq j\leq n$ (indeed, $\sum\limits_{p=1}^{n}\sigma_{1p}\sigma'_{pj}=(\sigma\sigma^{-1})_{1j}=0$ for $j\geq 2$, and the summand for $p=n$ vanishes since $\sigma_{1n}=0$). Hence, by Proposition \ref{propkey}(i), the elementary transvection $t_{kl}(a\sigma'_{1j}b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{proof}
\begin{corollary}\label{corkeyA}
Let $\sigma\in \operatorname{GL}_n(R)$ such that $\sigma_{11}$ is right invertible. Then for any $k\neq l$ and $a,b\in R$, the elementary transvection $t_{kl}(a\sigma_{1n}b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{corollary}
\begin{proof}
Let $z$ be a right inverse of $\sigma_{11}$. Then $-\sigma_{11}z\sigma_{1n}+\sigma_{1n}=0$. Hence, by Proposition \ref{propkey}(ii), the elementary transvection $t_{kl}(a\sigma_{1n}b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{proof}
\begin{corollary}\label{corkeyC}
Let $\sigma\in \operatorname{GL}_n(R)$ such that $\sigma_{1n}$ is an idempotent. Then for any $k\neq l$ and $a,b\in R$, the elementary transvection $t_{kl}(a\sigma_{1n}b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$. In particular, if $\sigma_{1n}=1$, then $\operatorname{E}_n(R)\subseteq \sigma^{\operatorname{E}_n(R)}$.
\end{corollary}
\begin{proof}
Clearly we have $\sigma_{1n}(\sigma_{11}-\sigma_{1n}\sigma_{11})=0$. Hence, by Proposition \ref{propkey}(iii), the elementary transvection $t_{kl}(a\sigma_{1n}b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{proof}
\begin{corollary}\label{corkeyD}
Let $\sigma\in \operatorname{GL}_n(R)$. Let $x_1,\dots,x_{n-1}\in R$ such that $\sum\limits_{p=1}^{n-1}\sigma_{1p}x_p+\sigma_{1n}=0$. Then for any $2\leq j\leq n$, $k\neq l$ and $a,b\in R$, the elementary transvection $t_{kl}(a\sigma'_{1j}b)$ is a product of $16$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{corollary}
\begin{proof}
One checks easily that $\sum\limits_{p=1}^{n-1}\sigma_{1p}(\sigma'_{pj}-x_p\sigma'_{nj})=0$. Hence, by Proposition \ref{propkey}(i), the elementary transvection $t_{kl}(a(\sigma'_{1j}-x_1\sigma'_{nj})b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$. By Proposition \ref{propkey}(ii), $t_{kl}(ax_1\sigma'_{nj}b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$. Thus $t_{kl}(a\sigma'_{1j}b)$ is a product of $16$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{proof}
\section{RDU over commutative rings}
\begin{theorem}[cf. {\cite[Theorem 12]{preusser2}}]\label{thmcomm}
Suppose that $R$ is commutative. Let $\sigma\in \operatorname{GL}_n(R)$, $i\neq j$, $k\neq l$ and $a\in R$. Then
\begin{enumerate}[(i)]
\itemsep0pt
\item $t_{kl}(a\sigma_{ij})$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$ and
\item $t_{kl}(a(\sigma_{ii}-\sigma_{jj}))$ is a product of $24$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[(i)]
\item Clearly $\sigma_{11}\sigma_{12}-\sigma_{12}\sigma_{11}=0$. Hence, by Proposition \ref{propkey}(i), $t_{kl}(a\sigma_{12})$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$. Assertion (i) now follows from Lemma \ref{lemp}.
\item Clearly the entry of $\sigma^{t_{ji}(-1)}$ at position $(j,i)$ equals $\sigma_{ii}-\sigma_{jj}+\sigma_{ji}-\sigma_{ij}$. Applying (i) to $\sigma^{t_{ji}(-1)}$ we get that $t_{kl}(a(\sigma_{ii}-\sigma_{jj}+\sigma_{ji}-\sigma_{ij}))$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$. Applying (i) to $\sigma$ we get that $t_{kl}(a(\sigma_{ij}-\sigma_{ji}))=t_{kl}(a\sigma_{ij})t_{kl}(-a\sigma_{ji})$ is a product of $8+8=16$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$. It follows that $t_{kl}(a(\sigma_{ii}-\sigma_{jj}))=t_{kl}(a(\sigma_{ii}-\sigma_{jj}+\sigma_{ji}-\sigma_{ij}))t_{kl}(a(\sigma_{ij}-\sigma_{ji}))$ is a product of $24$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{enumerate}
\end{proof}
\section{RDU over von Neumann regular rings}
Recall that $R$ is called {\it von Neumann regular}, if for any $x\in R$ there is a $y\in R$ such that $xyx=x$.
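For example, every division ring is von Neumann regular: for $x\neq 0$ one may take $y=x^{-1}$, and for $x=0$ any $y$ works. More generally, every semisimple ring is von Neumann regular.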
\begin{theorem}\label{thmNeum}
Suppose that $R$ is von Neumann regular. Let $\sigma\in \operatorname{GL}_n(R)$, $i\neq j$, $k\neq l$ and $a,b,c\in R$. Then
\begin{enumerate}[(i)]
\itemsep0pt
\item $t_{kl}(a\sigma_{ij}b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$ and
\item $t_{kl}(a(c\sigma_{ii}-\sigma_{jj}c)b)$ is a product of $24$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[(i)]
\item Choose a $z$ such that $\sigma_{12}z\sigma_{12}=\sigma_{12}$. Then $\sigma_{12}z(\sigma_{11}-\sigma_{12}z\sigma_{11})=0$. By Proposition \ref{propkey}(iii) (applied with $y=\sigma_{12}z$, $x_1=1$, $x_2=-z\sigma_{11}$ and $x_3,\dots,x_n=0$), we get that $t_{kl}(a\sigma_{12}zb')$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$ for any $b'\in R$. Choosing $b'=\sigma_{12}b$ and using $\sigma_{12}z\sigma_{12}=\sigma_{12}$, we conclude that $t_{kl}(a\sigma_{12}b)$ is such a product as well. Assertion (i) now follows from Lemma \ref{lemp}.
\item Clearly the entry of $\sigma^{t_{ji}(-c)}$ at position $(j,i)$ equals $c\sigma_{ii}-\sigma_{jj}c+\sigma_{ji}-c\sigma_{ij}c$. Applying (i) to $\sigma^{t_{ji}(-c)}$ we get that $t_{kl}(a(c\sigma_{ii}-\sigma_{jj}c+\sigma_{ji}-c\sigma_{ij}c)b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$. Applying (i) to $\sigma$ we get that $t_{kl}(a(c\sigma_{ij}c-\sigma_{ji})b)=t_{kl}(ac\sigma_{ij}cb)t_{kl}(-a\sigma_{ji}b)$ is a product of $8+8=16$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$. It follows that $t_{kl}(a(c\sigma_{ii}-\sigma_{jj}c)b)=t_{kl}(a(c\sigma_{ii}-\sigma_{jj}c+\sigma_{ji}-c\sigma_{ij}c)b)t_{kl}(a(c\sigma_{ij}c-\sigma_{ji})b)$ is a product of $24$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{enumerate}
\end{proof}
\section{RDU over Banach algebras}
Recall that a {\it Banach algebra} is an algebra $R$ over the real or complex numbers which is at the same time a {\it Banach space}, i.e. a normed vector space that is complete with respect to the metric induced by the norm. The norm is required to satisfy $\|xy\|\leq \|x\|\,\|y\|$ for any $x,y\in R$.
It follows from \cite[Chapter VII, Lemma 2.1]{conway_book} that any Banach algebra $R$ has the property (1) below where $R^*$ denotes the set of all right invertible elements of $R$.
\begin{equation}
\text{For any }x,z\in R\text{ there is a }y\in R^*\text{ such that } 1+xyz\in R^*.
\end{equation}
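To see why a Banach algebra satisfies (1), note (as one possible argument) that one may take $y=t\cdot 1$ for a sufficiently small real $t>0$: such a $y$ is invertible, and if $t\,\|x\|\,\|z\|<1$, then $\|xyz\|\leq t\,\|x\|\,\|z\|<1$, so $1+xyz$ is invertible with inverse given by the Neumann series $\sum_{k\geq 0}(-xyz)^k$.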
\begin{theorem}\label{thmBan}
Suppose that $R$ is a ring satisfying (1) (which is true e.g. if $R$ is a Banach algebra). Let $\sigma\in \operatorname{GL}_n(R)$, $i\neq j$, $k\neq l$ and $a,b,c\in R$. Then
\begin{enumerate}[(i)]
\itemsep0pt
\item $t_{kl}(a\sigma_{ij}b)$ is a product of $160$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$ and
\item $t_{kl}(a(c\sigma_{ii}-\sigma_{jj}c)b)$ is a product of $480$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[(i)]
\item {\bf Step 1} For $x\in R$ set $\tau:=[\sigma,t_{1n}(x)]$. Then $\tau_{11}=1+\sigma_{11}x\sigma'_{n1}$, and by (1) we may choose $x\in R^*$ such that $\tau_{11}\in R^*$. Let $x^{-1}$ denote a right inverse of $x$. It follows from Corollary \ref{corkeyA} that for any $k'\neq l'$ and $a',b'\in R$, the elementary transvection
\[t_{k'l'}(a'\tau_{1n}b')=t_{k'l'}(a'(\sigma_{11}x\sigma'_{nn}-(1+\sigma_{11}x\sigma'_{n1})x)b')=t_{k'l'}(a'(\sigma_{11}\alpha-1)xb'),\]
where $\alpha=x\sigma'_{nn}x^{-1}-x\sigma'_{n1}$, is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\tau$ and $\tau^{-1}$. Hence
\begin{equation}
t_{k'l'}(a'(\sigma_{11}\alpha-1)b')\text{ is a product of }16~\operatorname{E}_n(R)\text{-conjugates of }\sigma \text{ and }\sigma^{-1}.
\end{equation}
{\bf Step 2} Set $\zeta:=\sigma t_{1n}(-\alpha\sigma_{1n})$ where $\alpha$ is defined as in Step 1. Clearly
\[(t_{1n}(\alpha\sigma_{1n}),\zeta)\xrightarrow{t_{n2}(y)}(t_{12}(-\alpha\sigma_{1n}y),\xi)\]
for any $y\in R$, where $\xi=[t_{n2}(y),\zeta]$. It follows from Lemma \ref{lemredux} that
\begin{equation}
t_{12}(-\alpha\sigma_{1n}y)\xi\text{ is a product of }2~\operatorname{E}_n(R)\text{-conjugates of }\sigma \text{ and }\sigma^{-1}.
\end{equation}
Choose a $y\in R^*$ such that $\xi_{11}=1-\zeta_{1n}y\zeta'_{21}\in R^*$. Let $\xi_{11}^{-1}$ denote a right inverse of $\xi_{11}$. Then $\rho:=\xi t_{12}(-\xi_{11}^{-1}\xi_{12})$ has the property that $\rho_{12}=0$. Clearly $\xi_{12}=-\zeta_{1n}y\zeta'_{22}=-(1-\sigma_{11}\alpha)\sigma_{1n}y\zeta'_{22}$. Hence, by (2),
\begin{equation}
t_{12}(-\xi_{11}^{-1}\xi_{12})\text{ is a product of }16~\operatorname{E}_n(R)\text{-conjugates of }\sigma \text{ and }\sigma^{-1}.
\end{equation}
It follows from (3) and (4) that
\begin{equation}
t_{12}(-\alpha\sigma_{1n}y)\rho\text{ is a product of }18~\operatorname{E}_n(R)\text{-conjugates of }\sigma \text{ and }\sigma^{-1}.
\end{equation}
{\bf Step 3} Let $y^{-1}$ denote a right inverse of $y$. One checks easily that for any $a'',b''\in R$ we have
\[(t_{12}(-\alpha\sigma_{1n}y),\rho)\xrightarrow{t_{23}(y^{-1}b''),t_{21}(a''),t_{31}(-1)}(t_{21}(a''\alpha\sigma_{1n}b''), e)\]
(recall from Step 2 that $\rho_{12}=0$). It follows from Lemma \ref{lemredux} that $t_{21}(a''\alpha\sigma_{1n}b'')$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $t_{12}(-\alpha\sigma_{1n}y)\rho$ and its inverse. Hence, by (5),
\begin{equation}
t_{21}(a''\alpha\sigma_{1n}b'')\text{ is a product of }8\cdot 18=144~\operatorname{E}_n(R)\text{-conjugates of }\sigma \text{ and }\sigma^{-1}.
\end{equation}
It follows from (2) and (6) that $t_{21}(a\sigma_{1n}b)=t_{21}(a(1-\sigma_{11}\alpha)\sigma_{1n}b)t_{21}(a\sigma_{11}\alpha\sigma_{1n}b)$ is a product of $144+16=160$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$. Assertion (i) now follows from Lemma \ref{lemp}.
\item See the proof of Theorem \ref{thmNeum}.
\end{enumerate}
\end{proof}
\section{RDU in the stable range}
Recall that a row vector $u\in {}^m\!R$ is called {\it unimodular} if there is a column vector $v\in R^m$ such that $uv=1$. The {\it stable rank $\operatorname{sr}(R)$} is the least $m\in\mathbb{N}$ such that for any $p\geq m$ and unimodular row $(u_1,\dots,u_{p+1})\in {}^{p+1}\!R$ there are elements $x_1,\dots,x_p\in R$ such that $(u_1+u_{p+1}x_1,\dots,u_p+u_{p+1}x_p)\in {}^p\!R$ is unimodular (if no such $m$ exists, then $\operatorname{sr}(R)=\infty$).
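For example, any division ring $R$ satisfies $\operatorname{sr}(R)=1$: given a unimodular row $(u_1,\dots,u_{p+1})$, if some $u_i$ with $i\leq p$ is nonzero, one may take all $x_j=0$; otherwise $u_{p+1}\neq 0$, and $x_1=u_{p+1}^{-1}$, $x_2=\dots=x_p=0$ produces the unimodular row $(1,0,\dots,0)$.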
Define $E^*_n(R):=\langle t_{kl}(x)\mid x\in R,k\neq l, k\neq 1, l\neq n\rangle$.
\begin{lemma}\label{lemsr}
Suppose that $\operatorname{sr}(R)<n$ and let $\sigma\in \operatorname{GL}_n(R)$. Then there is a $\rho\in E^*_n(R)$ such that the row vector $((\sigma^{\rho})_{11},\dots,(\sigma^{\rho})_{1,\operatorname{sr}(R)})$ is unimodular.
\end{lemma}
\begin{proof}
Since $\operatorname{sr}(R)<n$, there is a $\rho$ of the form
\[\rho=(\prod\limits_{j=1}^{n-1}t_{nj}(*))(\prod\limits_{j=1}^{n-2}t_{n-1,j}(*))\dots (\prod\limits_{j=1}^{\operatorname{sr}(R)}t_{\operatorname{sr}(R)+1,j}(*))\in E^*_n(R)\]
such that $((\sigma\rho)_{11},\dots,(\sigma\rho)_{1,\operatorname{sr}(R)})$ is unimodular. Clearly $((\sigma^{\rho})_{11},\dots,(\sigma^{\rho})_{1,\operatorname{sr}(R)})=((\sigma\rho)_{11},\dots,(\sigma\rho)_{1,\operatorname{sr}(R)})$ since $\rho\in E^*_n(R)$ and hence left multiplication by $\rho^{-1}$ does not change the first row.
\end{proof}
\begin{theorem}\label{thmsr=1}
Suppose that $\operatorname{sr}(R)=1$. Let $\sigma\in \operatorname{GL}_n(R)$, $i\neq j$, $k\neq l$ and $a,b,c\in R$. Then
\begin{enumerate}[(i)]
\itemsep0pt
\item $t_{kl}(a\sigma_{ij}b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$ and
\item $t_{kl}(a(c\sigma_{ii}-\sigma_{jj}c)b)$ is a product of $24$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[(i)]
\item By Lemma \ref{lemsr} there is a $\rho\in E^*_n(R)$ such that $\hat\sigma_{11}$ is right invertible where $\hat\sigma=\sigma^{\rho}$. It follows from Corollary \ref{corkeyA} that $t_{kl}(a\hat\sigma_{1n}b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\hat\sigma$ and $\hat\sigma^{-1}$. Clearly $\hat\sigma_{1n}=\sigma_{1n}$ since $\rho\in E^*_n(R)$. Moreover, any $\operatorname{E}_n(R)$-conjugate of $\hat\sigma$ or $\hat\sigma^{-1}$ is an $\operatorname{E}_n(R)$-conjugate of $\sigma$ or $\sigma^{-1}$. Hence $t_{kl}(a\sigma_{1n}b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$. Assertion (i) now follows from Lemma \ref{lemp}.
\item See the proof of Theorem \ref{thmNeum}.
\end{enumerate}
\end{proof}
\begin{theorem}
Suppose that $1<\operatorname{sr}(R)<n$. Let $\sigma\in \operatorname{GL}_n(R)$, $i\neq j$, $k\neq l$ and $a,b,c\in R$. Then
\begin{enumerate}[(i)]
\itemsep0pt
\item $t_{kl}(a\sigma_{ij}b)$ is a product of $16$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$ and
\item $t_{kl}(a(c\sigma_{ii}-\sigma_{jj}c)b)$ is a product of $48$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[(i)]
\item By Lemma \ref{lemsr} there is a $\rho\in E^*_n(R)$ such that $(\hat\sigma_{11},\dots,\hat\sigma_{1,\operatorname{sr}(R)})$ is unimodular where $\hat\sigma=\sigma^{\rho}$. Clearly there are $x_1,\dots,x_{\operatorname{sr}(R)}\in R$ such that $\sum\limits_{p=1}^{\operatorname{sr}(R)}\hat\sigma_{1p}x_p+\hat\sigma_{1n}=0$. It follows from Corollary \ref{corkeyD} that $t_{kl}(a\hat\sigma'_{1n}b)$ is a product of $16$ $\operatorname{E}_n(R)$-conjugates of $\hat\sigma$ and $\hat\sigma^{-1}$. Clearly $\hat\sigma'_{1n}=\sigma'_{1n}$ since $\rho\in E^*_n(R)$. Moreover, any $\operatorname{E}_n(R)$-conjugate of $\hat\sigma$ or $\hat\sigma^{-1}$ is an $\operatorname{E}_n(R)$-conjugate of $\sigma$ or $\sigma^{-1}$. Hence $t_{kl}(a\sigma'_{1n}b)$ is a product of $16$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$. Assertion (i) now follows from Lemma \ref{lemp} (after swapping the roles of $\sigma$ and $\sigma^{-1}$).
\item See the proof of Theorem \ref{thmNeum}.
\end{enumerate}
\end{proof}
\section{RDU over rings with Euclidean algorithm}
We call $R$ a {\it ring with $m$-term Euclidean algorithm} if for any row vector $v\in {}^m\!R$ there is a $\tau\in \operatorname{E}_m(R)$ such that $v\tau$ has a zero entry. Note that the rings with $2$-term Euclidean algorithm are precisely the right quasi-Euclidean rings defined in \cite{alahmadi} (follows from \cite[Theorem 11]{alahmadi}).
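For example, $\mathbb{Z}$ is a ring with $2$-term Euclidean algorithm: right multiplication by the matrices $t_{12}(q)$ and $t_{21}(q)$ realises the division steps of the classical Euclidean algorithm, e.g.
\[(12,5)\,t_{21}(-2)=(2,5),\qquad (2,5)\,t_{12}(-2)=(2,1),\qquad (2,1)\,t_{21}(-2)=(0,1),\]
so a zero entry is reached after finitely many steps.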
If $\tau\in \operatorname{GL}_m(R)$ for some $1\leq m<n$, then we identify $\tau$ with its image in $\operatorname{GL}_n(R)$ under the embedding
\begin{align*}
\operatorname{GL}_m(R)&\hookrightarrow \operatorname{GL}_n(R)\\
\sigma&\mapsto\begin{pmatrix}e_{(n-m)\times(n-m)}&0\\0&\sigma\end{pmatrix}
\end{align*}
\begin{lemma}\label{lemeucl}
Let $\sigma\in \operatorname{GL}_n(R)$ and suppose $(\sigma\tau)_{1n}=0$ for some $\tau\in \operatorname{E}_m(R)$ where $1\leq m<n$. Then for any $2\leq j\leq n$, $k\neq l$ and $a,b\in R$,
\begin{enumerate}[(i)]
\itemsep0pt
\item $t_{kl}(a\sigma'_{1j}b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$ if $j\in \{2,\dots,n-m\}$ and
\item $t_{kl}(a\sigma'_{1j}b)$ is a product of $8m$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$ if $j\in \{n-m+1,\dots,n\}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Set $\xi:=\sigma^{\tau}$. Then $\xi_{1n}=0$. It follows from Corollary \ref{corkeyB} that for any $2\leq j\leq n$, $k\neq l$ and $a,b\in R$,
\begin{equation}
t_{kl}(a\xi'_{1j}b)\text{ is a product of }8~ \operatorname{E}_n(R)\text{-conjugates of }\sigma\text{ and }\sigma^{-1}.
\end{equation}
Clearly $\sigma^{-1}=(\xi^{-1})^{\tau^{-1}}$ and hence
\begin{equation}
\sigma'_{1j}=\begin{cases}
\xi'_{1j},&\text{ if } j\in \{2,\dots,n-m\},\\
\sum\limits_{p=n-m+1}^{n}\xi'_{1p}\tau'_{pj},&\text{ if } j\in \{n-m+1,\dots,n\}.
\end{cases}
\end{equation}
The assertion of the lemma follows from (7) and (8).
\end{proof}
\begin{theorem}\label{thmeucl_1}
Suppose that $R$ is a ring with $m$-term Euclidean algorithm for some $1\leq m\leq n-2$. Let $\sigma\in \operatorname{GL}_n(R)$, $i\neq j$, $k\neq l$ and $a,b,c\in R$. Then
\begin{enumerate}[(i)]
\itemsep0pt
\item $t_{kl}(a\sigma_{ij}b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$ and
\item $t_{kl}(a(c\sigma_{ii}-\sigma_{jj}c)b)$ is a product of $24$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[(i)]
\item Since $R$ is a ring with $m$-term Euclidean algorithm, there is a $\tau\in \operatorname{E}_m(R)$ such that $(\sigma\tau)_{1n}=0$. It follows from Lemma \ref{lemeucl} that $t_{kl}(a\sigma'_{12}b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$ (note that $n-m\geq 2$). Assertion (i) now follows from Lemma \ref{lemp} (after swapping the roles of $\sigma$ and $\sigma^{-1}$).
\item See the proof of Theorem \ref{thmNeum}.
\end{enumerate}
\end{proof}
\begin{theorem}\label{thmeucl_2}
Suppose that $R$ is a ring with $n-1$-term Euclidean algorithm. Let $\sigma\in \operatorname{GL}_n(R)$, $i\neq j$, $k\neq l$ and $a,b,c\in R$. Then
\begin{enumerate}[(i)]
\itemsep0pt
\item $t_{kl}(a\sigma_{ij}b)$ is a product of $8(n-1)$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$ and
\item $t_{kl}(a(c\sigma_{ii}-\sigma_{jj}c)b)$ is a product of $24(n-1)$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[(i)]
\item Since $R$ is a ring with $n-1$-term Euclidean algorithm, there is a $\tau\in \operatorname{E}_{n-1}(R)$ such that $(\sigma\tau)_{1n}=0$. It follows from Lemma \ref{lemeucl} that $t_{kl}(a\sigma'_{12}b)$ is a product of $8(n-1)$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$. Assertion (i) now follows from Lemma \ref{lemp} (after swapping the roles of $\sigma$ and $\sigma^{-1}$).
\item See the proof of Theorem \ref{thmNeum}.
\end{enumerate}
\end{proof}
Theorems \ref{thmeucl_1} and \ref{thmeucl_2} show that if $\sigma\in \operatorname{GL}_n(R)$ where $R$ is a ring with $m$-term Euclidean algorithm for some $1\leq m\leq n-1$, then the nondiagonal entries of $\sigma$ can be ``extracted'' (i.e. the matrices $t_{kl}(\sigma_{ij})~(k\neq l, i\neq j)$ lie in $\sigma^{\operatorname{E}_n(R)}$). Does this also hold for $m=n$? The author does not know the answer to this question. However, if $R$ is a ring with {\it strong} $n$-term Euclidean algorithm (see Definition \ref{defstrong} below), then the nondiagonal entries of $\sigma$ can be extracted, as Theorem \ref{thmeucl_3} shows.
\begin{definition}\label{defstrong}
We call $R$ a {\it ring with strong $m$-term Euclidean algorithm} if for any row vector $v\in {}^m\!R$ there is a $\tau\in \operatorname{E}_m(R)$ with $\tau_{11}=1$ such that $(v\tau)_{m}=0$.
\end{definition}
\begin{remark}
If $R$ is a ring with strong $m$-term Euclidean algorithm, then obviously $R$ is also a ring with $m$-term Euclidean algorithm.
\end{remark}
\begin{theorem}\label{thmeucl_3}
Suppose that $R$ is a ring with strong $n$-term Euclidean algorithm. Let $\sigma\in \operatorname{GL}_n(R)$, $i\neq j$, $k\neq l$ and $a,b,c\in R$. Then
\begin{enumerate}[(i)]
\itemsep0pt
\item $t_{kl}(a\sigma_{ij}b)$ is a product of $80(n-1)$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$ and
\item $t_{kl}(a(c\sigma_{ii}-\sigma_{jj}c)b)$ is a product of $240(n-1)$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[(i)]
\item Since $R$ is a ring with strong $n$-term Euclidean algorithm, there is a $\tau\in \operatorname{E}_{n}(R)$ with $\tau_{11}=1$ such that $(\sigma\tau)_{1n}=0$. Clearly the matrix $\tau\prod\limits_{j=2}^{n-1}t_{1j}(-\tau_{1j})$ has the same properties as $\tau$. Hence we may assume that $\tau_{1j}=0~(j=2,\dots,n-1)$. \\
{\bf Step 1}
Clearly $\tau=\rho t_{1n}(\tau_{1n})$ for some $\rho\in \operatorname{E}_n(R)$ with trivial first row. Set $\xi:=t_{1n}(\tau_{1n})\sigma^{\tau}=\rho^{-1}\sigma\tau$. Then $\xi_{1n}=0$. One checks easily that
\[(t_{1n}(-\tau_{1n}),\xi)\xrightarrow{t_{n2}(b'),t_{31}(a'),t_{21}(-1)}(t_{31}(a'\tau_{1n}b'), e)\]
for any $a',b'\in R$. It follows from Lemma \ref{lemp} and Lemma \ref{lemredux} that
\begin{equation}
t_{k'l'}(a'\tau_{1n}b')\text{ is a product of }8~\operatorname{E}_n(R)\text{-conjugates of }\sigma\text{ and }\sigma^{-1}
\end{equation}
for any $k'\neq l'$ and $a',b'\in R$.\\
{\bf Step 2} Let $\xi$ be defined as in Step 1. Corollary \ref{corkeyB} implies that for any $2\leq j\leq n$, $k''\neq l''$ and $a'',b''\in R$, the matrix $t_{k''l''}(a''\xi'_{1j}b'')$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\xi$ and $\xi^{-1}$. Hence, by (9),
\begin{equation}
t_{k''l''}(a''\xi'_{1j}b'')\text{ is a product of }8\cdot 9=72~ \operatorname{E}_n(R)\text{-conjugates of }\sigma\text{ and }\sigma^{-1}.
\end{equation}
{\bf Step 3} Clearly $\sigma^{-1}=\tau\xi^{-1}\rho^{-1}$ and hence $\sigma'_{12}=\sum\limits_{j=2}^{n}(\xi'_{1j}+\tau_{1n}\xi'_{nj})\rho'_{j2}$. It follows from (9) and (10) that $t_{kl}(a\sigma'_{12}b)$ is a product of $(72+8)(n-1)=80(n-1)$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$. Assertion (i) now follows from Lemma \ref{lemp} (after swapping the roles of $\sigma$ and $\sigma^{-1}$).
\item See the proof of Theorem \ref{thmNeum}.
\end{enumerate}
\end{proof}
\section{RDU over almost commutative rings}
In this subsection $C$ denotes the center of $R$.
\begin{definition}
Let $x\in R$. An element $z\in C$ is called a {\it central multiple} of $x$ if $z=xy=yx$ for some $y\in R$. The ideal of $C$ consisting of all central multiples of $x$ is denoted by $\mathcal{O}(x)$.
\end{definition}
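For example, if $x$ is invertible, then $\mathcal{O}(x)=C$: for $z\in C$ the element $y:=x^{-1}z$ satisfies $xy=z$ and $yx=x^{-1}zx=z$ since $z$ is central. At the other extreme, if $R$ is commutative, then $\mathcal{O}(x)=xR$.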
\begin{lemma}\label{lemalmcom1}
Let $\sigma\in \operatorname{GL}_n(R)$ and $z\in \mathcal{O}(\sigma_{11})$. Then for any $k\neq l$ and $a,b\in R$, $t_{kl}(az\sigma_{12}b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{lemma}
\begin{proof}
Since $z\in \mathcal{O}(\sigma_{11})$, there is a $y\in R$ such that $z=\sigma_{11}y=y\sigma_{11}$. Clearly $\sigma_{11}y\sigma_{12}-\sigma_{12}\sigma_{11}y=0$. It follows from Proposition \ref{propkey}(i) (applied with $x_1=y\sigma_{12}$, $x_2=-\sigma_{11}y$ and $x_3,\dots,x_n=0$, noting that $az\sigma_{12}b=(a\sigma_{11})y\sigma_{12}b$) that $t_{kl}(az\sigma_{12}b)$ is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{proof}
Set $\operatorname{E}^{**}_n(R):=\langle t_{k1}(x)\mid x\in R,k\neq 1\rangle$.
\begin{lemma}\label{lemalmcom2}
Let $\sigma\in \operatorname{GL}_n(R)$. Suppose that there are $\tau_p\in E^{**}_n(R)~(1\leq p\leq q)$ and $z_p\in \mathcal{O}((\sigma^{\tau_p})_{11})~(1\leq p\leq q)$ such that $\sum\limits_{p=1}^{q}z_p=1$. Then for any $k\neq l$ and $a,b\in R$, $t_{kl}(a\sigma_{12}b)$ is a product of $8q$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{lemma}
\begin{proof}
The previous lemma implies that for any $1\leq p \leq q$, the elementary transvection $t_{kl}(az_p(\sigma^{\tau_{p}})_{12}b)=t_{kl}(az_p\sigma_{12}b)$ (for this equation we have used that $\tau_{p}\in E^{**}_n(R)$) is a product of $8$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$ (note that any $\operatorname{E}_n(R)$-conjugate of $\sigma^{\tau_{p}}$ or its inverse is also an $\operatorname{E}_n(R)$-conjugate of $\sigma$ or $\sigma^{-1}$). It follows that $t_{kl}(a\sigma_{12}b)=t_{kl}(az_1\sigma_{12}b)\dots t_{kl}(az_q\sigma_{12}b)$ is a product of $8q$ $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{proof}
Recall that $R$ is called {\it almost commutative} if it is finitely generated as a $C$-module. We will show that if $R$ is almost commutative, then the requirements of Lemma \ref{lemalmcom2} are always satisfied.
We denote by $\operatorname{Max}(C)$ the set of all maximal ideals of $C$. If $\mathfrak{m}\in \operatorname{Max}(C)$, then we denote by $R_{\mathfrak{m}}$ the localisation of $R$ with respect to the multiplicative set $S_{\mathfrak{m}}:= C\setminus \mathfrak{m}$. We denote by $Q_{\mathfrak{m}}$ the quotient $R_{\mathfrak{m}}/\operatorname{Rad}(R_{\mathfrak{m}})$ where $\operatorname{Rad}(R_{\mathfrak{m}})$ is the Jacobson radical of $R_{\mathfrak{m}}$. Moreover, we denote the canonical ring homomorphism $R\rightarrow Q_{\mathfrak{m}}$ by $\phi_{\mathfrak{m}}$.
\begin{lemma}\label{lemalmcom3}
If $R$ is almost commutative, then for any $\mathfrak{m}\in \operatorname{Max}(C)$, $\phi_{\mathfrak{m}}$ is surjective and $\operatorname{sr}(Q_{\mathfrak{m}})=1$.
\end{lemma}
\begin{proof}
First we show that $\phi_\mathfrak{m}$ is surjective. Set $C_\mathfrak{m}:=S_\mathfrak{m}^{-1}C$. Clearly $R_\mathfrak{m}$ is finitely generated as a $C_\mathfrak{m}$-module. Hence, by \cite[Corollary 5.9]{lam_book}, we have $\mathfrak{m}=\operatorname{Rad}(C_\mathfrak{m})\subseteq \operatorname{Rad} (R_\mathfrak{m})$. Let now $\frac{r}{s}\in R_\mathfrak{m}$. Since $sC+\mathfrak{m}=C$, there is a $c\in C$ and an $m\in\mathfrak{m}$ such that $sc+m=1$. Multiplying this equality by $\frac{r}{s}$ we get $\frac{r}{s}=rc+\frac{rm}{s}\in rc+\operatorname{Rad}(R_\mathfrak{m})$. Thus $\phi_{\mathfrak{m}}$ is surjective.\\
Next we show that $\operatorname{sr}(Q_{\mathfrak{m}})=1$. Clearly $Q_\mathfrak{m}=R_\mathfrak{m}/\operatorname{Rad}(R_{\mathfrak{m}})$ is finitely generated as a $C_\mathfrak{m}/\operatorname{Rad}(C_{\mathfrak{m}})$-module. But $C_\mathfrak{m}/\operatorname{Rad}(C_{\mathfrak{m}})$ is a field. Hence $Q_\mathfrak{m}$ is a semisimple ring (see \cite[\S 7]{lam_book}) and therefore $\operatorname{sr}(Q_{\mathfrak{m}})=1$ (see \cite[\S6]{bass}).
\end{proof}
\begin{lemma}\label{lemalmcom4}
Suppose that $R$ is almost commutative. Let $u=(u_1,\dots,u_n)\in {}^n\!R$ be a unimodular row. Then there are matrices $\tau_{\mathfrak{m}}\in \operatorname{E}^{**}_n(R)~(\mathfrak{m}\in \operatorname{Max}(C))$ such that $C=\sum\limits_{\mathfrak{m}\in \operatorname{Max}(C)}\mathcal{O}((u\tau_{\mathfrak{m}})_1)$.
\end{lemma}
\begin{proof}
Let $\mathfrak{m}\in\operatorname{Max}(C)$ and set $\hat u:=\phi_\mathfrak{m}(u)\in {}^n\!(Q_{\mathfrak{m}})$. Since $\operatorname{sr}(Q_{\mathfrak{m}})=1$ and $\hat u$ is unimodular, there is a $\hat\tau_{\mathfrak{m}}\in \operatorname{E}_n(Q_{\mathfrak{m}})$ such that $(\hat u\hat\tau_{\mathfrak{m}})_1$ is invertible (note that rings of stable rank $1$ are Dedekind finite, see e.g. \cite[Lemma 1.7]{lam}). Clearly $\hat\tau_{\mathfrak{m}}$ can be chosen to be in $E_n^{**}(Q_\mathfrak{m})$ (follows from \cite[Theorem 1]{vaserstein_sr}). Since $\phi_{\mathfrak{m}}$ is surjective, it induces a surjective homomorphism $\operatorname{E}^{**}_n(R)\rightarrow\operatorname{E}^{**}_n(Q_{\mathfrak{m}})$ which we also denote by $\phi_\mathfrak{m}$. Choose a $\tau_\mathfrak{m}\in E^{**}_n(R)$ such that $\phi_\mathfrak{m}(\tau_\mathfrak{m})=\hat\tau_\mathfrak{m}$.\\
Since $\phi_{\mathfrak{m}}((u\tau_{\mathfrak{m}})_1)=(\hat u\hat\tau_{\mathfrak{m}})_1$ is invertible in $Q_\mathfrak{m}$, $(u\tau_{\mathfrak{m}})_1$ is invertible in $R_{\mathfrak{m}}$. Write $((u\tau_{\mathfrak{m}})_1)^{-1}=\frac{x}{s}$ where $x\in R$ and $s\in S_{\mathfrak{m}}$. Then
\begin{align*}
\frac{(u\tau_{\mathfrak{m}})_1}{1}\frac{x}{s}=\frac{1}{1}~\Leftrightarrow~\exists t\in S_{\mathfrak{m}}:t(u\tau_{\mathfrak{m}})_1x=ts
\end{align*}
and
\begin{align*}
\frac{x}{s}\frac{(u\tau_{\mathfrak{m}})_1}{1}=\frac{1}{1}~\Leftrightarrow~\exists t'\in S_{\mathfrak{m}}:t'x(u\tau_{\mathfrak{m}})_1=t's.
\end{align*}
Clearly $z_{\mathfrak{m}}:=(u\tau_{\mathfrak{m}})_1tt'x=tt's=tt'x(u\tau_{\mathfrak{m}})_1\in \mathcal{O}((u\tau_{\mathfrak{m}})_1)\cap S_{\mathfrak{m}}$. We have shown that for any $\mathfrak{m}\in\operatorname{Max}(C)$ there is a $\tau_{\mathfrak{m}}\in \operatorname{E}^{**}_n(R)$ and a $z_{\mathfrak{m}}\in \mathcal{O}((u\tau_{\mathfrak{m}})_1)$ such that $z_{\mathfrak{m}}\not\in \mathfrak{m}$. The assertion of the lemma follows.
\end{proof}
\begin{corollary}\label{coralmcom}
Suppose that $R$ is almost commutative and let $\sigma\in \operatorname{GL}_n(R)$. Then there is a $q\leq |\operatorname{Max}(C)|$, $\tau_p\in E^{**}_n(R)~(1\leq p\leq q)$ and $z_p\in \mathcal{O}((\sigma^{\tau_{_p}})_{11})~(1\leq p\leq q)$ such that $\sum\limits_{p=1}^{q}z_p=1$.
\end{corollary}
\begin{proof}
By Lemma \ref{lemalmcom4} there are matrices $\tau_{\mathfrak{m}}\in E^{**}_n(R)~(\mathfrak{m}\in \operatorname{Max}(C))$ such that
\[C=\sum\limits_{\mathfrak{m}\in \operatorname{Max}(C)}\mathcal{O}((\sigma_{1*}\tau_{\mathfrak{m}})_1)=\sum\limits_{\mathfrak{m}\in \operatorname{Max}(C)}\mathcal{O}((\sigma\tau_{\mathfrak{m}})_{11})=\sum\limits_{\mathfrak{m}\in \operatorname{Max}(C)}\mathcal{O}((\sigma^{\tau_{\mathfrak{m}}})_{11})\]
(for the last equation we have used that $\tau_{\mathfrak{m}}\in E^{**}_n(R)$). The assertion of the corollary follows.
\end{proof}
\begin{theorem}\label{thmalmcom}
Suppose that $R$ is almost commutative. Let $\sigma\in \operatorname{GL}_n(R)$, $i\neq j$, $k\neq l$ and $a,b,c\in R$. Then
\begin{enumerate}[(i)]
\itemsep0pt
\item $t_{kl}(a\sigma_{ij}b)$ is a finite product of $8|\operatorname{Max}(C)|$ or less $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$ and
\item $t_{kl}(a(c\sigma_{ii}-\sigma_{jj}c)b)$ is a finite product of $24|\operatorname{Max}(C)|$ or less $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[(i)]
\item It follows from Lemma \ref{lemalmcom2} and Corollary \ref{coralmcom} that $t_{kl}(a\sigma_{12}b)$ is a product of $8q$ elementary $\sigma$-conjugates for some $q\leq|\operatorname{Max}(C)|$. Assertion (i) now follows from Lemma \ref{lemp}.
\item See the proof of Theorem \ref{thmNeum}.
\end{enumerate}
\end{proof}
\section{Open problems}
\begin{definition}
Let $m\in\mathbb{N}$. Then $m$ is called an {\it RDU bound for $\operatorname{GL}_n(R)$} if for any $\sigma\in\operatorname{GL}_n(R)$, $k\neq l$ and $i\neq j$ the matrix $t_{kl}(\sigma_{ij})$ is a product of $m$ or less $\operatorname{E}_n(R)$-conjugates of $\sigma$ and $\sigma^{-1}$. If $m$ is an RDU bound for $\operatorname{GL}_n(R)$ for any $n\geq 3$, then $m$ is called an {\it RDU bound for $R$}. If $\mathfrak{C}$ is a class of rings and $m$ is an RDU bound for $R$ for any $R\in \mathfrak{C}$, then $m$ is called an {\it RDU bound for $\mathfrak{C}$}.
\end{definition}
It follows from Sections 5, 6 and 8 that $8$ is an RDU bound for the class of commutative rings, the class of von Neumann regular rings, and the class of rings of stable rank $1$. It follows from Sections 7 and 9 that $160$ is an RDU bound for the class of Banach algebras and $16$ is an RDU bound for the class of quasi-Euclidean rings. Note that Section 10 does not yield an RDU bound for the class of almost commutative rings since the maximal spectrum of an almost commutative ring might be infinite.
It is natural to ask the following two questions.
\begin{question}\label{Q1}
Which rings have an RDU bound?
\end{question}
\begin{question}\label{Q2}
What is the optimal (i.e. smallest) RDU bound for the class of commutative rings (resp. von Neumann regular rings, rings of stable rank $1$ etc.)?
\end{question}
Using a computer program, the author found that $2$ is the optimal RDU bound for $\operatorname{GL}_3(\mathbb{F}_2)$ and $\operatorname{GL}_3(\mathbb{F}_3)$ where $\mathbb{F}_2$ and $\mathbb{F}_3$ denote the fields of $2$ and $3$ elements, respectively. He does not know the optimal bounds in any other case.
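For readers who wish to experiment, the following Python sketch illustrates such a brute-force verification for $\operatorname{GL}_3(\mathbb{F}_2)$. It is an illustrative reconstruction rather than the author's original program; note that over $\mathbb{F}_2$ one has $\operatorname{E}_3(\mathbb{F}_2)=\operatorname{GL}_3(\mathbb{F}_2)$, so the elementary conjugates of $\sigma$ form its full conjugacy class.
\begin{verbatim}
# Brute-force check that 2 is an RDU bound for GL_3(F_2) (168 matrices).
# An illustrative reconstruction, not the author's original program.
import itertools
import numpy as np

def inv2(M):
    # over F_2 every invertible matrix has det = 1, so the inverse
    # equals the adjugate reduced mod 2
    return np.round(np.linalg.inv(M) * np.linalg.det(M)).astype(int) % 2

GL = [np.array(b, dtype=int).reshape(3, 3)
      for b in itertools.product((0, 1), repeat=9)
      if round(np.linalg.det(np.array(b, dtype=float).reshape(3, 3))) % 2 == 1]

def has_rdu_bound_2(sigma):
    si = inv2(sigma)
    conj, seen = [], set()
    for tau in GL:                      # E_3(F_2)-conjugates of sigma, sigma^{-1}
        ti = inv2(tau)
        for m in (sigma, si):
            c = tau @ m @ ti % 2
            if c.tobytes() not in seen:
                seen.add(c.tobytes()); conj.append(c)
    prods = set(seen) | {(a @ b % 2).tobytes() for a in conj for b in conj}
    for i, j in itertools.permutations(range(3), 2):
        if sigma[i, j] == 0:            # t_kl(0) = 1 needs no conjugates
            continue
        for k, l in itertools.permutations(range(3), 2):
            t = np.eye(3, dtype=int); t[k, l] = 1
            if t.tobytes() not in prods:
                return False
    return True

print(all(has_rdu_bound_2(s) for s in GL))   # True: 2 is an RDU bound
\end{verbatim}
The analogous check using single conjugates only fails, consistent with the optimality of the bound $2$ stated above.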
\begin{lemma}
Suppose that $1$ is an RDU bound for a ring $R$. Then $\operatorname{GL}_n(R)=\operatorname{E}_n(R)$ for any $n\geq 3$.
\end{lemma}
\begin{proof}
If $n\geq 3$ and $\sigma\in \operatorname{GL}_n(R)$, then, by the definition of an RDU bound, $\sigma=t_{12}(\sigma_{12})^{\tau}$ or $\sigma^{-1}=t_{12}(\sigma_{12})^{\tau}$ for some $\tau\in \operatorname{E}_n(R)$. Hence $\sigma\in \operatorname{E}_n(R)$.
\end{proof}
Since $\operatorname{E}_n(R)\subseteq\operatorname{SL}_n(R)\neq \operatorname{GL}_n(R)$ if $R$ is a field with more than two elements, we get the following result.
\begin{theorem}
The optimal RDU bound for the class of commutative rings (resp. von Neumann regular rings, rings of stable rank $1$) lies between $2$ and $8$.
\end{theorem}
The author thinks that the following question is interesting too (compare Lemma \ref{lemeucl}).
\begin{question}\label{Q3}
Let $\sigma\in \operatorname{GL}_n(R)$ and suppose $(\sigma\tau)_{1n}=0$ for some $\tau\in E_n(R)$. Can one extract some nondiagonal entry of $\sigma$ or $\sigma^{-1}$? (I.e., is it true that $t_{kl}(\sigma_{ij})\in \sigma^{\operatorname{E}_n(R)}$ or $t_{kl}(\sigma'_{ij})\in \sigma^{\operatorname{E}_n(R)}$ for some $k\neq l$ and $i\neq j$?)
\end{question}
A positive answer to Question \ref{Q3} would have the interesting consequence that for any $\sigma$ lying in the elementary subgroup $\operatorname{E}_n(R)$, one could extract some nondiagonal entry of $\sigma$ or $\sigma^{-1}$ (since $(\sigma\sigma^{-1})_{1n}=0$).
\section{appendix}
\label{app}
Here we give details about the static Bogoliubov transformation, the classical model, the Floquet theory and Mathieu functions.
\subsection{Static Bogoliubov transformation}
For the time-independent case, Hamiltonian (2) in the main text can be expressed as
\begin{equation}
H_q= v_F q\left[(1+g_4)\left(2 J_{0,q}-1\right)+ g_2 \left(J_{+,q}+J_{-,q}\right)\right]
\end{equation}
where $2J_{0,q}=b_{Rq}^{\dagger}b_{Rq}+b_{Lq}b_{Lq}^{\dagger}$, $J_{+,q}=b_{Rq}^{\dagger}b_{Lq}^{\dagger}$, $J_{-,q}=J_{+,q}^{\dagger}$. For the sake of simplicity, in the following we will drop the index $q$. We observe that the $J_i$ $(i=0,\pm)$ form an $su(1,1)$ algebra; therefore the Tomonaga-Luttinger Hamiltonian can be diagonalized by the Schrieffer-Wolff transformation obtained through the unitary operator
\begin{equation}
\label{eq:Schrieffer_Wolff}
U=e^{z}\qquad z=\vartheta\left(J_+-J_-\right)
\end{equation}
\begin{equation}
\begin{split}
H & \longrightarrow \tilde{H}=UHU^{-1}\\
|\rm{GS}\rangle & \longrightarrow |\widetilde{\rm{GS}}\rangle=U|\rm{GS}\rangle \hspace{2pt}.
\end{split}
\end{equation}
The transformed Hamiltonian $\tilde{H}$ is diagonal in the old bosonic basis
\begin{equation}
\begin{split}
\tilde{H}&=\Delta\left(b_R^{\dagger}b_R+b_L^{\dagger}b_L\right)+\Delta- v_F q (1+g_4)\\
&=2\Delta J_0- v_F q (1+g_4) \hspace{10pt}
\end{split}
\end{equation}
with $\Delta= v_F q\sqrt{(1+g_4)^2-g_2^2}$ if $\tanh{(2\vartheta)}=\tanh{(2\bar{\vartheta})}\equiv g_2/(1+g_4)$, implying $\cosh(2\bar{\vartheta})=v_Fq(1+g_4)/\Delta$, $\sinh(2\bar{\vartheta})=v_Fqg_2/\Delta$.\\
Notice that we have used a passive transformation, where the operators are rotated, while the states defined by the original creation and annihilation operators stay the same.
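As a quick numerical illustration (a Python sketch; the values of $v_F$, $q$, $g_2$, $g_4$ are arbitrary), the hyperbolic rotation by $\bar\vartheta$ indeed diagonalizes the $2\times 2$ Bogoliubov block of $H_q$ and reproduces $\Delta$:
\begin{verbatim}
import numpy as np

vF, q, g4, g2 = 1.0, 0.7, 0.3, 0.2           # arbitrary illustrative values
eps, lam = vF * q * (1 + g4), vF * q * g2    # diagonal and pairing coefficients

theta = 0.5 * np.arctanh(lam / eps)          # tanh(2 theta) = g2 / (1 + g4)
c, s = np.cosh(theta), np.sinh(theta)
S = np.array([[c, -s], [-s, c]])             # hyperbolic (Bogoliubov) rotation

M = np.array([[eps, lam], [lam, eps]])       # block acting on (b_R, b_L^dagger)
print(S @ M @ S)                             # -> diag(Delta, Delta)
print(vF * q * np.sqrt((1 + g4)**2 - g2**2)) # Delta from the closed formula
\end{verbatim}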
\subsection{Classical model}
The Hamiltonian
\begin{equation}
H(t)=(A+B(t))\left(a^{\dagger}a+b^{\dagger}b\right)+C(t)\left(ab+a^{\dagger}b^{\dagger}\right)
\end{equation}
with real coefficients $A, B(t), C(t)$ can be mapped to the classical model
\begin{equation}\label{eq:Ham_cl}
H(t)=\frac{1}{2}(A+B(t))\left(x^2+y^2+p_x^2+p_y^2\right)+C(t)\left(xy-p_xp_y\right)
\end{equation}
with the substitution
\begin{equation}
a=\frac{1}{\sqrt{2}}\left(x+\imath p_x\right)\qquad b=\frac{1}{\sqrt{2}}\left(y+\imath p_y\right) \hspace{2pt}.
\end{equation}
Hamilton's equations for the Hamiltonian (\ref{eq:Ham_cl}) are
\begin{eqnarray}
&\dot{x}=\frac{\partial H}{\partial p_x}=\left(A+B(t)\right)p_x-C(t)p_y \label{eq:Hamilton_x}\\
&\dot{y}=\frac{\partial H}{\partial p_y}=\left(A+B(t)\right)p_y-C(t)p_x \label{eq:Hamilton_y}\\
&\dot{p_x}=-\frac{\partial H}{\partial x}=-\left(A+B(t)\right)x-C(t)y \label{eq:Hamilton_px}\\
&\dot{p_y}=-\frac{\partial H}{\partial y}=-\left(A+B(t)\right)y-C(t)x \label{eq:Hamilton_py} \hspace{2pt}.
\end{eqnarray}
By summing equations (\ref{eq:Hamilton_x}) and (\ref{eq:Hamilton_y}), and equations (\ref{eq:Hamilton_px}) and (\ref{eq:Hamilton_py}), we get the system
\begin{eqnarray}
&\dot{v}=\left(A+B(t)-C(t)\right)p_v\\
&\dot{p_v}=-\left(A+B(t)+C(t)\right)v
\end{eqnarray}
for $v=x+y$ and $p_v=p_x+p_y$, which yields
\begin{equation}
\ddot{v}-\frac{\dot{B}(t)-\dot{C}(t)}{A+B(t)-C(t)}\dot{v}+\left[\left(A+B(t)\right)^2-C^2(t)\right]v=0 \hspace{2pt}.
\end{equation}
In the special case $C(t)=B(t)=\bar{\rho}+\rho\cos(\omega t)$, one recovers the Mathieu equation
\begin{equation}
\ddot{v}+\left[A^2+2A\bar{\rho}+2A\rho\cos(\omega t)\right]v=0 \hspace{2pt}.
\end{equation}
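A direct numerical integration makes the resulting parametric instability explicit (a Python sketch with illustrative parameter values): near the principal resonance $\omega\simeq 2\Omega_0$, with $\Omega_0=\sqrt{A^2+2A\bar{\rho}}$, the amplitude of $v$ grows exponentially, while for a detuned drive it stays bounded.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def max_amplitude(A, rho_bar, rho, omega, n_periods=40):
    # integrate  v'' + [A^2 + 2 A rho_bar + 2 A rho cos(omega t)] v = 0
    rhs = lambda t, y: [y[1],
                        -(A**2 + 2*A*rho_bar + 2*A*rho*np.cos(omega*t))*y[0]]
    T = 2*np.pi/omega
    sol = solve_ivp(rhs, (0, n_periods*T), [1.0, 0.0], rtol=1e-9, atol=1e-12)
    return np.max(np.abs(sol.y[0]))

A, rho_bar, rho = 1.0, 0.0, 0.05          # illustrative values, Omega_0 = 1
print(max_amplitude(A, rho_bar, rho, omega=2.0))  # resonant: grows large
print(max_amplitude(A, rho_bar, rho, omega=3.0))  # detuned: stays O(1)
\end{verbatim}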
The classical equations of motion also give insight into the role of damping, which generally reduces the regions of instability of the Mathieu equation. In fact, in Ref.~\cite{StabilityChart} it is shown that the presence of linear damping pushes the instability zone upwards in Fig.~2, so that below a critical value of the amplitude $\rho$ stable solutions are always possible and no instabilities occur. In Ref.~\cite{StabilityChart} it is also shown that nonlinear effects can generate subharmonic stable motions. These cubic terms correspond to band curvature in the original band structure, so in a real system the nonlinearity of the band further stabilizes the system. While a large number of density waves is still expected to occur at the critical $q$-values, the sum over all momenta becomes well defined, leading to the predicted density order of scenario 2) in the main paper.
\subsection{Floquet theory}
Analogously to the Bloch theorem, the Floquet theorem asserts that the Schr\"odinger equation for a time-periodic Hamiltonian admits steady-state solutions of the form
\begin{equation}
|\Psi(t)\rangle=e^{-\imath\epsilon t}|u(t)\rangle \hspace{2pt},
\end{equation}
where the modes $|u(t)\rangle=|u(t+T)\rangle $ inherit the periodicity from the Hamiltonian, and the quantity $\epsilon$ is the so-called Floquet quasienergy. Indeed, the steady-state Schr\"odinger equation can be recast in the form of an eigenvalue equation for the quasienergy operator $\mathscr{H}=H(t)-\imath\partial_t$ in the extended Hilbert space generated by the product of the state space of the quantum system and the space of square-integrable $T$-periodic functions:
\begin{equation}
\label{eq:eig2}
\mathscr{H}|u(t)\rangle=\epsilon|u(t)\rangle \hspace{2pt}.
\end{equation}
By expanding both the Hamiltonian and the Floquet mode in Fourier series
\begin{gather}
H(t)=\sum_m e^{\imath m \omega t}H^{(m)} \hspace{2pt},\\
|u(t)\rangle=\sum_m e^{\imath m \omega t}|u_m\rangle \hspace{2pt},
\end{gather}
Eq.~(\ref{eq:eig2}) yields (for a monochromatic drive, in which $H^{(0)}$ and $H^{(\pm 1)}$ are the only non-vanishing harmonics and $H^{(1)}=H^{(-1)}$)
\begin{equation}
\label{eq:Fl_components}
\left(H^{(0)}+m\omega\right)|u_m\rangle +H^{(1)}\left(|u_{m-1}\rangle+|u_{m+1}\rangle\right)=\epsilon|u_m\rangle \hspace{2pt},
\end{equation}
which turns out to be an eigenvalue equation for the infinite tridiagonal matrix
\begin{equation}
\label{eq:matr1}
{\cal M}_{\cal F}=\left(\begin{array}{ccccc}
\vdots & \vdots & \vdots & \vdots & \\
\hdots & H^{(0)}-\omega & H^{(1)} & 0 & \hdots \\
\hdots & H^{(1)} & H^{(0)} & H^{(1)}& \hdots\\
\hdots & 0 & H^{(1)} & H^{(0)}+\omega & \hdots\\
& \vdots & \vdots & \vdots &
\end{array}
\right) \hspace{2pt}.
\end{equation}
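A minimal numerical sketch of this construction is given below, with a hypothetical two-level $H^{(0)}$ and coupling $H^{(1)}$ and the Fourier index truncated to $|m|\le M$; the quasienergies are defined modulo $\omega$ and can be folded into $(-\omega/2,\omega/2]$.
\begin{verbatim}
import numpy as np

def quasienergy_matrix(H0, H1, omega, M):
    # truncated version of the block-tridiagonal matrix above, with
    # blocks H0 + m*omega on the diagonal, m = -M, ..., M
    d, n = H0.shape[0], 2*M + 1
    K = np.zeros((n*d, n*d), dtype=complex)
    for i, m in enumerate(range(-M, M + 1)):
        K[i*d:(i+1)*d, i*d:(i+1)*d] = H0 + m*omega*np.eye(d)
        if i + 1 < n:
            K[i*d:(i+1)*d, (i+1)*d:(i+2)*d] = H1
            K[(i+1)*d:(i+2)*d, i*d:(i+1)*d] = H1.conj().T
    return K

H0 = np.diag([0.0, 1.0])                       # hypothetical two-level system
H1 = 0.3*np.array([[0.0, 1.0], [1.0, 0.0]])
omega = 2.0
eps = np.linalg.eigvalsh(quasienergy_matrix(H0, H1, omega, M=30))
print(np.sort((eps + omega/2) % omega - omega/2)[:4])  # folded quasienergies
\end{verbatim}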
\subsection{Floquet solution in terms of Mathieu functions}
The solution of the Mathieu equation
\begin{equation}
\ddot{y}(t)+(a-2p\cos{\omega t})y(t)=0
\end{equation}
is usually discussed in terms of even and odd solutions, known respectively as Mathieu cosine ${\cal C}$ and Mathieu sine ${\cal S}$ functions. A general solution can be therefore written as
\begin{equation}
\label{SM:Mat_Sol}
y(t)=c_1 {\cal C}(a,p,\tau)+c_2{\cal S}(a,p,\tau) \hspace{2pt},
\end{equation}
with $\tau=\omega t/2$.
However, since the equation is time-periodic, it always admits a Floquet solution
\begin{equation}
\label{SM:Fl_Sol}
y(t)=e^{\imath\nu\tau}P_{\nu}(\tau)
\end{equation}
with $P_{\nu}(\tau)=P_{\nu}(\tau\pm\pi)$. We want to use the quantum number $\nu$, which is commonly referred to as the Mathieu characteristic exponent. Therefore, in this section we clarify the relation between the latter and the Mathieu functions.
Comparing Eqs.~(\ref{SM:Mat_Sol}) and (\ref{SM:Fl_Sol}) and employing the periodicity of $P_{\nu}(\tau)$, we get the following relation
\begin{equation}
\begin{split}
c_1{\cal C}(a,p,\tau)+c_2&{\cal S}(a,p,\tau)=\\
&e^{\mp\imath \nu\pi}\left(c_1{\cal C}(a,p,\tau\pm\pi)+c_2{\cal S}(a,p,\tau\pm\pi)\right)\hspace{2pt}.
\end{split}
\end{equation}
Evaluating this expression at $\tau=0$ and normalizing the Mathieu functions such that ${\cal C}(a,p,0)={\cal S}(a,p,\pi)=1$, we obtain
\begin{equation}
c_1(e^{\pm\imath\pi\nu}-{\cal C}(a,p,\pi))=\pm c_2 {\cal S}(a,p,\pi) \hspace{2pt},
\end{equation}
from which we finally get
\begin{equation}
\begin{split}
&\cos{\pi\nu}={\cal C}(a,p,\pi)\hspace{2pt},\qquad\text{and}\\
& c_2=\imath c_1\sin{\pi\nu} \hspace{2pt}.
\end{split}
\end{equation}
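In practice, $\nu$ can be extracted numerically from this relation. With the normalization adopted above, ${\cal C}(a,p,\tau)$ is the solution of the Mathieu equation in the scaled time $\tau$ (standard form $\ddot y+(a-2p\cos 2\tau)y=0$ assumed) with $y(0)=1$, $\dot y(0)=0$, so a single integration over one period yields $\cos\pi\nu$. A Python sketch with illustrative parameter values:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def cos_pi_nu(a, p):
    # Mathieu cosine C(a,p,tau): even solution with y(0)=1, y'(0)=0;
    # the relation derived above gives cos(pi nu) = C(a,p,pi)
    rhs = lambda tau, y: [y[1], -(a - 2*p*np.cos(2*tau))*y[0]]
    sol = solve_ivp(rhs, (0.0, np.pi), [1.0, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

a, p = 2.5, 0.4                  # illustrative values in a stable region
c = cos_pi_nu(a, p)              # |c| <= 1  <=>  stable (real nu)
print(c, np.arccos(c)/np.pi)     # nu, modulo the Brillouin-zone folding
\end{verbatim}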
\section{Introduction}
The entanglement entropy is an important quantity for quantum field theories. It is non-vanishing even at zero temperature and is proportional to the degrees of freedom. Moreover, it can classify the topological phases described by the ground state of the FQHE or the Chern-Simons effective theory~\cite{wenx,kita,DFLN}. In the context of the AdS/CFT correspondence~\cite{juan,anom}, the formula of holographic entanglement entropy (holographic EE)~\cite{RT} for CFTs has been proposed as the area of the minimal surface $\gamma_A$ which ends on the boundary of the region $A$ as follows:
\begin{eqnarray}
S_A=\dfrac{\text{Area}(\gamma_A)}{4G_N}, \label{HOL1}
\end{eqnarray}
where $G_N$ is the Newton constant in the Einstein gravity. Here, $\gamma_A$ is required to be homologous to $A$. It has been shown~\cite{RT,Sol,LNST,CaHu,Solog} that the log term of the entanglement entropy agrees with the computation in the gravity side. In~\cite{FRWT}, the topological entanglement entropy is clarified via the AdS/CFT correspondence.
Moreover, it is shown~\cite{HHM}, using the holographic EE, that the correlation in the holographic system arises from quantum entanglement rather than classical correlation. Reviews of the holographic EE are given in~\cite{NRT} and in the review part of~\cite{Tonni:2010pv}.
While the holographic EE of~\cite{RT} can be used for Einstein gravity with no higher derivative terms, it is shown~\cite{Myers:2010xs,Myers:2010tj,Casini:2011kv,Hung:2011ta,Hung:2011nu} that, for a spherical boundary surface of $A$, the proposal for holographic EE \eqref{HOL1} can be extended to certain higher curvature theories such as Gauss-Bonnet gravities.
It is proposed that, using a conformal mapping, the EE across a spherical surface $S^2$ of radius $L$ can be related to the thermal entropy on $\mathbb{R}\times H^{3}$ with $T=1/(2\pi L)$, and this thermal entropy is given by the black hole entropy on the gravity side. It is also shown that the log term of the holographic EE is then proportional to the $A$-type anomaly.
Thus, it is interesting to apply these results to higher derivative corrections to the holographic EE in some string models.
An interesting model is the 5-dimensional curvature gravity considered directly, instead of beginning with the 10-dimensional theory, where the curvature-squared terms arise from the D-branes~\cite{bng,ode,Buchel:2008vz,kp}. Here, the higher curvature terms also correct the cosmological constant, which should be determined to reproduce the central charges $a$ and $c$ of the 4-dimensional theory. Using the holographic Weyl anomaly~\cite{bng,ode,holo4}, these terms affect the two central charges $a$ and $c$ at subleading order in $N$. Moreover, these curvature-squared terms appear in the perturbative expansion of string theory and it is known that they can be modified by field redefinitions.
A purpose of this paper is to compute the holographic EE and the $A$-type anomaly in the gravity dual to $d=4$ $\mathcal{N}=2$ SCFTs in F-theory. Here, this gravity dual is described by the curvature gravity theory of the above string model.
These SCFTs include the theories with $G=D_4,E_n,...$ flavor symmetries. Except for $D_4$, all of them are isolated theories, i.e. their gauge couplings are frozen and a perturbative description is not available. There are previous works on the holographic computation of the central charges in these SCFTs~\cite{anom2,at}, in which Chern-Simons terms in the dual string theory were analyzed. Their result implies that the AdS/CFT correspondence can be applied even at finite string coupling. On the other hand, the curvature term of our gravity theory makes the analysis difficult, since for $d=5$ these terms depend on the metric of the spacetime. According to~\cite{Buchel:2008vz}, we choose the cosmological constant to reproduce the central charges $a,c$ of the dual SCFT in the context of the holographic Weyl anomaly. As stated above, we can also transform the effective $d=5$ curvature gravity obtained from the F-theory compactifications to the $d=5$ Gauss-Bonnet gravity at subleading order in the $l_s^2/L^2$ expansion using a field redefinition, which is important in the context of the holographic c-theorem.
We then derive the holographic EE by computing the black hole entropy.
This paper is organized as follows: In section 2, we briefly review Wald's entropy formula, which is important for computing the black hole entropy in the presence of higher curvature terms. In section 3, we review the construction of the Type IIB background after including the backreaction of the $N$ D3-branes at singularities in F-theory.
In the beginning of section 4, we review the construction of the effective curvature gravity from F-theory. In subsection 4.1, we analyze which terms contribute to the central charges of the SCFTs. In subsection 4.2, by using the holographic Weyl anomaly,
we determine the parameters of the gravity theory to realize the central charges on the field theory side. In subsection 4.3, we generalize the argument of section 4 to the general SCFTs in F-theory.
In section 5, we compute the holographic EE and the $A$-type anomaly using the $d=5$ effective curvature gravity. We also consider the transformation of the effective $d=5$ curvature gravity to the $d=5$ Gauss-Bonnet gravity using the field redefinition.
\section{Holographic EE as the Wald's entropy formula}
In this section, we briefly review the holographic EE formula in the presence of the higher curvature terms.
It was conjectured~\cite{Myers:2010xs,Myers:2010tj} that the EE across a sphere $S^2$ of radius $L$ can be given by the black hole entropy on the gravity side, and that the contribution to the EE is then proportional to the coefficient of the $A$-type anomaly. This is because, when we write pure AdS as a topological BH with a horizon of radius $L$, there appear two hyperbolic spaces inside and outside the horizon, respectively. By the conformal mapping, the thermal entropy with temperature $T=1/(2\pi L)$ in a hyperbolic space is shown to be equal to the EE across the sphere $S^2$ as follows:
\begin{eqnarray}
S_{EE}=S_{thermal}.
\end{eqnarray}
Thus, the horizon entropy of the topological BH should be considered as the EE of the dual theory via the AdS/CFT correspondence.
Moreover, the two hyperbolic spaces are considered as the two separated regions when we compute the entanglement entropy.
We start with the general gravity theory in $AdS$ with higher derivative terms as follows:
\begin{eqnarray}
S=\dfrac{1}{2\kappa^2_5}\int d^5x\sqrt{-g}F(R^{\mu\nu}{}_{\rho\lambda})+S_{matter}, \label{ACT03}
\end{eqnarray}
where $F(R^{\mu\nu}{}_{\rho\lambda})$ is constructed from the Riemann tensors and $\mu,\nu,\rho,\lambda=0,1,2,3,4$.
The Einstein equation for \eqref{ACT03} is given by
\begin{eqnarray}
F'_{(\mu}{}^{\rho\lambda\alpha}R_{\nu)\rho\lambda\alpha}-\dfrac{F(R^{\mu\nu}{}_{\rho\lambda})}{2}g_{\mu\nu}+2\nabla^{\rho}\nabla^{\lambda}F'_{\mu\rho\nu\lambda}=\kappa^2_5T_{\mu\nu},
\end{eqnarray}
where $F'_{\mu\nu\rho\lambda}=\delta F(R^{\alpha\beta}{}_{\gamma\delta})/\delta R^{\mu\nu\rho\lambda}$, round brackets describe the symmetrization, and $\nabla_{\mu}$ is the covariant derivative. Note that the tensor $F'_{\mu\nu\rho\lambda}$ has the same symmetry and antisymmetry properties as the Riemann tensor.
To compute the holographic EE, we should choose the $AdS_5$ topological black hole solution with radius $L$, for which the terms including the covariant derivatives vanish, since the Riemann curvature of a maximally symmetric space like $AdS_5$ is determined by the metric (see appendix A):
\begin{equation}
ds^2=\dfrac{d\rho^2}{\frac{\rho^2}{L^2}-1}-\Big(\frac{\rho^2}{L^2}-1\Big)d\tau^2+\rho^2(du^2+\sinh^2ud\Omega_{2}^2), \label{MET05}
\end{equation}
where $d\Omega_2^2$ represents the metric of $S^2$ and the BH horizon is at $\rho ={L}$.
Then, the holographic EE of the topological black hole is given by extremizing Wald's entropy formula as follows:
\begin{eqnarray}
S_{entropy}=-2\pi \int_{horizon}d^3x\sqrt{h}\dfrac{\delta S}{\delta R^{\mu\nu}{}_{\rho\lambda}}\epsilon^{\mu\nu}\epsilon_{\rho\lambda}.
\end{eqnarray}
where $\epsilon^{\mu\nu}$ is the binormal to the horizon satisfying $\epsilon_{\mu\nu}\epsilon^{\mu\nu}=-2$ for \eqref{MET05}.
According to~\cite{Sinha:2010ai,Myers:2010xs,Myers:2010tj,Casini:2011kv},
the integrand can be related to the $A$-type anomaly on the SCFT side as follows:
\begin{eqnarray}
a_4=-\pi^2L^3\dfrac{\delta S}{\delta R^{\mu\nu}{}_{\rho\lambda}}\epsilon^{\mu\nu}\epsilon_{\rho\lambda}\Big|_{horizon}. \label{CEN5}
\end{eqnarray}
This $a_4$ is also given by the $a$-function used in the context of the holographic c-theorem if it is evaluated at an AdS vacuum.
In addition, for the case of Einstein gravity, $F(R^{\mu\nu}{}_{\rho\lambda})=R-\Lambda$, the $A$-type anomaly is given by
\begin{eqnarray}
a=\dfrac{\pi^2L^3}{\kappa_5^2}.
\end{eqnarray}
We will use \eqref{CEN5} in section 4 to estimate the corrections to the $A$-type anomaly, and in section 5 to compute the $A$-type anomaly for the $\mathcal{N}=2$ SCFT in F-theory.
\section{The gravity dual to $d=4$ $\mathcal{N}=2$ SCFTs}
In this section, we review the construction of the Type IIB background dual to $\mathcal{N}=2$ SCFTs.
It has been known that $\mathcal{N}=2$ SCFTs arise as worldvolume theories of D3-branes moving near 7-branes, namely 3-branes in F-theory~\cite{ft1,ft2,ft3,ft4,ft5}.
The simplest theory is the worldvolume theory of D3-branes in a $\mathbf{Z}_2$ orientifold and becomes a $USp(2N)$ $\mathcal{N}=2$ gauge theory with an antisymmetric and four fundamental hypermultiplets in the low energy limit. Its gravity dual is the type IIB supergravity on an orientifold of $AdS_5\times S^5$~\cite{Fayyazuddin:1998fb,Aharony:1998xz}.
For the gravity side, we consider the type IIB string theory on $AdS_5\times X_{5}$ with $N$ units of $F_5$ flux on $X_5$ taking the near-horizon limit.
The metric is as follows:
\begin{equation}
ds^2=\dfrac{L^2}{z^2}(d\vec{x}^2+dz^2)+L^2ds^2_{X_5},
\end{equation}
where $L=(4\pi g_s\Delta N)^{1/4}l_s$ is the radius of $AdS_5$~\cite{Gubser:1998vd}
satisfying $L^4\text{Vol}(X_5)=L^4_{S^5} \text{Vol}(S^5)$.
Here, $X_{5}$ is the 5-sphere
\begin{equation}
\{|z_1|^2+|z_2|^2+|z_3|^2=const \}, \label{MET6}
\end{equation}
with the periodicity $2\pi/\Delta$ along the phase
$\phi\equiv$arg$(z_3)$ and
the metric of $X_{5}$ is given by
\begin{equation}
ds^2_{X_5}=d\theta^2+\sin^2\theta d\phi^2 +\cos^2\theta d\Omega^2_3,
\end{equation}
where $d\Omega ^2_3$ is the metric of $S^3$ and $0\le \theta \le \pi/2$. At the singularity $\theta =0$, the singular locus is an $S^3$. It is known that the singularities are related to flavor groups of 7-branes, which are called $G$-type 7-branes for the $G$-type singularity ($G=H_0,H_1,H_2,D_4,E_6,..$). These 7-branes then wrap the whole $AdS_5\times S^3$. Indeed, the transverse directions $Re(z_3),Im(z_3)$, acted on by the $\mathbb{Z}_2$ orbifold, are the transverse directions of an O7-plane for $\Delta=2$. The number of 7-branes is related to the deficit angle $\Delta$ and
is given by $n_7=12(1-1/\Delta)$.
The gravity action dimensionally reduced to 5-dimension is given by
\begin{equation}
I\equiv \int d^{5}x\sqrt{-g}\mathcal{L}=\dfrac{1}{2\kappa_5^2}\int d^5x\sqrt{-g}(R+12/L^2+...), \label{AC09}
\end{equation}
where $2\kappa^2_{10} =(2\pi)^7g_s^2l_s^8$ and we defined
\begin{equation}
\dfrac{1}{2\kappa_5^2}\equiv \dfrac{L^5\text{Vol} (X_5)}{2\kappa_{10}^2}. \label{DEF10}
\end{equation} Here, the dots include the matter fields and the higher derivative corrections. To describe the gravity dual of the $d=4$ $\mathcal{N}=2$ SCFT with flavor, moreover, we should
include the curvature-squared terms of 7-branes since these terms affect the $A$-type anomaly as the $1/N$ correction (see appendix B).
In the next section, we start with the 5-dimensional effective curvature gravity from F-theory instead of considering Type IIB D-branes.
\section{Effective curvature gravity dual with the orientifold}
It has been known~\cite{Buchel:2008vz,kp} that if the central charges of the dual CFT satisfy
\begin{eqnarray}
c\sim a \gg 1 \ \text{and} \ |c-a|/c \ll 1,
\end{eqnarray}
the effective gravity theory is described by the curvature gravity with the higher curvature term coupled to a negative cosmological constant.
Using the constants $a_1$, $a_2$, and $a_3$, the effective curvature gravity is given by
\begin{eqnarray}
S=\dfrac{1}{2\kappa_{5}^2}\int d^5x\sqrt{-g}\Big(R+\dfrac{12}{ \tilde{L}^2}+a_1R^2+a_2R_{\mu\nu}R^{\mu\nu}+a_3R_{\mu\nu\rho\lambda}R^{\mu\nu\rho\lambda} \Big), \label{ACT6}
\end{eqnarray}
where $-12/\tilde{L}^2$ describes the cosmological constant.
It is known that in the case of the $\mathcal{N}=2$ SCFT dual to $AdS_5\times S^5/\mathbb{Z}_2$, moreover, we can construct the effective curvature gravity theory from F-theory~\cite{bng,anom2}. In this section, we review the construction of this effective curvature gravity from F-theory, and in subsection 4.1 we consider the generalization to the case of the curvature
gravity on $AdS_5\times X_5$ with general $\Delta$.
In order to obtain the higher derivative terms in the action of F-theory compactifications, we use a duality chain
\begin{equation}
{\rm Heterotic\;}SO(32)/T^{2} \rightarrow {\rm Type\; I}/T^{2} \rightarrow {\rm Type IIB}/T^{2}/\mathbb{Z}_{2}, \label{dual7}
\end{equation}
where we break the gauge symmetry of $SO(32)$ into $SO(8)^{4}$ in Heterotic string theory and Type I string theory by turning on Wilson lines.
In the ten dimensional supergravity action of Heterotic string theory, the terms which include Riemann tensors up to the quadratic order are
\begin{equation}
S_{het}=\frac{1}{(2 \pi)^{7}l_{s}^{8}} \int dx^{10}\sqrt{g^{h}}e^{-2\phi^{h}}\left(R+\frac{l_{s}^{2}}{4}R_{MNLP}R^{MNLP}\right),
\label{eq:action_het}
\end{equation}
where $M,N,L,P=1,...,10$.
When one moves to Type I string theory, the duality relations are
\begin{equation}
\phi^{h} = -\phi^{I},\;\;
g_{MN}^{h} = e^{-\phi_{I}} g_{MN}^{I}.
\end{equation}
Then, the action \eqref{eq:action_het} becomes
\begin{equation}
S_{I}=\frac{1}{(2 \pi)^{7}l_{s}^{8}} \int dx^{10}\sqrt{g^{I}}\left(e^{-2\phi_{I}} R + \frac{l_{s}^{2}}{4}e^{-\phi_{I}}R_{MNLP}R^{MNLP} \right),
\label{eq:action_typeI}
\end{equation}
and we have 32 D9-branes and 1 O9-plane. We also consider $N$ coincident 5-branes wrapped on the torus $T^{2}$. From such a configuration, one obtains $USp(2N)$ gauge symmetry with a hypermultiplet in the anti-symmetric representation on 5-branes. Besides we have 32 additional half-hypermultiplets in the {\bf 2N} representation of $USp(2N)$.
As for the second step, we take two T-dualities along the torus. Then, we have 16 D7-branes and 4 O-planes which are separated into four groups. The difference in the number of D7-branes comes from the fact that the charges on both sides are related as $\mu_{7}^{I}=\frac{1}{2}\mu_{7}^{IIB}$. Each group is located at one of the four fixed points of $T^{2}/\mathbb{Z}_{2}$. We consider the case where $T^{2}$ is large and focus on one fixed point. The fields related to the other fixed points are very massive and can be integrated out in the low energy field theory. Furthermore, the $N$ 5-branes become $N$ 3-branes after the two T-dualities along $T^{2}$. Hence, the low energy field theory is described by $\mathcal{N}=2$ $USp(2N)$ gauge theory with four fundamental hypermultiplets and an additional hypermultiplet in the anti-symmetric representation. The duality relations between Type I and Type IIB string theory are
\begin{equation}
V_{I}e^{-2\phi^{I}} = V_{IIB} e^{-2\phi^{IIB}},\;\; R_{I}^2 = \frac{l_{s}^{4}}{L^2},
\end{equation}
where $L$ is the radius of the torus $T^2$ in Type IIB theory and $V_I$ and $V_{IIB}$ are the volumes of the torus in Type I and Type IIB theory, respectively.
The action of Type IIB string theory becomes
\begin{equation}
S_{IIB}=\frac{1}{(2 \pi)^{7}l_{s}^{8}} \int dx^{10}\sqrt{g^{IIB}}e^{-2\phi_{IIB}} R + \frac{1}{(2 \pi)^{7}l_{s}^{8}} \frac{(2\pi)^{2}l_{s}^{2}}{2}\int d^{8}x \sqrt{g^{IIB}} \frac{l_{s}^{2}}{4}e^{-\phi_{IIB}}R_{MNLP}R^{MNLP}.
\label{eq:action_typeIIB}
\end{equation}
Note that the second term in \eqref{eq:action_typeIIB} is proportional to $e^{-\phi^{IIB}}$ and the world volume is eight-dimensional. Hence, this term originates from the D7-branes and O7-plane. We also take into account the difference between the charge of D9-branes in Type I string theory and the charge of D9-branes in Type IIB string theory, $\mu_{9}^{I} = \frac{1}{\sqrt{2}}\mu_{9}^{IIB}$. Since we will focus on one of the four fixed points in $T^{2}/\mathbb{Z}_{2}$, we further multiply the second term in \eqref{eq:action_typeIIB} by a factor $\frac{6}{24}=\frac{1}{4}$
\begin{equation}
S_{IIB}=\frac{1}{(2 \pi)^{7}l_{s}^{8}} \int dx^{10}\sqrt{g^{IIB}}e^{-2\phi_{IIB}} R + \frac{1}{(2 \pi)^{7}l_{s}^{4}} \frac{(2\pi)^{2}}{2} \frac{1}{4} \frac{1}{4}\int d^{8}x \sqrt{g^{IIB}}e^{-\phi_{IIB}}R_{MNLP}R^{MNLP}.
\label{s1}
\end{equation}
By the dimensional reduction of the 5-dimensional and 3-dimensional internal directions, \eqref{s1} reduces to the following effective five-dimensional action plus a cosmological constant $-12/\tilde{L}^2$, which admits the $AdS_5$ vacuum as a solution:
\begin{eqnarray}
&S_{IIB}=\frac{L^5vol(X_{5,\Delta=2})e^{-2\phi_{IIB}}}{(2 \pi)^{7}l_{s}^{8}} \int dx^{5}\sqrt{g^{IIB}} \Big(R+\dfrac{12}{{\tilde{L}}^{2}}\Big) \\ \notag
&+ \frac{L^3vol(S^{3})}{(2 \pi)^{7}l_{s}^{4}} \frac{(2\pi)^{2}}{2} \frac{1}{4} \frac{1}{4}\int d^{5}x \sqrt{g^{IIB}}e^{-\phi_{IIB}}R_{MNLP}R^{MNLP}.
\label{s3}
\end{eqnarray}
Note that the higher derivative terms also correct the cosmological constant.
As pointed out in~\cite{bng,Fukuma:2001uf,Fukuma:2002sb},
the cosmological constant $-12/\tilde{L}^2$, which differs from $-12/L^2$ by $1/N$ corrections, is determined later for the gravity dual to the $\mathcal{N}=2$ $USp(2N)$ gauge theory and also depends on the field redefinition.
Note that $\frac{(2\pi)^{2}}{2}=vol(T^{2}/\mathbb{Z}_{2})/L^{2}$ and we can formally write the second term with a coefficient $vol(X_{5,\Delta=2})$. Hence this factor can be factored out and the action \eqref{s3} becomes
\begin{equation}
S_{IIB}=\frac{L^5vol(X_{5,\Delta=2})e^{-2\phi_{IIB}}}{(2 \pi)^{7}l_{s}^{8}} \int dx^{5}\sqrt{g^{IIB}}\left( R+\dfrac{12}{\tilde{L}^2} + \frac{L^2}{16N}R_{MNLP}R^{MNLP} \right),
\label{eq:action_typeIIB_4}
\end{equation}
where we used the relation $L=(4\pi g_s\Delta N)^{1/4}l_s$ with $\Delta =2$.
Thus, the coefficient $a_3=L^2/(16N)$ for the gravity dual on $AdS_5\times S^5/\mathbb{Z}_2$ is determined using the duality chain \eqref{dual7}.
\subsection{Power counting}
So far, we have only focused on the two terms, namely the Einstein-Hilbert action in the bulk and the quadratic term of the Riemann tensor on 7-branes. Let us see which terms will contribute to the $A$-type anomaly from the entanglement entropy. For simplicity, we write $R$ for any of $R_{MNKL}$, $R_{MN}$, $R$. Since the $A$-type anomaly depends on the parameters $N, \Delta$ and $n_{7}$, we will focus on the $N, \Delta$ and $n_{7}$ dependence in the action. In general, we have the terms
\begin{equation}
\frac{e^{-2\phi_{IIB}}}{l_{s}^{8}} \int d^{10}x \; \sqrt{g^{IIB}} l_{s}^{2(k-1)} R^{k} + \frac{n_{7}e^{-\phi_{IIB}}}{l_{s}^{8}} \int d^{8}x \; \sqrt{g^{IIB}} l_{s}^{2k} R^{k}.
\label{eq:action_counting1}
\end{equation}
The first term of \eqref{eq:action_counting1} is the one in the bulk action and the second term of \eqref{eq:action_counting1} is the one in the 7-brane action. Hence, the second term is proportional to the number of 7-branes. Remember that supersymmetry protects the lowest order of the DBI terms against string loop corrections, so that the second term, depending only on
$e^{-\phi_{IIB}}$, arises from the 7-branes~\cite{Tseytlin:1999dj}. By the dimensional reduction on $X_{5}$, \eqref{eq:action_counting1} becomes
\begin{equation}
\frac{e^{-2\phi_{IIB}}L^{5}}{l_{s}^{10-2k} \Delta} \int d^{5}x \; \sqrt{g^{IIB}} R^{k} + \frac{e^{-\phi_{IIB}}n_{7} L^{3}}{l_{s}^{8-2k}} \int d^{5}x \; \sqrt{g^{IIB}}R^{k}.
\end{equation}
The factor $\frac{1}{\Delta}$ appears from the dimensional reduction, since $vol(X_{5}) \propto \frac{1}{\Delta}$. By applying the Wald formula \eqref{CEN5} for the entanglement entropy, the $A$-type anomaly is roughly expressed as
\begin{equation}
a \sim \frac{e^{-2\phi_{IIB}}L^{10-2k}}{l_{s}^{10-2k} \Delta} + \frac{e^{-\phi_{IIB}}n_{7} L^{8-2k}}{l_{s}^{8-2k}}.
\label{eq:action_counting3}
\end{equation}
The first term originates from the bulk action and the second term originates from the 7-brane action. Because of the relation $L \propto N^{\frac{1}{4}}$, the $\mathcal{O}(N^{2})$ contribution only comes from the $k=1$ term in the bulk action, namely the Einstein-Hilbert term. An $\mathcal{O}(N)$ contribution could come from the $k=3$ terms in the bulk action and the $k=2$ term in the 7-brane action. However, there are no $k=3$ terms in the five dimensional action since the higher derivative corrections of Type IIB theory start with $\mathcal{O}(R^4)$~\cite{Gris,Ogawa:2011fw}. A $k=3$ term may be generated by the dimensional reduction of a $k=4$ term; however, it still plays the role of a $k=4$ term, since the dimensional reduction does not change this counting. On the dual SCFT side, moreover, the coefficient of a $k=3$ term corresponds to a structure in the three point function of the stress tensor which would vanish for any SCFT~\cite{Hofman:2008ar}.
Hence the $\mathcal{O}(N)$ contribution only comes from the $k=2$ term in the 7-brane action. Therefore, the first term in \eqref{eq:action_typeIIB} will generate an $\mathcal{O}(N^{2})$ contribution to the $A$-type anomaly and the second term in \eqref{eq:action_typeIIB} will make an $\mathcal{O}(N)$ contribution. From the general expression \eqref{eq:action_counting3}, concentrating only on the $\mathcal{O}(N^{2})$ and $\mathcal{O}(N)$ terms in the $A$-type anomaly, we can also determine the $N, \Delta$ and $n_{7}$ dependence of the $A$-type anomaly. The $A$-type anomaly should behave as
\begin{equation}
a = \alpha_{1}N^{2}\Delta + \alpha_{2} N \Delta n_{7},
\label{eq:a-anomaly_general}
\end{equation}
where $\alpha_{1}$ and $\alpha_{2}$ are numerical coefficients.
Finally, it is interesting to investigate the $1/N^2$ correction. Since the bulk $R^3$ term is not allowed for any SCFT and since higher derivative terms beyond these seem not to contribute to the central charges, the $1/N^2$ correction should be included in the gravity theory with curvature squared terms. In other words, this correction can be included not in higher derivative terms but in the coefficients of the gravity theory, such as the cosmological constant.
\subsection{Determination of $\tilde{L}^2$}
Next, we determine the cosmological constant $-12/\tilde{L}^2$ by comparing the central charges $a,c$ on the SCFT side with those in the holographic Weyl anomaly~\cite{bng,ode,holo4}. This method
parallels the analysis of~\cite{deBoer:2011wk,Hung:2011xb} up to the scaling transformation of the coefficients of the gravity theory.
On the field theory side, we can identify the $a,c$-charges from the coefficients of the trace anomaly in 4 dimensions as follows:
\begin{eqnarray}
\langle T^{a}{}_{a}\rangle =\dfrac{c}{16\pi^2}(R_{abcd}R^{abcd}-2R_{ab}R^{ab}+R^2/3)-\dfrac{a}{16\pi^2}(R_{abcd}R^{abcd}-4R_{ab}R^{ab}+R^2),
\end{eqnarray}
where $a,b=0,..,3$ and $R_{abcd}$ is the 4-dimensional Riemann tensor. These coefficients $a$ and $c$ can be derived using the holographic Weyl anomaly formula. We start with the curvature gravity \eqref{ACT6}.
We first derive the AdS vacua for \eqref{ACT6} by solving the EOM.
In the EOM for the action \eqref{ACT6}, the terms including the covariant derivatives vanish at the critical points describing $AdS_5$. Keeping $a_1$, $a_2$, and $a_3$ general,
the EOM becomes
\begin{eqnarray}
&-\dfrac{1}{2}g_{\mu\nu}\Big(a_1R^2+a_2R_{\alpha\beta}R^{\alpha\beta}+a_3R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma\delta}+R+\dfrac{12}{\tilde{L}^2}\Big) \nonumber \\
&+2a_1RR_{\mu\nu}+2a_2R_{\mu\alpha}R_{\nu}{}^{\alpha}+2a_3R_{\mu\alpha\beta\gamma}R_{\nu}{}^{\alpha\beta\gamma}+R_{\mu\nu}=0. \label{EOM11}
\end{eqnarray}
Substituting the formulae of appendix A,
\eqref{EOM11} is rewritten as
\begin{eqnarray}
&12 l^4-12 \tilde{L}^2l^2+(80a_1+16a_2+8a_3)\tilde{L}^2=0. \label{EQ26}
\end{eqnarray}
Substituting $a_3=L^2/16N$ and $a_2=a_1=0$ into the above equation, \eqref{EQ26} is solved by
\begin{eqnarray}
&l^2=\tilde{L}^2\dfrac{1+\sqrt{1-\dfrac{4}{3\tilde{L}^2}(20a_1+4a_2+2a_3)}}{2}, \\
&=\dfrac{\tilde{L}^2}{2}\Big({1+\sqrt{1-\dfrac{L^2}{6N\tilde{L}^2}}}\Big).
\label{CUR12}
\end{eqnarray}
where $l$ is the AdS radius of an AdS vacuum obtained from \eqref{EOM11}.
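As a consistency check, a short sympy sketch verifies that the solution \eqref{CUR12} indeed satisfies \eqref{EQ26}:
\begin{verbatim}
import sympy as sp

Lt, a1, a2, a3 = sp.symbols('Ltilde a1 a2 a3', positive=True)
l2 = Lt**2/2*(1 + sp.sqrt(1 - sp.Rational(4, 3)*(20*a1 + 4*a2 + 2*a3)/Lt**2))
print(sp.simplify(12*l2**2 - 12*Lt**2*l2
                  + (80*a1 + 16*a2 + 8*a3)*Lt**2))   # -> 0
\end{verbatim}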
According to~\cite{ode} and using \eqref{CUR12}, $a,c$-charges are given by
\begin{align}
&a=\dfrac{\pi^2 l^3}{\kappa_5^2}(1-4a_3/l^2), \label{ACA26} \\
& c=\dfrac{\pi^2 l^3}{\kappa^2_5}(1+4a_3/l^2). \nonumber
\end{align}
It can be shown that these central charges are invariant under the scaling transformation of the metric $g_{\mu\nu}\to e^{2a}g_{\mu\nu}$.
Under this scaling transformation, the coefficients in the action \eqref{ACT6} are transformed as $\kappa_5^2\to \kappa_5^2e^{-3a}$, $\tilde{L}\to \tilde{L}e^{-a}$, $l\to le^{-a}$, and $a_i\to a_ie^{-2a}$. Thus, the central charges $a,c$ are invariant under the above scaling transformation. For later use, we keep this $e^a$ dependence in the coefficients of the parameters.
To recover the central charges in the field theory side~\cite{at} with $\Delta=2$
\begin{align}
&a=\dfrac{\Delta N^2}{4}+\dfrac{(\Delta -1)N}{2}-\dfrac{1}{24},\quad c=\dfrac{\Delta N^2}{4}+\dfrac{(\Delta -1)3N}{4}-\dfrac{1}{12}, \label{CEN47}
\end{align}
we should solve the simultaneous equations~\eqref{ACA26}.
$l$ and $a_3$ are then given by
\begin{eqnarray}
&l^2=L^2e^{-2a}\Big(1+\dfrac{5}{4N}-\dfrac{1}{8N^2}\Big)^{{2}/{3}}, \label{LLL40} \\
&a_3=\dfrac{L^3e^{-2a}}{16l}\Big(\dfrac{1}{N}-\dfrac{1}{6 N^2} \Big).
\end{eqnarray}
The radius $\tilde{L}$ is determined by solving \eqref{CUR12} and including the factor $e^{-a}$ as follows:
\begin{eqnarray}
\tilde{L}^2=\dfrac{4l^4N(144Nl^2-6L^2)e^{-2a}}{(L^2-24Nl^2)^2},
\end{eqnarray}
where we used $l$ in \eqref{LLL40}.
To recover $a_3=L^2/16N$ obtained from F-theory, moreover, we should set
\begin{eqnarray}
e^{-2a}=\dfrac{l}{L\Big(1-\dfrac{1}{6N}\Big)}.
\end{eqnarray}
It is concluded that the Taylor series of $\tilde{L}$ and $l$ in $1/N$ are given by
\begin{eqnarray}
l^2= {L}^{2}+\dfrac{17L^2}{12N}+\dfrac{L^2}{9N^2}+O \left( {N}^{-3} \right),
\end{eqnarray}
\begin{eqnarray}
\tilde{L}^2= {L}^{2}+\dfrac{35L^2}{24N}+\dfrac{79L^2}{576N^2}+O \left( {N}^{-3} \right).
\end{eqnarray}
\subsection{General cases}
For general cases with the gravity dual on $AdS_5\times X_5$, moreover, the coefficient $a_3$ should be determined using the normalization $a_3=L^2/(16N)$ for $\Delta =2$, since the origin of this curvature-squared term is the 7-branes in Type IIB theory~\cite{at}, as follows:
\begin{equation}
a_3=\dfrac{(1-1/\Delta)L^2}{8N}.
\end{equation}
This coefficient has the information of the number of 7-branes $n_7=12(1-1/\Delta)$.
This formula is also obtained in~\cite{Buchel:2008vz} requiring that the difference of two central charges $a-c$ in the field theory side is reproduced using the effective curvature gravity.
For the gravity dual on $AdS_5\times X_5$, thus, we should substitute $a_1=0$, $a_2=0$, and $a_3=(1-1/\Delta)L^2/8N$ into \eqref{ACT6} and the
effective curvature gravity action becomes
\begin{eqnarray}
S=\dfrac{1}{2\kappa_{5}^2}\int d^5x\sqrt{-g}\Big(R+\dfrac{12}{\tilde{L}^2}+\dfrac{(1-1/\Delta)L^2}{8N}R_{\mu\nu\rho\lambda}R^{\mu\nu\rho\lambda} \Big),
\label{ACT7} \end{eqnarray}
where $\tilde{L}$ can be determined by using the method of the previous subsection together with \eqref{CEN47}. In terms of the $1/N$ expansion, the $AdS$ radius $l$ and $\tilde{L}$ are given by
\begin{eqnarray}
l^2= {L}^{2}+{\frac {{L}^{2} \left( 15{\Delta}^{
2}-29\Delta+15 \right) }{ 6N\left( \Delta-1 \right) \Delta}}+{
\frac {{L}^{2} \left( 6{\Delta}^{2}-11\Delta+6 \right) }{
36N^2\left( \Delta-1 \right) ^{2}\Delta}}+O \left( {N}^{-3} \right),
\end{eqnarray}
\begin{eqnarray}
\tilde{L}^2= {L}^{2}+{\frac {{L}^{2} \left( 31{\Delta}^
{2}-60\Delta+31 \right) }{12N \left( \Delta-1 \right) \Delta}}+{\frac {{L}^{2} \left( 11{\Delta}^{4}-18{\Delta}^{3}+18
{\Delta}^{2}-18\Delta+11 \right) }{144N^2 \left( \Delta-1
\right) ^{2}{\Delta}^{2}}}+O \left( {N}^{-3} \right).
\end{eqnarray}
\section{Computation of the $A$-type anomaly from the entanglement entropy}
In this section, we compute the holographic EE for the $d=4$ $\mathcal{N}=2$ SCFTs with their gravity duals using the holographic EE formula.
We first consider the $O(N^2)$ part of the anomaly. Since the curvature corrections and $1/N$ corrections do not contribute to the central charge at this order, we can ignore the $1/N$ corrections to $\tilde{L}$, $\kappa_5$, $a_3$ and the curvature corrections ($a_3=0$). Thus, the central charge $a$ equals $c$, since we consider only the Einstein-Hilbert term \eqref{AC09}.
A useful metric of $AdS_5$ is the hyperbolic foliation given by \eqref{MET05}.
The horizon entropy can be calculated using Wald's entropy
\begin{eqnarray}
S=-2\pi \int_{horizon}d^{d-1}x\sqrt{h}\dfrac{\partial \mathcal{L}}{\partial R^{\mu\nu}{}_{\rho\lambda}}\epsilon^{\mu\nu}\epsilon_{\rho\lambda},
\end{eqnarray}
where $\epsilon_{\mu\nu}$ denotes the binormal to the horizon. Using the wedge product, the binormal is represented by $d\tau\wedge d\rho$.
The integrand for the action \eqref{AC09} can be rewritten as
\begin{eqnarray}
\dfrac{\partial \mathcal{L}}{\partial R^{\mu\nu}{}_{\rho\lambda}}\epsilon^{\mu\nu}\epsilon_{\rho\lambda}\Big|_{AdS}=-\dfrac{vol(X_5)}{\kappa_{10}^2}L^5,
\end{eqnarray}
where we substituted the only non-zero component of the binormal, $\epsilon_{\tau\rho}=1$, or equivalently used the relation $\epsilon_{\mu\nu}\epsilon^{\mu\nu}=-2$.
Thus, using the relation proven in~\cite{Myers:2010xs,Myers:2010tj}
\begin{eqnarray}
\dfrac{\partial \mathcal{L}}{\partial R^{\mu\nu}{}_{\rho\lambda}}\epsilon^{\mu\nu}\epsilon_{\rho\lambda}\Big|_{AdS}=-\dfrac{a_4}{\pi^2 L^3}, \label{ACA8} \end{eqnarray}
the central charge $a_4^{(2)}$, namely the $O(N^2)$ contribution to the $A$-type anomaly, is computed as follows:
\begin{eqnarray}
a_4^{(2)}=\dfrac{\pi^2L^8vol(X_5)}{\kappa_{10}^2}=\dfrac{\Delta N^2}{4}. \label{CEN32}
\end{eqnarray}
This central charge correctly reproduces the $A$-type anomaly of the dual $\mathcal{N}=2$ SCFT. Remember that \eqref{CEN32} agrees with the central charge $a_4$ obtained using the GKP-W relation~\cite{anom}.
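This value can be checked symbolically; a sympy sketch using $\mathrm{vol}(X_5)=\pi^3/\Delta$, $2\kappa_{10}^2=(2\pi)^7g_s^2l_s^8$ and $L=(4\pi g_s\Delta N)^{1/4}l_s$:
\begin{verbatim}
import sympy as sp

gs, ls, N, Delta = sp.symbols('g_s l_s N Delta', positive=True)
L = (4*sp.pi*gs*Delta*N)**sp.Rational(1, 4) * ls     # AdS radius
vol_X5 = sp.pi**3 / Delta                            # vol(S^5)/Delta
kappa10_sq = (2*sp.pi)**7 * gs**2 * ls**8 / 2        # 2 kappa_10^2 = (2 pi)^7 g_s^2 l_s^8
print(sp.simplify(sp.pi**2 * L**8 * vol_X5 / kappa10_sq))   # -> Delta*N**2/4
\end{verbatim}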
\subsection{$1/N$ correction of $A$-type anomaly}
For the action \eqref{ACT7}, the horizon entropy becomes
\begin{align}
&S^E=\dfrac{2\pi}{\kappa_5^2}\Big(1-4a_3 \dfrac{1}{r_h^2}\Big)\oint d^{d-1}x\sqrt{h(r_h)} \nonumber \\
&=\dfrac{2\pi}{\kappa_5^2}\Big(1-\dfrac{(1-1/\Delta)L^2}{2Nl^2} \Big)\oint d^{3}x\sqrt{h(r_h)}, \label{ENT22}
\end{align}
where in the last equality, we used $l=r_h$.
According to~\eqref{ACA8}, the $A$-type anomaly is computed as follows:
\begin{eqnarray}
a_4=\dfrac{\pi^2{l}^3}{\kappa_5^2}\Big(1-\dfrac{(1-1/\Delta)L^2}{2Nl^2}\Big)=\dfrac{\Delta N^2}{4}+\dfrac{(\Delta-1) N}{2}-\dfrac{1}{24}, \label{ATY36}
\end{eqnarray}
where we used \eqref{DEF10} and \eqref{CEN32}. The central charge \eqref{ATY36} is the same as the holographic Weyl anomaly \eqref{ACA26}. It is concluded that the central charge \eqref{ATY36} agrees with the $A$-type anomaly on the CFT side.
Then, we consider a field redefinition of the curvature theory \eqref{ACT7} to avoid the graviton ghost at subleading order in the $l_s^2/L^2$ expansion.
As pointed out in~\cite{Myers:2010tj}, the linearized graviton equation of \eqref{ACT7} is not second order but fourth order. So, there appear graviton ghosts for the fourth order equations, which lead to non-unitary operators in the dual SCFTs.
Using the field redefinition~\cite{Buchel:2008vz,kp}, however, we show that at subleading order in the $l_s^2/L^2$ expansion we can transform the curvature gravity \eqref{ACT7} to the Gauss-Bonnet gravity, which has second order linearized graviton equations and preserves unitarity.
We transform the metric as
\begin{eqnarray}
g_{\mu\nu}\to g_{\mu\nu}+b_1g_{\mu\nu}R+b_2R_{\mu\nu},
\end{eqnarray}
where $b_1=-(1-1/\Delta)L^2/12N$ and $b_2=(1-1/\Delta)L^2/2N$.
Then, the gravity action \eqref{ACT7} can be transformed to
\begin{eqnarray}
S=\dfrac{1}{2\kappa_{5}^2\beta}\int d^5x\sqrt{-g}\Big(R+\dfrac{12\beta}{ \tilde{L}^2}+\dfrac{(1-1/\Delta)L^2}{8N}(R_{\mu\nu\rho\lambda}R^{\mu\nu\rho\lambda}+R^2-4R_{\mu\nu}R^{\mu\nu})+... \Big),
\label{ACT19} \end{eqnarray}
where $\beta^{-1}=1+(1-1/\Delta)/(2N)$ and the dots denote terms of $\mathcal{O}(R^3)$ and corrections of $O(N^{-2})$.
Using \eqref{EOM11}, the $AdS$ radius of the $AdS$ solution obtained from \eqref{ACT19} is given by
\begin{eqnarray}
&\tilde{l}^2=\tilde{L}^2\dfrac{1+\sqrt{1-\dfrac{4\beta^2}{3\tilde{L}^2}(20a_1+4a_2+2a_3)}}{2\beta}, \\
&=\dfrac{\tilde{L}^2}{2\beta}\Big({1+\sqrt{1-\dfrac{1-1/\Delta}{N}}}+O(N^{-2})\Big).
\label{CUR2}
\end{eqnarray}
For the action \eqref{ACT19}, the horizon entropy\footnote{In the context of Kerr/CFT correspondence~\cite{Guica:2008mu}, the Wald's entropy of a $d=5$ rotating black hole in the presence of Gauss-Bonnet terms coincides with the Cardy's formula~\cite{Hayashi:2011uf}.} becomes
\begin{align}
&S^E=\dfrac{2\pi}{\kappa_5^2\beta}\Big(1-12a_3\beta \dfrac{1}{r_h^2}\Big)\oint d^{d-1}x\sqrt{h(r_h)} \nonumber \\
&=\dfrac{2\pi}{\kappa_5^2\beta}\Big(1-\dfrac{3(1-1/\Delta)\beta}{2N} \Big)\oint d^{3}x\sqrt{h(r_h)}, \label{ENT22x}
\end{align}
where in the last equality we used ${L}=r_h$ at the leading order in $N$.
According to~\eqref{ACA8}, the $A$-type anomaly is computed as follows:
\begin{eqnarray}
a_4=\dfrac{\pi^2\tilde{l}^3}{\kappa_5^2\beta}\Big(1-\dfrac{3(1-1/\Delta)\beta}{2N}\Big)=\dfrac{\Delta N^2}{4}+\dfrac{(\Delta-1) N}{2}+O(1), \label{ATY36x}
\end{eqnarray}
where we used \eqref{DEF10} and \eqref{CEN32}. Since \eqref{ATY36x} takes the same form as \eqref{ACA26}, this central charge is invariant under the scaling transformation $g_{\mu\nu}\to e^{2a}g_{\mu\nu}$.
The central charge \eqref{ATY36x} agrees with the $A$-type anomaly on the CFT side up to order $N$, cf.~\eqref{CEN47}.
Lastly, we discuss the holographic c-theorem proposed in~\cite{Myers:2010tj,Liu:2011ii}, where the usual $c$-theorem in quantum field theories~\cite{Zam,Cardy:1988cwa,Osborn:1989td,Jack:1990eb,Komargodski:2011vj} states that central charges decrease in the IR. To describe the holographic $c$-theorem, the bulk matter should satisfy two conditions, namely, the null energy condition $-(T^t_t-T^r_r)\ge 0$ and $\sigma'\ge 0$ where
\begin{eqnarray}
\sigma =\dfrac{16a_1+5a_2+4a_3}{16}R', \label{NUL21}
\end{eqnarray}
where $R'$ is the derivative of the Ricci scalar along the AdS radial direction. Here, we used the convention of the metric in~\cite{Liu:2011ii}.
The Gauss-Bonnet gravity \eqref{ACT19} clearly satisfies the second condition.
So, it is concluded that if we set the coefficients of the curvature-squared terms properly using the field redefinition, we only need to impose the bulk null energy condition $-(T^t_t-T^r_r)\ge 0$ to be consistent with the holographic c-theorem.
\section{Discussion}
In this paper, we computed the holographic EE in the effective curvature gravity dual to the $\mathcal{N}=2$ SCFTs in F-theory using the Wald entropy~\cite{Myers:2010tj} and confirmed that it is consistent with the proposal in~\cite{Myers:2010tj}. We realized the $A$-type anomaly including the $1/N^2$ corrections on the SCFT side from the log term of the holographic EE, not using the Gauss-Bonnet type but the general curvature squared gravity, as also discussed in~\cite{deBoer:2011wk,Hung:2011xb}. Here, the curvature term of this theory can be transformed into the Gauss-Bonnet term \eqref{ACT19} at subleading order in the $l_s^2/L^2$ expansion. Our new analysis in section 4.1 also shows that
the $1/N^2$ correction is included not in the higher derivative terms of $O(R^3)$ but in the gravity theory with general curvature squared terms. While the $A$-type anomaly can be computed by using $a$-maximization in supersymmetric theories, the analysis of this paper gives an interesting method to compute the entanglement entropy and the $A$-type anomaly.
More precisely, we chose the cosmological constant $-12/\tilde{L}^2$ to reproduce the Weyl anomaly of the dual SCFTs using holography, since this cosmological constant has not yet been determined from the 10-dimensional theory.
In appendix B, we considered the $1/N$ correction from the higher derivative term of the probe $D_4$-type 7-brane, where we can take the $g_s\to 0$ limit. This $1/N$ correction in our approximation agreed with the result on the CFT side up to a factor of 15/16. We leave a similar analysis of the more complicated cases $G=E_6,E_7,..$ with constant coupling for future work.
To obtain consistent results, we also discussed a holographic c-theorem~\cite{Myers:2010tj,Liu:2011ii}. Using the field redefinition properly, we showed that the Gauss-Bonnet gravity \eqref{ACT19} satisfies the null energy condition at subleading order in the $l_s^2/L^2$ expansion. This means that, using the field redefinition, a theory with non-trivial $\sigma$ \eqref{ACT7} may be equal to the Gauss-Bonnet gravity with $\sigma=0$ \eqref{ACT19} at finite order in $N$.\footnote{ Note that, as another direction, the action \eqref{ACT19} is the Gauss-Bonnet action and we can apply the holographic EE on the $AdS_5$ soliton~\cite{Ogawa:2011fw} using the action \eqref{ACT19}. We then notice an instability of the holographic EE for $a_3>0$. However, a supersymmetry mechanism seems to work for our theory for $\mathcal{N}=2$ SCFTs and we do not mind this instability. }
See also~\cite{Gubser:2002zh,Fujita:2008rs} for the paper discussing the holographic c-theorem in the context of double trace deformation.
\bigskip
\noindent {\bf Acknowledgments:} We would like to thank A. Buchel, C. Hoyos, M. Kaminski, A. Karch, T. Nishioka, N. Ogawa, H. Ooguri, S. Sugimoto, Y. Tachikawa and T. Takayanagi for discussions and comments. I would like to thank H. Hayashi and T. S. Tai for collaboration in the initial stage of this project and for helpful discussions. MF is supported by the postdoctoral fellowship program of the Japan Society for the Promotion of Science
(JSPS), and partly by JSPS Grant-in-Aid for JSPS Fellows No. 22-1028.
\section{Introduction}
Electroencephalogram (EEG) is a neuroimaging technique which registers the electromagnetic field created on the cerebral cortex. The sources are the synchronized post-synaptic potentials between interconnected neurons, and the activations are produced by large micro-scale masses of neuronal activity. Nevertheless, these are small generating sources compared to the range covered by an electrode during its recording \cite{Sri19}. In EEG, the sensors are organized as a discrete array on the scalp. This is a non-invasive method with high temporal resolution. However, the measurements are distorted by the tissues of the head, which define a non-homogeneous conductive volume. This condition introduces difficulties into the localization of the generating sources. The EEG Inverse Problem (IP) consists in estimating the origin of the measurements recorded on the scalp \cite{Sri19}.
The solution of the EEG IP needs a forward model that represents the electromagnetic field created by neuronal activations and its effects on the scalp. The best-known approaches approximate large neural masses by a finite set of points (dipoles). The relationship between these dipoles and the electrodes is linear when the model is discretized from the Maxwell equations. The outcome is a linear regression \cite{Paz17}:
\begin{equation}
V^{m\times t}=K^{m\times n}J^{n\times t}+ e^{m\times t}
\label{eq:1}
\end{equation}
where $V$ is an $m\times t$ matrix with the potential differences between the $m$ electrodes and the reference sensor at each of the $t$ time instants. The matrix $K^{m\times n}$ represents the relationship between the $n$ possible sources and the values recorded by the $m$ electrodes. The columns of $K$ are calculated from the conductivity and the geometry of the intermediate layers between the cerebral cortex and the scalp, for example the skin, the skull and the cerebrospinal fluid~\cite{Nu06}. The matrix $J^{n\times t}$ represents a Primary Current Density Distribution (PCD) of $n$ dipoles at $t$ instants of time.
In general, the number of sensors is very small (on the order of tens) with respect to the number of possible sources (on the order of thousands). For this reason, the system (\ref{eq:1}) is underdetermined and ill-posed \cite{Veg19}. In the matrix $K$, the columns associated with nearby sources have similar coefficients. These high correlations cause the ill-conditioning of the system and increase its sensitivity to noisy data. In real-world problems noise is an inherent factor, and the EEG signal is not an exception. Thus, this research proposes evolutionary strategies with more tolerance to noisy data than numerical methods based on mathematical models \cite{Qian15}.
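As an illustration of this underdetermination, the following minimal sketch (in Python, with hypothetical dimensions and a random stand-in for a real lead field $K$) builds the forward model of equation (\ref{eq:1}) and shows that the minimum-norm pseudoinverse estimate smears a single active dipole over many voxels:
\begin{verbatim}
# Minimal sketch of the forward model V = K J + e.
# Hypothetical sizes; K is a random stand-in for a real lead field.
import numpy as np

rng = np.random.default_rng(0)
m, n, t = 64, 5000, 1              # electrodes, sources, time samples
K = rng.standard_normal((m, n))

J_true = np.zeros((n, t))          # sparse ground truth: one dipole
J_true[1234, 0] = 5.0
V = K @ J_true + 0.01 * rng.standard_normal((m, t))

# Minimum-norm estimate via the pseudoinverse K^+: smeared, not sparse.
J_hat = np.linalg.pinv(K) @ V
print(np.count_nonzero(np.abs(J_hat) > 1e-3))   # many nonzero voxels
\end{verbatim}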
Electromagnetic Source Imaging (ESI) encompasses the approaches proposed to solve the EEG IP. Due to the ill-conditioning of the linear system, they include a priori information in the optimization process. Regularization models minimize a linear combination of the squared fit and penalty functions. These constraints represent assumptions on the true nature of the sources. A classic model to obtain sparse estimates is LASSO (Least Absolute Shrinkage and Selection Operator), which uses the $L1$ norm as penalty function ($\|J\|_1$):
\begin{equation}
\min_J \|V-KJ\|^2_2 + \lambda\|J\|_1
\label{eq:2}
\end{equation}
However, L0 ($\|\cdot\|_0: \mathbb{R}^{n\times 1}\rightarrow \mathbb{N}$) is the right norm to model sparsity, and L1 is just an approximation. In \cite{Nata15}, B.K. Natarajan proved that problem (\ref{eq:2}) with the L0 norm as penalty function is NP-hard in Combinatorial Optimization. Unfortunately, this penalty function is discontinuous and non-convex in $\mathbb{R}$.
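To make the combinatorial character concrete, an exact L0 solver must enumerate supports (a sketch added here for illustration, not part of any of the cited methods); the $\binom{n}{k}$ loop below is intractable at the scale of thousands of sources:
\begin{verbatim}
# Sketch: exact k-sparse least squares by enumerating supports.
# The C(n, k) loop is what makes L0 problems NP-hard in general.
import itertools
import numpy as np

def best_k_sparse(K, V, k):
    n = K.shape[1]
    best, best_err = None, np.inf
    for S in itertools.combinations(range(n), k):
        cols = list(S)
        J_S, *_ = np.linalg.lstsq(K[:, cols], V, rcond=None)
        err = np.linalg.norm(V - K[:, cols] @ J_S)
        if err < best_err:
            best, best_err = (cols, J_S), err
    return best, best_err
\end{verbatim}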
Regularization models also allow combining the objective function with other restrictions to obtain smoothness and nonnegative coefficients. In the field, some relevant methods are: ENET (Elastic Net) \cite{Zou05}, Fused LASSO \cite{Tib05}, Fusion LASSO (LASSO fusion), SLASSO (Smooth LASSO) \cite{Land96} and NNG (Nonnegative Garrote) \cite{Gij15}. They are particular derivations of the expression \cite{Veg19}:
\begin{equation}
\hat{J}=argmin_{J} \{(V-KJ)^T(V-KJ)+\Uppsi(J)\},
\label{eq:3}
\end{equation}
where $\Uppsi(J)=\sum_{r=1}^{R} \lambda_r \sum^{N_r}_{i=1} g^{(r)}(|\theta^{(r)}_i|)$ and $\theta^{(r)}=L^{(r)}J$. A more compact equation is:
\begin{equation}
\min_{J}\, \|V-KJ\|^2_2 + \sum_{r=1}^{R} \lambda_rp^{(r)}(\theta^{(r)})
\label{eq:4}
\end{equation}
where the objective function is a weighted sum of the squared error and the penalty functions $p^{(r)}(\theta^{(r)})=\sum_{i=1}^{N_r}g^{(r)}(|\theta^{(r)}_i|)$ with $r\in \{1,\dots,R\}$. The matrix $L$ is structural information incorporated into the model and represents possible relationships between variables. For example, Ridge-L is a derived model with the L2 norm as penalty ($p$) and a Laplacian matrix ($L$) of second derivatives. The regularization parameter $\lambda$ is unknown, and its estimation involves heuristic methods to choose the optimal value \cite{Paz17}, for example the minimizer of the Generalized Cross-Validation (GCV) function \cite{Veg19} (a sketch is given below). Although this method does not guarantee success in all cases, it proposes solutions that are good enough for the problem. Multi-objective approaches avoid these limitations and explore the solution space guided by the best compromise between objective functions. The next section introduces relevant concepts of multi-objective theory, and Section 3 describes the proposed evolutionary approach, dedicating a brief subsection to each stage.
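As mentioned above, the GCV criterion can be sketched as follows (the standard GCV formula for a Ridge-type solution; the function names are illustrative, not taken from the cited works):
\begin{verbatim}
# Sketch: Generalized Cross-Validation for a Ridge-type solution.
# GCV(lambda) = (||(I - H)V||^2 / m) / (tr(I - H) / m)^2,
# with hat matrix H = K K^T (K K^T + lambda I)^{-1}.
import numpy as np

def gcv(K, V, lam):
    m = K.shape[0]
    G = K @ K.T
    H = G @ np.linalg.inv(G + lam * np.eye(m))
    R = (np.eye(m) - H) @ V
    return (R @ R / m) / (np.trace(np.eye(m) - H) / m) ** 2

# lam_star = min(lams, key=lambda l: gcv(K, V, l))  # log-spaced grid
\end{verbatim}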
\section{Multi-objective optimization}
Multi-objective optimization estimates a set of feasible solutions that maintain high evaluations and an equal commitment among the objective functions. These functions share the same priority in the model, and the solution methods have to avoid preferences in the search process. The multi-objective version of a regularization model incorporates the same functions into a vector of objectives. The unrestricted bi-objective model for LASSO is:
\begin{equation}
\min (\|V-KJ\|^2_2, \|J\|_1)
\label{eq:5}
\end{equation}
Multi-objective formulations do not use unknown auxiliary parameters, unlike regularization models. This advantage allows them to solve problems with poor information about the relationship among the objective functions. Some approaches propose the model \ref{eq:5} to solve inverse problems in image and signal processing \cite{Gong16, Gong17}. The multi-objective model for equation \ref{eq:4} without regularization parameters is:
\begin{equation}
\min_J (\|V-KJ\|^2_2, p^{(1)}(\theta^1),\dots, p^{(R)}(\theta^R)) \qquad R\in\mathbb{N}
\label{eq:6}
\end{equation}
Thus, a multi-objective version of each regularization model can be obtained by changing the functions $p^{(i)} (\theta^i)$. For example, equation \ref{eq:5} assumes that $R=1$, $g^{(1)}(x)=x$ and $L^1= I_n$; therefore $p^{(1)} (\theta^1)= \|J\|_1$.
In multi-objective optimization, the definition of Pareto Dominance establishes a set of optimal solutions (Pareto Optimal Set) for the model. If $F:\Omega\rightarrow \Lambda$ is a vector of functions, $\Omega$ is the space of decision variables and $\Lambda$ is the space defined by the objective functions, then \cite{Coe07}:
\begin{definition}[Pareto Dominance]
\label{def:1}
A vector $u=(u_1,\dots,u_k)$ dominates another vector $v=(v_1,\dots,v_k)$ (denoted as $u\preceq v$) if and only if $u$ is partially less than $v$, that is, $\forall i \in \{1,\dots,k\}$, $u_i\leq v_i \wedge \exists j\in \{1,\dots,k\}:u_j<v_j$.
\end{definition}
A solution for a Multi-Objective Problem (MOP) is defined as:
\begin{definition}[Pareto Optimality]
\label{def:2}
A solution $x\in \Omega$ is Pareto Optimal on the $\Omega$ space if and only if $\nexists x'\in \Omega$ for which $v=F(x')=(f_1(x'),\dots,f_k(x'))$ dominates $u=F(x)=(f_1(x),\dots,f_k(x))$.
\end{definition}
The set of such solutions is defined as:
\begin{definition}[Pareto Optimal Set]
\label{def:3}
For a given MOP with vector of objective functions $F(x)$, the Pareto Optimal Set $P^{*}$ is defined as:
\begin{equation}
P^{*}:=\{x\in\Omega\, |\, \nexists\, x'\in \Omega ,\; F(x')\preceq F(x)\}
\label{eq6}
\end{equation}
\end{definition}
Then, $F(P^*)\subset \Lambda$ defines a Pareto Front in the image space. The formal definition is:
\begin{definition}[Pareto Front]
\label{def:6}
For a MOP with vector of objective functions $F(x)$ and Pareto Optimal Set $P^*$, the Pareto Front $FP^*$ is defined as:
\begin{equation}
FP^*:=\{u=F(x)\,|\,x\in P^*\}
\label{eq7}
\end{equation}
\end{definition}
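Definitions \ref{def:1}--\ref{def:6} translate directly into code. The sketch below (illustrative Python, assuming minimization of every objective) checks dominance and filters the non-dominated vectors from a finite set:
\begin{verbatim}
# Sketch of Definition 1 (Pareto Dominance) and of extracting the
# non-dominated subset of a finite set of objective vectors.
import numpy as np

def dominates(u, v):
    u, v = np.asarray(u), np.asarray(v)
    return bool(np.all(u <= v) and np.any(u < v))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

front = pareto_front([(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)])
# (3.0, 3.0) is dominated by (2.0, 2.0); the other three survive.
\end{verbatim}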
The proposed algorithm searches the solution space for a $P^*$ such that the corresponding $FP^*$ has a uniform distribution among its points. For MOPs that require a single solution, researchers have proposed applying a selection criterion to choose a result from $FP^*$. The Decision Maker (DM) is the procedure that selects a solution from $FP^*$, and algorithms apply it at the beginning (a priori), at the end (a posteriori) or during the execution (progressive) \cite{Coe07}.
\section{Method}
Evolutionary algorithms estimate high-quality solutions to multi-objective problems in real-life applications \cite{Coe07}. These strategies allow hybridization with other algorithms to exploit heuristic information about the problem. In this research, the proposed Multi-objective Evolutionary Algorithm (MOEA) combines co-evolutionary strategies and local search techniques to solve the EEG IP. At runtime, the algorithm holds a set of solutions that is updated in each iteration. The evaluation of the objective functions and the theory of the problem establish the comparison between solutions. However, multi-objective models have to use a comparison criterion based on Pareto Dominance. The stability of the algorithm was evaluated at different levels of noise over simulated data. The quality measures were Localization Error, Spatial Resolution and Visibility.
\subsection{Evolutionary Multi-objective Algorithm based on Anatomical Restrictions}
This evolutionary approach incorporated a coevolutionary strategy based on anatomical constraints to generate new sets of solutions. For this reason, the method was named Multi-objective Evolutionary Algorithm based on Anatomical Restrictions (MOEAAR). In addition, the procedure included a local search method to improve accuracy and intensify the search in specific areas of the solution space. The last stage applied a Decision Maker, which selects the result from a set of possible candidates ($P^*$). Figure \ref{fig:1} shows the stages of the algorithm in a diagram.
\begin{figure}
\centering
\includegraphics[scale=0.6]{diagram}
\caption{Diagram of MOEAAR with its general steps. The workflow is divided into two general stages: Search Procedure and Decision Maker. The first one explores the solution space to find relevant solutions that keep a compromise among the objectives. The second one selects the result from the last $FP^*$ obtained in the previous stage.}
\label{fig:1}
\end{figure}
\subsection*{Initial Population}
The Search Procedure maintains a set of solutions with the improvements obtained between iterations. The solutions are represented as individuals in a population ($Pob_i$). The initial population ($Pob_0$) is made up of $N$ individuals, one representing each region of interest (ROI). An individual represents the region that contains the sources with activations in the associated solution. The value of the activations used to create the initial population was the L1 norm of the pseudoinverse ($K^+$) solution \cite{Dem97}.
\subsection*{Cooperative Coevolutionary}
The Cooperative Coevolutionary methodology divided the high-dimensional problem into subproblems with fewer decision variables. In the Coevolution stage, these subsets of variables defined new subpopulations by projecting the original individuals onto the associated subspaces. Then, the Search Procedure began with an evolutionary algorithm which generates new candidates with just one step per subproblem. In this approach, NSGA II was selected for the task. The result was the combination of successive improvements through independent optimizations per subpopulation. For this reason, a Cooperation process was applied to estimate relevant solutions in the evaluation stage between iterations. This step required a context vector (VC) with information on the features improved in previous iterations.
This strategy emphasizes the search based on correlated variables that share similar functions. In the EEG IP, anatomical constraints allow clustering the possible generating sources, and the proposed algorithm follows the same criterion. Algorithm \ref{algo:1} presents pseudocode with the steps of the Cooperative Coevolutionary procedure. In line 2, a partition is established by an anatomical atlas from the Montreal Neurological Institute. In line 5, a subset of variables ($G_j$) defines a subpopulation ($pop_i^j$) by projection. Then, NSGA II (the EA function) generates new individuals of the highest quality from $pop_i^j$. Afterwards, the variables of $G_j$ are updated in VC with $best_j$. Steps 4--9 are repeated for $t$ iterations until the stop condition is met.
\begin{algorithm}
\DontPrintSemicolon
\KwData{$pop_i$(last population), $f$(objective function)}
\KwResult{$pop_{new}$}
\Begin
{
$P(G)=\{G_1,\dots,G_k\} \leftarrow \textnormal{partition(n)}$ \;
\While{condition()}
{
\ForEach{$G_j \in P(G)$}
{
$pop_i^j\leftarrow \textnormal{subPopulation}(pop_i,G_j)$\;
$[best_j, popR]\leftarrow \textnormal{EA}(f,pop_i^j,VC,G_j)$\;
$pop_{new}\leftarrow \textnormal{sortPopulation}(popR,G_j)$\;
$VC \leftarrow \textnormal{updateVC}(best_j,G_j)$\;
}
$pop_i\leftarrow pop_{new}$\;
}
}
\caption{CC($pop_i,f$)}
\label{algo:1}
\end{algorithm}
The NSGA II used reproduction operators to generate new individuals (or solutions) between iterations. Reproduction operators are designed to ensure the exploitation and exploration of promising regions of the feasible solution space \cite{Coe07}. First, a Binary Tournament procedure with repetition selected the individuals used to generate new solutions. Then, an arithmetic crossover operator generated new individuals from the selection result. A new individual is defined as:
\begin{equation}
d_i=\alpha p^1_i + (1-\alpha)p^2_i.
\end{equation}
where $d=\{d_1,d_2,\dots,d_n \}$ is the solution vector of the new individual and $p^k=\{p_1^k,p_2^k,\dots,p_n^k\}$ are the selected parents with $k\in \{1,2\}$. The values of $\alpha$ were generated randomly following a normal distribution $N(0,1)$.
A mutation procedure was the next step, to guarantee more diversity in the evolution process. The option selected was the polynomial mutation, and new individuals were obtained as:
\begin{equation}
d_i^{(t+1)}=d_i^{(t)} + \sigma^{(t+1)}\cdot N(0,1).
\end{equation}
The perturbation $\sigma$ is the mutation step, and its initial value decreases during the execution of the algorithm. This parameter controls the diversity between a parent ($d_i^{(t)}$) and the new individual ($d_i^{(t+1)}$). The mutation process obtained a new population from a subset of the solutions generated by the crossover operator in the previous step.
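A compact sketch of these two operators, following the equations above (the function names are illustrative), is:
\begin{verbatim}
# Sketch of the reproduction operators described above: arithmetic
# crossover with alpha ~ N(0,1) and a Gaussian perturbation whose
# step sigma decays during the run.
import numpy as np

rng = np.random.default_rng(1)

def arithmetic_crossover(p1, p2):
    alpha = rng.standard_normal()
    return alpha * p1 + (1.0 - alpha) * p2

def mutate(d, sigma):
    return d + sigma * rng.standard_normal(d.shape)

p1, p2 = rng.random(10), rng.random(10)
child = mutate(arithmetic_crossover(p1, p2), sigma=0.1)
\end{verbatim}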
The new subpopulations are the union of the individuals from the crossover and mutation operators. The result was a population ($Pop_{CC}$) created by a reconstruction process that combines the subpopulations. The next stage was designed to improve accuracy by forcing a deeper search in promising areas of the solution space. For this reason, a Local Search procedure was selected for this task; a brief description can be found in the section below.
\subsection*{Local Search}
Local search (LS) algorithms are optimization techniques that search for optimal solutions using neighborhood information. Among the most relevant methods for continuous problems are the Coordinate Descent method, the Maximum Descent method and the Newton-Raphson variants. In addition, new strategies that combine thresholding techniques with directional descent methods have been proposed in the last decades.
In MOEAAR, the LS was used to achieve high precision in the solutions obtained by the Cooperative Coevolutionary stage. This module performs a descending search on each individual of the population $Pop_{CC}$ to generate $Pop_{LS}$. The implemented method was proposed in \cite{Li14} as Local Smooth Threshold Search (LSTS); however, some adaptations were included for the EEG IP.
The LSTS method is a variant of the Smoothed Threshold Iterative Algorithm designed to obtain sparse estimates in minimization problems with the L1 norm. The algorithm generates a sequence of optimal solutions $\{J^k,k=0,1,2,\dots,n\}$ where $J^{k+1}$ is obtained from $J^k$ by solving the optimization problem:
\begin{equation}
J^{k+1}=\min_J \frac{1}{2} \|J-v^k\|^2_2 + \frac{\lambda}{\beta^k} p(J)
\end{equation}
where the diagonal matrix $\beta^kI$ is an estimation of the Hessian matrix $\nabla^2f(J^k)$ and
\begin{equation}
v^k=J^k-\frac{1}{\beta^k}\nabla f(J^k).
\end{equation}
Then, if $p(J)=\|J\|_1$:
\begin{equation}
J_i^{k+1}=soft(v_i^k,\frac{\lambda}{\beta^k})
\end{equation}
where $soft(u,a)=sgn(u)\cdot\max\{|u|-a,0\}$ is the smoothed threshold function. For the value of $\lambda$ that balances both terms $(\|V-KJ\|_2^2,\|J\|_1)$, the following was proposed:
\begin{equation}
\hat{\lambda}=\frac{\|V-KJ\|^2_2}{\|J\|_1}.
\end{equation}
In this case, $J=J_{CC}$ was considered, the best-fit estimate obtained in $Pop_{CC}$. The Barzilai-Borwein equation was proposed in \cite{Li14} to obtain the parameter $\beta^k$.
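A minimal sketch of one LSTS iteration, under the assumption of a single time sample (so $J$ and $V$ are vectors) and with a Barzilai-Borwein estimate of $\beta^k$, reads:
\begin{verbatim}
# Sketch of one LSTS step for f(J) = 0.5 ||V - K J||_2^2 with an L1
# penalty: a gradient step of size 1/beta followed by soft thresholding.
# Initialize with a plain gradient step before the first BB update.
import numpy as np

def soft(u, a):
    return np.sign(u) * np.maximum(np.abs(u) - a, 0.0)

def lsts_step(K, V, J, J_prev, grad_prev, lam):
    grad = K.T @ (K @ J - V)
    s, y = J - J_prev, grad - grad_prev
    beta = (s @ y) / (s @ s)        # Barzilai-Borwein estimate of beta^k
    v = J - grad / beta
    return soft(v, lam / beta), grad
\end{verbatim}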
The result was a new population $Pop_{LS}$ and a temporary set defined as $Pop_T=Pop_{CC} \cup Pop_{LS}$ such that $num(Pop_T)=2\cdot N$. This set is twice the size of the initial population ($N$). Thus, the next step was to reduce $Pop_T$ to the $N$ individuals with the highest evaluation and compromise between the objective functions. First, the individuals were ordered by the operators Non-dominated Sorting \cite{Zha01} and Crowding Distance \cite{Deb02, Chu10}. Then, the $N$ most relevant individuals were selected and the rest rejected, forming a new population with the best of $Pop_T$. This process also generated a Pareto Front ($PF_{know}$) that was updated in each cycle. A Decision Maker process selected the result of the algorithm from the last $PF_{know}$.
\subsection*{Decision Maker}
The Decision Maker process selects the solution with the best compromise from the Pareto Front. In this approach, a first stage selected the region with the most instances among the solutions of $PF_{know}$. This criterion of convergence between structures is justified by the trend towards a uniform population in the last iterations. Then, the Pareto Optimal Set was reduced to the solutions that converge to the selected structure. In other words, $ROI^*=ROI_{i}$ such that $i= \arg\max_i cantRep(ROI_i)$, where:
\begin{equation}
cantRep(ROI_i)=\sum_{j=1}^{|PF_{know}|} g(ROI_i,Ind_j)
\end{equation}
\begin{equation}
g(ROI,Ind)=
\left\lbrace
\begin{array}{ll}
1,& \exists v\in\mathbb{R}^3, [v\in ROI]\wedge\\
&\hspace{30pt} [Ind(C^{-1}(v))\in\mathbb{R}^{*}] \\
&\\
0,& \textnormal{otherwise}.
\end{array}
\right.
\end{equation}
where $Ind$ is an individual of the Pareto Optimal Set associated with $PF_{know}$, $C:I\rightarrow \mathbb{R}^3$ is a mapping function and $I=\{1,\dots,n\}$ is the set of indexes. The result was obtained from the elbow of the B-Spline curve that interpolates the points selected by the filtering process on $PF_{know}$.
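As an illustration, a common elbow heuristic (a stand-in for the B-Spline interpolation described above, not the exact procedure) selects the front point farthest from the chord joining the two extreme points:
\begin{verbatim}
# Sketch: elbow of a 2-D Pareto front as the point with maximum
# distance to the straight line through the two extreme points.
import numpy as np

def elbow(front):
    F = np.asarray(sorted(front))            # (f1, f2) pairs
    a, b = F[0], F[-1]
    d = (b - a) / np.linalg.norm(b - a)      # unit chord direction
    proj = a + np.outer((F - a) @ d, d)      # projections onto chord
    return F[np.argmax(np.linalg.norm(F - proj, axis=1))]

print(elbow([(0.0, 1.0), (0.1, 0.3), (0.4, 0.15), (1.0, 0.0)]))
# -> [0.1 0.3]
\end{verbatim}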
\section{Results}
The study was designed to compare MOEAAR with 3 classic methods selected for their relevance in the resolution of the EEG Inverse Problem. The testing data included 16 simulations with different topological characteristics. In the tests, MOEAAR performed 100 iterations of the Search Procedure before estimating the PF. In the coevolutionary stage, the crossover operator generated new individuals from $80\%$ of the individuals selected with repetition, while the mutation process involved only $50\%$. The comparative analysis was between the MOEAAR solutions and the estimates proposed by Ridge-L, LASSO and ENET-L. The evolutionary approach was tested with different models (L0, L1 and L2), and each combination defined a method.
The simulated sources were superficial, with different amplitudes bounded by 5 $nA/cm^2$ and extensions smaller than two millimeters. The approximated surface (cortex) was a grid with 6,000 vertices and 12,000 triangles. However, the model accepted only 5,656 possible generators, and the matrix $K$ was calculated using the mathematical model of three concentric, homogeneous and isotropic spheres \cite{Han07}. In addition, the sensor distribution was a 10-20 assembly with 128 electrodes. The sources were located in 4 regions or structures: Frontal (Upper Left), Temporal (Lower Left), Occipital (Upper Left) and Precentral (Left). Sources can be classified into two types by extension: Punctual or Gaussian. The first one does not involve other activations, whereas the second one follows a behavior similar to a Normal distribution, creating activations in the voxels around the source. For each region, two representative simulations were created, with Punctual and Gaussian extension for the same generator. Then, equation \ref{eq:1} was applied to generate two synthetic data sets $V$ (SNR=0 and SNR=3) for each vector $J$. The result was a testing set of 16 synthetic data sets with differences in three factors: region, extension type and level of noise. The indicators for evaluation were the classic quality measures of the field: Localization Error, Spatial Resolution \cite{Tru04} and Visibility \cite{Veg08}.
Figure \ref{fig:2} illustrates the Localization Error obtained from the estimates generated by the methods. This measure quantifies how well an algorithm estimates the right coordinates of a source on the cortex. Subgraph A compares the results on data with punctual sources for different levels of noise. In this context, the coefficients are above 0.9 for SNR=0, but the quality changes between the classic methods for SNR=3. This condition affects the classic methods (see Ridge-L$^*$, LASSO$^*$ and ENET-L$^*$), decreasing their accuracy in some regions. However, the MOEAAR variants hold their coefficients stable and above 0.9. Subgraph B shows the same behavior for data with Gaussian sources.
Figure \ref{fig:3} shows comparative information on the Visibility indicator for the estimates proposed by the methods. Subgraph A characterizes the case study with a punctual source per simulation. The result highlights the estimates of LASSO, with an exception in the Temporal region. Unfortunately, its quality decreases, like that of the rest of the classic methods, on data with noise (SNR=3) in all regions. In this subgraph, MOEAAR-L0 maintains relevant results on data without noise and with SNR=3. Subgraph B represents the same evaluation process for data with a Gaussian source at both levels of noise. In this case, the quality between regions is different for each method, and the noise affects the coefficients in all cases too.
The Spatial Resolution (SR) indicator quantifies the performance of the methods in estimating the right extensions of the sources. Figure \ref{fig:4} shows the quality of each method according to its estimates. Subgraph A shows the results from data with a punctual source, and the most relevant coefficients belong to LASSO and MOEAAR-L0. Neither method changes much between data with SNR=0 and SNR=3. In addition, MOEAAR-L1 proposed solutions with extensions in agreement with the punctual simulations, with an exception in the Temporal region for data without noise. The same difficulty decreases its quality over noisy data (see MOEAAR-L1$^*$, Figure \ref{fig:4}, subgraph A) in the Precentral region. Subgraph B represents the SR data associated with the estimates proposed for simulations with a Gaussian source. ENET-L estimates relevant solutions with coefficients above 0.6, but found some difficulty in the Frontal region. In addition, its estimates fail for data with noise (SNR=3) in the Occipital, Precentral and Frontal regions. In the graph, MOEAAR-L2$^*$ proposes higher coefficients for the Occipital, Temporal and Frontal regions. However, its quality decreases from 0.97647 to 0.46678 on noisy data in the Occipital region.
\section{Discussion}
This section analyses some highlighted results that stem from the previous comparative study, which evidences the tolerance of the MOEAAR variants to noisy data. The evolutionary approach guaranteed high accuracy and stability in the source localization (Figure \ref{fig:2}). The method maintains coefficients above 0.9 for both kinds of sources, punctual and Gaussian. Moreover, the proposed approach did not change the Localization Error much between the different levels of noise (SNR=0 and SNR=3). The main reason is the high tolerance of evolutionary strategies to ghost sources and noisy data. In contrast, the quality of the classic methods worsens as the noise increases.
Visibility is a complicated indicator, and the improvements in the field have been modest in the last decades. However, the results obtained in Figure \ref{fig:3} for punctual simulations without noise highlight the quality of MOEAAR with L0. In general, this norm is not usable with the classic methods. In fact, previous research proposes approximations like SCAD, thresholding methods or the L1 norm. These efforts are justified because the right function for sparsity is the L0 norm. Thus, this is the first approach that includes the L0 norm in the estimation of solutions for the EEG Inverse Problem. A comparative analysis for punctual sources between MOEAAR-L0$^*$ and LASSO$^*$ in subgraph \ref{fig:3}.A shows higher stability on noisy data for MOEAAR with L0 than for LASSO.
The Spatial Resolution indicator evaluates the capacity of a method to estimate the extension of the real source. For punctual simulations, LASSO obtained the best quality among the classic methods (see Figure \ref{fig:4}.A). In general, sparse methods are well known to propose estimates with extensions very close to the truth \cite{Veg19}. In addition, MOEAAR with the L0 norm reaches relevant coefficients at both levels of noise (SNR=0 and SNR=3), followed by MOEAAR-L1. However, this method had some difficulties in estimating the right extension in the Temporal region. For Gaussian simulations, the best results were obtained by ENET-L and MOEAAR with L2 on the data without noise, although the latter proposed estimates with higher quality for this measure than ENET-L in the Occipital, Temporal and Frontal regions.
\section{Conclusions}
The multi-objective formulation with a priori information allows the application of MOEAs to solve the EEG IP without regularization parameters. The results for the Localization Error proved its capacity to find cortical and deep sources (Temporal and Precentral regions). MOEAAR is an algorithm with high stability and tolerance to noisy data in the source localization. The analysis gave enough reasons to consider the evolutionary approach as a viable method to solve the EEG Inverse Problem. Therefore, MOEAAR could be tested with real data and Evoked Potentials in future works. The evolutionary strategies propose a way to obtain sparse solutions with the L0 norm.
\bibliographystyle{unsrtnat}
\section{Introduction}
Gravitational-wave (GW) observations~\cite{LIGOScientific:2016aoc,LIGOScientific:2020ibl, LIGOScientific:2018mvr} have improved our understanding of compact binary systems, their properties, and their formation channels~\cite{LIGOScientific:2018jsj,LIGOScientific:2020kqk}.
A crucial component in searching for GW signals and inferring their parameters is accurate analytical waveform models, in which spin is an important ingredient given its significant effect on the orbital dynamics.
Three main analytical approximation methods exist for describing the dynamics during the inspiral phase:
the post-Newtonian (PN), the post-Minkowskian (PM), and the small-mass-ratio (gravitational self-force (GSF)) approximations.
The PN approximation is valid for small velocities and weak gravitational potential $v^2/c^2 \sim GM/rc^2 \ll 1$, and is most applicable for comparable-mass binaries in bound orbits.
Many studies have contributed to improving the description of the conservative PN dynamics, for nonspinning binaries~\cite{Blanchet:2000ub,Jaranowski:1997ky,Pati:2000vt,Damour:2014jta,Damour:2015isa,Damour:2016abl,Bernard:2017ktp,Marchand:2017pir,Foffa:2019rdf,Foffa:2019yfl,Blumlein:2020pog,Blumlein:2020pyo,Larrouturou:2021dma,Blumlein:2021txe,Almeida:2021xwn}, at spin-orbit (SO) level~\cite{tulczyjew1959equations,Tagoshi:2000zg,Porto:2005ac,Faye:2006gx,Blanchet:2006gy,Damour:2007nc,Blanchet:2011zv,Hartung:2011te,Perrodin:2010dy,Porto:2010tr,Hartung:2013dza,Marsat:2012fn, Bohe:2012mr,Levi:2015uxa,Levi:2020kvb}, spin-spin~\cite{Hergt:2008jn,Porto:2006bt,Porto:2008jj,Porto:2008tb,Hergt:2010pa,Hartung:2011ea,Levi:2011eq,Levi:2015ixa,Cho:2021mqw}, and higher orders in spin~\cite{Levi:2014gsa,Levi:2016ofk,Levi:2019kgk,Levi:2020lfn,Vines:2016qwa,Siemonsen:2019dsu}.
For reviews, see Refs.~\cite{Futamase:2007zz,Blanchet:2013haa,Schafer:2018kuf,Levi:2015msa,Porto:2016pyg,Levi:2018nxp}.
The PM approximation is valid for arbitrary velocities in the weak field $GM/rc^2 \ll 1$, and is most applicable for scattering motion since relativistic velocities can be achieved.
It was pioneered by the classic results of Westpfahl~\cite{Westpfahl:1979gu,Westpfahl:1980mk},
with rapid progress using
classical methods~\cite{Bel:1981be,schafer1986adm,Ledvinka:2008tk,Damour:2016gwp,Damour:2017zjx,Damour:2019lcq,Blanchet:2018yvb},
scattering amplitudes~\cite{Arkani-Hamed:2017jhn,Bjerrum-Bohr:2018xdl,Kosower:2018adc,Cheung:2018wkq,Bautista:2019tdr,Bern:2019nnu,Bern:2021dqo,Bjerrum-Bohr:2019kec,Bjerrum-Bohr:2021din,Cristofoli:2020uzm},
effective field theory~\cite{Foffa:2013gja,Kalin:2019rwq,Kalin:2020fhe,Kalin:2020mvi,Dlapa:2021npj},
and worldline quantum field theory~\cite{Mogull:2020sak,Jakobsen:2021smu}.
Spin effects were included in PM expansions using all these approaches in Refs. \cite{Bini:2017xzy,Bini:2018ywr,Vines:2017hyw,Vines:2018gqi,Guevara:2019fsj,Kalin:2019inp,Bern:2020buy,Chung:2020rrz,Maybee:2019jus,Bautista:2021wfy,Kosmopoulos:2021zoq,Liu:2021zxr,Jakobsen:2021lvp,Jakobsen:2021zvh},
and radiative contributions in Refs.~\cite{Damour:2020tta,Bini:2021gat,Bini:2021qvf,Bini:2021jmj,Saketh:2021sri}.
The small-mass-ratio approximation $m_1/m_2 \ll 1$ is based on GSF theory, and is most applicable for extreme-mass-ratio inspirals (see, e.g., Refs.~\cite{Mino:1996nk,Quinn:1996am,Barack:2001gx,Barack:2002mh,Detweiler:2002mi,Barack:2002bt,Detweiler:2005kq,Rosenthal:2006iy,Hinderer:2008dm,Gralla:2008fg,Shah:2010bi,Keidl:2010pm,Pound:2010pj,Barack:2011ed,Pound:2012nt,Pound:2012dk,Gralla:2012db,Pound:2014koa,Pound:2017psq,vandeMeent:2016pee,vandeMeent:2017bcc} and the reviews~\cite{Barack:2009ux,Pound:2015tma,Barack:2018yvs,Pound:2021qin}.)
Analytic GSF calculations to high PN orders were performed at first order in the mass ratio for the gauge-invariant redshift~\cite{Detweiler:2008ft,Bini:2013rfa,Kavanagh:2015lva,Hopper:2015icj,Bini:2015bfb,Kavanagh:2016idg,Bini:2018zde,Bini:2019lcd,Bini:2020zqy} and the spin-precession frequency~\cite{Dolan:2013roa,Bini:2014ica,Akcay:2016dku,Kavanagh:2017wot,Bini:2018ylh,Bini:2019lkm}.
There has also been recent important work on numerically calculating the binding energy and energy flux at second order in the mass ratio~\cite{Pound:2019lzj,Warburton:2021kwk}.
The effective-one-body (EOB) formalism~\cite{Buonanno:1998gg,Buonanno:2000ef} combines information from different analytical approximations with numerical relativity (NR) results, while recovering the strong-field test-body limit, thereby extending each approximation's domain of validity and improving the inspiral-merger-ringdown waveforms.
EOB models have been constructed for nonspinning \cite{Damour:2015isa,Damour:2000we,Buonanno:2007pf,Damour:2008gu,Pan:2011gk,Nagar:2019wds}, spinning~\cite{Damour:2008qf,Barausse:2009xi,Barausse:2011ys,Nagar:2011fx,Damour:2014sva,Balmelli:2015zsa,Khalil:2020mmr,Pan:2013rra,Taracchini:2012ig,Taracchini:2013rva,Babak:2016tgq,Bohe:2016gbl,Cotesta:2018fcv,Ossokine:2020kjp,Nagar:2018plt,Nagar:2018zoe}, and eccentric binaries~\cite{Bini:2012ji,Hinderer:2017jcs,Nagar:2021gss,Khalil:2021txt}.
In addition, information from the post-Minkowskian ~\cite{Damour:2016gwp,Damour:2017zjx,Antonelli:2019ytb,Damgaard:2021rnk} and small mass-ratio approximations~\cite{Damour:2009sm,Barausse:2011dq,Akcay:2012ea,Antonelli:2019fmq} have been incorporated in EOB models.
Recently, a method~\cite{Bini:2019nra}, sometimes dubbed the ``Tutti Frutti'' method~\cite{Bini:2021gat}, that combines all these formalisms has been used to derive PN results valid for arbitrary mass ratios from GSF results at first order in the mass ratio.
The method relies on the simple mass-ratio dependence of the PM-expanded scattering angle~\cite{Damour:2019lcq} (see also Ref.~\cite{Vines:2018gqi}), making it possible to relate the local-in-time part of the Hamiltonian, or radial action, to GSF invariants, such as the redshift and precession frequency.
The nonlocal-in-time part of the conservative dynamics, due to backscattered radiation emitted at earlier times, is derived separately, since it is calculated in an eccentricity expansion that differs between bound and unbound orbits.
This approach has been used to derive the 5PN conservative dynamics for nonspinning binaries except for two coefficients~\cite{Bini:2019nra,Bini:2020wpo}, the 6PN dynamics except for four coefficients~\cite{Bini:2020hmy,Bini:2020nsb}, and the full 4.5PN SO and 5PN aligned spin$_1$-spin$_2$ dynamics~\cite{Antonelli:2020aeb,Antonelli:2020ybz}.
In this paper, we determine the 5.5PN SO coupling for the two-body dynamics, which is the fourth-subleading PN order, except for one coefficient at second order in the mass ratio.
Throughout, we perform all calculations for spins aligned, or antialigned, with the direction of the orbital angular momentum. However, the results are valid for precessing spins~\cite{Antonelli:2020aeb}, since at SO level, the spin vector only couples to the angular momentum vector.
The results of this paper and the procedure used can be summarized as follows:
\begin{enumerate}
\item In Sec.~\ref{sec:nonloc}, we calculate the nonlocal contribution to the 5.5PN SO Hamiltonian for bound orbits, in a small-eccentricity expansion up to eighth order in eccentricity.
We do this for a harmonic-coordinates Hamiltonian, then incorporate those results into the gyro-gravitomagnetic factors in an EOB Hamiltonian.
\item In Sec.~\ref{sec:local}, we determine the local contribution by relating the coefficients of the local Hamiltonian to those of the PM-expanded scattering angle.
We then calculate the redshift and spin-precession invariants from the total Hamiltonian, and match their small-mass-ratio expansion to first-order self-force (1SF) results.
This allows us to recover all the coefficients of the local part except for one unknown. However, by computing the EOB binding energy and comparing it to NR, we show that the effect of the remaining unknown on the dynamics is small.
\item In Sec.~\ref{sec:nonlocscatter}, we complement our results for unbound orbits by calculating the nonlocal part of the gauge-invariant scattering angle, to leading order in the large-eccentricity expansion.
\item In Sec.~\ref{sec:radAction}, we provide two gauge-invariant quantities that characterize bound orbits: the radial action as a function of energy and angular momentum, and the circular-orbit binding energy as a function of frequency.
\end{enumerate}
We conclude in Sec.~\ref{sec:conc} with a discussion of the results, and provide in Appendix~\ref{app:qkepler} a summary of the quasi-Keplerian parametrization at leading SO order.
The main results of this paper are provided in the Supplemental Material as a \textsc{Mathematica} file~\cite{ancprd}.
\subsection*{Notation}
We use the metric signature $(-,+,+,+)$, and units in which $G=c=1$, but sometimes write them explicitly in PM and PN expansions for clarity.
For a binary with masses $m_1$ and $m_2$, with $m_2 \geq m_1$, and spins $\bm{S}_1$ and $\bm{S}_2$, we define the following combinations of the masses:
\begin{gather}\label{massmap}
M= m_1 + m_2, \quad \mu = \frac{m_1m_2}{M}, \quad \nu = \frac{\mu}{M}, \nonumber\\
q = \frac{m_1}{m_2}, \quad \delta =\frac{m_2 - m_1}{M},
\end{gather}
define the mass-rescaled spins
\begin{equation}
\bm{a}_1 = \frac{\bm{S}_1}{m_1}, \qquad
\bm{a}_2 = \frac{\bm{S}_2}{m_2},
\end{equation}
the dimensionless spin magnitudes
\begin{gather}
\chi_1 = \frac{|\bm{S}_1|}{m_1^2}, \qquad
\chi_2 = \frac{|\bm{S}_2|}{m_2^2},
\end{gather}
and the spin combinations
\begin{gather}
\bm{S} = \bm{S}_1 + \bm{S}_2, \qquad \bm{S}^* = \frac{m_2}{m_1} \bm{S}_1 + \frac{m_1}{m_2} \bm{S}_2, \nonumber\\
\chi_S = \frac{1}{2}(\chi_1 + \chi_2), \qquad
\chi_A = \frac{1}{2}(\chi_2 - \chi_1).
\end{gather}
We use several variables related to the total energy $E$ of the binary system: the binding energy $\bar{E}= E - M c^2$, the mass-rescaled energy $\Gamma = E / M$, and the effective energy $E_\text{eff}$ defined by the energy map
\begin{equation}
E = M \sqrt{1 + 2\nu \left(\frac{E_\text{eff}}{\mu} - 1\right)}\,.
\end{equation}
We also define the asymptotic relative velocity $v$ and Lorentz factor $\gamma$ via
\begin{gather}
\gamma = \frac{E_\text{eff}}{\mu}, \nonumber\\
v = \frac{\sqrt{\gamma^2 - 1}}{\gamma}, \quad \leftrightarrow \quad \gamma = \frac{1}{\sqrt{1-v^2}},
\end{gather}
and define the dimensionless energy variable
\begin{equation}
\varepsilon = \gamma^2 - 1 = \gamma^2 v^2.
\end{equation}
(Note that $\varepsilon$ used here is denoted $p_\infty^2$ in Ref.~\cite{Bini:2020wpo}.)
The magnitude of the orbital angular momentum is denoted $L$, and is related to the relative position $r$, radial momentum $p_r$, and total linear momentum $p$ via
\begin{equation}
p^2 = p_r^2 + \frac{L^2}{r^2}.
\end{equation}
We often use dimensionless rescaled quantities, such as
\begin{gather}
r=\frac{r^\text{phys}}{M}, \quad L=\frac{L^\text{phys}}{M\mu}, \quad p_r = \frac{p_r^\text{phys}}{\mu}, \nonumber\\
E = \frac{E^\text{phys}}{\mu}, \quad H = \frac{H^\text{phys}}{\mu},
\end{gather}
and similarly for related variables, e.g. $\bar{E}=\bar{E}^\text{phys}/\mu$, etc.
It should be clear from the context whether physical or rescaled quantities are being used.
\section{Nonlocal 5.5PN SO Hamiltonian for bound orbits}
\label{sec:nonloc}
The total conservative action at a given PN order can be split into local and nonlocal-in-time parts, such that
\begin{equation}
S_\text{tot}^\text{nPN} = S_\text{loc}^\text{nPN} + S_\text{nonloc}^\text{nPN},
\end{equation}
where the nonlocal part is due to tail effects projected on the conservative dynamics~\cite{Blanchet:1987wq,Damour:2014jta,Bernard:2015njp}, i.e., radiation emitted at earlier times and backscattered onto the binary.
The nonlocal contribution starts at 4PN order, and has been derived for nonspinning binaries up to 6PN order~\cite{Damour:2015isa,Bini:2020wpo,Bini:2020hmy}.
In this section, we derive the leading-order spin contribution to the nonlocal part, which is at 5.5PN order.
The nonlocal part of the action can be calculated via the following integral:
\begin{equation}
S_\text{nonloc}^\text{LO} = \frac{GM}{c^3} \int dt\, \text{Pf}_{2s/c} \int \frac{dt'}{|t-t'|} \mathcal{F}_\text{LO}^\text{split}(t,t'),
\end{equation}
where the label `LO' here means that we include the \emph{leading-order} nonspinning and SO contributions,
and where the Hadamard \textit{partie finie} (Pf) operation is used since the integral is logarithmically divergent at $t'= t$.
The time-split (or time-symmetric) GW energy flux $\mathcal{F}_\text{LO}^\text{split}(t,t')$ is written in terms of the source multipole moments as~\cite{Damour:2014jta}
\begin{equation}
\label{Fsplit}
\mathcal{F}_\text{LO}^\text{split}(t,t') = \frac{G}{c^5} \left[
\frac{1}{5} I_{ij}^{(3)}(t)I_{ij}^{(3)}(t') + \frac{16}{45c^2} J_{ij}^{(3)}(t)J_{ij}^{(3)}(t')
\right].
\end{equation}
The mass quadrupole $I^{ij}$ and the current quadrupole $J^{ij}$ (in harmonic coordinates and using the Newton-Wigner spin-supplementary condition~\cite{pryce1948mass,newton1949localized}) are given by \cite{blanchet1989post,Kidder:1995zr}
\begin{align}
\label{sourceMom}
I_{ij} &= m_1 x_1^{\langle i} x_1^{j \rangle} + \frac{3}{c^3} x_1^{\langle i} (\bm{v}_1\times \bm{S}_1)^{j \rangle} \nonumber\\
&\quad - \frac{4}{3c^3} \frac{d}{dt} x_1^{\langle i}(\bm{x}_1\times \bm{S}_1)^{j \rangle} + 1 \leftrightarrow 2, \nonumber\\
J_{ij} &= m_1 x_1^{\langle i} (\bm{x}_1 \times \bm{v}_1)^{j \rangle} + \frac{3}{2c} x_1^{\langle i} S_1^{j \rangle} + 1 \leftrightarrow 2,
\end{align}
where the indices in angle brackets denote a symmetric trace-free part.
As was shown in Refs.~\cite{Damour:2014jta,Damour:2015isa}, the nonlocal part of the action can be written in terms of $\tau \equiv t' - t$ as
\begin{align}
\label{nonlocH}
S_\text{nonloc}^\text{LO} &= - \int dt\, \delta H_\text{nonloc}^\text{LO}(t), \nonumber\\
\delta H_\text{nonloc}^\text{LO}(t) &= -\frac{GM}{c^3} \text{Pf}_{2s/c} \int \frac{d\tau}{|\tau|} \mathcal{F}_\text{LO}^\text{split}(t,t+\tau) \nonumber\\
&\quad
+ 2\frac{GM}{c^3} \mathcal{F}_\text{LO}^\text{split}(t,t) \ln\left(\frac{r}{s}\right).
\end{align}
Following Ref.~\cite{Bini:2020wpo}, we choose the arbitrary length scale $s$ entering the \textit{partie finie} operation to be the radial distance $r$ between the two bodies in harmonic coordinates.
This has the advantage of simplifying the local part by removing its dependence on $\ln r$.
\subsection{Computation of the nonlocal Hamiltonian in a small-eccentricity expansion}
The integral for the nonlocal Hamiltonian in Eq.~\eqref{nonlocH} can be performed in a small-eccentricity expansion using the quasi-Keplerian parametrization~\cite{damour1985general}, which can be expressed, up to 1.5PN order, by the following equations:
\begin{align}
r &= a_r (1 - e_r \cos u), \label{qKr}\\
\ell &= n t = u - e_t \sin u, \label{kepEq}\\
\phi &= 2 K \arctan\left[\sqrt{\frac{1+e_\phi}{1-e_\phi}} \tan \frac{u}{2}\right], \label{qKphi}
\end{align}
where $a_r$ is the semi-major axis, $u$ the eccentric anomaly, $\ell$ the mean anomaly, $n$ the mean motion (radial angular frequency), $K$ the periastron advance, and ($e_r,e_t,e_\phi$) the radial, time, and phase eccentricities.
The quasi-Keplerian parametrization was generalized to 3PN order in Ref.~\cite{Memmesheimer:2004cv}, and including SO and spin-spin contributions in Refs.~\cite{Tessmer:2010hp,Tessmer:2012xr}.
We summarize in Appendix~\ref{app:qKepBound} the relations between the quantities used in the quasi-Keplerian parametrization and the energy and angular momentum at leading SO order.
Using the quasi-Keplerian parametrization, we express the source multipole moments in terms of the variables ($a_r,e_t,t$), and expand the moments in eccentricity.
In the center-of-mass frame, the position vectors of the two bodies, $\bm{x}_1$ and $\bm{x}_2$, are related to $\bm{x}\equiv\bm{x}_1-\bm{x}_2$ via~\cite{Will:2005sn}
\begin{align}
x_1^i &= \frac{m_2}{M} x^i - \frac{\nu}{2c^3\delta M} {\epsilon^i}_{jk} v^j (S^k - {S^*}^k), \nonumber\\
x_2^i &= -\frac{m_1}{M} x^i - \frac{\nu}{2c^3\delta M} {\epsilon^i}_{jk} v^j (S^k - {S^*}^k),
\end{align}
where $\bm{v}\equiv \bm{v}_1 - \bm{v}_2$, and hence the source moments from Eq.~\eqref{sourceMom} can be written as
\begin{align}
I_{ij} &= M \nu x^{\langle i} x^{j \rangle}
+ \frac{1}{3c^3} \bigg[
\frac{m_2^2}{M^2} \Big(4 v^{\langle i} (\bm{S}_1\times \bm{x})^{j \rangle} \nonumber\\
&\quad\qquad
- 5 x^{\langle i} (\bm{S}_1\times \bm{v})^{j \rangle}\Big)
+ 1\leftrightarrow 2
\bigg], \\
J_{ij} &= M \delta \nu x^{\langle i} (\bm{v}\times \bm{x})^{j \rangle}
+ \frac{3}{2c} \left[\frac{m_2}{M} x^{\langle i} S_1^{j \rangle}
- \frac{m_1}{M} x^{\langle i} S_2^{j \rangle} \right].\nonumber
\end{align}
In polar coordinates,
\begin{align}
\bm{x} &= r (\cos \phi,\, \sin \phi), \nonumber\\
\bm{v} &= \dot{r} (\cos \phi,\, \sin \phi) + r \dot{\phi} (-\sin\phi,\, \cos\phi),
\end{align}
with $r$ and $\phi$ given by Eqs.~\eqref{qKr} and~\eqref{qKphi}, $e_r$ and $e_\phi$ related to $e_t$ via Eqs.~\eqref{eret} and ~\eqref{ephiet},
while $\dot{r}$ and $\dot{\phi}$ are given by
\begin{align}
\dot{r} &= \frac{e_t \sin u}{\sqrt{a_r} \left(1-e_t \cos u\right)}\,, \nonumber\\
\dot{\phi} &=\frac{\sqrt{a_r-a_r e_t^2}}{a_r^2 \left(1-e_t \cos u\right)^2}\,,
\end{align}
which are only needed at leading order.
We then write the eccentric anomaly $u$ in terms of time $t$ using Kepler's equation~\eqref{kepEq}, which has a solution in terms of a Fourier-Bessel series expansion
\begin{align}
u &= n t + \sum_{k=1}^{\infty} \frac{2}{k} J_k(ke_t) \sin(kn t) \nonumber\\
&\simeq
n t + e_t \sin(n t) + \frac{1}{2} e_t^2 \sin(2 n t) + \dots.
\end{align}
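As a quick numerical check of this expansion (an illustrative snippet added here, not part of the derivation), the truncated Fourier-Bessel series can be compared against a direct Newton-Raphson inversion of Kepler's equation:
\begin{verbatim}
# Check u(l) from the Fourier-Bessel series against Newton-Raphson
# inversion of Kepler's equation l = u - e*sin(u), for small e.
import numpy as np
from scipy.special import jv

e, ell = 0.1, 0.7
u_series = ell + sum(2.0 / k * jv(k, k * e) * np.sin(k * ell)
                     for k in range(1, 30))

u = ell
for _ in range(50):
    u -= (u - e * np.sin(u) - ell) / (1.0 - e * np.cos(u))

print(abs(u_series - u))   # ~1e-16: the series converges rapidly
\end{verbatim}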
We perform the eccentricity expansion for the nonlocal part up to $\Order(e_t^8)$ since it corresponds to an expansion to $\Order(p_r^8)$, which is the highest power of $p_r$ in the 5.5PN SO local part.
However, to simplify the presentation, we write the intermediate steps only expanded to $\Order(e_t)$.
Plugging the expressions for $(r,\phi,\dot{r},\dot{\phi})$ in terms of $(a_r,e_t,t)$ into the source moments used in the time-split energy flux~\eqref{Fsplit} and expanding in eccentricity yields
\begin{widetext}
\begin{align}
&\mathcal{F}_\text{LO}^\text{split}(t,t+\tau) = \nu^2 a_r^4 n^6 \left\lbrace \frac{32}{5} \cos (2 n \tau )
+\frac{12}{5} e_t \left[9 \cos (nt+3 n\tau)+9 \cos (nt-2n \tau)-\cos (n t-n\tau )-\cos (n t+2n \tau )\right] \right\rbrace \nonumber\\
&\quad
+ \frac{8 \nu^2}{15} n^6 a_r^{5/2} \bigg\lbrace
48 n \tau \sin (2 n \tau ) \left(2 \delta \chi _A-\nu \chi _S+2 \chi _S\right)
-32 \cos (2 n \tau ) \left(9 \delta \chi _A-5 \nu \chi _S+9 \chi _S\right)
-\cos (n \tau ) \left(\delta \chi _A-4 \nu \chi _S+\chi _S\right) \nonumber\\
&\qquad
+e_t \cos \left(n t+\frac{n \tau }{2}\right) \bigg[
\cos \left(\frac{3 n \tau }{2}\right) \left(352 \delta \chi _A+352\chi _S-157 \nu \chi _S\right)
-27 \cos \left(\frac{5 n \tau }{2}\right) \left(84 \delta \chi _A-47 \nu \chi _S+84 \chi _S\right) \nonumber\\
&\qquad\qquad
-36 n \tau \left(\sin \left(\frac{3 n \tau }{2}\right)-9 \sin \left(\frac{5 n \tau }{2}\right)\right) \left(2 \delta \chi _A-\nu \chi _S+2 \chi _S\right)
\bigg]
\bigg\rbrace + \Order\left(e_t^2\right),
\end{align}
the orbit average of which is given by
\begin{align}
\left\langle \mathcal{F}_\text{LO}^\text{split}(t,t+\tau)\right\rangle &= \frac{n}{2\pi} \int_{0}^{2\pi/n} \mathcal{F}_\text{LO}^\text{split}(t,t+\tau) dt \nonumber\\
&=\frac{32}{5} \nu^2 n^6 a_r^4 \cos (2 n \tau ) + \frac{8}{15} \nu^2 n^6 a_r^{5/2} \Big[
48 n \tau \sin (2 n \tau ) \left(2 \delta \chi _A-\nu \chi _S+2 \chi _S\right) -\cos (n \tau ) \left(\delta \chi _A-4 \nu \chi _S+\chi _S\right)\nonumber\\
&\quad\qquad
-32 \cos (2 n \tau ) \left(9 \delta \chi _A-5 \nu \chi _S+9 \chi _S\right)
\Big] + \Order\left(e_t^2\right).
\end{align}
In the limit $\tau = 0$, this equation agrees with the eccentricity expansion of the energy flux from Eq.~(64) of Ref.~\cite{Tessmer:2012xr}.
Then, we perform the \textit{partie finie} operation with time scale $2s/c$ using Eq.~(4.2) of Ref.~\cite{Damour:2014jta}, which reads
\begin{equation}
\text{Pf}_T \int_{0}^{\infty} \frac{dv}{v} g(v) = \int_{0}^{T} \frac{dv}{v} [g(v) - g(0)] + \int_{T}^{\infty} \frac{dv}{v} g(v).
\end{equation}
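As a simple illustration of this operation (a worked example added here for clarity, not taken from Ref.~\cite{Damour:2014jta}), taking $g(v) = e^{-v}$ gives
\begin{equation}
\text{Pf}_T \int_{0}^{\infty} \frac{dv}{v}\, e^{-v}
= \int_{0}^{T} \frac{dv}{v}\left(e^{-v}-1\right) + \int_{T}^{\infty} \frac{dv}{v}\, e^{-v}
= -\gamma_E - \ln T,
\end{equation}
showing that the arbitrary scale $T$ enters only logarithmically; this is the origin of the $\gamma_E$ and $\ln(n s)$ terms in the expressions below.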
The first line of Eq.~\eqref{nonlocH} yields
\begin{align}
&- \text{Pf}_{2s/c} \int \frac{d\tau}{|\tau|} \left\langle \mathcal{F}_\text{LO}^\text{split}(t,t+\tau)\right\rangle =
\frac{64}{5} \nu^2 n^6 a_r^4 \left[\ln (4 n s)+\gamma_E\right]
-\frac{16}{15} \nu^2 n^6 a_r^{5/2} \Big\lbrace
\left[289 \delta \chi _A+ (289-164 \nu)\chi _S\right]\ln (n s)
\nonumber\\
&\qquad\qquad
+\left[289 \gamma_E +48+577 \ln 2\right]\delta \chi _A +\left[\gamma_E (289-164 \nu )-12 \nu (2+27 \ln 2)+48+577 \ln 2\right] \chi _S
\Big\rbrace + \Order\left(e_t^2\right),
\end{align}
while the second line gives
\begin{align}
2 \left\langle \mathcal{F}_\text{LO}^\text{split}(t,t)\right\rangle \ln\left(\frac{r}{s}\right) =
\frac{64}{5} \nu^2 n^6 a_r^4 \ln \left(\frac{a_r}{s}\right)-\frac{16}{15} \nu^2 n^6 a_r^{5/2} \ln \left(\frac{a_r}{s}\right) \left[289 \delta \chi _A+(289-164 \nu ) \chi _S\right] + \Order\left(e_t^2\right).
\end{align}
Adding the two expressions removes the dependence on $s$.
When performing the calculation to $\Order\left(e_t^8\right)$, we obtain the following Delaunay-averaged nonlocal Hamiltonian:
\begin{align}
\label{Hnonloc}
\left\langle \delta H_\text{nonloc}^\text{LO}\right\rangle &=
\frac{\nu^2}{a_r^5} \left[\mathcal{A}^\text{4PN}(e_t) + \mathcal{B}^\text{4PN}(e_t) \ln a_r\right] \nonumber\\
&\quad
+ \frac{\nu^2\delta\chi_A}{a_r^{13/2}}
\Bigg\lbrace
\frac{584}{15} \ln a_r-\frac{64}{5}-\frac{464}{3} \ln 2 -\frac{1168}{15} \gamma_E \nonumber\\
&\quad\quad
+ e_t^2 \left[\frac{2908}{5} \ln a_r-\frac{5816}{5}\gamma_E+\frac{2172}{5}-\frac{3304}{15} \ln 2-\frac{10206}{5} \ln 3\right] \nonumber\\
&\quad\quad
+e_t^4 \left[\frac{43843}{15} \ln a_r-\frac{87686}{15} \gamma_E+\frac{114991}{30}-\frac{201362}{5} \ln 2+\frac{48843}{4} \ln 3\right] \nonumber\\
&\quad\quad
+e_t^6 \left[\frac{55313}{6} \ln a_r-\frac{55313}{3} \gamma_E+\frac{961807}{60}+\frac{6896921}{45} \ln 2-\frac{3236031}{160} \ln 3-\frac{24296875}{288} \ln 5\right] \nonumber\\
&\quad\quad
+e_t^8 \left[\frac{134921}{6} \ln a_r-\frac{134921}{3} \gamma_E+\frac{135264629}{2880}-\frac{94244416}{135} \ln 2+\frac{12145234375}{27648} \ln 5-\frac{1684497627}{5120} \ln 3\right]
\!\Bigg\rbrace \nonumber\\
&\quad
+\frac{\nu^2 \chi_S}{a_r^{13/2}} \Bigg\lbrace
-\frac{64}{5}+\frac{32 \nu }{5}+\left(\frac{896 \nu }{15}-\frac{1168}{15}\right) \gamma_E+\left(\frac{576 \nu }{5}-\frac{464}{3}\right) \ln 2 + \left(\frac{584}{15}-\frac{448 \nu }{15}\right) \ln a_r \nonumber\\
&\quad\quad
+e_t^2 \bigg[\frac{2172}{5}-\frac{4412 \nu }{15} +\left(\frac{4216 \nu }{5}-\frac{5816}{5}\right) \gamma_E+\left(\frac{5192 \nu }{15}-\frac{3304}{15}\right) \ln 2 +\left(\frac{6561 \nu }{5}-\frac{10206}{5}\right) \ln 3\nonumber\\
&\quad\qquad
+ \left(\frac{2908}{5}-\frac{2108 \nu }{5}\right) \ln a_r\bigg]
+e_t^4 \bigg[\frac{114991}{30}-\frac{38702 \nu }{15}+\left(\frac{62134 \nu }{15}-\frac{87686}{15}\right) \gamma_E \nonumber\\
&\quad\qquad
+\left(\frac{386414 \nu }{15}-\frac{201362}{5}\right) \ln 2 +\left(\frac{48843}{4}-\frac{28431 \nu }{4}\right) \ln 3+\left(\frac{43843}{15}-\frac{31067 \nu }{15}\right) \ln a_r\bigg] \nonumber\\
&\quad\quad
+e_t^6 \bigg[
\frac{961807}{60}-\frac{215703 \nu }{20} + \left(\frac{193718 \nu }{15}-\frac{55313}{3}\right) \gamma_E +\left(\frac{6896921}{45}-\frac{12343118 \nu }{135}\right) \ln 2 \nonumber\\
&\quad\qquad
+\left(\frac{3768201 \nu }{320}-\frac{3236031}{160}\right) \ln 3+\left(\frac{92421875 \nu }{1728}-\frac{24296875}{288}\right) \ln 5 + \left(\frac{55313}{6}-\frac{96859 \nu }{15}\right) \ln a_r
\bigg] \nonumber\\
&\quad\quad
+e_t^8 \bigg[\frac{135264629}{2880}-\frac{45491177 \nu }{1440}+\left(\frac{93850 \nu }{3}-\frac{134921}{3}\right) \gamma_E +\left(\frac{118966123 \nu }{270}-\frac{94244416}{135}\right) \ln 2 \nonumber\\
&\quad\qquad
+\left(\frac{537837489 \nu }{2560}-\frac{1684497627}{5120}\right) \ln 3 +\left(\frac{12145234375}{27648}-\frac{3790703125 \nu }{13824}\right) \ln 5 \nonumber\\
&\quad\qquad
+ \left(\frac{134921}{6}-\frac{46925 \nu }{3}\right) \ln a_r\bigg]
\Bigg\rbrace + \Order\left(e_t^{10}\right),
\end{align}
\end{widetext}
where the functions $\mathcal{A}^\text{4PN}(e_t)$ and $\mathcal{B}^\text{4PN}(e_t)$ in the 4PN part are given in Table I of Ref.~\cite{Bini:2020wpo}.
\subsection{Nonlocal part of the EOB Hamiltonian}
The (dimensionless) EOB Hamiltonian is given by the energy map
\begin{equation}
\label{Heob}
H_\text{EOB} = \frac{1}{\nu} \sqrt{1 + 2 \nu \left(H_\text{eff} - 1\right)}\,,
\end{equation}
where the effective Hamiltonian
\begin{align}
H_\text{eff} &= \sqrt{A(r) \left[1 + p^2 + \left(A(r) \bar{D}(r) - 1\right) p_r^2 + Q(r,p_r)\right]} \nonumber\\
&\quad
+ \frac{1}{c^3 r^3} \bm{L}\cdot\left[g_S(r,p_r) \bm{S} + g_{S^*}(r,p_r) \bm{S}^*\right].
\end{align}
The nonspinning potentials $A,\bar{D},$ and $Q$ were obtained at 4PN order in Ref.~\cite{Damour:2015isa}.
The 4.5PN gyro-gravitomagnetic factors, $g_S$ and $g_{S^*}$, are given by Eq.~(5.6) of Ref.~\cite{Antonelli:2020ybz}, and are in a gauge such that they are independent of the angular momentum.
Note that the gyro-gravitomagnetic factors are the same for both aligned and precessing spins, since the spin vector only couples to the angular momentum vector at SO level. Hence, even though the calculations are specialized to aligned spins, the final result for the gyro-gravitomagnetic factors is valid for precessing spins.
Splitting the potentials $A,\bar{D},Q$ into a local and a nonlocal piece, and writing the gyro-gravitomagnetic factors as
\begin{align}
\label{gyros}
g_S &= 2 + \dots + \frac{1}{c^8}\left(g_{S}^\text{5.5PN,loc} + g_S^\text{5.5PN,nonloc}\right), \nonumber\\
g_{S^*} &= \frac{3}{2} + \dots + \frac{1}{c^8} \left(g_{S^*}^\text{5.5PN,loc} + g_{S^*}^\text{5.5PN,nonloc}\right)
\end{align}
yields the following LO nonlocal part of the PN-expanded effective Hamiltonian
\begin{align}
H_\text{eff} &= H_\text{eff}^\text{loc} + \frac{1}{c^8} H_\text{eff}^\text{nonloc} + \Order(\text{5PN, 6.5PN SO}), \nonumber\\
H_\text{eff}^\text{nonloc} &= \frac{1}{2} \left(A^\text{nonloc} + \bar{D}^\text{nonloc} p_r^2 + Q^\text{nonloc}\right) \nonumber\\
&\quad
+ \frac{\nu \bm{L}}{c^3r^3} \cdot \left[\bm{S} g_S^\text{5.5PN,nonloc} + \bm{S}^* g_{S^*}^\text{5.5PN,nonloc}\right].
\end{align}
Then, we write the nonlocal piece of the potentials and gyro-gravitomagnetic factors in terms of unknown coefficients, calculate the Delaunay average of $H_\text{eff}^\text{nonloc}$ in terms of the EOB coordinates $(a_r,e_t)$, and match it to the harmonic-coordinates Hamiltonian from Eq.~\eqref{Hnonloc}. Since harmonic and EOB coordinates agree at leading SO order, no canonical transformation is needed between the two at that order.
This yields the results in Table IV of Ref.~\cite{Bini:2020wpo} for the 4PN nonspinning part, and the following SO part expanded to $\Order\left(p_r^8\right)$:
\begin{widetext}
\begin{align}
\label{gyroNonloc}
&g_S^\text{5.5PN,nonloc} = 2 \nu \bigg\lbrace\!\!
\left(\frac{292 }{15}\ln r -\frac{32}{5}-\frac{584}{15}\gamma_E-\frac{232}{3} \ln 2\right) \frac{1}{r^4}
+ \left(\frac{12782}{15}-104 \gamma_E+\frac{32744}{15} \ln 2-\frac{11664 }{5}\ln 3+52 \ln r\right) \frac{p_r^2}{r^3} \nonumber\\
&\quad\qquad
+ \left(\frac{12503}{15}-\frac{635456}{9} \ln 2+\frac{218943}{5}\ln 3 \right) \frac{p_r^4}{r^2}
+ \left(\frac{38246}{25}+\frac{176799232}{225} \ln 2-\frac{2517237}{10} \ln 3-\frac{3015625}{18} \ln 5\right) \frac{p_r^6}{r} \nonumber\\
&\quad\qquad
+ \left(\frac{503099}{350}-\frac{898982848}{189} \ln 2+\frac{6352671875}{3024} \ln 5-\frac{31129029}{400} \ln 3\right) p_r^8 + \Order\left(p_r^{10}\right)\bigg\rbrace, \nonumber\\
&g_{S^*}^\text{5.5PN,nonloc} = \frac{3}{2} \nu \bigg\lbrace\!\!
\left(16 \ln r-\frac{32}{5}-32 \gamma_E-\frac{2912}{45} \ln 2\right) \frac{1}{r^4}
+ \left(\frac{35024}{45}-\frac{1024 \gamma_E}{15}+\frac{93952}{45} \ln 2-\frac{10692}{5} \ln 3 +\frac{512}{15} \ln r\right) \frac{p_r^2}{r^3} \nonumber\\
&\quad\qquad
+ \left(\frac{9232}{15}-\frac{2978624}{45} \ln 2+\frac{206064}{5}\ln 3\right)\frac{ p_r^4}{r^2}
+ \left(\frac{33048}{25}+\frac{1497436672}{2025} \ln 2-\frac{1199934 }{5}\ln 3 -\frac{12593750}{81} \ln 5\right) \frac{p_r^6}{r} \nonumber\\
&\quad\qquad
+ \left(\frac{651176}{525}-\frac{9076395968}{2025} \ln 2+\frac{2226734375 }{1134}\ln 5-\frac{697653}{14} \ln 3\right) p_r^8 + \Order\left(p_r^{10}\right)\bigg\rbrace.
\end{align}
\end{widetext}
\section{Local 5.5PN SO Hamiltonian and scattering angle}
\label{sec:local}
In this section, we determine the local part of the Hamiltonian and scattering angle from 1SF results by making use of the simple mass dependence of the PM-expanded scattering angle.
\subsection{Mass dependence of the scattering angle}
Based on the structure of the PM expansion, Poincar\'e symmetry, and dimensional analysis, Ref.~\cite{Damour:2019lcq} (see also Ref.~\cite{Vines:2018gqi}) showed that the magnitude of the impulse (net change in momentum), for nonspinning systems in the center-of-mass frame, has the following dependence on the masses:
\begin{align}
\ms Q&= (\Delta p_{1\mu}\Delta p^{1\mu})^{1/2} \nonumber\\
&=\frac{2Gm_1m_2}{b}\bigg[\ms Q^\mr{1PM}
\nonumber\\
&\quad+\frac{G}{b}\bigg(m_1\ms Q^\mr{2PM}_{m_1}+m_2\ms Q^\mr{2PM}_{m_2}\bigg)
\nonumber\\
&\quad+\frac{G^2}{b^2}\bigg(m_1^2\ms Q^\mr{3PM}_{m_1^2}+m_1m_2 \ms Q_{m_1m_2}^\mr{3PM}+m_2^2\ms Q^\mr{3PM}_{m_2^2}\bigg)
\nonumber\\
&\quad+\cdots\bigg],
\end{align}
where each PM order is a homogeneous polynomial in the two masses.
For nonspinning bodies, the $\ms Q$'s on the right-hand side are functions only of energy (or velocity $v$).
This mass dependence has been extended in Ref.~\cite{Antonelli:2020ybz} to include spin, such that
\begin{align}
\ms Q^{n\mr{PM}}_{m_1^im_2^j}&=\ms Q^{n\mr{PM}}_{m_1^im_2^j}\left(v,\frac{a_1}{b},\frac{a_2}{b}\right)
\nonumber\\
&=\ms Q^{n\mr{PM}}_{m_1^im_2^ja^0}(v)
+\frac{a_1}{b}\ms Q^{n\mr{PM}}_{m_1^im_2^ja_1}(v)\nonumber\\
&\quad
+\frac{a_2}{b}\ms Q^{n\mr{PM}}_{m_1^im_2^ja_2}(v) + \Order(a_i^2),
\end{align}
where $b$ is the \emph{covariant} impact parameter defined as the orthogonal distance between the incoming worldlines when using the covariant (Tulczyjew-Dixon) spin-supplementary condition~\cite{Tulczyjew:1959,Dixon:1979} $p_{\ms i\mu}S_{\ms i}^{\mu\nu}=0$. (See Refs.~\cite{Vines:2016unv,Vines:2017hyw,Vines:2018gqi,Antonelli:2020ybz} for more details.)
The scattering angle $\chi$ by which the two bodies are deflected in the center-of-mass frame is related to $\ms Q$ via~\cite{Damour:2019lcq}
\begin{equation}
\sin\frac{\chi}{2}=\frac{\ms Q}{2P_\text{c.m.}},
\end{equation}
where $P_\text{c.m.}$ is the magnitude of the total linear momentum in the center-of-mass frame and is given by
\begin{equation}
P_\text{c.m.}=\frac{m_1m_2}{E}\sqrt{\gamma^2-1}\,,
\end{equation}
where we recall that
\begin{align}
\label{Egamma}
E^2&=m_1^2+m_2^2+2m_1m_2\gamma \nonumber\\
&= M^2 \left[1 + 2\nu (\gamma -1 )\right], \nonumber\\
\gamma &= \frac{1}{\sqrt{1 - v^2}}.
\end{align}
Therefore, the scattering angle scaled by $E/m_1m_2$ has the same mass dependence as $\ms Q$. (Equivalently, $\chi/\Gamma$ has the same mass dependence as $\ms Q/\mu$, where $\Gamma\equiv E/M$.)
For nonspinning binaries, and because of the symmetry under the exchange of the two bodies' labels, the mass dependence of $\chi/\Gamma$ can be written as a polynomial in the symmetric mass ratio $\nu$.
This is because any homogeneous polynomial in the masses $(m_1,m_2)$ of degree $n$ can be written as a polynomial in $\nu$ of degree $\lfloor n/2 \rfloor$. For example,
\begin{align}
&c_1 m_1^3+c_2 m_1^2 m_2 +c_2 m_1 m_2^2+c_1 m_2^3 \nonumber\\
&\qquad\qquad
= M^3[ c_1 + (c_2-3 c_1)\nu],
\end{align}
for some mass-independent factors $c_i$.
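As a quick illustration of this reduction, the following \texttt{sympy} snippet (a check of our own, not part of the derivation) verifies the example above symbolically:
\begin{verbatim}
# Symbolic check that a symmetric homogeneous cubic in (m1, m2)
# reduces to a linear polynomial in nu = m1*m2/M^2, as claimed above.
import sympy as sp

m1, m2, c1, c2 = sp.symbols('m1 m2 c1 c2', positive=True)
M = m1 + m2
nu = m1*m2/M**2

poly = c1*m1**3 + c2*m1**2*m2 + c2*m1*m2**2 + c1*m2**3
claim = M**3*(c1 + (c2 - 3*c1)*nu)

assert sp.simplify(poly - claim) == 0   # identical in (m1, m2)
print("symmetric cubic = M^3 [c1 + (c2 - 3 c1) nu]")
\end{verbatim}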
Hence, at each $n$PM order, $\chi/\Gamma$ is a polynomial in $\nu$ of degree $\lfloor (n-1)/2 \rfloor$.
When including spin, we also obtain a dependence on the antisymmetric mass ratio $\delta\equiv (m_2 - m_1)/ M $, since
\begin{align}
&a_1 \left(c_1 m_1^3+ c_2 m_1^2 m_2 + c_3 m_1 m_2^2+ c_4 m_2^3\right) \nonumber\\
&\qquad\qquad
= M^3 a_1 \left(\alpha_1 +\alpha_2 \delta+\alpha_3 \nu +\alpha_4 \nu\delta\right),
\end{align}
where $\alpha_i$ are some linear combinations of $c_i$.
Thus, we find that the scattering angle, up to 5PM and to linear order in spin, has the following mass dependence:
\begin{align}
\chi &= \chi_{a^0} + \chi_{a} + \Order(a^2), \\
\frac{\chi_{a^0}}{\Gamma} &= \frac{GM}{b}\ms X_1^0 +\Big(\frac{GM}{b}\Big)^2\ms X_2^0 \nonumber\\
&\quad
+\Big(\frac{GM}{b}\Big)^3\Big[\ms X_3^0+\nu \ms X_3^{0,\nu}\Big]
\nonumber\\
&\quad
+\Big(\frac{GM}{b}\Big)^4\Big[\ms X_4^0+\nu \ms X_4^{0,\nu}\Big] \nonumber\\
&\quad + \Big(\frac{GM}{b}\Big)^5\Big[\ms X_5^0+\nu \ms X_5^{0,\nu}+\nu^2 \ms X_5^{0,\nu^2}\Big] + \dots \\
\label{chiMass}
\frac{\chi_a}{\Gamma} &= \frac{a_1}{b} \bigg\lbrace
\frac{GM}{b}\ms X_{1}
+\Big(\frac{GM}{b}\Big)^2 \Big[\ms X_{2} + \delta \ms X_{2}^\delta\Big]
\nonumber\\
&\quad +\Big(\frac{GM}{b}\Big)^3\Big[\ms X_{3} + \delta \ms X_{3}^\delta +\nu \ms X_{3}^{\nu}\Big] \nonumber\\
&\quad
+\Big(\frac{GM}{b}\Big)^4\Big[\ms X_{4} + \delta \ms X_{4}^\delta +\nu \ms X_{4}^{\nu}+\nu\delta \ms X_{4}^{\nu\delta}\Big] \nonumber\\
&\quad + \Big(\frac{GM}{b}\Big)^5\Big[\ms X_{5} + \delta \ms X_{5}^\delta+\nu \ms X_{5}^{\nu}+\nu\delta \ms X_{5}^{\nu\delta}+\nu^2 \ms X_{5}^{\nu^2}\Big] \nonumber\\
&\quad + \dots \bigg\rbrace + 1 \leftrightarrow 2,
\end{align}
where the $\ms X_{\ms i}^\text{\dots}$ are functions only of energy/velocity.
Since $\nu$ and $\nu\delta$ are of order $q$ when expanded in the mass ratio, their coefficients can be recovered from 1SF results.
This mass-ratio dependence holds for the \emph{total} (local + nonlocal) scattering angle. However, by choosing the split between the local and nonlocal parts as we did in Sec.~\ref{sec:nonloc}, i.e., by choosing the arbitrary length scale $s$ to be the radial distance $r$, we get the same mass-ratio dependence for the \emph{local} part of the 5.5PN SO scattering angle. This is confirmed by the independent calculation of the nonlocal part of the scattering angle in Eq.~\eqref{chinonloc} below, which is linear in $\nu$. (In Ref.~\cite{Bini:2020wpo}, the authors introduced a `flexibility' factor in the relation between $s$ and $r$ to ensure that this mass-ratio dependence continues to hold at 5PN order for both the local and nonlocal contributions separately.)
Terms independent of $\nu$ in the scattering angle can be determined from the scattering angle of a spinning test particle in a Kerr background, which was calculated in Ref.~\cite{Bini:2017pee}.
For a test body with spin $s$ in a Kerr background with spin $a$, the 5PM test-body scattering angle to all PN orders and to linear order in spins can be obtained by integrating Eq. (65) of Ref.~\cite{Bini:2017pee}, leading to
\begin{widetext}
\begin{align}
\chi_\text{test} &= \frac{GM}{b} \left[\frac{2v^2+2}{v^2}-\frac{4 (a+s)}{b v}\right]
+\pi\Big(\frac{GM}{b}\Big)^2 \left[ \frac{3 \left(v^2+4\right)}{4 v^2} -\frac{ \left(3 v^2+2\right) (4 a+3 s)}{2 b v^3}\right]
\nonumber\\
&\quad
+ \Big(\frac{GM}{b}\Big)^3 \Bigg[\frac{2 \left(5 v^6+45 v^4+15 v^2-1\right)}{3 v^6}
-\frac{4 \left(5 v^4+10 v^2+1\right) (3 a+2 s)}{b v^5}\Bigg]
+\pi\Big(\frac{GM}{b}\Big)^4 \Bigg[\frac{105 \left(v^4+16 v^2+16\right)}{64 v^4} \nonumber\\
&\quad\qquad
-\frac{21 \left(5 v^4+20 v^2+8\right) (8 a+5 s)}{16 b v^5}\Bigg]
+\Big(\frac{GM}{b}\Big)^5 \Bigg[\frac{2 \left(21 v^{10}+525 v^8+1050 v^6+210 v^4-15 v^2+1\right)}{5 v^{10}}\nonumber\\
&\quad\qquad
-\frac{4 \left(63 v^8+420 v^6+378 v^4+36 v^2-1\right) (5 a+3 s)}{3 b v^9}\Bigg] + \Order\left(G^6\right) + \Order(a^2,as,s^2).
\end{align}
Plugging this into Eq.~\eqref{chiMass} determines all the $\ms X_{\ms i}(v)$ and $\ms X_{\ms i}^\delta(v)$ functions.
Hence, we can write the 5PM SO part of the local scattering angle, expanded to 5.5PN order, as follows:
\begin{align}
\label{chiAnz}
\frac{\chi_a^\text{loc}}{\Gamma} &=
\frac{a_1}{b} \Bigg\lbrace \left(\frac{GM}{v^2b}\right) (-4 v) + \pi\left(\frac{GM}{v^2b}\right)^2 \left[\left(\frac{\delta }{2}-\frac{7}{2}\right) v + \left(\frac{3 \delta }{4}-\frac{21}{4}\right) v^3\right] \nonumber \\
&\quad
+\left(\frac{GM}{v^2b}\right)^3 \left[
(2 \delta -10 + \ms X_{31}^\nu \nu) v + (20 \delta -100 + \ms X_{33}^\nu \nu) v^3 +(10 \delta-50 + \ms X_{35}^\nu \nu) v^5 + \ms X_{37}^\nu \nu v^7 + \ms X_{39}^\nu \nu v^9
\right] \nonumber\\
&\quad
+ \pi\left(\frac{GM}{v^2b}\right)^4 \bigg[
\left(\ms X_{41}^{\delta\nu} \delta \nu + \ms X_{41}^{\nu} \nu\right)v
+\left(\frac{63}{4}\delta-\frac{273}{4} + \ms X_{43}^{\delta\nu} \delta \nu + \ms X_{43}^{\nu} \nu \right)v^3
+ \left(\frac{315}{8}\delta-\frac{1365}{8} + \ms X_{45}^{\delta\nu} \delta \nu + \ms X_{45}^{\nu}\nu \right) v^5\nonumber \\
&\quad\qquad\qquad
+ \left(\frac{315 \delta }{32} -\frac{1365}{32} + \ms X_{47}^{\delta\nu} \delta \nu + \ms X_{47}^{\nu} \nu \right) v^7
+ \left(\ms X_{49}^{\delta\nu} \delta \nu + \ms X_{49}^{\nu} \nu\right) v^9
\bigg]\nonumber \\
&\quad
+\left(\frac{GM}{v^2b}\right)^5 \bigg[
\left(-\frac{4 \delta }{3}+\frac{16}{3}+ \ms X_{51}^{\delta\nu} \delta\nu+ \ms X_{51}^{\nu} \nu+ \ms X_{51}^{\nu^2} \nu^2\right) v
+\left(48 \delta -192 + \ms X_{53}^{\delta\nu} \delta \nu +\ms X_{53}^{\nu} \nu + \ms X_{53}^{\nu^2}\nu^2\right) v^3 \nonumber\\
&\quad\qquad\qquad
+ \left(504 \delta -2016 + \ms X_{55}^{\delta\nu} \delta \nu +\ms X_{55}^{\nu} \nu + \ms X_{55}^{\nu^2}\nu^2 \right) v^5
+ \left(560 \delta -2240 + \ms X_{57}^{\delta\nu} \delta \nu +\ms X_{57}^{\nu} \nu + \ms X_{57}^{\nu^2}\nu^2 \right) v^7 \nonumber\\
&\quad\qquad\qquad
+ \left(84\delta -336 + \ms X_{59}^{\delta\nu} \delta \nu + \ms X_{59}^{\nu} \nu + \ms X_{59}^{\nu^2} \nu^2\right)v^9
\bigg]\Bigg\rbrace + 1 \leftrightarrow 2,
\end{align}
\end{widetext}
where the $\ms X_{ij}^\nu$ and $\ms X_{ij}^{\delta\nu}$ coefficients are independent of the masses, and can be determined, as explained below, from 1SF results. The coefficient $\ms X_{59}^{\nu^2}$ could be determined from future second-order self-force results.
\subsection{Relating the local Hamiltonian to the scattering angle}
The scattering angle can be calculated from the Hamiltonian by inverting it to obtain $p_r(E,L,r)$, and then evaluating the integral
\begin{equation}
\label{Htochi}
\chi= -2 \int_{r_0}^{\infty}\frac{\partial p_r(E,L,r)}{\partial L}dr - \pi\,,
\end{equation}
where $r_0$ is the turning point, obtained from the largest root of $p_r(E,L,r)=0$.
$E$ and $L$ represent the physical center-of-mass energy and \emph{canonical} angular momentum, respectively.
As noted above, we express the scattering angle in terms of the \emph{covariant} impact parameter $b$, but use the \emph{canonical} angular momentum $L$ in the Hamiltonian (corresponding to the Newton-Wigner spin-supplementary condition). The two are related via~\cite{Vines:2017hyw,Vines:2018gqi}
\begin{align}
\label{Lcov}
L&=L_\text{cov} + \Delta L, \nonumber\\
L_\text{cov} &= P_\text{c.m.} b = \frac{\mu}{\Gamma}\gamma vb, \nonumber\\
\Delta L &=M\frac{\Gamma-1}{2}\left[a_1+a_2-\frac{\delta}{\Gamma}(a_2-a_1)\right],
\end{align}
which can be used to replace $L$ with $b$ in the scattering angle.
We can also replace $E$ with $v$ using Eq.~\eqref{Egamma}.
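To make the procedure in Eq.~\eqref{Htochi} concrete, here is a minimal numerical sketch (our own, in units $G=M=\mu=1$, for the Newtonian Hamiltonian $H=p^2/2-1/r$ rather than the paper's 5.5PN one); it reproduces the known Newtonian deflection $\chi = 2\arcsin(1/e)$ with $e=\sqrt{1+2EL^2}$:
\begin{verbatim}
# Turning-point integral chi = -2 int_{r0}^inf (dp_r/dL) dr - pi,
# evaluated for the Newtonian Hamiltonian H = p^2/2 - 1/r (unbound orbit).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

E, L = 0.05, 3.0                      # E > 0: hyperbolic orbit

def pr2(r):                           # p_r^2 from inverting H(r, p) = E
    return 2*E + 2/r - (L/r)**2

r0 = brentq(pr2, 1e-6, 1e6)           # turning point: largest root of p_r = 0

# dp_r/dL = -L/(r^2 p_r); substituting u = 1/r gives a finite range with
# an integrable 1/sqrt endpoint singularity, which quad handles.
integrand = lambda u: L/np.sqrt(2*E + 2*u - (L*u)**2)
chi = 2*quad(integrand, 0.0, 1/r0, limit=200)[0] - np.pi

ecc = np.sqrt(1 + 2*E*L**2)
print(chi, 2*np.arcsin(1/ecc))        # both ~ 1.6228
\end{verbatim}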
Starting from the 4.5PN SO Hamiltonian, as given by Eq.~(5.6) of Ref.~\cite{Antonelli:2020ybz}, we determine all the unknown coefficients in the scattering angle in Eq.~\eqref{chiAnz} up to that order.
Writing an ansatz for the local 5.5PN part in terms of unknown coefficients, such as
\begin{align}
g_S^\text{5.5PN,loc} = \frac{g_{04}}{r^4} + g_{23} \frac{p_r^2}{r^3} + g_{42} \frac{p_r^4}{r^2} + g_{61} \frac{p_r^6}{r} + g_{80} p_r^8, \nonumber\\
g_{S^*}^\text{5.5PN,loc} = \frac{g_{04}^*}{r^4} + g_{23}^* \frac{p_r^2}{r^3} + g_{42}^* \frac{p_r^4}{r^2} + g_{61}^* \frac{p_r^6}{r} + g_{80}^* p_r^8,
\end{align}
calculating the scattering angle, and matching to Eq.~\eqref{chiAnz} allows us to relate the 10 unknowns in that ansatz to the 6 unknowns in the scattering angle at that order. This leads to
\begin{widetext}
\begin{align}
\label{gyrosloc}
g_{S}^\text{5.5PN,loc} &= 2 \Bigg\lbrace\frac{1}{r^4} \Bigg[\nu \left(\frac{3 \ms X_{59}^{\delta\nu}}{32}-2 \ms X_{49}^{\delta\nu}-\frac{35 \ms X_{39}^{\nu}}{16}+2 \ms X_{49}^{\nu}-\frac{3 \ms X_{59}^{\nu}}{32}+\frac{309077}{1152}-\frac{35449 \pi ^2}{3072}\right)+\nu^2 \left(\frac{235111}{2304}-\frac{3 \ms X_{59}^{\nu^2}}{32}-\frac{583 \pi ^2}{192}\right) \nonumber\\
&\qquad
-\frac{\nu ^4}{64}-\frac{413 \nu ^3}{512} \Bigg]
+\frac{p_r^2}{r^3} \left[\frac{3 \nu ^4}{8}-\frac{8259 \nu ^3}{128}+ \left(\frac{198133}{384}-\frac{1087 \pi ^2}{128}\right) \nu ^2+\nu \left(2 \ms X_{49}^{\delta\nu}+\frac{35 \ms X_{39}^{\nu}}{8}-2 \ms X_{49}^{\nu}+\frac{1125}{16}\right)\right] \nonumber\\
&\quad
+\frac{p_r^4}{r^2} \left[-\frac{107 \nu ^4}{64}-\frac{73547 \nu ^3}{512}+\frac{31913 \nu ^2}{256}+\nu \left(\frac{8597}{128}-\frac{35 \ms X_{39}^{\nu}}{24}\right)\right] \nonumber\\
&\quad
+\frac{p_r^6}{r} \left[\frac{1577 \nu ^4}{320}-\frac{11397 \nu ^3}{512}-\frac{2553 \nu ^2}{256}-\frac{893 \nu }{256}\right]
+ p_r^8 \left[\frac{189 \nu ^4}{64}+\frac{945 \nu ^3}{512}+\frac{99 \nu ^2}{256}-\frac{27 \nu }{128}\right]\Bigg\rbrace, \nonumber\\
g_{S^*}^\text{5.5PN,loc} &= \frac{3}{2}\Bigg\lbrace\frac{1}{r^4} \bigg[
-\frac{15 \nu ^4}{512}-\frac{111 \nu ^3}{128}+\nu \left(2 \ms X_{49}^{\delta\nu}-\frac{3 \ms X_{59}^{\delta\nu}}{32}-\frac{35 \ms X_{39}^{\nu}}{16}+2 \ms X_{49}^{\nu}-\frac{3 \ms X_{59}^{\nu}}{32}+\frac{131519}{576}-\frac{90149 \pi ^2}{12288}\right)-\frac{1701}{512} \nonumber\\
&\qquad
+\nu ^2 \left(-\frac{3 \ms X_{59}^{\nu^2}}{32}-\frac{123 \pi ^2}{64}+\frac{29081}{512}\right)
\bigg]
+\frac{p_r^2}{r^3} \bigg[
\frac{171 \nu ^4}{256}-\frac{489 \nu ^3}{8}+\left(\frac{77201}{256}-\frac{123 \pi ^2}{16}\right) \nu ^2-\frac{27}{64}
\nonumber\\
&\qquad
+\nu \left(-2 \ms X_{49}^{\delta\nu}+\frac{35 \ms X_{39}^{\nu}}{8}-2 \ms X_{49}^{\nu}+\frac{86897}{768}-\frac{27697 \pi ^2}{2048}\right)\bigg]
+\frac{p_r^4}{r^2} \bigg[-\frac{1377 \nu ^4}{512}-\frac{13905 \nu ^3}{128}+\frac{12135 \nu ^2}{512}+\frac{2525}{512} \nonumber\\
&\qquad
+\nu \left(\frac{10569}{256}-\frac{35 \ms X_{39}^{\nu}}{24}\right) \bigg]
+ \frac{p_r^6}{r}\left[\frac{16077 \nu ^4}{2560}-\frac{2391 \nu ^3}{640}-\frac{879 \nu ^2}{512}+\frac{77 \nu }{32}+\frac{3555}{512}\right] \nonumber\\
&\quad
+ p_r^8\left[\frac{945 \nu ^4}{512}+\frac{315 \nu ^3}{128}+\frac{1053 \nu ^2}{512}+\frac{189 \nu }{128}+\frac{693}{512}\right]\Bigg\rbrace,
\end{align}
\end{widetext}
where we switched to dimensionless variables.
We see that the 5 unknowns ($\ms X_{39}^\nu, \ms X_{49}^\nu,\ms X_{49}^{\delta\nu},\ms X_{59}^{\nu},\ms X_{59}^{\delta\nu}$) from the scattering angle only appear in the linear-in-$\nu$ coefficients of the gyro-gravitomagnetic factors up to order $p_r^4$, while the unknown $\ms X_{59}^{\nu^2}$ only appears in the quadratic-in-$\nu$ coefficients of the circular-orbit ($1/r^4$) part. All other coefficients have been determined, due to the structure of the PM-expanded scattering angle, and from lower-order and test-body results.
\subsection{Redshift and precession frequency}
To determine the linear-in-$\nu$ coefficients in the local Hamiltonian from 1SF results, we calculate the redshift and spin-precession invariants from the \emph{total} (local + nonlocal) Hamiltonian, since GSF calculations do not differentiate between the two, and then match their small-mass-ratio expansions to 1SF expressions known in the literature.
An important step in this calculation is the first law of binary mechanics, which was derived for nonspinning particles in circular orbits in Ref.~\cite{LeTiec:2011ab}, generalized to spinning particles in circular orbits in Ref.~\cite{Blanchet:2012at}, to nonspinning particles in eccentric orbits in Refs.~\cite{Tiec:2015cxa,Blanchet:2017rcn}, and to spinning particles in eccentric orbits in Ref.~\cite{Antonelli:2020ybz}. It reads
\begin{equation}\label{1law}
\mr d E = \Omega_r \mr d I_r + \Omega_\phi \mr d L + \sum_{\mr i} ( z_{\mr i} \mr d m_{\mr i} + \Omega_{S_{\mr i}} \mr d S_{\mr i} ),
\end{equation}
where $\Omega_r$ and $\Omega_\phi$ are the radial and azimuthal frequencies, $I_r$ is the radial action, $z_{\mr i}$ is the redshift, and $ \Omega_{S_{\mr i}}$ is the spin-precession frequency.
The orbit-averaged redshift is a gauge-invariant quantity that can be calculated from the Hamiltonian using
\begin{equation}
z_1 = \left\langle \frac{\partial H}{\partial m_1} \right\rangle = \frac{1}{T_r} \oint \frac{\partial H}{\partial m_1} dt,
\end{equation}
where $T_r$ is the radial period.
The spin-precession frequency $\Omega_{S_1}$ and spin-precession invariant $\psi_1$ are given by
\begin{align}
\Omega_{S_1} &= \left\langle \frac{\partial H}{\partial S_1} \right\rangle = \frac{1}{T_r} \oint \frac{\partial H}{\partial S_1} dt, \nonumber\\
\psi_1 &\equiv \frac{ \Omega_{S_1}}{\Omega_\phi}.
\end{align}
In evaluating these integrals, we follow Refs.~\cite{Bini:2019lcd,Bini:2019lkm} in using the Keplerian parametrization for the radial variable
\begin{equation}
r = \frac{1}{u_p\left(1+e \cos \xi\right)},
\end{equation}
where $u_p$ is the inverse of the semi-latus rectum, $e$ is the eccentricity, and $\xi$ is the relativistic anomaly.
The radial and azimuthal periods are calculated from the Hamiltonian using
\begin{align}
T_r &\equiv \oint dt = 2 \int_0^\pi \left(\frac{\partial H}{\partial p_r}\right)^{-1} \frac{dr}{d\xi} d\xi, \\
T_\phi &\equiv \oint d\phi = 2 \int_0^\pi \frac{\partial H}{\partial L}\left(\frac{\partial H}{\partial p_r}\right)^{-1} \frac{dr}{d\xi} d\xi.
\end{align}
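As a sanity check of these period integrals (a Newtonian toy case of our own, not the paper's spinning Hamiltonian), one can verify that the Keplerian parametrization reproduces Kepler's third law and the absence of periastron advance:
\begin{verbatim}
# Newtonian toy check of the period integrals in the Keplerian
# parametrization r = 1/(u_p (1 + e cos xi)), for H = p^2/2 - 1/r.
import sympy as sp
import numpy as np
from scipy.integrate import quad

up, e, xi = sp.symbols('u_p e xi', positive=True)
En = -(1 - e**2)*up/2                  # energy for semi-latus rectum 1/u_p
L = 1/sp.sqrt(up)                      # angular momentum
r = 1/(up*(1 + e*sp.cos(xi)))

# Along the orbit, p_r^2 = 2E + 2/r - L^2/r^2 collapses to
# u_p e^2 sin^2(xi), so (dH/dp_r)^{-1} dr/dxi = 1/(u_p^{3/2}(1+e cos xi)^2):
pr2 = 2*En + 2/r - L**2/r**2
assert sp.simplify(pr2 - up*e**2*sp.sin(xi)**2) == 0

up_n, e_n = 1.0, 0.3
Tr = 2*quad(lambda x: up_n**-1.5/(1 + e_n*np.cos(x))**2, 0, np.pi)[0]
a = 1/(up_n*(1 - e_n**2))              # semi-major axis
print(Tr, 2*np.pi*a**1.5)              # Kepler's third law, ~7.2389
# The T_phi integrand reduces to 1 exactly, so T_phi = 2*pi:
# no periastron advance at Newtonian order.
\end{verbatim}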
Performing the above steps yields the redshift and spin-precession invariants in terms of the gauge-dependent $u_p$ and $e$, i.e., $z_1(u_p, e)$ and $\psi_1(u_p, e)$.
We then express them in terms of the gauge-independent variables
\begin{equation}
\label{xiota}
x \equiv (M \Omega_\phi)^{2/3}, \quad \iota \equiv \frac{3 x}{k},
\end{equation}
where $k\equiv T_\phi/(2\pi)-1$ is the fractional periastron advance.
The expressions we obtain for $z_1(x,\iota)$ and $\psi_1(x,\iota)$ agree up to 3.5PN order with those in Eq.~(50) of Ref.~\cite{Bini:2019lcd} and Eq.~(83) of Ref.~\cite{Bini:2019lkm}, respectively.
Note that the denominator of $\iota$ in Eq.~\eqref{xiota} is of order 1PN, which effectively scales down the PN ordering such that, to obtain the spin-precession invariant at fourth-subleading PN order, we need to include the 5PN nonspinning part of the Hamiltonian, which is given in Refs.~\cite{Bini:2019nra,Bini:2020wpo}.
\subsection{Comparison with self-force results}
Next, we expand the redshift $z_1(x, \iota)$ and spin-precession invariant $\psi_1(x, \iota)$ to first order in the mass ratio $q$, first order in the massive body's spin $a_2\equiv a$, and zeroth order in the spin of the smaller companion $a_1$.
In doing so, we make use of another set of variables ($y,\lambda$), defined by
\begin{align}
\label{ylambda}
y &\equiv (m_2 \Omega_\phi)^{2/3} = \frac{x}{(1+q)^{2/3}}, \nonumber\\
\lambda &\equiv\frac{3y}{T_\phi/(2\pi)-1} = \frac{\iota}{(1+q)^{2/3}} \,,
\end{align}
where the mass ratio $q=m_1/m_2$.
Schematically, those expansions have the following dependence on the scattering-angle unknowns:
\begin{align}
z_1(y,\lambda) &= \dots + q \left[\dots + a \left\{\ms X_{39}^\nu, \ms X_{49}^\nu-\ms X_{49}^{\delta\nu}, \ms X_{59}^\nu - \ms X_{59}^{\delta\nu}\right\}\right], \nonumber\\
\psi_1(y,\lambda) &= \dots + q \left\{\ms X_{39}^\nu, \ms X_{49}^\nu+\ms X_{49}^{\delta\nu}, \ms X_{59}^\nu + \ms X_{59}^{\delta\nu}\right\},
\end{align}
which can be seen from the structure of the scattering angle in Eq.~\eqref{chiAnz}.
In those expressions, the $\Order(a)$ part of the redshift depends on the unknown $\ms X_{39}^\nu$ and the \emph{difference} of the two pairs of unknowns $(\ms X_{49}^\nu,\ms X_{49}^{\delta\nu})$ and $(\ms X_{59}^\nu, \ms X_{59}^{\delta\nu})$, while the spin-precession invariant depends on $\ms X_{39}^\nu$ and the \emph{sum} of the two pairs of unknowns.
This means that solving for $\ms X_{39}^\nu$ requires a 1SF result for \emph{either} $z_1$ or $\psi_1$, while solving for the other unknowns requires \emph{both}.
Hence, to solve for all five unknowns, we need at least three (or two) orders in eccentricity in the redshift, at first order in the Kerr spin, and two (or three) orders in eccentricity in the spin-precession invariant, at zeroth order in both spins.
Equivalently, instead of the spin-precession invariant, one could use the redshift at linear order in the spin of the smaller body $a_1$, but that is known from 1SF results for circular orbits only~\cite{Bini:2018zde}.
Incidentally, the available 1SF results are just enough to solve for the five unknowns, since the redshift is known to $\Order(e^4)$~\cite{Kavanagh:2016idg,Bini:2016dvs,Bini:2019lcd} and the spin-precession invariant to $\Order(e^2)$~\cite{Kavanagh:2017wot}.
The last unknown $\ms X_{59}^{\nu^2}$ in the 5.5PN scattering angle appears in both the redshift and spin-precession invariants at \emph{second} order in the mass ratio, thus requiring second-order self-force results for circular orbits.
To compare $z_1(y,\lambda)$ and $\psi_1(y,\lambda)$ with GSF results, we write them in terms of the Kerr background values of the variables ($y,\lambda$) expressed in terms of $(u_p, e)$. The relations between the two sets of variables are explained in detail in Appendix~B of Ref.~\cite{Antonelli:2020ybz}, and we just need to append to Eqs.~(B16)-(B20) there the following PN terms
\begin{align}
y(u_p,e)&=y_{0}(u_p,e)+ a\, y_{a}(u_p,e)+\mathcal{O}(a^2), \\
\lambda(u_p,e)&=\lambda_{0}(u_p,e)+ a\, \lambda_{a}(u_p,e)+\mathcal{O}(a^2),\nonumber\\
y_a(u_p,e)&= \dots + \left(\frac{4829 e^4}{12}-4984 e^2\right) u_p^{13/2}, \nonumber\\
\lambda_a(u_p,e)&= \dots + \left(\frac{4671}{8}+\frac{13959 e^2}{8}-\frac{19657 e^4}{12}\right) u_p^{11/2}.\nonumber
\end{align}
We obtain the following 1SF part of the inverse redshift $U_1 \equiv 1/z_1$ and spin-precession invariant $\psi_1$
\begin{align}
U_1&= U^{(0)}_{1a^0} +a\, U^{(0)}_{1a}+q\left(U^\text{1SF}_{1a^0} +a\, U^\text{1SF}_{1a}\right)+\mathcal{O}(q^2,a^2), \nonumber\\
\psi_1&= \psi^{(0)}_{1a^0} +q \psi^\text{1SF}_{1a^0} +\mathcal{O}(q^2,a)\,,
\end{align}
\begin{widetext}
\begin{align}
\label{U1SF}
U^\text{1SF}_{1a}&=\left(3-\frac{7 e^2}{2}-\frac{e^4}{8}\right) u_p^{5/2}+\left(18-4
e^2-\frac{117 e^4}{4}\right) u_p^{7/2} +\left(87+\frac{287 e^2}{2}-\frac{6277 e^4}{16}\right)u_p^{9/2}\nonumber\\
&\quad +\left[\frac{3890}{9}-\frac{241 \pi ^2}{96}+\left(\frac{5876}{3}-\frac{569 \pi ^2}{64}\right) e^2+\left(\frac{2025 \pi ^2}{128}-3547\right) e^4\right]u_p^{11/2} \nonumber\\
&\quad
+ \bigg[
8 \ms X_{49}^{\delta\nu}-\frac{3 \ms X_{59}^{\delta\nu}}{8}+\frac{35 \ms X_{39}^{\nu}}{4}-8 \ms X_{49}^{\nu}+\frac{3 \ms X_{59}^{\nu}}{8}+\frac{17917 \pi ^2}{768}+\frac{2027413}{2880}+\frac{2336 \gamma_E }{15}+\frac{928 \ln 2}{3} + \frac{1168 \ln u_p}{15} \nonumber\\
&\quad\qquad
+ e^2 \bigg(
\frac{4832 \ln u_p}{15}+24 \ms X_{49}^{\delta\nu}-\frac{21 \ms X_{59}^{\delta\nu}}{16}+\frac{175 \ms X_{39}^{\nu}}{8}-24 \ms X_{49}^{\nu}+\frac{21 \ms X_{59}^{\nu}}{16}+\frac{182411 \pi ^2}{1536}+\frac{31389241}{2880}+\frac{9664 \gamma_E }{15} \nonumber\\
&\qquad\qquad
-1728 \ln 2+2916 \ln 3
\bigg)
+ e^4 \bigg(-\frac{1248 \ln u_p}{5}-42 \ms X_{49}^{\delta\nu}+\frac{63 \ms X_{59}^{\delta\nu}}{32}-\frac{175 \ms X_{39}^{\nu}}{4}+42 \ms X_{49}^{\nu}-\frac{63 \ms X_{59}^{\nu}}{32}-\frac{2496 \gamma_E }{5} \nonumber\\
&\qquad\qquad
-\frac{200393 \pi ^2}{1024}-\frac{137249131}{7680}+\frac{782912 \ln 2}{15}-\frac{328779 \ln 3}{10}\bigg)
\bigg]u_p^{13/2}
\,, \\
\label{psi1SF}
\psi_{1\, a^0}^\text{1SF} &=-u_p+\left(\frac{9}{4}+e^2\right) u_p^2+
\left[\frac{739}{16}-\frac{123 \pi ^2}{64}+\left(\frac{341}{16}-\frac{123 \pi ^2}{256}\right) e^2\right]u_p^3\nonumber\\
&\quad+\bigg[\frac{628 \ln u_p}{15}+\frac{31697 \pi ^2}{6144}-\frac{587831}{2880}+\frac{1256 \gamma_E }{15} + \frac{296}{15} \ln 2+\frac{729 \ln 3}{5}\nonumber\\
&\quad\qquad
+e^2
\bigg(\frac{268 \ln u_p}{5}-\frac{164123}{480}-\frac{23729 \pi ^2}{4096}+\frac{536 \gamma_E }{5}+\frac{11720 \ln 2}{3}-\frac{10206 \ln 3}{5}\bigg)\bigg]u_p^4 \nonumber\\
&\quad
+ \bigg[
4 \ms X_{49}^{\delta\nu}-\frac{3 \ms X_{59}^{\delta\nu}}{16}-\frac{35 \ms X_{39}^{\nu}}{8}+4 \ms X_{49}^{\nu}-\frac{3 \ms X_{59}^{\nu}}{16}+\frac{6793111 \pi ^2}{24576}-\frac{22306 \gamma_E }{35}-\frac{115984853}{57600}+\frac{22058 \ln 2}{105}-\frac{31347 \ln 3}{28} \nonumber\\
&\qquad\quad
-\frac{11153}{35} \ln u_p
+ e^2 \bigg(\frac{4248047}{4800}+18 \ms X_{49}^{\delta\nu}-\frac{15 \ms X_{59}^{\delta\nu}}{16}-\frac{35 \ms X_{39}^{\nu}}{2}+18 \ms X_{49}^{\nu}-\frac{15 \ms X_{59}^{\nu}}{16}+\frac{4895607 \pi ^2}{16384}-\frac{22682 \gamma_E }{15} \nonumber\\
&\qquad\quad
+\frac{4430133 \ln 3}{320}+\frac{9765625 \ln 5}{1344}-\frac{4836254 \ln 2}{105}-\frac{11341 \ln u_p}{15}\bigg)
\bigg]u_p^5
\,.
\end{align}
\end{widetext}
These results can be directly compared with those derived in GSF literature. In particular, for the redshift, we match to Eq.~(4.1) of Ref.~\cite{Kavanagh:2016idg}, Eq.~(23) of Ref.~\cite{Bini:2016dvs}, and Eq.~(20) of Ref.~\cite{Bini:2019lcd}, while for the precession frequency, we match to Eq.~(3.33) of Ref.~\cite{Kavanagh:2017wot}\footnote{
Note that the $\Order(e^2 u_p^5)$ term in Eq.~(3.33) of Ref.~\cite{Kavanagh:2017wot} has a typo, but the correct expression is provided in the Black Hole Perturbation Toolkit~\cite{BHPToolkit}.
}.
The matching to 1SF results leads to the following solution for the unknown coefficients in the scattering angle:
\begin{align}
\label{Xsol}
\ms X_{39}^{\nu} &= \frac{26571}{1120}, \nonumber\\
\ms X_{49}^{\delta\nu} &= \frac{533669}{4800}-\frac{97585 \pi ^2}{8192}, \nonumber\\
\ms X_{49}^{\nu} &= -\frac{403129}{4800}+\frac{80823 \pi ^2}{8192}, \nonumber\\
\ms X_{59}^{\delta\nu} &= \frac{285673}{240}-\frac{2477 \pi ^2}{16}, \nonumber\\
\ms X_{59}^{\nu} &= \frac{402799}{270}-\frac{4135 \pi ^2}{144}.
\end{align}
Note that all the logarithms and Euler constants, which are purely nonlocal, cancel between GSF results and those in Eqs.~\eqref{U1SF} and \eqref{psi1SF}, thus providing a good check for our calculations.
Another check would be possible once 1SF results are computed at higher orders in eccentricity, since one could directly compare them to our results for the redshift and spin-precession invariants that are provided in the Supplemental Material~\cite{ancprd} expanded to $\Order(e^8)$.
\subsection{Local scattering angle and Hamiltonian}
Inserting the solution from Eq.~\eqref{Xsol} into the scattering angle in Eq.~\eqref{chiAnz} yields
\begin{widetext}
\begin{align}
\label{chiLoc}
\frac{\chi_a^\text{loc}}{\Gamma} &=
\frac{a_1}{b} \Bigg\lbrace \left(\frac{GM}{v^2b}\right) (-4 v) + \pi\left(\frac{GM}{v^2b}\right)^2 \left[\left(\frac{\delta }{2}-\frac{7}{2}\right) v + \left(\frac{3 \delta }{4}-\frac{21}{4}\right) v^3\right] \nonumber \\
&\quad
+\left(\frac{GM}{v^2b}\right)^3 \left[
(2 \delta -10) v + (20 \delta -100 + 10 \nu) v^3 +\left(10 \delta-50 + \frac{77}{2} \nu\right) v^5 + \frac{177}{4} \nu v^7 + \frac{26571}{1120} \nu v^9
\right] \nonumber\\
&\quad
+ \pi\left(\frac{GM}{v^2b}\right)^4 \bigg[
\left(\frac{63}{4}\delta-\frac{273}{4} - \frac{3}{4} \delta \nu + \frac{39}{4} \nu \right)v^3
+ \left(\frac{315}{8}\delta-\frac{1365}{8} - \frac{45}{8} \delta \nu + \frac{777}{8} \nu \right) v^5\nonumber \\
&\qquad\qquad
+ \left(\frac{315 \delta }{32} -\frac{1365}{32} + \left(-\frac{257}{96}-\frac{251 \pi ^2}{256}\right) \delta \nu + \left(\frac{23717}{96}-\frac{733 \pi ^2}{256}\right) \nu \right) v^7 \nonumber\\
&\qquad\qquad
+ \left(\left(\frac{533669}{4800}-\frac{97585 \pi ^2}{8192}\right) \delta \nu + \left(\frac{80823 \pi ^2}{8192}-\frac{403129}{4800}\right) \nu\right) v^9
\bigg]\nonumber \\
&\quad
+\left(\frac{GM}{v^2b}\right)^5 \bigg[
\left(-\frac{4 \delta }{3}+\frac{16}{3}\right) v
+\left(48 \delta -192 - 4 \delta \nu + 32 \nu\right) v^3
+ \left(504 \delta -2016 - 109 \delta \nu + 1032 \nu - 16 \nu^2 \right) v^5 \nonumber\\
&\qquad\qquad
+ \left(560 \delta -2240 + \left(-\frac{21995}{54}-\frac{80 \pi ^2}{9}\right) \delta \nu + \left(\frac{150220}{27}-\frac{2755 \pi ^2}{36}\right) \nu - 168 \nu^2 \right) v^7 \nonumber\\
&\qquad\qquad
+ \left(84\delta -336 + \left(\frac{285673}{240}-\frac{2477 \pi ^2}{16}\right) \delta \nu + \left(\frac{402799}{270}-\frac{4135 \pi ^2}{144}\right) \nu + \ms X_{59}^{\nu^2} \nu^2\right)v^9
\bigg]\Bigg\rbrace + 1 \leftrightarrow 2.
\end{align}
For the gyro-gravitomagnetic factors, which are one of the main results of this paper, substituting the solution~\eqref{Xsol} in Eq.~\eqref{gyrosloc} yields the following local part:
\begin{align}
\label{gSLoc}
g_S^\text{5.5PN,loc} &= 2\Bigg\lbrace\left[-\frac{\nu ^4}{64}-\frac{413 \nu ^3}{512}+\nu ^2 \left(-\frac{3 \ms X_{59}^{\nu^2}}{32}-\frac{583 \pi ^2}{192}+\frac{235111}{2304}\right)+\left(\frac{62041 \pi ^2}{3072}-\frac{11646877}{57600}\right) \nu \right]\frac{1}{r^4} \nonumber\\
&\quad
+ \left[\frac{3 \nu ^4}{8}-\frac{8259 \nu ^3}{128}+\left(\frac{198133}{384}-\frac{1087 \pi ^2}{128}\right) \nu ^2+\left(\frac{3612403}{6400}-\frac{22301 \pi ^2}{512}\right) \nu \right] \frac{p_r^2}{r^3} \nonumber\\
&\quad
+ \left[-\frac{107 \nu ^4}{64}-\frac{73547 \nu ^3}{512}+\frac{31913 \nu ^2}{256}+\frac{8337 \nu }{256}\right] \frac{p_r^4}{r^2}
+ \left[\frac{1577 \nu ^4}{320}-\frac{11397 \nu ^3}{512}-\frac{2553 \nu ^2}{256}-\frac{893 \nu }{256}\right] \frac{p_r^6}{r} \nonumber\\
&\quad
+ \left[\frac{189 \nu ^4}{64}+\frac{945 \nu ^3}{512}+\frac{99 \nu ^2}{256}-\frac{27 \nu }{128}\right] p_r^8\Bigg\rbrace, \\
\label{gSstrLoc}
g_{S^*}^\text{5.5PN,loc} &= \frac{3}{2} \Bigg\lbrace
\left[-\frac{5 \nu ^4}{128}-\frac{37 \nu ^3}{32}+\nu ^2 \left(-\frac{\ms X_{59}^{\nu^2}}{8}-\frac{41 \pi ^2}{16}+\frac{29081}{384}\right)+\left(\frac{23663 \pi ^2}{3072}-\frac{55}{2}\right) \nu -\frac{567}{128}\right]\frac{1}{r^4} \nonumber\\
&\quad
+ \left[\frac{57 \nu ^4}{64}-\frac{163 \nu ^3}{2}+\left(\frac{77201}{192}-\frac{41 \pi ^2}{4}\right) \nu ^2+\left(\frac{34677}{160}-\frac{4829 \pi ^2}{384}\right) \nu -\frac{9}{16}\right] \frac{p_r^2}{r^3} \nonumber\\
&\quad
+ \left[-\frac{459 \nu ^4}{128}-\frac{4635 \nu ^3}{32}+\frac{4045 \nu ^2}{128}+\frac{107 \nu }{12}+\frac{2525}{384}\right] \frac{p_r^4}{r^2}
+ \left[\frac{5359 \nu ^4}{640}-\frac{797 \nu ^3}{160}-\frac{293 \nu ^2}{128}+\frac{77 \nu }{24}+\frac{1185}{128}\right] \frac{p_r^6}{r} \nonumber\\
&\quad
+ \left[\frac{315 \nu ^4}{128}+\frac{105 \nu ^3}{32}+\frac{351 \nu ^2}{128}+\frac{63 \nu }{32}+\frac{231}{128}\right]p_r^8\Bigg\rbrace.
\end{align}
\end{widetext}
\subsection{Comparison with numerical relativity}
\label{sec:Eb}
To quantify the effect of the 5.5PN SO part on the dynamics, and that of the remaining unknown coefficient $\ms X_{59}^{\nu^2}$, we compare the binding energy calculated from the EOB Hamiltonian to NR.
The binding energy provides a good diagnostic for the conservative dynamics of the binary system~\cite{Barausse:2011dq,Damour:2011fu,Nagar:2015xqa}, and can be calculated from accurate NR simulations by subtracting the radiated energy $E_\text{rad}$ from the ADM energy $E_\text{ADM}$ at the beginning of the simulation~\cite{Ruiz:2007yx}, i.e.,
\begin{equation}
\bar{E}_\text{NR} = E_\text{ADM} - E_\text{rad} - M.
\end{equation}
To isolate the SO contribution $\bar{E}^\text{SO}$ to the binding energy, we combine configurations with different spin orientations (parallel or anti-parallel to the orbital angular momentum), as explained in Refs.~\cite{Dietrich:2016lyp,Ossokine:2017dge}. One possibility is to use
\begin{equation}
\label{EbSO}
\bar{E}^\text{SO}(\nu,\chi,\chi) \simeq \frac{1}{2} \left[ \bar{E}(\nu,\chi,\chi) - \bar{E}(\nu,-\chi,-\chi)\right],
\end{equation}
where $\chi$ here is the magnitude of the dimensionless spin.
This relation subtracts the nonspinning and spin-spin parts, with corrections remaining at order $\chi^3$, which provides a good approximation since the spin-cubed contribution to the binding energy is about an order of magnitude smaller than the SO contribution, as was shown in Ref.~\cite{Ossokine:2017dge}.
We calculate the binding energy for circular orbits from the EOB Hamiltonian using $\bar{E}_\text{EOB} = H_\text{EOB} - M$ while neglecting radiation reaction effects, which implies that $\bar{E}_\text{EOB}$ is not expected to agree well with $\bar{E}_\text{NR}$ near the end of the inspiral.
We set $p_r=0$ in the Hamiltonian and numerically solve $\dot{p}_r=0=-\partial H/\partial r$ for the angular momentum $L$ at different orbital separations.
Then, we plot $\bar{E}$ versus the dimensionless parameter
\begin{equation}
v_\Omega \equiv (M\Omega)^{1/3},
\end{equation}
where the orbital frequency $\Omega = \partial H / \partial L$.
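For concreteness, the following sketch (our own, using the test-mass limit $H_\text{eff}=\sqrt{(1-2/r)(1+L^2/r^2)}$ in units $G=M=\mu=1$ rather than the full EOB Hamiltonian) implements exactly these steps and reproduces the known Schwarzschild circular-orbit binding energy:
\begin{verbatim}
# Circular orbits: set p_r = 0, solve dH/dr = 0 for L, then Omega = dH/dL.
import numpy as np
from scipy.optimize import brentq

def H(r, L):
    return np.sqrt((1 - 2/r)*(1 + L**2/r**2))

def dH_dr(r, L, h=1e-6):               # finite-difference radial derivative
    return (H(r + h, L) - H(r - h, L))/(2*h)

for r in (20.0, 10.0, 7.0):
    L = brentq(lambda s: dH_dr(r, s), 1e-3, 50.0)   # circular-orbit L(r)
    Om = (H(r, L + 1e-6) - H(r, L - 1e-6))/2e-6     # Omega = dH/dL
    vOm = Om**(1/3)                                 # frequency parameter
    Eb = H(r, L) - 1.0                              # binding energy
    Eb_exact = (1 - 2/r)/np.sqrt(1 - 3/r) - 1       # Schwarzschild value
    print(f"r={r:5.1f} v={vOm:.4f} Eb={Eb:+.6f} exact={Eb_exact:+.6f}")
\end{verbatim}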
Finally, we compare the EOB binding energy to NR data for the binding energy that were extracted in Ref.~\cite{Ossokine:2017dge} from the Simulating eXtreme Spacetimes (SXS) catalog~\cite{SXS,Boyle:2019kee}. In particular, we use the simulations with SXS IDs 0228 and 0215 for $q=1$, and 0291 and 0264 for $q=1/3$, all with spin magnitudes $\chi=0.6$ aligned and antialigned. The numerical error in these simulations is significantly smaller than the SO contribution to the binding energy.
In Fig.~\ref{fig:Eb}, we plot the relative difference in the SO contribution $\bar{E}^\text{SO}$ between EOB and NR for two mass ratios, $q=1$ and $q=1/3$, as a function of $v_\Omega$ up to $v_\Omega = 0.38$, which corresponds to about an orbit before merger.
We see that the inclusion of the 5.5PN SO part (with the remaining unknown $\ms X_{59}^{\nu^2} = 0$) provides an improvement over 4.5PN, but the difference is smaller than that between 3.5PN and 4.5PN.
In addition, since the remaining unknown $\ms X_{59}^{\nu^2}$ is expected to be of order $10^2$, based on the other coefficients in the scattering angle, we also plot the energy for $\ms X_{59}^{\nu^2}=500$ and $\ms X_{59}^{\nu^2}=-500$, demonstrating that the effect of that unknown is smaller than the difference between 4.5PN and 5.5PN, with a decreasing effect for small mass ratios.
\begin{figure}
\includegraphics{deltaEb}
\caption{The relative difference in the SO contribution to the binding energy between EOB and NR, plotted versus the frequency parameter $v_\Omega$. The 5.5PN curve corresponds to $\ms X_{59}^{\nu^2} = 0$, while the upper and lower edges of the shaded region around it correspond to $\ms X_{59}^{\nu^2} = -500$ and $\ms X_{59}^{\nu^2} = 500$, respectively.}
\label{fig:Eb}
\end{figure}
\section{Nonlocal 5.5PN SO scattering angle}
\label{sec:nonlocscatter}
The local part of the Hamiltonian and scattering angle calculated in the previous section is valid for both bound and unbound orbits. However, the nonlocal part of the Hamiltonian from Sec.~\ref{sec:nonloc} is only valid for bound orbits since it was calculated in a small-eccentricity expansion.
In this section, we complement these results by calculating the nonlocal part for unbound orbits in a large-eccentricity (or large-angular-momentum) expansion.
The nonlocal part of the 4PN scattering angle was first computed in Ref.~\cite{Bini:2017wfr}, in both the time and frequency domains, at leading order in the large-eccentricity expansion.
This was extended to 5PN at leading order in eccentricity in Ref.~\cite{Bini:2020wpo}, and to 6PN at next-to-next-to-leading order in eccentricity in Ref.~\cite{Bini:2020hmy}.
In addition, Refs.~\cite{Bini:2020uiq,Bini:2020rzn} recovered analytical expressions for the nonlocal scattering angle by using high-precision arithmetic methods.
It was shown in Ref.~\cite{Bini:2017wfr} that the nonlocal contribution to the scattering angle is given by
\begin{align}
\chi_\text{nonloc} &= \frac{1}{\nu} \frac{\partial}{\partial L} W_\text{nonloc}(E,L), \\
W_\text{nonloc} &= \int dt\, \delta H_\text{nonloc},
\end{align}
with $\delta H_\text{nonloc}$ given by Eq.~\eqref{nonlocH}, leading to
\begin{align}
W_\text{nonloc} &= W^\text{flux split} + W^\text{flux}, \\
W^\text{flux split} &= - \frac{G M}{c^3} \int dt\, \text{Pf}_{2s/c} \int \frac{d\tau}{|\tau|} \mathcal{F}_\text{LO}^\text{split}(t,t+\tau), \label{Wsplit} \\
W^\text{flux} &= \frac{2GM}{c^3} \int dt\, \mathcal{F}_\text{LO}(t,t) \ln\left(\frac{r}{s}\right). \label{Wflux}
\end{align}
To evaluate the integral in the large-eccentricity limit, we follow the steps used in Refs.~\cite{Bini:2017wfr,Bini:2020wpo}.
We use the quasi-Keplerian parametrization for hyperbolic motion~\cite{damour1985general,Cho:2018upo}
\begin{align}
r &= \bar{a}_r (e_r \cosh \bar{u} - 1), \\
\bar{n} t &= e_t \sinh \bar{u} - \bar{u}, \label{KepEqhyp}\\
\phi &= 2 K \arctan\left[\sqrt{\frac{e_\phi+1}{e_\phi-1}} \tanh \frac{\bar{u}}{2}\right], \label{qKphihyp}
\end{align}
which is the analytic continuation of the parametrization for elliptic orbits in Eqs.~\eqref{qKr}--\eqref{qKphi}. In Appendix~\ref{app:qKephyp}, we summarize the relations for these quantities in terms of the energy and angular momentum.
We begin by expressing the variables $(r,\dot{\phi},\dot{r})$, which enter the multipole moments, in terms of $(\phi,L,e_t)$, such that
\begin{align}
r &= \frac{L^2}{1 + e_t \cos\phi} +\frac{2 \delta \chi _A+(2-\nu ) \chi _S}{L \left(e_t \cos \phi +1\right)^2} \nonumber\\
&\qquad
\times \left(2 \phi e_t \sin \phi+4 e_t \cos\phi +e_t^2+3\right), \nonumber\\
\dot{\phi} &= \frac{\left(e_t \cos \phi +1\right)^2}{L^3} +\frac{\left(e_t \cos \phi +1\right) \left(2 \delta \chi_A+(2-\nu ) \chi _S\right)}{2 L^6} \nonumber\\
&\quad
\times \left(-8 \phi e_t \sin \phi +e_t^2 \cos (2 \phi )-12 e_t \cos \phi-3 e_t^2-10\right), \nonumber\\
\dot{r} &= \frac{e_t \sin \phi}{L} +\frac{e_t \left(2 \delta \chi _A+(2-\nu) \chi _S\right)}{2 L^4} \nonumber\\
&\qquad
\times \left(e_t \sin (2 \phi )-2 \sin \phi +4 \phi \cos \phi \right).
\end{align}
We then use these relations to obtain an exact expression for the flux-split function $\mathcal{F}_\text{LO}^\text{split}(\phi,\phi')$, with no eccentricity expansion, which takes the form
\begin{align}
\label{Fsplitphi}
\mathcal{F}_\text{LO}^\text{split}(\phi,\phi') &= \frac{4\nu^2}{15L^{10}} (1+e_t\cos\phi)^2 (1+e_t\cos\phi')^2 \nonumber\\
&\qquad
\times \left(F_0 +F_1 e_t + F_2 e_t^2\right) \nonumber\\
&\quad + \frac{\nu^2}{L^{13}} (1+e_t\cos\phi) (1+e_t\cos\phi') \nonumber\\
&\qquad
\times \left(F_0^s +F_1^s e_t + \dots + F_6^s e_t^6\right),
\end{align}
where the functions $F_{\ms i}(\phi,\phi')$ are given by Eq.~(92) of Ref.~\cite{Bini:2017xzy}, but the functions $F_{\ms i}^s(\phi,\phi')$ in the SO part are too lengthy to write here.
Instead, we expand $\mathcal{F}_\text{LO}^\text{split}(\phi,\phi')$ to leading order in a large-eccentricity expansion (in powers of $1/e_t$).
To do that, we define the rescaled mean motion $\tilde{n} \equiv \bar{n} / e_t$, write the Kepler Eq.~\eqref{KepEqhyp} as
\begin{equation}
\tilde{n}t = \sinh \bar{u} - \frac{\bar{u}}{e_t},
\end{equation}
and solve for $\bar{u}$ in a $1/e_t$ expansion
\begin{equation}
\bar{u} = \sinh ^{-1}\left(t \tilde{n}\right) + \frac{\sinh ^{-1}\left(t \tilde{n}\right)}{e_t \sqrt{1+t^2 \tilde{n}^2}} + \Order(e_t^{-2}).
\end{equation}
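This inversion is easy to verify symbolically; the short \texttt{sympy} check below (ours, not part of the derivation) confirms that the Kepler equation is satisfied through $\Order(1/e_t)$:
\begin{verbatim}
# Check that u = asinh(T) + asinh(T)/(e_t sqrt(1+T^2)) solves
# T = sinh(u) - u/e_t through first order in 1/e_t  (here T = n*t).
import sympy as sp

T, eps = sp.symbols('T epsilon', positive=True)   # eps = 1/e_t
u0 = sp.asinh(T)
u = u0 + eps*u0/sp.sqrt(1 + T**2)

residual = sp.series(sp.sinh(u) - eps*u - T, eps, 0, 2).removeO()
print(sp.simplify(residual))                      # 0
\end{verbatim}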
Substituting in Eq.~\eqref{qKphihyp} and expanding yields
\begin{align}
\phi(t) &= \tan ^{-1}\left(t \tilde{n}\right)+ \Order\left(e_t^{-1}\right) \nonumber\\
&\quad - \frac{t \tilde{n} e_t \left[2 \delta \chi _A+(2-\nu) \chi _S\right]}{L^3 \sqrt{t^2 \tilde{n}^2+1}} + \Order\left(e_t^0\right).
\end{align}
Defining $\tilde{t} \equiv \tilde{n} t$ and $\tilde{\tau} \equiv \tilde{n} \tau$, then substituting in Eq.~\eqref{Fsplitphi} and expanding yields
\begin{widetext}
\begin{align}
\mathcal{F}^\text{split}(t,t+\tau) &= \frac{4 \nu^2 e_t^6 }{15 L^{10}} \frac{\ms f_6(\tilde{t},\tilde{\tau})}{\left(\tilde{t}^2+1\right)^{5/2} \left(2 \tilde{\tau } \tilde{t}+\tilde{t}^2+\tilde{\tau }^2+1\right)^{5/2}} + \Order(e_t^5) \nonumber\\
&\quad
+ \frac{4 \nu^2 e_t^8 }{15 L^{13} } \frac{\chi_S \ms f_8^S(\tilde{t},\tilde{\tau}) + \delta \chi_A \ms f_8^A(\tilde{t},\tilde{\tau})}{\left(\tilde{t}^2+1\right)^{7/2} \left(2 \tilde{\tau } \tilde{t}+\tilde{t}^2+\tilde{\tau }^2+1\right)^{7/2}}+ \Order(e_t^7),
\end{align}
with
\begin{align}
\ms f_6(\tilde{t},\tilde{\tau}) &=2 \tilde{t}^6+ 6 \tilde{\tau } \tilde{t}^5+\left(6 \tilde{\tau }^2+28\right) \tilde{t}^4+2 \tilde{\tau } \left(\tilde{\tau }^2+28\right) \tilde{t}^3+\left(39 \tilde{\tau }^2+50\right) \tilde{t}^2+\tilde{\tau } \left(11 \tilde{\tau }^2+50\right) \tilde{t}-12 \left(\tilde{\tau }^2-2\right),
\nonumber\\
\ms f_8^S(\tilde{t},\tilde{\tau}) &= 4 (9 \nu -5) \tilde{t}^8 + 16 (9 \nu -5) \tilde{\tau } \tilde{t}^7+2 \tilde{t}^6 \left[18 \nu \left(6 \tilde{\tau }^2+5\right)-63 \tilde{\tau }^2-67\right]+2 \tilde{\tau } \tilde{t}^5 \left[18 \nu \left(4 \tilde{\tau }^2+15\right)-49 \tilde{\tau }^2-201\right] \nonumber\\
&\quad
+\tilde{t}^4 \left[18 \nu \left(2 \tilde{\tau }^4+35 \tilde{\tau }^2+18\right)-2 \left(19 \tilde{\tau }^4+206 \tilde{\tau }^2+141\right)\right]-2 \tilde{\tau } \tilde{t}^3 \left[-36 \nu \left(5 \tilde{\tau }^2+9\right)+3 \tilde{\tau }^4+77 \tilde{\tau }^2+282\right] \nonumber\\
&\quad
+\tilde{t}^2 \left[3 \nu \left(35 \tilde{\tau }^4+130 \tilde{\tau }^2+84\right)-2 \left(8 \tilde{\tau }^4+169 \tilde{\tau }^2+121\right)\right]
+\tilde{\tau } \tilde{t} \left[3 \nu \left(5 \tilde{\tau }^4+22 \tilde{\tau }^2+84\right)-2 \left(3 \tilde{\tau }^4+28 \tilde{\tau }^2+121\right)\right] \nonumber\\
&\quad
+22 \tilde{\tau }^4-52 \tilde{\tau }^2-74 -12 \nu \left(3 \tilde{\tau }^4+2 \tilde{\tau }^2-6\right),
\nonumber\\
\ms f_8^A(\tilde{t},\tilde{\tau}) &= -2 \left(\tilde{t}^2+1\right) \Big[40 \tilde{\tau } \tilde{t}^5+\left(63 \tilde{\tau }^2+57\right) \tilde{t}^4+7 \tilde{\tau } \left(7 \tilde{\tau }^2+23\right) \tilde{t}^3+\left(19 \tilde{\tau }^4+143 \tilde{\tau }^2+84\right) \tilde{t}^2+\tilde{\tau } \left(3 \tilde{\tau }^4+28 \tilde{\tau }^2+121\right) \tilde{t} \nonumber\\
&\quad\qquad
+10 \tilde{t}^6-11 \tilde{\tau }^4+26 \tilde{\tau }^2+37 \Big].
\end{align}
Then, we evaluate the \emph{partie finie} integral in Eq.~\eqref{Wsplit}, after writing it in terms of $\tilde{T} \equiv 2\tilde{n}s/c$, to obtain
\begin{align}
&-\text{Pf}_{\tilde{T}} \int \frac{d\tilde{\tau}}{|\tilde{\tau}|} \mathcal{F}_\text{LO}^\text{split}(t,t+\tau)
= \frac{8 \nu^2 e_t^6 }{15 L^{10} \left(\tilde{t}^2+1\right)^3} \left[36 + 7 \tilde{t}^2 + 2 \left(\tilde{t}^2+12\right) \ln \left(\frac{\tilde{T}}{2 \tilde{t}^2+2}\right)\right] \nonumber\\
&\qquad\qquad
+\frac{16 \nu^2 e_t^8}{15 L^{13} \left(\tilde{t}^2+1\right)^4} \Bigg\lbrace
\chi _S \left[(3 \nu -5) \tilde{t}^4+(36 \nu +5) \tilde{t}^2-\left(2 (5-9 \nu ) \tilde{t}^2-36 \nu +37\right) \ln \left(\frac{\tilde{T}}{2\tilde{t}^2+2}\right)+60 \nu -53\right]
\nonumber\\
&\qquad\qquad\qquad
-\delta \chi _A \left[\left(10 \tilde{t}^2+37\right) \ln \left(\frac{\tilde{T}}{2\tilde{t}^2+2}\right)+5 \tilde{t}^4-5 \tilde{t}^2+53\right]
\Bigg\rbrace.
\end{align}
Integrating over $t$, we obtain the flux-split potential
\begin{align}
W^\text{flux split} &= \frac{2\pi \nu^2}{15 e_t^3 a_r^{7/2}} \left[100 + 37 \ln\left(\frac{s}{4 e_t a_r^{3/2}}\right)\right]
+\frac{\pi \nu^2}{30 e_t^4 a_r^5} \Bigg\lbrace
31\delta \chi_A \left[-137 - 46 \ln \left(\frac{s}{4 a_r^{3/2} e_t}\right) \right] \nonumber\\
&\qquad
+ \chi_S \left[
2774 \nu -4247 - 2(713-457 \nu ) \ln \left(\frac{s}{4 a_r^{3/2} e_t}\right)
\right]
\Bigg\rbrace.
\end{align}
The second contribution $W^\text{flux}$ in Eq.~\eqref{Wflux} can be easily integrated to yield
\begin{align}
W^\text{flux} &= \frac{2\pi \nu^2}{15 e_t^3 a_r^{7/2}} \left[-\frac{85}{4} - 37 \ln\left(\frac{s}{2 a_r e_t}\right)\right]
+\frac{\pi \nu^2}{60 e_t^4 a_r^5} \bigg\lbrace
\delta \chi_A \left[2255 + 2852\ln\left( \frac{s}{2 a_r e_t}\right) \right] \nonumber\\
&\qquad
+ \chi_S \left[
2255-1365 \nu + (2852-1828 \nu ) \ln \left(\frac{s}{2 a_r e_t}\right)
\right]
\bigg\rbrace,
\end{align}
\end{widetext}
where we used
\begin{equation}
r(t) = \frac{L^2 \sqrt{\tilde{t}^2+1}}{e_t}+\frac{1}{L}\left[2 \delta \chi _A+(2-\nu) \chi _S\right].
\end{equation}
Adding the two contributions leads to
\begin{align}
&W^\text{nonloc} = -\frac{\pi \nu^2}{30 e_t^3 a_r^{7/2}} \left[74 \ln \left(4a_r\right)-315\right] \nonumber\\
&\quad
+ \frac{\pi \nu^2 }{60 e_t^4 a_r^5} \Big\lbrace
\chi _S \left[4183 \nu -6239 + 2 (713-457 \nu ) \ln (4 a_r)\right]
\nonumber\\
&\quad\qquad
+\delta \chi _A \left[-6239+1426 \ln \left(4a_r\right)\right]
\Big\rbrace,
\end{align}
where we see that $s$ cancels.
In terms of the energy and angular momentum, and expanding in $1/L$ to leading order,
\begin{align}
W^\text{nonloc} &=\frac{2\pi \nu^2 \bar{E}^2}{15 L^3} \left[315+74 \ln \left(\frac{\bar{E}}{2}\right)\right] \\
&\quad
+ \frac{2\pi \nu^2 \bar{E}^3 }{15 L^4} \bigg\lbrace\!
(4183 \nu -6239) \chi _S-6239 \delta \chi _A \nonumber\\
&\qquad
- 2\left[713 \delta \chi _A+(713-457 \nu ) \chi _S\right]\ln \left(\frac{\bar{E}}{2}\right)
\bigg\rbrace,\nonumber
\end{align}
and the nonlocal part of the scattering angle
\begin{align}
\chi^\text{nonloc} &= -\frac{2\pi \nu \bar{E}^2}{5 L^4} \left[315+74 \ln \left(\frac{\bar{E}}{2}\right)\right] \\
&\quad
- \frac{8\pi \nu \bar{E}^3 }{15 L^5} \bigg\lbrace\!
(4183 \nu -6239) \chi _S-6239 \delta \chi _A \nonumber\\
&\qquad
- 2\left[713 \delta \chi _A+(713-457 \nu ) \chi _S\right]\ln \left(\frac{\bar{E}}{2}\right)
\bigg\rbrace.\nonumber
\end{align}
In terms of $b$ and $v$, using Eqs.~\eqref{Lcov} and~\eqref{Egamma},
\begin{align}
\label{chinonloc}
\chi^\text{nonloc} &= -\frac{\pi \nu}{10 b^4} \left[148 \ln \left(\frac{v}{2}\right)+315\right] \nonumber\\
&\quad
- \frac{\pi \nu v}{15 b^5} \bigg\lbrace
4 \ln \left(\frac{v}{2}\right) \left[(679 \nu -824) \chi _S-824 \delta \chi _A\right] \nonumber\\
&\qquad
+ (6073 \nu -7184) \chi _S-7184 \delta \chi _A
\bigg\rbrace.
\end{align}
\section{Gauge-invariant quantities for bound orbits}
\label{sec:radAction}
In this section, we obtain two gauge-invariant quantities that characterize bound orbits: the radial action as a function of the energy and angular momentum, and the binding energy for circular orbits as a function of the orbital frequency.
\subsection{Radial action}
\label{sec:Ir}
The radial action function contains the same gauge-invariant information as the Hamiltonian, and from it several other functions can be derived that describe bound orbits, such as the periastron advance, which can be directly related to the scattering angle via analytic continuation~\cite{Kalin:2019rwq,Kalin:2019inp}. This means that the entire calculation in Sec.~\ref{sec:local} could be performed using the radial action instead of the Hamiltonian, as was done in Ref.~\cite{Antonelli:2020ybz}.
The radial action is defined by the integral
\begin{equation}
\label{IrInteg}
I_r = \frac{1}{2\pi} \oint p_r dr,
\end{equation}
and we split it into a local contribution and a nonlocal one, such that
\begin{equation}
I_r = I_r^\text{loc} + I_r^\text{nonloc}.
\end{equation}
We calculate the local part from the local EOB Hamiltonian, i.e., Eq.~\eqref{Heob} with the nonlocal parts of the potentials and gyro-gravitomagnetic factors set to zero. We invert the local Hamiltonian iteratively to obtain $p_r(\varepsilon,L,r)$ in a PN expansion, where we recall that
\begin{align}
H_\text{EOB} &= \frac{1}{\nu} \sqrt{1 + 2\nu (\gamma -1)}, \nonumber\\
\varepsilon &\equiv \gamma^2 - 1,
\end{align}
with $\varepsilon<0,\, \gamma < 1$ for bound orbits.
Then, we integrate
\begin{equation}
I_r = \frac{1}{\pi} \int_{r_-}^{r_+} p_r(\varepsilon,L,r) \, dr,
\end{equation}
where $r_\pm$ are the zeros of the Newtonian-order $p_r^{(0)} = \sqrt{\varepsilon + 2/r - L^2/r^2}$, which are given by
\begin{equation}
r_\pm = \frac{1 \pm \sqrt{1 + L^2\varepsilon}}{-\varepsilon}.
\end{equation}
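At Newtonian order the integral has the familiar closed form $I_r = -L + 1/\sqrt{-\varepsilon}$, which the following numerical check (ours, in the units of this section) confirms:
\begin{verbatim}
# Newtonian radial action:
# (1/pi) int_{r-}^{r+} p_r^{(0)} dr = -L + 1/sqrt(-eps)
import numpy as np
from scipy.integrate import quad

eps, L = -0.1, 2.0                    # bound orbit: eps < 0, 1 + L^2 eps > 0
disc = np.sqrt(1 + L**2*eps)
rm, rp = (1 - disc)/(-eps), (1 + disc)/(-eps)   # turning points r_-, r_+

pr0 = lambda r: np.sqrt(np.clip(eps + 2/r - L**2/r**2, 0.0, None))
Ir = quad(pr0, rm, rp, limit=200)[0]/np.pi

print(Ir, -L + 1/np.sqrt(-eps))       # both ~ 1.16228
\end{verbatim}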
It is convenient to express the radial action in terms of the \emph{covariant} angular momentum $L_\text{cov}=L-\Delta L$, with $\Delta L$ given by Eq.~\eqref{Lcov}, since it can then be directly related to the coefficients of the scattering angle, as discussed in Ref.~\cite{Antonelli:2020ybz}, and leads to slightly simpler coefficients for the SO part.
We obtain for the local part
\begin{align}
I_r^\text{loc} &= -L + I_0
+ \frac{I_1}{\Gamma L_\text{cov}} + \frac{I_2^s}{(\Gamma L_\text{cov})^2} \\
&\quad + \frac{I_3}{(\Gamma L_\text{cov})^3} + \frac{I_4^s}{(\Gamma L_\text{cov})^4}
+ \frac{I_5}{(\Gamma L_\text{cov})^5} + \frac{I_6^s}{(\Gamma L_\text{cov})^6} \nonumber\\
&\quad + \frac{I_7}{(\Gamma L_\text{cov})^7} + \frac{I_8^s}{(\Gamma L_\text{cov})^8}
+ \frac{I_9}{(\Gamma L_\text{cov})^9} + \frac{I_{10}^s}{(\Gamma L_\text{cov})^{10}},\nonumber
\end{align}
where each term starts at a given PN order, with each power of $1/L$ corresponding to half a PN order. Also, as noted in Ref.~\cite{Bini:2020wpo}, when the radial action is written in this form, in terms of $\Gamma$, the coefficients $I_{2n+1}^{(s)}$ become simple polynomials in $\nu$ of degree $\lfloor n \rfloor$.
The coefficients $I_n$ for the nonspinning local radial action up to 5PN order are given by Eq.~(13.20) of Ref.~\cite{Bini:2020wpo}.
The SO coefficients $I_n^s$ were derived in Ref.~\cite{Antonelli:2020ybz} to 4.5PN order, but we list them here for completeness.
The coefficients $I_0,I_1,I_2^s$ are exact, and are given by
\begin{align}
I_0 &= \frac{1 + 2\varepsilon}{\sqrt{-\varepsilon}}, \nonumber\\
I_1 &= \frac{3}{4} (4 + 5\varepsilon), \nonumber\\
I_2^s &=- \frac{1}{4} \gamma (5 \varepsilon +2) \left(4 a_b+3 a_t\right),
\end{align}
where $a_b\equiv S/M, a_t \equiv S^*/M$. The other SO coefficients, up to 5.5PN, read
\begin{widetext}
\begin{align}
I_4^s &= -\frac{21}{64} \gamma \left(33 \varepsilon ^2+36 \varepsilon +8\right) \left(8 a_b+5 a_t\right)
+ \gamma \nu \bigg\lbrace
\frac{21 a_b}{8}+\frac{9 a_t}{4} + \varepsilon \left(\frac{495 a_b}{16}+\frac{219 a_t}{8}\right)
+\varepsilon ^2 \bigg[\left(\frac{17423}{192}-\frac{241 \pi ^2}{512}\right) a_b \nonumber\\
&\quad\qquad
+\left(\frac{2759}{32}-\frac{123 \pi ^2}{128}\right) a_t\bigg]
-\varepsilon ^3 \left[\left(\frac{156133}{3200}-\frac{22301 \pi ^2}{4096}\right) a_b+\left(\frac{8381 \pi ^2}{16384}-\frac{6527}{960}\right) a_t\right] + \Order(\varepsilon^4)
\bigg\rbrace, \nonumber\\
I_6^s &= \left(-\frac{25 \nu ^2}{8}+\frac{1755 \nu }{16}-\frac{495}{2}\right) a_b+\left(-\frac{45 \nu ^2}{16}+\frac{165 \nu }{2}-\frac{1155}{8}\right) a_t \nonumber\\
&\quad
-\varepsilon \bigg\lbrace\left[\frac{645 \nu ^2}{8}+\left(\frac{3665 \pi ^2}{256}-\frac{39715}{24}\right) \nu +\frac{3465}{2}\right] a_b
+\left[\frac{1185 \nu ^2}{16}+\left(\frac{1845 \pi ^2}{128}-\frac{10305}{8}\right) \nu +\frac{8085}{8}\right] a_t\bigg\rbrace \nonumber\\
&\quad
+\varepsilon ^2 \bigg\lbrace a_b \left(\left(\frac{10640477}{1920}-\frac{176785 \pi ^2}{2048}\right) \nu +\nu ^2 \left(\frac{45 \ms X_{59}^{\nu^2}}{128}+\frac{2755 \pi ^2}{512}-\frac{176815}{384}\right)-\frac{121275}{32}\right) \nonumber\\
&\quad\qquad
+a_t \left(\left(\frac{433715}{96}-\frac{1748755 \pi ^2}{16384}\right) \nu +\nu ^2 \left(\frac{45 \ms X_{59}^{\nu^2}}{128}+\frac{5535 \pi ^2}{1024}-\frac{26175}{64}\right)-\frac{282975}{128}\right)\bigg\rbrace + \Order(\varepsilon^3), \nonumber\\
I_8^s &= \left[\frac{455 \nu ^3}{128}-\frac{10185 \nu ^2}{32}+\left(\frac{3755465}{1152}-\frac{42875 \pi ^2}{1536}\right) \nu -\frac{25025}{8}\right] a_b \nonumber\\
&\quad
+\left[\frac{105 \nu ^3}{32}-\frac{16485 \nu ^2}{64}+\left(\frac{437605}{192}-\frac{1435 \pi ^2}{64}\right) \nu -\frac{225225}{128}\right] a_t \nonumber\\
&\quad
+\varepsilon \bigg\lbrace
a_b \left[\frac{4935 \nu ^3}{32}+\left(\frac{8263591}{180}-\frac{9948785 \pi ^2}{12288}\right) \nu +\nu ^2 \left(\frac{105 \ms X_{59}^{\nu^2}}{64}-\frac{1583995}{192}+\frac{27895 \pi ^2}{256}\right)-\frac{225225}{8}\right] \nonumber\\
&\quad\qquad
+a_t \left[\frac{1155 \nu ^3}{8}+\left(\frac{4594121}{144}-\frac{29957165 \pi ^2}{49152}\right) \nu +\nu ^2 \left(\frac{105 \ms X_{59}^{\nu^2}}{64}-\frac{209195}{32}+\frac{47355 \pi ^2}{512}\right)-\frac{2027025}{128}\right]\bigg\rbrace
+ \Order(\varepsilon^2), \nonumber\\
I_{10}^s &= a_b \left[-\frac{63 \nu ^4}{16}+\frac{90405 \nu ^3}{128}+\left(\frac{88995311}{1280}-\frac{545853 \pi ^2}{512}\right) \nu +\nu ^2 \left(\frac{189 \ms X_{59}^{\nu^2}}{128}+\frac{109515 \pi ^2}{512}-\frac{1119461}{64}\right)-\frac{1322685}{32}\right] \nonumber\\
&\quad
+a_t \left[-\frac{945 \nu ^4}{256}+\frac{38115 \nu ^3}{64}+\left(\frac{14456349}{320}-\frac{11632089 \pi ^2}{16384}\right) \nu +\nu ^2 \left(\frac{189 \ms X_{59}^{\nu^2}}{128}+\frac{167895 \pi ^2}{1024}-\frac{3292149}{256}\right)-\frac{2909907}{128}\right] \nonumber\\
&\quad + \Order(\varepsilon).
\end{align}
\end{widetext}
The nonlocal part can be calculated similarly by starting from the total Hamiltonian, expanding Eq.~\eqref{IrInteg} in eccentricity, then subtracting the local part.
Alternatively, it can be calculated directly from the nonlocal Hamiltonian via~\cite{Bini:2020hmy}
\begin{equation}
I_r^\text{nonloc} = - \frac{H_\text{nonloc}}{\Omega_r} ,
\end{equation}
where $\Omega_r=2\pi/T_r$ is the radial frequency given by Eq.~\eqref{nExp}.
The nonlocal Hamiltonian $H_\text{nonloc}$ in Eq.~\eqref{Hnonloc} is expressed in terms of $(e_t,a_r)$, but we can use Eqs.~\eqref{eaEL} and \eqref{eret} to obtain $I_r^\text{nonloc}(E,L)$, i.e., as a function of energy and angular momentum.
Then, we replace $E$ with $(e_t,L)$ using Eq.~\eqref{etEL}, expand in eccentricity to $\Order(e_t^8)$, and revert to $(E,L)$. This way, we obtain an expression for $I_r^\text{nonloc}$ in powers of $1/L$ that is valid to eighth order in eccentricity, and in which each $\varepsilon^n$ contributes up to order $e^{2n}$.
The result for the 4PN and 5.5PN SO contributions reads
\begin{widetext}
\begin{align}
\frac{I_r^\text{nonloc}}{\nu} &= \frac{1}{L_\text{cov}^7} \left(\frac{170 \ln L_\text{cov}}{3}-\frac{170 \gamma_E }{3}+\frac{18299}{96}-\frac{4777903}{90} \ln 2+\frac{13671875 \ln 5}{512}-\frac{15081309 \ln 3}{2560}\right) \nonumber\\
&\quad
+\frac{\varepsilon}{L_{\text{cov}}^5} \left(\frac{244 \ln L_\text{cov}}{5}-\frac{244 \gamma_E }{5}+\frac{157823}{360}-\frac{10040414}{45} \ln 2+\frac{126953125 \ln 5}{1152}-\frac{13542147 \ln 3}{640}\right) \nonumber\\
&\quad
+ \frac{\varepsilon ^2}{L_{\text{cov}}^3} \left(\frac{74 \ln L_\text{cov}}{15}-\frac{74 \gamma_E }{15}+\frac{89881}{240}-\frac{5292281}{15} \ln 2+\frac{130859375 \ln 5}{768}-\frac{35029179 \ln 3}{1280}\right) \nonumber\\
&\quad
+\frac{\varepsilon ^3}{L_{\text{cov}}} \left(\frac{6187}{40}-\frac{11186786}{45} \ln 2+\frac{44921875 \ln 5}{384}-\frac{1878147 \ln 3}{128}\right)\nonumber\\
&\quad
+\varepsilon ^4 L_{\text{cov}} \left(\frac{40253}{1440}-\frac{1185023}{18} \ln 2+\frac{138671875 \ln 5}{4608}-\frac{6591861 \ln 3}{2560}\right) \nonumber\\
&\quad
+ a_b \bigg[
\frac{1}{L_{\text{cov}}^{10}} \left(-\frac{12579 \ln L_\text{cov}}{5}-\frac{24068101}{2880}+\frac{12579 \gamma_E }{5}+\frac{398742736 \ln 2}{135}+\frac{281496303 \ln 3}{1024}-\frac{40131484375 \ln 5}{27648}\right) \nonumber\\
&\qquad
+\frac{\varepsilon }{L_{\text{cov}}^8} \left(-2499 \ln L_\text{cov}-\frac{13345921}{720}+2499 \gamma_E +\frac{1548980449 \ln 2}{135}+\frac{1132984827 \ln 3}{1280}-\frac{38230234375 \ln 5}{6912}\right) \nonumber\\
&\qquad
+\frac{\varepsilon ^2}{L_{\text{cov}}^6} \left(-537 \ln L_\text{cov}-\frac{7268749}{480}+537 \gamma_E +\frac{149780983 \ln 2}{9}+\frac{2572548417 \ln 3}{2560}-\frac{36141484375 \ln 5}{4608}\right)\nonumber\\
&\qquad
+\frac{\varepsilon ^3}{L_{\text{cov}}^4} \left(-13 \ln L_\text{cov}-\frac{4143337}{720}+13 \gamma_E +\frac{1439288647 \ln 2}{135}+\frac{116812287 \ln 3}{256}-\frac{33865234375 \ln 5}{6912}\right)\nonumber\\
&\qquad
+\frac{\varepsilon ^4}{L_{\text{cov}}^2} \left(-\frac{2608213}{2880}+\frac{342877711 \ln 2}{135}+\frac{318592683 \ln 3}{5120}-\frac{31401484375 \ln 5}{27648}\right)
\bigg] \nonumber\\
&\quad
+a_t \bigg[
\frac{1}{L_{\text{cov}}^{10}} \left(-\frac{8673 \ln L_\text{cov}}{5}-\frac{16708517}{2880}+\frac{8673 \gamma_E }{5}+\frac{582216271 \ln 2}{270}+\frac{980901819 \ln 3}{5120}-\frac{29135234375 \ln 5}{27648}\right) \nonumber\\
&\qquad
+\frac{\varepsilon }{L_{\text{cov}}^8} \left(-\frac{5593 \ln L_\text{cov}}{3}-\frac{1921829}{144}+\frac{5593 \gamma_E }{3}+\frac{231474971 \ln 2}{27}+\frac{806618331 \ln 3}{1280}-\frac{28420234375 \ln 5}{6912}\right) \nonumber\\
&\qquad
+\frac{\varepsilon ^2}{L_{\text{cov}}^6} \left(-455 \ln L_\text{cov}-\frac{5444717}{480}+455 \gamma_E +\frac{191532524 \ln 2}{15}+\frac{1877210721 \ln 3}{2560}-\frac{9203828125 \ln 5}{1536}\right) \nonumber\\
&\qquad
+\frac{\varepsilon ^3}{L_{\text{cov}}^4} \left(-\frac{69 \ln L_\text{cov}}{5}-\frac{3226241}{720}+\frac{69 \gamma_E }{5}+\frac{227762869 \ln 2}{27}+\frac{87726159 \ln 3}{256}-\frac{26708984375 \ln 5}{6912}\right) \nonumber\\
&\qquad
+\frac{\varepsilon ^4}{L_{\text{cov}}^2} \left(-\frac{2112181}{2880}+\frac{562665401 \ln 2}{270}+\frac{49433247 \ln 3}{1024}-\frac{25712734375 \ln 5}{27648}\right)
\bigg].
\end{align}
\end{widetext}
\subsection{Circular-orbit binding energy}
Here, we calculate the gauge-invariant binding energy $\bar{E}$ analytically in a PN expansion, as opposed to the numerical calculation in Sec.~\ref{sec:Eb} for the EOB binding energy.
For circular orbits and aligned spins, $\bar{E}$ can be calculated from the Hamiltonian~\eqref{Heob} by setting $p_r=0$ and perturbatively solving $\dot{p}_r=0=-\partial H/\partial r$ for the angular momentum $L(r)$. Then, solving $\Omega = \partial H / \partial L$ for $r(\Omega)$ and substituting into the Hamiltonian yields $\bar{E}$ as a function of the orbital frequency.
It is convenient to express $\bar{E}$ in terms of the dimensionless frequency parameter $v_\Omega \equiv (M\Omega_\phi)^{1/3}$.
The nonspinning 4PN binding energy is given by Eq.~(5.5) of Ref.~\cite{Damour:2014jta}, and the 4.5PN SO part is given by Eq.~(5.11) of Ref.~\cite{Antonelli:2020ybz}.
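The procedure is easily illustrated in the test-mass limit (a \texttt{sympy} sketch of our own, reproducing the well-known Schwarzschild circular-orbit series rather than the new $\nu$-dependent terms below):
\begin{verbatim}
# Analytic Ebar(v_Omega): solve dH/dr = 0 for L(r), compute
# Omega = dH/dL, invert for r(v_Omega), substitute back, PN-expand.
import sympy as sp

r, L, v = sp.symbols('r L v', positive=True)
H = sp.sqrt((1 - 2/r)*(1 + L**2/r**2))   # test-mass H_eff, G = M = mu = 1

sols = sp.solve(sp.diff(H, r), L)
Lc = [s for s in sols if s.subs(r, 10) > 0][0]   # L(r) = r/sqrt(r - 3)
Omega = sp.simplify(sp.diff(H, L).subs(L, Lc))   # = r**(-3/2)
rc = sp.solve(sp.Eq(Omega, v**3), r)[0]          # r = v**(-2)

Eb = sp.series(sp.simplify(H.subs(L, Lc)).subs(r, rc) - 1, v, 0, 9)
print(Eb)   # -v**2/2 + 3*v**4/8 + 27*v**6/16 + ... : test-mass series
\end{verbatim}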
We obtain for the 5.5PN SO part
\begin{widetext}
\begin{align}
\label{Ebind}
\bar{E}^{\text{5.5PN,SO}} &= \nu v_\Omega^{13} \bigg\{
S \bigg[-\frac{4725}{32} +\nu \left(\frac{1411663}{640}-\frac{10325 \pi ^2}{64}+\frac{352 \gamma_E }{3}+\frac{2080 \ln 2}{9}\right) + \frac{352}{3}\nu\ln v_\Omega
+\frac{310795 \nu ^3}{5184}+\frac{35 \nu ^4}{1458} \nonumber\\
&\qquad
+ \nu^2 \left(\frac{5 \ms X_{59}^{\nu^2}}{8}+\frac{2425 \pi ^2}{864}-\frac{1975415}{5184}\right)\bigg]
+ S^* \bigg[
-\frac{2835}{128} +\nu \left(\frac{126715}{144}-\frac{102355 \pi ^2}{1536}+\frac{160 \gamma_E }{3}+\frac{992 \ln 2}{9}\right) \nonumber\\
&\qquad
+ \frac{160 }{3}\nu \ln v_\Omega+\nu ^2 \left(\frac{5 \ms X_{59}^{\nu^2}}{8}-\frac{205 \pi ^2}{576}-\frac{275245}{3456}\right)
+\frac{46765 \nu ^3}{864}+\frac{875 \nu ^4}{31104}
\bigg]
\bigg\}.
\end{align}
\end{widetext}
\section{Conclusions}
\label{sec:conc}
Improving the spin description in waveform models is crucial for GW observations with the continually increasing sensitivities of the Advanced LIGO, Virgo, and KAGRA detectors~\cite{KAGRA:2013rdx}, and for future GW detectors, such as the Laser Interferometer Space Antenna (LISA)~\cite{LISA:2017pwj}, the Einstein Telescope (ET)~\cite{Punturo:2010zz}, the DECi-hertz Interferometer Gravitational wave Observatory (DECIGO)~\cite{Kawamura:2006up}, and Cosmic Explorer (CE)~\cite{Reitze:2019iox}.
More accurate waveform models can lead to better estimates for the spins of binary systems, and for the orthogonal component of spin in precessing systems, which helps in identifying their formation channels~\cite{LIGOScientific:2018jsj,LIGOScientific:2020kqk}.
For this purpose, we extended in this paper the SO coupling to the 5.5PN level.
We employed an approach~\cite{Bini:2019nra,Antonelli:2020ybz} that combines several analytical approximation methods to obtain arbitrary-mass-ratio PN results from first-order self-force results.
We computed the nonlocal-in-time contribution to the dynamics for bound orbits in a small-eccentricity expansion, Eq.~\eqref{gyroNonloc}, and for unbound motion in a large-eccentricity expansion, Eq.~\eqref{chinonloc}.
To our knowledge, this is the first time that nonlocal contributions to the conservative dynamics have been computed in the spin sector.
For the local-in-time contribution, we exploited the simple mass-ratio dependence of the PM-expanded scattering angle and related the Hamiltonian coefficients to those of the scattering angle.
This allowed us to determine all the unknowns at that order from first-order self-force results, except for one unknown at second order in the mass ratio; see Eqs.~\eqref{chiLoc}--\eqref{gSstrLoc}.
We also provided the radial action, in Sec.~\ref{sec:Ir}, and the circular-orbit binding energy, in Eq.~\eqref{Ebind}, as two important gauge-invariant quantities for bound orbits.
We stress again that, although all calculations in this paper were performed for aligned spins, the SO coupling is applicable for generic precessing spins.
The local part of the 5.5PN SO coupling still has an unknown coefficient, but as we showed in Fig.~\ref{fig:Eb}, its effect on the dynamics is smaller than the difference between the 4.5 and 5.5PN orders.
Determining that unknown could be done through targeted PN calculations, as was illustrated in Ref.~\cite{Bini:2021gat}, in which the authors related the two missing coefficients at 5PN order to coefficients that can be calculated from an EFT approach.
Alternatively, one could use analytical \emph{second}-order self-force results, which might become available in the near future, given the recent work on numerically computing the binding energy and energy flux~\cite{Pound:2019lzj,Warburton:2021kwk}.
Until then, one could still use the partial 5.5PN SO results in EOB waveform models complemented by NR calibration.
Such an implementation would be straightforward, since we obtained the gyro-gravitomagnetic factors that enter directly into the \texttt{SEOBNR}~\cite{Bohe:2016gbl,Cotesta:2018fcv,Ossokine:2020kjp} and \texttt{TEOBResumS}~\cite{Nagar:2018zoe,Nagar:2019wds,Nagar:2021gss} waveform models, and less directly in the \texttt{IMRPhenom} models~\cite{Hannam:2013oca,Husa:2015iqa,Khan:2019kot,Pratten:2020fqn}, which are used in GW analyses.
\section*{Acknowledgments}
I am grateful to Alessandra Buonanno, Jan Steinhoff, and Justin Vines for fruitful discussions and for their invaluable feedback on earlier drafts of this paper. I also thank Sergei Ossokine for providing NR data for the binding energy, and thank the anonymous referee for useful suggestions.
\section{Introduction}
\subsection{Introduction and main results} Suppose $(M,g)$ is a compact Riemannian manifold of dimension $n$, and let $\triangle_g$ be the Laplace--Beltrami operator. A classical result in analysis, dating back to Minakshisundaram--Pleijel \cite{Minakshisundaram1949} and Seeley \cite{seeley}, states that the trace density of $(-\triangle_g)^{-\cv}$ is well-defined for $\Re \cv > \frac{n}{2}$ and extends to a density-valued meromorphic function of the complex variable $\cv$. The meromorphic continuation, henceforth denoted by $\zeta_g(\cv)$, gives after integrating on $M$ the celebrated \emph{spectral zeta function} of $-\triangle_g$ (or \emph{Minakshisundaram--Pleijel zeta function}).
A fundamental fact shown independently by Wodzicki \cite{wodzicki} and Guillemin \cite{Guillemin1985} is that each residue of $\zeta_g(\cv)$ equals an integral of a distinguished term in the polyhomogeneous expansion of the symbol of $(-\triangle_g)^{-\cv}$. The so-defined \emph{Guillemin--Wodzicki residue density} is remarkable because it has an {intrinsic} meaning and involves local geometric quantities, such as the scalar curvature $R_g$ for even $n\geqslant 4$. It can also be intrinsically defined for more general classes of elliptic pseudo-differential operators (see \sec{ss:wodzicki}) and has a deep relationship with the Dixmier trace found by Connes \cite{Connes1988} (cf.~Connes--Moscovici \cite{Connes1995}).
If now $(M,g)$ is a Lorentzian manifold (not necessarily compact), the corresponding Laplace--Beltrami operator $\square_g$, better known as the wave operator or d'Alembertian, is far from being elliptic. However, it was recently shown that if $(M,g)$ is well-behaved at infinity or has special symmetries, $\square_g$ is essentially self-adjoint in $L^2(M,g)$ \cite{derezinski,vasyessential,nakamurataira,Derezinski2019}, and consequently complex powers $(\square_g-i \varepsilon)^{-\cv}$ can be defined by functional calculus for any $\varepsilon>0$. Furthermore, for large $\Re \cv$, $(\square_g-i \varepsilon)^{-\cv}$ has a well-defined trace-density, which extends to a meromorphic function \cite{Dang2020}, denoted from now on by $\zeta_{g,\varepsilon}(\cv)$. The residues of the so-obtained \emph{Lorentzian spectral zeta function density} $\zeta_{g,\varepsilon}(\cv)$ contain interesting geometric information (for instance the Lorentzian scalar curvature $R_g$ occurs in the residue at $\cv=\frac{n}{2}-1$ for even $n\geqslant4$ \cite{Dang2020}), so it is natural to ask if these analytic residues coincide with a suitable generalization of the Guillemin--Wodzicki residue.
The problem is that the notion of Guillemin--Wodzicki residue relies on the symbolic calculus of pseudo\-differential operators, and even though there is a natural generalization to Fourier integral operators due to Guillemin \cite{Guillemin1993} (see also \cite{Hartung2015}), Lorentzian complex powers fall outside of that class in view of their on-diagonal behavior. A priori one needs therefore a more singular calculus, based for instance on paired Lagrangian distributions \cite{Guillemin1981,Melrose1979,Antoniano1985,Greenleaf1990,joshi,Joshi1998}.
Instead of basing the analysis on a detailed symbolic calculus, the idea pursued in the present paper (and implicit in the work of Connes--Moscovici \cite{Connes1995}) is that regardless of how the calculus is obtained, terms of different order should be distinguished by the different scaling behavior of the Schwartz kernel as one approaches the diagonal $\Delta\subset M\times M$. We define the scaling as being generated by an \emph{Euler vector field} $\euler$ (see \sec{ss:euler}), the prime example being $X=\sum_{i=1}^n h^i \p_{h^i}$ if $(x,h)$ are local coordinates in which the diagonal is $\Delta=\{ h^{i}=0, \ i=1,\dots,n \}$. Now if $u$ is a distribution defined near $\Delta\subset M\times M$ and it scales in a log-polyhomogeneous way, the Laplace transform
\beq\label{eq:lapl}
s\mapsto \int_0^\infty e^{-ts} { \left(e^{-t\euler}u\right)} \,dt
\eeq
is a meromorphic function with values in distributions, and the poles are called \emph{Pollicott--Ruelle resonances} \cite{Pollicott1986,Ruelle1986}. We define the \emph{dynamical residue} $\res_X u$ as the trace density of $X \Pi_0(u)$ where $\Pi_0(u)$ is the residue at $s=0$ of \eqref{eq:lapl}.
As a first consistency check, we show that the dynamical residue and the Guillemin--Wodzicki residue coincide for classical pseudo\-differential operators (i.e., with one-step polyhomogeneous symbol).
\begin{theorem}[{cf.~Theorem \ref{wodzickipdo}}]\label{tthm1} For any classical $A\in \Psi^m(M)$ with Schwartz kernel $K_{A}$, the dynamical residue {$\res_X \pazocal{K}_{A}$} is well-defined, independent of the choice of Euler vector field $X$, and {$(\res_X \pazocal{K}_{A}) \dvol_g$} equals the Guillemin--Wodzicki residue density of $A$.
\end{theorem}
Next, we consider the case of a Lorentzian manifold $(M,g)$ of even dimension $n$.
The well-definedness and meromorphic continuation of $\zeta_{g,\varepsilon}(\cv)$ is proved in \cite{Dang2020} in the setting of globally hyperbolic \emph{non-trapping Lorentzian scattering spaces} introduced by Vasy \cite{vasyessential}. This class is general enough to contain perturbations of Minkowski space, one can however expect that it is not the most general possible for which $\zeta_{g,\varepsilon}(\cv)$ exists. For this reason, instead of making assumptions on $(M,g)$ directly, we point out the analytic properties which guarantee that $\zeta_{g,\varepsilon}(\cv)$ is a well-defined meromorphic function. Namely, we assume that $\square_g$ has \emph{Feynman resolvent}, by which we mean that:\smallskip
\ben
\item[] $\square_g$ acting on $C_{\rm c}^\infty(M)$ has a self-adjoint extension, and the resolvent $(\square_g-z)^{-1}$ of this self-adjoint extension has \emph{Feynman wavefront set} uniformly in $\Im z >0$.
\een
\smallskip
\noindent The Feynman wavefront set condition roughly says that microlocally, the Schwartz kernel of $(\square_g-z)^{-1}$ has the same singularities as the \emph{Feynman propagator} on Minkowski space, i.e.~the Fourier multiplier by $(-\xi_0^2 + \xi_1^2 + \cdots+ \xi_{n-1}^2 - i0)^{-1}$ (see \cite{GHV,Vasy2017b,vasywrochna,GWfeynman,Gerard2019b,Taira2020a} for results in this direction with fixed $z$).
The precise meaning of uniformity is given in Definition \ref{deff} and involves decay in $z$ along the integration contour used to define complex powers. We remark that outside of the class of Lorentzian scattering spaces, $\square_g$ is known to have Feynman resolvent for instance on ultra-static spacetimes with compact Cauchy surface, see Derezi\'nski--Siemssen \cite{derezinski} for the self-adjointness and \cite{Dang2020} for the microlocal estimates.
Our main result can be summarized as follows.
\begin{theorem}[{cf.~Theorem \ref{thm:dynres1}}]\label{thm:dynres} Let $(M,g)$ be a Lorentzian manifold of even dimension $n$, and suppose $\square_g$ has Feynman resolvent. For all $\cv\in\cc$ and $\Im z > 0$, the dynamical residue $\resdyn(\square_g-z)^{-\cv}$ is well-defined and independent of the choice of Euler vector field $\euler$. Furthermore, for all $k=1,\dots,\frac{n}{2}$ and $\varepsilon>0$,
\beq\label{eq:main}
\resdyn \left(\square_g-i \varepsilon \right)^{-k} = {2}\res_{\cv =k}\zeta_{g,\varepsilon}(\cv),
\eeq
where $\zeta_{g,\varepsilon}(\cv)$ is the spectral zeta function density of $\square_g-i \varepsilon$.
\end{theorem}
By Theorem \ref{tthm1}, the dynamical residue is a generalization of the Guillemin--Wodzicki residue density. Thus, Theorem \ref{thm:dynres} { generalizes to the Lorentzian setting results known previously only in the elliptic case: the analytic poles of spectral zeta function densities coincide with a more explicit quantity which refers to the scaling properties of complex powers. } In physicists' terminology, this gives precise meaning to the statement that the residues of $\zeta_{g,\varepsilon}(\cv)$ can be interpreted as \emph{scaling anomalies}.
We also give a more direct expression for the l.h.s.~of \eqref{eq:main}, which makes the relation to local geometric quantities explicit, see \eqref{eq:explicit} in the main part of the text. In particular, we obtain in this way the following identity (which also follows from \eqref{eq:main} and \cite[Thm.~1.1]{Dang2020}) for $n\geqslant 4$:
\beq\label{eq:main2}
\lim_{\varepsilon\to 0^+}\resdyn \left(\square_g-i \varepsilon \right)^{-\frac{n}{2}+1} = \frac{R_g(x)}{3i\Gamma(\frac{n}{2}-1)\left(4\pi\right)^{\frac{n}{2}}}.
\eeq
This identity implies that the l.h.s.~can be interpreted as a spectral action for gravity.
\subsection{Summary} The notion of dynamical residue is introduced in \sec{section2}, preceded by preliminary results on Euler vector fields. A pedagogical model is given in \sec{ss:toy} and serves as a motivation for the definition.
The equivalence of the two notions of residue for pseudo-differential operators (Theorem \ref{tthm1}) is proved in \sec{ss:wodzickipdo}. An important role is played by the so-called Kuranishi trick which allows us to adapt the phase of quantized symbols to the coordinates in which a given Euler field $X$ has a particularly simple form.
The remaining two sections \secs{section4}{section5} are devoted to the proof of Theorem \ref{thm:dynres}.
The main ingredient is the \emph{Hadamard parametrix} $H_N(z)$ for $\square_g-z$, the construction of which we briefly recall in \sec{s:hadamardformal}. Strictly speaking, in the Lorentzian case there are several choices of parametrices: the one relevant here is the \emph{Feynman Hadamard parametrix}, which approximates $(\square_g-z)^{-1}$ thanks to the Feynman property combined with uniform estimates for $H_N(z)$ shown in \cite{Dang2020}. The log-homogeneous expansion of the Hadamard parametrix $H_N(z)$ is shown in \sec{ss:polhom} through an oscillatory integral representation with singular symbols. An important role is played again by the Kuranishi trick adapted from the elliptic setting. { However, there are extra difficulties due to the fact that we do not work with standard symbol classes anymore: the ``symbols'' are distribution-valued and special care is required when operating with expansions and controlling the remainders.} The dynamical residue is computed in \sec{ss:rcc} with the help of extra expansions that exploit the homogeneity of individual terms and account for the dependence on $z$.
Next, following \cite{Dang2020} we introduce in \sec{ss:hc} a generalization $H_N^{(\cv)}(z)$ of the Hadamard parametrix for complex powers $(\square_g-z)^{-\cv}$, and we adapt the analysis from \sec{section4}. Together with the fact (discussed in \sec{ss:lg}) that $H_N^{(\cv)}(z)$ approximates $(\square_g-z)^{-\cv}$, this allows us to conclude the theorem.
{ As an aside, in Appendix \ref{app} we briefly discuss what happens when $(\square_g-z)^{-\cv}$ is replaced by $Q(\square_g-z)^{-\cv}$ for an arbitrary differential operator $Q$. We show that in this greater generality, the trace density still exists for large $\Re \cv$ and analytically continues to at least $\cc\setminus \zz$. This can be interpreted as an analogue of the Kontsevich--Vishik canonical trace density \cite{Kontsevich1995} in our setting. }
\subsection{Bibliographical remarks}\label{ss:br}
Our approach to the Guillemin--Wodzicki residue \cite{wodzicki,Guillemin1985} is strongly influenced by works in the pseudodifferential setting by Connes--Moscovici \cite{Connes1995}, Kontsevich--Vishik \cite{Kontsevich1995}, Lesch \cite{Lesch}, Lesch--Pflaum \cite{Lesch2000}, Paycha \cite{paycha2,paycha} and Maeda--Manchon--Paycha \cite{Maeda2005}.
It also draws from the theory of Pollicott--Ruelle resonances \cite{Pollicott1986,Ruelle1986} in the analysis and spectral theory
of hyperbolic dynamics (see Baladi \cite{Baladi2018} for a review of the subject and further references), in particular from the work of Dyatlov--Zworski \cite{Dyatlov2016} on dynamical zeta functions.
The Feynman wavefront set condition plays an important role in various
developments connecting the global theory of hyperbolic operators with
local geometry, in particular in works on index theory by B\"ar--Strohmaier and other authors \cite{Bar2019,Baer2020,Shen2021}, and on trace formulae and Weyl laws by Strohmaier--Zelditch \cite{Strohmaier2020b,Strohmaier2020,Strohmaier2020a} (including a spectral-theoretical formula for the scalar curvature).
The Hadamard parametrix for inverses of the Laplace--Beltrami operator is a classical tool in analysis, see e.g.~\cite{HormanderIII,soggeHangzhou,StevenZelditch2012,Zelditch2017} for the Riemannian or Lorentzian time-independent case. For fixed $z$, the Feynman Hadamard parametrix is constructed by Zelditch \cite{StevenZelditch2012} in the ultra-static case and in the general case by Lewandowski \cite{Lewandowski2020}, cf.~Bär--Strohmaier \cite{Baer2020} for a unified treatment of even and odd dimensions. The present work relies on the construction and the uniform in $z$ estimates from \cite{Dang2020}, see also Sogge \cite{Sogge1988}, Dos Santos Ferreira--Kenig--Salo \cite{Ferreira2014} and Bourgain--Shao--Sogge--Yao \cite{Bourgain2015} for uniform estimates in the Riemmanian case.
In Quantum Field Theory on Lorentzian manifolds, the Hadamard parametrix plays a fundamental role in renormalization, see e.g.~\cite{DeWitt1975,Fulling1989,Kay1991,Radzikowski1996,Moretti1999,Brunetti2000,Hollands2001}. Other rigorous renormalization schemes (originated in works by Dowker--Critchley \cite{Dowker1976} and Hawking \cite{Hawking1977}) use a formal, \emph{local} spectral zeta function or heat kernel, and their relationships with the Hadamard parametrix were studied by Wald \cite{Wald1979}, Moretti \cite{Moretti1999,morettilong} and Hack--Moretti \cite{Hack2012a}. We remark in this context that in Theorem \ref{thm:dynres} we can replace globally defined complex powers $(\square_g-z)^{-\cv}$ with the local parametrix $H_N^{(\cv)}(z)$ and correspondingly we can replace the spectral zeta density $\zeta_{g,\varepsilon}(\cv)$ by a local analogue $\zeta_{g,\varepsilon}^{\rm loc}(\cv)$ defined using $H_N^{(\cv)}(z)$. This weaker, local formulation does not use the Feynman condition and thus holds true generally.
\subsection*{Acknowledgments} { We thank the anonymous reviewers for their useful suggestions and feedback. } Support from the grant ANR-16-CE40-0012-01 is gratefully acknowledged. The authors are also grateful to the MSRI in Berkeley and the Mittag--Leffler Institute in Djursholm for their kind hospitality during thematic programs and workshops in 2019--20.
\section{Log-polyhomogeneous scaling and dynamical residue}\label{section2}
\subsection{Notation} { Throughout the paper, given a vector field $V\in C^\infty(T\pazocal{M})$ and a smooth function $f\in C^\infty(\pazocal{M})$ on a smooth manifold $\pazocal{M}$, we denote by $e^{tV}:\pazocal{M}\to \pazocal{M}$, $t\in \mathbb{R}$, the flow generated by $V$, and by $e^{-tV}f:=f(e^{-tV}.)\in C^\infty(\pazocal{M})$ the pull-back of $f$ by the flow $e^{-tV}$. Furthermore, when writing $Vf\in C^\infty(\pazocal{M})$ we will mean that the vector field $V$ acts on $f$ by {Lie derivative}, i.e.~$Vf=\left(\frac{d}{dt}\left(e^{tV}f\right)\right)|_{t=0}$. }
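For instance (a one-dimensional illustration of these conventions, stated here only for orientation), for $\pazocal{M}=\mathbb{R}$ and $V=x\partial_x$ one has
$$
e^{tV}(x)=e^{t}x,\qquad \left(e^{-tV}f\right)(x)=f(e^{-t}x),\qquad (Vf)(x)=xf'(x).
$$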
\subsection{Euler vector fields and scaling dynamics}\label{ss:euler}
Let $M$ be a smooth manifold, and let $\Delta=\{(x,x) \st x\in M\}$ be the diagonal in $M\times M$. Our first objective is to introduce a class of Schwartz kernels, defined in some neighborhood of $\Delta$, which have prescribed analytical behavior under scaling with respect to $\Delta$.
More precisely, an adequate notion of scaling is provided by the dynamics generated by the following class of vector fields.
\begin{defi}[Euler vector fields]
Let $\pazocal{I}\subset C^\infty(M\times M)$ be the ideal of smooth functions
vanishing at the diagonal $\Delta=\{(x,x) \st x\in M\}\subset M\times M$ and $\pazocal{I}^k$ its $k$-th power.
{ A vector field} $\euler$
defined near the diagonal
$\Delta$
is called \emph{Euler} if, near $\Delta$, $\euler f-f\in\pazocal{I}^2$ for all $f\in \pazocal{I}$.
For the sake of simplicity, we will only consider Euler vector fields $\euler$ (scaling with respect to the diagonal) which in addition preserve the fibration
$\pi: M\times M \ni (x,y)\mapsto x\in M$
projecting on the first factor. We refer to any such $\euler$ simply as an \emph{Euler vector field}.
\end{defi}
{ In our definition, $\euler$ only needs to be defined on some neighborhood of $\Delta$ which is stable by the dynamics.
Euler vector fields appear to have been first defined { by Mark Joshi, who called them \emph{radial vector fields}. They were used in his works \cite{10.2307/2162103,Joshi1998} for defining polyhomogeneous Lagrangian and paired Lagrangian distributions by scaling}. Independently of Joshi's work, the notion later appeared in the first author's thesis \cite{dangthesis}, see also \cite[Def.~1.1]{DangAHP}. It was also found independently by Bursztyn--Lima--Meinrenken \cite{Bursztyn2019}, see also \cite{Bischo2020} and the survey \cite{Meinrenken2021}.
}
A consequence of the definition of Euler vector fields $\euler$ is that
if $f\in \pazocal{I}^k$ then $\euler f-kf\in \pazocal{I}^{k+1}$, which is
easily proved by induction using Hadamard's lemma.
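For instance, for $k=2$ and a product $f=f_1f_2$ with $f_1,f_2\in\pazocal{I}$ (to which the general case reduces locally by Hadamard's lemma), the Leibniz rule gives the induction step:
$$
\euler (f_1f_2)-2f_1f_2=(\euler f_1-f_1)\,f_2+f_1\,(\euler f_2-f_2)\in \pazocal{I}^2\cdot\pazocal{I}=\pazocal{I}^{3}.
$$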
Another useful consequence of the definition of $\euler$ is that
we have the equation:
\begin{equation}\label{e:idneardiageuler}
{ \left(Xdf-df\right)|_{\Delta}=0}
\end{equation}
for all { smooth functions $f$ defined near} $\Delta$, { where $\euler df$ means the vector field $\euler$ acting on the $1$-form $df$ by Lie derivative, and $|_\Delta$ means the restriction on the diagonal.}
{ The equation (\ref{e:idneardiageuler})} can be checked by a direct computation in coordinates.
We view $df|_\Delta$ as a smooth section of $T^*M^2$, a $1$--form, restricted over $\Delta$.
{ Recall that for $t\in \rr$, $e^{t\euler}$ is the flow of $\euler$ at time $t$. }
\begin{ex}
On $\mathbb{R}^4$, the dynamics $ e^{t\euler}: \left(\mathbb{R}^4\right)^2\ni(x,y) \mapsto (x,e^{t}(y-x)+x)\in \left(\mathbb{R}^4\right)^2 $ preserves
the fibers of $\left(\mathbb{R}^4\right)^2 \ni (x,y)\mapsto x\in \mathbb{R}^4$.
\end{ex}
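Explicitly, the dynamics in this example is generated by $\euler=\sum_{i=1}^{4}(y^i-x^i)\partial_{y^i}$, which satisfies the definition without any $\pazocal{I}^2$ remainder:
$$
\euler\,(y^i-x^i)=y^i-x^i,\qquad \euler\, x^i=0,
$$
and it has no $\partial_{x^i}$ component, so it preserves the fibration onto the first factor; it is also the geodesic Euler vector field of the flat connection discussed next.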
Euler vector fields can be obtained from any torsion-free connection $\nabla$ and the geodesic exponential $\exp_x^\nabla:T_xM\to M$ defined using $\nabla$.
Namely, a \emph{geodesic Euler vector field} is obtained by setting
$$
{ \euler f(x,y)=\frac{d}{dt}f\big(x,\exp^\nabla_x(tv)\big)|_{t=1}},
$$
where $y=\exp_x^\nabla(v)$. Moreover, Euler vector fields form a \emph{particular class} of the Morse--Bott vector fields
where $\Delta$ is the critical manifold, the Morse index is $0$ and all Lyapunov exponents of $\euler$ equal $1$ or $0$.
Let us describe in simple terms the dynamics of Euler vector fields.
\begin{lemm}[{Lyapunov exponents and bundles}]
Let $\euler$ be an Euler vector field. There exists a unique subbundle
$N\Delta\subset T_\Delta \left(M\times M\right)$ such that
$de^{t\euler}=e^{t}\,\id: N\Delta\to N\Delta$~\footnote{In the terminology of dynamical systems, this is a simple instance of a Lyapunov
bundle.}.
\end{lemm}
\begin{proof}
The flow $e^{-t\euler}$ fixes $\Delta$
hence the differential $de^{-t\euler} :TM^2\to TM^2$ restricted to $\Delta$ defines a family of bundle isomorphisms
$de^{-t\euler}: T M^2|_\Delta \to T M^2|_\Delta$, $\forall t\in \mathbb{R}$.
Now using the group property of the flow $e^{-t\euler}e^{-s\euler}=e^{-(t+s)\euler}$,
we deduce that $de^{-tX}de^{-sX}=de^{-(t+s)\euler}:T M^2|_\Delta \to T M^2|_\Delta$.
We define the bundle map $L_\euler:T M^2|_\Delta \to T M^2|_\Delta$ as
$\frac{d}{dt}de^{t\euler}|_{t=0}$, which is the linearized action of $\euler$ localized at $\Delta$. By uniqueness of solutions to ODE and the group property of $de^{t\euler}:T M^2|_\Delta \to T M^2|_\Delta$, we find that $de^{t\euler}=e^{tL_\euler}:T M^2|_\Delta \to T M^2|_\Delta, \forall t\in \mathbb{R}$. Recall that by \eqref{e:idneardiageuler}, for all smooth germs $f$ near $\Delta$ we have
$\left(\euler df\right)|_\Delta=df|_\Delta$; we view $df|_\Delta$ as a smooth section of $T^*M^2$ over $\Delta$. Now we observe the following { identity} on $1$-forms restricted over $\Delta$:
{
\begin{eqnarray*}
\forall f, df=
\euler df= \Big(\frac{d}{dt}
\left(e^{t\euler}df\right)\Big)|_{t=0}=\Big(\frac{d}{dt}
d\left(e^{t\euler}f\right)\Big)|_{t=0}=\Big(\frac{d}{dt}
\left(df\circ de^{t\euler}\right)\Big)|_{t=0}=L_\euler^* df
\end{eqnarray*}
}
where $L_\euler^*:T^*M^2|_\Delta\to T^*M^2|_\Delta$ is the transpose of {$L_\euler$}.
The above equation implies that the eigenvalues of the bundle map $L_\euler:TM^2|_\Delta\to TM^2|_\Delta$ are {$1$ or $0$}. So we define $N\Delta\subset TM^2|_\Delta$ as the eigenbundle of $L_\euler$ for the eigenvalue $1$.
\end{proof}
\begin{lemm}[Stable neighborhood]\label{l:stable}
There exists a neighborhood $\pazocal{U}$ of $\Delta$ in $M\times M$ such that $\pazocal{U}$ is stable by the backward flow, i.e.~$e^{-t\euler}\pazocal{U}\subset \pazocal{U}$ for all $t\in \mathbb{R}_{\geqslant 0}$.
\end{lemm}
The diagonal $\Delta\subset M\times M$
is a critical manifold of $\euler$
and is preserved by the flow, and $\pazocal{U}$
is the \emph{unstable manifold}
of $\Delta$ in the terminology
of dynamical systems.
The vector field $\euler$ is
\emph{hyperbolic} in the normal direction $N\Delta$
as we will next see.
\begin{refproof}{Lemma \ref{l:stable}} The idea is to observe that by definition of an Euler vector field, near any $p\in \Delta$ we can choose an arbitrary coordinate frame $(x^i,h^i)$ such that $\Delta$ is locally given by the equations $\{h^i=0\}$ and $\euler=\sum_{i=1}^n(h^i+A_i(x,h))\partial_{h^i}$ where $A_i\in \pazocal{I}^2$. The fact that there is no component in the direction $\partial_{x^i}$
comes from the fact that our vector field $\euler$ preserves the fibration with leaves $x=\text{constant}$.
Fix a compact $K\subset M$ and consider the product $K\times M $,
which contains $\Delta_K=\{(x,x)\in M^2 \st x\in K\}$ and is preserved by the flow.
For the moment we work in $K\times M$ and we conclude a global statement later on.
We also choose some Riemannian metric $g$ on $M$ and consider the smooth function germ $M^2 \ni (m_1,m_2)\mapsto \mathbf{d}_g^2(m_1,m_2)\in \mathbb{R}_{\geqslant 0} $
defined near the diagonal $\Delta_K\subset K\times M$, where $\mathbf{d}_g$ is the distance function.
In the local coordinate frame $(x^i,h^i)_{i=1}^n$ defined near $p$, $\mathbf{d}^2$ reads
$$ \mathbf{d}^2((x,0),(x,h)) = A_{ij}(x)h^ih^j+\pazocal{O}(\vert h\vert^3) $$
where $A_{ij}(x)$ is a positive definite matrix. Thus setting $f=\mathbf{d}^2$ yields
$\euler f=2f+\pazocal{O}(\vert h\vert^3)$ by definition of $\euler$
and
therefore there exists some
$\varepsilon>0$ such that $\forall (x,h)\in K\times M, f\leqslant \varepsilon\implies \euler f\geqslant 0$.
Observe that $\euler \log f=2+\pazocal{O}(\mathbf{d}_g) $, $\euler \log(f)|_{\Delta_K}=2$ and $\euler \log(f)$ is continuous near $\Delta_K$.
By compactness of $K$, there exists some $\varepsilon>0$ s.t. if $f\leqslant \varepsilon$
then $\euler \log(f)\geqslant \frac{3}{2}$.
We take $\pazocal{U}_K=\{f\leqslant \varepsilon\}\cap K\times M$.
The vector field $\euler$ vanishes on $\Delta$ therefore the flow $e^{-t\euler}$
preserves $\Delta$.
Assume there exists $(x,h)\in \pazocal{U}_K\setminus \Delta_K$ such that $e^{-T\euler}(x,h)\notin \pazocal{U}_K$ for some $T> 0$.
Without loss of generality, we may even assume that $f(x,h)=\varepsilon$.
Then,
let us denote $T_1=\inf\{ t \st t>0,\ f(e^{-t\euler}(x,h))=\varepsilon \}$ which is intuitively
the first time for which $f(e^{-T_1\euler}(x,h))=f(x,h)=\varepsilon$.
Since $(x,h)\notin \Delta_K$, we have $-\euler \mathbf{d}^2(x,h)\leqslant -\frac{3}{2} \mathbf{d}^2(x,h)<0$ and setting $f=\mathbf{d}^2$
yields
$$ f(e^{-t\euler}(x,h))=f(x,h)-t\euler f(x,h)+\pazocal{O} (t^2) $$ which means that $f(e^{-t\euler}(x,h))$ is strictly decreasing near $t=0$, hence necessarily $T_1>0$. By the fundamental theorem of calculus,
$$f(e^{-T_1\euler}(x,h))-f(x,h)=\int_0^{T_1} -{\left(\euler f\right)}(e^{-s\euler}(x,h))ds $$
and since
$$ -{\left(\euler f\right)}(e^{-s\euler}(x,h))\leqslant -\frac{3}{2}f(e^{-s\euler}(x,h) ) <0$$ for all $s\in [0,T_1]$, we conclude that
$f(e^{-T_1\euler}(x,h))<f(x,h)$ which yields a contradiction. So
for all compact $K\subset M$, we found a neighborhood $\pazocal{U}_K\subset K\times M$ of $\Delta_K$ (for the induced topology) which is stable by $e^{-t\euler},t\geqslant 0$. Then by paracompactness of $M$, we can take a locally finite subcover of $\Delta$ by such sets and we deduce the existence of a global neighborhood $\pazocal{U}$ of $\Delta$ which is stable by
$e^{-t\euler},t\geqslant 0$.
\end{refproof}
{
In the present section, instead of using charts, we
favor a presentation using coordinate frames,
which makes notation
simpler. The two viewpoints are equivalent
since given a chart $\kappa:U\subset \pazocal{M}\to \kappa(U)\subset \mathbb{R}^n$ on some smooth manifold $\pazocal{M}$ of dimension $n$, the linear
coordinates $(x^i)_{i=1}^n \in \mathbb{R}^{n*}$ on $\mathbb{R}^n$ can be pulled back on
$U$ as a coordinate frame $(\kappa^*x^i)_{i=1}^n\in C^\infty(U;\mathbb{R}^n)$.
}
The next proposition gives a normal form for Euler vector fields.
\begin{prop}[Normal form for Euler vector fields]\label{p:normalform}
Let $\euler$ be an Euler vector field.
There exists a unique subbundle
$N\Delta\subset T_\Delta \left(M\times M\right)$, such that
$ de^{t\euler}=e^{t}\id: N\Delta\to N\Delta$.
For all $p\in \Delta$, there exist coordinate functions $(x^i,h^i)_{i=1}^n$ defined near $p$ such that { in these local coordinates} near $p$, $\Delta=\{ h^i=0 \}$ and
$
\euler=\sum_{i=1}^n h^i\partial_{h^i}$.
\end{prop}
\begin{rema}
This result was proved in \cite{dangthesis} and also later
in the paper by Bursztyn--Lima--Meinrenken \cite{Bursztyn2019}, cf.~the review \cite{Meinrenken2021}. Our proof here is different and more in the spirit of the Sternberg--Chen linearization theorem.
\end{rema}
\begin{proof}
\step{1}
We prove the dynamics contracts exponentially fast. We use the distance function $f=\mathbf{d}^2$ and note that
$-\euler \log(f)\leqslant -\frac{3}{2} $ on the open set $\pazocal{U}$ constructed in Lemma~\ref{l:stable} therefore
$ { e^{-t\euler}}f\leqslant e^{-\frac{3}{2}t}f $ by Gronwall Lemma.
Consequently, there exists a neighborhood $\pazocal{U}$ of $\Delta$ s.t.\ for any function $f\in \pazocal{I}$ ($f$ vanishes on
the diagonal $\Delta$) and any bounded open subset $U\subset\pazocal{U}$, we have the exponential decay
$\Vert { e^{-t\euler}}f\Vert_{L^\infty(U)} \leqslant C e^{-Kt}
$ for some $C>0, K>\frac{1}{2}$ due to the hyperbolicity
in the normal direction of $e^{-t\euler}$.
Moreover, Hadamard's lemma states that if $f\in \pazocal{I}^k$, which means $f$ vanishes to order $k$, then
locally we can always write
$f$ as $\sum_{\vert \beta\vert = k} h^\beta { g_\beta}(x,h) $
where each $h^i\in \pazocal{I}$, and therefore gluing with a partition of unity
yields a decay estimate of the form
$$\Vert { e^{-t\euler}}f\Vert_{L^\infty(U)} \leqslant C e^{-Kkt}$$
where $C>0$ and we have better exponential decay.
So starting from the coordinates $(x^i,h^i)$ from {the proof of Lemma~\ref{l:stable}}, we will correct the coordinates
$(h^i)_{i=1}^n$ using the exponential contractivity of the flow
to obtain normal forms coordinates.
\step{2 }We now correct $h^i$ so that
$\euler h^i=h^i$ modulo an element in $\pazocal{I}^\infty$.
First observe that $\euler h^i-h^i\in \pazocal{I}^2$ by definition, therefore
setting $ h_1^i= h^i +\varepsilon^i_1$, $\varepsilon^i_1= -(\euler h^i-h^i) $, we verify that
\begin{eqnarray}
\euler h_1^i-h_1^i \in \pazocal{I}^3.
\end{eqnarray}
By recursion, we define a sequence $(h_k^i)_{i=1}^n, k\in \mathbb{N}$,
defined as
$h_{k+1}^i=h_k^i+\varepsilon_{k+1}^i$
where $\varepsilon_{k+1}^i= -\frac{(\euler h^i_k-h^i_k)}{k+1} $
and we verify that for all $k\in \mathbb{N}$, we have $ \euler h_k^i-h_k^i \in \pazocal{I}^{k+2}$.
By Borel's Lemma, we may find a smooth germ
$h_\infty^i\sim h^i + \sum_{k=1}^\infty \varepsilon_k^i $ hence we
deduce that there exists $(h_\infty^i)_{i=1}^n$ s.t.
$ \euler h_\infty^i-h_\infty^i \in \pazocal{I}^\infty $.
\step{3} We use the flow to make the coordinate functions $(h_\infty^i)_{i=1}^n$ exact solutions of
$\euler f=f$.
Set $$\tilde{h}^i=h^i_\infty-\int_0^\infty e^{t} { \left( e^{-t\euler}\left((\euler-1)h^i_\infty \right)\right) }dt$$
where the integrand converges absolutely since $(\euler-1)h^i_\infty\in \pazocal{I}^\infty $, hence ${ e^{-t\euler}\left((\euler-1)h^i_\infty \right)}=\pazocal{O}(e^{-tNK}) $ for all $N>0$ where $K>\frac{1}{2}$. The function $\tilde{h}^i$ is smooth since
the ideal $\pazocal{I}^\infty$ is stable by derivatives therefore
differentiating under the integral $\int_0^\infty e^{t} { \left( e^{-t\euler}\left((\euler-1)h^i_\infty \right)\right) }dt$
does not affect the decay of the integral.
So we obtain that for all $i\in \{1,\dots,n\}$, $\euler \tilde{h}^i=\tilde{h}^i$ which
solves the problem since $(x^i,\tilde{h}^i)$ is a germ of smooth coordinate frame near $p$.
\end{proof}
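A closed-form illustration of Steps 2--3 in one dimension (a sketch, not needed for the proof): for $\euler=(h+h^2)\partial_h$ near $h=0$, the function
$$
\tilde{h}=\frac{h}{1+h} \quad\text{satisfies}\quad \euler\tilde{h}=\frac{h+h^2}{(1+h)^{2}}=\frac{h}{1+h}=\tilde{h},
$$
so that $\euler=\tilde{h}\partial_{\tilde{h}}$ in the coordinate $\tilde{h}$; the successive corrections $\varepsilon_k$ of Step 2 reproduce the Taylor expansion $\tilde{h}=h-h^2+h^3-\dots$ (indeed $\varepsilon_1=-(\euler h-h)=-h^2$, etc.).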
\subsection{Log-polyhomogeneity}
Let $\euler$ be an Euler vector field.
One says that a distribution $u\in \pazocal{D}^\prime(\pazocal{U})$
is \emph{weakly
homogeneous of degree $s$} w.r.t.~scaling with $\euler$ if the family $(e^{ts} { (e^{-t\euler}u)})_{t\in \mathbb{R}_{\geqslant 0}}$
is bounded in $\pazocal{D}^\prime(\pazocal{U})$ (cf.~Meyer \cite{Meyer}). One can also introduce a more precise variant of that definition by replacing $\pazocal{D}^\prime(\pazocal{U})$ with $\pazocal{D}^\prime_\Gamma(\pazocal{U})$ for some closed conic $\Gamma\subset T^*M^2\setminus\zero$, where $\pazocal{D}^\prime_\Gamma(\pazocal{U})$ is Hörmander's space of distributions with wavefront set in $\Gamma$ (see \cite[\S8.2]{H} for the precise definition). As shown in \cite[Thm.~1.4]{DangAHP}, {in the first situation without the wavefront condition, this defines} a class of distributions that is intrinsic, i.e.~which does not depend on the choice of Euler vector field $\euler$.
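Two basic examples on $\rr^n$, with $\euler=\sum_{i=1}^n h^i\partial_{h^i}$ and $h$ playing the role of the variable transverse to the diagonal (stated here only for orientation):
$$
e^{-t\euler}\,\vert h\vert^{-a}=e^{ta}\,\vert h\vert^{-a},
\qquad
e^{-t\euler}\,\delta_0(h)=e^{tn}\,\delta_0(h),
$$
so $\vert h\vert^{-a}$ is weakly homogeneous of degree $-a$ and the Dirac mass $\delta_0$ is weakly homogeneous of degree $-n$.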
We consider distributions with the following log-polyhomogenous behaviour under scaling transversally to the diagonal.
\begin{defi}[log-polyhomogeneous distributions]
Let $\Gamma$ be a closed conic set
such that for some $\euler$-stable neighborhood $\pazocal{U}$ of the diagonal,
\begin{eqnarray}
\forall t\geqslant 0, \ { e^{-t\euler}}\Gamma|_\pazocal{U}\subset \Gamma|_{\pazocal{U}},\\
\overline{\Gamma}\cap T^*_\Delta M^2=N^*\Delta.
\end{eqnarray}
We say that $u\in \pazocal{D}^\prime_\Gamma(\pazocal{U})$ is \emph{log-polyhomogeneous} w.r.t.~$\euler$ if
it admits the following asymptotic expansion under scaling: there exist $p\in \mathbb{Z}$, {$l\in \nn$}
and distributions $(u_k)_{k=p}^{\infty}$
in $\pazocal{D}^\prime_\Gamma(\pazocal{U})$ such that for all $N>0$ and all $\varepsilon>0$,
\begin{eqnarray}\label{phexp}
{ e^{-t\euler}}u=\sum_{p\leqslant k\leqslant N, 0\leqslant i\leqslant l-1} e^{-tk} \frac{(-1)^i t^i}{i!}\left(\euler-k\right)^i u_k +\pazocal{O}_{\pazocal{D}^\prime_\Gamma(\pazocal{U})}(e^{-t(N+1-\varepsilon)}).
\end{eqnarray}
A distribution is called \emph{polyhomogeneous} if $l=1$, i.e.~if no logarithmic terms occur in \eqref{phexp}. { In contrast, a value $l\geqslant 2$ indicates the occurrence of \emph{logarithmic mixing} under scaling}.
We endow such {distributions} with a notion of convergence as follows: a sequence of
log-polyhomogeneous {distributions} $u_n$ converges to $v$ in the sense of log-polyhomogeneous {distributions} if
$u_n\rightarrow v$ in $\pazocal{D}^\prime_\Gamma(\pazocal{U})$, if for every $N$ each term in the asymptotic expansion converges,
$u_{n,k}\rightarrow v_{k}$ for $k\leqslant N$, and if the remainders $u_n-\sum_{k=p}^N u_{n,k} $ converge to $v-\sum_{k=p}^N v_{k} $ in the sense that
$$
{ e^{-t\euler}}\bigg(u_n-\sum_{k=p}^N u_{n,k}- \Big(v-\sum_{k=p}^N v_{k} \Big)\bigg)=\pazocal{O}_{\pazocal{D}^\prime_\Gamma(\pazocal{U})}(e^{-t(N+1-\varepsilon)})
$$
for all
$\varepsilon>0$.
\end{defi}
Thus, log-polyhomogeneous distributions have resonance type expansions under scaling with the vector field $\euler$.
We stress, however, that each distribution $u_k$ in the expansion \eqref{phexp} is not necessarily homogeneous.
In fact, it does
not necessarily scale like ${ e^{-t\euler}}u_k=e^{-tk}u_k$, but we may have
logarithmic mixing in the sense that:
\begin{eqnarray*}
{ e^{-t\euler}}u_k=\sum_{i=0}^{l-1} e^{-tk} \frac{(-1)^i t^i}{i!}\left(\euler-k\right)^i u_k.
\end{eqnarray*}
This means that restricted to the linear span of $(u_k,(\euler-k) u_k,\dots,(\euler-k)^{l-1}u_k)$, the matrix of $\euler$
reads
\begin{eqnarray*}
\euler\left(\begin{array}{c}
u_k\\
(\euler-k) u_k\\
\vdots \\
(\euler-k)^{l-1}u_k
\end{array} \right)=\left(\begin{array}{cccc}
k&1& &0\\
&k& \ddots &\\
& &\ddots &1 \\
0 & & &k
\end{array} \right)\left(\begin{array}{c}
u_k\\
(\euler-k) u_k\\
\vdots \\
(\euler-k)^{l-1}u_k
\end{array} \right)
\end{eqnarray*}
so it has a Jordan block structure.
In the present paper, we will prove that log-polyhomogeneous distributions which are
Schwartz kernels of pseudodifferential operators with classical symbols, as well as Feynman propagators, have no Jordan blocks for the resonances $p\leqslant k<0$, and Jordan blocks of rank at most $2$ for all $k\geqslant 0$. In other words,
the family $(u_k,(\euler-k) u_k, (\euler-k)^2u_k) $ has rank at most $2$ for every $k\geqslant 0$.
We introduce special terminology to emphasize this type of behaviour.
\begin{defi}[Tame log-polyhomogeneity]
A distribution $u\in \pazocal{D}^\prime_\Gamma(\pazocal{U})$ is \emph{tame log-polyhomogeneous} w.r.t.~$\euler$
if it is log-polyhomogeneous w.r.t.~$\euler$ and
\begin{eqnarray}
{ e^{-t\euler}}u=\sum_{p\leqslant k<0} e^{-tk} u_k + \sum_{0\leqslant k\leqslant N, 0\leqslant i\leqslant 1} e^{-tk} \frac{(-1)^i t^i}{i!}\left(\euler-k\right)^i u_k +\pazocal{O}_{\pazocal{D}^\prime_\Gamma(\pazocal{U})}(e^{-t(N+1-\varepsilon)})
\end{eqnarray}
for all $\varepsilon>0$, i.e.~the Jordan blocks only occur for non-negative $k$ and have rank at most $2$.
\end{defi}
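The simplest example of this behaviour (on $\rr^n$, with $\euler=\sum_{i=1}^n h^i\partial_{h^i}$, stated here for orientation) is $u=\log\vert h\vert$: one has exactly
$$
e^{-t\euler}\log\vert h\vert=\log\vert h\vert-t,
\qquad
\euler\log\vert h\vert=1,
\qquad
\euler\big(\euler\log\vert h\vert\big)=0,
$$
i.e.~$u$ is tame log-polyhomogeneous with a single rank-$2$ Jordan block at $k=0$, spanned by $(\log\vert h\vert,\,1)$.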
For both pseudodifferential operators with classical symbols and Feynman powers,
we will prove that the property of being log-polyhomogeneous is \emph{intrinsic} and does not depend on the
choice of Euler vector field used to define the log-polyhomogeneity. This generalizes the fact that the class of pseudodifferential operators
with polyhomogeneous symbol is intrinsic.
\subsection{Pollicott--Ruelle resonances of \texorpdfstring{$e^{-t\euler}$}{exp(-tX)} acting on log-polyhomogeneous distributions}
For every tame log-polyhomogeneous distribution $\exd \in \cD^\prime(\pazocal{U})$ and every $n\in \mathbb{Z}$, we define a projector
$\Pi_n$ which extracts the quasihomogeneous part $\Pi_n(\exd) \in \cD^\prime(\pazocal{U})$ of the distribution $\exd$.
Note that if a distribution $u$ is log-polyhomogenous w.r.t.~$\euler$, then for any test form $\varphi\in \Omega^{\bullet}_{\rm c}(\pazocal{U})$~\footnote{{We consider test forms because Schwartz kernels of operators are not densities and it is appropriate to consider them as differential forms of degree $0$.}}
where $\pazocal{U}$ is $\euler$-stable,
we have an asymptotic expansion:
\begin{eqnarray*}
{\left\langle { (e^{-t\euler}u)},\varphi\right\rangle} = \sum_{k=p,0\leqslant i\leqslant l-1}^N e^{-tk}\frac{(-1)^it^i}{i!} \left\langle (\euler-k)^iu_{k},\varphi\right\rangle+\pazocal{O}(e^{-tN}).
\end{eqnarray*}
The l.h.s.~is analogous to the dynamical correlators studied in hyperbolic dynamics, and the asymptotic expansion parallels the expansions of such correlators.
So in analogy with dynamical system theory, we can consider the Laplace transform
of the dynamical correlators; the Laplace transformed
correlators have meromorphic continuation to the complex plane with poles
along the arithmetic
progression $\{-p,-p-1,\dots \}$:
\[
\int_0^\infty e^{-tz}\left\langle { (e^{-t\euler}u)},\varphi\right\rangle dt=
\sum_{k=p,0\leqslant i\leqslant l-1}^N (-1)^i \frac{\left\langle (\euler-k)^i u_{k},\varphi\right\rangle}{(z+k)^{{i+1}}}+\text{holomorphic on }\Re z> -N.
\]
These poles are \emph{Pollicott--Ruelle resonances} of the flow $e^{-t\euler}$ acting on
log-polyhomogeneous distributions in $\pazocal{D}^\prime(\pazocal{U})$.
We can now use the Laplace transform to define
the projector $\Pi_n$ which extracts quasihomogeneous parts of distributions.
\begin{defi} \label{def:pi}
Suppose $u\in \cD^\prime(\pazocal{U})$ is log-polyhomogeneous. Then for $n\in\zz$ we define
$$
\Pi_n(u)\defeq \frac{1}{2i\pi}\int_{\partial D} \mathfrak{L}_zu \,dz
$$
where $ \mathfrak{L}_z u= \int_0^\infty e^{-tz}{ (e^{-t\euler}u)} \,dt$ and $D\subset \mathbb{C}$ is a small disc around $n$.
\end{defi}
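Continuing the toy example $u=\log\vert h\vert$ from the previous subsection, the Laplace transform can be computed in closed form:
$$
\mathfrak{L}_z u=\int_0^\infty e^{-tz}\left(\log\vert h\vert -t\right)dt=\frac{\log\vert h\vert}{z}-\frac{1}{z^{2}},
$$
a double pole at $z=0$ and no other singularity, so that $\Pi_0(u)=\log\vert h\vert$ and $\euler\,\Pi_0(u)=1$; this is the model computation behind the dynamical residue defined below.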
\subsection{Residues as homological obstructions and scaling anomalies}\label{ss:toy}
Before considering the general setting, let us explain the concept of residue in the following fundamental example (which is closely related to the discussion in the work of Connes--Moscovici~\cite[\S5]{Connes1995}, Lesch \cite{Lesch}, Lesch--Pflaum \cite{Lesch2000}, Paycha \cite{paycha2,paycha} and Maeda--Manchon--Paycha \cite{Maeda2005}).
Let $\Feuler\in {\cf(T\rr^{n})}$ be an \emph{Euler vector field with respect to $0\in \rr^{n}$}, i.e.~for all $f\in\cf(\rr^{n})$ vanishing at $0$, $\Feuler f-f$ vanishes at $0$ to order $2$. For instance, we can consider $\Feuler=\sum_{i=1}^n \xi^i \partial_{\xi^i}$, where $(\xi^1,\dots,\xi^n)$ are the Euclidean coordinates. This simplified setting is meant to illustrate what happens on the level of \emph{symbols} or \emph{amplitudes} rather than Schwartz kernels near $\Delta\subset M\times M$, but these two points of view are very closely related. In our toy example, this simply corresponds to the relationship between momentum variables $\xi^{i}$ and position space variables $h^{i}$ by inverse Fourier transform, see Remark \ref{l:scalanomal}.
Suppose $\exd\in \pazocal{D}^{\prime,n}(\rr^n\setminus\{0\})$ is a de Rham current of top degree
which solves the linear PDE:
\beq\label{toextend}
\Feuler \exd=0 \mbox{ in the sense of } \pazocal{D}^{\prime,n}(\rr^n\setminus \{0\}),
\eeq
which
means that the current $\exd$ is scale invariant on $\rr^n\setminus \{0\}$.
\begin{lemm}
Under the above assumptions, $\iota_\Feuler \exd$ is a closed current in
{$\pazocal{D}^{\prime,n-1}(\rr^n\setminus \{0\})$} where $\iota_\Feuler$ denotes the contraction with $\Feuler$.
\end{lemm}
\begin{proof}
The current $\iota_\Feuler \exd$ is closed in $\pazocal{D}^{\prime,n-1}(\rr^n\setminus \{0\}) $
by the
Lie--Cartan formula $\left(d\iota_\Feuler+\iota_\Feuler d \right)=\Feuler$ and the fact that $\exd$ is closed as a top degree current:
\[
d\iota_\Feuler {\exd}=\left(d\iota_\Feuler+\iota_\Feuler d \right)\exd=\Feuler \exd=0.\vspace{-1em}
\]
\end{proof}
One can ask the question: \emph{is there a distributional extension
$\overline{\exd}\in \pazocal{D}^{\prime,n}(\rr^n)$ of $\exd$ which satisfies the same scale invariance PDE on $\rr^n$? } The answer
is positive unless there is an obstruction of cohomological nature which we explain in the following proposition.
\begin{prop}[Residue as homological obstruction]\label{p:residue}
Suppose $\exd\in \pazocal{D}^{\prime,n}(\rr^n\setminus\{0\})$ satisfies \eqref{toextend}. Let $\chi\in C^\infty_\c(\rr^n)$ be such that $\chi=1$ near $0$.
Then $d\chi$ is an exact form and
the pairing between the exact form $d\chi$ and the closed current $\iota_\Feuler \exd $
$$
\left\langle d\chi, \iota_\Feuler \exd \right\rangle=\int_{\rr^n} d\chi\wedge \iota_\Feuler \exd
$$
{does not depend} on the choice of $\chi$.
If moreover $\wf(\exd)\subset \{ (\xi,\tau dQ(\xi)) \st Q(\xi)=0,\ \tau<0\}$ for some non-degenerate quadratic form $Q$ on $\rr^n$, then
$$
\int_{\mathbb{S}^{n-1}} \iota_\Feuler \exd=\left\langle d\chi, \iota_\Feuler \exd \right\rangle.
$$
There is a scale invariant extension $\overline{\exd}$ of $\exd$ if and only if
the pairing $\left\langle d\chi, \iota_\Feuler \exd \right\rangle=0$, which is equivalent to saying that
the current $\iota_\Feuler \overline{\exd}\in \pazocal{D}^{\prime,n-1}(\rr^n)$ is closed.
\end{prop}
\begin{proof}
Since $\iota_\Feuler \exd$ is closed and $d\chi$ is exact the cohomological pairing
$\left\langle d\chi, \iota_\Feuler \exd \right\rangle
$ does not depend on the choice of $\chi$. In fact, as a de Rham current $d\chi\in \pazocal{D}^{\prime,1}(\rr^n)$ lies in the same cohomology class as the current $[\mathbb{S}^{n-1}]\in \pazocal{D}^{\prime,1}(\rr^n)$ of integration on a sphere $\mathbb{S}^{n-1}$ enclosing $0$.
If there is an extension $\overline{\exd}$ that satisfies $\Feuler \overline{\exd}=0$ in
$\pazocal{D}^{\prime,n}(\rr^n)$, it means that the current $\iota_\Feuler \overline{\exd}$ is closed in $\pazocal{D}^{\prime,n-1}(\rr^n) $
since $ d\iota_\Feuler \overline{\exd}=\left(d\iota_\Feuler+\iota_\Feuler d \right)\overline{\exd}=\Feuler \overline{\exd}=0 $.
Then by integration by parts (sometimes called the Stokes theorem for de Rham currents),
$\left\langle d\chi, \iota_\Feuler \exd \right\rangle=\left\langle d\chi, \iota_\Feuler \overline{\exd} \right\rangle=-\left\langle \chi, \Feuler \overline{\exd} \right\rangle= 0$
where we used the fact that $d\chi$ vanishes near $0$ and $\exd=\overline{\exd}$ in a neighborhood of the support of $d\chi$.
Conversely, assume the cohomological pairing vanishes: $\left\langle d\chi, \iota_\Feuler \exd \right\rangle=0
$. Let $\overline{\exd}$ be any extension of $\exd$. Then $\left\langle \chi, \Feuler \overline{\exd} \right\rangle=0
$ by integration by parts. But since $\exd=\overline{\exd}$ outside $0$ and $\Feuler \exd=0$ outside $0$,
the current $\Feuler \overline{\exd}$ is supported at $0$ and by a classical theorem of Schwartz must have the form
$$\Feuler \overline{\exd}=\bigg( c_0\delta_{\{0\}}(\xi) + \sum_{1\leqslant\vert \alpha \vert\leqslant N} c_\alpha\partial_\xi^\alpha\delta_{\{0\}}(\xi) \bigg) d\xi^1\wedge \dots\wedge d\xi^n $$
where all $\alpha$ are multi-indices and $N$ is the distributional
order of the current.
Since $\chi=1$ near $0$, the terms involving derivatives of $\delta_{\{0\}}$ pair to zero with $\chi$, so that
$0=\left\langle \chi, \Feuler \overline{\exd} \right\rangle=c_0\chi(0)=c_0$, hence the constant term vanishes.
This means that $\Feuler \overline{\exd}= \sum_{1\leqslant\vert \alpha \vert\leqslant N}c_\alpha\partial_\xi^\alpha\delta_{\{0\}}(\xi)\, d\xi^1\wedge \dots\wedge d\xi^n$. Since $\Feuler\big(\partial_\xi^\alpha\delta_{\{0\}}(\xi)\, d\xi^1\wedge \dots\wedge d\xi^n\big)=-\vert\alpha\vert\,\partial_\xi^\alpha\delta_{\{0\}}(\xi)\, d\xi^1\wedge \dots\wedge d\xi^n$, the current $\overline{\exd}+ \sum_{1\leqslant\vert \alpha \vert\leqslant N}\frac{c_\alpha}{\vert\alpha\vert}\partial_\xi^\alpha\delta_{\{0\}}(\xi)\, d\xi^1\wedge \dots\wedge d\xi^n$ extends $\exd$ and satisfies $\Feuler\left(\overline{\exd}+ \sum_{1\leqslant\vert \alpha \vert\leqslant N}\frac{c_\alpha}{\vert\alpha\vert}\partial_\xi^\alpha\delta_{\{0\}}(\xi)\, d\xi^1\wedge \dots\wedge d\xi^n \right)=0$.
When $\wf(\exd)\subset \{ (\xi,\tau dQ(\xi)) \st Q(\xi)=0, \ \tau<0\}$, then $\wf(\exd)$ does not meet the conormal of $\mathbb{S}^{n-1}$
and therefore we can repeat the above discussion verbatim with the indicator function $\one_B$ of the unit ball $B$ playing the role of $\chi$,
since the distributional product $\one_B \exd$ is well-defined because $\wf(\one_B)+ \wf(\exd)$ never meets the zero section.
Then we obtain the residue from the identity $\partial \one_B=[\mathbb{S}^{n-1}]$ for currents
where $[\mathbb{S}^{n-1}]$ is the integration current on the sphere $\mathbb{S}^{n-1}$.
\end{proof}
The quantity $\left\langle d\chi, \iota_\Feuler \exd \right\rangle=\left\langle [\mathbb{S}^{n-1}],[\iota_\Feuler \exd] \right\rangle$,
called the \emph{residue} or \emph{residue pairing}, measures the cohomological obstruction to extending $\exd$ to a solution $\overline{\exd}$ of $\Feuler \overline{\exd}=0$.
In fact, a slight modification of the previous proof shows that there is always an
extension $\overline{\exd}$ which satisfies the linear PDE
$$
\boxed{\Feuler \overline{\exd}=\left\langle d\chi, \iota_\Feuler \exd \right\rangle \delta_{\{0\}} d\xi^1\wedge \dots\wedge d\xi^n.}
$$
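The archetypal example is $\exd=\vert\xi\vert^{-n}\,d\xi^1\wedge\dots\wedge d\xi^n$ on $\rr^n\setminus\{0\}$ with $\Feuler=\sum_{i=1}^n\xi^i\partial_{\xi^i}$ (a standard computation, recalled here only for orientation): $\Feuler\exd=0$ away from $0$, and since $\iota_\Feuler\exd$ restricts to the standard volume form on the unit sphere, Proposition~\ref{p:residue} gives
$$
\left\langle d\chi,\iota_\Feuler\exd\right\rangle
=\int_{\mathbb{S}^{n-1}}\iota_\Feuler\exd
=\mathrm{vol}(\mathbb{S}^{n-1})=\frac{2\pi^{n/2}}{\Gamma(n/2)}\neq0,
$$
so $\exd$ admits no scale-invariant extension to $\rr^n$.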
We show a useful vanishing {property} of {certain} residues.
\begin{coro}[{Residue vanishing}]\label{l:vanishing1}
{ Let $Q$ be a nondegenerate quadratic form on $\mathbb{R}^n$.}
Suppose $\exd\in \pazocal{D}^\prime(\mathbb{R}^n\setminus \{0\})$ is homogeneous of degree $-n+k>-n$ and $$\wf(\exd)\subset \{ (\xi,\tau dQ(\xi)) \st Q(\xi)=0, \ \tau<0\}.$$
Then for every
multi-index $\beta$ such that $\vert\beta\vert=k>0$,
$$
\int_{\mathbb{S}^{n-1}} \big(\partial_{\xi}^\beta \exd\big) \iota_\Feuler d\xi_1\dots d\xi_n =0.
$$
\end{coro}
\begin{proof}
Let $\one_B$ be the indicator function of the unit ball $B$. We denote by $\overline{\exd}$, the unique distributional extension of $\exd\in \pazocal{D}^\prime(\mathbb{R}^n\setminus \{0\})$ in $\pazocal{S}^\prime(\mathbb{R}^n)$ which is homogeneous of degree $-n+k$ by~\cite[Thm.~3.2.3, p.~75]{H}.
Therefore using the commutation relation $[\Feuler,\partial_\xi^\beta]=-\vert\beta\vert\,\partial_\xi^\beta$ yields immediately
that $\partial_\xi^\beta \overline{\exd}$ is a distribution
homogeneous of degree $-n$ and thus $\Feuler \left(\partial_\xi^\beta \overline{\exd}\,d^n\xi\right)=0$.
Then, by Proposition~\ref{p:residue}, the residue equals
$$ \int_{\mathbb{S}^{n-1}} \big(\partial_{\xi}^\beta \exd\big) \iota_\Feuler d\xi_1\dots d\xi_n=\int_{\mathbb{R}^n} \left(\partial \one_B\right) \iota_\Feuler \partial_\xi^\beta \overline{\exd}\,d^n\xi=0, $$
where the pairing is well-defined since $N^*(\mathbb{S}^{n-1})\cap \wf(\exd)=\emptyset$.
\end{proof}
\begin{remark}[Residue as scaling anomaly]\label{l:scalanomal}
Let $\exd\in \pazocal{D}^{\prime,n}(\rr^n\setminus\{0\})$ be a current of top degree, homogeneous of degree $0$ with respect to scaling and denote by $\overline{\exd}\in \pazocal{D}^{\prime,n}(\rr^n)$ its unique distributional extension of order $0$.
Denote by $\big(\pazocal{F}^{-1}\exd\big)(h)=\frac{1}{(2\pi)^n}\left\langle \overline{\exd}, e^{i\left\langle h, . \right\rangle}\right\rangle\in \pazocal{S}^\prime(\rr^n)$ its inverse Fourier transform.
Then the tempered distribution $\pazocal{F}^{-1}\exd$ satisfies the equations:
\begin{eqnarray*}
\pazocal{F}^{-1}\exd(\lambda.)=\pazocal{F}^{-1}\exd(.)+c\log\lambda,\\
\euler \pazocal{F}^{-1}\exd=c,
\end{eqnarray*}
where $\euler=\sum_{i=1}^n h^i\partial_{h^i}$ is the Euler vector field in position space and $c=\frac{1}{(2\pi)^n}\int_{\rr^n} d\chi\wedge \iota_\Feuler \exd$ is the residue. Therefore, residues defined as homological obstructions also arise as \emph{scaling anomalies}.
\end{remark}
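For the archetypal example $\exd=\vert\xi\vert^{-n}\,d\xi^1\wedge\dots\wedge d\xi^n$ above, this can be checked against the classical formula (valid up to an additive constant depending on the chosen extension; the overall sign depends on the orientation conventions implicit in the pairing):
$$
\big(\pazocal{F}^{-1}\exd\big)(h)=-\frac{\mathrm{vol}(\mathbb{S}^{n-1})}{(2\pi)^n}\,\log\vert h\vert+\mathrm{const},
\qquad
\euler\,\pazocal{F}^{-1}\exd=-\frac{\mathrm{vol}(\mathbb{S}^{n-1})}{(2\pi)^n},
$$
so $\pazocal{F}^{-1}\exd$ fails to be scale invariant precisely by a constant whose absolute value is $(2\pi)^{-n}$ times the residue.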
This interpretation of residues as scaling anomalies has appeared in the first author's thesis \cite[\S8]{dangthesis}, as well as in the physics literature on renormalization in Quantum Field Theory in the Epstein--Glaser approach \cite{Nikolov2013,Gracia-Bondia2014,Rejzner2020}.
\subsection{Dynamical definition of residue} After this motivation, we come back to the setting of an Euler vector field $\euler$ acting on a neighborhood of the diagonal $\Delta\subset M\times M$.
As we will explain, our approach to the Wodzicki residue uses scalings with Euler vector fields and a diagonal restriction.
Let $\iota_\Delta: x\mapsto (x,x)\in \Delta\subset M\times M$
denote the diagonal embedding.
We are ready to formulate our main definition.
\begin{defi}[Dynamical residue]
Let ${\pazocal{K}}\in \pazocal{D}^\prime_\Gamma(\pazocal{U})$ be a tame log-polyhomogeneous distribution on some neighborhood $\pazocal{U}$ of the diagonal $\Delta\subset M\times M$
and suppose $\Gamma|_{\Delta}\subset N^*\Delta$.
For any Euler vector field $\euler$, let $\Pi_0$
be the corresponding spectral projector on the resonance $0$, see Definition \ref{def:pi}. We define
the \emph{dynamical residue of ${\pazocal{K}}$} as:
$$
\resdyn {\pazocal{K}} = \iota_\Delta^*\big(\euler ( \Pi_0( {\pazocal{K}}) )\big) \in C^\infty(M),
$$
provided that the pull-back is well-defined.
\end{defi}
A priori, the dynamical residue can depend on the choice of Euler vector field $\euler$, and it is not obvious that one can pull back the distribution $\euler (\Pi_0({\pazocal{K}}))$ by the diagonal embedding. We therefore need to examine the definition carefully for classes of Schwartz kernels that are relevant for complex powers of differential operators.
\section{Equivalence of definitions in pseudodifferential case}\label{ss:wodzickipdo}
\subsection{Log-polyhomogeneity of pseudodifferential operators}
In this section, $M$ is a smooth manifold of arbitrary dimension.
{We denote by $\vert \Lambda^{\top}M \vert$ the space of smooth densities
on $M$. For any operator $A:C^\infty_\c(M)\to \cD^\prime(M)$, recall that the corresponding Schwartz kernel is a distribution on $M\times M$ twisted by some smooth density. More precisely, the kernel of $A$ belongs to $\cD^\prime(M\times M)\otimes \pi_2^*\vert \Lambda^{\top}M \vert$ where $\pi_2$ is the projection
on the second factor and reads ${\pazocal{K}}(x,y)\dvol_g(y)$ where ${\pazocal{K}}\in \cD^\prime(M\times M)$ and $\dvol_g\in \vert \Lambda^{\top}M \vert$~\footnote{ In fact, $Au=\int_{y\in M} {\pazocal{K}}(.,y)u(y)\dvol_g(y)$ $\forall u\in C^\infty_\c(M)$. Neither ${\pazocal{K}}\in \cD^\prime(M\times M)$ nor $\dvol_g\in \vert \Lambda^{\top}M \vert$ is intrinsic, but their product is.}.}
{ In this part, we need to fix a density $\dvol_g\in \vert \Lambda^{\top}M \vert$ on our manifold $M$ since, given a linear continuous operator from $C^\infty_\c(M)\to \pazocal{D}^\prime(M)$, its Schwartz kernel $ \pazocal{K}$ and hence its dynamical residue depend on the choice of density. However, we will see in the sequel that the product (dynamical residue $\times$ density) $\in \vert \Lambda^{\top}M \vert$ does not depend on the choice of density.}
We first prove that pseudodifferential kernels
are tame log-polyhomogeneous with respect to \emph{any} Euler vector field $\euler$.
\begin{prop}\label{p:pseudopoly}
Let ${\pazocal{K}}(.,.)\pi_2^*\dvol_g\in \pazocal{D}^\prime_{N^*\Delta}(M\times M)\otimes \pi_2^*\vert \Lambda^{\top}M \vert$ be the kernel of a classical pseudodifferential operator $A\in \Psi^\cv_{\ph}(M)$, {$\cv\in \mathbb{C}$}.
Then for every Euler vector field $\euler$, there exists an $\euler$-stable neighborhood of the diagonal $\pazocal{U}$ such that ${\pazocal{K}}$ is tame log-polyhomogeneous w.r.t.~$\euler$.
In particular,
{
$$
\mathfrak{L}_s{\pazocal{K}} { := \int_0^\infty \left(e^{-t(\euler+s)}{\pazocal{K}}\right) dt \in \pazocal{D}^\prime_{N^*\Delta}(\pazocal{U})}
$$}
is a well-defined conormal distribution and extends as a {meromorphic function} of $s\in \mathbb{C}$
with poles at $s\in \cv+n-\mathbb{N}$.
{ If $\cv\geqslant -n$ is an integer,
the poles at $s=k$ are simple when $k<0$ and of multiplicity at most $2$ when
$k\geqslant 0$. If $\cv\in \mathbb{C}\setminus \left(\clopen{-n,+\infty}\cap \mathbb{Z}\right) $ then all poles are simple and $\Pi_0(\pazocal{K})=0$.}
\end{prop}
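Before turning to the proof, here is a minimal consistency check (a sketch for the identity operator, $\asigma\equiv 1$, $\cv=0$): in normal form coordinates for $\euler$ its kernel is $\pazocal{K}=\delta_0(h)$, and
$$
e^{-t\euler}\,\delta_0(h)=e^{tn}\,\delta_0(h)
\quad\Longrightarrow\quad
\mathfrak{L}_s\pazocal{K}=\frac{\delta_0(h)}{s-n},
$$
a single simple pole at $s=n\in\cv+n-\mathbb{N}$; in particular $\Pi_0(\pazocal{K})=0$, consistent with the vanishing of the residue density of the identity.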
{In the proof, we make a crucial use of the Kuranishi trick, which allows us to represent a pseudodifferential kernel in normal form coordinates for a given Euler vector field $X$.
Concretely, in local coordinates, the phase term used to represent the pseudodifferential kernel as an oscillatory integral reads $e^{i\left\langle\xi,x-y \right\rangle}$, yet we would like to write it in the form $e^{i\left\langle\xi,h\right\rangle}$ where $X=\sum_{i=1}^nh^i\partial_{h^i}$. We also need to study how the symbol transforms in these normal form coordinates and to verify that it is still polyhomogeneous in the momentum variable $\xi$.
Our proof can be essentially seen as a revisited version of the theorem of change of variables for pseudodifferential operators combined with scaling of polyhomogeneous symbols.}
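Schematically, with $x-y=M(x,y)h(x,y)$ and $M(x,x)=\id$, the substitution $\eta=\t M(x,y)\,\xi$ adapts the phase:
$$
e^{i\left\langle\xi,x-y\right\rangle}
=e^{i\left\langle \t M(x,y)\xi,\,h(x,y)\right\rangle},
\qquad
d^n\xi=\module{M(x,y)}^{-1}\,d^n\eta,
$$
which replaces the symbol $\asigma(x;\xi)$ by the amplitude $\asigma\big(x;\t M(x,y)^{-1}\eta\big)\module{M(x,y)}^{-1}$ appearing in Step 2 below.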
\begin{refproof}{Proposition \ref{p:pseudopoly}}
\step{1} Outside the diagonal the
Schwartz kernel ${\pazocal{K}}$ is smooth, hence for any test form $\chi\in C^\infty_{\rm c}(M\times M\setminus \Delta)$ and any smooth function
$\psi\in C^\infty(M\times M)$ supported away from the diagonal,
$$\left\langle { e^{-t\euler}} ( {\pazocal{K}} \psi) ,\chi\right\rangle=\pazocal{O}((e^{-t})^{+\infty}).$$
This shows we only need to prove the tame log-polyhomogeneity for a localized version of the kernel near the diagonal $\Delta\subset M\times M$.
\step{2} Then, by partition of unity, it suffices to prove the claim on sets
of the form $U\times U\subset M\times M$. By the results in~\cite{Lesch}, in a local chart $\kappa^2:U\times U\to \kappa(U)\times \kappa(U) $ with linear coordinates $(x,y)=(x^i,y^i)_{i=1}^n$, the pseudodifferential kernel reads:
\begin{eqnarray*}
{\left(\kappa^2_*\pazocal{K}\right)}(x,x-y)=\frac{1}{(2\pi)^n}\int_{\xi\in \mathbb{R}^n} e^{i\left\langle \xi,x-y\right\rangle} \asigma(x;\xi) d^n\xi \in C^\infty(\kappa(U)\times \mathbb{R}^n\setminus\{0\})
\end{eqnarray*}
where $\asigma(x;\xi)\sim \sum_{k=0}^{+\infty} \asigma_{\cv-k}(x;\xi)$ and
$\asigma_k\in C^\infty(\kappa(U) \times \mathbb{R}^n\setminus\{0\} ) $
satisfies $\asigma_k(x;\lambda\xi)=\lambda^k\asigma_k(x;\xi)$ for all $\lambda>0$ and $\vert \xi\vert>0$.
{The normal form in Proposition ~\ref{p:normalform} yields the existence
of}
coordinate functions $(x^i,h^i)_{i=1}^n$, where $(x^i)_{i=1}^n$ are the initial linear coordinates, such that
$\kappa^2_*\euler=\sum_{i=1}^n h^i\partial_{h^i}$. We also view the coordinates $(h^i)_{i=1}^n$ as \emph{coordinate functions} $\left(h^i(x,y)\right)_{i=1}^n$
on $\kappa^2(U \times U)$, we also use the short notation $h(x,y)=(h^i(x,y))_{i=1}^n\in C^\infty(\kappa(U)^2,\mathbb{R}^n)$.
By the Kuranishi trick, the kernel
$\kappa^2_*{\pazocal{K}}$ can be rewritten as
\begin{eqnarray*}
\kappa^2_*{\pazocal{K}}(x,x-y)=\frac{1}{(2\pi)^n}\int_{\xi\in \mathbb{R}^n} e^{i\left\langle \xi,h(x,y)\right\rangle} \asigma(x; \t M(x,y)^{-1} \xi) \module{M(x,y)}^{-1}d^n\xi \\ \in C^\infty(\kappa(U)\times \mathbb{R}^n\setminus\{0\})
\end{eqnarray*}
where $\module{M(x,y)}=\vert\det M(x,y)\vert$, and the matrix $M\in C^\infty(\kappa(U)^2,\GL_n(\mathbb{R}))$ satisfies $M(x,x)=\id$, $x-y=M(x,y)h(x,y)$. Since $(x^i,y^i)_{i=1}^n$ and $(x^i,h^i)_{i=1}^n$ are both coordinate systems in $\kappa(U)\times \kappa(U)$, we can view $(x-y)=(x^i-y^i)_{i=1}^n(.,.)\in C^\infty(\kappa(U)\times \mathbb{R}^n,\mathbb{R}^n)$
as a smooth function of $(x,h)\in \kappa(U)\times \mathbb{R}^n$
and $M(x,h)$ can be expressed
as an integral (here $d_h$ denotes the differential in the $h$ variable):
$$
M(x,h)=\int_0^1 d_h(x-y)|_{(x,th)}\, dt.
$$
\step{3} We need to eliminate the dependence on the $h$ variable in the symbol $A(x,y;\xi)= \asigma(x; \t M(x,y)^{-1} \xi)\module{M(x,y)}^{-1}$,
keeping in mind that this symbol has the polyhomogeneous expansion in the $\xi$ variable
$$
A(x,y;\xi)\sim \sum_{k=0}^{+\infty}\asigma_{\cv-k}(x; \t M(x,y)^{-1} \xi)\module{M(x,y)}^{-1}.
$$
By~\cite[Thm.~3.1]{shubin}, if we set $A(x,y;\xi)=\asigma(x; \t M(x,y)^{-1}\xi)\module{M(x,y)}^{-1}$,
then:
\begin{eqnarray*}
A(x,y;\xi)\sim {\sum_\beta} \frac{i^{-\vert \beta\vert}}{\beta !} \partial_\xi^\beta \partial_y^\beta A(x,y;\xi) |_{x=y}
\end{eqnarray*}
which implies that if we set $A_{\cv-k}(x,y;\xi)=\asigma_{\cv-k}(x; \t M(x,y)^{-1} \xi)\module{M(x,y)}^{-1}$, we get
the polyhomogeneous asymptotic expansion:
\begin{eqnarray}\label{e:exp1referee}
A(x,y;\xi)\sim \sum_{p=0}^{+\infty} {\sum_{\vert \beta\vert+k=p } \frac{i^{-\vert \beta\vert}}{\beta !} \partial_\xi^\beta \partial_y^\beta A_{\cv-k}(x,y;\xi) |_{x=y} }
\end{eqnarray}
where in the sum over $p$, each term is homogeneous of degree $\cv-p$ w.r.t.~scaling in the variable $\xi$.
At this step,
we obtain a representation of the form
\begin{eqnarray}\label{e:exp2referee}
{\left(\kappa^2_*\pazocal{K}\right)}(x,x-y)=\frac{1}{(2\pi)^n}\int_{\xi\in \mathbb{R}^n} e^{i\left\langle \xi,h(x,y)\right\rangle} \tilde{\asigma}(x; \xi) d^n\xi \in C^\infty(\kappa(U)\times \mathbb{R}^n\setminus\{0\})
\end{eqnarray}
where $\tilde{\asigma}\in C^\infty(\kappa(U)\times \mathbb{R}^n)$ is a polyhomogeneous symbol.
\step{4} { It is at this particular step that we start to carefully distinguish between the case $\cv\in\mathbb{C}\setminus(\clopen{-n,+\infty}\cap \mathbb{Z})$, which is in a certain sense easier to handle, and the case where $\cv$ is an integer such that $\cv\geqslant -n$.} {Up to modifying ${\pazocal{K}}$} by a smoothing operator, we can always assume that $\tilde{\asigma}$ is smooth in $\xi$ and supported in $\vert \xi\vert\geqslant 1$.
For every $N$, let us decompose
$$\tilde{\asigma}(x;\xi)=\sum_{k=0}^N \tilde{\asigma}_{\cv-k}(x;\xi)+R_{\cv-N}(x;\xi) $$
where the behaviour of the summands can be summarized as follows:
\ben
\item $R_{\cv-N}\in C^\infty\left(\kappa(U)\times \mathbb{R}^n\setminus\{0\} \right)$ and satisfies the estimate
$$ \forall \xi\, \text{ s.t. } \vert\xi\vert\geqslant 1, \ \forall x\in \kappa(U),\ \vert \partial_\xi^\beta R_{\cv-N}(x;\xi) \vert\leqslant C_{\cv-N,\beta} \vert\xi\vert^{{\Re\cv-N-\vert\beta\vert} }, $$
and $R_{\cv-N}(x;.) $ extends as a distribution in $\kappa(U)\times \mathbb{R}^n $
of order $ N- \cv-n +1$ by \cite[Thm.~1.8]{DangAHP} since $R_{\cv-N}(x;.) $ satisfies the required weak homogeneity assumption.
\item If $\cv-k>-n$, then the symbol $ \tilde{\asigma}_{\cv-k}\in C^\infty\left(\kappa(U)\times \mathbb{R}^n\setminus\{0\} \right)$ is homogeneous of degree $\cv-k $
and extends uniquely as a tempered distribution in $\xi$ homogeneous of degree $\cv-k$ by~\cite[Thm.~3.2.3]{H}.
\item { If $\cv-k\leqslant -n$ and
$\cv\in\mathbb{C}\setminus(\clopen{-n,+\infty}\cap \mathbb{Z})$,
then observe that $\cv-k\in\mathbb{C}\setminus(\clopen{-n,+\infty}\cap \mathbb{Z}) $ hence
$ \tilde{\asigma}_{\cv-k}\in C^\infty\left(\kappa(U)\times \mathbb{R}^n\setminus\{0\} \right)$ is homogeneous of degree $\cv-k $ in $\xi$
and extends \emph{uniquely} as a tempered distribution in $\xi$ homogeneous of degree $\cv-k$ by~\cite[Thm.~3.2.4]{H}. If $\cv-k\leqslant -n$ and
$\cv\geqslant -n$ is an integer,}
then $ \tilde{\asigma}_{\cv-k}\in C^\infty\left(\kappa(U)\times \mathbb{R}^n\setminus\{0\} \right)$ is homogeneous of degree $\cv-k $ in $\xi$
and extends \emph{non--uniquely} as a tempered distribution in $\xi$ quasihomogeneous of degree $\cv-k$ by~\cite[Thm.~3.2.4]{H}.
There are Jordan blocks in the scaling (see~\cite[(3.2.24)$^\prime$]{H}), in the sense that we can choose the distributional extension in $C^\infty(\kappa(U),\pazocal{S}^\prime(\mathbb{R}^n))$ in such a way that:
$$ \left(\xi_i\partial_{\xi_i}-\cv+k\right) \tilde{\asigma}_{\cv-k}= \sum_{\vert \beta\vert=k-\cv-n} C_\beta(x) \partial_\xi^\beta\delta^{\mathbb{R}^n}_{\{0\}}(\xi) . $$
\een
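As a classical illustration of the last case, consider the function $\vert\xi\vert^{-n}$, homogeneous of degree $-n$ on $\mathbb{R}^n\setminus\{0\}$, which admits no homogeneous distributional extension to $\mathbb{R}^n$. The finite part regularization
$$\left\langle \pf\vert\xi\vert^{-n},\varphi\right\rangle=\int_{\vert\xi\vert\leqslant 1}\frac{\varphi(\xi)-\varphi(0)}{\vert\xi\vert^{n}}d^n\xi+\int_{\vert\xi\vert> 1}\frac{\varphi(\xi)}{\vert\xi\vert^{n}}d^n\xi$$
satisfies $\left(\xi_i\partial_{\xi_i}+n\right)\pf\vert\xi\vert^{-n}=\Vol(\mathbb{S}^{n-1})\,\delta^{\mathbb{R}^n}_{\{0\}}(\xi)$, and every other extension weakly homogeneous of degree $-n$ differs from it by a multiple of $\delta^{\mathbb{R}^n}_{\{0\}}$, which is annihilated by $\xi_i\partial_{\xi_i}+n$; this is the Jordan block phenomenon in its simplest form.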
\step{5} We now study the consequences of the above representation in position space. { If $\cv\geqslant -n$ is an integer then we} have
$$
\bea
\frac{1}{(2\pi)^n}\int_{\xi\in \mathbb{R}^n} e^{i\left\langle \xi,h\right\rangle} \tilde{\asigma}(x;\xi)d^n\xi &=
\sum_{k=0}^{\cv+n-1} T_{ n+\cv-k}(x,h) + \sum_{k=\cv+n}^{N} T_{n+\cv-k}(x,h) \fantom +
\frac{1}{(2\pi)^n}\int_{\xi\in \mathbb{R}^n} e^{i\left\langle \xi,h\right\rangle}R_{\cv-N}(x;\xi)d^n\xi,
\eea
$$
where
$$
T_{n+\cv-k}(x,h)=\frac{1}{(2\pi)^n}\int_{\xi\in \mathbb{R}^n} e^{i\left\langle \xi,h\right\rangle} \tilde{\asigma}_{\cv-k}(x;\xi)d^n\xi.
$$
It follows that by inverse Fourier transform, when $\cv-k>-n $, $ T_{n+\cv-k}(x,.)$ is tempered in the variable $h$ and is homogeneous in the sense of tempered distributions:
$$\forall \lambda>0, \quad T_{n+\cv-k}(x,\lambda h)=\lambda^{k-n-\cv}T_{n+\cv-k}(x,h).$$
When $\cv-k\leqslant -n $, the distribution $T_{n+\cv-k}$ is quasihomogeneous in the variable $h$, i.e.~when we scale by any $\lambda>0$ w.r.t.~$h$, a $\log\lambda$ term appears as a factor:
$$\left\langle T_{n+\cv-k}(x,\lambda.),\varphi\right\rangle=\lambda^{k-n-\cv} \left\langle T_{n+\cv-k}(x,.),\varphi\right\rangle+\lambda^{k-n-\cv}\log\lambda \left\langle (\euler-(k-\cv-n)) T_{n+\cv-k}(x,.),\varphi\right\rangle .$$
Observe that the remainder term
reads:
\begin{eqnarray*}
\frac{1}{(2\pi)^n}\int_{\xi\in \mathbb{R}^n} e^{i\left\langle \xi,h\right\rangle}R_{\cv-N}(x;\xi)d^n\xi
\end{eqnarray*}
which belongs to $\pazocal{C}^{N-\cv-n}$ since for $\chi\in C^\infty_\c(\mathbb{R}^n)$, $\chi=1$ near $\xi=0$, we get:
$$ \vert (1-\chi)(\xi) R_{\cv-N}(x;\xi) \vert\leqslant C_{\cv-N}(1+\vert \xi\vert)^{\cv-N}$$
which implies that $\int_{\xi\in \mathbb{R}^n} e^{i\left\langle \xi,h\right\rangle}(1-\chi)(\xi)R_{\cv-N}(x;\xi)d^n\xi\in \pazocal{C}^{N-\cv-n}$ by \cite[Lem.~D.2]{Dang2020} and we can also observe that $\int_{\xi\in \mathbb{R}^n} e^{i\left\langle \xi,h\right\rangle}\chi(\xi)R_{\cv-N}(x;\xi)d^n\xi$ is analytic in $h$ by the Paley--Wiener theorem.
{ If $\cv\in\mathbb{C}\setminus(\clopen{-n,+\infty}\cap \mathbb{Z})$, then we have a simpler decomposition
$$ \bea
\frac{1}{(2\pi)^n}\int_{\xi\in \mathbb{R}^n} e^{i\left\langle \xi,h\right\rangle} \tilde{\asigma}(x;\xi)d^n\xi &=
\sum_{k=0}^{N} T_{ n+\cv-k}(x,h) +
\frac{1}{(2\pi)^n}\int_{\xi\in \mathbb{R}^n} e^{i\left\langle \xi,h\right\rangle}R_{\cv-N}(x;\xi)d^n\xi,
\eea$$
where each $T_{ n+\cv-k}(x,h)$ is smooth in $x$ and a tempered distribution in $h$ homogeneous of degree $n+\cv-k$ (there are no logarithmic terms).
}
\step{6} Observe that in the new coordinates $(x,h)$,
the scaling with respect to $\euler$ takes the simple form
$ {\left( e^{-t\euler}f\right)} (x,h) = f(x,e^{-t} h)$ for smooth functions $f$.
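For instance, a monomial $f(x,h)=g(x)h^\beta$ satisfies $e^{-t\euler}f=e^{-\vert\beta\vert t}f$, so that $\int_0^\infty e^{-ts}\left(e^{-t\euler}f\right)dt=\frac{f}{s+\vert\beta\vert}$: smooth terms only contribute poles at non-positive integers, and it is the distributions $T_{n+\cv-k}$ which are responsible for the poles at positive values of $s$.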
So the provisional conclusion { for integer $\cv\geqslant -n$} is that when we scale w.r.t.~the Euler vector field, we get an asymptotic expansion
in terms of conormal distributions:
$$ \bea
{ e^{-t\euler}}{ \pazocal{K}}&=\sum_{k=0}^{\cv+n-1} {e^{(n+\cv-k)t}} T_{n+\cv-k}+T_0+t{ \left( \euler T_0\right)} \fantom +\sum_{k=\cv+n+1 }^N
{e^{(n+\cv-k)t}} \left(T_{n+\cv-k}+t (\euler -(k-\cv-n) ) T_{n+\cv-k} \right) + R(x,e^{-t}h)
\eea
$$
where $C_0=\euler T_0$ and the remainder term $R$ is a H\"older function of regularity $\pazocal{C}^{N-\cv-n}$ so it has a Taylor expansion up to order $N-\cv-n$.
By taking the Laplace transform in the variable $t$, for any test form $\chi$, we find that the dynamical correlator
$$ \int_0^\infty e^{-ts} \left\langle { \left(e^{-t\euler} \pazocal{K}\right)},\chi \right\rangle dt $$
admits an analytic continuation
to a meromorphic function on $\mathbb{C}\setminus \{n+\cv,\dots ,0 ,-1,\dots \}$
with simple poles at $\{n+\cv,\dots ,1 \}$ and poles of order at most $2$
at the points $\{0,-1,\dots \}$.
We have a Laurent series expansion of the form:
$$\bea
\int_0^\infty e^{-ts} { \left(e^{-t\euler} \pazocal{K}\right)} dt&=\sum_{k=0}^{\cv+n-1} \frac{ T_{n+\cv-k}}{s+k-\cv-n}+ \frac{T_0}{s}+\frac{\euler T_0}{s^2}
\fantom +\sum_{k=\cv+n+1 }^N \frac{ T_{n+\cv-k}}{s+k-\cv-n}+ \frac{(\euler-k+\cv+n)T_{n+\cv-k}}{(s+k-\cv-n)^2 } \fantom + \int_0^\infty e^{-ts} R(x,e^{-t}h) dt,
\eea
$$
where the term $\int_0^\infty e^{-ts} R(x,e^{-t}h)dt$ is holomorphic on the half-plane $\Re s >0$ and meromorphic
on the half-plane $\Re s > \cv+n-N$ due to the H\"older regularity $R\in \pazocal{C}^{N-\cv-n}$.
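The Laurent coefficients above are governed by the elementary identities
$$\int_0^\infty e^{-ts} e^{\mu t} dt=\frac{1}{s-\mu}, \qquad \int_0^\infty e^{-ts}\, t\, e^{\mu t} dt=\frac{1}{(s-\mu)^2},\qquad \Re s>\Re\mu,$$
applied with $\mu=n+\cv-k$: terms without logarithmic mixing produce simple poles, while each factor of $t$ produced by the Jordan blocks raises the pole order by one.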
{ If $\cv\in\mathbb{C}\setminus(\clopen{-n,+\infty}\cap \mathbb{Z})$, then the above discussion simplifies considerably because of the absence of logarithmic mixing, and we find that $\int_0^\infty e^{-ts} \left( e^{-t\euler}\pazocal{K}\right) dt$ extends as a meromorphic function with only simple poles at $n+\cv, n+\cv-1,\dots$, and therefore $0$ is not a pole of $\mathfrak{L}_s\pazocal{K}$. This means that $\Pi_0(\pazocal{K})=0$ when $\cv\in\mathbb{C}\setminus(\clopen{-n,+\infty}\cap \mathbb{Z})$.}
\end{refproof}
\subsection{Dynamical residue equals Wodzicki residue in pseudodifferential case}\label{ss:wodzicki}
The log-polyhomogeneity of pseudodifferential Schwartz kernels ensures that their dynamical residue is well-defined. Our next objective is to show that it coincides with the Guillemin--Wodzicki residue.
More precisely, if $\Psi^m_\ph(M)$ is the class of classical pseudodifferential operators of order $m$, we are interested in the \emph{Guillemin--Wodzicki residue density} of $A\in\Psi^m_\ph(M)$, which can be defined at any $x\in M$ as follows. In a local coordinate chart $\kappa:U\to \kappa(U)\subset \mathbb{R}^n$, the symbol $\asigma(x;\xi)$ is given by
$$ \left(\kappa_{*} A\left(\kappa^*u\right)\right)(x)=\frac{1}{(2\pi)^n} \int_{\mathbb{R}^n\times \mathbb{R}^n} e^{i\left\langle\xi,x-y\right\rangle}\asigma(x;\xi)u(y) d^n\xi d^ny$$
for all $u\in C^\infty_{\rm c}\left(\kappa\left(U\right)\right)$, and one defines the density
$$
\wres A\defeq \frac{1}{(2\pi)^n}\left(\int_{\mathbb{S}^{n-1}} \asigma_{-n}(x;\xi) \iota_{V} d^n\xi\right)d^nx \in \vert\Lambda^{\top}M \vert,
$$
where $V=\sum_{i=1}^n\xi_i\partial_{\xi_i}$ and $\asigma_{-n}$ is the symbol of order $-n$ in the polyhomogeneous expansion.
If for instance $M$ is compact then the \emph{Guillemin--Wodzicki residue} is obtained by integrating over $x$. In what follows we will only consider densities as this allows for greater generality.
{ Note that when $A\in \Psi_{\ph}^m(M)$ with $m\in \mathbb{C}\setminus (\clopen{-n,+\infty}\cap \mathbb{Z})$
then the above residue vanishes, because in this case there is no term homogeneous of degree $-n$ in the asymptotic expansion of the symbol.}
It is proved in~\cite[Prop.~4.5]{Lesch} that the residue density is intrinsic. This is related to the fact that in the local chart, $d^nx\,d^n\xi$ is the Liouville measure, which is intrinsic and depends only on the canonical symplectic structure on $T^*M$.
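As a sanity check of the normalization, one can consider the classical Riemannian example (independent of the Lorentzian setting of the next sections): if $g$ is a Riemannian metric on $M$ and $A=(1-\Delta_g)^{-\frac{n}{2}}\in\Psi^{-n}_\ph(M)$, then the component of degree $-n$ of its symbol is $\asigma_{-n}(x;\xi)=(g^{ij}(x)\xi_i\xi_j)^{-\frac{n}{2}}$, and the Gaussian identity $\int_{\mathbb{S}^{n-1}}Q(\xi)^{-\frac{n}{2}}\iota_Vd^n\xi=\Vol(\mathbb{S}^{n-1})(\det Q)^{-\frac{1}{2}}$, valid for positive definite quadratic forms $Q$, yields
$$\wres A=\frac{\Vol(\mathbb{S}^{n-1})}{(2\pi)^n}\sqrt{\det g(x)}\, d^nx=\frac{\Vol(\mathbb{S}^{n-1})}{(2\pi)^n}\dvol_g.$$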
\begin{thm}[Wodzicki residue, dynamical formulation]\label{wodzickipdo}
Let $M$ be a smooth manifold and let $K_{A}(.,.)\pi_2^*\dvol_g\in \pazocal{D}^\prime_{N^*\Delta}(M\times M)\otimes \pi_2^*\vert \Lambda^{\top}M \vert$ be the kernel of a classical pseudodifferential operator $A\in \Psi^\cv_{\ph}(M)$ of order {$\cv\in \mathbb{C}$}.
Then, for every Euler vector field $\euler$ we have the identity
\beq \label{toprove}
\wres A= (\resdyn {\pazocal{K}_{A}}) \dvol_g,
\eeq
where $\wres A\in\vert\Lambda^{\top}M \vert$ is the Guillemin--Wodzicki residue density of $A$ and $\resdyn {\pazocal{K}_{A}}=\iota_\Delta^* \euler ( \Pi_0 ({\pazocal{K}_{A}}))$ is the dynamical residue of ${\pazocal{K}_{A}}$. { If $\cv\in \mathbb{C}\setminus (\clopen{-n,+\infty}\cap \mathbb{Z})$
then both sides of the above equality vanish.}
\end{thm}
In particular, $ (\resdyn {\pazocal{K}_{A}}) \dvol_g$ {does not depend} on $\euler$.
\begin{refproof}{Theorem \ref{wodzickipdo}}
We use the notation from the proof of Proposition~\ref{p:pseudopoly}.
Recall that
$$ \Pi_0({\pazocal{K}_{A}})(x,h)=T_0(x,h)=\frac{1}{(2\pi)^n} \int_{\xi\in \mathbb{R}^n} e^{i\left\langle \xi,h\right\rangle}\tilde{\asigma}_{-n}(x;\xi)d^n\xi$$
where the oscillatory integral representation uses the homogeneous components of the symbol denoted by $\tilde{\asigma} \in C^\infty(\kappa(U)\times \mathbb{R}^n)$; this
symbol $\tilde{\asigma}$ was constructed from the initial symbol $\asigma\in C^\infty(\kappa(U)\times \mathbb{R}^n)$ using the Kuranishi trick and
is adapted to the coordinate frame $(x,h)\in C^\infty(\kappa(U)\times \mathbb{R}^n, \mathbb{R}^{2n} )$
in which $\euler$ has the normal form $\kappa^2_*\euler=h^i\partial_{h^i}$.
Let us examine the meaning of the term $\euler T_0$ and
relate it to the Wodzicki residue.
By Proposition~\ref{p:residue},
the residue is the homological obstruction for the term
$\tilde{\asigma}_{-n}(x;.) $ to admit a scale invariant distributional extension to $\kappa(U)\times \mathbb{R}^n$.
By Remark~\ref{l:scalanomal},
this reads
$$
(\xi_i\partial_{\xi_i}+n) \tilde{\asigma}_{-n}(x;\xi)= \left(\int_{\vert \xi\vert=1}\tilde{\asigma}_{-n}(x;\xi) \iota_{\sum_{i=1}^n\xi_i\partial_{\xi_i}} d^n\xi \right)\delta_{\{0\}}(\xi)
$$
{ where $\iota_{\sum_{i=1}^n\xi_i\partial_{\xi_i}}$ is the contraction operator by the vector field $\sum_{i=1}^n\xi_i\partial_{\xi_i}$ in the Cartan calculus.}
By inverse Fourier transform, $\euler T_0=\frac{1}{(2\pi)^n} \left(\int_{\vert \xi\vert=1}\tilde{\asigma}_{-n}(x;\xi)\iota_{\sum_{i=1}^n\xi_i\partial_{\xi_i}} d^n\xi\right)$, which is a smooth function of $x\in \kappa(U)$.
We are not finished yet since the Wodzicki residue density is defined in terms of the
symbol $\asigma(x;\xi) \in C^\infty(\kappa(U)\times \mathbb{R}^n)$ we started with. Let us recall that $\asigma$ is defined in
such a way that $\kappa^2_*{\pazocal{K}_A}(x,x-y)=\frac{1}{(2\pi)^n} \int_{\xi\in \mathbb{R}^n} e^{i\left\langle \xi,x-y\right\rangle}\asigma(x;\xi)d^n\xi$,
and the Wodzicki residue equals $$
\wres(A)(x)=\frac{1}{(2\pi)^n} \int_{\vert \xi \vert=1}\asigma_{-n}(x;\xi) \iota_{\sum_{i=1}^n\xi_i\partial_{\xi_i}} d^n\xi.$$
{ We use the identity from equation~(\ref{e:exp1referee}): $$\tilde{\asigma}(x;\xi)\sim A(x,y;\xi)\sim \sum_{p=0}^\infty \sum_{\vert \beta\vert+k=p} \frac{i^{-\vert\beta\vert}}{\beta !} \left(\partial_\xi^\beta\partial_y^\beta A_{\cv-k}\right)(x,y;\xi)|_{x=y} .$$
For the residue computation, we need to extract the relevant term $\tilde{\asigma}_{-n}$ on the r.h.s.~which is homogeneous of degree $-n$, so we need to set $\cv-p=-n$, hence $p=n+\cv $. This term reads $\tilde{\asigma}_{-n}(x;\xi)=\sum_{\vert \beta\vert+k=n+\cv} \frac{i^{-\vert\beta\vert}}{\beta !} \left(\partial_\xi^\beta\partial_y^\beta A_{\cv-k}\right)(x,y;\xi)|_{x=y} $.}
We now make the crucial observation that for all $x\in \kappa(U)$,
$$
\bea
&\int_{\vert \xi\vert=1} \tilde{\asigma}_{-n}(x;\xi) \iota_{\sum_{i=1}^n\xi_i\partial_{\xi_i}} d^n\xi
\\ &=
\sum_{\vert \beta\vert+k={ n+\cv} }
\int_{\vert \xi\vert=1} \frac{i^{-\vert \beta\vert}}{\beta !} \partial_\xi^\beta \partial_y^\beta A_{\cv-k}(x,y;\xi) |_{x=y} \iota_{\sum_{i=1}^n\xi_i\partial_{\xi_i}} d^n\xi \\
&=\int_{\vert \xi\vert=1} A_{-n}(x,y;\xi) |_{x=y} \iota_{\sum_{i=1}^n\xi_i\partial_{\xi_i}} d^n\xi=\int_{\vert \xi\vert=1} \asigma_{-n}(x;\xi) \iota_{\sum_{i=1}^n\xi_i\partial_{\xi_i}} d^n\xi
\eea
$$
by the vanishing property ({Corollary}~\ref{l:vanishing1}), which implies that the integrals of
all the terms containing derivatives vanish.
Therefore by inverse Fourier transform, we find that
\begin{eqnarray}
C_0(x)=\frac{1}{(2\pi)^n}\int_{\vert \xi\vert=1} \asigma_{-n}(x;\xi) \iota_{\sum_{i=1}^n\xi_i\partial_{\xi_i}} d^n\xi.
\end{eqnarray}
The residue density $\left(\int_{\vert \xi\vert=1} \asigma_{-n}(x;\xi)\iota_{\sum_{i=1}^n\xi_i\partial_{\xi_i}} d^n\xi\right) {d^nx}$ is \emph{intrinsic} as proved by Lesch \cite[Prop.~4.5]{Lesch} (it is defined in coordinate charts but satisfies compatibility conditions that make it intrinsic on $M$).
To conclude, observe that $\euler^2T_0=0$~\footnote{This is a consequence of the Jordan blocks having size at most $2$.}, hence by the Cauchy formula, for any small disc $D$ around $0$:
\begin{eqnarray*}
\frac{1}{2i\pi}\int_{\partial D}{ \left(\euler \pazocal{K}_A\right)}(z) dz|_{U\times U}={\left(\euler T_0\right)}(x,y)|_{U\times U}=\frac{1}{(2\pi)^n}\int_{\vert \xi\vert=1} \asigma_{-n}(x;\xi) \iota_{\sum_{i=1}^n\xi_i\partial_{\xi_i}} d^n\xi,
\end{eqnarray*}
which in combination with the fact that $y\mapsto{\left(\euler T_0\right)}(x,y)$ is locally constant proves \eqref{toprove} on $U$. The above identity globalizes immediately, which finishes the proof.
\end{refproof}
\section{Holonomic singularities of the Hadamard parametrix}
\label{section4}
\subsection{Hadamard parametrix} \label{s:hadamardformal}
We now consider the setting of a { time-oriented} Lorentzian manifold $(M,g)$, and we assume it is of \emph{even} dimension $n$.
Let $P=\square_g$ be the wave operator (or d'Alembertian), i.e.~it is the Laplace--Beltrami operator associated to the Lorentzian metric $g$. Explicitly, using the notation $\module{g}=\module{\det g}$,
$$
\bea
P&=\module{g(x)}^{-\frac{1}{2}}\partial_{x^j} \module{g(x)}^{\frac{1}{2}}g^{jk}(x)\partial_{x^k} \\
& = \partial_{x^j}g^{jk}(x)\partial_{x^k}+b^k(x)\partial_{x^k}
\eea
$$
where we sum over repeated indices, and $b^k(x)=\module{g(x)}^{-\frac{1}{2}} g^{jk}(x)(\partial_{x^j}\module{g(x)}^{\frac{1}{2}} )$. For $\Im z\geqslant 0$ we consider the operator $P-z$.
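For instance, on Minkowski space $(\mathbb{R}^n,\eta)$ with $\eta=dx_0^2-(dx_1^2+\cdots+dx_{n-1}^2)$ one has $\module{g}=1$ and $b^k=0$, so that $P=\partial_{x^0}^2-\sum_{i=1}^{n-1}\partial_{x^i}^2$ is the usual d'Alembertian.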
The Hadamard parametrix for $P-z$ is constructed in several steps which we briefly recall following \cite{Dang2020}.
\subsubsection*{Step 1} Let $\eta=dx_0^2-(dx_1^2+\cdots+dx_{n-1}^2)$ be the Minkowski metric on $\rr^n$, and consider the corresponding quadratic form $$
\vert\xi\vert_\eta^2 = -\xi_0^2+\sum_{i=1}^{n-1}\xi_i^2,
$$
defined for convenience with a {minus} sign. For $\cv\in \cc$ and $\Im z >0$, the distribution $\left(\vert\xi\vert_\eta^2-z \right)^{-\cv}$ is well-defined by pull-back from $\rr$. More generally, for $\Im z \geqslant 0$, the limit $\left(\vert\xi\vert_\eta^2-z -i0\right)^{-\cv}=\lim_{\varepsilon\to 0^+}\left(\vert\xi\vert_\eta^2-z -i\varepsilon\right)^{-\cv}$ from the upper half-plane is well defined as a distribution on $\rr^n\setminus \{0\}$. If $z\neq 0$ it can be extended to a family of homogeneous distributions on $\rr^n$, holomorphic in $\cv\in \cc$ (and to a meromorphic family if $z=0$). We introduce special notation for its appropriately normalized Fourier transform,
\beq\label{eq:defFsz}
\Fs{z}\defeq\frac{\Gamma(\cv+1)}{(2\pi)^{n}} \int e^{i\left\langle x,\xi \right\rangle}\left(\vert\xi\vert_\eta^2-i0-z \right)^{-\cv-1}d^{n}\xi,
\eeq
which defines a family of distributions on $\rr^n$, holomorphic in $\cv\in\cc\setminus\{-1,-2,\dots\}$ for $\Im z\geqslant 0$, $z\neq 0$.
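Note for later use the scaling behaviour obtained from \eqref{eq:defFsz} by the substitution $\xi\mapsto\xi/\lambda$: for every $\lambda>0$,
$$\Fs{z}(\lambda x)=\lambda^{2(\cv+1)-n}\,\Fs{\lambda^{2}z}(x),$$
i.e.~$\Fs{z}$ is homogeneous of degree $2(\cv+1)-n$ w.r.t.~the joint scaling $(x,z)\mapsto(\lambda x,\lambda^{2}z)$; this is the position space counterpart of the homogeneity of degree $-2(\cv+1)$ in $(\xi,z)$ which appears in Lemma~\ref{l:kuranishi} below.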
\subsubsection*{Step 2} Next, one { pulls back} the distributions $\Fse{z}$ to a neighborhood of the diagonal $\diag\subset M\times M$ using the exponential map.
More precisely, this can be done as follows. Let $\exp_x:T_xM\to M$ be the exponential geodesic map. We consider a neighborhood
of the zero section $\zero $ in $TM$
on which the
map
\beq \label{xv}
(x;v)\mapsto (x,\exp_x(v))\in M^2
\eeq
is a local
diffeomorphism
onto its image, denoted by $\pazocal{U}$.
Let
{$(e_1,\dots,e_n)$} be a local { time-oriented} orthonormal frame
defined on an open set and {$(\varalpha^i)_{i=1}^n$} the corresponding coframe. For $(x_1,x_2)\in\pazocal{U}$ (with $x_1,x_2$ in that open set), we define the map
\beq\label{applipullback}
G: (x_1,x_2)\mapsto {
\Big(G^i(x_1,x_2)=
\underset{\in T_{x_1}^* M}{\underbrace{\varalpha^i_{x_1}}}\underset{\in T_{x_1}M}{\underbrace{(\exp_{x_1}^{-1}(x_2))} } \Big)_{i=1}^n \in\mathbb{R}^{n}.}
\eeq
Here, $(x_1,x_2) \mapsto (x_1;\exp_{x_1}^{-1}(x_2))$ is a diffeomorphism as it is the inverse of \eqref{xv}, and so $G$ is a submersion.
For any distribution $f\in
\pazocal{D}^\prime(\mathbb{R}^{n})$, we can consider the pull-back $(x_1,x_2)\mapsto G^*f(x_1,x_2)$, and if $f$ is $O(1,n-1)_+^\uparrow$-invariant,
then
the pull-back does not depend on the choice
of orthonormal frame $(e_i)_{i=1}^n$. This allows us to canonically define the pull-back $G^*f\in \pazocal{D}^\prime(\pazocal{U})$ of
$O(1,n-1)_+^\uparrow$-invariant
distributions as distributions
defined on the open set $\pazocal{U}$, which is in fact a neighborhood
of the diagonal $\diag$.
\begin{defi}\label{d:fsneardiag}
For $\cv\in\cc$, the distribution
$\Fe{z}= G^*\Fse{z}\in \pazocal{D}^\prime(\pazocal{U})$ is defined by pull-back of the $O(1,n-1)_+^\uparrow$-invariant distribution $\Fse{z}\in \pazocal{D}^\prime\left(\mathbb{R}^{n}\right)$ introduced in \eqref{eq:defFsz}.
\end{defi}
\subsubsection*{Step 3} The Hadamard parametrix is constructed in normal charts using the family $\Fe{z}$. Namely, for fixed
$\varvarm\in M$, we express the distribution $x\mapsto \mathbf{F}_\cv(z,\varvarm,x)$
in normal coordinates centered at $\varvarm$, defined on some $U\subset T_{\varvarm}{M}$. Instead of using the somewhat heavy notation $ \mathbf{F}_\cv(z,\varvarm,\exp_\varvarm(\cdot))$ we will simply write
$\Feg{z}\in \pazocal{D}^\prime(U)$. One then looks for a parametrix $H_N(z)$ of order $N$ of the form
\begin{eqnarray}
\boxed{H_N(z)=\sum_{k=0}^N u_k \Feg[k]{z} \in \pazocal{D}^\prime(U) },
\end{eqnarray}
and after computing $\left(P-z\right)H_N(z,.)$ one finds that the sequence of functions
$(u_k)_{k=0}^\infty$ in $C^\infty(U)$ should solve the hierarchy of transport equations
\begin{eqnarray}
\boxed{2k u_k+ b^i(x)\eta_{ij}x^j u_k+2 x^i\partial_{x^i} u_k+2Pu_{k-1}=0}
\end{eqnarray}
with initial condition $u_0(0)=1$, where by convention $u_{k-1}=0$ for $k=0$, and we sum over repeated indices. The transport equations indeed have a unique solution, and they imply that on $U$, $H_N(z,.)$ solves
\begin{equation}
\left(P-z\right)H_N(z,.)={\module{g}^{-\frac{1}{2}}}\delta_0+(Pu_N)\mathbf{F}_N.
\end{equation}
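For instance, for $k=0$ the transport equation reduces to $b^i(x)\eta_{ij}x^ju_0+2x^i\partial_{x^i}u_0=0$ with $u_0(0)=1$. Using the Gauss lemma, which in normal coordinates gives $g^{ij}(x)\eta_{jk}x^k=x^i$, one checks that $b^i(x)\eta_{ij}x^j=x^j\partial_{x^j}\log\module{g(x)}^{\frac{1}{2}}$, whence $u_0(x)=\module{g(x)}^{-\frac{1}{4}}$, the square root of the van Vleck--Morette determinant expressed in normal coordinates.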
\subsubsection*{Step 4} The final step consists in considering the dependence on $\varvarm$ to obtain a parametrix on the neighborhood $\cU$ of the diagonal. One shows that $\pazocal{U} \ni (x_1,x_2) \mapsto u_k(\varalpha(\exp_{x_1}^{-1}(x_2))) $ is smooth in both arguments, and since each $\mathbf{F}_{k}(z,.)$ is already defined on $\pazocal{U}$, $$H_N(z,x_1,x_2)=\sum_{k=0}^N u_k(\varalpha(\exp_{x_1}^{-1}(x_2)))\mathbf{F}_k(z,x_1,x_2)
$$
is well defined as a distribution on $\pazocal{U}$.
Dropping the exponential map in the notation from now on for simplicity, the \emph{Hadamard parametrix} $H_N(z,.)$ of order $N$ is by definition the distribution
\begin{equation}
\boxed{H_N(z,.)=\sum_{k=0}^N u_k \Fe[k]{z}\in \pazocal{D}^\prime(\pazocal{U}).}
\end{equation}
Finally, we use an arbitrary cutoff function $\chi\in \cf(M^2)$ supported in $\pazocal{U}$ to extend the definition of $H_N(z,.)$ to $M^2$,
$$
\boxed{H_N(z,.)=\sum_{k=0}^N \chi u_k \Fe[k]{z}\in \pazocal{D}^\prime({M\times M}).}
$$
The Hadamard parametrix extended to $M^2$ satisfies
\beq\label{eq:PzHN}
\left(P-z\right) H_N(z,.)={ \module{g}^{-\frac{1}{2}}}\delta_{\diag}+(Pu_N)\mathbf{F}_N(z,.)\chi+r_N(z,.),
\eeq
where
$\module{g}^{-\frac{1}{2}}\delta_\Delta(x_1,x_2)$ is the Schwartz kernel of the identity map and $r_N(z,.)\in \pazocal{D}^\prime({M\times M})$ is an error term supported in a punctured neighborhood of $\Delta$ which is due to the presence of the cutoff $\chi$.
\subsection{Oscillatory integral representation and log-polyhomogeneity}\label{ss:polhom} Given an Euler vector field $\euler$, our current objective is to study the behaviour
of ${\left( e^{-t\euler}H_N\right)}(z)$ and in particular to prove that
$H_N(z)$ is tame log-polyhomogeneous near $\Delta$. The proof uses an
oscillatory integral representation of the Hadamard parametrix involving
symbols with values in distributions { whose wave front set in the $\xi$ variable is contained in the conormal of the cone $\{Q=0\}\subset \mathbb{R}^n$. This conormal is a non-smooth Lagrangian in $T^*\mathbb{R}^n$ whose singularity is at the vertex of the cone $\{Q=0\}\subset \mathbb{R}^n$}.
{
\begin{rema}[Coordinate frames versus charts]
In the present part, instead of using charts we
favor a presentation using coordinate frames
which makes notation
simpler. The two viewpoints are equivalent
since given a chart $\kappa:U\to \kappa(U)\subset \mathbb{R}^n$, the linear
coordinates $(x^i)_{i=1}^n \in \mathbb{R}^{n*}$ on $\mathbb{R}^n$ can be pulled back on
$U$ as a coordinate frame $(\kappa^*x^i)_{i=1}^n\in C^\infty(U;\mathbb{R}^n)$.
\end{rema}}
We start by representing the distributions $\mathbf{F}_\cv$ defined in \sec{s:hadamardformal} by oscillatory integrals using the
coordinate frames from Proposition~\ref{p:normalform} adapted to
the Euler vector field $\euler$.
\begin{lemm}\label{l:kuranishi}
Let $(M,g)$ be a { time-oriented} Lorentzian manifold and $\euler$ an Euler vector field. Let $p\in \Delta$, and let $(x^i,h^i)_{i=1}^n$ be a local coordinate frame defined on a neighborhood $\Omega\subset M\times M$ of $p$ such that $\euler=\sum_{i=1}^n h^i\partial_{h^i}$ on $\Omega$.
In this coordinate frame, $\mathbf{F}_\cv(z,.,.)$ has the representation
$$
\mathbf{F}_\cv(z,x,h)=\int_{\mathbb{R}^n} e^{i\left\langle\xi,h\right\rangle} A_\cv(z,x,h;\xi) d^n\xi,
$$
where $A_\cv$ depends holomorphically on $z\in \{\Im z>0\}$, is homogeneous in $(z,\xi)$ of degree $-2(\cv+1)$
w.r.t.~the scaling
$(\lambda^2z,\lambda\xi) $, and for $\mu\neq 0$, $A_\cv(i0+\mu,.,.;.)$ is a {distribution} in $\Omega\times \mathbb{R}^{n*}$.
\end{lemm}
Integrands such as $A_\cv(i0+\mu,.,.;.)$ are sometimes called distribution-valued amplitudes in the literature: they are not smooth symbols but distributions, yet they behave like symbols of oscillatory integrals in the sense that they are homogeneous with respect to scaling, and the scaling degree in $\xi$ is responsible for the singularities of $\mathbf{F}_\cv$.
\begin{refproof}{Lemma \ref{l:kuranishi}}
Our proof again uses in an essential way the so-called Kuranishi trick.
Let $s=(s^i)_{i=1}^n$ denote the orthonormal moving {coframe} from
\sec{s:hadamardformal} (denoted $(\varalpha^i)_{i=1}^n$ there).
We denote by $\exp_{m}:T_mM\to M$ the geodesic exponential
map induced by the metric $g$.
We claim that
\beq\label{eq:idM}
{ \Big( \underset{\in T^*_{(x,0)}\Omega}{ \underbrace{s^i_{(x,0)}}}
\underset{\in T_{(x,0)}\Omega}{\underbrace{\left(\exp^{-1}_{(x,0)}(x,h)\right)}}= M(x,h)^i_jh^j\Big)_{i=1}^n }
\eeq
where
{
$M:\Omega \ni (x,h)\mapsto (M(x,h)_i^j)_{1\leqslant i,j\leqslant n}\in \GL_n(\mathbb{R})$} is a smooth map
such that $M(x,0)=\id$.
By the fundamental theorem of calculus,
\begin{eqnarray*}
\exp^{-1}_{(x,0)}(x,h)=\int_{0}^1 \frac{d}{dt} \exp^{-1}_{(x,0)}(x,th) dt =\left(\int_{0}^1 d\exp^{-1}_{(x,0)}(x,th) dt\right)(h).
\end{eqnarray*}
If we set $M(x,h)=s_{(x,0)}\left(\int_{0}^1 d\exp^{-1}_{(x,0)}(x,th) dt\right)$ then $M(x,0)=\id$
so up to choosing some smaller open set $\Omega$, the matrix $M(x,h)$ is
invertible for $(x,h)\in \Omega$ and satisfies \eqref{eq:idM}.
We now insert \eqref{eq:idM} into the definition of $\mathbf{F}_\cv$:
\[\bea
\mathbf{F}_\cv(z,x,h)&=\frac{\Gamma(\cv+1)}{(2\pi)^n}\int_{\mathbb{R}^n} e^{i\left\langle\xi,s_{(x,0)}\left(\exp^{-1}_{(x,0)}(x,h)\right)\right\rangle} \left(Q(\xi)-z \right)^{-\cv-1} d^n\xi\\
&=\frac{\Gamma(\cv+1)}{(2\pi)^n}\int_{\mathbb{R}^n} e^{i\left\langle \t M(x,h)\xi,h\right\rangle} \left(Q(\xi)-z \right)^{-\cv-1} d^n\xi
\\
&=\frac{\Gamma(\cv+1)}{(2\pi)^n}\int_{\mathbb{R}^n} e^{i\left\langle \xi,h\right\rangle} \left(Q((\t M(x,h))^{-1}\xi)-z \right)^{-\cv-1} \module{M(x,h)}^{-1} d^n\xi.
\eea
\]
This motivates setting
$A_\cv(z,x,h;\xi)=\left(Q((\t M(x,h))^{-1}\xi)-z \right)^{-\cv-1} \module{M(x,h)}^{-1}$
in $\Omega\times \mathbb{R}^{n*}$, which is homogeneous of degree $-2(\cv+1)$ w.r.t.~the scaling defined as $(\lambda^2z,\lambda\xi)$ for $\lambda>0$. If we let $\Im z\rightarrow 0^+$, then we view $A_\cv(-m^2+i0,x,h;\xi)$ as a distribution-valued symbol defined by the pull-back of
$\left(Q(.)+m^2-i0\right)^{-\cv-1}$ by the submersive map
$\Omega\times \mathbb{R}^{n*} \ni (x,h;\xi) \mapsto (\t M(x,h))^{-1}\xi\in \mathbb{R}^{n*}$, where the fact that it is a submersion comes from the invertibility of $M(x,h)\in M_n(\mathbb{R})$ for all $(x,h)\in \Omega$.
The formal change of variable can be justified with a dyadic partition of unity $1=\chi(\xi)+\sum_{j=1}^\infty \beta(2^{-j}\xi)$ as follows.
Observe that $1=\chi((\t M(x,h))^{-1}\xi)+\sum_{j=1}^\infty \beta((\t M(x,h))^{-1}2^{-j}\xi)$. We know that
$\left(Q(\xi)-z \right)^{-\cv-1}$ is a distribution of order $\plancher{\Re \cv}+1$ hence by the change of variable formula for
distributions:
$$
\bea
&\sum_{j=1}^\infty\left\langle (Q(.)-z)^{-\cv-1},\beta(2^{-j}.) e^{i\left\langle \t M(x,h).,h\right\rangle} \right\rangle+\left\langle (Q(.)-z)^{-\cv-1},\chi(.) e^{i\left\langle \t M(x,h).,h\right\rangle} \right\rangle \\
&=\sum_{j=1}^\infty\left\langle (Q((\t M(x,h))^{-1}.)-z)^{-\cv-1},\beta((\t M(x,h))^{-1}2^{-j}.) e^{i\left\langle .,h\right\rangle} \right\rangle \module{M(x,h)}^{-1}
\fantom + \left\langle (Q((\t M(x,h))^{-1}.)-z)^{-\cv-1},\chi((\t M(x,h))^{-1}.) e^{i\left\langle .,h\right\rangle} \right\rangle \module{M(x,h)}^{-1}\\
&=\sum_{j=1}^\infty 2^{j(n-2(\cv+1))}\left\langle (Q((\t M(x,h))^{-1}.)-2^{-2j}z)^{-\cv-1},\beta((\t M(x,h))^{-1}.) e^{i\left\langle 2^j.,h\right\rangle}\right\rangle \module{M(x,h)}^{-1}
\fantom + \left\langle (Q((\t M(x,h))^{-1}.)-z)^{-\cv-1},\chi((\t M(x,h))^{-1}.) e^{i\left\langle .,h\right\rangle}\right\rangle \module{M(x,h)}^{-1}
\eea
$$
where the series satisfies a bound of the form
$$
\bea
&\sum_{j=1}^\infty \Big| \left\langle (Q-z)^{-\cv-1},\beta(2^{-j}.) e^{i\left\langle \t M(x,h).,h\right\rangle} \right\rangle\Big| \\ &\leqslant C \sum_{j=1}^\infty 2^{j(n-(\Re\cv+1))} \sup_{(x,h)\in \Omega} \Vert \beta((\t M(x,h))^{-1}.)\Vert_{C^{\plancher{\Re\cv}+1}},
\eea
$$
where $C$ does not depend on $(x,h)\in \Omega$ and the series converges absolutely for $\Re\cv$ large enough.
The change of variables is then justified for all $\cv\in \mathbb{C}$ by analytic continuation in $\cv$.
\end{refproof}
Given an Euler vector field $\euler$, let $(x,h)$ be the local coordinate
frame for which $\euler=h^i\partial_{h^i}$. From the proof of Lemma \ref{l:kuranishi} it follows that for any sufficiently small open set $\Omega$, we can represent the Hadamard parametrix in the form
\begin{eqnarray*}
H_N(z,x,h)|_\Omega= \sum_{k=0}^N \int_{\mathbb{R}^n} e^{i\left\langle\xi,h\right\rangle} B_{2(k+1)}(z,x,h;\xi) d^n\xi
\end{eqnarray*}
where $B_{2(k+1)}\in \pazocal{D}^\prime(\Omega\times \mathbb{R}^{n*})$ is given by
\begin{eqnarray}\label{Bksymbol}
B_{2(k+1)}(z,x,h;\xi)=\frac{\Gamma(k+1)}{(2\pi)^{n}}\chi u_k(x,h) \left(Q((\t M(x,h))^{-1}\xi)-z \right)^{-k-1} \module{M(x,h)}^{-1},
\end{eqnarray}
where $M(x,h)$ is the matrix satisfying \eqref{eq:idM}. Observe that $B_{2(k+1)}$ is homogeneous of degree $-2k-2$ w.r.t.~the scaling
$ (\xi,z) \mapsto (\lambda\xi,\lambda^2z)$.
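Indeed, since $Q$ is a quadratic form and $M(x,h)$ does not depend on $(\xi,z)$, we have
$$\left(Q(\t M(x,h)^{-1}\lambda\xi)-\lambda^2z \right)^{-k-1}=\lambda^{-2k-2}\left(Q(\t M(x,h)^{-1}\xi)-z \right)^{-k-1},$$
while the prefactor in \eqref{Bksymbol} is independent of $(\xi,z)$.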
Since the Euler vector field $\euler$ reads
$\euler=h^i\partial_{h^i}$ in our local coordinates, the scaling of the Hadamard parametrix reads
$$\bea
{\left( e^{-t\euler} H_N\right)}(z,x,h)&=H_N(z,x,e^{-t}h)=
\sum_{k=0}^N \int_{\mathbb{R}^n} e^{i\left\langle\xi,e^{-t}h\right\rangle} B_{2(k+1)}(z,x,e^{-t}h;\xi) d^n\xi\\
&=\sum_{k=0}^N e^{tn} \int_{\mathbb{R}^n} e^{i\left\langle\xi,h\right\rangle} B_{2(k+1)}(z,x,e^{-t}h;e^{t}\xi) d^n\xi.
\eea
$$
In consequence, to capture the $t\to+\infty$ behaviour we need to compute the asymptotic expansion of
each term $ B_{2(k+1)}(z,x,\lambda h;\frac{\xi}{\lambda})$,
and thus of $\big(Q(\t M(x,\lambda h)^{-1}\frac{\xi}{\lambda})-z\big)^{-k-1}$
as $\lambda\rightarrow 0^+$. We will see that this asymptotic expansion
occurs in a space of holonomic distributions singular along
the \emph{singular Lagrangian} { (it is the conormal bundle of the cone $\{Q=0\}$ in $\xi$ variables)} $$\{(\xi;\tau dQ(\xi)) \st \tau<0, Q(\xi)=0 \}.$$
\subsubsection{Asymptotic expansions of $\mathbf{F}_k(z)$ and $(Q(\frac{\xi}{\lambda})-z)^{-k-1}$.} As already remarked,
the distribution $$(Q( \t M^{-1}(x,h) \xi)-z)^{-\cv-1}$$ is homogeneous w.r.t.~scaling
$(\xi,z)\mapsto(\lambda\xi,\lambda^2z) $. We want to give a log-polyhomogeneous expansion as an asymptotic series of
distributions in the $\xi$ variables even though $\Im z>0$.
This leads us to consider the regularized distributions
$\pf (Q(\xi)-i0)^{-k}$ and $\pf (Q(\xi)-i0)^{-k} (Q(\xi)-z)^{-1}$ for all integers $k\geqslant \frac{n}{2}$, defined as follows.
Recall that {$(Q(\xi)-i0)^{-\cv}$} (resp.~$ (Q(\xi)-i0)^{-\cv} (Q(\xi)-z)^{-1} $ when $\Im z>0$) is a meromorphic family of tempered distributions with simple poles at $\cv\in\{\frac{n}{2},\frac{n}{2}+1,\dots \}$. The residues are distributions supported at $\{0\}\subset \mathbb{R}^n$.
\begin{defi}\label{d:finitepart}
We define $\pf(Q(\xi)-i0)^{-k}$ {(resp.~$ \pf (Q(\xi)-i0)^{-k} (Q(\xi)-z)^{-1} $)} as the value at $\cv=k$ of the holomorphic part of the Laurent series expansion of
$(Q(\xi)-i0)^{-\cv}$ (resp.~$ (Q(\xi)-i0)^{-\cv} (Q(\xi)-z)^{-1} $) near $\cv=k$.
\end{defi}
By application of the pull-back theorem, we immediately find that
the distribution $\pf (Q(\xi)-i0)^{-k}$ is a tempered distribution
whose wavefront set is contained in the
\emph{singular Lagrangian} $$\{(\xi;\tau dQ(\xi)) \st Q(\xi)=0, \ \tau < 0 \}\cup T^*_0\mathbb{R}^n.$$ Let us briefly recall the reason why
$\pf(Q(\xi)-i0)^{-k}$ is quasihomogeneous and
give the equation it satisfies.
\begin{lemm}[Quasihomogeneity]
Let $\Feuler=\sum_{i=1}^n\xi_i\frac{\partial}{\partial \xi_i}$. We have the identity
$$
{\Feuler} \pf(Q(\xi)-i0)^{-k}=-2k\pf(Q(\xi)-i0)^{-k}+\res_{\cv=k}(Q(\xi)-i0)^{-\cv}
$$
and $${\Feuler}( \res_{\cv=k}(Q(\xi)-i0)^{-\cv})=-2k\res_{\cv=k}(Q(\xi)-i0)^{-\cv}.$$ Moreover, the distribution $\res_{\cv=k}(Q(\xi)-i0)^{-\cv}$ is supported at $\{0\}$.
\end{lemm}
\begin{proof}
For non-integer $\cv$, we always have
\begin{equation}\label{e:complexhom}
{\Feuler}(Q(\xi)-i0)^{-\cv}=-2\cv (Q(\xi)-i0)^{-\cv}
\end{equation}
since this holds true for large $-\Re\cv$ and
extends by analytic continuation in $\cv$.
Now, for $\cv$ near $k$, we use the Laurent series expansion in $\cv$ and identify the regular parts on both sides of \eqref{e:complexhom}, which yields the result.
\end{proof}
We introduce the following notation on the inverse Fourier transform side.
\begin{defi}
{ Using Definition~\ref{d:finitepart} for the notion of finite part $\pf$, we} define
$$
\pf F_k(+i0,.):= \frac{\Gamma(k+1)}{(2\pi)^n} \int_{\mathbb{R}^n} e^{i \left\langle \xi,. \right\rangle} \pf(Q(\xi)-i0)^{-k-1} d^n\xi.
$$
\end{defi}
We now state the main proposition of the present paragraph,
which yields asymptotic expansions for the distributions
$F_k(z,.)$.
\begin{prop}[log-polyhomogeneity of $F_k(z,.) $]
\label{prop:logpolyF}
For every $N$, we have
the identity
$$
(Q(\xi)-z)^{-k-1}=\sum_{p=0}^N \begin{pmatrix}
-k-1\\
p,-k-1-p
\end{pmatrix} (-1)^pz^{p} \pf(Q(\xi)-i0)^{-(k+p+1)}+E^{\geqslant k+N+2 }+T_N(z)
$$
where $E^{\geqslant N+2+k }$ denotes the space of all distributions $T\in \pazocal{S}^\prime(\rr^n)$ such that
$$ \lambda^{-N-2-k} T(\lambda^{-1}.)_{\lambda\in \opencl{0,1}} \mbox{ is bounded in } \pazocal{S}^\prime(\rr^n),$$ and $T_N(z)$ is a distribution supported at $0$ depending holomorphically in $z\in \{\Im z >0\}$.
It follows by inverse Fourier transform that
\begin{eqnarray}\label{e:expansionF}
F_k(z,.)=\sum_{p=0}^{N} \frac{z^{p}}{p!} {\pf F_{k+p}(+i0,.)}
+E^{\geqslant k+N+2-n }+P_N(z)
\end{eqnarray}
where $P_N(z)$ is a polynomial function on $\mathbb{R}^n$ depending holomorphically on $z\in \{\Im z>0\}$,
hence each distribution $F_k(z,.)$ is log-polyhomogeneous.
\end{prop}
\begin{proof}
We work in Fourier space with
the function $\left(Q(\xi)-z\right)^{-1}$ for $\Im z>0$.
In fact, even though $\left(Q(\xi)-z\right)^{-1}$ is a function, its asymptotic expansion in $\xi$ will involve the quasihomogeneous distributions
$\pf(Q(\xi)-i0)^{-k}$
because we need to consider the distributional extension to $\mathbb{R}^n$.
We start from the expression:
\begin{eqnarray*}
\sum_{k=0}^{N-1} z^k\pf \left(Q(\xi)-i0\right)^{-k-1}+z^N\pf\left(Q(\xi)-i0\right)^{-N} \left(Q(\xi)-z\right)^{-1}
\end{eqnarray*}
which is a well-defined
distribution in $\pazocal{S}^\prime(\mathbb{R}^n)$.
The product $\left(Q(\xi)-i0\right)^{-N} \left(Q(\xi)-z\right)^{-1} \in \pazocal{D}^\prime(\mathbb{R}^n\setminus\{0\}) $ is weakly homogeneous of degree $\leqslant -N-1 $ therefore it admits a distributional extension
$\pf\left(\left(Q(\xi)-i0\right)^{-N} \left(Q(\xi)-z\right)^{-1}\right)$ which is weakly homogeneous of degree $<-N-1$ and is defined by extending the distribution $\left(Q(\xi)-i0\right)^{-N} \left(Q(\xi)-z\right)^{-1}\in \pazocal{D}^\prime(\mathbb{R}^n\setminus\{0\})$ to $\pazocal{D}^\prime(\mathbb{R}^n)$, see \cite[Thm.~1.7]{DangAHP} (cf.~\cite{Meyer}).
We easily verify that we have the identity for $\Im z>0$:
\begin{eqnarray*}
(Q(\xi)-z) \left(\sum_{k=0}^{N-1} z^k\pf \left(Q(\xi)-i0\right)^{-k-1}+z^N\left(Q(\xi)-i0\right)^{-N} \left(Q(\xi)-z\right)^{-1}\right)=1
\end{eqnarray*}
in the sense of distributions on $\mathbb{R}^n\setminus \{0\}$ (we used the key fact that $Q(\xi)(Q(\xi)-i0)^{-k}=(Q(\xi)-i0)^{-k+1}$, which holds true in the distributional sense in $\pazocal{D}^\prime(\mathbb{R}^n\setminus\{0\})$).
{Since} the term inside the large brackets above makes sense as a distribution on $\mathbb{R}^n$,
it follows that we have the identity
$$
(Q(\xi)-z)\left(\sum_{k=0}^{N-1} z^k\pf \left(Q(\xi)-i0\right)^{-k-1}+z^N\pf\left(Q(\xi)-i0\right)^{-N} \left(Q(\xi)-z\right)^{-1}\right)=1+T_N(z)
$$
in the sense of tempered distributions in $\pazocal{S}^\prime(\mathbb{R}^n)$,
where $T_N(z)$ is a distribution supported at $\{0\}$ depending holomorphically on $z\in \{\Im z>0\}$.
It follows by inverse Fourier transform
that we get:
\begin{eqnarray*}
F_{0}(z,\vert x\vert_\eta)=\sum_{k=0}^{N-1} \frac{z^k}{k!} \pf F_{k}(+i0,x)+E^{\geqslant N+1-n}+\pazocal{F}^{-1}\left(T_N\right)(x),
\end{eqnarray*}
where the inverse Fourier transform
$\pazocal{F}^{-1}\left(T_N\right)(x)$ is a polynomial function in $x$.
More generally, by the same method we find that
$$
(Q(\xi)-z)^{-k}=\sum_{p=0}^N \begin{pmatrix}
-k\\
p,-k-p
\end{pmatrix} (-1)^pz^{p} {\pf (Q(\xi)-i0)^{-(k+p)}}+E^{\geqslant k+N+1 }+T_N(z)\in \pazocal{S}^\prime(\mathbb{R}^n)
$$
where the generalized binomial coefficients are defined using the
Euler $\Gamma$ function, $E^{\geqslant N+1+k } $ denotes distributions $T\in \pazocal{S}^\prime$
s.t. the family $ \lambda^{-N-1-k} T(\lambda^{-1}.)_{\lambda\in \opencl{0,1}}$ is bounded in $\pazocal{S}^\prime$ and $T_N(z)$ is a distribution supported at $0$ depending holomorphically in $z\in \{\Im z>0\}$. Therefore, \eqref{e:expansionF}
follows by inverse Fourier transform.
\end{proof}
We now prove that $H_N(z)\in \pazocal{D}^\prime_\Lambda(M\times M)$ is tame log-polyhomogeneous regardless of the choice of Euler vector field $\euler$.
\begin{prop}\label{prop:feynmanlog}
Let $H_N(z)$ be the Hadamard parametrix of order $N$. Then for any Euler vector field $\euler$, there exists an $\euler$-stable neighborhood $\pazocal{U}$ of $\Delta\subset M\times M$ such that $H_N(z)\in \pazocal{D}^\prime(\pazocal{U})$
is tame log-polyhomogeneous w.r.t.~scaling with $\euler$.
In particular,
$$
\mathfrak{L}_s H_N(z)= \int_0^\infty e^{-t(\euler+s)}H_N(z) dt \in \pazocal{D}^\prime(\pazocal{U})
$$
is a well-defined distribution and extends as a \emph{meromorphic function} of $s\in \mathbb{C}$
with poles at {$s\in -2+n-\mathbb{N}$}. The poles at {$s=k\in \mathbb{Z}$} are simple when $k<0$ and of multiplicity {at most} $2$ when $k\geqslant 0$.
\end{prop}
In the proof we will frequently make use of smooth functions with values in tempered distributions in the following sense.
\begin{defi}\label{d:smoothfunctionsvaluedindistributions}
If $\Omega \subset M$ is an open set, we denote by $C^\infty(\Omega)\otimes \pazocal{S}^\prime(\mathbb{R}^n)$ the space of all $U\in \pazocal{D}^\prime(\Omega\times \mathbb{R}^n)$ such that
for all $\varphi_1\in C^\infty_{\rm c}(\Omega)$, $\varphi_2\in \pazocal{S}(\mathbb{R}^n)$,
$$
\left\langle U,\varphi_1\otimes \varphi_2 \right\rangle_{\Omega\times \mathbb{R}^n}=
\int_\Omega \left\langle U(x,.),\varphi_2\right\rangle_{\mathbb{R}^n} \varphi_1(x)\dvol_g(x)
$$
where $\Omega \ni x\mapsto \left\langle U(x,.),\varphi_2\right\rangle_{\mathbb{R}^n}$ is $C^\infty$.
\end{defi}
\begin{refproof}{Proposition \ref{prop:feynmanlog}} We employ a three-step asymptotic expansion. The first one comes from the Hadamard expansion, which is of the form
\begin{eqnarray*}
\sum_{k=0}^N \int_{\mathbb{R}^n} \dots e^{i\left\langle\xi,h\right\rangle} (Q(\t M(x,h)^{-1}\xi)-z)^{-k-1} \dots d^n\xi +\text{highly regular term }.
\end{eqnarray*}
\step{1} {(First expansion, in $z$).} \ The idea is to study the asymptotics
of
$$(Q(\t M(x,\lambda h)^{-1}\lambda^{-1}\xi)-z)^{-k-1}$$ when $\lambda\rightarrow 0^+$.
We start from the function
$(Q(\t M(x,h)^{-1}\xi)-{z})^{-k-1} $ where $M$ is the invertible matrix
depending smoothly on $(x,h)$ which was obtained by the Kuranishi trick.
Then each term $(Q(\t M(x,h)^{-1}\xi)-z)^{-k-1} $
appearing in the sum is expanded in powers of $z$ times homogeneous terms in $\xi$
{thanks to Proposition~\ref{prop:logpolyF}}.
The expansion in powers of $z$ reads:
$$
\bea
(Q(\t M(x, h)^{-1}\xi)-z)^{-k-1}
=\sum_{p=0}^N {(-1)^p} z^p \begin{pmatrix}
-k-1\\
p,-k-1-p
\end{pmatrix}
\pf(Q(\t M(x, h)^{-1}\xi)-i0)^{-k-1-p} \\ +R_N(z,x,h;\xi)
\eea
$$
where $R_N(z,x,h;\xi)\in C^\infty(\Omega )\otimes \pazocal{S}^\prime(\mathbb{R}^n)$ is weakly homogeneous of degree $\geqslant -k-1-N$ in $\xi$, i.e.
\begin{eqnarray*}
\lambda^{-N-k-1}R_N(z,x,h;\lambda^{-1}.)_{\lambda\in \opencl{0,1}} \text{ is bounded in }\pazocal{S}^\prime(\mathbb{R}^n)
\end{eqnarray*}
uniformly in $(x,h)\in K\subset \Omega$ where $K$ is a compact set.
\step{2} {(Second expansion, in $h$).} \ The key idea is to note that $\pf(Q(\t M(x, h)^{-1}\xi)-i0)^{-k-1-p}\in C^\infty(\Omega )\otimes \pazocal{S}^\prime(\mathbb{R}^n)$ since it is the pull-back of $\pf(Q(\xi)-i0)^{-k-1-p}\in \pazocal{S}^\prime(\mathbb{R}^n)$ by the submersion
$\Omega\times \mathbb{R}^{n*} \ni (x,h;\xi) \mapsto (\t M(x,h))^{-1}\xi\in \mathbb{R}^{n*}$. So by the push-forward theorem, for any test function $\chi\in \pazocal{S}(\mathbb{R}^n)$, the wave front set of
$$
(x,h)\in \Omega\mapsto \left\langle \pf(Q(\t M(x, h)^{-1}.)-i0)^{-k-1-p},\chi\right\rangle $$ is empty which implies $\pf(Q(\t M(x, h)^{-1}\xi)-i0)^{-k-1-p}\in C^\infty(\Omega )\otimes \pazocal{S}^\prime(\mathbb{R}^n)$.
The important subtlety is that when we differentiate $(Q(\t M(x,h)^{-1}\xi)-i0)^{-k-1} $ in $(x,h)$, the
distributional order in $\xi$ increases. This is why we are not in the usual spaces of symbols, where differentiating in $(x,h)$ does not affect the regularity in $\xi$. However, all the $(x,h)$ derivatives $D_{x,h}^\beta(Q(\t M(x,h)^{-1}\xi)-i0)^{-k-1} $ are quasihomogeneous in $\xi$ of degree $-2k-2$:
$$D_{x,h}^\beta\big(Q(\t M(x,h)^{-1}\lambda^{-1}\xi )-i0\big)^{-k-1}=\lambda^{2k+2}D_{x,h}^\beta\big(Q(\t M(x,h)^{-1}\xi)-i0\big)^{-k-1} .$$
We then expand each term
$\pf(Q(\t M(x, h)^{-1}\xi)-i0)^{-k-p-1}$ using a Taylor expansion with remainder in the variable $h$ combined with the Fa\`a di Bruno formula { (which serves to compute higher derivatives of the composition of two functions)}. {Applying the Fa\`a di Bruno formula in our particular case, we get} for all $\cv$,
$$
\pf(Q(\t M(x, h)^{-1}\xi)-i0)^{-\cv} =\sum_{\ell, \vert \beta_1\vert+\dots+\vert\beta_\ell\vert\leqslant N} h^\beta Q_\beta( x,h;\xi)
|_{(x,0)}+I_N(z,x,h;\xi)
$$
where we denoted
$$
\bea
Q_\beta( x,h;\xi) & = \frac{(-\cv)\dots(-\cv-\ell+1) \left(\partial_h^{\beta_1}Q(\t M^{-1}(x,h)\xi)\right)\dots
\left(\partial_h^{\beta_\ell}Q(\t M^{-1}(x,h)\xi)\right) }{\beta_1!\dots\beta_\ell!\ell!}
\fantom \times \pf(Q(\xi)-i0)^{-\cv-\ell}.
\eea
$$
Each $h^\beta Q_\beta( x,h;\xi) |_{(x,0)}$
term is \emph{polynomial} in $h$ and a distribution in $\xi$ homogeneous of degree $-2\cv$ of order $\plancher{\Re\cv}+\ell+1$.
Let us describe the integral remainder,
\begin{eqnarray*}
I_N(z,x,h;\xi)= \sum_{\vert\beta\vert=N+1} \frac{(N+1)h^\beta}{\beta !} \left(\int_0^1 (1-s)^{N}\partial^\beta_h\pf(Q(\t M(x,sh)^{-1}\xi)-i0)^{-\cv}ds\right)
\end{eqnarray*}
where the derivative
$\partial^\beta_h\pf(Q(\t M(x,sh)^{-1}\xi)-i0)^{-\cv}$ can be expanded by Fa\`a di Bruno formula as above.
We deduce that the term $\partial^\beta_h\pf(Q(\t M(x,sh)^{-1}\xi)-i0)^{-\cv}$
is continuous in both $(s,h)$ with values in distributions in $\xi$ quasihomogeneous of degree $-2\cv$
of order $\plancher{\Re\cv}+N+2$ uniformly in $(x,sh)$. Therefore
$I_N(z,x,h;\xi)$ is continuous in $(x,h)$ with values in distributions in $\xi$ quasihomogeneous of degree $-2\cv$
of order $\plancher{\Re\cv}+N+2$ uniformly in $(x,h)$.
\step{3} {(Combination of both expansions).} \ Combining both expansions yields an expansion of
$$\big(Q(\t M(x,\lambda h)^{-1}\lambda^{-1}\xi)-z\big)^{-k-1} $$ in powers of $z$ and of $h$
with remainder that we write shortly as:
$$
\bea
&(Q(\t M(x,h)^{-1}\xi)-z)^{-k-1}
\\ & = \sum_{\ell,\sum_{i=1}^\ell\vert \beta_i\vert+2k+2+2p\leqslant N} C_{\beta,\ell,p,k}(x,\xi) z^p h^\beta \pf(Q(\t M(x, 0)^{-1}\xi)-i0)^{-k-1-\ell-p} +R_{k,N}(z,x,h;\xi),
\eea
$$
where $C_{\beta,\ell,p,k}$ depends smoothly on $x$ and is a universal polynomial in $\xi$ of degree $2\ell$, $\beta$ is a multi-index, and the coefficients
of $C_{\beta,\ell,p,k}$ are defined combinatorially from the above expansions and depend on the derivatives of $M(x,h)$ in $h$ at $h=0$.
It is a crucial fact that the remainder $R_{k,N}(z,x,h;\xi)$ is a distribution weakly homogeneous in $\xi$ of degree $\geqslant k$, and
vanishes at order at least $N-k$ in $h$.
The important fact is that
$R_{k,N}(z,x,h;\xi)$ is an element in $C^\infty\left( \Omega\right)\otimes \pazocal{S}^\prime\left(\mathbb{R}^{n}\right)$ and
$\big( \lambda^{-N-1} R_{k,N}(z,x,\lambda h;\frac{\xi}{\lambda})\big)_{\lambda\in \opencl{0,1}} $ is
bounded in $C^\infty\left( \Omega\right)\otimes \pazocal{S}^\prime\left(\mathbb{R}^{n}\right)$.
Finally, we get
$$
\bea
H_N(z)&=\sum_{2(k+1)+2p+\vert \beta\vert\leqslant N} \frac{k!\left(\chi u_k\right)(x,h)h^\beta\module{M(x,h)}^{-1}{(-1)^p}z^p}{(2\pi)^n\beta !} \begin{pmatrix}
-k-1\\
p,-k-1-p
\end{pmatrix} \fantom\phantom{=========} \times \int_{\mathbb{R}^n}e^{i\left\langle\xi,h\right\rangle}
\partial^\beta_h \pf(Q(\t M(x, h)^{-1}\xi)-i0)^{-k-1-p}|_{(x,0)}d^n\xi
\fantom +\int_{\mathbb{R}^n}e^{i\left\langle\xi,h\right\rangle}R_{1,N}(z,x,h;\xi)d^n\xi +R_{2,N}(z,x,h),
\eea
$$
where $R_{2,N}(z,x,h)\in \pazocal{C}^{s}\left(\Omega\right)$ is a function of
H\"older regularity $s$ which can be made arbitrarily large by choosing $N$ large enough, the term
$R_{1,N}(z,x,h;\xi)$ is an element in $C^\infty\left( \Omega\right)\otimes \pazocal{S}^\prime\left(\mathbb{R}^{n}\right)$,
such that the family
$\big( \lambda^{-N-1} R_{1,N}(z,x,\lambda h;\frac{\xi}{\lambda})\big)_{\lambda\in \opencl{0,1}} $ is
bounded in $C^\infty\left( \Omega\right)\otimes \pazocal{S}^\prime\left(\mathbb{R}^{n}\right)$.
It follows that $\Pi_0\left(R_{1,N} \right)=\euler\Pi_0\left(R_{2,N} \right)=0$ if $N$ is chosen large enough.
It is clear from the construction that the terms
$\int_{\mathbb{R}^n}e^{i\left\langle\xi,h\right\rangle}
\partial^\beta_h \pf(Q(\t M(x, h)^{-1}\xi)-i0)^{-k-1-p}|_{(x,0)}d^n\xi$ are quasihomogeneous
and multiplying by smooth functions preserves the tame log-polyhomogeneity. This finishes the proof.
\end{refproof}
\subsection{Residue computation and conclusions} \label{ss:rcc} Now that we know $H_N(z)$ is tame log-polyhomogeneous, our next objective is to extract the term
$\euler \Pi_0(H_N(z))$ and express it in terms of the Hadamard coefficients $(u_k)_{k=0}^\infty$.
We first prove a key lemma related to the extraction of the dynamical residues which shows that the residue of many terms vanishes. { Recall that the notion of finite part $\pf$ was introduced in Definition~\ref{d:finitepart}.}
\begin{lemm}\label{l:vanishlemma2}
Let $\euler=h^i\partial_{h^i}$, $\varphi\in C^\infty(\Omega)$, $\beta=(\beta_1,\dots,\beta_\ell)\in \nn^\ell$, $k\in \mathbb{N}$ and let $P$ be a homogeneous polynomial on $\mathbb{R}^n$ of { even
degree}.
Then the residue
$$
\euler \Pi_0\left(h^\beta \varphi\int_{\mathbb{R}^n}P(\xi) e^{i\left\langle\xi,h\right\rangle}\pf(Q(\xi)-i0)^{-k}d^n\xi \right)
$$
vanishes if $-2k+\deg(P)\neq -n$ or $\vert \beta\vert>0$. On the other hand, in the special case $-2k=-n$,
$$
\euler \Pi_0\left(\varphi \int_{\mathbb{R}^n} e^{i\left\langle\xi,h\right\rangle}\pf(Q(\xi)-i0)^{-k} d^n\xi\right)=\varphi(x,0)\int_{\mathbb{S}^{n-1}}
(Q(\xi)-i0)^{-k} \iota_\Feuler d^n\xi.
$$
\end{lemm}
\begin{remark}\label{rem:evalat0}
Note that the projector $\Pi_0$ has the effect of evaluating the test function
$\varphi$ at $h=0$.
\end{remark}
\begin{proof}
The important fact is that $P(\xi)\pf(Q(\xi)-i0)^{-k}$ is a quasihomogeneous
distribution in the $\xi$ variable.
By Taylor expansion of $\varphi$ in the $h$ variable, we get for any $N$:
$$ \bea
& h^\beta\varphi \int_{\mathbb{R}^n} P(\xi) e^{i\left\langle\xi,h\right\rangle}\pf(Q(\xi)-i0)^{-k} d^n\xi
\\ &=\sum_{\vert\beta_2\vert\leqslant N} \frac{h^{\beta+\beta_2}}{\beta_2!} \partial_h^{\beta_2}\varphi(x,0) \int_{\mathbb{R}^n} e^{i\left\langle\xi,h\right\rangle}P(\xi)\pf(Q(\xi)-i0)^{-k} d^n\xi
\fantom +\sum_{\vert \beta_2\vert=N+1} h^{\beta+\beta_2} R_{\beta_2}(x,h) \int_{\mathbb{R}^n} e^{i\left\langle\xi,h\right\rangle}
P(\xi)\pf(Q(\xi)-i0)^{-k} d^n\xi.
\eea
$$
By scaling, if $\vert \beta_2\vert=N+1$ then
$$\left\langle { e^{-t\euler}}\bigg( h^{\beta+\beta_2} R_{\beta_2}(x,h) \int_{\mathbb{R}^n} e^{i\left\langle\xi,h\right\rangle}\pf(Q(\xi)-i0)^{-k} d^n\xi\bigg), \psi \right\rangle=\pazocal{O}(e^{-t((N+1)-2k+n -\varepsilon)}) $$
for all $\varepsilon>0$, where the loss of $\varepsilon$ accounts for the polynomial factors in $t$ produced by the Jordan blocks. Then, choosing $N$ large enough, we can take the Laplace transform
$$\int_0^\infty e^{-tz} \left\langle { e^{-t\euler}}\bigg( h^{\beta+\beta_2} R_{\beta_2}(x,h) \int_{\mathbb{R}^n} e^{i\left\langle\xi,h\right\rangle}\pf(Q(\xi)-i0)^{-k} d^n\xi\bigg), \psi \right\rangle dt $$
which is holomorphic for $z$ near $0$. Therefore, since the projector $\Pi_0$ is defined by contour integration using Cauchy's formula, we get
that
\begin{eqnarray*} \Pi_0\left( h^{\beta+\beta_2} R_{\beta_2}(x,h) \int_{\mathbb{R}^n} e^{i\left\langle\xi,h\right\rangle}\pf(Q(\xi)-i0)^{-k} d^n\xi\right)=0.
\end{eqnarray*}
The provisional conclusion is that we need to inspect the expression
$$
\bea
&\Pi_0\left( h^\beta \int_{\mathbb{R}^n} e^{i\left\langle\xi,h\right\rangle}P(\xi) \pf(Q(\xi)-i0)^{-k} d^n\xi\right) \\ &=
\Pi_0\left( i^{-\vert\beta\vert} \int_{\mathbb{R}^n} e^{i\left\langle\xi,h\right\rangle} \partial_\xi^\beta P(\xi) \pf(Q(\xi)-i0)^{-k} d^n\xi\right).
\eea
$$
If $-\vert\beta \vert-2k+\deg(P)\neq -n$, the current $ \partial_\xi^\beta P(\xi) \pf(Q(\xi)-i0)^{-k} d^n\xi$ is quasihomogeneous of degree $-\vert\beta \vert-2k+n+\deg(P) $, hence its inverse Fourier transform is also quasihomogeneous of some nonzero degree, and therefore its image under the projector
$\Pi_0$ vanishes.
If $\vert \beta\vert+2k=n$ and $\vert\beta\vert>0$, then {Corollary}~\ref{l:vanishing1} together with Lemma~\ref{l:scalanomal} imply that
\begin{eqnarray*}
\euler\Pi_0\left(i^{-\vert\beta\vert} \int_{\mathbb{R}^n} e^{i\left\langle\xi,h\right\rangle} \partial_\xi^\beta \pf(Q(\xi)-i0)^{-k} d^n\xi \right)
= \int_{\vert \xi\vert=1} \partial_\xi^\beta \pf(Q(\xi)-i0)^{-k} \iota_\Feuler d^n\xi =0.
\end{eqnarray*}
Finally, when $2k=n$ and $\vert\beta\vert=0$ Lemma~\ref{l:scalanomal} implies that
the residue equals
$$\euler\Pi_0\left( \int_{\mathbb{R}^n} e^{i\left\langle\xi,h\right\rangle} \pf(Q(\xi)-i0)^{-k} d^n\xi \right)=\int_{\mathbb{S}^{n-1}}
(Q(\xi)-i0)^{-k} \iota_\Feuler d^n\xi$$
as claimed.
\end{proof}
Now, Lemma \ref{l:vanishlemma2} applied to $H_N(z)$ gives first
$$ \bea
&\euler\Pi_0 \big( H_N(z)\big)\\
&=\sum_{2k+2+2p+\vert \beta\vert\leqslant N}\begin{pmatrix}
-k-1\\
p,-k-1-p
\end{pmatrix} k! {(-1)^p} z^p
\fantom \times \euler\Pi_0\left( \frac{\left(\chi u_k\right)(x,h)h^\beta\module{M(x,h)}^{-1}}{(2\pi)^n\beta !} \int_{\mathbb{R}^n}e^{i\left\langle\xi,h\right\rangle}
\partial^\beta_h \pf(Q(\t M(x, h)^{-1}\xi)-i0)^{-k-1-p}|_{(x,0)}d^n\xi\right)
\\
&=
\sum_{ 2k+2+2p=n }
\euler\Pi_0\bigg( \frac{k!\left(\chi u_k\right)(x,h)\module{M(x,h)}^{-1}{(-1)^p} z^p}{(2\pi)^n} \begin{pmatrix}
-k-1\\
p,-k-1-p
\end{pmatrix} \bigg. \\ &\bigg.\phantom{==========================}\times \int_{\mathbb{R}^n}e^{i\left\langle\xi,h\right\rangle}
\pf(Q(\xi)-i0)^{-k-1-p}|_{(x,0)}d^n\xi\bigg)
\eea
$$
where we used the fact that the projector $\Pi_0$ evaluates $\left(\chi u_k\right)(x,h)\module{M(x,h)}^{-1}$ at $h=0$ by Remark~\ref{rem:evalat0} and that $M(x,0)=\id$, $\chi(x,0)=1$, and then we obtain the shorter expression~\footnote{{We used here the identity
$\begin{pmatrix}
-k-1\\
p,-k-1-p
\end{pmatrix} k!= \frac{(-k-1)\dots(-k-p) }{p!}k!=(-1)^p\frac{(k+p)!}{p!} $.}}
\beq\label{eq:shorter}
\bea
\euler\Pi_0 \big(H_N(z)\big)&=\sum_{ 2k+2p+2=n } \begin{pmatrix}
-k-1\\
p,-k-1-p
\end{pmatrix}
\frac{k!u_k(x,0){(-1)^p}z^p}{(2\pi)^n}\int_{\mathbb{S}^{n-1}}
(Q(\xi)-i0)^{-\frac{n}{2}} \iota_\Feuler d^n\xi\\
&=\sum_{ 2k+2p+2=n }
\frac{(k+p)!u_k(x,0)z^p}{p!(2\pi)^n}\int_{\mathbb{S}^{n-1}}
(Q(\xi)-i0)^{-\frac{n}{2}} \iota_\Feuler d^n\xi.
\eea
\eeq
Finally, to get a more direct expression for $\euler\Pi_0 \big( H_N(z)\big)$ we need to compute the integral on the r.h.s.
\begin{lemm}[Evaluation of the residue by Stokes theorem] \label{lem:Stokes}
We have the identity:
\begin{equation}\label{restokes}
\int_{\mathbb{S}^{n-1}}
( -\xi_1^2+\xi_2^2+\dots+\xi_n^2 -i0)^{-\frac{n}{2}} \iota_\Feuler d^n\xi=\frac{2i\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2})}.
\end{equation}
\end{lemm}
\begin{proof}
The proof follows by a Wick rotation argument as in \cite[\S8.3]{Dang2020}.
We complexify the whole setting and
define the holomorphic $(n-1,0)$-form:
\begin{eqnarray*}
\omega=\left( z_1^2+\dots+z_n^2\right)^{-\frac{n}{2}}\iota_{\sum_{i=1}^n z_i\partial_{z_i}} dz_1\wedge \dots\wedge dz_n \in \Omega^{n-1,0}\left( U \right),
\end{eqnarray*}
where $U$ is the Zariski open subset $\{z\in \mathbb{C}^n \st Q(z)\neq 0 \}$. By the Lie--Cartan formula
$$
\pazocal{L}_{\sum_{i=1}^n z_i\partial_{z_i}} =d \iota_{\sum_{i=1}^n z_i\partial_{z_i}}+\iota_{\sum_{i=1}^n z_i\partial_{z_i}}d,
$$
and
$$ d \left( z_1^2+\dots+z_n^2\right)^{-\frac{n}{2}} dz_1\wedge \dots\wedge dz_n=0 \in\Omega^{n,1}(U),
$$
hence
$$
\bea
&\pazocal{L}_{\sum_{i=1}^n z_i\partial_{z_i}} \left( z_1^2+\dots+z_n^2\right)^{-\frac{n}{2}} dz_1\wedge \dots\wedge dz_n \\ &= d\left( z_1^2+\dots+z_n^2\right)^{-\frac{n}{2}}\iota_{\sum_{i=1}^n z_i\partial_{z_i}} dz_1\wedge \dots\wedge dz_n=0,
\eea
$$
so the differential form $\omega$ is closed in $\Omega^{n-1,0}(U)$.
For every $\theta\in \clopen{0,-\frac{\pi}{2}}$,
we define the $n$-chain
$$ E_\theta=\{ (e^{iu}z_1,z_2,\dots,z_n) \st (z_1,\dots,z_n)\in \mathbb{S}^{n-1}\subset\mathbb{R}^n, \ u\in [\theta,0] \} $$
which is contained in $\mathbb{S}^{2n-1}$.
We denote by $\partial$ the boundary operator acting on de Rham currents; under some choice of orientation on $E_\theta$, we have the equation
$$
\partial E_\theta=[P_\theta ] -[P_0],
$$
where $[P_\theta]$ denotes the current of integration on the $(n-1)$-chain
$$
P_\theta=\{(e^{i\theta}z_1,z_2,\dots,z_n) \st (z_1,\dots,z_n)\in \mathbb{S}^{n-1}\subset \mathbb{R}^n\}.
$$
By Stokes theorem,
\begin{eqnarray*}
0= \int_{E_\theta} d\omega =\int_{ \partial E_\theta } \omega=\int_{P_\theta} \omega-\int_{P_0}\omega
\end{eqnarray*}
where the integration by parts is well-defined
since for $\theta\in \clopen{0,-\frac{\pi}{2}}$, the zero locus of
$ \sum_{i=1}^nz_i^2$ never meets $P_\theta$ so we are integrating well-defined smooth forms~\footnote{Indeed, if $\theta\in \open{0,-\frac{\pi}{2}}$ and $e^{2i\theta}z_1^2+z_2^2+\dots+z_n^2=0$ then $\sin(2\theta)z_1^2=0$, hence $z_1=0$ and $\sum_{i=1}^n z_i^2=0$, which contradicts the fact that $(z_1,\dots,z_n)\in \mathbb{S}^{n-1}$.}.
We define the linear automorphism
$ T_\theta: (z_1,\dots,z_n)\mapsto (e^{i\theta}z_1,\dots,z_n) $ and
note that
$$
\bea
\int_{P_\theta} \omega&=\int_{P_0} T_\theta^*\omega=
e^{i\theta}\int_{\mathbb{S}^{n-1}}
( e^{i2\theta}\xi_1^2+\xi_2^2+\dots+\xi_n^2)^{-\frac{n}{2}} \iota_\Feuler d^n\xi \\ &=\int_{\mathbb{S}^{n-1}}
(\xi_1^2+\xi_2^2+\dots+\xi_n^2)^{-\frac{n}{2}} \iota_\Feuler d^n\xi=\Vol(\mathbb{S}^{n-1}).
\eea
$$
By \cite[Lem.~D.1]{Dang2020},
$$( e^{i2\theta}\xi_1^2+\xi_2^2+\dots+\xi_n^2)^{-\frac{n}{2}}\rightarrow (Q(\xi)-i0)^{-\frac{n}{2}} \mbox{ in } \pazocal{D}^\prime_\Gamma(\mathbb{R}^n\setminus \{0\})$$ as $\theta\rightarrow -\frac{\pi}{2}$, where
$\Gamma=\{ (\xi;\tau dQ(\xi)) \st Q(\xi)=0,\tau<0 \}$ is the half-conormal of the cone $\{Q=0\}$. Since $\Gamma\cap N^*\mathbb{S}^{n-1}=\emptyset$,
in the limit we obtain $$\lim_{\theta\rightarrow -\frac{\pi}{2}^+} \int_{\mathbb{S}^{n-1}}
( e^{i2\theta}\xi_1^2+\xi_2^2+\dots+\xi_n^2)^{-\frac{n}{2}} \iota_\Feuler d^n\xi= \left\langle [\mathbb{S}^{n-1}], (Q(\xi)-i0)^{-\frac{n}{2}} \iota_\Feuler d^n\xi \right\rangle $$ where the distribution pairing is well-defined
by transversality of wavefront sets.
On the other hand, the previous identity gives $\int_{\mathbb{S}^{n-1}}( e^{i2\theta}\xi_1^2+\xi_2^2+\dots+\xi_n^2)^{-\frac{n}{2}} \iota_\Feuler d^n\xi=e^{-i\theta}\Vol(\mathbb{S}^{n-1})$ for every $\theta$, so the limit equals $i\Vol(\mathbb{S}^{n-1})=\frac{2i\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2})}$, which proves \eqref{restokes}.
\end{proof}
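The identity \eqref{restokes} can also be checked numerically in low dimension. The following Python sketch is an illustration only, assuming standard numpy: the boundary value $-i0$ is replaced by a small $-i\epsilon$, so the output agrees with $\frac{2i\pi^{n/2}}{\Gamma(n/2)}=2\pi i$ (for $n=2$) only up to $O(\epsilon)$ and quadrature error. On the unit circle $\xi=(\cos t,\sin t)$ one has $\iota_\Feuler d^2\xi=dt$ and $-\xi_1^2+\xi_2^2=-\cos 2t$.
\begin{verbatim}
import numpy as np

# n = 2 check of the lemma: the -i0 prescription is approximated by -i*eps.
# On the unit circle xi = (cos t, sin t), iota_E d^2 xi restricts to dt and
# Q(xi) = -xi_1^2 + xi_2^2 = -cos(2t).
eps = 1e-3
t = np.linspace(0.0, 2.0 * np.pi, 2_000_001)
integrand = 1.0 / (-np.cos(2.0 * t) - 1j * eps)
integral = np.trapz(integrand, t)

print(integral)      # approximately 6.2832j
print(2j * np.pi)    # exact value 2*i*pi^(n/2)/Gamma(n/2) for n = 2
\end{verbatim}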
Combining \eqref{eq:shorter} with Lemma \ref{lem:Stokes} gives us
\beq\label{eq:shorter2}
\bea
\euler\Pi_0 \big( H_N(z)\big)&=\sum_{ 2k+2p+2=n }
\frac{(k+p)!u_k(x,0)z^p}{p!(2\pi)^n}\int_{\mathbb{S}^{n-1}}
(Q(\xi)-i0)^{-\frac{n}{2}} \iota_\Feuler d^n\xi\\
&=\frac{2i\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2})}\sum_{ 2k+2p+2=n }
\frac{(k+p)!u_k(x,0)z^p}{p!(2\pi)^n},
\eea
\eeq
from which we obtain { the following result.
\begin{prop}
Let $H_N(z)$ be the Hadamard parametrix of order $N$. Then for any Euler vector field $\euler$, the
dynamical residue satisfies
$$
\resdyn \big(H_N(z)\big)=i\sum_{p=0}^{\frac{n}{2}-1}
\frac{z^pu_{\frac{n}{2}-p-1}(x,0)}{p! 2^{n-1}\pi^{\frac{n}{2}}}.
$$
In particular, $\resdyn \big(H_N(z)\big)$ is independent of the choice of the Euler vector field $\euler$.
\end{prop}
}
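For instance, in dimension $n=4$ only the terms $(k,p)=(1,0)$ and $(0,1)$ contribute, and the formula reduces to
$$
\resdyn \big(H_N(z)\big)=\frac{i}{8\pi^{2}}\big(u_1(x,0)+z\,u_0(x,0)\big),
$$
which, with the standard normalization $u_0(x,0)=1$ of the Hadamard coefficients and the identity $u_1(x,x)=\frac{-R_g(x)}{6}$ used at the end of Section~\ref{section5}, already exhibits the scalar curvature term.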
\section{Residues of local and spectral Lorentzian zeta functions} \label{section5}
\subsection{Hadamard parametrix for complex powers} \label{ss:hc} As previously, we consider the wave operator $P=\square_g$ on a { time-oriented} Lorentzian manifold $(M,g)$ of even dimension $n$. Just as the Hadamard parametrix $H_N(z)$ is designed to approximate Feynman inverses of $P-z$ near the diagonal, we can construct a more general parametrix $H_N^{(\cv)}(z)$ for $\cv\in\cc$ which is meant as an approximation (at least formally) of complex powers $(P-z)^{-\cv}$.
To motivate the definition of $H_N^{(\cv)}(z)$, let us recall that if $A$ is a self-adjoint operator in a Hilbert space then for all $z=\mu+i\varepsilon$ with $\mu\in\rr$ and $\varepsilon>0$,
$$
(A-z)^{-\cv}=\frac{1}{2\pi i}\int_{\gamma_\varepsilon} (\lambda-i\varepsilon)^{-\cv} (A-\mu-\lambda)^{-1} d\lambda
$$
in the strong operator topology (see e.g.~ \cite[App.~B]{Dang2020}). The contour of integration $\gamma_\varepsilon$ is represented in Figure \ref{fig:contour} and can be written as $\gamma_\varepsilon= \tilde\gamma_\varepsilon+i\varepsilon$, where
\beq
\tilde\gamma_{\varepsilon} = e^{i(\pi-\theta)}\opencl{-\infty,\textstyle\frac{\varepsilon}{2}}\cup \{\textstyle\frac{\varepsilon}{2} e^{i\omega}\, | \, \pi-\theta<\omega<2\pi+\theta\}\cup e^{i\theta}\clopen{\textstyle\frac{\varepsilon}{2},+\infty}
\eeq
goes from $\Re \lambda\ll 0$ to $\Re \lambda\gg 0$ in the upper half-plane (for some fixed $\theta\in\open{0,\pid}$).
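It is instructive to test the contour formula numerically in the simplest possible case, where $A$ is multiplication by a real number $a$. The following Python sketch is an illustration only: the parameter values, the quadrature scheme and, in particular, the branch of $(\lambda-i\varepsilon)^{-\cv}$ (taken with cut along the upward vertical half-line starting at $i\varepsilon$, which $\gamma_\varepsilon$ avoids) are our reading of the construction, not part of the original text.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Scalar sanity check of the contour formula: A = a (real), z = mu + i*eps.
a, mu, eps = 2.0, 0.5, 0.3
alpha, theta = 1.5, np.pi / 4
z = mu + 1j * eps

def power(w, expo):
    # w**expo on the branch whose cut is the upward vertical half-line,
    # i.e. with argument taken in (-3*pi/2, pi/2]
    phi = np.angle(w)
    if phi > np.pi / 2:
        phi -= 2.0 * np.pi
    return abs(w) ** expo * np.exp(1j * expo * phi)

def f(lam):
    # integrand (lambda - i*eps)^(-alpha) * (a - mu - lambda)^(-1)
    return power(lam - 1j * eps, -alpha) / (a - mu - lam)

def cquad(g, lo, hi):
    # complex-valued quadrature via real and imaginary parts
    re = quad(lambda t: g(t).real, lo, hi, limit=500)[0]
    im = quad(lambda t: g(t).imag, lo, hi, limit=500)[0]
    return re + 1j * im

wl, wr = np.exp(1j * (np.pi - theta)), np.exp(1j * theta)
I = -cquad(lambda t: f(1j * eps + t * wl) * wl, eps / 2, np.inf)  # left ray
I += cquad(lambda w: f(1j * eps + (eps / 2) * np.exp(1j * w))     # arc below i*eps
           * 1j * (eps / 2) * np.exp(1j * w), np.pi - theta, 2 * np.pi + theta)
I += cquad(lambda t: f(1j * eps + t * wr) * wr, eps / 2, np.inf)  # right ray

print(I / (2j * np.pi))    # these two lines should agree
print((a - z) ** (-alpha))
\end{verbatim}
Closing the contour in the lower half-plane picks up only the resolvent pole at $\lambda=a-\mu$, which is why the two printed values coincide.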
\begin{figure}
\begin{tikzpicture}[scale=1.4]
\def3{3}
\def15{15}
\def0.5{0.5}
\draw [help lines,->] (-1.0*3, 0) -- (1.0*3,0);
\draw [help lines,->] (0, -0.5*3) -- (0, 1.0*3);
\begin{scope}[shift={(0,2*0.5)}]
\node at (-0.9,0.42) {$\gamma_{\varepsilon}$};
\path[draw,line width=0.8pt,decoration={ markings,
mark=at position 0.15 with {\arrow{latex}},
mark=at position 0.53 with {\arrow{latex}},
mark=at position 0.85 with {\arrow{latex}}},postaction=decorate] (-3,{3*tan(15)}) -- (-15:-0.5) arc (180-15:360+15:0.5) -- (3,{3*tan(15)});
\path[draw,line width=0.2pt,postaction=decorate,<->] (0,0) -- (330:0.5);
\end{scope}
\path[draw,line width=0.2pt,postaction=decorate,dashed] (-3,2*0.5) -- (3,2*0.5);
\path[draw,line width=0.2pt,postaction=decorate,<->] (2,0) -- (2,2*0.5);
\node at (2.15,0.5){\scalebox{0.8}{$\varepsilon$}};
\node at (0.2,1.45*0.5){\scalebox{0.8}{$\frac{\varepsilon}{2}$}};
\node at (2.9,-0.2){$\scriptstyle \Re \lambda$};
\node at (0.35,2.8) {$\scriptstyle i \Im \lambda$};
\end{tikzpicture}
\caption{\label{fig:contour}The contour $\gamma_\varepsilon$ used to write $(A-z)^{-\cv}$, $z=\mu+i\varepsilon$, as an integral of the resolvent for $A$ self-adjoint.}
\end{figure}
This immediately suggests setting
$$
\bea
H_N^{(\cv)}(z;.)&\defeq \frac{1}{2\pi i}\int_{\gamma_\varepsilon} (\lambda-i\varepsilon)^{-\cv} H_N(\mu+\lambda,.) d\lambda \\
& =\sum_{\varm=0}^N \chi u_\varm\frac{1}{2\pi i}\int_{\gamma_{\varepsilon}}(\lambda-i\varepsilon)^{-\cv} \mathbf{F}_\varm(\mu+\lambda,.) d\lambda
\eea
$$
provided that the r.h.s.~makes sense. For $\Re \cv >0$ the integral converges by the estimate in \cite[Lem.~6.1]{Dang2020}. More generally, we can evaluate the integral thanks to the identity
$$
\frac{1}{2\pi i}\int_{\gamma_\varepsilon}(\lambda-i\varepsilon)^{-\cv} \mathbf{F}_\varm(\mu+\lambda,.) d\lambda =\frac{(-1)^\varm\Gamma(-\cv+1)}{\Gamma(-\cv-\varm+1)\Gamma(\cv+\varm)} \mathbf{F}_{\varm+\cv-1}(\mu+i\varepsilon,.)
$$
shown in \cite[\S7.1]{Dang2020}, and use it to analytically continue $H_N^{(\cv)}(z)=H_N^{(\cv)}(\mu+i\varepsilon)$. This gives
$$
H_N^{(\cv)}(z,.) =
\sum_{\varm=0}^N u_\varm(.)\frac{(-1)^\varm\Gamma(-\cv+1)}{\Gamma(-\cv-\varm+1)\Gamma(\cv+\varm)} \mathbf{F}_{\varm+\cv-1}(z,.)
$$
as a distribution in a neighborhood of $\Delta\subset M\times M$.
From now on, the analysis in \secs{ss:polhom}{ss:rcc} applies with only minor changes. For the sake of brevity we write `$\sim$' to denote identities which hold true modulo remainders such as those discussed in \secs{ss:polhom}{ss:rcc}, which do not contribute to residues. In particular we can write
$$
H_N^{(\cv)}(z) \sim \sum_{k=0}^N u_k \frac{\cv\dots(\cv+k-1)}{(2\pi)^n} \int_{\mathbb{R}^n} e^{i\left\langle\xi,h \right\rangle}
(Q(\t M^{-1}(x,h)\xi)-z-i0)^{-k-\cv} \module{M(x,h)}^{-1} d^n\xi.
$$
Expanding in $z$ yields
$$
\bea
H_N^{(\cv)}(z)&\sim \sum_{k=0}^N\sum_{p=0}^\infty u_k (-1)^pz^p \begin{pmatrix}
-k-\cv
\\
p
\end{pmatrix} \frac{\cv\dots(\cv+k-1)}{(2\pi)^n} \fantom \times \int_{\mathbb{R}^n} e^{i\left\langle\xi,h \right\rangle}
(Q(\t M^{-1}(x,h)\xi)-i0)^{-k-\cv-p} \module{M(x,h)}^{-1} d^n\xi
\\ & \sim \sum_{ k=0}^\infty \sum_{p=0}^\infty z^p u_k \frac{\cv\dots(\cv+k+p-1)}{p!(2\pi)^n} \fantom\times\int_{\mathbb{R}^n} e^{i\left\langle\xi,h \right\rangle}
(Q(\t M^{-1}(x,h)\xi)-i0)^{-k-\cv-p} \module{M(x,h)}^{-1} d^n\xi.
\eea
$$
We take the dynamical residue; in view of Lemma \ref{l:vanishlemma2}, only the
terms with $\cv+k+p=\frac{n}{2}$ survive. We find that for $\cv=1,\dots,\n2$, the dynamical residue $\resdyn\big(H_N^{(\cv)}(z)\big)$ equals
$$ \bea
&\sum_{k+p+\cv=\frac{n}{2}}
z^p \frac{\cv\dots(\frac{n}{2}-1)}{p!(2\pi)^n} \resdyn\left( u_k\int_{\mathbb{R}^n} e^{i\left\langle\xi,h \right\rangle}
(Q(\t M^{-1}(x,h)\xi)-i0)^{-k-\cv-p} \module{M(x,h)}^{-1} d^n\xi\right)
\\
&=\sum_{p=0}^{\frac{n}{2}-\cv}
z^p \frac{\cv\dots(\frac{n}{2}-1)}{p!(2\pi)^n} \resdyn\left( u_{\frac{n}{2}-p-\cv}\int_{\mathbb{R}^n} e^{i\left\langle\xi,h \right\rangle}
(Q(\t M^{-1}(x,h)\xi)-i0)^{-\frac{n}{2}} \module{M(x,h)}^{-1} d^n\xi\right)\\
&=\frac{2i\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2})}\sum_{p=0}^{\frac{n}{2}-\cv}
\frac{z^p}{p!} \frac{\cv\dots(\frac{n}{2}-1)}{(2\pi)^n}u_{\frac{n}{2}-p-\cv}(x,x).
\eea
$$
In consequence, we obtain
\beq\label{eq:resdynHa}
\resdyn\big(H_N^{(\cv)}(z)\big)=
i\sum_{p=0}^{\frac{n}{2}-\cv}
\frac{z^pu_{\frac{n}{2}-p-\cv}(x,x)}{p!(\cv-1)!2^{n-1}\pi^{\frac{n}{2}}} .
\eeq
On the other hand, from \cite[\S8.3.1]{Dang2020} we know that for $N$ sufficiently large
\beq\label{eq:resdynHa2}
\res_{\alpha'=\alpha} \big( \iota^*_\Delta H_N^{(\alpha')}(z)\big)=
i\sum_{p=0}^{\frac{n}{2}-\cv}
\frac{z^p u_{\frac{n}{2}-p-\cv}(x,x)}{p!(\cv-1)!2^{n}\pi^{\frac{n}{2}}}
\eeq
where the residue is understood in the sense of complex analysis.
We summarize this as a proposition.
\begin{proposition} For any Euler vector field $\euler$, there exists an $\euler$-stable neighborhood $\pazocal{U}$ of $\Delta\subset M\times M$ such that $H_N^{(\cv)}(z)\in \pazocal{D}^\prime(\pazocal{U})$
is tame log-polyhomogeneous w.r.t.~$\euler$. The dynamical residue $\resdyn\big(H_N^{(\cv)}(z)\big)$ is {independent of} $\euler$ and satisfies
\beq\label{eq:resres}
\resdyn\big(H_N^{(\cv)}(z)\big)={2}\res_{\alpha'=\alpha} \big( \iota^*_\Delta H_N^{(\alpha')}(z)\big)
\eeq
where the residue on the r.h.s.~is understood in the sense of complex analysis.
For $\alpha=1,\dots,\n2$ it has the explicit expression \eqref{eq:resdynHa}.
\end{proposition}
\begin{remark} The parametrix $H_N^{(\cv)}(i\varepsilon)$ is interpreted as a local (and for the moment purely formal) approximation of $(\square_g-i \varepsilon)^{-\cv}$, and similarly if we define
$$
\zeta_{g,\varepsilon}^{\loc}(\cv)= \iota^*_\Delta H_{N(\alpha)}^{(\alpha)}(i \varepsilon)
$$
where $N(\alpha)$ is taken sufficiently large, $\zeta_{g,\varepsilon}^{\loc}(\cv)$ can be seen as a local approximation of the Lorentzian spectral zeta function density $\zeta_{g,\varepsilon}(\cv)$ studied in the next section.
\end{remark}
\subsection{From local to spectral zeta functions}\label{ss:lg}
Let us now analyze what happens in situations where $P=\square_g$ (or strictly speaking, $P-i \varepsilon$) has a well-defined spectral zeta function density $\zeta_{g,\varepsilon}(\cv)$ in the following sense.
\bed Suppose $P$ is a self-adjoint extension of $\square_g$ acting on $C_\c^\infty(M)$. Then, the spectral zeta function density of $P-i\varepsilon$ is the meromorphic continuation of
$$
\cv\mapsto \zeta_{g,\varepsilon}(\cv)= \iota^*_\Delta \big((P-i \varepsilon)^{-\cv} \big),
$$
defined initially for $\Re \cv$ sufficiently large, where $\iota^*_\Delta$ is the pull-back of the Schwartz kernel to the diagonal $\Delta\subset M\times M$.
\eed
It is a priori not clear whether the definition is useful at all because even if a self-adjoint extension $P$ exists, it is far from evident { whether the Schwartz kernel of $(P-i \varepsilon)^{-\cv}$ has a well-defined restriction} to the diagonal for large $\Re \cv$, not to mention the analyticity aspects.
We can however formulate a natural sufficient condition in the present context. We start by stating a definition of the uniform wavefront set (which is equivalent to \cite[Def.~3.2]{Dang2020}). Below, $\zero$ is the zero section of $T^*M$ and $\bra z\ket=(1+\module{z}^2)^\12$.
\begin{definition}\label{defrrr}
The \emph{uniform operator wavefront set of order $s\in\rr$ and weight $\bra z\ket^{-\12}$} of $(P-z)^{-1}$ is the set
\beq\label{eq:wfs}
\wfl{12}((P-z)^{-1})\subset (T^*M\setminus\zero)\times (T^*M\setminus\zero)
\eeq
defined as follows: $((x_1;\xi_1),(x_2;\xi_2))$ does \emph{not} belong to \eqref{eq:wfs} if and only if for all $\varepsilon>0$ and all properly supported $B_i\in \Psi^{0}(M)$ elliptic at $(x_i,\xi_i)$ and all $r\in \rr$,
$$
\bra z\ket^{\12}B_1(P-z)^{-1} B_2^* \mbox{ is bounded in } B(H^{r}_\c(M), H_\loc^{r+s}(M)) \mbox{ along } z\in \gamma_\varepsilon.
$$
\end{definition}
The key property which we require of $\square_g$ is that it has a self-adjoint extension $P$, and that this self-adjoint extension has \emph{Feynman wavefront set} in the sense of the uniform operator wavefront set. More precisely, we formalize this as follows.
\begin{definition}\label{deff} Suppose $P$ is a self-adjoint extension of $\square_g$ acting on $C_\c^\infty(M)\subset L^2(M)$. We say that $\square_g$ has \emph{Feynman resolvent} if for any $s\in\rr$, the family $\{(P-z)^{-1}\}_{z\in \gamma_\varepsilon}$ satisfies
\beq\label{feynwf2}
\bea
\wfl{12} \big( ( P-z)^{-1} \big)\subset \{ ((x_1;\xi_1),(x_2;\xi_2))\, | \, (x_1;\xi_1) { \succeq} (x_2;\xi_2) \mbox{ or } x_1=x_2 \}.
\eea
\eeq
Above, $(x_1;\xi_1){ \succeq} (x_2;\xi_2)$ means that $(x_1;\xi_1)$ lies in the characteristic set of $P$ and $(x_1;\xi_1)$ can be reached from $(x_2;\xi_2)$ along a forward{\footnote{{ We remark that the opposite convention for the Feynman wavefront set is often used in the literature on Quantum Field Theory on curved spacetimes. Note also that the notion of {forward} vs.~backward bicharacteristic depends on the sign convention for $P$ (or rather its principal symbol).}}} bicharacteristic.
\end{definition}
This type of precise information on the microlocal structure of $(P-z)^{-1}$ allows one to solve away the singular error term $r_N(z)$ which appears in \eqref{eq:PzHN} when computing $\left(P-z\right) H_N(z)$. In consequence, the Hadamard parametrix approximates $(P-z)^{-1}$ in the following uniform sense.
\begin{proposition}[{\cite[Prop.~6.3]{Dang2020}}]\label{prop:fh} If $\square_g$ has Feynman resolvent then for every $s,\ell\in \mathbb{R}_{\geqslant 0}$,
there exists $N$ such that
\begin{eqnarray}\label{eq:toinsert}
(P-z)^{-1}= H_N(z) +E_{N,1}(z)+E_{N,2}(z)
\end{eqnarray}
where for $z$ along $\gamma_\varepsilon$, $\bra z \ket^k \tilde\chi E_{N,1}(z)$ is bounded in $\cD'(M\times M)$ for some $\tilde\chi\in \cf(M\times M)$ supported near $\Delta$ and all $k\in \mathbb{Z}_{\geqslant 0}$,
and $\bra z \ket^\ell E_{N,2}(z)$ is bounded in $B(H^{r}_\c(M), H_\loc^{r+s}(M))$ for all $r\in \rr$.
\end{proposition}
Then, by integrating $(\lambda-i\varepsilon)^{-\cv}$ times both sides of \eqref{eq:toinsert} along the contour $\gamma_\varepsilon$ we obtain for all $z$,
\beq
(P-z)^{-\cv}= H_N^{(\cv)}(z) +R_N^{(\cv)}(z),
\eeq
where for each $s\in\rr$ and $p\in \nn$ there exists $N\in \nn$ such that $R_N^{(\cv)}(z)$ is holomorphic in $\{\Re \cv >-p\}$ with values in $C^s_{\rm loc}(\pazocal{U})$. Thus, the error term contributes to neither analytic nor dynamical residues. By combining { all the above information} with Proposition \ref{prop:fh} we obtain the following final result.
\begin{thm}\label{thm:dynres1} Let $(M,g)$ be a { time-oriented} Lorentzian manifold of even dimension $n$, and suppose $\square_g$ has Feynman resolvent $(P-z)^{-1}$. Then for any Euler vector field $\euler$ there exists an $\euler$-stable neighborhood $\pazocal{U}$ of $\Delta\subset M\times M$ such that for all $\cv\in\cc$ and $\Im z > 0$ the Schwartz kernel $K_\cv\in \pazocal{D}^\prime(\pazocal{U})$ of $(P-z)^{-\cv}$
is tame log-polyhomogeneous w.r.t.~scaling with $\euler$.
The dynamical residue of $(P-z)^{-\cv}$ is {independent of} $\euler$ and equals
\beq\label{eq:explicit}
\resdyn\big(\left(P-z\right)^{-\cv} \big)= i\sum_{p=0}^{\frac{n}{2}-\cv}
\frac{z^pu_{\frac{n}{2}-p-\cv}(x,x)}{p!(\cv-1)!2^{n-1}\pi^{\frac{n}{2}}}
\eeq
if $\alpha = 1,\dots, \n2$, and zero otherwise, where $(u_j(x,x))_j$ are the Hadamard coefficients. Furthermore, for $k=1,\dots,\frac{n}{2}$ and $\varepsilon>0$, the dynamical residue satisfies
\beq\label{eq:mainn}
\resdyn \big(\left(P-i \varepsilon \right)^{-k}\big) = 2 \res_{\cv =k}\zeta_{g,\varepsilon}(\cv),
\eeq
where $\zeta_{g,\varepsilon}(\cv)$ is the spectral zeta function density of $P-i \varepsilon$.
\end{thm}
In particular, using the fact that $u_1(x,x)=\frac{-R_g(x)}{6}$ (see e.g.~\cite[\S8.6]{Dang2020}), setting $k=\frac{n}{2}-1$ and taking the limit $\varepsilon\to 0^+$, we find the relation \eqref{eq:main2} between the dynamical residue and the Einstein--Hilbert action stated in the introduction.
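Concretely, for $n=4$ and $\cv=1$ Eq.~\eqref{eq:explicit} reads
$$
\resdyn\big(\left(P-i\varepsilon\right)^{-1}\big)=\frac{i}{8\pi^{2}}\big(u_1(x,x)+i\varepsilon\,u_0(x,x)\big),
$$
so letting $\varepsilon\to0^{+}$ and using $u_1(x,x)=\frac{-R_g(x)}{6}$ yields $-\frac{iR_g(x)}{48\pi^{2}}$, which makes the proportionality to the scalar curvature density explicit.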
\section*{Introduction}
The tragic sinking of the SS El Faro vessel occurred while it was traveling from Florida to Puerto Rico~\cite{elfaro}.
The vessel, with a crew of 33, sank at about 1140 UTC on Oct.~1, 2015. As part of its investigation into the sinking of the El Faro, the National Transportation Safety Board (NTSB) asked us to analyze the occurrence of rogue waves during Hurricane Joaquin around the time and location of the El Faro's sinking~\cite{fedele2016prediction}. Here, we present the main results of our rogue wave analysis.
The data suggests that the El Faro vessel was drifting at an average speed of approximately~$2.5$~m/s prior to its sinking~\cite{fedele2016prediction}. As a result, El Faro had a higher probability of encountering a rogue wave while drifting over a period of time than a fixed observer at a given point of the ocean. Indeed, the encounter of a rogue wave by a moving vessel is analogous to that of a big wave that a surfer is in search of~\cite{Fedele2012,fedele2013}: the surfer's likelihood of encountering a big wave increases if he moves around a large area instead of staying still. This space-time effect is very important for ship navigation and cannot be neglected. It is accounted for in our rogue wave analysis by way of a new probabilistic model for the prediction of rogue waves encountered by a vessel along its navigation path~\cite{Fedele2012,fedele2015JPO}. In particular, we give a theoretical formulation and interpretation of the exceedance probability, or occurrence frequency, of a rogue wave encountered by a moving vessel.
\begin{table}[ht]
\begin{centering}
\resizebox{\textwidth}{!}{
\begin{tabular}{lllll}
\hline
& \textbf{El Faro} & \textbf{Andrea} & \textbf{Draupner} & \textbf{Killard}\tabularnewline
\hline
\hline
Significant wave height $H_{s}$~{[}m{]} & 9.0 & 10.0 & 11.2 & 11.4\tabularnewline
\hline
Dominant wave period $T_{p}$~{[}s{]} & 10.2 & 14.3 & 15.0 & 17.2\tabularnewline
\hline
Mean zero-crossing wave period $T_{0}$~{[}s{]} & 9.2 & 11.1 & 11.3 & 13.2\tabularnewline
\hline
Mean wavelength $L_{0}$ {[}m{]} & 131 & 190 & 195 & 246\tabularnewline
\hline
Depth $d$~{[}m{]}, $k_0 d$ with $k_0=2\pi/L_0$ & 4700, 2.63 & 74, 2.23 & 70, 2.01 & 58, 1.36\tabularnewline
\hline
Spectral bandwidth $\nu$ & 0.49 & 0.35 & 0.36 & 0.37 \tabularnewline
\hline
Angular spreading $\theta_{\nu}$ & 0.79 & 0.43 & 0.44 & 0.39\tabularnewline
\hline
Parameter~$R=\theta_{\nu}^{2}/2\nu^{2}$ ~\cite{Janssen2009} & 1.34 & 0.72 & 0.75 & 0.56\tabularnewline
\hline
Benjamin Feir Index $BFI$ in deep water ~\cite{Janssen2003} & 0.36 & 0.24 & 0.23 & 0.18\tabularnewline
\hline
Tayfun NB skewness~$\lambda_{3,NB}$~\cite{Tayfun2006} & 0.26 & 0.159 & 0.165 & 0.145\tabularnewline
\hline
Mean skewness $\lambda_{3}$ from HOS simulations & 0.162 & 0.141 & 0.146 & 0.142\tabularnewline
\hline
Maximum NB dynamic excess kurtosis
$\lambda_{40,\textit{max}}^d$
~\cite{fedele2015kurtosis} &$ 10^{-3}$ & $1.3\cdot10^{-3}$ & $1.1\cdot10^{-3}$ & $1.6\cdot10^{-3}$\tabularnewline
\hline
Janssen NB bound excess kurtosis~$\lambda_{40,NB}^{d}$ ~\cite{JanssenJFM2009} & 0.049 & 0.065 & 0.074 & 0.076\tabularnewline
\hline
Mean excess kurtosis $\lambda_{40}$ from HOS simulations & 0.042 & 0.041 & 0.032 & $-0.011$ \tabularnewline
\hline
\hline
Actual maximum crest height $h/H_{s}$& 1.68 & 1.55 & 1.63 & 1.62\tabularnewline
\hline
Actual maximum crest-to-trough (wave) height $H/H_{s}$ & 2.6 & 2.30 & 2.15 & 2.25\tabularnewline
\hline
\hline
\end{tabular}
}
\par\end{centering}
\protect\caption{Wave parameters and various statistics of the simulated El Faro sea state in comparison to the Andrea, Draupner and Killard rogue sea states~\cite{FedeleSP2016}. We refer to the Methods section for the definitions of the wave parameters.}
\end{table}
\begin{figure}[h]
\centering\includegraphics[scale=0.5]{FIG1_Hs_storm}
\caption{History of WAVEWATCH III parameters during Hurricane Joaquin around the location where the El Faro vessel sank. (top-left) Hourly variation of the significant wave height $H_s$, (top-right) dominant wave period $T_p$, (bottom-left) dominant wave direction and (bottom-right) normalized $U_{10}/U_{10,max}$ wind speed (solid line) and direction (dashed line). Maximum wind speed $U_{10,max}=51$~m/s. Red vertical lines delimit the 1--hour interval during which the El Faro vessel sank.}
\label{FIG1}
\end{figure}
\begin{figure}[h]
\centering\includegraphics[scale=0.5]{FIG3_spectral_bandwidth}
\caption{History of WAVEWATCH III parameters during Hurricane Joaquin around the location where the El Faro vessel sank. (top) Hourly variation of the spectral bandwidth $\nu$, (center) directional spreading $\theta_v$ and (bottom) directional factor $R=\frac{1}{2}\theta_v^2/\nu^2$. Red vertical lines delimit the 1-hour interval during which the El Faro vessel sank.}
\label{FIG3}
\end{figure}
\begin{figure}[h]
\centering\includegraphics[scale=0.5]{FIG4_wavewatch_STEEPNESS_KURTOSIS}
\caption{History of WAVEWATCH III parameters during Hurricane Joaquin around the location where the El Faro vessel sank. (top) Hourly variation of the Tayfun steepness $\mu$ (solid line) with bounds (dashed lines), (center) excess kurtosis $\lambda_{40}$ and (bottom) nonlinear coefficient $\Lambda\sim 8\lambda_{40}/3$. Red vertical lines delimit the 1-hour interval during which the El Faro vessel sank.}
\label{FIG4}
\end{figure}
\section*{Results}
Our rogue wave analysis focused on the study of the 1-hour sea state of Hurricane Joaquin during which the El Faro vessel sank, hereafter referred to as the El Faro sea state. The relevant wave parameters and statistical models are defined in the Methods section.
\subsection*{Metaocean parameters of Hurricane Joaquin in the region of the sinking of El Faro}
We use the hindcast directional spectra produced by WAVEWATCH III to describe the wave characteristics of the sea states generated by Hurricane Joaquin around the time and location where the El Faro vessel sank~\cite{NTSB_meteo}.
The top-left panel of Fig.~(\ref{FIG1}) shows the hourly variation of the significant wave height $H_s$ during the event. The top-right panel displays the history of the dominant wave period $T_p$; the dominant wave direction and the neutral stability 10-m wind speed $U_{10}$ and direction are shown in the bottom panels, respectively. The red vertical lines delimit the 1--hour interval during which the El Faro vessel sank.
The 1-hour sea state encountered by El Faro around the time and location of the sinking had a significant wave height of $H_s\approx9$~m, and the maximum wind speed was $U_{10,max}=51$~m/s. It was very multidirectional (short-crested), as indicated by the large values of both the spectral bandwidth $\nu$ and the angular spreading $\theta_v$ shown in~Fig.~(\ref{FIG3}).
In Table 1 we report the metaocean parameters of the El Faro sea state in comparison to those of the Draupner, Andrea and Killard rogue sea states~\cite{FedeleSP2016}. Note that the four sea states have similar metaocean characteristics. However, El Faro is a steeper sea state, as its mean wavelength $L_0$ is shorter than that of the other three states.
\subsection*{Statistical properties of Hurricane Joaquin-generated seas}\label{STAT}
The relative importance of ocean nonlinearities can be measured by integral statistics such as the wave skewness $\lambda_3$ and the excess kurtosis $\lambda_{40}$ of the zero-mean surface elevation $\eta(t)$. The skewness describes the effects of second-order bound nonlinearities on the geometry and statistics of the sea surface with higher sharper crests and shallower more rounded troughs~\cite{Tayfun1980,TayfunFedele2007,Fedele2009}. The excess kurtosis comprises a dynamic component $\lambda_{40}^{d}$ measuring third-order quasi-resonant wave-wave interactions and a bound contribution $\lambda_{40}^{b}$ induced by both second- and third-order bound nonlinearities~\cite{Tayfun1980,TayfunLo1990,TayfunFedele2007,Fedele2009,Fedele2008,Janssen2009}.
In deep waters, the dynamic kurtosis~\cite{fedele2015kurtosis} depends on the Benjamin-Feir index $BFI$ and the parameter $R$, which is a dimensionless measure of the multidirectionality of dominant waves~\cite{Janssen2009,Mori2011,fedele2015kurtosis}. For unidirectional (1D) waves $R=0$.
The bottom panel of Fig.~(\ref{FIG3}) displays the hourly variations of the directional factor $R$ during Hurricane Joaquin near the location where El Faro sank. About the peak of the hurricane the generated sea states are very multidirectional (short-crested) as $R>1$ and so wave energy can spread directionally. As a result, nonlinear focusing due to modulational instability effects diminishes~\cite{fedele2015kurtosis,Onorato2009,WasedaJPO2009,Toffoli2010} and becomes essentially insignificant under such realistic oceanic conditions~\cite{Shrira2013_JFM,Shrira2014_JPO,fedele2015kurtosis,FedeleSP2016}.
The top panel of Fig.~(\ref{FIG4}) displays the hourly variation of the Tayfun steepness $\mu$ (solid line) with bounds (dashed lines). The excess kurtosis $\lambda_{40}$ mostly due to bound nonlinearities is shown in the center panel and the associated $\varLambda$ parameter at the bottom. The red vertical lines delimit the 1-hour interval during which the El Faro vessel sank.
In Table 1 we compare the statistical parameters of the El Faro sea state and the Draupner, Andrea and Killard rogue sea states~(from~\cite{FedeleSP2016}). Note that the El Faro sea state has the largest directional spreading. Moreover, for all the four sea states the associated $BFI$ are less than unity and the maximum dynamic excess kurtosis is of~$O(10^{-3})$ and thus negligible in comparison to the associated bound component. Thus, third-order quasi-resonant interactions, including NLS-type modulational instabilities play an insignificant role in the formation of large waves~\cite{fedele2015kurtosis, FedeleSP2016} especially as the wave spectrum broadens~\cite{Fedele2014} in agreement with oceanic observations available so far~\cite{TayfunFedele2007,Tayfun2008,Christou2014}. On the contrary, NLS instabilities have been proven to be effective in the generation of optical rogue waves~\cite{Dudley2016}.
\subsection*{Higher Order Spectral (HOS) simulations of the El Faro sea state}
We have performed Higher-Order pseudo-Spectral (HOS) simulations~\cite{DommermuthYue1987HOS,West1987} of the El Faro sea state over an area of~$4$~km~$\times$~$4$~km for a duration of 1 hour (see Methods section for a description of the numerical method). The initial wave field conditions are defined by the WAVEWATCH III hindcast directional spectrum ${S}(f,\theta)$ around the time and region of the El Faro sinking as shown in Fig.~\ref{FIG_WWIII}. Our HOS simulations are performed accounting only for the full (resonant and bound) nonlinearities of the Euler equations up to fourth order in wave steepness.
\begin{figure}[h]
\centering\includegraphics[scale=0.42]{WAVEWATCH_ElFARO_input_spectrum_J35}
\caption{WAVEWATCH III hindcast directional spectrum ${S}(f,\theta)$~$[m^2 s/rad]$ at approximately the time and location of the El-Faro sinking.}
\label{FIG_WWIII}
\end{figure}
The wavenumber-frequency spectrum $S(k,\omega)$ estimated from the HOS simulations is shown in Figure~\ref{FIG_EKW}.
Here, dashed lines indicate the theoretical dispersion curves related to the first-order ($1^{st})$ free waves as well as the second ($2^{nd})$ and third-order ($3^{rd}$) bound harmonic waves. The HOS predictions indicate that second order nonlinearities are dominant with a weak effect of third-order nonlinear bound interactions, in agreement with recent studies of rogue sea states~\cite{FedeleSP2016}. Further, fourth-order effects are insignificant.
The wave skewness and kurtosis rapidly reach a steady state within a few mean wave periods, an indication that third-order quasi-resonant wave-wave interactions are negligible, in agreement with theoretical predictions~\cite{fedele2015kurtosis} and simulations~\cite{FedeleSP2016}. Note that the theoretical narrowband predictions slightly overestimate the simulated values for skewness and excess kurtosis~(see Table 1). The same trend is also observed in recent studies on rogue waves~\cite{FedeleSP2016}. This is simply because narrowband approximations do not account for the directionality and finite spectral bandwidth of the El Faro wave spectrum.
\begin{figure}[h]
\centering\includegraphics[scale=0.6]{FIG_EKW_HugeDomain}
\caption{HOS simulations of the El Faro sea state: predicted wavenumber-frequency spectrum~$S(k,\omega)$~$[m^2 s/rad]$. Sea state duration of 1 hour over an area of~$4$~km~$\times$~$4$~km; the wave field is resolved using $1024\times1024$ Fourier modes.
}
\label{FIG_EKW}
\end{figure}
\begin{figure}[h]
\centering\includegraphics[scale=0.75]{FIG_crest_distributions}
\caption{HOS simulations of the El Faro sea state. Crest height scaled by the significant wave height ($\xi$) versus conditional return period ($N_h$) for the (left) Andrea, (center) Draupner and (right) Killard rogue sea states: HOS numerical predictions ($\square$) in comparison with theoretical models: F=Forristall (blue dashed), T=second-order Tayfun (blue solid), TF=third-order (red solid) and R=Rayleigh (red solid) distributions. Confidence bands are also shown~(light dashes). $N_h(\xi)$ is the inverse of the exceedance probability $P(\xi)=\mathrm{Pr}[h>\xi H_s]$. Horizontal lines denote the rogue threshold~$1.25H_s$~\cite{DystheKrogstad2008} and~$1.6H_s$.}
\label{FIG7}
\end{figure}
\subsection*{Occurrence frequency of a rogue wave by a fixed observer: the return period of a wave whose crest height exceeds a given threshold}\label{TimeProb}
To describe the statistics of rogue waves encountered by a fixed observer at a given point of the ocean, we consider the conditional return period $N_h(\xi)$ of a wave whose crest height exceeds the threshold $h=\xi H_s$, namely
\begin{equation}
N_h(\xi)=\frac{1}{\mathrm{Pr}\left[h>\xi H_s\right]}=\frac{1}{P(\xi)},\label{Nh}
\end{equation}
where $P(\xi)$ is the probability, or occurrence frequency, of a wave crest height exceeding~$\xi H_s$ as encountered by a fixed observer. In other words, $P(\xi)$ is the probability of randomly picking, from a time series observed at a fixed point of the ocean, a wave crest that exceeds the threshold $\xi H_s$. Equation~\eqref{Nh} also implies that the threshold~$\xi H_s$, with $H_s=4\sigma$, is exceeded on average once every $N_{h}(\xi)$ waves. For weakly nonlinear random seas, the probability $P$ is hereafter described by the third-order Tayfun-Fedele~\cite{TayfunFedele2007} (TF), second-order Tayfun~\cite{Tayfun1980} (T), second-order Forristall~\cite{Forristall2000} (F) and linear Rayleigh (R) distributions~(see Methods section).
Our statistical analysis of HOS wave data suggests that second-order effects are the dominant factors in shaping the probability structure of the El Faro sea state, with a minor contribution of excess kurtosis effects. Such dominance is seen in Fig.~\ref{FIG7}, where the HOS numerical predictions of the conditional return period $N_h(\xi)$ of a crest exceeding the threshold $\xi H_s$ are compared against the theoretical predictions based on the linear Rayleigh (R), second-order Tayfun (T) and third-order (TF) models from~Eq.~\eqref{Pid} (sampled population of $10^6$ crest heights). In particular, $N_h(\xi)$ follows from Eq.~\eqref{Nh} as the inverse $1/P(\xi)$ of the empirical probabilities of a crest height exceeding the threshold $\xi H_s$. An excellent agreement is observed between simulations and the third-order TF model up to crest amplitudes $h/H_s\sim1.5$. For larger amplitudes, the associated confidence bands of the estimated empirical probabilities widen, but TF is still within the bands. Donelan and Magnusson~\cite{donelan2017} suggest that the TF model agrees with the Andrea rogue wave measurements up to $h/H_s\sim1.1$, concluding that TF is not suitable for predicting larger rogue crest extremes~(see their Fig. 7 in~\cite{donelan2017}). Unfortunately, their analysis is based on a much smaller sampled population of~$\sim10^4$ crest heights and they do not report the confidence bands associated with their probability estimates, nor do they provide any parameter values to validate their data analysis. The deviation of their data from the TF model is most likely due to the small sample of crests. Note also that TF slightly exceeds both the T and F models as an indication that second-order effects are dominant, whereas the linear R model underestimates the return periods.
For both third- and fourth-order nonlinearities, the return period $N_r$ of a wave whose crest height exceeds the rogue threshold~$1.25H_s\approx11$~m~\cite{DystheKrogstad2008} is nearly~$N_r\sim 10^{4}$ for the El Faro sea state and for the simulated Andrea, Draupner and Killard rogue sea states~\cite{FedeleSP2016}. This is in agreement with oceanic rogue wave measurements~\cite{Christou2014}, which yield roughly the same return period. Similarly, recent measurements off the west coast of Ireland~\cite{Flanagan2016} yield $N_r\sim6\cdot10^4$. In contrast, $N_r\sim 3\cdot 10^{5}$ in a Gaussian sea.
Note that the largest simulated wave crest height exceeds the threshold $1.6H_s\approx14$~m~(see Table~1). This is exceeded on average once every~$10^{6}$ waves in a time series extracted at a point in third- and fourth-order seas and extremely rarely in Gaussian seas, i.e. on average once every~$10^{9}$ waves. This implies that rogue waves observed at a fixed point of the ocean are likely to be rare occurrences in weakly nonlinear random seas, or Tayfun sea states~\cite{Prestige2015}. Our results clearly confirm that rogue wave generation is the result of the constructive interference (focusing) of elementary waves enhanced by bound nonlinearities, in agreement with the theory of stochastic wave groups proposed by Fedele~and~Tayfun~(2009)~\cite{Fedele2009}, which relies on Boccotti's~(2000) theory of quasi-determinism~\cite{Boccotti2000}. Our conclusions are also in agreement with observations~\cite{TayfunFedele2007,Fedele2008,Tayfun2008,Fedele2009}, recent rogue wave analyses~\cite{donelan2017,birkholz2016,FedeleSP2016,dudley2013hokusai} and studies on optical rogue wave caustic analogues~\cite{DudleyCaustics}.
\subsection*{Time profile of the simulated rogue waves}
The wave profile $\eta$ with the largest wave crest height~($>1.6 H_s\approx14$~m) observed in the time series of the surface fluctuations extracted at points randomly sparse over the simulated El Faro domain is shown in the left panel of~Fig.~(\ref{FIG8}). For comparison, the Draupner, Andrea and Killard rogue wave profiles are also shown~\cite{FedeleSP2016}. In the same figure, the mean sea level (MSL) below the crests is also shown. The estimation of the MSL follows by low-pass filtering the measured time series of the wave surface with frequency cutoff $f_c\sim f_p/2$, where $f_p$ is the frequency of the spectral peak~\cite{Walker2004}.
An analysis of the kinematics~\cite{Fedele_etalJFM2016,FedeleEPL2014} of the simulated rogue waves indicates that such waves were close to incipient breaking~\cite{Barthelemy2015,BannerSaket2015,Fedele_etalJFM2016}, suggesting that larger rogue events are less likely to occur~\cite{Fedele2014,Fedele_etalJFM2016}. The saturation of the crest height is mainly due to the nonlinear dispersion and acts as an energy limiter for rogue waves.
The four wave profiles are very similar suggesting a common generation mechanism of the rogue events. Further, we observe a set-up below the simulated El Faro rogue wave most likely due to the multidirectionality of the sea state. A set-up is also observed for the actual Draupner rogue wave. Indeed, recent studies showed that Draupner occurred in a crossing sea consisting of swell waves propagating at approximately $80$ degrees to the wind sea~\cite{adcock2011did,cavaleri2016draupner}. This would explain the set-up observed under the large wave~\cite{Walker2004} instead of the second-order set-down normally expected~\cite{LonguetHiggins1964}.
\begin{figure}[h]
\centering\includegraphics[scale=0.4]{FIG_crest_profile_comparison}
\caption{Third-order HOS simulated extreme wave profiles $\eta/\eta_{max}$ (solid) and mean sea levels (MSL)
(dashed) versus the dimensionless time $t/T_p$ for (from left to right) El Faro, Andrea, Draupner and Killard
waves. $\eta_{max}$ is the maximum crest height given in Table 1. For comparisons, actual measurements (thick
solid) and MSLs (thick dashed) are also shown for Andrea, Draupner and Killard. Note that the Killard MSL is insignificant and the Andrea MSL is not available. $T_p$ is the dominant wave period~(see Methods section for definitions).
}
\label{FIG8}
\end{figure}
\subsection*{Space-time statistics of the encountered sea state by El Faro before sinking}
The largest crest height of a wave observed in time at a given point of the ocean represents a maximum observed at that point. Clearly, the maximum wave surface height observed over a given area during a time interval, i.e. the space-time extreme, is much larger than that observed at a given point. Indeed, in relatively short-crested directional seas such as those generated by hurricanes, it is very unlikely that a large crest observed at a given point in time actually coincides with the largest crest of a group of waves propagating in space-time. In contrast, in accord with Boccotti's (2000) QD theory, it is most likely that the sea surface was in fact much higher somewhere near the measurement point.
\begin{figure}[t]
\centering\includegraphics[scale=0.7]{ST8}
\caption{(Left) the space-time (xyt) volume spanned by the El Faro vessel~(base area $A=241\times30$~m$^2$) while drifting at the speed of $2.5$~m/s over a time interval of $D=10$ minutes along the path $\Gamma$ is that of the slanted parallelepiped $V_a$; (center) the drifting vessel covers the strip area~($1500\times30$~m$^2$) in the 10-minute interval and the associated space-time volume is that of the parallelepiped $V_b$; (right) if the vessel were anchored at a location for the same duration, it would instead span the space-time volume of the straight parallelepiped $V_c$. The solid red arrowed line denotes the space-time path of El Faro while drifting along the path $\Gamma$. The vertical axis is time (t) and the other two axes refer to the space dimensions (x) and (y), respectively.}
\label{FIGST}
\end{figure}
Space-time wave extremes can be modeled stochastically~\cite{Fedele2012,fedele2013} drawing on the theory of Euler Characteristics of random fields~\cite{adler1981geometry,adler2009random,adler2000excursion} and nonlinear wave statistics~\cite{TayfunFedele2007}. In the following, we present a new stochastic model for the prediction of space-time extremes~\cite{Fedele2012} that accounts for both second- and third-order nonlinearities~\cite{fedele2015JPO}. Drawing on Fedele's work~\cite{Fedele2012,fedele2015JPO}, we consider a 3-D non-Gaussian field $\eta(x,y,t)$ in space-time over an area $A$ for a time period of $D$~(see Fig.~\eqref{FIGST}). The area cannot be too large, since the wave field may not be homogeneous. The duration should be short enough that spectral changes occurring in time are not significant and the sea state can be assumed stationary. Then, the third-order nonlinear probability $P_{\mathrm{FST}}^{(nl)}(\xi;A,D)$ that the maximum surface elevation $\eta_{\max}$ over the area $A$ and during the time interval $D$ exceeds the generic threshold $\xi H_{s}$ is equal to the probability of exceeding the threshold $\xi_{0}$, which accounts for kurtosis effects only, that is
\begin{equation}
\label{Pid2}
P_{\mathrm{FST}}^{(nl)}(\xi;A,D)=P_{\mathrm{ST}}(\xi_{0};A,D) \left(1+\varLambda\xi_{0}^{2}(4\xi_{0}^{2}-1)\right).
\end{equation}
The Gaussian exceedance probability is given by
\begin{equation}
P_{\mathrm{ST}}(\xi;A,D) =\mathrm{Pr}\left\{ \eta_{\max}>\xi H_{s}\right\} =(16M_{3}\xi^{2}+4M_{2}\xi+M_{1}) P_{\mathrm{R}}(\xi),\label{PV}
\end{equation}
where $P_{\mathrm{R}}(\xi)$ is the Rayleigh exceedance probability of Eq.~\eqref{PR}.
Here, $M_{1}$ and $M_{2}$ are the average numbers of 1-D and 2-D waves that can occur on the edges and boundaries of the space-time volume spanned by the area $A$ and the duration $D$, and $M_{3}$ is the average number of 3-D waves that can occur within that volume~\cite{Fedele2012}. These all depend on the directional wave spectrum and its spectral moments $m_{ijk}$ defined in the Methods section.
The amplitude $\xi$ accounts for both skewness and kurtosis effects and it relates to $\xi_0$ via the Tayfun (1980) quadratic equation
\begin{equation}
\xi=\xi_{0}+2\mu\xi_{0}^{2}.\label{sub2}
\end{equation}
Given the probability structure of the wave surface defined by Eq.~\eqref{Pid2}, the nonlinear mean maximum surface or crest height $\overline{h}_{\mathrm{FST}}=\xi_{\mathrm{FST}}H_s$ attained over the area $A$ during a time interval $D$ is given, according to Gumbel (1958), by
\begin{equation}
\xi_{\mathrm{FST}}=\overline{h}_{\mathrm{FST}}/H_{s}=\xi_{\mathrm{m}}+2\mu\xi_{\mathrm{m}}^{2}+\frac{\gamma_{e}\left(1+4\mu\xi_{\mathrm{m}}\right)}{16\xi_{\mathrm{m}}-\frac{32M_{3}\xi_{\mathrm{m}}+4M_{2}}{16M_{3}\xi_{\mathrm{m}}^{2}+4M_{2}\xi_{\mathrm{m}}+M_{1}}-\Lambda \frac{2\xi_{\mathrm{m}}(8\xi_{\mathrm{m}}^{2}-1)}{1+\Lambda\xi_{\mathrm{m}}^{2}(4\xi_{m}^{2}-1)} },\label{xist}
\end{equation}
where the most probable surface elevation value $\xi_{\mathrm{m}}$ satisfies $P_{\mathrm{FST}}(\xi_{\mathrm{m}};A,D)=1$~(see Eq.~\eqref{Pid2}).
The nonlinear mean maximum surface or crest height $h_{\mathrm{T}}$ expected at a point during the time interval $D$ follows from Eq.~\eqref{xist} by setting $M_2=M_3=0$ and $M_1=N_{\mathrm{D}}$, where $N_{\mathrm{D}}=D/\bar{T}$ denotes the number of wave occurring during $D$ and $\bar{T}$ is the mean up-crossing period (see Methods section). The second-order counterpart of the FST model ($\Lambda=0$) has been implemented in WAVEWATCH~III~\cite{barbariol2017}. The linear mean counterpart follows from Eq.~\eqref{xist} by setting $\mu=0$ and $\Lambda=0$.
The statistical interpretations of the probability $P_{\mathrm{FST}}^{(nl)}(\xi;A,D)$ and of the associated space-time average maximum $\overline{h}_{\mathrm{FST}}$ are as follows. Consider an ensemble of $N$ realizations of a stationary and homogeneous sea state of duration $D$, each of which has a statistical structure similar to that of the El Faro wave field. On this basis, there would be $N$ samples, say $(\eta_{\max}^{(1)},...,\eta_{\max}^{(N)})$, of the maximum surface height $\eta_{\max}$ observed within the area $A$ during the time interval $D$, and $\overline{h}_{\mathrm{FST}}$ is their ensemble average. Clearly, the maximum surface height in a given realization can exceed such an average by far. Indeed, only in a small number $N\cdot P_{\mathrm{FST}}^{(nl)}(\xi;A,D)$ of realizations out of the ensemble of $N$ sea states does the maximum surface height exceed a threshold $\xi H_s\gg\overline{h}_{\mathrm{FST}}$ much larger than the expected value.
To characterize such rare occurrences in third-order nonlinear random seas one can consider the threshold $h_q=\xi_q H_s$ exceeded with probability $q$ by the maximum surface height $\eta_{\max}$ over an area $A$ during a sea state of duration $D$. This satisfies
\begin{equation}
P_{\mathrm{FST}}^{(nl)}(\xi_q;A,D)=q.\label{PNL1}
\end{equation}
The statistical interpretation of $h_{q}$ is as follows: the maximum surface height $\eta_{\max}$ observed within the area $A$ during $D$ exceeds the threshold $h_{q}$ only in $q\thinspace N$ realizations of the above mentioned ensemble of $N$ sea states.
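To make the threshold $h_q$ concrete, the following Python sketch solves Eq.~\eqref{PNL1} for $\xi_q$ by bisection, combining Eqs.~\eqref{Pid2}--\eqref{sub2} with the Rayleigh law $P_{\mathrm{R}}(\xi)=\exp(-8\xi^{2})$. It is an illustration only: the wave counts $M_1,M_2,M_3$ and the nonlinear parameters $\mu,\varLambda$ below are hypothetical placeholders, not El Faro estimates.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Hypothetical placeholder values, not El Faro estimates.
M1, M2, M3 = 1.0e2, 1.0e4, 1.0e6   # average numbers of 1-D, 2-D, 3-D waves
mu, Lam = 0.08, 0.10               # Tayfun steepness, third-order parameter

def P_R(xi):                       # Rayleigh exceedance probability
    return np.exp(-8.0 * xi**2)

def P_ST(xi):                      # Gaussian space-time exceedance, Eq. (PV)
    return (16.0 * M3 * xi**2 + 4.0 * M2 * xi + M1) * P_R(xi)

def xi0_of(xi):                    # invert xi = xi0 + 2*mu*xi0^2, Eq. (sub2)
    return (np.sqrt(1.0 + 8.0 * mu * xi) - 1.0) / (4.0 * mu)

def P_FST(xi):                     # third-order nonlinear model, Eq. (Pid2)
    x0 = xi0_of(xi)
    return P_ST(x0) * (1.0 + Lam * x0**2 * (4.0 * x0**2 - 1.0))

q = 1.0e-2                         # target exceedance probability
xi_q = brentq(lambda xi: P_FST(xi) - q, 0.5, 3.0)
print(f"h_q = {xi_q:.3f} Hs exceeded over (A, D) with probability {q}")
\end{verbatim}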
Note that for large areas, i.e. $\ell\gg L_{0}$, the FST model, like any other similar model available in the literature~\cite{piterbarg1995,Juglard2005,forristall2011,forristall2015,cavaleri2016draupner}, will overestimate the maximum surface height over an area and time interval because they all rely on Gaussianity. This implies that there are no physical limits on the values that the surface height can attain, as the Gaussian model does not account for the saturation induced by the nonlinear dispersion~\cite{Fedele2014} of ocean waves or by wave breaking. Thus, the larger the area $A$ or the time interval $D$, the greater the number of waves sampled in space-time, and unrealistically large amplitudes are likely to be sampled in a Gaussian or weakly nonlinear Gaussian sea.
This point is elaborated further and demonstrated explicitly by way of the results displayed in Fig.~(\ref{FIG10}). Here, the theoretical (FST) ratio $\overline{h}_{\mathrm{FST}}/\overline{h}_{\mathrm{T}}$ as a function of the area width $\ell/L_0$ is shown for the El Faro, Draupner and Andrea sea states respectively. The FST ratios for Draupner and Andrea are estimated using the European Reanalysis (ERA)-interim data~\cite{fedele2015JPO}. For comparisons, the empirical FST ratio from the El Faro HOS simulations together with the experimental observations at the Acqua Alta tower~\cite{fedele2013} are also shown. Recall that $\overline{h}_{\mathrm{FST}}$ is the mean maximum surface height expected over the area $\ell^2$ during a sea state of duration $D=1$ hour and $\overline{h}_{\mathrm{T}}$ is the mean maximum surface height expected at a point. Clearly, the theoretical FST ratio for El Faro fairly agrees with the HOS simulations for small areas ($\ell\le L_0$), whereas it yields overestimation over larger areas. We argue that the saturation of the HOS FST ratio over larger areas is an effect of the nonlinear dispersion which is effective in limiting the wave growth as a precursor to breaking~\cite{Fedele2014,Fedele_etalJFM2016}.
Note that the FST ratios for all three sea states are nearly the same for $\ell\leq L_0$. These results are very encouraging as they suggest possible statistical similarities and universal laws for space-time extremes in wind sea states. Moreover, for $\ell\sim L_{0}$ the mean wave surface maximum expected over the area is about $1.35$ times that expected at a point, in agreement with Acqua Alta sea observations~\cite{fedele2013}.
\begin{figure}[t]
\centering\includegraphics[scale=0.45]{FIG_ST_variable_area} \protect\caption{Space-time extremes: theoretical FST ratios $\overline{h}_{\mathrm{FST}}/\overline{h}_{\mathrm{T}}$ as a function of the area width $\ell/L_0$ for El Faro (black), Draupner (red) and Andrea (blue) sea states, where $\overline{h}_{\mathrm{FST}}$ is the mean maximum surface height expected over the area $\ell^2$ during a sea state of duration $D=1$ hour and $\overline{h}_{\mathrm{T}}$ is the mean maximum surface height expected at a point. For comparisons, the empirical FST ratio from the El Faro HOS simulations (dashed line) together with the experimental observations at the Acqua Alta tower (squares) are also shown~\cite{fedele2013}. $L_0$ is the mean wavelength.}\label{FIG10}
\end{figure}
\subsection*{The occurrence frequency of a rogue wave by the El Faro vessel}
The data suggests that the El Faro vessel was drifting at an average speed of approximately~$2.5$~m/s prior to its sinking. This is considered in our analysis as follows. First, define the two events $R=\text{"El Faro encounters a rogue wave along its navigation route"}$ and $S=\text{"El Faro sinks"}$. We know that the event $S$ happened. As a result, one should consider the conditional probability
\begin{equation}
\mathrm{Pr}[R|S]=\frac{\mathrm{Pr}[S|R]\cdot\mathrm{Pr}[R]}{\mathrm{Pr}[S]}.
\label{PRS}
\end{equation}
Here, $\mathrm{Pr}[S]$ is the unconditional probability of the event that El Faro sinks. This could be estimated from worldwide statistics of sunk vessels with characteristics similar to El Faro. $\mathrm{Pr}[S|R]$ is the conditional probability that El Faro sinks given that the vessel encountered a rogue wave. This probability can be estimated by Monte Carlo simulations of the nonlinear interaction of the vessel with the rogue wave field.
Our rogue wave analysis provides an estimate of the unconditional probability~$\mathrm{Pr}[R]$ that El Faro encounters a rogue wave along its navigation or drifting route by means of the exceedance probability, or occurrence frequency~$P_e(h)$. This is the probability that a vessel along its navigation path encounters a rogue wave whose crest height exceeds a given threshold $h$.
The encounter of a rogue wave by a moving vessel is analogous to that of a big wave that a surfer is in search of. His likelihood to encounter a big wave increases if he moves around a large area instead of staying still.
This is a space-time effect which is very important for ship navigation and must be accounted for~\cite{Pierson1953,Lindgren1999,Rychlik2000,Fedele2012}.
The exceedance probability $P_e(h)$ is formulated as follows. Consider a random wave field whose surface elevation at a given point $(x,y)$ in a fixed frame at time $t$ is $\eta(x,y,t)$. Consider a vessel of area $A$ that navigates through the wave field at a constant speed $V$ along a straight path at an angle $\beta$ with respect to the $x$ axis. Define also $(x_e,y_e)$ as a cartesian frame moving with the ship. Then, the line trajectories of any point $(x_e,y_e)$ of the vessel in the fixed frame are given by
\begin{equation}
x=x_e+V\cos(\beta)t,\quad y=y_e+V\sin(\beta)t,\label{xy}
\end{equation}
where for simplicity we assume that at time $t=0$ the center of gravity of the vessel is at the origin of the fixed frame.
The surface height $\eta_c(t)$ encountered by the moving vessel, or equivalently the surface fluctuations measured by a wave probe installed on the ship, is
\begin{equation}
\eta_c(x_e,y_e,t)=\eta(x_e+V\cos(\beta)t,y_e+V\sin(\beta)t,t).\label{etac}
\end{equation}
If $\eta$ is a Gaussian wave field homogeneous in space and stationary in time, then so is $\eta_c$ with respect to the moving frame $(x_e,y_e,t)$. The associated space-time covariance is given by
\begin{equation}
\Psi(X,Y,T)=\overline{\eta_c(x_e,y_e,t) \eta_c(x_e+X,y_e+Y,t+T)}=\int S(f,\theta)\cos(k_{x}X+k_{y}Y-2\pi f_{e}T) \mathrm{d}f\mathrm{d}\theta,\label{etac_psi}
\end{equation}
where $k_x=k\cos(\theta)$, $k_y=k\sin(\theta)$ and $k$ is the wavenumber associated with the frequency $f$ by way of the wave dispersion relation. As a result of the Doppler effect, the encountered, or apparent frequency is~\cite{Pierson1953,Lindgren1999,Rychlik2000}
\begin{equation}
f_{e}=f-k V\cos(\theta-\beta)/(2\pi),
\end{equation}
and $S(f,\theta)$ is the directional wave spectrum of the sea state. Note that when the vessel moves faster than the waves coming from a direction $\theta$, the apparent frequency $f_{e}<0$ and, for an observer on the ship, those waves appear to move away. In this case, their direction should be reversed~\cite{Pierson1953}, i.e. $\theta\mapsto\theta+\pi$, and $f_e$ taken as positive.
The spectral moments $m_{ijk}^{(e)}$ of the encountered random field readily follow from the coefficients of the Taylor series expansion of $\Psi(X,Y,T)$ around $(X=0,Y=0,T=0)$. In particular,
\begin{equation}
m_{ijk}^{(e)}=\frac{\partial^{i+j+k}\Psi}{\partial X^i \partial Y^j \partial T^k}\Big|_{X=Y=T=0}=\int S(f,\theta)k_{x}^{i}k_{y}^{j}f_{e}^{k}\mathrm{d}f\mathrm{d}\theta.
\end{equation}
The nonlinear space-time statistics can then be easily computed by feeding the encountered spectral moments $m_{ijk}^{(e)}$ into the FST model~\cite{Fedele2012,fedele2015JPO}, which is based on~Eq.~\eqref{Pid2} as described above. Note that for generic navigation routes the encountered wave field $\eta_c$ is a non-stationary random process in time, so the associated spectral moments will vary in time. The space-time statistics can still be computed by first approximating the route by a polygonal line made of piecewise straight segments, along each of which the random process $\eta_c$ is assumed stationary. A sketch of the moment computation is given below.
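The following Python sketch illustrates the computation of $m_{ijk}^{(e)}$. It is a minimal sketch: the cos$^2$-spreading spectrum, the grid resolution and the deep-water dispersion $k=(2\pi f)^{2}/g$ are assumptions for illustration, not the WAVEWATCH III hindcast input; the direction reversal for $f_e<0$ follows the prescription above.
\begin{verbatim}
import numpy as np

g, V, beta = 9.81, 2.5, 0.0            # gravity, vessel speed [m/s], heading
f = np.linspace(0.05, 0.5, 200)        # frequency grid [Hz]
th = np.linspace(-np.pi, np.pi, 181)   # direction grid [rad]
F, TH = np.meshgrid(f, th, indexing="ij")

# Synthetic directional spectrum: Gaussian peak in f times cos^2 spreading
S = np.exp(-0.5 * ((F - 0.11) / 0.02)**2) * np.cos(TH / 2.0)**2

K = (2.0 * np.pi * F)**2 / g           # deep-water dispersion relation
fe = F - K * V * np.cos(TH - beta) / (2.0 * np.pi)   # encountered frequency

# Where the vessel outruns the waves (fe < 0), reverse the direction
# (theta -> theta + pi) and take fe positive, as in Pierson (1953).
TH_e = np.where(fe < 0.0, TH + np.pi, TH)
fe = np.abs(fe)
kx, ky = K * np.cos(TH_e), K * np.sin(TH_e)

def m_e(i, j, k):
    # encountered spectral moment m_ijk^(e) by 2-D trapezoidal integration
    return np.trapz(np.trapz(S * kx**i * ky**j * fe**k, th, axis=1), f)

m000, m002 = m_e(0, 0, 0), m_e(0, 0, 2)
print("sigma =", np.sqrt(m000))
print("mean encountered up-crossing period =", np.sqrt(m000 / m002))
\end{verbatim}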
Fig.~\eqref{FIG11} illustrates the HOS and theoretical predictions for the normalized nonlinear threshold $h_n/H_s$ exceeded with probability $1/n$, where $n$ is the number of waves. In particular, consider an observer on the vessel moving along the straight path $\Gamma$ spanned by El Faro drifting against the dominant sea direction over a time interval of 10 minutes. In space-time the observer spans the solid red line shown in Fig.~\eqref{FIGST}. In this case, he has a probability~$P_e\sim 3\cdot10^{-4}$ of encountering a wave whose crest height exceeds the threshold~$1.6H_s\approx14$~m~(blue lines). If we also account for the vessel size~(base area $A=241\times30$~m$^2$), in space-time El Faro spans the volume of the slanted parallelepiped $V_a$ shown in Fig.~\eqref{FIGST}. In this case, the exceedance probability $P_e(V_a)$ further increases to~$1/400$~(black lines). If the vessel were anchored at a location for the same duration, in space-time it would instead span the volume of the vertical parallelepiped $V_c$ shown in the same Figure. Note that the two parallelepipeds cover the same space-time volume $A\times D$, with base area $A$ and height $D=10$~min. For the anchored vessel, the associated exceedance probability $P_e(V_c)$ is roughly the same as $P_e(V_a)$ since El Faro was drifting at a slow speed. Larger drift speeds yield larger $P_e(V_a)$ since the vessel encounters waves more frequently than if it were anchored, because of the Doppler effect~\cite{Lindgren1999,Rychlik2000}. Moreover, the drifting vessel covers the strip area~($1500\times30$~m$^2$) in the 10-minute interval and the associated space-time volume is that of the parallelepiped $V_b$ shown in Fig.~\eqref{FIGST}, which has a larger volume than that of $V_a$. As a result, the occurrence frequency $P_e(V_b)$ of a rogue wave associated with $V_b$ is larger and increases to~$\sim 1/100$~(see red lines in Fig.~\eqref{FIG11}). However, El Faro does not visit the entire volume $V_b$; it only spans the smaller volume $V_a$. Thus, the conditional probability $P_e(V_a|V_b)$ that the drifting El Faro encounters a rogue wave, given that a rogue wave occurred over the larger space-time volume $V_b$, is $P_e(V_a)/P_e(V_b)\sim1/4$. Furthermore, a fixed observer has a much lower probability~$P_e\sim10^{-6}$ of randomly picking, from a time series extracted at a point, a wave whose crest height exceeds~$1.6H_s$~(see Fig.~\ref{FIG7}, TF model, red solid line).
Finally, we observed that the exceedance probability~$P_e(V_a)$ for the drifting El Faro does not scale linearly with time because of nonlinearities that reduce the natural dispersion of waves. Indeed, assuming that El Faro drifts over a time interval 5 times longer ($50$ minutes), $P_e(V_a)$ increases only by a factor of roughly 3, to~$\sim1/130$.
\begin{figure}[t]
\centering\includegraphics[scale=0.75]{moving_vessel2_4}
\caption{HOS (squares) and theoretical (solid lines) predictions for the normalized nonlinear threshold $h_n/H_s$ exceeded with probability $1/n$; i) along the straight path $\Gamma$ spanned by El Faro while drifting at an estimated average speed of~$2.5$~m/s over a time interval of 10 minutes (blue), ii) also accounting for the vessel size~($241\times30$~m$^2$) (black), and iii) over the strip area~($1500\times30$~m$^2$) spanned by the vessel in a 10-minute interval (red). Confidence bands are also shown~(light dashes). The horizontal line denotes the threshold~$1.6H_s\approx14$~m, which is exceeded with probability~$3\cdot10^{-4}$, $1/400$ and~$1/100$ for the three cases shown.}
\label{FIG11}
\end{figure}
\begin{figure}[h]
\centering\includegraphics[scale=0.4]{spatial_shape_wave} \protect\caption{HOS simulations: expected spatial shape of a rogue wave whose crest height is~$>1.6 H_s\approx14$~m.}\label{FIG12}
\end{figure}
\section*{Discussions}
Our present studies open a new research direction on the prediction of rogue waves during hurricanes. Indeed, the impact of our studies is two-fold. On the one hand, the present statistical analysis provides the basis for an improved understanding of how rogue waves originate during hurricanes. On the other hand, the proposed stochastic model for the encounter probability of a rogue wave provides the basis in the next generation of wave forecast models for a predictive capability of wave extremes and early warnings for shipping companies and others to avoid dangerous areas at risk of rogue waves.
\section*{Methods}
\subsection*{Wave parameters}\label{Ss_Wp}
The significant wave height $H_s$ is defined as the mean value $H_{1/3}$ of the highest one-third of wave heights. It can be estimated either from a zero-crossing analysis or more easily from the wave omnidirectional spectrum $S_o(f)=\int_{0}^{2\pi} S(f,\theta)\mathrm{d}{\theta}$ as $H_s \approx 4\sigma$, where $\sigma=\sqrt{m_0}$ is the standard deviation of surface elevations, $m_j=\int S_o(f) f^j\mathrm{d}f$ are spectral moments. Further, $S(f,\theta)$ is the directional wave spectrum with $\theta$ as the direction of waves at frequency $f$, and the cyclic frequency is $\omega=2\pi f$.
The dominant wave period $T_{p}=2\pi/\omega_p$ refers to the cyclic frequency $\omega_p$ of the spectral peak. The mean zero-crossing wave period $T_{0}$ is equal to $2\pi/\omega_0$, with $\omega_0=\sqrt{m_2/m_0}$. The associated wavelength $L_{0}=2\pi/k_0$ follows from the linear dispersion relation $\omega_0 = \sqrt{gk_0 \tanh(k_0 d)}$, with $d$ the water depth. The mean spectral frequency is defined as~$\omega_{m}=m_{1}/m_{0}$~\cite{Tayfun1980} and the associated mean period $T_m$ is equal to $2\pi/\omega_m$. A characteristic wave steepness is defined as $\mu_m=k_m\sigma$, where $k_{m}$ is the wavenumber corresponding to the mean spectral frequency $\omega_{m}$~\cite{Tayfun1980}. The following quantities are also introduced: $q_m = k_m d, Q_m = \tanh q_m$, the phase velocity $c_m = \omega_m/k_m$, the group velocity $c_g=c_m\left[1+2q_{m}/\sinh(2q_{m})\right]/2$.
The spectral bandwidth $\nu=(m_0 m_2/m_1^2-1)^{1/2}$ gives a measure of the frequency spreading. The angular spreading~$\sigma_{\theta}=\sqrt{\int_0^{2\pi}D(\theta)(\theta-\theta_m)^2 \mathrm{d}\theta}$, where $D(\theta)=\int_0^{\infty}S(\omega,\theta)\mathrm{d}\omega/\sigma^2$ and $\theta_m=\int_0^{2\pi}D(\theta)\theta\mathrm{d}\theta$ is the mean direction. Note that~$\omega_0=\omega_m\sqrt{1+\nu^2}$.
The wave skewness $\lambda_3$ and the excess kurtosis $\lambda_{40}$ of the zero-mean surface elevation $\eta(t)$ are given by
\begin{equation}
\lambda_3=\overline{\eta^3}/\sigma^3,\qquad\lambda_{40}=\overline{\eta^4}/\sigma^4-3 \,.
\end{equation}
Here, overbars imply statistical averages and $\sigma$ is the standard deviation of surface wave elevations.
For second-order waves in deep water~\cite{Fedele2009}
\begin{equation}
\lambda_{3}\approx 3\mu_m(1-\nu+\nu^2),
\end{equation}
and the following bounds hold~\cite{Tayfun2006}
\begin{equation}
3\mu_m(1-\sqrt{2}\nu+\nu^2) \leq \lambda_3 \leq 3\mu_m.
\end{equation}
Here, $\nu$ is the spectral bandwidth and $\mu_m=k_m\sigma$ the characteristic wave steepness defined above. For narrowband (NB) waves, $\nu$ tends to zero and the associated skewness is $\lambda_{3,NB}=3\mu_m$~\cite{Tayfun1980,TayfunFedele2007,Fedele2009}.
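As a quick numerical illustration of these relations (with illustrative values of the order of severe sea states, not the measured ones):
\begin{verbatim}
# Check of the second-order skewness estimate against the Tayfun bounds,
# for illustrative (assumed) values of steepness and bandwidth.
mu_m, nu = 0.06, 0.4

lam3 = 3 * mu_m * (1 - nu + nu**2)            # deep-water estimate
lower = 3 * mu_m * (1 - 2**0.5 * nu + nu**2)  # lower bound
upper = 3 * mu_m                              # narrowband limit
print(f"{lower:.4f} <= {lam3:.4f} <= {upper:.4f}")  # 0.1070 <= 0.1368 <= 0.1800
\end{verbatim}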
For third-order nonlinear random seas the excess kurtosis
\begin{equation}
\lambda_{40}=\lambda_{40}^{d}+\lambda_{40}^{b}\label{C4total}
\end{equation}
comprises a dynamic component $\lambda_{40}^{d}$ due to nonlinear quasi-resonant
wave-wave interactions~\cite{Janssen2003,Janssen2009} and a Stokes bound harmonic contribution
$\lambda_{40}^{b}$~\cite{JanssenJFM2009}.
In deep water it reduces to the simple form~$\lambda_{40,NB}^{b}=18\mu_m^{2}=2\lambda_{3,NB}^2$~\cite{Janssen2009,JanssenJFM2009,Janssen2014a} where $\lambda_{3,NB}$ is the skewness of narrowband waves~\cite{Tayfun1980}.
As for the dynamic component, Fedele~\cite{fedele2015kurtosis} recently revisited Janssen's~\cite{Janssen2003} weakly nonlinear formulation for $\lambda_{40}^{d}$. In deep water, this is given in terms of a six-fold integral that depends on the Benjamin-Feir index $BFI=\mu_m/(\sqrt{2}\,\nu)$ and on the parameter $R=\sigma_{\theta}^{2}/2\nu^{2}$, a dimensionless measure of the multidirectionality of dominant waves~\cite{Janssen2009,Mori2011}.
As waves become unidirectional (1D), $R$ tends to zero.
\subsection*{The Tayfun-Fedele model for crest heights}
We define $P(\xi)$ as the probability that a wave crest observed at a fixed point of the ocean in time exceeds the threshold $\xi H_s$. For weakly nonlinear seas, this probability can be described by the third-order Tayfun-Fedele model~\cite{TayfunFedele2007},
\begin{equation}
P_{TF}(\xi)=\mathrm{Pr}\left[h>\xi\,H_s\right]=\mathrm{exp}\left(-8\,\xi_{0}^{2}\right)\left[1+\varLambda \xi_{0}^{2}\left(4\,\xi_{0}^{2}-1\right)\right],\label{Pid}
\end{equation}
where $\xi_{0}$ follows from the quadratic equation $\xi=\xi_{0}+2\mu\,\xi_{0}^{2}$~\cite{Tayfun1980}.
Here, the Tayfun wave steepness~$\mu=\lambda_{3}/3$ is of $O(\mu_m)$ and it is a measure of second-order bound nonlinearities as it relates to the skewness $\lambda_{3}$ of surface elevations~\cite{Fedele2009}. The parameter~$\varLambda=\lambda_{40}+2\lambda_{22}+\lambda_{04}$ is a measure of third-order nonlinearities and is a function of the fourth order cumulants $\lambda_{nm}$ of the wave surface $\eta$ and its Hilbert transform $\hat{\eta}$~\cite{TayfunFedele2007}. In particular, $\lambda_{22}=\overline{\eta^2\hat{\eta}^2}/\sigma^4-1$~and~$\lambda_{04}=\overline{\hat{\eta}^4}/\sigma^4-3$.
In our studies $\varLambda$ is approximated solely in terms of the excess kurtosis as~$\varLambda_{\mathrm{appr}}={8\lambda_{40}}/{3}$ by assuming the relations between cumulants~\cite{Janssen2006} $\lambda_{22}=\lambda_{40}/3$ and $\lambda_{04}=\lambda_{40}$. These, to date, have been proven to hold for linear and second-order narrowband waves only \cite{TayfunLo1990}. For third-order nonlinear seas, our numerical studies indicate that $\varLambda\approx\varLambda_{\mathrm{appr}}$ within a $3\%$ relative error in agreement with observations~\cite{fedeleNLS,TayfunOMAE2007}.
For second-order seas, referred to as Tayfun sea states~\cite{Prestige2015}, $\varLambda=0$ and $P_{TF}$ in Eq.~\eqref{Pid} reduces to the Tayfun~(T)~distribution~\cite{Tayfun1980}
\begin{equation}
P_{T}(\xi)=\mathrm{exp}\left(-8{\xi_{0}^2}\right).\label{PT}
\end{equation}
For Gaussian seas, $\mu =0$ and $\varLambda=0$ and $P_{TF}$ reduces to the Rayleigh~(R)~distribution
\begin{equation}
P_{R}(\xi)=\mathrm{exp}\left(-8{\xi^{2}}\right).\label{PR}
\end{equation}
Note that the Tayfun distribution is an exact result for large second-order wave crest heights and depends solely on the steepness parameter $\mu=\lambda_{3}/3$~\cite{Fedele2009}.
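The three exceedance models above are simple to evaluate; a minimal Python sketch (ours), where the steepness $\mu$ and the excess kurtosis $\lambda_{40}$ are illustrative assumptions:
\begin{verbatim}
# Rayleigh, Tayfun and Tayfun-Fedele crest exceedance probabilities.
import numpy as np

def p_rayleigh(xi):
    return np.exp(-8 * xi**2)

def xi0_of(xi, mu):
    # positive root of the quadratic xi = xi0 + 2*mu*xi0^2 (assumes mu > 0)
    return (np.sqrt(1 + 8 * mu * xi) - 1) / (4 * mu)

def p_tayfun(xi, mu):
    return np.exp(-8 * xi0_of(xi, mu)**2)

def p_tayfun_fedele(xi, mu, lam40):
    Lam = 8 * lam40 / 3                    # the approximation Lambda_appr
    x0 = xi0_of(xi, mu)
    return np.exp(-8 * x0**2) * (1 + Lam * x0**2 * (4 * x0**2 - 1))

xi = 1.6                    # threshold h > 1.6*Hs considered in the text
mu, lam40 = 0.06, 0.1       # illustrative (assumed) values
print(p_rayleigh(xi), p_tayfun(xi, mu), p_tayfun_fedele(xi, mu, lam40))
\end{verbatim}
As expected, at such large thresholds the second- and third-order corrections raise the exceedance probability by orders of magnitude above the Rayleigh value.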
\subsection*{The Forristall model}
The exceedance probability is given by~\cite{Forristall2000}
\begin{equation}
P_{F}(\xi)=\mathrm{exp}\left(-{(\xi/\alpha)^{\beta}}\right),\label{F}
\end{equation}
where $\alpha=0.3536+0.2561 S_1+0.0800 U_r$ and $\beta=2-1.7912 S_1-0.5302 U_r+0.284 U_r^2$ for multi-directional (short-crested) seas. Here, $S_1=2\pi H_s/(g T_m^2)$ is a characteristic wave steepness, the Ursell number is $U_r=H_s/(k_m^2 d^3)$, $k_m$ is the wavenumber associated with the mean period $T_m=m_0/m_1$, and $d$ is the water depth.
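A minimal sketch (ours) evaluating the Forristall model, with the sea-state inputs as illustrative assumptions:
\begin{verbatim}
# Forristall short-crested crest exceedance probability.
import numpy as np

def p_forristall(xi, Hs, Tm, d, g=9.81):
    w = 2 * np.pi / Tm
    k = w**2 / g                       # deep-water initial guess
    for _ in range(100):               # fixed point of w^2 = g*k*tanh(k*d)
        k = w**2 / (g * np.tanh(k * d))
    S1 = 2 * np.pi * Hs / (g * Tm**2)  # characteristic steepness
    Ur = Hs / (k**2 * d**3)            # Ursell number
    alpha = 0.3536 + 0.2561 * S1 + 0.0800 * Ur
    beta = 2 - 1.7912 * S1 - 0.5302 * Ur + 0.284 * Ur**2
    return np.exp(-(xi / alpha)**beta)

print(p_forristall(xi=1.6, Hs=14.0, Tm=11.0, d=4000.0))  # assumed inputs
\end{verbatim}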
\subsection*{Space-Time Statistical Parameters}\label{Ss_STSP}
For space-time extremes, the coefficients in Eq.~\eqref{PV} are given by~\cite{Baxevani2006,Fedele2012}
\[
M_{3}=2\pi\frac{D}{\overline{T}}\frac{\ell_{x}}{\overline{L_{x}}}\frac{\ell_{y}}{\overline{L_{y}}}\alpha_{xyt},
\]
\[
M_{2}=\sqrt{2\pi}\left(\frac{D}{\overline{T}}\frac{\ell_{x}}{\overline{L_{x}}}\sqrt{1-\alpha_{xt}^{2}}+\frac{D}{\overline{T}}\frac{\ell_{y}}{\overline{L_{y}}}\sqrt{1-\alpha_{yt}^{2}}+\frac{\ell_{x}}{\overline{L_{x}}}\frac{\ell_{y}}{\overline{L_{y}}}\sqrt{1-\alpha_{xy}^{2}}\right),
\]
\[
M_{1}=N_{D}+N_{x}+N_{y},
\]
where
\[
N_{D}=\frac{D}{\overline{T}},\qquad N_{x}=\frac{\ell_{x}}{\overline{L_{x}}},\qquad N_{y}=\frac{\ell_{y}}{\overline{L_{y}}}
\]
are the average numbers of waves occurring during the time interval
$D$ and along the $x$ and $y$ sides of length $\ell_{x}$ and $\ell_{y}$,
respectively. They all depend on the mean period $\overline{T}$ and on the
mean wavelengths $\overline{L_{x}}$ and $\overline{L_{y}}$ in the $x$
and $y$ directions:
\[
\overline{T}=2\pi\sqrt{\frac{m_{000}}{m_{002}}},\qquad\overline{L_{x}}=2\pi\sqrt{\frac{m_{000}}{m_{200}}},\qquad\overline{L_{y}}=2\pi\sqrt{\frac{m_{000}}{m_{020}}}
\]
and
\[
\alpha_{xyt}=\sqrt{1-\alpha_{xt}^{2}-\alpha_{yt}^{2}-\alpha_{xy}^{2}+2\alpha_{xt}\alpha_{yt}\alpha_{xy}}.
\]
Here,
\[
m_{ijk}=\iint k_{x}^{i}k_{y}^{j}f^{k}S(f,\theta)\,\mathrm{d}f\,\mathrm{d}\theta
\]
are the moments of the directional spectrum $S(f,\theta)$ and
\[
\alpha_{xt}=\frac{m_{101}}{\sqrt{m_{200}m_{002}}},\qquad\alpha_{yt}=\frac{m_{011}}{\sqrt{m_{020}m_{002}}},\qquad\alpha_{xy}=\frac{m_{110}}{\sqrt{m_{200}m_{020}}}.
\]
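A minimal Python sketch (ours) of these space-time parameters, computed from the moments of a discretized directional spectrum. The separable spectrum below is a hypothetical stand-in for the hindcast; the frequency enters through $\omega=2\pi f$ so that $\overline{T}=2\pi\sqrt{m_{000}/m_{002}}$ returns the mean period (our reading of the convention); and the exposure domain matches the 10-minute interval and vessel dimensions quoted in the caption of Fig.~\ref{FIG11}:
\begin{verbatim}
# Space-time coefficients M1, M2, M3 from directional spectral moments.
import numpy as np

g = 9.81
f = np.linspace(0.04, 0.4, 400)             # frequency grid [Hz]
th = np.linspace(-np.pi, np.pi, 181)        # direction grid [rad]
fp = 0.08
Sf = f**-5 * np.exp(-1.25 * (fp / f)**4)    # omnidirectional part (assumed)
D = np.cos(th / 2)**4                       # directional spreading (assumed)
D /= np.trapz(D, th)
S = Sf[:, None] * D[None, :]                # S(f, theta)

w = 2 * np.pi * f                           # angular frequency
k = w**2 / g                                # deep-water wavenumber
kx, ky = k[:, None] * np.cos(th), k[:, None] * np.sin(th)

def m(i, j, l):                             # spectral moments m_ijl
    out = kx**i * ky**j * (w**l)[:, None] * S
    return np.trapz(np.trapz(out, th, axis=1), f)

T = 2 * np.pi * np.sqrt(m(0, 0, 0) / m(0, 0, 2))
Lx = 2 * np.pi * np.sqrt(m(0, 0, 0) / m(2, 0, 0))
Ly = 2 * np.pi * np.sqrt(m(0, 0, 0) / m(0, 2, 0))
axt = m(1, 0, 1) / np.sqrt(m(2, 0, 0) * m(0, 0, 2))
ayt = m(0, 1, 1) / np.sqrt(m(0, 2, 0) * m(0, 0, 2))
axy = m(1, 1, 0) / np.sqrt(m(2, 0, 0) * m(0, 2, 0))
axyt = np.sqrt(1 - axt**2 - ayt**2 - axy**2 + 2 * axt * ayt * axy)

Dur, lx, ly = 600.0, 241.0, 30.0            # 10 min; vessel size (Fig. 11)
Nd, Nx, Ny = Dur / T, lx / Lx, ly / Ly
M3 = 2 * np.pi * Nd * Nx * Ny * axyt
M2 = np.sqrt(2 * np.pi) * (Nd * Nx * np.sqrt(1 - axt**2)
                           + Nd * Ny * np.sqrt(1 - ayt**2)
                           + Nx * Ny * np.sqrt(1 - axy**2))
M1 = Nd + Nx + Ny
print(M1, M2, M3)
\end{verbatim}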
\subsection*{The Higher Order Spectral (HOS) method}\label{Ss_HOS}
The HOS method, developed independently by Dommermuth \& Yue~\cite{DommermuthYue1987HOS} and West
\textit{et al.}~\cite{West1987}, is a numerical pseudo-spectral method based on a perturbation expansion of the wave potential function up to a prescribed order of nonlinearities $M$ in terms of a small parameter, the characteristic wave steepness. The method solves for nonlinear wave-wave interactions up to the specified order $M$ of a number $N$ of free waves (Fourier modes). The associated boundary value problem is solved by way of a pseudo-spectral technique, ensuring a computational cost that scales as $M^{2}N \log(N)$~\cite{Fucile_PhD,Schaffer}. As a result, high computational efficiency is guaranteed for simulations over large spatial domains. In our study we used the West formulation~\cite{West1987}, which accounts for all the nonlinear terms at a given order of the perturbation expansion. The details of the specific algorithm are given in Fucile~\cite{Fucile_PhD} and Fedele \textit{et al.}~\cite{fedele2016prediction}. The wave field is resolved using $1024\times1024$ Fourier modes on a spatial area of $4000$~m~$\times$~$4000$~m. Initial conditions for the wave potential and surface elevation are specified from the directional spectrum as an output of WAVEWATCH III~\cite{tolman2014}.
\section*{Data Availability}
All the publicly available data and information about the El Faro accident are posted on the National Transportation Safety Board (NTSB) website~\cite{elfaro}.
\hspace{2cm}
\bibliographystyle{unsrt}
\section{Hardness Results on the \(\mathit{ManipSA}\) Problem} \label{Sect:MU}
We have seen in the last section that the \(\mathit{ManipSA}\) problem is in XP w.r.t. parameters \(n\) and \(\mu(a_1)\) and that it is in FPT for parameter \(\mathtt{rg}^{\max}\). One could hope for more positive results for parameters \(n\) and \(\mu(a_1)\), such as an FPT algorithm. However, we show in this section that the \(\mathit{ManipSA}\) problem is W[1]-hard w.r.t. each of these two parameters.\\
We start with the hardness result on parameter \(\mu(a_1)\). In fact, we obtain a stronger result by proving that even determining whether there exists a successful manipulation is W[1]-hard w.r.t. \(\mu^{\max}\). Note that, by definition, \(\mu^{\max}\) is greater than or equal to \(\mu(a_1)\).
\begin{theorem} \label{theorem: hardness}
Determining if there exists a successful manipulation for \(a_1\) is W[1]-hard w.r.t. parameter \(\mu^{\mathtt{max}}\).
\end{theorem}
\begin{proof}
We make a parameterized reduction from the CLIQUE problem where given a graph \(G = (V,E)\) and an integer \(k\), we wish to determine if there exists a clique of size \(k\). This problem is W[1]-hard w.r.t. parameter \(k\). W.l.o.g., we make the assumptions that \(|V| > k\) and that \(|E| > k(k-1)/2\) (because otherwise it is trivial to determine if there is a clique of size \(k\)).\\
From an instance of CLIQUE, we create the following \(\mathit{ManipSA}\) instance.\\
\emph{Set of items.} We create two items, \(g_{\{i,j\}}\) (a good item) and \(w_{\{i,j\}}\) (one of the worst items), for each edge \(\{i,j\}\in E\) and two items, \(b_{i}\) (one of the best items) and \(m_i\) (a medium item), for each vertex \(i\in V\). Put another way, \(\mathcal{I} = \{g_{\{i,j\}},w_{\{i,j\}}|\{i,j\}\in E\}\cup\{b_i,m_i| i \in V\}\) and the number of items is thus \(|\mathcal{I}| = 2|V| + 2|E|\).\\
\emph{Set of agents.} We create one agent \(e_{\{i,j\}}\) for each edge \(\{i,j\}\in E\) and one agent \(v_{i}\) for each vertex \(i\in V\). The top of \(e_{\{i,j\}}\)'s ranking is \(g_{\{i,j\}} \succ m_i \succ m_j \succ w_{\{i,j\}}\) (which one of \(m_i\) or \(m_j\) is ranked first can be chosen arbitrarily). The top of \(v_{i}\)'s ranking is \(b_{i} \succ m_i\). We also create \(|V| - k - 1\) agents \(c_t\) for \(t\in \{1,\ldots, |V| - k - 1\}\) (whose role is to collect medium items) such that the top of the ranking of each \(c_t\) is \(m_1\succ m_2\ldots \succ m_{|V|}\). Last, the manipulator, whom we denote by \(a_1\) to be consistent with the rest of the paper,
has the following preferences: he first ranks items \(b_{i}\),
then items \(g_{\{i,j\}}\), then items \(m_i\) and last items \(w_{\{i,j\}}\). To summarize, \(\mathcal{A} = \{e_{\{i,j\}}|\{i,j\}\in E\}\cup\{v_i|i\in V\}\cup\{c_t|t\in \{1,\ldots, |V| - k - 1\}\}\cup\{a_1\}\) and there are \(|\mathcal{A}| = 2|V|+|E|-k\) agents.\\
\emph{Picking sequence.} The picking sequence \(\pi\) is composed of the following rounds:
\begin{itemize}
\item Manipulator round 1: \(a_1\) gets to pick \(k\) items.
\item Vertex round: each agent \(v_{i}\) gets to pick one item.
\item Manipulator round 2: \(a_1\) gets to pick \(k(k-1)/2\) items.
\item Edge round: each agent \(e_{\{i,j\}}\) gets to pick one item.
\item Medium item collectors round: each agent \(c_t\) gets to pick one item.
\item Manipulator round 3: \(a_1\) gets to pick one item.
\item End round: the remaining items can be shared arbitrarily among the non-manipulators such that each of them gets at most one new item.
\end{itemize}
Note that \(\mu^{\mathtt{max}} = \mu(a_1) = \frac{k(k+1)}{2} + 1\).\\
\emph{Utility values of \(a_1\).} For ease of presentation, we will act as if there were only four different utility values, even if \(a_1\) is asked to report a complete preference order. One can remove this assumption, making preferences strict, by using sufficiently small \(\epsilon\) values.
In this sketch of proof, each item \(b_{i}\) has utility \(4\). Each item \(g_{\{i,j\}}\) has utility \(3\). Each item \(m_i\) has utility \(2\). Lastly, items \(w_{\{i,j\}}\) have utility \(1\). In this simplified setting, we set \(\succ_T\) as being one specific ranking consistent with the utility values of \(a_1\) and we wish to determine if there exists another ranking yielding a strictly higher utility.\\
\emph{Sketch of the proof.} At the end of the vertex round, all the best items are gone, as they have already been picked by \(a_1\) or by the vertex agents $v_i$. Similarly, at the end of the edge round, none of the good items are left. Hence, at the third manipulator round, when \(a_1\) picks her last item, the best she can hope for is a medium item. Consequently, the maximum utility she might achieve is obtained by picking $k$ best items in her first round, $k(k-1)/2$ good items in her second round, and finally a medium item in her third round, for an overall utility of $4k + 3k(k-1)/2 + 2$. Note that she can always pick any set of \(k\) best items in her first round and then (whatever the previous \(k\) best items) pick any set of \(k(k-1)/2\) good items in her second round. Hence obtaining an overall utility of $4k + 3k(k-1)/2 + 1$ is always possible. Note also that, if $\{b_{i_1},\ldots,b_{i_k}\}$ are the $k$ best items selected by \(a_1\) at the first round, then in the vertex round the vertex agents $\{v_{i_1},\ldots,v_{i_k}\}$ will pick the medium items $\{m_{i_1},\ldots,m_{i_k}\}$. Moreover, before the third manipulator round, the agents $c_t$ will pick an additional $|V|-k-1$ medium items. So, a medium item is left at the third manipulator round only if none of the edge agents picks a medium item in the edge round. According to her preference ranking, an edge agent $e_{\{i,j\}}$ will not pick a medium item iff $g_{\{i,j\}}$ is still available, or if $g_{\{i,j\}}$, $m_i$ and $m_j$ have all already been picked.
If $g_{\{i,j\}}$ is one of the $k(k-1)/2$ good items already picked by the manipulator in her second round, then $e_{\{i,j\}}$ avoids picking a medium item only if $m_i$ and $m_j$ have already been picked by $v_i$ and $v_j$ in the vertex round, which happens iff $b_i$ and $b_j$ were taken in the first manipulator round. In conclusion, none of the medium items is picked by the edge agents iff the $k(k-1)/2$ edges $\{i,j\}$ for which $g_{\{i,j\}}$ has been picked at the second manipulator round have as endpoints only vertices in $\{v_{i_1},\ldots,v_{i_k}\}$, which is possible iff $\{v_{i_1},\ldots,v_{i_k}\}$ forms a clique in the initial graph $G$. Summarizing, there exists a strategy for \(a_1\) achieving an overall utility of $4k + 3k(k-1)/2 + 2$ iff $G$ has a clique of $k$ nodes.
It remains to show that we could solve the CLIQUE problem if we could determine whether there exists a successful manipulation. This follows from a simple case distinction: if \(u_T = 4k + 3k(k-1)/2 + 2\), then we can directly conclude that there exists a clique of size \(k\); otherwise, \(u_T = 4k + 3k(k-1)/2 + 1\) and there exists a clique of size \(k\) iff there exists a successful manipulation for \(a_1\).
\end{proof}
\textit{Remark:} Aziz et al.~\cite{aziz2017complexity} considered a sequential allocation setting in which the manipulator has a binary utility function but is asked to provide a complete preference order. In this setting, the manipulation problem consists in finding a ranking maximizing the utility of the bundle she gets. While the authors showed that this problem can be solved in polynomial time, the reduction used in the sketch of the proof of Theorem~\ref{theorem: hardness} shows that this problem is NP-hard if the manipulator has a utility function involving four different values (instead of two).\\
Similarly, we obtain that the \(\mathit{ManipSA}\) problem is W[1]-hard w.r.t. the number of agents.
\begin{theorem} \label{theorem: hardness on n}
$\mathit{ManipSA}$ is W[1]-hard w.r.t. the number of agents.
\end{theorem}
\begin{proof}
We design a parameterized reduction from MULTICOLORED CLIQUE. In this problem, given a graph \(G = (V,E)\) with vertex set $V=\{v_1,\ldots,v_n\}$, an integer \(k\), and a vertex coloring \(\phi: V \rightarrow \{1,\ldots,k\}\), we wish to determine if there exists a clique of size \(k\) in \(G\) containing exactly one vertex per color. MULTICOLORED CLIQUE is known to be W[1]-hard w.r.t. parameter \(k\) \cite{fellows2009parameterized}. \\
\emph{Idea of the proof}: We resort to the nice mathematical tool of Sidon sequences. These sequences associate to each number $i$ in $\{1,\ldots, n\}$ a value $id(i)$ such that, for every pair $(i,l)$ with \(i\leq l\), the sum $id(i)+ id(l)$ differs from the sum of any other pair of elements in $\{1,\ldots, n\}$. We use the construction of Erd\H{o}s and Tur\'an \cite{erdos1941problem}, setting \(id(i) = 2pi + (i^2 \mod p)\) for every \(i \in \{1,\ldots,n\}\), where $p$ is the smallest prime number greater than $n$. Notice that, by the Bertrand-Chebyshev theorem \cite{chebyshev1852memoire}, \(p < 2n\), and thus $id(i)=O(n^2)$. This sequence will be used in the following way. We create a large set of items $B_j$ for each color $j$. In the first picking round, the manipulator will be able to pick a large number of items within these sets. To recover a solution of the MULTICOLORED CLIQUE problem, we will show that, if a multicolored clique $\{v_{i_1},\ldots,v_{i_k}\}$ exists in which each vertex $v_{i_j}$ has color $\phi(v_{i_j})=j$, then in an optimal manipulation the manipulator should pick exactly $(k+1) \cdot id(i_j)$ items in each set $B_j$. The edges of the clique will then be identified by the sums $id(i_j)+id(i_r)$ for all pairs of vertices $\{v_{i_j},v_{i_r}\} \subset \{v_{i_1},\ldots,v_{i_k}\}$. \\
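The construction is easy to verify computationally; a minimal Python sketch (ours), where a brute-force assertion checks the Sidon property on a small instance:
\begin{verbatim}
# Erdos-Turan construction: id(i) = 2*p*i + (i^2 mod p), p smallest prime > n.
from itertools import combinations_with_replacement

def next_prime(n):
    is_prime = lambda x: x > 1 and all(x % d for d in range(2, int(x**0.5) + 1))
    x = n + 1
    while not is_prime(x):
        x += 1
    return x

def sidon_ids(n):
    p = next_prime(n)
    return {i: 2 * p * i + (i * i) % p for i in range(1, n + 1)}

n = 30
ids = sidon_ids(n)
sums = [ids[i] + ids[l]
        for i, l in combinations_with_replacement(range(1, n + 1), 2)]
assert len(sums) == len(set(sums)), "pairwise sums must all be distinct"
print(max(ids.values()))  # O(n^2): id(n) < 4n^2 + 2n since p < 2n (Bertrand)
\end{verbatim}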
From a MULTICOLORED CLIQUE instance \((G=(V,E),k,\phi)\), we construct the following \(\mathit{ManipSA}\) instance.\\
\emph{Set of items}:
\begin{itemize}
\item For each color $j$, we create a set $B_j$ of \((k+1)\cdot id(n)\) items, and two sets $Id_j$ and $Id_{\overline{j}}$ of $id(n)+2$ items each.
The purpose of items in $Id_j \cup Id_{\overline{j}}$ is to ensure that the number of items picked by \(a_1\) in \(B_j\) is of the form $(k+1) \cdot id(i)$ such that $\phi(v_i) = j$.
\item For each pair of colors $\{j,r\}$ with $j \neq r$, we create a set $Id_{\{j,r\}}$ of $2 \cdot (id(n) +1)$ items.
The purpose of items in $Id_{\{j,r\}}$ is to ensure that, whenever \(a_1\) picks $(k+1) \cdot id(i)$ items in $B_j$ and $(k+1) \cdot id(l)$ items in $B_r$ for two given vertices $v_i$ and $v_l$ of colors $\phi(v_i) = j$ and $\phi(v_l) = r$, then $\{v_i,v_l\}$ is an edge of $G$.
\item We create a set $D$ of $k(k+1)\cdot id(n)$ items.
In a first picking round, the manipulator will be able to pick items in $\bigcup_{j=1}^k B_j \cup D$. The purpose of items in $D$ is to make it possible for \(a_1\) to adjust the number of items she picks in $\bigcup_{j=1}^k B_j$.
\item Last, we add a set $Z$ of $2k(k+1)id(n)$ items.
Set $Z$ will be used as a buffer of items where each non-manipulator will pick when no items in $Id_j$, $Id_{\overline{j}}$, or $Id_{j,r}$ are left, so as to avoid mutual conflicts.
\end{itemize}
\emph{Set of agents}: We create two agents $c_j$ and $\overline{c}_j$ per color $j$, and two agents $p_{j,r}$ and $p_{r,j}$ for each pair of colors $\{j,r\}$ such that \(j\neq r\). Moreover, we create one agent denoted by $d$ and one manipulator $a_1$. In total, there are $k(k+1)+2$ agents. We now detail the top of the preference rankings of the non-manipulators, where, by abuse of notation, we use $S\succ S'$ to denote the fact that items in $S$ are ranked before the ones in $S'$, the order inside each set being arbitrary.
\begin{itemize}
\item Agent $c_j$, for $1 \leq j \leq k$: $B_j \succ Id_j \succ Z \succ \ldots$.
\item Agent $\overline{c}_j$, for $1 \leq j \leq k$: $B_j \succ Id_{\overline{j}} \succ Z \succ \ldots$.
\item Agent $p_{j,r}$, for $1 \leq j \neq r \leq k$: $B_j \succ Id_{\{j,r\}} \succ Z \succ \ldots.$
\item Agent $p_{r,j}$ for $1 \leq j \neq r \leq k$: $B_r \succ Id_{\{j,r\}} \succ Z \succ \ldots.$
\item Agent $d$: $D \succ Z \succ \ldots$.
\end{itemize}
As an important remark, notice that agents $p_{j,r}$ and $p_{r,j}$ rank items in $Id_{\{j,r\}}$ identically.\\
\emph{Picking sequence}: \(\pi\) is composed of the following rounds:
\begin{itemize}
\item Manipulator round 1: \(a_1\) gets to pick \(k(k+1) \cdot id(n)\) items.
\item Non-manipulators round:
\begin{itemize}
\item Agents in $\mathcal{A} \setminus \{a_1,d\}$ pick in \(id(n)\) subrounds. In each subround, each of them picks exactly one item in the following order: agents \(c_{j}\), $1 \leq j \leq k$, are the first pickers, then come agents $p_{j,r}$,
and lastly agents \(\overline{c}_j\).
\item Finally, agent $d$ picks $k(k+1) \cdot id(n)$ items.
\end{itemize}
\item Manipulator round 2: \(a_1\) gets all remaining items.
\end{itemize}
\emph{Utility values of \(a_1\)}: For ease of presentation, we use two simplifying assumptions. First, we act as if different items can have the same utility value for $a_1$. This assumption can be removed making preferences strict by adding sufficiently small \(\epsilon\) values. Second, we use negative utilities. In fact, one can recover an equivalent instance with only non-negative values by adding to all the utilities the absolute value of the minimal one. Indeed, this would not change the set of optimal solutions as the size of \(a_1\)'s bundle is fixed by \(\pi\).
\begin{itemize}
\item Items in $Z$ have a utility value of $0$.
\item One specific item in each set $B_j$, that we denote by $b_j^*$, has a utility value of $4\alpha$ where $\alpha = (id(n) + 2)k(k+1)$. All items in $(\bigcup_{j=1}^k B_j \cup D)\setminus\{b_1^*,\ldots,b_k^*\} $ have a utility value equal to $2\alpha$.
\item The utilities of the items in the sets $Id_j$ and $Id_{\overline{j}}$ are defined as follows. Index the items in $Id_j$ (resp. $Id_{\overline{j}}$) from $1$ to $id(n)+2$ according to the preference order of agent $c_j$ (resp. $\overline{c}_j$). Furthermore, let $\Tau_{j} = \{id(i)| \phi(v_i) = j\}$, $\tau_{j}(t)$ denote the $t^{th}$ smallest value in $\Tau_{j}$, and $T_{j} = |\Tau_{j}|$. We also set $\tau_{j}(0) = 0$ and $\tau_{j}(T_{j} + 1) = id(n)+2$. Then, all items receive a utility value of $1$, except for the items of indices $\tau_{j}(t)$ for $t\in \{1,\ldots,T_{j}+1\}$, that get utility $\tau_{j}(t-1) - \tau_{j}(t) + 1$. Notice that, for every $t$ such that $1 \leq t \leq T_{j} + 1$, by definition the sum of the utilities of all the items from
$\tau_{j}(t-1)+1$ to $\tau_{j}(t)$ is $0$.
\item Similarly, the utilities of the items in each $Id_{\{j,r\}}$ are set in the following manner. Index these items from $1$ to $2id(n) + 2$ according to the preference order of agents $p_{j,r}$ and $p_{r,j}$. Furthermore, let $\Tau_{j,r} = \{id(i) +id(l)| \phi(v_i) = j, \phi(v_l) = r, \{v_i,v_l\} \in E\}$, $\tau_{j,r}(t)$ denote the $t^{th}$ smallest value in $\Tau_{j,r}$, and $T_{j,r} = |\Tau_{j,r}|$. We also set $\tau_{j,r}(0) = 0$ and $\tau_{j,r}(T_{j,r} + 1) = 2id(n)+2$. Then all items receive a utility value of $1$, except items of index $\tau_{j,r}(t)$ for $t\in \{1,\ldots,T_{j,r}+1\}$, whose utility is set to $\tau_{j,r}(t-1) - \tau_{j,r}(t) + 1$.
\end{itemize}
As we are going to show below, in an optimal manipulation, the agents behave as follows. In the first manipulator round, $a_1$ picks $k(k+1)\cdot id(n)$ items in $\bigcup_j B_j \cup D$. Then, in the non-manipulators round, agents $c_j$, $p_{j,r}$ and $\overline{c}_j$ for the different values $j$ and $r \neq j$ pick the remaining items in the sets $B_j$, plus other items in $Id_j$, $Id_{\overline{j}}$ and $Id_{\{j,r\}}$. Subsequently, $d$ takes all the remaining items in $D$ and further ones to complete her picks in $Z$. Finally, in the second manipulator round, $a_1$ collects all remaining items.\\
\emph{Sketch of the proof.} We first claim that, in the first manipulator round, \(a_1\) should pick only items in $\bigcup_j B_j \cup D$.
Indeed, after the non-manipulators round, none of these items is left, whatever \(a_1\) has previously picked. In particular, the items left by $a_1$ in each set $B_j$ are collected by agents $c_j$, $p_{j,r}$ and \(\overline{c}_j\), while the ones in $D$ are collected by agent $d$. Moreover, because $|\bigcup_j (Id_j\cup Id_{\overline{j}}) \cup \bigcup_{j\neq r} Id_{\{j,r\}}|$ is upper bounded by $\alpha$ and because of the utility values we have set, any subset of items of $\mathcal{I} \setminus (\bigcup_j B_j \cup D)$ has a utility value which is strictly less than $\alpha$ and strictly greater than $-\alpha$. As a result, because each item in $\bigcup_j B_j \cup D$ is worth at least $2\alpha$, any solution which would not pick only items in $\bigcup_j B_j \cup D$ in the first manipulator round could be improved by doing so.
Using the same type of argument, we also claim that $a_1$ should pick all of the $b_j^*$ items in her first picking round. We will hence restrict our attention to picking strategies that verify these two assumptions. Under such a hypothesis, the best utility value $a_1$ can hope to get from the set of items she collects in her second picking round is $0$. This is induced by the utility values that we have set, as well as by the truthful picking strategies of the non-manipulators. Indeed, note that by construction the overall utility of the set $\mathcal{I} \setminus (\bigcup_j B_j \cup D)$ is 0. Moreover, as the sets $Id_j$, $Id_{\overline{j}}$ and $Id_{\{j,r\}}$ are indexed according to the preference orders of agents $c_j$, $\overline{c}_j$, $p_{j,r}$ and $p_{r,j}$, at the end of the non-manipulators round only prefixes of these sets have been picked. Hence, recalling that all the items in $Z$ have null utility for $a_1$, the overall utility of the items left to $a_1$ at the beginning of the second manipulator round is $0$ if and only if the prefix of already picked items in each of the sets $Id_j$, $Id_{\overline{j}}$ and $Id_{\{j,r\}}$ ends at an item of negative utility for $a_1$.
We now argue that this can happen if and only if there exists a multicolored clique in $G$.
Let us first show the only if direction, i.e., that if $a_1$ gets an overall utility equal to $0$ from the set of items she collects in her second picking round, then there is a multicolored clique of size $k$ in $G$. Let us denote by $nb_j$ the number of items that $a_1$ has picked in $B_j$ during the first manipulator round. Since for each $j\in \{1,\ldots,k\}$ agent $a_1$ has picked $b_j^*$ and $|B_j|=(k+1)id(n)$, we have that $1\le nb_j \le (k+1)id(n)$. We first show that $nb_j$ should be a multiple of $k+1$.
To this aim, let us first observe that, for each $j\in \{1,\ldots,k\}$, after the first manipulator round, in every non-manipulators subround, $k+1$ items of $B_j$ (if still available) are picked by the $k+1$ agents $c_j$, $p_{j,r}$ with $j\neq r$ and $\overline{c}_j$ (in this order).
Therefore, at the end of the non-manipulators rounds, $c_j$ has picked $\lfloor nb_j/(k+1)\rfloor$ items in $Id_{j}$ and $\overline{c}_j$ has picked $\lceil nb_j/(k+1)\rceil$ items in $Id_{\overline{j}}$. But then, if $nb_j$ is not a multiple of $k+1$, these two numbers are different and thus the last items picked by $c_j$ in $Id_j$ and by $\overline{c}_j$ in $Id_{\overline{j}}$ cannot both have negative utility for $a_1$, because the difference between two consecutive $id$ values is strictly greater than $1$. Therefore, each $nb_j$ should be of the form $nb_j = (k+1) \cdot id(i_j)$ for some $i_j\in \{1,\ldots,n\}$ such that $\phi(v_{i_j}) = j$, so that both $c_j$ and $\overline{c}_j$ pick $id(i_j)$ items in $Id_j$ and $Id_{\overline{j}}$, respectively.
In order to show that $\{v_{i_j}|1\le j\le k\}$ is a multicolored clique, it remains to prove that all the vertices of this set are neighbors in $G$. Indeed, since in each subround of the non-manipulators round, every time $c_j$ picks in $Id_j$, each agent $p_{j,r}$ picks in $Id_{\{j,r\}}$,
at the end of the non-manipulators round \(p_{j,r}\) and \(p_{r,j}\) have together picked $id(i_j)+id(i_r)$ items in $Id_{\{j,r\}}$. Since, in order for $a_1$ to achieve an overall utility equal to $0$ in the second manipulator round, the last item previously picked in $Id_{\{j,r\}}$ must have a negative utility, \(\{v_{i_j}, v_{i_r}\}\) must be an edge of \(G\).
It remains to show the if direction, i.e., that if there is a multicolored clique $\{v_{i_1},\ldots,v_{i_k}\}$ in $G$, then there exists a strategy leading $a_1$ to reach overall utility $0$ in her second manipulator round. Assuming without loss of generality that $\phi(v_{i_j}) = j$, this can be accomplished by letting $a_1$ pick $nb_j = (k+1) \cdot id(i_j)$ items in $B_j$, $1 \leq j \leq k$, and the remaining items in $D$. Then, each $c_j$ (resp. $\overline{c}_j$) will pick
$id(i_j)$ items in $Id_j$ (resp. $Id_{\overline{j}}$) and each $p_{j,r}$ will pick $id(i_j)$ items in $Id_{\{j,r\}}$, which causes $a_1$ to achieve overall utility $0$ in her second manipulator round, finally proving the claim.
\end{proof}
Consequently, from Theorems~\ref{theorem: hardness} and \ref{theorem: hardness on n}, it is unlikely that the \(\mathit{ManipSA}\) problem admits FPT algorithms w.r.t. parameters \(\mu(a_1)\) and \(n\). Hence, these results highlight the value of the XP results on these parameters obtained in Section~\ref{Sect:positive}, as well as of Theorem~\ref{theorem n + mu}, which interestingly shows that the \(\mathit{ManipSA}\) problem is FPT w.r.t. the combined parameter \(\mu(a_1) + n\).
\section{Conclusion and Future Work}
We have provided a variety of results on the problem of finding an optimal manipulation in the sequential allocation protocol. Besides an integer program to solve this problem, we have designed a dynamic programming algorithm from which we have derived several positive parameterized complexity results. For instance, we have shown that this manipulation problem is in XP with respect to the number of agents and that it is FPT with respect to the maximum range of an item. Conversely, we have also provided matching W[1]-hardness results.
Lastly, motivated by the fact that agents could be inclined to behave truthfully if a manipulation were not worth the effort, we have investigated an upper bound on the increase in utility that the manipulator could get by manipulating. We have shown that the manipulator cannot increase the utility of her bundle by a factor greater than or equal to 2 and that this bound is tight. Overall, our results show not only that sequential allocations are worth manipulating, but also that they can be manipulated efficiently for a wide range of instances.
Several directions for future work are conceivable. One could try to decrease our upper bound on the increase in utility that the manipulator can obtain by restricting to specific instances (e.g., imposing a specific type of picking sequences). Moreover, it would be worth investigating the price of manipulation, i.e., the worst case ratio between the social welfare when one agent manipulates, all the others being truthful, and the one when all the agents behave truthfully. On this issue, to the best of our knowledge, little is known except for some preliminary results by Bouveret and Lang~\cite{bouveret2014manipulating}. \\
\section{Setting and Notations}
We consider a set \(\mathcal{A} = \{a_1,\ldots,a_n\}\) of \(n\) agents and a set \(\mathcal{I} = \{i_1,\ldots,i_m\}\) of \(m\) items. A preference profile \(P = \{\succ_{a_1}, \ldots,\succ_{a_n}\}\) describes the preferences of the agents.
More precisely, \(P\) is a collection of rankings such that ranking \(\succ_a\) specifies the preferences of agent \(a\) over the items in \(\mathcal{I}\). The items are allocated to the agents according to the following sequential allocation procedure: at each time step, the picking sequence \(\pi \in \mathcal{A}^m\) specifies the agent who gets to pick an item among the remaining ones. Put another way, \(\pi(1)\) picks first, then \(\pi(2)\) picks second, and so forth. We assume that agents behave greedily by choosing at each time step their preferred item among the remaining ones. If we view sequential allocation as a centralized protocol, then all agents report their preference rankings to a central authority which mimics this picking process. In the following, w.l.o.g., we use this centralized viewpoint where agents have to report their preference rankings. This sequential process leads to an allocation that we denote by \(\phi\). More formally, \(\phi\) is a function such that \(\phi(a)\) is the set of items that agent \(a\) has obtained at the end of the sequential allocation process.
\begin{example}[Adapted from Example 1 in~\cite{aziz2017complexity}] \label{example:running}
For the sake of illustration, we consider an instance with 3 agents and 4 items, i.e., \(\mathcal{A} = \{a_1 , a_2 , a_3\}\) and \(\mathcal{I} = \{i_1, i_2, i_3, i_4\}\). The preferences of the agents
are described by the following
profile:
\begin{align*}
a_1 &: i_1 \succ i_2 \succ i_3 \succ i_4\\
a_2 &: i_3 \succ i_4 \succ i_1 \succ i_2\\
a_3 &: i_1 \succ i_2 \succ i_3 \succ i_4
\end{align*}
and the picking sequence is \(\pi = (a_1, a_2, a_3, a_1)\). Then, \(a_1\) will first pick \(i_1\), then \(a_2\) will pick \(i_3\), \(a_3\) will pick \(i_2\), and lastly \(a_1\) will pick \(i_4\). Hence, the resulting allocation is given by \(\phi(a_1) = \{i_1, i_4\}\), \(\phi(a_2) = \{i_3\}\) and \(\phi(a_3) = \{i_2\}\).
\end{example}
The allocation \(\phi\) is completely determined by the picking sequence \(\pi\) and the preference profile \(P\). Notably, if one of the agents reports a different preference ranking, she may obtain a different set of items. This different set may even be more desirable to her. Consequently, agents may have an incentive to misreport their preferences.
\begin{example}[Example \ref{example:running} continued]
Assume now that agent \(a_1\) reports the preference ranking \(i_3 \succ i_2 \succ i_1 \succ i_4\). Then she obtains the set of items \(\phi(a_1) = \{i_2, i_3\}\). This set of items may be more desirable to \(a_1\) than \(\{i_1, i_4\}\) if for instance items \(i_1\), \(i_2\) and \(i_3\) are almost as desirable as one another, but are all three much more desirable than \(i_4\).
\end{example}
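Both examples can be replayed in a few lines of code. The following minimal Python sketch (ours, purely illustrative) simulates the greedy sequential allocation process:
\begin{verbatim}
# Sequential allocation with truthful greedy picks.
def sequential_allocation(pi, rankings):
    remaining = {i for r in rankings.values() for i in r}
    bundles = {a: set() for a in rankings}
    for a in pi:
        # agent a takes her best remaining item
        pick = next(i for i in rankings[a] if i in remaining)
        remaining.remove(pick)
        bundles[a].add(pick)
    return bundles

rankings = {
    "a1": ["i1", "i2", "i3", "i4"],
    "a2": ["i3", "i4", "i1", "i2"],
    "a3": ["i1", "i2", "i3", "i4"],
}
pi = ["a1", "a2", "a3", "a1"]
print(sequential_allocation(pi, rankings)["a1"])  # {'i1', 'i4'}

rankings["a1"] = ["i3", "i2", "i1", "i4"]         # the misreport above
print(sequential_allocation(pi, rankings)["a1"])  # {'i3', 'i2'}
\end{verbatim}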
In this work, we study this type of manipulation. We will assume that agent \(a_1\) is the manipulator and that all other agents behave truthfully. Although the agents are asked to report ordinal preferences, we will make the standard assumption that \(a_1\) has underlying additive utilities for the items. More formally, the preferences of \(a_1\) over items in \(\mathcal{I}\) are described by a set of positive values \(U = \{u(i)|i \in \mathcal{I}\}\) such that \(i \succ_{a_1} j\) implies \(u(i) > u(j)\). The utility of a set of items \(S\) is then obtained by summation, i.e., \(u(S) = \sum_{i\in S} u(i)\).
We will denote by \(\succ_T\) the truthful preference ranking of \(a_1\), and by \(\phi_{\succ}\) the allocation obtained if agent \(a_1\) reports the preference ranking \(\succ\). Moreover, we will denote by \(u_T\) the value \(u(\phi_{\succ_T}(a_1))\) which is the utility value of \(a_1\)'s allocation when she behaves truthfully. We will say that a preference ranking \(\succ\) is a successful manipulation if \(a_1\) prefers \(\phi_{\succ}(a_1)\) to \(\phi_{\succ_T}(a_1)\), i.e., if \(u(\phi_{\succ}(a_1)) > u_T\).
Hence, the objective for the manipulator is to find a successful manipulation \(\succ\) maximizing \(u(\phi_{\succ}(a_1))\). We are now ready to define formally the problem of \textbf{M}anipulating a \textbf{S}equential \textbf{A}llocation process, called \(\mathit{ManipSA}\).
\begin{cproblem}{\(\mathit{ManipSA}\)}
Input: A set \(\mathcal{A} = \{a_1, a_2, \ldots, a_n\}\) of \(n\) agents where \(a_1\) is the manipulator, a set \(\mathcal{I} = \{i_1,\ldots,i_m\}\) of \(m\) items, a picking sequence \(\pi\), a preference profile \(P\) and a set \(U\) of utility values for \(a_1\).
Find: A preference ranking \(\succ\) maximizing \(u(\phi_{\succ}(a_1))\).
\end{cproblem}
The \(\mathit{ManipSA}\) problem is known to be \(\mathit{NP}\)-hard \cite{aziz2017complexity}. In this work, we will address this optimization problem from a parameterized complexity point of view. We will be mostly interested in three types of parameters: the number of agents, the number of items that an agent gets to pick, and the range of the items. While the number of agents \(n\) is already clear, let us define the other parameters more formally. We denote by \(\mu(a)\) the number of items that agent \(a\) gets to choose in \(\pi\) and by \(\mu^{\mathtt{max}}\) the maximum of these values, i.e., \(\mu^{\mathtt{max}} = \max\{\mu(a) | a\in \mathcal{A}\}\). Let \(\mathtt{rk}_a(i)\) denote the rank of item \(i\) in the preference ranking of agent \(a\). Then, we define the range \(\mathtt{rg}(i)\) of an item \(i\) as:
\[\mathtt{rg}(i) = \max_{a\in \mathcal{A}\setminus\{a_1\}} \mathtt{rk}_a(i) - \min_{a\in \mathcal{A}\setminus\{a_1\}} \mathtt{rk}_a(i) + 1.\notag \]
Note that we define the range of an item using only non-manipulators. The maximum range of an item \(\mathtt{rg}^{\mathtt{max}}\) is then defined as \(\max_{i\in \mathcal{I}} \mathtt{rg}(i)\).
Let us give some intuitions on parameters $\mu(a_1)$ and \(\mathtt{rg}^{\mathtt{max}}\).
In the $\mathit{ManipSA}$ problem, $\mu(a_1)$ can be seen as a budget parameter for the manipulator. Intuitively, the larger the value of $\mu(a_1)$, the more she can manipulate. It is also the size of a feasible solution, i.e., the size of the bundle \(a_1\) will get. Interestingly, in real-life applications, $\mu(a_1)$ can be much smaller than $|\mathcal{I}|$ and even much smaller than $|\mathcal{A}|$. For these reasons, $\mu(a_1)$ is an interesting parameter to study in the $\mathit{ManipSA}$ problem.
On the other hand, parameter \(\mathtt{rg}^{\mathtt{max}}\) measures the correlation between the preferences of the non-manipulators. If \(\mathtt{rg}^{\mathtt{max}}=1\) (its minimal possible value), then all non-manipulators have the same preference ranking. In this case, the manipulation problem becomes easy to solve, as all non-manipulators can be treated as a single agent and the case of two agents is known to be polynomial-time solvable. This simple insight can let us hope that the manipulation problem remains tractable if this parameter is small. An important motivation behind parameter \(\mathtt{rg}^{\mathtt{max}}\) is that, in practice, the preferences of different agents are often correlated.
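For illustration, the range parameters are immediate to compute on the running example (a minimal sketch of ours; only the non-manipulators \(a_2\) and \(a_3\) count, and ranks are 1-indexed):
\begin{verbatim}
# Range rg(i) of each item and rg_max over the non-manipulators a2, a3.
rankings = {
    "a2": ["i3", "i4", "i1", "i2"],
    "a3": ["i1", "i2", "i3", "i4"],
}
rank = {a: {i: r + 1 for r, i in enumerate(pref)}
        for a, pref in rankings.items()}

def rg(item):
    ranks = [rank[a][item] for a in rankings]
    return max(ranks) - min(ranks) + 1

items = ["i1", "i2", "i3", "i4"]
print({i: rg(i) for i in items})   # every item has range 3 here
print(max(rg(i) for i in items))   # rg_max = 3
\end{verbatim}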
\section{Positive Parameterized Complexity Results on the $\mathit{ManipSA}$ Problem}\label{Sect:positive}
To solve the \(\mathit{ManipSA}\) problem, one can simply try the \(m!\) possible preference rankings and see which ones yield the maximum utility value. However, a more clever approach uses the following result.
\begin{fact} [From Propositions 7 and 8 in \cite{bouveret2011general}]
Given a specific set of items \(S\), it is possible to determine in polynomial time whether there exists a preference ranking \(\succ\) such that \(S \subseteq \phi_{\succ}(a_1)\). In such a case, it is also easy to compute such a ranking.
\end{fact}
Hence, one can try all the \(\binom{m}{\mu(a_1)}\)
possible sets of \(\mu(a_1)\) items to determine which is the best one that \(a_1\) can get. This approach shows that the \(\mathit{ManipSA}\) problem is in XP w.r.t. parameter \(\mu(a_1)\).
\begin{example}[Example~\ref{example:running} continued] As in this example \(\mu(a_1) = 2\), \(a_1\) just has to determine which one of the \({4 \choose 2}\) following sets she must obtain: \(\{i_1,i_2\}, \{i_1,i_3\}, \{i_1,i_4\}, \{i_2,i_3\}\), \(\{i_2,i_4\}, \{i_3,i_4\}\).
\end{example}
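On such a toy instance one can even afford the naive search over all \(m! = 24\) rankings mentioned above; a minimal sketch (ours), using the illustrative utilities introduced with the dynamic programming example below:
\begin{verbatim}
# Brute force over all rankings of a1; confirms {i2, i3} is optimal.
from itertools import permutations

truthful = {
    "a2": ["i3", "i4", "i1", "i2"],
    "a3": ["i1", "i2", "i3", "i4"],
}
pi = ["a1", "a2", "a3", "a1"]
u = {"i1": 5, "i2": 4, "i3": 3, "i4": 1}   # utilities of the example below

def bundle_of_a1(ranking_a1):
    rankings = {"a1": list(ranking_a1), **truthful}
    remaining, bundle = set(u), []
    for a in pi:
        pick = next(i for i in rankings[a] if i in remaining)
        remaining.remove(pick)
        if a == "a1":
            bundle.append(pick)
    return bundle

best = max(permutations(u), key=lambda r: sum(u[i] for i in bundle_of_a1(r)))
print(best, bundle_of_a1(best))   # an optimal report yields {i3, i2}
\end{verbatim}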
To obtain further positive results, we first design a dynamic programming scheme.
We will then explain how this scheme entails several positive parameterized complexity results.
\subsection{A Dynamic Programming Scheme}
Our dynamic programming approach considers pairs \((k,S)\) where \(S \subseteq \mathcal{I}\) and \(k \in \{0,1,\ldots,\mu(a_1)\}\). In a state characterized by the pair \((k,S)\), we know that the items in \(S\) have been picked by some agents in \(\mathcal{A}\) as well as \(k\) other items that have been picked by agent \(a_1\). However, while the items in \(S\) are clearly identified, the identities of these \(k\) other items are unspecified.
Given a pair \((k,S)\), the number of items that have already been picked is \(|S| + k\). Hence, the next picker is \(\pi(|S| + k + 1)\). Let us denote this agent by \(a\). If \(a = a_1\), i.e., she is the manipulator, then she will pick one more item within the set \(\mathcal{I} \setminus S\) and we move to state \((k+1,S)\).
Otherwise, let \(b(a,S)\) denote the preferred item of agent \(a\) within the set \(\mathcal{I} \setminus S\). Then, two cases are possible:
\begin{itemize}
\item In the first case, \(b(a,S)\) has already been picked by agent \(a_1\). Then, we move to state \((k - 1, S \cup \{b(a,S)\})\). Note that this is only possible if \(k\) was greater than or equal to one.
\item Otherwise \(b(a,S)\) is picked by agent \(a\) and we move to state \((k,S\cup \{b(a,S)\})\).
\end{itemize}
Let us denote by \(V(k,S)\) the maximal utility that the manipulator can get from items in \(\mathcal{I}\setminus S\) starting from state \((k,S)\). Then the value of an optimal manipulation is given by \(V(k = 0, S = \emptyset)\). From the previous analysis, function \(V\) verifies the following dynamic programming equations:
\begin{align}
V(k,S) \!&= \!V(k+1 , S) \text{ if } \pi(|S| + k + 1) = a_1 \label{eq:dp1}\\
V(k=0,S) \!&=\! V(k,S\!\cup\!\{b(a,S)\}) \!\text{ if }\! \pi(|S| + k + 1)\! =\! a \!\neq\! a_1 \label{eq:dp2}\\
V(k>0,S) \!&=\! \max\big(V(k-1,S\cup\{b(a,S)\}) + u(b(a,S)), \notag\\
V(k&,S\cup\{b(a,S)\})\big) \text{ if } \pi(|S| + k + 1) = a \neq a_1 \label{eq:dp3}
\end{align}
where the termination is guaranteed by the fact that \(V(k,S) = \sum_{i \in \mathcal{I} \setminus S} u(i)\) when \(|S| + k = m\).
Equations~\ref{eq:dp1}-\ref{eq:dp3} induce a directed acyclic state graph \(\DPG=(\DPV,\DPA)\), where \(\DPV\) is the set of states generated from state \((k=0,S=\emptyset)\) when using these equations and the arcs in \(\DPA\) connect each state to its successor states. We will also denote by \(\DPS = \{S|(k,S)\in\DPV\}\) the set of item-sets \(S\) that are involved in \(\DPV\).
Notably, solving Equations~\ref{eq:dp1}-\ref{eq:dp3} can be performed by building graph \(\DPG\) and running backward induction on it (from the lastly generated states to the initial state \((k=0,S=\emptyset)\)). By standard bookkeeping techniques one can also identify the items that are taken in an optimal manipulation and the order in which they are taken and then recover an optimal ranking to report (where these items are ranked first and in the same order).
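Equations~\ref{eq:dp1}--\ref{eq:dp3} translate directly into a memoized recursion. The following minimal Python sketch (ours) uses the item representation, with \(S\) stored as a frozenset, and is hard-coded on the running example:
\begin{verbatim}
# Dynamic programming V(k, S) for the running example.
from functools import lru_cache

pi = ["a1", "a2", "a3", "a1"]
rankings = {                                # non-manipulators only
    "a2": ["i3", "i4", "i1", "i2"],
    "a3": ["i1", "i2", "i3", "i4"],
}
u = {"i1": 5, "i2": 4, "i3": 3, "i4": 1}
m = len(u)

def b(a, S):                                # preferred remaining item of a
    return next(i for i in rankings[a] if i not in S)

@lru_cache(maxsize=None)
def V(k, S):
    if k + len(S) == m:                     # termination: a1 holds I \ S
        return sum(u[i] for i in u if i not in S)
    a = pi[len(S) + k]                      # next picker pi(|S| + k + 1)
    if a == "a1":
        return V(k + 1, S)                  # first DP equation
    i = b(a, S)
    if k == 0:
        return V(0, S | {i})                # second: a must take b(a, S)
    return max(V(k - 1, S | {i}) + u[i],    # third: b(a,S) was a1's pick,
               V(k, S | {i}))               #        or a takes it now

print(V(0, frozenset()))                    # optimal manipulated utility: 7
\end{verbatim}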
\begin{example}[Example \ref{example:running} continued]
Let us illustrate our approach on our running example. We let \(u(i_1) = 5\), \(u(i_2) = 4\), \(u(i_3) = 3\) and \(u(i_4) = 1\). Figure \ref{fig:Dynprog} displays the resulting state graph \(\DPG\). The values of the states are given next to them and the optimal branches are displayed with thick solid arrows. In this example, \(\DPS = \{\emptyset, \{i_3\}, \{i_1,i_3\},\{i_3,i_4\},\{i_1,i_2,i_3\},\{i_1,i_3,i_4\}\}\). As we can see, the optimal choice for \(a_1\) is to first pick \(i_3\) so that \(i_2\) is still available the second time she becomes the picker. Hence, an optimal manipulation is given by \(\succ = i_3 \succ i_2 \succ i_1 \succ i_4\), which results in the allocation \(\phi_{\succ}(a_1) =\{i_2, i_3\}\) with a utility of $7$, whereas \(u_T = 6\).
\end{example}
\begin{figure}[!t]
\centering
\scalebox{0.9}{\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=7cm,semithick]
\node[rectangle,draw,text=black] (A0) at (0,2) {$(0,\emptyset)$};
\node[text=black] (A0V) at (1, 2) {$7$};
\node[text=black] (A0V) at (-7, 2) {$\pi(1) = a_1$};
\node[rectangle,draw,text=black] (A) at (0,0.5) {$(1,\emptyset)$};
\node[text=black] (AV) at (1, 0.5) {$7$};
\node[text=black] (A0V) at (-7, 0.5) {$\pi(2) = a_2$};
\node[rectangle,draw,text=black] (B) at (-2,-3) {$(1,\{i_3\})$};
\node[text=black] (BV) at (-0.9, -3) {$6$};
\node[text=black] (A0V) at (-7, -3) {$\pi(3) = a_3$};
\node[text=black] (A0V) at (-7, -6.5) {$\pi(4) = a_1$};
\node[rectangle,draw,text=black] (C) at (1,-1.5) {$(0,\{i_3\})$};
\node[text=black] (CV) at (2, -1.5) {$4$};
\node[rectangle,draw,text=black] (D) at (-5,-6.5) {$(1,\{i_1,i_3\})$};
\node[text=black] (DV) at (-3.8, -6.5) {$5$};
\node[rectangle,draw,text=black] (E) at (-2,-5) {$(0,\{i_1,i_3\})$};
\node[text=black] (EV) at (-0.5, -5) {$1$};
\node[rectangle,draw,text=black] (F) at (3, -3) {$(0,\{i_3,i_4\})$};
\node[text=black] (FV) at (4.5, -3) {$4$};
\node[rectangle,draw,text=black] (G) at (-5, -8) {$(2,\{i_1,i_3\})$};
\node[text=black] (GV) at (-3.8, -8) {$5$};
\node[rectangle,draw,text=black] (H) at (-2, -6.5) {$(0,\{i_1,i_2,i_3\})$};
\node[text=black] (HV) at (-0.5, -6.5) {$1$};
\node[rectangle,draw,text=black] (I) at (-2, -8) {$(1,\{i_1,i_2,i_3\})$};
\node[text=black] (IV) at (-0.5, -8) {$1$};
\node[rectangle,draw,text=black] (J) at (3, -6.5) {$(0,\{i_1,i_3,i_4\})$};
\node[text=black] (JV) at (4.5, -6.5) {$4$};
\node[rectangle,draw,text=black] (K) at (3, -8) {$(1,\{i_1,i_3,i_4\})$};
\node[text=black] (KV) at (4.5, -8) {$4$};
\node[text=black,text width = 4cm] (L) at (-5, -9.5) {$a_1$ has picked $i_2,i_4$, \(u(i_2) + u(i_4) = 5\)};
\node[text=black,text width = 2.8cm] (M) at (-2, -9.5) {$a_1$ has picked $i_4$, \(u(i_4) = 1\)};
\node[text=black,text width = 2.8cm] (N) at (3, -9.5) {$a_1$ has picked $i_2$, \(u(i_2) = 4\)};
\path (A0) edge[line width=1.0pt] node {} (A)
(A) edge [dotted,line width=0.5pt,left] node {\(a_2\) picks \(i_3\)} (B)
(A) edge [line width=1.0pt,text width = 2.8cm, pos = 0.8] node {\(a_1\) has picked \(i_3\), \(u(i_3) = 3\)} (C)
(B) edge[left,dotted,line width=0.5pt] node {\(a_3\) picks \(i_1\)} (D)
(B) edge [line width=1.0pt,text width = 2.8cm] node {\(a_1\) has picked \(i_1\), \(u(i_1) = 5\)} (E)
(C) edge [line width=1.0pt] node {\(a_2\) picks \(i_4\)} (F)
(D) edge [line width=1.0pt] node {} (G)
(E) edge [line width=1.0pt] node {\(a_3\) picks \(i_2\)} (H)
(H) edge [line width=1.0pt] node {} (I)
(F) edge [line width=1.0pt] node {\(a_3\) picks \(i_1\)} (J)
(J) edge [line width=1.0pt] node {} (K)
(G) edge [line width=1.0pt] node {} (L)
(I) edge [line width=1.0pt] node {} (M)
(K) edge [line width=1.0pt] node {} (N);
\path [draw = black, rounded corners, inner sep=100pt,dotted]
(-7.8,2.5)
-- (4.7,2.5)
-- (4.7,1)
-- (-7.8,1)
-- cycle;
\path [draw = black, rounded corners, inner sep=100pt,dotted]
(-7.8,0.8)
-- (4.7,0.8)
-- (4.7,-2.3)
-- (-7.8,-2.3)
-- cycle;
\path [draw = black, rounded corners, inner sep=100pt,dotted]
(-7.8,-2.5)
-- (4.7,-2.5)
-- (4.7,-6)
-- (-7.8,-6)
-- cycle;
\path [draw = black, rounded corners, inner sep=100pt,dotted]
(-7.8,-6.1)
-- (4.7,-6.1)
-- (4.7,-7.5)
-- (-7.8,-7.5)
-- cycle;
\end{tikzpicture}}
\caption{\label{fig:Dynprog} Directed acyclic state graph \(\DPG\) in Example \ref{example:running}}
\end{figure}
\subsection{Complexity Analysis}
We now provide
positive parameterized complexity results by proving several upper bounds on \(|\DPV|\). In fact, we will prove bounds on \(|\DPS|\) and use the observation that \(|\DPV| \le (\mu(a_1)+1) |\DPS| \le m |\DPS|\) as there are only \(\mu(a_1)+1\) possible values for $k$ in a state $(k,S)$.
\paragraph{The algorithm is XP w.r.t. parameter \(n\).}
Let \(D(a,i)\) denote the set of items that agent \(a\) prefers to item \(i\), i.e., \(D(a,i) = \{j \in \mathcal{I} | j \succ_a i\}\). Then, for any set \(S\subseteq\mathcal{I}\), the definition of \(b(a,S)\), which we recall is the preferred element of agent \(a\) in set \(\mathcal{I} \setminus S\), implies that \(\bigcup_{a \in \mathcal{A}\setminus\{a_1\}}D(a,b(a,S)) \subseteq S\). Let us denote by \(\Delta\) the set of item-sets for which the equality holds, i.e., \(\Delta = \{S\subseteq\mathcal{I} | \bigcup_{a \in \mathcal{A}\setminus\{a_1\}}D(a,b(a,S)) = S \}\). Note that a set \(S\in \Delta\) is completely determined by the vector \((b(a_2,S), \ldots, b(a_n,S))\) and thus \(|\Delta| \leq m^{n-1}\). Our first key insight is that \(\DPS\) is a subset of \(\Delta\).
\begin{lemma} \label{lemma:Delta}
The set \(\DPS\) is a subset of \(\Delta\).
\end{lemma}
\begin{proof}
Let us show by induction that for each state \((k,S)\in \DPV\), \(S\) is in \(\Delta\).
The result is true for the initial state in which \(S = \emptyset\). Assume that the result is true for a state \((k,S)\). We show that the result also holds for the successor states. If \(\pi(|S| + k + 1) = a_1\), then the successor state is \((k+1,S)\) so the result is also true for this new state as \(S\) is unchanged. Otherwise, let \(\pi(|S| + k + 1) = a^*\), then the successor states are \((k,S\cup\{b(a^*,S)\})\) and \((k-1,S\cup\{b(a^*,S)\}) \). Then we have the following two inclusion relationships:
\begin{align*}
D(a^*,b(a^*,S))\cup\{b(a^*,S)\} \subseteq D(a^*,b(a^*,S\cup\{b(a^*,S)\})), \\
\forall a \in \mathcal{A} \setminus \{a_1\}, D(a,b(a,S)) \subseteq D(a,b(a,S\cup\{b(a^*,S)\})).
\end{align*}
These relationships imply that $S\cup\{b(a^*,S)\}$ is equal to:
\begin{align*}
\bigcup_{a \in \mathcal{A}\setminus\{a_1\}}\!\!\!\!\!\!D(a,b(a,S)) \cup \{b(a^*,S)\} \subseteq \!\!\!\!\! \bigcup_{a \in \mathcal{A}\setminus\{a_1\}} \!\!\!\!\!\!D(a,b(a,S \cup \{b(a^*,S)\})),
\end{align*}
by the induction hypothesis. As already stated, the reverse inclusion relationship is always true and hence \(S\cup\{b(a^*,S)\} \in \Delta\).
\end{proof}
Consequently from Lemma~\ref{lemma:Delta}, each state in \(\DPV\) admits two possible representations that we call {\em agent representation} and {\em item representation}. In the item representation, a state \((k,S)\) is represented by a vector of size \(m+1\), i.e., \(S\) is represented by a binary vector of size \(m\). In the agent representation, a state \((k,S)\) is represented by a vector of size \(n\). In this case, \(S\) is replaced by the vector \((b(a_2,S), \ldots, b(a_n,S))\). Note that processing a state (computing the successor states and the optimal value of the state from those of its successors) in the agent (resp. item) representation can be done in $O(nm)$ (resp. $O(m)$) operations.
We now show that the \(\mathit{ManipSA}\) problem can be solved in polynomial time for any bounded number of agents.
\begin{theorem} \label{theorem : n}
Problem \(\mathit{ManipSA}\) is solvable in \(O(n \cdot m^{n+1})\). As a result, \(\mathit{ManipSA}\) is in XP w.r.t. parameter \(n\).
\end{theorem}
\begin{proof}
The result follows from the fact that our dynamic programming scheme runs in \(O(n \cdot m^{n+1})\). To obtain this complexity bound, one should use the agent representation. In this case, processing a state requires \(O(nm)\) operations, and one can use a dynamic programming table of size \(m^n\) with one cell per possible vector \((k,b(a_2,S), \ldots, b(a_n,S))\).
\end{proof}
We now argue that our dynamic programming approach yields an FPT algorithm w.r.t. parameters \(n + \mu(a_1)\), \(n + \mathtt{rg}^{\max}\) and \(\mathtt{rg}^{\max}\) by providing tighter upper bounds on \(|\DPS|\). To use these bounds, we will need the two following lemmata:
\begin{lemma} \label{lemma: building graph}
Under the item representation, the graph \(\DPG\) can be built in $O(m|\DPV|^2)$.
\end{lemma}
\begin{proof}
First note that \(\DPG\) is indeed acyclic. Indeed, given a state that can occur at time step \(t\) of the allocation process (i.e., \(k+|S| = t\)), its successors will either correspond to time step \(t+1\) or will still correspond to time step \(t\) but with a strictly lower value for parameter \(k\). We now show how to incrementally build \(\DPG\) from state \((k = 0,S = \emptyset)\). For each new state generated at the previous iteration, compute its successor states, add edges towards them, and label them with the corresponding utility values. Moreover, each time a state is generated, compare it to the states already generated to avoid the creation of duplicates. If it is indeed a new state, its successors will be computed in the next iteration. This process is repeated until all states are generated. Note that because each state is only processed once, we will generate at most \(2|\DPV|\) states. However, because of the duplicate removal operation performed each time a state is generated, the method runs in $O(m|\DPV|^2)$. Indeed, this step will trigger \(O(|\DPV|^2)\) comparisons, each requiring \(m+1\) operations.
\end{proof}
\begin{lemma} \label{lemma: solving graph}
Under the item representation, problem \(\mathit{ManipSA}\) can be solved in $O(m|\DPV|^2)$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lemma: building graph}, we can build \(\DPG\) in $O(m|\DPV|^2)$ and compute an optimal manipulation by backward induction in $O(m|\DPV|)$.
\end{proof}
\paragraph{The algorithm is FPT w.r.t. parameter \(n + \mu(a_1)\).}
We further argue that \(|\DPS|\) can be upper bounded by \(m(\mu(a_1)+1)^{n-1}\). This is a consequence of the following lemma, where \(\mu(a,t)\) denotes the number of items that agent \(a\) gets to pick within the \(t\) first time steps.
\begin{lemma} \label{lemma: n + mu}
For each time step \(t\), there is a set \(S_t\) of \(t - \mu(a_1,t)\) items that are always picked within the first \(t\) time steps, whatever the actions of the manipulator.
\end{lemma}
\begin{proof}[Sketch of the proof]
Given an instance \(\mathcal{J}\) of the \(\mathit{ManipSA}\) problem, consider the instance \(\mathcal{J}^{-a_1}\) obtained from \(\mathcal{J}\) by removing \(a_1\). Moreover, let us denote by \(S_t^{-a_1}\) the set of items picked at the end of the \(t^{th}\) time step in \(\mathcal{J}^{-a_1}\). This set of size \(t\) is clearly defined as all agents behave truthfully in \(\mathcal{J}^{-a_1}\). We argue that after \(t\) time steps in \(\mathcal{J}\), all items in \(S_{t- \mu(a_1,t)}^{-a_1}\) have been picked whatever the actions of \(a_1\). This can be shown by induction because at each time step where the picker is a non-manipulator, she will pick the same item as in \(\mathcal{J}^{-a_1}\) unless this item has already been picked.
\end{proof}
As a consequence of Lemma~\ref{lemma: n + mu}, for each possible set \(S\) that can appear at time step \(t\) and agent \(a\in \mathcal{A}\setminus\{a_1\}\), \(b(a,S)\) can only be \(\mu(a_1,t-1)+1\) different items. More precisely, \(b(a,S)\) has to be one of the \(\mu(a_1,t-1)+1\) preferred items of \(a\) in \(\mathcal{I}\setminus S_{t-1}\). As a result, the number of possible vectors \((b(a_2,S), \ldots, b(a_n,S))\) associated to all the possible sets \(S\) that can appear at time step \(t\) is upper bounded by \((\mu(a_1,t-1)+1)^{n-1}\). Lastly, by using the facts that each set \(S\in \DPS\) is characterized by the vector \((b(a_2,S), \ldots, b(a_n,S))\), that \(\mu(a_1,t) \le \mu(a_1)\) for all \(t\), and by considering all possible time steps, we obtain that \(|\DPS| \le m(\mu(a_1)+1)^{n-1}\).
\begin{theorem} \label{theorem n + mu}
Problem \(\mathit{ManipSA}\) is solvable in \(O(m^3 (\mu(a_1)+1)^{2n})\). As a result, \(\mathit{ManipSA}\) is FPT w.r.t. parameter \(n + \mu(a_1)\).
\end{theorem}
\begin{proof}
This result is a consequence of Lemma~\ref{lemma: solving graph} and the fact that \(|\DPV| \le (\mu(a_1)+1)|\DPS| \le m(\mu(a_1)+1)^{n}\).
\end{proof}
\paragraph{The algorithm is FPT w.r.t. parameter \(n + \mathtt{rg}^{\max}\).}
We show that \(|\DPS|\) is also upper bounded by \(m(2\mathtt{rg}^{\max})^{n-2}\). This is a consequence of the following lemma.
\begin{lemma}\label{lem : diff of rk and rg}
For any set \(S \subseteq \mathcal{I}\), and any two agents \(a_s,a_t \in \mathcal{A}\setminus\{a_1\}\),
\[| \mathtt{rk}_{a_s}(b(a_s,S)) - \mathtt{rk}_{a_t}(b(a_t,S))| \leq \mathtt{rg}^{\max} - 1.\]
\end{lemma}
\begin{proof}
If we assume for the sake of contradiction that \(\mathtt{rk}_{a_s}(b(a_s,S)) \ge \mathtt{rk}_{a_t}(b(a_t,S)) + \mathtt{rg}^{\max}\) and use the fact that \(|\mathtt{rk}_{a_s}(b(a_t,S)) - \mathtt{rk}_{a_t}(b(a_t,S))| < \mathtt{rg}^{\max}\) (by definition of \(\mathtt{rg}^{\max}\)), then we can conclude that \(b(a_t,S) \succ_{a_s} b(a_s,S)\), which contradicts the definition of \(b(a_s,S)\).
\end{proof}
Lemma~\ref{lem : diff of rk and rg} implies that for each of the \(m\) possible items for \(b(a_2,S)\), there are only \(2\mathtt{rg}^{\max}-1\) possible items for each other parameter \(b(a_j,S)\) with \(j>2\). Then, by using the fact that a set \(S\in \DPS\) is characterized by the vector \((b(a_2,S), \ldots, b(a_n,S))\), we obtain that \(|\DPS| \le m(2\mathtt{rg}^{\max})^{n-2}\).
\begin{theorem} \label{theorem : n + rg}
Problem \(\mathit{ManipSA}\) is solvable in \(O(m^5(2\mathtt{rg}^{\max})^{2(n-2)})\). As a result, \(\mathit{ManipSA}\) is FPT w.r.t. parameter \(n + \mathtt{rg}^{\max}\).
\end{theorem}
\begin{proof}
This result is a consequence of Lemma~\ref{lemma: solving graph} and the fact that \(|\DPV| \le m|\DPS| \le m^2(2\mathtt{rg}^{\max})^{n-2}\).
\end{proof}
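Since several of the results above are stated in terms of \(\mathtt{rg}^{\max}\), it may help to make its computation explicit. The following small helper is a sketch only; it assumes, consistently with how the definition is used in Lemma~\ref{lem : diff of rk and rg}, that the range of an item is the difference between its worst and best rank over the agents, plus one.
\begin{verbatim}
def max_range(prefs, m):
    # prefs[a]: ranking of agent a (most preferred item first);
    # rk[a][i]: 1-indexed rank of item i for agent a.
    rk = {a: {item: r + 1 for r, item in enumerate(ranking)}
          for a, ranking in prefs.items()}
    return max(max(rk[a][i] for a in rk) - min(rk[a][i] for a in rk) + 1
               for i in range(m))
\end{verbatim}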
\paragraph{The algorithm is FPT w.r.t. parameter \(\mathtt{rg}^{\max}\).}
Lastly, \(|\DPS|\) can also be upper bounded by \(m 2^{2 \mathtt{rg}^{\max}}\). This claim is due to the fact that the set \(S\setminus D(a_2,b(a_2,S))\) cannot contain an item whose rank w.r.t. \(a_2\) is ``too high'', which is proved in the following lemma.
\begin{lemma} \label{lem : for th rg}
Given \(S\in \DPS\), all \(i\) in \(S\setminus D(a_2,b(a_2,S))\) satisfy
\[\mathtt{rk}_{a_2}(b(a_2,S)) + 1 \le \mathtt{rk}_{a_2}(i) \le \mathtt{rk}_{a_2}(b(a_2,S)) +2\mathtt{rg}^{\max}.\]
\end{lemma}
\begin{proof}
First note that by definition \(b(a_2,S)\!\! \not \in\!\! S\) (because \(b(a_2,S)\!\! \in\!\!\mathcal{I}\setminus S\)) and \(D(a_2,b(a_2,S)) \!\!= \!\!\{i\in \mathcal{I} | \mathtt{rk}_{a_2}(i) < \mathtt{rk}_{a_2}(b(a_2,S))\}\). Hence, the first inequality of the lemma holds.
Let us assume for the sake of contradiction that there exists \(i\!\in\! S\setminus D(a_2,b(a_2,S))\) such that \(\mathtt{rk}_{a_2}(i) \!>\! \mathtt{rk}_{a_2}(b(a_2,S)) +2\mathtt{rg}^{\max}\).
Because \(S\) belongs to \(\Delta\), we have that \(S\setminus D(a_2,b(a_2,S)) = \bigcup_{a \in \mathcal{A}\setminus\{a_1\}}D(a,b(a,S))\setminus D(a_2,b(a_2,S))\). Hence, there exists \(a_j\) with \(j\ge 3\) such that \(i\in D(a_j,b(a_j,S))\).
By definition of \(\mathtt{rg}^{\max}\), we have that \(\mathtt{rg}^{\max} > \mathtt{rk}_{a_2}(i) - \mathtt{rk}_{a_j}(i)\), or equivalently that \(\mathtt{rk}_{a_j}(i) > \mathtt{rk}_{a_2}(i) - \mathtt{rg}^{\max}\), which yields that \(\mathtt{rk}_{a_j}(b(a_j,S)) > \mathtt{rk}_{a_j}(i) > \mathtt{rk}_{a_2}(i) - \mathtt{rg}^{\max} > \mathtt{rk}_{a_2}(b(a_2,S)) +\mathtt{rg}^{\max}\). This contradicts Lemma~\ref{lem : diff of rk and rg}.
\end{proof}
As a consequence of Lemma~\ref{lem : for th rg}, \(|\DPS|\) is upper bounded by \(m 2^{2 \mathtt{rg}^{\max}}\) because there are at most \(m\) possible items for \(b(a_2,S)\), and for each of them, there are at most \(2^{2 \mathtt{rg}^{\max}}\) possible sets for \(S\setminus D(a_2,b(a_2,S))\).
\begin{theorem} \label{theorem : rg}
Problem \(\mathit{ManipSA}\) is solvable in \(O(m^5 2^{4 \mathtt{rg}^{\max}})\). As a result, \(\mathit{ManipSA}\) is FPT w.r.t. parameter \( \mathtt{rg}^{\max}\).
\end{theorem}
\begin{proof}
This result is a consequence of Lemma~\ref{lemma: solving graph} and the fact that \(|\DPV| \le m|\DPS| \le m^2 2^{2 \mathtt{rg}^{\max}}\).
\end{proof}
\textit{Remark.} Note that it is easy to prove that, in contrast, the problem is NP-hard even if the average range of the items is 2.\footnote{ One can use a reduction with a sufficiently large number of dummy items ranked last and in the same positions by all agents.} Furthermore, Theorem \ref{theorem : n + rg} might seem less appealing, since the \(\mathit{ManipSA}\) problem is FPT w.r.t. parameter \(\mathtt{rg}^{\max}\) alone. However, we stress that the time complexity of Theorem \ref{theorem : n + rg} can be more interesting than that of Theorem \ref{theorem : rg} when the number of agents is small.
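For concreteness, the recurrences of our dynamic programming scheme can be implemented with a few lines of memoized code. The following is only a sketch under encoding assumptions of our own (items and agents are 0-indexed, agent \(0\) is the manipulator \(a_1\), \texttt{pi} is the picking sequence \(\pi\), and \texttt{prefs[a]} is the ranking of non-manipulator \(a\)); it follows the recurrences for \(V(k,S)\) stated earlier.
\begin{verbatim}
from functools import lru_cache

def manip_sa_dp(pi, prefs, u):
    m = len(pi)  # one picked item per time step

    def best(a, S):  # b(a, S): a's preferred item outside S
        return next(i for i in prefs[a] if i not in S)

    @lru_cache(maxsize=None)
    def V(k, S):
        if len(S) + k == m:          # termination: a_1 holds I \ S
            return sum(u[i] for i in range(m) if i not in S)
        a = pi[len(S) + k]           # next picker
        if a == 0:                   # manipulator: identity deferred
            return V(k + 1, S)
        b = best(a, S)
        value = V(k, S | {b})        # b is picked by agent a now
        if k > 0:                    # or b was already taken by a_1
            value = max(value, V(k - 1, S | {b}) + u[b])
        return value

    return V(0, frozenset())
\end{verbatim}
On the running example, with the non-manipulators' rankings completed arbitrarily beyond what the example pins down, \texttt{manip\_sa\_dp([0,1,2,0], \{1: [2,3,0,1], 2: [0,1,2,3]\}, [5,4,3,1])} returns \(7\), matching the value of the optimal manipulation.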
\section{Introduction}
Allocating resources to a set of agents in an efficient and fair manner is one of the most fundamental problems in computational social choice.
One challenging case is the allocation of indivisible items~\cite{bouveret2016fair,brams2003fair,lipton2004approximately}, e.g., allocating players to teams.
To address this problem, the sequential allocation mechanism has lately received increasing attention in the AI literature~\cite{aziz2015possible,aziz2016welfare,bouveret2011general,kalinowski2013social,kalinowski2013strategic,levine2012make}.
This mechanism works as follows: at each time step an agent, selected according to a predefined sequence, is allowed to pick one item among the remaining ones.
Such a protocol has many desirable qualities: it is simple, it can be run both in a centralized and in a decentralized way, and agents do not have to submit cardinal utilities.
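To make the protocol fully concrete, the following minimal simulation of truthful picking is a sketch under assumptions of our own (agents and items encoded as integers, one item picked per time step, and as many time steps as items):
\begin{verbatim}
def sequential_allocation(pi, prefs):
    # pi[t]: agent who picks at time step t (len(pi) = number of items);
    # prefs[a]: ranking of agent a, most preferred item first.
    remaining = set(range(len(pi)))
    bundles = {a: [] for a in prefs}
    for a in pi:
        pick = next(i for i in prefs[a] if i in remaining)
        remaining.discard(pick)
        bundles[a].append(pick)
    return bundles
\end{verbatim}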
For these reasons, sequential allocation is used in several real-life applications, for instance by several professional sports associations \cite{brams1979prisoners} to organize their draft systems (e.g., the annual draft of the National Basketball Association in the US), and by the Harvard Business School to allocate courses to students \cite{budish2012multi}.
Unfortunately, it is well known that the sequential allocation protocol is not strategy-proof. Stated otherwise, an agent can obtain a better allocation by not obeying her preferences, and this might lead to unfair allocations for the truthful agents \cite{bouveret2011general}. Such a drawback has motivated the algorithmic study of several issues related to strategic behaviors in the sequential allocation setting, the most important one being the computation of a ``successful'' manipulation. Ideally, if finding a successful manipulation is computationally too difficult, agents may be inclined to behave truthfully~\cite{walsh2016strategic}.
\paragraph{Related work on strategic behaviors in the sequential allocation setting.}
Aziz et al.~\cite{aziz2017equilibria} studied the sequential allocation setting by treating it as a one-shot game. They notably designed a linear-time algorithm to compute a pure Nash equilibrium and a polynomial-time algorithm to compute an optimal Stackelberg strategy when there are two agents, a leader and a follower, and the follower has a constant number of distinct values for the items. If the sequential allocation setting is seen as a finite repeated game with perfect information, Kalinowski et al.~\cite{kalinowski2013strategic} showed that the unique subgame perfect Nash equilibrium can be computed in linear time when there are two players. However, with more agents, the authors showed that computing one of the possibly exponentially many equilibria is PSPACE-hard.
Several papers focused on the complexity of finding a successful manipulation for a given agent, also called manipulator.
Bouveret and Lang~\cite{bouveret2011general} showed that deciding whether there exists a strategy for the manipulator to get a specific set of items can be done in polynomial time. Moreover, Aziz et al.~\cite{aziz2017complexity} showed that deciding whether there exists a successful manipulation, whatever the utility function of the manipulator, is a polynomial-time problem. Conversely, the same authors showed that determining an optimal manipulation (for a specific utility function) is an NP-hard problem. Bouveret and Lang~\cite{bouveret2014manipulating} provided further hardness results for finding an optimal manipulation in the cases of non-additive preferences and coalitions of manipulators.
On the other hand, finding an optimal manipulation can be performed in polynomial time if the manipulator has lexicographic or binary utilities~\cite{aziz2017complexity,bouveret2014manipulating} or if there are only two agents~\cite{bouveret2014manipulating}. Tominaga et al.~\cite{tominaga2016manipulations} further showed that finding an optimal manipulation is a polynomial-time problem when there are only two agents and the picking sequence is composed of a sequence of randomly generated rounds. More precisely, in each round, both agents get to pick one item and a coin flip determines who picks first.
\paragraph{Our contribution.} We tackle the parameterized complexity of manipulating sequential allocations, and provide a complete picture of the problem w.r.t. the following three parameters: the number $n$ of agents, the number $\mu(a_1)$ of items the manipulator gets to pick in the allocation process, and the maximum range $\mathtt{rg}^{\max}$ of an item, a parameter measuring how close the preference rankings of the agents are. In particular, using a novel dynamic programming algorithm, we show that the problem is in XP w.r.t. $n$, that is, it can be solved in polynomial time if $n$ is constant, and that it is Fixed-Parameter Tractable (FPT) w.r.t. $\mathtt{rg}^{\max}$. Moreover, we show that it is in XP w.r.t. $\mu(a_1)$ and FPT w.r.t. the sum $n+\mu(a_1)$. Interestingly enough, we prove that the problem is W[1]-hard w.r.t. the single parameters $n$ and $\mu(a_1)$. As a consequence, both our XP results are tight. Table~\ref{tab:sumup} summarizes our results. Lastly, we provide an integer programming formulation of the problem and show that the manipulator cannot increase the utility of her bundle by a multiplicative factor greater than or equal to 2, this bound being tight.
\begin{table}[h]
\caption{\label{tab:sumup} \small Our parameterized complexity results on the problem of manipulating sequential allocations.}
\centering\begin{tabular}{|c|c|c|c|c|}
\hline
Parameter & $n$ & $\mu(a_1)$ & $n + \mu(a_1)$ & $\mathtt{rg}^{\max}$ \\
\hline
Results & In XP and W[1]-hard & In XP and W[1]-hard & In FPT & In FPT\\
& Theorems~\ref{theorem : n} and \ref{theorem: hardness on n} & Theorem~\ref{theorem: hardness} & Theorem~\ref{theorem n + mu} & Theorem~\ref{theorem : rg}\\
\hline
\end{tabular}
\end{table}
Two results presented in this paper (Theorems \ref{theorem : n} and \ref{theorem : bound}) have independently been found by Xiao and Ling~\cite{DBLP:journals/corr/abs-1909-06747}. Indeed, they present another XP algorithm for parameter $n$ as well as the same bound on the increase in utility that a manipulator can obtain. Interestingly, the proofs and the insights into the sequential allocation process used in the two papers are quite different.
\section{An Upper Bound on the Optimal Value of \(\mathit{ManipSA}\)} \label{section : bound}
Our initial hope was that computational complexity could be a barrier to manipulating sequential allocation. Unfortunately, we have seen in Section~\ref{Sect:positive} that the \(\mathit{ManipSA}\) problem can be solved efficiently for several subclasses of instances. Another reason that could push agents towards behaving truthfully is that manipulating might simply not be worth the effort. Indeed, if the increase in utility that an agent can obtain by manipulating is very low, she might be reluctant to gather the necessary information and to search for a good manipulation. We provide the following tight bound on this issue.
\begin{theorem} \label{theorem : bound}
The manipulator cannot increase her welfare by a factor greater than or equal to 2, i.e., \(\max_{\succ} u(\phi_{\succ}(a_1)) < 2 u_T \) and this bound is tight.
\end{theorem}
\begin{proof}
We proceed by induction on the value of the parameter \(\mu(a_1)\). If \(\mu(a_1) = 1\), the bound is obvious because \(a_1\) cannot benefit from manipulating. If \(\mu(a_1) = 2\), the bound is also easy to prove: \(a_1\) obtains only two items, neither of which can have a utility greater than that of the first item \(a_1\) picks when behaving truthfully, and one of which has a strictly lower utility. Note that \(u_T > 0\) when \(\mu(a_1) \ge 2\). Let us assume that the bound holds up to \(\mu(a_1) = k-1\), and let us further assume for the sake of contradiction that there exists an instance \(\mathcal{J}\) with \(\mu(a_1) = k\) where \(\max_{\succ} u(\phi_{\succ}(a_1)) \ge 2 u_T \). Moreover, let us denote by \(x_1, \ldots, x_k\) (resp. \(y_1,\ldots, y_k\)) the items picked by \(a_1\) when behaving truthfully (resp. according to one of her best manipulations, which we denote by \(\succ_b\)), where the items are ordered w.r.t. the time step at which they are picked (e.g., \(x_1\) is picked first when \(a_1\) behaves truthfully). Our hypothesis implies that:
\begin{equation} \label{hyp}
\sum_{i=1}^k u(y_i) \ge 2\sum_{i=1}^k u(x_i) = 2 u_T.
\end{equation}
We will now show how to build from \(\mathcal{J}\) an instance \(\mathcal{J}''\) with \(\mu(a_1) = k-1\) where \(\max_{\succ} u(\phi_{\succ}(a_1)) \ge 2 u_T \), hence yielding a contradiction. For ease of presentation, this construction is decomposed into two parts: a first one where we work on an instance \(\mathcal{J}'\) obtained from \(\mathcal{J}\), and a second one where we analyse the desired instance \(\mathcal{J}''\), which is obtained from \(\mathcal{J}'\).\\
\textbf{Part 1:} Consider the instance \(\mathcal{J}'\) obtained from \(\mathcal{J}\) by removing the first occurrence of \(a_1\) in \(\pi\) (an arbitrary non-manipulator is added at the end of \(\pi\) so that the length of \(\pi\) remains \(m\)). We denote by \(t^1\) this particular time step, i.e., the one of the first occurrence of \(a_1\) in \(\pi\). We point out that in \(\mathcal{J}'\), the manipulator can manipulate to obtain the set of items \(y_2,\ldots,y_k\). Indeed, assume w.l.o.g. that items \(y_1,\ldots,y_k\) are ranked first in \(\succ_b\) (not necessarily in that order) and let \(\succ_b^{\downarrow y_1}\) be the ranking obtained from \(\succ_b\) by putting \(y_1\) in last position. Furthermore, let \(S^{\mathcal{J}}_t\) (resp. \(S^{\mathcal{J}'}_t\)) be the set of items picked at the end of time step \(t\) in instances \(\mathcal{J}\) (resp. \(\mathcal{J}'\)) when \(a_1\) follows strategy \(\succ_b\) (resp. \(\succ_b^{\downarrow y_1}\)). Then we have the following lemma.
\begin{lemma} \label{lemma: proof bound 1}
\(S^{\mathcal{J}'}_t = S^{\mathcal{J}}_t\) for \(t < t^1\) and \(S^{\mathcal{J}'}_t \subseteq S^{\mathcal{J}}_{t+1}\) for \(t \ge t^1\).
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemma: proof bound 1}]
The first part of the lemma is obvious because the picking processes in \(\mathcal{J}\) and \(\mathcal{J}'\) are identical for \(t < t^1\).
We prove the second part of the lemma by induction.
Let us denote by \(\pi_{\mathcal{J}}(t)\) (resp. \(\pi_{\mathcal{J}'}(t)\)) the picker at time step \(t\) in instance \(\mathcal{J}\) (resp. \(\mathcal{J}'\)).
Then for all \(t \ge t^1\), we have that \(\pi_{\mathcal{J}'}(t) = \pi_{\mathcal{J}}(t+1)\).
Then, \(S^{\mathcal{J}'}_t \subseteq S^{\mathcal{J}}_{t+1}\) is true for \(t = t^1\) because \(S^{\mathcal{J}'}_{t^1-1} = S^{\mathcal{J}}_{t^1-1}\), \(S^{\mathcal{J}}_{t^1} = S^{\mathcal{J}}_{t^1-1} \cup \{y_1\}\), and hence the only item that \(\pi_{\mathcal{J}'}(t^1)\) could prefer to the one \(\pi_{\mathcal{J}}(t^{1}+1)\) picks is \(y_1\). Let us assume that the inclusion relationship holds up to \(t = l\), so that \(S^{\mathcal{J}}_{l+1} = S^{\mathcal{J}'}_{l} \cup \{i\}\) for some item \(i\). Then, we obtain that \(S^{\mathcal{J}'}_{l+1} \subseteq S^{\mathcal{J}}_{l+2}\), because the only item that \(\pi_{\mathcal{J}'}(l+1)\) could prefer to the one \(\pi_{\mathcal{J}}(l+2)\) picks is item \(i\).
\end{proof}
As a direct consequence of Lemma~\ref{lemma: proof bound 1}, \(\succ_b^{\downarrow y_1}\) is successful in picking items \(y_2,\ldots,y_k\) in \(\mathcal{J}'\).\\
\textbf{Part 2:} Let us now consider the instance \(\mathcal{J}''\) obtained from \(\mathcal{J}'\) by removing \(x_1\) from the set of items as well as the last agent in the picking sequence (whom we had artificially added in the first part of the proof). In \(\mathcal{J}''\), the set of items that \(a_1\) gets when behaving truthfully is \(x_2,\ldots, x_k\). Consider the preference ranking \(\succ_b^{\downarrow y_1, - x_1}\) obtained from \(\succ_b^{\downarrow y_1}\) by removing \(x_1\), and denote by \(S^{\mathcal{J}''}_t\) the set of items picked at the end of time step \(t\) in \(\mathcal{J}''\) when \(a_1\) follows strategy \(\succ_b^{\downarrow y_1, - x_1}\).
Moreover, let \(t^l\) (resp. \(t^1\)) be the time step in \(\mathcal{J}'\) at which \(a_1\) picks \(y_l\) (resp. at which \(x_1\) is picked by some agent) when \(a_1\) uses strategy \(\succ_b^{\downarrow y_1}\) where \(2 \leq l \leq k\).
We show that \(a_1\) can get in \(\mathcal{J}''\) a set of items \(Y\) composed of all items in \(\{y_2,\ldots, y_k\}\) except possibly one.
This is a consequence of the following lemma.
\begin{lemma}\label{lemma: proof bound 2}
\(S^{\mathcal{J}''}_t = S^{\mathcal{J}'}_t\) for \(t < t^1\) and \(S^{\mathcal{J}''}_t\) is of the form \((S^{\mathcal{J}'}_t\setminus \{x_1\}) \cup \{i\}\) for \(t \ge t^1\) where \(i\) is some item in \(\mathcal{I}\).
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemma: proof bound 2}]
The first part of the lemma is obvious because the picking processes in \(\mathcal{J}'\) and \(\mathcal{J}''\) are identical for \(t < t^1\).
We prove the second part of the lemma by induction.
Let us denote by \(\pi_{\mathcal{J}'}(t)\) (resp. \(\pi_{\mathcal{J}''}(t)\)) the picker at time step \(t\) in instance \(\mathcal{J}'\) (resp. \(\mathcal{J}''\)).
Then for all \(t\), we have that \(\pi_{\mathcal{J}'}(t) = \pi_{\mathcal{J}''}(t)\).
Then, the fact that \(S^{\mathcal{J}''}_{t^1}\) is of the form \((S^{\mathcal{J}'}_{t^1}\setminus \{x_1\}) \cup \{i\}\) is just due to the fact that \(S^{\mathcal{J}'}_{t^1-1} = S^{\mathcal{J}''}_{t^1-1}\).
Let us assume that this fact holds up to \(t = l\), so that \(S^{\mathcal{J}''}_{l} = (S^{\mathcal{J}'}_{l} \setminus \{x_1\}) \cup \{i\}\), where \(i\) is some item. We obtain that \(S^{\mathcal{J}''}_{l+1}\) is of the form \((S^{\mathcal{J}'}_{l+1} \setminus \{x_1\}) \cup \{i'\}\), where \(i'\) is some item, because \(\pi_{\mathcal{J}''}(l+1)\) will pick the same item as \(\pi_{\mathcal{J}'}(l+1)\) unless this item is \(i\).
\end{proof}
Let \(j\) be the first index in \(\mathcal{J}''\) (if any) such that \(a_1\) cannot pick \(y_j\) at \(t^j \ge t^1\). Then, by Lemma \ref{lemma: proof bound 2},
we have that \(S^{\mathcal{J}''}_t = (S^{\mathcal{J}'}_t\setminus \{x_1\}) \cup \{y_{l+1}\}\) for \(t^j \le t^l \le t < t^{l+1}\). This is due to the fact that \(\succ_b^{\downarrow y_1}\) is successful in picking items \(y_2,\ldots,y_k\) in \(\mathcal{J}'\), and this proves the claim that \(a_1\) can get a set of items \(Y\) composed of all items in \(\{y_2,\ldots, y_k\}\) except possibly one.
Hence,
\begin{align*}
\sum_{y \in Y } u(y) &\ge \sum_{i = 1}^k u(y_i) - u(y_1) - \max_{2\le j\le k} u(y_j)\\
&\ge 2\sum_{i=1}^k u(x_i) - u(y_1) - \max_{2\le j\le k} u(y_j) \\
&\ge 2\sum_{i=2}^k u(x_i)
\end{align*}
where the second inequality is due to Inequality \ref{hyp} and the third one is due to the fact that \(u(y_i) \le u(x_1), \forall i \in \{1,\ldots,k\}\). We obtain a contradiction because \(\mu(a_1) = k-1\) in \(\mathcal{J}''\).
The tightness of the bound is provided by the instance of Example \ref{example:running} with the following utility function: \(u(i_1) = u(i_2) + \epsilon = u(i_3) + 2 \epsilon = 1\) and \(u(i_4) = 0\). We recall that in this instance \(u_T = u(i_1) + u(i_4) = u(i_1)\), whereas, by manipulating, the manipulator can obtain the set \(S=\{i_2,i_3\}\) with utility \(u(i_2) + u(i_3) = 2u(i_1) -3\epsilon\).
\end{proof}
We conclude from Theorem \ref{theorem : bound} that, while the increase in utility of the manipulator cannot be arbitrarily large, manipulating may often be worth it for the manipulator, as doubling her utility can be a significant improvement.
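On small instances, both the theorem and the tightness example can be checked by brute force over the \(m!\) rankings the manipulator may report. The sketch below is ours; the preferences of the non-manipulators in the running example are reconstructed from the picks reported above and completed arbitrarily.
\begin{verbatim}
from itertools import permutations

def best_manipulation_value(pi, prefs, u, manip=0):
    # Exhaustive search over all rankings the manipulator can report.
    m, best = len(pi), float("-inf")
    for ranking in permutations(range(m)):
        remaining, value = set(range(m)), 0.0
        for a in pi:
            order = ranking if a == manip else prefs[a]
            pick = next(i for i in order if i in remaining)
            remaining.discard(pick)
            if a == manip:
                value += u[pick]
        best = max(best, value)
    return best

eps = 0.01
u = [1.0, 1.0 - eps, 1.0 - 2 * eps, 0.0]      # tightness example
prefs = {1: [2, 3, 0, 1], 2: [0, 1, 2, 3]}    # reconstructed rankings
print(best_manipulation_value([0, 1, 2, 0], prefs, u))  # 1.97 = 2 - 3*eps
\end{verbatim}
Here the truthful utility is \(u_T = 1\), and the brute-force optimum \(2 - 3\epsilon\) approaches the factor of \(2\) as \(\epsilon \to 0\), as predicted.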
\section{An Integer Programming Formulation}\label{Sect:IntProg}
In this last section, we provide an integer programming formulation of the \(\mathit{ManipSA}\) problem.
This integer program provides a tool complementary to the dynamic programming algorithm of Section~\ref{Sect:positive} for solving the \(\mathit{ManipSA}\) problem, and it may be more efficient on some instances. Moreover, a more thorough analysis of this formulation and of its solution polytope may lead to new results. For instance, bounds on the number of variables could yield new parameterized complexity results via Lenstra's theorem \cite{lenstra1983integer}.
By abuse of notation, we identify the set \(\mathcal{I}\) with the set \([m] = \{1,\ldots,m\}\). Let \(x_{it}\) be a binary variable with the following meaning: \(x_{it} = 1\) iff item \(i\) is picked at time step \(t\). Then, the constraint \(\sum_{i = 1}^m x_{it} = 1\) states that exactly one item is picked at time step \(t\). Similarly, the constraint \(\sum_{t = 1}^m x_{it} = 1\) states that item \(i\) is picked at exactly one time step.
Lastly, consider a time step \(t\) such that \(\pi(t) \neq a_1\). Then the following set of constraints implies that \(\pi(t)\) picks at time step \(t\) the best item available for her.
\begin{equation}
x_{it} + \sum_{j \succ_{\pi(t)} i} x_{jt} + \sum_{t' < t} x_{it'} \geq 1, \quad \forall i \in [m] \notag
\end{equation}
In words, this constraint says that if \(\pi(t)\) does not pick item \(i\) at time step \(t\) (i.e., \(x_{it} = 0\)), it is either because she picks something that she prefers (i.e., \(\sum_{j \succ_{\pi(t)} i} x_{jt} = 1\)), or because item \(i\) has already been picked at an earlier time step (i.e., \(\sum_{t' < t} x_{it'} = 1\)). To summarize, we obtain the following integer programming formulation\footnote{We note that this integer programming formulation is very close to one of the mathematical programs used to solve stable marriage problems \cite{gusfield1989stable}.} with \(m^2\) binary variables and at most \(m^2\) constraints (as we can assume the manipulator is the picker for at least two time steps),
\begin{align*}
\max_{x_{it}} \sum_{i=1}^m\sum_{t : \pi(t) = a_1} x_{it} u(i)& \\
\text{s.t. \hspace{1.cm}} \sum_{t = 1}^m x_{it} &= 1, \quad \forall i \in [m]\\
\sum_{i = 1}^m x_{it} &= 1, \quad \forall t \in [m]\\
x_{it} + \sum_{j \succ_{\pi(t)} i} x_{jt} + \sum_{t' < t} x_{it'} &\geq 1, \quad \forall i,t \in [m]^2 \text{ s.t. } \pi(t) \neq a_1 \\
x_{it} &\in \{0,1\}, \quad \forall i,t \in [m]^2
\end{align*}
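For completeness, this program is easy to state in an off-the-shelf modelling layer. The sketch below uses the PuLP library (our choice for illustration; any MIP modeller would do) and encodes the variables and constraints above, with agent \(0\) as the manipulator:
\begin{verbatim}
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

def manip_sa_ip(pi, prefs, u, manip=0):
    m = len(pi)
    prob = LpProblem("ManipSA", LpMaximize)
    x = LpVariable.dicts("x", (range(m), range(m)), cat="Binary")
    # Objective: utility collected at the manipulator's time steps.
    prob += lpSum(u[i] * x[i][t] for i in range(m)
                  for t in range(m) if pi[t] == manip)
    for i in range(m):  # each item is picked at exactly one time step
        prob += lpSum(x[i][t] for t in range(m)) == 1
    for t in range(m):  # exactly one item is picked at each time step
        prob += lpSum(x[i][t] for i in range(m)) == 1
    for t in range(m):  # non-manipulators pick their best available item
        if pi[t] == manip:
            continue
        rank = {item: r for r, item in enumerate(prefs[pi[t]])}
        for i in range(m):
            prob += (x[i][t]
                     + lpSum(x[j][t] for j in range(m)
                             if rank[j] < rank[i])
                     + lpSum(x[i][tp] for tp in range(t))) >= 1
    prob.solve()
    return prob.objective.value()
\end{verbatim}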
\section{Introduction}
\label{sec:intro}
Precipitation initiation is a dynamic process that is influenced by many weather variables, as well as by location and season. In general terms, as a parcel of air containing water vapor rises in the atmosphere, it reaches a height at which the temperature drops below the dew point and the air becomes saturated. Any excess of water vapor beyond this saturation point condenses into liquid water or ice, forming clouds~\cite{book}. Further ascent of water vapor can lead to the growth of clouds, which may finally precipitate. However, predicting rainfall from the behavior of the different weather parameters is challenging.
The research community has shown growing interest in rainfall prediction over the past few years. Recent publications \cite{Yibin(a),Benevides} have discussed using precipitable water vapor (PWV) content derived from Global Positioning System (GPS) signal delays to predict rainfall. We have also used GPS-derived PWV values for rain prediction \cite{IGARSS_2016} and have employed sky cameras \cite{Dev2016GRSM} to detect the onset of precipitation~\cite{rainonset}. However, the water vapor content of the atmosphere -- albeit a good indicator of rain -- is not sufficient to predict rain with high accuracy. Other researchers \cite{Junbo,sharifi} have suggested using other meteorological parameters. In~\cite{manandhar2018systematic}, we have provided a systematic analysis of the various weather parameters for rainfall detection.
In this paper, we use various surface weather parameters along with the water vapor content derived from GPS, and implement a machine learning based technique to classify \emph{rain} and \emph{no rain} observations. In the following sections, we describe the dataset used in this paper and present the proposed algorithmic approach. Then we discuss the experiments and test results. Finally, we conclude the paper with future directions. The source code of all simulations in this paper is available online.\footnote{~\url{https://github.com/shilpa-manandhar/precipitation-detection}}
\section{Features for Rainfall Classification}
In this section, we describe the different variables, including surface weather parameters, total column water vapor content, and seasonal/diurnal characteristics, which are later used for rainfall classification.
\subsection{Surface Weather Parameters}
Surface weather parameters recorded by a weather station (Davis Instruments 7440 Weather Vantage Pro II) with tipping bucket rain gauge are used in this study. The weather station is located at Nanyang Technological University (NTU), (1.3$^{\circ}$N, 103.68$^{\circ}$E). The following weather station measurements are used for this paper:
\begin{itemize}
\setlength\itemsep{1pt}
\item Surface temperature ($^{\circ}$C),
\item Relative humidity (RH) (\%),
\item Dew point ($^{\circ}$C),
\item Solar irradiance (W/m$^{2}$),
\item Rainfall rate (mm/hr).
\end{itemize}
All the recorded weather parameters have a temporal resolution of 1 minute.
\begin{figure*}[htb]
\centering
\includegraphics[width=0.65\textwidth]{AllFeatures1}
\caption{Time series of different weather parameters. (a) PWV and rainfall rate; (b) surface temperature, dew point, and relative humidity; (c) solar irradiance. The horizontal axis for all subplots represents the time in day of the year (\textit{DoY}). For example, 334.8 indicates November 30 at 19:12.
\label{TimeSeries}}
\end{figure*}
\subsection{GPS Derived Water Vapor Content}
In addition to the various surface weather parameters, precipitable water vapor (PWV) values derived from GPS signal delays are used as an additional important feature for the classification. This section provides a brief overview of the derivation of PWV values from GPS signal delays.
PWV values (in mm) are calculated using the zenith wet delay (ZWD), \textit{$\delta$L$_w^{o}$}, incurred by the GPS signals as follows:
\begin{equation}
\mbox{PWV}=\frac{PI \cdot \delta L_w^{o}}{\rho_l},
\label{eq1}
\end{equation}
where $\rho_{l}$ is the density of liquid water (1000 kg$/m^{3}$), and \textit{PI} is the dimensionless factor determined by \cite{shilpaPI}:
\begin{dmath}
PI=[-\text{sgn}(L_{a})\cdot 1.7\cdot 10^{-5} |L_{a}|^{h_{fac}}-0.0001]\cdot \\
\cos \frac{2\pi(DoY-28)}{365.25}+
0.165-1.7\cdot 10^{-5}|L_{a}|^{1.65}+f,
\label{eq2}
\end{dmath}
where $L_a$ is the latitude, \textit{DoY} is the day-of-year, and $h_{fac}$ is 1.48 or 1.25 for stations in the Northern or Southern hemisphere, respectively. The term $f=-2.38\cdot 10^{-6}H$, where $H$ is the station altitude above sea level in metres, can be ignored for $H<1000$ m.
For this paper, the ZWD values for an IGS GPS station located at NTU (station ID: NTUS) are processed using the GIPSY OASIS software and recommended scripts \cite{GIPSY}. PWV values are then calculated for NTUS using Eqs.~\ref{eq1}-\ref{eq2}, with $L_a= 1.34$, $h_{fac}= 1.48$, and $H=78$ m. The calculated PWV values have a temporal resolution of 5 minutes.
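As an illustration of Eqs.~\ref{eq1}-\ref{eq2}, the short sketch below evaluates \textit{PI} and the resulting PWV for the NTUS parameters quoted above. The ZWD input value is a made-up example; numerically, PWV in mm equals \textit{PI} times ZWD in mm, with the division by $\rho_l$ accounting for the conversion from mass of water per unit area to an equivalent depth.
\begin{verbatim}
import numpy as np

def pwv_from_zwd(zwd_mm, lat_deg, doy, h_fac=1.48, H_m=78.0):
    """PWV (mm) from the zenith wet delay via Eqs. (1)-(2)."""
    f = -2.38e-6 * H_m if H_m >= 1000 else 0.0  # negligible below 1 km
    PI = ((-np.sign(lat_deg) * 1.7e-5 * abs(lat_deg)**h_fac - 0.0001)
          * np.cos(2 * np.pi * (doy - 28) / 365.25)
          + 0.165 - 1.7e-5 * abs(lat_deg)**1.65 + f)
    return PI * zwd_mm

# NTUS: latitude 1.34 deg N, h_fac = 1.48, H = 78 m.
print(pwv_from_zwd(zwd_mm=250.0, lat_deg=1.34, doy=334))  # about 41 mm
\end{verbatim}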
\subsection{Seasonal and Diurnal Features}
Singapore experiences four different seasons:
\begin{itemize}
\setlength\itemsep{1pt}
\item North-East Monsoon (NE) during November-March,
\item First-Inter Monsoon (FI) during April-May,
\item South-West Monsoon (SW) during June-October,
\item Second-Inter Monsoon (SI) during October-November.
\end{itemize}
The periods of these seasons vary a little from year to year; the exact dates are updated in a yearly weather report provided by Singapore's National Environment Agency (NEA) \cite{NEA}.
The rainfall pattern in Singapore is heavily influenced by different seasons. Most of the rain is experienced in NE and SW monsoon seasons. Since seasons play an important role in rainfall, we include day-of-year (\textit{DoY}) as a feature for rainfall classification. Furthermore, rainfall occurrences in tropical regions like Singapore also show clear diurnal characteristics. Heavy convective rainfalls are generally experienced during the late afternoon in the NE monsoon in Singapore. Thus time-of-day (\textit{ToD}) is also included as a feature.
\subsection{Time Series Example}
In this section, we show time series data to illustrate the importance of all the features for rain classification. Fig.~\ref{TimeSeries} shows the time series of weather parameters over two consecutive days in 2010. Weather parameters are sampled at 5-minute intervals to match the GPS PWV timings.
The different features show interesting changes with rain. PWV values tend to increase before the rain. The surface temperature decreases and matches the dew point temperature during the rain. Relative humidity increases and reaches almost 100\% when it rains, and RH is also generally higher at night. Similar fluctuations can be observed in the solar irradiance values. For Singapore, clear-sky solar radiation is around 1000 W/m$^{2}$ in the daytime~\cite{ClearSkyPIERS}. In Fig.~\ref{TimeSeries}(c) we can see the drop in the solar radiation before rain, likely due to the buildup of clouds. The example also highlights the diurnal variations of rain and these weather parameters, as discussed in the previous section.
\section{Experiments}
\subsection{Database}
In this paper, three years (2010-2012) of data are used for the experiments. The data from 2010-2011 are used for training and testing the algorithm. For further assessment of the performance, the trained algorithm is also tested on data from a separate year (2012).
\subsection{Dataset Imbalance}
For any classification algorithm it is very important to train the model properly, and thus the training data should be chosen wisely. In this paper, we consider $7$ features -- \textit{temperature}, \textit{dew point}, \textit{relative humidity}, \textit{solar irradiance}, \textit{PWV}, \textit{ToD}, and \textit{DoY} -- for the binary classification of rain. Each of these features is used with a temporal resolution of 5 min. If a dataset of 1 year (365 days) is taken as the training database, it includes a total of 365*288 = 105,120 data points. Of these, far fewer data points have rain than not, because rain is a relatively rare event.
For one year's (2010) data, the ratio of data points with rain (\textit{minority cases}) to data points without rain (\textit{majority cases}) is 1:70, which indicates that the database is highly imbalanced with respect to rain events. Consequently, the traditional way of splitting a database into, say, 70\% training and 30\% test data might result in a biased model dominated by the characteristics of the majority class. Instead, we employ random downsampling to balance the training dataset \cite{RandomDownSampling}.
Random downsampling is one technique used to overcome the problem of imbalanced databases. In this method, while forming the training data set, all the cases from the minority scenario are taken into consideration. Then the cases from the majority scenario are randomly chosen such that the minority-to-majority ratio is balanced. The general practice is to make the ratio 1:1, but other ratios can also be considered \cite{RandomDownSampling}.
\subsection{Training and Testing}
A certain percentage of data points from two years (2010-2011) of the database is taken randomly as a training set and the remaining data as the test set. The training dataset is then balanced by performing random downsampling to obtain a minority to majority ratio of 1:1. The balanced training data is then used to train the model using a Support Vector Machine (SVM). The model is trained for different training data sizes. Since the training data is selected randomly, for each training data size, the experiment is performed 100 times and the average values of the evaluation metrics are calculated.
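A minimal sketch of this procedure is shown below, assuming a feature matrix \texttt{X} (one row per 5-minute sample, 7 columns) and binary labels \texttt{y} (1 = rain); the synthetic data, the 1:70 imbalance and the RBF kernel are illustrative stand-ins.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

def downsample_balance(X, y, rng):
    """Keep all minority (rain) samples and draw an equal number of
    majority (no-rain) samples at random (1:1 ratio)."""
    rain = np.where(y == 1)[0]
    dry = np.where(y == 0)[0]
    keep = rng.choice(dry, size=len(rain), replace=False)
    idx = rng.permutation(np.concatenate([rain, keep]))
    return X[idx], y[idx]

rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 7))               # stand-in for the 7 features
y = (rng.random(10000) < 1 / 71).astype(int)  # ~1:70 rain/no-rain

Xb, yb = downsample_balance(X, y, rng)
clf = SVC(kernel="rbf").fit(Xb, yb)           # train on the balanced set
\end{verbatim}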
\subsection{Evaluation Metrics}
There are different evaluation metrics that can be used to analyze the results. One should choose a suitable evaluation metric that best fits the scenario. For the study of rain, it is important to see how well the rainfall is predicted and how often the methodology makes false predictions. Therefore, the results are generally expressed in terms of true detection and false alarm rates \cite{Yibin(a), Benevides}.
From the confusion matrix, the true positive, true negative, false positive and false negative samples are represented by $TP$, $TN$, $FP$ and $FN$ respectively. We report the True Detection (TD) and False Alarm (FA) rates, which are defined as follows:
\begin{equation*}
TD = TP/(TP+FN),
\end{equation*}
\begin{equation*}
FA = FP/(TN+FP).
\end{equation*}
\section{Results}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.45\textwidth]{TPR_FA_Downsampled}
\caption{True Detection and False Alarm rates for test data and data from 2012.
\label{fig:meas-rec}}
\end{center}
\end{figure}
Fig.~\ref{fig:meas-rec} shows the average TD and FA values for varying training data sizes (in \%). The model reaches the highest TD at around 40\% of training data size, but the FA is also quite high at this point. For training data sizes of around 20\% or lower, the TD is still good and the FA values are lower. Similar results were obtained when the trained model was tested on data from a separate year (2012).
Table~\ref{tab:result} reports the TD and FA rates at a training data size of 20\% (with the remaining observations as the test set), with and without downsampling. The results with downsampling are better than those without. We achieve a high true detection rate of 87.4\% and 85.8\% on the test data and the 2012 data, respectively. Similarly, the false alarm rate is 32.2\% and 28.5\% for the test data and the 2012 data, respectively. In the literature~\cite{Benevides,Yibin(a)}, a true detection rate of 80\% and a false alarm rate of 60\% have been reported for rainfall prediction using PWV data. The results presented in this paper thus show a significant improvement in the false alarm rate. Therefore, our approach achieves a better performance for rain detection.
\begin{table}[ht]
\centering
\caption{True Detection (TD) and False Alarm (FA) rates in \% at 20\% training data size.}
\begin{tabular}{*5c}
\toprule
Dataset & \multicolumn{2}{c}{\shortstack{Without \\ downsampling}} & \multicolumn{2}{c}{\shortstack{With \\ downsampling}} \\
\midrule
{} & TD & FA & TD & FA \\
Test Data & 82.5 & 44.8 & 87.4 & 32.2\\
2012 Data & 82.9 & 42.4 & 85.8 & 28.5\\
\bottomrule
\end{tabular}
\label{tab:result}
\end{table}
\section{Conclusions \& Future Work}
\label{sec:conc}
In this paper, a machine-learning based framework to detect precipitation from regular surface meteorological parameters and GPS derived PWV has been presented. Our proposed method has a high true detection rate and moderately low false alarm rate, as demonstrated using weather data from the tropics. In the future, we plan to use set-theory based techniques~\cite{dev2017rough} to analyze the impact of these various features on precipitation. We also plan to study methodologies to further reduce false alarms. We will also explore other techniques to counter the effects of an unbalanced dataset with rare events of interest.
\balance
\section{Introduction}
The Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo have
detected six sources of GWs as of the writing of this article
\citep{abbott16a,abbott16b,abbott17,abbott17a,abbott17b,abbott17c}. All of them
but one (GW170817, last reference) correspond to binary black hole (BHB) systems.
Typically, the masses are larger than the nominal $10\,M_{\odot}$ derived from
stellar evolution, as predicted by \cite{Amaro-SeoaneChen2016}, who coined the
term ``hyperstellar-mass black holes'' for such objects, and as previously
discussed by e.g.
\citealt{heger2003,mapelli08,zampieri09,mapelli10,belczy2010,fryer12,mapelli13,ziosi2014,speetal15}.\\
There are two different channels to form a BHB: either (i) in the field, in
isolation, via the stellar evolution of a binary of two extended stars
\citep[see e.g.][]{tutukov,bethe,belczynski2002,belckzynski2010,postnov14,loeb16,tutukov17,postnov17,giacobbo18},
or (ii) via dynamical interactions in a dense stellar system (see the review of
\citealt{BenacquistaDowning2013} and e.g. also
\citealt{sambaran10,downing2010,mapelli13,ziosi2014,rodriguez15,rodriguez16,mapelli16,askar17,antonini16,megan17,frak18}).
In this article we address the evolution of a BHB in an open cluster with a
suite of 1500 direct-summation $N-$body simulations. We model the evolution of
a BHB with properties similar to those expected to be detected by LIGO/Virgo
\citep{Amaro-SeoaneChen2016}, using different representations of small- and
intermediate-mass isolated open star clusters.
Our cluster models are considered as a proxy for the Galactic population of open
clusters, which are characterized by central densities of a few M$_\odot\,{\rm
pc}^{-3}$, i.e. much lower than the typical densities of globular clusters, and
contain a number of stars two to three orders of magnitude smaller than that of
a globular cluster.
We investigate the evolution of stellar BHBs in low-mass, low-density open
cluster models by means of high-precision direct-summation $N-$body
simulations. In an open cluster the impulsive effect produced by the large
fluctuations over the mean field, whose amplitude is of order $\sqrt{N}/N$, can
significantly affect the BHB evolution. Assuming an initial number of stars
$N_{\rm o}=1000$ for this type of open cluster and $N_{\rm g}=10^6$ for a
typical globular cluster, the ratio of the expected fluctuations over the
mean field amplitude is $f=\sqrt[]{N_{\rm g}/N_{\rm o}}=\sqrt[]{1000}\simeq
32$, implying a much larger effect in open clusters. This enhanced effect
of stochastic fluctuations (physically given by the rare but close approaches
among cluster stars) is reflected in the ratio of the two-body relaxation time
scales which, given the cluster sizes $R_{\rm o}$ and $R_{\rm g}$, writes as
\citep{spi87}
\begin{equation}
\frac{t_{\rm rlx,\,o}}{t_{\rm rlx,\,g}}=\frac{1}{f}\frac{\log(0.11\,N_{\rm o})}{\log(0.11\,N_{\rm g})}\left(\frac{R_{\rm o}}{R_{\rm g}}\right)^{3/2}.
\end{equation}
Assuming $R_{\rm o}/R_{\rm g} = 1/5$, the above equation yields $t_{\rm
rlx,\,o}/t_{\rm rlx,\,g}\simeq 0.02$, meaning that the smaller system evolves
50 times faster. Of course, this enhanced effect of individual strong
encounters is partly compensated by their lower rate.
In this paper we address the evolution of a BHB which, as a result of dynamical
friction orbital decay \citep[see e.g.][]{bt}, we assume to be located at the
centre of an open cluster-like system. The masses of the black holes are set to
$30\,M_{\odot}$ each, following the first LIGO/Virgo detection, the GW150914
source \citep{abbott16a}, and \cite{Amaro-SeoaneChen2016}. Despite the possible
ejection due to the supernova natal kick, there is margin for such kind of
remnant to be retained in an open cluster. Indeed, compact remnants such as
neutron stars and black holes formed in massive binaries are much easier
retained in clusters because the kick momentum is shared with a massive
companion, which leads to a much lower velocity for the post-supernova binary
\citep{podsi04,podslia2005}. In the case of neutron stars, \cite{podsi04}
showed that for periods below 100 days, the supernova explosion leads to a very
little or no natal kick at all (their Fig.2, the dichotomous kick scenario).
Open clusters have binaries mostly with periods of 100 days and below (see
\citealt{Mathieu2008}, based on the data of \citealt{DuquennoyMayor1991}).
These results can be extrapolated to black holes because they receive a similar
kick to neutron stars (see \citealt{RepettoEt2012} and also the explanation of
\citealt{Janka2013} of this phenomenon). In any case, black holes with masses
greater than $10\,M_{\odot}$ at solar metallicity form via direct collapse and
do not undergo supernova explosion, and hence do not receive a natal kick
\citep{PernaEtAl2018}. Also, while the solar metallicity in principle could
not lead to the formation of black holes more massive than $25\,M_{\odot}$
\cite{spema17}, we note that the resonant interaction of two binary systems can
lead to a collisional merger which leads to formation of this kind of black
hole \citep{goswami2004,fregeau2004} at the centre of a stellar system, where
they naturally segregate due to dynamical friction.
Moreover, we note that another possibility is that these stellar-mass black
holes attained their large masses through repeated relativistic mergers of
lighter progenitors. However, the relativistic recoil velocity is around
$200-450\,{\rm km/s}$ for progenitors with mass ratios $\sim [0.2,\,1]$,
respectively, so that this possibility is unlikely \citep[see e.g.][their Fig.
1, lower panel]{Amaro-SeoaneChen2016}: the merger products would escape the
cluster, unless the initial distribution of spins is peaked around zero and the
black holes have the same mass (as in the work of \citealt{RodriguezEtAl2018}
in the context of globular clusters). In that case, second-generation mergers
are possible and, hence, one can form a more massive black hole via successive
mergers of lighter progenitors.
The relatively low number of stars in open clusters makes it possible to
integrate over at least a few relaxation times in a relatively short
computational time, so that, contrary to the cases of globular clusters or
galactic nuclei, it is possible to fully integrate these systems without the
need to rescale the results.
In this article we present a series of 1500 dedicated direct-summation $N-$body
simulations of open clusters with BHBs. The paper is organized as follows: in
Sect.~2 we describe the numerical methods used and our set of models; in
Sect.~3 we present and discuss the results of the BHB dynamics; in Sect.~4 we
discuss the implications of our BHBs as sources of gravitational waves; in
Sect.~5 we present the results on tidal disruption events; and in Sect.~6 we
draw overall conclusions.
\section{Method and Models}
\label{met_mod}
To study the BHB evolution inside its parent open cluster (henceforth OC) we
used \texttt{NBODY7} \citep{aarsethnb7}, a direct $N$-body code that reliably
integrates the motion of stars in stellar systems, implements a careful
treatment of strong gravitational encounters, and takes stellar evolution into
account. We performed several simulations varying both the OC and BHB main
properties, taking advantage of the two high-performance workstations hosted at
Sapienza, University of Rome, and the Kepler cluster, hosted at Heidelberg
University.
\begin{table*}
\centering
\caption{
Main parameters characterizing our models. The first two columns refer to the
cluster total number of stars, $N_{\rm cl}$, and its mass $M_{\rm cl}$. The second
two-column group refers to the BHB parameters: semi-major axis, \textit{a}, and
initial eccentricity, \textit{e}. The last column gives the model
identification name. Each model comprises 150 different OC realizations.
}
\label{ictab}
\begin{tabular}{@{}ccccccc@{}}
\toprule
\multicolumn{2}{c}{\textbf{Cluster}} & & \multicolumn{2}{c}{\textbf{BHB}} & \textbf{} & \textbf{$N$-body set} \\ \midrule
$N_{\rm cl}$ & $M_{\rm cl}$ (M$_\odot$) & & \textit{a} (pc) & \textit{e} & & Model \\ \midrule
\multirow{2}{*}{512} & \multirow{2}{*}{$ 3.2 \times 10^{2}$} & & \multirow{2}{*}{0.01} & 0.0 & & A00 \\
& & & & 0.5 & & A05 \\ \midrule
\multirow{2}{*}{1024} & \multirow{2}{*}{$ 7.1 \times 10^{2}$} & & \multirow{2}{*}{0.01} & 0.0 & & B00 \\
& & & & 0.5 & & B05 \\ \midrule
\multirow{2}{*}{2048} & \multirow{2}{*}{$ 1.4\times 10^{3}$} & & \multirow{2}{*}{0.01} & 0.0 & & C00 \\
& & & & 0.5 & & C05 \\ \midrule
\multirow{2}{*}{4096} & \multirow{2}{*}{$ 2.7 \times 10^{3}$} & & \multirow{2}{*}{0.01} & 0.0 & & D00 \\
& & & & 0.5 & & D05 \\ \bottomrule
\end{tabular}
\end{table*}
Table \ref{ictab} summarizes the main properties of our $N$-body simulation
models. We created four simulation groups representing OC models with varying
initial numbers of particles, namely $512\leq N \leq 4096$. Assuming a
\citet{kroupa01} initial mass function ($0.01$ M$_\odot$ $\leq$ M $\leq$
$100$ M$_\odot$), our OC model masses range between $ 300$ M$_{\odot}$ and $3000$
M$_\odot$. All clusters are modeled according to a Plummer density profile
\citep{Plum} at virial equilibrium with a core radius ($r_c = 1$ pc), and
adopting solar metallicity (Z$_\odot$).
We perform all the simulations including the stellar evolution recipes
implemented in the \texttt{NBODY7} code, which come from the standard
\texttt{SSE} and \texttt{BSE} tools \citep{hurley2000, hurley2002}, with
updated stellar mass loss and remnant formation prescriptions from
\citet{bel2010}. Further, for simplicity, we do not take into account
primordial binaries, which we leave to future work.
To give statistical significance to the
results we made $150$ different realizations of every model, which are denoted
with names A00, A05, B00, B05, C00, C05, D00 and D05, where the letter refers
to increasing $N$ and the digits to the initial BHB orbital eccentricity.
Additionally, we ran a further sample of $421$ simulations, aiming at
investigating the implications of some of our assumptions on the BHB evolution.
These models are discussed in depth in Sect. \ref{tde}.
In all our simulations, we assumed that the BHB is initially placed at the
centre of its host OC, and is composed of two equal-mass BHs with individual
mass M$_{\rm{BH}}$ = $30$ M$_\odot$. The initial BHB semi-major axis is $0.01$ pc,
with two initial eccentricities, $e_{\rm BHB}=0$ and $e_{\rm BHB}=0.5$. The
initial conditions are drawn by updating the procedure followed in
\cite{ASCD15He}. The choice of a BHB initially at rest at the centre of the
cluster, with that not very small separation, is not a limitation: the
dynamical friction time scale of a $30$ M$_{\odot}$ black hole is short enough
that the orbital decay of the BHB occurs rapidly, and that the probability of a
rapid formation of a BHB from two individual massive BHs is large even on a
short time scale.
The BHB orbital period is, for the given choices of masses and semi-major axis,
P$_{\rm BHB}= 0.012$ Myr. Note that our BHBs are actually ``hard'' binaries
\citep{heggie75,hills75,bt}, having a binding energy BE $\sim 3.8\times 10^{45}$
erg, which is larger than the average kinetic energy of the field stars in each
type of cluster studied in this work. All models were evolved up to $3$ Gyr,
which is about 3 times the simulated OC internal relaxation time.
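Both quantities can be verified with a few lines of Python (Kepler's third law and the Keplerian binding energy; the constants are standard SI values):
\begin{verbatim}
import numpy as np

G = 6.674e-11                          # m^3 kg^-1 s^-2
MSUN, PC, MYR = 1.989e30, 3.086e16, 3.156e13

m1 = m2 = 30.0 * MSUN
a = 0.01 * PC

P = 2.0 * np.pi * np.sqrt(a**3 / (G * (m1 + m2)))
BE = G * m1 * m2 / (2.0 * a)

print(P / MYR)    # ~0.012 Myr
print(BE * 1e7)   # ~3.8e45 erg (1 J = 1e7 erg)
\end{verbatim}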
The scope of the present work is to investigate the BHB dynamical evolution, hence we focus mainly on tracking its evolution. We also note that stellar-mass BHs naturally form binary systems in
open clusters over a wide cluster mass range, and can also undergo
triple-/subsystem-driven mergers, as recently shown through explicit direct $N$-body simulations
by \citet{kimpson} and \citet{sambaran1}.
\section{Dynamics of the black hole binary}
\subsection{General evolution}
\label{BHB_ev}
The BHB is assumed to be located at the centre of the cluster. Due to
interactions with other stars, the BHB can undergo one of the following three
outcomes: (i) the BHB can shrink and hence become harder, meaning that the
kinetic energy of the BHB is higher than the average in the system
\citep[see e.g.][]{bt}; (ii) the BHB can gain energy and therefore increase its
semi-major axis; or (iii) the BHB can be ionised, typically in a three-body
encounter. In Table~\ref{ub} we show the percentages of these three outcomes in
our simulations.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{images/semi_sha.pdf}
\includegraphics[width=0.5\textwidth]{images/semi_shb.pdf}
\caption{
Semi-major axis evolution in four random, representative simulations. The upper panel
shows binaries which initially are circular and lower panel depicts eccentric ones.
We normalise the time to the initial (i.e. at $T=0$) period of the binary.
}
\label{fig:param_sh}
\end{figure}
\begin{table}
\centering
\caption{
Percentage of BHB which undergo one of the three processes described in the
text. They either shrink (column 2), or increase their semi-major axis (column
3) or break up (column 4).
}
\label{ub}
\begin{tabular}{@{}cccc@{}}
\toprule
\textbf{Model} & \textbf{Harder} & \textbf{Wider}& \textbf{Break up} \\
& \% & \% & \% \\\midrule
A00 & 89.1 & 7.9 & 2.9 \\
A05 & 97.1 & 2.1 & 0.7 \\
B00 & 92.5 & 2.7 & 4.8 \\
B05 & 94.0 & 2.0 & 4.0 \\
C00 & 93.6 & 0 & 6.4 \\
C05 & 96.5 & 0 & 3.5 \\
D00 & 94.2 & 0 & 5.8 \\
D05 & 97.1 & 0 & 2.8 \\ \bottomrule
\end{tabular}
\end{table}
We can see that typically about $90$\% of all binaries shrink their semi-major
axis as they evolve, as one can expect from the so-called \textit{Heggie's law}
\citep{heggie75}. We note that models in which the binary was initially
eccentric lead to a higher percentage of the ``harder'' outcome. We display in
Fig.~\ref{fig:param_sh} a few representative examples of these processes.
The decrease is gradual for models A and B, while models C and D (which are the
more massive) show a steeper decrease.
There are however cases in which the binary gains energy from gravitational
encounters and increases its semi-major axis, becoming ``wider'' (Table
\ref{ub}, column 3).
If the host cluster is massive enough, the semi-major
axis always decreases (models C and D), contrary to lighter models, in which it
can increase (models A and B).
Because of the initial choice of the BHB semi-major axis, gravitational
encounters with other stars rarely ionise it, although we observe a few events,
typically below $7\,\%$ (circular binaries are easier to ionise). This
ionisation happens between $\sim 5$ Myr and $\sim 100$ Myr and is usually driven
by an encounter with a massive star ($\gtrsim 10$ M$_\odot$). In such cases,
the massive star generally pairs with one of the BHs, while the other BH
is usually ejected from the stellar system.
\subsubsection{Pericentre evolution}
For a BHB to become an efficient source of GWs, the pericentre distance must be
short enough. In this section we analyse the evolution of the pericentres for
our different models. In Table~\ref{peritab} we summarise the results of
Figs.~\ref{fig:per_a} and \ref{fig:per_b}. In the table we show the average
pericentre distance at three different times (1, 2 and 3 Gyr)
in the evolution of the cluster, as well as the absolute minimum
pericentre distance we find at each of these times.
\begin{table}
\centering
\caption{
Evolution of the BHB pericentre distance for all the models. The columns from
left to right denote, respectively: the model, the initial BHB pericentre
(r$_{\rm p}^{i}$), the time at which we have calculated the average ($T$), the BHB
pericentre distance averaged over all the simulations of the respective model
($\langle r_{\rm p}\rangle$), and the absolute minimum distance we record
(r$_{\rm p}^{min}$). We note that the Schwarzschild radius of a $30\,M_{\odot}$
black hole is $2.9\times\,10^{-12}~{\rm pc}$.
}
\label{peritab}
\begin{tabular}{@{}ccccccl@{}}
\hline
\textbf{Model} & \textbf{r$_{\rm p}^{i}$} & \textbf{T} & \textbf{$\langle r_{\rm p}\rangle$} & \textbf{r$_{\rm p}^{min}$} & \\
& (pc) & (Gyr) & (pc) & (pc) & \\
\hline
& & 1 & $2.3\times 10^{-3}$ & $5.0\times 10^{-6}$ & \\
A00 & $1.0\times 10^{-2}$ & 2 & $2.3\times 10^{-3}$ & $3.2\times 10^{-5}$ & \\
& & 3 & $2.1\times 10^{-3}$ & $4.9\times 10^{-6}$ & \\ \midrule
& & 1 & $1.7\times 10^{-3}$ & $1.4\times 10^{-5}$ & \\
A05 & $5.0\times 10^{-3}$ & 2 & $1.9\times 10^{-3}$ & $1.0\times 10^{-5}$ & \\
& & 3 & $1.7\times 10^{-3}$ & $2.7\times 10^{-4}$ & \\ \midrule
& & 1 & $5.4\times 10^{-4}$ & $3.4\times 10^{-6}$ & \\
B00 & $1.0\times 10^{-2}$ & 2 & $5.7\times 10^{-4}$ & $2.2\times 10^{-6}$ & \\
& & 3 & $5.1\times 10^{-4}$ & $4.1\times 10^{-6}$ & \\ \midrule
& & 1 & $1.1\times 10^{-3}$ & $2.4\times 10^{-7}$ & \\
B05 & $5.0\times 10^{-3}$ & 2 & $8.9\times 10^{-4}$ & $2.4\times 10^{-7}$ & \\
& & 3 & $7.9\times 10^{-4}$ & $1.5\times 10^{-6}$ & \\ \midrule
& & 1 & $2.8\times 10^{-4}$ & $1.7\times 10^{-6}$ & \\
C00 & $1.0\times 10^{-2}$ & 2 & $2.6\times 10^{-4}$ & $2.2\times 10^{-7}$ & \\
& & 3 & $2.5\times 10^{-4}$ & $5.3\times 10^{-7}$ & \\ \midrule
& & 1 & $3.7\times 10^{-4}$ & $1.5\times 10^{-6}$ & \\
C05 & $5.0\times 10^{-3}$ & 2 & $3.1\times 10^{-4}$ & $2.5\times 10^{-7}$ & \\
& & 3 & $2.6\times 10^{-4}$ & $2.5\times 10^{-7}$ & \\ \midrule
& & 1 & $1.3\times 10^{-4}$ & $2.7\times 10^{-6}$ & \\
D00 & $1.0\times 10^{-2}$ & 2 & $1.0\times 10^{-4}$ & $9.2\times 10^{-7}$ & \\
& & 3 & $9.1\times 10^{-5}$ & $8.8\times 10^{-7}$ & \\ \midrule
& & 1 & $1.5\times 10^{-4}$ & $1.8\times 10^{-6}$ & \\
D05 & $5.0\times 10^{-3}$& 2 & $1.5\times 10^{-4}$ & $3.6\times 10^{-6}$ & \\
& & 3 & $1.3\times 10^{-4}$ & $9.8\times 10^{-6}$ & \\ \midrule
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{}&
\end{tabular}
\end{table}
\begin{figure*}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{images/peri_A00.pdf}
\end{minipage}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{images/peri_B00.pdf}
\end{minipage}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{images/peri_C00.pdf}
\end{minipage}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{images/peri_D00.pdf}
\end{minipage}
\caption{
BHB pericentre distance distribution for all simulations of models A00, B00,
C00 and D00. The histograms are calculated at three different points in the
evolution of the systems, namely at $1$ Gyr (blue), $2$ Gyr (red) and $3$
Gyr (green). We show with a vertical, black dashed line the initial pericentre
in the model.
}
\label{fig:per_a}
\end{figure*}
\begin{figure*}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{images/peri_A05.pdf}
\end{minipage}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{images/peri_B05.pdf}
\end{minipage}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{images/peri_C05.pdf}
\end{minipage}
\begin{minipage}[b]{0.49\linewidth}
\centering
\includegraphics[width=\textwidth]{images/peri_D05.pdf}
\end{minipage}
\caption{As in Fig.\ref{fig:per_a}, but for the models A05, B05, C05 and D05.}
\label{fig:per_b}
\end{figure*}
For BHBs which are initially circular, we can see in the table and in
Fig.~\ref{fig:per_a} that in all models there is a significant shrinkage of
the pericentre distance of at least one order of magnitude.
Such shrinkage occurs after only 1 Gyr. For the most
massive clusters, i.e. model D00, about $20\%$ of all binaries achieve a
pericentre distance about two orders of magnitude smaller than the
initial value. We note, moreover, that a few binaries shrink to extremely
small values, reaching pericentre distances down to $10^{-7}$ pc.
Eccentric binaries also shrink and achieve smaller pericentre values, as we can
see in Fig.~\ref{fig:per_b}.
In both the eccentric and the circular cases, we note that in the low-density
clusters, i.e. models A and B, the BHB preserves a pericentre relatively close
to the initial value, indicating that such stellar systems are less efficient
in favouring the BHB shrinkage. In such models the pericentre data appear more
spread than in models C and D. A further difference is that, for example in
model A00, even after 3 Gyr the BHBs have larger pericentres, indicating that
the binary becomes wider, contrary to what is observed in model A05. We note
additionally that in the intermediate low-mass model, B05, the pericentre
reaches very small values (of the order of $10^{-7}$ pc), which does not occur
for an initially circular orbit. These results indicate that both the cluster
stellar density and the initial orbital eccentricity play a relevant role in
favouring the BHB shrinkage.
\subsection{Retained and dynamically-ejected BHBs}
\label{ret_esc}
We observe that these binaries, as a result of interactions with other stars in
the system, can also be ejected from the cluster. In the code that we are
using, \texttt{NBODY7}, single or binary stars are considered escapers if their
energy is positive and their distance to the OC centre is at least two times
the initial half-mass radius \citep{aarseth_esc, aarseth_book}. Taking into
account the evolutionary scenarios discussed in Sect. \ref{BHB_ev}, we derive
for each model the fraction of escaping and retained BHBs. Table \ref{esc_ret}
summarizes the results of this analysis.
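As an illustration, the escaper criterion can be written as the following check; this is a sketch in our own notation, not the \texttt{NBODY7} internals:
\begin{verbatim}
import numpy as np

def is_escaper(pos, vel, phi, r_half_initial):
    """True if a star (or binary centre of mass) counts as an escaper:
    positive energy and distance from the cluster centre larger than
    twice the initial half-mass radius."""
    energy = 0.5 * np.dot(vel, vel) + phi   # specific energy
    return bool(energy > 0.0 and
                np.linalg.norm(pos) > 2.0 * r_half_initial)

# Example with made-up values in N-body units:
print(is_escaper(np.array([3.0, 0.0, 0.0]),
                 np.array([0.9, 0.0, 0.0]),
                 phi=-0.1, r_half_initial=1.0))   # True
\end{verbatim}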
In model A, all the BHBs that become harder, i.e. shrink their semi-major axis,
are retained in the OC, both when the binary has an initially circular orbit
(A00) and when it has an initially eccentric orbit (A05).
In model B only a small fraction of BHBs ($0.7$ \%) is ejected from the cluster,
while a large fraction is retained. In particular, in model B05 the fraction of
ejected BHBs ($2.7$ \%) is higher than in model B00. In model C the percentage
of ejected BHBs is larger than in the previous cases. In particular, when the
binary has an initially eccentric orbit (model C05) the fraction of escaping
BHBs is about $10.5$ \%. Finally, in model D the majority ($\geq 85$ \%) of
BHBs is retained in the cluster, even though in this case, contrary to the
previous situations, circular orbits have a higher fraction of ejected BHBs.
After an ionisation of the BHB, one of the individual black holes usually stays
in the cluster and forms a new binary with a star. These dynamically-formed
binaries are usually short-lived, lasting at most some tens of Myr. Moreover,
we notice that the newly single BHs are more likely to be expelled from the
stellar system than retained, because of multiple scatterings with massive
stars ($\gtrsim 10$ M$_\odot$). The time over which such massive stars are
present is comparable to the time at which the BHs are expelled from the
system, which is generally short ($\lesssim 100$ Myr).
As shown in Table \ref{esc_ret}, only in three of the models studied (A00, C00
and D00) are the BHs retained after the binary break-up.
Note that the larger fraction of ejected BHBs ``belongs'' to the more massive
clusters (C and D), in spite of their larger escape velocity.
The time at which the bound BHB is ejected from the cluster varies among the
models, with a BHB mean ejection time between $0.4$ and $1.2$ Gyr. We notice
that the low-density clusters (models A and B) show a shorter BHB ejection time
than the more massive clusters (models C and D).
\begin{table}
\centering
\caption{
Percentage of BHBs retained (ret) by the cluster or ejected (esc). The first
column indicates the model. Columns 2 and 3 indicate the percentage of retained
or ejected BHBs that become harder. Columns 4 and 5 refer to wider BHBs. Columns
6 and 7 give the percentage of retained and ejected single black holes after the
binary break-up.
\label{esc_ret}
\begin{tabular}{@{}ccccccc@{}}
\toprule
\textbf{Model} & \multicolumn{2}{c}{\textbf{hard}} &\multicolumn{2}{c}{\textbf{wider}} & \multicolumn{2}{c}{\textbf{break}} \\ \midrule
& ret & esc& ret & esc & ret & esc \\
& \% & \% & \% & \% & \% & \% \\
A00 & 89.1 & 0.0 & 7.9 & 0.0 & 0.7 & 2.1 \\
A05 & 97.1 & 0.0 & 2.1 & 0.0 & 0.0 & 0.7 \\
B00 & 91.8 & 0.7 & 2.7 & 0.0 & 0.0 & 4.8 \\
B05 & 91.3 & 2.7 & 2.0 & 0.0 & 0.0 & 4.0 \\
C00 & 88.6 & 4.9 & 0.0 & 0.0 & 0.7 & 5.6 \\
C05 & 86.5 & 9.9 & 0.0 & 0.0 & 0.0 & 3.5 \\
D00 & 85.5 & 8.6 & 0.0 & 0.0 & 0.7 & 5.0 \\
D05 & 90.0 & 7.2 & 0.0 & 0.0 & 0.0 & 2.8 \\ \bottomrule
\end{tabular}
\end{table}
The pie charts in Fig. \ref{fig:pie} illustrate the probabilities of the
different channels for two of the models studied, C and D (both configurations
00 and 05). Harder binaries are denoted with the letter ``$h$'', wider ones
with ``$w$'', and broken-up binaries with ``$b$''. Each of the three scenarios
is then split into two cases: BHB retained by the cluster (indicated with
``$ret$'') and BHB or BHs ejected from the system (indicated with ``$esc$'').
From the pie charts it is clear that in the majority of cases the BHBs shrink
their semi-major axis, becoming harder and remaining bound to the parent
cluster. Models C00 and D00 also show a very small fraction of broken-up
binaries (hence newly single BHs) which are retained by the clusters. Such a
result, on the contrary, is not observed in models C05 and D05. A considerable
number of harder BHBs escaped from the cluster is observed in model C05.
Furthermore, the percentage of newly single BHs escaped from the cluster
($b_{\rm esc}$) is higher in models C00 and D00. Finally, it is worth noticing
the fraction of coalescence events (black slices) in each model.
\begin{figure*}
\includegraphics[width=1\textwidth]{images/pie_tot_bn.pdf}
\caption{
The pie charts indicate the various evolutionary scenarios of the BHB discussed
in Sect. \ref{BHB_ev} and Sect. \ref{ret_esc} for models C and D. The colours
refer to: the fraction of retained harder BHBs ($h_{\rm ret}$), the fraction of
ejected harder BHBs ($h_{\rm esc}$), the fraction of ejected wider BHBs
($w_{\rm esc}$), the fraction of retained wider BHBs ($w_{\rm ret}$), the
fraction of broken binaries retained ($b_{\rm ret}$), the fraction of broken
binaries ejected ($b_{\rm esc}$) and the fraction of mergers ($merge$). The
striped slices refer to BHBs that broke up. The width of each slice indicates
the percentage, as shown in Table \ref{esc_ret} and Table \ref{table_merge}.
}
\label{fig:pie}
\end{figure*}
\subsection{External Tidal Field} For a Milky Way-like galaxy, the dynamical
evolution of open clusters may be significantly influenced by the external
tidal field \citep{sambaran1}. To investigate this effect, we assume our
clusters are embedded in a tidal field like that of the solar neighbourhood.
The Galactic potential is modelled using a bulge mass $M_B = 1.5 \times 10^{10}$
M$_\odot$ \citep{fdisk} and a disc mass $M_D = 5 \times 10^{10}$ M$_\odot$. The
geometry of the disc is modelled following the formulae of \citet{fbulge} with
scale parameters $a=5.0$ kpc and $b=0.25$ kpc. A logarithmic halo is included
such that the circular velocity is 220 km/s at 8.5 kpc from the Galactic
centre. Adopting this configuration, we ran a further sub-set of simulations
for each of the models A, B, C and D. The external tidal field generally
contributes to stripping stars from the cluster, accelerating its dissolution
into the field. In our models the complete dissolution of the clusters occurs
between 1.5 Gyr and 3 Gyr.
We notice that the significant reduction of the BHB semi-major axis (up to 1-2
orders of magnitude) occurs over a time which ranges between $\sim 50$ and
$\sim 7 \times 10^{2}$ Myr. Over this time range the clusters are still bound
and the tidal forces have not yet diluted them, so they do not prevent the
binary from hardening. The gravitational interactions that significantly shrink
the BHB semi-major axis act over a short time range, during which the cluster
still contains between 60\% and 80\% of its bound stars.
The complete disruption of the clusters occurs when the gravitational
interactions no longer play a dynamical role in the evolution of the black hole
binary. It is worth mentioning that these results are typical of open clusters
that lie at 8.5 kpc from the Galactic centre; clusters closer to the central
regions would dissolve on a shorter time scale.
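The adopted Galaxy model can be sketched as follows; the Miyamoto-Nagai form for the disc and the flat-halo normalisation are our reading of the setup above, and the snippet only illustrates how the rotation curve is anchored to 220 km/s at 8.5 kpc.
\begin{verbatim}
import numpy as np

G = 4.301e-6                # kpc (km/s)^2 / Msun
MB, MD = 1.5e10, 5.0e10     # bulge and disc masses [Msun]
a, b = 5.0, 0.25            # disc scale parameters [kpc]
R0, V0 = 8.5, 220.0         # solar circle [kpc] and speed [km/s]

def vc2_baryons(R):
    """In-plane circular speed squared: point-mass bulge + MN disc."""
    return G * MB / R + G * MD * R**2 / (R**2 + (a + b)**2)**1.5

vc2_halo = V0**2 - vc2_baryons(R0)  # halo fixed so that v_c(R0) = V0

def v_circ(R):
    return np.sqrt(vc2_baryons(R) + vc2_halo)

print(v_circ(8.5))   # 220 km/s by construction
\end{verbatim}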
\section{Sources of gravitational waves}
\subsection{Relativistic binaries}
\label{subsec.relbin}
The code that we used for this work (\texttt{NBODY7}) identifies those compact
objects that will eventually merge due to the emission of gravitational
radiation. Note that the \texttt{NBODY7}~ code flags a binary as `merging' when
at least one of the conditions described in \cite{aarsethnb7} is
satisfied\footnote{https://www.ast.cam.ac.uk/~sverre/web/pages/pubs.htm} (see
also \citet[Sect. 2.3.1]{sambaran3}). However, the code does not integrate
these binaries in time down to the actual coalescence, because this would
require reducing the time-step to values so small as to make the integration
stall. In Table~\ref{table_merge} we give the percentage of BHB mergers as
identified by \texttt{NBODY7} in the simulations.
The more massive the cluster, the larger the number of relativistic mergers
found. We note that the initial value of the binary eccentricity is not
necessarily correlated with the number of coalescences. The majority of mergers
occur in a time range between $5$ Myr and $1.5$ Gyr. Only two merger events
take longer, between $\approx 1.5$ Gyr and $\approx 2$ Gyr.
In our models, the clusters have not yet been disrupted when the BHB
coalescences occur, still containing more than 80\% of the initial number of
stars.
\begin{table}
\centering
\caption{Percentage of BHB mergers found for each model studied.}
\label{table_merge}
\begin{tabular}{@{}cc@{}}
\toprule
\textbf{Model} & \% Mergers
\\ \midrule
A00 & 0.0 \\
A05 & 0.7 \\
\midrule
B00 & 0.7 \\
B05 & 0.7 \\
\midrule
C00 & 2.1 \\
C05 & 4.3 \\
\midrule
D00 & 7.1 \\
D05 & 5.7 \\ \bottomrule
\end{tabular}
\label{table.mergers}
\end{table}
In Figs.~\ref{fig:parammergea} and \ref{fig:parammergeb} we show the evolution
of the BHB semi-major axis and pericentre distance for a few representative
cases of Table~\ref{table.mergers} which were initially circular or
eccentric, respectively. It is remarkable that the pericentre distances
drop by $7$--$8$ orders of magnitude with respect to the initial value. The
eccentricities fluctuate significantly, episodically reaching values very
close to unity.
\begin{figure}
\centering
\includegraphics[width=0.50\textwidth]{images/eccmerge.pdf}
\caption{The evolution of the eccentricity for two cases in which
the BHBs merge, for models A05 and D00.}
\label{fig:eccmer}
\end{figure}
Because of the relativistic recoil kick
\citep{CampanelliEtAl2007,BakerEtAl2006,GonzalezEtAl2007,fragk17,frlgk18}, the
product of the merger of the BHB might achieve very large velocities, enough to
escape the host cluster in all cases given the very small escape velocity.
Fig.~\ref{fig:bhbesctgw} shows the distribution of the BHB semi-major axis
and eccentricity at the last output before the gravitational wave regime drives
the merger. Because these binaries have undergone many dynamical interactions
with other stars, the eccentricities are very high, ranging from $0.99996$ to
above $0.99999$.
Taking into account the expression for the GW emission time, $\mathcal{T}_{\rm gw}$ \citep{peters64},
\begin{equation}
\mathcal{T}_{\rm gw} = 5.8\times 10^{6} \, \frac{(1+q)^{2}}{q} \left(\frac{a}{10^{-2}\,\rm{pc}}\right)^{4} \left(\frac{m_{1}+m_{2}}{10^{8}\,\rm{M}_{\odot{}}}\right)^{-3} \left(1-e^{2}\right)^{7/2}~{\rm yr},
\label{peters}
\end{equation}
where $q$ is the mass ratio between the two BHs of mass $m_{1}$ and
$m_{2}$\footnote{Note that the r.h.s. of Eq.~\ref{peters} is invariant under
the choice $q=m_1/m_2$ or $q=m_2/m_1$.},
we find that about 50\% of the mergers are mediated by a three-body encounter
with a perturber star. Such three-body interactions are thus a fundamental
ingredient for BHB coalescence in low-density star clusters, as already pointed
out by \citet{sambaran3}.
An example of such a mechanism is discussed in the next section (\ref{mikkola}).
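Equation \ref{peters} is straightforward to evaluate numerically. The sketch below uses the underlying \cite{peters64} expression in SI units, applied to a hard, highly eccentric $30+30$ M$_\odot$ binary of the kind shown in Fig.~\ref{fig:bhbesctgw}; the input values are illustrative.
\begin{verbatim}
import numpy as np

G, c = 6.674e-11, 2.998e8               # SI units
MSUN, PC, YR = 1.989e30, 3.086e16, 3.156e7

def t_gw_yr(a_pc, e, m1_msun, m2_msun):
    """Peters (1964) coalescence time in years."""
    m1, m2 = m1_msun * MSUN, m2_msun * MSUN
    a = a_pc * PC
    t = (5.0 / 256.0) * c**5 * a**4 / (G**3 * m1 * m2 * (m1 + m2))
    return t * (1.0 - e**2)**3.5 / YR

print(t_gw_yr(1e-4, 0.99996, 30.0, 30.0))   # a few thousand years
\end{verbatim}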
\begin{figure} \centering
\includegraphics[width=0.50\textwidth]{images/sea_merg_semi_a.pdf}
\includegraphics[width=0.50\textwidth]{images/sea_merg_peri_a.pdf}
\caption{Evolution of the BHB semi major axis (upper panel) and pericentre distance
(lower panel) of three illustrative cases which initially were circular.
}
\label{fig:parammergea}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.50\textwidth]{images/sea_merg_semi_b.pdf}
\includegraphics[width=0.50\textwidth]{images/sea_merg_peri_b.pdf}
\caption{Same as Fig.~\ref{fig:parammergea} but for BHBs which initially had an eccentric orbit.}
\label{fig:parammergeb}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}
{\includegraphics[scale=1,clip]{images/newaemerged.pdf}}
\caption
{
Distribution of the semi-major axis ($a$) and eccentricity ($e$) for all BHBs
which merge in our simulations. The various symbols refer to the models as defined in Table~\ref{ictab}.
}
\label{fig:bhbesctgw}
\end{figure}
\subsection{A detailed example of a merger event}
\label{mikkola}
As we said above, \texttt{NBODY7}~identifies binary merger events in a different
way when a close interaction with a third object occurs.
However, it does not fully integrate those specific cases, because following
the detailed binary evolution would practically make the code stall. So, in
order to check with accuracy the process of BHB coalescence upon perturbation,
we followed the evolution of one of the allegedly merging BHBs by means of the
few-body integrator \texttt{ARGdf } \citep{megan17}. Based on the \texttt{ARCHAIN } code
\citep{mikkola99}, \texttt{ARGdf } includes a treatment of the dynamical friction
effect in the algorithmic regularization scheme, which models strong
gravitational encounters at high precision, also in a post-Newtonian scheme
with terms up to order $2.5$ (\citealt{mikkola08}, whose first implementation
in a direct-summation code is in \citealt{KupiEtAl06}). We chose, at random,
one of our simulations of the D00 model to set initial conditions for the
high-precision evolution of a ``pre-merger'' BHB, considering its interaction
with the closest 50 neighbours, a number that we verified to be sufficient to
give accurate predictions in this regard.
\begin{figure} \resizebox{\hsize}{!}
{\includegraphics[scale=1,clip]{images/plot652.pdf}} \caption{Trajectories of
the BHs in our resimulation (model D00). The cyan circle and dashed line
represent the perturbing star and its trajectory; the black holes are shown as
blue and red circles with solid lines. The grey circles and lines indicate the
stars of the simulated sub-cluster sample.} \label{3bd} \end{figure}
\begin{figure}
\resizebox{\hsize}{!}
{\includegraphics[scale=1,clip]{images/ecc_mikk.pdf}}
\caption
{
Evolution of the BHB eccentricity of Fig.~\ref{mkk} as a consequence of the
three-body encounter. Each jump in the eccentricity corresponds to a close
passage of the third star by the BHB, as described in the text.
}
\label{mkke}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}
{\includegraphics[scale=1,clip]{images/ifig6202.pdf}}
\caption
{
Trajectories of the BHs in our resimulation (model D00).
The cyan circle and dashed line represent the perturbing star and its
trajectory; the black holes are shown as blue and red circles with solid lines.}
\label{mkk}
\end{figure}
This integration is a clear example of the relevance of dynamical interactions
with other stars. Fig.~\ref{3bd} is a snapshot of the BHB evolution and the
formation of a triple system with a perturber star\footnote{An animation of
the triple orbit and the eccentricity evolution is available online under the
name triple\_argdf.avi}. The BHB shrinks by interacting with this perturber, of
mass $3.4\,M_\odot$, which is on a retrograde orbit with respect to the inner
binary, with an inclination of $105\degree$, indicating an eccentric
Kozai-Lidov mechanism \citep{kozai,lidov,naoz}.
We also note a flyby star of mass $0.5\,M_\odot$ which interacts with the
triple system (BHB \& perturber). In Fig.~\ref{mkke} we display the step-like
increase of the BHB eccentricity, which is marked by the repeated interactions
with the outer star. Each time the perturber orbits around the BHB we observe a
step-like increase of the eccentricity. By contrast, the flyby encounter is not
efficient enough to significantly perturb the eccentricity evolution.
Fig.~\ref{mkk} shows a zoom of the evolution of the last BHB orbits before the
coalescence event. The plot in the rectangle is a zoom of the final part of the
BHB trajectory (at its right side), spanning a length scale of $\sim 10^{-7}$ pc.
Therefore, in this particular case the build-up of the triple is the main
ingredient that drives the BHB coalescence. A similar result was derived by
\citet{sambaran3} for low-density star clusters.
\subsection{Gravitational Waves}
In Fig.~\ref{fig.30_30_Peters} we show the amplitude versus frequency of the
emitted gravitational waves for the case described in the previous subsection.
Using the last orbital parameters of the binary, which correspond to the last
integration made with \texttt{ARGdf }, we evolve the final phase of the binary
by means of Eq.~\ref{peters}, deriving a coalescence time $T_{\rm mrg} \cong
7\,\textrm{yr}$. The amplitude is estimated following the Keplerian-orbit
approach of \cite{PM63}, and the orbital evolution follows \cite{peters64}. We
have set the luminosity distance to that of the first source detected by LIGO
\citep{abbott16a}, which corresponds to a redshift of about $z=0.11$. As
described in \cite{ChenAmaro-Seoane2017}, only circular sources are audible by
LISA, which is ``deaf'' to eccentric binaries of stellar-mass black holes since
they emit their maximum power at frequencies farther away from the LISA band.
Hence, this particular source only enters the Advanced LIGO detection band.
\begin{figure}
\includegraphics[width=1\columnwidth]{images/30_30_Peters.pdf}
\caption{
Characteristic amplitude $h_c$ of the first most important harmonics for the
model of Fig.~\ref{mkk} at a luminosity distance of $D=500~{\rm Mpc}$. We also
pinpoint three moments in the evolution with dots, which correspond,
respectively, to 5 min, 8 sec and 1 sec before the merger of the two black
holes.
}
\label{fig.30_30_Peters}
\end{figure}
\subsection{Black holes inspiraling outside of the cluster}
In our simulations some BHBs undergo a strong interaction with a star and are
kicked out of the cluster. These BHBs become escapers as defined in
Section~\ref{ret_esc}. In this case, the BHBs remain almost frozen in their
relative configuration, without the further evolution of their orbital
parameters described in Section~\ref{subsec.relbin}: an escaping BHB evolves
only due to the emission of gravitational radiation. For all these escaping
BHBs (47 cases over the whole set of our simulations), we estimate the
timescale for coalescence using the Keplerian-orbit approach of \cite{peters64}
and find that it always exceeds the Hubble time.
The inspiral phase of these binaries falls in the sensitivity window of LISA.
However, they evolve very slowly in frequency due to the fact that the
semi-major axis is still large, and the time to coalescence scales as $\propto
a^4$. For an observational time of 5 years, the source would virtually not have
moved in frequency, and hence the accumulated SNR over that time is negligible.
\subsection{Merger Rate}
To estimate approximately the merger rate, $\mathcal{R}_{\rm mrg}$
we first derive the mean number density of open clusters, $n_{\rm OC}$,
over a volume $\Omega$ corresponding to redshift $z\leq 1$ as
\begin{equation}
{n_{\rm OC}}= \frac{{N_{\rm OC-MW}} \enskip {N_{\rm MW-\Omega}}}{\Omega}.
\end{equation}
\noindent
In this equation ${N_{\rm OC-MW}}$ is the number of OCs in Milky Way (MW)-like
galaxies and ${N_{\rm MW-\Omega}}$ is the number of MW-like galaxies within
${z=1}$. We estimate the number of OCs in our Galaxy on the basis of the
open-cluster mass function discussed in \cite{piskunov08,spz2010} for the mass
range of OCs considered in our work (from $300$ M$_\odot$ to approximately $3000$
M$_\odot$). We take $N_{\rm MW-\Omega}=10^8$ as the number of Milky Way-like
galaxies within redshift $\sim 1$, as discussed in \cite{tal17}.
We stress here that the estimated merger rate is an upper limit, since it
assumes that each open cluster hosts a massive BHB similar to those in the
clusters studied in our models.
Hence, the black hole binary merger rate can be estimated as
\begin{equation}
\centering
\mathcal{R}_{\rm mrg} = \frac{1}{N_{\rm s}} \sum_{k=1}^{N_{\rm s}} \frac{n_{\rm OC}}{t_{\rm k}} \approx 2\,\textrm{Gpc}^{-3}\,\textrm{yr}^{-1},
\label{rate}
\end{equation}
\noindent
where N$_{\rm s}$ is the total number of $N$-body simulations performed in this
work, and t$_{\rm k}$ is the time of each coalescence event as found in our
simulations. This estimate is however derived under the most favourable
conditions and represents the most optimistic merger rate expected from
low-mass open clusters. Note that the BHB merger rate inferred from the first
LIGO observations (GW150914) is in the range $2$ - $600$ Gpc$^{-3}$ yr$^{-1}$
\citep{abbrate16}. The most updated estimate of the merger
rate from LIGO-Virgo events (after including GW170104) is $12$-$213$ Gpc$^{-3}$ yr$^{-1}$ \citep{abbott17}.
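Schematically, the average in Eq.~\ref{rate} can be computed as below; every number here is a stand-in, since the actual $n_{\rm OC}$ and the merger times come from the open-cluster mass function and from the simulations, respectively.
\begin{verbatim}
import numpy as np

N_s = 1500                                 # total number of simulations
t_k_gyr = np.array([0.2, 0.5, 0.9, 1.4])   # stand-in merger times [Gyr]
n_oc = 6.7e9                               # stand-in OC density [Gpc^-3]

rate = np.sum(n_oc / (t_k_gyr * 1.0e9)) / N_s   # [Gpc^-3 yr^-1]
print(rate)
\end{verbatim}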
Our BHB merger rate is consistent with those found by
\cite{sambaran1,sambaran2,sambaran3} for BHB mergers in Young Massive Star
Clusters (YMSC). \cite{antoninirasio2016} found a merger rate ranging from
$0.05$ to $1$ Gpc$^{-3}$ yr$^{-1}$ for possible progenitors of GW150914 in
globular clusters at $z<0.3$. \cite{rodriguez16b} and \cite{rodriguez16}
derived that in the local universe BHBs formed in globular clusters (GCs) will
merge at a rate of $5$ Gpc$^{-3}$ yr$^{-1}$. A very similar result was derived
by \citet{askar17} who, for BHBs originating in globular clusters, derived a
rate of $5.4$ Gpc$^{-3}$ yr$^{-1}$. When the history of star clusters across
cosmic time is included, \citet{frak18} showed that the rate in the local
Universe is $\sim 10$ Gpc$^{-3}$ yr$^{-1}$, i.e. nearly twice the rate
predicted for isolated clusters.
In Fig.~\ref{fig:mrate} we show the estimated merger rate as a function of
the initial number of cluster stars ($N$). The merger rates derived from our
models A, B, C and D are well fitted with a linear relation.
An extension of our merger rate estimate to globular cluster-like systems
($N > 10^{5}$) gives a result in agreement with that found in \cite{park17},
and previously by \cite{bae14} and \cite{rodriguez16,rodriguez16b}.
\begin{figure} \centering \includegraphics[width=0.50\textwidth]{images/ratemerg.pdf}
\caption{The most optimistic merger rate ($\mathcal{R}_{\rm mrg}$, in Gpc$^{-3}$ yr$^{-1}$)
obtained for each model studied in this work, as a function of the total initial number of cluster stars $N$.
The merger rates are well fitted with a linear relation, $R_{\rm mrg}=aN+b$, where
$a= 6.2\times 10^{-5}$ and $b= -0.04$.}
\label{fig:mrate}
\end{figure}
Although BHB mergers originating in open cluster-like systems might be less
numerous than those produced in massive star clusters, they would contribute a
comparable amount to the BHB merger rate in the Universe because of their
larger abundance \citep{sambaran2,sambaran3}.
\section{Tidal disruption events and BHB ejection}
\label{tde}
All numerical models we have considered so far have solar metallicity, $Z =
0.02$, and are based on the standard stellar evolution recipes
\citep{hurley2000, hurley2002}.
Moreover, they all consider an equal-mass BHB sitting in the host cluster
centre with an initial mass $M_{{\rm BHB}} = 60$ M$_\odot$. In order to explore the role
played by metallicity, the adopted stellar evolution recipes, and the BHB mass and mass
ratio, we present in this section an additional sample consisting of 471
simulations (the supplementary models), gathered in 5 different groups.
In all these 5 supplementary groups, the OC initial conditions are the same as
in D00 principal models. This implies $N_{\rm cl} = 4096$ and, for the BHB
initial orbit, $a_{\rm BHB} = 0.01$ pc and $e_{\rm BHB} = 0$, unless specified otherwise.
We label each group with the letter M and a number between 1 and 5.
In model M1, we model the OC assuming an initial metallicity value
$Z=0.0004$, typical of an old stellar population. The BHB initial conditions
are the same as in model D00. Stellar evolution is treated taking advantage of
a new implementation of the \texttt{SSE} and \texttt{BSE} tools, which includes
metal-dependent stellar winds formulae and improvements described in
\cite{belczynski2010}. In the following, we identify the updated stellar
evolution treatment used for model M1 as BSEB, while in all other cases we
label it BSE. Note that these updates allow the formation of BHs
with natal masses above $30$ M$_\odot$, while this is not possible in the standard
\texttt{SSE} implementation \citep{hurley2000}. Moreover, it must be stressed
that the updates affect only metallicities below the solar value.
Model M2 is similar to model M1 in terms of initial metallicity and BHB initial
condition, while we used the standard \texttt{SSE} and \texttt{BSE} codes to
model stellar evolution. Therefore, the key difference between the two models
is that in model M2 the masses of the compact remnants are systematically
lower. This, in turn, implies that the number of perturbers that can have a
potentially disruptive effect on the BHB evolution is reduced in model M2.
In model M3 we adopt $Z = 0.02$, i.e. the solar value, and we focus on a BHB with
component masses $M_1 = 13$ M$_\odot$ and $M_2 = 7$ M$_\odot$. This set has a twofold
purpose. On the one hand, it allows us to investigate the evolution of a BHB with
mass ratio lower than 1. On the other hand, since in this case the BHB total
mass is comparable to the maximum mass of compact remnants allowed by the stellar
evolution recipes, gravitational encounters should be more important in the BHB
evolution.
To further investigate the role of mass ratio, M4 models are similar to M3, but
in this case the BHB mass ratio is smaller, namely $q=0.23$, i.e. the
component masses are $M_1 = 30$ M$_\odot$ and $M_2 = 7$ M$_\odot$.
In all the principal and supplementary models discussed above, we assume that
the BHB is initially at the centre of the OC. In order to investigate whether
such a system can be formed dynamically, i.e. via strong encounters, in model
M5 we set two BHs, with masses $M_1 = M_2 = 30$ M$_\odot$, initially unbound. In this
case we set $Z=0.02$, in order to compare with D00 principal models.
The initial conditions of these runs are summarized in Table \ref{newsim}, and the main outcomes in Table \ref{newsimres}.
\begin{table}
\centering
\caption{Supplementary models. The columns refer to: model name, BHB individual masses and mass ratio, metallicity, stellar evolution recipes used, number of simulations performed. The cluster is always simulated with 4096 stars.}
\begin{tabular}{ccccccc}
\hline
Model & $M_1$ & $M_2$ & $q$ & $Z$ & SE & $N_{\rm mod}$ \\
& M$_\odot$ & M$_\odot$ & & Z$_\odot$ & &\\
\hline
M1 & 30 & 30 & 1 & $10^{-4}$ & BSEB & 109\\
M2 & 30 & 30 & 1 & $10^{-4}$ & BSE & 131\\
M3 & 13 & 7 & 0.54 & $1$ & BSE & 100\\
M4 & 30 & 7 & 0.23 & $1$ & BSE & 42 \\
M5 & 30 & 30 & 1 & $1$ & BSE & 89 \\
\hline
\end{tabular}
\label{newsim}
\end{table}
Since we are interested only in the evolution of the initial BHB, we stop a
simulation if at least one of the initial BHB components is ejected from
the parent OC.
When metallicity-dependent stellar winds are taken into account (model M1), the
reduced mass loss causes the formation of heavier compact remnants, with masses
comparable to the BHB components. Since the number of BHs is $\sim
10^{-3}N_{\rm cl}$, according to a Kroupa IMF, in models M1 at least
4--5 heavy BHs can form, interact, and possibly disrupt the initial BHB. This is
confirmed by the simulation results: we find one of the BHB components
kicked out in $P_{\rm esc} = 34.9\%$ of the cases investigated.
After the component ejection, the remaining BH can form a new
binary, possibly a new BHB, with one of the perturbers.
The ``ejection probability'' in models M2 is only slightly lower than
in M1, $P_{\rm esc} = 33.6\%$, thus implying that the heavier perturbers
forming in models M1 only marginally affect the BHB evolution. This is likely
due to two factors: (i) their number is relatively low (4--5); (ii) the mass
segregation process in such a low-density, relatively light stellar system is
slower than the time-scale over which stellar encounters determine the BHB evolution.
The latter point implies that the BHB evolution is mostly driven by the
cumulative effect of multiple stellar encounters, rather than by a few
interactions with a heavy perturber.
In model M3, characterized by a lighter BHB and solar metallicity, the BHB
total mass, $20$ M$_\odot$, falls at the high end of the BH mass spectrum. This
implies a larger number of massive perturbers with respect to the standard case
discussed in the previous sections and provides insight into the fate of light
BHBs in OCs. Owing to the high efficiency of strong interactions, the BHB unbinds
in $P_{\rm esc} = 32\%$ of the cases, and in no case does the BHB undergo
coalescence.
Model M5 is characterized by a similar ejection probability, which instead
rises to $40.5\%$ in model M4. This is likely due to the relatively
low mass of the secondary component. Indeed, as shown through scattering
experiments, BHB-BH interactions seem to naturally lead to a final state in
which the resulting BHB has a larger mass ratio \citep[see for
instance][]{ASKL18}.
In a few cases, we found that the BHB disruption is mediated by a star, which
binds to one of the two former BHB components. The newly formed BH--star pair
is characterized by a high eccentricity ($e>0.9$) and a pericentre sufficiently
small to rip the star apart and give rise to a tidal disruption event (TDE). In
the current \texttt{Nbody6} implementation, only $10\%$ of the stellar mass is
accreted onto the BH, although this percentage could be as high as $50\%$.
The fraction of models in which a TDE takes place spans one order of magnitude,
being $f_{\rm TDE} \cong 0.03-0.3$, with the maximum achieved in models M4 and
the minimum in M1. Note that in model M5 we did not find any TDEs (see Table
\ref{newsimres}), but in this case the two BHs are initially moving outside the OC
inner regions.
In our models, TDEs involve either main sequence stars (MS), stars in the core
He burning phase (HB) or in the early asymptotic giant branch (AGB) phase. In
model M3 ($f_{\rm TDE} = 0.14$) TDEs involve MS ($29\%$), early AGB ($57\%$)
and AGB ($14\%$) stars. In model M4, where the BHB has a low mass ratio
($q=7/30$), TDEs are boosted, since in this case it is easier to replace the
lighter BH. Indeed, a component swap occurs in $28.5\%$ of the cases, with the
new companion star being swallowed by the heavier BH.
Our findings suggest that X-ray or UV emission from OCs can be the signature of
the presence of BHs with masses as high as $20-30$ M$_\odot$.
\begin{table}
\centering
\caption{Summary of results from the supplementary models. Columns refer to: model name, percentage of cases in which at least one of the BHB components is ejected, percentage of cases in which a star is swallowed by one of the two BHs, percentage of cases in which the BHB merges.}
\begin{tabular}{ccccccc}
\hline
Model & $P_{\rm esc}$ & $P_{\rm TDE}$ & $P_{\rm mer}$ \\
& $\%$ & $\%$ & $\%$ \\
\hline
M1 & 34.9& 2.8 & 0.0\\
M2 & 33.6& 6.9 & 3.8\\
M3 & 32.0& 14.0& 0.0\\
M4 & 40.5& 28.5& 0.0\\
M5 & 32.6& 0.0 & 0.0\\
\hline
\end{tabular}
\label{newsimres}
\end{table}
Using our results we can calculate the TDE rate for Milky Way-like galaxies
as
\begin{equation}
\Gamma_{\rm TDE} = \frac{f_{\rm TDE}N_{\rm OC}N_{\rm MW}}{\Omega T} = (0.3-3.07)\times 10^{-6}\, {\rm yr}^{-1}.
\end{equation}
Here $f_{\rm TDE}$ is the fraction of TDEs inferred from the simulations, we adopt the
values of $N_{\rm OC}$, $N_{\rm MW}$ and $\Omega$ discussed in the previous section,
and we assume $T = 3$ Gyr, i.e. the simulated time span.
Our estimate agrees nicely with a similar TDE-rate calculation provided by
\citet{perets16}, and is roughly an order of magnitude lower than the values
calculated for TDEs occurring around supermassive black holes
\citep{fraglei18,stonemetzger,stone17,stone16b,megan1}.
We apply the same analysis to our principal models and find a TDE rate for
solar-metallicity OCs of $\Gamma_{\rm TDE} = (0.3-3.07)\times 10^{-6}
\,{\rm yr}^{-1}$ for MW-like galaxies in the local Universe.
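For a single MW-like galaxy the expression above reduces to $\Gamma_{\rm TDE}\simeq f_{\rm TDE}N_{\rm OC}/T$. As an indicative example (the value of $N_{\rm OC}$ below is the one implied by the quoted range, not an independent input), with $f_{\rm TDE}=0.03$--$0.3$ and $T=3$ Gyr,
\[
\Gamma_{\rm TDE} \simeq \frac{(0.03\textrm{--}0.3)\times 3\times 10^{4}}{3\,{\rm Gyr}} \approx (0.3\textrm{--}3)\times 10^{-6}\,{\rm yr}^{-1},
\]
corresponding to an effective $N_{\rm OC}\sim 3\times 10^{4}$ open clusters per galaxy.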
The BHB coalescence occurs in a few cases, $f_{\rm mer} \cong 0.004$, and only in
models M2, where metallicity-dependent mass loss is disabled. This suggests
that there exist three different regimes, depending on the perturber maximum
mass $M_p$. If (1) $M_{\rm BHB} \gg M_p$, the BHB is much more massive than the field stars and
stellar encounters barely affect its evolution; if (2) $M_{\rm BHB} \gtrsim
M_p$, a few perturbers have masses comparable to the BHB and can efficiently
drive it toward coalescence, for instance by increasing the BHB
eccentricity or considerably shrinking its orbit; if (3) $M_{\rm BHB} \lesssim
M_p$, there is at least one perturber with a mass similar to, or even larger
than, that of the BHB. In this case the BHB--perturber interactions cause either the BHB disruption or
the formation of a new BHB with the perturber replacing the lighter BHB
component.
Note that we cannot exclude that BHBs merge in the other models, since we stop
the computation once the original BHB gets disrupted. Hence, we can infer a lower
limit on the merger rate for metal-poor OCs as follows:
\begin{equation}
\mathcal{R}_{\rm mrg} = \frac{f_{\rm mer}N_{\rm MW} N_{\rm OC}}{\Omega T} \simeq 0.26\,{\rm Gpc}^{-3}\,{\rm yr}^{-1}.
\end{equation}
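As a simple cross-check (a rough rescaling, not an independent calculation), since the merger fraction drops from $\sim 3\%$ in the principal models to $f_{\rm mer}\cong 0.4\%$ here, one expects approximately
\[
\mathcal{R}_{\rm mrg} \sim \frac{0.004}{0.03}\times 2\,{\rm Gpc}^{-3}\,{\rm yr}^{-1} \approx 0.27\,{\rm Gpc}^{-3}\,{\rm yr}^{-1},
\]
in line with the value quoted above; the small difference reflects the different coalescence times entering the two estimates.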
These models highlight the importance of stellar evolution in our calculations:
stronger stellar winds lead to lighter remnants, reducing the number of
objects massive enough to cause the BHB disruption. This increases the
probability that the BHB shrinks through repeated interactions with lighter
objects.
As described above, in model M5 the two BHs are initially unbound, and their
initial positions and velocities are drawn consistently from the OC distribution
function. In this situation, the fraction of cases in which at least one of
the BHs is ejected from the cluster is similar to that of the other models
($P_{\rm esc}\sim 32.6\%$), but in none of the models do the two BHs bind
together. This is due to the low efficiency of dynamical friction in the OC,
which prevents the two BHs from decaying into the innermost potential well. TDEs
are also suppressed, owing to the small number of strong encounters between the
BHs and cluster stars in the low-density surrounding environment.
To conclude, our supplementary models confirm that the possibility for a BHB to
coalesce in an OC depends strongly on the environment in which the BHB formed
and on its total mass and mass ratio. In metal-poor OCs, metallicity-dependent
stellar winds drive the formation of a sizeable number of massive perturbers
that can efficiently disrupt the BHB, thus reducing the coalescence
probability. Coalescence is also strongly reduced in the case of low mass
ratios ($q\sim 0.2$) or relatively light BHBs ($M_1+M_2 \sim 20$ M$_\odot$).
One of the most interesting outcomes of the models presented in this section is
the possibility of using the OC TDE rate as a proxy to infer the presence of a
massive BH or BHB around the OC centre.
\section{Conclusions}
In this paper we address the evolution of an equal-mass, non-spinning, stellar
BHB with total mass $60$ M$_{\odot}$ inhabiting the centre of a
small/intermediate star cluster (open cluster-like, OC), using the direct
$N$-body code \texttt{NBODY7} \citep{aarsethnb7}. In order to quantify the effect of
repeated stellar encounters on the BHB evolution, we vary the OC total mass and
the BHB orbital properties, providing a total of $\sim1150$ simulations which
we refer to as {\it principal} models. For the sake of comparison, we also
investigate the role played by the BHB total mass, the stellar evolution
recipes adopted and the OC metallicity. These can be considered as {\it
supplementary} models. The total simulation sample hence consists of $\sim
1500$ different OC models, with masses in the range $300-3000$ M$_\odot$.
In $\sim 95\%$ of all the principal simulations performed, the BHB hardens due
to the repeated scatterings with flyby stars, while its eccentricity increases
significantly. This process takes place on a relatively short time-scale, $\sim
1$ Gyr. In $\sim 1.2\%$ of the principal simulations, instead, the
perturbations induced by massive stars that occasionally approach the BHB make
it wider. In the remaining $\sim 4.8\%$ cases, the interactions with OC stars
are sufficiently strong to break up the BHB. When the BHB hardens, its semi-major axis decreases by 2 to 4 orders of magnitude, thus
shortening the merger time-scale, which scales as the fourth power of the semi-major axis, by up to 16 orders of magnitude in the best case. Hardened BHBs
are retained within the parent OC with a probability of $95\%$, while those
becoming wider are all retained. In the case of BHB breakup, the two BHs tend
to form short-lived binary systems with other OC stars, and eventually at least
one of the two BHs is ejected from the parent cluster.
In $\sim 3\%$ of the models, the star-BHB interactions are sufficiently
effective to drive the BHB coalescence within a Hubble time. We find that a
crucial ingredient for the BHB to merge is the interaction with a perturbing
star, which considerably shortens the merger time. These dynamical perturbers
enhance the number of GW sources by as much as $50\%$. The merger takes place
in a time ranging from $5$ Myr to $2.9$ Gyr. In a few cases, the merging binaries emit
GWs shifting from the $10^{-3}$ to the $10$ Hz frequency band. This suggests that
merging BHBs in OCs can potentially be seen both by LISA, $\sim 200$ yr before
the merger, and by LIGO, during the last phase preceding the merger.
Extrapolating our results to the typical population of OCs in MW-like galaxies
in the local Universe, we find that the most optimistic rate for BHB
mergers in low-mass stellar systems is $\mathcal{R}_{\rm mrg}\sim 2$
Gpc$^{-3}$ yr$^{-1}$, a value compatible with the merger rate expected for galactic
nuclei, but smaller than the merger rate inferred for globular and young
massive clusters.
According to our supplementary models, in metal-poor environments the BHB
hardening is suppressed, owing to the presence of a large number of high-mass
perturbers that can efficiently drive the BHB disruption. In this regard,
different stellar evolution recipes may significantly affect the results, since
they regulate the maximum mass of the compact remnants. Assuming a smaller BHB and a
solar metallicity for the cluster stars leads to similar results, since, again,
the fraction of perturbers sufficiently massive to drive the BHB disruption is
much larger.
In none of the cases in which the BHB components are initially unbound does the
BHB form via dynamical processes. This is due to the low efficiency of
dynamical friction in the low-density OC environment, which is unable to drive
the orbital segregation and pairing of the BHs. Binaries such as the ones
considered in this paper should therefore be primordial.
In a noticeable fraction of the supplementary models, we found that the BHB
breaks up and one of the BHs forms a very eccentric binary with an OC star,
typically a main sequence or an AGB star. These binaries are usually
short-lived systems and result in a tidal disruption event, with part of the
stellar debris being swallowed by the BH.
Our supplementary models suggest that TDEs in OCs occur at a rate of up to $\Gamma_{\rm
TDE} = 3.08\times 10^{-6}$ yr$^{-1}$ per MW-like galaxy in the local
Universe.
\section*{Acknowledgements}
SR acknowledges Sapienza, Universit\`a di Roma, which funded the research
project ``Black holes and Star clusters over mass and spatial scale'' via the
grant AR11715C7F89F177. SR is thankful to Sverre Aarseth of the Institute of
Astronomy, Cambridge, for his helpful comments and suggestions during the
development of this work. MAS acknowledges the Sonderforschungsbereich SFB 881
"The Milky Way System" (subproject Z2) of the German Research Foundation (DFG)
for the financial support provided. PAS acknowledges support from the Ram{\'o}n
y Cajal Programme of the Ministry of Economy, Industry and Competitiveness of
Spain, as well as the COST Action GWverse CA16104. GF is supported by the Foreign Postdoctoral Fellowship Program of the Israel Academy of Sciences and Humanities. GF also acknowledges support from an Arskin postdoctoral fellowship and Lady Davis Fellowship Trust at the Hebrew University of Jerusalem.
\bibliographystyle{mnras}
\section{Introduction}
In this paper we define a new algebra of noncommutative differential
forms over any Hopf algebra $H$ with an invertible antipode. The
resulting differential calculus, denoted here by $\mathcal{K}^*(H)$,
is intimately related to the class of
anti-Yetter-Drinfeld modules over $H$, herein called AYD modules, that
were introduced in \cite{Khalkhali:SaYDModules}. More precisely, we
show that there is a one to one correspondence between AYD modules
over $H$ and $H$--modules that admit a flat connection with respect to
our differential calculus $\mathcal{K}^*(H)$. In general terms, this
gives a new interpretation of AYD modules as noncommutative analogues
of local systems over the noncommutative space represented by $H$.
The introduction of AYD modules in \cite{Khalkhali:SaYDModules} was
motivated by the problem of finding the largest class of Hopf modules
that could serve as coefficients for the cyclic (co)homology of Hopf
algebras introduced by Connes and Moscovici
\cite{ConnesMoscovici:HopfCyclicCohomology,
ConnesMoscovici:HopfCyclicCohomologyIa,
ConnesMoscovici:HopfCyclicCohomologyII} and its extensions and
ramifications defined in \cite{Khalkhali:DualCyclicHomology,
Khalkhali:InvariantCyclicHomology, Khalkhali:HopfCyclicHomology,
Kaygun:BialgebraCyclicK}. The question of finding an appropriate
notion of local system, or coefficients, to twist the cyclic homology
of associative algebras is an interesting open problem. The case of
Hopf cyclic cohomology on the other hand offers an interesting
exception in that it admits coefficients and furthermore, as we shall
prove, these coefficient modules can be regarded as ``flat bundles''
in the sense of Connes' noncommutative differential geometry
\cite{Connes:NonCommutativeGeometry}.
Here is a plan of this paper. In Section~\ref{Prelims} we recall the
definitions of anti-Yetter-Drinfeld (AYD) and Yetter-Drinfeld (YD)
modules over a Hopf algebra, and we give a characterization of the
category of AYD modules over group algebras as a functor
category. This will serve as our guiding example. Motivated by this,
in Section~\ref{FlatConnections} we define two natural differential
calculi $\C{K}^*(H)$ and $\widehat{\C{K}}^*(H)$ associated with an
arbitrary Hopf algebra $H$. Next, we give a complete characterization
of the category of AYD and YD modules over a Hopf algebra $H$ as
modules admitting flat connections over these differential calculi
$\C{K}^*(H)$ and $\widehat{\C{K}}^*(H)$, respectively. In the same
section, we also investigate tensor products of modules admitting flat
connections. In Section~\ref{EquivariantCalculi}, we place the
differential calculi $\C{K}^*(H)$ and $\widehat{\C{K}}^*(H)$ in a much
larger class of differential calculi, unifying the results proved
separately before. This last section is motivated by the recent paper
\cite{PanaiteStaic:GeneralizedAYD}. After this paper was posted, T.
Brzezinski informed us that he can also derive our differential
calculus and flatness results from the theory of corings
\cite{BrzezinskiWisbauer:ComodulesCorings}. This would involve
realizing AYD modules via a specific entwining structure, and using
the general machinery of \cite{BrzezinskiWisbauer:ComodulesCorings}
that cast such structures as flat modules.
In this paper we work over a field $k$ of an arbitrary characteristic.
All of our results, however, are valid over an arbitrary unital
commutative ground ring $k$ if the objects involved (coalgebras,
bialgebras, Hopf algebras and their (co)modules) are flat
$k$--modules. We will assume $H$ is a Hopf algebra with an invertible
antipode and all of our $H$--modules and comodules are left modules
and comodules unless explicitly indicated otherwise.
Acknowledgement: M.K. would like to thank Professor Alain Connes for
suggesting the problem of interpreting AYD modules as flat modules in
the sense of noncommutative differential geometry and for enlightening
discussions at an early stage of this work.
\section{Preliminaries and a guiding example}\label{Prelims}
Recall from \cite{Khalkhali:SaYDModules} that a $k$--module $X$ is
called a left-left anti-Yetter-Drinfeld (AYD, for short) module if (i)
$X$ is a left $H$--module, (ii) $X$ is a left $H$--comodule, and (iii)
one has a compatibility condition between the $H$--module and comodule
structure on $X$ in the sense that:
\begin{align}\label{AYD}
(hx)_{(-1)}\otimes(hx)_{(0)}
= h_{(1)}x_{(-1)}S^{-1}(h_{(3)})\otimes h_{(2)}x_{(0)}
\end{align}
for any $h\in H$ and $x\in X$. The AYD condition~(\ref{AYD}) should
be compared with the Yetter-Drinfeld (YD) compatibility condition:
\begin{align}\label{YD}
(hx)_{(-1)}\otimes (hx)_{(0)}
= h_{(1)}x_{(-1)}S(h_{(3)})\otimes h_{(2)}x_{(0)}
\end{align}
for any $h\in H$ and $x\in X$.
A morphism of AYD modules $X\to Y$ is simply a $k$-linear map $X
\to Y$ compatible with the $H$--action and coaction. The resulting
category of AYD modules over $H$ is an abelian category. There is a
similar statement for YD modules.
To pass from (\ref{AYD}) to (\ref{YD}) one replaces $S^{-1}$ by $S$.
This is, however, a nontrivial operation since the categories of AYD
and YD modules over $H$ are very different in general. For example,
the former is not a monoidal category in a natural way
\cite{Khalkhali:SaYDModules} while the latter is always monoidal. If
$S^2=id_H$, in particular when $H$ is commutative or cocommutative,
then these categories obviously coincide.
To understand these compatibility conditions better we proceed as
follows. Let $X$ be a left $H$--module. We define a left $H$--action on
$H\otimes X$ by letting
\begin{align}\label{AYDact}
h (g\otimes x): =h_{(1)}gS^{-1}(h_{(3)})\otimes h_{(2)}x
\end{align}
for any $h\in H$ and $g\otimes x \in H\otimes X$.
\begin{lem}
The formula given in (\ref{AYDact}) defines a left $H$--module
structure on $H\otimes X$. Moreover, an $H$--module/comodule $X$ is
an anti-Yetter-Drinfeld module iff its comodule structure map
$\rho_X:X\to H\otimes X$ is a morphism of $H$--modules.
\end{lem}
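The first statement is a short computation; a minimal verification, using only the coassociativity of $\Delta$ and the fact that $S^{-1}$ is an algebra anti-homomorphism, reads
\begin{align*}
  h(h'(g\otimes x))
  = & h_{(1)}h'_{(1)}\,g\,S^{-1}(h'_{(3)})S^{-1}(h_{(3)})\otimes h_{(2)}h'_{(2)}x\\
  = & (hh')_{(1)}\,g\,S^{-1}\left((hh')_{(3)}\right)\otimes (hh')_{(2)}x
  = (hh')(g\otimes x).
\end{align*}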
There is of course a similar characterization of YD modules. The left
action (\ref{AYDact}) should simply be replaced by the left action
\begin{align}\label{YDact}
h (g\otimes x):=h_{(1)}gS(h_{(3)})\otimes h_{(2)}x
\end{align}
Let us give a characterization of AYD modules in a concrete example.
Let $G$ be a, not necessarily finite, discrete group and let $H=k[G]$
be its group algebra over $k$ with its standard Hopf algebra
structure, i.e. $\Delta(g)=g\otimes g$ and $S(g)=g^{-1}$ for all $g\in
G$. We define a groupoid $G\ltimes G$ whose set of morphisms is
$G\times G$ and its set of objects is $G$. Its source and target maps
$s: G\ltimes G\to G$ and $t:G\ltimes G\to G$ are defined by
\begin{align*}
s(g,g') &= g & t(g,g') &= gg'g^{-1}
\end{align*}
for any $(g,g')$ in $G\ltimes G$. It is easily seen that, with group
multiplication as its composition, $G\ltimes G$ is a groupoid.
\begin{prop}\label{BundlesOverG}
The category of anti-Yetter-Drinfeld modules over $k[G]$ is
isomorphic to the category of functors from $G\ltimes G$ into the
category of $k$--modules.
\end{prop}
\begin{proof}
Let $M$ be a $k[G]$--module/comodule. Denote its structure morphisms
by $\mu:k[G] \otimes M\to M$ and $\rho:M\to k[G] \otimes M$.
Since we assumed $k$ is a field, $M$ has a basis of the form
$\{e^i\}_{i\in I}$ for some index set $I$. Since $M$ is a
$k[G]$--comodule one has
\begin{align*}
  e^i_{(-1)}\otimes e^i_{(0)} = \sum_{j\in I}\sum_{g\in G}
  c^i_{j,g}(g\otimes e^j)
\end{align*}
where only finitely many $c^i_{j,g}$ are non-zero. One can choose a
basis $\{m^\lambda\}_{\lambda\in\Lambda}$ for $M$ such that
\begin{align*}
m^\lambda_{(-1)}\otimes m^\lambda_{(0)}
= \sum_\alpha c_{\lambda,\alpha} (g_\lambda\otimes m^\alpha)
\end{align*}
and since all comodules are counital and $k[G]$ has a counit
$\varepsilon(g)=1$ for any $g\in G$ we see that
\begin{align*}
m^\lambda = \sum_\alpha c_{\lambda,\alpha} m^\alpha
\end{align*}
implying that $c_{\lambda,\alpha}$ vanishes for all $\alpha$ except
$\alpha=\lambda$, for which it equals 1. In other words, one can split $M$
as $\bigoplus_{g\in G} M_g$ such that $\rho(x)= g\otimes x$ for any
$x\in M_g$. Now assume $M$ is an AYD module. Then since
\begin{align*}
(hx)_{(-1)}\otimes (hx)_{(0)} = hgh^{-1}\otimes hx
\end{align*}
for any $x\in M_g$ and $h\in G$ one can see that
$L_h:M_g\to M_{hgh^{-1}}$ where $L_h$ is the $k$--vector space
endomorphism of $M$ coming from the left action of $h$. This
observation implies that the category of AYD modules over $k[G]$
and the category of functors from $G\ltimes G$ into the category of
$k$--modules are isomorphic.
\end{proof}
\section{Differential calculi and flat connections}\label{FlatConnections}
Our next goal is to find a noncommutative analogue of
Proposition~\ref{BundlesOverG}. To this end, we will replace the
groupoid $G\ltimes G$ by a differential calculus $\C{K}^*(H)$
naturally defined for any Hopf algebra $H$. The right analogue of
representations of the groupoid $G\ltimes G$ will be $H$--modules
admitting flat connections with respect to the differential calculus
$\C{K}^*(H)$.
Let us first recall basic notions of connection and curvature in the
noncommutative setting from \cite{Connes:NonCommutativeGeometry}. Let
$A$ be a $k$--algebra. A differential calculus over $A$ is a
differential graded $k$--algebra $(\Omega^*,d)$ endowed with a
morphism of algebras $\rho:A\to \Omega^0$. The differential $d$ is
assumed to have degree one. Since in our main examples we have
$\Omega^0=A$ and $\rho=id$, in the following we assume this is the
case.
Assume $M$ is a left $A$--module. A morphism of $k$--modules
$\nabla:M\to \Omega^1\eotimes{A}M$ is called a connection with
respect to the differential calculus $(\Omega^*,d)$ if one has a
Leibniz rule of the form
\begin{align*}
\nabla(am) = a\nabla(m) + d(a)\eotimes{A}m
\end{align*}
for any $m\in M$ and $a\in A$. Given any connection $\nabla$ on $M$,
there is a unique extension of $\nabla$ to a map $\widehat{\nabla}:
\Omega^*\eotimes{A}M\to\Omega^*\eotimes{A}M$ satisfying a graded
Leibniz rule. It is given by
\begin{align*}
\widehat{\nabla}(\omega\otimes m)
= d(\omega)\eotimes{A}m + (-1)^{|\omega|}\omega\nabla(m)
\end{align*}
for any $m\in M$ and $\omega\in\Omega^*$. A connection
$\nabla:M\to \Omega^1\eotimes{A}M$ is called flat if its curvature
$R: =\widehat{\nabla}^2=0$. This is equivalent to saying
$\Omega^*\eotimes{A}M$ is a differential graded $\Omega^*$--module
with the extended differential $\widehat{\nabla}$.
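For instance, the simplest example of a flat connection is $M = A$ itself with $\nabla = d:A\to\Omega^1$: the Leibniz rule for $\nabla$ is just the Leibniz rule for $d$ in degree zero,
\begin{align*}
  \nabla(am) = d(am) = a\,d(m) + d(a)\,m,
\end{align*}
and the extension $\widehat{\nabla}$ coincides with $d$, so that $\widehat{\nabla}^2 = d^2 = 0$.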
The following general definition will be useful in the rest of this
paper.
\begin{defn}
Let $X_0,\ldots,X_n$ be a finite set of $H$--bimodules. We define
an $H$--bimodule structure on the $k$--module
$X_0\otimes\cdots\otimes X_n$ by
\begin{align*}
h(x^0\otimes\cdots\otimes x^n)
= & h_{(1)}x^0 S^{-1}(h_{(2n+1)})\otimes\cdots\otimes
h_{(n)} x^{n-1} S^{-1}(h_{(n+2)})\otimes h_{(n+1)} x^n,\\
(x^0\otimes\cdots\otimes x^n)h
= & x^0\otimes\cdots\otimes x^{n-1}\otimes x^nh.
\end{align*}
for any $h\in H$ and $(x^0\otimes\cdots\otimes x^n)\in
X_0\otimes\cdots\otimes X_n$. Checking the bimodule conditions is
straightforward. We denote this bimodule by $X_0\oslash\cdots\oslash
X_n$.
\end{defn}
\begin{rem}
We should remark that $X\oslash Y$ is not a monoidal product.
In other words, given any three $H$--bimodules $X$, $Y$, and $Z$, the bimodules
$(X\oslash Y)\oslash Z$ and $X\oslash (Y\oslash Z)$ are in general
not isomorphic as left $H$--modules unless $H$ is cocommutative.
\end{rem}
\begin{defn}
For each $n\geq 0$, let $\C{K}^n(H)= H^{\oslash n+1}$.
We define a differential $d: \C{K}^n(H) \to
\C{K}^{n+1}(H)$ by
\begin{align*}
d(h^0\otimes\cdots\otimes h^n)
= & - (1\otimes h^0\otimes\cdots\otimes h^n)
+ \sum_{j=0}^{n-1}(-1)^j(h^0\otimes\cdots\otimes
h^j_{(1)}\otimes h^j_{(2)}\otimes\cdots\otimes h^n)\\
& + (-1)^n(h^0\otimes\cdots\otimes h^{n-1}\otimes
h^n_{(1)}S^{-1}(h^n_{(3)})\otimes h^n_{(2)}).
\end{align*}
We also
define an associative graded product structure by
\begin{align*}
(x^0\otimes\cdots\otimes x^n) & (y^0\otimes\cdots\otimes y^m)\\
= & x^0\otimes\cdots\otimes x^{n-1}\otimes
x^n_{(1)}y^0S^{-1}(x^n_{(2m+1)})\otimes\cdots\otimes
x^n_{(m)}y^{m-1}S^{-1}(x^n_{(m+2)})\otimes x^n_{(m+1)}y^m
\end{align*}
for any $(x^0\otimes\cdots\otimes x^n)$ in $\C{K}^n(H)$ and
$(y^0\otimes\cdots\otimes y^m)$ in $\C{K}^m(H)$.
\end{defn}
\begin{prop}\label{AYDCalculus}
$\C{K}^*(H)$ is a differential graded $k$--algebra.
\end{prop}
\begin{proof}
For any $x\in\C{K}^0(H)$ one has
\begin{align*}
d(x) = - (1\otimes x) + (x_{(1)}S^{-1}(x_{(3)})\otimes x_{(2)})
= [x,(1\otimes 1)],
\end{align*}
and for $(y\otimes 1)$ in $\C{K}^1(H)$ and $x\in\C{K}^0(H)$ we see
\begin{align*}
d(x(y\otimes 1))
= & d(x_{(1)}yS^{-1}(x_{(3)})\otimes x_{(2)})\\
= & - (1\otimes x_{(1)}yS^{-1}(x_{(3)})\otimes x_{(2)})
+ (x_{(1)}y_{(1)}S^{-1}(x_{(5)})\otimes x_{(2)}y_{(2)}S^{-1}(x_{(4)})\otimes x_{(3)})\\
& - (x_{(1)}yS^{-1}(x_{(5)})\otimes x_{(2)}S^{-1}(x_{(4)})\otimes x_{(3)})\\
= & d(x)(y\otimes 1) + x d(y\otimes 1)
\end{align*}
We also see that for $(x\otimes y)$ in $\C{K}^1(H)$ we have
\begin{align*}
d((x\otimes 1)y) & = d(x\otimes y)
= - (1\otimes x\otimes y)
+ (x_{(1)}\otimes x_{(2)}\otimes y)
- (x\otimes y_{(1)}S^{-1}(y_{(3)})\otimes y_{(2)})\\
= & - (1\otimes x\otimes 1)y
+ (x_{(1)}\otimes x_{(2)}\otimes 1) y
- (x\otimes 1\otimes 1)y
+ (x\otimes 1)(1\otimes y)\\
& - (x\otimes 1)(y_{(1)}S^{-1}(y_{(3)})\otimes y_{(2)})\\
= & d(x\otimes 1)y - (x\otimes 1)d(y).
\end{align*}
Note that with the product structure on $\C{K}^*(H)$ one
has
\begin{align*}
(x^0\otimes\cdots\otimes x^n)
= (x^0\otimes 1)\cdots(x^{n-2}\otimes 1)(x^{n-1}\otimes 1)x^n
\end{align*}
for any $x^0\otimes\cdots\otimes x^n$ in $\C{K}^n(H)$. Now,
one can inductively show that
\begin{align*}
d(\Psi\Phi)
= & d(\Psi)\Phi + (-1)^{|\Psi|}\Psi d(\Phi)
\end{align*}
for any $\Psi$ and $\Phi$ in $\C{K}^*(H)$. Since the algebra is
generated by degree zero and degree one terms, all that remains is
to show that for all $x\in H$ we have $d^2(x)=0$ and $d^2(x\otimes
1)=0$. For the first assertion we see that
\begin{align*}
d^2(x) & = - d(1\otimes x) + d(x_{(1)}S^{-1}(x_{(3)})\otimes x_{(2)})\\
= & (1\otimes 1\otimes x) - (1\otimes 1\otimes x)
+ (1\otimes x_{(1)}S^{-1}(x_{(3)})\otimes x_{(2)})
- (1\otimes x_{(1)}S^{-1}(x_{(3)})\otimes x_{(2)})\\
&+ (x_{(1)(1)}S^{-1}(x_{(3)(2)})\otimes x_{(1)(2)}S^{-1}(x_{(3)(1)})\otimes x_{(2)})
- (x_{(1)}S^{-1}(x_{(3)})\otimes x_{(2)(1)}S^{-1}(x_{(2)(3)})\otimes x_{(2)(2)})\\
= & 0
\end{align*}
for any $x\in H$. For the second assertion we see
\begin{align*}
d^2& (x\otimes 1)
= - d(1\otimes x\otimes 1)
+ d(x_{(1)}\otimes x_{(2)}\otimes 1)
- d(x\otimes 1\otimes 1)\\
= & - (1\otimes 1\otimes x\otimes 1)
+ (1\otimes 1\otimes x\otimes 1)
- (1\otimes x_{(1)}\otimes x_{(2)}\otimes 1)
+ (1\otimes x\otimes 1\otimes 1)\\
& - (1\otimes x_{(1)}\otimes x_{(2)}\otimes 1)
+ (x_{(1)}\otimes x_{(2)}\otimes x_{(3)}\otimes 1)
- (x_{(1)}\otimes x_{(2)}\otimes x_{(3)}\otimes 1)
+ (x_{(1)}\otimes x_{(2)}\otimes 1\otimes 1)\\
& + (1\otimes x\otimes 1\otimes 1)
- (x_{(1)}\otimes x_{(2)}\otimes 1\otimes 1)
+ (x\otimes 1\otimes 1\otimes 1)
- (x\otimes 1\otimes 1\otimes 1)\\
= & 0
\end{align*}
for any $(x\otimes 1)$ in $\C{K}^*(H)$. The result follows.
\end{proof}
Note that the calculus $\C{K}^*(H)$ is determined by (i) the
$H$--bimodule $\C{K}^1(H) = H\otimes H$, (ii) the differentials
$d_0:H\to H\otimes H$ and $d_1:H\otimes H\to H\otimes H\otimes H$,
and (iii) the Leibniz rule $d(\Psi\Phi) = d(\Psi)\Phi +
(-1)^{|\Psi|}\Psi d(\Phi)$.
Recall that for any coassociative coalgebra $C$, the
category of $C$--bicomodules has a monoidal product called cotensor
product, which is denoted by $\fotimes{C}$. The cotensor product is
left exact and its right derived functors are denoted by ${\rm
Cotor}_C^*(\cdot,\cdot)$.
Now we can identify the homology of the calculus $\C{K}^*(H)$ as
follows:
\begin{prop}
$H_*(\C{K}^*(H))$ is isomorphic to ${\rm
Cotor}_H^*(k,H^{coad})$ where $k$ is considered as an $H$--comodule
via the unit and $H^{coad}$ is the coadjoint corepresentation over
$H$, i.e. $\rho^{coad}(h)=h_{(1)}S^{-1}(h_{(3)})\otimes h_{(2)}$ for
any $h\in H$.
\end{prop}
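A sketch of the identification: forgetting the product, the complex $\C{K}^*(H)$ is precisely the two sided cobar complex $\C{B}_*(k,H,H^{coad})$. In degree zero, for instance,
\begin{align*}
  d(h) = -(1\otimes h) + h_{(1)}S^{-1}(h_{(3)})\otimes h_{(2)}
  = -(1\otimes h) + \rho^{coad}(h),
\end{align*}
which is (up to sign conventions) the first cobar differential, while the middle terms of $d$ in higher degrees are the comultiplications appearing in the cobar complex; the homology of $\C{B}_*(k,H,H^{coad})$ computes ${\rm Cotor}_H^*(k,H^{coad})$.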
\begin{thm}\label{AYDModules}
The category of AYD modules over $H$ is isomorphic to the category
of $H$--modules admitting a flat connection with respect to the
differential calculus $\C{K}^*(H)$.
\end{thm}
\begin{proof}
Assume $M$ is an $H$--module which admits a morphism of $k$--modules
of the form $\nabla:M\to \C{K}^1(H)\eotimes{H}M\cong H\otimes M$.
Define $\rho_M(m)=\nabla(m)+(1\otimes m)$ and denote $\rho_M(m)$ by
$(m_{(-1)}\otimes m_{(0)})$ for any $m\in M$. First we see that
\begin{align*}
\nabla(hm) = & (hm)_{(-1)}\otimes (hm)_{(0)} - (1\otimes hm)
\end{align*}
and also
\begin{align*}
d(h)\eotimes{H}m + h\nabla(m)
= & - (1\otimes hm)
+ (h_{(1)}S^{-1}(h_{(3)})\otimes h_{(2)}m)\\
& + (h_{(1)}m_{(-1)}S^{-1}(h_{(3)})\otimes h_{(2)}m_{(0)})
- (h_{(1)}S^{-1}(h_{(3)})\otimes h_{(2)}m)\\
= & (h_{(1)}m_{(-1)}S^{-1}(h_{(3)})\otimes h_{(2)}m_{(0)})
- (1\otimes hm)
\end{align*}
for any $h\in H$ and $m\in M$. This means $\nabla$ is a connection
iff the $H$--module $M$ together with $\rho_X:M\to H\otimes M$
satisfy the AYD condition. The flatness condition will hold iff
for any $m\in M$ one has
\begin{align*}
\widehat{\nabla}^2(m)
= & d(m_{(-1)}\otimes 1)\eotimes{H} m_{(0)}
- (m_{(-1)}\otimes 1)\nabla(m_{(0)})
- d(1\otimes 1)m + (1\otimes 1)\nabla(m)\\
= & (m_{(-1)(1)}\otimes m_{(-1)(2)}\otimes m_{(0)})
- (m_{(-1)}\otimes m_{(0)(-1)}\otimes m_{(0)(0)})
= 0,
\end{align*}
meaning $\nabla$ is flat iff $\rho_M:M\to H\otimes M$ defines a
coassociative coaction of $H$ on $M$.
\end{proof}
\begin{defn}
Let $X$ be an AYD module over $H$. Define $\C{K}^*(H,X)$
as the graded $k$--module $\C{K}^*(H)\eotimes{H}X$ equipped with the
connection $\nabla_X(x) = \rho_X(x) - (1\otimes x)$ as its
differential.
\end{defn}
Recall from \cite{Khalkhali:SaYDModules} that an $H$--module/comodule
$X$ is called stable if the composition $X\xra{\rho_X}H\otimes
X\xra{\mu_X}X$ is $id_X$ where $\rho_X$ and $\mu_X$ denote the
$H$--comodule and $H$--module structure maps respectively. Explicitly
one has $x_{(-1)}x_{(0)}=x$ for any $x\in X$.
\begin{thm}
For an arbitrary AYD module $X$ one has $H_*(\C{K}^*(H,X))\cong {\rm
Cotor}_H^*(k,X)$. Moreover, if $X$ is also stable, then
$\C{K}^*(H,X)$ is isomorphic (as differential graded $k$--modules)
to the Hochschild complex of the Hopf-cyclic complex of the Hopf
algebra $H$ with coefficients in $X$.
\end{thm}
\begin{proof}
The first part of the Theorem follows from the observation that
$\C{K}^*(H,X)$, viewed just as a differential graded $k$--module, is
precisely $\C{B}_*(k,H,X)$, the two-sided cobar complex of the coalgebra
$H$ with coefficients in the $H$--comodules $k$ and $X$. The second
assertion follows from Remark~3.13 and Theorem~3.14 of
\cite{Kaygun:BialgebraCyclicK}.
\end{proof}
\begin{prop}
Let $X$ be an AYD module over $H$. If $H$ is cocommutative, then
$\C{K}^*(H,X)$ is a differential graded left $H$--module with
respect to the AYD module structure on $\C{K}^*(H,X)$.
\end{prop}
Instead of the AYD condition, one can consider the YD condition and form the
corresponding differential calculus $\widehat{\C{K}}^*(H)$.
\begin{defn}
As before, assume $H$ is a Hopf algebra, but this time we do not
require the antipode to be invertible. We define a new differential
calculus $\widehat{\C{K}}^*(H)$ over $H$ as follows: let
$\widehat{\C{K}}^n(H)=H^{\otimes n+1}$ and define the differentials
as
\begin{align*}
d(x^0\otimes\cdots\otimes x^n)
= & -(1\otimes x^0\otimes\cdots\otimes x^n)
+ \sum_{j=0}^{n-1}(-1)^j(x^0\otimes \cdots\otimes x^j_{(1)}\otimes x^j_{(2)}\otimes\cdots x^n)\\
& + (-1)^n(x^0\otimes\cdots\otimes x^{n-1}\otimes x^n_{(1)}S(x^n_{(3)})\otimes x^n_{(2)})
\end{align*}
for any $x^0\otimes\cdots\otimes x^n$ in $\widehat{\C{K}}^n(H)$.
The multiplication is defined as
\begin{align*}
(x^0\otimes\cdots\otimes x^n) & (y^0\otimes\cdots\otimes y^m)\\
= & x^0\otimes\cdots\otimes x^{n-1}\otimes x^n_{(1)}y^0S(x^n_{(2m+1)})
\otimes\cdots\otimes x^n_{(m)}y^{m-1}S(x^n_{(m+2)})\otimes x^n_{(m+1)}y^m
\end{align*}
for any $x^0\otimes\cdots\otimes x^n$ and $y^0\otimes\cdots\otimes
y^m$ in $\widehat{\C{K}}^*(H)$.
\end{defn}
The proofs of the following facts are similar to the corresponding
statements for the differential calculus $\C{K}^*(H)$ and AYD modules.
\begin{prop}
$\widehat{\C{K}}^*(H)$ is a differential graded $k$--algebra.
\end{prop}
\begin{thm}\label{YDModules}
The category of YD modules over $H$ is isomorphic to the category of
$H$--modules admitting a flat connection with respect to the
differential
calculus $\widehat{\C{K}}^*(H)$.
\end{thm}
\begin{defn}
Let $X$ be a Yetter-Drinfeld module over $H$ with the structure
morphisms $\mu_X:H\otimes X\to X$ and $\rho_X:X\to H\otimes X$.
Define $\widehat{\C{K}}^*(H,X)$ as the (differential) graded
$k$--module $\widehat{\C{K}}^*(H)\eotimes{H}X$ with the connection
$\nabla_X(x) = \rho_X(x) - (1\otimes x)$ defined for any $x\in X$.
\end{defn}
\begin{prop}
For an arbitrary YD module $X$ one has
$H_*(\widehat{\C{K}}^*(H,X))\cong {\rm Cotor}_H^*(k,X)$.
\end{prop}
\begin{prop}
Assume $X$ is an arbitrary YD module over $H$. If $H$ is
cocommutative, then $\widehat{\C{K}}^*(H,X)$ is a differential
graded left $H$--module with respect to the YD module structure on
$\widehat{\C{K}}^*(H)$.
\end{prop}
Our next goal is to study tensor products of AYD and YD modules in our
noncommutative differential geometric setup. It is well known that
the tensor product of two flat vector bundles over a manifold is again
a flat bundle. Moreover, from the resulting monoidal, in fact
Tannakian, category one can recover the fundamental group of the base
manifold. The situation in the noncommutative case is of course far
more complicated and we only have some vestiges of this theory.
Assume $H$ is a Hopf algebra with an invertible antipode. Let $X$ be
an $H$--module admitting a flat connection $\nabla$ with respect to the calculus
$\C{K}^*(H)$. We define a switch morphism $\sigma: X\otimes
H\to H\otimes X$ by letting
\begin{align*}
\sigma(x\otimes h) = x_{\{-1\}} h\otimes x_{\{0\}}+h\otimes x,
\end{align*}
for any $x\otimes h$ in $X\otimes H$ where we used a Sweedler notation
to denote the connection: $\nabla(x)=x_{\{-1\}} \otimes
x_{\{0\}}$. Note that $\sigma$ is a perturbation of the standard
switch map.
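Note that in terms of the underlying comodule structure $\rho_X(x)=x_{(-1)}\otimes x_{(0)}=\nabla(x)+(1\otimes x)$, the two terms combine and $\sigma$ takes the familiar form
\begin{align*}
  \sigma(x\otimes h) = x_{(-1)}h\otimes x_{(0)},
\end{align*}
i.e. the standard switch map twisted by the coaction.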
\begin{prop}
Let $X$ and $X'$ be two $H$--modules with flat connections
$\nabla_X$ and $\nabla_{X'}$ with respect to the differential
calculi $\widehat{\C{K}}^*(H)$ and $\C{K}^*(H)$ respectively. Then
$\nabla_{X\otimes X'}:X\otimes X'\to H\otimes X\otimes X'$ given
by
\begin{align*}
\nabla_{X\otimes X'}(x\otimes x')=\nabla_X(x)\otimes x'
+ (\sigma\otimes id_{X'})\left(x\otimes \nabla_{X'}(x')\right)
\end{align*}
for any $x\otimes x'$ in $X\otimes X'$ defines a flat connection
on the $H$--module $X\otimes X'$ with respect to $\C{K}^*(H)$.
\end{prop}
\begin{proof}
Recall from Theorem~\ref{AYDModules} and Theorem~\ref{YDModules} that
the category of AYD (resp. YD) modules and the category of $H$--modules
admitting flat connections with respect to the differential calculus
$\C{K}^*(H)$ (resp. $\widehat{\C{K}}^*(H)$) are isomorphic. Then
given two $H$--modules with flat connections $(X,\nabla_X)$ over
$\widehat{\C{K}}^*(H)$ and $(X',\nabla_{X'})$ over $\C{K}^*(H)$ one
can extract $H$--comodule structures by letting $x_{(-1)}\otimes
x_{(0)}:=\rho_X(x):=\nabla_X(x)+(1\otimes x)$ and $x'_{(-1)}\otimes
x'_{(0)}:=\rho_{X'}(x'):=\nabla_{X'}(x')+(1\otimes x')$. Then we get
\begin{align*}
\nabla_{X\otimes X'}(x\otimes x')
= & \nabla_X(x)\otimes x'
+ (\sigma\otimes id_{X'})\left(x\otimes \nabla_{X'}(x')\right)\\
= & - (1\otimes x\otimes x') + (x_{(-1)}\otimes x_{(0)}\otimes x')
- (x_{(-1)}\otimes x_{(0)}\otimes x')
+ (x_{(-1)} x'_{(-1)}\otimes x_{(0)}\otimes x'_{(0)})\\
= & - (1\otimes x\otimes x')
+ (x_{(-1)} x'_{(-1)}\otimes x_{(0)}\otimes x'_{(0)})
\end{align*}
for any $(x\otimes x')$ in $X\otimes X'$. One can easily check that
$\rho_{X\otimes X'}(x\otimes x')=x_{(-1)}x'_{(-1)}\otimes
x_{(0)}\otimes x'_{(0)}$ is a coassociative $H$--coaction on $X\otimes
X'$. Moreover, for any $h\in H$ and $x\otimes x'$ in $X\otimes X'$
we have
\begin{align*}
\rho_{X\otimes X'}(h(x\otimes x'))
= & \rho_{X\otimes X'}(h_{(1)}(x)\otimes h_{(2)}(x'))\\
= & h_{(1)}x_{(-1)}S(h_{(3)})h_{(4)}x'_{(-1)}S^{-1}(h_{(6)})\otimes
h_{(2)}(x_{(0)})\otimes h_{(5)}x'_{(0)}\\
= & h_{(1)}x_{(-1)}x'_{(-1)}S^{-1}(h_{(3)})\otimes
h_{(2)}(x_{(0)}\otimes x'_{(0)})
\end{align*}
meaning $X\otimes X'$ is an AYD module over $H$. In other words
$\nabla_{X\otimes X'}$ is a flat connection on $X\otimes X'$ with
respect to the differential calculus $\C{K}^*(H)$. The result
follows.
\end{proof}
\section{Equivariant differential calculi}\label{EquivariantCalculi}
The concepts of AYD and YD modules have recently been extended in
\cite{PanaiteStaic:GeneralizedAYD}. In this section we give a further
extension of this new class of modules and show that they can be
interpreted as modules admitting flat connections with respect to a
differential calculus. In this section we assume $B$ is a bialgebra
and $B^{op, cop}$ is the bialgebra $B$ with the opposite
multiplication and comultiplication. Assume $\alpha:B\to B$ and
$\beta:B\to B^{op,cop}$ are two morphisms of bialgebras. Also in this
section we fix a $B$--bimodule coalgebra $C$. In other words $C$ is a
$B$--bimodule and the comultiplication $\Delta_C:C\to C\otimes C$ is a
morphism of $B$--bimodules where we think of $C\otimes C$ as a
$B$--bimodule via the diagonal action of $B$. Equivalently, one has
\begin{align*}
(bcb')_{(1)}\otimes (bcb')_{(2)} = b_{(1)}c_{(1)}b'_{(1)}\otimes b_{(2)}c_{(2)}b'_{(2)}
\end{align*}
for any $b,b'\in B$ and $c\in C$. We also assume $C$ has a grouplike
element via a coalgebra morphism $\B{I}:k\to C$. We do not impose
any condition on $\varepsilon(\B{I})$.
\begin{defn}
Define a graded $B$--bimodule $\C{K}_{(\alpha,\beta)}^*(C,B)$ by
letting $\C{K}_{(\alpha,\beta)}^n(C,B) = C^{\otimes n}\otimes B$.
The right action is defined by the right regular representation of
$B$ on itself, i.e.
\begin{align*}
(c^1\otimes\cdots\otimes c^n\otimes b')b
= c^1\otimes\cdots\otimes c^n\otimes b' b
\end{align*}
and the left action is defined by
\begin{align*}
b(c^1\otimes\cdots\otimes c^n\otimes b')
= \alpha(b_{(1)}) c^1 \beta(b_{(2n+1)})\otimes\cdots\otimes
\alpha(b_{(n)}) c^n \beta(b_{(n+2)})
\otimes b_{(n+1)} b'
\end{align*}
for any $c^1\otimes\cdots\otimes c^n\otimes b'$ in
$\C{K}_{(\alpha,\beta)}^n(C,B)$ and $b\in B$.
\end{defn}
\begin{prop}
$\C{K}_{(\alpha,\beta)}^*(C,B)$ is a differential graded
$k$--algebra.
\end{prop}
\begin{proof}
First we define a product structure. We let
\begin{align*}
(c^1\otimes\cdots\otimes & c^n\otimes b)
(c^{n+1}\otimes\cdots\otimes c^{n+m}\otimes b')\\
= & c^1\otimes\cdots\otimes c^n\otimes
\alpha(b_{(1)}) c^{n+1} \beta(b_{(2m+1)})\otimes\cdots\otimes
\alpha(b_{(m)}) c^{n+m} \beta(b_{(m+2)})\otimes b_{(m+1)}b'
\end{align*}
for any $c^1\otimes\cdots\otimes c^n\otimes b$ and
$c^{n+1}\otimes\cdots\otimes c^{n+m}\otimes b'$ in
$\C{K}_{(\alpha,\beta)}^*(C,B)$. Note that
\begin{align*}
c^1\otimes\cdots\otimes c^n\otimes b
= (c^1\otimes 1)\cdots(c^n\otimes 1)b.
\end{align*}
So it is enough to check associativity only for degree 1 terms.
Then
\begin{align*}
((c^1\otimes b^1)(c^2\otimes b^2))(c^3\otimes b^3)
= & (c^1\otimes\alpha(b^1_{(1)}) c^2 \beta(b^1_{(3)})\otimes b^1_{(2)} b^2)
(c^3\otimes b^3)\\
= & c^1\otimes \alpha(b^1_{(1)}) c^2 \beta(b^1_{(5)})\otimes
\alpha(b^1_{(2)})\alpha(b^2_{(1)}) c^3 \beta(b^2_{(3)})\beta(b^1_{(4)})\otimes
b^1_{(3)}b^2_{(2)} b^3\\
= & (c^1\otimes b^1)(c^2\otimes \alpha(b^2_{(1)}) c^3 \beta(b^2_{(3)})\otimes
b^2_{(2)} b^3)\\
= & (c^1\otimes b^1)((c^2\otimes b^2)(c^3\otimes b^3))
\end{align*}
for any $(c^i\otimes b^i)$ for $i=1,2,3$ as we wanted to show.
Define the differentials as
\begin{align*}
d(c^1\otimes\cdots\otimes c^n\otimes b)
= & -(\B{I}\otimes c^1\otimes\cdots\otimes c^n\otimes b)
+ \sum_{j=1}^n (-1)^{j-1}(c^1\otimes\cdots\otimes c^j_{(1)}\otimes c^j_{(2)}
\otimes\cdots\otimes c^n\otimes b)\\
& + (-1)^n (c^1\otimes\cdots\otimes c^n\otimes
\alpha(b_{(1)})\B{I}\beta(b_{(3)})\otimes b_{(2)})
\end{align*}
for any $(c^1\otimes\cdots\otimes c^n\otimes b)$ in
$\C{K}_{(\alpha,\beta)}^*(C,B)$ and one can check that
\begin{align*}
d(b) & = -(\B{I}\otimes b) + (\alpha(b_{(1)})\B{I}\beta(b_{(3)})\otimes b_{(2)}) &
d(c\otimes 1) = & -(\B{I}\otimes c\otimes 1) + (c_{(1)}\otimes c_{(2)}\otimes 1)
-(c\otimes\B{I}\otimes 1)
\end{align*}
for any $b\in B$, $c\in C$. In order to prove that
$\C{K}_{(\alpha,\beta)}^*(C,B)$ is a differential graded
$k$--algebra, we must prove that the Leibniz rule holds. Since
$\C{K}_{(\alpha,\beta)}^*(C,B)$ as an algebra is generated by degree
1 terms, it is enough to check the Leibniz rule for degree 0 and 1
terms. We see that
\begin{align*}
d(b'b)
= & -(\B{I}\otimes b'b)
+ (\alpha(b'_{(1)})\alpha(b_{(1)})\B{I}\beta(b_{(3)})\beta(b'_{(3)})\otimes
b'_{(2)}b_{(2)})\\
= & -(\B{I}\otimes b'b) + (\alpha(b'_{(1)})\B{I}\beta(b'_{(3)})\otimes b'_{(2)}b)\\
& - (\alpha(b'_{(1)})\B{I}\beta(b'_{(3)})\otimes b'_{(2)}b)
+ (\alpha(b'_{(1)})\alpha(b_{(1)})\B{I}\beta(b_{(3)})\beta(b'_{(3)})\otimes
b'_{(2)}b_{(2)})\\
= & d(b') b + b' d(b)
\end{align*}
for any $b,b'\in B$. Moreover,
\begin{align*}
d((c\otimes b')b)
= & d(c\otimes b'b)
= -(\B{I}\otimes c\otimes b'b) + (c_{(1)}\otimes c_{(2)}\otimes b'b)
-(c\otimes \alpha(b'_{(1)})\alpha(b_{(1)})\B{I}\beta(b_{(3)})\beta(b'_{(3)})
\otimes b'_{(2)} b_{(2)})\\
= & -(\B{I}\otimes c\otimes b'b) + (c_{(1)}\otimes c_{(2)}\otimes b'b)
-(c\otimes \alpha(b'_{(1)}) \B{I} \beta(b'_{(3)}) \otimes b'_{(2)} b)\\
& +(c\otimes \alpha(b'_{(1)}) \B{I} \beta(b'_{(3)}) \otimes b'_{(2)} b)
-(c\otimes \alpha(b'_{(1)})\alpha(b_{(1)}) \B{I} \beta(b_{(3)}) \beta(b'_{(3)})
\otimes b'_{(2)} b_{(2)})\\
= & d(c\otimes b')b - (c\otimes b')d(b)
\end{align*}
for any $(c\otimes b')$ in $\C{K}_{(\alpha,\beta)}^1(C,B)$ and
$b\in B$. Then, again for the same elements
\begin{align*}
d(b(c\otimes b'))
= & d(\alpha(b_{(1)}) c \beta(b_{(3)}) \otimes b_{(2)} b')\\
= & -(\B{I}\otimes \alpha(b_{(1)}) c \beta(b_{(3)})\otimes b_{(2)} b')
+(\alpha(b_{(1)})\B{I}\beta(b_{(5)})\otimes \alpha(b_{(2)}) c \beta(b_{(4)})\otimes
b_{(3)} b')\\
& -(\alpha(b_{(1)})\B{I}\beta(b_{(5)})\otimes \alpha(b_{(2)}) c \beta(b_{(4)})\otimes
b_{(3)} b')
+ (\alpha(b_{(1)}) c_{(1)} \beta(b_{(5)}) \otimes
\alpha(b_{(2)}) c_{(2)} \beta(b_{(4)}) \otimes b_{(3)} b')\\
& -(\alpha(b_{(1)}) c \beta(b_{(5)}) \otimes
\alpha(b_{(2)})\alpha(b'_{(1)}) \B{I} \beta(b'_{(3)})\beta(b_{(4)})\otimes
b_{(3)} b'_{(2)})\\
= & d(b)(c\otimes b') + b d(c\otimes b')
\end{align*}
And finally for $(c\otimes 1)$ and $(c'\otimes 1)$ in
$\C{K}_{(\alpha,\beta)}^*(C,B)$ we see that
\begin{align*}
d((c\otimes 1)(c'\otimes 1))
= & d(c\otimes c'\otimes 1)\\
= & -(\B{I}\otimes c\otimes c'\otimes 1)
+(c_{(1)}\otimes c_{(2)}\otimes c'\otimes 1)
-(c\otimes c'_{(1)}\otimes c'_{(2)}\otimes 1)
+(c\otimes c'\otimes \B{I}\otimes 1)\\
= & -(\B{I}\otimes c\otimes c'\otimes 1)
+(c_{(1)}\otimes c_{(2)}\otimes c'\otimes 1)
-(c\otimes\B{I}\otimes c'\otimes 1)\\
&+(c\otimes\B{I}\otimes c'\otimes 1)
-(c\otimes c'_{(1)}\otimes c'_{(2)}\otimes 1)
+(c\otimes c'\otimes \B{I}\otimes 1)\\
= & d(c\otimes 1)(c'\otimes 1) - (c\otimes 1) d(c'\otimes 1)
\end{align*}
Now, one can inductively prove that
\begin{align*}
d(\Psi\Phi) = d(\Psi)\Phi + (-1)^{|\Psi|}\Psi d(\Phi)
\end{align*}
for any $\Psi$ and $\Phi$ in $\C{K}_{(\alpha,\beta)}^*(C,B)$ proving
$\C{K}_{(\alpha,\beta)}^*(C,B)$ is a differential graded $k$--algebra.
\end{proof}
\begin{cor}
$H_*(\C{K}_{(\alpha,\beta)}^*(C,B))$ is a graded algebra.
\end{cor}
Now we can identify the homology of the
$(\alpha,\beta)$--equivariant differential calculus:
\begin{prop}
$B$ is a $C$--comodule and
$H_*(\C{K}_{(\alpha,\beta)}^*(C,B))$ is isomorphic to ${\rm
Cotor}_C^*(k,B)$.
\end{prop}
\begin{proof}
The $C$--comodule structure is given by
$\rho_B(b)=\alpha(b_{(1)})\B{I}\beta(b_{(3)})\otimes b_{(2)}$ for
any $b\in B$. Now, one should observe that
$\C{K}_{(\alpha,\beta)}^*(C,B)$ is the two sided cobar complex
$\C{B}_*(k,C,B)$ of $C$ with coefficients in $k$ and $B$.
\end{proof}
\begin{defn}
A $k$--module $X$ is called an $(\alpha,\beta)$--equivariant
$C$--comodule if (i) $X$ is a left $C$--comodule via a structure
morphism $\rho_X:X\to C\otimes X$ (ii) $X$ is a left $B$--module
via a structure morphism $\mu_X:B\otimes X \to X$ (iii) one
has
\begin{align*}
\rho_X(bx) = \alpha(b_{(1)})x_{(-1)}\beta(b_{(3)})\otimes b_{(2)}x_{(0)}
\end{align*}
for any $b\in B$ and $x\in X$.
\end{defn}
\begin{thm}
The category of $(\alpha,\beta)$--equivariant $C$--comodules is
equivalent to the category of $B$--modules admitting a flat
connection with respect to the differential calculus
$\C{K}_{(\alpha,\beta)}^*(C,B)$.
\end{thm}
\begin{proof}
Assume we have a $k$--module morphism $\rho_X:X\to C\otimes X$ and
define $\rho_X(x):=\nabla(x)+(1\otimes x)$ for any $x\in X$ where we
denote $\rho_X(x)$ by $x_{(-1)}\otimes x_{(0)}$. Now for any $b\in
B$ and $x\in X$ we have
\begin{align*}
\nabla(bx)
= & -(\B{I}\otimes bx) + (bx)_{(-1)}\otimes (bx)_{(0)}\\
= & -(\B{I}\otimes bx) + (\alpha(b_{(1)})\B{I}\beta(b_{(3)})\otimes b_{(2)}x)
- (\alpha(b_{(1)})\B{I}\beta(b_{(3)})\otimes b_{(2)}x)
+ (\alpha(b_{(1)})x_{(-1)}\beta(b_{(3)})\otimes b_{(2)}x_{(0)})\\
= &\ d(b)\eotimes{B}x + b\nabla(x)
\end{align*}
iff $(bx)_{(-1)}\otimes (bx)_{(0)} =
\alpha(b_{(1)})x_{(-1)}\beta(b_{(3)})\otimes b_{(2)}x_{(0)}$. In
other words $\nabla:X\to C\otimes X$ is a connection iff the
morphism of $k$--modules $\rho_X:X\to C\otimes X$ satisfies the
$(\alpha,\beta)$--equivariance condition. Moreover, if we extend
$\nabla$ to $\widehat{\nabla}:C\otimes X\to C\otimes C\otimes X$
by letting
\begin{align*}
\widehat{\nabla}(c\otimes x)
= & d(c\otimes 1)\eotimes{B}x - (c\otimes 1)\nabla(x)
\end{align*}
for any $(c\otimes x)\in C\otimes X$, then we have
\begin{align*}
\widehat{\nabla}^2(x)
= & - d(\B{I}\otimes 1)\eotimes{B}x + (\B{I}\otimes 1)\eotimes{B}\nabla(x)
+ d(x_{(-1)}\otimes 1)\eotimes{B} x_{(0)}
- (x_{(-1)}\otimes 1)\eotimes{B}\nabla(x_{(0)})\\
= & (\B{I}\otimes\B{I}\otimes x) - (\B{I}\otimes\B{I}\otimes x)
+ (\B{I}\otimes\B{I}\otimes x) - (\B{I}\otimes\B{I}\otimes x)
+(\B{I}\otimes x_{(-1)}\otimes x_{(0)})\\
& -(\B{I}\otimes x_{(-1)}\otimes x_{(0)})
+(x_{(-1)(1)}\otimes x_{(-1)(2)}\otimes x_{(0)})
-(x_{(-1)}\otimes \B{I}\otimes x_{(0)})\\
& +(x_{(-1)}\otimes \B{I}\otimes x_{(0)})
-(x_{(-1)}\otimes x_{(0)(-1)}\otimes x_{(0)(0)})\\
= & (x_{(-1)(1)}\otimes x_{(-1)(2)}\otimes x_{(0)})
-(x_{(-1)}\otimes x_{(0)(-1)}\otimes x_{(0)(0)}) = 0
\end{align*}
iff $\rho_X:X\to C\otimes X$ is a coassociative coaction of $C$.
The result follows.
\end{proof}
\begin{defn}
Let $X$ be an $(\alpha,\beta)$--equivariant $C$--comodule, i.e. $X$
admits a flat connection $\nabla:X\to C\otimes X$ with respect to
the differential calculus $\C{K}_{(\alpha,\beta)}^*(C,B)$. Define
$\C{K}_{(\alpha,\beta)}^*(C,B,X)$ as
$\C{K}_{(\alpha,\beta)}^*(C,B)\eotimes{B}X$ with the extended
connection $\widehat{\nabla}$ as its differential.
\end{defn}
\begin{prop}
For any $(\alpha,\beta)$--equivariant module $X$ one has $H_*(\C{K}_{(\alpha,\beta)}^*(C,B,X))\cong {\rm
Cotor}_C^*(k,X)$ where we think of $k$ as a $C$--comodule via the
grouplike element $\B{I}:k\to C$.
\end{prop}
\begin{rem}
Assume $B$ is a Hopf algebra. If $\alpha=id_B$, $\beta=S$, and
$C=B$, then the differential calculus is $\widehat{\C{K}}^*(B)$ and
$(\alpha,\beta)$--equivariant $C$--comodules are Yetter-Drinfeld
modules. In case $\alpha=id_B$, $\beta=S^{-1}$, and $C=B$, then the
differential calculus is $\C{K}^*(B)$ and
$(\alpha,\beta)$--equivariant $C$--comodules are
anti-Yetter-Drinfeld modules.
\end{rem}
\section{Introduction}
Superconducting on-chip filter-bank spectrometers (SFBSs) are a promising technology
for a number of scientifically important
applications in astronomy and meteorology that require low-noise,
spectroscopic measurements at millimetre and sub-millimetre wavelengths.
SFBSs are capable of achieving high channel counts, and
the individual channel characteristics, such as shape, width, position in frequency, power handling,
and sensitivity, can be tuned to the application. Moreover, the
micro-fabrication techniques used in the production of these thin-film devices mean that SFBSs are
intrinsically low-mass, physically compact and easily reproducible, making them
well-suited for array applications for both ground-based and satellite-borne instruments.
High signal detection efficiencies can be
achieved up to the superconducting pair-breaking threshold of the
superconductors used in the design; typically 680\,GHz for Nb, but higher
for superconducting compounds such as NbN or NbTiN.
Applications for astronomy include surveys of moderate
red-shift galaxies ($z=4-6$) by precision determination of the frequencies of
CO rotational and $[\textrm{CII}]$ lines,
multichroic pixels for cosmic microwave background (CMB) observations
(foreground subtraction by observing in the $24-30\,\rm{GHz}$\cite{Koopman_ACTpol_2018} and
$30-48\,\rm{GHz}$ atmospheric windows,
and simultaneous observation in
multiple CMB frequency bands\cite{Pan_SPT3G_2018,Stebor_Simons_TES_spec})
or observation of low-Z CO and O line emissions from nearby galaxies.\cite{Grimes_Camels}
Chip spectrometers coupled to superconducting kinetic inductance detectors (KIDs)
are being developed by a number of groups,
and multiplexing counts of order $1000$ have
been demonstrated.\cite{Endo_Deshima_2019,Redford_Superspec_2019}
However, KIDs are difficult to design for detection at low frequencies,
$\nu \lesssim 100\,\rm{GHz}$, the pair-breaking threshold for
a typical low superconducting transition temperature material such as Al,\cite{Songyuan_2018a}
and their power-to-output-signal responsivity can also be challenging to calibrate.
Superconducting transition edge sensors (TESs) are a type of bolometric detector
where the power absorption is
not limited by the pair-breaking threshold of a superconducting film.
In addition their (power-to-current) responsivity is straightforward to determine both theoretically and experimentally.
TES multiplexing schemes are a mature technology, with both time- and frequency-domain approaches giving
multiplexing factors of order $100$,\cite{Pourya_fdm_2018, Henderson_tdiv_ACTpol}
and microwave TES readout schemes promise to equal the
multiplexing factors demonstrated for KIDs.\cite{Henderson_microwave_readout, Irwin_microwave_readout}
Vertical profiles of atmospheric temperature and humidity measured
by satellite-borne radiometric sounders provide vital information for long-range weather forecasting.
These sounders work by measuring the upwelling radiance from the
atmosphere in a number of spectral channels, typically either at microwave (MW) or infrared (IR) wavelengths.
Vertical resolution and measurement accuracy improve rapidly with
greater channel number and radiometric sensitivity.\cite{aires2015microwave,Prateek_Hymas}
Significant progress has been made in IR sounder performance;
the Infrared Sounding Interferometer (IASI), for example, provides
over eight thousand channels with sub-kelvin noise equivalent differential temperature (NETD).\cite{iasi}
However, while able to provide high quality data,
IR sounders can do so only under infrequent clear-sky conditions, as clouds absorb and interfere with the signal of interest.
MW sounders, by contrast, are not affected by cloud cover, but their use has been hampered by poorer instrument performance.
Channel number is a significant problem: the Advanced Microwave Sounding Unit-A (AMSU-A)
in current use has, for example, only fifteen
channels,\cite{airs2000algorithm} while the
planned Microwave Imaging Instrument (MWI) will offer twenty-three.\cite{alberti2012two}
Sensitivity is also an issue: a recent study by Dongre\,\cite{Prateek_Hymas}
has indicated that maintaining and/or improving sensitivity as channel count increases is vital.
In this paper we report on an SFBS with TES readout
as a technology for realising a MW sounding instrument with several hundred channels and sky-noise limited performance.
This would represent a disruptive advance in the field,
allowing measurements of comparable performance to IR sounders under all sky conditions.
The chip spectrometer reported here is a demonstrator for an atmospheric temperature
and humidity sounder
(HYper-spectra Microwave Atmospheric Sounding: HYMAS), which is being developed to
operate at frequencies
in the range $\nu= 45 - 190 \,\,\rm{GHz}$.\cite{Prateek_Hymas,Hargrave_Hymas} The
demonstrator was designed to cover the very important
O$_2$ absorption band at $\nu= 50 - 60 \,\,\rm{GHz}$ for
atmospheric temperature sounding.
We believe, however, that our prototype designs and initial characterizations are already
relevant across the broad band of scientifically important research areas described above.
In Sec.~\ref{sec:tech_overview}
we give a brief overview of SFBSs and the particular features of the
technology that make them attractive for this application.
We will then describe the design of a set of devices to
demonstrate the key technologies required for temperature sounding using the O$_2$ absorption band at
$50 - 60\,\textrm{GHz}$.
In Sec.~\ref{sect:Fab}
we describe the fabrication of the demonstrator
chips and their assembly with electronics into a waveguide-coupled detector package.
In Sec.~\ref{sect:results} we
report the first characterization of complete SFBSs with TES readout detecting at
$40 - 65\,\textrm{GHz}$, considering
the TES response calibration, measurements of overall detection efficiency, and measurements of filter response.
Finally we summarise the achievements and
describe our future programme and
the pathway from this demonstrator to a full instrument.
\section{Superconducting on-chip Filter-bank Spectrometers}\label{sec:tech_overview}
\begin{figure}[!ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=8cm]{Hymas_data/system_diagram}
\end{tabular}
\end{center}
\caption{\label{fig:system_diagram} Conceptual design of the test devices,
also showing the architecture of a superconducting on-chip filter-bank spectrometer.}
\end{figure}
\begin{table}[b]
\caption{\label{tab:demonstrator_performance} Table of
detector specifications for a baseline temperature sounder using the O$_2$ absorption band.
The background power loading and detector time constant have been calculated
assuming the same optics and scan pattern as the MWI instrument.\cite{alberti2012two}
The required NEP gives near sky-noise limited performance under these conditions.}%
\begin{ruledtabular}
\begin{tabular}{lcl}
Parameter & Specification & Units \\
\colrule
& & \\
Operating frequency range & 50--60 & GHz \\
Filter resolution & 100--500 & \\
Noise equivalent power & 3 & $\textrm{aW}/\sqrt {\textrm{Hz}} $\\
Detector time constant & 5 & ms \\
Detector absorption efficiency & $50\%$ & \\
Background power handling & 60 & fW \\
Operating temperature & $>300 $ & mK \\
Number of channels & 35 & \\
\\
\end{tabular}
\end{ruledtabular}
\end{table}
In general, a filter-bank spectrometer uses a set of band-defining electrical filters to disperse the different spectral components of the input signal over a set of power detectors.
In the case of an SFBS, the filters and detectors are implemented using microfabricated superconducting components,
and are integrated together on the same device substrate (the `chip').
Integration of the components on the same
chip eliminates housings, mechanical interfaces and
losses between different components, helping to reduce
the size of the system, while improving ruggedness and optical efficiency.
In addition it is easy to replicate a chip and therefore a whole
spectrometer, making SFBSs an ideal technology for realising spectroscopic imaging arrays.
Most mm- and sub-mm wave SFBSs that have been reported in the literature operate directly at
the signal frequency, i.e. there is no frequency down-conversion step.\cite{Endo_Deshima_2019,Redford_Superspec_2019}
There are two main benefits of this approach, the first of which is miniaturization.
The size of a distributed-element filter is intrinsically inversely proportional to the
frequency of operation: the higher the frequency, the smaller the individual filters and
the more channels that can be fitted on the chip.
The second benefit is in terms of instantaneous observing bandwidth, which is in
principle limited only by the feed for frequencies below the pair-breaking thresholds of the superconductors.
Operation at the signal frequency is made possible by: (a) the
availability of superconducting detectors for mm- and
sub-mm wavelengths, and (b) the low intrinsic Ohmic loss of the superconductors.
Critically, (b) allows filter channels with scientifically useful resolution to be realised at high frequencies.
In the case of normal metals, increases in Ohmic loss with frequency and miniaturisation of the
components quickly degrade performance.
As an example, the Ohmic losses in Nb microstrip line at sub-kelvin
temperatures are expected to be negligible for frequencies up
to 680\,GHz\,\cite{yassin1995electromagnetic} (the onset of pair-breaking).
The use of ultra-low noise superconductor detector technology such as TESs and KIDs, (a),
in principle allows for extremely high channel sensitivities.
The SFBS test chips described here were developed with the target application of
satellite atmospheric sounding and against the specification given in Table~\ref{tab:demonstrator_performance}.
Figure~\ref{fig:system_diagram} shows the system level design of the chips.
Each chip comprises a feeding structure that couples signal from a waveguide onto
the chip in the range 45--65\,GHz, followed by a filter-bank and a set of TES detectors.
The chip is then housed in a test fixture
that incorporates additional cold electronics and waveguide interfaces.
In the sub-sections that follow we will describe the design of each of the components in detail.
Each of the test chips described has twelve detectors in total.
This number was chosen for convenient characterisation without
multiplexed readout, but the architecture readily scales to higher channel count.
In principle the limiting factor is increasing attenuation on the
feed line as more channels are added, but the losses on superconducting line are so low that other issues, such as readout capacity, are expected to be the main limit.
\subsection{Feed and Transition}\label{sec:feed_and_trans}
\begin{figure}[htp]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=8cm]{Hymas_data/transition_cad_labelled_combined2}
\end{tabular}
\end{center}
\caption{\label{fig:transition} Details of the waveguide to microstrip transition.
The different components are described in the text.
The inset shows the simulated scattering parameters, where
ports 1, 2 and 3 are the waveguide input, the microstrip output and the coupling to the chip cavity, respectively.
}
\end{figure}
RF signal input to the test devices is via a standard 1.88\,mm$\times$3.76\,mm WR15 waveguide,
allowing full coverage of the O$_2$ band
(WR15 has a recommended operating range of $50 - 75\,\textrm{GHz}$, single-mode operation from
$39.875 - 79.750\,\textrm{GHz}$).
The signal is coupled from the waveguide onto a 22.3\,$\Omega$
microstrip line on the chip through a radial probe transition,\cite{kooi2003full}
the design of which is shown in Fig. \ref{fig:transition}.
A split block waveguide is used, the upper part of which has been
rendered as transparent in the figure to allow the internal structure to be seen.
A channel is machined through the waveguide wall at the split
plane to accommodate a silicon beam extending from the chip, shown in green.
This beam supports a fan-shaped radial probe, shown in blue,
the apex of which connects to the upper conductor of the microstrip.
The microstrip ground plane is shown in yellow.
Not visible is an air channel under the beam, which raises
the cut-off frequency of the waveguide modes of the loaded channel above the band of operation.
It is critical for performance that the ground plane is at the
same potential as the waveguide/probe-channel walls at the probe apex,
however it is not straightforward to make a physical connection.
The changes in ground plane width shown in Fig. \ref{fig:transition} implement
a stepped impedance filter in the ground-plane/wall system to ensure a wideband
short at the probe plane, assuming the ground plane is wire-bonded to the walls in the chip cavity.
This filter also prevents power flow along the air channel due to the TEM mode.
The performance of the design was simulated using OpenEMS,
a finite difference time domain (FDTD) solver.\cite{openEMS}
Insertion loss ($S_{21}$), reflection loss ($S_{11}$) and indirect
leakage to the chip cavity ($S_{31}$) as a function of frequency are shown in the inset of Fig. \ref{fig:transition}.
As can be seen, the design achieves better than -3\,dB
insertion loss over $45 - 65\,\textrm{GHz}$, corresponding to a fractional bandwidth of nearly 35\,\%.
\subsection{Filter-bank}\label{sec:filter-bank}
\begin{figure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=6.5 cm]{Hymas_data/fb_schematic}
\caption{\label{fig:filter-bank_schematic} Schematic of the filter-bank.}
\end{subfigure} \\
\vspace{0.5 cm}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=7.5cm]{Hymas_data/Filter_bank_gain_profiles}
\caption{\label{fig:filter-bank_performance} Modelled transmission gain versus frequency.
}
\end{subfigure}
\caption{\label{fig:filter-bank} Details of the filter bank designs.
Fig \ref{fig:filter-bank_performance} shows the power transmitted to
individual channels for $\mathcal{R}=200$. The different channels are indicated by the different line colours.
Transmission to the residual power detector terminating the feed is indicated by the black dotted line.
}
\end{figure}
The filter-bank architecture employed on the test chips is shown schematically in Fig. \ref{fig:filter-bank_schematic}.
It comprises ten half-wave resonator filters driven in parallel from a single microstrip feed line.
Each of the filters comprises a length of microstrip line,
the ends of which are connected by overlap capacitors to the feed and an
output microstrip line. The output line is terminated by a matched resistor that is closely thermally-coupled
to a TES bolometer.
Both the termination resistor and TES are thermally isolated from the cold stage.
The feed line itself is terminated on an eleventh
matched resistor coupled to a TES.
This TES measures the power remaining after passing the filter bank
and is subsequently described as the residual power detector.
This TES was specifically designed to handle the expected higher incident power.
A twelfth detector on chip has no microwave connection and was used as a `dark' (i.e. nominally power-free)
reference.
A common superconducting microstrip design is used for all parts of the filter-bank, consisting of a Nb ground plane,
a 400\,nm thick SiO$_2$ dielectric layer and a 3\,$\mu$m wide, 500\,nm thick, Nb trace.
The modelled characteristic impedance is 22.3\,$\Omega$.\cite{yassin1995electromagnetic}
The spectral characteristics of each filter are determined by the resonant behaviour of the line section
and the strength of the coupling to the feed and detector.
For weak coupling, the line section behaves as an open-ended
resonator and transmission peaks sharply at resonance.
Each filter channel
is designed to have its own fundamental frequency $\nu_0$.
For a filter of length $l$
and phase velocity $c$, $\nu_0$ is given by
\begin{equation}\label{eqn:resonant_frequency}
\nu_0 = \frac{c}{2 l}.
\end{equation}
Tuning over the range $40-70\,\textrm{GHz}$ is achieved by values of $l$ in the range $2.225 - 1.00\,\textrm{mm}$.
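As an illustration, Eq.~(\ref{eqn:resonant_frequency}) can be inverted for the line length at a target channel frequency (a minimal sketch: the phase velocity below is an assumed value of roughly $0.45\,c_0$ for Nb microstrip on $\rm{SiO_2}$, not a measured parameter of this work):
\begin{verbatim}
# Sketch: invert nu0 = c/(2l) for the half-wave resonator length.
# The phase velocity is an ASSUMED value (~0.45 c0 is plausible for
# Nb microstrip on SiO2); it is not a measured parameter here.
C0 = 2.998e8                 # free-space speed of light, m/s
c_phase = 0.45 * C0          # assumed microstrip phase velocity, m/s

def length_mm(nu0_ghz):
    """Half-wave resonator length (mm) for fundamental nu0 (GHz)."""
    return c_phase / (2.0 * nu0_ghz * 1e9) * 1e3

for nu0 in (42.5, 50.0, 65.0):
    print(f"nu0 = {nu0:4.1f} GHz -> l = {length_mm(nu0):.3f} mm")
\end{verbatim}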
The channel resolution $\mathcal{R}=\nu_0/\Delta \nu$ where $\Delta \nu$ is the 3-dB width.
The bandwidth of a filter and its peak-value of transmission
are governed by losses in the resonator, as measured by the quality factor.
Assuming power loss $P$ from the resonator when the stored energy is $U$,
the total quality factor $Q_\text{t}$ of the resonator
(and hence the resolution of the filter $\mathcal{R}\equiv Q_\text{t}$) is defined
by $Q_\text{t} = 2 \pi \nu_0 U / P$.
$P$ can be further decomposed into the sum of the power loss $P_\text{c,in}$
and $P_\text{c,out}$ to the input and output circuit respectively (\emph{coupling losses})
and the power dissipated $P_\text{int}$ in the resonator itself through Ohmic and dielectric loss (\emph{internal losses}).
We can then define additional quality factors $Q_\text{int}$, $Q_\text{c,in}$ and $Q_\text{c,out}$
by $Q_{n} = 2 \pi \nu_0 U / P_n$.
These correspond to the Q-factor of the resonator in
the limit where the individual losses are dominant,
and $Q_\text{t}^{-1} = Q_\text{int}^{-1} + Q_\text{c,in}^{-1} + Q_\text{c,out}^{-1}$.
With these definitions, the power gain of the channel as a function of frequency $\nu$ can be shown to be
\begin{equation}\label{eqn:filter_power_gain}
G(\nu,\nu_0) = \frac{2 Q_\text{t}^2}{Q_\text{c,in} Q_\text{c,out}}
\frac{1}{1 + 4 Q_\text{t}^2 (\nu - \nu_0)^2 / \nu_0^2}.
\end{equation}
$Q_\text{c,in}$ and $Q_\text{c,out}$ are controlled by the input and output coupling capacitors.
For maximum transmission of power to the filter channel detector (-3\,dB) we must
engineer $Q_\text{c,in} = Q_\text{c,out} = Q_\text{c} \ll Q_\text{int}$,
in which case $Q_\text{t} = Q_\text{c} / 2 \ll Q_\text{int}$.
Therefore the minimum achievable bandwidth is limited by $Q_\text{int}$.
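This limiting behaviour is easy to verify numerically (a sketch evaluating Eq.~(\ref{eqn:filter_power_gain}) with symmetric coupling and $Q_\text{int}\to\infty$; the values of $\nu_0$ and $Q_\text{c}$ are illustrative only):
\begin{verbatim}
import numpy as np

# Sketch: channel power gain defined above, for symmetric coupling
# (Qc_in = Qc_out = Qc) and negligible internal loss, so Qt = Qc/2.
def gain(nu, nu0, Qt, Qc_in, Qc_out):
    lorentz = 1.0 / (1.0 + 4.0 * Qt**2 * (nu - nu0)**2 / nu0**2)
    return 2.0 * Qt**2 / (Qc_in * Qc_out) * lorentz

nu0, Qc = 55.0e9, 400.0      # illustrative values only
Qt = Qc / 2.0                # Qint -> infinity limit
print(10 * np.log10(gain(nu0, nu0, Qt, Qc, Qc)))
# -3.01 dB on resonance
print(10 * np.log10(gain(nu0 * (1 + 0.5 / Qt), nu0, Qt, Qc, Qc)))
# -6.02 dB at the half-power band edge
\end{verbatim}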
Under the same conditions the power transmitted to the residual power detector is
\begin{equation}\label{eqn:filter_power_transmitted}
T(\nu,\nu_0) = 1+ \frac{Q_\text{t}}{Q_\text{c,in}} \frac{\left[ Q_\text{t}/Q_\text{c,in} -2 \right]}
{1 + 4 Q_\text{t}^2 (\nu - \nu_0)^2 / \nu_0^2},
\end{equation}
and the power is reduced by 6-dB on resonance as
seen from the dotted black line in Fig.~\ref{fig:filter-bank_performance}.
In practice the presence of the coupling capacitors causes the measured centre frequency $\nu_\text{0,meas}$ of the filter to differ slightly from the resonant frequency of the open-ended line, as given by Eq.~(\ref{eqn:resonant_frequency}).
The detuning is dependent on coupling strength and analysis of the perturbation of the circuit to first-order gives
\begin{equation}\label{eqn:detuning}
\nu_\text{0,meas} \approx
\left( 1 - \frac{1 + \sqrt{2}}{2 \sqrt{\pi \mathcal{R}}} \right) \nu_0 .
\end{equation}
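Taken at face value, for $\mathcal{R}=200$ Eq.~(\ref{eqn:detuning}) gives $\nu_\text{0,meas}\approx 0.952\,\nu_0$, a downward shift of nearly $5\,\%$; a shift of this size can in principle be absorbed at the design stage by shortening the resonator length accordingly.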
Ohmic loss is small in superconducting microstrip lines at low temperatures.
Instead, experience with superconducting microresonators below 10\,GHz suggests that
dielectric loss due to two-level systems will govern $Q_\text{int}$.\cite{zmuidzinas2012superconducting}
A main aim of the prototype devices was to investigate the loss mechanisms active
at millimetre wavelengths, although extrapolation from the low-frequency
data suggests spectral resolutions
of a few hundred should be easily achievable for the microstrip design used here, and higher
may be achievable for alternatives such as coplanar waveguide.
We report on two of the four different filter bank designs that were fabricated on the demonstrator devices.
Three of these have identically spaced channels, designed to be 2.5\,GHz apart over the range $42.5 - 65.00\,\textrm{GHz}$.
The designs differ in spectral resolution of the channels, with target values of $\mathcal{R}=$250, 500 and 1000.
These explore the achievable control of channel placement and resolution
and provide experimental characterization of the millimetre-wave properties of the microstrip over a wide frequency range.
However, the channels in these designs are widely spaced.
The fourth design investigated an alternative mode of operation where the passbands of the
filters overlap significantly, with nine $\mathcal{R}=500$ channels in the band $53 - 54\,\textrm{GHz}$.
In this case the spacing of the filters along the feed line becomes significant, as the
filters interact with each other electrically.
If the filters are arranged in order of decreasing frequency and placed approximately a quarter of a
wavelength apart, part of the in-band signal scattered onto the feed by the filter is reflected back
in phase from the next filter along, enhancing transmission to the detector.
Consequently the transmission of the filters can be increased by dense packing,
but at the expense of the shape of the response; we are investigating the implications for scientific performance.
The modelled passbands for the widely-spaced filter banks are shown in Fig. \ref{fig:filter-bank_performance}.
\subsection{TES design}
\label{sect:TES_Design}
Electrothermal feedback (ETF) in a thermally-isolated TES of resistance $R$,
voltage-biased within its superconducting-normal
resistive transition
means that the TES
self-regulates its temperature $T$ very close to $T_c$, the TES transition temperature.\cite{Irwin2005}
The sharpness of the transition is characterized by $\alpha=T (dR/dT)/R$;
when the superconducting-normal resistive transition
occupies a very narrow temperature range, $\alpha\gg 1$.
Provided
$T,\,T_c\gtrapprox 1.5 T_\textrm{b}$, where $T_\textrm{b}$ is the bath temperature, the small-signal power-to-current responsivity
$s_I$
is then given by $s_I=-1/(I_0(R_0-R_\textrm{L}))$. Here $T_0$, $I_0$ and $R_0$ are the TES temperature, current and resistance
respectively at the operating point and $R_\textrm{L}$ is the load resistance.
All are simple to evaluate from measurements of the TES and its known bias circuit parameters
meaning that $s_I$ is straightforward to evaluate.
The TESs we report on were designed to operate from a bath temperature of $T_\textrm{b}\simeq300\,\,\rm{mK}$
in order to be usable from a simple cryogenic cooling platform, to have a saturation power of order
$P_\textrm{sat}\simeq 2\,\,\rm{pW}$ to satisfy expected power loading, and a
phonon-limited noise equivalent power (NEP) of order $2\,\rm{aW/\sqrt{Hz}}$
to minimise detector NEP with respect to atmospheric noise (see Table~\ref{tab:demonstrator_performance}).
TES design modelling indicates that the required performance
should be achievable with a superconducting-normal transition temperature
$T_c\sim 550\to 650\,\rm {mK}$. The detailed calculation depends on the value of the
exponent $n$ that occurs in the calculation of the
power flow from the TES to the heat bath: $P_\textrm{b}=K_\textrm{b}(T_c^n-T_\textrm{b}^n)$,
where $K_\textrm{b}$ is a material dependent parameter that includes the
geometry of the thermal link between the TES and bath. For ideal voltage bias
($ R_\textrm{L} =0 $)
the saturation power
is $P_{\textrm{sat}}=(1-R_0/R_{\textrm{N}})P_\textrm{b}$
and $R_{\textrm{N}}$ is the TES normal-state resistance.\cite{Irwin2005}
Under typical operating conditions $R_0\sim 0.25 R_{\textrm{N}}$.
The thermal conductance to the bath $G_\textrm{b}=dP_\textrm{b}/dT_c$, determines the phonon-limited
NEP where $\textrm{NEP}=\sqrt{4k_\textrm{b}\gamma G_\textrm{b} T_c^2}$,
$k_\textrm{b}$ is Boltzmann's constant, and $\gamma\leq 1$
takes account of the temperature gradient across $G_\textrm{b}$. Our previous work suggests that
$n\sim 1.5\to 2.5$
should apply for 200-nm-thick $\rm{SiN_x}$\, at these temperatures and for
leg lengths of $50-1000\,\rm{\mu m}$. For the filter channels, thermal
isolation of the TES was formed
from four 200-nm-thick $\rm{SiN_x}$\, legs each of length $500\,\rm{\mu m}$,
three of width $1.5\,\rm{\mu m}$ and one of
$4\,\rm{\mu m}$ to carry the microstrip.
The residual power detector had support legs of length $50\,\rm{\mu m}$ and the same widths as for the filter channels.
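As a numerical cross-check of this design arithmetic (a minimal sketch: the inputs are illustrative round numbers rather than as-built values, and $\gamma=1$ is assumed):
\begin{verbatim}
import numpy as np

KB = 1.381e-23   # Boltzmann constant, J/K

# Sketch of the TES thermal design relations above.  Inputs are
# ILLUSTRATIVE round numbers (not as-built values); gamma = 1 assumed.
def tes_thermal(Tc, Tb, n, Gb, R0_over_RN=0.25, gamma=1.0):
    K = Gb / (n * Tc**(n - 1))            # from Gb = dPb/dTc
    Pb = K * (Tc**n - Tb**n)              # power flow to the bath
    Psat = (1.0 - R0_over_RN) * Pb        # saturation power, ideal bias
    NEP = np.sqrt(4.0 * KB * gamma * Gb * Tc**2)  # phonon-limited NEP
    return Pb, Psat, NEP

Pb, Psat, NEP = tes_thermal(Tc=0.60, Tb=0.30, n=2.0, Gb=1.0e-12)
print(f"Pb = {Pb*1e12:.2f} pW, Psat = {Psat*1e12:.2f} pW, "
      f"NEP = {NEP*1e18:.1f} aW/rtHz")
\end{verbatim}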
\section{Fabrication and Assembly}
\label{sect:Fab}
The detector chips were fabricated on $225\,\rm{\mu m}$-thick Si wafers coated with
a dielectric bilayer comprising 200~nm thick low-stress $\rm{SiN_x}$\,\,
and an additional 50~nm
$\rm{SiO_2}$\,
etch stop. After Deep Reactive Ion Etching (DRIE) the $\rm{SiN_x}$/$\rm{SiO_2}$\, films formed the thermal isolation structure.
The detectors were made by
sputtering metal and dielectric thin films
in ultra-high vacuum.
The superconducting microstrip was formed from a 150~nm-thick Nb ground plane
with 400~nm amorphous silicon dioxide ($\rm{SiO_2}$)
dielectric and a 400~nm-thick $3\,\rm{\mu m}$-wide Nb line.
Coupling capacitors were made from $\rm{SiO_2}$\,
and Nb.
Thin film AuCu resistors terminate the microstrip on the TES island.
The TESs
were fabricated from TiAl bilayers with Ti and Al thicknesses
of 150~nm and 40~nm respectively, calculated to target a superconducting transition temperature of
$600-650\,\rm{mK}$.\cite{Songyuan_Tc_2018}
Electrical connections to the TES were sputtered Nb.
DRIE
removed the Si under the TES such that the $\rm{SiN_x}$\,
island and legs provide the necessary thermal isolation.
The DRIE also released the individual chips from the host wafer and defined the chip shape
seen in Fig.~\ref{fig:detectors}.
\begin{figure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[trim = {0.35cm 0cm 0cm 0cm}, clip, width=8cm]{Hymas_data/hp_dev_photo_labelled}
\caption{\label{fig:detector_photo} High resolution optical image of a residual power detector.}
\end{subfigure} \\
\begin{subfigure}{\linewidth}
\centering
\includegraphics[trim = {0.35cm 0cm 0cm 0cm}, clip, width=8cm]{Hymas_data/labelled_chip_v4}
\caption{\label{fig:chip_photo} Optical image of a complete demonstrator chip. The contact pads for Channel 1
are
indicated in the lower left of the image, with channel numbers increasing to the right. }
\end{subfigure}
\caption{\label{fig:detectors} Photographs of the residual power detector and a complete demonstrator chip.
Both images were taken prior to DRIE of the device wells, which removes the dark-grey areas to release the membranes
and the chip itself.}
\end{figure}
Figure~\ref{fig:detector_photo} shows an image of the residual-power detector prior to DRIE of the Si.
The $4\,\rm{\mu m}$-wide
leg supporting the microstrip is indicated in the lower left-hand region of the image.
The $3\times35\,\rm{\mu m}$ AuCu termination
can be seen.
Figure~\ref{fig:detectors}~(b) shows a composite high resolution photograph of a completed detector chip after DRIE.
The component structures, waveguide probe,
filter-bank, TES detectors and superconducting contact pads are indicated.
The individual filter channels are also
visible.
Filter lengths are set by the positions of the small coupling capacitors that are (just) visible as darker dots between the
through-line and the individual TES wells; the lengths increase with
channel number (left-to-right in the image) and correspondingly $\nu_{0}$ decreases.
\begin{figure}[htp]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=8.0cm]{Hymas_data/labelled_detector_cavity_dg}
\end{tabular}
\end{center}
\caption
{ \label{fig:Detector_cavity}
Photograph of the assembled lower half of the detector enclosure. The upper part forms a light tight enclosure and completes
the upper section of the split waveguide. }
\end{figure}
Figure~\ref{fig:Detector_cavity} shows an image of a completed chip mounted in a detector enclosure. The probe can
be seen extending into the lower half of the split waveguide. Al wirebonds connect the chip ground plane to the
enclosure and on-chip electrical wiring to the
superconducting fan-out wiring. Superconducting NbTi wires connect
through a light-tight feed-through into the electronics enclosure
on the back of the detector cavity. The low-noise
two-stage SQUIDs used here were provided by Physikalisch-Technische Bundesanstalt (PTB). The upper section of the detector
enclosure completed the upper section of the waveguide and ensured that the package was light-tight.
Completed detector packages were cooled using two separate cryogen-cooled ADRs,
both giving a base temperature $T_{\textrm{b}}\lesssim100\,\,\rm{mK}$.
\begin{figure}[htp]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=8cm]{Hymas_data/Experimental_schematic_v2}
\end{tabular}
\end{center}
\caption
{ \label{fig:BB_schematic}
Schematics of the measurement schemes. (a) For the blackbody power calibration. (b) For the measurement of the filter spectral response.}
\end{figure}
Figure~\ref{fig:BB_schematic}~(a) shows a schematic of the blackbody power calibration scheme. A matched $50\,\,\rm{\Omega}$
resistive load (Fairview Microwave SKU:ST6510)\cite{Fairview}
terminated a length of Coax Co. Ltd SC-219/50-SS-SS coaxial line.\cite{Coax_jp}
The load was
mounted on a thermally isolated
Cu plate within the cryostat $4\,\textrm{K}$ vacuum space.
This termination-resistor stage was equipped with a calibrated LakeShore RuOx thermometer\cite{Lakeshore}
and resistive heater
and was connected to the refrigerator $4\,\,\rm{K}$ stage by a length of copper wire ($l=20\,\,\rm{cm}$, $r=250\,\,\rm{\mu m}$)
that determined the dominant stage time constant which was estimated at $\sim 20\,\,\rm{ms}$.
Further SC-219 coax
connected the termination resistor to a 3-dB attenuator (Fairview Microwave SA6510-03)
mounted on the
$4\,\,\rm{K}$ stage itself,
then to a further attenuator mounted on the
$1\,\,\rm{K}$ stage (heat-sinking the inner coax conductor to minimise heat-loading of the cold stage),
and then to the WR-15
coax-waveguide transition (Fairview Microwave SKU:15AC206), waveguide probe
and detector chip. The total coax length was
$800\,\,\rm{mm}$. Results of these measurements are reported in Sec.~\ref{sect:Power_calibration}.
The spectral response of the filters was measured using a continuous wave (CW) source.
For these
tests a different cryostat was used, but it again comprised a two-stage (1\,K/50\,mK) ADR launched
from a 4\,K plate cooled by liquid cryogens. A different set of cold electronics
with nine single-stage SQUIDs, also provided by PTB, was used,
allowing simultaneous readout of nine devices.
Figure~\ref{fig:BB_schematic}~(b) shows a schematic of the scheme.
A Rohde \& Schwarz ZVA-67 VNA capable of
measurements up to $67\,\textrm{GHz}$ was used as a frequency-tuneable, power-levelled, CW source and the
signal coupled down to the detector block on coaxial cable. A KMCO KPC185FF HA hermetic
feedthrough was used to enter the cryostat, then 1520~mm of Coax Co. Ltd SC-219/50-SS-SS coaxial
cable was used for the connection between room temperature and the $4\,\textrm{K}$ plate, anchoring on the liquid nitrogen tank,
liquid helium tank and finally the $4\,\textrm{K}$ plate itself. The final connection from $4\,\textrm{K}$ to the
waveguide adapter on the test block was made using 330\,mm of Coax Co. Ltd SC-219/50-NbTi-NbTi
superconducting cable for thermal isolation. A 10\,dB attenuator (Fairview Microwave SA6510-10) was inserted in line
at the $4\,\textrm{K}$ plate to provide thermal anchoring between the inner and outer conductors
of the coaxial cable and isolation of the detectors chip from short-wavelength power loading.
All connections were made with 1.85~mm connectors to allow mode free
operation up to $65\,\textrm{GHz}$. Measurements indicated the expected total attenuation of order 68\,dB between
room temperature and the WR-15 waveguide flange. In operation the frequency of the CW source was stepped
through a series of values and the output of the devices logged in the dwell time between transitions,
exploiting the normal mode of operation of the VNA.
Results of these measurements are reported in
Sec.~\ref{sect:Filter_response}.
Two chips were characterized in this series of tests. Both chips were fabricated on the same wafer, but have different
designed filter resolving powers. Chip 1, with filters designed for $\mathcal{R}=500$, was used for the blackbody
measurements, with measurements on filter channels
4--6, 11 (the residual power detector), and 12 (the dark detector).
Chip 2 with $\mathcal{R}=200$ was used for the spectral measurements
and we report measurements on channels 3-8, 10 and 12.
Both chips were designed to cover
the $42.5-65\,\rm{GHz}$ range.
\section{Results}
\label{sect:results}
\subsection{TES characteristics}
\label{sect:TES_results}
\begin{table}[htp]
\caption{\label{tab:TES_dc_results}%
Summary of DC TES characteristics for Chip 1.
}
\begin{ruledtabular}
\begin{tabular}{cccccl}
\textrm{Channel}&
$R_{\textrm{N}}\,\rm{(\Omega)}$&
$T_c\,\rm{(mK)}$&
$n$&
$G_b\,\rm{(pW/K)}$&
Notes\\
\colrule
&&&&&\\
4 & 2.6 & 457 & $2.0\pm0.1$ & $4.7$ & \textrm{Filter}\\
5 & 2.5 & 452 & $2.0\pm0.1$ & $\rm{\,\,\,}4.75$ & \textrm{Filter}\\
6 & 2.4 & 454 & $2.0\pm0.1$ & $4.8$ & \textrm{Filter}\\
12 & 2.4 & 455 & $2.0\pm0.1$ & $4.7$ & \textrm{Dark}\\
& & & & & \\
11 & 2.4 & 459 & $2.0\pm0.1$ & $65\rm{\,\,\,\,\,\,}$ & \textrm{Residual}\\
& & & & & \textrm{Power}\\
\end{tabular}
\end{ruledtabular}
\end{table}
Table~\ref{tab:TES_dc_results} shows measured and calculated values for the DC characteristics of 5 channels from Chip 1.
$T_c$ was close to, but somewhat lower than, the design value, and $R_{\textrm{N}}$ was higher.
The critical current density was also reduced from our previously measured values even for
pure Ti films. This may indicate that these TESs demonstrate
the thin-film inter-diffusion effect between Ti-Nb couples recently reported by
Yefremenko~{\textit {et al.}}.\cite{Yefremenko2018_2}
The superconducting-normal resistive transition,
as inferred from measurements of the TES Joule power dissipation as a function of
bias voltage at different $T_{\textrm{b}}$, appeared
somewhat broad. Hence, $G_b$ was determined from the power dissipated at constant TES resistance $R_0=0.25R_{\textrm{N}}$, close
to the bias voltage used for power measurements. The exponent in the power flow was $n=2.0\pm0.1$ for all TESs.
\subsection{TES ETF parameters and power-to-current sensitivity estimate}
\label{sect:ETF_estimates}
\begin{figure}[htp]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=8.0cm]{Hymas_data/dI_400_p11_model_combined}
\end{tabular}
\end{center}
\caption
{ \label{fig:Response_p11_400}
The inset shows the measured real
and imaginary parts (blue and green dots respectively) of the impedance of the residual power detector (channel 11).
The solid red and dashed
black lines are the modelled values (real and imaginary respectively) using the parameters given in Table~\ref{tab:ETF_parameters}.
The main plot shows the
measured (blue dots) and calculated (red line) response of the TES current to a
small step change in the bias voltage for the same channel with no additional parameters.
}
\end{figure}
For a TES modelled as a single heat capacity $C$ with
conductance to the bath $G_\textrm{b}$, the measured impedance $Z(f)$
is given by\cite{Irwin2005}
\begin{equation}
Z(f)=R_\textrm{L}+j2\pi f L_\textrm{in}+Z_\textrm{TES}(f),
\end{equation}
where $R_\textrm{L}$ is the load resistance, $L_\textrm{in}$ is the input inductance of the SQUID plus any stray inductance,
$Z_\textrm{TES}(f)$ is the TES impedance and $f$ is the measurement frequency. The TES impedance is given by
\begin{equation}
Z_\textrm{TES}(f)=R_0(1+\beta)+ \frac{R_0 \mathscr{L}_I}{1-\mathscr{L}_I}\frac{2+\beta}{1+ j 2 \pi f \tau_I},
\end{equation}
where $\mathscr{L}_I=P_0\alpha/G_\textrm{b}T_0$ is the loop gain, $\tau_I=(C/G_\textrm{b})/(1-\mathscr{L}_I)$ is the
constant-current time constant, $P_0=I_0^2R_0$ is the
Joule power at the bias point, $T_0\simeq T_c$ is the operating temperature and
$\alpha=T\, (dR/dT)/R$ characterizes the sharpness of the resistive transition
at the bias point. $\beta=I (dR/dI)/R$ measures the sensitivity
of the transition to changes in current $I$. At measurement frequencies much higher than the reciprocal of the effective TES time constant
(here taken to be $f\gtrsim 5\,\rm{kHz}$),
$\textrm{Re}\, ( Z(f))=R_0(1+\beta)$, and so $\beta$ can be determined
with minimal measured parameters.
At low frequencies, here taken to be
$f\lesssim 5\,\rm{Hz}$, $\textrm{Re}\, ( Z(f)) =R_\textrm{L}+R_0(1+\beta)-(R_0\mathscr{L}_I(2+\beta))/(\mathscr{L}_I-1)$,
so that $\mathscr{L}_I$ and hence
$\alpha$ can be found.
The low frequency power-to-current responsivity is given by
\begin{equation}
s_I(0)= -\frac{1}{I_0R_0}\left[\frac{R_\textrm{L}+R_0(1+\beta)}{R_0\mathscr{L}_I}+1-\frac{R_\textrm{L}}{R_0} \right]^{-1}.
\label{equn:s_I}
\end{equation}
For a TES with good voltage bias $R_\textrm{L}\ll R_0$, and a sharp transition $\alpha, \, \mathscr{L}_I\gg1$, then
provided $\beta\ll \mathscr{L}_I-1$, $s_I(0)=-1/(I_0(R_0-R_\textrm{L}))$ and the power-to-current responsivity is straightforward
to calculate.
For the TiAl TES reported here the measured
power-voltage response suggested a fairly broad
superconducting-normal transition. Here we use the full expression given by Eq.~\ref{equn:s_I},
writing $s_I(0)=-k_s/(I_0(R_0-R_\textrm{L}))$ with $k_s$ the dimensionless correction factor obtained by evaluating the bracketed term.
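The size of this correction can be illustrated numerically (a sketch: the circuit values below are hypothetical round numbers chosen to be representative of a low-loop-gain device, not the fitted parameters of Table~\ref{tab:ETF_parameters}):
\begin{verbatim}
# Sketch: the responsivity correction factor k_s implied by the
# bracketed term of the s_I(0) expression above.  All circuit values
# are HYPOTHETICAL round numbers, not the fitted device parameters.
def k_s(R0, RL, beta, L_I):
    bracket = (RL + R0 * (1.0 + beta)) / (R0 * L_I) + 1.0 - RL / R0
    return (1.0 - RL / R0) / bracket   # s_I(0) = -k_s/(I0 (R0 - RL))

print(k_s(R0=0.6, RL=0.02, beta=1.9, L_I=1000.0))  # sharp: ~1.00
print(k_s(R0=0.6, RL=0.02, beta=1.9, L_I=6.0))     # broad: ~0.66
\end{verbatim}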
The inset of Fig.~\ref{fig:Response_p11_400} shows the measured real and imaginary parts (blue and green dots respectively)
of the
impedance of the residual power detector, channel 11. The solid red and dashed black lines show the modelled impedance.
Table~\ref{tab:ETF_parameters}
shows the derived ETF parameters and heat capacity.
The main figure shows the measured normalised TES current response (blue dots) to a small change
in TES bias voltage (i.e. a small modulation of instantaneous Joule power), and (red line) the calculated
response using the parameters determined in modelling the impedance.
No additional parameters were used.
\begin{figure}[htp]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=8.0cm]{Hymas_data/dI_300_p12_model_combined}
\end{tabular}
\end{center}
\caption
{ \label{fig:Response_p12_300}
The inset shows the measured real
and imaginary parts (blue and green dots respectively) of the
impedance of the dark detector (channel 12). The solid red and dashed
black lines are the modelled values (real and imaginary respectively) using the parameters given in Table~\ref{tab:ETF_parameters}.
The main plot shows the
measured (blue dots) and calculated (red line) response of the TES current to a
small step change in the bias voltage for the same channel without additional parameters.
}
\end{figure}
The inset of Fig.~\ref{fig:Response_p12_300} shows the measured real and imaginary parts (blue and green dots respectively)
of the
impedance of the dark detector, channel 12. The solid red and dashed black lines show the modelled impedance.
Table~\ref{tab:ETF_parameters}
shows the derived ETF parameters and heat capacity.
The main figure shows the measured normalised TES current response (blue dots) to a small change
in TES bias voltage (i.e. a small modulation of instantaneous Joule power), and (red line) the calculated
response using the parameters determined in modelling the impedance.
As in the modelling shown in Fig.~\ref{fig:Response_p11_400},
no additional parameters were used.
Comparable correspondence between
measured impedance and current step response using impedance-derived
ETF parameters and heat capacity was found for all measured channels.
\begin{table}[htp]
\caption{\label{tab:ETF_parameters}
Summary of calculated ETF parameters.
}
\begin{ruledtabular}
\begin{tabular}{cccccl}
\textrm{Channel}&
$\alpha$&
$\beta$&
$C\,\rm{(fJ/K)}$&
$k_s$&
Notes\\
\colrule
& & & & & \\
4 & 66 &1.3& 220 & 0.77 & \textrm{Filter}\\
5 & 77 & 2.9 & 200 & 0.74 & \textrm{Filter}\\
6 & 114 & 4.4 & 200 & 0.80 & \textrm{Filter}\\
12 & 128 & 1.3 & 180 & 0.88 & \textrm{Dark}\\
& & & & & \\
11 & 114 & 1.9 & 390 & 0.66 & \textrm{Residual}\\
& & & & &\textrm{Power} \\
\end{tabular}
\end{ruledtabular}
\end{table}
Derived values for $\alpha$, $\beta$, the heat capacity
and $k_s$ for Chip 1 used in the modelling of Figs.~\ref{fig:Response_p11_400} and
\ref{fig:Response_p12_300}
are given in Table~\ref{tab:ETF_parameters}.
We see that $\alpha$ is low compared to our MoAu TESs, $\beta\sim0.03\alpha$,
and
the responsivity is reduced from the high-$\alpha$ value $k_s=1$.
\section{Power calibration}
\label{sect:Power_calibration}
\subsection{Available in-band power}
\label{sect:Available_power}
We assume that the matched termination load, waveguide and probe behave as an ideal single-mode blackbody
source at temperature $T_\textrm{bb}$ with a bandwidth determined by the waveguide cut-on and probe 3-dB cut-offs giving
a top-hat filter with minimum and maximum transmission frequencies $\nu_{\min}=40$, $\nu_{\max}=65\,\rm{GHz}$ respectively.
Fitting a model to the manufacturer's data we find that loss in the coax
can be described by $\alpha_\textrm{coax}(l)\,[\textrm{dB}]=0.2648\, l\,[\textrm{mm}]/\sqrt{\nu\,[\textrm{GHz}]} $,
where $l$ is the
total line length.
Additional losses arise from the 3-dB attenuators
that form the coax heat-sinks (2 being used), and the $1.85\,\,\rm{mm}$ connectors
(8 in total), each contributing
an additional loss of $\alpha_\textrm{c}=0.223\,\,\rm{dB}$. The total attenuation $\alpha_{l}(\nu)$ (in dB) is then
\begin{equation}
\alpha_{l}(\nu)= \alpha_\textrm{coax}(l) + 6 + 8\alpha_\textrm{c} .
\label{equn:attenuation}
\end{equation}
The change in available power \textit{at the probe} is
$\Delta P_{\max}(T_{bb}) =(P_0(T_\textrm{bb})-P_0(T_{0})) 10^{-\alpha_{l}(\nu)/10}$,
with $T_{0}$ the lowest operating temperature for the blackbody and
\begin{equation}
P_0(T) =\int_{\nu_{\min}}^{\nu_{\max}} h\nu n(\nu,T) d\nu,
\label{equn:Blackbody_power}
\end{equation}
where $n(\nu,T)= [\exp ( h\nu/k_\textrm{b} T) -1 ]^{-1}$ is the Bose distribution,
and $h$ is Planck's constant.
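The available-power arithmetic can be sketched numerically (a minimal sketch assuming the top-hat band above and the loss model of Eq.~\ref{equn:attenuation}, with the frequency-dependent attenuation applied inside the band integral; in the Rayleigh-Jeans limit $h\nu\ll k_\textrm{b}T$ the unattenuated integrand reduces to $k_\textrm{b}T$ per unit bandwidth, a useful cross-check):
\begin{verbatim}
import numpy as np

H, KB = 6.626e-34, 1.381e-23   # Planck and Boltzmann constants (SI)

def alpha_l(nu_ghz, l_mm=800.0, n_att=2, n_conn=8):
    """Total line attenuation (dB): fitted coax model plus the
    attenuators and connectors quoted above."""
    return 0.2648 * l_mm / np.sqrt(nu_ghz) + 3.0 * n_att + 0.223 * n_conn

def dP_max(T_hot, T_cold, nu_min=40e9, nu_max=65e9, npts=2001):
    """Change in single-mode blackbody power available at the probe (W)."""
    nu = np.linspace(nu_min, nu_max, npts)
    spec = lambda T: H * nu / np.expm1(H * nu / (KB * T))
    integrand = (spec(T_hot) - spec(T_cold)) * 10**(-alpha_l(nu / 1e9) / 10)
    return np.trapz(integrand, nu)

print(f"dP_max(9.4 K vs 4.2 K) ~ {dP_max(9.4, 4.2)*1e15:.2f} fW")
\end{verbatim}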
To calculate the power transmission to the residual power detector,
we assume that each filter channel of centre frequency $\nu_{0,i}$ (where $i\in 1\dots 10$ identifies the channel)
taps its maximum fraction
of the incident power, given by Eq.~\ref{eqn:filter_power_gain}, and that the transmitted power
is given by Eq.~\ref{eqn:filter_power_transmitted},
with $6\textrm{-dB}$ attenuation at the residual power detector
due to each filter at its resonant frequency. The total
available power at the residual power detector $P_r(T_\textrm{bb})$ is
available power at the residual power detector $P_r(T_\textrm{bb})$ is
\begin{eqnarray}
&&P_{r} (T)= \eta \int_{\nu_{\min}}^{\nu_{\max}} h\nu n(\nu,T)\, \prod_i T_i(\nu,\nu_{0,i})\, 10^{-\alpha_{l}(\nu)/10} d\nu,\nonumber\\
&&
\label{equn:Available_power}
\end{eqnarray}
where the product $\prod_i$ runs over the ten filter channels and
$\eta$ is the overall efficiency referenced to the input of the probe.
\begin{figure}[htp]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=8.0cm]{Hymas_data/Detected_high_power_v3}
\end{tabular}
\end{center}
\caption
{ \label{fig:BB_P11}
Power detected for the residual power detector as a function of change in blackbody temperature. The inset
shows the maximum available power $P_{\max}$ for 3 values of the filter-bank resolution.
$\mathcal{R}=500$ is the designed resolution.
The indicated efficiency $\eta$
assumes $\mathcal{R}=200$ and was calculated with the maximum TES responsivity $s_I=-1/(I_0(R_0-R_\textrm{L}))$.}
\end{figure}
\subsection{Coupled power measurement}
\label{sect:Measured_power}
Measurements of the
detection efficiency were made on Chip 1 with the cold stage temperature maintained at $90\,\,\rm{mK}$ using a PID
feedback loop to regulate the ADR base temperature. Under typical (unloaded) conditions the
cold stage temperature is constant to within
$\pm 100\,\,\mu\textrm{K}$.
$T_\textrm{bb}$ was increased by stepping the blackbody heater current at $\sim2\,\textrm{Hz}$
with the change in measured TES current and $T_\textrm{bb}$ digitized at $2\,\textrm{kHz}$.
Changes in detected power in both the residual power and dark detectors as $T_\textrm{bb}$ was increased
were calculated from the change in their respective measured currents assuming the
simple power-to-current responsivity, $s_I(0)=-1/(I_0(R_0-R_\textrm{L}))$,
with the TES operating points calculated from measured and known electrical circuit parameters.
At the highest $T_\textrm{bb}$ used, $\sim 10\,\,\rm{K}$, $T_\textrm{b}$ increased by $\sim 1\,\,\rm{mK}$ suggesting power
loading onto the cold stage.
Under the same blackbody loading, and even after subtracting the
expected dark detector power response due to $T_\textrm{b}$, $\Delta P_\textrm{dark}=G_\textrm{b,dark}\Delta T_\textrm{b}$,
the dark detector indicated a current (hence power) response that we interpret here as additional incident power.
We modelled this residual response as a change in the temperature of the Si chip itself $T_\textrm{chip}$, finding an increase
of order $3\,\,\rm{mK}$ at the highest
$T_\textrm{bb}$.
The modelled chip response $\Delta T_\textrm{chip}$ closely
followed a quadratic dependence on $\Delta T_\textrm{bb}$. Although the origins of this
additional power loading could not be determined in this work,
it may have arisen from residual short wavelength loading from the blackbody source.
At 10~K the peak in the multimode blackbody
spectrum is around $1\,\,\rm{THz}$ implying that there may be a significant available detectable power at high frequencies.
Even a very small fraction of this power would be sufficient to
produce the inferred chip heating.
Changes in detected power for the residual power detector with $T_\textrm{bb}$ were accordingly corrected
to include
the effects of both the measured $\Delta T_\textrm{b}$ and the modelled $\Delta T_\textrm{chip}$.
Figure \ref{fig:BB_P11} shows the change in detected power $\Delta P_r$
as a function of $\Delta T_\textrm{bb}$, for $T_\textrm{bb}$ in the range
4.2 to $9.4\,\,\rm{K}$. The power response is close to linear with temperature (the correlation coefficient
$R_{\textrm{cor}}=0.98$). The inset shows the
maximum available power $P_{\max}$ calculated for three values of the filter resolution $\mathcal{R}_i=100$, $200$
and $500$ (the designed value).
The residual power increases as $\mathcal{R}_i$ increases.
As discussed in Sec.~\ref{sect:Filter_response}
the designed value may represent an over-estimate of the
achieved $\mathcal{R}$ for the measured chip.
Assuming $\mathcal{R}=200$ we estimate an efficiency of $0.61\substack{+0.01 \\ -0.03}$, where the lower
error is calculated assuming the designed filter resolution $\mathcal{R}=500$. The measured power
and hence efficiency estimate were calculated assuming the
maximum power-to-current responsivity $s_I=-1/(I_0(R_0-R_\textrm{L}))$.
From the measured response of the residual power detector described in
Sec.~\ref{sect:ETF_estimates}, and Table~\ref{tab:ETF_parameters}
we see that this \textit{over-estimates} the sensitivity
by a factor $1/k_s\simeq 1.5$.
Taking account of this correction, our final estimate of overall efficiency
referred to the input to
the probe is increased to $\eta=0.91\substack{+0.015 \\ -0.05}$.
\section{Filter spectral response}
\label{sect:Filter_response}
\begin{figure}[htp]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=8.0cm]{Hymas_data/Normalized_filters_with_fit_v1}
\end{tabular}
\end{center}
\caption
{ \label{fig:Filter_response}
Normalised power detected for filter channels 3 to 8 and 10 for Chip 2 (dots) and the calculated
filter profiles (full lines). Colours and annotation identify the individual channels.
}
\end{figure}
Measurements of the
spectral response were made on Chip 2.
Figure~\ref{fig:Filter_response} shows (dots) the normalised power detected
and (solid lines) the modelled response
using Eq.~\ref{eqn:filter_power_gain} (i.e. a Lorentzian)
for filter channels 3 to 8 and 10
as labelled. Colours and annotation identify the individual channels.
The individual filter responses $\mathcal{R}_i$ and $\nu_{0,i}$ are shown in Table~\ref{tab:Filter_parameters}
along with the nominal design filter centre frequencies.
Errors in $\nu_{0,i}$ were calculated from the fit to the data.
The frequency sampling of channel 10 was insufficient to
determine the filter response. For this channel the model fit shown was calculated assuming
$\mathcal{R}_{10}=90.2$ (the mean of the other channels) and is plotted only as a guide to the eye.
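The per-channel fits can be reproduced with standard least-squares fitting (a sketch only: \texttt{nu\_data} and \texttt{p\_data} are placeholders for a measured frequency sweep, here replaced by synthetic data for illustration):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(nu, nu0, R, a):
    """Normalised channel response: peak a, centre nu0, resolution R."""
    return a / (1.0 + 4.0 * R**2 * (nu / nu0 - 1.0)**2)

# nu_data (GHz) / p_data stand in for a measured channel sweep; here we
# generate synthetic data around one channel for illustration.
rng = np.random.default_rng(0)
nu_data = np.linspace(55.0, 56.8, 181)
p_data = (lorentzian(nu_data, 55.90, 79.3, 1.0)
          + rng.normal(0.0, 0.02, nu_data.size))

popt, pcov = curve_fit(lorentzian, nu_data, p_data, p0=(55.9, 100.0, 1.0))
err = np.sqrt(np.diag(pcov))
print(f"nu0 = {popt[0]:.3f} +/- {err[0]:.3f} GHz, "
      f"R = {popt[1]:.1f} +/- {err[1]:.1f}")
\end{verbatim}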
For this series of measurements, within measurement noise, there was \textit{no} observable response in the dark channel
and we estimate $\Delta P_{\textrm{dark}}<100\,\,\textrm{aW}$.
The absence of any detectable dark response
using a \textit{power-levelled, narrow-band,}
source operating at a fixed temperature of $\sim 300\,\,\textrm{K}$ -- in which case we might
expect no, or only very small changes in additional loading --
gives weight to our interpretation and analysis that the blackbody and efficiency measurements
should indeed be corrected for heating from a
\textit{broad-band, temperature modulated} source. In those measurements
the dark detector showed an unambiguous response.
\begin{table}[htp]
\caption{\label{tab:Filter_parameters} Summary of designed and measured filter channel characteristics.
}
\begin{ruledtabular}
\begin{tabular}{cccc}
\textrm{Channel}& $\nu_\textrm{design} $& $\nu_{0,i}$ & $\mathcal{R}_i$\\
& $\rm{(GHz)} $& $\rm{(GHz)} $ & \\
\colrule
&&&\\
3 & 60.0 & $61.06\pm0.013$ & $99.0\pm6.5$ \\
4 & 57.5 & $58.53\pm0.008$ & $92.8\pm3.4$\\
5 & 55.0 & $55.90\pm0.013$ & $79.3\pm4.2$\\
6 & 52.2 & $53.33\pm0.003$ & $106.5\pm2.0{\rm{\,\,\,}}$\\
7 & 50.0 & $50.71\pm0.010$ & $85.3\pm5.5$\\
8 & 47.5 & $48.21\pm0.005$ & $78.4\pm1.9$\\
10 & 42.5 & $42.89\pm0.001$ & \\
\end{tabular}
\end{ruledtabular}
\end{table}
The shapes of the individual filter responses were close to the expected Lorentzian as shown in more detail
in Fig~\ref{fig:Filter_response_detail}
for channels 3 and 5.
\begin{figure}[htp]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=8.0cm]{Hymas_data/Figure11_v1}
\end{tabular}
\end{center}
\caption
{ \label{fig:Filter_response_detail}
Power detected from filter channels 3 and 5 with Lorentzian fits. }
\end{figure}
Figure~\ref{fig:Filter_frequencies} shows the measured filter centre frequencies $\nu_{0}$ as a function of
the reciprocal filter length
$1/l$. The correlation is close to unity ($R_{\textrm{cor}}=0.9998$)
as predicted by Eqs.~\ref{eqn:resonant_frequency} and \ref{eqn:detuning},
demonstrating the precision with which it is possible to
position the
individual filter channels.
From the calculated regression line
$\nu_{0}\,\textrm{[GHz]}= (66.15\pm0.20)/l\,\textrm{[mm]}-(1.14\pm0.16)$,
and assuming a modest lithographic precision of
$\pm 1 \,\, \mu{\textrm{m}}$ for the fabrication process,
we calculate $\delta \nu=\pm37\,\,{\textrm{MHz}}$
for a 50~GHz filter, or $\delta \nu/\nu=0.0007$.
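This tolerance estimate follows from propagating the assumed length error through the regression line (a sketch, neglecting the small intercept; the $\pm1\,\rm{\mu m}$ precision is the assumption stated above):
\begin{verbatim}
# Sketch: channel-placement error from an assumed +/-1 um length
# tolerance, using the measured regression slope A = 66.15 GHz mm
# and neglecting the small intercept.
A = 66.15          # GHz mm
dl_mm = 1e-3       # assumed lithographic precision (+/-1 um)

nu0 = 50.0                         # target channel frequency, GHz
l_mm = A / nu0                     # corresponding filter length, mm
dnu_mhz = A / l_mm**2 * dl_mm * 1e3
print(f"l = {l_mm:.3f} mm, dnu = +/-{dnu_mhz:.0f} MHz, "
      f"dnu/nu = {dnu_mhz / 1e3 / nu0:.1e}")
\end{verbatim}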
The finite value for the intercept is unexpected
and is an area for future investigation.
Uncertainties in parameters such as the realised coupling capacitance and the exact value
of the dielectric constant are expected, but from
Eqs.~(\ref{eqn:resonant_frequency}) and (\ref{eqn:detuning}),
should only affect the constant of proportionality in the relationship
$\nu_0 \propto l^{-1}$, rather than generate an offset.
Instead, the offset suggests an issue with some detail of the underlying circuit model.
\begin{figure}[htp]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=8.0cm]{Hymas_data/Filter_frequencies_inset_v5}
\end{tabular}
\end{center}
\caption{
\label{fig:Filter_frequencies}
Plot of measured filter centre frequency as a function of inverse filter length. (Inset) The measured channel
resolutions $\mathcal{R}_i$ as a function of frequency. }
\end{figure}
\section{Summary and Conclusions}
\label{sect:Conclusions}
We have described the design, fabrication and characterization of a superconducting filter bank spectrometer
with transition edge sensor
readout. We have described the design of a waveguide-microstrip radial probe transition with wide bandwidth.
We have demonstrated detection of millimetre-wave
radiation with frequencies from 41 to 64~GHz with characteristics that are
already
suitable for atmospheric temperature
sounding using the O$_2$ absorption band. The temperature of a cryogenic blackbody was modulated to determine the
power detection efficiency of these prototype devices, finding an efficiency $\eta=0.91\substack{+0.015 \\ -0.05}$
using the measured impedance of the
TES to determine the detection responsivity.
Filter profiles were determined on a separate device.
The measured channel profiles were Lorentzian as expected. We measured
extremely good predictability of relative channel
placement, and individual channel placement to
$\delta \nu/\nu=0.07\,\%$.
We found somewhat lower filter resolutions $\mathcal{R}$ than the designed values, possibly arising from our use
in the
filter design
of
dielectric constant values measured at lower frequencies than those explored here
or, perhaps, from unexpected dielectric losses.
This will be an area for future investigation.
In the next phase of this work we will continue to investigate this question but
also investigate alternative filter architectures -- particularly coplanar structures
where dielectric losses may well be lower.
Guided by the measurements reported, we will
refine our designs to increase channel density for the O$_2$-band observations and
also include higher frequency channels
particularly aimed at observation of
the 183~GHz water-vapor line to enable a complete on-chip atmospheric temperature
and humidity sounding instrument.
Finally we emphasize that, although we have presented and discussed these measurements in the context
of an enhanced atmospheric temperature sounding demonstrator,
we believe we have already shown impressive spectroscopic and low-noise power detection
at technically challenging
millimetre-wave frequencies, with applications across a broad spectrum
of scientific enquiry.
\section{Acknowledgements}
This work was funded by the UK CEOI under Grant number RP61060435A07.
\section{Introduction}
High temperature QCD is expected to asymptote to a weakly coupled
Coulomb plasma, albeit with still strong infrared divergences. The
latter cause its magnetic sector to be non-perturbative at all
temperatures.
heavy-ion collider experiments, the electric sector is believed to
be strongly coupled.
Recently, Shuryak and Zahed~\cite{SZ_newqgp} have suggested that
certain aspects of the quark-gluon plasma in the range of temperatures
$(1-3)\,T_c$ can be understood by a stronger Coulomb interaction
causing persistent correlations in singlet and colored channels.
As a result the quark and gluon plasma is more a liquid than a gas
at intermediate temperatures. A liquid plasma should exhibit
shorter mean-free paths and stronger color dissipation, both of
which are supported by the current experiments at
RHIC~\cite{hydro}.
To help understand transport and dissipation in the strongly
coupled quark gluon plasma, a classical model of the colored
plasma was suggested in~\cite{gelmanetal}. The model consists of
massive quarks and gluons interacting via classical colored
Coulomb interactions. The color is assumed classical with all
equations of motion following from Poisson brackets. For the SU(2)
version both molecular dynamics simulations~\cite{gelmanetal} and
bulk thermodynamics~\cite{cho&zahed, cho&zahed2} were recently
presented including simulations of the energy loss of heavy
quarks~\cite{dusling&zahed}.
In this paper we extend our recent equilibrium analysis of the
static properties of the colored Coulomb plasma, to transport. In
section 2 we discuss the classical equations of motion in the
SU(2) colored phase space and derive the pertinent Liouville
operator. In section 3, we show that the resolvent of the
Liouville operator obeys a hierarchy of equations in the SU(2)
phase space. In section 4 we derive an integral equation for the
time-dependent structure factor by introducing a non-local
self-energy kernel in phase space. In section 5, we close the
Liouville hierarchy through a free streaming approximation on the
4-point resolvent and derive the self-energy kernel in closed form.
In section 6, we project the self-energy kernel and the non-static
structure factor onto the colorless hydrodynamical phase space.
In section 7, we show that the sound and plasmon mode are the
leading hydrodynamical modes in the SU(2) colored Coulomb plasma.
We then analyze the shear viscosity for the transverse sound mode
for arbitrary values of $\Gamma$. We show that a minimum forms at
$\Gamma\approx 5$ at the cross-over between the hydrodynamical
and single-particle regimes. In section 8, we analyze self-diffusion in
phase space, and derive an explicit expression for the diffusion
constant at strong coupling. Our conclusions and prospects are
in section 9. In appendix A we briefly summarize our variables
in the SU(2) phase space. In appendix B we detail the projection
method for the self-energy kernel used in the text. In appendix C
we show that the collisional color contribution to the Liouville
operator drops in the self-energy kernel. In appendix D some useful
aspects of the hydrodynamical projection method are outlined.
\section{Colored Liouville Operator}
\renewcommand{\theequation}{II.\arabic{equation}}
\setcounter{equation}{0}
The canonical approach to the colored Coulomb plasma was discussed
in~\cite{gelmanetal}. In brief, the Hamiltonian for a single
species of constituent quarks or gluons in the SU(2)
representation is defined as
\begin{equation}
H=\sum_{i}^N\frac{\vec p^2_i}{2m_i}+\sum_{i>j=1}^N\frac{\vec
Q_i\cdot\vec Q_j}{|\vec r_i-\vec r_j|} \label{HAMILTON}
\end{equation}
The charge $g^2/4\pi$ has been omitted for simplicity of the
notation flow and will be reinserted in the pertinent physical
quantities by inspection.
The equations of motion in phase space follows from the classical
Poisson brackets. In particular
\begin{equation}
\frac{d\vec r_i}{dt}=-\{H,\vec r_i\}= \frac{\partial H}{\partial
\vec p_j}\frac{\partial \vec r_i}{\partial \vec r_j}=\frac{\vec
p_i}m \label{XPB}
\end{equation}
The Newtonian equation of motion is just the colored electric Lorentz
force
\begin{equation}
\frac{d\vec p_i}{dt}=-\{H,\vec p_i\}=- \frac{\partial H}{\partial
\vec r_j}\frac{\partial \vec p_i}{\partial \vec p_j}= Q_i^a\,\vec
E_i^a=\vec{F}_i \label{PPB}
\end{equation}
with the colored electric field and potentials defined as
($a=1,2,3$)
\begin{equation}
\vec{E}_i^a=-\vec\nabla_i\Phi^a_i=-\vec\nabla_i\sum_{j\neq
i}\frac{Q^a_j}{|\vec r_i-\vec r_j|} \label{XPB2}
\end{equation}
Our strongly coupled colored plasma is mostly electric following
the original assumptions in~\cite{gelmanetal,gelmanetal2}. The
equation of motion of the color charges is
\begin{equation}
\frac{dQ_i^a}{dt}=-\{H,Q_i^a\}= -\sum_{b}\frac{\partial
H}{\partial Q_i^b}\,\{Q_i^b,
Q_i^a\}=\sum_{j\neq i}\frac{Q_iT^aQ_j}{|\vec r_i-\vec r_j|}
\label{QPB}
\end{equation}
for arbitrary color representation. For SU(2) the classical color
charge (\ref{QPB}) precesses around the net colored potential
$\Phi$ determined by the other particles as defined in
(\ref{XPB2}),
\begin{equation}
\frac{d\vec Q_i}{dt}=(\vec \Phi_i\times \vec Q_i)
\end{equation}
This equation was initially derived by Wong~\cite{wong}.
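For completeness, a minimal intermediate step (using the classical SU(2) color Poisson bracket, consistent with (\ref{QPB}) and the conventions of Appendix A) makes the precession explicit,
\begin{equation}
\{Q_i^a,Q_j^b\}=\delta_{ij}\,\epsilon^{abc}\,Q_i^c
\qquad\Longrightarrow\qquad
\frac{dQ_i^a}{dt}=\epsilon^{abc}\,\Phi_i^b\,Q_i^c=(\vec \Phi_i\times \vec Q_i)^a
\end{equation}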
Some aspects of the SU(2) phase space are briefly recalled
in Appendix A.
The set (\ref{XPB}), (\ref{PPB}) and (\ref{QPB}) define the
canonical evolution in phase space. The time-dependent phase
distribution is formally given by
\begin{equation}
f(t,\vec r \vec p \vec Q)=\sum_{i=1}^N\delta(\vec r-\vec
r_i(t))\delta(\vec p-\vec p_i(t))\delta(\vec Q-\vec Q_i(t))\equiv
\sum_i\delta(\vec q-\vec q_i(t)) \label{FDIS}
\end{equation}
For simplicity $\vec q$ is generic for $\vec r,\vec p,\vec Q$.
Using the chain rule, the time-evolution operator on (\ref{FDIS})
obeys
\begin{equation}
\frac{d}{dt}=\frac{\partial}{\partial t}+ \frac{d\vec
r_i}{dt}\frac{\partial}{\partial \vec r_i}+ \frac{d\vec
p_i}{dt}\frac{\partial}{\partial \vec p_i}+ \frac{d\vec
Q_i}{dt}\frac{\partial}{\partial \vec Q_i} \equiv
\partial_t+i\mathcal{L} \label{LIOU}
\end{equation}
The last relation defines the Liouville operator
\begin{equation}
\mathcal{L}=\mathcal{L}_0+\mathcal{L}_I+\mathcal{L}_Q=-i\vec
v_i\cdot\nabla_{\vec r_i}-i\,\vec{F}_i\cdot\vec{\nabla}_{\vec p_i}
-i\vec \Phi_i \cdot( \vec Q_i\times\vec \nabla_{\vec Q_i})
\label{LIOU1}
\end{equation}
The last contribution in (\ref{LIOU1}) is genuinely a 3-body force
because of the cross product (orbital color operator): it requires
3 distinct colors to be non-vanishing. This observation will be
important in simplifying the color dynamics below. Also
(\ref{LIOU1}) is hermitian.
Since (\ref{FDIS}) depends implicitly on time, we can write
formally
\begin{equation}
\frac{d}{dt}f(t,\vec r\vec p\vec Q)=i\mathcal{L}f(t,\vec r\vec
p\vec Q) \label{EVOLUTION}
\end{equation}
with a solution $f(t)=e^{i\mathcal{L}t}f(0)$. The formal relation
(\ref{EVOLUTION}) should be considered with care since the action
of the Liouville operator on the 1-body phase space distribution
(\ref{FDIS}) generates also a 2-body phase space distribution.
Indeed, while $\mathcal{L}_0$ is local in phase space
\begin{equation}
\mathcal{L}_0\sum_i\delta(\vec q-\vec q_i)=-i\vec
v\cdot\nabla_{\vec r}\sum_i\delta(\vec q-\vec q_i)=L_0(\vec
q)\sum_i\delta(\vec q-\vec q_i) \label{LO}
\end{equation}
the 2 other contributions are not. Specifically
\begin{eqnarray}
\mathcal{L}_I\sum_m\delta(\vec q-\vec q_m)&=&i\sum_{i\neq
j}\nabla_{\vec r_i} \frac{\vec Q_i\cdot\vec Q_j}{|\vec r_i-\vec
r_j|}
\cdot \nabla_{\vec p_i}\sum_m \delta(\vec q-\vec q_m) \nonumber \\
&=& i\int d\vec q' \sum_{i\neq j , mn}\nabla_{\vec
r_i}\frac{\vec Q_i\cdot\vec Q_j}{|\vec r_i-\vec
r_j|}\cdot\nabla_{\vec p_i}\,\,\delta(\vec
q-\vec q_m) \delta(\vec q'-\vec q_n) \nonumber \\
&=&- \int d\vec q'L_I(\vec q,\vec q')\,\sum_{mn}\delta(\vec q-\vec
q_m)\delta(\vec q'-\vec q_n)
\label{L1}
\end{eqnarray}
with
\begin{equation}
L_I(\vec q,\vec q')=i\nabla_{\vec r} \frac{\vec Q\cdot\vec
Q'}{|\vec r-\vec r'|} \cdot (\nabla_{\vec p}- \nabla_{\vec p'})
\end{equation}
Similarly
\begin{eqnarray}
\mathcal{L}_Q\sum_m\delta(\vec q-\vec q_m)&=&-i\sum_{ j\neq i, m}
\frac{\vec Q_i\times\vec Q_j}{|\vec r_i-\vec r_j|}\cdot
\nabla_{\vec Q_i} \,\delta(\vec q-\vec q_m) \nonumber \\
&=& -i\int d\vec q'\sum_{j\neq i, mn}\frac{\vec Q_i\times \vec
Q_j}{|\vec r_i-\vec r_j|}\cdot \nabla_{\vec Q_i}\delta(\vec
q-\vec q_m) \delta(\vec q'-\vec q_n) \nonumber \\
&=&- \int d\vec q'L_Q(\vec q,\vec q') \sum_{mn}\delta(\vec q-\vec
q_m)\delta(\vec q'-\vec q_n) \label{LQ}
\end{eqnarray}
with
\begin{equation}
L_Q(\vec q,\vec q')=-i \frac{\vec Q\times\vec Q'}{|\vec r-\vec
r'|} \cdot (\nabla_{\vec Q}-\nabla_{\vec Q'})
\end{equation}
Clearly, (\ref{LQ}) drops out of 2-body phase space distributions
that are symmetric in color. It does not for 3-body distributions and higher.
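To make this explicit, note that a color-rotation symmetric 2-body
distribution depends on the colors only through $\vec Q\cdot\vec Q'$, so
that
\begin{equation}
(\vec Q\times\vec Q')\cdot(\vec\nabla_{\vec Q}-\vec\nabla_{\vec Q'})\,
f(\vec Q\cdot\vec Q')=(\vec Q\times\vec Q')\cdot(\vec Q'-\vec Q)\,
f'(\vec Q\cdot\vec Q')=0
\end{equation}
since $\vec Q\times\vec Q'$ is orthogonal to both $\vec Q$ and $\vec Q'$
(the prime on $f$ denotes the derivative with respect to $\vec Q\cdot\vec
Q'$). A 3-body distribution depends also on $\vec Q\cdot\vec Q''$ and
$\vec Q'\cdot\vec Q''$, for which the analogous contraction no longer
vanishes.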
\section{Liouville Hierarchy}
\renewcommand{\theequation}{III.\arabic{equation}}
\setcounter{equation}{0}
An important correlation function in the analysis of the colored
Coulomb plasma is the time dependent structure factor or 2-body
correlation in the color phase space
\begin{equation}
{\bf S}(t-t',\vec r-\vec r', \vec p\vec p', \vec Q\cdot \vec
Q')=\langle\delta f(t,\vec r\vec p\vec Q)\,\delta f(t',\vec r'\vec
p'\vec Q')\rangle \label{FF}
\end{equation}
with $\delta f=f-\langle f\rangle$ the shifted 1-body phase space
distribution. The averaging in (\ref{FF}) is carried over the
initial conditions with fixed number of particles $N$ and average
energy or temperature $\beta=1/T$. Thus $\langle f\rangle=nf_0(p)$,
with $f_0(p)$ the Maxwellian distribution for constituent quarks or
gluons. In equilibrium, the averaging in (\ref{FF}) is time and
space translational invariant as well as color rotational
invariant.
Using the ket notation with $\vec 1\equiv \vec q\equiv {\vec
r}{\vec p}{\vec Q}$
\begin{equation}
|\delta f(t,\vec 1)\rangle=\bigg|\sum_{m}\delta(\vec q-\vec
q_m(t))-\Big\langle \sum_{m}\delta(\vec q-\vec q_m(t))\Big\rangle\bigg\rangle
\equiv |f(t,\vec 1)-\langle f(t,\vec 1)\rangle\rangle
\end{equation}
with also $\vec 2=\vec q'$, $\vec 3=\vec q''$, $\vec 4=\vec q'''$
and so on and the formal Liouville solution $\delta f(t,\vec
1)=e^{i\mathcal{L}t}\delta f(\vec 1)$ we can write (\ref{FF}) as
\begin{equation}
{\bf S}(t-t',\vec q,\vec q')=\langle\delta f(t,\vec 1)|\delta
f(t',\vec 2)\rangle= \langle\delta f(\vec 1)|e^{i{\cal
L}(t'-t)}|\delta f(\vec 2)\rangle
\end{equation}
The bra-ket notation is short for the initial or equilibrium
average. Its Laplace or causal transform reads
\begin{equation}
{\bf S}(z, \vec q, \vec q')=-i\int_{-\infty}^{+\infty} \,dt
\,\,\theta(t-t')\,\, e^{iz(t-t')} \,\,{\bf S}(t-t',\vec q,\vec q')=
\langle\delta f(\vec 1)|\frac 1{z+\mathcal{L}}|\delta f(\vec
2)\rangle \label{RES}
\end{equation}
with $z=\omega+i0$. Clearly
\begin{equation}
z{\bf S}(z, \vec q, \vec q')+\langle\delta f(\vec
1)|\mathcal{L}\frac 1{z+\mathcal{L} }|\delta f(\vec
2)\rangle=\langle\delta f(\vec 1)|\delta f(\vec 2)\rangle
\label{RES1}
\end{equation}
Since ${\cal L }^\dagger={\cal L}$ is hermitian and using
(\ref{LO}), (\ref{L1}) and (\ref{LQ}) it follows that
\begin{equation}
\langle\delta f(\vec 1)|{\cal L}=\langle\delta f(\vec 1)|{
L}_0(\vec q)- \int \,d\vec q''\,L_{I+Q}(\vec q,\vec
q'')\,\langle\delta f(\vec 1)\delta f(\vec 3)|
\end{equation}
Thus
\begin{equation}
\Big(z-L_0(\vec q)\Big){\bf S}(z, \vec q,\vec q')-\int\,d\vec
q''L_{I+Q}(\vec q',\vec q''){\bf S}(z,\vec q\vec q'',\vec q')={\bf
S}_0(\vec q,\vec q') \label{RES2}
\end{equation}
where we have defined the 3-body phase space resolvent
\begin{equation}
{\bf S}(z, \vec q\vec q'', \vec q')=\,\langle\delta f(\vec
1)\delta f(\vec 3)|\frac 1{z+{\cal L}}\,|\delta f(\vec 2)\rangle
\label{SS3}
\end{equation}
${\bf S}_0(\vec q,\vec q')$ is the static colored structure factor
discussed by us in~\cite{cho&zahed3}. Since $L_{I+Q}(\vec q',\vec
q)$ is odd under the switch $\vec q\leftrightarrow \vec q'$, and
since ${\bf S}(z, \vec q\vec q'',\vec q')={\bf S}(-z, \vec q,\vec
q'\vec q'')$ owing to the $t\leftrightarrow t'$ symmetry in (\ref{RES}),
it follows that
\begin{equation}
\Big(z+L_0(\vec q')\Big){\bf S}(z, \vec q,\vec q')-\int\,d\vec
q''L_{I+Q}(\vec q',\vec q''){\bf S}(z,\vec q, \vec q' \vec
q'')={\bf S}_0(\vec q,\vec q') \label{RES2X}
\end{equation}
(\ref{RES2}) or equivalently (\ref{RES2X}) define the Liouville
hierarchy, whereby the 2-body phase space distribution ties to the
3-body phase space distribution and so on. Indeed, (\ref{RES2X})
for instance implies
\begin{equation}
\Big(z+L_0(\vec q'')\Big){\bf S}(z, \vec q\vec q', \vec
q'')-\int\,d\vec q'''L_{I+Q}(\vec q'',\vec q'''){\bf S}(z,\vec q
\vec q', \vec q''\vec q''')={\bf S}_0(\vec q\vec q',\vec q'')
\label{RES2XX}
\end{equation}
with the 4-point resolvent function
\begin{equation}
{\bf S}(z,\vec q \vec q', \vec q''\vec q''')= \,\langle\delta
f(\vec 1)\delta f(\vec 2)|\frac 1{z+{\cal L}}\,|\delta f(\vec
3)\delta f(\vec 4)\rangle \label{SS3X}
\end{equation}
These are the microscopic kinetic equations for the color phase
space distributions. They are only useful when closed, that is, by
a truncation as we discuss below. These formal equations were
initially discussed
in~\cite{foster&martin,foster,mazenko1,mazenko3} in the context of
the one-component Abelian Coulomb plasma. We have now
generalized them to the multi-component and non-Abelian colored
Coulomb plasma.
\section{Self-Energy Kernel}
\renewcommand{\theequation}{IV.\arabic{equation}}
\setcounter{equation}{0}
In (\ref{RES2}) the non-local part of the Liouville operator plays
the role of a non-local self-energy kernel $\Sigma$ on the 2-body
resolvent. Indeed, we can rewrite (\ref{RES2}) as
\begin{eqnarray}
(z-L_0(\vec q)){\bf S}(z,\vec q,\vec q')-\int dq''\Sigma (z,\vec
q,\vec q'') {\bf S}(z,\vec q'',\vec q')={\bf S}_0(\vec q,\vec q')
\label{SELF}
\end{eqnarray}
with the non-local self-energy kernel defined formally as
\begin{equation}
\int d\vec q''\Sigma(z,\vec q,\vec q'')\,{\bf S}(z,\vec q'',\vec
q')=\int d\vec q'' L_{I+Q}(\vec q,\vec q'')\,{\bf S}(z,\vec q\vec
q'',\vec q') \label{SELF1}
\end{equation}
The self-energy kernel $\Sigma$ can be regarded as the sum of a
static or $z$-independent contribution $\Sigma_S$ and a non-static
or collisional contribution $\Sigma_C$,
\begin{equation}
\Sigma(z, \vec q, \vec q'')=\Sigma_S(\vec q,\vec
q'')+\Sigma_C(z,\vec q,\vec q'') \label{SUM}
\end{equation}
The stationary part $\Sigma_S$ satisfies
\begin{equation}
\int d\vec q'' \Sigma_S(\vec q,\vec q'')\,{\bf S}_0(\vec q'',\vec
q')= \int d\vec q''L_{I+Q}(\vec q,\vec q'')\,{\bf S}_0(\vec q,\vec
q',\vec q'') \label{SELF2}
\end{equation}
which identifies it with the sum of the 2- and 3-body parts of the
Liouville operator $L_{I+Q}$.
The collisional part $\Sigma_C$ is more involved. To unwind it, we
operate with $(z+L_0(\vec q'))$ on both sides of (\ref{SELF1}),
and then reduce the left hand side contribution using
(\ref{RES2X}) and the right hand side contribution using
(\ref{RES2XX}). The outcome reduces to
\begin{eqnarray}
\Sigma_C(z, \vec q, \vec q'')\,{\bf S}_0(\vec q'',\vec q')=&&
-\int d\vec q'''\,L_{I+Q}(\vec q,\vec q'')L_{I+Q}(\vec q',\vec
q''')
\,{\bf S}(z,\vec q\vec q'',\vec q'\vec q''')\nonumber\\
&&+\int\,d\vec q'''\,\Sigma(z,\vec q,\vec q'')\,L_{I+Q}(\vec q',\vec
q''')\,{\bf S}(z, \vec q'', \vec q'\vec q''') \label{INT}
\end{eqnarray}
after using (\ref{SELF2}). From (\ref{SELF1}) it follows formally
that
\begin{equation}
\Sigma(z,\vec q,\vec q'')=\int\,d\vec q'''\, L_{I+Q}(\vec q,\vec
q''')\,{\bf S}^{-1}(z,\vec q',\vec q'')\,{\bf S}(z, \vec q\vec
q''',\vec q') \label{FORMX}
\end{equation}
Inserting (\ref{FORMX}) into the right hand side of (\ref{INT})
and integrating over $\vec q'$ on both sides yields
\begin{equation}
nf_0(p'')\,\Sigma_C(z,\vec q,\vec q'') =-\int d\vec q_1 d\vec
q_2\,L_{I+Q}(\vec q,\vec q_1)L_{I+Q} (\vec q'',\vec q_2)\,{\bf
G}(z,\vec q\vec q_1,\vec q''\vec q_2) \label{SELF5}
\end{equation}
with ${\bf G}$ a 4-point phase space correlation function
\begin{equation}
{\bf G}(z,\vec q\vec q_1',\vec q'\vec q_2')={\bf S}(z,\vec q\vec
q_1',\vec q'\vec q_2')-\int d\vec q_3d\vec q_4 {\bf S}(z, \vec
q\vec q_1',\vec q_3){\bf S}^{-1}(z,\vec q_3,\vec q_4){\bf
S}(z,\vec q_4,\vec q'\vec q_2') \label{G4}
\end{equation}
The collisional character of the self-energy $\Sigma_C$ is
manifest in (\ref{SELF5}). The formal relation for the
collisional self-energy (\ref{SELF5}) was initially derived
in~\cite{mazenko1,mazenko3} for the one-component and Abelian
Coulomb plasma. We now have shown that it holds for any
non-Abelian SU(N) Coulomb plasma.
Eq. (\ref{SELF5}) shows that the connected part of the self-energy
kernel is actually tied to a 4-point correlator in the colored
phase space. In terms of (\ref{SELF5}), the original kinetic
equation (\ref{RES2}) now reads
\begin{eqnarray}
&&\Big(z-L_0(\vec q)\Big){\bf S}(z, \vec q, \vec q')-\int\,d\vec
q''\Sigma_S(\vec q,\vec q''){\bf S}(z,\vec q'',\vec q')= {\bf
S}_0(\vec q,\vec q') \nonumber\\&& -\int\,d\vec q''\,d\vec q_1
d\vec q_2\,L_{I+Q}(\vec q,\vec q_1)L_{I+Q} (\vec q'',\vec
q_2)\,{\bf G}(z,\vec q\vec q_1,\vec q''\vec q_2)\,{\bf S}(z,\vec
q'',\vec q') \label{BOLZ}
\end{eqnarray}
which is a Boltzmann-like equation. The key difference is that it
involves correlation functions: the Boltzmann-like kernel on the
right-hand side is {\bf not} a scattering amplitude but rather a
reduced 4-point correlation function. (\ref{BOLZ}) reduces to the
Boltzmann equation at weak coupling. An alternative derivation of
(\ref{BOLZ}) can be found in Appendix C through a direct
projection of (\ref{SELF1}) in phase space.
\section{Free Streaming Approximation}
\renewcommand{\theequation}{V.\arabic{equation}}
\setcounter{equation}{0}
The formal kinetic equation (\ref{SELF5}) can be closed by
approximating the 4-point correlation function in the color phase
space by a product of 2-point correlation
functions~\cite{mazenko3},
\begin{equation}
{\bf G}(t, \vec q\vec q_1, \vec q'\vec q_2)\approx \Big( {\bf
S}(t, \vec q,\vec q'){\bf S}(t, \vec q_1,\vec q_2)+{\bf S}(t, \vec
q, \vec q_2){\bf S}(t, \vec q', \vec q_1) \Big)\label{ST3}
\end{equation}
This reduction will be referred to as the free streaming
approximation. Next, in the double Liouville operator
$L_{I+Q}\times L_{I+Q}$ we substitute one colored Coulomb potential
with the bare Coulomb interaction ${\bf V}(\vec r-\vec r',\vec Q\cdot\vec Q')=\vec
Q\cdot\vec Q'/|\vec r-\vec r'|$,
\begin{eqnarray}
& & L_{I+Q}(\vec q,\vec q_1)=i\nabla_{\vec r}{\bf V}(\vec r-\vec
r_1,\vec Q\cdot\vec
Q_1)\cdot (\nabla_{\vec p}-\nabla_{\vec p_1}) \nonumber \\
& &-i \Big(\vec Q\times\nabla_{\vec Q}{\bf V}(\vec r-\vec r_1,\vec
Q\cdot \vec Q_1)\cdot \nabla_{\vec Q}+\vec Q_1\times \nabla_{\vec
Q_1}{\bf V}(\vec r-\vec r_1,\vec Q\cdot \vec
Q_1)\cdot\nabla_{Q_1}\Big) \label{ST4}
\end{eqnarray}
times a dressed colored Coulomb potential ${\bf c}_D$ defined
in~\cite{cho&zahed3}
\begin{eqnarray}
& & L_{I+Q}^R(\vec q,\vec q_1)=-i\frac{1}{\beta}\nabla_{\vec
r}{\bf c}_D(\vec r-\vec r_1,\vec Q\cdot \vec Q_1)\cdot
(\nabla_{\vec p}-\nabla_{\vec p_1}) \nonumber \\
& & +i\frac{1}{\beta}\Big(\vec Q\times\nabla_{\vec Q}{\bf
c}_D(\vec r-\vec r_1,\vec Q\cdot \vec Q_1)\cdot \nabla_{\vec
Q}+\vec Q_1\times \nabla_{\vec Q_1}{\bf c}_D(\vec r-\vec r_1,\vec
Q\cdot \vec Q_1)\cdot\nabla_{Q_1}\Big) \label{ST5}
\end{eqnarray}
This bare-dressed or half renormalization was initially
suggested in~\cite{wallenborn&baus} in the context of the
one-component Coulomb plasma to overcome the shortcomings of a
full or dressed-dressed renormalization initially suggested
in~\cite{mazenko1,mazenko3}. The latter was shown to upset the
initial conditions. Thus
\begin{equation}
L_{I+Q}(\vec q,\vec q_1)L_{I+Q}(\vec q',\vec q_2)\rightarrow
\frac{1}{2}\Big( L_{I+Q}(\vec q,\vec q_1)L_{I+Q}^R(\vec q',\vec
q_2)+L_{I+Q}^R(\vec q,\vec q_1)L_{I+Q}(\vec q',\vec q_2)\Big)
\label{eq004s}
\end{equation}
Combining (\ref{ST3}) and (\ref{eq004s}) in (\ref{SELF5}) yields
\begin{eqnarray}
&& n\,f_0(\vec p')\,\Sigma_C(t, \vec q, \vec q')\approx
-\frac{1}{2}\int d\vec q_1\,d\vec q_2\,\bigg(L_{I+Q}(\vec q,\vec
q_1)L_{I+Q}^R (\vec q',\vec q_2){\bf S}(t, \vec q,\vec q'){\bf
S}(t,\vec q_1,\vec q_2) \nonumber\\
&& +L_{I+Q}(\vec q,\vec q_1)L_{I+Q}^R (\vec q',\vec q_2){\bf S}(t,
\vec q,\vec q_2){\bf S}(t,\vec q', \vec q_1) + (\vec
q_1\leftrightarrow \vec q_2,\vec q\leftrightarrow \vec q' ) \bigg)
\label{ST6}
\end{eqnarray}
\noindent This is the half-dressed but free streaming
approximation for the connected part of the self-energy for the
colored Coulomb plasma. Translational invariance in space and
rotational invariance in color space allow a further reduction of
(\ref{ST6}) by Fourier and Legendre transforms, respectively.
Indeed, Eq. (\ref{ST6}) yields
\begin{eqnarray}
&&n\,f_0(\vec p')\,\Sigma_C(t, \vec q, \vec
q')\nonumber\\
&&\approx -\frac{1}{2} \int d\vec q_1\,d\vec q_2\,\bigg(L_{I}(\vec
q,\vec q_1)L_{I}^R (\vec q',\vec q_2){\bf S}(t, \vec q,\vec
q'){\bf S}(t,\vec q_1,\vec q_2) \nonumber\\
&& +L_{I}(\vec q,\vec q_1)L_{I}^R (\vec q',\vec q_2){\bf S}(t,
\vec q,\vec q_2){\bf S}(t,\vec q', \vec q_1) + (\vec
q_1\leftrightarrow \vec q_2,\vec q\leftrightarrow \vec q') \bigg) \nonumber \\
&&=-\frac{1}{2\beta} \int d\vec q_1\,d\vec q_2\,\bigg(
\nabla_{\vec r}{\bf c}_D(\vec r-\vec r_1,\vec Q\cdot \vec
Q_1)\cdot \nabla_{\vec p}\nabla_{\vec r'}{\bf V}(\vec r'-\vec
r_2,\vec Q'\cdot \vec Q_2)\cdot \nabla_{\vec
p'}{\bf S}(t, \vec q,\vec q'){\bf S}(t,\vec q_1,\vec q_2) \nonumber\\
&& +\nabla_{\vec r}{\bf c}_D(\vec r-\vec r_1,\vec Q\cdot \vec
Q_1)\cdot \nabla_{\vec p}\nabla_{\vec r'}{\bf V}(\vec r'-\vec
r_2,\vec Q'\cdot \vec Q_2)\cdot \nabla_{\vec p'}{\bf S}(t, \vec
q,\vec q_2){\bf S}(t,\vec q', \vec q_1) + (\vec q_1\leftrightarrow
\vec q_2,\vec q\leftrightarrow \vec q') \bigg) \nonumber \\
\label{eq006s}
\end{eqnarray}
where we note that the colored part of the Liouville operator
drops out of the collision kernel in the free streaming
approximation, as we detail in Appendix C. Both sides of
(\ref{eq006s}) can now be Legendre transformed in color to give
\begin{eqnarray}
&& n\,f_0(\vec p')\,\sum_l\Sigma_{Cl}(t, \vec r\vec r', \vec p\vec
p')\frac{2l+1}{4\pi}P_l(\vec Q\cdot\vec Q') \nonumber \\
& & \approx -\frac 1{2\beta}\int d\vec r_1 d\vec p_1 d\vec
r_2d\vec p_2 \sum_{l}\frac{2l+1}{4\pi}\bigg(\frac{l+1}
{2l+1}P_{l+1}(\vec Q\cdot\vec Q')+\frac{l}{2l+1}P_{l-1}
(\vec Q\cdot\vec Q')\bigg) \nonumber \\
& & \times \bigg( \nabla_{\vec r}{\bf c}_{D1}(\vec r-\vec
r_1)\cdot \nabla_{\vec p}\nabla_{\vec r'} \frac{1}{|\vec r'-\vec
r_2|}\cdot \nabla_{\vec p'}{\bf S}_l(t, \vec r\vec r',\vec p\vec
p'){\bf S}_1(t, \vec r_1\vec r_2,\vec p_1\vec p_2) \nonumber \\
& & +\nabla_{\vec r}{\bf c}_{Dl}(\vec r-\vec r_1)\cdot
\nabla_{\vec p}\nabla_{\vec r'} \frac{1}{|\vec r'-\vec r_2|}\cdot
\nabla_{\vec p'}{\bf S}_1(t, \vec r\vec r_2,\vec p\vec p_2){\bf
S}_l(t, \vec r'\vec r_1,\vec p'\vec p_1) \nonumber \\
& & +\nabla_{\vec r'}{\bf c}_{Dl}(\vec r'-\vec r_2)\cdot
\nabla_{\vec p'}\nabla_{\vec r} \frac{1}{|\vec r-\vec r_1|}\cdot
\nabla_{\vec p}{\bf S}_l(t, \vec r\vec r_2,\vec p\vec p_2){\bf
S}_1(t, \vec r'\vec r_1,\vec p'\vec p_1) \nonumber \\
& & +\nabla_{\vec r'}{\bf c}_{D1}(\vec r'-\vec r_2)\cdot
\nabla_{\vec p'}\nabla_{\vec r} \frac{1}{|\vec r-\vec r_1|}\cdot
\nabla_{\vec p}{\bf S}_l(t, \vec r\vec r',\vec p\vec p'){\bf
S}_1(t, \vec r_1\vec r_2,\vec p_1\vec p_2) \bigg) \label{eq007s}
\end{eqnarray}
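The $l\pm1$ mixing pattern in (\ref{eq007s}) traces back to the fact
that the Coulomb vertex carries one unit of color angular momentum,
${\bf V}\propto P_1(\vec Q\cdot\vec Q')$, combined with the standard
Legendre recurrence
\begin{equation}
P_1(x)\,P_l(x)=\frac{l}{2l+1}\,P_{l-1}(x)+\frac{l+1}{2l+1}\,P_{l+1}(x)
\end{equation}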
Thus
\begin{eqnarray}
&& n\,f_0(\vec p')\,\Sigma_{Cl}(t, \vec r\vec r', \vec p\vec
p') \nonumber \\
& & \approx -\frac 1{2\beta}\int d\vec r_1 d\vec p_1 d\vec
r_2d\vec p_2 \bigg( \nabla_{\vec r}{\bf c}_{D1}(\vec r-\vec
r_1)\cdot \nabla_{\vec p}\nabla_{\vec r'} \frac{1}{|\vec
r'-\vec r_2|}\cdot \nabla_{\vec p'} \nonumber \\
& & \times \Big(\frac{l}{2l+1}{\bf S}_{l-1}(t, \vec r\vec r',\vec
p\vec p'){\bf S}_1(t, \vec r_1\vec r_2,\vec p_1\vec
p_2)+\frac{l+1}{2l+1}{\bf S}_{l+1}(t, \vec r\vec r',\vec p\vec
p'){\bf S}_1(t, \vec r_1\vec r_2,\vec p_1\vec
p_2)\Big) \nonumber \\
& & +\nabla_{\vec r}{\bf c}_{D1}(\vec r-\vec r_1)\cdot
\nabla_{\vec p}\nabla_{\vec r'} \frac{1}{|\vec r'-\vec r_2|}\cdot
\nabla_{\vec p'} \nonumber \\
& & \times \Big(\frac{l}{2l+1}{\bf S}_{1}(t, \vec r\vec r_2,\vec
p\vec p_2){\bf S}_{l-1}(t, \vec r'\vec r_1,\vec p'\vec p_1)+\frac{
l+1}{2l+1}{\bf S}_{1}(t, \vec r\vec r_2,\vec p\vec p_2){\bf
S}_{l+1}(t, \vec r'\vec r_1,\vec p'\vec p_1)\Big) \nonumber \\
& & +\nabla_{\vec r'}{\bf c}_{Dl}(\vec r'-\vec r_2)\cdot
\nabla_{\vec p'}\nabla_{\vec r} \frac{1}{|\vec r-\vec r_1|}\cdot
\nabla_{\vec p} \nonumber \\
& & \times \Big(\frac{l}{2l+1}{\bf S}_{l-1}(t, \vec r\vec r_2,\vec
p\vec p_2){\bf S}_1(t, \vec r'\vec r_1,\vec p'\vec p_1)+\frac{l+1}
{2l+1}{\bf S}_{l+1}(t, \vec r\vec r_2,\vec p\vec p_2){\bf
S}_1(t, \vec r'\vec r_1,\vec p'\vec p_1) \Big) \nonumber \\
& & +\nabla_{\vec r'}{\bf c}_{D1}(\vec r'-\vec r_2)\cdot
\nabla_{\vec p'}\nabla_{\vec r} \frac{1}{|\vec r-\vec r_1|}\cdot
\nabla_{\vec p} \nonumber \\
& & \times \Big(\frac{l}{2l+1}{\bf S}_{l-1}(t, \vec r\vec r',\vec
p\vec p'){\bf S}_1(t, \vec r_1\vec r_2,\vec p_1\vec
p_2)+\frac{l+1}{2l+1}{\bf S}_{l+1}(t, \vec r\vec r',\vec p\vec
p'){\bf S}_1(t, \vec r_1\vec r_2,\vec p_1\vec p_2)\Big) \bigg)
\nonumber \\ \label{eq008s}
\end{eqnarray}
with ${\bf S}_{-1}\equiv0$ by definition. In the colored Coulomb
plasma the collisional contributions diagonalize in the color
projected channels labelled by $l$, with $l=0$ being the density
channel, $l=1$ the plasmon channel and so on. In momentum space
(\ref{eq008s}) reads
\begin{eqnarray}
&& n\,f_0(\vec p')\, \Sigma_{Cl}(t, \vec k, \vec p\vec p')
\nonumber \\
& & = -\frac 1{2\beta} \int d\vec p_1 d\vec p_2 \int \frac{d\vec
l}{(2\pi)^3}\bigg( \vec l\cdot\nabla_{\vec p}\vec
l\cdot\nabla_{\vec p'} {\bf c}_{D1}(l) V_{\vec l} \nonumber
\\ & & \times \Big( \frac{l}{2l+1} {\bf S}_{l-1}(t, \vec k-\vec
l,\vec p\vec p'){\bf S}_1(t, \vec l,\vec p_1\vec
p_2)+\frac{l+1}{2l+1} {\bf S}_{l+1}(t, \vec k-\vec l,\vec p\vec
p'){\bf S}_1(t, \vec l,\vec p_1\vec p_2)\Big) \nonumber \\
& & + \vec l\cdot\nabla_{\vec p}(\vec k-\vec l)\cdot\nabla_{\vec
p'} {\bf c}_{Dl}(l) V_{\vec k-\vec l} \nonumber
\\ & & \times \Big( \frac{l}{2l+1} {\bf S}_{1}(t, \vec k-\vec
l,\vec p\vec p_2){\bf S}_{l-1}(t, \vec l,\vec p'\vec
p_1)+\frac{l+1}{2l+1} {\bf S}_{1}(t, \vec k-\vec l,\vec p\vec
p_2){\bf S}_{l+1}(t, \vec l,\vec p'\vec p_1)\Big) \nonumber \\
& & + (\vec k-\vec l)\cdot\nabla_{\vec p}\vec l\cdot\nabla_{\vec
p'} {\bf c}_{Dl}(l) V_{\vec k-\vec l} \nonumber
\\ & & \times \Big( \frac{l}{2l+1} {\bf S}_{l-1}(t, \vec
l,\vec p\vec p_2){\bf S}_{1}(t, \vec k-\vec l,\vec p'\vec
p_1)+\frac{l+1}{2l+1} {\bf S}_{l+1}(t, \vec l,\vec p\vec
p_2){\bf S}_{1}(t, \vec k-\vec l,\vec p'\vec p_1)\Big) \nonumber \\
& & +\vec l\cdot\nabla_{\vec p}\vec l\cdot\nabla_{\vec p'} {\bf
c}_{D1}(l) V_{\vec l} \nonumber \\
& & \times \Big( \frac{l}{2l+1} {\bf S}_{l-1}(t, \vec k-\vec
l,\vec p\vec p'){\bf S}_1(t, \vec l,\vec p_1\vec
p_2)+\frac{l+1}{2l+1} {\bf S}_{l+1}(t, \vec k-\vec l,\vec p\vec
p'){\bf S}_1(t, \vec l,\vec p_1\vec p_2)\Big) \bigg) \nonumber \\
\label{SFOURIER}
\end{eqnarray}
with $V_{\vec l}=4\pi/{\vec l}^2$. We note that for $l=0$, the
colorless density channel, (\ref{SFOURIER}) involves only ${\bf
S}_1$, the time-dependent charged form factor induced by the
Coulomb interactions.
\section{Hydrodynamical Projection}
\renewcommand{\theequation}{VI.\arabic{equation}}
\setcounter{equation}{0}
In terms of (\ref{SFOURIER}), (\ref{SELF1}) and
\begin{equation}
\Sigma_l (z\vec k, \vec p\vec p_1)=\bigg(
\Sigma_{0l}+\Sigma_{Il}+\Sigma_{Ql}+\Sigma_{Cl}\bigg)(z\vec k,
\vec p\vec p_1) \label{KEY}
\end{equation}
the Fourier and Legendre transforms of the kinetic equation
(\ref{RES2}) now read
\begin{equation}
z{\bf S}_l(z\vec k, \vec p\vec p')-\int\,d\vec p_1\Sigma_l(z\vec
k, \vec p\vec p_1){\bf S}_l(z\vec k, \vec p_1\vec p')={\bf
S}_{0l}(\vec k, \vec p\vec p') \label{K1}
\end{equation}
with $\Sigma_{0l}=L_0$ and $\Sigma_{Sl}=L_{(I+Q)l}$. Specifically
\begin{eqnarray}
&& \Sigma_{0l}(z\vec k,\vec p\vec p_1)=\vec k\cdot\vec v \delta(\vec p -\vec p_1)\nonumber\\
&& \Sigma_{Il}(z\vec k,\vec p\vec p_1)=-n\,f_0(p)\frac{\vec k\cdot\vec p}{m}\,{\bf c}_{Dl}(\vec k)\nonumber\\
&& \Sigma_{Ql}(z\vec k,\vec p\vec p_1)=0 \label{K2}
\end{eqnarray}
and $\Sigma_{Cl}$ is defined in (\ref{SFOURIER}). See also
Appendix B for an alternative but equivalent derivation using the
operator projection method.
(\ref{K1}) is the key kinetic equation for the colored Coulomb
plasma. It still contains considerable information in phase space.
A special limit of the classical phase space is the long
wavelength or hydrodynamical limit. In this limit, only few
moments of the phase space fluctuations $\delta f$ or equivalently
their correlations in ${\bf S}\approx \langle\delta f\delta
f\rangle$ will be of interest. In particular,
\begin{eqnarray}
&& n(t, \vec r)=\int d\vec{p}\, d\vec Q\,
\, \delta f (t, \vec r, \vec p, \vec Q)\nonumber\\
&& \vec{p}(t, \vec r)=\int d\vec{p} d\vec Q\, \vec p
\, \delta f (t, \vec r, \vec p, \vec Q)\nonumber\\
&& {\bf e}(t, \vec r)=\int d\vec{p} d\vec Q \,\frac {p^2}{2m}\,
\delta f (t, \vec r, \vec p, \vec Q) \label{MOM}
\end{eqnarray}
These are the local particle density, momentum density and
(kinetic) energy density. The
hydrodynamical sector described by the macro-variables (\ref{MOM})
is colorless. An interesting macro-variable which carries a charged
representation of SU(2) is
\begin{equation}
{\bf n}_l (t, \vec r)=\frac 1{2l+1} \sum_m \int d\vec{p}\, d\vec Q
\,Y_l^m (\vec Q)\, \delta f (t, \vec r, \vec p, \vec Q)
\label{DENSL}
\end{equation}
which reduces to the $l$ color density with $l=0$ being the
particle density, $l=1$ the charged color monopole density, $l=2$
the charged color quadrupole density and so on. Because of color
rotational invariance in the SU(2) colored Coulomb plasma, the
constitutive equations for (\ref{DENSL}) which amount to charge
conservation hold for each $l$.
To project (\ref{K1}) onto the hydrodynamical part of the phase
space characterized by (\ref{DENSL}) and (\ref{MOM}), we define
the hydrodynamical projectors
\begin{equation}
{\cal P}_H=\sum_{i=1}^5|i\rangle\langle i|\qquad\qquad {\cal
Q}_H={\bf 1}_H-{\cal P}_H \label{PRO}
\end{equation}
with $1=$ $l$-density, $2,4,5=$ momentum and $3=$ energy, as detailed
in Appendix D. When the $l=0$ particle density is retained in
(\ref{PRO}) the projection is on the colorless sector of the phase
space. When the $l=1$ charged monopole density is retained in
(\ref{PRO}) the projection is on the plasmon channel, and so on.
Most of the discussion to follow will focus on projecting on the
canonical hydrodynamical phase space (\ref{MOM}) with $l=0$ or
singlet representation. The inclusion of the $l\neq 0$
representations of SU(2) is straightforward.
Formally (\ref{KEY}) can be viewed as a $\vec p\times \vec p_1$
matrix in momentum space
\begin{equation}
\left( z-\Sigma_l(z\vec k)\right)\,{\bf S}_l(z\vec k)={\bf
S}_{0l}(\vec k) \label{KEY1}
\end{equation}
The projection of the matrix equation (\ref{KEY1}) follows the
same procedure as in Appendix B. The result is
\begin{equation}
\left(z-{\cal P}_H\Sigma_l (z\vec k){\cal P}_H-{\cal
P}_H\Theta_l(z\vec k){\cal P}_H\right) {\cal P}_H{\bf S}_l(z\vec
k){\cal P}_H={\cal P}_H{\bf S}_{0l}(k){\cal P}_H \label{KEY2}
\end{equation}
with
\begin{equation}
\Theta_l=\Sigma_l(z\vec k) {\cal Q}_H(z-{\cal Q}_H\Sigma_l(z\vec
k){\cal Q}_H)^{-1} {\cal Q}_H\Sigma_l(z\vec k)
\end{equation}
If we define the hydrodynamical matrix elements
\begin{eqnarray}
&&{\bf G}_{lij}(z\vec k)=\langle i|{\bf S}_l(z\vec k)|j\rangle\nonumber\\
&&{\Sigma}_{lij}(z\vec k)=\langle i|{\Sigma }_l(z\vec k)|j\rangle\nonumber\\
&&{\Theta}_{lij}(z\vec k)=\langle i|{\Theta}_l(z\vec k)|j\rangle\nonumber\\
&&{\bf G}_{0lij}(z\vec k)=\langle i|{\bf S}_{0l}(\vec k)|j\rangle
\label{MAT}
\end{eqnarray}
then (\ref{KEY2}) reads
\begin{equation}
\left(z\delta_{ij}-\Omega_{lij}(z\vec k)\right)\,{\bf
G}_{lji'}(z\vec k)={\bf G}_{0lii'}(\vec k) \label{DIS}
\end{equation}
with $\Omega_l=\Sigma_l+\Theta_l$. (\ref{DIS}) takes the form of a
dispersion relation for each color partial wave $l$, with the projection
operator (\ref{PRO}) set by the pertinent density (\ref{DENSL}).
The contribution $\Sigma_l$ to $\Omega_l$ will be referred to as
{\it direct} while the contribution $\Theta_l$ will be referred to
as {\it indirect}.
\section{Hydrodynamical Modes}
\renewcommand{\theequation}{VII.\arabic{equation}}
\setcounter{equation}{0}
The zeros of (\ref{DIS}) are the hydrodynamical modes originating
from the Liouville equation for the time-dependent structure factor. The
equation is closed under the free streaming approximation with
half renormalized vertices as we detailed above.
We start by analyzing the 2 transverse modes with $i=T$ in
(\ref{MAT}) and (\ref{DIS}). We note with~\cite{baus} that ${\bf
G}_{lTi}=0$ whenever $T\neq i$: the hydrodynamical projection (see
Appendix D) renders the integrand odd for any $l$. The 2
independent transverse modes in (\ref{DIS}) decouple from the
longitudinal $i=L$, the (kinetic) energy $i=E$ and particle
density $i=N$ modes for all color projections. Thus
\begin{equation}
{\bf G}_{lT}(z\vec k)=\frac 1{z-\Omega_{lT}(z\vec k)} \label{TDIS}
\end{equation}
with $\Omega_{lT}=\langle T|\Omega_l|T\rangle$ and ${\bf
G}_{lT}=\langle T|{\bf G}_l|T\rangle$. The hydro-projected
time-dependent $l$ structure factor for fixed frequency
$z=\omega+i0$, wavenumber $k$ develops 2 transverse poles
\begin{equation}
z_l(\vec k)=\Omega_{lT}(z\vec k)\approx {\cal O}(k^2)
\label{TPOLE}
\end{equation}
The last estimate follows from O(3) momentum symmetry under
statistical averaging, whatever the color projection. We identify
the transverse poles in (\ref{TPOLE}) with 2 shear modes of
constitutive dispersion
\begin{equation}
\omega+i\frac{\eta_l}{mn}k^2+{\cal O}(k^3)= 0
\label{VIS2}
\end{equation}
with $\eta_l$ the shear viscosity for the lth color
representation. Unlike conventional plasmas, the classical SU(2)
color Coulomb plasma admits an infinite hierarchy of shear modes
for each representation $l$.
\begin{figure}[!h]
\begin{center}
\subfigure{\label{radial:all}\includegraphics[width=0.495\textwidth]
{S00_alla.ps}}
\subfigure{\label{radial:G021}\includegraphics[width=0.495\textwidth]
{S01_allb.ps}}
\end{center}
\begin{center}
\subfigure{\label{radial:G066}\includegraphics[width=0.495\textwidth]
{S02_allc.ps}}
\subfigure{\label{radial:G128}\includegraphics[width=0.495\textwidth]
{S03_alld.ps}}
\end{center}
\caption{${\bf S}_{0l}(q)$ from SU(2) Molecular Dynamics.} \label{structure}
\end{figure}
The remaining 3 hydrodynamical modes $L,E,N$ are more involved as
they mix in (\ref{DIS}) and under general symmetry considerations.
Indeed, current conservation ties the L mode to the N mode, for
instance. Most of the symmetry arguments regarding the generic
nature of $\Omega_l$ in~\cite{baus} carry over to our case for each
color representation. Thus, for the 3 remaining non-transverse
modes (\ref{DIS}) reads in matrix form
\begin{equation}
\left(
\begin{array}{ccc}
{\bf G}_{l NN} & {\bf G}_{lNL} & {\bf G}_{lNE} \\
{\bf G}_{lLN} &{\bf G}_{lLL} & {\bf G}_{lLE} \\
{\bf G}_{lEN}& {\bf G}_{lEL} & {\bf G}_{lEE}
\end{array}
\right)= \left(
\begin{array}{ccc}
z & -\Omega_{lNL} & 0 \\
-\Omega_{lLN} &z-\Omega_{lLL} & -\Omega_{lLE} \\
0 & -\Omega_{lEL} & z-\Omega_{lEE}
\end{array}
\right)^{-1} \left(
\begin{array}{ccc}
1+n\,{\bf h}_l & 0 & 0 \\
0 &1 & 0 \\
0 & 0 & 1
\end{array}
\right) \label{MAT1}
\end{equation}
The 3 remaining hydrodynamical modes are the zeros of the
determinant
\begin{equation}
\Delta_l=\left|
\begin{array}{ccc}
z & -\Omega_{lNL} (zk)& 0 \\
-\Omega_{lLN} (zk) &z-\Omega_{lLL} (zk) & -\Omega_{lLE} (zk) \\
0 & -\Omega_{lEL} (zk) & z-\Omega_{lEE} (zk)
\end{array}
\right|=0 \label{MAT2}
\end{equation}
(\ref{MAT2}) admits infinitely many solutions $z_l(k)$. We seek
the hydrodynamical solutions as analytical solutions in $k$ for
small $k$, i.e. $z_l(k)=\sum_nz_{ln}k^n$ for each SU(2) color
representation $l$. In leading order, we have
\begin{equation}
\Delta_l\approx z_{l0}\left(z_{l0}^2-\frac{k^2T}{m}\,{\bf
S}_{0l}^{-1}(k\approx 0)\right)\approx 0 \label{MAT3}
\end{equation}
after using the symmetry properties of $\Omega_l$ as
in~\cite{baus} for each $l$. We have also made use of the
generalized Ornstein-Zernike equations for each $l$
representation~\cite{cho&zahed3}.
In Fig.~\ref{structure} we show the molecular dynamics simulation
results for 4 typical structure factors~\cite{cho&zahed3}
\begin{equation}
{\bf S}_{0l}(\vec k )=\left({\frac{4\pi}{2l+1}}\right)
\left<\left| \sum_{jm}\,e^{{i\vec k}\cdot{\vec
x_j(0)}}\,Y_l^m(\vec Q_j )\,\right|^2 \right>
\end{equation}
for $l=0,1,2,3$. We have made use of the dimensionless wavenumber
$q=k\,a_{WS}$, with $a_{WS}$ the Wigner-Seitz radius. In
Fig.~\ref{fluctuation} we show the analytical result for ${\bf
S}_{01}$ which we will use for the numerical estimates below. We
note that the $l=1$ structure factor, which amounts to the monopole
structure factor, vanishes at $k=0$. All other $l$'s are finite at
$k=0$, with $l=0$ corresponding to the density structure factor.
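As an aside, such color-projected structure factors are straightforward
to estimate from molecular dynamics snapshots; the following sketch is
ours (the $1/N$ normalization and the placement of the $m$-sum are our
assumptions, as the definition above leaves them implicit):
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm

def structure_factor(k_vecs, snapshots, l):
    """S_0l(k) from snapshots; each snapshot is (x, Q) with x (N,3)
    positions and Q (N,3) unit color vectors."""
    S = np.zeros(len(k_vecs))
    for x, Q in snapshots:
        N = len(x)
        pol = np.arccos(np.clip(Q[:, 2], -1.0, 1.0))  # polar angle of Q
        az = np.arctan2(Q[:, 1], Q[:, 0])             # azimuthal angle of Q
        for ik, k in enumerate(k_vecs):
            phase = np.exp(1j * x @ k)                # e^{i k.x_j}
            for m in range(-l, l + 1):
                amp = np.sum(phase * sph_harm(m, l, az, pol))
                S[ik] += 4 * np.pi / (2 * l + 1) * abs(amp)**2 / N
    return S / len(snapshots)
\end{verbatim}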
(\ref{MAT3}) displays 3 hydrodynamical zeros as $k\rightarrow 0$
for each $l$ representation. One is massless and we identify it
with the diffusive heat mode. The molecular dynamics simulations
of the structure factors in Fig.~\ref{structure} imply that all
$l\neq 1$ channels are sound dominated with two massless
modes, while the $l=1$ channel is plasmon dominated with two massive
longitudinal plasmon states. Thus
\begin{equation}
z_{l\pm}=\pm \omega_{p}\,\delta_{l1}
\end{equation}
with $\omega_{p}=k_D\sqrt{T/m}$ the plasmon frequency. The
relevance of this channel to the energy loss has been discussed in
\cite{cho&zahed5}. We used ${\bf S}_{01}(k\approx 0)\approx
k^2/k_D^2$ with $k_D^2$ the squared Debye momentum. All
$l\neq 1$ channels are contaminated by the sound modes. The SU(2) classical
and colored Coulomb plasma supports plasmon oscillations even at
strong coupling. These modes are important in the attenuation of
soft monopole color oscillations.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.55\textwidth]{structure_tale.ps}
\end{center}
\caption{${\bf S}_{01}(q)$ for different $\Gamma$~\cite{cho&zahed3}.}
\label{fluctuation}
\end{figure}
\section{Shear Viscosity}
\renewcommand{\theequation}{VIII.\arabic{equation}}
\setcounter{equation}{0}
The transport parameters associated with the SU(2) classical and colored Coulomb plasma
follow from the hydrodynamical projection and expansion discussed above. They include
the heat diffusion coefficient, the transverse shear viscosity and the longitudinal plasmon
frequency and damping parameters. In this section, we discuss explicitly the shear viscosity
coefficient for the SU(2) colored Coulomb plasma.
Throughout, we define $\lambda=\frac{4}{3}\pi(3\Gamma)^{3/2}$ and
the bare Coulomb interaction $\bar{V}_l=k_0^2/l^2$ in units of the Wigner-Seitz radius $k^{-1}_0=a_{
WS}$. While varying the Coulomb coupling
\begin{equation}
\Gamma=\frac{g^2}{4\pi}\beta\frac{C_2}{a_{WS}}
\end{equation}
all length scales will be measured in $a_{WS}=(4\pi n/3)^{-1/3}$,
all times in the inverse plasmon frequency $1/\omega_p$ with
$\omega_p^2={\kappa_D^2}/{m\beta}={n}g^2C_2/m$,
and all masses in units of $m$. The Debye
momentum is $\kappa_D^2=g^2n\beta C_2$ and the plasma density is $n$.
For instance, the shear viscosity will be expressed in the fixed dimensionless unit
$\eta_*=nm\omega_p a_{WS}^{2}$.
The transverse shear viscosity follows from (\ref{TDIS}) with $\Sigma_l$ contributing to the direct
or hydrodynamical part, and $\Theta_l$ contributing to the indirect or single-particle part. For $l=0$
\begin{equation}
\frac{\eta_{0}}{\eta^*}=\frac{\eta_{0\,
dir}}{\eta^*}+\frac{\eta_{0\, ind}}{\eta^*} \label{ETAFULL}
\end{equation}
for the direct (hydrodynamical) and indirect (single-particle)
contributions, respectively. The direct or hydrodynamical
contribution is likely to be dominant at strong coupling, while
the indirect or single-particle contribution is likely to take
over at weak coupling. We now proceed to show that.
The indirect contribution to the viscosity follows from the
contribution outside the hydrodynamical subspace through
${\cal Q}_H$ and lumps the single-particle phase contributions.
It involves the inversion of ${\cal Q}_H\Sigma_{C0}{\cal Q}_H$
in (\ref{eq013e}) with
\begin{eqnarray}
\eta_{0\rm ind}=\lim_{k\to0}\frac {mn}{k^2} \frac{|\langle
t|\Sigma_0 |tl\rangle|^2}{\langle tl|i\Sigma_0|tl\rangle}
=\lim_{k\to0}\frac {mn}{k^2} \frac{|\langle
t|(\Sigma_{00}+\Sigma_{C0}) |tl\rangle|^2}{\langle
tl|i\Sigma_{C0}|tl\rangle} \label{VIS3}
\end{eqnarray}
In short we expand $\Sigma_{C0}$ in terms of generalized Hermite
polynomials, with the first term identified with the stress tensor
due to the projection operator (\ref{eq003h}). The inversion
follows by means of the first Sonine polynomial expansion.
Explicitly
\begin{equation}
\eta_{ind}^{*}=\frac{\eta_{0\,ind}}{\eta_*}=nm \lim_{k \to
0}\frac{1}{k^2}\frac{ |\langle t \vert\Sigma_{00}+\Sigma_{C0}(\vec
k, 0)\vert lt \rangle |^2 }{\langle lt \vert i\Sigma_{C0}(\vec k,
0)\vert lt \rangle} =\frac{(1+\lambda I_2)^2} {\lambda I_3}
\label{eq013v}
\end{equation}
with
\begin{eqnarray}
& & I_2=\frac{1}{60\pi^2}\frac{1}{(3\Gamma)^{1/2}}
\int_0^{\infty} dq \Big(2({\bf S}_{01}(q)^2-1)+(1-{\bf S}_{01}(q))\Big) \nonumber \\
& & I_3=\frac{1}{10\pi^{3/2}}\frac{1}{3\Gamma} \int_0^{\infty}dq q
(1-{\bf S}_{01}(q)) \label{eq014v}
\end{eqnarray}
with the dimensionless wave number $q=ka_{WS}$.
We recall that ${\bf S}_{01}$ is the monopole structure factor
discussed in~\cite{cho&zahed3} both analytically and numerically.
In Fig.~\ref{fluctuation} we show the behavior of the static
monopole structure factor from~\cite{cho&zahed3} for different
Coulomb couplings. The larger $\Gamma$ the stronger the first
peak, and the oscillations. These features characterize the onset
of the crystalline structure in the SU(2) colored Coulomb plasma.
A good fit to Fig.~\ref{fluctuation} follows from the
parametrization
\begin{equation}
1+ C_0 e^{-q/C_1}\sin{((q-C_2)/C_3)} \label{FIT}
\end{equation}
with 4 parameters $C_{0,1,2,3}$. The fit following from
(\ref{FIT}) extends to $q\approx 100$ within $10^{-5}$ accuracy,
thanks to the exponential damping.
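As an aside, (\ref{FIT}) is straightforward to fit numerically; a
minimal sketch (ours, with purely illustrative data and parameter values
rather than the actual simulation output) is
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def S01_fit(q, C0, C1, C2, C3):
    # the tail parametrization (FIT)
    return 1.0 + C0 * np.exp(-q / C1) * np.sin((q - C2) / C3)

# placeholder data; in practice q_data, S_data come from the simulations
q_data = np.linspace(0.5, 20.0, 200)
S_data = S01_fit(q_data, -1.0, 2.0, 0.5, 0.7)

C, _ = curve_fit(S01_fit, q_data, S_data, p0=[-0.8, 1.5, 0.3, 0.6])
print("C0..C3 =", C)
\end{verbatim}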
The direct contribution to the shear viscosity follows from similar arguments.
From (\ref{TDIS}) and (\ref{VIS2}), we have in the zero momentum limit
\begin{eqnarray}
\eta_{0\, dir}=\lim_{k\to0} \frac {mn}{k^2}\langle t|i\Sigma_0
|t\rangle=\lim_{k\to0}\frac {mn}{k^2}\langle t|i\Sigma_{C0}
(0,0)|t\rangle
\end{eqnarray}
with $\Sigma_0=\Sigma_{00}+\Sigma_{I0}+\Sigma_{C0}$ as defined in
(\ref{K2}) and (\ref{SFOURIER}). Only those nonvanishing
contributions after the hydrodynamical projection were retained in
the second equalities in (\ref{VIS3}) as we detail in Appendix D.
A rerun of the arguments yields
\begin{eqnarray}
\eta_{dir}^{*}&=&\eta_{0\,dir}/\eta_*=\lambda\frac{\omega_p}
{\kappa_D^3}\lim_{k\to0}\frac{1}{\vec k^2}\int\frac{d\vec
l}{(2\pi)^3} \int_{0}^{\infty}dt
n(\vec \epsilon\cdot\vec l)^2 \nonumber \\
& \times & \bigg( {\bf c}_{D1}(l){\cal G}_{n1}(\vec k-\vec
l,t){\cal G}_{n1}(\vec l,t)\bar{V}_{\vec l}-{\bf c}_{D0}(l){\cal
G}_{n1}(\vec k-\vec l,t){\cal G}_{n1}(\vec l,t)\bar{V}_{\vec
k-\vec
l} \bigg) \nonumber \\
\label{eq002v}
\end{eqnarray}
The projected non-static structure factor is
\begin{eqnarray}
{\cal G}_{n1}(\vec l,t) &=&\frac{1}{n}\int d\vec pd\vec p'\,{\bf
S}_{1}(\vec l,t;\vec p\vec p')={\overline{\cal G}}_{n1}(\vec l,
t)\,{\bf S}_{01}(\vec l) \label{eq010v}
\end{eqnarray}
with the normalization ${\overline{\cal G}}_{n1}(\vec l, 0)=1$. As
in the one-component Coulomb plasma studied
in~\cite{gould&mazenko}, we approximate the dynamical part by
its intermediate-time behavior where the motion is free. This
consists in solving (\ref{SELF}) with no self-energy kernel, or
$\Sigma=0$,
\begin{equation}
{\cal G}_{n1}(\vec l, t)\approx e^{-(lt)^2/2m\beta}\,{\bf
S}_{01}(\vec l) \label{GFREE}
\end{equation}
Thus inserting (\ref{GFREE}) and performing the integrations with
$k\rightarrow 0$ yields the direct contribution to the shear
viscosity
\begin{equation}
\eta_{dir}^{*}
=\frac{\eta_{0\,dir}}{\eta_*}=\frac{\sqrt{3}}{45\pi^{1/2}}\Gamma^{\frac{1}{2}}
\label{eq012v}
\end{equation}
The full shear viscosity result is then
\begin{equation}
\frac{\eta_{0}}{\eta^*}=\frac{\eta_{0\,
dir}}{\eta^*}+\frac{\eta_{0\, ind}}{\eta^*}=
\frac{\sqrt{3}}{45\pi^{1/2}}\Gamma^{\frac{1}{2}} +\frac{(1+\lambda
I_2)^2} {\lambda I_3} \label{ETAFULLX}
\end{equation}
after inserting (\ref{eq013v}) and (\ref{eq012v}) in
(\ref{ETAFULL}). The result (\ref{ETAFULLX}) for the shear
viscosity of the transverse sound mode is analogous to the result
for the shear viscosity in the one-component plasma derived
initially in~\cite{wallenborn&baus}, with two differences: 1/ The
SU(2) Casimir in $\Gamma$; 2/ the occurrence of ${\bf S}_{01}$
instead of ${\bf S}_{00}$. Since ${\bf S}_{01}$ is plasmon
dominated at low momentum, we conclude that the shear viscosity is
dominated by rescattering against the SU(2) plasmon modes in the
cQGP.
Using the fitted monopole structure factor (\ref{FIT}) in
(\ref{eq014v}) we can numerically assess (\ref{eq013v}) for
different values of $\Gamma$. Combining this result for the
indirect viscosity with (\ref{eq012v}) for the direct
viscosity yields the colorless or sound viscosity $\eta_0$. The
values of $\eta_0$ are displayed in Table I, and shown in
Fig.~\ref{viscosity} (black). The SU(2) molecular dynamics
simulations in~\cite{gelmanetal} which are parameterized as
\begin{equation}
\eta^{*}_{MD}\simeq0.001\Gamma+\frac{0.242}{\Gamma^{0.3}}+\frac{0.072}{\Gamma^2}
\label{eq011v}
\end{equation}
are also displayed in Table I and shown in Fig.~\ref{viscosity}
(red) for comparison. The sound viscosity dips at about
$\Gamma\approx 8$ in our analytical estimate. To understand the
origin of the minimum, we display in Fig. \ref{viscosity_fit} the
scaling with $\Gamma$ of the direct or hydrodynamical and the
indirect part of the shear viscosity. The direct contribution to
the viscosity grows like $\Gamma^{1/2}$, while the indirect contribution
drops like $1/\Gamma^{5/2}$. The latter dominates at weak
coupling, while the former dominates at strong coupling. This is
indeed expected, since the direct part is the contribution from
the hydrodynamical part of the phase space, while the indirect
part is the contribution from the non-hydrodynamical or
single-particle part of phase space. The crossing is at
$\Gamma\approx 4$.
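The two scalings are easy to assess numerically; the sketch below
(ours) evaluates the MD fit (\ref{eq011v}) and the direct piece
(\ref{eq012v}), and models the indirect piece by its fitted
$1/\Gamma^{5/2}$ scaling with a purely illustrative prefactor:
\begin{verbatim}
import numpy as np

gammas = np.arange(2.0, 18.5, 0.5)

eta_md  = 0.001 * gammas + 0.242 / gammas**0.3 + 0.072 / gammas**2
eta_dir = np.sqrt(3.0) / (45.0 * np.sqrt(np.pi)) * np.sqrt(gammas)
eta_ind = 1.4 / gammas**2.5     # illustrative prefactor (our assumption)
eta_tot = eta_dir + eta_ind

print("minimum of eta near Gamma =", gammas[np.argmin(eta_tot)])
print("dir/ind crossing near Gamma =",
      gammas[np.argmin(np.abs(eta_dir - eta_ind))])
\end{verbatim}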
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.49\textwidth]{viscosity_separated.ps}
\includegraphics[width=0.49\textwidth]{viscosity_full.ps}
\end{center}
\caption{The direct and indirect parts of the
viscosity.}\label{viscosity}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.49\textwidth]{eta_dir.ps}
\includegraphics[width=0.49\textwidth]{eta_ind.ps}
\end{center}
\caption{The best fit of the direct and indirect parts of the
viscosity.}\label{viscosity_fit}
\end{figure}
\begin{table}[!h]
\caption{Reduced shear viscosity. See text.} \label{tb1}
\begin{center}
\begin{tabular}{cccccccccccc}
\hline
$\Gamma $ & $ 2 $ & $ 4 $ & $ 6 $ & $ 8 $ & $ 10 $ & $ 12 $ & $ 14 $ & $ 16 $ & $ 18 $ \\
\hline
$\eta^{*}_{QGP}$ & $ 0.286 $ & $ 0.092 $ & $ 0.067 $ & $ 0.066 $ & $ 0.070 $ & $ 0.076 $ & $ 0.081 $ & $ 0.087 $ & $ 0.092 $ \\
$\eta_{MD} $ & $ 0.217 $ & $ 0.168 $ & $ 0.168 $ & $ 0.139 $ & $ 0.132 $ & $ 0.127 $ & $ 0.124 $ & $ 0.122 $ & $ 0.120 $ \\
\hline
\end{tabular}
\end{center}
\end{table}
The reduced shear viscosity $\eta^*$ is dimensionless. To restore
dimensionality and compare with expectations for an SU(2) colored
Coulomb plasma, we first note that the particle density is about
$3\times\,0.244\,T^3=0.732\,T^3$. There are 3 physical gluons,
each carrying black-body density. The corresponding Wigner-Seitz
radius is then $a_{WS}=({3}/{4\pi n})^{{1}/{3}}\approx
{0.688}/{T}$. The Coulomb coupling is $\Gamma\approx
1.453\,({g^2N_c}/{4\pi})$. Since the plasmon frequency is
$\omega_p^2={\kappa_D^2}/{m\beta}={n}g^2N_c/m$, we get
$\omega_p^2\simeq3.066\,T^2({g^2N_c}/{4\pi})$ with $m\simeq3T$.
The unit of viscosity $\eta_*=nm\omega_pa_{WS}^2$ translates to
$1.822\,T^3({g^2}N_c/4\pi)^{{1}/{2}}$. In these units, the
viscosity for the SU(2) cQGP dips at about $0.066$, which is
$\eta_{QGP}\approx 0.066\,\eta_*\approx
0.120\,T^3\,(g^2N_c/4\pi)^{1/2}$. Since the entropy in our case is
$\sigma=6\,(4\pi^2/90)T^3$, we have for the SU(2) ratio
$\eta/\sigma|_{SU(2)}=0.046\,(g^2N_c/4\pi)^{1/2}$. The minimum in
the viscosity occurs at $\Gamma=1.453\,({g^2N_c}/{4\pi})\approx
8$, so that $({g^2N_c}/{4\pi})^{1/2}\approx 2.347$. Thus, our
shear viscosity to entropy ratio is
$\eta/\sigma|_{SU(2)}\simeq0.107$. A rerun of these estimates for
SU(3) yields $\eta/\sigma|_{SU(3)}\simeq0.078$, which is slightly below
the bound $\eta/\sigma=1/4\pi\simeq0.0795$ suggested by
holography.
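The numerical chain above can be checked in a few lines; the following
sketch (ours, with $T=1$) reproduces the quoted numbers:
\begin{verbatim}
import numpy as np

m = 3.0                                  # constituent mass m ~ 3T
n = 3 * 0.244                            # 3 gluons at black-body density
a_ws = (3 / (4 * np.pi * n)) ** (1 / 3)  # Wigner-Seitz radius ~ 0.688/T
g2Nc = 8 * a_ws                          # (g^2 Nc/4pi) at the minimum,
                                         # from Gamma = (g^2 Nc/4pi)/a_ws = 8
w_p = np.sqrt(n * 4 * np.pi * g2Nc / m)  # omega_p^2 = n g^2 Nc / m
eta_star = n * m * w_p * a_ws**2         # unit ~ 1.822 T^3 (g2Nc)^{1/2}
sigma = 6 * (4 * np.pi**2 / 90)          # entropy density of 6 gluon states
print(0.066 * eta_star / sigma)          # eta/sigma ~ 0.107
\end{verbatim}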
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.49\textwidth]{viscosity_approx_separated.ps}
\includegraphics[width=0.49\textwidth]{viscosity_approx_full.ps}
\end{center}
\caption{Comparison with weak coupling. See text.}
\label{viscosity_approx}
\end{figure}
Finally, we show in Fig.~\ref{viscosity_approx} the shear
viscosity $\eta^{*}$ at low $\Gamma$ (a: green) and large
$\Gamma$ (b: black) assessed using the weak-coupling structure
factor $S(k)=k^2/(k^2+k_D^2)$. The discrepancy is noticeable for
$\Gamma$ near the liquid point. The large discrepancy for small
values of $\Gamma$ reflects on the fact that the integrals in
(\ref{eq014v}) are infrared sensitive. The sensitivity is tamed by
our analytical structure factor and the simulations. We recall
that in weak coupling, the Landau viscosity $\eta_L$
is~\cite{ichimaru3}
\begin{equation}
\frac{\eta_L}{\eta^*}=\frac{5\sqrt{3\pi}}{18}\frac{1}{\Gamma^{5/2}}\frac 1{{\rm ln}(r_D/r_0)}
\label{LANDAU}
\end{equation}
which follows from a mean-field analysis of the kinetic equation with the
plasma dielectric constant set to 1. The logarithmic dependence in (\ref{LANDAU})
reflects on the infrared and ultraviolet sensitivity of the mean-field approximation.
Typically $r_D=1/k_D$ and $r_0=(g^2C_2/4\pi)\beta$, which are the Debye length
and the distance of closest approach. Thus
\begin{eqnarray}
\frac{\eta_L}{\eta^*}\approx\frac{5\sqrt{3\pi}}{27}\frac{1}{\Gamma^{5/2}}\frac 1{{\rm ln}(1/\Gamma)}
\label{LANDAU1}
\end{eqnarray}
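The logarithm follows from expressing both scales in plasma units:
using $n=3/(4\pi a_{WS}^3)$ one has $\kappa_D=\sqrt{3\Gamma}/a_{WS}$ and
$r_0=\Gamma\,a_{WS}$, so that
\begin{equation}
\frac{r_D}{r_0}=\frac 1{\kappa_D\,r_0}=\frac 1{\sqrt{3}\,\Gamma^{3/2}}
\qquad\Rightarrow\qquad
{\rm ln}\,(r_D/r_0)\approx\frac 32\,{\rm ln}(1/\Gamma)
\end{equation}
up to an additive constant.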
or $\eta_L/\eta^*\approx 0.6/(\Gamma^{5/2}{\rm ln}(1/\Gamma))$ which is overall consistent
with our analysis.
The Landau or mean-field result for the viscosity is smaller than the result
from perturbative QCD. Indeed, the unscaled Landau viscosity (\ref{LANDAU1})
reads
\begin{equation}
\eta_L\approx \frac{10}{24}\frac{\sqrt{m}}{(\alpha_sC_2)^2\beta^{5/2}}\frac{1}{\alpha_s}
\label{LANDAU2}
\end{equation}
after restoring the viscosity unit $\eta^*=nm\omega_pa_{WS}^2$ and using
${\rm ln}(r_D/r_0)\approx 3{\rm ln}(1/\alpha_s)/2$ with $\alpha_s=g^2/4\pi$.
While our constituent gluons carry $m\approx \pi T$, in the mean field or weak
coupling we can set their masses to $m\approx gT$. With this in mind, and setting $C_2=N_c=3$ in
(\ref{LANDAU2}), we obtain
\begin{equation}
\eta_L\approx \frac{5\sqrt{2}}{108\pi^{1/4}}\frac{T^3}{\alpha_s^{7/4}{\rm ln}(1/\alpha_s)}
\approx 0.05\frac{T^3}{\alpha_s^{7/4}{\rm ln}(1/\alpha_s)}
\label{LANDAU3}
\end{equation}
which is to be compared with the QCD weak coupling
result~\cite{heiselberg}
\begin{equation}
\eta_{QCD}\approx \frac{T^3}{\alpha_s^{2}{\rm ln}(1/\alpha_s)}
\label{LANDAU4}
\end{equation}
The mean-field result (\ref{LANDAU3}) is $\alpha_s^{1/4}\approx \sqrt{g}$ {\it smaller} in weak
coupling than the QCD perturbative result. The reason is that in perturbative QCD the
viscosity is caused not only by collisions with the underlying parton constituents, but also
by quantum recombinations and decays. These latter effects are absent in our classical QGP.
\section{Diffusion Constant}
\renewcommand{\theequation}{IX.\arabic{equation}}
\setcounter{equation}{0}
The calculation of the diffusion constant in the SU(2) plasma is
similar to that of the shear viscosity. The governing equation is
again (\ref{RES2}) with $\Sigma$ and ${\bf S}$ replaced by
$\Sigma_s$, ${\bf S}_s$. The label $s$ is short for single particle.
The difference between ${\bf S}$ and ${\bf S}_s$ is the
substitution of (\ref{FDIS}) by
\begin{equation}
f_s(\vec r\vec p \vec Q t)=\sqrt{N}\delta(\vec r-\vec
r_1(t))\delta(\vec p-\vec p_1(t))\delta(\vec Q-\vec Q_1(t))
\label{eq001t}
\end{equation}
The diffusion constant follows from the velocity auto-correlator
\begin{equation}
V_D(t)=\frac{1}{3}\langle\vec V(t)\cdot\vec V(0)\rangle
\label{eq003t}
\end{equation}
through
\begin{equation}
D=\int_{0}^{\infty}dt V_D(t)\label{eq002t}
\end{equation}
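In a molecular dynamics setting, (\ref{eq003t}) and (\ref{eq002t}) can
be evaluated directly from the particle trajectories; a minimal
Green-Kubo sketch (ours, assuming a stored velocity array of shape
(nsteps, N, 3)) is
\begin{verbatim}
import numpy as np

def diffusion_constant(v, dt):
    """D = int_0^inf dt <v(t).v(0)>/3, with the autocorrelator
    averaged over particles and time origins."""
    nsteps = v.shape[0]
    nlag = nsteps // 2
    vacf = np.empty(nlag)
    for lag in range(nlag):
        vacf[lag] = np.mean(np.sum(v[lag:] * v[:nsteps - lag],
                                   axis=2)) / 3.0
    # trapezoidal integration of the truncated correlator
    return dt * (vacf.sum() - 0.5 * (vacf[0] + vacf[-1]))
\end{verbatim}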
Solving (\ref{RES2}) using the method of one-Sonine polynomial
approximation as in~\cite{gould&mazenko} yields the Langevin-like
equation
\begin{equation}
\frac{dV_D(t)}{dt}=-\int_{0}^{t}dt'M(t')V_D(t-t') \label{eq004t}
\end{equation}
with the memory kernel tied to $\Sigma_{C0}^S$,
\begin{eqnarray}
&& n\,f_0(\vec p')\, \Sigma_{Cl}^S(t, \vec k, \vec p\vec p')
\nonumber \\
& & = -\frac{1}{\beta} \int d\vec p_1 d\vec p_2 \int \frac{d\vec
l}{(2\pi)^3} \vec l\cdot\nabla_{\vec p}\vec l\cdot\nabla_{\vec p'}
{\bf c}_{D1}(l) V_{\vec l} \nonumber \\
& & \times \Big( \frac{l}{2l+1} {\bf S}_{l-1}^S(t, \vec k-\vec
l,\vec p\vec p'){\bf S}_1(t, \vec l,\vec p_1\vec
p_2)+\frac{l+1}{2l+1} {\bf S}_{l+1}^S(t, \vec k-\vec l,\vec p\vec
p'){\bf S}_1(t, \vec l,\vec p_1\vec p_2)\Big) \nonumber \\
\label{eq005d}
\end{eqnarray}
and
\begin{eqnarray}
&& n\,f_0(\vec p')\, \Sigma_{C0}^S(t, \vec k=\vec 0, \vec p\vec
p')
\nonumber \\
& & = -\frac{1}{\beta} \int d\vec p_1 d\vec p_2 \int \frac{d\vec
l}{(2\pi)^3}\vec l\cdot\nabla_{\vec p}\vec l\cdot\nabla_{\vec p'}
{\bf c}_{D1}(l)V_{\vec l} {\bf S}_{1}^S(t,\vec l,\vec p\vec
p'){\bf S}_1(t, \vec l,\vec p_1\vec p_2) \label{eq006d}
\end{eqnarray}
therefore
\begin{equation}
M(t)=\frac{\beta}{3m}\int d\vec p d\vec p' \vec p\cdot\vec p'
\Sigma_{C0}^S(t,\vec k=\vec 0,\vec p\vec p')f_0(\vec p')
\label{eq005t}
\end{equation}
which clearly projects out the singlet color contribution. If we
introduce the dimensionless diffusion constant
$D^{*}=D/\omega_pa_{WS}^2$, then (\ref{eq002t}) together with
(\ref{eq004t}) yield
\begin{equation}
\frac{1}{D}=m\beta\int_{0}^{\infty}dtM(t) \rightarrow
\frac{1}{D^{*}}=3\Gamma\int_{0}^{\infty}\omega_p dt\,\frac{M(t)}{\omega_p^2}=
3\Gamma\int_{0}^{\infty} d\tau\bar{M}(\tau) \label{eq006t}
\end{equation}
Using similar steps as for the derivation of the viscosity, we can
unwind the self-energy kernel $\Sigma_s$ in (\ref{eq006t}) to give
\begin{equation}
\frac{1}{D^{*}}=-\Gamma\int\frac{d\vec l}{(2\pi
)^3}\int_{0}^{\infty}d\tau \vec l^2 {\bf c}_{D1}(l)V_{\vec l}
{\cal G}_{n1}^S(l,t)\,{\cal G}_{n1}(l,t) \label{eq007t}
\end{equation}
where we have used the same half-renormalization method
discussed above for the viscosity. The color integrations are
done by Legendre transforms. Here again, we separate the
time-dependent structure factors as ${\cal G}_{n1}(l,t)={\bf
S}_{01}(l)\bar{\cal G}_{n1}(l,t)$ and ${\cal
G}_{n1}^S(l,t)=\bar{\cal G}_{n1}(l,t)$ in the free particle
approximation. Thus
\begin{equation}
\frac{1}{D^{*}}=\Gamma^{3/2}\Big(\frac{1}{3\pi}
\Big)^{\frac{1}{2}}\int_{0}^{\infty} dq q(1-{\bf S}_{01}(q))
\label{eq008t}
\end{equation}
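Numerically, (\ref{eq008t}) is a one-dimensional quadrature over the
fitted monopole structure factor; a minimal sketch (ours, reusing the
purely illustrative fit parameters from above) is
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def S01(q, C0=-1.0, C1=2.0, C2=0.5, C3=0.7):
    # fitted parametrization (FIT); the C's are illustrative only
    return 1.0 + C0 * np.exp(-q / C1) * np.sin((q - C2) / C3)

def D_star(Gamma, qmax=100.0):
    # 1/D* = Gamma^{3/2} (1/3pi)^{1/2} int_0^qmax dq q (1 - S01(q))
    integral, _ = quad(lambda q: q * (1.0 - S01(q)), 0.0, qmax, limit=200)
    return 1.0 / (Gamma**1.5 * np.sqrt(1.0 / (3.0 * np.pi)) * integral)

for G in (2.0, 4.0, 8.0, 16.0):
    print(G, D_star(G))
\end{verbatim}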
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.55\textwidth]{diffusion_full_half.ps}
\end{center}
\caption{Diffusion Constant (black, green) versus molecular dynamics
simulations (red). See text. } \label{diffusion}
\end{figure}
\begin{table}[!h]
\caption{Diffusion constant. See text.}
\label{tb2}
\begin{center}
\begin{tabular}{cccccccccc}
\hline
$\Gamma $ & $ 2 $ & $ 4 $ & $ 6 $ & $ 8 $ & $ 10 $ & $ 12 $ & $ 14 $ & $ 16 $ & $ 18 $ \\
\hline
$D^{*}_{QGP} $ & $ 0.410 $ & $ 0.115 $ & $ 0.055 $ & $ 0.034 $ & $ 0.024 $ & $ 0.017 $ & $ 0.014 $ & $ 0.012 $ & $ 0.010 $ \\
$D^{*}_{MD} $ & $ 0.230 $ & $ 0.132 $ & $ 0.095 $ & $ 0.076 $ & $ 0.063 $ & $ 0.055 $ & $ 0.048 $ & $ 0.044 $ & $ 0.040 $ \\
\hline \label{DIFF}
\end{tabular}
\end{center}
\end{table}
The results following from (\ref{eq008t}) are displayed in
Table~\ref{DIFF} and in Fig.~\ref{diffusion} (black) from weak to
strong coupling. For comparison, we also show the diffusion
constant measured using molecular dynamics simulations of an
SU(2) colored Coulomb plasma~\cite{gelmanetal}. The molecular
dynamics simulations are fitted to
\begin{equation}
D^{*}\simeq\frac{0.4}{\Gamma^{0.8}}
\end{equation}
For comparison, we also show the diffusion constant (\ref{eq008t})
assessed using the weak coupling or Debye structure factor $S(k)=k^2/(k^2+k_D^2)$
in Fig.~\ref{diffusion} (green). The discrepancy with the analytical
results at small $\Gamma$ is similar to the one we noted above
for the shear viscosity. With our resummed structure factor
of Fig.~\ref{fluctuation}, the infrared behavior of the cQGP is controlled,
in contrast to the simple Debye structure factor.
Finally, a comparison of (\ref{eq008t}) to (\ref{eq014v}) shows that
$1/D^*$ involves the same structure-factor integral as $\lambda I_3$
and grows like
$\Gamma^{3/2}$. Thus $D^*$ drops like $1/\Gamma^{3/2}$,
which is close to the numerically generated result fitted in
Fig.~\ref{diffusion_fit} (left). The weak coupling self-diffusion
coefficient scales as $1/\Gamma^{5/2}$, as shown in Fig.~\ref{diffusion_fit} (right).
More importantly, the diffusion constant in the SU(2) colored Coulomb
plasma is caused solely by the non-hydrodynamical modes or single
particle collisions in our analysis.
It does not survive at strong coupling, where
most of the losses are caused by the collective sound and/or
plasmon modes. This result is in contrast with the shear viscosity we
discussed above, where the hydrodynamical modes level it off at large $\Gamma$.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.495\textwidth]{diffusion_fit_half.ps}
\includegraphics[width=0.495\textwidth]{diffusion_free_fit.ps}
\end{center}
\caption{Fit to the diffusion constant. See text.}
\label{diffusion_fit}
\end{figure}
\section{Conclusions}
We have provided a general framework for discussing
non-perturbative many-body dynamics in the colored SU(2) Coulomb
plasma introduced in \cite{SZ_newqgp}. The framework extends the
analysis developed initially for one-component Abelian plasmas to
the non-Abelian case. In the latter, the Liouville operator is
supplemented by a color precessing contribution that contributes
to the connected part of the self-energy kernel.
The many-body content of the SU(2) colored Coulomb plasma is best
captured by the Liouville equation in phase space in the form of
an eigenvalue-like equation. A standard projected perturbation
theory-like analysis around the static phase space distributions
yields a resummed self-energy kernel in closed form. Translational
invariance in space and rigid color rotational invariance in phase
space simplify the nature of the kernel.
In the hydrodynamical limit, the phase space projected equations
for the time-dependent and resummed structure factor display both
transverse and longitudinal hydrodynamical modes. The shear
viscosity and longitudinal diffusion constant are expressed
explicitly in terms of the resummed self-energy kernel. The latter
is directly tied to the interacting part of the Liouville
operator in color space. We have shown that in the free streaming
approximation with half-renormalized Liouville operators, the
transport parameters are finite.
We have explicitly derived the shear viscosity and longitudinal
diffusion constant of the SU(2) colored Coulomb plasma in terms of
the static monopole structure factor, for all values of the
classical Coulomb parameter $\Gamma=V/K$, the ratio of the
potential to kinetic energy per particle. The results compare
fairly with molecular dynamics simulations for SU(2).
The longitudinal diffusion constant is found to drop from weak to
strong coupling like $1/\Gamma^{3/2}$. The shear viscosity is
found to reach a minimum for $\Gamma$ of about 8. The large
increase at weak coupling is the result of the large mean free
paths and encoded in the direct or driving part of the connected
self-energy. The minimum at intermediate $\Gamma$ is tied with the
onset of hydrodynamics which reflects on the liquid nature of the
colored Coulomb plasma in this regime.
At larger values of $\Gamma$ an SU(2) crystal forms as reported
in~\cite{SZ_newqgp}. Our current analysis should be able to
account for the emergence of elasticities, with in particular an
elastic shear mode. This point will be pursued in a future
investigation. The many body analysis presented in this work treats the color
degrees of freedom as massive constituents with a finite mass and
a classical SU(2) color charge. The dynamical analysis is fully
classical. In a way, quantum mechanics is assumed to generate
the constituent degrees of freedom with their assigned parameters.
While this picture is supported by perturbation theory at very
weak coupling, its justification at strong coupling is by no means
established.
\begin{acknowledgements}
This work was supported in part by US DOE grants DE-FG02-88ER40388
and DE-FG03-97ER4014.
\end{acknowledgements}
\newpage
\section{Introduction}
Central black holes (BHs) with masses ranging from $\sim 10^6$ to a few $\times
10^9$ $M_\odot$\ are an integral component of most, perhaps all, galaxies with
a bulge component (Kormendy 2004), and although rarer, at least some late-type
galaxies host nuclear BHs with masses as low as $\sim 10^5$ $M_\odot$\
(Filippenko \& Ho 2003; Barth et al. 2004; Greene \& Ho 2007a, 2007b). It is
now widely believed that BHs play an important role in the life cycle of
galaxies (see reviews in Ho 2004). To date, most of the observational effort
to investigate the relationship between BHs and their host galaxies have
focused on the stellar component of the hosts, especially the velocity
dispersion and luminosity of the bulge, which empirically seem most closely
coupled to the BH mass. Although the gas content of inactive galaxies has
been extensively studied (e.g., Haynes \& Giovanelli 1984; Knapp et al. 1985;
Roberts et al. 1991; Bregman et al. 1992; Morganti et al. 2006),
comparatively little attention has been devoted to characterizing the
interstellar medium of active galaxies or of systems with knowledge of their
BH mass or accretion rate.
The gaseous medium of the host galaxy, especially the cold phase as traced
in neutral atomic or molecular hydrogen, offers a number of diagnostics
inaccessible by any other means. Since cold gas constitutes the very raw
material out of which both the stars form and the BH grows, the cold gas
content of the host galaxy is one of the most fundamental quantities that can
be measured in the effort to understand the coevolution of BHs and galaxies.
At the most rudimentary level, we might naively expect the gas content of the
host to be correlated with the BH accretion rate or the luminosity of its
active galactic nucleus (AGN). Likewise, the gas content should reflect
the particular evolutionary stage of the host galaxy. Many current models
(e.g., Granato et al. 2004; Springel et al. 2005) invoke AGN feedback as a
key ingredient for galaxy formation and for coupling the BH to its host.
Depending on the violence with which the accretion energy is injected into the
host and the evolutionary state of the system, AGN feedback can wreak havoc on
the interstellar medium of the host. For example, recent \ion{H}{1}\ absorption
observations of radio-loud AGNs detect substantial quantities of high-velocity
outflowing neutral gas, presumably in the midst of being expelled from the
host galaxy by the radio jet (Morganti et al. 2007). Performing a careful,
systematic census of the cold gas content of AGN hosts will provide much
needed empirical guidance for AGN feedback models. Apart from the sheer gas
mass, \ion{H}{1}\ and CO observations, even when conducted in spatially unresolved
mode, can provide other useful probes of the physical properties of the host,
and of its circumgalactic environment (e.g., Ho 2007a, 2007b). For example,
the width of the integrated line profile, if it is sufficiently regular, gives
an estimate of the rotation velocity of the disk, and hence an additional
handle on the gravitational potential of the system. Combining the line width
with the Tully-Fisher relation (Tully \& Fisher 1977), we can immediately infer the total
luminosity of the host, independent of any contamination from the AGN. The
degree of symmetry of the line profile furnishes useful, if crude, information
on the spatial distribution of gas within and around the host, as well as an
effective probe of possible dynamic disturbances due to neighboring galaxies.
The primary goal of this study is to quantify the \ion{H}{1}\ content of a large,
well-defined sample of active galaxies with uniformly measured BH masses and
optical properties, spanning a wide range in AGN properties. Despite the
obvious importance of
\vskip 0.3cm
\figurenum{1}
\psfig{file=f1.eps,width=8.5cm,angle=0}
\figcaption[fig1.ps]{
The distribution of BH masses and Eddington ratios for the sample included
in this study. The 101 newly surveyed objects for which \ion{H}{1}\ observations
were successfully obtained are plotted as circles, while the sample of 53
sources taken from the literature are marked as triangles.
\label{fig1}}
\vskip 0.3cm
\noindent
understanding the cold gas component of AGN host
galaxies, there has been relatively little modern work conducted with this
explicit goal in mind. Although there have been a number of \ion{H}{1}\ surveys of
AGNs, most of them have focused on relatively low-luminosity Seyfert nuclei
(Allen et al. 1971; Heckman et al. 1978; Bieging \& Biermann 1983; Mirabel \&
Wilson 1984; Hutchings 1989; Greene et al. 2004) and radio-emitting elliptical
galaxies (Dressel et al. 1982; Jenkins 1983), with only limited attention
devoted to higher luminosity quasars (Condon et al. 1985; Hutchings et al.
1987; Lim \& Ho 1999). This is in part due to sensitivity limitations
(quasars are more distant), but also due to the poor baselines of pre-upgrade
Arecibo\footnote{The Arecibo Observatory is part of the National Astronomy and
Ionosphere Center, which is operated by Cornell University under a cooperative
agreement with the National Science Foundation.} spectra. With the new
Gregorian optics, $L$-band receiver, and modern
backend at Arecibo, the time is ripe to revisit the problem in a concerted
fashion. In light of the scientific issues outlined above, the motivation has
never been stronger. We are particularly keen to use the \ion{H}{1}\ line width as a
kinematic tracer of the host galaxy potential. Since the rotation velocity of
the disk is correlated with the stellar velocity dispersion of the bulge (see
Ho 2007a, and references therein), the \ion{H}{1}\ line width can be used as a new
variable to investigate the correlation between BH mass and galaxy potential.
We are additionally interested in using the \ion{H}{1}\ spectra to obtain dynamical
masses for the host galaxies, to use the line shape to probe the nearby
environment and dynamical state of the hosts, and to evaluate possible
correlations between \ion{H}{1}\ content and AGN properties. These issues are
investigated in a companion paper (Ho et al. 2008).
\section{Observations and Data Reduction}
\subsection{Sample}
Our sample of AGNs was chosen with one overriding scientific motivation in
mind: the availability of a reliable BH mass estimate. As we rely on
the virial mass method to estimate BH masses (Kaspi et al. 2000;
Greene \& Ho 2005b; Peterson 2007), this limits our targets to type 1 AGNs.
Sensitivity considerations with the current Arecibo system impose a practical
redshift limit of $z$\lax0.1. Apart from these two factors, and the visibility
restrictions of Arecibo (0$^{\circ}$\ {$\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$}\ $\delta$ {$\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$}\ 37$^{\circ}$), the targets were
selected largely randomly to fill the available schedule blocks of the
telescope. The 113 newly observed objects, whose basic properties are
summarized in Table~1, contain two subsamples. The first comprises 98 type 1
AGNs from the Fourth Data Release of the Sloan Digital Sky Survey (SDSS;
Adelman-McCarthy et al. 2006), which form part of an on-going study of
low-redshift AGNs by Greene (2006; see also Greene \& Ho 2004, 2005b, 2006a,
2006b, 2007a, 2007b). Although the SDSS objects do not strictly form a
complete or unbiased sample, they are representative of low-redshift
broad-line AGNs of moderate to high luminosities. With $M_g \approx -18.8$
to $-23.1$ mag, only $\sim 3-4$ objects satisfy the conventional luminosity
threshold of quasars\footnote{The canonical luminosity threshold of quasars,
$M_B = -23.0$ mag (Schmidt \& Green 1983), translates to $M_B = -22.1$ mag in
our distance scale, which assumes $H_0$ = 70 km s$^{-1}$~Mpc$^{-1}$, $\Omega_m
= 0.3$, and $\Omega_{\Lambda} = 0.7$. For a power-law AGN spectrum of the
form $f_\lambda \propto \lambda^{-1.56}$ (Vanden~Berk et al. 2001), this
threshold is $M_g \approx -22.3$ mag.}, but most are very prominent Seyfert 1
nuclei. Twenty-eight of the objects have broad H$\alpha$\ profiles with full-width
at half maximum (FWHM) less than 2000 km s$^{-1}$, and thus meet the formal line width
criterion of narrow-line Seyfert 1 galaxies (e.g., Osterbrock \& Pogge 1985).
The second subsample, 15 objects in total, comprises sources primarily chosen because they
have been studied with reverberation mapping (Kaspi et al. 2000; Peterson et
al. 2004); we deem these to be high-priority objects because they have
better-determined BH masses. This subsample includes seven Palomar-Green (PG)
sources (Schmidt \& Green 1983), among them five luminous enough to qualify as
bona fide quasars, and two satisfying the line width criterion of narrow-line
Seyfert 1 galaxies (PG~0003+199 and PG~1211+143).
To augment the sample size and to increase its dynamic range in terms of BH
mass and AGN luminosity, we performed a comprehensive search of the literature
to compile all previously published \ion{H}{1}\ measurements of type~1 AGNs that
have sufficient optical data to allow estimation of BH masses. The results of
this exercise yielded a sizable number of additional objects (53), the
details of which are documented in the Appendix. Our final sample, now
totaling 166 and by far the largest ever studied, covers a wide range of BH
masses, from {$M_{\rm BH}$}\ $\approx\,10^5$ to $10^9$ $M_\odot$, and a significant
spread in Eddington ratios, from $\log L_{\rm bol}/L_{\rm Edd} \approx -2.7$
to 0.3 (Fig.~1), where $L_{\rm Edd} \equiv 1.26 \times 10^{38}
\left(M_{\rm BH}/M_{\odot}\right)$ ergs s$^{-1}$. Although the sample admittedly
contains predominantly low-luminosity AGNs, it covers at least 4 orders of
magnitude in nuclear luminosity (Fig.~2{\it a}), from $L_{\rm H\alpha} \approx
10^{40}$ to $10^{44}$ ergs s$^{-1}$\ (excluding the ultra-low-luminosity object
NGC~4395 at $L_{\rm H\alpha} \approx 10^{38}$ ergs s$^{-1}$), which in more familiar
units corresponds to $B$-band absolute magnitudes of $M_B \approx -15.5$ to
$-24.75$ mag (Fig.~2{\it b}).
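
As an illustration of how the Eddington ratios in Figure~1 follow from the
tabulated quantities, the minimal Python sketch below evaluates $L_{\rm Edd}$
and $\log L_{\rm bol}/L_{\rm Edd}$; the input mass and luminosity are
hypothetical values chosen only for the example.
\begin{verbatim}
import math

def l_edd(m_bh):
    # Eddington luminosity (ergs/s) for a BH mass in solar masses.
    return 1.26e38 * m_bh

m_bh  = 1.0e7   # hypothetical BH mass (solar masses)
l_bol = 1.0e44  # hypothetical bolometric luminosity (ergs/s)

print("log Lbol/LEdd = %.2f" % math.log10(l_bol / l_edd(m_bh)))
# -> log Lbol/LEdd = -1.10
\end{verbatim}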
\subsection{Arecibo Observations}
We observed the 21~cm spin-flip transition of neutral hydrogen (\ion{H}{1})
in our sample at the Arecibo radio telescope from
\vskip 0.3cm
\begin{figure*}[t]
\figurenum{2}
\centerline{\psfig{file=f2.eps,width=19.5cm,angle=-90}}
\figcaption[fig2.ps]{
The distribution of ({\it a}) H$\alpha$\ luminosity and ({\it b}) $B$-band absolute
magnitude of the AGN component for the sample objects with \ion{H}{1}\ data. The
H$\beta$\ luminosities of the literature sample (Table~6) were converted to
H$\alpha$\ assuming H$\alpha$/H$\beta$\ = 3.5, as empirically determined by Greene \& Ho
(2005b). To convert between H$\alpha$\ luminosity and $B$-band absolute magnitude,
we employ the correlation between H$\alpha$\ and 5100 \AA\ continuum luminosity of
Greene \& Ho (2005b), and then assume a continuum spectrum of the form
$f_\lambda \propto \lambda^{-1.56}$ (Vanden~Berk et al. 2001) to extrapolate
to 4400 \AA.
\label{fig2}}
\end{figure*}
\vskip 0.3cm
\noindent
November 2005 through April 2007. The \ion{H}{1}\ observations were
conducted in four independently tracked 25~MHz bands centered on redshifted
\ion{H}{1} and OH (1420.405751786, 1612.2310, 1667.3590, and 1720.5300~MHz,
respectively). Observations consisted of 5-minute position-switched scans,
with a calibration diode fired after each position-switched pair and spectral
records recorded every 6~seconds. Typically sources were observed for
1--2~hours. The autocorrelation spectrometer used 1024 channels and 9-level
sampling in two (subsequently averaged) polarizations. Rest-frame velocity
resolutions ranged from 5.15~km~s$^{-1}$ ($z=0$) to 5.72~km~s$^{-1}$
($z=0.11$), but most spectra were Hanning smoothed, reducing the velocity
resolution roughly by a factor of 2.
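
The quoted velocity resolutions follow directly from the channel width; a
minimal Python sketch of the arithmetic (before Hanning smoothing) is given
below.
\begin{verbatim}
c_kms = 2.998e5            # speed of light (km/s)
nu_hi = 1420.405751786e6   # HI rest frequency (Hz)
dnu   = 25.0e6 / 1024      # channel width: 25 MHz over 1024 channels

for z in (0.0, 0.11):
    dv = c_kms * dnu * (1.0 + z) / nu_hi
    print("z = %.2f : %.2f km/s" % (z, dv))
# -> 5.15 km/s at z = 0 and 5.72 km/s at z = 0.11
\end{verbatim}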
Records were individually calibrated and bandpasses flattened using the
calibration diode and the corresponding off-source records. Records and
polarizations were subsequently averaged, and a low-order polynomial baseline
was fit and subtracted. Systematic flux calibration errors in these data are
of order $10\%$. All data reduction was performed in AIPS++\footnote{The
AIPS++ (Astronomical Information Processing System) is freely available for
use under the Gnu Public License. Further information may be obtained from
{\tt http://aips2.nrao.edu}.}. Some spectra showed standing waves due to
resonances within the telescope superstructure or strong continuum sources
($\gtrsim 300$~mJy) falling in the beam (either coincidentally or due to
strong radio emission from the target galaxy itself). The expected \ion{H}{1}\
line widths are similar to the size of the standing wave features
($\sim1$~MHz), so the detectability of lines was severely impaired in a few
cases.
The observed bands were generally interference-free in the vicinity of the
observed lines, requiring little or no flagging, but some redshift ranges,
most prominently $z\simeq 0.066$--0.069 and $z\simeq0.051$--0.054, were
unobservable due to radio frequency interference (RFI). Hence, the redshift
distribution of the sample has gaps. \ion{H}{1} lines were detected in 66
galaxies in the sample, 35 galaxies yielded significant nondetections, and 12
galaxies are indeterminate due to standing waves in the bandpass or RFI. The
spectra for the detected sources are plotted in Figure~3, accompanied by
their optical images. No 18 cm OH lines were detected
in the sample (many lines were unobservable due to RFI).
The \ion{H}{1}\ properties of the sample are summarized in Table~2. For each
detected source, we list the systemic velocity of the line in the barycentric
frame ($\upsilon_{\rm sys}$), defined to be the midpoint (mean) of the
velocities corresponding to the 20\% point of the two peaks in the \ion{H}{1}\
profile, and the line width $W_{20}$, the difference between these two
velocities. The actual high and low velocities are obtained from an
interpolation between the two data points bracketing 20\% of peak flux. For a
typical root-mean-square noise level of $\sim 0.3$ mJy, we estimate that the
uncertainty in the systemic velocity is $\sigma(\upsilon_{\rm sys}) \approx
3.4$ km s$^{-1}$; the uncertainty in the line width is $\sigma(W_{20}) = 2
\sigma(\upsilon_{\rm sys}) \approx 6.8$ km s$^{-1}$. In practice, these formal
values underestimate the true errors for spectra affected by RFI or poor
bandpasses, or in instances when the line profile is not clearly
double-peaked. Profiles that are single-peaked and/or highly
asymmetric are noted in Table~2.
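
The measurement just described can be summarized by the following simplified
Python sketch, which locates the 20\% crossings by linear interpolation. For
brevity it uses the global peak for both edges, whereas the actual procedure
uses the 20\% level of each horn separately; it also assumes the profile
drops below that level at both ends of the velocity range.
\begin{verbatim}
import numpy as np

def vsys_w20(vel, flux):
    # vel: ascending velocities (km/s); flux: line profile.
    level = 0.2 * flux.max()
    above = np.where(flux >= level)[0]
    i_lo, i_hi = above[0], above[-1]

    def cross(i0, i1):
        # Velocity where flux = level, interpolated between i0 and i1.
        return vel[i0] + (level - flux[i0]) * \
               (vel[i1] - vel[i0]) / (flux[i1] - flux[i0])

    v_lo = cross(i_lo - 1, i_lo)
    v_hi = cross(i_hi + 1, i_hi)
    return 0.5 * (v_lo + v_hi), v_hi - v_lo   # v_sys, W20
\end{verbatim}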
To convert the raw line widths to ${\upsilon_m}$, the maximum rotational velocity, four
corrections must be applied to $W_{20}$: (1) instrumental resolution, which we
assume to be $W_{\rm inst} = 10$ or 5 km s$^{-1}$, depending on whether the spectrum
was Hanning smoothed or not, and that it can be removed by linear subtraction;
(2) redshift, which stretches the line width by a factor $(1+z)$; (3)
turbulent broadening, which for simplicity we assume to be $W_{\rm turb} = 22$
km s$^{-1}$\ for $W_{20}$ and can be subtracted linearly (Verheijen \& Sancisi 2001);
and (4) inclination angle. We assume that the inclination of the \ion{H}{1}-emitting
disk to the line-of-sight can be approximated by the photometric inclination
angle of the optical disk, $i$ (see \S2.3). The final maximum rotational
velocity is then
\begin{figure*}[t]
\centerline{\psfig{file=table1_p1.ps,height=0.99\textheight,angle=180}}
\end{figure*}
\clearpage
\begin{figure*}[t]
\centerline{\psfig{file=table1_p2.ps,height=0.99\textheight,angle=180}}
\end{figure*}
\clearpage
\begin{figure*}[t]
\hskip -0.4truein
\centerline{\psfig{file=table1_p3.ps,height=0.99\textheight,angle=180}}
\end{figure*}
\clearpage
\vskip 0.3cm
\begin{figure*}[t]
\figurenum{3{\it a}}
\centerline{\psfig{file=f3a.eps,width=19.5cm,angle=0}}
\figcaption[fig3.ps]{
\ion{H}{1}\ spectra and optical $g$-band SDSS images of the \ion{H}{1}-detected objects.
The velocity scale is given in the barycentric frame, and the velocity
range is chosen such that the lines have roughly comparable widths on the
plots. Features suspected to be due to radio frequency interference are
labeled ``RFI.'' Each image subtends a physical scale of 50 kpc $\times$ 50
kpc, with north oriented up and east to the left.
\label{fig3}}
\end{figure*}
\vskip 0.3cm
\vskip 0.3cm
\begin{figure*}[t]
\figurenum{3{\it b}}
\centerline{\psfig{file=f3b.eps,width=19.5cm,angle=0}}
\figcaption[fig3.ps]{
Same as Fig.~3{\it a}.
\label{fig3}}
\end{figure*}
\vskip 0.3cm
\vskip 0.3cm
\begin{figure*}[t]
\figurenum{3{\it c}}
\centerline{\psfig{file=f3c.eps,width=19.5cm,angle=0}}
\figcaption[fig3.ps]{
Same as Fig.~3{\it a}.
\label{fig3}}
\end{figure*}
\vskip 0.3cm
\vskip 0.3cm
\begin{figure*}[t]
\figurenum{3{\it d}}
\centerline{\psfig{file=f3d.eps,width=19.5cm,angle=0}}
\figcaption[fig3.ps]{
Same as Fig.~3{\it a}.
\label{fig3}}
\end{figure*}
\vskip 0.3cm
\vskip 0.3cm
\begin{figure*}[t]
\figurenum{3{\it e}}
\centerline{\psfig{file=f3e.eps,width=19.5cm,angle=0}}
\figcaption[fig3.ps]{
Same as Fig.~3{\it a}.
\label{fig3}}
\end{figure*}
\vskip 0.3cm
\vskip 0.3cm
\begin{figure*}[t]
\figurenum{3{\it f}}
\centerline{\psfig{file=f3f.eps,width=19.5cm,angle=0}}
\figcaption[fig3.ps]{
Same as Fig.~3{\it a}.
\label{fig3}}
\end{figure*}
\vskip 0.3cm
\vskip 0.3cm
\begin{figure*}[t]
\figurenum{3{\it g}}
\centerline{\psfig{file=f3g.eps,width=19.5cm,angle=0}}
\figcaption[fig3.ps]{
Same as Fig.~3{\it a}. The image for Akn 120 comes from {\it HST}/PC2 (filter
F750LP) and subtends 22.4 kpc $\times$ 22.4 kpc.
\label{fig3}}
\end{figure*}
\vskip 0.3cm
\vskip 0.3cm
\begin{figure*}[t]
\figurenum{3{\it h}}
\centerline{\psfig{file=f3h.eps,width=19.5cm,angle=0}}
\figcaption[fig3.ps]{
Same as Fig.~3{\it a}, except that for MCG~+01-13-012, NGC~3227,
NGC~5548, and NGC~7469 the images are in the $B$ band taken from the
Digital Sky Survey. The images for the four PG quasars come from {\it HST},
taken with the following detector and filter combinations and field sizes:
PG~0844+349 (ACS/F625W, 84.6 kpc $\times$ 84.6 kpc),
PG~1229+204 (PC2/F606W, 43.9 kpc $\times$ 43.9 kpc),
PG~1426+015 (PC2/F814W, 61.4 kpc $\times$ 61.4 kpc), and
PG~2130+099 (PC2/F450W, 43.9 kpc $\times$ 43.9 kpc).
\label{fig3}}
\end{figure*}
\vskip 0.3cm
\vskip 0.3cm
\begin{figure*}[t]
\figurenum{3{\it i}}
\centerline{\psfig{file=f3i.eps,width=19.5cm,angle=0}}
\figcaption[fig3.ps]{
Same as Fig.~3{\it a}, except that for RX~J0602.1+2828 and RX~J0608.0+3058
the images are in the $B$ band and were taken from the Digital Sky Survey.
\label{fig3}}
\end{figure*}
\vskip 0.3cm
\vskip 0.3cm
\begin{figure*}[t]
\figurenum{4{\it a}}
\centerline{\psfig{file=f4a.eps,width=19.5cm,angle=0}}
\figcaption[fig4.ps]{
Optical $g$-band SDSS images of the \ion{H}{1}\ nondetections. Each image subtends a
physical scale of 50 kpc $\times$ 50 kpc, with north oriented up and east to
the left.
\label{fig4}}
\end{figure*}
\vskip 0.3cm
\vskip 0.3cm
\begin{figure*}[t]
\figurenum{4{\it b}}
\centerline{\psfig{file=f4b.eps,width=19.5cm,angle=0}}
\figcaption[fig4.ps]{
Same as Fig.~4{\it a}.
\label{fig4}}
\end{figure*}
\vskip 0.3cm
\vskip 0.3cm
\begin{figure*}[t]
\figurenum{4{\it c}}
\centerline{\psfig{file=f4c.eps,width=19.5cm,angle=0}}
\figcaption[fig4.ps]{
Same as Fig.~4{\it a}, except for PG~0003+199, which comes
from {\it HST}/PC2 (filter F606W) and subtends 21.9 kpc $\times$ 21.9 kpc.
\label{fig4}}
\end{figure*}
\vskip 0.3cm
\clearpage
\begin{figure*}[t]
\centerline{\psfig{file=table2_p1.ps,width=18.5cm,angle=0}}
\end{figure*}
\clearpage
\begin{figure*}[t]
\centerline{\psfig{file=table2_p2.ps,width=18.5cm,angle=0}}
\end{figure*}
\clearpage
\begin{figure*}[t]
\centerline{\psfig{file=table2_p3.ps,width=18.5cm,angle=0}}
\end{figure*}
\begin{figure*}[t]
\centerline{\psfig{file=table3_v1.ps,width=16.5cm,angle=0}}
\end{figure*}
\clearpage
\begin{figure*}[t]
\centerline{\psfig{file=table4_p1.ps,width=18.5cm,angle=0}}
\end{figure*}
\clearpage
\begin{figure*}[t]
\centerline{\psfig{file=table4_p2.ps,width=18.5cm,angle=0}}
\end{figure*}
\clearpage
\begin{equation}
\upsilon_m = {{(W_{20} - W_{\rm inst})/(1+z) - W_{\rm turb}}\over{2 \ {\rm sin}\ i}}.
\end{equation}
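
A minimal Python transcription of the expression above, with the four
corrections applied in the order described in \S2.2, is as follows; the
example values are hypothetical.
\begin{verbatim}
import math

def v_max(w20, z, incl_deg, hanning=True):
    w_inst = 10.0 if hanning else 5.0   # km/s, instrumental
    w_turb = 22.0                       # km/s, turbulent broadening
    w = (w20 - w_inst) / (1.0 + z) - w_turb
    return w / (2.0 * math.sin(math.radians(incl_deg)))

# Hypothetical example: W20 = 304 km/s, z = 0.05, i = 60 deg.
print("%.0f km/s" % v_max(304.0, 0.05, 60.0))   # -> 149 km/s
\end{verbatim}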
In the optically thin limit, the integrated line flux,
$\int S_\nu\,d\upsilon$, in units of Jy~km s$^{-1}$, is related to the \ion{H}{1}\ mass as
(Roberts 1962)
\begin{equation}
M_{{\rm H~{\tiny I}}} = 2.36\times10^5 \ D_{L}^2\ \int S_\nu\,d\upsilon
\, \, \, \, M_\odot,
\end{equation}
\noindent
where $D_{L}$ is the luminosity distance expressed in Mpc and the integral
over velocity is evaluated in the observer's frame. We neglect any correction for
self-absorption, since this is controversial (see, e.g., Springob et al.
2005), and, in any case, depends on Hubble type, which is not well-known for
many of our sources (see \S2.3). Upper limits for the integrated fluxes
and \ion{H}{1}\ masses are calculated using 3 times the root-mean-square noise level
and a rest-frame line width of 304 km s$^{-1}$, the median value for the 66 detected
objects.
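
The mass formula and the upper-limit recipe translate into a few lines of
Python; the distance and noise level below are hypothetical.
\begin{verbatim}
def m_hi(d_l_mpc, flux_jykms):
    # HI mass (solar masses) from the integrated flux.
    return 2.36e5 * d_l_mpc**2 * flux_jykms

def m_hi_limit(d_l_mpc, rms_mjy, width_kms=304.0):
    # 3-sigma limit with the median detected width of 304 km/s.
    return m_hi(d_l_mpc, 3.0 * (rms_mjy / 1000.0) * width_kms)

# Hypothetical example: rms = 0.3 mJy at D_L = 220 Mpc.
print("%.1e Msun" % m_hi_limit(220.0, 0.3))   # -> 3.1e+09 Msun
\end{verbatim}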
Single-dish \ion{H}{1}\ observations always run the risk of source confusion,
especially for relatively distant samples such as ours. At the median redshift
of $z = 0.05$ for our targets, Arecibo's telescope beam (FWHM $\approx$
3\farcm5) subtends a linear diameter of $\sim 200$ kpc. We use the optical
images (from SDSS if available, or else from the Palomar Digital Sky Survey;
see \S2.3), in combination with the redshifts, to identify potential sources
of confusion within a search radius of 7\farcm5. The intensity of the first
sidelobes of the Arecibo beam drops to $\sim$10\% of the peak at a distance of
5\farcm5 from the beam center, and by 7$^\prime$--8$^\prime$\ it becomes negligible
(Heiles et al. 2000). We consider an object as a candidate confusing source
if it lies within the search radius and has a cataloged radial velocity
within $\pm 500$ km s$^{-1}$\ of that of the science target. Only a few candidates
have been identified, and these are noted in Table~2. The vast majority of
the objects in our survey are unaffected by source confusion.
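
The confusion screening reduces to a simple filter; the sketch below
(Python, with hypothetical catalog entries) applies the two criteria used
here.
\begin{verbatim}
RADIUS_ARCMIN = 7.5    # search radius around the target
DV_KMS        = 500.0  # velocity window around the target

def confusing(v_target, neighbors):
    # neighbors: (name, separation in arcmin, velocity in km/s)
    return [name for (name, sep, v) in neighbors
            if sep <= RADIUS_ARCMIN and abs(v - v_target) <= DV_KMS]

nbrs = [("gal A", 3.2, 15030.0), ("gal B", 6.9, 16400.0)]
print(confusing(15000.0, nbrs))   # -> ['gal A']
\end{verbatim}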
Eight of the objects in our survey have published \ion{H}{1}\ data. A comparison
of our measurements with those in the literature (Table~3) shows that in
general there is fairly good agreement. The most noticeable exception is
PG~2130+099, for which both our line width and flux are lower than the
literature values by about a factor of 2.
\subsection{Optical Data}
We use both optical spectroscopic and imaging data to ascertain a number of
parameters that are central to our analysis. For the SDSS objects, these
data were taken directly from the SDSS archives. The spectra were analyzed
following the procedures previously described in Greene \& Ho (2004, 2005b;
see also Kim et al. 2006). In brief, we obtain a pure emission-line spectrum
for each object by subtracting from the observed total spectrum a model
consisting of a stellar component, a featureless power-law component, and an
\ion{Fe}{2}\ ``pseudo-continuum.'' We then fit the resulting narrow and broad
emission lines using a combination of multi-component Gaussians. The optical
emission-line parameters are collected in Table~4. We also give (in Table~1),
where available, values of the central stellar velocity dispersion
and its associated uncertainty, derived using the technique of Greene \& Ho
(2006a). If the data do not permit the stellar velocity dispersion to be
measured, we list instead the velocity dispersion of the [\ion{O}{2}]\ $\lambda$ 3727
line, which Greene \& Ho (2005a) have shown to be an effective substitute.
BH masses were estimated using the broad H$\alpha$\ method of Greene \& Ho
(2005b), using the FWHM and luminosities given in Table~1. We further convert
the broad H$\alpha$\ luminosity to the AGN continuum luminosity at 5100 \AA, using
Equation 1 of Greene \& Ho (2005b), from which we deduce the bolometric
luminosity assuming that $L_{\rm bol} = 9.8$ \ensuremath{L_{\rm{5100 \AA}}}
(McLure \& Dunlop 2004).
The non-SDSS objects were treated differently. The majority of these, by
design, have BH masses directly measured from reverberation mapping, and we
simply adopt the values given in Peterson et al. (2004), from which continuum
luminosities at 5100 \AA\ were also taken. Three of the non-SDSS
objects (MCG~+01-13-012, RX~J0602.1+2828, and RX~J0608.0+3058) only have
measurements for the H$\beta$\ line, but BH masses based on this line alone can
also be estimated with reasonable accuracy (Greene \& Ho 2005b).
The images provide five important pieces of information about the sources:
the total (AGN plus host galaxy) magnitude, morphological type, size,
inclination angle, and potential sources of confusion within the \ion{H}{1}\
beam. For the SDSS objects, we choose the $g$ band as our
fiducial reference point, since it is closest to the more traditional
$B$ band on which most of the literature references are based. In Figure~3,
we display the optical image of the sources detected in \ion{H}{1}; images of the
\ion{H}{1}\ nondetections are shown in Figure~4. In a few cases we were able to
locate high-resolution images in the {\it Hubble Space Telescope (HST)}\
archives. The size of each image has been scaled to a constant physical scale
of 50 kpc $\times$ 50 kpc to facilitate comparison of objects with very
different distances.
Inspection of Figures~3 and 4 shows that obtaining reliable morphological
types of the host galaxies is challenging for most of the sources, because of
their small angular sizes and the coarse resolution and shallow depth of the
SDSS images. In assigning a morphological type, we must be
careful to give lower weight to the apparent prominence of the bulge,
since a substantial fraction of the central brightness enhancement presumably
comes from the AGN core itself. The SDSS database provides quantitative
measurements of the Petrosian radius containing 50\% and 90\% of the
light, from which one can calculate the (inverse) ``concentration index,''
defined to be $C \equiv r_{\rm P50}/r_{\rm P90}$. We use the correlation
between $C$ and morphological type index of Shimasaku et al. (2001) as an
additional guide to help us assign morphological types, again bearing in
mind that because of the AGN contamination the concentration index should be
viewed strictly as an upper limit to the true value. We generally give less
weight to the classifications based on $C$. (We have discovered a
few glaring examples where the SDSS-based concentration index gives an
egregiously erroneous morphological type.) The most difficult classifications
are those that lie on the boundary between ellipticals and S0s, which is
sometimes ambiguous even for nearby, bright galaxies. Unless the galaxy is
highly inclined, it is often just impossible to tell; we label these cases as
``E/S0.'' Another difficult situation arises when trying to discern whether a
disk galaxy truly possesses spiral arms. Given the modest quality of the SDSS
images and the relatively large distances of the galaxies, again often no
clear-cut decision can be made, and we are forced to assign a classification
of ``S0/Sp.'' For a few of the objects, the image material is simply
inadequate to allow a classification to be made at all.
The SDSS photometry additionally provides values for the major axis ($a$)
and minor axis ($b$) isophotal diameters measured at a surface brightness
level of $\mu = 25$ mag~arcsec$^{-2}$, from which we can deduce the
photometric inclination angle using Hubble's (1926) formula
\begin{equation}
{\rm cos}^2 i = {{q^2 - q_0^2}\over{1-q_0^2}},
\end{equation}
\vskip 0.3cm
\figurenum{5}
\begin{figure*}[t]
\centerline{\psfig{file=f5.eps,width=19.0cm,angle=-90}}
\figcaption[fig5.ps]{
The distribution of ({\it a}) \ion{H}{1}\ masses and ({\it b}) \ion{H}{1}\
masses normalized to the $B$-band luminosity of the
host galaxy. Limits are plotted as open histograms.
\label{fig5}}
\end{figure*}
\vskip 0.3cm
\noindent
where $q = b/a$. The intrinsic thickness of the disk, $q_0$, varies by about a
factor of 2 along the spiral sequence; we adopt $q_0 = 0.3$, a value
appropriate for early-type systems (Fouqu\'e et al. 1990). It is also
of interest to combine the galaxy's optical size ($D_{\rm 25}$, diameter
at $\mu = 25$ mag~arcsec$^{-2}$) with the \ion{H}{1}\ line width to compute a
characteristic dynamical mass. From Casertano \& Shostak (1980),
\begin{equation}
M_{\rm dyn} = 2\times 10^4
\left( {D_{L}}\over{{\rm Mpc}} \right)
\left( {D_{\rm 25}}\over{{\rm arcmin}} \right)
\left( {\upsilon_{m}}\over{{\rm km~s}^{-1}} \right)^2 \,\,\, \, M_\odot.
\end{equation}
\noindent
Because we have no actual measurement of the size of the \ion{H}{1}\ disk, this
formula yields only an approximate estimate of the true dynamical mass.
However, from spatially resolved observations we know that the sizes of \ion{H}{1}\
disks of spiral galaxies, over a wide range of Hubble types and luminosities,
scale remarkably well with their optical sizes. From the studies of Broeils \&
Rhee (1997) and Noordermeer et al. (2005), $D_{\rm H~{\tiny I}}/D_{\rm 25}
\approx 1.7$ within 30\%--40\%. Nevertheless, our values of $M_{\rm dyn}$
are probably much more accurate as a relative rather than an absolute
measure of the galaxy dynamical mass.
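
The inclination and dynamical-mass expressions above combine as in the short
Python sketch below; the axis ratio, distance, and diameter are hypothetical.
\begin{verbatim}
import math

def inclination(q, q0=0.3):
    # Photometric inclination (deg) from Hubble's (1926) formula.
    cos2i = (q * q - q0 * q0) / (1.0 - q0 * q0)
    return math.degrees(math.acos(math.sqrt(max(cos2i, 0.0))))

def m_dyn(d_l_mpc, d25_arcmin, v_m_kms):
    # Characteristic dynamical mass (solar masses).
    return 2.0e4 * d_l_mpc * d25_arcmin * v_m_kms**2

i = inclination(0.5)              # q = b/a = 0.5 (hypothetical)
m = m_dyn(220.0, 0.8, 150.0)      # hypothetical inputs
print("i = %.0f deg, M_dyn = %.1e Msun" % (i, m))
# -> i = 65 deg, M_dyn = 7.9e+10 Msun
\end{verbatim}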
The optical photometry, albeit of insufficient angular resolution to yield a
direct decomposition of the host galaxy from the AGN core, nevertheless can be
used to give a rough, yet still useful, estimate of the host galaxy's
luminosity. Following the strategy of Greene \& Ho (2004, 2007b), we obtain
the host galaxy luminosity by subtracting the AGN contribution, derived from
the spectral analysis, from the total Petrosian (galaxy plus AGN) luminosity
available from the photometry. In the current application, we use the broad
H$\alpha$\ luminosity as a surrogate for the 5100 \AA\ continuum luminosity to
minimize the uncertainty of measuring the latter, since in some of our objects
there may be significant starlight within the 3$^{\prime\prime}$\ aperture of the SDSS
spectra (see Greene \& Ho 2005b). We extrapolate the flux density at 5100
\AA\ to the central wavelength of the $g$ filter (5120 \AA) assuming that the
underlying power-law continuum has a shape $f_\lambda \propto \lambda^{-1.56}$
(Vanden~Berk et al. 2001), adding the small offset to the photometric
zeropoint of the $g$-band filter recommended in the SDSS
website\footnote{\tt http://photo.astro.princeton.edu/\#data\_model}. In a
few sources the host galaxy luminosity derived in this manner actually exceeds
the total luminosity. This may reflect the inherent scatter introduced by our
procedure, or perhaps variability in the AGN. For these cases, we adopt the
total luminosity as an upper limit on the host galaxy luminosity.
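
The two purely algebraic steps of this procedure, the power-law
extrapolation from 5100~\AA\ to the $g$ band and the AGN subtraction with
the upper-limit fallback, are sketched below in Python; the conversion from
broad H$\alpha$\ to 5100~\AA\ continuum luminosity (Greene \& Ho 2005b) is
deliberately left as a placeholder.
\begin{verbatim}
def f5100_to_g(f_5100):
    # Extrapolate f_lambda from 5100 A to the g-band center
    # (5120 A) for a power law f_lambda ~ lambda^(-1.56).
    return f_5100 * (5120.0 / 5100.0) ** (-1.56)

def host_luminosity(l_total, l_agn):
    # Host = total - AGN; if the AGN term exceeds the total
    # (scatter or variability), adopt the total as an upper limit.
    host = l_total - l_agn
    if host <= 0.0:
        return l_total, True    # value, is_upper_limit
    return host, False
\end{verbatim}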
\section{Discussion and Summary}
We have used the Arecibo telescope to conduct the largest modern survey to
date for \ion{H}{1}\ emission in active galaxies. The sample consists of 113
$z$ {$\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$}\ 0.11 galaxies with type~1 AGNs, selected from an extensive study
of SDSS sources for which BH masses can be reliably determined. The new
observations were supplemented with an additional 53 type 1 AGNs assembled
from the literature, forming a final, comprehensive sample of 154 sources with
\ion{H}{1}\ detections or useful upper limits.
Among the newly observed galaxies, we detected \ion{H}{1}\ in 66 out of the 101
objects that were not adversely affected by RFI, for an overall detection rate
of 65\%. The \ion{H}{1}\ masses for the detected sources range from $M_{{\rm H~I}}$\
$\approx\,10^9$ to $4\times10^{10}$ $M_\odot$, with an average value of
$8.6\times 10^9$ $M_\odot$, while upper limits for the undetected objects
generally hover around $M_{{\rm H~I}}$\ $\approx\,10^{10}$ $M_\odot$\ (Fig.~5{\it a}).
Adding in the literature sample does not appreciably change these values. The
host galaxies of the current sample of type 1 AGNs are therefore quite rich in
neutral hydrogen. For reference, recall that our Galaxy has a total \ion{H}{1}\ mass
of $5.5 \times 10^9$ $M_\odot$\ (Hartmann \& Burton 1997).
Since the \ion{H}{1}\ content of galaxies scales with the stellar luminosity in a
manner that depends on morphological type (e.g., Roberts \& Haynes 1994),
Figure~5{\it b}\ examines the \ion{H}{1}\ masses normalized to the $B$-band
luminosity of the host galaxy. In the case of the SDSS objects, we converted
the host galaxy luminosities in the $g$ band (\S 2.3) to the $B$ band assuming
an average color of $g-B = -0.45$ mag, appropriate for an Sab galaxy (Fukugita
et al. 1995), roughly the average morphological type of our sample. The
resulting distribution, ranging from $M_{{\rm H~I}}$/$L_B \approx 0.02$ to 4.5 with an
average value of 0.42, agrees well with the distribution of inactive spiral
galaxies of Hubble type Sa to Sb (e.g., Roberts \& Haynes
\vskip 0.3cm
\figurenum{6}
\psfig{file=f6.eps,width=8.5cm,angle=0}
\figcaption[fig6.ps]{
Distribution of radial velocity difference as measured in the optical
and in \ion{H}{1}, $\Delta \upsilon = \upsilon_{\rm opt} - \upsilon_{\rm sys}$.
Note the excess of objects toward negative values of $\Delta \upsilon$.
\label{fig6}}
\vskip 0.3cm
\noindent
1994). This
reinforces the conclusion that the host galaxies of type 1 AGNs possess a
normal gas content, at least as far as neutral atomic hydrogen is concerned.
The implications of these detection statistics, along with an extensive
analysis of the \ion{H}{1}\ and AGN properties assembled here, are presented in
our companion paper (Ho et al. 2008).
Figure~6 compares the systemic radial velocity measured from \ion{H}{1}\ with the
published optical radial velocity, $\upsilon_{\rm opt} = cz$. The velocity
difference, $\Delta \upsilon = \upsilon_{\rm opt} - \upsilon_{\rm sys}$,
shows a large spread, from $\Delta \upsilon \approx -300$ to $+250$ km s$^{-1}$,
but there is a noticeable excess at negative velocities. On average,
$\langle \Delta \upsilon \rangle = -46 \pm 91$ km s$^{-1}$. A similar effect was
previously reported by Mirabel \& Wilson (1984) and Hutchings et al. (1987);
in their samples, the mean offset is $\langle \Delta \upsilon \rangle \approx
-50$ km s$^{-1}$, essentially identical to our result. Since our sources are
relatively bright, type~1 AGNs, the optical radial velocities are
predominantly derived from the narrow emission lines. The systemic velocity
of the galaxy, on the other hand, is well anchored by the \ion{H}{1}\ measurement.
The negative value of $\langle \Delta \upsilon \rangle$ therefore implies
that on average the ionized gas in the narrow-line region has a general
tendency to be mildly outflowing.
\acknowledgements
The work of L.~C.~H. was supported by the Carnegie Institution of Washington
and by NASA grants HST-GO-10149.02 and HST-AR-10969 from the Space Telescope
Science Institute, which is operated by the Association of Universities for
Research in Astronomy, Inc., for NASA, under contract NAS5-26555. Support for
J.~D. and J.~E.~G. was provided by NASA through Hubble Fellowship grants
HF-01183.01-A and HF-01196, respectively, awarded by the Space Telescope
Science Institute. We made use of the databases in HyperLeda
({\tt http://leda.univ-lyon1.fr/}), the Sloan Digital Sky Survey, and
the NASA/IPAC Extragalactic Database ({\tt http://nedwww.ipac.caltech.edu/}),
which is operated by the Jet Propulsion Laboratory, California Institute
of Technology, under contract with NASA. We thank Minjin Kim for help with
preparing the {\it HST}\ images and for analyzing the SDSS spectra shown in
Table~7, Aaron Barth for sending the {\it HST}\ image of PG~0844+349, and C. Motch
for making available his published spectra of RX~J0602.1+2828 and
RX~J0608.0+3058. We thank the anonymous referee for helpful suggestions.
| {'timestamp': '2008-03-13T19:25:40', 'yymm': '0803', 'arxiv_id': '0803.2023', 'language': 'en', 'url': 'https://arxiv.org/abs/0803.2023'} |
\section{Structural changes}
The appearance of a maximum in the melting line of hydrogen near 82~GPa has been explained by a softening of intermolecular interactions in the liquid \cite{bonev_nature_2004}, similar to what was observed in the orientationally-ordered phase of the solid (phase III) \cite{moshary_prb_1993}. This has prompted us to investigate the structure of the molecular liquid. We have discovered a new region in the molecular fluid in the vicinity of the melt line turnover that exhibits short range orientational ordering. It is revealed in the spatial distribution function, $SDF(r,\theta)$. Given an H$_2$ molecule, $SDF(r,\theta)$ is defined as the probability for finding a hydrogen atom at a distance $r$ from the molecular center of mass and at an angle $\theta$ with the molecular axis, normalized by the average H number density.
In Fig.~\ref{order}, we show a graphic of the $SDF$ for two densities along the same 1000~K isotherm; for clarity only the contributions from nearest neighbors are shown. In the low density case (vertical plane), a molecule's nearest neighbor has an equal probability of being found at any $\theta$. At high pressure (horizontal plane), there is an orientational correlation between neighboring molecules. Particles are less likely to be found at the molecular poles, and more likely to be found near the equators. This can be further seen in the inset of Fig.~\ref{order}, where we have integrated over the radial degree of freedom, $\int \! SDF(r,\theta) \, dr = \overline{SDF}(\theta)$. To describe the evolution of the structural transition under compression, we define an order parameter as $\alpha = \overline{SDF}(\theta=90)/\overline{SDF}(\theta=0)$. A plot of $\alpha$ as a function of $r_s$ along the 1000~K isotherm (Fig.~\ref{order}) indicates a rapid increase in the vicinity of the previously reported \cite{bonev_nature_2004, shanti_prl_2008, eremets_jetp_2009} maximum in the melting curve. It is this change in the structure of the liquid, driven by a change in the intermolecular interactions, that is responsible for the turnover in the melting line.
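
To make the definition concrete, a minimal Python sketch of how
$SDF(r,\theta)$ can be accumulated for a single reference molecule is given
below; the normalization by the average H number density, the average over
reference molecules, and periodic boundary conditions are omitted for
brevity, so this is a schematic estimator rather than the production
analysis.
\begin{verbatim}
import numpy as np

def sdf_single(com, axis, atoms, r_max=4.0, nr=40, nth=18):
    # com: molecular center of mass; axis: unit vector along the
    # molecular axis; atoms: (N, 3) array of H positions.
    d = atoms - com
    r = np.linalg.norm(d, axis=1)
    cos_t = d @ axis / np.clip(r, 1e-12, None)
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    h, re, te = np.histogram2d(r, theta, bins=(nr, nth),
                               range=((0.0, r_max), (0.0, np.pi)))
    rc = 0.5 * (re[:-1] + re[1:]); tc = 0.5 * (te[:-1] + te[1:])
    dr = re[1] - re[0]; dt = te[1] - te[0]
    vol = 2.0 * np.pi * rc[:, None]**2 * np.sin(tc)[None, :] * dr * dt
    return h / vol, rc, tc   # counts per unit volume in each bin

# alpha is then SDF_bar(90 deg) / SDF_bar(0 deg), where SDF_bar
# is the histogram integrated over r.
\end{verbatim}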
\begin{figure}[tbh]
\includegraphics[width=0.45\textwidth,clip]{fig3_order.eps}
\caption{\label{order}Structural order parameter, $\alpha$, and spatial distribution functions, $SDF(r,\theta)$ and $\overline{SDF}(\theta)$ (see text for definitions), along the 1000~K isotherm in molecular H. The density where the melting curve has a maximum is indicated on the plot; it is in the region where $\alpha$ begins to increase rapidly. The graphic shows nearest neighbor contributions to $SDF(r,\theta)$ at low ($r_s = 2.40$, $P=6$~GPa) and high ($r_s = 1.45$, $P=170$~GPa) density. The $\overline{SDF}(\theta)$ further illustrate changes in the angular distribution with density.}
\end{figure}
A detailed quantitative description of the emerging orientational order will be presented in a follow-up paper. Such analysis is expected to have relevance for identifying the finite (and possibly zero) $T$ phases of the solid; similarities with phases
II and III of H are plausible. Here we mention only that in addition to the described dependence on $\theta$, there is a very strong tendency for neighboring molecules to lie in plane. For $\theta \sim 90^{\circ}$, the angle between molecular axes tends to be about 15$^{\circ}$ (for different $\theta$ it depends slightly on whether the molecules are tilted towards or away from each other).
One of the most prominent features of dense H is the sharp transition from molecular to atomic liquid. At sufficiently high density, the onset of dissociation leads to an EOS with a negative slope, i.e. $dP/dT |_{V=const} < 0$. There has been considerable debate as to whether the abrupt change in $P$ over a narrow $T$ range is due to a discontinuous ($1^{st}$ order) transition in the liquid and how it is affected by the presence of impurities (e.g. He). In what follows, we show that the anomalous features in the EOS, its density and impurity dependence, and its sensitivity to computational parameters can be understood on the basis of the changes in the liquid structure across the dissociation transition.
In order to describe the liquid structure in the presence of both atoms and molecules, we decompose $SDF(r,\theta)$ into atomic and molecular components. In both cases, $SDF(r,\theta)$ is calculated with respect to a reference molecule's center of mass. For the atomic $SDF(r,\theta)$, statistics for $r$ and $\theta$ are collected for single atoms only, while for the molecular one, the atoms at $r$ and $\theta$ are paired. In Fig.~\ref{asdf} we show such a decomposition at conditions corresponding to a large degree of local angular structure ($\alpha \sim 10$). At these $P$ and $T$, the liquid has just begun to dissociate ($\Pi(\tau) \approx 99\%$). The molecular $SDF$ is similar to that of a purely molecular liquid, with a peak near the reference molecule's equator. However, the atomic $SDF$ shows that unpaired atoms tend to shift towards the molecular poles. The atomic $\overline{SDF}(\theta)$ is peaked at $\theta \approx 50^{\circ}$ (and $130^{\circ}$) versus $90^{\circ}$ for the molecular $\overline{SDF}(\theta)$ [Fig.~\ref{asdf}(b)]. Similar analysis at low density shows no such angular dependence.
\begin{figure}[tbh]
\includegraphics[width=0.45\textwidth,clip]{fig4_asdf.eps}
\caption{\label{asdf}Molecular and atomic spatial distribution functions, $SDF(r,\theta)$ (a) and $\overline{SDF}(\theta)$ (b); see text for definitions. For clarity, only contributions from nearest neighbor atoms are included. Atoms forming molecules are more likely to reside near the reference molecule's equator. The distribution of unpaired atoms is more even and indicates that they are most likely to be found at $\theta \approx 50^{\circ}$ and $\theta \approx 130^{\circ}$.}
\end{figure}
At low $P$, where there is no short-range orientational order, the H$_2$ molecules are freely rotating and their packing is similar to that of spheres. Upon dissociation, the problem becomes that of spheres of different sizes; however, there is no apparent optimization of packing. On the other hand, because of covalent bonding, the effective volume of H$_2$ is less than that of two isolated hydrogen atoms. For these reasons, the dissociation transition at low density is characterized by $dP/dT |_{V=const} > 0$.
At high $P$, the molecules are no longer freely rotating and the packing problem becomes that of prolate spheroids (see graphic in Fig.~\ref{asdf}). As seen in Fig.~\ref{asdf}, the molecular orientational order creates voids between them, which can be filled in by single atoms after dissociation. There will be a drop in $P$ upon dissociation when the optimization in packing is sufficient to compensate for the higher effective volume of single atoms compared to that of molecules. This picture is consistent with the fact that the sharpness in the EOS increases with density \cite{bonev_prb_2004,miguel,caspersen,vorberger_prb_2007} - our order parameter follows the same trend. Furthermore, it has been noted that diluting the system with neutral impurities such as helium \cite{vorberger_prb_2007} softens the transition. We suggest that performing $SDF$ analysis on high density mixtures of molecular
hydrogen and helium (or other noble gases) will reveal similar angular local order, with the dopant atoms residing within the voids of the liquid. Finally, we note that in a one-dimensional hydrogen liquid, where angular order is not possible, $dP/dT |_{V=const}$ must always be positive.
Packing considerations also provide an explanation for the large sensitivity of the transition to the number of atoms, $N$, in the simulation supercell. We find that the degree of dissociation and related properties depend on $N$ in a non-intuitive way (similar observations have been made by Morales \cite{miguel}). At high densities, $\Pi(\tau)$ oscillates as $N$ is varied through 128, 256, 512, 768, and 1024. Surprisingly, simulations conducted with 256 atoms are in better agreement with those done with $N=1024$ than with $N=512$ ($\Pi_{N=128}=0.3338$, $\Pi_{N=256}=0.9842$, $\Pi_{N=512}=0.8328$, $\Pi_{N=768}=0.9660$, and $\Pi_{N=1024}=0.9656$ for $r_s=1.45$, $T=1000$~K). This effect \emph{cannot} be removed with a finer $k$-point sampling of the Brillouin zone. With $N=128$ the error in $P$ is as large as 8\% at $r_s=1.40$, $T=750$~K.
The origin of this initially puzzling behavior is that at a given density, $N$ defines the physical dimensions of the simulation box. Depending on how these dimensions relate to the correlation length of the liquid, different sized cells will have a slight bias towards one phase or another. Ideally one should use a value large enough that this effect is no longer an issue. Given the current cost of using large cells, we report values corresponding to $N=256$, as they are in relatively good agreement with $N=1024$ (the oscillation in $\Pi(\tau)$ is quite damped in this larger cell). Simulations performed with $N=256$ tend to favor the molecular phase relative to that of $N=1024$; this should partially correct for the underestimate of the dissociation barrier due to approximations of the DFT exchange-correlation potential.
The existence of short range angular structure in the liquid, specifically the different arrangement of molecules and atoms, suggests that something similar might also occur in the solid phase. In future searches for solid state structures it would be worthwhile to include variations of a two-component solid, comprised of molecules and atoms. Indeed, work by Pickard and Needs \cite{pickard_natphys_2007} indicates that mixed layered structures, comprised of molecules and atomic graphene-like sheets, remain energetically competitive over a wide range of pressures.
We now turn our discussion to the possibility of a discontinuous (\emph{i.e.} $1^{st}$ order) transition in the liquid. Based on our sampling of the phase diagram, the molecular-atomic transition appears to be continuous up to 200~GPa. We do not find a flat region in the $P(V)$ EOS - a signature of a $1^{st}$ order transition. Our data imply one of three possibilities: the transition is not $1^{st}$ order, it takes place outside of the $P$, $T$ space we considered, or it occurs over a range of densities too small to be resolved by our sampling. We can use our results, however, to establish a connection between the microscopic and macroscopic properties characterizing the transition and provide insight for the physical mechanism that could lead to it being $1^{st}$ order. Let $\delta V$ be the reduction in volume (due to packing) that can be achieved by dissociating a single molecule, while keeping $P$ and $T$ constant (the resulting state need not be in equilibrium). If a $1^{st}$ order phase transition exists at these $P$ and $T$, we must have $-P\Delta V = \Delta U - T\Delta S$. Here $\Delta V$, $\Delta U$ and $\Delta S$ are the volume, energy and entropy differences across the transition. If it results in the dissociation of $n_d$ molecules, then we define $E_d \equiv \Delta U/n_d$, which has the physical meaning of dissociation energy. A $1^{st}$ order transition will take place if $n_d \delta V = - \Delta V$, which means $\delta V \approx E_d/P$ (assuming that $T\Delta S/P$ can be neglected, especially at high $P$ and low $T$). We note that $E_d$ \emph{decreases} under compression due to screening, while $\delta V$ \emph{increases} due to more pronounced orientational order. Thus, convergence of these two terms, $\delta V$ and $E_d/P$, is increasingly likely at high $P$ and low $T$. Conclusive determination of the existence of a critical point will require very fine sampling of the phase diagram below 1000~K and above 200~GPa.
We have mapped the molecular-atomic transition of liquid hydrogen over a large pressure range. Our transition line correlates well with changes that can be observed in the electronic properties of the system such as the static dielectric constant. We predict that the liquid demonstrates significant short range orientational ordering. Its development coincides with the turnover in the melt line. The existence of this structural order is responsible for the large dissociation-induced $P$ drop in the EOS. Furthermore, it provides an explanation for the significant finite size effects that are present in this system. A $1^{st}$ order transition in the liquid, if it does exist, will likely occur at $T < $ 1000~K and $P > $ 200~GPa.
Work supported by NSERC, CFI, and Killam Trusts. Computational resources provided by ACEnet, Sharcnet, IRM Dalhousie, and Westgrid. We thank E. Schwegler, K. Caspersen, T. Ogitsu, and M. Morales for discussions and communicating unpublished results.
| {'timestamp': '2009-10-09T20:19:59', 'yymm': '0910', 'arxiv_id': '0910.1798', 'language': 'en', 'url': 'https://arxiv.org/abs/0910.1798'} |
\section{Introduction}
Evolutionary tracks and isochrones are usually computed taking into
account a fixed law of helium to metal enrichment
$( \Delta Y/\Delta Z )$. In
fact most of the available tracks follow a linear $Y(Z)$ relation of
the form $Y=Y_P+(\Delta Y/\Delta Z)\,Z$ (e.g. Bertelli et al. 1994
with $Y_{\rm p}=0.23,\Delta Y/\Delta Z=2.5$; or Girardi et al. 2000,
where $Y_{\rm p}=0.23,\Delta Y/\Delta Z=2.25$). Grids of stellar
models have in general used $\Delta Y/\Delta Z$ values higher than $\sim 2$
in order to fit both the primordial and the solar initial He content.
On one side we recall that WMAP has provided a value of the primordial
helium ($Y_P\sim 0.248$, Spergel et al. 2003, 2007), significantly higher
than the value assumed in our previous stellar models.
On the other side the determinations of the helium enrichment from nearby
stars and K dwarfs or from Hyades binary systems show a large range of
values for this ratio (Pagel \& Portinari 1998, Jimenez et al. 2003,
Casagrande et al. 2007, Lebreton et al. 2001).
Therefore it is desirable to make available stellar models that can be used
to simulate different enrichment laws.
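
For reference, a linear enrichment law of this kind is trivial to evaluate;
the minimal Python sketch below uses illustrative parameter values only.
\begin{verbatim}
def helium_y(z, y_p=0.248, dy_dz=2.0):
    # Linear law Y = Y_p + (dY/dZ) * Z; both parameters are
    # meant to be varied, which is the point of the new grids.
    return y_p + dy_dz * z

print(helium_y(0.017))   # -> 0.282, between the grid values
                         #    Y = 0.26 and Y = 0.30 of Table 1
\end{verbatim}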
Several new results suggest that the naive assumption that the helium
enrichment law is universal might not be correct. In fact there has
recently been evidence of significant variations in the helium content (and
perhaps of the age) in some globular clusters, which were traditionally
considered as formed of a simple stellar population of uniform age and
chemical composition.
According to Piotto et al. (2005) and Villanova et al. (2007), only
greatly enhanced helium can explain the color
difference between the two main sequences in $\omega$ Cen. Piotto et
al. (2007) attribute the triple main sequence in NGC 2808 to successive
rounds of star formation, with different helium abundances. In NGC
2808 a helium enhanced population could explain the MS spread and the HB
morphology (Lee et al. 2005, D'Antona et al. 2005).
In general the main sequence morphology allows one to
detect differences in helium content only when the helium enhancement is
very large. Many globular clusters might have a population with enhanced
helium (up to $Y= 0.3 - 0.33$), but only a few have a small population
with very high helium, such as suggested for NGC 2808.
For NGC 6441 (Caloi \& D'Antona 2007) a very
large helium abundance is required to explain part of the horizontal
branch population, that might be interpreted as due to self-enrichment
and multiple star formation episodes in the early evolution of
globular clusters.
The helium self-enrichment in globular clusters has been discussed in
several papers. Among them we recall Karakas et al. (2006) on the AGB
self-pollution scenario, Maeder \& Meynet (2006) on rotating low-metallicity
massive stars, while Prantzos and Charbonnel (2006) discuss the drawbacks
of both hypotheses.
This short summary of recent papers shows
how much interest there is in the enhanced helium content of some stellar
populations. This is the reason why we computed these new sets of
evolutionary models, which allow one to simulate a large range of helium
enrichment laws.
There are a number of other groups producing extended databases of tracks and
isochrones both for scaled solar and for $\alpha$-enhanced chemical
compositions (Pietrinferni et al. 2004, 2006; Yi et al. 2001 and Demarque et
al. 2004; VandenBerg et al. 2006), but their models usually take into account
a fixed helium enrichment law. Unlike these authors, very recently Dotter et
al. (2007) presented new stellar tracks and isochrones for three initial helium
abundances in the range of mass between $0.1$ and $1.8 \mbox{$M_{\odot}$} $ for scaled
solar and $\alpha$-enhanced chemical compositions.
In this paper we present new
stellar evolutionary tracks for different values of the initial He
for each metal content. In fact homogeneous grids of stellar evolution
models and isochrones in an extended region of the Z-Y plane and for a large
range of masses are required
for the interpretation of photometric and spectroscopic observations of
resolved and unresolved stellar populations and for the investigation of the
formation and evolution of galaxies.
Section 2 describes the input physics and the coverage of the Z-Y plane and
Section 3 the new synthetic TP-AGB models. Section 4 presents the
evolutionary tracks and relative electronic tables.
Section 5 describes the derived isochrones and the interpolation scheme;
Section 6 presents the comparison with other stellar models databases
and Section 7 the concluding remarks.
\section {Input physics and coverage of the $Z-Y$ plane}
The stellar evolution code adopted in this work is basically the same
as in Bressan et al. (1993), Fagotto et al. (1994a,b) and Girardi et
al. (2000), but for updates in the opacities, in the rates of energy
loss by plasma neutrinos (see Salasnich et al. 2000), and for the
different way the nuclear network is integrated (Marigo et al. 2001).
In the following we will describe the input physics, pointing out the
differences with respect to Girardi et al. (2000).
\subsection{Initial chemical composition}
\label{sec_chemic}
\begin{table}[!ht]
\caption{Combinations of $Z$ and $Y$ of the computed tracks}
\label{tab_comp}
\smallskip
\begin{center}
{\small
\begin{tabular}{ccccccc}
\hline
\noalign{\smallskip}
Z & Y1 & Y2 & Y3 & Y4 & Y5 & Y6\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
0.0001 & 0.23 & 0.26 & 0.30 & & 0.40 & \\
0.0004 & 0.23 & 0.26 & 0.30 & & 0.40 & \\
0.001 & 0.23 & 0.26 & 0.30 & & 0.40 & \\
0.002 & 0.23 & 0.26 & 0.30 & & 0.40 & \\
0.004 & 0.23 & 0.26 & 0.30 & & 0.40 & \\
0.008 & 0.23 & 0.26 & 0.30 & 0.34 & 0.40 & \\
0.017 & 0.23 & 0.26 & 0.30 & 0.34 & 0.40 & \\
0.040 & & 0.26 & 0.30 & 0.34 & 0.40 & 0.46 \\
0.070 & & & 0.30 & 0.34 & 0.40 & 0.46 \\
\noalign{\smallskip}
\hline
\end{tabular}
}
\end{center}
\end{table}
Stellar models are assumed to be chemically homogeneous when they
settle on the zero age main sequence (ZAMS). Several grids of stellar
models were computed from the ZAMS with initial mass from $0.15$ to
$20 M_{\odot}$. The first release of the new evolutionary models, presented
in this paper, deals with very low and low mass stars up to
$2.5 M_{\odot}$. In general, for each chemical composition the separation
in mass is $0.05 M_{\odot}$ between $0.15$ and $0.6 M_{\odot}$, and $0.1
M_{\odot}$ between $0.6$ and $2.0 M_{\odot}$. In addition, evolutionary
models at $2.20$ and $2.50 M_{\odot}$ are computed in order to make
available isochrones from the Hubble time down to 1 Gyr for each set of
models.
The initial chemical composition is in the range $0.0001
\le Z \le 0.070$ for the metal content and for the helium content in
the range $0.23 \le Y \le 0.46$, as shown in Table 1. The
helium content $Y=0.40$ was added at very low metallicities because it is
useful for simulating significant helium enrichment in
globular clusters.
For each value of $Z$, the fractions of different metals follow a
scaled solar distribution, as compiled by Grevesse \& Noels (1993) and
adopted in the OPAL opacity tables. The ratio between abundances of
different isotopes is according to Anders \& Grevesse (1989).
\subsection{Opacities}
\label{sec_opac}
The radiative opacities for scaled solar mixtures are from the OPAL
group (Iglesias \& Rogers 1996) for temperatures higher than $\log T =
4$, and the molecular opacities from Alexander \& Ferguson (1994) for
$\log T < 4.0$ as in Salasnich et al. (2000). In the temperature
interval $3.8<\log T < 4.0$, a linear interpolation between the
opacities derived from both sources is adopted. The agreement between
both tables is excellent and the transition is very smooth in this
temperature interval. For very high temperatures ($\log T \ge 8.7$)
opacities by Weiss et al. (1990) are used.
The conductive opacities of electron-degenerate matter are from Itoh
et al. (1983), whereas in Girardi et al. (2000) conductive opacities
were from Hubbard \& Lampe (1969). A second difference with respect to
Girardi et al. (2000) lies in the numerical technique used to
interpolate within the grids of the opacity tables. In this paper we
used the two-dimensional bi-rational cubic damped-spline algorithm
(see Schlattl \& Weiss 1998, Salasnich et al. 2000 and Weiss \&
Schlattl 2000).
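
The linear transition between the two opacity sources can be written
schematically as below (Python); this is only a sketch of the blending in
$3.8<\log T<4.0$, not the actual table interpolation, which uses the
damped-spline algorithm quoted above.
\begin{verbatim}
def blended_kappa(log_t, kappa_opal, kappa_af):
    # kappa_opal, kappa_af: callables returning the opacity at
    # log_t from the OPAL and Alexander & Ferguson tables
    # (placeholders here).
    if log_t >= 4.0:
        return kappa_opal(log_t)
    if log_t <= 3.8:
        return kappa_af(log_t)
    w = (log_t - 3.8) / 0.2   # 0 at log T = 3.8, 1 at log T = 4.0
    return (1.0 - w) * kappa_af(log_t) + w * kappa_opal(log_t)
\end{verbatim}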
\subsection{Equation of state}
\label{sec_eos}
The equation of state (EOS) for temperatures higher than $10^7$~K is
that of a fully-ionized gas, including electron degeneracy in the way
described by Kippenhahn et al.\ (1967). The effect of Coulomb
interactions between the gas particles at high densities is taken into
account as described in Girardi et al.\ (1996).
For temperatures lower than $10^7$~K, the detailed ``MHD'' EOS of
Mihalas et al.\ (1990, and references therein) is adopted. The
free-energy minimization technique used to derive thermodynamical
quantities and derivatives for any input chemical composition, is
described in detail by Hummer \& Mihalas (1988), D\"appen et al.\
(1988), and Mihalas et al.\ (1988). In our cases, we explicitly
calculated EOS tables for all the $Z$ and $Y$ values of our tracks,
using the Mihalas et al.\ (1990) code, as in Girardi et al. (2000).
We recall that the MHD EOS is critical only for stellar models with
masses lower than 0.7~\mbox{$M_{\odot}$}, during their main sequence evolution. In
fact, only in dwarfs with masses lower than 0.7~\mbox{$M_{\odot}$}\ the surface
temperatures are low enough so that the formation of the H$_2$ molecule
dramatically changes the adiabatic temperature gradient. Moreover,
only for masses smaller than $\sim0.4$~\mbox{$M_{\odot}$}\ the envelopes become
dense and cool enough so that the many non-ideal effects included in
the MHD EOS (internal excitation of molecules, atoms, and ions;
translational motions of partially degenerate electrons, and Coulomb
interactions) become important.
\subsection{Reaction rates and neutrino losses}
\label{sec_rates}
The adopted network of nuclear reactions involves all the important
reactions of the pp and CNO chains, and the most important
alpha-capture reactions for elements as heavy as Mg (see Bressan et
al.\ 1993 and Girardi et al. 2000).
The reaction rates are from the compilation of Caughlan \& Fowler
(1988), except for $^{17}{\rm O}({\rm
p},\alpha)^{14}{\rm N}$ and $^{17}{\rm O}({\rm p},\gamma)^{18}{\rm
F}$, for which we use the determinations by
Landr\'e et al.\ (1990). The
uncertain $^{12}$C($\alpha,\gamma$)$^{16}$O rate was set to 1.7 times
the values given by Caughlan \& Fowler (1988), as indicated by the
study of Weaver \& Woosley (1993) on the nucleosynthesis by massive
stars.The adopted rate is consistent with the recent determination by
Kunz et al. (2002) within the uncertainties.
The electron screening factors for all reactions are those
from Graboske et al.\ (1973).
The abundances of the various elements are evaluated with the aid of a
semi-implicit extrapolation scheme, as described in Marigo et al. (2001).
The energy losses by neutrinos are from Haft et al. (1994). Compared with the
previous ones by Munakata et al.\ (1985) used in Girardi et al. (2000),
neutrino cooling during the RGB is more efficient.
\subsection{Convection}
\label{sec_conv}
The extension of the convective regions is
determined considering the presence of convective overshoot.
The energy transport in the outer convection zone is described
according to the mixing-length theory (MLT) of B\"ohm-Vitense (1958). The
mixing length parameter $\alpha$ is calibrated by means of the solar model.
{\em Overshoot} The extension of the convective regions takes into account
overshooting from the borders of both core and envelope convective zones.
In the following we adopt the formulation by Bressan et al. (1981) in which
the boundary of the convective core is set at the layer where the velocity
(rather than the acceleration) of convective elements vanishes. This
non-local treatment of convection also requires a free parameter related to
the mean free path $l$ of convective elements through $l=\Lambda_c H_p$
($H_p$ being the pressure scale height). The choice of this parameter
determines the extent of the overshooting
{\em across} the border of the classical core (determined by the Schwarzschild
criterion).
Other authors fix the extent
of the overshooting zone at the distance $d=\Lambda_c H_p$ from the border
of the convective core (Schwarzschild criterion).
The $\Lambda_c$ parameter in the Bressan et al.\ (1981)
formalism is not equivalent to others found in the literature. For
instance, the overshooting extension determined by $\Lambda_c =0.5$ with
the Padova formalism roughly corresponds to the $d=0.25\,H_p$
{\em above} the convective border adopted by the Geneva group
(Meynet et al.\ 1994 and references therein)
to describe the same physical phenomenon.
This also applies to the extent of overshooting in the case of the models
computed by the Teramo group (Pietrinferni et al. 2004) and in the Yale-Yonsei
database (Yi et al. 2001 and Demarque et al. 2004), as they adopted the value
0.2 for the overshoot parameter {\em above} the border of the classical
convective core. In the Victoria-Regina stellar models of
VandenBerg et al. (2006) a free parameter varying with mass is adopted to take
into account overshoot in the context of Roxburgh's equation for the maximum
size of a central convective zone, so the comparison is not straightforward.
The non-equivalence of the parameters
used to describe the extension of convective overshooting by different groups
has been a recurrent source of misunderstanding in the literature.
We adopt the same prescription as in Girardi et al. (2000) for the
parameter $\Lambda_c$ as a function of stellar mass.
$\Lambda_c$ is set to zero for stellar masses $M\le1.0$~\mbox{$M_{\odot}$}, and
for $M\ge 1.5 \mbox{$M_{\odot}$}$ we adopt $\Lambda_c=0.5$,
i.e.\ a moderate amount of overshooting.
In the range $1.0<M<1.5 \mbox{$M_{\odot}$}$, we adopt a gradual increase of the overshoot
efficiency with mass, i.e.\ $\Lambda_c=M/\mbox{$M_{\odot}$} - 1.0$. This is
because the calibration of the overshooting efficiency in this mass
range is still very uncertain, and observations indicate
that this efficiency should be lower than in intermediate-mass stars.
In the stages of core helium burning (CHeB), the value $\Lambda_c=0.5$ is
used for all stellar masses. This amount of overshooting
significantly reduces the extent of the breathing pulses of convection
found in the late phases of CHeB (see Chiosi et al.\ 1992).
Overshooting at the lower boundary of
convective envelopes is also considered.
The calibration of the solar model required
an envelope overshooting not higher than 0.25 pressure scale
height. This value of $\Lambda_e=0.25$ (see Alongi et al.\
1991) was then adopted for the
stars with $0.6\le(M/\mbox{$M_{\odot}$})<2.0$, whereas $\Lambda_e=0$ was
adopted for $M \la 0.6\mbox{$M_{\odot}$}$.
For $M>2.5\mbox{$M_{\odot}$}$ a value of $\Lambda_e=0.7$ was assumed, as in
Bertelli et al.\ (1994) and Girardi et al. (2000). Finally, for masses
between 2.0 and $2.5 \mbox{$M_{\odot}$}$, $\Lambda_e$ was allowed to increase gradually
from 0.25 to 0.7, except for the helium-burning evolution, where it is always
set to 0.7.
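For reference, the mass dependence of the two overshooting parameters
described above can be summarized by the following minimal Python sketch.
It is only an illustration of the prescription given in the text; in
particular, the linear ramp of $\Lambda_e$ between 2.0 and 2.5~\mbox{$M_{\odot}$}\ is
our reading of the ``gradual increase'' quoted above.
\begin{verbatim}
def lambda_core(mass, core_heb=False):
    # Core overshooting parameter Lambda_c as a function of the
    # stellar mass (solar units); core_heb selects core He burning.
    if core_heb:
        return 0.5                 # all masses during CHeB
    if mass <= 1.0:
        return 0.0                 # radiative-core regime
    if mass >= 1.5:
        return 0.5                 # moderate overshooting
    return mass - 1.0              # linear ramp for 1.0 < M < 1.5

def lambda_env(mass, core_heb=False):
    # Envelope overshooting parameter Lambda_e (units of H_p).
    if mass < 0.6:
        return 0.0
    if mass < 2.0:
        return 0.25                # solar-calibrated value
    if core_heb:
        return 0.7                 # always 0.7 during He burning
    if mass <= 2.5:                # assumed-linear ramp 0.25 -> 0.7
        return 0.25 + 0.45 * (mass - 2.0) / 0.5
    return 0.7
\end{verbatim}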
The adopted approach to overshooting perhaps does not
represent the complex situation found in real stars. However it represents a
pragmatic choice, supported by comparisons with observations. We also recall
that the question of
the overshooting efficiency in stars of different masses is still a matter of
debate (see the recent discussions by Claret 2007 and by Deng \& Xiong
2007).
{\em Helium semiconvection} As He-burning proceeds in the convective core of
low mass stars during the early stages of the horizontal branch (HB),
the size of the convective core increases.
Once the central value of helium falls below $Y_c =0.7$, the
temperature gradient reaches a local minimum, so that continued overshoot is
no longer able to restore the neutrality condition at the border of the core.
The core splits into an inner convective
core and an outer convective shell. As further helium is captured by the
convective shell, the latter tends to become stable, leaving behind a region
of varying composition in a condition of neutrality.
This zone is called semiconvective.
In short, starting from the center and going outwards, the matter
in the radiatively stable region above the formal convective core is mixed
layer by layer until the neutrality condition is achieved. This picture holds
during most of the central He-burning phase.
The extension of the semiconvective region
varies with the stellar mass, being important in the low- and intermediate-mass
stars up to about $5 M_\odot$, and negligible in more massive stars.
We followed the scheme by Castellani et al. (1985), as described in Bressan
et al. (1993).
{\em Breathing convection} As central helium gets as low as $Y_c \simeq 0.1$,
the enrichment of fresh helium caused by semiconvective mixing enhances the
rate of energy produced by the $3\alpha$ reactions in such a way that the
radiative gradient at the convective edge is increased. A very rapid
enlargement followed by an equally rapid decrease of the fully convective core
takes place (pulse of convection). Several pulses may occur before the
complete exhaustion of the central helium content. This convective instability
was called breathing convection by Castellani et al. (1985).
While semiconvection is a true theoretical prediction, the breathing pulses
are most likely an artifact of the idealized algorithm used to describe mixing
(see Bressan et al. 1993 and references therein). In our code breathing pulses
are suppressed by imposing that the core cannot be enriched in helium by mixing
with the outer layers by more than a fixed fraction $F$ of the amount burnt by
nuclear reactions. In this way a time dependence of convection is implicitly
taken into account.
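Schematically, the constraint can be written as in the following Python
sketch; the value of $F$ shown here is purely illustrative, since the
adopted value is not quoted in the text.
\begin{verbatim}
def allowed_core_enrichment(dY_mixing, dY_burnt, F=0.1):
    # Cap the helium brought into the core by mixing with the outer
    # layers to a fraction F of the helium burnt by nuclear
    # reactions during the same time step (both inputs positive).
    return min(dY_mixing, F * dY_burnt)
\end{verbatim}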
\subsection{Calibration of the solar model}
\label{sec_sun}
Recent models of the Sun's evolution and interior show that currently observed
photospheric abundances (relative to hydrogen) must be lower than those of the
proto-Sun because helium and other heavy elements have settled toward the Sun's
interior since the time of its formation about 4.6 Gyr ago.
The recent update of the solar chemical composition (Asplund et al. 2005, 2006;
Montalban et al. 2006) has led to a decrease in the CNO and Ne abundances
of the order of $30 \%$. Their new solar chemical composition corresponds to
the values: $ X=0.7393, Y=0.2485, Z=0.0122$ with $Z/X=0.0165$.
The corresponding decrease in opacity dramatically increases
the discrepancies between the sound speed derived from
helioseismology and that of the new standard solar model (SSM). Combinations of
increases in opacity and diffusion rates are able to restore part of the
sound-speed profile agreement, but the required changes are larger than the
accepted uncertainties in the opacity and in the diffusion rates.
Antia and Basu (2006) investigated the possibility of determining the solar
heavy-element abundance from helioseismic data, using the dimensionless
sound-speed derivative in the solar convection zone, and obtained a mean value of
$Z=0.0172 \pm 0.002$.
Bahcall et al.\ (2006) used Monte Carlo simulations to determine the
uncertainties
in solar model predictions of parameters measured by helioseismology (depth of
the convective zone, the surface helium abundance, the profiles of the sound
speed and density versus radius) and provided determinations of the correlation
coefficients of the predicted solar model neutrino fluxes. They incorporated
both the recently determined Asplund et al. heavy-element abundances and the
higher abundances of Grevesse \& Sauval (1998). Their Table 7 shows that
the derived characteristic solar model quantities are significantly different
for the two cases.
According to VandenBerg et al. (2007), the new Asplund et al. metallicity for
the Sun presents some difficulties for fits of solar-abundance models to the
M67 CMD, in that they do not predict the gap near the turnoff that is actually
observed. If the Asplund et al. solar abundances are correct, only those low-Z
solar models that treat diffusion may be able to reproduce the M67 CMD.
Owing to the many uncertainties,
we adopted for the Sun the initial metallicity of $Z=0.017$, according to
Grevesse \& Sauval (1998). Our choice is a compromise between the previous
solar metal content, Z=0.019 or 0.020, usually considered for evolutionary
models, and the significantly lower value supported by Asplund et al.
(2005, 2006).
The usual procedure for the calibration of the solar model is to compute a
number of models while varying the MLT parameter
$\alpha$ and the initial helium content $Y_0$ until the observed solar
radius and luminosity are reproduced within a predetermined range. Several
1~\mbox{$M_{\odot}$}\ models, for different values of the
mixing-length parameter $\alpha$ and helium content $Y_\odot$, are evolved
up to the age of 4.6~Gyr. From this series of models we
single out the pair $[\alpha, Y_\odot]$ which allows
a simultaneous match of the present-day solar radius and
luminosity, $R_\odot$ and $L_\odot$.
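A minimal sketch of such a calibration loop follows; here
\verb$evolve_sun$ is a hypothetical stand-in for a full evolutionary run of
a 1~\mbox{$M_{\odot}$}\ model up to 4.6 Gyr, returning the present-day radius and
luminosity in solar units.
\begin{verbatim}
import itertools

def calibrate_sun(evolve_sun, alphas, helium_values, tol=0.002):
    # Grid search for the (alpha, Y) pair reproducing the Sun;
    # tol ~ 0.2% corresponds to the matches quoted below.
    best = (float("inf"), None, None)
    for alpha, Y in itertools.product(alphas, helium_values):
        R, L = evolve_sun(alpha, Y)    # (R/Rsun, L/Lsun) at 4.6 Gyr
        if abs(R - 1.0) < tol and abs(L - 1.0) < tol:
            return alpha, Y            # acceptable solar model
        err = (R - 1.0) ** 2 + (L - 1.0) ** 2
        best = min(best, (err, alpha, Y))
    return best[1], best[2]            # otherwise, closest pair
\end{verbatim}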
An additional constraint
for the solar model comes from the helioseismological
determination of the depth of the outer solar convective zone
($R_c$ of the order of $0.710-0.716$ $R_\odot$) and from the surface helium
value.
The envelope overshooting parameter was adopted as $\Lambda_e=0.25$, which
allows a reasonable reproduction of the observed value of $R_c$ (depth of the
solar convective envelope).
Our final solar model reproduces well the solar $R_\odot$, $L_\odot$,
and $R_{\rm c}$ values. The differences with respect to observed values are
smaller than 0.2~\% for $R_\odot$ and $L_\odot$, and $\sim1$~\% for
$R_{\rm c}$. From
this model with initial $Z_{\odot}=0.017$ we derive the value of initial helium
$Y_{\odot}=0.260$ and $\alpha = 1.68$ (this value of the mixing-length
parameter was used in all our stellar models as described previously).
\subsection {Mass loss on the RGB}
\label{sec_rgbagb}
Our evolutionary models were computed at constant mass for all stages
prior to the TP-AGB. However, mass loss by stellar wind during the RGB
of low-mass stars is taken into account at the stage of isochrone
construction. We use the empirical formulation by Reimers (1975), with the
mass-loss rate expressed as a function of the free parameter $\eta$,
assumed by default to be 0.35 in our models (see Renzini \& Fusi-Pecci 1988).
The procedure is basically the following: we integrate the mass-loss
rate along the RGB of every single track, in order to estimate the actual
mass of the corresponding ZAHB model. See Bertelli et al. (1994) for
a more detailed description.
This approximation is a good one, since mass loss does not significantly
affect the internal structure of the models along the luminous part
of the RGB.
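For illustration, a minimal Python sketch of this a-posteriori integration
is given below, using the standard Reimers scaling
$\dot M = 4\times 10^{-13}\,\eta\,(L/L_\odot)(R/R_\odot)/(M/M_\odot)$~\mbox{$M_{\odot}$}/yr;
the track data structure is a hypothetical convention of ours.
\begin{verbatim}
def reimers_mdot(L, R, M, eta=0.35):
    # Reimers (1975) rate in Msun/yr; L, R, M in solar units.
    return 4.0e-13 * eta * L * R / M

def zahb_mass(initial_mass, rgb_track, eta=0.35):
    # rgb_track: list of (age_yr, L, R) tuples along the RGB, in
    # order of increasing age; trapezoidal integration of the rate.
    m = initial_mass
    for (t0, L0, R0), (t1, L1, R1) in zip(rgb_track, rgb_track[1:]):
        mdot = 0.5 * (reimers_mdot(L0, R0, m, eta)
                      + reimers_mdot(L1, R1, m, eta))
        m -= mdot * (t1 - t0)
    return m                       # estimated ZAHB mass
\end{verbatim}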
\section{New synthetic TP-AGB models}
An important update of the database of evolutionary tracks is the extension
of stellar models and isochrones until the end of the thermal pulses along
the Asymptotic Giant Branch, which is particularly relevant for stellar population
analyses in the near-infrared, where the contribution of AGB stars to the
integrated photometric properties is significant.
The new synthetic TP-AGB models
have been computed with the aid of a synthetic code that has been
recently revised and updated in many aspects as described in the paper
by Marigo \& Girardi (2007), to which the reader should refer for
all details.
It should be emphasized that the synthetic TP-AGB model in use does not
provide a mere analytic description of this phase, since one key
ingredient is
a complete static envelope model which is numerically integrated from the
photosphere down to the core. The basic structure of the envelope model
is the same as the one used in the Padova stellar evolution code.
The major improvements are briefly recalled below.
\begin{itemize}
\item
{\em{Luminosity.}} Thanks to high-accuracy formalisms (Wagenhuber
\& Groenewegen 1998; Izzard et al. 2004) we follow the complex
behaviour of the luminosity due to the flash-driven variations
and the over-luminosity effect caused by hot-bottom burning (HBB).\\
\item
{\em{Effective temperature.}} One fundamental improvement is the
adoption of {\em molecular opacities} coupled to the actual surface
C/O ratio (Marigo 2002), in place of tables valid for scaled solar
chemical compositions (e.g. Alexander \& Ferguson 1994). The most
significant effect is the drop of the effective temperature as soon
as C/O increases above unity as a consequence of the third dredge-up,
in agreement with observations of carbon stars.\\
\item
{\em{ The third dredge-up.}} We use a more realistic description
of the process as a function of stellar mass and metallicity, with the
aid of available analytic relations of the characteristic parameters, i.e.
minimum core mass $M_{\rm c}^{\rm min}(M,Z)$ and efficiency
$\lambda(M,Z)$, as derived from full AGB calculations (Karakas et al. 2002).\\
\item
{\em{ Pulsation properties.}} A first attempt is made
to account for the switching between different pulsation modes
along the AGB evolution. Based on available pulsation models for
long period variables (LPV; Fox \& Wood 1982, Ostlie \& Cox 1986) we
derive a criterion on the luminosity to predict the transition from
the first overtone mode to the fundamental one (and vice versa).\\
\item
{\em{ Mass-loss rates.}} These are evaluated with the aid
of different formalisms, based on dynamical atmosphere models for
LPVs, depending not only on stellar mass, luminosity, effective
temperature, and pulsation period, but also on the chemical type, namely:
Bowen \& Willson (1991) for C/O $<1$, and Wachter et al. (2002) for C/O $>1$.
\end{itemize}
It should also be emphasized that in Marigo \& Girardi (2007)
the free parameters
of the synthetic TP-AGB model -- i.e. the minimum core mass
$M_{\rm c}^{\rm min}$
and efficiency $\lambda$ -- were calibrated as a function of stellar mass
and metallicity, on the basis of two basic observables derived for
both Magellanic Clouds (MC), namely:
i) the counts of AGB stars (for both M- and C-types) in stellar
clusters and ii) the carbon star luminosity functions
in galaxy fields.
As illustrated by Girardi \& Marigo (2007), the
former observable quantifies the TP-AGB lifetimes for both
the oxygen- and carbon-rich phase as a function of stellar mass and
metallicity and it helps to determine $M_{\rm c}^{\rm min}$. The
latter observable provides complementary constraints to both
$M_{\rm c}^{\rm min}$ and $\lambda$ and their
dependence on stellar mass and metallicity.
The calibration carried out by Marigo \& Girardi (2007)
has been mainly motivated by the aim of
ensuring that, for that set of TP-AGB tracks,
the predicted luminosities and lifetimes,
hence the nuclear fuel of this phase, are correctly evaluated.
The same dredge-up parameters as derived by Marigo \& Girardi (2007)
have been adopted for computing the present sets
of TP-AGB tracks, despite the fact that they
are characterized by
different initial conditions at the first thermal pulse
(e.g. $M_{\rm c},\, T_{\rm eff},\, L$) and different
chemical compositions of the envelope. This
fact implies that the new sets of TP-AGB tracks are, to a certain extent,
un-calibrated, since no attempt was made to reproduce our basic set of
observables.
However, the choice to keep the same model parameters as derived from
the Marigo \& Girardi (2007) calibration is still a meaningful option, since the present
TP-AGB models a) already include several important improvements in the
treatment of the TP-AGB phase, as recalled at the beginning of this
section; and b) constitute the reference grid from which we will
start the new calibration procedure. The preliminary step will be to
adopt a reasonable enrichment law, i.e. $Y_P$ (zero point) and
$\Delta Y/\Delta Z$ (slope), so as to limit the calibration of the free
parameters to TP-AGB models belonging to a particular subset of
initial ($Y,\,Z$) combinations. This work is being undertaken.
\begin{figure*}
\begin{minipage}{0.45\textwidth} \noindent a)
\resizebox{\hsize}{!}{\includegraphics{fig1a.eps}}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth} \noindent b)
\resizebox{\hsize}{!}{\includegraphics{fig1b.eps}}
\end{minipage}
\begin{minipage}{0.45\textwidth} \noindent c)
\resizebox{\hsize}{!}{\includegraphics{fig1c.eps}}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth} \noindent d)
\resizebox{\hsize}{!}{\includegraphics{fig1d.eps}}
\end{minipage}
\caption{
Evolutionary tracks in the HR diagram for the composition $[Z=0.001,
Y=0.23]$: tracks of low-mass stars up to the RGB-tip
and of intermediate-mass ones ($M \le 2.5 M_{\odot}$) up to the beginning of the
TP-AGB (bTP-AGB) in panel a),
and tracks from the ZAHB up to the bTP-AGB in panel b). In panels c) and d) tracks
are displayed for $[Z=0.001,Y=0.40]$.
}
\label{hrd_z001}
\end{figure*}
\begin{figure*}
\begin{minipage}{0.45\textwidth} \noindent a)
\resizebox{\hsize}{!}{\includegraphics{fig2a.eps}}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth} \noindent b)
\resizebox{\hsize}{!}{\includegraphics{fig2b.eps}}
\end{minipage}
\begin{minipage}{0.45\textwidth} \noindent c)
\resizebox{\hsize}{!}{\includegraphics{fig2c.eps}}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth} \noindent d)
\resizebox{\hsize}{!}{\includegraphics{fig2d.eps}}
\end{minipage}
\caption{
Evolutionary tracks in the HR diagram for the composition $[Z=0.040,
Y=0.26]$: tracks of low-mass stars up to the RGB-tip
and of intermediate-mass ones up to the bTP-AGB in panel a),
and tracks from the ZAHB up to the bTP-AGB in panel b). In panels c) and d) tracks
are displayed for $[Z=0.040,Y=0.40]$.
}
\label{hrd_z040}
\end{figure*}
\begin{figure*}
\begin{minipage}{0.45\textwidth} \noindent a)
\resizebox{\hsize}{!}{\includegraphics{fig3a.eps}}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth} \noindent b)
\resizebox{\hsize}{!}{\includegraphics{fig3b.eps}}
\end{minipage}
\caption{
Evolutionary tracks in the HR diagram, for the composition $[Z=0.001,
Y=0.23$ (dot-dashed line) and $ Y=0.40$ (solid line)] in panel a).
In panel b) tracks are displayed for $[Z=0.040, Y=0.26$ (dot-dashed line)
and $Y=0.40$ (solid line)].
}
\label{hrd_y23y40}
\end{figure*}
\section{Stellar tracks}
\label{sec_tracks}
\subsection{Evolutionary stages and mass ranges}
\label{sec_massranges}
Our models are evolved from the ZAMS and
the evolution through the whole
H- and He-burning phases is followed in detail. The
tracks are stopped at the beginning of the TP-AGB phase (bTP-AGB)
in intermediate- and low-mass stars. In the
case of stellar masses lower than 0.6~\mbox{$M_{\odot}$}, the main sequence
evolution takes place on time scales much longer than a Hubble time.
For them, we stopped the computations at an age of about 20~Gyr.
In low-mass stars with $0.6 M_{\odot} \le M \le M_{Hef}$ the evolution is
interrupted at the stage of He-flash in the electron degenerate
hydrogen-exhausted core.
The evolution is
then re-started from a ZAHB model with the same core mass and surface
chemical composition as the last RGB model.
For the core we compute the total energy difference between the last RGB
model and the ZAHB configuration and assume it has been provided by helium
burning during the helium flash phase. In this way a certain amount of the
helium in the core is converted into carbon (about 3 percent in mass, depending
on the stellar mass and initial composition),
and the initial ZAHB model takes into account the
approximate amount of nuclear fuel necessary to lift the core degeneracy
during the He-flash. The evolution is then followed along the HB up to the
bTP-AGB phase. We point out that this procedure is more detailed than
the one adopted in Girardi et al. (2000), where the fraction of He core burnt
during the He-flash was assumed to be constant and equal to 5\%.
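The energy bookkeeping can be sketched as follows (Python); the specific
energy release of helium burning, about $5.9\times10^{17}$ erg g$^{-1}$, is
a standard number, while the input energies stand in for quantities taken
from the models.
\begin{verbatim}
Q_3ALPHA = 5.9e17    # erg per gram of He converted into C (approx.)

def flash_carbon_fraction(E_zahb, E_last_rgb, core_mass_g):
    # E_zahb, E_last_rgb: total energies (erg) of the ZAHB
    # configuration and of the last RGB model; their difference is
    # assumed to have been supplied by He burning during the flash.
    delta_E = E_zahb - E_last_rgb
    m_burnt = delta_E / Q_3ALPHA          # grams of He converted
    return m_burnt / core_mass_g          # ~3 per cent, per the text
\end{verbatim}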
In intermediate-mass stars, the evolution goes from the ZAMS up to
either the beginning of the TP-AGB phase, or to the carbon ignition in
our more massive models (in this paper we consider masses up to $2.5~\mbox{$M_{\odot}$}$).
Table 2 gives the values of the transition mass
\mbox{$M\sub{Hef}$}\, as derived from the present tracks. \mbox{$M\sub{Hef}$}\ is
the maximum mass for a star to develop an electron-degenerate core
after the main sequence, and sets the limit between low- and
intermediate-mass stars (see e.g. Chiosi et al.\ 1992).
Given the small mass separation between the computed tracks,
the \mbox{$M\sub{Hef}$}\ values presented here are uncertain by about 0.05~\mbox{$M_{\odot}$}.
\begin{table}
\caption{The transition mass \mbox{$M\sub{Hef}$}}
\label{tab_mhef}
\begin{tabular}{lllllll}
\noalign{\smallskip}\hline\noalign{\smallskip}
& \multicolumn{6}{c}{$Y$} \\
\cline{2-7}
\noalign{\smallskip}
$Z$ & 0.23 & 0.26 & 0.30 & 0.34 & 0.40 & 0.46 \\
\noalign{\smallskip}\hline\noalign{\smallskip}
0.0001 & 1.70 & 1.70 & 1.60 & & 1.40 & \\
0.0004 & 1.70 & 1.60 & 1.50 & & 1.40 & \\
0.001 & 1.60 & 1.60 & 1.50 & & 1.40 & \\
0.002 & 1.70 & 1.60 & 1.50 & & 1.40 & \\
0.004 & 1.80 & 1.75 & 1.60 & & 1.40 & \\
0.008 & 1.90 & 1.85 & 1.75 & 1.60 & 1.40 & \\
0.017 & 2.10 & 2.00 & 1.90 & 1.70 & 1.60 & \\
0.040 & & 2.20 & 2.05 & 1.90 & 1.70 & 1.40\\
0.070 & & & 2.05 & 1.80 & 1.60 & 1.40\\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
\subsection{Tracks in the HR diagram}
\label{sec_hrd}
The complete set of tracks for very low-mass stars ($M<0.6$~\mbox{$M_{\odot}$}) is
computed starting at
a stage identified with the ZAMS, and ends at the age of 20 Gyr. The ZAMS
model is defined to be the stage of minimum \mbox{$T\sub{eff}$}\ along the computed
track; it follows a stage of much faster evolution in which the pp-cycle
is out of equilibrium, and in which gravitation may provide a
non-negligible fraction of the radiated energy.
It is well known that these stars evolve very little during the Hubble time.
We display in Figures 1 and 2
only some examples of the sets of
the computed evolutionary tracks (i.e. $[Z=0.001$ for $Y=0.23$ and $Y=0.40]$
and $[Z=0.040$ for $Y=0.26$ and $Y=0.40]$). In these
figures, panels a) and c) present the tracks for masses between 0.6 and
2.5 $M_{\odot}$ from the ZAMS up to the RGB-tip or to the bTP-AGB, while in
panels b) and d) the low-mass tracks from the ZAHB up to the bTP-AGB phase are
plotted, from $0.55 M_{\odot}$ to $M_{Hef}$.
Figure 3 aims to illustrate the range of temperatures and
luminosities involved for the boundary values of helium content at given
metallicity in the theoretical HR diagram.
The reader can notice, for instance, that at low metallicity and high helium
content (Z=0.001, Y=0.40) loops begin to be present during the core He-burning phase,
while they are
practically missing in the $Z=0.040$ tracks for masses around $2.5 M_{\odot}$.
\subsection{Description of the tables}
\label{sec_tabletrack}
The data tables for the present evolutionary tracks are available only
in electronic format. A website with the complete data-base
(including additional data and future
extensions) will be maintained at \verb$http://stev.oapd.inaf.it/YZVAR$.
For each evolutionary track, the corresponding data file presents
23 columns with the following information:
\begin{enumerate}
\item \verb$n$: row number;
\item \verb$age/yr$: stellar age in yr;
\item \verb$logL$: logarithm of surface luminosity (in solar units),
\mbox{$\log(L/L_{\odot})$};
\item \verb$logTef$: logarithm of effective temperature (in K),
\mbox{$\log T\sub{eff}$};
\item \verb$grav$: logarithm of surface gravity (in cgs units);
\item \verb$logTc$: logarithm of central temperature (in K);
\item \verb$logrho$: logarithm of central density (in cgs units);
\item \verb$Xc$: mass fraction of hydrogen in the stellar centre;
\item \verb$Yc$: mass fraction of helium in the stellar centre;
\item \verb$Xc_C$: mass fraction of carbon in the stellar centre;
\item \verb$Xc_O$: mass fraction of oxygen in the stellar centre;
\item \verb$Q_conv$: fractional mass of the convective core;
\item \verb$Q_disc$: fractional mass of the first mesh point where
the chemical composition differs from the surface value;
\item \verb$L_H/L$: fraction of the total luminosity provided by
H-burning reactions;
\item \verb$Q1_H$: fractional mass of the inner border of the
H-rich region;
\item \verb$Q2_H$: fractional mass of the outer border of the
H-burning region;
\item \verb$L_He/L$: fraction of the total luminosity provided by
He-burning reactions;
\item \verb$Q1_He$: fractional mass of the inner border of the
He-burning region;
\item \verb$Q2_He$: fractional mass of the outer border of the
He-burning region;
\item \verb$L_C/L$: fraction of the total luminosity provided by
C-burning reactions;
\item \verb$L_nu/L$: fraction of the total luminosity lost by
neutrinos;
\item \verb$Q_Tmax$: fractional mass of the point with the highest
temperature inside the star;
\item \verb$stage$: label indicating particular evolutionary stages.
\label{item_stage}
\end{enumerate}
A number of evolutionary stages are indicated along the tracks
(column 23). They correspond to the
beginning/end of the main evolutionary stages, to local maxima and minima of
$L$ and \mbox{$T\sub{eff}$}, and to the main changes of slope in the HR diagram. These
particular stages were, in general, detected by an automated
algorithm. They can be useful for the derivation of physical
quantities (e.g.\ lifetimes) as a function of mass and metallicity,
and are actually used as equivalent evolutionary points in our
isochrone-making routines.
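As an illustration of the table format, the following Python sketch reads
one such track file into a list of records. The column names follow the
enumeration above; the details of the file layout (header lines,
separators) are assumptions on our part.
\begin{verbatim}
TRACK_COLUMNS = (
    "n", "age/yr", "logL", "logTef", "grav", "logTc", "logrho",
    "Xc", "Yc", "Xc_C", "Xc_O", "Q_conv", "Q_disc", "L_H/L",
    "Q1_H", "Q2_H", "L_He/L", "Q1_He", "Q2_He", "L_C/L",
    "L_nu/L", "Q_Tmax", "stage",
)

def read_track(path):
    # Read a whitespace-separated, 23-column track file into a list
    # of dictionaries; all fields but the `stage` label are floats.
    rows = []
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            if len(fields) != len(TRACK_COLUMNS):
                continue           # skip possible headers/comments
            row = dict(zip(TRACK_COLUMNS, fields))
            for key in TRACK_COLUMNS[:-1]:
                row[key] = float(row[key])
            rows.append(row)
    return rows
\end{verbatim}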
For TP-AGB tracks during the quiescent stages of evolution preceding He-shell
flashes, our tables provide:
\begin{enumerate}
\item\verb$n$: row number;
\item\verb$age/yr$: stellar age in yr;
\item\verb$logL$: logarithm of surface luminosity (in solar units),
\mbox{$\log(L/L_{\odot})$};
\item\verb$logTef$: logarithm of effective temperature (in K),
\mbox{$\log T\sub{eff}$};
\item\verb$Mact$: the current mass (in solar units);
\item\verb$Mcore$: the mass of the H-exhausted core (in solar units);
\item\verb$C/O$: the surface C/O ratio.
\end{enumerate}
\subsection{Changes in surface chemical composition}
\label{sec_chemical}
The surface chemical composition of the stellar models changes
at two well-defined dredge-up events. The first one occurs during
the first ascent of the RGB for all stellar models (except for
the very low-mass ones, which are not evolved out of the main
sequence). The second dredge-up is found after the core
He-exhaustion, being remarkable only in models with
$M\ga3.5$~\mbox{$M_{\odot}$}. We provide tables with the surface chemical
composition of H, $^3$He, $^4$He, and main CNO isotopes, before
and after the first dredge-up, as in this paper we present models
from $0.15$ up to $2.5 M_{\odot}$.
Table 3 shows, as an example, the surface abundances for
the chemical composition Z=0.008 and Y=0.26.
\begin{table*}
\caption{Surface chemical composition (by mass fraction) of
$[Z=0.008, Y=0.26]$ models.}
\label{tab_du}
\begin{tabular}{lllllllllll}
\noalign{\smallskip}\hline\noalign{\smallskip}
$M/\mbox{$M_{\odot}$}$ & H & $^3$He & $^4$He & $^{12}$C & $^{13}$C & $^{14}$N & $^{15}$N & $^{16}$O & $^{17}$O & $^{18}$O \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multicolumn{11}{l}{Initial:} \\
all & 0.732 & 2.78$\:10^{-5}$ & 0.260 & 1.37$\:10^{-3}$ & 1.65$\:10^{-5}$ & 4.24$\:10^{-4}$ & 1.67$\:10^{-6}$ & 3.85$\:10^{-3}$ & 1.56$\:10^{-6}$ & 8.68$\:10^{-6}$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\multicolumn{11}{l}{After the first dredge-up:} \\
0.55 & 0.719 & 2.65$\:10^{-3}$ & 0.270 & 1.27$\:10^{-3}$ & 3.99$\:10^{-5}$ & 5.10$\:10^{-4}$ & 1.34$\:10^{-6}$ & 3.85$\:10^{-3}$ & 1.57$\:10^{-6}$ & 8.55$\:10^{-6}$ \\
0.60 & 0.719 & 2.94$\:10^{-3}$ & 0.270 & 1.26$\:10^{-3}$ & 2.91$\:10^{-5}$ & 5.44$\:10^{-4}$ & 1.41$\:10^{-6}$ & 3.85$\:10^{-3}$ & 1.66$\:10^{-6}$ & 8.31$\:10^{-6}$ \\
0.65 & 0.715 & 2.34$\:10^{-3}$ & 0.274 & 1.30$\:10^{-3}$ & 3.49$\:10^{-5}$ & 4.91$\:10^{-4}$ & 1.38$\:10^{-6}$ & 3.85$\:10^{-3}$ & 1.59$\:10^{-6}$ & 8.52$\:10^{-6}$ \\
0.70 & 0.715 & 2.40$\:10^{-3}$ & 0.274 & 1.29$\:10^{-3}$ & 3.27$\:10^{-5}$ & 4.96$\:10^{-4}$ & 1.40$\:10^{-6}$ & 3.85$\:10^{-3}$ & 1.63$\:10^{-6}$ & 8.45$\:10^{-6}$ \\
0.80 & 0.712 & 1.83$\:10^{-3}$ & 0.278 & 1.26$\:10^{-3}$ & 4.48$\:10^{-5}$ & 5.26$\:10^{-4}$ & 1.31$\:10^{-6}$ & 3.85$\:10^{-3}$ & 1.61$\:10^{-6}$ & 8.46$\:10^{-6}$ \\
0.90 & 0.709 & 1.43$\:10^{-3}$ & 0.281 & 1.18$\:10^{-3}$ & 4.52$\:10^{-5}$ & 6.13$\:10^{-4}$ & 1.22$\:10^{-6}$ & 3.85$\:10^{-3}$ & 1.68$\:10^{-6}$ & 8.24$\:10^{-6}$ \\
1.00 & 0.708 & 1.17$\:10^{-3}$ & 0.283 & 1.12$\:10^{-3}$ & 4.55$\:10^{-5}$ & 6.86$\:10^{-4}$ & 1.15$\:10^{-6}$ & 3.85$\:10^{-3}$ & 1.73$\:10^{-6}$ & 7.92$\:10^{-6}$ \\
1.10 & 0.708 & 9.56$\:10^{-4}$ & 0.283 & 1.07$\:10^{-3}$ & 4.54$\:10^{-5}$ & 7.40$\:10^{-4}$ & 1.09$\:10^{-6}$ & 3.85$\:10^{-3}$ & 1.84$\:10^{-6}$ & 7.64$\:10^{-6}$ \\
1.20 & 0.710 & 8.20$\:10^{-4}$ & 0.281 & 1.03$\:10^{-3}$ & 4.56$\:10^{-5}$ & 7.90$\:10^{-4}$ & 1.03$\:10^{-6}$ & 3.85$\:10^{-3}$ & 2.25$\:10^{-6}$ & 7.40$\:10^{-6}$ \\
1.30 & 0.711 & 7.25$\:10^{-4}$ & 0.280 & 9.92$\:10^{-4}$ & 4.60$\:10^{-5}$ & 8.36$\:10^{-4}$ & 9.80$\:10^{-7}$ & 3.85$\:10^{-3}$ & 2.46$\:10^{-6}$ & 7.19$\:10^{-6}$ \\
1.40 & 0.712 & 6.41$\:10^{-4}$ & 0.279 & 9.50$\:10^{-4}$ & 4.59$\:10^{-5}$ & 8.85$\:10^{-4}$ & 9.25$\:10^{-7}$ & 3.85$\:10^{-3}$ & 3.14$\:10^{-6}$ & 6.94$\:10^{-6}$ \\
1.50 & 0.713 & 5.71$\:10^{-4}$ & 0.279 & 9.25$\:10^{-4}$ & 4.56$\:10^{-5}$ & 9.19$\:10^{-4}$ & 9.00$\:10^{-7}$ & 3.84$\:10^{-3}$ & 1.12$\:10^{-5}$ & 6.78$\:10^{-6}$ \\
1.60 & 0.712 & 5.22$\:10^{-4}$ & 0.280 & 9.05$\:10^{-4}$ & 4.54$\:10^{-5}$ & 9.68$\:10^{-4}$ & 8.81$\:10^{-7}$ & 3.80$\:10^{-3}$ & 1.50$\:10^{-5}$ & 6.68$\:10^{-6}$ \\
1.70 & 0.711 & 4.46$\:10^{-4}$ & 0.281 & 8.92$\:10^{-4}$ & 4.52$\:10^{-5}$ & 1.03$\:10^{-3}$ & 8.63$\:10^{-7}$ & 3.75$\:10^{-3}$ & 1.83$\:10^{-5}$ & 6.57$\:10^{-6}$ \\
1.80 & 0.709 & 3.99$\:10^{-4}$ & 0.283 & 8.81$\:10^{-4}$ & 4.51$\:10^{-5}$ & 1.09$\:10^{-3}$ & 8.50$\:10^{-7}$ & 3.70$\:10^{-3}$ & 1.59$\:10^{-5}$ & 6.50$\:10^{-6}$ \\
1.85 & 0.707 & 3.73$\:10^{-4}$ & 0.284 & 8.77$\:10^{-4}$ & 4.44$\:10^{-5}$ & 1.13$\:10^{-3}$ & 8.49$\:10^{-7}$ & 3.66$\:10^{-3}$ & 1.64$\:10^{-5}$ & 6.46$\:10^{-6}$ \\
1.90 & 0.706 & 3.54$\:10^{-4}$ & 0.285 & 8.77$\:10^{-4}$ & 4.43$\:10^{-5}$ & 1.14$\:10^{-3}$ & 8.46$\:10^{-7}$ & 3.64$\:10^{-3}$ & 1.81$\:10^{-5}$ & 6.44$\:10^{-6}$ \\
2.00 & 0.706 & 3.19$\:10^{-4}$ & 0.285 & 8.71$\:10^{-4}$ & 4.42$\:10^{-5}$ & 1.17$\:10^{-3}$ & 8.43$\:10^{-7}$ & 3.62$\:10^{-3}$ & 1.47$\:10^{-5}$ & 6.40$\:10^{-6}$ \\
2.20 & 0.702 & 2.58$\:10^{-4}$ & 0.289 & 8.55$\:10^{-4}$ & 4.44$\:10^{-5}$ & 1.26$\:10^{-3}$ & 8.22$\:10^{-7}$ & 3.54$\:10^{-3}$ & 1.37$\:10^{-5}$ & 6.31$\:10^{-6}$ \\
2.50 & 0.699 & 2.00$\:10^{-4}$ & 0.292 & 8.42$\:10^{-4}$ & 4.43$\:10^{-5}$ & 1.33$\:10^{-3}$ & 8.07$\:10^{-7}$ & 3.49$\:10^{-3}$ & 1.06$\:10^{-5}$ & 6.21$\:10^{-6}$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{table*}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{fig4.eps}}
\caption{Isochrones for a chemical composition intermediate among those of the
computed tracks (Z=0.0003, Y=0.25) at log Age = 10, 9.7, 9.4, 9.0, 8.9 and 8.7
years. The new synthetic TP-AGB models allow the extension of the isochrones
(red) until the end of the thermal pulses along the AGB (black).
}
\label{int_iso1}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{fig5.eps}}
\caption{Isochrones for the chemical composition Z=0.019 and Y=0.28,
intermediate among those of the computed tracks, at log Age = 10, 9.7, 9.4 and
9.0 years. The same as Figure 4 for the AGB.
}
\label{int_iso2}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{fig6.eps}}
\caption{Comparison of isochrones for the same Z=0.001 and different helium
content. Solid lines correspond to Y=0.23, dashed ones to Y=0.40.
Also in this case the isochrones are extended until the end of the thermal
pulses along the AGB. }
\label{isozy1y2}
\end{figure}
\section{Isochrones}
\label{sec_isochrones}
From the tracks presented in this paper, we have constructed isochrones
updating and improving the algorithm of ``equivalent evolutionary points''
used in Bertelli et al.\ (1994) and Girardi et al. (2000).
The initial point of each isochrone is the 0.15~\mbox{$M_{\odot}$}\ model in the
lower main sequence. The terminal stage of the isochrones is
the tip of the TP-AGB for the evolutionary tracks presented in this first
paper (up to 2.5 $M_{\odot}$). We recall that there are usually small
differences in the logarithm of the effective temperature between the last
model of the track after the central He-exhaustion and the starting point
of the synthetic TP-AGB model of the corresponding mass. In constructing the
isochrones we removed these small discontinuities with a suitable shift.
The differences arise as the Alexander \& Ferguson (1994) low-temperature
opacity tables, used for the evolutionary models, are replaced by those
provided by Marigo's (2002) algorithm from the beginning to the end of the
TP-AGB phase.
In Figures 4 and 5 isochrones are shown for the chemical compositions
(Z=0.0003, Y=0.25) and (Z=0.019, Y=0.28), intermediate
among those of the evolutionary tracks.
The interpolation method to obtain the isochrone for a specific composition
intermediate between the values of the computed tracks is described in the
following subsection.
In Figures 4, 5 and 6 the plotted isochrones are extended until the end of
the TP-AGB phase and a different line color points out this phase
in the theoretical HR diagram. The flattening of the AGB portion of the
isochrones marks the transition from M stars to carbon stars ($C/O > 1$).
An increase of the helium content at the same metallicity in stellar
models causes a decrease in the mean opacity and an increase in the
mean molecular weight of the envelope (Vemury and Stothers, 1978), and
in turn higher luminosities, hotter effective temperatures and shorter
hydrogen and helium lifetimes of stellar models.
Apparently in contradiction with the previous statement of
higher luminosity for increased helium in evolutionary tracks, the evolved
portion of the isochrones with lower helium is more luminous, as shown in
Figure 6, where we plot isochrones with Z=0.001 for Y=0.23 and Y=0.40.
This effect is related to the
interplay between the increase in luminosity and the decrease in lifetime
of stellar models with higher helium content (at the same mass and
metallicity).
\subsection{Interpolation scheme}
The program ZVAR, already used in many papers (for instance in Ng et
al. 1995, Aparicio et al. 1996, Bertelli \& Nasi 2001, Bertelli et
al. 2003) has been extended to obtain isochrones and to simulate
stellar populations in a large region of the $Z-Y$ plane (it is now
named YZVAR).
For each of a few discrete values of the metallicity of former
evolutionary tracks there was only one value of the helium content,
derived from the primordial helium with a fixed enrichment law. In
the present case we deal with 39 sets of stellar tracks in the
$Z-Y$ plane, and we aim at obtaining isochrones for any $Z-Y$
combination, as well as stellar populations with the required Y(Z)
enrichment laws. This problem requires a double interpolation in Y and
Z. We try to describe our method with the help of
Figure 7. In this figure the corners of the box
represent 4 different chemical compositions in a generic mesh of the
grid, i.e. $( Y_1,Z_1)$, $( Y_2,Z_1)$, $( Y_2,Z_2)$, $( Y_1,Z_2)$.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{fig7.eps}}
\caption{Interpolation scheme inside the range [Z1,Z2] and [Y1,Y2].}
\label{interpolation}
\end{figure}
The vertical lines departing from the corners represent the sequences of
increasing mass of the tracks computed for the corresponding values of
$(Y,Z)$. Along each of these lines
the big points represent the separation masses between tracks with a
different number of {\em characteristic} points (a different number of
equivalent evolutionary phases), indicated as
$M_{Y_1Z_1{\rm sep}}^{i}$, $M_{Y_2Z_1{\rm sep}}^{i}$,
$M_{Y_1Z_2{\rm sep}}^{i}$, $M_{Y_2Z_2{\rm sep}}^{i}$.
The separation masses are marked with the indexes $i$ and $i+1$ in
Figure 7.
Three intervals of mass are shown in
this figure, marked with big numbers identifying the iso-phase intervals,
which means that inside each of them the tracks are
characterized by the same number of equivalent evolutionary phases
\footnote{By phase we mean a particular stage of the evolution identified
by common properties both in the C-M diagram and in the internal structure of
the models.}. The different phases along the tracks are separated by
{\em characteristic} points, in the following indicated by the index $k$.
For a given mass $M$, age $t$ and chemical composition $(Y,Z)$ our
interpolation scheme has
to determine the luminosity and the effective temperature of the star. This is
equivalent to asserting that we look for the corresponding interpolated track.
The interpolation must be done inside the appropriate iso-phase intervals of
mass.
For this reason our first step is the identification of the separation masses
($M_{{\rm sep}}^{i},i=1,\ldots,n-1$), where $n$ marks the number of
iso-phase intervals ($n=3$ in the case of Figure 7). To do this we
interpolate in $Y$
between $M_{Y_1Z_1{\rm sep}}^{i}$ and $M_{Y_2Z_1{\rm sep}}^{i}$ obtaining
$M_{Z_1{\rm sep}}^{i}$. Analogously we obtain $M_{Z_2{\rm sep}}^{i}$
for the $Z_2$ value. Interpolating in $\log Z$ between $M_{Z_1{\rm sep}}^{i}$
and $M_{Z_2{\rm sep}}^{i}$ we obtain $M_{{\rm sep}}^{i}$. An analogous procedure
is followed for all values $i=1,\ldots,n-1$.
In the example of Fig. 7 the particular mass $M$ that we are
looking for turns out to be located between $M_{{\rm sep}}^{i}$ and
$M_{{\rm sep}}^{i+1}$ and in the corresponding iso-phase interval labelled
{\bf 2}. Reducing to the essentials, we proceed according to the following
scheme (a code sketch is given after the list):
\begin{itemize}
\item
Definition of the dimensionless mass
$\tau_m=(M-M_{{\rm sep}}^{i})/(M_{{\rm sep}}^{i+1}-M_{{\rm sep}}^{i})$;
in this way
$\tau_m$ determines the corresponding masses $M_{Y_1Z_1}$, $M_{Y_2Z_1}$,
$M_{Y_1Z_2}$, $M_{Y_2Z_2}$
for each chemical composition inside the iso-phase interval {\bf 2}.
\item
Interpolation of the tracks $M_{Y_1Z_1}$, $M_{Y_2Z_1}$, $M_{Y_1Z_2}$,
$M_{Y_2Z_2}$ between pairs of computed tracks (for details see Bertelli et al.
1990), to obtain
the ages of the characteristic points.
\item
Interpolation in $Y$ and $\log Z$ to obtain the ages $t_k$ of the
characteristic points of the mass M. We recall that all the masses inside
the same iso-phase interval show the same number of {\em characteristic}
points.
\item
Identification of the current phase (the actual age $t$ lies inside the
interval ($t_k$,$t_{k+1}$)) and definition of the dimensionless time
$\tau_t= (t-t_k)/(t_{k+1}-t_k)$.
\item
Determination of the luminosities $L_{Y_1Z_1}$, $L_{Y_2Z_1}$,
etc.\ relative to the masses $M_{Y_1Z_1}$, $M_{Y_2Z_1}$, etc.,
at the dimensionless time $\tau_t$ inside the ($k ,k+1$)
phase.
\item
Interpolation in $Y$ between $L_{Y_1Z_1}$ and $L_{Y_2Z_1}$, obtaining $L_{Z_1}$;
between $L_{Y_1Z_2}$ and $L_{Y_2Z_2}$, obtaining $L_{Z_2}$; and finally
interpolation in $\log Z$ between $L_{Z_1}$ and $L_{Z_2}$, obtaining the
luminosity $L$ for the mass $M$ and the age $t$.
\item
An analogous procedure is used to obtain $\log\mbox{$T\sub{eff}$}$ (or other physical properties).
\end{itemize}
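Each step of the scheme above reduces to linear interpolations, in $Y$ at
fixed $Z$ and then in $\log Z$, of a quantity known at the four corners of
the mesh. The following minimal Python sketch shows this elementary
building block (the function and variable names are ours, not those of
YZVAR):
\begin{verbatim}
import math

def tau(x, x_lo, x_hi):
    # Dimensionless coordinate, used both for tau_m (between two
    # consecutive separation masses) and for tau_t (between two
    # consecutive characteristic points).
    return (x - x_lo) / (x_hi - x_lo)

def interp_yz(q11, q21, q12, q22, Y, Z, Y1, Y2, Z1, Z2):
    # q11..q22: values of a quantity (a separation mass, the age of
    # a characteristic point, log L, log Teff, ...) at the corners
    # (Y1,Z1), (Y2,Z1), (Y1,Z2), (Y2,Z2) of the mesh.
    wy = (Y - Y1) / (Y2 - Y1)
    q_z1 = (1.0 - wy) * q11 + wy * q21    # in Y, at Z1
    q_z2 = (1.0 - wy) * q12 + wy * q22    # in Y, at Z2
    wz = ((math.log10(Z) - math.log10(Z1))
          / (math.log10(Z2) - math.log10(Z1)))
    return (1.0 - wz) * q_z1 + wz * q_z2  # in log Z
\end{verbatim}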
For each knot of the YZ grid, the data file used to compute the isochrones
contains the evolutionary tracks ($\log$ Age, $\log L/L_{\odot}$, $\log T_{\rm eff}$)
for the provided range of
masses, together with four interfaces (one for each of the four quadrants centered
on the knot). The interfaces are such as to assure, inside every mesh of the
grid, the same number of iso-phase intervals.
This method is used to obtain an interpolated track of given mass and
chemical composition (inside the provided range) or to derive isochrones of
given age and chemical composition.
\subsection{Reliability of the interpolation}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{fig8.eps}}
\caption{Comparison between computed evolutionary tracks (green lines) for a
chemical composition intermediate among those of the computed grids and the
corresponding tracks interpolated with the method described above (red points),
for (Z=0.055, Y=0.37).
}
\label{intz055y37}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{fig9.eps}}
\caption{Comparison between computed evolutionary tracks (green lines) for a
chemical composition intermediate among those of the computed grids and the
corresponding tracks interpolated with the method described above (red points),
for (Z=0.0003, Y=0.28).
}
\label{intz0003y28}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{fig10.eps}}
\caption{Comparison between a computed evolutionary track for $M = 1.12
M_{\odot}$ (green line) and the corresponding track obtained by interpolation
from the grids (red points), with the chemical composition Z=0.00065 and
Y=0.35, in the more complex case of close tracks with different morphology.
}
\label{intm1_12}
\end{figure}
We have performed some tests in order to check the reliability of the
interpolation method in the Z-Y plane. For a few Z-Y combinations
we have calculated some tracks and then interpolated the
same masses from the existing grids, using the new program YZVAR. The
results of the comparison are shown in Figs.\ 8, 9 and 10.
In Fig. 8 for the chemical composition Z=0.055 and
Y=0.37 (a point at the center of a mesh) we compare the computed
evolutionary tracks of low mass stars of 0.8, 1.0, 1.3, 1.5 $M_{\odot}$
(green line) and those interpolated with YZVAR (red points). The
result on the HR diagram is satisfactory and the ages differ by less
than $1 \%$. Fig.\ 9 presents the same comparison as Fig.\ 8,
but for the chemical composition Z=0.0003 and Y=0.28. In
Fig.\ 10 we display a critical case, as the considered 1.12
$M_{\odot}$ track lies between two iso-phase intervals: in fact
this mass is comprised between two masses with different morphology in
the HR diagram. It is evident that the interpolation is able to reproduce
the computed track very well.
\subsection{Bolometric corrections}
The present isochrones are provided in the Johnson-Cousins-Glass
system as defined by Bessell (1990) and Bessell \& Brett (1988). As soon as
possible they will be available in the Vegamag systems of ACS and WFPC2 on
board HST (cf. Sirianni et al. 2005; Holtzman et al. 1995) and in the
SDSS system.
The formalism we follow to derive
bolometric corrections in these systems is described in Girardi et
al. (2002). The definition of zeropoints has been revised and is
detailed in a forthcoming paper by Girardi et al. (in prep.; see also
Marigo et al. 2007) and will not be repeated here.
Suffice it to recall that the bolometric correction tables are based
on an updated and extended library of stellar spectral fluxes. The core
of the library now consists of the ``ODFNEW'' ATLAS9 spectral fluxes
from Castelli \& Kurucz (2003), for $T_{\rm eff}$ between 3500 and
50000~K, $\log g$ between $-2$ and $5$, and scaled-solar metallicities
[M/H] between $-2.5$ and $+0.5$. This library is extended towards
high $T_{\rm eff}$ with pure blackbody spectra.
For lower $T_{\rm eff}$, the library is completed
with the spectral fluxes for M, L and T dwarfs from Allard et
al. (2000), M giants from Fluks et al. (1994), and finally the C star
spectra from Loidl et al. (2001). Details about the implementation of
this library, and in particular about the C star spectra, are provided
in Marigo et al. (2007) and Girardi et al. (in prep.).
It is also worth mentioning that in the isochrones we apply the
bolometric corrections derived from this library without making any
correction for the enhanced He content. As demonstrated in Girardi et
al. (2007), for a given metal content, an enhancement of He
as high as
$\Delta Y=0.1$ produces changes in the bolometric corrections of just a
few thousandths of a magnitude. Only in some very particular situations,
for instance at low $T_{\rm eff}$ and for blue pass-bands, can
He-enhancement produce more sizeable effects on BCs; these situations
however correspond to cases where the emitted stellar flux would
anyway be very small, and therefore are of little interest in
practice.
\subsection{Description of isochrone tables}
\label{sec_tableisoc}
Complete tables with the isochrones can be obtained through the web site
\verb$http://stev.oapd.inaf.it/YZVAR$.
In this data-base, isochrones are provided at $\Delta\log t=0.05$
intervals; this means that any two consecutive isochrones differ by
only 12 percent in their ages.
For each isochrone table the corresponding data file presents
16 columns with the following information:
\begin{description}
\item \verb$1. logAge$: logarithm of the age in years;
\item \verb$2. log(L/Lo)$: logarithm of surface luminosity (in solar units);
\item \verb$3. logTef$: logarithm of effective temperature (in K);
\item \verb$4. logG$: logarithm of surface gravity (in cgs units);
\item \verb$5. Mi$: initial mass in solar masses;
\item \verb$6. Mcur$: actual stellar mass in solar masses;
\item \verb$7. FLUM$: indefinite integral over the initial mass M of
the Salpeter initial mass function by number;
\item \verb$8. - 15.$: UBVRIJHK absolute magnitudes in the
Johnson-Cousins-Glass system;
\item \verb$16. C.P.$: index marking the presence of a characteristic
point, when different from zero.
\end{description}
We recall that the
initial mass is the useful quantity for population synthesis calculations,
since together with the initial mass function it determines the relative
number of stars in different sections of the isochrones.
In column 7 the
indefinite integral over the initial mass $M$ of the initial mass
function (IMF) by number,
i.e.\
\begin{equation}
\mbox{\sc flum} = \int\phi(M) \mbox{d} M
\end{equation}
is presented, for the case of the Salpeter IMF, $\phi(M)=AM^{-\alpha}$,
with $\alpha=2.35$. When we assume a normalization constant of $A=1$,
{\sc flum} is simply given by {\sc flum}$ = M^{1-\alpha}/(1-\alpha)$.
This is a useful quantity since the difference between any two
values of {\sc flum} is proportional to the number of stars located in
the corresponding mass interval. It is worth remarking that we
present {\sc flum} values
for the complete mass interval down to 0.15~\mbox{$M_{\odot}$}, always assuming a
Salpeter (1955) IMF, whereas we know that such an IMF cannot be extended to
such low values of the mass. However, the reader can easily derive
{\sc flum} relations for alternative choices of the IMF, by using
the values of the initial mass we present in Column 5 of the
isochrone tables.
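As a worked example of the use of {\sc flum}, the following few Python
lines give the relative number of stars born in a given interval of
initial mass:
\begin{verbatim}
ALPHA = 2.35    # Salpeter (1955) IMF slope

def flum(m, alpha=ALPHA):
    # Indefinite integral of phi(M) = A M^-alpha with A = 1.
    return m ** (1.0 - alpha) / (1.0 - alpha)

# The difference between two FLUM values is proportional to the
# number of stars born in the corresponding initial-mass interval:
n_rel = flum(1.0) - flum(0.8)   # 0.8 < M/Msun < 1.0, ~0.26
\end{verbatim}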
In the last column, a value of 1 marks the presence
of a characteristic evolutionary point from the ZAMS to the beginning of
the early-AGB phase, while a value of 2 is related to the characteristic
points of the TP-AGB phase.
If there is only one characteristic point (marked with 2) at the
end of the isochrone, this means that the TP-AGB phase is very short.
The beginning and the end of the TP-AGB phase are pointed out with index 2,
as is the point where C/O increases above unity (transition from M to carbon stars).
\section{Comparison with other databases}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{fig11.eps}}
\caption{Comparison between the new Padova isochrones (PD08) and the Girardi et al.
(2000) ones (PD00) for Z=0.019 and Y=0.273 at log Age = 10.0, 9.7, 9.5, 9.2 and
9.0 years. Solid lines correspond to PD00, dash-dotted ones to PD08 isochrones.}
\label{compPD00_07}
\end{figure}
\begin{figure*}
\begin{minipage}{0.45\textwidth} \noindent a)
\resizebox{\hsize}{!}{\includegraphics{fig12a.eps}}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth} \noindent b)
\resizebox{\hsize}{!}{\includegraphics{fig12b.eps}}
\end{minipage}
\caption{
Comparison between Padova and Teramo isochrones with overshoot for the
composition $[Z=0.008, Y=0.256]$ in panel a). Dot-dashed lines correspond to
Teramo ones and solid lines to our new isochrones. The largest difference
between the PD and TE isochrones is found at log Age = 9.4.
In panel b) the comparison is between Padova (solid line) and $YY$ isochrones
(dot-dashed) with Z=0.007 and Y=0.244 }
\label{isoPD_TE_Y2}
\end{figure*}
\begin{figure*}
\begin{minipage}{0.45\textwidth} \noindent a)
\resizebox{\hsize}{!}{\includegraphics{fig13a.eps}}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth} \noindent b)
\resizebox{\hsize}{!}{\includegraphics{fig13b.eps}}
\end{minipage}
\caption{
Evolutionary tracks from Girardi et al. (2000) for the composition $[Z=0.019,
Y=0.273]$ with overshoot (dot-dashed line) and without (solid line) in panel
a). In panel b) Teramo tracks are displayed for $[Z=0.0198, Y=0.273]$ with
overshoot (dot-dashed line) and without (solid line).
The ZAMS location for TE models with and
without overshoot is not coincident.}
\label{PD_TEzsun}
\end{figure*}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{fig14.eps}}
\caption{New Padova tracks and Teramo ones with overshoot from 1 to 2.5
$M_{\odot}$ for Z=0.008.
Solid lines correspond to PD08 models, dash-dotted ones to Teramo tracks.}
\label{PD07_TEz008}
\end{figure}
\begin{table*}
\begin{center}
\caption{Comparison of H-lifetimes for models with overshoot with nearly solar composition and for Z=0.008 }
\label{tab_tH}
\begin{tabular}{lllllllll}
\noalign{\smallskip}\hline\noalign{\smallskip}
\multicolumn{4}{c}{$Z\sim Z_{\odot}$}& &\multicolumn{4}{c}{$Z=0.008$}\\
\noalign{\smallskip}
\cline{1-4}\cline{6-9}
\noalign{\smallskip}
$M/\mbox{$M_{\odot}$}$ & $t_H$ TE $^1$ & $t_H$ PD08 $^2$ & $t_H$ $YY$ $^3$ & &
$M/\mbox{$M_{\odot}$}$ & $t_H$ TE $^4$ & $t_H$ PD08 $^5$ & $t_H$ $YY$ $^6$ \\
\noalign{\smallskip}\cline{1-4}\cline{6-9}\noalign{\smallskip}
1.00 & 10.889 & 10.547 & 10.855 & & 1.00 & 8.387 & 7.389 & 7.795 \\
1.20 & 5.679 & 5.109 & 5.359 & & 1.20 & 4.625 & 3.743 & 3.994 \\
1.40 & 4.182 & 3.390 & 3.424 & & 1.30 & 4.009 & 3.040 & 3.076 \\
1.60 & 3.067 & 2.347 & 2.390 & & 1.50 & 2.986 & 2.162 & 2.215 \\
1.80 & 2.136 & 1.647 & 1.671 & & 1.80 & 1.847 & 1.266 & 1.306 \\
2.00 & 1.439 & 1.198 & 1.229 & & 2.00 & 1.250 & 0.943 & 0.976 \\
3.00 & 0.375 & 0.394 & 0.392 & & 3.00 & 0.331 & 0.324 & 0.334 \\
5.00 & 0.094 & 0.107 & 0.104 & & 5.00 & 0.089 & 0.097 & 0.098 \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
$^1$ Z=0.0198, Y=0.273; $^2$ Z=0.017, Y=0.26; $^3$ Z=0.020, Y=0.27; \\
$^4$ Z=0.008, Y=0.256; $^5$ Z=0.008, Y=0.26; $^6$ Z=0.007, Y=0.244. \\
Ages are in Gyr.
\end{center}
\end{table*}
Figure 11 shows the comparison between the isochrones by Girardi
et al. (2000), hereafter PD00, and the new ones, hereafter PD08, in the
age range between 1 and 14 Gyr, for the chemical composition $Z=0.017$ and
$Y=0.273$. There are
differences both in the MS phase and in the luminous part of the RGB
and/or AGB phase. The main contribution to the differences in the MS
phase is due to the interpolation technique for the opacity tables (see Section
2.2). The amount of the variation depends on mass and chemical
composition. We have verified that low-mass stellar evolutionary tracks
change very little with the method of opacity interpolation. For
intermediate mass stars there is a lowering of the luminosity
and a consequent increase of the lifetime of the new models.
For values of log Age $\le 9.5-9.7$ (depending on
the chemical composition), to obtain the same luminosity as PD00
for the MS termination point in the new isochrones (PD08) we must increase
the logarithm of the age by about 0.05. The color of the
turn-off of the new isochrones (younger than 9.5 Gyr) is bluer.
We point out that the luminous part of the RGB and/or AGB phases in the new
isochrones
is redder than in the PD00 ones, and the separation increases with increasing
luminosity. This effect is due to the treatment of the
density inversion in the red giant envelope. In the outermost layers of stars
with $T_{\rm eff} \le 10^4$~K a convective zone develops, caused by the
incomplete ionization of hydrogen. If convection is treated according to the
mixing-length
theory (MLT) with the mixing length proportional to the pressure scale height
($H_p$), a density inversion occurs when $T_{\rm eff} \le 10^4$~K. This situation
might be unphysical and has been the subject of much speculation.
This difficulty can be avoided by adopting the density scale height $H_{\rho}$
instead of $H_p$ in the MLT.
Actually, in the Girardi et al. tracks we avoided the density inversion
by imposing that the temperature gradient $\Delta T$ is such that the density
gradient obeys the condition $\Delta _{\rho} \ge 0$. However, the comparison of
the PD00 isochrones with the observations seems to require redder RGB and/or
AGB branches.
For this reason in the present computations we have used the mixing length
proportional to the pressure scale height,
allowing the density inversion in the red convective envelope, as already
adopted in the Bertelli et al. (1994) isochrones. The
presence of the density inversion makes the RGB and/or AGB phases redder
than in the previous isochrones (PD00).
Neutrino cooling is also more efficient during the RGB phase in the new tracks,
as described in Section 2.4, and it contributes to making the tip of the RGB more
luminous than in PD00. We have verified that for Z in the range 0.0001 - 0.001
(low values of Y) and for the solar case the overluminosity is of
the order of 0.12 - 0.15 in units of $\log L/L_{\odot}$ (the considered ages are
around log Age = 10.0).
The effect on the $I$ magnitude is an overluminosity
of between $-0.20$ and $-0.35$ mag for Z in the range 0.0001 - 0.001. On the other hand,
in the solar case
the interplay between the lower effective temperature and the larger
bolometric corrections is such that the tip in the PD00 case appears
approximately one magnitude more luminous than the new one, inverting
what happens to the $I$ magnitude at low values of Z.
The comparison of some selected tracks and isochrones of the new grids
of the Padova database (PD08) with the analogous
counterparts of the publicly available Teramo database (TE) by Pietrinferni et
al. (2004) and the $YY$ database by Yi et al. (2001) and Demarque et al.
(2004) points out some differences.
As already evident in the comparison of isochrones
by different groups shown in Pietrinferni et al. (2004), there is a systematic
difference between their 1.8 Gyr isochrone and the ones by Girardi et al. (2000)
and by Yi et
al. (2001), as well as for the 2.0 Gyr one by VandenBerg (see figures 8, 11 and 12 in
Pietrinferni et al. 2004). In fact the turn-off
luminosity of their 1.8 Gyr isochrone (or 2.0 Gyr in the comparison with
VandenBerg) is higher, whereas the 10 Gyr and the 0.5 Gyr
ones are practically the same as those of the other groups. They ascribed the
differences to differences in input physics and in the overshooting
treatment, relative to the comparison with Girardi et al. (2000).
In Table 4 we compare the H-lifetimes for models with overshoot in
the mass range between 1 and 5 $M_{\odot}$ and with a chemical composition
approximately solar.
We point out that the largest differences are found in the range
between 1 and 2 solar masses, in the sense that while PD08 and $YY$ tracks
have similar H-lifetimes, TE lifetimes are systematically longer, by up to 0.8 Gyr.
This difference cannot be ascribed to the treatment of overshoot; in fact
both groups (PD and TE) assume a similar prescription in the range between
1 and 2 $M_{\odot}$ (the core overshoot efficiency decreases linearly from
about $1.7-1.5 M_{\odot}$ down to $1 M_{\odot}$, where the efficiency is
zero for stars with a radiative core).
The recent determination of the age of the LMC globular cluster NGC 1978,
studied by Mucciarelli et al. (2007), shows the same anomaly, in this case
for a chemical composition typical of the LMC. The authors find that the
age obtained with the Teramo isochrones is significantly older than that
obtained with the Pisa and Padova isochrones, even if the same amount of overshoot
is assumed in the considered models.
Comparing the H-lifetimes for $Z=0.008$, we notice that there is very
satisfactory agreement between the H-lifetimes of $YY$ and PD08, whereas TE
ages are older by up to about 1 Gyr at about $1.3 M_{\odot}$. The largest
differences are present in the range between 1 and 2 $M_{\odot}$, as shown in
Table~\ref{tab_tH}.
Figure 12, panel a), shows isochrones (at log Age = 10.0, 9.7,
9.4, 9.2 and 9.0 years) from the
Teramo database with Z=0.008 and Y=0.256, together with our new isochrones
interpolated at the same metal
and helium content. This figure highlights the difference
in luminosity between isochrones of the same age from the two databases,
most evident for log Age = 9.4 (black solid line for PD08 and black dash-dotted
for Teramo) in the color version of the electronic paper.
In panel b) we compare the new Padova isochrones for Z=0.007 and
Y=0.244 with $YY$ and we
find that the agreement is very good, considering that their
models take into account convective core overshoot (0.2) and helium diffusion
(PD08 and $YY$ isochrones are shown for log Age = 10.0, 9.7, 9.4 and 9.0
years).
The reasons for the above-mentioned disagreement cannot be easily disentangled,
but we notice that the ZAMS models of the PD00 tracks (Girardi et al. 2000 for
the solar chemical composition) with and without overshoot are coincident (as the
ZAMS is the locus of chemically homogeneous models for the various
considered masses), while the Teramo ones differ in luminosity and temperature
between 1 and 2 $M_{\odot}$.
The difference in the location of the ZAMS models can also be seen
in Figure 14, where we plot our new models and the Teramo ones
with overshoot for Z=0.008.
In Figure 13 the PD00 and TE tracks are plotted in the mass range
between 1.2 and 2 $M_{\odot}$. As we did not compute new tracks without
overshoot, we plotted PD00 and TE models with and without overshoot to make
the comparison.
The anomalously longer H-burning lifetimes of the TE models with respect
to those of other authors are probably, at least partly, related to the lower luminosity
at the beginning of central H-burning in their models with
overshoot.
We point out that the Teramo models with overshoot were computed
starting from the pre-MS phase for masses lower than $3 M_{\odot}$.
However, the $YY$ models were also evolved from the pre-MS stellar birthline to
the onset of helium burning (Yi et al. 2001), and yet there are no significant
differences (comparing PD08 to $YY$ results) in the H-lifetimes or in the
isochrones in the age range between 1 and 10 Gyr, as shown in Table 4 and
Figure 12 b).
\section{Concluding remarks}
\label{sec_remarks}
Large grids of homogeneous stellar evolution models and isochrones are a
necessary ingredient for the interpretation of photometric and spectroscopic
observations of resolved and unresolved stellar populations.
Appropriate tools to compute synthetic color-magnitude diagrams for the
analysis of stellar populations also require the selection of the chemical
composition to be used in the simulation. Evolutionary stellar models are
usually computed assuming a fixed law of helium-to-metal enrichment.
Determinations of the helium enrichment $\Delta Y / \Delta Z$ from nearby stars
and K dwarfs, or
from Hyades binary systems, show a large range of values for this ratio.
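As a concrete illustration of such an enrichment law, the following minimal
sketch computes $Y$ from $Z$ (the values of the primordial helium $Y_p$ and of
$\Delta Y/\Delta Z$ below are illustrative assumptions, not those adopted in
this paper):
\begin{verbatim}
# Hedged sketch: helium abundance from a linear enrichment law.
# Y_p and dY/dZ are example values, not the ones adopted here.
def helium_abundance(Z, Y_p=0.23, dY_dZ=2.0):
    """Return Y for metallicity Z under Y = Y_p + (dY/dZ)*Z."""
    return Y_p + dY_dZ * Z

for Z in (0.0001, 0.001, 0.008, 0.017):
    print(f"Z = {Z:7.4f} -> Y = {helium_abundance(Z):.4f}")
\end{verbatim}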
Recent results suggest that the naive assumption of a universal helium
enrichment law might not be correct.
In fact, there has recently been evidence of significant variations in the
helium content (and perhaps in the age) within some globular clusters, such as
$\omega$ Cen, NGC 2808 and NGC 6441, whereas
globular clusters were traditionally considered to be formed of a simple
stellar population of uniform age and chemical composition.
These results prompted us to compute new stellar evolutionary tracks covering
an extended region of the Z-Y plane, in order to enable users to analyse
stellar populations with different helium enrichment laws.
In this paper we present 39 grids of stellar evolutionary tracks up to
$2.5 M_{\odot}$. A forthcoming paper will present tracks and isochrones from $2.5$
up to $20 M_{\odot}$ for the same grid of chemical
compositions. The typical mass resolution for low-mass stars is 0.1
$M_{\odot}$, reduced to $0.05 M_{\odot}$ in the interval of very low masses ($M < 0.6
M_{\odot}$) and, occasionally, in the vicinity of the separation mass $M_{Hef}$
between low- and intermediate-mass stars. In this way we provide enough tracks
to allow a very detailed mapping of the HR diagram, and the corresponding theoretical
isochrones are also very detailed. An important update of this database is the
extension of the stellar models and isochrones to the end of the TP-AGB
by means of new synthetic models (cf. Marigo \& Girardi 2007).
The results of these new grids allow the estimate of several astrophysical
quantities, such as the RGB transition mass, the He core mass at He ignition, the
changes in the chemical elements due to the first dredge-up, and the
properties of the models at the beginning of the thermal-pulse phase on the
AGB.
The web site (http://stev.oapd.inaf.it/YZVAR), dedicated to making the whole
theoretical framework available to the scientific community, includes:
\begin{itemize}
\item
{Data files with the information relevant to each evolutionary track}
\item
{Isochrone files with the chemical composition of the computed grids in
the Z-Y plane }
\end{itemize}
Every file contains isochrones with $\sim 9.0 \le$ log Age $\le \sim 10.15$
(ages in years), with a
step of 0.05 in log Age. The initial and final values can be a little more
or a little less, depending on the chemical composition.
To compute the isochrones we adopted the interpolation scheme described in
Section 5.1. A web interactive interface will be set up as soon as possible
to allow users to obtain isochrones for any chemical composition inside
the original grid.
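For concreteness, the following is a minimal sketch of a bilinear
interpolation in the (Z, Y) plane; it is not the actual scheme of Section 5.1,
and the grid nodes and interpolated quantity are illustrative placeholders:
\begin{verbatim}
import numpy as np

# Bilinear interpolation of a quantity q tabulated at the four
# corners of one (Z, Y) grid cell; all values are placeholders.
Z_grid = np.array([0.004, 0.008])
Y_grid = np.array([0.23, 0.26])
q = np.array([[1.10, 1.14],       # q[i, j] at (Z_grid[i], Y_grid[j])
              [1.05, 1.08]])

def interp_ZY(Z, Y):
    tz = (Z - Z_grid[0]) / (Z_grid[1] - Z_grid[0])
    ty = (Y - Y_grid[0]) / (Y_grid[1] - Y_grid[0])
    return ((1 - tz) * (1 - ty) * q[0, 0] + tz * (1 - ty) * q[1, 0]
            + (1 - tz) * ty * q[0, 1] + tz * ty * q[1, 1])

print(interp_ZY(0.007, 0.244))    # e.g. the composition of Fig. 12 b)
\end{verbatim}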
A modified version of the program which computes the isochrones is suited to
the purposes of evolutionary population synthesis, either of simple (star
clusters) or complex stellar populations (galaxies). In the latter case it is
possible to select the kind of star formation history, the chemical enrichment
law in the Z-Y plane, the initial mass function and the mass-loss rate during
the RGB phase. The previous program ZVAR, in which the helium enrichment law
was implicit in the chemical composition of the various evolutionary sets, has
now been updated for use in the extended region of the Z-Y plane and is named
YZVAR.
A web interactive interface will provide stellar populations for selected input
data for the SFR, IMF, chemical composition and mass loss during the RGB phase.
\begin{acknowledgements}
We thank C. Chiosi for his continuous interest in and support of stellar
evolution computations. We thank A. Weiss and B. Salasnich for help with
opacity tables, and A. Bressan for useful discussions.
The authors acknowledge the constructive comments of the referee, which helped
to clarify and improve the text.
We acknowledge financial support from INAF COFIN 2005 ``A Theoretical lab
for stellar population studies'' and from
Padova University (Progetto di Ricerca di Ateneo CPDA 052212).
\end{acknowledgements}
\section{Introduction}
The problem of vacuum tunneling in de Sitter spacetime has recently
acquired renewed relevance. In part, this is due to developments in
string theory, which suggest that vacuum tunneling may be of relevance
for understanding transitions between the various potential vacua that
populate the string theory landscape. But, it is also of interest for
the light that it can shed on the nature of de Sitter spacetime. In
this talk I will describe some recent work~\cite{Hackworth} with Jim
Hackworth in which we explored some aspects of the
subject that have received relatively little attention.
De Sitter spacetime is the solution to Einstein's equations when there
is a constant positive vacuum energy density $V_{\rm vac}$, but no other
source. Globally, it can be represented as the hyperboloid $x^2 + y^2
+ z^2 + w^2 -v^2 = H^{-2}$ in a flat five-dimensional space with
metric $ds^2 = dx^2 + dy^2 + dz^2 + dw^2 -dv^2$, where
\begin{equation}
H^2 = {8 \pi \over 3} {V_{\rm vac} \over M_{\rm Pl}^2 } \, .
\end{equation}
The surfaces of constant $v$ are three-spheres, with the sphere of
minimum radius, $H^{-1}$, occurring at $v=0$. However, the special
role played by this surface is illusory. De Sitter spacetime is
homogeneous, and a spacelike three-sphere of minimum radius can be
drawn through any point.
An important property of de Sitter spacetime is the existence of horizons.
Just as for the case of a black hole, the existence of a
horizon gives rise to thermal radiation, characterized by a temperature
\begin{equation}
T_{\rm dS} = H/2\pi \, .
\end{equation}
However, there are important differences from the black hole case.
A black hole horizon has a definite location, independent of
the observer. Further, although an observer's motion affects how the
thermal radiation is perceived, the radiation has an unambiguous,
observer-independent consequence --- after a finite time, the black
hole evaporates. By contrast, the location of the de Sitter horizon
varies from observer to observer. Although comoving observers
detect thermal radiation with a temperature $T_{\rm dS}$, this
radiation does not in any sense cause the de Sitter spacetime to
evaporate. As we will see, tunneling between different de Sitter
vacua provides further insight into the thermal nature of de Sitter
spacetime and the meaning of $T_{\rm dS}$.
Of course, the relevance of de Sitter spacetime to our Universe comes
from the fact that in the far past, during the inflationary era, and
in the far future, if the dark energy truly corresponds to a
cosmological constant, the Universe approximates a portion of de
Sitter spacetime. The underlying assumption is that results derived
in the context of the full de Sitter spacetime are applicable to
a region that is approximately de Sitter over a spacetime volume
large compared to $H^{-4}$.
\begin{figure}[b]
\includegraphics[height=.22\textheight]{generic_potential.eps}
\caption{The potential for a typical theory with a false vacuum}
\label{potentialfig}
\end{figure}
\section{Vacuum decay in flat spacetime}
The prototypical example for studying vacuum decay is a scalar field
theory with a potential, such as that shown in
Fig.~\ref{potentialfig}, that has both an absolute minimum (the ``true
vacuum'') and a higher local minimum (the ``false vacuum''). For the
purposes of this talk I will assume that $V(\phi) > 0$, so that both
vacua correspond to de Sitter spacetimes. The false vacuum is a
metastable state that decays by quantum mechanical tunneling.
It
must be kept in mind, however, that the tunneling is not
from a homogeneous false vacuum to a homogeneous true vacuum, as
might be suggested by the plot of $V(\phi)$. Rather,
the decay proceeds via bubble nucleation, with tunneling being from
the homogeneous false vacuum to a configuration containing a bubble of
(approximate) true vacuum embedded in a false vacuum background.
After nucleation, the bubble expands, a classically allowed
process.
I will begin the discussion of this process by recalling the simplest
case of quantum tunneling, that of
a point particle of mass $m$ tunneling through a one-dimensional
potential energy barrier $U(q)$ from an initial point $q_{\rm init}$ to
a point $q_{\rm fin}$ on the other side of the barrier.
The WKB approximation
gives a tunneling rate proportional to $e^{-B}$,
where
\begin{equation}
B = 2 \int_{q_{\rm init}}^{q_{\rm fin}}
dq \sqrt{2m[U(q)- E]} \, .
\label{WKBfactor}
\end{equation}
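As a concrete (and purely illustrative) check of Eq.~(\ref{WKBfactor}), the
following sketch evaluates $B$ numerically for an inverted-parabola barrier,
for which the integral is known in closed form; the barrier and all parameter
values are assumptions made for this example only:
\begin{verbatim}
import numpy as np

# Numerical evaluation of the WKB exponent for an illustrative
# inverted-parabola barrier U(q) = 1 - q**2, with m = 1, E = 0;
# the exact answer for this barrier is pi*sqrt(2).
m, E = 1.0, 0.0
U = lambda q: 1.0 - q**2
q = np.linspace(-1.0, 1.0, 20001)      # turning points where U = E
f = np.sqrt(np.clip(2.0 * m * (U(q) - E), 0.0, None))
B = 2.0 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(q))  # trapezoid rule
print(B, np.pi * np.sqrt(2.0))
\end{verbatim}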
This result can be generalized to the case of
a multi-dimensional system with coordinates $q_1, q_2, \dots, q_N$.
Given an initial point $q_j^{\rm init}$, one considers paths
$q_j(s)$ that start at $q_j^{\rm init}$ and end at some point
$q_j^{\rm fin}$ on the opposite side of the barrier.
Each such path defines a one-dimensional tunneling integral $B$.
The WKB tunneling exponent is obtained from the path that
minimizes this integral~\cite{Banks:1973ps}.
As a bonus, this minimization process also
determines the optimal exit point from the barrier.
By manipulations analogous to those used in classical mechanics (but
with some signs changed), this minimization problem
can be recast as the problem of finding a stationary point of the
Euclidean action
\begin{equation}
S_{\rm E} = \int_{\tau_{\rm init}}^{\tau_{\rm fin}}
d\tau \left[
{m\over 2} \left({dq^j\over d\tau}\right)^2 + U(q)\right]\, .
\end{equation}
One is thus led to solve the Euclidean equations of motion
\begin{equation}
0 = m {d^2q_j \over d\tau^2} + {\partial U \over \partial q_j}\, .
\end{equation}
The boundary conditions are that $q_j(\tau_{\rm init}) = q_j^{\rm
init}$ and (because the kinetic energy vanishes at the point where the
particle emerges from the barrier)
that $dq_j/d\tau = 0$ at $\tau_{\rm fin}$.
The
vanishing of $dq_j/d\tau$ at the endpoint implies that the solution can be
extended back, in a ``$\tau$-reversed'' fashion, to give a solution
that runs from $q_j^{\rm init}$ to $q_j^{\rm fin}$ and back again
to $q_j^{\rm init}$. This solution is known as a ``bounce'', and
the tunneling exponent is given by
\begin{eqnarray}
B \!\! \!\! \!\!&=& \!\! \!\! \!\! \int d\tau \left[
{m\over 2} \left({dq^j\over d\tau}\right)^2 + U(q)
-U(q^{\rm init}) \right] \cr
\!\! \!\! \!\!&=& \!\! \!\! \!\! S_{\rm E}({\rm bounce})
- S_{\rm E}({\rm false~vacuum}) \, ,
\end{eqnarray}
with the factor of 2 in Eq.~(\ref{WKBfactor}) being absorbed by the
doubling of the path. It is essential to remember that
$\tau$ is not in any sense a time, but merely one of many possible
parameterizations of the optimal tunneling path.
The translation of this to field theory~\cite{ColemanI} is
straightforward: The coordinates $q_j$ become the field variables
$\phi({\bf x})$, and the path $q_j(\tau)$ becomes a series of
three-dimensional field configurations $\phi({\bf x}, \tau)$. The
Euclidean action is
\begin{equation}
S_{\rm E} =
\int d\tau \, d^3{\bf x} \left[
{1\over 2} \left({\partial \phi\over \partial \tau}\right)^2
+{1\over 2} ({\bf \nabla}\phi)^2 + V(\phi) \right]
\end{equation}
and so one must solve
\begin{equation}
{\partial^2 \phi \over \partial\tau^2} + \nabla^2\phi
= {dV\over d\phi} \, .
\label{flatscalar}
\end{equation}
The boundary conditions are that the path must start at the
homogeneous false vacuum configuration, with $\phi({\bf x}, \tau_{\rm
init}) = \phi_{\rm fv}$, and that $d\phi/d\tau = 0$ at $\tau_{\rm
fin}$ for all $\bf x$. (Because $\phi_{\rm fv}$ is a minimum of the
potential, it turns out that $\tau_{\rm init} =- \infty$.)
A three-dimensional slice through the solution
at $\tau_{\rm fin}$ gives the most likely field configuration for the
nucleated true vacuum bubble. This configuration, $\phi({\bf x},
\tau_{\rm fin})$, gives the initial condition for the subsequent
real-time evolution of the bubble. As with the single-particle case,
a $\tau$-reflected solution is conventionally added to give a full
bounce.
Despite the fact that the spatial coordinates $\bf x$ and the path
parameter $\tau$ have very different physical meanings, there is a
remarkable mathematical symmetry in how they enter. This suggests
looking for solutions that have an SO(4) symmetry; i.e., solutions for
which $\phi$ is a function of only $s = \sqrt{{\bf x}^2 + \tau^2}$.
For such solutions, the field equation reduces to
\begin{equation}
{d^2\phi\over ds^2} + {3 \over s} {d\phi\over ds}
= {dV \over d\phi} \, .
\end{equation}
The boundary conditions are
\begin{equation}
\left. {d\phi \over ds}\right|_{s =0} = 0 \,\,\,
,\qquad \phi(\infty) = \phi_{\rm fv} \, ,
\label{flatboundary}
\end{equation}
where the first follows from the requirement that the solution be
nonsingular at the origin, and the second ensures both that a
spatial slice at $\tau = -\infty$ corresponds to the initial state,
and that the slices at finite $\tau$ have finite energy relative to
the initial state. Note that while $\phi(0)$ is required to be on the
true vacuum side of the barrier, it is not equal to (although it may
be close to) $\phi_{\rm tv}$.
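Although not discussed further here, a standard way to solve this boundary
value problem numerically is the overshoot/undershoot (shooting) method. The
following is a minimal sketch under an assumed illustrative quartic potential;
the bracket endpoints, integration range, and tolerances may need tuning for
other choices:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Overshoot/undershoot ("shooting") method for the bounce equation
#   phi'' + (3/s) phi' = V'(phi),  phi'(0) = 0,  phi(inf) = phi_fv.
# Illustrative quartic: V = p**4/4 - p**2/2 + eps*p  (eps = 0.3).
eps = 0.3
Vp = lambda p: p**3 - p + eps
phi_tv, phi_top, phi_fv = np.sort(np.roots([1.0, 0.0, -1.0, eps]).real)

def overshoots(phi0, s_max=200.0):
    rhs = lambda s, y: [y[1], Vp(y[0]) - 3.0 * y[1] / s]
    sol = solve_ivp(rhs, (1e-6, s_max), [phi0, 0.0],
                    rtol=1e-9, atol=1e-12)
    return sol.y[0].max() > phi_fv   # sailed past the false vacuum?

lo, hi = phi_tv + 1e-6, phi_top      # overshoot / undershoot brackets
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if overshoots(mid) else (lo, mid)
print("phi(0) =", 0.5 * (lo + hi))
\end{verbatim}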
Although the tunneling exponent is readily obtained from the WKB
approach, the prefactor, including (in principle) higher order
corrections, is most easily calculated from a path integral
approach~\cite{ColemanII}. The
basic idea is to view the false vacuum as a metastable state with a
complex energy, with the imaginary part of the energy density yielding
the decay rate per unit volume. The false vacuum energy is obtained
by noting that for large ${\cal T}$
\begin{equation}
I({\cal T}) =
\int [d\phi] \, e^{-S_E(\phi)} \sim e^{-E_{\rm fv} {\cal T} } \, ,
\end{equation}
where the path integral is over configurations with
$\phi({\bf x}, \tau= \pm {\cal T}/2) = \phi_{\rm fv}$.
This path integral can be calculated by summing the contributions from
the various stationary points, each of which gives a factor of $( \det
\,S'')^{-1/2}\, e^{-S}$. Here $S''$ is the functional second
derivative of the action, evaluated at the stationary point; i.e., the
product of the frequencies of the normal modes. The first stationary
point, a homogeneous false vacuum configuration with $\phi({\bf x},
\tau) = \phi_{\rm fv}$ everywhere, gives a contribution $Ae^{-S_{\rm
fv}}$, where the real prefactor $A$ includes the (properly
renormalized) determinant factor and $S_{\rm fv} = V(\phi_{\rm fv}) \,
{\cal T}{\cal V}$. Here ${\cal V}$ denotes the volume of space and is
understood to be taken to infinity at the end of the calculation.
The next stationary point is the bounce solution to
Eq.~(\ref{flatscalar}). The calculation of the determinant factor
here is complicated by the fact that $S''(\phi_{\rm bounce})$ has one
negative and four zero eigenvalues. The former implies a factor of
$i$, which I will display explicitly. The latter require the
introduction of collective coordinates; integrating over these gives a
factor of ${\cal T}{\cal V}$, corresponding to the fact that the
bounce can be centered anywhere in the four-dimensional Euclidean
space.
Finally, the approximate stationary points corresponding to
multibounce solutions also contribute, with the $n$-bounce contribution
including a factor of $({\cal T}{\cal V})^n/n!$ from
integrating over the positions of $n$ identical bounces.
Putting all this together gives a result that can be written as
\begin{eqnarray}
I({\cal T})
\!\! \!\! \!\! &=&\!\!\!\! \!\! A e^{- S_{\rm fv}} + i {\cal V}{\cal T} J
e^{- S_{\rm bounce}}
+ \cdots \cr \cr
\!\!\!\! \!\! &=&\!\!\!\! \!\! A e^{- S_{\rm fv}} \left[ 1 + i {\cal
V}{\cal T} J e^{-B}
+ {1\over 2}\left(i {\cal V}{\cal T} J e^{-B} \right)^2
+ \cdots \right] \cr \cr
\!\! \!\! \!\! &=&\!\!\! \!\!\! A e^{- {\cal T}{\cal V} V(\phi_{\rm fv})}
\exp\left[i {\cal V}{\cal T} J e^{-B} \right] \, .
\label{pathinteg}
\end{eqnarray}
Here $J$ includes both determinant and Jacobian factors, with the
latter arising from the introduction of the collective coordinates;
for present purposes, the important point is that it is real.
Extracting the energy density from the exponent in
Eq.~(\ref{pathinteg}) gives an imaginary part that is proportional to
${\cal V}$, corresponding to the fact that a bubble can nucleate
anywhere. The quantity we actually want is the nucleation rate per
unit volume,
\begin{equation}
\Gamma = -{2 {\rm Im}\,E_{\rm fv} \over {\cal V}}
= 2J e^{-B} \, .
\end{equation}
The path integral approach provides the vehicle for
extending~\cite{Langer:1969bc,Linde:1981zj} the
calculation to finite temperature $T$, with the path integral over
configurations extending from $ \tau = -\infty$ to $\tau = \infty$
replaced by one over configurations that are periodic in $\tau$
with periodicity $1/T$. At low temperature, where $1/T$ is larger
than the characteristic radius of the four-dimensional bounce, there
is little change from the zero-temperature nucleation rate. However,
in the high-temperature regime where
$1/T$ is much smaller than this
characteristic radius the path integral is dominated by
configurations that are constant in $\tau$. A spatial slice
at fixed $\tau$ gives a configuration, with total energy $E_{\rm
crit}$, that contains a single critical bubble. The exponent in the
nucleation rate takes the thermal form
\begin{equation}
B = {E_{\rm crit}\over T} - {E_{\rm fv} \over T} \, .
\end{equation}
Note that, in contrast to the zero temperature case, there is no
spatial slice corresponding to the initial state. Only through the
boundary conditions at spatial infinity does the bounce solution give
an indication of the initial conditions.
\section{Adding gravity}
Coleman and De Luccia~\cite{ColemanDeLuccia} argued that the effects
of gravity on vacuum decay could be obtained by adding an
Einstein-Hilbert term to the Euclidean action and then seeking bounce
solutions of the resulting field equations; as before, the tunneling
exponent would be obtained from the difference between the actions of
the bounce and the homogeneous initial state. Their treatment did not
include the calculation of the prefactor, an issue that remains poorly
understood.
If one assumes O(4) symmetry, as in the flat spacetime case, the
metric can be written as
\begin{equation}
ds^2 = d\xi^2 + \rho(\xi)^2 d\Omega_3^2 \, ,
\end{equation}
where $d\Omega_3^2$ is the metric on the three-sphere, and the scalar
field depends only on $\xi$. The Euclidean action becomes
\begin{eqnarray}
S_E \!\! \!\! \!\!&=& \!\! \!\! \!\! 2\pi^2 \int d\xi \left[\rho^3
\left({1\over 2} \dot\phi^2 + V \right)
\right. \cr \!\! \!\! \!\!&& \!\! \!\! \!\!\qquad \qquad \left.
+ {3M_{\rm Pl}^2 \over 8\pi}
\left( \rho^2 \,\ddot \rho
+ \rho\,\dot\rho^2 - \rho \right) \right] \, ,
\end{eqnarray}
with dots denoting derivatives with respect to $\xi$. The
Euclidean field equations are
\begin{equation}
\ddot \phi + {3 \dot\rho \over \rho} \, \dot\phi = {dV\over d\phi}
\label{curvedPhiEq}
\end{equation}
and
\begin{equation}
{\dot \rho}^2 = 1 + {8 \pi \over 3 M_{Pl}^2}\,\rho^2
\,\left({1\over 2} \dot\phi^2 - V \right) \, .
\label{rhoEq}
\end{equation}
\begin{figure}[th]
\includegraphics[height=.28\textheight]{bounce.eps}
\caption{Schematic illustration of a Coleman-De Luccia bounce
solution in two limiting regimes. In both, $\phi$ is near its true
vacuum value in the region to the right of the dashed arc, while on
the left side it is near the false vacuum. In both cases the
equatorial slice denotes a three-sphere corresponding to the spatial
hypersurface on which the bubble nucleates. The lower dashed line in
(a) represents a three-sphere indicative of the initial false vacuum
state; this has no analogue in the regime illustrated in (b).}
\label{bigsmallbounce}
\end{figure}
One can show that if $V(\phi)$ is everywhere positive, as I am
assuming here, then $\rho(\xi)$ has two zeros and the Euclidean space
is topologically a four-sphere. One of the zeros of $\rho$ can be
chosen to lie at $\xi=0$, while the other is located at some value
$\xi_{\rm max}$. Requiring the scalar field to be nonsingular then
imposes the boundary conditions
\begin{equation}
\left.{d\phi \over d\xi}\right|_{\xi=0} = 0 \,, \,\,
\qquad \left.{d\phi \over d\xi}\right|_{\xi_{\rm max}} = 0 \, .
\end{equation}
The symmetry of these boundary conditions should be contrasted with
the flat space boundary conditions of Eq.~(\ref{flatboundary}). Note
that there is no requirement that the scalar field ever achieve either of
its vacuum values, although $|\phi(\xi_{\rm max}) -\phi_{\rm fv}|$
is typically exponentially small
in cases where gravitational effects are
small.
Somewhat surprisingly, the Euclidean solution corresponding to a
homogeneous false vacuum is not an infinite space, but rather a
four-sphere of radius
\begin{equation}
H_{\rm fv}^{-1}
= \sqrt{3M_{\rm Pl}^2\over 8\pi V(\phi_{\rm fv})} \, .
\end{equation}
Its Euclidean action is
\begin{equation}
S_{\rm E} = -{3 \over 8} {M_{\rm Pl}^4 \over
V(\phi_{\rm fv})} \, .
\end{equation}
If the parameters of the theory are such that the characteristic
radius of the
flat space bounce
is much less than $H_{\rm fv}^{-1}$, then the curved space
bounce will be roughly as illustrated in Fig.~\ref{bigsmallbounce}a,
with the small region near $\xi =0$ corresponding to the true vacuum
region of the flat space bounce, the equatorial slice giving the
optimal configuration for emerging from the potential barrier, and a
slice such as that indicated by the lower dashed line roughly
corresponding to the state of the system before the tunneling process.
A bounce solution such as this yields a nucleation rate that differs
only slightly from the flat space result.
On the other hand, there are
choices of parameters that give a bounce
solution similar to that indicated in Fig.~\ref{bigsmallbounce}b, with
a true vacuum region that occupies a significant fraction of the
Euclidean space. In this case, there is no slice that even roughly
approximates the initial state, suggesting that one should view this
as more analogous to a thermal transition in flat space than to
zero-temperature quantum mechanical tunneling.
Indeed, for a bounce such as this the true and false vacuum regions
can perhaps be viewed as being on a similar footing, so that the
bounce can describe either the nucleation of a true vacuum bubble in a
region of false vacuum, or the nucleation of a false vacuum bubble in
a true vacuum region~\cite{Lee:1987qc}. The rate for the former case
would be
\begin{equation}
\Gamma_{{\rm fv}\rightarrow {\rm tv}} \sim
\exp\left\{-[S_{\rm E}({\rm bounce}) - S_{\rm E}({\rm fv})]\right\}
\, ,
\end{equation}
while for the latter,
\begin{equation}
\Gamma_{{\rm tv}\rightarrow {\rm fv}} \sim
\exp\left\{-[S_{\rm E}({\rm bounce}) - S_{\rm E}({\rm tv})]\right\}
\, .
\end{equation}
The ratio of these is
\begin{eqnarray}
{ \Gamma_{{\rm tv}\rightarrow {\rm fv}}
\over \Gamma_{{\rm fv}\rightarrow {\rm tv}} }
\!\! \!\! \!\!&=& \!\! \!\! \!\! \exp\left\{S_{\rm E}({\rm tv})
- S_{\rm E}({\rm
fv})\right\}
\cr
\!\! \!\! \!\! &=& \!\! \!\! \!\! \exp\left\{{3 \over 8} {M_{\rm
Pl}^4\over V(\phi_{\rm fv})}
-{3 \over 8} {M_{\rm Pl}^4\over V(\phi_{\rm tv}) }
\right\} \, .
\end{eqnarray}
If $V(\phi_{\rm fv}) - V(\phi_{\rm tv}) \ll V(\phi_{\rm fv})$, the
geometry of space is roughly the same in the two vacua, and we can
sensibly ask about the relative volumes of space occupied by the false
and true vacua. In the steady state, this will be
\begin{eqnarray}
{{\cal V}_{\rm fv} \over {\cal V}_{\rm tv}}
\!\! \!\! \!\!&=& \!\! \!\! \!\!
{ \Gamma_{{\rm tv}\rightarrow {\rm fv}}
\over \Gamma_{{\rm fv}\rightarrow {\rm tv}} } \cr
\!\! \!\! \!\! &\approx& \!\! \!\! \!\!
\exp\left\{- {{4\pi \over 3} H^{-3}
[V(\phi_{\rm fv}) -V(\phi_{\rm tv})] / T_{\rm dS}}\right\} \, .
\end{eqnarray}
The last line of this equation, which gives the ratio as the
exponential of an energy difference divided by the de Sitter
temperature, is quite suggestive of a thermal interpretation of
tunneling in this regime.
It is not hard to show that the flat space Euclidean field equations
always have a bounce solution. This is no longer true when gravity is
included, as we will see more explicitly below. However,
Eqs.~(\ref{curvedPhiEq}) and (\ref{rhoEq}) always have a homogeneous
Hawking-Moss~\cite{Hawking:1981fz} solution that is
qualitatively quite different from the flat space bounce. Here $\phi$
is identically equal to its value $\phi_{\rm top}$ at the top of the
barrier, while Euclidean space is a four-sphere of radius $H^{-1}_{\rm
top} \equiv \sqrt{3M_{\rm Pl}^2/8\pi V(\phi_{\rm top})}$. From this solution
one infers a rate for nucleation from the false vacuum,
\begin{equation}
\Gamma_{\rm fv} \sim
\exp\left\{-{3 \over 8} {M_{\rm Pl}^4\over V(\phi_{\rm fv})}
+{3 \over 8} {M_{\rm Pl}^4\over V(\phi_{\rm top})} \right\} \, .
\end{equation}
\begin{figure}[bt]
\includegraphics[height=.56\textheight]{asymmetric_array.eps}
\caption{Bounce solutions for a scalar potential~\cite{Hackworth}
with cubic and quartic interactions and $\beta = 70.03$. }
\label{array}
\end{figure}
\section{Other types of bounces?}
Given the existence of the Hawking-Moss solution, it is natural to
inquire whether the inclusion of gravity allows any other new classes
of Euclidean solutions. In particular, might there be ``oscillating
bounce'' solutions
in which $\phi$ crosses the potential barrier not
once, but rather $k>1$ times, between $\xi=0$ and $\xi=\xi_{\rm max}$?
There can indeed be such solutions~\cite{Banks:2002nm}.
In examining their properties, we focussed on the case where $V(\phi_{\rm
top}) - V(\phi_{\rm tv}) \ll V(\phi_{\rm tv})$. This simplifies the
calculations considerably, but does not seem to be essential for our
final conclusions. With this assumption, the metric is, to a first
approximation, that of a four-sphere of fixed radius $H^{-1}$, and
$\xi_{\rm max} = \pi/H$. We then only need to solve the scalar field
Eq.~(\ref{curvedPhiEq}). Defining $y = H\xi$, we can write this as
\begin{equation}
{d^2 \phi \over dy^2} + 3 \cot y {d\phi \over dy} = {1 \over H^2}
{dV \over d\phi} \, .
\end{equation}
It is convenient to start by first examining ``small amplitude''
solutions in which $\phi(0)$ and $\phi(\pi)$ are both close to
$\phi_{\rm top}$. Let us assume that near the top of the barrier $V$
can be expanded as\footnote{The omission of cubic terms here is only
to simplify the algebra. There is no difficulty, and little
qualitative change, in including such terms. The details are given in
Ref.~\cite{Hackworth}.}
\begin{equation}
\tilde V(\phi) = V(\phi_{\rm top})
- {H^2\beta \over 2} (\phi- \phi_{\rm top})^2
+ {H^2\lambda \over 4} (\phi- \phi_{\rm top})^4
+ \cdots
\end{equation}
with
\begin{equation}
\beta = {| V''(\phi_{\rm top})| \over H^2} \, .
\end{equation}
Keeping only terms linear in $(\phi-\phi_{\rm top})$ in
Eq.~(\ref{curvedPhiEq}) gives
\begin{equation}
0 = {d^2 \phi \over dy^2} + 3\cot y\, {d \phi \over dy}
+ \beta (\phi - \phi_{\rm top}) \, ,
\end{equation}
whose general solution is
\begin{equation}
\phi(y)- \phi_{\rm top} = A C_\alpha^{3/2}(\cos y)
+ BD_\alpha^{3/2}(\cos y) \, ,
\end{equation}
where $C_\alpha^{3/2}$ and $D_\alpha^{3/2}$ are Gegenbauer functions
of the first and second kind and $\alpha(\alpha+3) = \beta$. The
vanishing of $d\phi/dy$ at $y=0$ implies that $B=0$; the analogous
condition at $y=\pi$ is satisfied only if $\alpha$ is an integer,
in which case $C_\alpha^{3/2}$ is a polynomial.
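As a quick numerical sanity check (purely illustrative, with $\alpha = 4$ so
that $\beta = 28$), one can verify that $C_\alpha^{3/2}(\cos y)$ with
$\beta = \alpha(\alpha+3)$ indeed satisfies the linearized equation:
\begin{verbatim}
import numpy as np
from scipy.special import gegenbauer

# Check: phi = C_alpha^{3/2}(cos y) solves
#   phi'' + 3 cot(y) phi' + beta*phi = 0,  beta = alpha*(alpha + 3).
alpha = 4
beta = alpha * (alpha + 3)
P = gegenbauer(alpha, 1.5)            # polynomial in x = cos y
dP, d2P = P.deriv(), P.deriv().deriv()
y = np.linspace(0.3, np.pi - 0.3, 7)  # keep away from the cot poles
x = np.cos(y)
phi = P(x)
dphi = -np.sin(y) * dP(x)                           # d phi / dy
d2phi = -np.cos(y) * dP(x) + np.sin(y)**2 * d2P(x)  # d^2 phi / dy^2
print(d2phi + 3.0 * dphi / np.tan(y) + beta * phi)  # ~ 0 to rounding
\end{verbatim}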
While the linearized equation only has solutions for special values of
$\beta$, this condition is relaxed when the nonlinear terms are
included. Furthermore, the nonlinear terms fix the amplitude of the
oscillations, which is completely undetermined at the linear level.
The problem can be analyzed by an approach similar to that used
to treat the anharmonic oscillator.
Any function with $d\phi/dy$ vanishing at both $y=0$ and $y=\pi$
can be expanded as
\begin{equation}
\phi(y) = \phi_{\rm top} + {1 \over \sqrt{|\lambda|}}
\sum_{M=0}^\infty A_M C_M^{3/2}(\cos y) \, .
\end{equation}
Substituting this into Eq.~(\ref{curvedPhiEq}) and keeping terms up to
cubic order in $(\phi-\phi_{\rm top})$ gives
\begin{eqnarray}
0
\!\! \!\! \!\!&=& \!\! \!\! \!\!
\sum_{M=0}^\infty C_M^{3/2}(\cos y) \Big[ [\beta - M(M+3)] A_M
\cr
\!\! \!\! \!\! && \!\! \!\! \!\!\qquad
- {\rm sgn}\,(\lambda) \sum_{I,J,K} A_I A_J A_K q_{IJK;M}
\Big] \, ,
\label{phiExpansion}
\end{eqnarray}
where the $q_{IJK;M}$ arise from expanding products of three Gegenbauer
polynomials.
Requiring that the quantities multiplying each of the $C_M^{3/2}$ separately
vanish yields an infinite
set of coupled equations. These simplify, however, if $|\Delta|
\equiv |\beta -N(N+3)| \ll 1$ for some $N$. In this case, one
coefficient,
$A_N$, is much greater than all the others. The $M=N$ term
in Eq.~(\ref{phiExpansion}) then gives (to leading order)
\begin{equation}
A_N = \pm \sqrt{ \Delta \over {\rm sgn}\,(\lambda) q_{NNN;N} } \, ,
\label{amplitude}
\end{equation}
where $q_{NNN;N} >0$.
If $\lambda>0$, Eq.~(\ref{amplitude}) only gives
a real value of $A_N$ if $\beta >
N(N+3)$. As $\beta$ is increased through this critical value, two
solutions appear. These are essentially small oscillations about
the Hawking-Moss solution, with $\phi(0) \approx \phi_{\rm top} \pm
A_N C_N^{3/2}(1)$ and $\phi(\pi) \approx \phi_{\rm top} \pm A_N
C_N^{3/2}(-1)$. Between these endpoints, $\phi$ crosses the top of
the barrier $N$ times. If $N$ is even, the two solutions are
physically distinct, with one having $\phi$ on the true vacuum side of
the barrier at both endpoints, and the other having both endpoint
values on the false vacuum side. If $N$ is odd, the two solutions are
just ``$y$-reversed'' images of each other.
As $\beta$ is increased further, the endpoints move down the sides of
the barrier, until eventually the small amplitude approximation breaks
down. Nevertheless, we would expect the solutions to persist, with
$\phi(0)$ and $\phi(\pi)$ each moving toward one of the vacua. When
$\beta$ reaches the next critical value, $(N+1)(N+4)$, two new
solutions, with $N+1$ oscillations about $\phi_{\rm top}$, will
appear, but the previous ones will remain. Thus, for $N(N+3) < \beta
< (N+1)(N+4)$, we should expect to find solutions with $k = 0, 1, 2,
\dots, N$ oscillations. We have confirmed these expectations by
numerically integrating the bounce equations for various values of the
parameters; the solutions for a typical potential with $\beta = 70.03$
are shown in Fig.~\ref{array}.
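For readers who wish to experiment, the following is a minimal sketch of the
kind of shooting procedure involved: integrate from $y \approx 0$ with
$d\phi/dy = 0$ and scan the starting amplitude until $d\phi/dy$ also vanishes
at $y \approx \pi$. The quartic truncation of the potential and all parameter
values are illustrative assumptions, not the potentials used for
Fig.~\ref{array}:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Shooting sketch for oscillating bounces on the fixed four-sphere:
#   u'' + 3 cot(y) u' = -beta*u + lam*u**3,  u = phi - phi_top, H = 1.
# beta and lam are illustrative; integrate from y ~ 0 with u'(0) = 0
# and look for amplitudes u(0) that give u'(pi) = 0.
beta, lam, eps = 70.03, 1.0, 1e-3

def slope_at_pi(u0):
    rhs = lambda y, u: [u[1],
                        -3.0 * u[1] / np.tan(y) - beta * u[0]
                        + lam * u[0]**3]
    sol = solve_ivp(rhs, (eps, np.pi - eps), [u0, 0.0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[1, -1]

# Scan u(0), then bisect on each sign change of the endpoint slope.
grid = np.linspace(0.05, 3.0, 40)
slopes = [slope_at_pi(u0) for u0 in grid]
for a, b, sa, sb in zip(grid[:-1], grid[1:], slopes[:-1], slopes[1:]):
    if sa * sb < 0:                    # a bounce is bracketed here
        for _ in range(40):
            m = 0.5 * (a + b)
            if slope_at_pi(m) * sa > 0:
                a = m
            else:
                b = m
        print("bounce with u(0) ~ %.4f" % (0.5 * (a + b)))
\end{verbatim}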
The fact that the number of solutions should increase with $\beta$ is
physically quite reasonable. One would expect the minimum distance
needed for an oscillation about $\phi_{\rm top}$, like the thickness
of the bubble wall itself, to be roughly $|V''|^{-1/2}$. Hence, the
number of
oscillations that can fit on a sphere of radius $H^{-1}$ should be of
order $H^{-1}/|V''|^{-1/2} = \sqrt{\beta}$. In particular, this
suggests that for $\beta < 4$ there should not even be a $k=1$
Coleman-De Luccia bounce~\cite{Jensen:1983ac}.
It is thus somewhat puzzling to note the implications of
Eq.~(\ref{amplitude}) for the case where $\lambda$ is negative. Here,
increasing $\beta$ through a critical value causes two solutions to
merge into the Hawking-Moss solution and disappear, suggesting that
the number of solutions is a decreasing function of $\beta$. The
resolution to this can be found by analytically and
numerically examining various potentials that are unusually flat at the top.
In all the cases we have examined, the number of solutions is governed
by a parameter $\gamma$ that measures an averaged value of $|V''|/H^2$
over the width of the potential barrier. When $\gamma$ is
sufficiently small, there are no bounce solutions (other than the
Hawking-Moss, which is always present).
As $\gamma$ is
increased, new solutions appear at critical values. These first
appear as solutions with finite values of $\phi(0) - \phi_{\rm top}$.
They then bifurcate, with $\phi(0)$ for one solution moving toward a
vacuum and $\phi(0)$ for the other moving toward $\phi_{\rm
top}$, eventually reaching it and disappearing when $\beta$ is at a
critical value. The net effect is that the number of solutions
generally increases with $\gamma$, although it is not strictly
monotonic.
\section{Interpreting the oscillating bounces}
How should these oscillating bounce solutions be interpreted? For the
flat space bubble, a spacelike slice through the center of the bounce
gives the initial conditions for the real-time evolution of the system
after nucleation; these predict a bubble wall with a well-defined
trajectory and a speed that soon approaches the speed of light. The
interpretation of the Coleman-De Luccia bounce is similar. The main
new feature here is the fact that the spacelike slice is finite.
Formally, this corresponds to the fact that de Sitter spacetime is a
closed universe, even though we expect the bubble nucleation
process to proceed similarly in a spacetime that only approximates de
Sitter locally.
The Hawking-Moss solution can be interpreted as corresponding to a
thermal fluctuation of all of de Sitter space (or, more plausibly, of
an entire horizon volume) to the top of the potential barrier.
Strictly speaking, classical Lorentzian evolution would leave $\phi$
at the top of the barrier forever. However, this is an unstable
configuration, and so would be expected to break up, in a stochastic
fashion, into regions that evolve toward one vacuum or the other.
The oscillating bounce solutions yield a hybrid of these two extremes.
The endcap regions near $\xi=0$ and $\xi=\xi_{\max}$ clearly evolve
into vacuum regions analogous to those from the Coleman-De Luccia
bounce, while the intermediate, ``oscillating'', region is like that
emerging from a Hawking-Moss mediated transition. As with the
Hawking-Moss solution, the bounce carries no information about the
initial state, and there is not even any correlation between the vacua
in the endcaps and the initial vacuum state. Thus, like Hawking-Moss,
it is reminiscent of finite temperature tunneling in the absence of
gravity, and provides evidence of the thermal nature of de Sitter
spacetime.
The relative importance of the various solutions depends on the values
of their Euclidean actions. Although the details vary with the
particular form of the potential, the various regimes are
characterized by a parameter $\gamma$ measuring an averaged value of
$|V''|/H^2$. If $\gamma \gg 1$, there is a Coleman-De Luccia bounce,
a Hawking-Moss solution, and many oscillating bounces. However, the
Coleman-De Luccia bounce has a much smaller action than the others,
and so dominates. This is a regime of quantum tunneling transitions
followed by deterministic classical evolution. At the other extreme
is the case where $\gamma \lesssim 1$, where the Hawking-Moss is the
only solution to the bounce equations. This is a regime of thermal
transitions followed by stochastic real-time evolution. In between is
a transitional region, with thermal effects still important. It is
here that the oscillating bounces are most likely to play a role.
\begin{theacknowledgments}
This work was supported in part by the U.S. Department of Energy.
\end{theacknowledgments}
\section{Introduction}
Stellar evolution theory predicts that stars initially more massive than 7-8$M_{\odot}$ should explode as Core Collapse Supernovae (CCSNe) at the end of their nuclear burning lives (e.g. Heger et al. 2003; Eldridge \& Tout 2004). The observation of progenitor stars of CCSNe in archival pre-explosion images provides us with a direct means of testing these theoretical predictions. Until quite recently the only directly observed progenitor stars of CCSNe were those of the Type II-peculiar SN~1987A in the Large Magellanic Cloud (LMC) (Walborn et al. 1987) and the Type IIb SN~1993J in M~81 (Aldering et al. 1994). The advent of large data archives, perhaps most importantly that of the Hubble Space Telescope (HST), has allowed several groups around the world to extend the progenitor search to larger distances (e.g. Smartt et al. 2004; Li et al. 2005; Gal-Yam et al. 2007). To date a total of nine progenitor stars of CCSNe have been reported, all of which are for hydrogen-rich Type II SNe, with the seven most recent additions specifically for Type II-P SNe. There has as yet been no direct detection of a hydrogen-deficient Type Ib or a hydrogen- and helium-deficient Type Ic supernova progenitor, although the precursor of the peculiar SN~2006jc was observed some two years prior to explosion during an LBV-like outburst (Pastorello et al. 2007). Upper limits have been determined for the luminosity of several Type Ib/c progenitors, of which the most restrictive limits set thus far are for the progenitor of the Type Ic SN~2004gt (Maund, Smartt \& Schweizer 2005; Gal-Yam et al. 2005). In terms of absolute magnitude, the pre-explosion HST observations probed down to $M_V$ = -5.3. This allowed both groups to determine that the progenitor was almost certainly a Wolf-Rayet star, a massive star that has lost its hydrogen-rich envelope. Such stars are predicted to be the progenitors of Type Ib/c SNe.
Another Type Ic SN with archival pre-explosion observations is SN~2002ap. It was discovered by Y. Hirose on 2002 January 29.4 UT at magnitude V=14.54 in the spiral galaxy M~74 (Nakano et al. 2002), at a distance of approximately 9.3 Mpc (Hendry et al. 2005). The object was quickly revealed as a Type Ic SN with broad spectral features, indicative of very high velocities of the SN ejecta (e.g. Meikle et al. 2002; Kinugasa et al. 2002; Gal-Yam et al. 2002; Mazzali et al. 2002; Foley et al. 2003). Although its spectra appeared similar to those of the peculiar SN 1998bw, it was less luminous, reaching a peak magnitude of $M_V\simeq$ -17.5, about 1.7 mag fainter than SN~1998bw. Furthermore, unlike SN~1998bw no gamma-ray burst (GRB) was detected coincident with the position of SN~2002ap (Hurley et al. 2002a). $UBVRIKH\alpha$ pre-explosion observations of the site of SN~2002ap in M~74, taken with the KPNO 0.9~m, the 2.5~m Isaac Newton Telescope (INT) and the Bok 2.3~m telescope, were used by Smartt et al. (2002) to derive upper limits to the absolute brightness of the progenitor star in the various filters. The $B$ and $V$ band observations were the deepest, reaching absolute magnitude limits as faint as $M_B$ = -6.3 and $M_V$ = -6.6.
At the time it was believed that these were the best quality images available of the pre-explosion site of SN~2002ap. However, we have discovered additional
pre-explosion images of the SN position in the archive of the Canada France Hawaii Telescope (CFHT), which are vastly superior in depth and resolution to those originally presented in Smartt et al. (2002). In fact we shall show that they are the deepest pre-explosion images of any nearby Type Ib/c SN to date.
A search of the new archive at the Canadian Astronomy Data Centre (CADC)\footnote{http://cadcwww.dao.nrc.ca/cfht/} returned these images. This paper presents a study of these very deep, high-quality ground-based observations of the pre-explosion site of SN~2002ap, in which we attempt to detect the progenitor star and place rigorous constraints on its properties.
\section{Observations}\label{sec:obs}
\subsection{Pre-explosion observations}\label{sec:pre}
The galaxy M74 was observed with the 3.6-m Canada France Hawaii Telescope in October 1999, and to our knowledge the observations have not been published. The site of SN~2002ap was imaged using CFH12K, a 12-CCD mosaic camera, which is now decommissioned. The pixel size of this instrument was 0\farcs2 and further details can be found on the CFHT website\footnote{http://www.cfht.hawaii.edu/Instruments/Imaging/CFH12K/}. The $B$ and $R$ observations were particularly deep, consisting of individual, dithered 600-second exposures which were combined to give total exposure times of 1 hour 20 minutes and 1 hour respectively. The $V$ and $H\alpha$ observations were much shallower, with the combined frames giving total exposure times of just 600 and 900 seconds respectively. A summary of these observations can be found in Table \ref{tab:obs}. The images were downloaded from the CADC archive, and most had been processed successfully through the ELIXIR system. However, some of the $R$ band images generated by ELIXIR were corrupted, so we used the master flat from the archive and processed the frames manually using standard techniques in IRAF.
There were 8$\times$600s exposures from 1999 October 9th in $B$ and the combined image quality was 0\farcs75. A further set of 8$\times$600s exposures was taken on 1999 December 31st, but seven of these had an image quality of $\sim$1\farcs2 (the remaining image had poor seeing of 1\farcs9). Combining the 1\farcs2 stacked image with the better seeing 0\farcs75 images did not increase the sensitivity, so we chose to proceed with analysis of the 1999 October 9th images only. The 6$\times$600s $R$ band images taken on 1999 October 6th were all of similar resolution (0\farcs7), and the shorter 5$\times$120s $V$ stack had a resultant quality of 1\farcs2. There were 5$\times$300s $H\alpha$ exposures taken on 1999 Jul 13, each with image quality around 0\farcs9. The observations were made towards the end of the night, which resulted in the final two exposures having significantly higher background counts. These two exposures were excluded from the final stacked image since they would only serve to add excess noise.
To determine zeropoints and colour corrections for each filter, magnitudes of standard stars identified from the catalogue of Henden et al. (2002) were measured using the aperture photometry task within the {\sc iraf} {\sc daophot} package. Due to the depth of the observations, many of the standard stars in the field were saturated and it was only possible to find five usable objects common to {\it BVR}. These stars are shown labelled in Figure~\ref{fig:finder} and the corresponding catalogue magnitudes are presented in Table \ref{tab:standards}. The calculated zeropoints are shown in Table \ref{tab:zero}.
\begin{table*}
\caption[]{Summary of observational data.}
\begin{center}
\begin{tabular}{lrrrr} \hline\hline
Date of Observation & Telescope + & Filter & Exposure Time & Observer\\
& Instrument & & (sec)\\
\hline
Pre-explosion observations\\[1.5ex]
1999 Jul 13 & CFHT+CFH12K & H$\alpha$ & 900 & Cuillandre, Hainaut, McDonald\\
1999 Oct 06 & CFHT+CFH12K & R & 3600 & Astier, Fabbro, Pain\\
1999 Oct 09 & CFHT+CFH12K & B & 4800 & McDonald, Cuillandre, Veillet\\
1999 Oct 09 & CFHT+CFH12K & V & 600 & McDonald, Cuillandre, Veillet\\[1.5ex]
\hline
Post-explosion observations\\[1.5ex]
2003 Jan 10 & HST+ACS/HRC & F475W & 120 & Kirshner\\
2003 Jan 10 & HST+ACS/HRC & F625W & 120 & Kirshner\\
2003 Jan 10 & HST+ACS/HRC & F658N & 800 & Kirshner\\
2003 Jan 10 & HST+ACS/HRC & F814W & 180 & Kirshner\\
2004 Jul 06 & HST+ACS/HRC & F555W & 480 & Filippenko\\
2004 Jul 06 & HST+ACS/HRC & F814W & 720 & Filippenko\\
2004 Aug 31 & HST+ACS/HRC & F435W & 840 & Filippenko\\
2004 Aug 31 & HST+ACS/HRC & F625W & 360 & Filippenko\\
2006 Aug 25 & WHT+AUX & B & 4800 & Benn, Skillen\\
2006 Aug 27 & WHT+AUX & R & 3600 & Benn, Skillen\\
\hline\hline
\end{tabular}\\
\end{center}
{\footnotesize CFHT = 3.6-m Canada France Hawaii Telescope, Mauna Kea, Hawaii\\
HST+ACS/HRC = Hubble Space Telescope + Advanced Camera for Surveys/High Resolution Camera\\
WHT = 4.2-m William Herschel Telescope, La Palma, Canary Islands}
\label{tab:obs}
\end{table*}
\begin{figure}
\begin{center}
\epsfig{file = finder2.eps, width = 80mm}
\caption{Standard stars in CFHT pre-explosion observations. The approximate location of the site of SN~2002ap is also indicated. All other images in this paper have the same orientation as this figure.}
\label{fig:finder}
\end{center}
\end{figure}
\begin{table}
\caption[]{Standard star magnitudes taken from Henden et al. (2002).}
\begin{center}
\begin{tabular}{rcccc} \hline\hline
Star & Henden catalogue & B & V & R\\
& identity\\
\hline
A & 118 & 20.827 & 19.286 & 18.257\\
B & 105 & 20.231 & 19.756 & 19.523\\
C & 91 & 20.804 & 19.286 & 18.234\\
D & 87 & 21.333 & 19.827 & 18.696\\
E & 85 & 20.739 & 20.536 & 20.326\\
\hline\hline
\end{tabular}\\
\end{center}
\label{tab:standards}
\end{table}
\begin{table}
\caption[]{CFHT zeropoints and colour corrections.}
\begin{center}
\begin{tabular}{lrr} \hline\hline
Filter & Colour Correction & Zeropoint\\
& (Colour)\\
\hline
B & 0.04(B-R) & 26.10\\
R & 0.02(B-R) & 26.29\\
\hline\hline
\end{tabular}\\
\end{center}
\label{tab:zero}
\end{table}
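To illustrate how a calibration of the form in Table~\ref{tab:zero} can be
applied (the instrumental magnitudes below are hypothetical, and the simple
iteration over the colour term is our own minimal sketch rather than the
procedure actually used):
\begin{verbatim}
# Applying a calibration of the form m_std = m_inst + ZP + c*(B - R);
# zeropoints and colour terms from Table 3, instrumental magnitudes
# below are hypothetical.
def calibrate(b_inst, r_inst, zp_b=26.10, zp_r=26.29,
              c_b=0.04, c_r=0.02, n_iter=5):
    """Iterate because the colour term depends on calibrated mags."""
    B, R = b_inst + zp_b, r_inst + zp_r  # first guess: no colour term
    for _ in range(n_iter):
        B = b_inst + zp_b + c_b * (B - R)
        R = r_inst + zp_r + c_r * (B - R)
    return B, R

print(calibrate(-5.6, -7.0))             # hypothetical star
\end{verbatim}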
\subsection{Post-explosion observations}\label{sec:post}
Observations of SN~2002ap were taken using the High Resolution Channel (HRC) (pixel scale = 0\farcs025) of the Advanced Camera for Surveys (ACS) on board HST during January 2003, and July and August 2004. These formed part of the programmes GO9114 (PI: R. Kirshner) and SNAP10272 (PI: A. Filippenko), designed to obtain late-time photometry of nearby SNe. Details of these observations are given in Table \ref{tab:obs}. All HST observations were downloaded from the Space Telescope Science Institute (STScI) archive via the on-the-fly re-calibration (OTFR) pipeline. Those from 2004 were found to be well aligned despite being taken at two different epochs almost two months apart. Identification of SN~2002ap in these late-time images was aided by comparison with those from January 2003 in which the SN was significantly brighter. In all cases the pointing of the observations was such that SN~2002ap was imaged very close to the centre of the HRC chip.
Photometry was carried out using the ACS module of the PSF-fitting photometry package {\sc dolphot}\footnote{http://purcell.as.arizona.edu/dolphot/}, a modified version of HSTphot (Dolphin 2000). This package uses model PSFs to automatically detect, characterise and perform photometry of objects in ACS images. Furthermore, it incorporates aperture corrections, charge transfer efficiency (CTE) corrections (Reiss 2003) and transformation of HST flight system magnitudes to the standard Johnson-Cousins magnitude system (Sirianni et al. 2005). Objects classified as being good stellar candidates were retained from the {\sc dolphot} output, with all other objects, characterised as too faint, too sharp, elongated or extended, being discarded. A further cut was performed to remove all objects containing bad or saturated pixels, and stars which had PSF fits with $\chi>2.5$.
In the 2004 observations SN~2002ap was detected with a significance of $\sim$ 8$\sigma$ in the F814W image, a significance of $\sim$ 5$\sigma$ in the F555W image, but was not detected in the F435W and F625W images down to their 5$\sigma$ detection limits. Photometry of the SN and of several nearby stars (see Figure~\ref{fig:SNcluster}) is given in Table \ref{tab:dolphot}.
Also shown are 5$\sigma$ detection limits for each of the HST observations. Although objects 2 and 3 in Figure~\ref{fig:SNcluster} appear to be a single object they are detected by {\sc dolphot} as two separate PSFs. Proper scaling of the F435W image allows us to verify that at least two sources exist, where object 2 appears as a much fainter extension to the bright source number 3. From the positions measured by {\sc dolphot} the two sources are separated by $\sim$ 3 ACS/HRC pixels (0\farcs075). The diffraction limited resolution of HST results in stellar PSFs with FWHMs of 0\farcs05 in F435W and 0\farcs09 in F814W. The much larger magnitude difference and the increased blending of objects 2 and 3 as observed in the F814W image make it impossible to distinguish the two sources by eye in this image. Since Figure~\ref{fig:SNcluster} is a combination of the F435W and F814W observations, objects 2 and 3 appear to be a single source.
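Returning to the 5$\sigma$ limits quoted above: for orientation, such a limit
can be estimated from the sky noise with the standard approximation
$m_{\rm lim} = {\rm ZP} - 2.5\log_{10}(5\,\sigma_{\rm sky}\sqrt{N_{\rm pix}})$.
The sketch below uses illustrative numbers, not the actual values for these
frames:
\begin{verbatim}
import numpy as np

# 5-sigma point-source limit from sky noise (standard approximation):
#   m_lim = ZP - 2.5 log10(5 * sigma_sky * sqrt(N_pix))
# All numbers below are illustrative, not the ACS/HRC values.
zeropoint = 25.5     # mag of a source giving 1 count/s (assumed)
sigma_sky = 0.02     # rms sky noise, counts/s per pixel (assumed)
n_pix = 25           # pixels inside the photometry aperture (assumed)
m_lim = zeropoint - 2.5 * np.log10(5.0 * sigma_sky * np.sqrt(n_pix))
print("5-sigma limit: %.2f mag" % m_lim)
\end{verbatim}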
Further late-time images of SN~2002ap were taken at the 4.2-m William Herschel Telescope in late August 2006 (see Table \ref{tab:obs} for details). $B$ and $R$ band observations with exposure times matching those of the pre-explosion CFHT images were made using the Auxiliary Port Imaging Camera (AUX), which has a pixel scale of 0\farcs11/pixel. The reduction and subsequent combining of the images was performed using standard tasks within {\sc iraf}. There were 8$\times$600s exposures taken in the $B$ band on 2006 August 25th. Inspection showed the combined image quality to be 0\farcs75. A set of 6$\times$600s $R$ band exposures were taken on 2006 August 27th, and the quality of the final combined image was 0\farcs7.
\begin{figure}
\begin{center}
\epsfig{file = new_SNcluster.eps, width = 80mm}
\caption{HST image showing SN~2002ap and several neighbouring stellar objects. This image is the sum of the F435W (Aug 2004) and F814W (Jul 2004) HRC observations. Although objects 2 and 3 appear to be a single source in this image, they are detected as two PSFs separated by 3 ACS/HRC pixels (0\farcs075) by the PSF-fitting photometry package {\sc dolphot}. See text for further details.}
\label{fig:SNcluster}
\end{center}
\end{figure}
\begin{table*}
\caption[]{Photometry of SN~2002ap and nearby stars in the HST observations of 2004 (see Figure~\ref{fig:SNcluster}). Transformation from HST flight system magnitudes to Johnson-Cousins magnitude system made using the transformation of Sirianni et al. (2005). Also shown are the 5$\sigma$ detection limits derived for each filter at the location of the SN. The detection limits were transformed to standard magnitudes assuming zero colours.}
\begin{center}
\begin{tabular}{rcccc} \hline\hline
& B & V & R & I\\
\hline
5$\sigma$ limit & 26.3 & 25.7 & 25.4 & 25.6\\
\hline
1 & 25.43(0.10) & 23.48(0.04) & 22.16(0.03) & 21.45(0.01)\\
2 & 24.49(0.06) & 24.60(0.09) & 24.86(0.18) & 24.87(0.17)\\
3 & 22.72(0.02) & 22.06(0.02) & 21.72(0.02) & 21.45(0.01)\\
4 & 24.85(0.07) & 25.29(0.13) & 24.89(0.15) & 25.22(0.16)\\
5 & 25.30(0.09) & 25.56(0.16) & - & -\\
SN & - & 25.84(0.19) & - & 25.05(0.13)\\
\hline\hline
\end{tabular}\\
\end{center}
{\footnotesize Figures in brackets are the photometric errors}
\label{tab:dolphot}
\end{table*}
\begin{figure*}
\centering
\begin{minipage}[c]{0.5\textwidth}
\centering
\epsfig{file = CFHT_pre_R.eps, width = 80mm}
\end{minipage}\\[10pt]
\begin{minipage}[c]{0.5\textwidth}
\centering
\epsfig{file = HST_2003_625W.eps, width = 80mm}
\end{minipage}\\[10pt]
\begin{minipage}[c]{0.5\textwidth}
\centering
\epsfig{file = HST_2004_BI_sum.eps, width = 80mm}
\end{minipage}\\[10pt]
\caption{Pre- and post-explosion imaging of the site of SN 2002ap. All images are centred on the SN location. The top panel is the pre-explosion CFHT $R$ band observation, the middle panel is the post-explosion HST F625W observation (120 seconds) taken in Jan 2003, and the lower panel is the sum of the post-explosion HST F814W (720 seconds) and F435W (840 seconds) observations taken in Jul/Aug 2004. Note that the F625W image is much shallower than either of the constituent observations that make up the bottom panel, which explains why it is difficult to identify objects common to both these images. The F625W observation is shown here to convince the reader of the location of SN~2002ap. The SN has faded significantly between the 2003 and 2004 observations, but as can be seen from Figure~\ref{fig:SNcluster} it is still visible at the later epoch.} \label{fig:prepost}
\end{figure*}
\begin{figure*}
\begin{center}
\epsfig{file = psf_sub.eps, width = 160mm}
\caption{The original pre-explosion observation (left), and the resultant image after PSF fitting and subtraction of objects close to the SN site (right).}
\label{fig:subtracted}
\end{center}
\end{figure*}
\section{Data Analysis}\label{sec:analysis}
\subsection{Alignment of pre- and post-explosion observations}\label{sec:alignment}
The site of SN~2002ap was found to lie on chip 8 of the CFH12K instrument in all the pre-explosion observations. All the images at CFHT were dithered by the observers to improve rejection of hot pixels and other CCD defects. Hence we registered the images in each filter by applying simple linear shifts and then combined them using standard tasks within {\sc iraf}. The stacked $BVR$ images were then co-aligned using the $R$-band image as the reference frame. To determine the precise location of SN~2002ap on the CFHT pre-explosion images, we employed the same method as used by, for example, Smartt et al. (2004) and Maund \& Smartt (2005).
In order to calculate a transformation to map the ACS/HRC (post-explosion) coordinate system to the CFHT (pre-explosion) observations, the positions of stars common to both sets of images were required. It proved particularly difficult to identify common stars due to the extreme differences in image resolution (CFHT $\sim$0\farcs8; HST $\sim$0\farcs08) and the pixel scales of the two instruments (CFH12K 0\farcs2/pixel; ACS/HRC 0\farcs025/pixel). Most of the stars in the HRC field of view are in resolved clusters, which appear blended in the lower resolution ground based observations. Such stars cannot be used to align the images since reliable measurements of their positions cannot be made in the CFHT frames. Nevertheless ten isolated stellar objects common to both the pre- and post-explosion observations were identified, and their positions measured using the centring algorithms within the {\sc iraf} {\sc daophot} package.
A general geometric transformation ($x$ and $y$ shifts, scales and rotations) which mapped the ACS/HRC coordinate system to that of the CFH12K instrument was calculated using the {\sc iraf} task {\sc geomap} and the positions of those stars common to both sets of observations. Figure~\ref{fig:prepost} shows sections of the pre- and post-explosion images centred on the SN coordinates and aligned using the {\sc geomap} solution. The pixel position of SN~2002ap in the post-explosion F814W frame was measured using the three centring algorithms of the {\sc daophot} package (centroid, ofilter, gauss), and the mean value transformed to the coordinate system of the CFHT images using the {\sc iraf} task {\sc geoxytran}. The error in the SN position was estimated from the standard deviation of these three measurements to be $\sim$ 0\farcs003. This error proved insignificant when compared to the RMS error of the transformation, which was $\sim$ 0\farcs072.
\subsection{The SN site in pre-explosion observations}
The location of the SN site in the CFHT images was investigated to discover if a progenitor star had been observed. Figure~\ref{fig:prepost}a shows that the SN position lies on the edge of a bright source, which is blended in the CFHT frames but, as can be seen from the HST imagery (Figure~\ref{fig:SNcluster}), is formed from at least five separate objects. The FWHM of stellar sources in the CFHT $BR$ frames is $\sim$ 0\farcs8, and in $V$ $\sim$ 1\farcs2. The SN was found to lie some 2\arcsec\ from the nearest of the sources that constitute the blend, so any progenitor star in the CFHT images should be resolved, provided the observations were deep enough for it to be detected.
Visual inspection showed there to be significant flux close to the SN location in the $B$ and $R$ frames, with nothing visible in the much shallower $V$-band image. At first sight this appears to be a detection of a progenitor star. Two methods were used to test whether the progenitor star had indeed been recovered.
\subsubsection{PSF fitting photometry using {\sc DAOPHOT}}\label{sec:psffitting}
Method 1 utilised the PSF photometry tasks within the {\sc iraf} {\sc daophot} package, and followed the techniques for crowded field photometry described by Massey \& Davis (1992). Aperture photometry was initially performed on objects across the CFHT frames, before an empirical point spread function (PSF) model was created from several isolated stars using the task {\sc psf}. Several different PSF models were created in order to test how sensitive the photometry results were to the model used. The task {\sc allstar} was run on a subset of stars in the CFHT frames for each PSF model, and the photometry results from each run compared. The results varied little with the PSF model used, and could be considered identical to within the photometric errors. The positions of the SN and of the nearby stars were measured in the HST images and transformed to the CFHT coordinate system using the transformation discussed in Section~\ref{sec:alignment}. These positions, along with the magnitudes measured by {\sc dolphot}, were fed into the {\sc allstar} task, which performed simultaneous PSF fitting on all the objects, including any possible progenitor at the SN site. The success of the fitting procedure was judged from the fit residuals in the subtracted image. Figure~\ref{fig:subtracted} shows an original CFHT frame and a PSF-subtracted version.
The {\sc allstar} task was implemented in two modes: 1) allowing re-centring of objects during PSF fitting using the initial input coordinates as starting positions, and 2) no re-centring of objects. A summary of the results is presented in Table~\ref{tab:allstarphot}.
In the case where re-centring was permitted, the position of the object recovered closest to that of the SN was significantly shifted from the supernova's precise location: by 0\farcs435 in the $B$-band image and 0\farcs368 in the $R$ band. The positions of this object in the pre-explosion observations were measured using several methods, and mean values were calculated for each filter. One measurement was made by fitting all of the sources simultaneously. Three more were made by first subtracting out all of the sources except the possible progenitor, and then measuring its position using the three centring algorithms of the {\sc iraf} {\sc daophot} package. One final measurement was made by fitting a PSF to the source in this subtracted image. The positional errors were estimated from the standard deviation of these five values and are shown in Table~\ref{tab:astrometric_errors}, along with the other astrometric errors discussed in Section~\ref{sec:alignment}. From the total astrometric errors appropriate for each filter we find 4.1$\sigma$ and 2.5$\sigma$ differences between the position of the progenitor site and the object positions measured in the $B$ and the $R$-band images respectively. Moreover, the positions measured in the pre-explosion $B$ and $R$-band frames are themselves separated by $\sim$ 0\farcs200. The astrometric error against which to compare this separation is independent of the transformation error, and is simply a combination of those associated with measuring positions on each of the CFHT frames: 0\farcs148. The separation of 0\farcs200 is therefore equivalent to $\sim$ 1.4$\sigma$, which is too high to confirm coincidence yet too low to rule it out. Equally, the displacements of the pre-explosion sources from the SN position are of sufficient significance that no definite conclusion of coincidence can be drawn.
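The quadrature combinations behind these significances can be reproduced with a short Python sketch (illustrative only; all numerical values are taken from the text and Table~\ref{tab:astrometric_errors}):
\begin{verbatim}
import math

def quad(*errors):
    # combine independent 1-sigma errors (mas) in quadrature
    return math.sqrt(sum(e * e for e in errors))

sn_pos, transform = 3.0, 72.0       # SN position and geomap RMS (mas)
pre_B, pre_R = 78.0, 126.0          # pre-explosion position errors (mas)

total_B = quad(sn_pos, transform, pre_B)   # ~106 mas
total_R = quad(sn_pos, transform, pre_R)   # ~145 mas

# significance of the offsets of the recovered object from the SN site
print(435.0 / total_B, 368.0 / total_R)    # ~4.1 sigma, ~2.5 sigma

# B-R separation test (independent of the transformation error)
print(200.0 / quad(pre_B, pre_R))          # ~1.4 sigma
\end{verbatim}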
\begin{table}
\caption[]{{\sc daophot} PSF fitting photometry of the supernova site in pre-explosion CFHT observations.}
\begin{center}
\begin{tabular}{rcc} \hline\hline
Filter & $\Delta$posn (mas) & Mag\\
\hline
\multicolumn{3}{l}{Re-centring during profile fitting}\\[1.5ex]
B & 435 & 25.47(0.10)\\
R & 368 & 24.80(0.14)\\
\hline
\multicolumn{3}{l}{No re-centring during profile fitting}\\[1.5ex]
B & 0 & 25.85(0.19)\\
R & 0 & 25.00(0.19)\\
\hline\hline
\end{tabular}\\[1.5ex]
{\footnotesize Figures in brackets are the photometric errors}
\end{center}
\label{tab:allstarphot}
\end{table}
However, it is possible that the re-centred positions of these objects are unreliable. It has been found that, when stars are allowed to re-centre during profile fitting, the positions of some of the faintest stars may be shifted substantially towards peaks in the subtraction noise of nearby very bright stars (Shearer et al. 1996). Since no objects corresponding to those found in the CFHT images were observed in the HST frames, and since the shift from the SN position was towards the nearby bright stars, it is reasonable to suggest that this happened in the present case.
The alternative is therefore to prohibit re-centring, forcing each PSF fit to be centred at the input coordinates throughout the fitting process. Via this method it was possible to fit a PSF at the progenitor site in both the $B$ and the $R$ band images. The magnitudes of these fits were slightly fainter than those from the re-centring case; $\sim$ 0.35 mags fainter in $B$ and $\sim$ 0.20 mags fainter in $R$ (Table~\ref{tab:allstarphot}). However, by disabling re-centring we lost any independent verification of the object's position, since we forced the PSF fit at the predetermined SN position. It is therefore not possible to conclude from PSF fitting techniques that the progenitor star of SN~2002ap is detected in the pre-explosion images, although the fact that the visible flux cannot be explained by any objects in the HST post-explosion observations makes such a conclusion appealing.
\begin{table}
\caption[]{Astrometric errors associated with the alignment of pre- and post-explosion observations.}
\begin{center}
\begin{tabular}{lr} \hline\hline
Error & Value (mas)\\
\hline
SN position & 3\\
Geometric transformation (RMS) & 72\\
\\
Pre-explosion position - $B$& 78\\
Pre-explosion position - $R$& 126\\
\hline
Total Error - $B$& 106\\
Total Error - $R$& 145\\
\hline\hline
\end{tabular}\\
\end{center}
\label{tab:astrometric_errors}
\end{table}
\subsubsection{Comparison with ground based late-time observations}\label{sec:imagesubtraction}
Another method that can be used to detect the presence of a progenitor star in archival imagery is to subtract a late-time post-explosion image from the pre-explosion version. The progenitor star will obviously not exist in the post-SN images so, as long as the SN itself has faded enough to contribute negligible flux, the subtraction of post-SN imagery from pre-SN frames should reveal a positive source at the progenitor position. This of course depends on whether the original observations were deep enough for the progenitor star to be detected above the background noise. To perform such an image subtraction, late-time observations of SN~2002ap were taken with the Auxiliary Port Imaging Camera (AUX) of the 4.2-m William Herschel Telescope (WHT) (details in Table \ref{tab:obs}). Observations were made using $B$ and $R$-band filters, with identical exposure times and in similar seeing conditions to the CFHT observations. Taken approximately four and a half years post-explosion, these frames show no trace of the SN, which was by then much too faint to be detected (note the magnitudes and detection limits for the SN in the 2004 HST observations, Table~\ref{tab:dolphot}). Visual inspection showed the depths of the pre- and post-explosion images to be very similar (see Figures~\ref{fig:isis_subtraction}a and~\ref{fig:isis_subtraction}b).
After re-binning the WHT frames to match the pixel scale of the CFH12K camera (CFH12K pixel scale 0\farcs2; AUX pixel scale 0\farcs1) and accurate image alignment, the image subtraction package {\sc isis} 2.2 (Alard \& Lupton 1998; Alard 2000) was used to subtract the post-explosion images from their pre-explosion counterparts. Subtraction of the $R$-band observations resulted in a near perfectly flat subtracted image at the position of the SN (see Figure~\ref{fig:isis_subtraction}d), implying that the same source is visible pre- and post-explosion and therefore cannot be the SN progenitor. However, in the case of the $B$-band subtraction an extended negative feature, albeit of quite low signal-to-noise, is visible overlapping the SN position (see Figure~\ref{fig:isis_subtraction}c). Given the sense of the subtraction ({\it pre minus post}) a negative feature indicates an increase in detected flux in the post-explosion $B$-band image, an observation that is inconsistent with the detection of a progenitor in the pre-explosion frame. Also visible in the $B$-band subtracted image is a bright positive residual coincident with the centre of the adjacent blended source some 2\arcsec\ from the SN position, implying that something in this blend appeared brighter in the pre-explosion image. We can instantly rule this out as a progenitor detection based solely on the astrometry.
One might argue that the progenitor is detected in the pre-explosion frames and that the flux in the WHT post-explosion $B$ and $R$-band observations is from the SN, which has rebrightened perhaps due to interaction with circumstellar material or a light echo from interstellar dust. This scenario could potentially explain the differences between the $B$-band images. On the other hand it is highly unlikely. Comparing the $B$ and $R$ magnitudes of the pre-explosion source (Table~\ref{tab:allstarphot}) with the 5$\sigma$ detection limits of the SN in August 2004 (Table~\ref{tab:dolphot}) we see that by this epoch the SN has already become significantly fainter than the pre-explosion object. In order to reproduce the subtracted images, the SN would have to become significantly brighter between August 2004 and August 2006 when the WHT images were taken. Crucially, its $R$ magnitude at this later epoch would have to exactly match that of the pre-explosion source, with its $B$ magnitude becoming only slightly brighter. Such coincidental matching of the SN magnitudes to those of the pre-SN source is virtually impossible.
A simpler explanation is that the source close to the SN position in the pre- and post-explosion images is the same object, with the residuals in the subtracted image arising from differences between the CFH12K and the AUX filter functions. Since this object is still visible post-explosion, it obviously cannot be the progenitor. So what is it?
Nothing is visible in the 2004 HST observations at the position of this source, which lies between the SN and the closest of the nearby stars in Figure~\ref{fig:SNcluster}. This is in spite of the fact that the detection limits of the HST images are significantly deeper than the measured magnitudes of this object in the CFHT frames. If, however, we assume that the object is a region of extended emission, we can explain this non-detection by the apparent low sensitivity of the ACS/HRC to such sources when compared with the CFHT instrument. Inspection of Figure~\ref{fig:prepost} shows several areas where there is significant flux in the CFHT images which cannot be accounted for by stars visible in the high resolution HST frames. Some traces of this flux are just visible in the HST F814W observation from 2004. Due to the very small pixel scale of the ACS/HRC (0\farcs025/pixel), the light from a region of extended emission is spread over many pixels and is therefore easily dominated by detector read noise if exposure times are not sufficiently long. In contrast, the CFH12K pixels, which cover an area on the sky sixty-four times that of the HRC pixels, are much more sensitive when imaging such sources. Summing the HST frames from all filters, and rebinning to match the pixel scale of the CFH12K camera, gave some indication of a diffuse region of emission close to the SN position and extending towards the bright stars nearby, but attempts to estimate its magnitude proved futile due to the very low signal-to-noise ratio.
If the flux from this diffuse source is dominated by emission lines (for example, an H\,{\sc ii} region), this fact coupled with the differences between the CFHT and the WHT filter functions could explain the negative residuals in the subtracted $B$-band image (see Figure~\ref{fig:filtfunctions}). Unfortunately the H$\alpha$ pre-explosion observation was too shallow (900\,s compared to the 3600\,s $R$-band exposure) to produce a significant detection at the SN site, so it cannot be used to confirm the presence of an H\,{\sc ii} region.
We conclude that the object seen in the pre-explosion observations (in Fig.~\ref{fig:prepost}) close to the SN position is a diffuse source, possibly an H\,{\sc ii} region, and {\it not} the SN progenitor.
\begin{figure*}
\begin{center}
\epsfig{file = cfht_wht_sub.eps, width = 160mm}
\caption{a) CFHT pre-explosion $B$ observation, b) WHT post-explosion $B$ observation, c) $pre$ $minus$ $post$ subtracted $B$-band image and d) $pre$ $minus$ $post$ subtracted $R$-band image. All the images are shown with inverted intensities (positive sources appear black, negative sources appear white). The $R$-band subtraction (d) appears to be perfectly flat at the position of the SN, which is marked in all frames by cross hairs. In the $B$-band subtracted image (c) a faint and extended negative (white) feature is visible overlapping the SN location. Adjacent to this, coincident with the centre of the blend of nearby stars, is a bright positive (black) source. It is believed that both of these subtraction residuals are due to differences in the transmission functions of the CFHT and WHT filters (see Figure~\ref{fig:filtfunctions}): the negative feature overlapping the SN position is a diffuse source of emission line flux (possibly an H\,{\sc ii} region) that is more efficiently transmitted by the WHT $B$ filter, and the positive source is due to extra continuum flux from the nearby bright stars that is transmitted by the CFHT filter.}
\label{fig:isis_subtraction}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\epsfig{file = filt_functions.eps, width = 160mm}
\caption{Comparison of CFHT/CFH12K and WHT/AUX $B$ (left) and $R$ (right) filter functions.}
\label{fig:filtfunctions}
\end{center}
\end{figure*}
\subsection{Detection limits of the pre-explosion observations}\label{sec:limits}
Since the progenitor star was not detected, limiting magnitudes were derived for the CFHT $B$ and $R$ band observations using several methods.
The standard deviation of the counts per pixel, in an area of 5$\times$5 pixels centred on the SN location, was used as a measure of the noise per pixel at the SN site in each image. By rearranging the equation for the signal-to-noise ratio
\begin{center}
\begin{equation}
\label{equ:S/N}
S/N = \frac{F_{star}}{\sqrt{F_{star} + \sigma^2}}
\end{equation}
\end{center}
where $F_{star}$ is the flux of a star and $\sigma$ is the background noise level (both in electron counts), one can calculate the $F_{star}$ required to produce a detection with a given signal-to-noise ratio. In this case we calculated the $F_{star}$ that would give a 5$\sigma$ detection within a 4 pixel ($\approx$ FWHM of the PSF) radius aperture. An aperture correction was applied to the resultant magnitude to give the final 5$\sigma$ limiting magnitude. The value of $\sigma$ in Equation~\ref{equ:S/N} is the standard deviation in electron counts for the whole 4 pixel aperture, $\sigma_{ap}$, and is calculated by multiplying the measured standard deviation per pixel, $\sigma_{pix}$, by the square root of the number of pixels in the aperture, $\sqrt{N}$.
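Rearranging Equation~\ref{equ:S/N} for a required signal-to-noise ratio gives a quadratic in $F_{star}$, whose positive root is $F_{star} = \frac{1}{2}[(S/N)^2 + \sqrt{(S/N)^4 + 4(S/N)^2\sigma^2}]$. A minimal Python sketch of the calculation follows; the per-pixel noise, zero point and aperture correction are placeholder values for illustration, not our measured ones:
\begin{verbatim}
import math

def limiting_flux(sigma_ap, snr=5.0):
    # invert snr = F / sqrt(F + sigma^2): F^2 - snr^2 F - snr^2 sigma^2 = 0
    s2 = snr * snr
    return 0.5 * (s2 + math.sqrt(s2 * s2 + 4.0 * s2 * sigma_ap ** 2))

npix = math.pi * 4.0 ** 2            # pixels in a 4-pixel-radius aperture
sigma_pix = 12.0                     # placeholder per-pixel noise (e-)
sigma_ap = sigma_pix * math.sqrt(npix)

zeropoint, ap_corr = 26.0, -0.3      # placeholder calibration values
m_lim = zeropoint - 2.5 * math.log10(limiting_flux(sigma_ap)) + ap_corr
print(round(m_lim, 2))               # 5-sigma limiting magnitude
\end{verbatim}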
Another method involved measuring $\sigma_{ap}$ directly. A 6$\times$6 grid of 4 pixel radius apertures was used to measure the flux of an area of blank sky adjacent to the SN location. For each aperture the sky background was measured in a concentric annulus and subtracted from the total counts in the aperture. $\sigma_{ap}$ was then calculated as the standard deviation of the remaining counts in the 36 apertures. This method is preferable to using the total counts in each aperture to calculate the standard deviation since total counts can be significantly affected by changes in the level of the smooth background. An aperture at one side of the grid may overlie a region of much higher background flux than an aperture at the other side, producing large count differences and hence a value of $\sigma_{ap}$ that is unrealistically high. Subtraction of the local background at each location produces aperture counts that are more readily comparable, as they are measures of the deviation from an average level. Since we wished to measure the background fluctuations and noise on a scale comparable to the size of the PSF in the images and not large-scale variation in the background it was considered best to use the background subtracted counts.
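The direct measurement of $\sigma_{ap}$ can be sketched as follows (pure {\tt numpy}; the annulus radii, grid coordinates and the stand-in sky image are assumptions for illustration):
\begin{verbatim}
import numpy as np

def sigma_aperture(image, centres, r_ap=4.0, r_in=6.0, r_out=10.0):
    # std of locally-sky-subtracted counts in apertures at 'centres'
    yy, xx = np.indices(image.shape)
    counts = []
    for x0, y0 in centres:
        r = np.hypot(xx - x0, yy - y0)
        aperture = image[r <= r_ap]
        sky = np.median(image[(r > r_in) & (r <= r_out)])
        counts.append(aperture.sum() - sky * aperture.size)
    return np.std(counts)

# 6x6 grid of apertures over blank sky adjacent to the SN site
grid = [(x, y) for x in range(20, 80, 10) for y in range(20, 80, 10)]
sky_patch = np.random.normal(100.0, 12.0, (100, 100))  # stand-in image
print(sigma_aperture(sky_patch, grid))
\end{verbatim}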
One final estimate of the 5$\sigma$ detection limits was made using the photometry of stars within a 100\arcsec$\times$ 100\arcsec\ section of each image, roughly centred on the SN position. The 5$\sigma$ detection limit was determined as the average magnitude of all the stars with a photometric error of $\sim$ 0.2 mag, since an error of 0.2 mag corresponds approximately to a signal-to-noise ratio of 5.
The detection limits determined from the above methods were averaged to produce 5$\sigma$ limiting magnitudes of $B$ = $26.0\pm0.2$ mags and $R$ = $24.9\pm0.2$ mags. A magnitude limit for the much shallower and less restrictive $V$ band image was not derived.
\section{Distance, extinction and metallicity}\label{sec:dist}
Hendry et al. (2005) used three different methods to estimate the distance to M~74: the standard candle method (SCM), brightest supergiants, and a kinematic distance estimate. From the mean of these three methods they estimate the distance to be $9.3 \pm 1.8$ Mpc. A light echo was discovered around SN 2003gd, another SN which exploded in this galaxy, and that study suggests a smaller distance of 7.2 Mpc would be more consistent with a model for the light echo flux \citep{van_dyk_echo}. A further determination from the expanding photosphere method gives a distance of $6.7 \pm 4.5$ Mpc \citep{vinko_epm}. However, given the large uncertainty of the latter result and the fact that the light echo method does not produce a definite independent estimate, we adopt the distance of $9.3 \pm 1.8$ Mpc, noting that if the true distance is lower the luminosity limit of the progenitor would be closer to the lower bound of the uncertainty we derive.
The total extinction towards SN~2002ap has been shown by all previous studies to be low \citep{smartt_02ap, vinko_epm, foley_phot02ap, mazzali_02ap}, and we adopt the value of \mbox{$E(B\!-\!V)$} = 0.09 $\pm$ 0.01 measured from a high resolution spectrum of the SN by Takada-Hidai et al. (2002). Assuming the reddening law of Cardelli et al. (1989), with $R_V$ = 3.1, gives $A_B$ = 0.37 $\pm$ 0.04 and $A_R$ = 0.21 $\pm$ 0.02.
The metallicity gradient of M74 has been studied from the nebular emission lines of H\,{\sc ii} regions by Van Zee et al. (1998), and we have estimated the oxygen abundance from the latest emission line strengths combined with the latest empirical calibration of Bresolin et al. (2004). The abundance gradient in M74 suggests that at the galactocentric distance of SN~2002ap \citep{smartt_02ap} the metallicity is 12 + log (O/H) = 8.4 $\pm$ 0.1. This latest calibration tends to give lower abundances for significantly metal-rich regions than have previously been derived, but does not significantly change the value originally derived in Smartt et al. (2002). Another recent study of the metallicity gradient in M74 is that of Pilyugin et al. (2004), which would suggest a metallicity of $8.3\pm0.1$ dex at the position of SN~2002ap. Hence it appears that the environment of SN~2002ap is mildly metal deficient, somewhat similar to the massive stars in the Large Magellanic Cloud, which have an oxygen abundance of 8.35 $\pm$ 0.1 dex \citep{hunter_lmc}. However, we note that Modjaz et al. (2007) determine a slightly higher value of 8.6 dex using a different calibration of the oxygen line fluxes.
\section{The progenitor of SN~2002ap}
From the detection limits of Section~\ref{sec:limits} and the values of distance and extinction from Section~\ref{sec:dist} we derive 5$\sigma$ absolute magnitude limits for the progenitor of SN~2002ap of \mbox{$M_B$ $\geq$ -4.2 $\pm$0.5} and \mbox{$M_R$ $\geq$ -5.1 $\pm$0.5}. The uncertainties are the errors in the sensitivity limits ($\pm$0.2), distance modulus ($\pm$0.43) and extinction ($A_B \pm$0.04; $A_R \pm$0.02) combined in quadrature. Although no progenitor star is detected, these limits constitute the tightest constraints yet placed on the absolute magnitude of a Type Ic SN progenitor, and as such can be used to deduce the likely properties of the star. The properties of greatest interest are the type of star that exploded and its mass, and several methods were employed to estimate these from the data.
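These limits follow directly from the apparent limits of Section~\ref{sec:limits}, the distance modulus, and the extinctions of Section~\ref{sec:dist}; the arithmetic can be checked with a few lines of Python:
\begin{verbatim}
import math

quad = lambda *e: math.sqrt(sum(x * x for x in e))

mu = 5.0 * math.log10(9.3e6) - 5.0      # distance modulus, ~29.84
A_B, A_R = 0.37, 0.21                   # extinctions (Section 4)

M_B = 26.0 - mu - A_B                   # ~ -4.2
M_R = 24.9 - mu - A_R                   # ~ -5.1
err_B = quad(0.2, 0.43, 0.04)           # ~0.5
err_R = quad(0.2, 0.43, 0.02)           # ~0.5
print(M_B, M_R, err_B, err_R)
\end{verbatim}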
\subsection{Single star evolution models and luminosity limits}\label{sec:single_stars}
Since we had no knowledge of the object's colour, we calculated 5$\sigma$ luminosity limits from $M_B$ and $M_R$ for a range of supergiant spectral types using the colours and bolometric corrections of Drilling \& Landolt (2000). This method is the same as that used by, for example, Maund \& Smartt (2005). The luminosity limits were plotted on Hertzsprung-Russell (H-R) diagrams (Figure~\ref{fig:H-R_diag}), along with stellar evolutionary tracks for single stars of initial mass between 7 and 120$M_\odot$. Four sets of stellar models were used, all of approximately LMC metallicity (metal mass fraction Z=0.008); two sets from the Geneva group (Schaerer et al. 1993; Meynet et al. 1994) and two sets created using the Cambridge STARS\footnote{http://www.ast.cam.ac.uk/$\sim$stars} code (Eldridge et al. 2006). Of the two sets of models from each code, one incorporated standard mass-loss rates and the other arbitrarily doubled mass-loss rates during the pre-W-R and WNL phases of evolution. Models from both groups were used in order to gauge any model dependence in our interpretation. Any supergiant star belonging to the solid shaded regions above the luminosity limits in these H-R diagrams would have been detected in one or more of the CFHT images, and can therefore be ruled out as a possible progenitor. In this way all evolved hydrogen-rich stars with initial masses greater than 8$M_\odot$ can be ruled out as possible progenitors. Such a conclusion is entirely consistent with the spectral characteristics (lack of H-lines) of SN~2002ap.
At effective temperatures ($T_{\rm eff}$) higher than those of the hottest O-stars are the evolved stages of stars with initial masses greater than $\sim$25-30$M_\odot$. At this late stage of their evolution such stars have lost all or most of their H-rich envelope due to mass loss processes, such as a strong stellar wind, explosive outbursts or interaction with a binary companion. The resulting objects (Wolf-Rayet (W-R) stars) are the exposed helium cores of the original massive stars. W-R stars are divided into two main subtypes: WN stars, in which the emission spectra are dominated by lines of nitrogen and helium, and WC stars, where carbon and oxygen lines dominate. The most oxygen-rich WC stars are often further classified as WO. These subtypes are best explained as an evolutionary sequence of the form $WN \rightarrow WC \rightarrow WO$ (Conti 1982). The enhanced helium and nitrogen abundances observed in the spectra of WN stars correspond well to the equilibrium products of H-burning via the CNO cycle, which appear at the stellar surface as a result of mass loss. Subsequent He-burning and further mass loss result in the enhancement of carbon and oxygen at the stellar surface along with the depletion of helium and nitrogen. The star has become a WC star. Still more advanced stages of helium burning result in an overabundance of oxygen at the expense of carbon, producing a WO star.
SN theory predicts that the progenitors of Type Ib/c SNe are W-R stars, since Type Ib spectra show no signs of H, and Type Ic spectra no signs of H or He, which is most easily explained by the progenitor's lack of a H-rich (or He-rich) envelope. A theoretical mass-luminosity relationship exists for W-R stars \citep{maeder_83, smith_maeder_89, crowther_07}. Crowther (2007) states this relationship for a H-free W-R star as
\begin{center}
\begin{equation}
\label{equ:mass-lum}
\log \frac{L}{L_\odot} = 3.032 + 2.695 \log \frac{M}{M_\odot} - 0.461 \left( \log \frac{M}{M_\odot} \right)^2.
\end{equation}
\end{center}
Using this formula one can calculate the mass of a W-R star if its luminosity is known. Note this calculation yields the mass of the W-R star, not the initial mass of the star from which it evolved. In this case we can calculate an upper limit for the W-R star's mass from the sensitivity limits of the CFHT observations. However, to convert our $M_B$ and $M_R$ limits to luminosity limits requires appropriate bolometric and colour corrections. Since it is not possible to constrain the colours of the progenitor star from magnitude limits, we instead consider the broadband colours of W-R stars in the LMC (Massey 2002), the vast majority of which have colours in the ranges $-0.4\leq B-V\leq0.6$ and $-0.1\leq V-R\leq0.4$. Bolometric corrections range from -2.7 to -6.0 \citep{crowther_07}, and Smith \& Maeder (1989) find a BC of $-4.5\pm0.2$ as appropriate for a wide range of Galactic W-R stars. The bolometric correction is, however, related to the narrowband $v$ magnitude of the star, as photometry of W-R stars is usually performed with narrowband filters in order to sample the continuum flux between strong emission features. Differences between narrow (denoted by lowercase letters) and broadband photometry of W-R stars can be considerable due to the extra flux contributed by emission lines in the broadband observations. For example, $b-B$ = 0.55 and $v-V$ = 0.75 have been found for the strongest lined WC stars (Smartt et al. 2002). Here we use our broadband sensitivity limits and note that the actual bolometric magnitude limits will be equal to or fainter than what is calculated. For example, if $b-B$ = 0.55 is applicable for the progenitor, then the limiting $M_{bol}$ should be fainter by 0.55 mag.
Approximate values of $M_{bol}$ were calculated from the magnitude limits $M_B$ and $M_R$ using a BC of $-4.5$ and the median of the colours of the LMC W-Rs appropriate in each case:
\begin{center}
$M_B \rightarrow M_{bol} > -8.80\pm0.60$\\
$M_R \rightarrow M_{bol} > -9.45\pm0.56$
\end{center}
The errors are a combination of the estimated uncertainties in the absolute magnitudes ($\pm0.5$), bolometric correction ($\pm$0.2), and colours ($B-V \pm 0.26$; $V-R \pm 0.17$). The luminosity limit derived from the most constraining value of $M_{bol}$ is $\log (L/L_{\odot}) < 5.40 \pm 0.24$. Using Equation~\ref{equ:mass-lum}, this limit corresponds to a final W-R progenitor mass of $<12^{+5}_{-3}M_{\odot}$, which is entirely consistent with the $\sim$ 5$M_{\odot}$ found by Mazzali et al. (2002) from modelling of the SN explosion.
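Inverting Equation~\ref{equ:mass-lum} amounts to solving a quadratic in $\log(M/M_\odot)$ and taking the lower (physical) root. A short sketch reproducing the quoted mass limit, using $M_{bol,\odot} = 4.74$:
\begin{verbatim}
import math

def wr_mass(logL):
    # invert logL = 3.032 + 2.695 x - 0.461 x**2 for x = log(M/Msun)
    a, b, c = 0.461, -2.695, logL - 3.032
    x = (-b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return 10.0 ** x

logL_lim = 0.4 * (4.74 + 8.80)          # ~5.4 from the M_bol(B) limit
for logL in (logL_lim - 0.24, logL_lim, logL_lim + 0.24):
    print(round(wr_mass(logL), 1))      # ~9, ~12, ~17 Msun
\end{verbatim}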
It may be possible to infer a likely range of initial masses through comparison with stellar models as we have done for supergiant stars. For completeness our W-R luminosity limit is plotted in Figure~\ref{fig:H-R_diag} as a straight line across each plot at high temperatures. This is a somewhat crude approximation but since the error is conservative, the upper limit allows for equally conservative constraints to be placed on the star's properties. Considering the models with standard mass loss rates first, the Geneva and the STARS codes predict W-R stars to form only from stars initially more massive than about 34$M_{\odot}$. These independent stellar evolution codes appear to be in good agreement, although there are noticeable differences during W-R evolution phases. These differences are manifested in the initial mass versus final mass functions produced by each code, the final luminosity of each model W-R star being representative of final mass. The STARS models produce higher final W-R masses with increasing initial mass. The Geneva tracks show a similar behaviour, but reach a peak final mass at an initial mass of 60$M_{\odot}$. The Geneva models of initial mass higher than 60$M_{\odot}$ produce W-R stars of substantially lower final mass than similar STARS models. These differences are due to the lower mass loss rates prescribed during W-R evolution in the STARS code, which uses the W-R mass loss rates of Nugis \& Lamers (2000) and includes scaling of W-R mass loss rates with initial metallicity (Eldridge \& Vink 2006). Despite the discrepancies the overall conclusions drawn from both sets of standard mass loss models are the same. Ultimately at no point during W-R evolution do any of the models fall below the detection limits. This implies that no singly evolving star at metallicity Z=0.008 could have been the progenitor of SN~2002ap.
\begin{figure*}
\begin{center}
\epsfig{file = H-R_diag2.eps, width = 160mm}
\caption{H-R diagrams showing the 5$\sigma$ luminosity limits for the progenitor of SN~2002ap and stellar evolutionary tracks for singly evolving stars with initial masses of 7-120$M_{\odot}$. Top left and top right show the Geneva and Cambridge STARS stellar models respectively, both for metallicity Z=0.008 (approximately LMC) and incorporating standard mass loss rates. Bottom left and bottom right show the Geneva and STARS models at the same metallicity but with double the standard mass loss rates during the pre-Wolf-Rayet and WNL phases of evolution. Any star within the shaded area on each plot would have been detectable in one or both of the $BR$ pre-explosion observations. Only those models falling outside of this shaded area can be considered as viable progenitors for SN~2002ap.}
\label{fig:H-R_diag}
\end{center}
\end{figure*}
If single-star evolution is to be used to explain the progenitor of SN~2002ap, then we must invoke enhanced mass loss in the models. As mentioned previously, mass loss can result from several phenomena, such as stellar winds, eruptive outbursts and binary interaction. The latter we consider in Section~\ref{sec:binary}; for now we consider only singly evolving stars. Eruptive events such as LBV outbursts are poorly understood but may account for substantial amounts of mass lost by very massive stars during their lifetimes (Smith \& Owocki 2006). Wind-driven mass loss is slightly better understood. Three main factors affect this process: luminosity, surface temperature and composition. Higher luminosities drive stronger stellar winds, and increased metallicity also leads to higher mass loss rates (e.g. Abbott 1982); the minimum initial mass for a W-R star therefore decreases with increasing metallicity. The Geneva group provide Z=0.008 models with double the standard mass loss rates during the pre-W-R and WNL phases of evolution (i.e. while each model retains some hydrogen), and we produced similar STARS models for comparison. The increased mass loss rates may be considered as a correction for an underestimation of the rates at Z=0.008, or of the initial metallicity of the progenitor of SN~2002ap. A factor of two increase in mass loss rates is believed to represent a realistic upper limit, given the intrinsic uncertainty of the rates and the uncertainty in our measurement of metallicity. Comparing the luminosity limit to these tracks provides robust limits on what types of single stars, evolving in the canonical fashion, might reasonably have been the SN progenitor. The Geneva tracks with $\times$2 standard rates suggest the progenitor would have to have been initially more massive than 40$M_{\odot}$. Taking into account the upper error on the luminosity limit, the STARS $\times$2 models suggest that the progenitor could have evolved from a single star with initial mass in the range 30-60$M_{\odot}$, or a star of 120$M_{\odot}$. The differences between the Geneva $\times$2 and STARS $\times$2 models are again due to the different W-R mass loss rates employed by each code.
It is interesting to compare our analysis here with the work of Maund, Smartt \& Schweizer (2005). In an attempt to place limits on the progenitor of another Type Ic supernova, SN~2004gt, they calculated colour and bolometric corrections for W-R stars by performing synthetic photometry on model W-R spectra from Grafener et al. (2002) (see Maund \& Smartt 2005 for details). In this manner they derived a typical bolometric correction of \mbox{$\sim$ -2.7}. Applying a similar BC to the limiting magnitudes presented in this paper would push our W-R luminosity limit down by 0.7 dex to a median value of $\log(L/L_{\odot})$ = 4.7. The upper W-R mass limit inferred in this case would be around 5$M_{\odot}$, much more restrictive than the 9$M_{\odot}$ limit found by Maund, Smartt \& Schweizer (2005) for the progenitor of SN~2004gt. This revised luminosity limit would rule out all single star progenitors, even from those models with double the standard mass loss rates. Our choice of a BC of -4.5 has been explained above and although the limits produced using this value are conservative in comparison, we believe they offer the most realistic interpretation given the model uncertainties.
In Figure \ref{fig:LMC_WR} we compare our $M_B$ and $M_R$ limits with broadband photometry of W-R stars in the LMC (Massey 2002), a comparison that is independent of BC. Considering the more restrictive $M_B$ limit, around 79 per cent of WNL stars can be ruled out as possible progenitors, along with 23 per cent of WNE stars and 75 per cent of WC and WO stars. Given that SN~2002ap was a Type Ic SN, and therefore deficient in both hydrogen and helium, one would expect the W-R progenitor to have been an evolved WC or WO star. Assuming this to be true, we can say that the progenitor was amongst the faintest $\sim$ 25 per cent of WC/WO stars known to exist. However, our sensitivity limits cannot directly rule out around 21 per cent of the WNL stars and 77 per cent of the WNE stars as possible progenitors. WNL stars, which mark the transition of normal stars to W-R stars, have not yet lost all of their hydrogen envelope and can almost certainly be ruled out as progenitors based on the SN classification. Since such a large fraction of observed WNE stars fall below our detection limits, and with uncertainty as to how much helium must be present to result in a Type Ib instead of a Type Ic SN, we cannot definitely rule out a WNE star progenitor.
\begin{figure*}
\begin{center}
\epsfig{file = LMC_plots2.eps, width = 140mm}
\caption{Broadband photometry of W-R stars in the LMC (Massey 2002) compared to the absolute $B$ (bottom) and $R$ (top) magnitude limits. Note that the evolutionary sequence for W-R stars is WNL $\rightarrow$ WNE $\rightarrow$ WC $\rightarrow$ WO.}
\label{fig:LMC_WR}
\end{center}
\end{figure*}
\subsection{Combining observational constraints}\label{sec:indirect_constraints}
Besides trying to constrain the properties of the progenitor of SN~2002ap through direct observation, there are various indirect methods which can be used. One such approach is to observe the SN itself. Through modelling the explosions of carbon-oxygen (C+O) stars, Mazzali et al. (2002) created best fits to the lightcurve and spectra of this SN, and thereby estimated that around 2.5$M_{\odot}$ of material was ejected during the explosion. Including the mass of the compact stellar remnant suggests a C+O progenitor star mass of $\sim$5$M_{\odot}$, which would form within a He core of $\sim$7$M_{\odot}$. To produce a helium core of this mass requires a star initially more massive than $\sim$15-20$M_{\odot}$. (This initial mass estimate is lower than the 20-25$M_{\odot}$ suggested by Mazzali et al. (2002) due to the inclusion of convective overshooting in the STARS stellar models.)
X-ray and radio observations of the SN can provide further constraints. The interaction of the fast moving supernova ejecta with the circumstellar material (CSM) leads to emission at X-ray and radio frequencies, and interaction with a higher density of CSM results in stronger emission. However, the radio and X-ray fluxes detected for SN~2002ap soon after explosion were low (Berger et al. 2002; Sutaria et al. 2003; Soria et al. 2004; Bj{\"o}rnsson \& Fransson 2004), suggesting a low density for the material immediately surrounding the star. If we assume that the final mass of the progenitor was around 5$M_{\odot}$, and that it was initially more massive than 15-20$M_{\odot}$, then the star must have lost 10-15$M_{\odot}$ during its lifetime. This material must be sufficiently dispersed prior to the SN explosion so as not to result in strong radio and X-ray emission. Only two epochs of X-ray observations of the SN are available, the first taken less than five days post-explosion and the second about one year later (Soria et al. 2004). Radio observations were taken on a total of 16 epochs, 15 of which spanned the first ten weeks after discovery of the SN (Berger et al. 2002; Sutaria et al. 2003) and one final observation approximately 1.7 years post-explosion in which the SN was not detected (Soderberg et al. 2006). It is reasonable to assume that no interaction occurred between the SN ejecta and any dense CSM during the 18-month gap in the radio monitoring, since such interaction would still have been detectable in the late-time observation (see Fig. 2 of Montes et al. (2000): the timescale for variation in the radio light curve of SN 1979C is of the order of 4 years). We can therefore say that for at least the first 1.7 years after the explosion of SN~2002ap the ejecta encountered no region of dense CSM. By multiplying this time period by the velocity of the outermost layers of the SN ejecta we can calculate a lower limit for the radius of such CSM. Mazzali et al. (2002) derive a photospheric velocity of $\sim$30,000 \mbox{$\rm{km}\,s^{-1}$} from a spectrum taken just 2 days post-explosion (Meikle et al. 2002). However, Mazzali et al. (2002) also suggest that the large degree of line blending in this spectrum requires sufficient material at velocities $>$ 30,000 \mbox{$\rm{km}\,s^{-1}$}, and subsequently employ an ejecta velocity distribution with a cut-off at 65,000 \mbox{$\rm{km}\,s^{-1}$} in their models. Furthermore, through modelling of X-ray and radio observations, Bj{\"o}rnsson \& Fransson (2004) derive an ejecta velocity of $\sim$70,000 \mbox{$\rm{km}\,s^{-1}$}. They also point out that the photospheric velocity only provides a lower limit to the velocity of line-emitting regions (see Section 4 of Bj{\"o}rnsson \& Fransson (2004)). From this range of velocities of 30,000 to 70,000 \mbox{$\rm{km}\,s^{-1}$}, we calculate the minimum radius for a region of dense CSM to be in the range $\sim$ 1.6 to 3.8 $\times$ $10^{12}\,\rm{km}$ ($\sim$ 11,000 to 25,000 astronomical units (au)).
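The quoted minimum CSM radius is simply the product of the ejecta velocity and the 1.7-yr radio baseline, as the following sketch verifies:
\begin{verbatim}
YEAR_S = 3.156e7                 # seconds per year
AU_KM = 1.496e8                  # km per astronomical unit

t = 1.7 * YEAR_S                 # last radio epoch after explosion
for v in (3.0e4, 7.0e4):         # ejecta velocities (km/s)
    d = v * t                    # minimum radius of any dense CSM
    print(f"{d:.1e} km ~ {d / AU_KM:.0f} au")
# ~1.6e12 km (~11,000 au) to ~3.8e12 km (~25,000 au)
\end{verbatim}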
\begin{figure*}
\begin{center}
\epsfig{file = HR_cluster.eps, width = 160mm}
\caption{H-R diagram of stars close to the site of SN~2002ap and STARS stellar evolutionary models of massive stars at metallicity Z=0.008. Since star 3 is {\em not} well fitted by a single SED, this source is better reproduced as the convolution of a double star (3a and 3b). See text for details.}
\label{fig:cluster_HR}
\end{center}
\end{figure*}
One might also attempt to constrain the properties of a SN progenitor by investigating the properties of other stars close to its position. If such stars are found to be coeval with each other and are close enough to the SN site (within a few parsecs) to be considered coeval with the progenitor, then the most evolved of these stars are likely to have had initial masses similar to the SN progenitor. However, in this case SN~2002ap and its closest stellar neighbours are separated from each other by distances of around 50 to 100 pc, as can be seen from the HST observations (see Figure~\ref{fig:SNcluster}). We cannot reliably assume that these stars are coeval with each other or with the SN progenitor, and therefore cannot use their properties to place any strong constraints on the progenitor star.
Nevertheless, photometry was performed on these stars as detailed in Section~\ref{sec:post} and the results are presented in Table~\ref{tab:dolphot}. Absolute $BVRI$ magnitudes of each star were plotted against wavelength and compared with spectral energy distributions (SEDs) of supergiant stars of spectral type O9 to M5 (Drilling \& Landolt 2000) to determine an approximate $T_{\rm eff}$ and appropriate bolometric correction for each star. The details of how objects 2 and 3 are spatially resolved, at least in the F435W image, have already been discussed fully in Section~\ref{sec:post}. However, while fitting SEDs to each of the objects, it was found that no single SED could be fitted to object 3. This may have been due to increased blending of objects 2 and 3 in observations at longer wavelengths, which in turn would have led to unreliable photometry, or it could indicate that object 3 is in fact the convolution of several sources. Assuming the latter scenario and the simplest solution therein, i.e. that object 3 is a blend of two stars, we created a program to iterate over all possible combinations of pairs of supergiant SEDs to find the best fit to the data. In this way we tentatively estimated the spectral types and magnitudes of stars 3a and 3b. All the stars were plotted on an H-R diagram and compared to stellar models to estimate their masses, which range from 15 to 40$M_{\odot}$ (see Figure~\ref{fig:cluster_HR}). (Note that this range of masses is not affected if we ignore 3a and 3b.) The lifetime of a 15$M_{\odot}$ star is of the order of 15 Myr, and for a 40$M_{\odot}$ star around 5 Myr. If all the stars formed at about the same time one would expect to find that the most evolved stars are also the most massive. However, the two most evolved stars in Figure~\ref{fig:cluster_HR} appear to be at the lower end of this mass range, at around 15-20$M_{\odot}$. This could suggest that the higher mass stars are unresolved multiple objects rather than single stars, but it is more likely a confirmation that these stars are indeed {\em not} coeval.
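The pair-fitting procedure for object 3 can be sketched as a brute-force $\chi^2$ minimisation over pairs of scaled template SEDs; the data structures below are hypothetical stand-ins (the actual templates are the supergiant colours of Drilling \& Landolt 2000):
\begin{verbatim}
import itertools
import numpy as np

def fit_sed_pair(obs_flux, obs_err, templates, scales):
    # best-fitting pair of templates, each freely scaled in flux;
    # 'templates' maps spectral type -> BVRI flux array
    best = (np.inf, None, None)
    pairs = itertools.combinations_with_replacement(templates.items(), 2)
    for (name_a, f_a), (name_b, f_b) in pairs:
        for s_a in scales:
            for s_b in scales:
                model = s_a * f_a + s_b * f_b
                chi2 = np.sum(((obs_flux - model) / obs_err) ** 2)
                if chi2 < best[0]:
                    best = (chi2, name_a, name_b)
    return best

scales = np.logspace(-2.0, 2.0, 41)   # trial flux scalings
\end{verbatim}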
In Section~\ref{sec:single_stars} we found that any single star progenitor would have to be initially more massive than 30-40$M_{\odot}$, and that mass loss rates would have to be double the standard values for the measured metallicity. The final masses of such stars given by the two stellar evolution codes are between 10 and 12$M_{\odot}$. This is significantly larger than the 5$M_{\odot}$ estimate of Mazzali et al. (2002), but consistent with the mass we derived in Section~\ref{sec:single_stars} using the W-R mass-luminosity relationship (unsurprising, since both estimates are based on our photometric limits). Enhanced mass-loss rates due to continuum-driven outbursts (Smith \& Owocki 2006) might play an important role in the evolution of any progenitor star that is initially more massive than 40$M_{\odot}$ and could help to produce lower mass W-R stars.
Another possibility is a progenitor produced by the homogeneous evolution (Yoon \& Langer 2005) of a 12-20$M_{\odot}$ star. If the rotation of a main sequence star is fast enough, the rotationally induced mixing occurs on a much shorter timescale than nuclear burning. Fresh hydrogen is continually mixed into the core until the entire hydrogen content of the star is converted into helium. In this way the main sequence star transitions smoothly to become a helium star of the same mass. While Yoon \& Langer (2005) do not consider LMC metallicity objects, if we extrapolate their results it is reasonable to expect that the most rapidly rotating stars with initial masses in the range 12-20 $M_{\odot}$ will lead to a Type Ic progenitor in the mass range implied by our luminosity limit and the results of Mazzali et al. (2002). Detailed models of rapidly rotating massive stars would be required to test this hypothesis.
\subsection{Binary evolution}\label{sec:binary}
Binary evolution provides a possible alternative to the single star models discussed previously. Such a scenario is suggested by Mazzali et al. (2002) for the progenitor of this SN, where a star of initially 15-20$M_{\odot}$ loses much of its mass through interaction with a binary companion, to become a C+O star of around 5$M_{\odot}$.
Using our luminosity limits, it is possible to constrain the binary systems that might have produced the progenitor of SN~2002ap. We can place limits on the luminosity and the mass of the binary companion star in the same way as we did for the progenitor. In the case of the companion, however, we can use both the pre- and the post-explosion observations for this purpose. A companion star was not seen in either set of images, but the post-explosion observations provided slightly more restrictive luminosity limits, and these are plotted on an H-R diagram in Figure~\ref{fig:binary}. Whereas with the SN progenitor we could assume that the star must have been in an evolved state just prior to exploding, we cannot make the same assumption about any possible binary companion. It is conceivable that such a star could be at any stage in its evolution. Comparing our luminosity limits with stellar models we can at least set discrete mass limits for the companion star at various evolutionary phases.
Considering the case where the binary companion is initially less massive than the SN progenitor, and therefore less evolved at the time of the explosion, we see from Figure~\ref{fig:binary} that it must have been a main sequence star of $\lesssim 20M_{\odot}$. Stars more massive than this would have been visible at all stages of their evolution. Note that this constraint is on the mass and evolutionary phase of the companion at the time of the SN; that is, after it has accreted any material from the progenitor. In the case of a star initially more massive than the progenitor, it would have exploded as a SN prior to SN~2002ap, leaving a neutron star or black hole as the binary companion\footnote{Nomoto et al. (1994) suggest that an initially more massive companion star may also become a white dwarf in some cases.}. This would of course require the binary to remain bound after the first SN. Brandt \& Podsiadlowski (1995) predict that just 27 per cent of binaries consisting of two high mass stars will remain bound after the initial SN explosion, and of these bound systems 26 per cent will immediately experience strong dynamical mass transfer leading to their merger. Therefore around 20 per cent ($0.27 \times 0.74$) of high mass binaries might be expected to result in a bound system with a neutron star or black hole component, with only a fraction configured in such a way as to result in the binary interaction required to produce the progenitor of SN~2002ap. Such systems are therefore much rarer than those with main sequence companions.
\begin{figure}
\begin{center}
\epsfig{file = secondary.eps, width = 80mm}
\caption{H-R diagram showing the 5$\sigma$ luminosity limits for a binary companion to the progenitor of SN~2002ap. These limits were derived from the post-explosion HST F435W and F814W observations. Also plotted are STARS stellar evolutionary tracks for singly evolving stars with initial masses of 7-120$M_{\odot}$, at metallicity Z=0.008 and with double the standard mass loss rates. Any star within the shaded area would have been detectable in one or both of the aforementioned HST observations. If the progenitor star did have a binary companion it must have been either a main sequence star of $\lesssim 20M_{\odot}$ or a dormant neutron star/black hole.}
\label{fig:binary}
\end{center}
\end{figure}
Using the conclusions of Mazzali et al. (2002) as best estimates of the initial and final masses of the progenitor, we can say that it lost 10-15$M_{\odot}$ of material before exploding. A substantial fraction of this material must have been ejected from the binary system rather than accreting onto a main sequence companion, otherwise the companion would have gained enough mass to become detectable in the observations (e.g. Maund et al. 2004). We can therefore conclude that the binary mass transfer process must have been non-conservative in cases where the companion was a main sequence star. Furthermore, this non-conservative mass transfer must have occurred early enough in the progenitor star's lifetime to allow the ejected material sufficient time to disperse to a large radial distance. This condition must be met to remain consistent with the X-ray and radio observations of the SN discussed in Section~\ref{sec:indirect_constraints}. For a black hole companion there is no such requirement that mass transfer must be non-conservative, since a more massive black hole is no more visible than a lower mass counterpart. The rate at which a black hole can accrete material is however limited (Narayan \& Quataert 2005) and in many cases the process will therefore be non-conservative.
Kippenhahn \& Weigert (1967) classified binary systems which undergo mass transfer into three types, Cases A, B and C. We shall first discuss each case assuming a main sequence companion of $\leq 20M_{\odot}$. Case A mass transfer is where the donor star begins to transfer mass while still on the main sequence. This occurs in very close binary systems with periods of the order of a day. However, the expansion of the star on the main sequence, which causes it to fill its Roche lobe and dictates the rate of mass transfer, occurs on the nuclear timescale. The companion star has no problem adjusting to accept the material at such a low rate, with the result that Case A mass transfer is in general a conservative process. Our previous argument that conservative mass transfer would result in a main sequence companion becoming visible in our observations therefore suggests that Case A mass transfer is not a viable option. Furthermore, while the donor star is the more massive, conservative mass transfer will result in the shrinking of the binary orbit. Since Case A systems are already initially very close, this will often lead to the merger of the two objects. Where the merger is of two main sequence stars the result will be a single star of mass roughly equal to the sum of its components. This object will then continue its evolution as a single star, for which we have already discussed progenitor scenarios in Section~\ref{sec:single_stars}.
Case C mass transfer is where the donor begins to transfer mass at the end of core helium burning when it expands rapidly, ascending the giant branch for the second time. Case C therefore occurs in wider binaries with orbital periods of around 100 days or more, ensuring that the donor must expand to become a supergiant before filling its Roche lobe. Mass loss will occur at the thermal or possibly even the dynamical timescale of the donor star as it rapidly expands. The rate at which a companion can accrete this material will be dictated by its thermal timescale. A main sequence companion of lower mass will have a long thermal timescale compared to the donor star, and therefore will only accrete material at a fraction of the rate at which it is being lost by the donor. Consequently much of the donor's mass will be ejected from the binary system so that, in the described model at least, Case C mass transfer will be non-conservative. The mass lost from the system will still be relatively close by when the SN explodes, since Case C mass transfer occurs late in the evolution of the donor star. Exactly what distance this material has dispersed to depends on the velocity $v$ with which it is ejected from the binary and the elapsed time $t$ between the end of mass transfer and the explosion of the SN. This distance is given by $D \approx 3.15 \times 10^{9}\,(v/100\,{\rm km\,s^{-1}})\,(t/1\,{\rm yr})\,{\rm km}$ (or $D \approx 21\,(v/100\,{\rm km\,s^{-1}})\,(t/1\,{\rm yr})\,{\rm au}$). The velocity $v$ of the ejected material will be similar to the orbital velocity of the accreting star, of the order of 10-100 ${\rm km}\,{\rm s}^{-1}$. In Section~\ref{sec:indirect_constraints} we used the radio observations of the SN to calculate the minimum radial distance out to which no significant density of CSM was encountered by the SN ejecta: $D_{\rm{min}}$ = 11,000 to 25,000 $\rm{au}$. Assuming $v$ = 100 $\rm{km}\,s^{-1}$ requires that mass transfer ended at least $t_{\rm{min}}$ = 500 to 1200 yr prior to the SN\footnote{The model assumed here is rather simple. In reality the strong stellar wind of the newly formed W-R star would collide with the previously ejected material causing it to accelerate (Eldridge et al. 2006 and references therein). This would have the effect of {\it reducing} $t_{\rm{min}}$ above.}. Since this time period is rather short, we cannot rule out all Case C mass transfer scenarios. Binaries with orbital radii approaching the lower limit of what constitutes a Case C system might begin to interact early enough so as to satisfy the above requirement. The low extinction measured towards the SN (see Section~\ref{sec:dist}) probably suggests that the 10 to 15$M_{\odot}$ of material lost prior to explosion had dispersed to much larger distances than the lower limit we estimate from the radio and X-ray data. This would tend to rule out all Case C binaries. One can still, however, invoke a non-spherical geometry (e.g. the rings seen around SN~1987A) that would allow this material to be much closer without causing extinction of the SN, provided the line-of-sight to the observer is not obscured.
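The quoted $t_{\rm min}$ values follow by inverting the dispersal relation above for $v = 100\,{\rm km\,s^{-1}}$ and the $D_{\rm min}$ range of Section~\ref{sec:indirect_constraints}:
\begin{verbatim}
YEAR_S = 3.156e7                       # seconds per year
AU_KM = 1.496e8                        # km per astronomical unit
v = 100.0                              # ejection velocity (km/s)

for d_au in (11000.0, 25000.0):        # D_min from the radio limits
    t_yr = d_au * AU_KM / v / YEAR_S
    print(round(t_yr))                 # ~520 and ~1180 yr
\end{verbatim}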
Case B mass transfer is where the donor star begins to transfer mass when it first becomes a giant at the end of core hydrogen burning, but before the ignition of central helium burning. Orbital periods of systems undergoing this type of interaction are intermediate to those of Cases A and C. As with Case C, the mass loss will occur on the relatively short thermal or dynamical timescales, making it difficult for a main sequence companion to accrete all this material. Case B mass transfer will happen early enough in the evolution of the donor star to allow sufficient time for the ejected material to disperse prior to the SN explosion. This earlier stripping of its hydrogen envelope will also afford the donor star more time to shed its helium envelope through strong stellar winds and become the low mass WC star predicted to be the progenitor of this SN. Based on this argument we suggest that the mass transfer process in our assumed binary system is more likely to have been Case B than Case C. We conclude that in all possible binary systems for the progenitor of SN~2002ap where the companion star was initially less massive than the progenitor, the companion must have been initially $\leq 20M_{\odot}$ and non-conservative Case B mass transfer is most likely to have occurred.
\begin{figure*}
\begin{center}
\epsfig{file = binary_20_14_2.eps, width = 160mm}
\caption{Binary model of a 20$M_{\odot}$ primary and a 14$M_{\odot}$ secondary created using the Cambridge STARS stellar evolution code and the binary-evolution algorithm of Hurley et al. (2002b). The luminosity limits plotted are derived from the pre-explosion observations in the W-R domain, and from post-explosion observations for O to M-type supergiants. The initial separation is such that the primary fills its Roche lobe as it expands to become a giant after core hydrogen burning, and so Case B mass transfer ensues. However, mass loss through Roche lobe overflow does not remove mass quickly enough to halt the expansion of the primary star, with the result that the primary star expands to engulf its companion. The system has entered a common envelope evolution (CEE) phase. Orbital energy is transferred to the common envelope through an unknown physical process, leading to a reduction in the binary period and separation, and to the ejection of the common envelope. The evolution of both stars is halted at the end of core carbon burning in the primary, essentially just prior to the SN explosion. The numbers displayed beside the primary and secondary tracks show initial and final masses. The primary ends its life as a 5.4$M_{\odot}$ WC star. The secondary accretes very little of the $\sim$15$M_{\odot}$ of material lost by the primary, increasing in mass by just 0.06$M_{\odot}$.}
\label{fig:binary2}
\end{center}
\end{figure*}
To test this idea, we created binary models using the STARS stellar evolution code (Eldridge, Izzard \& Tout in prep). We note that the modelling of binary stars is extremely uncertain (e.g. Hurley et al. 2002b). In all the models the initial orbital periods were such that Case B mass transfer would occur. One of the models, incorporating a 20$M_{\odot}$ primary and a 14$M_{\odot}$ secondary, is shown as an example in Figure~\ref{fig:binary2}, compared to the relevant luminosity limits. The evolution of each model continues up to the end of core carbon burning in the primary. At this point, the time until the primary goes SN is so short that both stars appear essentially as they will when it explodes. Both stars must fall below the luminosity limits at the time of the SN explosion in order to satisfy the observations. Being of lower mass, the secondary (the companion) is still on the main sequence as the primary expands and begins to transfer mass. However, in this model, and indeed in all of the Case B binary models created, mass loss through Roche lobe overflow could not remove mass quickly enough to halt the expansion of the primary star, with the result that it expanded to engulf its companion. Such a scenario is referred to as common envelope evolution (CEE). In this example CEE begins with the dense core of the primary and the main sequence secondary orbiting about the centre of mass of the system within the inflated envelope of the primary. Orbital energy is transferred to the common envelope through an unknown physical process, leading to a reduction in the orbital period and radius, and to the ejection of the common envelope. The timescale for CEE is short, and therefore very little mass is accreted by the secondary during this time. Provided the binary orbit does not shrink so much as to cause a merger\footnote{Nomoto et al. (2001) propose that some binary mergers of this sort may produce rapidly rotating C+O stars, which could explode as Type Ic hypernovae.}, the primary star, having been stripped of its H-rich envelope, continues to lose mass through strong stellar winds, eventually shedding most of its He-rich layer also. The model shown in Figure~\ref{fig:binary2} produces a 5.4$M_{\odot}$ C+O W-R star from the initially 20$M_{\odot}$ star.
In the case of an assumed neutron star or black hole companion we derive broadly the same conclusions about mass transfer processes as we do for a main sequence secondary. There are, however, some notable differences. Case A mass transfer would again most likely lead to the merger of the binary components. However, the merger of a main sequence star with a neutron star or black hole would either completely disrupt the progenitor, or possibly result in the formation of a Thorne-Zytkow object (TZO) (Thorne \& Zytkow 1977). Fryer \& Woosley (1998) suggest that the merger of a black hole or neutron star with the He core of a red giant could produce a long, complex GRB with no associated supernova. This may occur in Case B systems with initially small orbital separations. In general both Case B and Case C mass transfer processes are possible, either with or without a subsequent CEE phase, although we again suggest that Case B is more likely since it results in the earlier removal of the progenitor's hydrogen envelope and a much longer period of W-R evolution prior to explosion. We are of course unable to place mass limits on a neutron star or black hole companion using our luminosity limits.
Note that in all of the above we have employed binary interaction to remove only the hydrogen envelope of the progenitor star. The subsequent removal of the He-rich layer has occurred via strong radiatively driven winds. Nomoto et al. (2001) point out that under the right conditions a second mass transfer event may occur, which removes the He layer. This, however, requires a large degree of fine-tuning of the binary system. During the first CEE phase the binary orbit must shrink enough to allow interaction between the newly formed He-star and its companion, but not so much as to cause the two to merge. This second mass transfer is more likely to occur for lower-mass He-stars since they attain larger stellar radii (Habets 1986). Nomoto et al. (1994) use exactly such a double mass transfer to model the evolution of the progenitor of the Type Ic SN~1994I: a 2.1$M_{\odot}$ C+O star formed from a 4$M_{\odot}$ He-star. The significantly larger estimates for the final progenitor mass of SN~2002ap and the He-core mass from which it formed (see Section~\ref{sec:indirect_constraints}) make the double mass transfer scheme less likely in this case.
\section{Conclusions}
We have used a unique combination of deep, high quality CFHT pre-explosion images of the site of SN~2002ap and follow-up HST and WHT images of the SN itself to place the most restrictive limits to date on the luminosity and mass of the progenitor of a Type Ic SN. Theory predicts that the progenitors of such SNe are W-R stars which have lost all of their H and most of their He-rich envelopes. The archival observations rule out as viable progenitors all evolved hydrogen-rich stars with initial masses greater than 7-8$M_{\odot}$, the lower mass limit for stars to undergo core collapse. This is entirely consistent with the observed absence of hydrogen in the spectra of this SN, and with the consequent prediction that its progenitor was a W-R star. The magnitude limits do not allow us to distinguish between WN, WC and WO stars, as examples of all these W-R types have been observed with magnitudes that fall below our detection limits. Since the SN was deficient in helium, its progenitor is likely to have been a WC or WO star. Using the W-R mass-luminosity relationship we calculated an upper limit of $12^{+5}_{-3}M_{\odot}$ for the final mass of a W-R progenitor. Comparing the W-R luminosity limit with Geneva and STARS stellar models for single stars at metallicity Z=0.008, we found no viable single star progenitors when standard mass loss rates were considered, and only initially very massive ($>30-40M_{\odot}$) candidate progenitors when mass loss rates were doubled. The lack of significant extinction towards the SN, along with weak detections of the SN at X-ray and radio wavelengths, suggests a relatively low mass of circumstellar material surrounding the progenitor star. This would tend to rule out very high mass progenitors. We conclude that any single star progenitor must have experienced at least twice the standard mass-loss rates, been initially $>30M_{\odot}$, and exploded as a W-R star of mass 10-12$M_{\odot}$.
The most likely alternative to the single star models is a binary evolution scenario, in which the progenitor star is stripped of most of its mass by a companion, becoming a low mass, low luminosity W-R star prior to exploding as a supernova. This idea is consistent with Mazzali et al. (2002) who, based on modelling of the supernova lightcurve and spectra, suggest a binary scenario for the progenitor of SN~2002ap in which a star of initial mass 20-25$M_{\odot}$ becomes a 5$M_{\odot}$ C+O star prior to explosion. (Note that we revise this initial mass to the lower value of 15-20$M_{\odot}$ in Section~\ref{sec:indirect_constraints}.) The luminosity limits from both the pre- and post-explosion observations, along with other observational constraints, allowed us to place limits on the properties of any possible binary companion, and to infer which mass transfer processes could or could not have occurred. We conclude that, if the system was indeed an interacting binary, the companion star was either a main sequence star of $\leq20M_{\odot}$, or a black hole or neutron star formed when an initially more massive companion exploded as a supernova. In either case, mass transfer most likely began as non-conservative Case B, which led immediately to common envelope evolution (CEE).
\section*{Acknowledgments}
This work, conducted as part of the award ``Understanding the lives of massive stars from birth to supernovae'' made under the European Heads of Research Councils and European Science Foundation EURYI (European Young Investigator) Awards scheme, was supported by funds from the Participating Organisations of EURYI and the EC Sixth Framework Programme. SJS and RMC thank the Leverhulme Trust and the European Science Foundation for a Philip Leverhulme Prize and postgraduate funding. The William Herschel Telescope is operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. We thank the Service Programme and Pierre Leisy for rapid coordination of the observations. This research used the facilities of the Canadian Astronomy Data Centre operated by the National Research Council of Canada with the support of the Canadian Space Agency and the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii. Based in part on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555.
\section{Introduction}
These are working notes, originally written in September-November 2021. Adam \'O Conghaile has proposed in \cite{AOC2021} some very interesting ideas on how to leverage ideas from the sheaf-theoretic approach to contextuality \cite{abramsky2011sheaf,DBLP:conf/csl/AbramskyBKLM15} to obtain cohomological refinements of $k$-consistency in constraint satisfaction, and of the Weisfeiler-Leman approximations to structure isomorphism. The purpose of these notes was to develop the sheaf-theoretic aspects of these ideas more fully, and to give a more uniform and conceptual account of the cohomological refinements.
They are intended to feed into ongoing joint work with Adam, Anuj Dawar, and Rui Soares Barbosa which aims to get a deeper understanding and more extensive results on this approach.
\subsection*{Notation}
We will not distinguish notationally in this note between a $\sg$-structure $A$ and its underlying universe or carrier, also written as $A$.
\textbf{Stylistic note} When we say ``there is a bijective correspondence between things of type $X$ and things of type $Y$'', we mean that there are explicit, inverse transformations between descriptions of things of these types, and hence the two notions are essentially the same.
We fix a finite relational vocabulary $\sg$, and a finite $\sg$-structure $B$.
We use the notation $A \to B$ to mean that there exists a homomorphism from $A$ to $B$, and $A \nto B$ for the negation of this statement.
The \emph{constraint satisfaction problem} $\CSP(B)$ is to decide, for an instance given by a finite $\sg$-structure $A$, whether there is a homomorphism $A \to B$.
We refer to $B$ as the \emph{template}.
\section{Positional strategies and presheaves}
Given a finite $\sg$-structure $A$, a \emph{positional strategy} for the existential $k$-pebble game from $A$ to $B$ is given by a family $S$ of homomorphisms $f : C \to B$, where $C$ is an induced substructure of $A$ with $|C| \leq k$. This is subject to the following conditions:
\begin{itemize}
\item \textbf{down-closure}: If $f : C \to B \in S$ and $C' \subseteq C$, then $f |_{C'} : C' \to B \in S$.
\item \textbf{forth condition}: If $f : C \to B \in S$, $|C| < k$, and $a \in A$, then there is some $f' : C \cup \{a\} \to B \in S$ with $f' |_{C} = f$.
\end{itemize}
Duplicator wins the existential $k$-pebble game iff there is a non-empty such strategy.
It is well-known that the existence of such a strategy is equivalent to the \emph{strong $k$-consistency} of $A$ in the sense of constraint satisfaction \cite{kolaitis2000game}. Moreover, strong $k$-consistency can be determined by a polynomial-time algorithm.
We write $\Sk(A)$ for the poset of subsets of $A$ of cardinality $\leq k$. Each such subset gives rise to an induced substructure of $A$.
We define a presheaf $\Hk : \Sk(A)^{\op} \to \Set$ by $\Hk(C) = \hom(C,B)$. If $C' \subseteq C$, then the restriction maps are defined by $\rho^{C}_{C'}(h) = h |_{C'}$.
This is the \emph{presheaf of partial homomorphisms}.\footnote{We can make this into a sheaf, analogous to the event sheaf in \cite{DBLP:conf/csl/AbramskyBKLM15}.}
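To make these definitions concrete, here is a minimal sketch (ours, purely illustrative, and not taken from the cited papers) of $\Hk$ for small finite structures coded as Python dictionaries. Sections are stored as sorted tuples of (element, value) pairs so that they can be collected into sets, and \texttt{is\_hom} checks only the relation tuples lying inside the induced substructure.
\begin{verbatim}
from itertools import combinations, product

# A structure is a pair (universe, relations), with relations a dict
# mapping each relation symbol to a set of tuples over the universe.

def is_hom(h, A_rels, B_rels):
    """h: dict C -> B. Check every A-tuple lying inside dom(h) is
    mapped into the corresponding B-relation (induced substructure)."""
    return all(tuple(h[x] for x in t) in B_rels[R]
               for R, ts in A_rels.items()
               for t in ts if all(x in h for x in t))

def Hk(A_univ, A_rels, B_univ, B_rels, k):
    """The presheaf of partial homomorphisms: map each context C in
    S_k(A) (a sorted tuple) to the set of homomorphisms C -> B, each
    stored as a sorted tuple of (element, value) pairs."""
    presheaf = {}
    for size in range(k + 1):
        for C in combinations(sorted(A_univ), size):
            presheaf[C] = {tuple(sorted(h.items()))
                           for vals in product(sorted(B_univ), repeat=size)
                           for h in [dict(zip(C, vals))]
                           if is_hom(h, A_rels, B_rels)}
    return presheaf
\end{verbatim}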
A subpresheaf of $\Hk$ is a presheaf $\Sh$ such that $\Sh(C) \subseteq \Hk(C)$ for all $C \in \Sk(A)$, and moreover if $C' \subseteq C$ and $h \in \Sh(C)$, then $\rho^{C}_{C'}(h) \in \Sh(C')$.
A presheaf is \emph{flasque} (or ``flabby'') if the restriction maps are surjective. This means that if $C \subseteq C'$, each $h \in \Sh(C)$ has an extension $h' \in \Sh(C')$ with $h' |_{C} = h$.
\begin{proposition}
There is a bijective correspondence between
\begin{enumerate}
\item positional strategies from $A$ to $B$
\item flasque sub-presheaves of $\Hk$.
\end{enumerate}
\end{proposition}
\begin{proof}
The property of being a subpresheaf of $\Hk$ is equivalent to the down-closure property, while being flasque is equivalent to the forth condition.
\end{proof}
Note that these flasque subpresheaves correspond to the ``empirical models'' studied in \cite{DBLP:conf/csl/AbramskyBKLM15} in relation to contextuality.
We can define a union operation on sub-presheaves of $\Hk$ pointwise: $(\bigcup_{i \in I} \Sh_i)(C) = \bigcup_{i \in I} \Sh_i (C)$. This preserves the property of being a flasque subpresheaf of $\Hk$. Thus there is a largest such sub-presheaf, which we denote by $\Sbar$. The following is then immediate:
\begin{proposition}
The following are equivalent:
\begin{enumerate}
\item $A$ is strongly $k$-consistent.
\item $\Sbar \neq \es$.
\end{enumerate}
\end{proposition}
\section{Presheaf representations and the pebbling comonad}\label{pebbsec}
\begin{proposition}\label{pebbprop}
The following are equivalent:
\begin{enumerate}
\item There is a coKleisli $I$-morphism $\Pk A \to B$
\item There is a non-empty flasque sub-presheaf $\Sh$ of $\Hk$.
\item $\Sbar \neq \es$.
\end{enumerate}
\end{proposition}
\begin{proof}
Given a coKleisli morphism $f : \Pk A \to B$, each $s \in \Pk A$ determines a subset $C \in \Sk(A)$ of those elements with ``live'' pebbles placed on them, and the responses of $f$ to the prefixes of $s$ determine a homomorphism $h : C \to B$, as described in detail in \cite{abramsky2017pebbling}. We define $\Sh_f(C)$ to be the set of all such morphisms, extending this with $\Sh_f(\es) = \{\es\}$, the singleton consisting of the empty homomorphism.
The forth property follows from the fact that if $|C| < k$, then we have a free pebble we can place on an additional element, and the response of $f$ to this extension of the sequence gives the required extension of $h$.
Somewhat surprisingly, the down-closure property is slightly more subtle. If $C$ is a singleton, this follows trivially from our stipulation on the empty set. Otherwise, we can remove an element $a$ from $C$ by moving the pebble currently placed on it to another element $a'$ of $C$. This means we have duplicate pebbles placed on $a'$. Because $f$ is an $I$-morphism, it must give the same response to this duplicate placing as it did when the previous pebble was placed on $a'$, and we obtain the restriction of $h$ as required.
Now suppose that we have $\Sh$. For each $s \in \Pk A \cup \{ []\}$, we define a homomorphism $h_s : C \to B$, where $C$ is the set of elements with live pebbles in $s$, by induction on $|s|$.
When $s = []$, $h_s = \es$.
Given $s[(p,a)] \in \Pk A$, by induction we have defined $h_s : C \to B$. There are several cases to consider, depending on the set of live elements $C'$ corresponding to $s[(p,a)]$:
\begin{enumerate}
\item If $C' = C \cup \{a\}$, then since $\Sh$ is flasque, there is $h' \in \Sh(C')$ with $h' |_{C} = h_s$. We define $h_{s[(p,a)]} = h'$.
\item It may be the case that $C' = C \setminus \{b\}$ for some $b$. This can happen if $a \in C$, so $p$ is a duplicate placing of a pebble on $a$, and the pebble $p$ was previously placed on $b$.
In this case, we define $h_{s[(p,a)]} = h_s |_{C'}$.
\item Finally, we may have $C' = (C \setminus \{b\}) \cup \{a\}$. In this case, $h_s |_{C \setminus \{b\}} \in \Sh(C \setminus \{b\})$, and since $\Sh$ is flasque, has an extension $h' \in \Sh(C')$.
We define $h_{s[(p,a)]} = h'$.
\end{enumerate}
Now we define $f_{\Sh}(s[(p,a)]) = h_{s[(p,a)]}(a)$.
The equivalence of (2) and (3) is immediate.
\end{proof}
Since choices were made in defining $f_{\Sh}$, the passages $f \mapsto \Sh_{f}$ and $\Sh \mapsto f_{\Sh}$ are not mutually inverse. Using the more involved argument in \cite[Proposition 9]{abramsky2017pebbling}, we can get a closer correspondence.
\section{Local consistency as coflasquification}
Seen from the sheaf-theoretic perspective, the local consistency algorithm has a strikingly simple and direct mathematical specification.
Given a category $\CC$, we write $\Pshv{\CC}$ for the category of presheaves on $\CC$, \ie functors $\CC^{\op} \to \Set$, with natural transformations as morphisms. We write $\Pshvf{\CC}$ for the full subcategory of flasque presheaves.
\begin{proposition}
The inclusion $\Pshvf{\CC} \inc \Pshv{\CC}$ has a right adjoint, so the flasque presheaves form a coreflective subcategory.
The associated idempotent comonad on $\widehat{\Sk(A)}$ is written as $\Sh \mapsto \Shfl$, where $\Shfl$ is the largest flasque subpresheaf of $\Sh$. The counit is the inclusion $\Shfl \inc \Sh$, and idempotence holds since $\Sh^{\Diamond\Diamond} = \Shfl$. We have $\Hk^{\Diamond} = \Sbar$.
\end{proposition}
\begin{proof}
The argument for the existence of a largest flasque subpresheaf of a given presheaf is similar to that given in the previous section: the empty presheaf is flasque, and flasque subpresheaves are closed under unions.\footnote{Note that restriction is well-defined on unions of subpresheaves of a given presheaf $\Sh$, which represent joins in the subobject lattice $\mathbf{Sub}(\Sh)$.} The key point for showing couniversality is that the image of a flasque presheaf under a natural transformation is flasque. Thus any natural transformation $\Sh' \natarrow \Sh$ from a flasque presheaf $\Sh'$ factors through the counit inclusion $\Shfl \inc \Sh$.
\end{proof}
This construction amounts to forming a greatest fixpoint. In our concrete setting, the standard local consistency algorithm (see e.g.~\cite{barto2014constraint}) builds this greatest fixpoint by filtering out elements which violate the restriction or extension conditions.
We shall give an explicit description of this construction in terms of presheaves, as it sets a pattern we shall return to repeatedly. Given a family $\{\Sh(C) \subseteq \Hk(C)\}_{C \in \Sk(A)}$\footnote{Not necessarily a presheaf, since it need not be closed under restriction.}, we define $\Shup$ by
\[ \Shup(C) := \begin{cases}
\{ s \in \Sh(C) \mid \forall a \in A. \, \exists s' \in \Sh(C \cup \{a\}). \, s' |_C = s \} & |C| < k \\
\Sh(C) & \text{otherwise}
\end{cases}
\]
and $\Shdown$ by
\[ \Shdown(C) :=
\{ s \in \Sh(C) \mid \forall a \in C. \, s |_{C \setminus \{a\}} \in \Sh(C \setminus \{a\}) \}
\]
Clearly these constructions are polynomial in $|A|$, $|B|$. We can then define an iterative process
\[ \Hk \linc \Hk^{\uparrow\downarrow} \linc \cdots \linc \Hk^{(\uparrow\downarrow)^{m}} \linc \cdots \]
Since the size of $\Hk$ is polynomially bounded in $|A|, |B|$, this will converge to a fixpoint in polynomially many steps.
This fixpoint is $\Hk^{\Diamond} = \Sbar$.
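These two filtering steps, and the iteration to the greatest fixpoint, translate directly into code. The sketch below continues the toy representation of the earlier $\Hk$ listing (contexts are sorted tuples, sections are sorted tuples of (element, value) pairs); it is illustrative only, and all names are ours.
\begin{verbatim}
def restrict(s, Cp):
    """Restriction map: restrict a section s to the subset Cp."""
    return tuple((x, v) for (x, v) in s if x in Cp)

def up(S, A_univ, k):
    """S^up: keep s in S(C), |C| < k, only if it extends into
    S(C u {a}) for every a in A."""
    out = {}
    for C, secs in S.items():
        if len(C) < k:
            out[C] = {s for s in secs
                      if all(any(restrict(t, C) == s
                                 for t in S[tuple(sorted(set(C) | {a}))])
                             for a in A_univ)}
        else:
            out[C] = set(secs)
    return out

def down(S):
    """S^down: keep s in S(C) only if each one-point restriction
    survives in S of the smaller context."""
    return {C: {s for s in secs
                if all(restrict(s, set(C) - {a})
                       in S[tuple(x for x in C if x != a)]
                       for a in C)}
            for C, secs in S.items()}

def coflasquify(S, A_univ, k):
    """Iterate up/down to the greatest fixpoint: the largest flasque
    subpresheaf of S."""
    while True:
        S2 = down(up(S, A_univ, k))
        if S2 == S:
            return S
        S = S2
\end{verbatim}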
This construction is dual to a standard construction in sheaf theory, which constructs a flasque sheaf extending a given sheaf, leading to a monad, the \emph{Godement construction} \cite{godement1958topologie}.
The following proposition shows how this comonad propagates \emph{local inconsistency} to \emph{global inconsistency}.
\begin{proposition}
Let $\Sh$ be a presheaf on $\Sk(A)$.
If $\Sh(C) = \es$ for any $C \in \Sk(A) \setminus \{ \es \}$, then $\Shfl = \es$.
\end{proposition}
\begin{proof}
It is sufficient to show that $\Shfl(\{a\}) = \es$ for all $a \in A$, since the only flasque presheaf for which this can hold is the empty presheaf. This will be the case for all $a \in C$ by propagating the forth condition.\footnote{A more precise argument goes by induction on $|C|$.} Now fix $a \in C$. For any $b \in A \setminus C$,
consider $\Shfl(\{ a,b \})$. This must be empty, since otherwise it would violate the restriction condition to $\{ a \}$.
But then $\Shfl(\{ b \})$ must be empty, since otherwise it would violate the forth condition for $a$.
\end{proof}
\section{Global sections and compatible families}
\label{secglobal}
A global section of a flasque subpresheaf $\Sh$ of $\Hk$ is a natural transformation $\One \natarrow \Sh$. More explicitly, it is a family $\{ h_{C} \}_{C \in \Sk(A)}$ with $h_C \in \Sh(C)$ such that, whenever $C \subseteq C'$, $h_C = h_{C'} |_{C}$.
\begin{proposition}\label{homglobsecsprop}
Suppose that $k \geq n$, where $n$ is the maximum arity of any relation in $\sg$. There is a bijective correspondence between
\begin{enumerate}
\item homomorphisms $A \to B$
\item global sections of $\Sbar$.
\end{enumerate}
\end{proposition}
\begin{proof}
Given a homomorphism $h : A \to B$, the family $\{ h |_{C} \}_{C \in \Sk(A)}$ is a flasque subpresheaf of $\Hk$, and hence is a subpresheaf of $\Sbar$. There is a corresponding global section of $\Sbar$, which picks out $h |_{C}$ at each $C \in \Sk$.
Conversely, given a global section $\{ h_{C} \}_{C \in \Sk(A)}$, we can define $h : A \to B$ by $h(a) = h_{\{a\}}(a)$. We must show that $h$ is a homomorphism. Given a relation instance $R^A(a_1,\ldots ,a_n)$, since $k \geq n$, $C := \{ a_1, \ldots , a_n \} \in \Sk(A)$, and for each $i$, $h_C |_{\{a_i\}} = h_{\{a_i\}}$, so $h_C(a_i) = h(a_i)$. Since $h_C$ is a homomorphism, the relation instance is preserved by $h$.
\end{proof}
\textbf{Fixed assumption}
Henceforth, we shall always make the background assumption that $k \geq n$, where $n$ is the relational width of $\sg$.
Another representation of global sections will be very useful. This will focus on the \emph{maximal elements} $\Mk(A)$ of the poset $\Sk(A)$, \ie~the $k$-element subsets.
A \emph{$k$-compatible family} in $\Sbar$ is a family $\{ h_C \}_{C \in \Mk(A)}$ such that, for all $C, C' \in \Mk(A)$,
\[ \rho^{C}_{C \cap C'}(h_C) = \rho^{C'}_{C \cap C'}(h_C') . \]
\begin{proposition}
\label{compfamprop}
There is a bijective correspondence between global sections and $k$-compatible families of $\Sbar$.
\end{proposition}
\begin{proof}
Clearly the restriction of a global section to $\Mk(A)$ gives a $k$-compatible family, since $ h_{C} |_{C \cap C'} = h_{C \cap C'} = h_{C'} |_{C \cap C'}$.
Conversely, given a $k$-compatible family $\{ h_{C} \}_{C \in \Mk(A)}$ and any $C' \in \Sk(A)$, we define $h_{C'} = h_{C_1} |_{C'}$, where $C' \subseteq C_1 \in \Mk(A)$. If $C' \subseteq C_2 \in \Mk(A)$, then $C' \subseteq C_1 \cap C_2$, and
\[ h_{C_1} |_{C'} = (h_{C_1} |_{C_1 \cap C_2})|_{C'} = (h_{C_2} |_{C_1 \cap C_2})|_{C'} = h_{C_2} |_{C'}, \]
so this definition is consistent across maximal extensions of $C'$, and yields a well-defined global section.
\end{proof}
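In the same toy representation (with \texttt{restrict} as in the earlier sketch), checking that a family of sections over the maximal contexts is $k$-compatible is a direct transcription of the definition:
\begin{verbatim}
def is_k_compatible(family):
    """family: dict mapping each maximal context C (a sorted tuple)
    to a section h_C. Check pairwise agreement on all overlaps."""
    Cs = list(family)
    return all(restrict(family[C], set(C) & set(Cp))
               == restrict(family[Cp], set(C) & set(Cp))
               for i, C in enumerate(Cs) for Cp in Cs[i + 1:])
\end{verbatim}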
\section{Cohomological $k$-consistency}\label{cohomkconsec}
We can summarize the results of Section~\ref{secglobal} as follows:
\begin{proposition}
There is a polynomial-time reduction from $\CSP(B)$ to the problem, given any instance $A$, of determining whether the associated presheaf $\Sbar$ has a global section, or equivalently, a $k$-compatible family.
\end{proposition}
Of course, since $\CSP(B)$ is NP-complete in general, so is the problem of determining the existence of a global section.
This motivates finding an efficiently computable approximation.
This leads us to the notion of cohomological $k$-consistency recently introduced by Adam \'O Conghaile \cite{AOC2021}.
This leverages results from \cite{DBLP:journals/corr/abs-1111-3620,DBLP:conf/csl/AbramskyBKLM15} on using cohomology to detect contextuality, and applies them in the setting of constraint satisfaction.
\section{Background on contextuality}
The logical structure of quantum mechanics is given by a family of overlapping perspectives or contexts. Each context appears classical, and different contexts agree locally on their overlap. However, there is no way to piece all these local perspectives together into an integrated whole, as shown in many experiments, and proved rigorously using the mathematical formalism of quantum mechanics.
To illustrate this non-integrated feature of quantum mechanics, we may consider the well-known ``impossible'' drawings by Escher, such as the one shown in Figure~\ref{EE0}.
\begin{figure}
\begin{center}
\includegraphics[width=0.64 \textwidth]{Escher-AscendingAndDescending.jpg}\\
\caption{M. C. Escher, \emph{Klimmen en dalen (Ascending and descending)}, 1960. Lithograph.}
\label{EE0}
\end{center}
\end{figure}
Clearly, the staircase \emph{as a whole} in Figure~\ref{EE0} cannot exist in the real world. Nonetheless, the constituent parts of Figure~\ref{EE0} make sense \emph{locally}, as is clear from Figure~\ref{E1}.
Quantum contextuality shows that the logical structure of quantum mechanics exhibits exactly these features of \emph{local consistency}, but \emph{global inconsistency}.
We note that Escher's work was inspired by the \emph{Penrose stairs} from \cite{pen15}.\footnote{Indeed, these figures provide more than a mere analogy. Penrose has studied the topological ``twisting'' in these figures using cohomology \cite{penrose1992cohomology}. This is quite analogous to our use of sheaf cohomology to capture the logical twisting in contextuality.}
\begin{figure}
\begin{center}
\includegraphics[width=0.7 \textwidth]{E1.pdf}
\includegraphics[width=0.7 \textwidth]{E2.pdf}
\caption{Locally consistent parts of Figure~\ref{EE0}.}
\label{E1}
\end{center}
\end{figure}
\section{Brief review of contextuality, cohomology and paradox}
We begin by reviewing some background. In this note, we only consider commutative rings with unit, which we refer to simply as ``rings''.
Given a ring $R$, the category of $R$-modules is denoted $\RMod$. There is an evident forgetful functor $U : \RMod \to \Set$, and an adjunction
\[ \begin{tikzcd}
\Set \arrow[r, bend left=25, ""{name=U, below}, "\FR"{above}]
\arrow[r, leftarrow, bend right=25, ""{name=D}, "U"{below}]
& \RMod
\arrow[phantom, "\textnormal{{$\bot$}}", from=U, to=D]
\end{tikzcd}
\]
Here $\FR(X) = R^{(X)}$ is the free $R$-module generated by $X$, given by the formal $R$-linear combinations of elements of $X$, or equivalently the functions $X \to R$ of finite support, with pointwise addition and scalar multiplication. Given $f : X \to Y$, the functorial action of $\FR$ is given by $\FR f : \sum_i r_i x_i \mapsto \sum_j s_j y_j$, where $s_j = \sum_{f(x_i) = y_j} r_i$.
The unit of this adjunction $\eta_X : X \to R^{(X)}$ embeds $X$ in $R^{(X)}$ by sending $x$ to $1 \cdot x$, the linear combination with coefficient $1$ for $x$, and $0$ for all other elements of $X$.
Given $A$ with associated presheaf $\Sbar$, we can define the $\ZMod$-valued presheaf $\FZ \Sbar$.\footnote{Note that $\ZMod$ is isomorphic to $\AbGrp$, the category of abelian groups.}
\subsection*{The cohomological invariant from \cite{DBLP:journals/corr/abs-1111-3620,DBLP:conf/csl/AbramskyBKLM15}}
A cohomological invariant $\gamma$ is defined for a class of presheaves including $\FZ \Sbar$.
\begin{itemize}
\item Given a flasque subpresheaf $\Sh$ of $\Hk$, we have the $\AbGrp$-valued presheaf $\FF = \FZ \Sh$. We use the \Cech cohomology with respect to the cover $\MM = \Mk(A)$.
\item In order to focus attention at the context $C \in \MM$, we use the presheaf $\FF |_{C}$, which ``projects'' onto $C$. The cohomology of this presheaf is the \emph{relative cohomology} of $\FF$ at $C$.
The $i$'th relative \Cech cohomology group of $\FF$ is written as $\Cohom{i}$.
\item We have the \emph{connecting homomorphism} $\Cohom{0} \to \Cohom{1}$ constructed using the Snake Lemma of homological algebra.
\item The cohomological obstruction $\gamma : \FF(C) \to \Cohom{1}$ defined in \cite{DBLP:conf/csl/AbramskyBKLM15} is this connecting homomorphism, composed with the isomorphism $\FF(C) \cong \Cohom{0}$.
\end{itemize}
For our present purposes, the relevant property of this invariant is the following \cite[Proposition 4.2]{DBLP:journals/corr/abs-1111-3620}:
\begin{proposition}\label{gammacompfamprop}
For a local section $s \in \Sbar(C_0)$, with $C_0 \in \Mk(A)$, the following are equivalent:
\begin{enumerate}
\item $\gamma(s) = 0$
\item There is a $\ZZ$-compatible family $\{ \alpha_{C} \}_{C \in \Mk(A)}$ with $\alpha_C \in \FZ \Sbar(C)$, such that, for all $C, C' \in \Mk(A)$:
$\rho^{C}_{C \cap C'}(\alpha_C) = \rho^{C'}_{C \cap C'}(\alpha_{C'})$. Moreover, $\alpha_{C_0} = 1\cdot s$.
\end{enumerate}
\end{proposition}
We call a family as in (2) a \emph{$\ZZ$-compatible extension} of $s$. We can regard such an extension as a ``$\ZZ$-linear approximation'' to a homomorphism $h : A \to B$ extending $s$.
Note that, under the embedding $\eta$, every global section of $\Sbar$, given by a compatible family $\{ h_C \}_{C \in \Mk(A)}$, will give rise to a $\ZZ$-compatible family $\{ \eta_{C}(h_C) \}_{C \in \Mk(A)}$.
\section{Algorithmic properties of the cohomological obstruction}
The crucial advantage of the cohomological notion in the present setting is given by the following observation from \cite{AOC2021}:
\begin{proposition}
There is a polynomial-time algorithm for deciding, given $s \in \Sbar(C)$, $C \in \Mk(A)$, whether $s$ has a $\ZZ$-compatible extension.
\end{proposition}
\begin{proof}
Firstly, the number of ``contexts'' $C \in \Mk(A)$ is given by $K := |\Mk(A)| = \binom{|A|}{k} \leq |A|^k$.
For each context $C$, the number of elements of $\Sbar(C)$ is bounded by $M := |B|^k$. Each constraint $\rho^{C}_{C \cap C'}(\alpha_C) = \rho^{C'}_{C \cap C'}(\alpha_{C'})$ can be written as a set of homogeneous linear equations: for each $s \in \Sbar(C \cap C')$, we have the equation
\[ \sum_{\substack{t \in \Sbar(C),\\ t |_{C \cap C'} = s}} r_{C,t} \;\;\; - \;\; \sum_{\substack{t' \in \Sbar(C'), \\ t' |_{C \cap C'} = s}} r_{C',t'} \; = \; 0 \]
in the variables $r_{C,s}$ as $C$ ranges over contexts, and $s$ over $\Sbar(C)$. Altogether, there are $\leq K^2M$ such equations, over $KM$ variables.
The constraint that $\alpha_{C_0} = 1\cdot s$ can be written as a further $M$ equations forcing $r_{C_{0},s} = 1$, and $r_{C_{0},s'} = 0$ for $s' \in \Sbar(C_0) \setminus \{s\}$.
The whole system can be described by the equation $\mathbf{A}\mathbf{x} = \mathbf{v}$, where $\mathbf{A}$ is a matrix with entries in $\{ -1, 0, 1\}$, of dimensions $(K^2 +1)M \times KM$, which is of size polynomial in $|A|$, $|B|$, while $\mathbf{v}$ is a vector with one entry set to $1$, and all other entries to $0$. The existence of a $\ZZ$-compatible extension of $s$ is equivalent to the existence of a solution for this system of equations. Since solving systems of linear equations over $\ZZ$ is in PTIME \cite{kannan1979polynomial}, this yields the result.
\end{proof}
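The following sketch shows how this system can be assembled in the toy representation used earlier (with \texttt{restrict} as before). One caveat: for brevity it tests solvability over $\mathbb{Q}$ via the rank criterion, which is only a necessary relaxation of the $\ZZ$-solvability the definition requires; an exact integer test can be obtained from the same matrix via Smith or Hermite normal form. The function and variable names are ours.
\begin{verbatim}
from sympy import Matrix

def Ztest_relaxed(S, contexts, C0, s0):
    """Build the compatibility system in the variables r_{C,t}, plus
    the equations forcing alpha_{C0} = 1.s0, and test solvability
    over Q (Rouche-Capelli).  Z-solvability, as actually required,
    can be decided from the same data via Smith normal form."""
    vars_ = [(C, t) for C in contexts for t in sorted(S[C])]
    idx = {v: i for i, v in enumerate(vars_)}
    rows, rhs = [], []
    for C in contexts:
        for Cp in contexts:
            if Cp <= C:          # each unordered pair once
                continue
            I = tuple(sorted(set(C) & set(Cp)))
            for s in S[I]:       # one equation per overlap section
                row = [0] * len(vars_)
                for t in S[C]:
                    if restrict(t, I) == s:
                        row[idx[(C, t)]] += 1
                for t in S[Cp]:
                    if restrict(t, I) == s:
                        row[idx[(Cp, t)]] -= 1
                rows.append(row); rhs.append(0)
    for t in S[C0]:              # force alpha_{C0} = 1.s0
        row = [0] * len(vars_)
        row[idx[(C0, t)]] = 1
        rows.append(row); rhs.append(1 if t == s0 else 0)
    A, b = Matrix(rows), Matrix(rhs)
    return A.rank() == A.row_join(b).rank()
\end{verbatim}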
Given a flasque subpresheaf $\Sh$ of $\Hk$, and $s \in \Sh(C)$, $C \in \Mk(A)$, we write $\Ztest(\Sh,s)$ for the predicate which holds iff $s$ has a $\ZZ$-compatible extension in $\Sh$.
The key idea in \cite{AOC2021} is to use this predicate as a filter to refine local consistency.
We define $\Shch \inc \Sh$ by
\[ \Shch(C) := \begin{cases}
\{ s \in \Sh(C) \mid \Ztest(\Sh,s) \} & \text{$C \in \Mk(A)$} \\
\Sh(C) & \text{otherwise}
\end{cases}
\]
Note that $\Shch$ can be computed with polynomially many calls of $\Ztest$, and thus is itself computable in polynomial time.
$\Shch$ is closed under restriction, hence a presheaf. It is not necessarily flasque.
Thus we are led to the following iterative process, which is the ``cohomological $k$-consistency'' algorithm from \cite{AOC2021}:
\[ \Hk \linc \Hk^{\fl} \linc \Hk^{\fl\ch\fl} \linc \cdots \linc \Hk^{\fl(\ch\fl)^{m}} \linc \cdots \]
Since the size of $\Hk$ is polynomially bounded in $|A|, |B|$, this will converge to a fixpoint in polynomially many steps.
We write $\Shm$ for the $m$'th iteration of this process, and $\Shst$ for the fixpoint. Note that $\Sbar = \Sh_{k}^{(0)}$.
Note that this process involves computing $\Ztest(\Sh, s)$ for the \emph{same} local section $s$ with respect to \emph{different} presheaves $\Sh$.
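Schematically, the whole iteration can be driven by a loop of the following shape (a sketch reusing \texttt{coflasquify} and the relaxed \texttt{Ztest\_relaxed} from the earlier listings; \texttt{maximal} is $\Mk(A)$):
\begin{verbatim}
def cohomological_k_consistency(S, A_univ, k, maximal):
    """Alternate coflasquification with the cohomological filter
    until a fixpoint is reached, as in the displayed chain above."""
    S = coflasquify(S, A_univ, k)          # Hk -> Hk^flat
    while True:
        S2 = {C: ({s for s in secs if Ztest_relaxed(S, maximal, C, s)}
                  if C in maximal else set(secs))
              for C, secs in S.items()}    # cohomological reduction
        S2 = coflasquify(S2, A_univ, k)    # re-flasquify
        if S2 == S:
            return S                       # the fixpoint S^*
        S = S2
\end{verbatim}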
\begin{proposition}
If $\Sh' \inc \Sh$ are flasque subpresheaves of $\Hk$, with $s \in \Sh'(C) \subseteq \Sh(C)$, $C \in \Mk(A)$, then
$\Ztest(\Sh',s)$ implies $\Ztest(\Sh,s)$. The converse does not hold in general.
\end{proposition}
\begin{proof}
Deciding $\Ztest(\Sh,s)$ amounts to determining whether a set $E$ of $\ZZ$-linear equations has a solution. The corresponding procedure for $\Ztest(\Sh',s)$ is equivalent to solving a set $E \cup F$, where $F$ is a set of equations forcing some of the variables in $E$, corresponding to the coefficients of local sections in $\Sh$ which are not in $\Sh'$, to be $0$. A solution for $E \cup F$ will also be a solution for $E$, but solutions for $E$ may not extend to $E \cup F$.
\end{proof}
Returning to the CSP decision problem, we define some relations on structures:
\begin{itemize}
\item We define $A \tok B$ iff $A$ is \emph{strongly $k$-consistent} with respect to $B$, \ie iff $\Sbar = \Sh_{k}^{(0)} \neq \es$.
\item We define $A \toZk B$ if $\Shst \neq \es$, and say that $A$ is \emph{cohomologically $k$-consistent} with respect to $B$.
\item We define $A \toZko B$ if $\Shone \neq \es$, and say that $A$ is \emph{one-step cohomologically $k$-consistent} with respect to $B$.
\end{itemize}
As already observed, we have:
\begin{proposition}
\label{ckconprop}
There are algorithms to decide $A \toZk B$ and $A \toZko B$ in time polynomial in $|A|$, $|B|$.
\end{proposition}
We can regard these relations as approximations to the ``true'' homomorphism relation $A \to B$. The soundness of these approximations is stated as follows:
\begin{proposition}
\label{chainimpprop}
We have the following chain of implications:
\[ A \to B \IMP A \toZk B \IMP A \toZko B \IMP A \tok B .\]
\end{proposition}
\begin{proof}
For the first implication, each homomorphism $h : A \to B$ gives rise to a compatible family, and hence, as already remarked, to a $\ZZ$-compatible family $\{ \eta_{C}(h_C) \}_{C \in \Mk(A)}$ which extends each of its elements. The second and third implications are immediate from the definitions.
\end{proof}
\section{Composition}
So far we have focussed on a single template structure $B$ and a single instance $A$. We now look at the global structure.
We can define a poset-enriched category (or locally posetal 2-category) $\Ck$ as follows:
\begin{itemize}
\item Objects are relational structures.
\item 1-cells $\Sh : A \to B$ are flasque subpresheaves of $\HkAB$.
\item The local posetal structure (2-cells) is given by subpresheaf inclusions $\Sh' \inc \Sh$.
\end{itemize}
The key question is how to compose 1-cells. We define a 2-functor $\Ck(A,B) \times \Ck(B,C) \to \Ck(A,C)$. Given $\Sh : A \to B$ and $\Tsh : B \to C$, we define $\Tsh \circ \Sh : A \to C$ as follows: for $U \in \Sk(A)$, $\Tsh \circ \Sh(U) := \{ t \circ s \mid s \in \Sh(U) \AND t \in \Tsh(\im s) \}$. This is easily seen to be monotone with respect to presheaf inclusions.
This composition immediately allows us to conclude that the $\tok$ relation is transitive. We now wish to extend this to show that $\toZk$
is transitive. For guidance, we can look at a standard construction, the \emph{group (or monoid) ring} \cite{lang2012algebra}. Given a monoid $M$, with multiplication $m : M \times M \to M$, and a commutative ring $R$, we form the free $R$-module $R^{(M)}$. The key point is how to extend the multiplication on $M$ to $R^{(M)}$. For this, we recall the tensor product of $R$-modules, and the fact that $\FR$ takes products to tensor products, so that $R^{(M)} \otimes R^{(M)} \cong R^{(M \times M)}$. We can use the universal property of the tensor product \cite{mac2013categories}
\[ \begin{tikzcd}
R^{(M)} \times R^{(M)} \ar[r] \ar[d, "\otimes"'] & R^{(M)} \ar[d, leftarrow, "R^{(m)}"] \\
R^{(M)} \otimes R^{(M)} \ar[r, "\cong"] & R^{(M \times M)}
\end{tikzcd}
\]
to define $R^{(M)} \times R^{(M)} \to R^{(M)}$ as the unique bilinear map making the above diagram commute. Here $\otimes$ is the universal bilinear map:
\[ \otimes : (\sum_i r_i m_i, \sum_j s_j n_j) \mapsto \sum_{ij} r_i s_j (m_i,n_j) \]
giving rise to the standard formula
\[ (\sum_i r_i m_i) \cdot (\sum_j s_j n_j) \, = \, \sum_{ij} r_i s_j (m_i \cdot n_j) . \]
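Concretely, with formal linear combinations represented as coefficient dictionaries, this multiplication is the convolution below (a generic sketch; the example computes $(1 + g)(1 - g) = 0$ in $\ZZ[\mathbb{Z}_2]$).
\begin{verbatim}
from collections import defaultdict

def ring_mult(x, y, mult):
    """Multiply two formal R-linear combinations, given as dicts
    {monoid element: coefficient}, using the monoid operation mult.
    This implements the displayed formula."""
    out = defaultdict(int)
    for m, r in x.items():
        for n, s in y.items():
            out[mult(m, n)] += r * s
    return dict(out)

# Example in Z[Z_2], writing g for the generator (element 1):
z2 = lambda a, b: (a + b) % 2
print(ring_mult({0: 1, 1: 1}, {0: 1, 1: -1}, z2))  # {0: 0, 1: 0}
\end{verbatim}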
There is a subtlety in lifting this to the presheaf level, since there we have a \emph{dependent product}.
For the remainder of this section, we shall fix the following notation.
We are given $\Sh : A \to B$ and $\Tsh : B \to C$.
We define $\Sh \dprod \Tsh$ to be the presheaf on $\Sk(A)$ defined by $\sumST(U) := \{ (s,t) \mid s \in \Sh(U) \AND t \in \Tsh(\im s) \}$.
Restriction maps are defined as follows: given $U' \subseteq U$, $\rho^{U}_{U'} : (s,t) \mapsto (s |_{U'}, t |_{\im (s |_{U'})})$.
Functoriality can be verified straightforwardly.
We can now define a natural transformation $m : \sumST \natarrow \Tsh \circ \Sh$. For each $U \in \Sk(A)$, $m_U : (s,t) \mapsto t \circ s$. Given $U' \subseteq U$, naturality is the requirement that the following diagram commutes
\[ \begin{tikzcd}
\sumST (U) \ar[r, "m_U"] \ar[d, "\rho^{U}_{U'}"'] & \Tsh \circ \Sh(U) \ar[d, "\rho^{U}_{U'}"] \\
\sumST (U') \ar[r, "m_{U'}"] & \Tsh \circ \Sh(U')
\end{tikzcd}
\]
which amounts to the equation $(t \circ s) |_{U'} \, = \, t |_{\im (s |_{U'})} \circ s |_{U'}$.
We note that the analogue of Proposition~\ref{compfamprop} holds for $\AbGrp$-valued presheaves:\footnote{In fact, the result holds for presheaves valued in any concrete category.}
\begin{proposition}
\label{gencompfamprop}
Let $\mathcal{F} : \Sk(A)^{\op} \to \AbGrp$.
There is a bijective correspondence between global sections and $k$-compatible families of $\mathcal{F}$.
\end{proposition}
The proof is the same as that given for Proposition~\ref{compfamprop}.
\begin{proposition}\label{prodglobsecsprop}
Suppose we have global sections $\alpha : \One \natarrow \FZ \Sh$ with $\alpha_{U_0} = 1 \cdot s_0$, and $\beta : \One \natarrow \FZ \Tsh$ with $\beta_{\im s_0} = 1 \cdot t_0$. Then there is a global section $\alpha \dprod \beta : \One \natarrow \FZ (\sumST)$, with $(\alpha \dprod \beta)_{U_0} = 1 \cdot (s_0,t_0)$.
\end{proposition}
\begin{proof}
We define $(\alpha \dprod \beta)_{U} := \sum_{s \in \Sh(U)} \sum_{t \in \Tsh(\im s)} \alpha_s \beta_t (s,t)$, where
$\alpha_U = \sum_{s \in \Sh(U)} \alpha_s s$ and $\beta_{\im s} = \sum_{t \in \Tsh(\im s)} \beta_t t$.
Clearly, $(\alpha \dprod \beta)_{U_0} = 1 \cdot (s_0,t_0)$. It remains to verify naturality.
Given $U' \subseteq U$, by naturality of $\alpha$ and $\beta$, for each $s' \in \Sh(U')$, $t' \in \Tsh(\im s')$:
\[ \alpha_{s'} \; = \sum_{\substack{s \in \Sh(U), \\ s |_{U'} = s'}} \alpha_s, \qquad \beta_{t'} \; = \sum_{\substack{t \in \Tsh(\im s), \\ t |_{\im s'} = t'}} \beta_t . \]
Now given $s' \in \Sh(U')$ and $t' \in \Tsh(\im s')$, we compare the coefficient $\alpha_{s'}\beta_{t'}$ of $(s',t')$ in $(\alpha \dprod \beta)_{U'}$ with that in
$\FZ (\rho^{U}_{U'})((\alpha \dprod \beta)_{U})$. We have
\begin{align*}
\FZ (\rho^{U}_{U'})((\alpha \dprod \beta)_{U})(s',t') \; &= \sum_{\substack{s \in \Sh(U), \\ s |_{U'} = s'}} \; \sum_{\substack{t \in \Tsh(\im s), \\ t |_{\im s'} = t'}} \alpha_s \beta_t \\
&= \sum_{\substack{s \in \Sh(U), \\ s |_{U'} = s'}} \alpha_s \; (\sum_{\substack{t \in \Tsh(\im s), \\ t |_{\im s'} = t'}} \beta_t) \\
&= \sum_{\substack{s \in \Sh(U), \\ s |_{U'} = s'}} \alpha_s \beta_{t'} \\
&= (\sum_{\substack{s \in \Sh(U), \\ s |_{U'} = s'}} \alpha_s) \beta_{t'} \\
&= \alpha_{s'}\beta_{t'} .
\end{align*}
\end{proof}
\begin{proposition}
If $\Sh^{\Box} = \Sh$ and $\Tsh^{\Box} = \Tsh$, then $(\Tsh \circ \Sh)^{\Box} = \Tsh \circ \Sh$.
\end{proposition}
\begin{proof}
For $U_0 \in \Mk(A)$ and $t_0 \circ s_0 \in \Tsh \circ \Sh(U_0)$, we must show that $\Ztest(\Tsh \circ \Sh, t_0 \circ s_0)$. By Proposition~\ref{gencompfamprop}, it suffices to show that there is a global section $\gamma : \One \natarrow \FZ (\Tsh \circ \Sh)$ with $\gamma_{U_0} = 1 \cdot (t_0 \circ s_0)$. By assumption, and using Proposition~\ref{gencompfamprop} again, we have global sections $\alpha : \One \natarrow \FZ \Sh$ with $\alpha_{U_0} = 1 \cdot s_0$, and $\beta : \One \natarrow \FZ \Tsh$ with $\beta_{\im s_0} = 1 \cdot t_0$.
By Proposition~\ref{prodglobsecsprop}, we have $\alpha \dprod \beta : \One \natarrow \FZ (\sumST)$, with $(\alpha \dprod \beta)_{U_0} = 1 \cdot (s_0,t_0)$. Composing $\alpha \dprod \beta$ with $\FZ m : \FZ(\sumST) \natarrow \FZ(\Tsh \circ \Sh)$ yields $\gamma$ as required.
\end{proof}
As an immediate corollary, we obtain:
\begin{proposition}
The relation $\toZk$ is transitive.
\end{proposition}
\section{Cohomological reduction}\label{cohomredsec}
We now briefly outline the cohomological content of the construction $\Sh \mapsto \Shch$, which we shall call \emph{cohomological reduction}, since $\Shch \inc \Sh$.
We refer to \cite{DBLP:conf/csl/AbramskyBKLM15} for further details.
Given a flasque subpresheaf $\Sh$ of $\Hk$, we have the abelian-group-valued presheaf $\FF = \FZ \Sh$. We use the \Cech cohomology with respect to the cover $\MM = \Mk(A)$.
In order to focus attention at the context $C \in \MM$, we use the presheaf $\FF |_{C}$, which ``projects'' onto $C$. The cohomology of this presheaf is the \emph{relative cohomology} of $\FF$ at $C$.
The $i$'th relative \Cech cohomology group of $\FF$ is written as $\Cohom{i}$.
We have the \emph{connecting homomorphism} $\Cohom{0} \to \Cohom{1}$ constructed using the Snake Lemma of homological algebra.
The cohomological obstruction $\gamma : \FF(C) \to \Cohom{1}$ defined in \cite{DBLP:conf/csl/AbramskyBKLM15} is this connecting homomorphism, composed with the isomorphism $\FF(C) \cong \Cohom{0}$.
Using Proposition~\ref{gammacompfamprop}, the predicate $\Ztest(\Sh, s)$ is equivalent to $\gamma \circ \eta(s) = 0$, \ie $\eta(s) \in \ker \gamma$.
Thus we can define $\Shch(C)$ as the pullback (in $\Set$):
\[ \begin{tikzcd}
\Shch(C) \ar[d, hookrightarrow] \arrow[dr, phantom, "\lrcorner", very near start] \ar[r] & U (\ker \gamma) \ar[d, hookrightarrow] \\
\Sh(C) \ar[r, "\eta"] & U \FZ \Sh (C)
\end{tikzcd}
\]
\section{Relation to contextuality conditions}
We recall one of the contextuality properties studied in \cite{DBLP:conf/csl/AbramskyBKLM15}.
In the present setting, we can define this as follows. If $\Sh$ is a flasque subpresheaf of $\Hk$, then:
\begin{itemize}
\item $\Sh$ is \emph{cohomologically strongly contextual} $(\CSC(\Sh))$ if
\[ \forall C \in \Mk(A). \, \forall s \in \Sh(C). \, \neg \Ztest(\Sh, s) . \]
\end{itemize}
We shall write $A \ntoZk B \, := \, \neg(A \toZk B)$, and similarly $A \ntoZko B$.
\begin{proposition}\label{CSCntoZkoprop}
$\CSC(\Sbar) \IMP A \ntoZko B$.
\end{proposition}
\begin{proof}
If $\CSC(\Sbar)$, then $\Sbar^{\Box}(C) = \es$ for all $C \in \Mk(A)$. This implies that $\Sbar^{\Box\Diamond}(C) = \es$ for all $C \in \Sk(A)$, \ie $\Shone = \mathbf{\es}$, and hence $A \ntoZko B$.
\end{proof}
\section{Linear templates: the power of one iteration}
We now consider the case where the template structure $B$ is linear. This means that $B = R$ is a finite ring, and the interpretation of each relation in $\sg$ on $R$ has the form $\Eab^{R}(r_1,\ldots,r_n) \equiv \sum_{i=1}^n a_i r_i = b$, for some $\va \in R^n$ and $b \in R$. Thus we can label each relation in $\sg$ as $\Eab$, where $\va$, $b$ correspond to the interpretation of the relation in $R$.
Given an instance $A$, we can regard each tuple $\vx \in A^n$ such that $\Eab^{A}(x_1,\ldots, x_n)$ as the equation $\sum_{i=1}^n a_i x_i = b$. The set of all such equations is denoted by $\TA$.
We say that a function $f : A \to R$ \emph{satisfies} this equation if $\sum_{i=1}^n a_i f(x_i) = b$ holds in $R$, \ie if $\Eab^{A}(f(x_1),\ldots, f(x_n))$. It is then immediate that a function $f : A \to R$ simultaneously satisfies all the equations in $\TA$ iff it is a homomorphism.
We can also associate a set of equations with each context $C \in \Mk(A)$. We say that $\Sbar(C)$ satisfies the equation $e_{\vc,d} := \sum_{i=1}^n c_i x_i = d$, where $\{ x_1, \dots , x_n \} \subseteq C$, $\vc \in R^n$ and $d \in R$, if every $s \in \Sbar(C)$ satisfies $e_{\vc,d}$.
Note that we do \emph{not} require that there is a corresponding relation $E_{\vc,d}$ in $\sg$. We write $\TC$ for the set of all equations satisfied by $\Sbar(C)$, and $\TS := \bigcup_{C \in \Mk} \TC$.
We say that $\Sbar$ satisfies \emph{All-versus-Nothing contextuality} ($\AvN_{R}(\Sbar)$) \cite{DBLP:conf/csl/AbramskyBKLM15} if there is no function $s : A \to R$ which satisfies all the equations in $\TS$.
\begin{proposition}
\label{ntoavnprop}
If $A \nto R$, then $\AvN_{R}(\Sbar)$.
\end{proposition}
\begin{proof}
Since we are assuming that $k \geq n$, where $n$ is the relational width of $\sg$, we have $\TA \subseteq \TS$, and hence any function satisfying $\TS$ will also satisfy $\TA$. As already observed, $A \nto R$ is equivalent to the statement that there is no satisfying assignment for $\TA$.
\end{proof}
We can now state another important result from \cite{AOC2021}: that cohomological $k$-consistency is an \emph{exact condition} for linear templates.
Moreover, the key step in the argument is the main result from \cite{DBLP:conf/csl/AbramskyBKLM15}.
\begin{proposition}
\label{linearprop}
For every linear template $R$, and instance $A$:
\[ A \to R \; \IFF \; A \toZk R \; \IFF \; A \toZko R . \]
\end{proposition}
\begin{proof}
The forward implications are given by Proposition~\ref{chainimpprop}.
Now suppose that $A \nto R$. By Proposition~\ref{ntoavnprop}, this implies that $\AvN_{R}(\Sbar)$. By \cite[Theorem 6.1]{DBLP:conf/csl/AbramskyBKLM15}, this implies that $\CSC(\Sbar)$.
By Proposition~\ref{CSCntoZkoprop}, this implies $A \ntoZko R$. By Proposition~\ref{chainimpprop} again, this implies $A \ntoZk R$.
\end{proof}
\section{Cohomological width}
We recall that a CSP template structure $B$ is said to have \emph{width} $\leq k$ (written as $\Wk(B)$) if, for all instances $A$:
\[ A \to B \; \IFF \; A \tok B . \]
Thus the templates of bounded width are those for which $\CSP(B)$ has an exact polynomial-time solution given by determination of strong $k$-consistency for some $k$.
In their seminal paper \cite{feder1998computational} Feder and Vardi identified two tractable subclasses of CSP, those with templates of bounded width, and those which are ``subgroup problems'' in their terminology, \ie essentially those with linear templates. Since all other cases with known complexity at that time were NP-complete, this motivated their famous Dichotomy Conjecture, recently proved by Bulatov \cite{bulatov2017dichotomy} and Zhuk \cite{zhuk2020proof}.
The two tractable classes identified by Feder and Vardi appeared to be quite different in character. However, we can use the preceding cohomological analysis to give a unified account.
We define the \emph{cohomological width} of a template structure $B$ to be $\leq k$ (notation: $\WZk(B)$) if, for all instances $A$:
\[ A \to B \; \IFF \; A \toZk B . \]
\begin{proposition}
If $B$ has bounded cohomological width, \ie $\WZk(B)$ for some $k$, then $\CSP(B)$ is in PTIME.
Moreover, both the Feder-Vardi classes of bounded width and linear templates have bounded cohomological width.
\end{proposition}
\begin{proof}
The first statement follows from Proposition~\ref{ckconprop}. Note that this provides a simple, uniform algorithm which gives an exact solution for $\CSP(B)$ whenever $B$ has bounded cohomological width.
For the second statement, from Proposition~\ref{chainimpprop} we have the implication $\Wk(B) \Rightarrow \WZk(B)$, hence bounded width implies bounded cohomological width.
Finally, Proposition~\ref{linearprop} implies that linear templates have bounded cohomological width.
\end{proof}
For each template structure $B$, either $\CSP(B)$ is NP-complete, or $B$ admits a weak near-unanimity polymorphism \cite{maroti2008existence}.
In \cite{zhuk2020proof}, Zhuk shows that if $B$ admits a weak near-unanimity polymorphism, there is a polynomial-time algorithm for $\CSP(B)$, thus establishing the Dichotomy Theorem.
This result motivates the following question (see also \cite{AOC2021}):
\begin{question}
Is it the case that for all structures $B$, if $B$ has a weak near-unanimity polymorphism, then it has bounded cohomological width?
\end{question}
A positive answer to this question would give an alternative proof of the Dichotomy Theorem.
\section{Presheaf representations of logical equivalences}
As we have seen, the local consistency relation $A \tok B$ approximates the homomorphism relation $A \to B$.
By standard results (cf.~Proposition~\ref{pebbprop} and \cite{abramsky2017pebbling}), $A \tok B$ iff every $k$-variable existential positive sentence satisfied by $A$ is satisfied by $B$. We now consider presheaf representations of logical equivalence for richer logical fragments.
\subsection{Existential logic $\ELk$}
As a first step, we consider $\Ik$, the presheaf of \emph{partial isomorphisms}. For each $C \in \Sk(A)$, $\Ik(C)$ is the set of partial isomorphisms from $A$ to $B$ with domain $C$. This is a subpresheaf of $\Hk$.
We can now consider $\Ikfl$, the coflasquification of $\Ik$.
We have the following analogue of Proposition~\ref{homglobsecsprop}:
\begin{proposition}
Suppose that $k \geq n$, where $n$ is the maximum arity of any relation in $\sg$. There is a bijective correspondence between
\begin{enumerate}
\item embeddings $A \embed B$
\item global sections of $\Ikfl$.
\end{enumerate}
\end{proposition}
Note that $\Ikfl$ can be computed in polynomial time, by an algorithm which is a minor variation of the local consistency algorithm which produces $\Hk^{\fl}$.
We can see a non-empty flasque subpresheaf of $\Ik$ as a winning Duplicator strategy for the forth-only $k$-pebble game in which the winning condition is the partial isomorphism condition rather than the partial homomorphism condition.
This leads to the following correspondence with the logical preorder induced by \emph{existential logic} $\ELk$. This is the $k$-variable logic which allows negation on atomic formulas, as well as conjunction, disjunction, and existential quantification.
For structures $A$, $B$, $A \ELkpreord B$ if every sentence of $\ELk$ satisfied by $A$ is also satisfied by $B$.
\begin{proposition}
For finite structures $A$, $B$, the following are equivalent:
\begin{enumerate}
\item $\Ikfl \neq \es$
\item $A \ELkpreord B$.
\end{enumerate}
\end{proposition}
\subsection{The full $k$-variable fragment $\Lk$}
To express back-and-forth equivalences, we use the fact that the inverse operation on partial isomorphisms lifts to $\Ik$ and its subpresheaves.
Given a subpresheaf $\Sh$ of $\Ik = \IkAB$, we define $\Shdag$ to be the subpresheaf of $\IkBA$ on $\Sk(B)$ such that, for $D \in \Sk(B)$, $\Shdag(D) = \{ t \in \IkBA(D) \mid t^{-1} \in \Sh(\im t) \}$.
The fact that restriction is well-defined in $\Shdag$ follows since, if $s \in \Ik(C)$, $D = \im s$, and $D' \subseteq D$,
then $s^{-1} |_{D'} = (s |_{C'})^{-1}$, where $C' = \im \, (s^{-1} |_{D'})$.
\begin{proposition}
For finite structures $A$, $B$, the following are equivalent:
\begin{enumerate}
\item There is a non-empty subpresheaf $\Sh$ of $\Ik$ such that both $\Sh$ and $\Shdag$ are flasque.
\item $A \eqLk B$.
\end{enumerate}
\end{proposition}
\subsection{The $k$-variable fragment with counting quantifiers $\Lck$}\label{cofpsec}
We recall Hella's bijection game \cite{Hella1996}: at each round, Duplicator specifies a bijection $\beta$ from $A$ to $B$, and then Spoiler specifies
$a \in A$. The current position is extended with $(a, \beta(a))$. The winning condition is the partial isomorphism condition.
Note that allowing Spoiler to choose $b \in B$ would make no difference, since they could have chosen $a = \beta^{-1}(b)$ with the same effect.
Given a flasque subpresheaf $\Sh$ of $\Ik$, the additional condition corresponding to the bijection game is that, for each $s \in \Sh(C)$ with $|C| < k$, there is an assignment $a \mapsto s_a$ of $s_a \in \Sh(C \cup \{a\})$ with $s_a |_C = s$, such that the relation $\{ (a,s_a(a)) \mid a \in A \}$ is a bijection from $A$ to $B$. We write $\cotest(\Sh,s)$ if this condition is satisfied for $s$ in $\Sh$.
As observed in \cite{AOC2021}, for each such $s$ this condition can be formulated as a perfect matching problem on a bipartite graph of size polynomial in $|A|$ and $|B|$, and hence can be checked in polynomial time.
We lift this test to the presheaf $\Sh$, and define $\Shcotest \inc \Sh$ by $\Shcotest(C) = \{ s \in \Sh(C) \mid \cotest(\Sh,s) \}$.
Since there are only polynomially many $s$ to be checked, $\Shcotest$ can be computed in polynomial time.
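A sketch of this matching formulation, in the representation used earlier (\texttt{restrict} as before) and using the Hopcroft--Karp implementation from networkx; the vertex encoding and names are ours, and we assume $|C| < k$ so that each $C \cup \{a\}$ is a context.
\begin{verbatim}
import networkx as nx
from networkx.algorithms import bipartite

def cotest(S, C, s, A_univ, B_univ):
    """s passes iff the bipartite graph with an edge (a, b) whenever
    some extension s_a in S(C u {a}) of s sends a to b admits a
    perfect matching between A and B."""
    G = nx.Graph()
    G.add_nodes_from((('A', a) for a in A_univ), bipartite=0)
    G.add_nodes_from((('B', b) for b in B_univ), bipartite=1)
    for a in A_univ:
        Ca = tuple(sorted(set(C) | {a}))
        for t in S[Ca]:
            if restrict(t, C) == s:
                G.add_edge(('A', a), ('B', dict(t)[a]))
    top = [('A', a) for a in A_univ]
    m = bipartite.hopcroft_karp_matching(G, top_nodes=top)
    return len(m) // 2 == len(A_univ) == len(B_univ)
\end{verbatim}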
We can then define an iterative process
\[ \Ik \linc \Ik^{\fl} \linc \Ik^{\fl\cotest\fl} \linc \cdots \linc \Ik^{\fl(\cotest\fl)^{m}} \linc \cdots \]
This converges to a fixpoint $\Shcofp$ in polynomially many iterations.
\begin{proposition}
For finite structures $A$, $B$, the following are equivalent:
\begin{enumerate}
\item $\Shcofp \neq \es$.
\item $A \eqLck B$.
\end{enumerate}
\end{proposition}
By a well-known result \cite{cai1992optimal}, the logical equivalence $\eqLck$ corresponds to equivalence at level $k-1$ according to the Weisfeiler-Leman algorithm, a widely studied approximation algorithm for graph and structure isomorphism \cite{kiefer2020weisfeiler}.
Thus the above yields an alternative polynomial-time algorithm for computing Weisfeiler-Leman equivalences.
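For comparison, the lowest level of the Weisfeiler-Leman hierarchy (colour refinement, corresponding to $k = 2$ above by the cited result) admits the following standard implementation; this is the classical algorithm, shown only to fix intuitions, not the presheaf-based one.
\begin{verbatim}
def colour_refinement(adj):
    """Standard 1-WL colour refinement on a graph {vertex: set of
    neighbours}.  Returns the stable colouring; two graphs are
    1-WL-equivalent iff their stable colour multisets agree."""
    col = {v: 0 for v in adj}
    while True:
        sig = {v: (col[v], tuple(sorted(col[u] for u in adj[v])))
               for v in adj}
        palette = {p: i for i, p in enumerate(sorted(set(sig.values())))}
        new = {v: palette[sig[v]] for v in adj}
        if new == col:
            return col
        col = new
\end{verbatim}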
\section{Cohomological refinement of logical equivalences}
We can proceed analogously to the introduction of a cohomological refinement of local consistency in Section~\ref{cohomkconsec}. Such refinements are possible for all the logical equivalences studied in the previous Section.
We will focus on the equivalences $\eqLck$, and the corresponding Weisfeiler-Leman equivalences.
The cohomological reduction $\Sh \mapsto \Shch$ is directly applicable to flasque subpresheaves of $\Ik$.
However, an additional subtlety which arises in this case is that $\Ztest(\Sh,s)$ need not imply $\Ztest(\Shdag, s^{-1})$ in general. We are thus led to the following symmetrization of cohomological reduction:
$\Sh \mapsto \Shsch := \Sh^{\ch \dagger \ch \dagger}$.
Combining this with the counting reduction fixpoint $\Sh \mapsto \Sh^{\cofp}$ from Section~\ref{cofpsec} leads to an iterative process
\[ \Ik \linc \Ik^{\cofp} \linc \Ik^{\cofp\sch\cofp} \linc \cdots \linc \Ik^{\cofp(\sch\cofp)^{m}} \linc \cdots \]
This converges to a fixpoint $\Shcst$ in polynomially many iterations. Since each of the operations $\Sh \mapsto \Shsch$ and $\Sh \mapsto \Sh^{\cofp}$ is computable in polynomial time, so is $\Shcst$.
We define $A \eqZ B$ if $\Shcst \neq \es$. We can view this as a polynomial time approximation to structure isomorphism.
The soundness of this algorithm is expressed as follows.
\begin{proposition}
For all finite structures $A$, $B$:
\[ A \cong B \IMP A \eqZ B \IMP A \eqLck B . \]
\end{proposition}
In \cite{AOC2021}, it is shown that $\eqZ$ strictly refines $\eqLck$. Moreover, Proposition~\ref{linearprop} is leveraged to show that $\eqZ$ is discriminating enough to defeat two important families of counter-examples: the CFI construction used in \cite{cai1992optimal} to show that $\Lck$ is not strong enough to characterise polynomial time, and the constructions in \cite{lichter2021separating,dawar2021limitations} which are used to show similar results for linear algebraic extensions of $\Lck$.
\section{A meta-algorithm for presheaves}
It is clear that the various algorithms we have described for local consistency, logical equivalences, and their cohomological refinements have a common pattern.
This pattern has two ingredients:
\begin{enumerate}
\item An \emph{initial over-approximation} $\Sh_0$.
\item A \emph{local predicate} $\vphi$.
\end{enumerate}
We can use $\vphi$ to define a \emph{deflationary operator} $J = \Jphi$ on the sub-presheaves of $\Sh_0$.
As a general setting, we take presheaves $\Pshv{P}$ on a poset $P$, ordered by sub-presheaf inclusion. For any choice of initial presheaf $\Sh_0$, the downset ${\downarrow} \Sh_0$ forms a complete lattice $L$.\footnote{Equivalently, this is the subobject lattice $\Sub(\Sh_0)$.}
A deflationary operator on $L$ is a monotone function $J : L \to L$ such that, for all $\Sh \in L$, $J \Sh \inc \Sh$. By the standard Tarski fixpoint theorem, this has a greatest fixpoint $J^* = \bigcup \{ \Sh \mid \Sh \inc J \Sh \}$.
To compute this greatest fixpoint, if $L$ satisfies the Descending Chain Condition, \ie there are no infinite descending chains, then we can compute
\[ \Sh_0 \linc J \Sh_0 \linc J^2 \Sh_0 \linc \cdots \]
which will converge after finitely many steps to $J^*$.
Now we specialize to subpresheaves of $\HkAB$. We define $|\Sh| := \sum_{C \in \Sk(A)} |\Sh(C)|$. Since $|\HkAB| \leq |A|^k|B|^k$, the number of iterations needed to converge to $J^*$ is polynomially bounded in $|A|$, $|B|$.
We consider deflationary operators of the form $J = J_{\vphi}$, where $\vphi(\Sh,s)$ is a predicate on local sections.
We require that $\vphi$ be monotone in the following sense: if $\Sh \inc \Sh'$, then for $C \in \Sk(A)$, $s \in \Sh(C)$, $\vphi(\Sh,s) \Rightarrow \vphi(\Sh',s)$.
We define $J_{\vphi}( \Sh)(C) := \{ s \in \Sh(C) \mid \vphi(\Sh,s) \}$. Since $|\Sh|$ is polynomially bounded in $|A|$, $|B|$, $J_{\vphi}$ makes polynomially many calls of $\vphi$.
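A minimal Python sketch of this generic fixpoint computation, assuming the encoding of presheaves as dictionaries used earlier (names ours):
\begin{verbatim}
def greatest_fixpoint(S0, phi):
    # Iterate the deflationary operator J_phi from the initial
    # over-approximation S0; by the Descending Chain Condition this
    # terminates, after at most |S0| rounds.
    S = {C: frozenset(secs) for C, secs in S0.items()}
    while True:
        S_new = {C: frozenset(s for s in S[C] if phi(S, C, s))
                 for C in S}
        if S_new == S:
            return S
        S = S_new
\end{verbatim}
For instance, instantiating \texttt{phi} as \texttt{lambda S, C, s: cotest(S, C, s, A, B)} recovers the counting reduction of Section~\ref{cofpsec}. Each round either removes a section or stops, so for subpresheaves of $\HkAB$ the number of rounds is polynomially bounded, and if \texttt{phi} runs in polynomial time then so does the whole computation.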
We recover all the algorithms considered previously by making suitable choices for the predicate $\vphi$.
In most cases, this can be defined by $\vphi(\Sh, s) \; \equiv \; \bar{\Sh}, s \models \psi$, where $\bar{\Sh}$ is the relational structure with universe $\sum_{C \in \Sk(A)} \Sh(C)$, and the relation
\[ E^{\bar{\Sh}}(s,t,a,b) \equiv (t = s \cup \{ (a,b) \}); \]
while $\psi$ is a first-order formula over this vocabulary.\footnote{Strictly speaking, we should consider the many-sorted structure with sorts for $A$ and $B$.}
The counting predicate $\cotest$ and the cohomological reduction predicate $\Ztest$ require stronger logical resources.
In all cases, if $\vphi$ is computable in polynomial time, so is the overall algorithm to compute $J^*$.
\section{Further remarks and questions}
Firstly, we observe a down-closure property of cohomological $k$-consistency.
\begin{proposition}
If $k \leq k'$, then $A \toZkp B \IMP A \toZk B$.
\end{proposition}
\begin{proof}
Suppose we have a $\ZZ$-compatible family $\{ \alpha_{C'} \}_{C' \in \Mkp(A)}$ extending $s' \in \Sbarp(C'_0)$. By Proposition~\ref{gencompfamprop} we can extend this family to a global section of $\FZ \Sbarp$.
Using Proposition~\ref{gencompfamprop} again, the elements $\{ \alpha_{C} \}_{C \in \Mk(A)}$ at the $k$-element subsets will form a $\ZZ$-compatible family of $\Sbar$. Moreover, if $C_0$ is a $k$-element subset of $C'_0$, $\alpha_{C_0}$ will be $s = s' |_{C_0}$, so this family will extend $s$.
\end{proof}
We now ask which of the implications in Proposition~\ref{chainimpprop} can be reversed. Clearly, if $P \neq NP$, the first cannot be reversed; nevertheless, one would like an explicit counter-example.
There are such counter-examples of false negatives for contextuality in \cite{DBLP:journals/corr/abs-1111-3620,DBLP:conf/csl/AbramskyBKLM15,caru2017cohomology}, but these are for specific values of $k$. Can we provide a family of counter-examples for all values of $k$?
We try to phrase this more precisely:
\begin{question}
If we fix an NP-complete template $B$, e.g. for 3-SAT, can we find a family of instances $\{ A_k \}$ such that, for all $k$:
\[ A_k \toZk B \AND A_k \nto B . \]
\end{question}
\bibliographystyle{amsplain}
\section{Introduction}
\label{sect:intro}
This paper is motivated by a recent collection of papers \cite{JonCylinder,JonRhombus,BLN,EngWitten,Thapper,JonDiag}, which in turn were motivated by some combinatorial questions in statistical physics, namely the properties of the Witten index in the so-called hard squares model \cite{Fen}.
In the hard-core model on a grid we are given a graph $G$ and particles (fermions) located at the vertices of the grid, with the restriction that two adjacent vertices cannot be occupied at the same time. That leads to considering the \emph{independent sets} in $G$, which represent the allowed states of the system. The family of all independent sets in $G$ naturally forms a simplicial complex, the \emph{independence complex} $\mathrm{Ind}(G)$ of $G$. Various topological invariants of $\mathrm{Ind}(G)$ correspond to physical characteristics of the underlying hard-core model. Among those are e.g. homology \cite{EngWitten,AdaSuper,Eer,Fen,HuSch2,JonGrids,HuOthers,HuSch1} and the Euler characteristic, re-branded in this context as the Witten index \cite{JonCylinder,JonRhombus,BLN,EngWitten,Thapper,JonDiag,Fen2}. Sometimes it is even possible to determine the exact homotopy type of $\mathrm{Ind}(G)$ \cite{BLN,Thapper,HHFS,HuOthers}.
In this work we continue Jonsson's line of research, studying these spaces for square grids with various boundary conditions. Let $P_m$ denote the path with $m$ vertices and let $C_n$ be the cycle with $n$ vertices. The free square grid is $G=P_m\times P_n$, its cylindrical version is $G=P_m\times C_n$ and the toroidal one is $G=C_m\times C_n$.
In the first part we show natural recursive dependencies in all three models. We concentrate mainly on cylinders.
\begin{theorem}
\label{thm:periods}
We have the following homotopy equivalences in the cylindrical case:
\begin{itemize}
\item[a)] $\mathrm{Ind}(P_1\times C_n)\simeq \Sigma\,\mathrm{Ind}(P_1\times C_{n-3})$,
\item[b)] $\mathrm{Ind}(P_2\times C_n)\simeq \Sigma^2\,\mathrm{Ind}(P_2\times C_{n-4})$,
\item[c)] $\mathrm{Ind}(P_3\times C_n)\simeq \Sigma^6\,\mathrm{Ind}(P_3\times C_{n-8})$,
\item[d)] $\mathrm{Ind}(P_m\times C_3)\simeq \Sigma^2\,\mathrm{Ind}(P_{m-3}\times C_3)$,
\item[e)] $\mathrm{Ind}(P_m\times C_5)\simeq \Sigma^2\,\mathrm{Ind}(P_{m-2}\times C_5)$,
\item[f)] $\mathrm{Ind}(P_m\times C_7)\simeq \Sigma^6\,\mathrm{Ind}(P_{m-4}\times C_7)$,
\end{itemize}
where $\Sigma$ denotes the unreduced suspension.
\end{theorem}
Here a) is a classical result of Kozlov \cite{Koz}, while d) and e) also follow from \cite{Thapper} where those spaces were identified with spheres by means of explicit Morse matchings. The results b), c) and f) are new and were independently proved by a different method in \cite{Kouyemon}. Note that a), b), c) are `dual' to, respectively, d), e) and f) in the light of Thapper's conjecture \cite[Conj. 3.1]{Thapper} (also \cite[Conj. 1.9]{Kouyemon}) that $\mathrm{Ind}(P_m\times C_{2n+1})\simeq \mathrm{Ind} (P_n\times C_{2m+1})$. If one assumes the conjecture holds, then a), b) and c) are equivalent to, respectively, d), e) and f). Theorem \ref{thm:periods} together with an easy verification of initial conditions imply the conjecture for $m\leq 3$. The statements d), e) and f) take the periodicity of Euler characteristic, proved by Jonsson \cite{JonCylinder}, to the level of homotopy type.
According to the calculation of K. Iriye \cite{KouyemonPriv} there is an equivalence $\mathrm{Ind}(P_4\times C_{2k+1})\simeq\mathrm{Ind}(P_k\times C_9)$ and both spaces are, up to homotopy, wedges of spheres. However, the number of wedge summands grows to infinity as $k\to\infty$, so no recursive relation as simple as those in Theorem~\ref{thm:periods} is possible for $\mathrm{Ind}(P_m\times C_9)$ or $\mathrm{Ind}(P_4\times C_n)$.
Let us also mention that a completely analogous method proves the following.
\begin{proposition}
\label{prop:other}
We have the following homotopy equivalences in the free and toroidal cases:
\begin{itemize}
\item[a)] $\mathrm{Ind}(P_1\times P_n)\simeq \Sigma\,\mathrm{Ind}(P_1\times P_{n-3})$,
\item[b)] $\mathrm{Ind}(P_2\times P_n)\simeq \Sigma\,\mathrm{Ind}(P_2\times P_{n-2})$,
\item[c)] $\mathrm{Ind}(P_3\times P_n)\simeq \Sigma^3\,\mathrm{Ind}(P_3\times P_{n-4})$,
\item[d)] $\mathrm{Ind}(C_3\times C_n)\simeq \Sigma^2\,\mathrm{Ind}(C_3\times C_{n-3})$.
\end{itemize}
\end{proposition}
All those results are proved in Section \ref{sect:simplesuspension}.
In the second part of this work we aim to provide a method of recursively calculating the Euler characteristic in the cylindrical case $P_m\times C_n$ when the circumference $n$ is even. Since it is customary to use the Witten index in this context, we will adopt the same approach and define for any space $X$
$$Z(X)=1-\chi(X)$$
where $\chi(X)$ is the unreduced Euler characteristic. Then $Z(X)=0$ for a contractible $X$, $Z(\Sigma\,X)=-Z(X)$ for any finite simplicial complex $X$, and $Z(S^k)=(-1)^{k-1}$. The value
$$Z(G):=Z(\mathrm{Ind}(G))$$
is what is usually called the Witten index of the underlying grid model.
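Unwinding the definitions, $\chi(\mathrm{Ind}(G))=\sum_{j\geq 1}(-1)^{j-1}i_j(G)$, where $i_j(G)$ is the number of independent sets of size $j$ in $G$, so $Z(G)=\sum_{j\geq 0}(-1)^{j}i_j(G)$, the independence polynomial of $G$ evaluated at $-1$. For small graphs this can be checked by brute force, as in the following Python sketch (the helper name is ours):
\begin{verbatim}
from itertools import combinations

def witten_index(vertices, edges):
    # Z(G) = sum over all independent sets I (including the empty
    # one) of (-1)^|I|; exponential time, small graphs only.
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    Z = 0
    for r in range(len(vertices) + 1):
        for I in combinations(vertices, r):
            if all(v not in adj[u] for u, v in combinations(I, 2)):
                Z += (-1) ** r
    return Z

# Z(C_5) = 1 - 5 + 5 = 1, consistent with Ind(C_5) being a circle.
print(witten_index(range(5), [(i, (i + 1) % 5) for i in range(5)]))
\end{verbatim}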
Table 1 in the appendix contains some initial values of $Z(P_m\times C_n)$ arranged so that $m$ labels the rows and $n$ labels the columns of the table. Let
$$f_n(t)=\sum_{m=0}^\infty Z(P_m\times C_{n})t^m$$
be the generating function of the sequence in the $n$-th column. By an ingenious matching Jonsson \cite{JonCylinder} computed the numbers $Z(P_m\times C_{2n+1})$ for odd circumferences and found that for each fixed $n$ they are either constantly $1$ or periodically repeating $1,1,-2,1,1,-2,\ldots$. Precisely
\begin{eqnarray*}
f_{6n+1}(t)=f_{6n-1}(t)&= &\frac{1}{1-t},\\
f_{6n+3}(t)&=&\frac{1-2t+t^2}{1-t^3}.
\end{eqnarray*}
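These expansions are easy to verify symbolically, for instance with SymPy:
\begin{verbatim}
from sympy import symbols, series

t = symbols('t')
f = (1 - 2*t + t**2) / (1 - t**3)
print(series(f, t, 0, 9))
# 1 - 2*t + t**2 + t**3 - 2*t**4 + t**5 + t**6 - 2*t**7 + t**8 + O(t**9)
\end{verbatim}
so the coefficients repeat the cyclic pattern $1,-2,1$, which is the pattern $1,1,-2$ quoted above read from a different starting point.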
The behaviour of $Z(P_m\times C_{2n})$ is an open problem of that work, which we tackle here. Moreover, Braun recently noted that some problems faced in \cite{Braun} are reminiscent of the difficulty of determining the homotopy types of the spaces $\mathrm{Ind}(P_m\times C_{2n})$.
Our understanding of the functions $f_{2n}(t)$ comes in three stages of increasing difficulty.
\begin{theorem}
\label{thm:genfun}
Each $f_{2n}(t)$ is a rational function, such that all zeroes of its denominator are complex roots of unity.
\end{theorem}
Our method also provides an algorithm to calculate $f_{2n}(t)$, see Appendix. This already implies that for each fixed $n$ the sequence $a_m=Z(P_m\times C_{2n})$ has polynomial growth. However, we can probably be more explicit:
\begin{conjecture}
\label{thm:genfunmedium}
For every $n\geq 0$ we have
\begin{eqnarray*}
f_{4n+2}(t)&= &\frac{h_{4n+2}(t)}{(1+t^2)\cdot\big[(1-t^{8n-2})(1-t^{8n-8})(1-t^{8n-14})\cdots(1-t^{2n+4})\big]},\\
f_{4n}(t)&=&\frac{h_{4n}(t)}{(1-t^2)\cdot\big[(1-t^{8n-6})(1-t^{8n-12})(1-t^{8n-18})\cdots(1-t^{2n+6})\big]}.
\end{eqnarray*}
for some polynomials $h_{n}$.
\end{conjecture}
In the denominators the exponents decrease by $6$.
Conjecture \ref{thm:genfunmedium} is in fact still just the tip of an iceberg, because the numerators $h_n(t)$ turn out to have many common factors with the denominators. This leads to the grand finale:
\begin{conjecture}
\label{conjecture:periodic}
After the reduction of common factors:
\begin{itemize}
\item $f_{4n+2}(t)$ can be written as a quotient whose denominator has no multiple zeroes. Consequently, for any fixed $n$, the sequence $a_m=Z(P_m\times C_{4n+2})$ is periodic.
\item $f_{4n}(t)$ can be written as a quotient whose denominator has only double zeroes. Consequently, for any fixed $n$, the sequence $a_m=Z(P_m\times C_{4n})$ has linear growth.
\end{itemize}
\end{conjecture}
A direct computation shows that the first part holds for a number of initial cases, with periods given by the table:
\begin{center}
\begin{tabular}{l||l|l|l|l|l|l}
$4n+2$ & 2 & 6 & 10 & 14 & 18 & 22\\ \hline
period & 4 & 12 & 56 & 880 & 360 & 276640
\end{tabular}
\end{center}
In Section \ref{sect:patterns} we prepare our main tool for the proof of Theorem~\ref{thm:genfun}: \emph{patterns} and their $\mu$-invariants. A small example of how they work is presented in detail in Section~\ref{sect:example6}. The proof of Theorem~\ref{thm:genfun} then appears in Section~\ref{sect:weakproof}. In Section~\ref{section:neck} we describe a completely independent combinatorial object, the \emph{necklace graph}, which is a simplified model of interactions between patterns. It has some conjectural properties, esp. Conjecture~\ref{con:cyc-len}, whose verification would prove Conjecture~\ref{thm:genfunmedium}. Section~\ref{section:neck}, up to and including Conjecture~\ref{con:cyc-len}, can be read without any knowledge of any other part of this paper. As for Conjecture~\ref{conjecture:periodic}, it seems unlikely that the methods of this paper will be sufficient to prove it.
To avoid confusion we remark that our results are in a sense orthogonal to some questions raised by Jonsson, who asked if the sequence in each \emph{row} of Table 1 (see Appendix) is periodic. That question is equivalent to asking if the eigenvalues of certain transfer matrices are complex roots of unity, and this paper is not about them.
\section{Prerequisites on homotopy of independence complexes}
\label{sect:prereq}
We consider finite, undirected graphs. If $v$ is a vertex of $G$ then by $N(v)$ we denote the \emph{neighbourhood} of $v$, i.e. the set of vertices adjacent to $v$, and by $N[v]$ the \emph{closed neighbourhood}, defined as $N[v]=N(v)\cup\{v\}$. If $e=\{u,v\}$ is any pair of vertices (not necessarily an edge), then we set $N[e]=N[u]\cup N[v]$.
If $G\sqcup H$ is the disjoint union of two graphs then its independence complex satisfies
$$\mathrm{Ind}(G\sqcup H)=\mathrm{Ind}(G)\ast\mathrm{Ind}(H)$$
where $\ast$ is the join. In particular, if $\mathrm{Ind}(G)$ is contractible then so is $\mathrm{Ind}(G\sqcup H)$ for any $H$. We refer to \cite{Book} for facts about (combinatorial) algebraic topology. By $\Sigma K=S^0\ast K$ we denote the suspension of a simplicial complex $K$.
The independence complex is functorial in two ways. First, for any vertex $v$ of $G$ there is an inclusion $\mathrm{Ind}(G\setminus v)\hookrightarrow \mathrm{Ind}(G)$. Secondly, if $e$ is an edge of $G$ then the inclusion $G- e\hookrightarrow G$ induces an inclusion $\mathrm{Ind}(G)\hookrightarrow\mathrm{Ind}(G- e)$. This leads to two main methods of decomposing simplicial complexes, using vertex or edge removals. They can be analyzed using the two cofibration sequences:
\begin{eqnarray*}
\mathrm{Ind}(G\setminus N[v])\hookrightarrow\mathrm{Ind}(G\setminus v)\hookrightarrow\mathrm{Ind}(G)\to\Sigma\,\mathrm{Ind}(G\setminus N[v])\to\cdots,\\
\Sigma\,\mathrm{Ind}(G\setminus N[e])\hookrightarrow\mathrm{Ind}(G)\hookrightarrow\mathrm{Ind}(G- e)\to\Sigma^2\,\mathrm{Ind}(G\setminus N[e])\to\cdots,
\end{eqnarray*}
both of which are known in various forms. See \cite{Ada} for proofs.
The following are all immediate consequences of the cofibration sequences.
\begin{lemma}
\label{lemma:simple}
We have the following implications
\begin{itemize}
\item[a)] If $\mathrm{Ind}(G\setminus N[v])$ is contractible then $\mathrm{Ind}(G)\simeq \mathrm{Ind}(G\setminus v)$.
\item[b)] If $\mathrm{Ind}(G\setminus v)$ is contractible then $\mathrm{Ind}(G)\simeq \Sigma\,\mathrm{Ind}(G\setminus N[v])$.
\item[c)] If $\mathrm{Ind}(G\setminus N[e])$ is contractible then $\mathrm{Ind}(G)\simeq \mathrm{Ind}(G- e)$.
\end{itemize}
\end{lemma}
\begin{lemma}
\label{lemma:zadditive}
For any vertex $v$ and edge $e$ of $G$ we have
\begin{eqnarray*}
Z(G)&=&Z(G\setminus v)-Z(G\setminus N[v]),\\
Z(G)&=&Z(G- e)-Z(G\setminus N[e]).
\end{eqnarray*}
\end{lemma}
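The first equality of Lemma \ref{lemma:zadditive} already gives a much better way of computing $Z(G)$ than brute-force enumeration; a minimal memoized Python sketch (names ours, vertex sets encoded as \texttt{frozenset}s):
\begin{verbatim}
def Z_rec(vertices, adj, memo=None):
    # Z(G) = Z(G \ v) - Z(G \ N[v]); Z of the empty graph is 1,
    # since its independence complex is the empty space S^{-1}.
    if memo is None:
        memo = {}
    if not vertices:
        return 1
    if vertices in memo:
        return memo[vertices]
    # branch on a vertex of maximum degree in the induced subgraph
    v = max(vertices, key=lambda u: len(adj[u] & vertices))
    res = (Z_rec(vertices - {v}, adj, memo)
           - Z_rec(vertices - ({v} | adj[v]), adj, memo))
    memo[vertices] = res
    return res
\end{verbatim}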
There are some special combinatorial circumstances where Lemma \ref{lemma:simple} applies.
\begin{lemma}
\label{lemma:simple2}
We have the following implications
\begin{itemize}
\item[a)] (Fold lemma, \cite{Eng1}) If $N(u)\subset N(v)$ then $\mathrm{Ind}(G)\simeq \mathrm{Ind}(G\setminus v)$.
\item[b)] If $u$ is a vertex of degree $1$ and $v$ is its only neighbour then $\mathrm{Ind}(G)\simeq \Sigma\,\mathrm{Ind}(G\setminus N[v])$.
\item[c)] If $u$ and $v$ are two adjacent vertices of degree $2$ which belong to a $4$-cycle in $G$ together with two other vertices $x$ and $y$ then $\mathrm{Ind}(G)\simeq \Sigma\,\mathrm{Ind}(G\setminus\{u,v,x,y\})$ (see Fig.\ref{fig:1}).
\item[d)] If $G$ is a graph that contains any of the configurations shown in Fig.\ref{fig:contr}, then $\mathrm{Ind}(G)$ is contractible.
\end{itemize}
\end{lemma}
\begin{proof}
In a) vertex $v$ satisfies Lemma \ref{lemma:simple}.a) and in b) it satisfies Lemma \ref{lemma:simple}.b). In c) one can first remove $x$ without affecting the homotopy type (because $G\setminus N[x]$ has $u$ as an isolated vertex), and then apply Lemma \ref{lemma:simple} to $v$. Finally d) follows because a single operation of the type described in b) or c) leaves a graph with an isolated vertex.
\end{proof}
A vertex $v$ of $G$ is called \emph{removable} if the inclusion $\mathrm{Ind}(G\setminus v)\hookrightarrow \mathrm{Ind}(G)$ is a homotopy equivalence. We call the graph $G\setminus N[v]$ the \emph{residue graph} of $v$ in $G$. By Lemma \ref{lemma:simple}.b if the residue graph of $v$ has a contractible independence complex then $v$ is removable.
If $e$ is an edge of $G$ then we say $e$ is \emph{removable} if the inclusion $\mathrm{Ind}(G)\hookrightarrow \mathrm{Ind}(G- e)$ is a homotopy equivalence. If $e$ is not an edge of $G$ then $e$ is \emph{insertable} if the inclusion $\mathrm{Ind}(G\cup e)\hookrightarrow\mathrm{Ind}(G)$ is a homotopy equivalence or, equivalently, if $e$ is removable from $G\cup e$. In both cases we call the graph $G\setminus N[e]$ the \emph{residue graph} of $e$ in $G$. By Lemma \ref{lemma:simple}.c if the residue graph of $e$ has a contractible independence complex then $e$ is removable or insertable, accordingly.
\begin{figure}
\begin{center}
\includegraphics[scale=0.85]{fig-9.pdf}
\end{center}
\caption{A configuration which can be removed at the cost of a suspension (Lemma \ref{lemma:simple2}.c).}
\label{fig:1}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{ccccccc}
\includegraphics[scale=0.85]{fig-1.pdf} & &\includegraphics[scale=0.85]{fig-2.pdf} & & \includegraphics[scale=0.85]{fig-3.pdf} & & \includegraphics[scale=0.85]{fig-4.pdf} \\
A & & B & & C & & D
\end{tabular}
\end{center}
\caption{Four types of configurations which force contractibility of the independence complex (Lemma \ref{lemma:simple2}.d).}
\label{fig:contr}
\end{figure}
\section{Proofs of Theorem \ref{thm:periods} and Proposition \ref{prop:other}}
\label{sect:simplesuspension}
We identify the vertices of $P_m$ with $\{1,\ldots,m\}$ and the vertices of $C_n$ with $\mathbb{Z}/n=\{0,\ldots,n-1\}$. Product graphs have vertices indexed by pairs. To deal with the degenerate cases it is convenient to assume that $C_2=P_2$, that $C_1$ is a single vertex with a loop and that $C_0=P_0$ are empty graphs. This convention forces all spaces $\mathrm{Ind}(C_1)$, $\mathrm{Ind}(P_0)$ and $\mathrm{Ind}(C_0)$ to be the empty space $\emptyset=S^{-1}$.
\begin{proof}[Proof of \ref{thm:periods}.a).]
This is a known result of \cite{Koz}, but we present a proof that introduces our method. Identify $P_1\times C_n$ with $C_n$. Let $e=\{0,4\}$. The residue graph of $e$ has $2$ as an isolated vertex, so $e$ is insertable. In $C_n\cup e$ the edge $\{0,1\}$ is removable (because $3$ is isolated in the residue) and subsequently $\{3,4\}$ is removable ($1$ isolated in the residue). It follows that
$$\mathrm{Ind}(C_n)\simeq\mathrm{Ind}(C_n\cup\{0,4\}\setminus\{0,1\}\setminus\{3,4\}).$$
But the latter graph is $C_{n-3}\sqcup P_3$, therefore its independence complex is $\mathrm{Ind}(C_{n-3})\ast\mathrm{Ind}(P_3)\simeq \mathrm{Ind}(C_{n-3})\ast S^0=\Sigma\,\mathrm{Ind}(C_{n-3})$ as required.
\end{proof}
\begin{proof}[Proof of \ref{thm:periods}.b).]
In $P_2\times C_n$ the edge $e_1=\{(1,0),(1,5)\}$ is insertable, because its residue graph contains a configuration of type A (see Lemma \ref{lemma:simple2}.d), namely with vertices $(2,1)$ and $(2,4)$ having degree one. For the same reason the edge $e_2=\{(2,0),(2,5)\}$ is insertable. Now in the graph $(P_2\times C_n)\cup\{e_1,e_2\}$ the edges
$$f_1=\{(1,0),(1,1)\}, f_2=\{(2,0),(2,1)\}, f_3=\{(1,4),(1,5)\}, f_4=\{(2,4),(2,5)\}$$
are all sequentially removable, because the residue graph in each case contains a configuration of type B. Therefore
\begin{eqnarray*}
\mathrm{Ind}(P_2\times C_n)&\simeq & \mathrm{Ind}((P_2\times C_n)\cup\{e_1,e_2\}-\{f_1,f_2,f_3,f_4\})=\\
&=&\mathrm{Ind}(P_2\times C_{n-4}\sqcup P_2\times P_4)=\mathrm{Ind}(P_2\times C_{n-4})\ast\mathrm{Ind}(P_2\times P_4)\simeq\\
&\simeq &\mathrm{Ind}(P_2\times C_{n-4})\ast S^1= \Sigma^2\,\mathrm{Ind}(P_2\times C_{n-4})
\end{eqnarray*}
where $\mathrm{Ind}(P_2\times P_4)$ can be found by direct calculation or from Proposition \ref{prop:other}.
\end{proof}
\begin{remark}
All the proofs in this section will follow the same pattern, that is to split the graph into two parts. One of those parts will be small, i.e. of some fixed size, and its independence complex will always have the homotopy type of a single sphere. Every time we need to use a result of this kind about a graph of small, fixed size, we will just quote the answer, leaving the verification to those readers who do not trust computer homology calculations \cite{Poly}.
\end{remark}
\begin{proof}[Proof of \ref{thm:periods}.c).]
We follow the strategy of b). First we need to show that the edges $e_1=\{(1,0),(1,9)\}$, $e_2=\{(2,0),(2,9)\}$, $e_3=\{(3,0),(3,9)\}$ are insertable.
For $e_1$ the residue graph is shown in Fig.\ref{fig:p3cn}.a. To prove its independence complex is contractible we describe a sequence of operations that either preserve the homotopy type or throw in an extra suspension. The sequence will end with a graph whose independence complex is contractible for some obvious reason, so also the complex we started with is contractible. The operations are as follows: remove $(3,2)$ (by \ref{lemma:simple2}.a with $u=(2,1)$); remove $(2,3)$ (by \ref{lemma:simple2}.a with $u=(1,2)$); remove $N[(3,4)]$ (by \ref{lemma:simple2}.b with $u=(3,3)$); remove $(2,6)$ (by \ref{lemma:simple2}.a with $u=(1,7)$); remove $N[(1,5)]$ (by \ref{lemma:simple2}.b with $u=(2,5)$). In the last graph there is a configuration of type A on the vertices $(3,6)$ and $(1,7)$.
The same argument works for $e_3$. For $e_2$ the residue graph has a connected component shown in Fig.\ref{fig:p3cn}.b. The modifications this time are: remove $N[(1,2)]$, $N[(1,7)]$, $N[(3,2)]$ and $N[(3,7)]$ for reasons of \ref{lemma:simple2}.b. The graph that remains contains a configuration of type A.
Next it remains to check that in $(P_3\times C_n)\cup\{e_1,e_2,e_3\}$ the edges $\{(i,0),(i,1)\}$ and $\{(i,8),(i,9)\}$ are removable for $i=1,2,3$. The types of residue graphs one must consider are quite similar and the arguments for the contractibility of their independence complexes are exact copies of those for $e_1,e_2,e_3$ above. We leave them as an exercise to the reader.
We can thus conclude as before
\begin{equation*}
\mathrm{Ind}(P_3\times C_n)\simeq\mathrm{Ind}(P_3\times C_{n-8})\ast\mathrm{Ind}(P_3\times P_8)\simeq\mathrm{Ind}(P_3\times C_{n-8})\ast S^5= \Sigma^6\,\mathrm{Ind}(P_3\times C_{n-8}).
\end{equation*}
\end{proof}
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[scale=0.85]{fig-6.pdf} & &\includegraphics[scale=0.85]{fig-7.pdf} \\
a) & & b)
\end{tabular}
\end{center}
\caption{Two residue graphs for edges insertable into $P_3\times C_n$.}
\label{fig:p3cn}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.85]{fig-5.pdf}
\end{center}
\caption{The residue graph for edges removable from $P_m\times C_7$.}
\label{fig:pmc7}
\end{figure}
\begin{proof}[Proof of \ref{thm:periods}.d).]
In $P_m\times C_3$ each edge $\{(3,i),(4,i)\}$ is removable for $i=0,1,2$ because each residue graph contains a configuration of type C. As before, this implies an equivalence
\begin{equation*}
\mathrm{Ind}(P_m\times C_3)\simeq \mathrm{Ind}(P_{m-3}\times C_{3})\ast\mathrm{Ind}(P_3\times C_3)\simeq\mathrm{Ind}(P_{m-3}\times C_{3})\ast S^1= \Sigma^2\,\mathrm{Ind}(P_{m-3}\times C_3)
\end{equation*}
\end{proof}
\begin{proof}[Proof of \ref{thm:periods}.e).]
In $P_m\times C_5$ each edge $\{(2,i),(3,i)\}$ is removable for $i=0,1,2,3,4$ because each residue graph contains a configuration of type A with vertices $(1,i-1)$ and $(1,i+1)$ having degree $1$. Again, this means
\begin{equation*}
\mathrm{Ind}(P_m\times C_5)\simeq \mathrm{Ind}(P_{m-2}\times C_{5})\ast\mathrm{Ind}(P_2\times C_5)\simeq\mathrm{Ind}(P_{m-2}\times C_{5})\ast S^1= \Sigma^2\,\mathrm{Ind}(P_{m-2}\times C_5)
\end{equation*}
\end{proof}
\begin{proof}[Proof of \ref{thm:periods}.f).]
We want to show that the edges $e_i=\{(4,i),(5,i)\}$, $i=0,\ldots,6$ are sequentially removable. Then the result will follow as before:
\begin{equation*}
\mathrm{Ind}(P_m\times C_7)\simeq \mathrm{Ind}(P_{m-4}\times C_{7})\ast\mathrm{Ind}(P_4\times C_7)\simeq\mathrm{Ind}(P_{m-4}\times C_{7})\ast S^5= \Sigma^6\,\mathrm{Ind}(P_{m-4}\times C_7).
\end{equation*}
The residue graph of $e_0$ is the graph $G$ from Fig.\ref{fig:pmc7}. We need to show that the independence complex of that graph is contractible. To do this, we will first show that each of the vertices $(4,j)$, $j=2,3,4,5$, is removable in $G$. By symmetry, it suffices to consider $(4,2)$ and $(4,3)$.
Consider first the vertex $(4,2)$ and its residue graph $G\setminus N[(4,2)]$. It can be transformed in the following steps: remove $N[(2,1)]$ (by \ref{lemma:simple2}.b with $u=(3,1)$); remove $N[(1,6)]$ (by \ref{lemma:simple2}.b with $u=(1,0)$); remove $N[(1,3)]$ (by \ref{lemma:simple2}.b with $u=(1,2)$); remove $N[(3,4)]$ (by \ref{lemma:simple2}.b with $u=(3,3)$). In the final graph $(2,5)$ is isolated.
Now we prove the residue graph $G\setminus N[(4,3)]$ has a contractible independence complex. Decompose it as follows: remove $(3,1),(3,2),(2,1),(2,2)$ (by \ref{lemma:simple2}.c); remove $(1,6)$ (by \ref{lemma:simple2}.a with $u=(2,0)$); remove $(2,5)$ (by \ref{lemma:simple2}.a with $u=(3,6)$); remove $N[(1,4)]$ (by \ref{lemma:simple2}.b with $u=(1,5)$). In the final graph $(2,3)$ is isolated.
Since all the vertices in row $4$ of $G$ are removable, $\mathrm{Ind}(G)$ is homotopy equivalent to the join $\mathrm{Ind}(G[1,2,3])\ast \mathrm{Ind}(G[5,\ldots])$, where $G[\ldots]$ means the subgraph of $G$ spanned by the numbered rows. But a direct calculation shows that $\mathrm{Ind}(G[1,2,3])$ is contractible, hence so is $\mathrm{Ind}(G)$. This ends the proof that the edge $e_0=\{(4,0),(5,0)\}$ of $P_m\times C_7$ is removable.
For all other edges $e_i$ in the sequence the residue graph will look exactly like $G$ with possibly some edges between rows $4$ and $5$ missing. This has no impact on contractibility since all of the above proof took place in rows $1,2,3$ of Fig.\ref{fig:pmc7}. That means that all $e_i$ are removable, as required.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:other}.]
We just sketch the arguments and the reader can check the details. Part a) is a result of \cite{Koz} and also follows from observing that the edge $\{3,4\}$ of the path $P_n$ is removable as its residue graph has an isolated vertex $1$. Part b) follows directly from Lemma \ref{lemma:simple2}.c.
For part c), each edge $\{(i,4),(i,5)\}$, $i=1,2,3$ of $P_3\times P_n$ is removable because their residue graphs either contain, or can easily be reduced to contain, a configuration of type C or D. Then the graph splits into two components and we conclude as usual.
In d) we first show that each edge $\{(i,0),(i,4)\}$, $i=0,1,2$ is insertable into $C_3\times C_n$ because the residue graph contains a configuration of type C. Then in the enlarged graph the obvious edges which must be removed to obtain a disjoint union $C_3\times C_{n-3}\sqcup C_3\times P_3$ are indeed removable, again because of a type C configuration in their residue graphs. We conclude as always.
\end{proof}
\section{Cylinders with even circumference: Patterns}
\label{sect:patterns}
To prove the results about cylinders of even circumference we will need quite a lot of notation. On the plus side, once all the objects are properly defined, the proofs will follow in a fairly straightforward way. It is perhaps instructive to read this and the following sections simultaneously. The next section contains a working example of what is going on for $n=6$. From now on $n$, the length of the cycle, is a fixed even integer which will not appear in the notation.
A \empha{pattern} $\mathcal{P}$ is a matrix of size $2\times n$ with $0/1$ entries, such that if $\mathcal{P}(1,i)=1$ then $\mathcal{P}(2,i)=1$, i.e. below a $1$ in the first row there is always another $1$ in the second row. An example of a pattern is
$$
\mathcal{P}=\left(\begin{array}{cccccc}1&0&1&0&0&0\\ 1&1&1&1&0&1\end{array}\right).
$$
We also call $n$ the length of the pattern. The rows of a pattern are indexed by $1$ and $2$, while the columns are indexed with $0,\ldots,n-1$, as the vertices of $C_n$. Given $i$ we say $\mathcal{P}(1,i)$ is `above' $\mathcal{P}(2,i)$ and $\mathcal{P}(2,i)$ is `below' $\mathcal{P}(1,i)$. We identify a pattern with patterns obtained by a cyclic shift or by a reflection, since they will define isomorphic graphs (see below). Also the words `left', `right' and `adjacent' are understood in the cyclic sense.
Given a pattern $\mathcal{P}$ define $G(\mathcal{P};m)$ as the induced subgraph of $P_m\times C_n$ obtained by removing those vertices $(1,i)$ and $(2,i)$ for which $\mathcal{P}(1,i)=0$, resp. $\mathcal{P}(2,i)=0$. This amounts to applying a `bit mask' defined by $\mathcal{P}$ to the first two rows of $P_m\times C_n$. The graph $G(\mathcal{P};m)$ for the pattern $\mathcal{P}$ above is:
\begin{center}
\includegraphics[scale=0.85]{fig-14}
\end{center}
Define the simplified notation $Z(\mathcal{P};m):=Z(G(\mathcal{P};m))$. If $\mathcal{I}$ denotes the all-ones pattern then $Z(\mathcal{I};m)=Z(P_m\times C_n)$ is the value we are interested in.
We now need names for some structures within a row:
\begin{itemize}
\item A \empha{singleton} is a single $1$ with a $0$ both on the left and on the right.
\item A \empha{block} is a contiguous sequence of $1$s of length at least $3$, which is bounded by a $0$ both on the left and on the right.
\item A \empha{run} is a sequence of blocks and singletons separated by single $0$s.
\item A \empha{nice run} is a run in which every block has length exactly $3$.
\end{itemize}
\begin{example}
The sequence $01010111010111010$ is a nice run. The sequence $01110111101010$ is a run, but it is not a nice run. The sequences $01011010$ and $0101001110$ are not runs.
\end{example}
A pattern is \empha{reducible} if the only occurrences of $1$ in the first row are singletons. Otherwise we call it \empha{irreducible}. In particular, a pattern whose first row contains only $0$s is reducible.
We now need three types of pattern transformations, which we denote $V$ (for vertex), $N$ (for neighbourhood) and $R$ (for reduction). The first two can be performed on any pattern. Suppose $\mathcal{P}$ is a pattern and $i$ is an index such that $\mathcal{P}(1,i)=1$.
\begin{itemize}
\item An \empha{operation of type $V$} sets $\mathcal{P}(1,i)=0$. We denote the resulting pattern $\mathcal{P}^{V,i}$.
\item An \empha{operation of type $N$} sets $\mathcal{P}(1,i-1)=\mathcal{P}(1,i)=\mathcal{P}(1,i+1)=\mathcal{P}(2,i)=0$. We denote the resulting pattern $\mathcal{P}^{N,i}$.
\end{itemize}
All other entries of the pattern remain unchanged. If it is clear what index $i$ is used we will abbreviate the notation to $\mathcal{P}^V$ and $\mathcal{P}^N$.
\begin{lemma}
\label{lem:zvn}
For any pattern $\mathcal{P}$ and index $i$ such that $\mathcal{P}(1,i)=1$ we have
$$Z(\mathcal{P};m)=Z(\mathcal{P}^{V,i};m)-Z(\mathcal{P}^{N,i};m),\quad m\geq 2.$$
\end{lemma}
\begin{proof}
This is exactly the first equality of Lemma \ref{lemma:zadditive} for $G=G(\mathcal{P};m)$.
\end{proof}
The third operation, \empha{of type $R$}, can be applied to a reducible pattern $\mathcal{P}$ and works as follows. First, temporarily extend $\mathcal{P}$ with a new, third row, filled with ones. Now for every index $i$ such that $\mathcal{P}(1,i)=1$ (note that no two such $i$ are adjacent) make the assignment
$$\mathcal{P}(1,i)=\mathcal{P}(2,i-1)=\mathcal{P}(2,i)=\mathcal{P}(2,i+1)=\mathcal{P}(3,i)=0.$$
Having done this for all such $i$ remove the first row (which is now all zeroes) and let $\mathcal{P}^R$ be the pattern formed by the second and third row.
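For concreteness, here is a Python sketch of the three operations, with a pattern stored as a pair of $0/1$ lists indexed cyclically (an encoding of ours, used again in the numerical check after Section~\ref{sect:example6}):
\begin{verbatim}
def op_V(P, i):
    # type V: clear the entry (1, i)
    row1, row2 = list(P[0]), list(P[1])
    row1[i] = 0
    return (row1, row2)

def op_N(P, i):
    # type N: clear (1, i-1), (1, i), (1, i+1) and (2, i)
    n = len(P[0])
    row1, row2 = list(P[0]), list(P[1])
    row1[(i - 1) % n] = row1[i] = row1[(i + 1) % n] = 0
    row2[i] = 0
    return (row1, row2)

def op_R(P):
    # type R, for a reducible pattern: erase around each singleton 1
    # of the first row, then drop the (now zero) first row and append
    # a fresh all-ones row at the bottom.
    n = len(P[0])
    row2, row3 = list(P[1]), [1] * n
    for i in range(n):
        if P[0][i] == 1:
            row2[(i - 1) % n] = row2[i] = row2[(i + 1) % n] = 0
            row3[i] = 0
    return (row2, row3)
\end{verbatim}
One can check, for instance, that \texttt{op\_R} exchanges the patterns $\mathcal{A}$ and $\mathcal{D}$ of Section~\ref{sect:example6} up to a cyclic shift.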
\begin{lemma}
\label{lem:zr}
Suppose $\mathcal{P}$ is a reducible pattern and $k$ is the number of ones in its first row. Then
$$Z(\mathcal{P};m)=(-1)^kZ(\mathcal{P}^R;m-1),\quad m\geq 2.$$
\end{lemma}
\begin{proof}
It follows from a $k$-fold application of Lemma \ref{lemma:simple2}.b).
\end{proof}
We now describe a class of patterns which arise when one applies the above operations in some specific way to the all-ones pattern. A pattern $\mathcal{P}$ is called \empha{proper} if it satisfies the following conditions:
\begin{itemize}
\item The whole second row is either a run or it consists of only $1$s.
\item If the second row has only $1$s then the first row is a nice run.
\item Above every singleton $1$ in the second row there is a $0$ in the first row.
\item Above every block of length $3$ in the second row there are $0$s in the first row.
\item If $B$ is any block in the second row of length at least $4$ then above $B$ there is a nice run $R$, subject to the conditions:
\begin{itemize}
\item if the leftmost group of $1$s in $R$ is a block (of length $3$) then the leftmost $1$ of that block is located exactly above the $3$rd position of $B$,
\item if the leftmost group of $1$s in $R$ is a singleton then it is located exactly above the $2$nd or $3$rd position of $B$.
\end{itemize}
By symmetry the same rules apply to the rightmost end of $B$ and $R$.
\end{itemize}
Note that a proper pattern does not contain the sequences $0110$ nor $1001$ in any row. Also note that the first row of any proper pattern can only contain singletons and blocks of length $3$, and no other groups of $1$s.
\begin{example}
\label{ex:proper}
Here are some proper patterns:
\begin{eqnarray*}
\mathcal{A}=\left(\begin{array}{cccccccccc}0&1&0&1&0&0&0&0&0&0\\1&1&1&1&1&0&1&1&1&0 \end{array}\right) &
\mathcal{B}=\left(\begin{array}{cccccccccc}1&0&1&1&1&0&1&1&1&0\\1&1&1&1&1&1&1&1&1&1 \end{array}\right) \\
\mathcal{C}=\left(\begin{array}{cccccccccc}0&0&1&1&1&0&0&0&0&0\\1&1&1&1&1&1&1&0&1&0 \end{array}\right) &
\mathcal{D}=\left(\begin{array}{cccccccccc}0&0&1&0&1&1&1&0&0&0\\1&1&1&1&1&1&1&1&1&0 \end{array}\right)
\end{eqnarray*}
\end{example}
A pattern is called \empha{initial} if it is obtained from the all-ones pattern $\mathcal{I}$ by performing, for each even index $i=0,2,4,\ldots,n-2$ one of the operations of type $V$ or $N$. It means there should be $2^{n/2}$ initial patterns, but some of them can be identified via cyclic shift or reflection. One can easily see that every initial pattern is reducible. Moreover, by a repeated application of Lemma \ref{lem:zvn} we get that $Z(P_m\times C_n)$ is a linear combination of the numbers $Z(\mathcal{P};m)$ for initial patterns $\mathcal{P}$. More importantly, we have:
\begin{lemma}
\label{lem:initialproper}
Every initial pattern is proper.
\end{lemma}
\begin{proof}
If the operations we perform at positions $i=0,2,\ldots,n-2$ are all $V$ or all $N$ then we get one of the patterns described in Lemma \ref{lem:mu0patterns}. If we perform $N$ at positions $i$ and $i+2$ we get a singleton in position $i+1$ of the second row with a $0$ above it. For a choice of $NVN$ at $i$, $i+2$, $i+4$ we get a block of length $3$ in the second row with $0$s above. Finally for a longer segment $NV\cdots VN$ the outcome is a block with a nice run of singletons starting and ending above the $3$rd position in the block. We can never get two adjacent $0$s in the second row, so it is a run. Here is a summary of the possible outcomes:
$$
\bordermatrix{&&&\mathbf{N}&&V&&V&&V&&V&&\mathbf{N}&&V&&\mathbf{N}&&\mathbf{N}&&\cr&\cdots&0&0&0&0&1&0&1&0&1&0&0&0&0&0&0&0&0&0&0&\cdots\cr&\cdots&1&0&1&1&1&1&1&1&1&1&1&0&1&1&1&0&1&0&1&\cdots}
$$
where the labels $V,N$ indicate which operation was applied at a position.
\end{proof}
Now we introduce the main tool: an invariant which splits proper patterns into classes which can be analyzed recursively.
\begin{definition}
\label{def:megadefinition}
For any proper pattern $\mathcal{P}$ we define the $\mu$-invariant as follows:
\begin{eqnarray*}
\mu(\mathcal{P})&=&\mathrm{(\ number\ of\ blocks\ in\ the\ first\ row\ of\ }\mathcal{P}\mathrm{\ )} + \\
& & \mathrm{(\ number\ of\ blocks\ in\ the\ second\ row\ of\ }\mathcal{P}\mathrm{\ )}.
\end{eqnarray*}
\end{definition}
\begin{example}
All the proper patterns in Example \ref{ex:proper} have $\mu$-invariant $2$.
\end{example}
\begin{lemma}
\label{lem:mu0patterns}
For every proper pattern $\mathcal{P}$ we have $0\leq\mu(\mathcal{P})\leq n/4$. The only patterns with $\mu(\mathcal{P})=0$ are
$$
\mathcal{P}_1=\left(\begin{array}{ccccccc}1&0&1&0&\cdots&1&0\\ 1&1&1&1&\cdots&1&1\end{array}\right),\quad \mathcal{P}_2=\left(\begin{array}{ccccccc}0&0&0&0&\cdots&0&0\\ 1&0&1&0&\cdots&1&0\end{array}\right).
$$
\end{lemma}
\begin{proof}
If $\mu(\mathcal{P})=0$ then $\mathcal{P}$ has no blocks in either row. If the second row is all-ones then the first row must be a nice run with no blocks, so $\mathcal{P}=\mathcal{P}_1$. Otherwise the second row is an alternating $0/1$ sequence, but then the first row cannot have a $1$ anywhere, so $\mathcal{P}=\mathcal{P}_2$.
Now consider the following map. To every block in the second row we associate its two rightmost points, its leftmost point and the immediate left neighbour of the leftmost point. To every block in the first row we associate its three points and the point immediately left. This way every block which contributes to $\mu(\mathcal{P})$ is given $4$ points, and those sets are disjoint for different blocks. The only non-obvious part of the last claim follows since a block in the first row does not have any point over the two outermost points of the block below it. That ends the proof of the upper bound.
\end{proof}
Next come the crucial observations about operations on proper patterns and their $\mu$-invariants.
\begin{proposition}
\label{prop:propervn}
Suppose $\mathcal{P}$ is a proper, irreducible pattern and let $i$ be the index of the middle element of some block in the first row. Then $\mathcal{P}^{V,i}$ and $\mathcal{P}^{N,i}$ are proper and
$$\mu(\mathcal{P}^{V,i})=\mu(\mathcal{P})-1,\quad \mu(\mathcal{P}^{N,i})=\mu(\mathcal{P}).$$
\end{proposition}
\begin{proof}
There are two cases, depending on whether the block of length $3$ centered at $i$ is, or is not, the outermost group of $1$s in its run. If it is the outermost one then it starts over the $3$rd element of the block below it. The two possible situations are depicted below.
$$
\left(\begin{array}{ccccccccc}\cdots&0&0&0&1&1&1&0&\cdots \\ \cdots&0&1&1&1&1&1&1&\cdots\end{array}\right),\quad
\left(\begin{array}{ccccccccc}\cdots&1&0&1&1&1&0&1&\cdots \\ \cdots&1&1&1&1&1&1&1&\cdots\end{array}\right).
$$
In $\mathcal{P}^V$ the second row is the same as in $\mathcal{P}$. Above the current block we still have a nice run and its outermost $1$s are in the same positions. That means $\mathcal{P}^V$ is still proper. The number of blocks in the first row dropped by one, so $\mu(\mathcal{P}^V)=\mu(\mathcal{P})-1$.
The proof for $\mathcal{P}^N$ depends on the two cases. In the first case an operation of type $N$ splits the block in the second row, creating a new block of size $3$ with $0$s above it. In the second of the two blocks produced by the splitting, the first two $1$s have $0$s above them, so whatever run there was in $\mathcal{P}$ is still there and starts in an allowed position. That means we get a proper pattern. One block was removed and one split into two, so $\mu$ does not change.
In the second case the situation is similar. We increase the number of blocks in the second row by one while removing one block from the first row. The two outermost positions in the new block(s) have $0$s above them, so the nice runs which remain above them start in correct positions. Again $\mathcal{P}^N$ is proper.
\end{proof}
\begin{proposition}
\label{prop:properr}
If $\mathcal{P}$ is a proper, reducible pattern then $\mathcal{P}^{R}$ is proper and
$$\mu(\mathcal{P}^{R})=\mu(\mathcal{P}).$$
\end{proposition}
\begin{proof}
Consider first the case when $\mathcal{P}$ has only $0$s in the first row. Then the second row is a run with blocks only of size $3$ (because a longer block would require something above it). This means $\mathcal{P}^R$ has a full second row with a nice run in the first row. Such a pattern is proper and $\mu(\mathcal{P}^R)=\mu(\mathcal{P})$ as we count the same blocks.
Now we move to the case when $\mathcal{P}$ has at least one $1$ in the first row. Note that $\mathcal{P}(1,i)=1$ if and only if $\mathcal{P}^R(2,i)=0$. That last condition implies that the second row of $\mathcal{P}^R$ is a run (as the first row of $\mathcal{P}$ does not contain the sequences $11$ nor $1001$).
If $\mathcal{P}^R(2,i)$ is a singleton $1$ then in $\mathcal{P}$ we must have had $\mathcal{P}(1,i-1)=\mathcal{P}(1,i+1)=1$ but then $\mathcal{P}(2,i)$ was erased by the operation $R$ and therefore $\mathcal{P}^R(1,i)=0$. This proves $\mathcal{P}^R$ has zeroes above the singletons of its second row.
Now consider any block $B$ in the second row of $\mathcal{P}^R$ and assume without loss of generality that it occupies positions $1,\ldots,l$, hence $\mathcal{P}(1,0)=\mathcal{P}(1,l+1)=1$, $\mathcal{P}^R(2,0)=\mathcal{P}^R(2,l+1)=0$ and $\mathcal{P}(1,i)=0$ for all $1\leq i\leq l$. It means that the situation in $\mathcal{P}$ must have looked like one of these (up to symmetry):
\[
\begin{array}{ccccccccccccc}
& & 0 & & & & & & & &l+1& &\\
\ldelim({2}{0.5em} & 0 & 1 & 0 & 0 & 0 & & 0 & 0 & 0 & 1 & 0 & \rdelim){2}{0.5em} \\
& 1 & 1 & 1 & 0 & 1 &\cdots& 1 & 0 & 1 & 1 & 1 &\\
& & 0 & B & B & B & & B & B & B & 0 & &\\
\end{array}
\]
\[
\begin{array}{ccccccccccccc}
& & 0 & & & & & & & &l+1& &\\
\ldelim({2}{0.5em} & 0 & 1 & 0 & 0 & 0 & & 0 & 0 & 0 & 1 & 0 & \rdelim){2}{0.5em} \\
& 1 & 1 & 1 & 0 & 1 &\cdots& 0 & 1 & 1 & 1 & 1 &\\
& & 0 & B & B & B & & B & B & B & 0 & &\\
\end{array}
\]
\[
\begin{array}{ccccccccccccc}
& & 0 & & & & & & & &l+1& &\\
\ldelim({2}{0.5em} & 0 & 1 & 0 & 0 & 0 & & 0 & 0 & 0 & 1 & 0 & \rdelim){2}{0.5em} \\
& 1 & 1 & 1 & 1 & 0 &\cdots& 0 & 1 & 1 & 1 & 1 &\\
& & 0 & B & B & B & & B & B & B & 0 & &\\
\end{array}
\]
The letters $B$ indicate where the block $B$ will stretch in what will become the future second row of $\mathcal{P}^R$. The $1$s in $\mathcal{P}(1,0)$ and $\mathcal{P}(1,l+1)$ must be located above the $2$nd or $3$rd element of a block. The part of the second row in $\mathcal{P}$ denoted by $\cdots$ is a run with no $1$s above, so it must be a nice run. It follows that in $\mathcal{P}^R$ above $B$ we will get a nice run and by checking the three cases we see that the run starts above the $2$nd or $3$rd element of $B$, and it only starts above the $2$nd element if it has a singleton there.
There is just one way in which we can obtain a block $B$ of size $3$:
\[
\begin{array}{ccccccc}
& 0 & & & &l+1&\\
\ldelim({2}{0.5em} & 1 & 0 & 0 & 0 & 1 &\rdelim){2}{0.5em} \\
& 1 & 1 & 0 & 1 & 1 &\\
& 0 & B & B & B & 0 &\\
\end{array}
\]
and then the operation $R$ will erase everything above that block. This completes the check that the pattern $\mathcal{P}^R$ is proper.
It remains to compute $\mu(\mathcal{P}^R)$. Our previous discussion implies that:
\begin{itemize}
\item every block of size $3$ in the second row of $\mathcal{P}$ becomes a block in the first row of $\mathcal{P}^R$,
\item every two consecutive blocks longer than $3$ yield, between them, a block $B$ in the second row of $\mathcal{P}^R$,
\end{itemize}
and every block in $\mathcal{P}^R$ arises in this way. It means that every block in $\mathcal{P}$ contributes one to the count of blocks in $\mathcal{P}^R$ (in either first or second row). That proves $\mu(\mathcal{P}^R)=\mu(\mathcal{P})$.
\end{proof}
\section{An example: $n=6$}
\label{sect:example6}
First of all the value $Z(P_m\times C_6)=Z(\mathcal{I};m)$ for the all-ones pattern $\mathcal{I}$ splits into a linear combination of $Z$-values for the following patterns.
\begin{eqnarray*}
\mathcal{A}=\bordermatrix{&V&&V&&V&\cr&0&1&0&1&0&1\cr&1&1&1&1&1&1} & \mathcal{B}=\bordermatrix{&V&&V&&N&\cr&0&1&0&0&0&0\cr&1&1&1&1&0&1}\\
\mathcal{C}=\bordermatrix{&V&&N&&N&\cr&0&0&0&0&0&0\cr&1&1&0&1&0&1} & \mathcal{D}=\bordermatrix{&N&&N&&N&\cr&0&0&0&0&0&0\cr&0&1&0&1&0&1}
\end{eqnarray*}
The labels $V,N$ indicate which operation was applied to the particular position $i=0,2,4$. Any other pattern we get is isomorphic to one of these, and Lemma \ref{lem:zvn} unfolds recursively into:
\begin{equation}
\label{eq:6split}
Z(\mathcal{I};m)=Z(\mathcal{A};m)-3Z(\mathcal{B};m)+3Z(\mathcal{C};m)-Z(\mathcal{D};m).
\end{equation}
We also see that (Definition \ref{def:megadefinition}):
$$\mu(\mathcal{A})=0+0=0,\ \mu(\mathcal{B})=0+1=1,\ \mu(\mathcal{C})=0+1=1,\ \mu(\mathcal{D})=0+0=0.$$
All the patterns we have now are reducible. The first obvious reductions are
$$\mathcal{A}^R=\mathcal{D},\quad \mathcal{D}^R=\mathcal{A}$$
and by Lemma \ref{lem:zr} they lead to
$$Z(\mathcal{D};m)=Z(\mathcal{A};m-1),\quad Z(\mathcal{A};m)=-Z(\mathcal{D};m-1)=-Z(\mathcal{A};m-2).$$
It means that given the initial conditions for $Z(\mathcal{A};m)$ and $Z(\mathcal{D};m)$ we have now completely determined those sequences and their generating functions.
Note that $\mathcal{A}$ and $\mathcal{D}$ had $\mu$-invariant $0$. Now we move on to the patterns with the next $\mu$-invariant value $1$. We can reduce $\mathcal{B}$ (even three times) and $\mathcal{C}$ and apply Lemma \ref{lem:zr}:
$$\mathcal{B}^{RRR}=\mathcal{E},\ \mathcal{C}^R=\mathcal{E}; \quad Z(\mathcal{B};m)=-Z(\mathcal{E};m-3),\ Z(\mathcal{C};m)=Z(\mathcal{E};m-1)$$
where
$$
\mathcal{E}=\left(\begin{array}{cccccc}1&0&1&1&1&0\\1&1&1&1&1&1\end{array}\right).
$$
Still $\mu(\mathcal{E})=1$. Now we can apply a $V,N$-type splitting in the middle of the length $3$ block in the first row of $\mathcal{E}$, as in Proposition \ref{prop:propervn}. We have by Lemma \ref{lem:zvn}:
$$\mathcal{E}^V=\mathcal{A},\ \mathcal{E}^N=\mathcal{B}; \quad \quad Z(\mathcal{E};m)=Z(\mathcal{A};m)- Z(\mathcal{B};m)=Z(\mathcal{A};m)+ Z(\mathcal{E};m-3)$$
where $\mu(\mathcal{A})=0$, so the sequence $Z(\mathcal{A};m)$ is already known.
This recursively determines all the sequences and it is a matter of a mechanical calculation to derive their generating functions (some care must be given to the initial conditions). We can also check periodicities directly. The sequences with $\mu$-invariant $0$ are $4$-periodic:
$$Z(\mathcal{A};m)=-Z(\mathcal{A};m-2)=Z(\mathcal{A};m-4)$$
and those with $\mu$-invariant $1$ are $12$-periodic:
$$Z(\mathcal{E};m)=Z(\mathcal{A};m)+Z(\mathcal{A};m-3)+Z(\mathcal{A};m-6)+Z(\mathcal{A};m-9)+Z(\mathcal{E};m-12)=Z(\mathcal{E};m-12)$$
since $Z(\mathcal{A};m)=-Z(\mathcal{A};m-6)$. By (\ref{eq:6split}) this means $12$-periodicity of $Z(P_m\times C_6)$.
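All of the above bookkeeping can be cross-checked numerically for small $m$, reusing the sketch \texttt{Z\_rec} from Section~\ref{sect:prereq} (names and encodings are ours):
\begin{verbatim}
def masked_grid(P, m, n):
    # the induced subgraph G(P; m) of P_m x C_n: the pattern P masks
    # rows 1 and 2; rows are 1..m, columns 0..n-1 (cyclic)
    verts = {(r, c) for r in range(1, m + 1) for c in range(n)
             if r > 2 or P[r - 1][c] == 1}
    adj = {v: set() for v in verts}
    for (r, c) in verts:
        for w in ((r + 1, c), (r, (c + 1) % n)):
            if w in adj:
                adj[(r, c)].add(w)
                adj[w].add((r, c))
    return frozenset(verts), adj

I = ([1] * 6, [1] * 6)
A = ([0, 1, 0, 1, 0, 1], [1] * 6)
B = ([0, 1, 0, 0, 0, 0], [1, 1, 1, 1, 0, 1])
C = ([0] * 6, [1, 1, 0, 1, 0, 1])
D = ([0] * 6, [0, 1, 0, 1, 0, 1])
for m in (2, 3, 4):
    ZI, ZA, ZB, ZC, ZD = (Z_rec(*masked_grid(P, m, 6))
                          for P in (I, A, B, C, D))
    assert ZI == ZA - 3 * ZB + 3 * ZC - ZD   # checks equation (eq:6split)
\end{verbatim}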
\section{Proof of Theorem~\ref{thm:genfun}}
\label{sect:weakproof}
Everything is now ready to prove Theorem \ref{thm:genfun}. We are going to deduce it from a refined version given below. Throughout this section an even number $n$ is still fixed. For any pattern $\mathcal{P}$ of length $n$ define the generating function
\begin{equation}
\label{eqn:fp}
f_\mathcal{P}(t)=\sum_{m=0}^\infty Z(\mathcal{P};m)t^m.
\end{equation}
\begin{proposition}
\label{prop:weakrefined}
For any proper pattern $\mathcal{P}$ the function $f_\mathcal{P}(t)$ is a rational function such that all zeroes of its denominator are complex roots of unity.
\end{proposition}
The sequence $Z(P_m\times C_n)$ is a linear combination of sequences $Z(\mathcal{P};m)$ for initial patterns $\mathcal{P}$ (modulo initial conditions). Every initial pattern is proper (Lemma \ref{lem:initialproper}), so Theorem \ref{thm:genfun} follows. It remains to prove Proposition \ref{prop:weakrefined}, and this is done along the lines of the example in Section \ref{sect:example6}.
\begin{proof}
We will prove the statement by induction on the $\mu$-invariant of $\mathcal{P}$. If $\mu(\mathcal{P})=0$, then $\mathcal{P}$ is one of the patterns from Lemma \ref{lem:mu0patterns}. Each of them satisfies $\mathcal{P}^{RR}=\mathcal{P}$ and by Lemma \ref{lem:zr}:
$$Z(\mathcal{P};m)=(-1)^{n/2}Z(\mathcal{P};m-2)$$
hence $f_\mathcal{P}(t)$ has the form
$$\frac{a+bt}{1-(-1)^{n/2}t^2}.$$
Now consider any fixed value $\mu>0$ of the $\mu$-invariant and suppose the result was proved for all proper patterns with smaller $\mu$-invariants. Consider the directed graph whose vertices are all proper patterns with that invariant $\mu$. For any reducible $\mathcal{P}$ there is an edge $\mathcal{P}\to\mathcal{P}^R$ and for any irreducible $\mathcal{P}$ there is an edge $\mathcal{P}\to\mathcal{P}^N$ for a single, fixed choice of an $N$-type operation applied at the middle of a block. Since the graph is finite and the outdegree of each vertex is $1$, it consists of directed cycles with some attached trees pointing towards the cycles.
If $\mathcal{P}$ is a vertex on one of the cycles, then by moving along the cycle and performing the operations prescribed by the edges we will get back to $\mathcal{P}$ and obtain (Lemmas \ref{lem:zvn}, \ref{lem:zr} and Propositions \ref{prop:propervn},\ref{prop:properr}) a recursive equation of the form
\begin{equation}\label{recursiveeq}Z(\mathcal{P};m)=\pm Z(\mathcal{P};m-a)+\sum_\mathcal{R}\pm Z(\mathcal{R};m-b_\mathcal{R})\end{equation}
where $a>0$, $b_\mathcal{R}\geq 0$ and $\mathcal{R}$ runs through some proper patterns with invariant $\mu-1$ (Proposition \ref{prop:propervn}). Now the result follows by induction (the denominator of $f_\mathcal{P}(t)$ acquires an extra factor $1\pm t^a$ on top of the denominators coming from the combination of the functions $f_\mathcal{R}(t)$).
If $\mathcal{P}$ is not on a cycle then it has a path to a cycle and the result follows in the same way.
\end{proof}
\begin{remark}
It is also clear that in order to prove Conjecture~\ref{thm:genfunmedium} one must have better control over the cycle lengths in the directed graph appearing in the proof. We will construct a more accessible model for this in the next section, see Theorem~\ref{thm:necklace-to-patterns}.
\end{remark}
\section{Necklaces}
\label{section:neck}
In this section we describe an appealing combinatorial model which encodes the reducibility relation between patterns. As before $n$ is an even positive integer and $k$ is any positive integer.
We define a \textbf{\emph{$(k,n)$-necklace}} to be a collection of $2k$ points (\emph{stones}) distributed along the circumference of a circle of length $n$, together with an assignment of a number from $\{-2,-1,1,2\}$ to each of the stones. We call these numbers \emph{stone vectors} and we think of them as actual short vectors attached to the stones and tangent to the circle. The vector points $1$ or $2$ units clockwise (positive value) or anti-clockwise (negative value) from each stone and we say a stone \emph{faces} the direction of its vector. See Fig.\ref{fig:neck1} for an example worth more than a thousand words.
During a \emph{jump} a stone moves $1$ or $2$ units along the circle in the direction and distance prescribed by its vector. The configuration of stones and vectors is subject to the following conditions:
\begin{itemize}
\item consecutive stones face in opposite directions,
\item if two consecutive stones face away from each other then their distance is an odd integer,
\item if two consecutive stones face towards each other then their distance \emph{plus} the lengths of their vectors is an odd integer,
\item if two consecutive stones face towards each other then their distance is at least $3$; moreover if their distance is exactly $3$ then their vectors have length $1$.
\end{itemize}
The last two conditions can be conveniently rephrased as follows: if two stones facing towards each other simultaneously jump then after the jump their distance will be an odd integer and they will not land in the same point nor jump over one another.
We identify $(k,n)$-necklaces which differ by an isometry of the circle. Clearly the number of $(k,n)$-necklaces is finite.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics{fig-10} & \includegraphics{fig-11}\\
a) & b)
\end{tabular}
\caption{\label{fig:neck1}Two sample $(3,16)$-necklaces. Each has $6$ stones. The arrows are stone vectors of length $1$ (shorter) or $2$ (longer). The necklace in b) is the image under $T$ of the necklace in a).}
\end{center}
\end{figure}
Next we describe a \textbf{necklace transformation $T$} which takes a $(k,n)$-necklace and performs the following operations:
\begin{itemize}
\item (JUMP) all stones jump as dictated by their vectors,
\item (TURN) all stone vectors change according to the rule
$$-2\to 1, \quad -1\to 2, \quad 1\to -2, \quad 2\to -1,$$
i.e. both direction and length are switched to the other option,
\item (FIX) if any two stones find themselves at distance $3$ facing each other and any of their vectors has length $2$, then adjust the offending vectors by reducing their length to $1$.
\end{itemize}
An example of a necklace $N$ and its image $TN$ is shown in Fig.\ref{fig:neck1}. It is easy to check that if $N$ is a $(k,n)$-necklace then so is $TN$.
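The transformation is straightforward to implement; in the following Python sketch (encoding ours) a necklace is a list of pairs \texttt{(position, vector)} with integer positions taken modulo $n$:
\begin{verbatim}
def necklace_T(stones, n):
    # JUMP, then TURN, then FIX
    turn = {-2: 1, -1: 2, 1: -2, 2: -1}
    new = sorted(((p + v) % n, turn[v]) for p, v in stones)
    k2 = len(new)
    fixed = list(new)
    for i in range(k2):
        (p, v), (q, w) = new[i], new[(i + 1) % k2]
        # two stones facing each other at distance 3: shorten vectors
        if v > 0 and w < 0 and (q - p) % n == 3:
            fixed[i] = (p, 1)
            fixed[(i + 1) % k2] = (q, -1)
    return fixed
\end{verbatim}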
\begin{definition}
\label{def:necklace-graph}
Define $\mathrm{Neck}(k,n)$ to be the directed graph whose vertices are all the isomorphism classes of $(k,n)$-necklaces and such that for each $(k,n)$-necklace $N$ there is a directed edge $N\to TN$.
\end{definition}
\begin{lemma}
\label{lemma:neck-general}
The graph $\mathrm{Neck}(k,n)$ is nonempty if and only if $1\leq k\leq n/4$ and it is always a disjoint union of directed cycles.
\end{lemma}
\begin{proof}
To each stone whose vector faces clockwise we associate the open arc segment of length $2$ from that stone in the direction of its vector. To each stone whose vector faces counter-clockwise we associate the open arc segment of length $2$ of which this stone is the midpoint. The segments associated to different stones are disjoint hence $2k\cdot 2\leq n$, as required (compare the proof of Lemma \ref{lem:mu0patterns}).
The out-degree of every vertex in $\mathrm{Neck}(k,n)$ is $1$, so it suffices to show that the in-degree is at least $1$. Given a $(k,n)$-necklace $N$ let $T^{-1}$ denote the following sequence of operations:
\begin{itemize}
\item for each stone which \emph{does not} face towards another stone in distance $3$, change the stone vector according to the rule
$$-2\to -1, \quad -1\to -2, \quad 1\to 2, \quad 2\to 1,$$
\item jump with all the stones,
\item change all stone vectors according to the rule
$$-2\to 2, \quad -1\to 1, \quad 1\to -1, \quad 2\to -2.$$
\end{itemize}
One easily checks that $T^{-1}N$ is a $(k,n)$-necklace and that $TT^{-1}N=T^{-1}TN=N$.
\end{proof}
Some boundary cases of $\mathrm{Neck}(k,n)$ are easy to work out.
\begin{lemma}
\label{lemma:neck-small}
For any even $n$ the graph $\mathrm{Neck}(1,n)$ is a cycle of length $n-3$. For any $k$ the graph $\mathrm{Neck}(k,4k)$ is a single vertex with a loop. For any $k$ the graph $\mathrm{Neck}(k,4k+2)$ is a cycle of length $k+2$ and $\lfloor k/2 \rfloor$ isolated vertices with loops.
\end{lemma}
\begin{proof}
A $(1,n)$-necklace is determined by a choice of an odd number $3\leq d\leq n-1$ (the length of the arc along which the two stones face each other) and a choice of $\epsilon\in\{1,2\}$ (the length of the vectors at both stones which must be the same due to the parity constraints), with the restriction that if $d=3$ then $\epsilon=1$. If we denote the resulting necklace $A_{n,d}^\epsilon$ then
$$TA_{n,n-1}^1=A_{n,3}^1, \quad TA_{n,d}^1=A_{n,n-d+2}^2 \ (3\leq d\leq n-3), \quad TA_{n,d}^2=A_{n,n-d+4}^1 \ (5\leq d\leq n-1)$$
and it is easy to check that they assemble into an $(n-3)$-cycle.
An argument as in Lemma \ref{lemma:neck-general} shows that there is just one $(k,4k)$-necklace $N$, with distances between stones alternating between $3$ (stones facing each other) and $1$ (stones facing away) and all vectors of length $1$. It satisfies $TN=N$.
The analysis of the last case again requires the enumeration of all possible cases and we leave it to the interested reader.
\end{proof}
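Under the encoding of the sketches above (again, our own assumption), the two-stone necklaces $A_{n,d}^\epsilon$ of the preceding proof are immediate to code, and the first claim of the lemma can be observed numerically:
\begin{verbatim}
def A(n, d, eps):
    # two stones facing each other along an arc of odd length d,
    # both vectors of length eps
    return [(0, eps), (d, -eps)]

def orbit_length(N, n):
    # number of applications of T until N recurs, up to rotation
    M, steps = apply_T(N, n), 1
    while not equivalent(M, N, n):
        M, steps = apply_T(M, n), steps + 1
    return steps

print(orbit_length(A(16, 5, 1), 16))   # 13 == n - 3
\end{verbatim}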
The following is our main conjecture about $\mathrm{Neck}(k,n)$.
\begin{conjecture}
\label{con:cyc-len}
The length of every cycle in the graph $\mathrm{Neck}(k,n)$ divides $n-3k$. In other words, for every $(k,n)$-necklace $N$ we have $T^{n-3k}N=N$.
\end{conjecture}
This conjecture was experimentally verified for all even $n\leq 36$; see Table 3 in the Appendix.
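The verification amounts to the following loop, in the spirit of the sketches above; here \texttt{all\_necklaces(k, n)}, an enumerator of $(k,n)$-necklace representatives, is a hypothetical helper that we do not spell out:
\begin{verbatim}
def verify_conjecture(k, n):
    # T^{n-3k} N = N for every N, i.e. every orbit length divides n - 3k;
    # all_necklaces(k, n) is a hypothetical enumerator (not shown)
    return all((n - 3*k) % orbit_length(N, n) == 0
               for N in all_necklaces(k, n))
\end{verbatim}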
It is now time to explain what necklaces have to do with patterns and what Conjecture \ref{con:cyc-len} has to do with Conjecture \ref{thm:genfunmedium}.
Intuitively, $(k,n)$-necklaces are meant to correspond to reducible proper patterns $\mathcal{P}$ of length $n$ and $\mu(\mathcal{P})=k$. The operation $T$ mimics the reduction $\mathcal{P}\to\mathcal{P}^R$, although the details of this correspondence are a bit more complicated (see proof of Theorem \ref{thm:necklace-to-patterns}). The lengths of cycles in the necklace graph $\mathrm{Neck}(k,n)$ determine the constants $a$ in the recursive equations (\ref{recursiveeq}) and therefore also the exponents in the denominators of $f_n(t)$ (Conjecture \ref{thm:genfunmedium}). A precise statement is the following.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics{fig-12} & \includegraphics{fig-13}\\
a) & b)
\end{tabular}
\caption{\label{fig:neck2} The correspondence between necklaces and patterns. If $N$ is the necklace then the numbers inside the circle form the second row of the pattern $UN$ and the numbers outside form its first row (only $1$'s are shown in the first row, the remaining entries are $0$'s). The second row has a block between each pair of stones facing each other. Over each block of length greater than $3$ the number of outermost $0$'s in the first row equals the length of the stone vector.}
\end{center}
\end{figure}
\begin{theorem}
\label{thm:necklace-to-patterns}
Let $g_{i,n}$ be any common multiple of the lengths of all cycles in the graph $\mathrm{Neck}(i,n)$. Suppose $\mathcal{P}$ is a proper pattern of even length $n$ and $\mu(\mathcal{P})=k$. Then the generating function $f_\mathcal{P}(t)$ (see (\ref{eqn:fp})) is of the form
$$f_\mathcal{P}(t)=\frac{h_\mathcal{P}(t)}{1-(-1)^{n/2}t^2}\cdot\prod_{i=1}^{k}\frac{1}{1-t^{2g_{i,n}}}$$
for some polynomial $h_\mathcal{P}(t)$.
\end{theorem}
Before proving this result, we first observe:
\begin{theorem}
Conjecture \ref{con:cyc-len} implies Conjecture \ref{thm:genfunmedium}.
\end{theorem}
\begin{proof}
The sequence $Z(P_m\times C_n)$ is a linear combination of sequences $Z(\mathcal{P};m)$ for proper (in fact initial) patterns $\mathcal{P}$ of length $n$. Theorem \ref{thm:necklace-to-patterns} therefore implies that
$$f_n(t)=\frac{h_n(t)}{1-(-1)^{n/2}t^2}\cdot\prod_{i=1}^{\lfloor n/4\rfloor}\frac{1}{1-t^{2g_{i,n}}}$$
for some polynomial $h_n(t)$. If Conjecture \ref{con:cyc-len} is true then we can take $g_{i,n}=n-3i$, thus obtaining the statement of Conjecture \ref{thm:genfunmedium}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:necklace-to-patterns}.]
For $k\geq 1$ and even $n$ let $\mathrm{Prop}(k,n)$ denote the set of proper patterns $\mathcal{P}$ of length $n$ and such that $\mu(\mathcal{P})=k$. Moreover, let $\mathrm{Prop}_0(k,n)\subset\mathrm{Prop}(k,n)$ consist of patterns which do not have any block in row $1$. These are exactly the reducible patterns.
There is a map
$$S:\mathrm{Prop}(k,n)\to\mathrm{Prop}_0(k,n)$$
defined as follows. If $\mathcal{P}\in\mathrm{Prop}(k,n)$ then all blocks in the first row of $\mathcal{P}$ have length $3$. We apply the operation of type $N$ in the middle of every such block and define $S\mathcal{P}$ to be the resulting pattern. It has no blocks in the first row and $\mu(S\mathcal{P})=\mu(\mathcal{P})$ by Proposition~\ref{prop:propervn}.
Next we define a map
$$Q:\mathrm{Prop}_0(k,n)\to\mathrm{Neck}(k,n).$$
Note that a pattern $\mathcal{P}\in\mathrm{Prop}_0(k,n)$ is determined by the positions of blocks in the second row and, for every endpoint of a block, whether the outermost $1$ in the first row (if any) is located above the $2$nd or $3$rd position of the block. This already determines the whole run above a block (because it is an alternating run of $0$'s and $1$'s).
Now we transcribe it into a necklace $Q\mathcal{P}$ as follows (see Fig.~\ref{fig:neck2}). Label the unit intervals of a circle of length $n$ with the symbols from the second row of $\mathcal{P}$. For every block, place two stones bounding that block and facing towards each other. The lengths of the vectors at those stones are determined by the rule:
\begin{itemize}
\item if the stone bounds a block of length $3$ then the length of its vector is $1$,
\item otherwise the length of a stone vector is the number of outermost $0$'s in the first row of $\mathcal{P}$ over the edge of the block bounded by the stone.
\end{itemize}
If two stones face away from each other, then ``between them'' the second row of $\mathcal{P}$ contains a run $0101\cdots10$ of odd length. If two stones face towards each other, then the length of the block between them is either $3$, or it is the odd length of $101\cdots01$ plus $p_1+p_2$, where $p_i$ are their stone vector lengths. One verifies easily that $Q\mathcal{P}$ is a $(k,n)$-necklace.
The map $Q$ is a bijection and we let
$$U:\mathrm{Neck}(k,n)\to\mathrm{Prop}_0(k,n)$$
be its inverse. More specifically, the second row of $UN$ is obtained by placing a block of $1$'s between every pair of stones that face each other and an alternating run $010\cdots10$ between stones facing away. In the first row of $UN$, over each block, we place either $0$'s (if the block has length $3$) or an alternating sequence $101\cdots01$ leaving out as many outermost positions as dictated by the stone vector lengths. We fill the remaining positions in the first row with $0$'s. The construction is feasible thanks to the parity conditions satisfied by $N$.
All the maps are defined in such a way that for every $(k,n)$-necklace $N$ we have
\begin{equation}
\label{claim-blah}
TN=QS((UN)^R)\quad \textrm{or equivalently}\quad UTN=S((UN)^R).
\end{equation}
To see this consider how the reduction operation $(\cdot)^R$ and the map $S$ change the neighbourhood of an endpoint of a block in the second row of $UN$. The argument is very similar to the proof of Proposition~\ref{prop:properr} and the details are left to the reader.
Now we complete the proof by induction on $\mu(\mathcal{P})$. The case $\mu(\mathcal{P})=0$ was dealt with in the proof in Section \ref{sect:weakproof}. Now suppose $k=\mu(\mathcal{P})\geq 1$. If $\mathcal{P}\in\mathrm{Prop}(k,n)$ is any pattern then by Propositions \ref{lem:zvn} and \ref{prop:propervn} we have an equation
$$Z(\mathcal{P};m)=\pm Z(S\mathcal{P};m)+\sum_\mathcal{R}\pm Z(\mathcal{R};m)$$
for some patterns $\mathcal{R}$ which satisfy $\mu(\mathcal{R})=k-1$. That means it suffices to prove the result for patterns in $\mathrm{Prop}_0(k,n)$. Every such pattern is of the form $UN$ for some $(k,n)$-necklace $N$. Equation \eqref{claim-blah} and Propositions \ref{lem:zvn}, \ref{lem:zr} and \ref{prop:propervn} lead to an equation
\begin{eqnarray*}
Z(UN;m)&=&\pm Z((UN)^R;m-1)\\
&=&\pm Z(S((UN)^R);m-1)+\sum_\mathcal{R}\pm Z(\mathcal{R};m-1)\\
&=&\pm Z(UTN;m-1)+\sum_\mathcal{R}\pm Z(\mathcal{R};m-1)\end{eqnarray*}
with $\mathcal{R}$ as before. Now a $g_{k,n}$-fold iterated application of this argument for $N, TN, \ldots, T^{g_{k,n}}N=N$ produces an equation
$$Z(UN;m)=\pm Z(UN;m-g_{k,n})+\sum_\mathcal{R}\pm Z(\mathcal{R};m-b_\mathcal{R})$$
and applying it twice allows us to avoid the problem of the unknown sign; that is, we obtain
$$Z(UN;m)=Z(UN;m-2g_{k,n})+\sum_\mathcal{R'}\pm Z(\mathcal{R'};m-b_\mathcal{R'}).$$
It follows that the generating function of $Z(UN;m)$ can be expressed using combinations of the same rational functions which appeared in the generating functions for patterns of $\mu$-invariant $k-1$ together with $1/(1-t^{2g_{k,n}})$. That completes the proof.
\end{proof}
\newpage
\section{Appendix}
\subsection*{Table 1.} Some values of $Z(P_m\times C_n)$:
\begin{center}
\begin{tabular}{l|ccccccccccccc}
$m$ $\backslash$ $n$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 \\
\hline
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & -1 & -2 & -1 & 1 & 2 & 1 & -1 & -2 & -1 & 1 & 2 & 1 & -1 \\
2 & -1 & 1 & 3 & 1 & -1 & 1 & 3 & 1 & -1 & 1 & 3 & 1 & -1\\
3 & 1 & 1 & -3 & 1 & 1 & 1 & 5 & 1 & 1 & 1 & -3 & 1 & 1\\
4 & 1 & -2 & 5 & 1 & 4 & 1 & 5 & -2 & 1 & 1 & 8 & 1 & 1 \\
5 & -1 & 1 & -5 & 1 & -1 & 1 & 3 & 1 & 9 & 1 & -5 & 1 & -1\\
6 & -1 & 1 & 7 & 1 & 1 & 1 & 7 & 1 & -1 & 1 & 7 & 1 & 13\\
7 & 1 & -2 & -7 & 1 & 4 & 1 & 1 & -2 & 1 & 1 & 8 & 1 & 1\\
8 & 1 & 1 & 9 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 9 & 1 & 1\\
9 & -1 & 1 & -9 & 1 & -1 & 1 & -1 & 1 & -11 & 1 & -9 & 1 & 13\\
10 & -1 & -2 & 11 & 1 & 2 & 1 & 3 & -2 & -1 & 1 & 14 & 1 & -1\\
11 & 1 & 1 & -11 & 1 & 1 & 1 & -3 & 1 & 1 & 1 & -11 & 1 & 15\\
12 & 1 & 1 & 13 & 1 & 1 & 1 & 5 & 1 & 11 & 1 & 13 & 1 & 1
\end{tabular}
\end{center}
\subsection*{Table 2.} Some initial generating functions $f_{n}(t)$ for even $n$ are given below in reduced form. By $\Phi_k(t)$ we denote the $k$-th cyclotomic polynomial ($\Phi_1(t)=t-1$, $\Phi_2(t)=t+1$).
{\small
\begin{eqnarray*}
f_2(t)&=& \frac{-(t-1)}{\Phi_4(t)}\\
f_4(t)&=& \frac{-(t^2+1)}{\Phi_1(t)\Phi_2(t)^2}\\
f_6(t)&=& \frac{-(t^4+2t^3+2t+1)}{\Phi_1(t)\Phi_3(t)\Phi_4(t)}\\
f_8(t)&=& \frac{-(t^6-t^5+2t^4+6t^3+2t^2-t+1)}{\Phi_1(t)\Phi_2(t)^2\Phi_{10}(t)}\\
f_{10}(t)&=& \frac{q_{10}(t)}{\Phi_1(t)\Phi_4(t)\Phi_7(t)\Phi_8(t)}\\
f_{12}(t)&=& \frac{q_{12}(t)}{\Phi_1(t)\Phi_2(t)^2\Phi_3(t)\Phi_6(t)^2\Phi_{18}(t)}\\
f_{14}(t)&=& \frac{q_{14}(t)}{\Phi_1(t)\Phi_4(t)\Phi_5(t)\Phi_{11}(t)\Phi_{16}(t)}\\
f_{16}(t)&=& \frac{q_{16}(t)}{\Phi_1(t)\Phi_2(t)^2\Phi_{10}(t)\Phi_{14}(t)\Phi_{26}(t)}\\
f_{18}(t)&=& \frac{q_{18}(t)}{\Phi_1(t)\Phi_3(t)\Phi_4(t)\Phi_5(t)\Phi_8(t)\Phi_9(t)\Phi_{12}(t)\Phi_{15}(t)\Phi_{24}(t)}\\
f_{20}(t)&=& \frac{q_{20}(t)}{\Phi_1(t)\Phi_2(t)^2\Phi_4(t)\Phi_7(t)\Phi_{14}(t)\Phi_{22}(t)\Phi_{34}(t)}\\
f_{22}(t)&=& \frac{q_{22}(t)}{\Phi_1(t)\Phi_4(t)\Phi_7(t)\Phi_{13}(t)\Phi_{19}(t)\Phi_{20}(t)\Phi_{32}(t)}
\end{eqnarray*}
}
The longer numerators are as follows.
{\small
\begin{eqnarray*}
q_{10}(t)&=&-(t^{12}-t^{11}+t^8+9t^7+9t^5+t^4-t+1)\\
q_{12}(t)&=&-(t^{14}+2t^{13}+3t^{12}-3t^{11}+8t^{10}-5t^9+6t^8+6t^7+6t^6-5t^5+8t^4-3t^3+3t^2+2t+1)\\
q_{14}(t)&=&-(t^{24}-t^{19}+14t^{18}+14t^{17}+29t^{16}+42t^{15}+42t^{14}+55t^{13}+56t^{12}\\
&& +55t^{11}+42t^{10}+42t^9+29t^8+14t^7+14t^6-t^5+1)\\
q_{16}(t)&=&-(t^{28}-2t^{27}+5t^{26}+5t^{24}-2t^{23}+9t^{22}-7t^{21}+39t^{20}-37t^{19}\\
&& +44t^{18}-25t^{17}+30t^{16}-24t^{15}+26t^{14}-24t^{13}+30t^{12}\\
&& -25t^{11}+44t^{10}-37t^9+39t^8-7t^7+9t^6-2t^5+5t^4+5t^2-2t+1)\\
q_{18}(t)&=&-(t^{38}+2t^{37}-t^{36}+2t^{35}+6t^{34}-2t^{33}+2t^{32}+30t^{31}-2t^{30}\\
&& +t^{29}+70t^{28}-t^{27}+t^{26}+92t^{25}-t^{24}+t^{23}+130t^{22}-t^{21}\\
&& +168t^{19}-t^{17}+130t^{16}+t^{15}-t^{14}+92t^{13}+t^{12}-t^{11}+70t^{10}\\
&& +t^9-2t^8+30t^7+2t^6-2t^5+6t^4+2t^3-t^2+2t+1)\\
q_{20}(t)&=&-(t^{46}-2t^{45}+6t^{44}-10t^{43}+19t^{42}-18t^{41}+34t^{40}-40t^{39}\\
&& +64t^{38}-28t^{37}+60t^{36}-31t^{35}+120t^{34}-96t^{33}+189t^{32} \\
&& -147t^{31}+240t^{30}-195t^{29}+283t^{28}-230t^{27}+258t^{26} \\
&& -193t^{25}+218t^{24}-208t^{23}+218t^{22}-193t^{21}+258t^{20} \\
&& -230t^{19}+283t^{18}-195t^{17}+240t^{16}-147t^{15}+189t^{14} \\
&& -96t^{13}+120t^{12}-31t^{11}+60t^{10}-28t^9+64t^8-40t^7 \\
&& +34t^6-18t^5+19t^4-10t^3+6t^2-2t+1)\\
q_{22}(t)&=&-(t^{62}+t^{61}+t^{58}+t^{57}-t^{55}+22t^{54}+\cdots\cdots+22t^8-t^7+t^5+t^4+t+1)
\end{eqnarray*}
}
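As a quick cross-check between the two tables (a sketch, assuming, consistently with both tables, that $f_n(t)$ is the ordinary generating function $\sum_{m \geq 0} Z(P_m \times C_n) t^m$), one can expand $f_2(t)$ and compare with the $n=2$ column of Table 1:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
f2 = -(t - 1) / sp.cyclotomic_poly(4, t)    # Phi_4(t) = t^2 + 1
poly = sp.series(f2, t, 0, 13).removeO()
print([poly.coeff(t, m) for m in range(13)])
# [1, -1, -1, 1, 1, -1, -1, 1, 1, -1, -1, 1, 1] -- the n = 2 column
\end{verbatim}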
\subsection*{Table 3.} The decomposition of the directed graph $\mathrm{Neck}(k,n)$ into a disjoint union of cycles. Here $l^p$ stands for $p$ copies of the cycle $C_l$.
\begin{center}
\begin{tabular}{l|ccccccc}
$n$ $\backslash$ $k$ & 1 & 2 & 3 & 4 & 5 & 6 & 7\\
\hline
4 & $1^1$ & & & & & & \\
6 & $3^1$ & & & & & & \\
8 & $5^1$ & $1^1$ & & & & & \\
10 & $7^1$ & $1^14^1$ & & & & & \\
12 & $9^1$ & $2^13^26^1$ & $1^1$ & & & & \\
14 & $11^1$ & $2^38^3$ & $1^15^1$ & & & & \\
16 & $13^1$ & $2^55^310^3$ & $7^4$ & $1^1$ & & &\\
18 & $15^1$ & $1^12^912^6$ & $3^49^9$ & $1^26^1$ & & &\\
20 & $17^1$ & $2^{14}7^414^6$ & $11^{22}$ & $1^14^38^5$ & $1^1$ & &\\
22 & $19^1$ & $2^{22}16^{10}$ & $13^{42}$ & $2^15^610^{21}$ & $1^27^1$ & &\\
24 & $21^1$ & $2^{30}9^518^{10}$ & $3^15^{10}15^{70}$ & $2^53^{10}4^36^912^{63}$ & $1^13^19^9$ & $1^1$ & \\
26 & $23^1$ & $1^12^{42}20^{15}$ & $17^{120}$ & $2^{15}7^{28}14^{165}$ & $11^{49}$ & $1^38^1$ &\\
28 & $25^1$ & $2^{55}11^622^{15}$ & $19^{186}$ & $2^{43}8^{43}16^{378}$ & $13^{194}$ & $1^35^410^{11}$ & $1^1$\\
30 & $27^1$ & $2^{73}24^{21}$ & $7^{19}21^{270}$ & $2^{99}3^66^49^{90}18^{765}$ & $3^{26}5^{31}15^{613}$ & $1^13^54^{10}6^{13}12^{80}$ & $1^39^1$\\
32 & $29^1$ & $2^{91}13^726^{21}$ & $23^{396}$ & $2^{217}5^{43}10^{83}20^{1480}$ & $17^{1750}$ & $2^17^{52}14^{435}$ & $1^511^{17}$ \\
34 & $31^1$ & $1^12^{115}28^{28}$ & $5^125^{551}$ & $2^{429}11^{242}22^{2600}$ & $19^{4334}$ & $2^74^{22}8^{124}16^{1791}$ & $1^113^{155}$\\
36 & $33^1$ & $2^{140}15^830^{28}$ & $9^{31}27^{738}$ & $1^12^{809}8^{12}12^{216}24^{4483}$ & $1^13^{18}7^{247}21^{9693}$ & $2^{31}3^{83}6^{76}9^{316}18^{6100}$ & $1^13^{18}5^{31}15^{970}$
\end{tabular}
\end{center}
\newpage
\section{Introduction}
In the process of obtaining a sufficiently general version of the Fundamental Theorem of Asset Pricing (FTAP), semimartingales proved crucial in modelling discounted asset-price processes. The powerful tool of stochastic integration with respect to general predictable integrands, which semimartingales are exactly tailored for, finally led to the culmination of the theory in \cite{MR1304434}. The FTAP connects the economical notion of \textsl{No Free Lunch with Vanishing Risk} (NFLVR) with the mathematical concept of existence of an \textsl{Equivalent Martingale Measure} (EMM), i.e., an auxiliary probability, equivalent to the original (in the sense that they have the same impossibility events), that makes the discounted asset-price processes have some kind of martingale property. For the above approach to work one has to utilize stochastic integration using \emph{general} predictable integrands, which translates to allowing for \emph{continuous-time} trading in the market. Even though continuous-time trading is of vast theoretical importance, in practice it is only an ideal approximation; the only feasible way of trading is via \emph{simple}, i.e., combinations of \emph{buy-and-hold}, strategies.
Recently, it has been argued that existence of an EMM is \emph{not} necessary for viability of the market; to this effect, see \cite{MR1774056}, \cite{MR1929597}, \cite{MR2210925}. Even in cases where classical arbitrage opportunities are present in the market, credit constraints will not allow for arbitrages to be scaled to any desired degree. It is rather the existence of a \textsl{strictly positive supermartingale deflator}, a concept weaker than existence of an EMM, that allows for a consistent theory to be developed.
\smallskip
Our purpose in this work is to provide answers to the following questions:
\begin{enumerate}
\item Why are semimartingales important in modeling discounted asset-price processes?
\item Is there an analogous result to the FTAP that involves weaker (both economic and mathematical) conditions, and only assumes the possibility of simple trading?
\end{enumerate}
A partial, but precise, answer to question (1) is already present in \cite{MR1304434}. Roughly speaking, market viability already imposes the semimartingale property on discounted asset-price processes. In this paper, we elaborate on the previous idea, undertaking a different approach, which ultimately leads to an improved result. We also note that in \cite{MR2139030}, \cite{MR2157199} and \cite{LarZit05}, the semimartingale property of discounted asset-price processes is obtained via the finite value of a utility maximization problem; this approach will also be revisited.
All the conditions that have appeared previously in the literature are only \emph{sufficient} to ensure that discounted asset-price processes are semimartingales. Here, we shall also discuss a necessary and sufficient condition in terms of a natural market-viability notion that parallels the FTAP, under minimal initial structural assumptions on the discounted asset-price processes themselves. The weakened version of the FTAP that we shall come up with as an answer to question (2) above is a ``simple, no-short-sales trading'' version of Theorem 4.12 from \cite{MR2335830}.
\smallskip
The structure of the paper is as follows. In Section \ref{sec: UPBR, supdefl, semimarts, FTAP}, we introduce the market model and simple trading under no-short-sales constraints. Then, we discuss the market viability condition of \textsl{absence of arbitrages of the first kind} (a weakening of condition NFLVR), as well as the concept of \textsl{strictly positive supermartingale deflators}. After this, our main result, Theorem \ref{thm: FTAP_jump}, is formulated and proved, which establishes both the importance of semimartingales in financial modelling and the weak version of the FTAP. Section \ref{sec: beyond main result} deals with remarks on, and ramifications of, Theorem \ref{thm: FTAP_jump}.
We note that, though hidden in the background, the proofs of our results depend heavily on the notion of the \emph{num\'eraire \ portfolio} (also called \emph{growth-optimal}, \emph{log-optimal} or \emph{benchmark portfolio}), as it appears in a series of works: \cite{KELLY}, \cite{LONG}, \cite{MR1849424}, \cite{MR1970286}, \cite{MR1929597}, \cite{MR2194899}, \cite{MR2335830}, \cite{MR2284490}, to mention a few.
\section{The Semimartingale Property of Discounted Asset-Price Process and a Version of the Fundamental Theorem of Asset Pricing} \label{sec: UPBR, supdefl, semimarts, FTAP}
\subsection{The financial market model and trading via simple, no-short-sales strategies}
The random movement of $d \in \mathbb N$ risky assets in the market is modelled via c\`adl\`ag,
nonnegative stochastic processes $S^i$, where $i \in \set{ 1, \ldots, d}$.
As is usual in the field of Mathematical Finance, we assume that all wealth processes are discounted by another special asset which is considered a ``baseline''. The above process $S = (S^i)_{i = 1, \ldots, d}$ is defined on a filtered probability space $(\Omega, \, \mathcal{F}, \, \pare{\mathcal{F}_t}_{t \in \Real_+}, \, \prob)$, where $\pare{\mathcal{F}_t}_{t \in \Real_+}$ is a filtration satisfying $\mathcal{F}_t \subseteq \mathcal{F}$ for all $t \in \mathbb R_+$, as well as the usual assumptions of right-continuity and saturation by all $\mathbb{P}$-null sets of $\mathcal{F}$.
Observe that there is no \emph{a priori} assumption on $S$ being a semimartingale. This property will come as a consequence of a natural market viability assumption.
\smallskip
In the market described above, economic agents can trade in order to reallocate their wealth. Consider a \textsl{simple} predictable process $\theta := \sum_{j=1}^n \vartheta_{j} \mathbb{I}_{\dbraoc{\tau_{j-1}, \tau_j}}$. Here, $\tau_0 = 0$, and for all $j \in \set{ 1, \ldots, n}$ (where $n$ ranges in $\mathbb N$), $\tau_j$ is a \emph{finite} stopping time and $\vartheta_{j} = (\vartheta^i_j)_{i=1, \ldots, d}$ is $\mathcal{F}_{\tau_{j-1}}$-measurable. Each $\tau_{j-1}$, $j \in \set{ 1, \ldots, n}$, is an instance when some given economic agent may trade in the market; then, $\vartheta^i_j$ is the number of units of the $i$th risky asset that the agent will hold in the trading interval $]\tau_{j-1}, \tau_{j}]$. This form of trading is called \textsl{simple}, as it consists of a finite number of \textsl{buy-and-hold} strategies, in contrast to \textsl{continuous} trading where one is able to change the position in the assets in a continuous fashion. This last form of trading is only of theoretical value, since it cannot be implemented in reality, even if one ignores market frictions.
Starting from initial capital $x \in \mathbb R_+$ and following the strategy described by the simple predictable process $\theta := \sum_{j=1}^n \vartheta_j \mathbb{I}_{\dbraoc{\tau_{j-1}, \tau_j}}$, the agent's discounted wealth process is given by
\begin{equation} \label{eq: Xhat}
X^{x, \theta} \ = x + \int_0^\cdot \inner{\theta_t}{\, \mathrm d S_t} \ := \ x + \sum_{j = 1}^n \inner{\vartheta_j}{ S_{\tau_j \wedge \cdot} - S_{\tau_{j-1} \wedge \cdot}}.
\end{equation}
(We use ``$\inner{\cdot}{\cdot}$'' throughout to denote the usual Euclidean inner product on $\mathbb R^d$.)
The wealth process $X^{x, \theta}$ of \eqref{eq: Xhat} is c\`adl\`ag \ and adapted, but could in principle become negative. In real markets, some economic agents, for instance pension funds, face several institution-based constraints when trading. The most important constraint is prevention of having negative positions in the assets; we plainly call these \emph{no-short-sales} constraints. In order to ensure that no short sales are allowed in the risky assets, which also include the baseline asset used for discounting, we define $\mathcal{X}_{\mathsf{s}} (x)$ to be the set of all wealth processes $X^{x, \theta}$ given by \eqref{eq: Xhat}, where $\theta = \sum_{j=1}^n \vartheta_j \mathbb{I}_{\dbraoc{\tau_{j-1}, \tau_j}}$ is simple and predictable and such that $\vartheta^i_j \geq 0$ and $\inner{ \vartheta_j}{S_{\tau_{j-1}}} \leq X^{x, \theta}_{\tau_{j-1}}$ hold for all $i \in \set{1, \ldots, d}$ and $j \in \set{ 1, \ldots, n}$. (The subscript ``$\mathsf{s}$'' in $\mathcal{X}_{\mathsf{s}}(x)$ is a mnemonic for ``simple''; the same is true for all subsequent definitions where this subscript appears.) Note that the previous no-short-sales restrictions, coupled with the nonnegativity of $S^i$, $i \in \set{1, \ldots, d}$, imply the stronger conditions $\theta^i \geq 0$ for all $i = 1, \ldots, d$ and $\inner{ \theta}{ S_-} \leq X^{x, \theta}_-$. (The subscript ``${}_-$'' is used to denote the left-continuous version of a c\`adl\`ag \ process.) It is clear that $\mathcal{X}_{\mathsf{s}}(x)$ is a convex set for all $x \in \mathbb R_+$. Observe also that $\mathcal{X}_{\mathsf{s}} (x) = x \mathcal{X}_{\mathsf{s}} (1)$ for all $x \in \mathbb R_+ \setminus \set{0}$. Finally, define $\mathcal{X}_{\mathsf{s}} := \bigcup_{x \in \mathbb R_+} \mathcal{X}_{\mathsf{s}}(x)$.
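To fix ideas, here is a toy numerical instance of \eqref{eq: Xhat} with $d=1$ and two buy-and-hold legs; the price path and trading dates are made up for illustration, and the holdings are chosen so that the no-short-sales constraints just described are respected.
\begin{verbatim}
import numpy as np

S = np.array([1.0, 1.1, 0.9, 1.2, 1.2, 1.3])  # made-up discounted prices
tau = [0, 2, 5]      # trading times tau_0 < tau_1 < tau_2 (grid indices)
theta = [1.0, 1.0]   # nonnegative, with theta_j * S_{tau_{j-1}} never
x = 1.0              # exceeding the wealth accumulated at tau_{j-1}
X_T = x + sum(th * (S[tau[j + 1]] - S[tau[j]])
              for j, th in enumerate(theta))
print(X_T)           # 1 + (0.9 - 1.0) + (1.3 - 0.9) = 1.3
\end{verbatim}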
\subsection{Market viability} \label{subsec: UPBR}
We now aim at defining the essential ``no-free-lunch'' concept to be used in our discussion. For $T \in \mathbb R_+$, an $\mathcal{F}_T$-measurable random variable $\xi$ will be called an \textsl{arbitrage of the first kind on $[0, T]$} if $\mathbb{P}[\xi \geq 0] = 1$, $\mathbb{P}[\xi > 0] > 0$, and \emph{for all $x > 0$ there exists $X \in \mathcal{X}_{\mathsf{s}}(x)$, which may depend on $x$, such that $\mathbb{P}[X_T \geq \xi] = 1$}. If, in a market where only simple, no-short-sales trading is allowed, there are \emph{no} arbitrages of the first kind on \emph{any} interval $[0, T]$, $T \in \mathbb R_+$, we shall say that condition NA$1_{\mathsf{s}}$ \ holds. It is straightforward to check that condition NA$1_{\mathsf{s}}$ \ is weaker than condition NFLVR (appropriately stated for simple, no-short-sales trading). The next result describes an equivalent reformulation of condition NA$1_{\mathsf{s}}$ \ in terms of boundedness in probability of the set of outcomes of wealth processes, which is essentially condition ``\textsl{No Unbounded Profit with Bounded Risk}'' of \cite{MR2335830} for all finite time-horizons in our setting of simple, no-short-sales trading.
\begin{prop} \label{prop: NAone iff NUPBR}
Condition \emph{NA$1_{\mathsf{s}}$} holds if and only if, for all $T \in \mathbb R_+$, the set $\set{X_T \ | \ X \in \mathcal{X}_{\mathsf{s}}(1)}$ is bounded in probability, i.e., $\downarrow \lim_{\ell \to \infty} \sup_{X \in \mathcal{X}_{\mathsf{s}} (1)} \, \mathbb{P} [X_T > \ell] = 0$ holds for all $T \in \mathbb R_+$.
\end{prop}
\begin{proof}
Using the fact that $\mathcal{X}_{\mathsf{s}}(x) = x \mathcal{X}_{\mathsf{s}}(1)$ for all $x > 0$, it is straightforward to check that if an arbitrage of the first kind exists on $[0, T]$ for some $T \in \mathbb R_+$ then $\set{X_T \ | \ X \in \mathcal{X}_{\mathsf{s}}(1)}$ is not bounded in probability. Conversely, assume the existence of $T \in \mathbb R_+$ such that $\set{X_T \ | \ X \in \mathcal{X}_{\mathsf{s}}(1)}$ is not bounded in probability. As $\set{X_T \ | \ X \in \mathcal{X}_{\mathsf{s}}(1)}$ is further convex, Lemma 2.3 of \cite{MR1768009} implies the existence of $\Omega_u \in \mathcal{F}_T$ with $\mathbb{P}[\Omega_u] > 0$ such that, for all $n \in \mathbb N$, there exists $\widetilde{X}^n \in \mathcal{X}_{\mathsf{s}}(1)$ with $\mathbb{P}[\{\widetilde{X}^n_T \leq n \} \cap \Omega_u] \leq \mathbb{P}[\Omega_u] / 2^{n+1}$. For all $n \in \mathbb N$, let $A^n := \{\widetilde{X}^n_T > n\} \cap \Omega_u \in \mathcal{F}_T$. Then, set $A := \bigcap_{n \in \mathbb N} A^n \in \mathcal{F}_T$ and $\xi := \mathbb{I}_A$. It is clear that $\xi$ is $\mathcal{F}_T$-measurable and that $\mathbb{P}[\xi \geq 0] = 1$. Furthermore, since $A \subseteq \Omega_u$ and
\[
\mathbb{P} \bra{\Omega_u \setminus A} = \mathbb{P} \bra{\bigcup_{n \in \mathbb N} \pare{\Omega_u \setminus A^n}} \leq \sum_{n \in \mathbb N} \mathbb{P} \bra{ \Omega_u \setminus A^n} = \sum_{n \in \mathbb N} \mathbb{P} \bra{ \{ \widetilde{X}^n_T \leq n \} \cap \Omega_u} \leq \sum_{n \in \mathbb N} \frac{\mathbb{P}[\Omega_u]}{2^{n+1}} = \frac{\mathbb{P}[\Omega_u]}{2},
\]
we obtain $\mathbb{P}[A] > 0$, i.e., $\mathbb{P}[\xi > 0] > 0$. For all $n \in \mathbb N$ set $X^n := (1 / n) \widetilde{X}^n$, and observe that $X^n \in \mathcal{X}_{\mathsf{s}}(1 / n)$ and $\xi = \mathbb{I}_{A} \leq \mathbb{I}_{A^n} \leq X^n_T$ hold for all $n \in \mathbb N$. It follows that $\xi$ is an arbitrage of the first kind on $[0, T]$, which finishes the proof.
\end{proof}
\begin{rem} \label{rem: NAone for stop times}
The constant wealth process $X \equiv 1$ belongs to $\mathcal{X}_{\mathsf{s}}(1)$. Then, Proposition \ref{prop: NAone iff NUPBR} implies that condition NA$1_{\mathsf{s}}$ \ is also equivalent to the requirement that the set $\set{X_T \ | \ X \in \mathcal{X}_{\mathsf{s}}(1)}$ is bounded in probability for all finite stopping times $T$.
\end{rem}
\subsection{Strictly positive supermartingale deflators} \label{subsec: supermart_defl}
Define the set $\mathcal{Y}_{\mathsf{s}}$ of \textsl{strictly positive supermartingale deflators for simple, no-short-sales trading} to consist of all c\`adl\`ag \ processes $Y$ such that $\mathbb{P}[Y_0 = 1, \text{ and } Y_t > 0 \ \, \forall t \in \mathbb R_+] = 1$, and $Y X$ is a supermartingale for all $X \in \mathcal{X}_{\mathsf{s}}$. Note that existence of a strictly positive supermartingale deflator is a condition closely related to, but strictly weaker than, the existence of equivalent (super)martingale probability measures.
\subsection{The main result} \label{subsec: main result}
Condition NA$1_{\mathsf{s}}$, existence of strictly positive supermartingale deflators and the semimartingale property of $S$ are intimately tied to each other, as will be revealed below.
\smallskip
Define the (first) \textsl{bankruptcy time} of $X \in \mathcal{X}_{\mathsf{s}}$ to be $\zeta^X := \inf \{ t \in \mathbb R_+ \ | \ X_{t-} = 0 \text{ or } X_t = 0\}$. We shall say that $X \in \mathcal{X}_{\mathsf{s}}$ \textsl{cannot revive from bankruptcy} if $X_t = 0$ holds for all $t \geq \zeta^X$ on $\{ \zeta^X < \infty\}$. As $S^i \in \mathcal{X}_{\mathsf{s}}$ for $i \in \set{ 1, \ldots, d}$, the previous definitions apply in particular to each $S^i$, $i \in \set{ 1, \ldots, d}$.
Before stating our main Theorem \ref{thm: FTAP_jump}, recall that $S^i$, $i \in \set{1, \ldots, d}$, is an \textsl{exponential semimartingale} if there exists a semimartingale $R^i$ with $R^i_0 = 0$, such that $S^i = S^i_0 \mathcal E(R^i)$ where ``$\mathcal E$'' denotes the \textsl{stochastic exponential} operator.
\begin{thm} \label{thm: FTAP_jump}
Let $S = (S^i)_{i=1, \ldots, d}$ be an adapted, c\`adl\`ag \ stochastic process such that $S^i$ is nonnegative for all $i \in \set{1, \ldots, d}$. Consider the following four statements:
\begin{enumerate}
\item[$(i)$] Condition \emph{NA1}$_\mathsf{s}$ holds in the market.
\item[$(ii)$] $\mathcal{Y}_{\mathsf{s}} \neq \emptyset$.
\item[$(iii)$] $S$ is a semimartingale, and $S^i$ cannot revive from bankruptcy for all $i \in \set{1, \ldots, d}$.
\item[$(iv)$] For all $i \in \set{1, \ldots, d}$, $S^i$ is an exponential semimartingale.
\end{enumerate}
Then, we have the following:
\begin{enumerate}
\item[(1)] It holds that $(i) \Leftrightarrow (ii) \Rightarrow (iii)$, as well as $(iv) \Rightarrow (i)$.
\item[(2)] Assume further that $S^i_{\zeta^{S^i} -} > 0$ holds on $\{ \zeta^{S^i} < \infty\}$ for all $i \in \set{1, \ldots, d}$. Then, we have the equivalences $(i) \Leftrightarrow (ii) \Leftrightarrow (iii) \Leftrightarrow (iv)$.
\end{enumerate}
\end{thm}
\subsection{Proof of Theorem \ref{thm: FTAP_jump}, statement $(1)$}
\begin{proof}[$(i) \Rightarrow (ii)$]
Define the set of dyadic rational numbers $\mathbb{D} := \{ m / 2^k \ | \ k \in \mathbb N, \, m \in \mathbb N \}$, which is dense in $\mathbb R_+$. Further, for $k \in \mathbb N$, define the set of trading times $\mathbb{T}^k := \{ m / 2^k \ | \ m \in \mathbb N, \, 0 \leq m \leq k 2^k \}$. Then, $\mathbb{T}^k \subset \mathbb{T}^{k'}$ for $k < k'$ and $\bigcup_{k \in \mathbb N} \mathbb{T}^k = \mathbb{D}$. In what follows, $\mathcal{X}_{\mathsf{s}}^k (1)$ denotes the subset of $\mathcal{X}_{\mathsf{s}}(1)$ consisting of wealth processes where trading may only happen at times in $\mathbb{T}^k$. We now state and prove an intermediate result that will help to establish implication $(i) \Rightarrow (ii)$ of Theorem \ref{thm: FTAP_jump}.
\begin{lem} \label{lem: num exists for discrete-time}
Under condition \emph{NA$1_{\mathsf{s}}$}, and for each $k \in \mathbb N$, there exists a wealth process $\widetilde{X}^k \in \mathcal{X}_{\mathsf{s}}^k(1)$ with $\mathbb{P}[\widetilde{X}^k_t > 0] = 1$ for all $t \in \mathbb{T}^k$ such that, by defining $\widetilde{Y}^k := 1 / \widetilde{X}^k$, $\mathbb{E}[\widetilde{Y}^k_t X_t \ | \ \mathcal{F}_s] \leq \widetilde{Y}^k_s X_s$ holds for all $X \in \mathcal{X}_{\mathsf{s}}^k(1)$, where $\mathbb{T}^k \ni s \leq t \in \mathbb{T}^k$.
\end{lem}
\begin{proof}
The existence of such ``num\'eraire \ portfolio'' $\widetilde{X}^k$ essentially follows from Theorem 4.12 of \cite{MR2335830}. However, rather than using the latter heavy result, we shall explain in detail below how one can \emph{obtain} the validity of Lemma \ref{lem: num exists for discrete-time} in this simpler setting, following the idea used to prove Theorem 4.12 of \cite{MR2335830}. Throughout the proof we keep $k \in \mathbb N$ fixed, and we set $\mathbb{T}^k_{++} := \mathbb{T}^k \setminus \set{0}$.
First of all, it is straightforward to check that condition NA$1_{\mathsf{s}}$ \ implies that each $X \in \mathcal{X}_{\mathsf{s}}$, and in particular also each $S^i$, $i \in \set{1, \ldots, d}$, cannot revive from bankruptcy. This implies that we can consider an alternative ``multiplicative'' characterization of wealth processes in $\mathcal{X}_{\mathsf{s}}(1)$, as we now describe. Consider a process $\pi = (\pi_t)_{t \in \mathbb{T}^k_{++}}$ such that, for all $t \in \mathbb{T}^k_{++}$, $\pi_t \equiv (\pi^i_t)_{i \in \set{1, \ldots, d}}$ is $\mathcal{F}_{t - 1/2^k}$-measurable and takes values in the $d$-dimensional simplex $\triangle^d := \big\{ z = (z^i)_{i =1, \ldots, d} \in \mathbb R^d \ | \ z^i \geq 0 \text{ for } i=1, \ldots, d, \text{ and } \sum_{i=1}^d z^i \leq 1 \big\}$. Define $X^{(\pi)}_0 := 1$ and, for all $t \in \mathbb{T}^k_{++}$, $X^{(\pi)}_t := \prod_{\mathbb{T}^k_{++} \ni u \leq t} \pare{1 + \inner{\pi_u}{\Delta R^{k}_u}}$, where, for $u \in \mathbb{T}^k_{++}$, $\Delta R^k_u = (\Delta R^{k,i}_u)_{i \in \set{1, \ldots, d}}$ is such that $\Delta R^{k, i}_u = \big( S^i_u / S^i_{u - 1/2^k} - 1 \big) \mathbb{I}_{\{ S^i_{u - 1/2^k} > 0 \}}$ for $i \in \set{1, \ldots, d}$. Then, define a simple predictable $d$-dimensional process $\theta$ as follows: for $i \in \set{1, \ldots, d}$ and $u \in ]t - 1/2^k, t]$, where $t \in \mathbb{T}^k_{++}$, set $\theta^i_u = \big( \pi^i_t X^{(\pi)}_{t - 1/2^k} / S^i_{t - 1/2^k} \big) \mathbb{I}_{\{S^i_{t - 1/2^k} > 0\}}$; otherwise, set $\theta = 0$. It is then straightforward to check that $X^{1, \theta}$, in the notation of \eqref{eq: Xhat}, is an element of $\mathcal{X}_{\mathsf{s}}^k(1)$, as well as that $X^{1, \theta}_t = X^{(\pi)}_t$ holds for all $t \in \mathbb{T}^k$. We have then established that $\pi$ generates a wealth process in $\mathcal{X}_{\mathsf{s}}^k(1)$. We claim that every wealth process of $\mathcal{X}_{\mathsf{s}}^k(1)$ can be generated this way. Indeed, starting with any predictable $d$-dimensional process $\theta$ such that $X^{1, \theta}$, in the notation of \eqref{eq: Xhat}, is an element of $\mathcal{X}_{\mathsf{s}}^k(1)$, we define $\pi^i_t = \big( \theta^i_t S^i_{t - 1/2^k} /X^{1, \theta}_{t - 1/2^k} \big) \mathbb{I}_{\{X^{1, \theta}_{t - 1/2^k} > 0\}}$ for $i \in \set{1, \ldots, d}$ and $t \in \mathbb{T}^k_{++}$. Then, $\pi = (\pi_t)_{t \in \mathbb{T}^k_{++}}$ is $\triangle^d$-valued, $\pi_t \equiv (\pi^i_t)_{i \in \set{1, \ldots, d}}$ is $\mathcal{F}_{t - 1/2^k}$-measurable for $t \in \mathbb{T}^k_{++}$, and $\pi$ generates $X^{1, \theta}$ in the way described previously --- in particular, $X^{1, \theta}_t = X^{(\pi)}_t$ holds for all $t \in \mathbb{T}^k$. (In establishing the claims above it is important that all wealth processes of $\mathcal{X}_{\mathsf{s}}$ cannot revive from bankruptcy.)
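Schematically, and in made-up numbers, the multiplicative parametrization reads as follows; the returns are stand-ins for $\Delta R^k$, and clipping at $-1$ mirrors the fact that nonnegative prices force $\Delta R^{k,i} \geq -1$, so that the generated wealth stays nonnegative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
steps, d = 5, 2
dR = np.maximum(rng.normal(0.0, 0.1, size=(steps, d)), -1.0)
pi = np.full((steps, d), 1.0 / (d + 1))  # a constant point of the simplex
# X_0 = 1 and X_t = prod_{u <= t} (1 + <pi_u, Delta R_u>)
X = np.concatenate(([1.0], np.cumprod(1.0 + (pi * dR).sum(axis=1))))
print(X)  # nonnegative, since sum_i pi^i <= 1 and dR >= -1 entrywise
\end{verbatim}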
Continuing, since $\triangle^d$ is a \emph{compact} subset of $\mathbb R^d$, for all $t \in \mathbb{T}^k_{++}$ there exists an $\mathcal{F}_{t - 1/2^k}$-measurable $\rho_t = (\rho^i_t)_{i \in \set{1, \ldots, d}}$ such that, for all $\mathcal{F}_{t - 1/2^k}$-measurable and $\triangle^d$-valued $\pi_t = (\pi^i_t)_{i \in \set{1, \ldots, d}}$, we have
\[
\mathbb{E} \bra{ \frac{ 1 + \inner{\pi_t}{\Delta R^{k}_t}}{1 + \inner{\rho_t}{\Delta R^{k}_t}} \ \Bigg| \ \mathcal{F}_{t - 1/2^k}} \leq 1.
\]
(It is exactly the existence of such $\rho_t$ that can be seen as a stripped-down version of Theorem 4.12 in \cite{MR2335830}; in effect, $\rho_t$ gives the optimal proportions of wealth for the log-utility maximization problem, modulo technicalities arising when the value of that problem is infinite.) Setting $\widetilde{X}^k$ to be the wealth process in $\mathcal{X}_{\mathsf{s}}^k(1)$ generated by $\rho$ as described in the previous paragraph, the result of Lemma \ref{lem: num exists for discrete-time} is immediate.
\end{proof}
We proceed with the proof of implication $(i) \Rightarrow (ii)$ of Theorem \ref{thm: FTAP_jump}, using the notation from the statement of Lemma \ref{lem: num exists for discrete-time}. For all $k \in \mathbb N$, $\widetilde{Y}^k$ satisfies $\widetilde{Y}^k_0 = 1$ and is a positive supermartingale when sampled from times in $\mathbb{T}^k$, since $1 \in \mathcal{X}_{\mathsf{s}}^k$. Therefore, for any $t \in \mathbb{D}$, the \emph{convex hull} of the set $\{ \widetilde{Y}^k_t \ | \ k \in \mathbb N\}$ is bounded in probability. We also claim that, under condition NA$1_{\mathsf{s}}$, for any $t \in \mathbb R_+$, the convex hull of the set $\{ \widetilde{Y}^k_t \ | \ k \in \mathbb N \}$ is bounded away from zero in probability. Indeed, for any collection $(\alpha^k)_{k \in \mathbb N}$ such that $\alpha^k \geq 0$ for all $k \in \mathbb N$, having all but a finite number of $\alpha^k$'s non-zero and satisfying $\sum_{k =1 }^\infty \alpha^k = 1$, we have
\[
\frac{1}{\sum_{k = 1}^\infty \alpha^k \widetilde{Y}^k } \ \leq \ \sum_{k = 1}^\infty \alpha^k \frac{1}{\widetilde{Y}^k} \ = \ \sum_{k = 1}^\infty \alpha^k \widetilde{X}^k \, \in \, \mathcal{X}_{\mathsf{s}} (1).
\]
Since, by Proposition \ref{prop: NAone iff NUPBR}, $\set{X_t \ | \ X \in \mathcal{X}_{\mathsf{s}} (1)}$ is bounded in probability for all $t \in \mathbb R_+$, the previous fact proves that the convex hull of the set $\{ \widetilde{Y}^k_t \ | \ k \in \mathbb N \}$ is bounded away from zero in probability.
Now, using Lemma A1.1 of \cite{MR1304434}, one can proceed as in the proof of Lemma 5.2(a) in \cite{MR1469917} to infer the existence of a sequence $(\widehat{Y}^k)_{k \in \mathbb N}$ and some process $(\widehat{Y}_t)_{t \in \mathbb{D}}$ such that, for all $k \in \mathbb N$, $\widehat{Y}^k$ is a convex combination of $\widetilde{Y}^k , \widetilde{Y}^{k+1}, \ldots$, and $\mathbb{P}[ \lim_{k \to \infty} \widehat{Y}_t^k = \widehat{Y}_t, \, \forall t \in \mathbb{D} ] = 1$. The discussion of the preceding paragraph ensures that $\mathbb{P}[0 < \widehat{Y}_t < \infty, \, \forall \, t \in \mathbb{D}] = 1$.
Let $\mathbb{D} \ni s \leq t \in \mathbb{D}$. Then, $s \in \mathbb{T}^k$ and $t \in \mathbb{T}^k$ for all large enough $k \in \mathbb N$. According to the conditional version of Fatou's Lemma, for all $X \in \bigcup_{k=1}^\infty \mathcal{X}_{\mathsf{s}}^k$ we have that
\begin{equation} \label{eq: Yhat is supermart defl}
\mathbb{E} [\widehat{Y}_t X_t \ | \ \mathcal{F}_s] \leq \liminf_{k \to \infty} \mathbb{E} [ \widehat{Y}^k_t X_t \ | \ \mathcal{F}_s] \leq \liminf_{k \to \infty} \widehat{Y}^k_s X_s = \widehat{Y}_s X_s.
\end{equation}
It follows that $(\widehat{Y}_t X_t)_{t \in \mathbb{D}}$ is a supermartingale for all $X \in \bigcup_{k=1}^\infty \mathcal{X}_{\mathsf{s}}^k$. (Observe here that we sample the process $\widehat{Y} X$ only at times contained in $\mathbb{D}$.) In particular, $(\widehat{Y}_t)_{t \in \mathbb{D}}$ is a supermartingale.
For any $t \in \mathbb R_+$ define $Y_t := \lim_{s \downarrow \downarrow t, s \in \mathbb{D}} \widehat{Y}_s$ --- the limit is taken in the $\mathbb{P}$-a.s. sense, and exists in view of the supermartingale property of $(\widehat{Y}_t)_{t \in \mathbb{D}}$. It is straightforward that $Y$ is a c\`adl\`ag \ process; it is also adapted because $(\mathcal{F}_t)_{t \in \mathbb R_+}$ is right-continuous. Now, for $t \in \mathbb R_+$, let $T \in \mathbb{D}$ be such that $T > t$; a combination of the right-continuity of both $Y$ and the filtration $(\mathcal{F}_t)_{t \in \mathbb R_+}$, the supermartingale property of $(\widehat{Y}_t)_{t \in \mathbb{D}}$, and L\'evy's martingale convergence Theorem, gives $\mathbb{E}[\widehat{Y}_T \ | \ \mathcal{F}_t] \leq Y_t$. Since $\mathbb{P}[\widehat{Y}_T > 0] = 1$, we obtain $\mathbb{P}[Y_t > 0] = 1$. Right-continuity of the filtration $(\mathcal{F}_t)_{t \in \mathbb R_+}$, coupled with \eqref{eq: Yhat is supermart defl}, implies that $\mathbb{E} [Y_t X_t \ | \ \mathcal{F}_s] \leq Y_s X_s$ for all $\mathbb R_+ \ni s \leq t \in \mathbb R_+$ and $X \in \bigcup_{k=1}^\infty \mathcal{X}_{\mathsf{s}}^k$. In particular, $Y$ is a c\`adl\`ag \ nonnegative supermartingale; since $\mathbb{P}[Y_t > 0] = 1$ holds for all $t \in \mathbb R_+$, we conclude that $\mathbb{P}[Y_t > 0, \, \forall t \in \mathbb R_+] = 1$.
Of course, $1 \in \mathcal{X}_{\mathsf{s}}^k$ and $S^i \in \mathcal{X}_{\mathsf{s}}^k$ hold for all $k \in \mathbb N$ and $i \in \set{1, \ldots, d}$. It follows that $Y$ is a supermartingale, as well as that $Y S^i$ is a supermartingale for all $i \in \set{1, \ldots, d}$. In particular, $Y$ and $Y S = (Y S^i)_{i \in \set{1, \ldots, d}}$ are semimartingales. Consider any $X^{x, \theta}$ in the notation of \eqref{eq: Xhat}. Using the integration-by-parts formula, we obtain
\[
Y X^{x, \theta} = x + \int_0^\cdot \pare{ X^{x, \theta}_{t-} - \inner{\theta_t}{S_{t-}}} \, \mathrm d Y_t + \int_0^\cdot \inner{\theta_t}{\, \mathrm d (Y_t S_t)}.
\]
If $X^{x, \theta} \in \mathcal{X}_{\mathsf{s}}(x)$, we have $X^{x, \theta}_{-} - \inner{\theta}{S_-} \geq 0$, as well as $\theta^i \geq 0$ for $i \in \set{1, \ldots, d}$. Then, the supermartingale property of $Y$ and $Y S^i$, $i \in \set{1, \ldots, d}$, gives that $Y X^{x, \theta}$ is a supermartingale. Therefore, $Y \in \mathcal{Y}_{\mathsf{s}}$, i.e., $\mathcal{Y}_{\mathsf{s}} \neq \emptyset$.
\end{proof}
\begin{proof}[$(ii) \Rightarrow (i)$]
Let $Y \in \mathcal{Y}_{\mathsf{s}}$, and fix $T \in \mathbb R_+$. Then, $\sup_{X \in \mathcal{X}_{\mathsf{s}} (1)} \mathbb{E}[Y_T X_T] \leq 1$. In particular, the set $\set{Y_T X_T \ | \ X \in \mathcal{X}_{\mathsf{s}} (1)}$ is bounded in probability. Since $\mathbb{P}[Y_T > 0] = 1$, the set $\set{X_T \ | \ X \in \mathcal{X}_{\mathsf{s}} (1)}$ is bounded in probability as well. An invocation of Proposition \ref{prop: NAone iff NUPBR} finishes the argument.
\end{proof}
\begin{proof}[$(ii) \Rightarrow (iii)$]
Let $Y \in \mathcal{Y}_{\mathsf{s}}$. Since $S^i \in \mathcal{X}_{\mathsf{s}}$, $Y S^i$ is a supermartingale, thus a semimartingale, for all $i \in \set{1, \ldots, d}$. Also, the fact that $Y > 0$ and It\^o's formula give that $1 / Y$ is a semimartingale. Therefore, $S^i = (1 / Y) (Y S^i)$ is a semimartingale for all $i \in \set{1, \ldots, d}$. Furthermore, since $Y S^i$ is a nonnegative supermartingale, we have $Y_t S^i_t = 0$ for all $t \geq \zeta^{S^i}$ on $\{\zeta^{S^i} < \infty \}$, for $i \in \set{1, \ldots, d}$. Now, using $Y > 0$ again, we obtain that $S^i_t = 0$ holds for all $t \geq \zeta^{S^i}$ on $\{ \zeta^{S^i} < \infty \}$. In other words, each $S^i$, $i \in \set{1, \ldots, d}$, cannot revive from bankruptcy.
\end{proof}
\begin{proof}[$(iv) \Rightarrow (i)$]
Since $S$ is a semimartingale, we can consider continuous-time trading. For $x \in \mathbb R_+$, let $\mathcal{X} (x)$ be the set of all wealth processes $X^{x, \theta} := x + \int_0^\cdot \inner{\theta_t}{\, \mathrm d S_t}$, where $\theta$ is $d$-dimensional, predictable and $S$-integrable, ``$\int_0^\cdot \inner{\theta_t}{\, \mathrm d S_t}$'' denotes a vector stochastic integral, $X^{x, \theta} \geq 0$ and $0 \leq \inner{\theta}{S_-} \leq X_-^{x, \theta}$. (Observe that the qualifying subscript ``$\mathsf{s}$'' of $\mathcal{X}_{\mathsf{s}}(x)$, denoting simple trading, has been dropped, since we are now considering continuous-time trading.) Of course, $\mathcal{X}_{\mathsf{s}}(x) \subseteq \mathcal{X}(x)$. We shall show in the next paragraph that $\set{X_T \ | \ X \in \mathcal{X}(1)}$ is bounded in probability for all $T \in \mathbb R_+$, therefore establishing condition NA$1_{\mathsf{s}}$, in view of Proposition \ref{prop: NAone iff NUPBR}.
For all $i \in \set{1, \ldots, d}$, write $S^i = S^i_0 \mathcal E(R^i)$, where $R^i$ is a semimartingale with $R^i_0 = 0$. Let $R := (R^i)_{i=1, \ldots, d}$. It is straightforward to see that $\mathcal{X} (1)$ coincides with the class of all processes of the form $\mathcal E \pare{ \int_0^\cdot \inner{\pi_t}{\, \mathrm d R_t} }$, where $\pi$ is predictable and takes values in the $d$-dimensional simplex $\triangle^d := \big\{ z = (z^i)_{i =1, \ldots, d} \in \mathbb R^d \ | \ z^i \geq 0 \text{ for } i=1, \ldots, d, \text{ and } \sum_{i=1}^d z^i \leq 1 \big\}$. Since, for all $T \in \mathbb R_+$,
\[
\log \pare{\mathcal E \pare{ \int_0^T \inner{\pi_t}{\, \mathrm d R_t} }} \leq \int_0^T \inner{\pi_t}{\, \mathrm d R_t}
\]
holds for all $\triangle^d$-valued and predictable $\pi$, it suffices to show the boundedness in probability of the class of all $\int_0^T \inner{\pi_t}{\, \mathrm d R_t}$, where $\pi$ ranges in all $\triangle^d$-valued and predictable processes. Write $R = B + M$, where $B$ is a process of finite variation and $M$ is a local martingale with $|\Delta M^i| \leq 1$, $i \in \set{1, \ldots, d}$. Then, $\int_0^T |\inner{\pi_t}{\, \mathrm d B_t}| \leq \sum_{i =1}^d \int_0^T |\, \mathrm d B^i_t| < \infty$. This establishes the boundedness in probability of the class of all $\int_0^T \inner{\pi_t}{\, \mathrm d B_t}$, where $\pi$ ranges in all $\triangle^d$-valued and predictable processes. We have to show that the same holds for the class of all $\int_0^T \inner{\pi_t}{\, \mathrm d M_t}$, where $\pi$ is $\triangle^d$-valued and predictable. For $k \in \mathbb N$, let $\tau^k := \inf \{ t \in \mathbb R_+ \ | \ \sum_{i =1}^d [M^i, M^i]_t \geq k \} \wedge T$. Note that $[ M^i, M^i]_{\tau^k} = [ M^i, M^i]_{\tau^k-} + |\Delta M^i_{\tau^k}|^2 \leq k + 1$ holds for all $i \in \set{1, \ldots, d}$. Therefore, using the notation $\normtwo{\eta} := \sqrt{\mathbb{E}[|\eta|^2]}$ for a random variable $\eta$, we obtain
\[
\normtwo{\int_0^{\tau^k} \inner{\pi_t}{\, \mathrm d M_t}} \leq \sum_{i=1}^d \normtwo{ \int_0^{\tau^k} \pi^i_t \, \mathrm d M^i_t } \leq \sum_{i=1}^d \normtwo{ \sqrt{[ M^i, M^i]_{\tau^k}}} \leq d \sqrt{k + 1}.
\]
Fix $\epsilon > 0$. Let $k = k (\epsilon)$ be such that $\mathbb{P}[\tau^k < T] < \epsilon / 2$, and also let $\ell := d \sqrt{2 (k + 1) / \epsilon} $. Then,
\[
\mathbb{P} \bra{ \int_0^T \inner{\pi_t}{\, \mathrm d M_t} > \ell} \leq \mathbb{P} \bra{\tau^k < T} + \mathbb{P} \bra{ \int_0^{\tau^k} \inner{\pi_t}{\, \mathrm d M_t} > \ell} \leq \frac{\epsilon}{2} + \abs{\frac{\normtwo{ \int_0^{\tau^k} \inner{\pi_t}{\, \mathrm d M_t}}}{\ell}}^2 \leq \epsilon.
\]
The last estimate is uniform over all $\triangle^d$-valued and predictable $\pi$. We have, therefore, established the boundedness in probability of the class of all $\int_0^T \inner{\pi_t}{\, \mathrm d M_t}$, where $\pi$ ranges in all $\triangle^d$-valued and predictable processes. This completes the proof.
\end{proof}
\subsection{Proof of Theorem \ref{thm: FTAP_jump}, statement $(2)$.}
In view of statement (1) of Theorem \ref{thm: FTAP_jump}, we only need to show the validity of $(iii) \Leftrightarrow (iv)$ under the extra assumption of statement (2). This equivalence is really Proposition 2.2 in \cite{MR2322919}, but we present the few details for completeness.
For the implication $(iii) \Rightarrow (iv)$, simply define $R^i := \int_0^\cdot (1 / S^i_{t-}) \, \mathrm d S^i_t$ for $i \in \set{1, \ldots, d}$. The latter process is a well-defined semimartingale because, for each $i \in \set{1, \ldots, d}$, $S^i$ is a semimartingale, $S^i_-$ is locally bounded away from zero on the stochastic interval $\dbra{0, \zeta^{S^i}}$, and $S^i = 0$ on $\dbraco{\zeta^{S^i}, \infty}$.
Now, for $(iv) \Rightarrow (iii)$, it is clear that $S$ is a semimartingale. Furthermore, for all $i \in \set{1, \ldots, d}$, $S^i$ cannot revive from bankruptcy; this follows because stochastic exponentials stay at zero once they hit zero.
\qed
\section{On and Beyond the Main Result} \label{sec: beyond main result}
\subsection{Comparison with the result of Delbaen and Schachermayer} \label{subsubsec: compar with DS}
Theorem 7.2 of the seminal paper \cite{MR1304434} establishes the semimartingale property of $S$ under condition NFLVR for simple admissible strategies, coupled with a local boundedness assumption on $S$ (always together with the c\`adl\`ag \ property and adaptedness). The assumptions of Theorem \ref{thm: FTAP_jump} are different from the ones in \cite{MR1304434}. Condition NA$1_{\mathsf{s}}$ \ (valid for simple, no-short-sales trading) is weaker than NFLVR for simple admissible strategies. Furthermore, local boundedness from above is not required in our context, but we do require that each $S^i$, $i \in \set{1, \ldots, d}$, is nonnegative. In fact, as we shall argue in \S \ref{subsubsec: local bdd from below} below, nonnegativity of each $S^i$, $i \in \set{1, \ldots, d}$, can be weakened to local boundedness from below, indeed making Theorem \ref{thm: FTAP_jump} a generalization of Theorem 7.2 in \cite{MR1304434}. Note that if the components of $S$ are unbounded both above and below, not even condition NFLVR is enough to ensure the semimartingale property of $S$; see Example 7.5 in \cite{MR1304434}.
\smallskip
Interestingly, and in contrast to \cite{MR1304434}, the proof of Theorem \ref{thm: FTAP_jump} provided here does \emph{not} use the deep Bichteler-Dellacherie theorem on the characterization of semimartingales as ``good integrators'' (see \cite{MR1906715}, \cite{MR2273672}, where one \emph{starts} by defining semimartingales as good integrators and obtains the classical definition as a by-product). Actually, and in view of Proposition \ref{prop: NAone iff NUPBR}, statement (2) of Theorem \ref{thm: FTAP_jump} can be seen as a ``multiplicative'' counterpart of the Bichteler-Dellacherie theorem. Its proof exploits two simple facts: (a) positive supermartingales are semimartingales, which follows directly from the Doob-Meyer decomposition theorem; and (b) reciprocals of strictly positive supermartingales are semimartingales, which is a consequence of It\^o's formula. Crucial in the proof is also the concept of the num\'eraire \ portfolio.
\subsection{The semimartingale property of $S$ when each $S^i$, $i \in \set{1, \ldots, d}$, is locally bounded from below} \label{subsubsec: local bdd from below}
As mentioned previously, implication $(i) \Rightarrow (iii)$ actually holds even when each $S^i$, $i \in \set{1, \ldots, d}$, is locally bounded from below, which we shall establish now. We still, of course, assume that each $S^i$, $i \in \set{1, \ldots, d}$, is adapted and c\`adl\`ag. Since ``no-short-sales'' strategies have ambiguous meaning when asset prices can become negative, we need to make some changes in the class of admissible wealth processes. For $x \in \mathbb R_+$, let $\mathcal{X}_{\mathsf{s}}'(x)$ denote the class of all wealth processes $X^{x, \theta}$ using simple trading as in \eqref{eq: Xhat} that satisfy $X^{x, \theta} \geq 0$. Further, set $\mathcal{X}_{\mathsf{s}}' = \bigcup_{x \in \mathbb R_+} \mathcal{X}_{\mathsf{s}}'(x)$. Define condition NA$1'_{\mathsf{s}}$ \ for the class $\mathcal{X}_{\mathsf{s}}'$ in the obvious manner, replacing ``$\mathcal{X}_{\mathsf{s}}$'' with ``$\mathcal{X}_{\mathsf{s}}'$'' throughout in \S \ref{subsec: UPBR}. Assume then that condition NA$1'_{\mathsf{s}}$ \ holds. To show that $S$ is a semimartingale, it is enough to show that $(S_{\tau^k \wedge t})_{t \in \mathbb R_+}$ is a semimartingale for each $k \in \mathbb N$, where $(\tau^k)_{k \in \mathbb N}$ is a localizing sequence such that $S^i \geq -k$ on $\dbra{0, \tau^k}$ for all $i \in \set{1, \ldots, d}$ and $k \in \mathbb N$. In other words, we might as well assume that $S^i \geq - k$ for all $i \in \set{1, \ldots, d}$. Define $\tilde{S}^i := k + S^i$; then, $\tilde{S}^i$ is nonnegative for all $i \in \set{1, \ldots, d}$. Let $\tilde{S} = (\tilde{S}^i)_{i \in \set{1, \ldots, d}}$. If $\tilde{\mathcal{X}_{\mathsf{s}}}$ is (in self-explanatory notation) the collection of all wealth processes resulting from simple, no-short-sales strategies investing in $\tilde{S}$, it is straightforward that $\tilde{\mathcal{X}_{\mathsf{s}}} \subseteq \mathcal{X}_{\mathsf{s}}'$. Therefore, NA$1_{\mathsf{s}}$ \ holds for simple, no-short-sales strategies investing in $\tilde{S}$; using implication $(i) \Rightarrow (iii)$ in statement (1) of Theorem \ref{thm: FTAP_jump}, we obtain the semimartingale property of $\tilde{S}$. The latter is of course equivalent to $S$ being a semimartingale.
\smallskip
One might wonder why we do not simply ask from the outset that each $S^i$, $i \in \set{1, \ldots, d}$, is locally bounded from below, since it certainly contains the case where each $S^i$, $i \in \set{1, \ldots, d}$, is nonnegative. The reason is that restricting trading to no-short-sales strategies (which we can do when each $S^i$, $i \in \set{1, \ldots, d}$, is nonnegative) enables us to be as general as possible in extracting the semimartingale property of $S$ from the NA$1_{\mathsf{s}}$ \ condition. Consider, for example, the discounted asset-price process given by $S = a \mathbb{I}_{\dbraco{0,1}} + b \mathbb{I}_{\dbraco{1,\infty}}$, where $a > 0$ and $b \in \mathbb R_+$ with $a \neq b$. This is a \emph{really} elementary example of a nonnegative semimartingale. Now, if we allow for any form of simple trading, as long as it keeps the wealth processes nonnegative, it is clear that condition NA$1'_{\mathsf{s}}$ \ will fail (since it is known that at time $t = 1$ there will be a jump of size $(b - a) \in \mathbb R \setminus \set{0}$ in the discounted asset-price process). On the other hand, if we only allow for no-short-sales strategies, NA$1_{\mathsf{s}}$ \ will hold --- this is easy to see directly using Proposition \ref{prop: NAone iff NUPBR}, since $X_T \leq |b - a| / a$ for all $T \geq 1$ and $X \in \mathcal{X}_{\mathsf{s}}(1)$. Therefore, we can conclude that $S$ is a semimartingale using implication $(i) \Rightarrow (iii)$ in statement (1) of Theorem \ref{thm: FTAP_jump}. (Of course, one might argue that there is no need to invoke Theorem \ref{thm: FTAP_jump} for the simple example here. The point is that allowing for all nonnegative wealth processes results in a rather weak sufficient criterion for the semimartingale property of $S$.)
\subsection{The semimartingale property of $S$ via bounded indirect utility}
There has been previous work in the literature obtaining the semimartingale property of $S$ using the finiteness of the value function of a utility maximization problem via use of only simple strategies --- see, for instance, \cite{MR2139030}, \cite{MR2157199}, \cite{LarZit05}. In all cases, there has been an assumption of local boundedness (or even continuity) on $S$. We shall offer a result in the same spirit, dropping the local boundedness requirement. We shall assume \emph{either} that discounted asset-price processes are nonnegative and \emph{only} no-short-sales simple strategies are considered (which allows for a sharp result), \emph{or} that discounted asset-price processes are locally bounded from below. In the latter case, Proposition \ref{prop: semimart via finite expec util} that follows is a direct generalization of the corresponding result in \cite{MR2139030}, where the authors consider locally bounded (both above and below) discounted asset-price processes. In the statement of Proposition \ref{prop: semimart via finite expec util} below, we use the notation $\mathcal{X}_{\mathsf{s}}'(x)$ introduced previously in \S \ref{subsubsec: local bdd from below}.
\begin{prop} \label{prop: semimart via finite expec util}
Let $S = (S^i)_{i =1, \ldots, d}$ be such that $S^i$ is an adapted, c\`adl\`ag \ process for $i \in \set{1, \ldots, d}$. Also, let $U: \mathbb R_+ \to \mathbb R \cup \{ - \infty \}$ be a nondecreasing function with $U > - \infty$ on $]0 , \infty]$ and $U(\infty) = \infty$. Fix some $x > 0$. Finally, let $T$ be a finite stopping time. Assume that either:
\begin{itemize}
\item each $S^i$, $i \in \set{1, \ldots, d}$, is nonnegative and $\sup_{X \in \mathcal{X}_{\mathsf{s}}(x)} \mathbb{E}[U(X_T)] < \infty$, or
\item each $S^i$, $i \in \set{1, \ldots, d}$, is locally bounded from below and $\sup_{X \in \mathcal{X}_{\mathsf{s}}'(x)} \mathbb{E}[U(X_T)] < \infty$.
\end{itemize}
Then, the process $(S_{T \wedge t})_{t \in \mathbb R_+}$ is a semimartingale.
\end{prop}
\begin{proof}
Start by assuming that each $S^i$, $i \in \set{1, \ldots, d}$, is nonnegative and that $\sup_{X \in \mathcal{X}_{\mathsf{s}}(x)} \mathbb{E}[U(X_T)] < \infty$. Since we only care about the semimartingale property of $(S_{T \wedge t})_{t \in \mathbb R_+}$, assume without loss of generality that $S_t = S_{T \wedge t}$ for all $t \in \mathbb R_+$. Suppose that condition NA$1_{\mathsf{s}}$ \ fails. According to Proposition \ref{prop: NAone iff NUPBR} and Remark \ref{rem: NAone for stop times}, there exists a sequence $(\widetilde{X}^n)_{n \in \mathbb N}$ of elements in $\mathcal{X}_{\mathsf{s}}(x)$ and $p > 0$ such that $\mathbb{P} [\widetilde{X}_T^n > 2 n] \geq p$ for all $n \in \mathbb N$. For all $n \in \mathbb N$, let $X^n := (x + \widetilde{X}^n)/2 \in \mathcal{X}_{\mathsf{s}}(x)$. Then, $\sup_{X \in \mathcal{X}_{\mathsf{s}}(x)} \mathbb{E}[U(X_T)] \geq \liminf_{n \to \infty} \mathbb{E}[U(X_T^n)] \geq (1 - p) U(x/2) + p \liminf_{n \to \infty} U(n) = \infty$. This contradicts $\sup_{X \in \mathcal{X}_{\mathsf{s}}(x)} \mathbb{E}[U(X_T)] < \infty$. We conclude that $(S_{T \wedge t})_{t \in \mathbb R_+}$ is a semimartingale using implication $(i) \Rightarrow (iii)$ in statement (1) of Theorem \ref{thm: FTAP_jump}.
Under the assumption that each $S^i$, $i \in \set{1, \ldots, d}$ is locally bounded from below and that $\sup_{X \in \mathcal{X}_{\mathsf{s}}'(x)} \mathbb{E}[U(X_T)] < \infty$, the proof is exactly the same as the one in the preceding paragraph, provided that one replaces ``$\mathcal{X}_{\mathsf{s}}$'' with ``$\mathcal{X}_{\mathsf{s}}'$'' throughout, and uses the fact that condition NA$1'_{\mathsf{s}}$ \ for the class $\mathcal{X}_{\mathsf{s}}'$ implies the semimartingale property for $S$, as was discussed in \S \ref{subsubsec: local bdd from below}.
\end{proof}
\subsection{On the implication $(iii) \Rightarrow (i)$ in Theorem \ref{thm: FTAP_jump}}
If we do not require the additional assumption on $S$ in statement (2) of Theorem \ref{thm: FTAP_jump}, implication $(iii) \Rightarrow (i)$ might fail. We present below a counterexample where this happens.
On $(\Omega, \mathcal{F}, \mathbb{P})$, let $W$ be a standard, one-dimensional Brownian motion (with respect to its own natural filtration --- we have not defined $(\mathcal{F}_t)_{t \in \mathbb R_+}$ yet). Define the process $\xi$ via $\xi_t := \exp(- t / 4 + W_t)$ for $t \in \mathbb R_+$. Since $\lim_{t \to \infty} W_t / t = 0$, $\mathbb{P}$-a.s., it is straightforward to check that $\xi_\infty := \lim_{t \to \infty} \xi_t = 0$, and actually that $\int_0^\infty \xi_t \, \mathrm d t < \infty$, both holding $\mathbb{P}$-a.s. Write $\xi = A + M$ for the Doob-Meyer decomposition of the continuous submartingale $\xi$ under its natural filtration, where $A = (1 / 4) \int_0^\cdot \xi_t \, \mathrm d t$ and $M = \int_0^\cdot \xi_t \, \mathrm d W_t$. Due to $\int_0^\infty \xi_t \, \mathrm d t < \infty$, we have $A_\infty < \infty$ and $[M, M]_\infty = \int_0^\infty |\xi_t|^2 \, \mathrm d t < \infty$, where $[M, M]$ is the quadratic variation process of $M$. In the terminology of \cite{MR2126973}, $\xi$ is a semimartingale up to infinity. If we define $S$ via $S_t = \xi_{t / (1 - t)}$ for $t \in [0, 1[$ and $S_t = 0$ for $t \in [1, \infty[$, then $S$ is a nonnegative semimartingale. Define $(\mathcal{F}_t)_{t \in \mathbb R_+}$ to be the augmentation of the natural filtration of $S$. Observe that $\zeta^S = 1$ and $S_{\zeta^S -} = 0$; the condition of statement (2) of Theorem \ref{thm: FTAP_jump} is not satisfied. In order to establish that NA$1_{\mathsf{s}}$ \ fails, and in view of Proposition \ref{prop: NAone iff NUPBR}, it is sufficient to show that $\set{X_1 \ | \ X \in \mathcal{X}_{\mathsf{s}}(1)}$ is not bounded in probability. Using continuous-time trading, define a wealth process $\widehat{X}$ via $\widehat{X}_0 = 1$ and the dynamics $\, \mathrm d \widehat{X}_t / \widehat{X}_t = (1 / 4) (\, \mathrm d S_t / S_t)$ for $t \in [0,1[$. Then, $\widehat{X}_t = \exp \pare{ (1/16) (t / (1-t)) + (1/4) W_{t / (1-t)}}$ for $t \in [0,1[$,
which implies that $\mathbb{P}[\lim_{t \uparrow \uparrow 1} \widehat{X}_t = \infty] = 1$, where ``$t \uparrow \uparrow 1$'' means that $t$ \emph{strictly} increases to $1$. Here, the percentage of investment is $1/4 \in [0,1]$, i.e., $\widehat{X}$ is the result of a no-short-sales strategy. One can then find an approximating sequence $(X^k)_{k \in \mathbb N}$ such that $X^k \in \mathcal{X}_{\mathsf{s}}(1)$ for all $k \in \mathbb N$, as well as $\mathbb{P}[| X^k_{1} - \widehat{X}_{1 - 1 / k} | < 1] > 1 - 1 / k$. (Approximation results of this sort are discussed in greater generality in \cite{MR1971602}.) Then, $(X^k_1)_{k \in \mathbb N}$ is not bounded in probability; therefore, NA$1_{\mathsf{s}}$ \ fails. Of course, in this example implication $(iii) \Rightarrow (iv)$ of Theorem \ref{thm: FTAP_jump} fails as well.
\section{Introduction}
In this paper we describe basic concepts of dynamic programming in terms of categories of optics. The class of models we consider is that of discrete-time Markov decision processes, also known as discrete-time controlled Markov chains. There are classical methods of computing optimal control policies, underlying much of both classical control theory and modern reinforcement learning, known collectively as \emph{dynamic programming}. These are based on two operations that can be interleaved in many different ways: \emph{value improvement} and \emph{policy improvement}. The central idea of this paper is the slogan \emph{value improvement is optic precomposition}, or said differently, \emph{value improvement is a representable functor on optics}.
Given a control problem with state space $X$, a \emph{value function} $V : X \to \R$ represents an estimate of the long-run payoff of following a policy starting from any state, and can be equivalently represented as a costate $V : \binom{X}{\R} \to I$ in a category of optics. Every control policy $\pi$ also induces an optic $\lambda (\pi) : \binom{X}{\R} \to \binom{X}{\R}$. The general idea is that the forwards pass of the optic is a morphism $X \to X$ describing the dynamics of the Markov chain given the policy, and the backwards pass is a morphism $X \otimes \R \to \R$ which, given the current state and the \emph{continuation payoff} (the total payoff from all future stages), returns the payoff of the current stage under the policy plus the discounted continuation payoff.
Given a policy $\pi$ and a value function $V : \binom{X}{\R} \to I$, the costate $\binom{X}{\R} \overset{\lambda (\pi)}\longrightarrow \binom{X}{\R} \overset{V}\longrightarrow I$ is a closer approximation of the value of $\pi$. This is called \emph{value improvement}. Iterating this operation
\[ \ldots \binom{X}{\R} \overset{\lambda (\pi)}\longrightarrow \binom{X}{\R} \overset{\lambda (\pi)}\longrightarrow \binom{X}{\R} \overset{V}\longrightarrow I \]
converges efficiently to the true value function of the policy $\pi$.
Replacing $\pi$ with a new policy that is optimal for its value function is called \emph{policy improvement}. Repeating these steps is known as \emph{policy iteration}, and converges to the optimal policy and value function.
Alternatively, instead of repeating value improvement until convergence before each step of policy improvement, we can also alternate them, giving the composition of optics
\[ \ldots \binom{X}{\R} \overset{\lambda (\pi_2)}\longrightarrow \binom{X}{\R} \overset{\lambda (\pi_1)}\longrightarrow \binom{X}{\R} \overset{V}\longrightarrow I \]
where each policy $\pi_i$ is optimal for the value function to the right of it. This is known as \emph{value iteration}, and also converges to the optimal policy and value function.
For an account of convergence properties of these algorithms, classic textbooks are \cite[Sec.6]{Puterman}, \cite[Ch.1]{bertsekas_dpoc_vol2}.
In this paper we illustrate this idea, using mixed optics to account for the categorical structure of transitions in a Markov chain and the convex structure of expected payoffs, which typically form the Kleisli and Eilenberg-Moore categories of a probability monad, respectively. This paper is partially intended as an introduction to dynamic programming for category theorists, focusing on illustrative examples rather than on heavy theory.
\subsection{Related work}
The precursor of this paper was early work on value iteration using open games \cite{hedges_etal_compositional_game_theory}. The idea originally arose around 2016 during discussions of the first author with Viktor Winschel and Philipp Zahn. An early version was planned as a section of \cite{hedges_morphisms_open_games} but cut partly for page limit reasons, and partly because the idea was quite uninteresting until it was understood how to model stochastic transitions in open games \cite{bolt_hedges_zahn_bayesian_open_games} via optics \cite{Riley}. In this paper we have chosen to present the idea without any explicit use of open games, both in order to clarify the essential idea and also to bring it closer to the more recent framework of categorical cybernetics \cite{towards_foundations_categorical_cybernetics}, which largely subsumes open games \cite{capucci_etal_translating_extensive_form}. (Although, actually using this framework properly is left for future work.)
A proof-of-concept implementation of value iteration with open games was done in 2019 by the first author and Wolfram Barfuss\footnote{Source currently available at \url{https://github.com/jules-hedges/open-games-hs/blob/og-v0.1/src/OpenGames/Examples/EcologicalPublicGood/EcologicalPublicGood.hs}}, implementing a model from \cite{barfuss_learning_dynamics} (a model of the social dilemma of emissions cuts and climate collapse as a stochastic game, or jointly controlled MDP) and verifying it against Barfuss' Matlab implementation. A far more advanced implementation of reinforcement learning using open games was developed recently by Philipp Zahn, currently closed-source, and was used for the paper \cite{eschenbaum_etal_robust_algorithmic_collusion}.
The most closely related work to ours is \cite{DJM}, which formulates MDPs in terms of F-lenses \cite{Spivak} of the functor $\operatorname{BiKl}(C\times -,\Delta(\RR\times -))^{\op}$, where $C\times-$ is the reader comonad and $\Delta(\RR\times-)$ is a probability monad over actions with their expected value.
An MDP there is a lens $\binom{X}{\Delta(X\times\R)}\to\binom{O}{I}$ from states, with their potential state changes and rewards, to the agent's observations and inputs.
Our approach differs in two ways.
First, we assume that the readout function is the identity, as we are not dealing with partial observability \cite{POMDP}.
Secondly, we specify a concrete structure of the backwards update map $f^*: X\times I\to \Delta(X\times R)$, which allows us to rearrange the interface of this lens from policies to value functions.
Doing so opens up the possibility of composing these lenses sequentially, which is the heart of the dynamic programming approach explored in this paper.
Another approach is to model MDPs as coalgebras from states to rewards and potential transitions, as done by Feys et al. \cite{LTVMPDs}.
They observe that the Bellman optimality condition for value iteration is a certain coalgebra-to-algebra morphism. We believe that this approach is orthogonal to ours, and both could potentially be done simultaneously. We discuss this in the further work section.
A series of papers by Botta et al.\ (for example \cite{botta13}) formulates dynamic programming in dependent type theory, accounting in a serious way for how different actions can be available in different states, a complication that we ignore in this paper. It may be possible to unify these approaches using dependent optics \cite{braithwaite_etal_fibre_optics,vertechi_dependent_optics}.
Finally, \cite{baez_erbele_categories_control} builds a category of signal flow diagrams, a widely used tool in control theory. Besides the common application to control theory there is little connection to this paper. In particular, time is implicit in their string diagrams, meaning their models have continuous time, whereas our approach is inherently discrete time. Said another way, composition in their category is `space-like' whereas ours is `time-like': their morphisms are (open) systems whereas ours are processes.
\section{Dynamic programming}
\subsection{Markov Decision Processes}
A \emph{Markov decision process} (MDP) consists of a state space $X$, an action space $A$, a state transition function $f: X\times A\to X$, and a utility or reward function $U: X\times A\to \RR$. The state transition function is often taken to be stochastic, that is, to be given by probabilities $f (x' \mid x, a)$. In the stochastic case the utility function can be taken without loss of generality to be an expected utility function. We imagine actions to be chosen by an agent, who is trying to \emph{control} the Markov chain with the objective of optimising the long-run reward.
A \emph{policy} for an MDP is a function $\pi : X \to A$, which can also be taken to be either deterministic or stochastic. The type of policies encodes the Markov property: the choice of action depends only on the current state, and may not depend on any memory of past states.
Given an initial state $x_0 \in X$, a policy $\pi$ determines (possibly stochastically) a sequence of states
\[ x_0,\quad x_1=f(x_0,\pi(x_0)),\quad x_2=f(x_1,\pi(x_1)),\quad \dots \]
The total payoff is given by an infinite geometric sum of individual payoffs for each transition:
\begin{equation} \label{eq:policy_value}
V_\pi(x_0) = \sum_{k=0}^\infty \beta^kU(x_k,\pi(x_k))
\end{equation}
where $0 < \beta < 1$ is a fixed \emph{discount factor} which balances the relevance of present and future payoffs. (There are other methods of obtaining a single objective from an infinite sequence of transitions, such as averaging, but we focus on discounting in this paper.)
A key idea behind dynamic programming is that this geometric sum can be equivalently written as a telescoping sum:
\[ V_\pi (x_0) = U (x_0, \pi (x_0)) + \beta \big( U (x_1, \pi (x_1)) + \beta \big( U (x_2, \pi (x_2)) + \cdots \big) \big) \]
The \emph{control problem} is to choose a policy $\pi$ in order to maximise (the expected value of) $V_\pi (x_0)$.
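As a minimal illustration, the value of a policy can be approximated numerically by truncating the discounted sum \eqref{eq:policy_value} at a finite horizon (a Python sketch; the horizon \texttt{K} and all identifiers are our own, hypothetical choices, and $\pi$, $f$, $U$ are assumed deterministic):
\begin{verbatim}
def policy_value(x0, pi, f, U, beta, K=1000):
    # approximate V_pi(x0) by truncating the discounted sum at horizon K
    total, x = 0.0, x0
    for k in range(K):
        a = pi(x)                    # action chosen by the policy
        total += beta**k * U(x, a)   # discounted stage payoff
        x = f(x, a)                  # deterministic state transition
    return total
\end{verbatim}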
\subsection{Deterministic dynamic programming}
In dynamic programming, the agent's objective of maximizing the overall utility can be divided into two orthogonal goals:
to determine the value of a given policy $\pi$ (which we call the \emph{value improvement} step), and to determine the optimal policy $\pi^*$ (the \emph{policy improvement} step).
Bellman's equation is used as an update rule for both:
\begin{align}
\text{\textbf{Value improvement:}} &&&V'(x) = U(x,\pi(x)) + \beta V(f(x,\pi(x))) \label{eq:value_improvement} \\
\text{\textbf{Policy improvement:}} &&&\pi'(x) = \arg\max_{a \in A} U(x,a) + \beta V(f(x,a)) \label{eq:policy_improvement}
\end{align}
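For a finite, deterministic decision process the two update rules \eqref{eq:value_improvement} and \eqref{eq:policy_improvement} can be transcribed directly into code (a hedged Python sketch; the dictionary encoding of $V$ and $\pi$ is our own choice):
\begin{verbatim}
def value_improvement(V, pi, f, U, beta, states):
    # V'(x) = U(x, pi(x)) + beta * V(f(x, pi(x)))
    return {x: U(x, pi[x]) + beta * V[f(x, pi[x])] for x in states}

def policy_improvement(V, f, U, beta, states, actions):
    # pi'(x) = argmax_a U(x, a) + beta * V(f(x, a))
    return {x: max(actions, key=lambda a: U(x, a) + beta * V[f(x, a)])
            for x in states}
\end{verbatim}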
A Bellman optimality condition, on the other hand, characterizes a fixpoint of these update rules, and is met when $V'=V$ and $\pi'=\pi$, respectively.
The update rule \eqref{eq:value_improvement} is the discounted sum \eqref{eq:policy_value} in which the stream of states is fixed co-recursively by the policy $\pi$ and the transition function $f$.
The co-recursive structure refers to the calculation of the value of a state $x$ requiring the value of the \emph{next} state, whereas in a recursive structure $x$ would need the \emph{previous} state, starting from an initial state as a base case.
Two classical algorithms use these two steps differently:
Policy iteration iterates value improvement until the current policy value is optimal before performing a policy improvement step, and value iteration interleaves both steps one after another.
In policy iteration, an initial value function is chosen (usually $V(x)=0$), and a randomly chosen policy $\pi$ is evaluated by applying \eqref{eq:value_improvement} repeatedly until the value reaches a fixpoint, which is guaranteed because the update is a contraction mapping \cite{denardo_contraction_in_dp}.
Once $V$ reaches (or in practice gets close to) a fixpoint $V'=V$ or another convergence condition, the policy improvement step \eqref{eq:policy_improvement} chooses a greedy policy as an improvement to $\pi$.
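The full loop could then look as follows (a sketch under the same finite encoding as above; \texttt{tol} is a hypothetical convergence tolerance):
\begin{verbatim}
def policy_iteration(f, U, beta, states, actions, tol=1e-9):
    V = {x: 0.0 for x in states}
    pi = {x: actions[0] for x in states}      # arbitrary initial policy
    while True:
        while True:                           # evaluate the current policy
            V_new = {x: U(x, pi[x]) + beta * V[f(x, pi[x])] for x in states}
            done = max(abs(V_new[x] - V[x]) for x in states) < tol
            V = V_new
            if done:
                break
        pi_new = {x: max(actions, key=lambda a: U(x, a) + beta * V[f(x, a)])
                  for x in states}
        if pi_new == pi:                      # greedy policy is stable
            return pi, V
        pi = pi_new
\end{verbatim}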
A $q$-function or \emph{state-action value function} $q_\pi : X \times A \to \R$ describes the value of being in state $x$ and then taking action $a$, assuming that subsequent actions are taken by the policy $\pi$:
\begin{equation}\label{eq:q}
q_\pi(x,a) = U(x,a) + \beta V(f(x,a))
\end{equation}
The \emph{policy improvement theorem} \cite{bellman_1957} states that if a pair of deterministic policies $\pi,\pi': X\to A$ satisfies for all $x\in X$
\[ q_\pi(x,\pi'(x)) \geq V_\pi(x) \]
then $V_{\pi'}(x)\geq V_\pi(x)$ for all $x\in X$.
The optimal policy $\pi^*$, if it exists, is the policy which, if followed from any state, generates the maximum value.
This is a Bellman optimality condition which fuses the two steps \eqref{eq:value_improvement}, \eqref{eq:policy_improvement}:
\begin{equation} \label{eq:Bellman}
V_{\pi^*}(x) = \max_{a \in A} U(x,a) + \beta V_{\pi^*}(f(x,a))
\end{equation}
\emph{Value iteration} is a special policy iteration algorithm insofar as it restricts the value improvement update to a single step, truncating the sum \eqref{eq:policy_value} to its first summand.
Moreover, it folds the value improvement step implicitly into the policy improvement step, which assigns a value to states
\[ V'(x) = \max_{a \in A} U(x,a) + \beta V(f(x,a)) \]
while the policy in each iteration is still recoverable as
\[ \pi'(x) = \arg\max_{a \in A} U(x,a) + \beta V(f(x,a)) \]
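In code, value iteration fuses the two update rules into one sweep (again a non-authoritative Python sketch under the same finite encoding):
\begin{verbatim}
def value_iteration(f, U, beta, states, actions, tol=1e-9):
    V = {x: 0.0 for x in states}
    while True:
        # V'(x) = max_a U(x, a) + beta * V(f(x, a))
        V_new = {x: max(U(x, a) + beta * V[f(x, a)] for a in actions)
                 for x in states}
        if max(abs(V_new[x] - V[x]) for x in states) < tol:
            # recover the greedy policy from the converged values
            pi = {x: max(actions,
                         key=lambda a: U(x, a) + beta * V_new[f(x, a)])
                  for x in states}
            return pi, V_new
        V = V_new
\end{verbatim}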
\subsection{Stochastic dynamic programming}
Stochasticity can be introduced in different places in an MDP:
\begin{enumerate}
\item in the policy $\pi: X\to \Delta A$, where the probability of the policy $\pi$ taking action $a$ in a state $x$ is now notated $\pi(a \mid x)$.
\item in the transition function $f: X\times A\to \Delta X$ and potentially the reward function $U: X\times A\to \Delta\R$ independently.
\item usually the reward is included inside the transition function $f: X\times A\to \Delta(X\times\R)$, allowing correlated next states and rewards.
This is relevant when the reward is conceptually attached to the next state, rather than to the current state and action.
If the reward depended only on the current state and action, the transition function could be decomposed into a function $f: X\times A\to \Delta X \times \Delta \R$.
\end{enumerate}
In this section we assume for simplicity that $\Delta$ is the finite support distribution monad, although the equations in the following can be formulated for arbitrary distributions by replacing the sum with an appropriate integral.
The policy value update rule \eqref{eq:value_improvement} becomes stochastic, and adopts a slightly different form depending on which part of the MDP is stochastic. For the cases 1. and 2.:
\begin{align}
V'(x) &= \sum_a\pi(a \mid x)(U(x,a) + \beta V(f(x,a))) \\
V'(x) &= \sum_r U(r \mid x,a)r + \sum_{x'}f(x' \mid x,a)\beta V(x') \\
\intertext{In the most general case, that is 1. together with 3.:}
V'(x) &= \sum_{a\in A}\pi(a \mid x)\sum_{x',r}f(x',r \mid x,a)(r + \beta V(x')) \label{eq:prediction}
\end{align}
(Note that the sum over $r$ is over the support of $f (- \mid x, a)$, which we assume here to be finite, although in general it can be replaced with an integral.)
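For finite supports, the general update \eqref{eq:prediction} can be transcribed directly (a sketch; we assume a hypothetical encoding of $\pi(a \mid x)$ and $f(x', r \mid x, a)$ as nested dictionaries of probabilities):
\begin{verbatim}
def stochastic_value_improvement(V, pi, f, beta, states):
    V_new = {}
    for x in states:
        v = 0.0
        for a, p_a in pi[x].items():               # pi(a | x)
            for (x2, r), p in f[(x, a)].items():   # f(x', r | x, a)
                v += p_a * p * (r + beta * V[x2])
        V_new[x] = v
    return V_new
\end{verbatim}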
The policy improvement theorem holds in the stochastic setting \cite[Sec.4.2]{RL_Intro} by defining
\[ q_\pi(s,\pi'(s))=\sum_a \pi'(a \mid s)q_\pi(s,a) \]
\subsection{Gridworld example}
A classic example in reinforcement learning is the Gridworld environment, where an agent moves in the four cardinal directions in a rectangular grid.
States of this finite MDP correspond to the positions that the agent can be in.
Assume that all transitions and policies are deterministic, and that the transition function prevents the agent from moving outside the boundary.
Suppose that the environment gives a reward of 0 in all states except the top left corner, where the reward is 1 (see figure \ref{fig:Gridworld}).
\begin{figure}[ht!]
\centering
\begin{tikzpicture}
\tikzstyle{arrow} = [red,-stealth,shorten <=-3pt,shorten >=-3pt]
\tikzstyle{matrixsty} = [matrix of nodes,nodes={inner sep=0pt,text width=.5cm,align=center,minimum height=.5cm}]
\matrix(m0)[matrixsty] at (1,1){
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\};
\matrix(m1)[matrixsty] at (4,1){
1 & 0 & 0 & 0 \\
$\beta$ & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\};
\matrix(m2)[matrixsty] at (7,1){
1 & 0 & 0 & 0 \\
$\beta$ & 0 & 0 & 0 \\
$\beta^2$ & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\};
\matrix(m3)[matrixsty] at (10,1){
1 & 0 & 0 & 0 \\
$\beta$ & 0 & 0 & 0 \\
$\beta^2$ & 0 & 0 & 0 \\
$\beta^3$ & 0 & 0 & 0 \\};
\matrix(m4)[matrixsty] at (13,1){
1 & 0 & 0 & 0 \\
$\beta$ & 0 & 0 & 0 \\
$\beta^2$ & 0 & 0 & 0 \\
$\beta^3$ & 0 & 0 & 0 \\};
\matrix(m5)[matrixsty] at (1,-2){
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\};
\matrix(m6)[matrixsty] at (4,-2){
1 & $\beta$ & 0 & 0 \\
$\beta$ & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\};
\matrix(m7)[matrixsty] at (7,-2){
1 & $\beta$ & $\beta^2$ & 0 \\
$\beta$ & $\beta^2$ & 0 & 0 \\
$\beta^2$ & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\};
\matrix(m8)[matrixsty] at (10,-2){
1 & $\beta$ & $\beta^2$ & $\beta^3$ \\
$\beta$ & $\beta^2$ & $\beta^3$ & 0 \\
$\beta^2$ & $\beta^3$ & 0 & 0 \\
$\beta^3$ & 0 & 0 & 0 \\};
\matrix(m9)[matrixsty] at (13,-2){
1 & $\beta$ & $\beta^2$ & $\beta^3$ \\
$\beta$ & $\beta^2$ & $\beta^3$ & $\beta^4$ \\
$\beta^2$ & $\beta^3$ & $\beta^4$ & 0 \\
$\beta^3$ & $\beta^4$ & 0 & 0 \\};
\foreach \gridx in {0,3,...,12} {
\pgfmathtruncatemacro{\gridxx}{\gridx+2}
\draw[step=.5cm,color=gray] (\gridx,0) grid (\gridxx,2);
\draw[step=.5cm,color=gray] (\gridx,-3) grid (\gridxx,-1);
}
\foreach \gridid in {m0,m1,m2,m3} {
\foreach \row in {1,...,4} {
\foreach \col in {1,...,4} {
\draw[arrow] (\gridid-\row-\col) -- ++ (0,.3cm);
}
}
}
\foreach \col in {1,3,4} {
\foreach \row in {1,...,4} {
\draw[arrow] (m4-\row-\col) -- ++ (0,.3cm);
}
}
\foreach \row in {1,...,4} {
\draw[arrow] (m4-\row-2) -- ++ (-.3cm,0);
}
\foreach \gridid in {m5,m6,m7,m8,m9} {
\foreach \row in {2,3,4} {
\foreach \col in {1,...,4} {
\draw[arrow] (\gridid-\row-\col) -- ++ (0,.3cm);
}
}
}
\foreach \gridid/\col in {m5/1,m5/2,m5/3,m5/4,m6/1,m6/3,m6/4,m7/1,m7/4,m8/1,m9/1} {
\draw[arrow] (\gridid-1-\col) -- ++ (0,.3cm);
}
\foreach \gridid/\col in {m6/2,m7/2,m7/3,m8/2,m8/3,m8/4,m9/2,m9/3,m9/4} {
\draw[arrow] (\gridid-1-\col) -- ++ (-.3cm,0);
}
\draw[->] (m0.east) -- (m1.west) node[above,midway]{$V$};
\draw[->] (m1.east) -- (m2.west) node[above,midway]{$V$};
\draw[->] (m2.east) -- (m3.west) node[above,midway]{$V$};
\draw[->] (m3.east) -- (m4.west) node[below,midway]{$\pi$};
\draw[->] (m4.east) -- ++ (.5cm,0) node[above,midway]{$\cdots$};
\draw[->] (m5.east) -- (m6.west) node[above,midway]{$\pi$} node[below,midway]{$V$};
\draw[->] (m6.east) -- (m7.west) node[above,midway]{$\pi$} node[below,midway]{$V$};
\draw[->] (m7.east) -- (m8.west) node[above,midway]{$\pi$} node[below,midway]{$V$};
\draw[->] (m8.east) -- (m9.west) node[above,midway]{$\pi$} node[below,midway]{$V$};
\draw[->] (m9.east) -- ++ (.5cm,0) node[above,midway]{$\cdots$};
\end{tikzpicture}
\caption{Difference between policy iteration (above) and value iteration (below). The numbers in the cells are state values and the red arrows are the directions dictated by the policy at each stage. The arrows between grids indicate what kind of update the algorithm does, either value improvement ($V$) or policy improvement ($\pi$). Notice how policy iteration performs value improvement three times before updating the policy, whereas value iteration improves the value and the policy at each stage.}
\label{fig:Gridworld}
\end{figure}
Starting with a policy which moves upwards in all states and a value function which rewards 1 only in the top left corner, a policy iteration algorithm improves the value of the current policy until it converges to the optimal values in the leftmost column before updating the policy, whereas a value iteration algorithm updates the value function and the policy at every step.
Take the finite set of positions as the state space $X$, and $A=\{\leftarrow,\rightarrow,\uparrow,\downarrow\}$ as the action space.
This example can be made stochastic if we add stochastic policies like $\epsilon$-greedy, where the action that the agent takes is the one with maximum value with probability $1 - \epsilon$ and a random one with probability $\epsilon$.
Another way is for the transition function to be stochastic, for example with a wind current that shifts the next state to the right with some probability $\epsilon$.
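A possible encoding of this environment, with deterministic moves clamped at the boundary, is sketched below (the indexing convention and the placement of the reward are our own choices):
\begin{verbatim}
STATES  = [(i, j) for i in range(4) for j in range(4)]   # (row, column)
ACTIONS = {'up': (-1, 0), 'down': (1, 0),
           'left': (0, -1), 'right': (0, 1)}

def f(state, action):
    # move in the chosen direction, clamped to stay inside the grid
    (i, j), (di, dj) = state, ACTIONS[action]
    return (min(max(i + di, 0), 3), min(max(j + dj, 0), 3))

def U(state, action):
    # reward 1 only in the top-left corner, 0 elsewhere
    return 1.0 if state == (0, 0) else 0.0
\end{verbatim}
The stochastic wind-current variant would replace \texttt{f} by a finite-support kernel that shifts the resulting state one cell to the right with probability $\epsilon$.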
\subsection{Inverted pendulum example}
A task that illustrates a continuous state space MDP is the control of a pendulum balanced over a cart, which can be described exactly in continuous time by two non-linear differential equations \cite[Example 2E]{control_systems_design}:
\begin{align*}
(M+m)\ddot{y}+mL\ddot{\theta}\cos\theta-mL\dot{\theta}^2\sin\theta &= a \\
mL\ddot{y}\cos\theta+mL^2\ddot{\theta}-mLg\sin\theta &= 0
\end{align*}
where $M$ is the mass of the cart, $m$ the mass of the pendulum, $L$ the length of the pendulum, $\theta$ the angle of the pendulum with respect to the upwards position, $y$ the cart's horizontal position, $g$ the gravitational constant, and $a$ our control function (usually denoted $u$).
We rewrite the state variables as $x=[y,\dot{y},\theta,\dot{\theta}]^\top$.
Sampling the trajectory of continuous-time dynamics $\frac{\d}{\d t}x(t)=f(x(t))$ by $x_k=x(k\Delta t)$, one can define the discrete-time propagator $F_{\Delta t}$ by
\[ F_{\Delta t}(x(t))= x(t) + \int_t^{t+\Delta t} f(x(\tau)) \d\tau \]
which allows one to model the system with $x_{k+1}=F_{\Delta t} (x_k)$.
A more common approach is to observe that the system of equations $\dot{x}=A(x)+B(x)a$, with $A$ and $B$ non-linear functions of the state, can be \emph{linearized} near a (not necessarily stable) equilibrium state, such as the pendulum in the upright position.
There we can assume certain approximations like $\cos \theta\approx 1$ and $\sin\theta\approx \theta$, as well as small velocities leading to negligible quadratic terms $\dot{\theta}^2\approx 0$ and $\dot{y}^2\approx 0$.
This linearization around a fixpoint allows for the expression $\dot{x}=A x(t)+B a(t)$, where the matrix $A$ and vector $B$ are constants given by
\begin{align*}
A &= \begin{pmatrix}
0 & 1 & 0 & 0 \\
0 & 0 & -\frac{mg}{M} & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & \frac{(M+m)g}{ML} & 0
\end{pmatrix} & B &= \begin{pmatrix}
0 \\
\frac{1}{M} \\
0 \\
-\frac{1}{ML}
\end{pmatrix}
\end{align*}
If we assume that the observation of the pendulum angle and cart position is discretized in time, an a priori time-discretization of this model using the Euler approximation is $x_{k+1}=x_k+\Delta t (A x_k+ B a_k)$, with the same constants, where $k$ indexes time steps.
Therefore we can say that the time-discretized, linearized model of the inverted pendulum over a cart follows a deterministic MDP for which a controller $u$ can be learned.
We take the state space as $\R^4$ and the action space, the force exerted on the cart, as $\R$.
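A hedged sketch of the resulting discrete-time model follows (the numerical constants are placeholder values, not taken from any particular physical setup):
\begin{verbatim}
import numpy as np

M, m, L, g, dt = 1.0, 0.1, 0.5, 9.81, 0.02   # placeholder constants

A = np.array([[0.0, 1.0,  0.0,                 0.0],
              [0.0, 0.0, -m * g / M,           0.0],
              [0.0, 0.0,  0.0,                 1.0],
              [0.0, 0.0,  (M + m) * g / (M*L), 0.0]])
B = np.array([0.0, 1.0 / M, 0.0, -1.0 / (M * L)])

def euler_step(x, a):
    # x_{k+1} = x_k + dt * (A x_k + B a_k)
    return x + dt * (A @ x + B * a)
\end{verbatim}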
The time-discretised formulation of this problem is more common in reinforcement learning settings than in `classical' control theory. In that case, a common payoff function gives one unit of reward for each time step that the pendulum is maintained within a threshold of angles. The setting without linearization and time discretization, which is more common in optimal control theory, allows the reward (usually expressed negatively as a cost function $J$) to take a much more flexible form, in terms of time spent reaching the equilibrium, energy spent to control the device, etc.
\[ J(x,a) = \int_0^\infty C(x(t),a(t)) \d t \]
\subsection{Savings problem example}
The \emph{savings problem} is one of the most important models in economics, modelling the dilemma between saving and consumption of resources \cite[part IV]{ljungqvist_sargent_recursive_macroeconomic_theory} (see also \cite{stokey_lucas_recursive_methods_economic_dynamics}). It is also mathematically closely related to the problem of charging a battery, for example choosing when to draw electricity from a power grid to raise the water level in a reservoir \cite{mody_steffen_optimal_charging_electric_vehicles}.
At each discrete time step $k$, an agent receives an income $i_k$. They also have a bank balance $x_k$, which accumulates interest over time (this could also be, for example, an investment portfolio yielding returns). At each time step the agent makes a choice of \emph{consumption}, which means converting their income into utility (or, more literally, things from which they derive utility). If the consumption in some stage is less than their income then the difference is added to the bank balance, and if it is more, the difference is taken from the bank balance. The dilemma is that the agent receives utility only from consumption, but saving gives the possibility of higher consumption later due to interest. The optimal balance between consumption and saving depends on the discount factor, which models the agent's preference between consumption now and consumption in the future.
In the most basic version of the model, all values can be taken as deterministic, and the income $i_k$ can also be taken as constant. This basic model can be expanded in many ways, for example with forecasts and uncertainty about income and interest rates. A straightforward extension, which we will consider in this paper, is that income is normally distributed $i \sim \mathcal N (\mu, \sigma)$, independently in each time step.
We take the state space and action space both as $X = A = [0, \infty)$. Given the current bank balance $x$ and consumption decision $a$, the utility in the current stage is $U (x, a) = \min \{ a, x + i \}$. (That is, the agent's consumption is capped by their current bank balance.) The state transition is given by $f (x, a) = \max \{ (1 + \gamma) x - a + i, 0 \}$, where $\gamma$ is the interest rate.
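A sampled transition of this model might be sketched as follows (parameter values are arbitrary illustrations):
\begin{verbatim}
import random

gamma, mu, sigma = 0.03, 1.0, 0.1     # interest rate, income distribution

def sample_transition(x, a):
    i = random.gauss(mu, sigma)       # income i ~ N(mu, sigma)
    u = min(a, x + i)                 # consumption capped by available funds
    x_next = max((1 + gamma) * x - a + i, 0.0)
    return x_next, u
\end{verbatim}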
\section{Optics}
In this section we recall material on categories of mixed optics, mostly taken from \cite{roman_etal_profunctor_optics}.
\subsection{Categories of optics}
Given a monoidal category $\M$ and a category $\C$, an action of $\M$ on $\C$ is given by a functor $\bullet : \M \times \C \to \C$ with coherence isomorphisms $I \bullet X \cong X$ and $(M \otimes N) \bullet X \cong M \bullet (N \bullet X)$. $\C$ is called an $\M$-actegory.
Given a pair of $\M$-actegories $\C,\D$, we can form the category of optics $\Optic_{\C,\D}$. Its objects are pairs $\binom{X}{X'}$ where $X$ is an object of $\C$ and $X'$ is an object of $\D$. Hom-sets are defined by the coend
\[ \Optic_{\C,\D} \left( \binom{X}{X'}, \binom{Y}{Y'} \right) = \int^{M : \M} \C (X, M \bullet Y) \times \D (M \bullet Y', X') \]
in the category $\Set$. Such a morphism is called an optic, and consists of an equivalence class of triples $(M, f, f')$ where $M$ is an object of $\M$, $f : X \to M \bullet Y$ in $\C$ and $f' : M \bullet Y' \to X'$ in $\D$. We call $M$ the residual, $f$ the forwards pass and $f'$ the backwards pass, so we think of the residual as mediating communication from the forwards pass to the backwards pass. Composition of optics works by taking the monoidal product in $\M$ of the residuals.
A common example is a monoidal category $\M = \C = \D$ acting on itself by the monoidal product, so
\[ \Optic_\C \left( \binom{X}{X'}, \binom{Y}{Y'} \right) = \int^{M : \C} \C (X, M \otimes Y) \times \C (M \otimes Y', X') \]
This is the original definition of optics from \cite{Riley}. If $\C$ is additionally cartesian monoidal then we can eliminate the coend to produce \emph{concrete lenses}:
\begin{align*}
\int^{M : \C} \C (X, M \times Y) \times \C (M \times Y', X') &\cong \int^{M : \C} \C (X, M) \times \C (X, Y) \times \C (M \times Y', X') \\
&\cong \C (X, Y) \times \C (X \times Y', X')
\end{align*}
On the other hand, if $\C$ is monoidal closed then we can eliminate the coend in a different way to produce \emph{linear lenses}:
\begin{align*}
\int^{M : \C} \C (X, M \otimes Y) \times \C (M \otimes Y', X') &\cong \int^{M : \C} \C (X, M \otimes Y) \times \C (M, [Y', X']) \\
&\cong \C (X, [Y', X'] \otimes Y)
\end{align*}
Both of these proofs use the \emph{ninja Yoneda lemma} for coends \cite{loregian-coend-cofriend}.
\begin{example}
Let $\Set$ act on itself by cartesian product. Optics $\binom{X}{X'} \to \binom{Y}{Y'}$ in $\Optic_\Set$ can be written equivalently as pairs of functions $X \to Y$ and $X \times Y' \to X'$, or as a single function $X \to Y \times (Y' \to X')$.
\end{example}
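Concretely, such lenses and their sequential composition can be coded as plain pairs of functions (a Python sketch of the standard lens composition formula, with our own naming):
\begin{verbatim}
def compose(lens1, lens2):
    # lens1 : (X, X') -> (Y, Y'),  lens2 : (Y, Y') -> (Z, Z')
    fwd1, bwd1 = lens1
    fwd2, bwd2 = lens2
    fwd = lambda x: fwd2(fwd1(x))
    bwd = lambda x, r: bwd1(x, bwd2(fwd1(x), r))
    return (fwd, bwd)
\end{verbatim}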
\begin{example}
Let $\Euc$ be the category of Euclidean spaces and smooth functions, which is cartesian but not cartesian closed. Optics $\binom{X}{X'} \to \binom{Y}{Y'}$ in $\Optic_\Euc$ can be written as pairs of smooth functions $X \to Y$ and $X \times Y' \to X'$.
\end{example}
\begin{example}
Let $\Mark$ be the category of sets and finite support Markov kernels, which is the Kleisli category of the finite support probability monad $\Delta : \Set \to \Set$. It is a prototypical example of a Markov category \cite{Synthetic_approach}, and it is neither cartesian monoidal nor monoidal closed. Optics $\binom{X}{X'} \to \binom{Y}{Y'}$ in $\Optic_\Mark$ can only be written as optics; it is not possible to eliminate the coend. This is the setting used for Bayesian open games \cite{bolt_hedges_zahn_bayesian_open_games}.
\end{example}
\begin{example}
Let $\Conv$ be the category of convex sets, which is the Eilenberg-Moore category of the finite support probability monad \cite{fritz09}. A convex set can be thought of as a set with an abstract expectation operator $\mathbb E : \Delta X \to X$. Thus the functor $\Delta : \Mark \to \Conv$ given by $X \mapsto \Delta (X)$ on objects is fully faithful. $\Conv$ has finite products which are given by tupling in the usual way. $\Conv$ also has a closed structure: the convex functions $X \to Y$ themselves form a convex set $[X, Y]$ pointwise. However $\Conv$ is not cartesian closed: instead there is a different monoidal product making it monoidal closed \cite[section 2.2]{sturtz_categorical_probability} (see also \cite{kock_closed_categories_commutative_monads}). This monoidal product ``classifies biconvex maps'' in the same sense that the tensor product of vector spaces classifies bilinear maps. The embedding $\Delta : \Mark \to \Conv$ is strong monoidal for this monoidal product, not for the cartesian product of convex sets.
We can define an action of $\Mark$ on $\Conv$, by $M \bullet X = \Delta (M) \otimes X$ \cite[section 5.5]{capucci_gavranovic_actegories}. Together with the self-action of $\Mark$, we get a category $\Optic_{\Mark,\Conv}$ given by
\begin{align*}
\Optic_{\Mark, \Conv} \left( \binom{X}{X'}, \binom{Y}{Y'} \right) &= \int^{M : \Mark} \Mark (X, M \otimes Y) \times \Conv (\Delta (M) \otimes Y', X') \\
&\cong \int^{M : \Mark} \Mark (X, M \otimes Y) \times \Conv (\Delta (M), [Y', X'])
\end{align*}
(This coend cannot be eliminated because the embedding $\Delta : \Mark \to \Conv$ does not have a right adjoint.)
This category of optics will be very useful for Markov decision processes, where the forwards direction is a Markov kernel and the backwards direction is a function involving expectations.
\end{example}
\subsection{Monoidal structure of optics}
A category of optics $\Optic_{\C,\D}$ is itself (symmetric) monoidal, when $\C$ and $\D$ are (symmetric) monoidal in a way that is compatible with the actions of $\M$. The details of this have been recently worked out in \cite{capucci_gavranovic_actegories}. The monoidal product on objects of $\Optic_{\C,\D}$ is given by pairwise monoidal product. All of the above examples are symmetric monoidal.
A monoidal category of optics comes equipped with a string diagram syntax \cite{hedges_coherence_lenses_open_games}. This has directed arrows representing the forwards and backwards passes, and right-to-left bending wires but not left-to-right bending wires. The residual of the denoted optic can be read off from a diagram, as the monoidal product of the wire labels of all right-to-left bending wires.
For example, a typical optic $(M, f, f') \in \Optic_\C \left( \binom{X}{X'}, \binom{Y}{Y'} \right)$ is denoted by the diagram
\begin{center}
\tikzfig{generic_optic}
\end{center}
These diagrams have only been properly formalised for a monoidal category acting on itself, so for mixed optics we need to be very careful and are technically being informal.
Costates in monoidal categories of optics, that is optics $\binom{X}{X'} \to I$ (where $I = \binom{I}{I}$ is the monoidal unit of $\Optic_{\C,\D}$), are a central theme of this paper. When we have a monoidal category acting on itself, costates in $\Optic_\C$ are given by
\[ \Optic_\C \left( \binom{X}{X'}, I \right) = \int^{M : \C} \C (X, M \otimes I) \times \C (M \otimes I, X') \cong \C (X, X') \]
Thus \emph{costates in optics are functions}. A different way of phrasing this is by defining a functor $K : \Optic_\C^\op \to \Set$ given on objects by $K \binom{X}{X'} = \C (X, X')$, and then showing that $K$ is representable \cite{hedges_morphisms_open_games}. We will generally treat this isomorphism as implicit, sometimes referring to costates as though they are functions.
In the case of a cartesian monoidal category $\C$, given a concrete lens $f : X \to Y$, $f' : X \times Y' \to X'$ and a function $k : Y \to Y'$, the action of $K$ gives us the function $X \to X'$ given by $x \mapsto f' (x, k (f (x)))$.
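In the same concrete representation, the action of $K$ is a one-liner (a sketch; \texttt{precompose} is our own name):
\begin{verbatim}
def precompose(lens, k):
    # pull the costate k : Y -> Y' back along the lens, giving X -> X'
    fwd, bwd = lens
    return lambda x: bwd(x, k(fwd(x)))
\end{verbatim}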
When we have $\M = \C$ acting on both itself and on $\D$ (which includes all of the examples above) then similarly
\[ \Optic_{\C,\D} \left( \binom{X}{X'}, I \right) = \int^{M : \C} \C (X, M \otimes I) \times \D (M \bullet I, X') \cong \D (X \bullet I, X') \]
\section{Dynamic programming with optics}
Given an MDP with state space $X$ and action space $A$, we can convert it to an optic $\binom{X \otimes A}{\R} \to \binom{X}{\R}$. The category of optics in which this lives can be `customised' to some extent, and depends on the class of MDPs that we are considering and how much typing information we choose to include. The definition of this optic is given by the following string diagram:
\begin{center}
\tikzfig{mdp_optic}
\end{center}
To be clear, this diagram is not completely formal because we are making some assumptions about the category of optics we work in. In general, we require the forwards category $\C$ to be a Markov category (giving us copy morphisms $\Delta_X$ and $\Delta_A$), and the backwards category $\D$ must have a suitable object $\R$ together with morphisms $\times \beta : \R \to \R$ and $+ : \R \otimes \R \to \R$. Specific examples of interpretations of this diagram will be explored below. When the forwards category acts on the backwards category, then the forwards pass is a morphism $g : X \otimes A \to X \otimes X \otimes A$ in $\C$ where
\[ g = \Delta_{X \otimes A}\comp (f \otimes \id_{X \otimes A}) \]
and the backwards pass is a morphism $g' : X \bullet A \bullet \R \to \R$ in $\D$ encoding the function $g' (x, a, r) = \mathbb E U (x, a) + \beta r$.
The resulting optic is given by $\lambda = (X \otimes A, g, g') : \binom{X \otimes A}{\R} \to \binom{X}{\R}$ in $\Optic_{\C, \D}$.
Given a policy $\pi : X \to A$, we lift it to an optic $\overline\pi : \binom{X}{\R} \to \binom{X \otimes A}{\R}$, by
\begin{center}
\tikzfig{pi_bar}
\end{center}
Here we are also assuming that the forwards category has a copy morphism $\Delta_X$ (for example, because it is a Markov category), and the backwards category has a suitable object of real numbers. The interpretation of this diagram is the optic $(I, \Delta_X\comp (\id_X \otimes \pi), \id_\R)$.
\subsection{Discrete-space deterministic decision processes}\label{sec:4.1}
Consider a deterministic decision process with a discrete set of states $X$, discrete and finite set of actions $A$, transition function $f : X \times A \to X$, payoff function $U : X \times A \to \R$ and discount factor $\beta \in (0, 1)$. We convert this into an optic $\lambda = (X \times A, g, g') : \binom{X \times A}{\R} \to \binom{X}{\R}$ in $\Optic_\Set$, whose forwards pass is $g (x, a) = (x, a, f (x, a))$ and whose backwards pass is $g' (x, a, r) = U (x, a) + \beta r$.
Consider a dynamical system with the state space $A^X \times \Optic_\Set \left( \binom{X}{\R}, I \right)$. Elements of this are pairs $(\pi, V)$ of a policy $\pi : X \to A$ and a value function $V : X \to \R$. We can define two update steps:
\begin{align*}
\text{\textbf{Value improvement:}}& &&(\pi, V) \mapsto (\pi, \overline\pi\comp \lambda\comp V) \\
\text{\textbf{Policy improvement:}}& &&(\pi, V) \mapsto (x \mapsto \arg\max_{a \in A} (\lambda\comp V) (x, a), V)
\end{align*}
(We assume that $\arg\max$ is canonically defined, for example because $A$ is equipped with an enumeration so that we can always choose the first maximiser.)
Unpacking and applying the isomorphism between costates in lenses and functions, a step of value improvement replaces $V$ with
\[ V' (x) = U (x, \pi (x)) + \beta V (f (x, \pi (x))) \]
and a step of policy improvement replaces $\pi$ with
\[ \pi' (x) = \arg\max_{a \in A} U (x, a) + \beta V (f (x, a)) \]
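The optic picture can be transcribed into the concrete lens representation used earlier (a hedged sketch; unfolding the composite reproduces exactly the two displayed update rules):
\begin{verbatim}
def make_lambda(f, U, beta):
    # forwards: (x, a) |-> f(x, a)
    # backwards: ((x, a), r) |-> U(x, a) + beta * r
    return (lambda xa: f(*xa), lambda xa, r: U(*xa) + beta * r)

def lift_policy(pi):
    # forwards: x |-> (x, pi(x)); backwards pass is the identity on payoffs
    return (lambda x: (x, pi(x)), lambda x, r: r)

def value_improve(V, pi, f, U, beta):
    # the costate  pi_bar ; lambda ; V,
    # i.e. V'(x) = U(x, pi(x)) + beta * V(f(x, pi(x)))
    lam_f, lam_b = make_lambda(f, U, beta)
    pb_f, pb_b = lift_policy(pi)
    return lambda x: pb_b(x, lam_b(pb_f(x), V(lam_f(pb_f(x)))))
\end{verbatim}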
Iterating the value improvement step converges to a value function which is the optimal value function for the current (not necessarily optimal) policy $\pi$.
A fixpoint of alternating steps of value improvement and policy improvement is a pair $(\pi^*, V^*)$ satisfying
\begin{align*}
V^* (x) &= \max_{a \in A} (\lambda\comp V^*) (x, a) = \max_{a \in A} U (x, a) + \beta V^* (f (x, a)) \\
\pi^* (x) &= \arg\max_{a \in A} (\lambda\comp V^*) (x, a) = \arg\max_{a \in A} U (x, a) + \beta V^* (f (x, a))
\end{align*}
\begin{example}
[Gridworld example]
A policy of an agent in our version of Gridworld (Figure \ref{fig:Gridworld}) is a function from the $4\times 4$ set of states $X = \{ 1, 2, 3, 4 \}^2$ that we index by $(i,j)$ to the four-element set of actions $A = \{ \leftarrow, \rightarrow, \uparrow, \downarrow \}$, i.e. an element of $A^X$.
Initializing the value function $V$ with the environment's immediate reward, whose only non-zero value is $V(0,0)=1$ (top-left corner), and the policy with the upwards-facing constant action $\pi(i,j)=\ \uparrow$ for all $(i,j)\in X$, a value improvement step would leave the policy unchanged while updating $V$ to $\overline{\pi}\comp\lambda\comp V$, which differs from $V$ only at $(0,1)\mapsto \beta$.
If we instead perform a policy improvement step, the value function remains unchanged while the new policy differs from $\pi$ at $(1,0)\mapsto \arg\max_{a\in A}(\lambda\comp V)(1,0,a)=\ \leftarrow$.
\end{example}
\subsection{Continuous-space deterministic decision processes}
\begin{example}[Inverted pendulum]
A state of our time-discretized inverted pendulum on a cart consists of $[y,\dot{y},\theta,\dot{\theta}]^\top$ in the state space $X=\R^4$.
The linearized transition function that sends $x_k$ to $x_{k+1}=Ax_k+Ba_k$ is a smooth map $X\times A\to X$.
The discretized cost $J(x,a)=\sum_{k=0}^\infty \beta^k C(x(k),a(k))$ defines the backwards smooth function $X\times A\times\R\to \R$ which adds the cost at the $k$th time step $C(x(k),a(k))$ to the discounted sum:
\[ (x, a, r) \mapsto C (x, a) + \beta r \]
These two maps form an optic $\binom{X\times A}{\R}\to\binom{X}{\R}$ in $\Optic_\Euc$. Note that the cost function $C$ is itself typically not affine, but rather convex (intuitively, since the `good states' that should minimise the cost fall in the middle of the state space).
\end{example}
This formalisation of the continuous state space overlooks, however, a practical problem.
Let $S$ be a continuous state space.
In numerical implementations, policy improvement over $S$ needs to map an action to every point in the space.
Two common approaches are to discretize the state space into a possibly non-uniform grid, or to restrict the space of values to a family of parametrized functions \cite[Sec.4.]{rust_numerical_dp_in_econ}.
The discretization approach treats the continuous state space as a distribution over a simplicial complex $X$ obtained e.g. by triangulation, $\Euc(1,S)\approx \Mark(I,X)$, where a continuous state gets mapped to a distribution over the barycentric coordinates of the simplex.
This effectively transforms the initial continuous-space deterministic decision process into a discrete-space MDP, modelling numerical approximation errors as stochastic uncertainty.
\subsection{Discrete-space Markov decision processes}
Consider a Markov decision process with a discrete set of states $X$, discrete and finite set of actions $A$, a transition Markov kernel $f : X \times A \to \Delta (X)$, expected payoff function $U : X \times A \to \R$ and discount factor $\beta \in (0, 1)$. We can write the transition function as conditional probabilities $f (x' \mid x, a)$.
We can convert this data into an optic $\lambda : \binom{X \otimes A}{\R} \to \binom{X}{\R}$ in the category $\Optic_{\Mark, \Conv}$ given by $\Mark$ acting on both itself and $\Conv$. This optic is given concretely by $(X \otimes A, g, g')$ where $g : X \otimes A \to X \otimes A \otimes X$ in $\Mark$ is given by $\Delta_{X\otimes A}\comp(f\otimes \id_{X\otimes A})$, and $g' : \Delta (X \otimes A) \to [\R, \R]$ in $\Conv$ is defined by $g' (\alpha) (r) = \mathbb E U (\alpha) + \beta r$, where $\alpha \in \Delta (X \times A)$ is a joint distribution on states and actions. Alternatively, we can note that the domain of $g'$ is free on the set $X \times A$ (although it cannot be considered free on an object of $\Mark$), and define it as the linear extension of $g' (x, a) (r) = U (x, a) + \beta r$.
With this setup, value improvement $(\pi, V) \mapsto (\pi, \overline\pi\comp \lambda\comp V)$ yields the value function
\[ V' (x) = \E_{a \sim \pi (x)} [U (x, a) + \beta V (f (x, a))] \]
Alternating steps of value and policy improvement converge to the optimal policy $\pi^*$ and value function $V^*$, which maximises the expected value of the policy:
\[ V^*_{\pi^*} (x_0) = \E \sum_{k = 0}^\infty \beta^k U (x_k, \pi^* (x_k)) \]
\begin{example}[Gridworld, continued]
In a proper MDP, transition functions can be stochastic, and update steps have to take expectations over values: value improvement maps $(\pi, V) \mapsto (\pi, \overline\pi\comp \lambda\comp V)$ and policy improvement maps $(\pi, V) \mapsto (x \mapsto \arg\max_{a \in A} \E (\lambda\comp V) (x, a), V)$.
This model also accepts stochastic policy improvement steps like $\epsilon$-greedy, which is an ad hoc heuristic technique of balancing exploration and exploitation in reinforcement learning \cite[Sec.2]{kaelbling_rl_survey}, a problem which is known in control theory as the identification-control conflict.
\end{example}
\subsection{Continuous-space Markov decision processes}
For continuous-space MDPs we need a category of continuous Markov kernels. There are several possibilities for this arising as the Kleisli category of a monad, such as the Giry monad on measurable spaces \cite{giry82}, the Radon monad on compact Hausdorff spaces \cite{swirszcz_monadic_functors_convexity} and the Kantorovich monad on complete metric spaces \cite{fritz_perrone_probability_monad_colimit}. However, control theorists typically work with more specific parametrised families of distributions for computational reasons, the most common being normal distributions. We will work with the category $\Gauss$ of Euclidean spaces and affine functions with Gaussian noise \cite[section 6]{Synthetic_approach}. (This is an example of a Markov category that does not arise as the Kleisli category of a monad, because its multiplication map would not be affine.) This works because the pushforward measure of a Gaussian distribution along an affine function is still Gaussian, which fails for more general functions.
\begin{example}
We will formulate the savings problem with normally-distributed income. For purposes of example, it is convenient to ignore the restriction that the bank balance cannot become negative, since the binary $\max$ operator is nonlinear and hence does not preserve Gaussians under pushforward measure. We therefore work in the category $\Gauss$, and take the state and action spaces to both be $X = A = \R$. (This technically causes the optimisation problem to become unbounded and degenerate, since there is no reason for the agent to not consume an arbitrarily high amount in every stage, but we ignore this problem.)
$\Gauss$ is a Markov category that is not cartesian (the monoidal product is the cartesian product of Euclidean spaces, which adds the dimensions), so it acts on itself by the monoidal product and we take the category $\Optic_\Gauss$. The transition function $f : \R \otimes \R \to \R$ is given by $f (x, a) = (1 + \gamma) x - a + \mathcal N (\mu, \sigma)$, and the payoff function $U : \R \otimes \R \to \R$ is given by $U (x, a) = a$.
Since we do not have a corresponding Eilenberg-Moore category, the expectation over payoffs is not taken `automatically': a value improvement step $(\pi, V) \mapsto \overline\pi\comp \lambda\comp V$ will produce a function $V' : \R \to \R$ in $\Gauss$ with nonzero variance, even if $V$ always has zero variance.
Finding a more appropriate categorical setting for this example (also allowing for the nonlinear $\min$ in payoffs to bound the optimisation problem) constitutes future work.
\end{example}
\section{Q-learning}
Consider a deterministic decision process corresponding to the optic $\lambda : \binom{X \times A}{\R} \to \binom{X}{\R}$.
The dynamical system with state space $A^X\times \Optic_\Set\left(\binom{X\times A}{\R},I\right)$ has elements $(\pi,q)$, where $q: X\times A\to \R$ is a \emph{state-action} value function as in \eqref{eq:q} rather than a state-value function $V : X \to \R$.
We can define similar update steps
\begin{align*}
\text{\textbf{Value improvement:}}& &&(\pi, q) \mapsto (\pi, \lambda\comp\overline\pi\comp q) \\
\text{\textbf{Policy improvement:}}& &&(\pi, q) \mapsto \left( x \mapsto \arg\max_{a \in A} q (x, a), q \right)
\end{align*}
These can also be fused into a single step:
\begin{align*}
\text{\textbf{State-action value iteration:}} &&&(\pi, q) \mapsto \left( x \mapsto \arg\max_{a \in A} q (x, a), \lambda\comp \overline\pi\comp q \right)
\end{align*}
Observe that composition of the $\lambda$ optic with $\overline\pi$ is flipped compared to the case seen in Section \ref{sec:4.1}, as we want an element of $\Optic_\Set\left(\binom{X\times A}{\R},\binom{X\times A}{\R}\right)$ to compose with $q$.
The advantage of learning state-action value functions $X \otimes A \to \R$ rather than state-value functions $X \to \R$ is that it gives a way to approximate $\arg\max_{a \in A} (\lambda\comp \overline\pi\comp q) (x, a)$ without making any use of $\lambda$, namely by instead using $\arg\max_{a \in A} q (x, a)$. This leads to an effective method known as Q-learning for computing optimal control policies even when the MDP is unknown, with only a single state transition and payoff being sampled at each time step. This is the essential difference between classical control theory and \emph{reinforcement learning}. The above method, despite learning a $q$-function, is \emph{not} Q-learning, because it makes use of $\lambda$ during value improvement.
Q-learning \cite{q_learning} is a sampling algorithm that approximates state-action value iteration, usually via a lookup table $Q$ referred to as the Q-matrix.
It treats the optic as a black box, having therefore no access to the transition or rewards functions used in \eqref{eq:q}, and instead updates $q$ by interacting with the environment dynamics:
\[ q'(x',a) = (1-\alpha)q(x,a)+\alpha(r+\beta \max_{a'}q(x',a')) \]
where $\alpha\in(0,1)$ is a weighting parameter.
Note that both the new state $x'$ and the reward $r$ are obtained by interacting with the system, rather than computed ahead via $x'=f(x,a)$ and $r=U(x,a)$.
It belongs to the family of temporal-difference algorithms.
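A tabular sketch of the update rule displayed above (the environment interaction is abstracted away; all identifiers are our own, hypothetical choices):
\begin{verbatim}
def q_learning_update(Q, x, a, x_next, r, alpha, beta, actions):
    # (x_next, r) are sampled from the black-box environment
    best = max(Q[(x_next, a2)] for a2 in actions)
    Q[(x, a)] = (1 - alpha) * Q[(x, a)] + alpha * (r + beta * best)
    return Q
\end{verbatim}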
\section{Further work}
At the end of the previous section, it can be seen that Q-learning no longer essentially uses the structure of the category of optics, instead treating the Q-function as a mere function. We believe this can be overcome using the framework of categorical cybernetics \cite{towards_foundations_categorical_cybernetics}, leading to a fully optic-based approach to reinforcement learning. By combining with other instantiations of the same framework, we hope to encompass the zoo of modern variants of reinforcement learning that have achieved spectacular success in many applications in the last few years. For example, deep Q-learning represents the Q-function not as a matrix but as a deep neural network, trained by gradient descent, allowing much higher dimensionality to be handled in practice. Deep learning is currently one of the main applications of categorical cybernetics \cite{cruttwell_etal_categorical_foundations_gradient_based_learning}.
The proofs that dynamic programming algorithms converge to the optimal policy and value function typically proceed by noting that the set of all value functions forms a complete ordered metric space and that value improvement is a monotone contraction mapping. The metric structure is used to prove that iteration converges to a unique fixpoint by the contraction mapping theorem, and then the order structure is used to prove that this fixpoint is indeed optimal. Since value improvement is optic composition, these facts would be a special case of the category of optics being enriched in the category of ordered metric spaces and monotone contraction mappings. We do not currently know whether such an enrichment is possible. Unlike costates, general optics have nontrivial forwards passes, so there are two possible approaches: either ignore the forwards passes and define a metric only in terms of the backwards passes, or define a metric also using the forwards passes, for example using the Kantorovich metric between distributions. This would also be a reasonable place to unify our approach with the coalgebraic approach via metric coinduction \cite{LTVMPDs}.
Finally, continuous-time MDPs pose a serious challenge to any approach for which categorical composition is sequencing in time, since composition of two morphisms in a category appears to be inherently discrete-time. (Open games are similarly unable to handle dynamic games with continuous time, such as pursuit games.) A plausible approach to this is to associate an endomorphism in a category to every real interval, by treating that interval of time as a single discrete time-step, and then requiring that all morphisms compose together correctly, similar to a sheaf condition. It is hoped that the Hamilton-Jacobi-Bellman equation, a PDE that is the continuous-time analogue of the discrete-time Bellman equation, will similarly arise as a fixpoint in this way. Exploring this systematically is important future work.
\bibliographystyle{eptcs}
\section{Introduction} \label{intro}
Magnetic multilayers that combine strong spin-orbit interaction with broken inversion symmetry can give rise to the presence of topologically nontrivial spin textures, so-called magnetic skyrmions \cite{Roesler2006, Muhlbauer2009}, at room temperature \cite{Jiang2017}. Several studies have indicated strongly enhanced propagation velocities of skyrmions in antiferromagnets (AFMs) and compensated ferrimagnets \cite{Caretta2018, Zhang2016, Woo2018, Barker2016}. Furthermore, only recently has the stabilization of antiferromagnetic skyrmions in synthetic AFMs been demonstrated experimentally \cite{Legrand2019, Dohi2019, Chen2020}.
Despite the widely acknowledged superiority of such systems over ferromagnets---mainly owing to the fast current-driven motion of skyrmions due to the suppression of the skyrmion Hall effect, and the possible stabilization of very small-sized skyrmions---there still exist a number of challenges with regard to the realization of technological applications based on this particular class of multilayer structures. For instance, a straightforward electrical detection of skyrmions in all types of materials is highly challenging due to the small size of the measurement signal \cite{Wang2019}, while magnetic sensing of these spin textures in systems with vanishing magnetization is naturally impractical.
A promising approach towards the detection and detailed characterization of skyrmion states in (synthetic) antiferromagnetic and compensated ferrimagnetic multilayers is given by harnessing the intrinsic resonances of skyrmions, such as breathing modes, which entail an oscillation of the skyrmion size at characteristic GHz frequencies \cite{Mochizuki2012, Onose2012, Garst2017, Lin2014, Schuette2014}. More specifically, in analogy to previous works on the spectral analysis of topological defects in artificial spin-ice lattices \cite{Gliga2013}, the application of broadband microwave impedance spectroscopy may offer a direct means for skyrmion sensing, that is, ascertaining the presence or absence of skyrmions and possibly even quantifying skyrmion densities. In addition to that, measurements of magnetoresistive or anomalous Hall effect signals modulated by the periodic oscillation of the skyrmion size constitute---in analogy to previous work on magnetic vortices \cite{Lendinez2020, Cui2015}---a promising approach towards electrical skyrmion detection with a high signal-to-noise ratio.
However, in order to reliably exploit the breathing modes for electrical detection or other applications, the effect of magnetic compensation on these dynamic excitations needs to be clarified.
As will be discussed in the present work, the excitation frequencies and magnitudes of breathing modes are highly sensitive to the strength of various competing magnetic interactions prevailing in magnetic multilayers, and thus the experimental detection of these resonant excitations may help to determine characteristic materials parameters such as exchange interactions or anisotropies. In general, besides their utility for skyrmion sensing, breathing modes have also been discussed as potential information carriers in data processing devices \cite{Xing2020, Lin2019, Kim2018, Seki2020}. Furthermore, it was demonstrated that these dynamic excitations are also highly relevant for skyrmion-based magnonic crystals \cite{Zhang2019, Ma2015}. One example is given by the strong coupling of breathing modes in one-dimensional skyrmion lattices in ferromagnetic thin-film nanostrips, where the existence of in-phase and anti-phase modes was demonstrated \cite{Kim2018}. In detail, the propagation of the breathing modes through the nanostrips was shown to be controllable by the strength of the applied magnetic field.
The present work is devoted to skyrmion breathing dynamics in synthetic antiferromagnets composed of ultrathin layers that exhibit a circular-shaped geometry. Previous theoretical studies have demonstrated that the dynamic excitation modes of skyrmions are strongly influenced by the geometry of the considered system. For instance, in the case of an ultrathin circular ferromagnetic dot with a single skyrmion, the breathing modes hybridize with the radial spin-wave eigenmodes of the dot \cite{Kim2014, Mruczkiewicz2017}.
For the case of synthetic AFMs, so far only the gyration modes of skyrmions have been studied by means of micromagnetic simulations \cite{Xing2018}. In this case, the application of time-varying in-plane magnetic fields and the presence of antiferromagnetic coupling can lead to clockwise (CW) and counterclockwise (CCW) rotation modes as well as coupled excitation modes (CW-CW, CCW-CCW, and CW-CCW).
However, breathing modes in synthetic AFMs, which are excited by the application of out-of-plane ac magnetic fields, have not been investigated yet. Even though in Ref.\ \cite{Kravchuk2019} the spin eigenexcitations of a skyrmion in a collinear uniaxial antiferromagnetic thin film were investigated by means of numerical and analytical methods, the dynamic behavior is expected to be different in synthetic AFMs, where the interlayer exchange coupling is much weaker than the direct exchange in crystalline AFMs. In other words, there is a stronger separation of the two magnetic subsystems in synthetic AFMs, which also implies the presence of small dipolar fields \cite{Legrand2019, Duine2018}. More generally, synthetic AFMs can be viewed as materials with properties in between those of AFMs and ferromagnets \cite{Duine2018}.
Lastly, we note that the magnetization dynamics in synthetic AFMs typically exhibits a higher complexity than in ferromagnets. For instance, several studies have demonstrated the existence of resonant optic modes in synthetic AFMs in addition to the conventional acoustic (Kittel) mode in ferromagnets \cite{Waring2020, Khodadadi2017, Sorokin2020}. Due to the complicated dynamics reported for regular synthetic antiferromagnets which do not host skyrmions, it can be expected that the skyrmion breathing modes in such systems will be significantly altered compared to those in ferromagnets.
Building on the numerous existing studies, in the present work we examine the characteristic skyrmion breathing modes in antiferromagnetically exchange-coupled disks using micromagnetic simulations, with the aim of providing guidance for the experimental detection and practical application of these excitations.
\section{Simulation Model and Methods}
\begin{figure}
\centering
\includegraphics[width=8.5 cm]{Fig1.pdf}
\caption{Synthetic AFM structure consisting of two ferromagnetic (FM) and one nonmagnetic (NM) circular-shaped layers with thickness $t_{\mathrm{NM}}=t_{\mathrm{FM}}=1\,$nm and diameter $d=100\,$nm for each disk. Ac and dc magnetic fields are applied along the $z$-direction perpendicular to the layer planes.}
\label{MODEL}%
\end{figure}%
This study focuses on a synthetic antiferromagnetic structure as depicted in Fig.\ \ref{MODEL}. The top and bottom layers are ferromagnetic materials which are separated by a nonmagnetic metallic spacer layer. In analogy to Ref.\ \cite{Xing2018}, the ferromagnetic layers are coupled via the nonmagnetic spacer through a Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction which is modeled by the following energy term \cite{Parkin1991, Bruno1991}:
\begin{equation}\label{eq:RKKY}
E_{\mathrm{RKKY}}=\frac{\sigma}{t_{\mathrm{NM}}}\left(1-\textbf{m}_{\mathrm{t}}\cdot\textbf{m}_{\mathrm{b}}\right).
\end{equation}
Here, $\sigma$ denotes the surface exchange coefficient which depends on the thickness $t_{\mathrm{NM}}$ of the nonmagnetic spacer layer and is assumed to be negative throughout this work, thus implying an antiferromagnetic coupling. Furthermore, $\textbf{m}_{\mathrm{t}}$ and $\textbf{m}_{\mathrm{b}}$ are the unit vectors of magnetization for the top and bottom layer, respectively.
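As a minimal numerical illustration of Eq.\ (\ref{eq:RKKY}) (a hedged Python sketch, not part of the micromagnetic solver input; the parameter values are the ones used below), the volumetric RKKY energy density can be evaluated as:
\begin{verbatim}
import numpy as np

def e_rkky(m_top, m_bottom, sigma=-3e-4, t_nm=1e-9):
    """Volumetric RKKY energy density (J/m^3); sigma < 0 (in J/m^2)
    favors antiparallel alignment of the two layer magnetizations."""
    return (sigma / t_nm) * (1.0 - np.dot(m_top, m_bottom))

# antiparallel configuration, the energy minimum for sigma < 0:
print(e_rkky(np.array([0.0, 0.0, 1.0]),
             np.array([0.0, 0.0, -1.0])))  # -> -600000.0 J/m^3
\end{verbatim}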
The static and dynamic states of the skyrmions in the considered model system were studied by using the Object Oriented MicroMagnetic Framework (\textsc{oommf}) code \cite{Donahue1999, Rohart2013} which carries out a numerical time integration of the Landau-Lifshitz-Gilbert (LLG) equation of motion for the local magnetization:
\begin{equation} \label{eq:LLG}
\frac{\mathrm{d}\textbf{m}}{\mathrm{d}t}=-|\gamma_{0}|\textbf{m}\times \textbf{H}_{\mathrm{eff}}+\alpha \textbf{m}\times \frac{\mathrm{d}\textbf{m}}{\mathrm{d}t}.
\end{equation}
Here, $\gamma_{0}$ denotes the gyromagnetic constant, $\alpha$ is the Gilbert damping parameter, $\textbf{m}$ is the unit vector of the magnetization, and $\textbf{H}_{\mathrm{eff}}$ is the effective magnetic field which is proportional to the derivative of the total micromagnetic energy $U$ with respect to the magnetization. In our model, we assume that $U$ includes the Zeeman energy, the isotropic exchange interaction characterized by the exchange stiffness $A$, demagnetization effects, a uniaxial anisotropy perpendicular to the layers, and an interfacial Dzyaloshinskii-Moriya interaction (DMI)
\begin{equation}
U_{\mathrm{DMI}}=D\left[m_{z}\left(\nabla \cdot \textbf{m}\right)-\left(\textbf{m} \cdot \nabla \right)m_{z} \right],
\end{equation}
where $D$ is the DMI constant specifying the strength of the interaction \cite{Dzyaloshinskii1958, Moriya1960}.
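To make the structure of Eq.\ (\ref{eq:LLG}) concrete, a minimal macrospin integrator is sketched below (our own illustration in Python, assuming a fixed, externally supplied effective field; in the actual simulations \textsc{oommf} evaluates $\textbf{H}_{\mathrm{eff}}$ from all energy terms on the discretized mesh). The right-hand side uses the algebraically equivalent Landau--Lifshitz form of the Gilbert equation.
\begin{verbatim}
import numpy as np

GAMMA0 = 2.211e5  # |gamma_0| in m/(A s), approximately mu_0*|gamma_e|

def llg_rhs(m, h_eff, alpha=0.01):
    """Landau-Lifshitz form of the LLG right-hand side:
    dm/dt = -gamma0/(1+alpha^2) * [m x H + alpha * m x (m x H)]."""
    mxh = np.cross(m, h_eff)
    return -GAMMA0 / (1.0 + alpha**2) * (mxh + alpha * np.cross(m, mxh))

def euler_step(m, h_eff, dt=1e-14, alpha=0.01):
    """One explicit Euler step; |m| = 1 is restored afterwards."""
    m_new = m + dt * llg_rhs(m, h_eff, alpha)
    return m_new / np.linalg.norm(m_new)
\end{verbatim}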
In order to allow for direct comparability with results for the single ferromagnetic dot considered in Ref.\ \cite{Kim2014}, we utilized identical simulation parameters for the ferromagnetic layers studied in this work. Consequently, each of the two layers corresponds to a model thin-film perpendicular anisotropy system with an exchange stiffness $A=15\,$pJ/m, perpendicular anisotropy constant $K_{\mathrm{u}}=1\,$MJ/m$^3$, saturation magnetization $\mu_{0}M_{\mathrm{s}}=1\,$T (unless specified otherwise), layer thickness $t_{\mathrm{FM}}=1\,$nm, and disk diameter $d=100\,$nm. The RKKY-coupling strength through the equally sized nonmagnetic spacer layer is varied from $\sigma=0$ to $\sigma=-3\times 10^{-3}\,$J/m$^2$. The strength of the interfacial DMI is fixed as $D=3\,$mJ/m$^2$ for both ferromagnetic layers. This value is considerably higher than in the case of Ref.\ \cite{Legrand2019}, where $D=0.2$--$0.8\,$mJ/m$^2$, whereas it lies well within the parameter range ($D=2.5$--$4.5\,$mJ/m$^2$) utilized in Ref.\ \cite{Kim2014}. As demonstrated in Refs.\ \cite{Sampaio2013, Kim2014}, higher DMI energy typically leads to larger skyrmion diameters. However, due to the choice of a smaller $M_{\mathrm{s}}$ and larger $A$ than in Ref.\ \cite{Legrand2019}, we expect to achieve comparable skyrmion diameters.
Here, a simulation mesh with $64\times 64\times 3$ finite difference cells is defined, implying a cell size of $1.5625\times 1.5625\times 1\,$nm$^3$.
The initial magnetization state was assumed to be that of one N\'{e}el-type skyrmion in each of the two ferromagnetic layers. The ground state has been determined by relaxing the system for $5\,$ns with a high damping constant of $\alpha = 0.5$, whereby the time evolution of the magnetization was monitored to confirm that an equilibrium state has been reached. For most of the simulations, a dc magnetic field $\mu_{0}H_{\mathrm{dc}}=50\,$mT along the perpendicular $z$-direction has been applied. As will be shown further below, such a small symmetry-breaking dc field solely increases the magnitude of certain dynamic modes, but does not have any impact on the qualitative conclusions that will also be valid for $\mu_{0}H_{\mathrm{dc}}=0\,$mT.
Subsequently, the dynamics of the skyrmions was studied for the application of a spatially uniform time-varying ac magnetic field $H_{\mathrm{ac}}=H_{0}\sin(2\pi f t)$ along the $z$-axis, where $\mu_{0}H_{0}=0.5\,$mT is the amplitude and $f=100\,$GHz the frequency. Simulations were performed for two different scenarios, which will be discussed in more detail in Sec.\ \ref{RESULTS}. In the first case, the ac field was applied across all three layers, while in the second case only one magnetic layer was exposed to the time-varying field. While the first scenario is more realistic with regard to future experimental work, the second case will prove to be instrumental for a general understanding of the skyrmion dynamics in synthetic AFMs. It will be further shown that the qualitative results for both pictures share many similarities.
Another important point is that the obtained results are nearly identical regardless of whether the ac field is applied over the entire simulation time or for only a limited time period after which the data are recorded.
Moreover, we note that there exist further experimental approaches to excite the GHz-range dynamic modes of skyrmions, such as the application of laser or heat pulses \cite{Ogawa2015}, or spin torques. The latter will also be incorporated into the micromagnetic simulations and briefly discussed in the final part of Sec.\ \ref{RESULTS}.
For the skyrmion dynamics simulations the damping parameter has been chosen as $\alpha=0.01$ to ensure a good frequency resolution of the excited modes. The dynamics is simulated for at least $5\,$ns with data taken every $2\,$ps. Eventually, the power spectral density (PSD) of the spatially-averaged $z$-component of the magnetization $\langle m_{z}\rangle (t)$ is calculated by using a fast Fourier transform (FFT) algorithm. As a proof of concept, the results from Ref.\ \cite{Kim2014} for a single ferromagnetic layer had been reproduced before numerical calculations were conducted for the synthetic AFM. In addition to the synthetic AFM trilayer, Sec.\ \ref{RESULTS} also includes a discussion of micromagnetic simulations carried out on a model system for a synthetic ferrimagnet with unbalanced antiparallel moments in the two ferromagnetic layers.
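The post-processing step can be sketched as follows (a hedged illustration assuming, as in our simulations, a sampling interval of $2\,$ps; windowing and normalization conventions are omitted for brevity):
\begin{verbatim}
import numpy as np

def power_spectral_density(mz_avg, dt=2e-12):
    """PSD of the spatially averaged m_z(t): subtract the static
    offset, apply a real FFT, return frequencies (Hz) and |FFT|^2."""
    mz = np.asarray(mz_avg) - np.mean(mz_avg)
    spectrum = np.fft.rfft(mz)
    freqs = np.fft.rfftfreq(len(mz), d=dt)
    return freqs, np.abs(spectrum) ** 2
\end{verbatim}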
\section{Results and Discussion} \label{RESULTS}
\subsection{Static Properties of Skyrmions in Synthetic Antiferromagnets}
\begin{figure}
\centering
\includegraphics[width=8.5 cm]{BrMode_AFM_actop_SkDiametersCopy.pdf}
\caption{Skyrmion core diameter plotted against the absolute value of the antiferromagnetic coupling strength $\sigma$ for the top and bottom layer. Selected skyrmion ground states are shown for $\sigma=0\,$J/m$^2$ and $\sigma=-3\times 10^{-4}\,$J/m$^2$ for the individual layers. A dc magnetic field $\mu_{0}H_{\mathrm{dc}}=50\,$mT was applied along the positive $z$-direction as shown in the top right of the diagram.}
\label{SKSIZE}%
\end{figure}%
As a first step, the ground states were calculated for different antiferromagnetic coupling strengths at a dc magnetic field $\mu_{0}H_{\mathrm{dc}}=50\,$mT. The skyrmion core diameter, here defined as twice the distance from the center ($|m_{z}|=1$) to the point where $m_{z}=0$, is plotted against the coupling strength $|\sigma|$ in Fig.\ \ref{SKSIZE} for the top and the bottom layer. Furthermore, skyrmion states for two selected coupling strengths $\sigma=0\,$J/m$^2$ and $\sigma=-3\times 10^{-4}\,$J/m$^2$ are illustrated. As shown in the top right of the diagram, $H_{\mathrm{dc}}$ was applied along the positive $z$-direction. It can be seen that at high coupling strengths the diameters of both skyrmions are almost identical, $d_{\mathrm{top}}\approx d_{\mathrm{bottom}}\approx 40\,$nm, while towards lower values of $|\sigma|$ the size difference increases rapidly up to $\Delta d=d_{\mathrm{top}}-d_{\mathrm{bottom}}\approx 45\,$nm for $\sigma=0\,$J/m$^2$. This can be explained by the magnetostatic interaction becoming more relevant for the total micromagnetic energy in the case of weaker interlayer exchange coupling. More generally, as a result of the various competing energy terms, the skyrmion size is also highly sensitive to variations of other parameters such as the DMI strength, the exchange stiffness or the saturation magnetization (see discussion further below). Even though the DMI energy utilized in the present study is larger than in the case of Ref.\ \cite{Legrand2019}, due to the differences in other parameters, such as $A$ and $M_{\mathrm{s}}$, we obtain similar values for the skyrmion core diameter in the case of comparable coupling strengths $|\sigma|\approx 2\times 10^{-4}\,$J/m$^2$.
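For completeness, the diameter extraction defined above can be sketched as follows (assuming a radial profile $m_{z}(r)$ sampled from the simulation cells that crosses zero; the crossing point is refined by linear interpolation):
\begin{verbatim}
import numpy as np

def core_diameter(r, mz):
    """Skyrmion core diameter: twice the radius of the first
    m_z = 0 crossing of a radial profile starting at the center."""
    crossings = np.where(np.diff(np.sign(mz)) != 0)[0]
    i = crossings[0]
    # linear interpolation between the samples bracketing m_z = 0
    r0 = r[i] - mz[i] * (r[i + 1] - r[i]) / (mz[i + 1] - mz[i])
    return 2.0 * r0
\end{verbatim}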
\begin{figure}
\centering
\includegraphics[width=8.6 cm]{FFT_Power_BreathingMode_HAC_difflayers.pdf}
\caption{(a) Power spectral density (PSD) of $\langle m_{z}\rangle (t)$ calculated in the top, bottom, and both layers for a selected $\sigma$, where the ac magnetic field with $f=100\,$GHz is applied either across the top layer, bottom layer or both layers. (b) Line shape of mode 1 as a function of the excitation frequency (PSD calculated for both magnetic layers).}
\label{DIFFLAYERS}%
\end{figure}
\subsection{Breathing Modes in Synthetic Antiferromagnets}
An overview of the dynamic response to an applied ac magnetic field with $f=100\,$GHz and amplitude $\mu_{0}H_{0}=0.5\,$mT is given in Fig.\ \ref{DIFFLAYERS}. The calculated spatially-averaged PSD of $\langle m_{z}\rangle (t)$ for the top (black curves), bottom (red curves) and both layers (green curves) in the particular case of $\sigma=-2\times 10^{-4}\,$J/m$^2$ is shown in Fig.\ \ref{DIFFLAYERS}(a) as a function of frequency. The time-varying field is either applied across the top layer, the bottom layer or all layers. There are several distinct features at identical frequencies for all three cases with only minimal differences in the peak heights, except for the lowest-lying excitation at $f=5.5\,$GHz, where in the case of the ac field being applied across all layers the peak amplitude is strongly suppressed. This feature can only be observed in an enlarged plot, or alternatively occurs in a more pronounced way when the frequency of the ac field is reduced until it matches the resonance frequency (not shown). It will be discussed further below that sufficiently high dc magnetic fields lead to an enhanced magnitude of this lowest-lying excitation---even for the case that all layers are exposed to the time-varying magnetic field.
We emphasize that the peak magnitude and line shape of mode 1 strongly depend on the frequency of the ac magnetic field. This can be seen in Fig.\ \ref{DIFFLAYERS}(b) for the case of the ac field applied across the bottom layer. This indicates that the interplay between the first mode and the other resonant modes, which become activated towards higher excitation frequencies, strongly affects the observed line shape due to complicated phase relationships. For instance, utilizing an excitation frequency resonant with mode 2 results in a strong suppression of mode 1 (cf.\ orange curve). Moreover, the activation of further higher-order modes can change the symmetry of the resonance peak. For some excitation frequencies, there is a pronounced antiresonance adjacent to the resonance peak.
\begin{figure}
\centering
\includegraphics[width=8.6 cm]{FFT_Power_BreathingMode_sAFM_contour.pdf}
\caption{(a) Map of the PSD as a function of the interlayer exchange coupling strength $|\sigma|$ and the frequency with an external ac magnetic field applied across the top layer. Only the four lowest-lying resonance modes are shown. (b) Resonance frequency for all eight modes in the entire range of $|\sigma|$. (c) Resonance frequency for the first eight dynamic modes in the uniform ground state of the synthetic AFM without the presence of skyrmions.}
\label{PSDMAP}%
\end{figure}
Figure \ref{PSDMAP}(a) shows a map of the PSD as a function of the antiferromagnetic coupling and the frequency for the scenario that the ac field is only present in the top layer. Notice that only the four lowest-lying resonance modes are depicted. Furthermore, gaps in the individual branches occur due to the discrete nature of the utilized antiferromagnetic coupling strength $\sigma$.
It is evident that the characteristic frequencies of the resonances and antiresonances shift towards higher values upon increasing the interlayer coupling. This is further clarified in Fig.\ \ref{PSDMAP}(b) for all eight resonance modes and the complete range of the antiferromagnetic coupling strength $\sigma$. From this logarithmic representation it is clear that mode 2 displays the strongest increase as a function of $\sigma$. Furthermore, as will be shown further below, the observed modes occur in pairs of interrelated resonances. Lastly, Fig.\ \ref{PSDMAP}(c) displays the resonance frequencies for eight modes in the case of the uniform ground state of the considered synthetic AFM, that is, where no skyrmion is present. The two lowest-lying modes only occur in this magnetization state and exhibit an entirely different dependence on $\sigma$ in comparison to modes $1$ and $2$ in the skyrmion state. In detail, these resonances can be explained by the dynamics at the edges of the circular-shaped synthetic AFM structure, where the magnetization is tilted as a result of the boundary conditions related to the DMI, see Ref.\ \cite{Kim2014}. Such edge modes are also the physical origin of all higher-order resonances. Interestingly, their dependence on the interlayer coupling strength is almost identical to that of modes $3$--$8$ in Fig.\ \ref{PSDMAP}(b). In fact, as will be shown in the following, these edge modes are found to hybridize with the characteristic skyrmion eigenexcitations.
\subsubsection{Visualization of Dynamic Modes}
\begin{figure}
\centering
\includegraphics[width=8.6 cm]{BrM047_2DSnaps_Modes1_2_3.pdf}
\caption{Snapshots of resonance modes 1 to 3 shown in three columns for $\sigma=-3\times 10^{-4}\,$J/m$^2$ and a time-varying magnetic field with the respective excitation frequency applied across the top layer. Two-dimensional contour plots display the difference $\Delta m_{z}$ in the magnetization component $m_{z}$ between the ground state at $t=0$ and an excited state with maximal amplitude $\langle m_{z}\rangle(t_{\mathrm{m}})$ at a selected time $t_{\mathrm{m}}$ for (a) the top and (b) the bottom layer. (c) Absolute value of the time-dependent spatially-averaged magnetization, $|\langle m_{z}\rangle(t)|$, for top and bottom layers with arbitrary scaling of the y-axes. Green circles indicate the selected $t_{\mathrm{m}}$ for the snapshots.}
\label{SNAPSHOT}%
\end{figure}
Hereinafter, the physical origin of the individual modes will be discussed in detail. For this purpose, we consider two-dimensional snapshots of the resonance modes for the particular case of $\sigma=-3\times 10^{-4}\,$J/m$^2$ and an ac magnetic field with the respective mode's excitation frequency, applied only across the topmost layer. Figures \ref{SNAPSHOT}(a) and (b) include contour plots illustrating the difference $\Delta m_{z}$ in the magnetization component $m_{z}$ between the ground state at $t=0$ and an excited state with maximal amplitude $\langle m_{z}\rangle(t_{\mathrm{m}})$ at a selected time $t_{\mathrm{m}}$ for modes 1 to 3 in the top and bottom layer, respectively. For the sake of better comparability of the dynamic excitations in the two antiferromagnetically coupled layers, $m_{z}$ in the bottom layer has been multiplied by a factor of $-1$ at each position. The maximum change in $m_{z}$ is denoted by $+\delta m_{z}$ (red color) and $-\delta m_{z}$ (blue color).
In Fig.\ \ref{SNAPSHOT}(c), the time-dependent spatially-averaged $z$-component of the magnetization, $|\langle m_{z}\rangle(t)|$, for each of the two magnetic layers is shown. Green circles indicate the selected $t_{\mathrm{m}}$ for the snapshots. In the case of mode 1 ($f=6.0\,$GHz), the $m_z$ component in a ring-shaped area increases for both the top and bottom ferromagnetic disks, corresponding to a simultaneously increasing skyrmion diameter in both layers. Therefore, mode 1 corresponds to a synchronized (in-phase) breathing motion of the two skyrmions which originates in the magnetic coupling between the individual layers. In contrast to that, as indicated by the snapshots in the second column of Fig.\ \ref{SNAPSHOT}, mode 2 ($f=26.5\,$GHz) involves an anti-phase skyrmion core oscillation. While $m_{z}$ increases within the ring-shaped area in the top layer and thereby implies a larger skyrmion core diameter than in the ground state, $m_{z}$ in the bottom layer decreases, corresponding to a reduced diameter. For the case of $\sigma=-3\times 10^{-4}\,$J/m$^2$, the individual skyrmion core diameters oscillate between $39.2\,$nm and $42.0\,$nm. As a simplified classical analog, the coupled skyrmion breathing motions may be viewed as two harmonic oscillators of identical mass (e.g., pendula) which are coupled (e.g., by a spring with spring constant $k$) and subject to an external periodic driving force---cf.\ the coupled gyration modes of magnetic vortices \cite{Vogel2011, Buchanan2005}. Thus, the in-phase and anti-phase oscillations are two normal modes of the system. By regarding the antiferromagnetic coupling strength $\sigma$ as the analog of the classical spring constant $k$, it becomes immediately clear that the increasing frequency splitting between the in-phase and anti-phase breathing modes towards higher values of $|\sigma|$ (cf.\ Fig.\ \ref{PSDMAP}) is fully consistent with the classical picture where the splitting is proportional to $k$. Lastly, this classical model can also explain the presence of antiresonances in the power spectra---see, for instance, Fig.\ \ref{DIFFLAYERS}(b), where this feature is most pronounced at higher excitation frequencies---as this is a well-known phenomenon in the physics of coupled oscillators.
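To make this analogy quantitative, recall the textbook result for two identical oscillators of mass $m$ and natural frequency $\omega_{0}$ coupled by a spring of constant $k$ (the mapping to the skyrmion modes is, of course, only qualitative): the normal-mode frequencies are
\[
\omega_{\mathrm{in}}=\omega_{0}, \qquad \omega_{\mathrm{anti}}=\sqrt{\omega_{0}^{2}+\frac{2k}{m}},
\]
so that for weak coupling the splitting $\omega_{\mathrm{anti}}-\omega_{\mathrm{in}}\approx k/(m\,\omega_{0})$ grows linearly with $k$.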
The above-described behavior strongly resembles the results in Ref.\ \cite{Kim2018} for micromagnetic simulations of coupled breathing modes in a one-dimensional skyrmion lattice in thin-film nanostrips.
However, a striking difference is that in the case of the synthetic AFM studied in the present work, the in-phase breathing motion exhibits a lower energy than the anti-phase oscillation, while the reverse is true for coupled skyrmions in a nanostrip. As presented in Fig.\ \ref{PSDMAP}(b), the relationship between the coupling strength $\sigma$ and the resonance frequency is different for the in-phase and anti-phase breathing modes, and their energy difference can be controlled by varying the antiferromagnetic coupling parameter, which, for instance, in practice is related to the thickness of the spacer layer. Towards lower absolute values of $\sigma$, the energy splitting between the two modes becomes increasingly smaller up to the point where for $|\sigma| \leq 1\times 10^{-5}\,$J/m$^2$ only the in-phase mode can be identified unambiguously. Due to the reduced (or even vanishing) interlayer exchange coupling, the breathing mode amplitude in the bottom layer is observed to be considerably smaller than in the top layer which has been excited by the ac magnetic field. Consequently, the anti-phase mode is not detectable in this case.
In analogy to the work on thin-film nanostrips \cite{Kim2018}, the frequency splitting for the antiferromagnetically-coupled skyrmions can also be explained by a symmetry breaking of the potential energy profile compared to the case of an isolated skyrmion as studied in Ref.\ \cite{Kim2014}. In fact, a similar behavior was also reported for coupled gyration modes of magnetic vortices \cite{Jung2011, Lee2011, Han2013} and skyrmions \cite{Kim2017}. In all these cases, the in-phase motion exhibits a lower energy than the anti-phase mode. The fact that this is also true for breathing modes in a synthetic AFM but the opposite effect is observed for dipolar-coupled breathing modes in thin-film nanostrips \cite{Kim2018} implies that the frequency splitting is highly sensitive to the interplay of various different magnetic interactions.
In addition to the pure breathing modes and similar to the case of a single skyrmion in an ultrathin ferromagnetic dot \cite{Kim2014}, the higher-order modes in Fig.\ \ref{PSDMAP} correspond to the hybridization of the breathing motion with geometrically quantized spin wave eigenmodes of the individual circular-shaped layers. Exemplary snapshots for mode 3 ($f=37.5\,$GHz) in Fig.\ \ref{SNAPSHOT} illustrate such a hybridization for the case of the synthetic AFM. In analogy to the pure breathing modes, each of the higher-order hybridization modes also occurs in an in-phase and anti-phase variation with different energies.
Lastly, we note that for the extended case of several antiferromagnetically-coupled pairs of layers, additional peaks emerge in the calculated power spectra. These new resonances are related to further coupled breathing modes with different phase shifts between the individual skyrmion eigenexcitations---similar to the dynamics of a one-dimensional skyrmion lattice in a ferromagnetic nanostrip \cite{Kim2018}. Such synthetic AFMs with an increased number of magnetic layers have been suggested to host antiferromagnetic skyrmions with enhanced thermal stability \cite{Legrand2019}. Consequently, the spectral analysis of the inherent dynamic eigenexcitations in these extended multilayer systems may also be practically relevant. However, a detailed discussion of this significantly more complex scenario is beyond the scope of the present work. Instead, the remainder of this paper contains a more in-depth analysis of skyrmion breathing modes in a system with only two antiferromagnetically-coupled layers.
\subsubsection{Role of External Magnetic Field $H_{\mathrm{dc}}$}
\begin{figure}
\centering
\includegraphics[width=8.6 cm]{BrModeFieldDep.pdf}
\caption{Dependence on external dc magnetic field $H_{\mathrm{dc}}$ ($H_{\mathrm{ac}}$ applied across top layer only) for $\sigma=-3\times 10^{-4}\,$J/m$^2$. (a) Power spectra for small values of $H_{\mathrm{dc}}$. Amplitude and line shape of mode 1 change systematically. (b) PSD for selected higher values of $H_{\mathrm{dc}}$ with strong changes of modes 1 and 2. Dashed line shows a spectrum for large $H_{\mathrm{dc}}$ with $H_{\mathrm{ac}}$ applied across all layers. (c) Resonance frequency for mode 1 increases slowly as a function of $H_{\mathrm{dc}}$, while a strong decrease is observed for mode 2. (d) Skyrmion core diameter for both ferromagnetic layers as a function of $H_{\mathrm{dc}}$.}
\label{HDCDEP}%
\end{figure}%
Figure \ref{HDCDEP} illustrates the dependence of the skyrmion breathing dynamics on the external magnetic dc field $H_{\mathrm{dc}}$ for $\sigma=-3\times 10^{-4}\,$J/m$^2$. While the dc field is present in all layers, the time-varying magnetic field is only applied to the top layer. As expected for a synthetic AFM, and in stark contrast to ferromagnets \cite{Kim2014}, applied dc fields of comparably low magnitude do not lead to observable variations of the resonance frequencies, see Fig.\ \ref{HDCDEP}(a). However, for the in-phase breathing mode at $6.0\,$GHz the peak magnitude and the line shape do depend on the dc field. First, the peak height increases with the absolute value of the dc magnetic field. In addition to that, the line shape clearly changes its symmetry at smaller positive field values between $13$ and $25\,$mT. By contrast, the higher-order modes remain nearly unaltered. Only at higher fields does the anti-phase mode shift towards lower frequencies. Two selected power spectra at higher dc fields are depicted in Fig.\ \ref{HDCDEP}(b) and compared to the simulated curve for zero field. In addition, a spectrum for $\mu_{0}H_{\mathrm{dc}}=625\,$mT is displayed for the case of the ac field being applied across all layers (dashed blue curve). Clearly, in contrast to small dc fields (cf.\ bottom panel of Fig.\ \ref{DIFFLAYERS}(a) where $\mu_{0}H_{\mathrm{dc}}=50\,$mT), the in-phase mode can be observed for this scenario due to the symmetry breaking caused by the strong dc field. Consequently, the application of a sufficiently large dc magnetic field is expected to facilitate the experimental detection of the in-phase breathing mode in a synthetic AFM. As shown in Fig.\ \ref{HDCDEP}(c), the resonance frequency of this mode remains nearly unaffected even by large dc fields, whereas for the anti-phase resonance a strong decrease can be observed. The corresponding static skyrmion core diameters in both magnetic layers are presented in Fig.\ \ref{HDCDEP}(d). While the diameter of the skyrmion located in the top layer increases significantly from about $40\,$nm to more than $60\,$nm throughout the simulated field range, the skyrmion in the bottom layer does not exhibit major changes in its spatial extent. Typically, larger external dc field values would cause the skyrmion in the bottom layer to shrink in size, but this effect is counteracted by the strong antiferromagnetic interlayer exchange coupling to the skyrmion in the top layer which would make a greater skyrmion diameter more favorable. Therefore, the green curve in Fig.\ \ref{HDCDEP}(d) represents a compromise between these two competing effects. Note that even larger values of $H_{\mathrm{dc}}$ lead to a breakdown of the skyrmion state for the given synthetic AFM. Moreover, negative values for $H_{\mathrm{dc}}$ lead to larger skyrmion diameters in the bottom layer, while the size of the skyrmion in the top layer does not undergo strong changes. The systematics for the power spectra remains the same as for positive magnetic fields. Finally, we note that the effect of an increasing difference in skyrmion core diameters towards larger $H_{\mathrm{dc}}$ values becomes less pronounced at higher interlayer exchange coupling strengths.
\subsubsection{Dependence on Saturation Magnetization $M_{\mathrm{s}}$}
\begin{figure}
\centering
\includegraphics[width=8.6 cm]{BreathMode_s-3E-4_MSDep.pdf}
\caption{Dependence on saturation magnetization $M_{\mathrm{s}}$ in both layers ($H_{\mathrm{ac}}$ applied across top layer only) for $\sigma=-3\times 10^{-4}\,$J/m$^2$. (a) Power spectra for selected saturation magnetization $M_{\mathrm{s}}$ values of the ferromagnetic layers. (b) Resonance frequencies shift differently for modes 1 and 2 as a function of $M_{\mathrm{s}}$. (c) Skyrmion core diameter for both ferromagnetic layers as a function of $M_{\mathrm{s}}$.}
\label{MSANDHDEP}%
\end{figure}
Aside from the previously discussed variations of the dc magnetic field, Fig.\ \ref{MSANDHDEP} clarifies that changes in the saturation magnetization $M_{\mathrm{s}}$ of both layers can also alter the skyrmion breathing dynamics in the considered synthetic AFM. In analogy to the case of small applied dc fields [cf.\ panel (a) in Fig.\ \ref{HDCDEP}], a change in the line shape symmetry of mode 1 can also be observed for variations of the saturation magnetization $M_{\mathrm{s}}$ in both magnetic layers as shown in Fig.\ \ref{MSANDHDEP}(a). This behavior also resembles the line-shape changes presented in Fig.\ \ref{DIFFLAYERS}(b) which arise due to the varying ac magnetic field frequency and can be attributed to the different phase relationships among the activated modes.
Figure \ref{MSANDHDEP}(b) clarifies that, in addition to the line shape changes, the resonance frequency of mode 1 shifts in a nontrivial way as a function of $\mu_{0}M_{\mathrm{s}}$, while it exhibits a nearly linear behavior in the case of mode 2. At low values of $\mu_{0}M_{\mathrm{s}}$, the resonance frequency of mode 1 decreases upon increasing $\mu_{0}M_{\mathrm{s}}$ and displays a local minimum at $1.0\,$T. This is followed by a slow increase towards larger values of $\mu_{0}M_{\mathrm{s}}$, a local maximum at $1.15\,$T and a subsequent decrease. As shown in Fig.\ \ref{MSANDHDEP}(c), the increase in the saturation magnetization from $0.75$ to $1.25\,$T entails a growing skyrmion core diameter from around $11\,$nm up to $60\,$nm in both magnetic layers, while the relative size difference remains equally small due to the strong antiferromagnetic coupling. In analogy to Ref.\ \cite{Kim2014}, the limiting factor for the skyrmion diameter is given by the interaction with the tilted magnetization at the boundary of the nanodisks.
In the previously discussed classical picture larger values of $M_{\mathrm{s}}$ would correspond to an increasing mass of each of the two harmonic oscillators, leading to lower eigenfrequencies. While this simple model correctly explains the behavior of mode 2, in the case of mode 1 it is only applicable for low $\mu_{0}M_{\mathrm{s}}$ values up to $1.0\,$T. For higher values of $\mu_{0}M_{\mathrm{s}}$, however, the interplay with the higher-order modes and the competition of various micromagnetic energies are the cause of an unexpected behavior. In fact, a similar systematics is observed for variations of the DMI parameter $D$ or the exchange stiffness $A$, as well as for other values of the interlayer exchange coupling strength $\sigma$.
For the experimental detection of breathing oscillations in antiferromagnetically-coupled multilayers, as well as for skyrmion sensing in general, both the in-phase and anti-phase modes are suitable candidates when the system is excited at their respective resonance frequencies. In practice, the scenario of an ac field being applied across the entire synthetic AFM is clearly more realistic than assuming its presence only within a single layer. We emphasize that the former case will require a sufficiently large dc magnetic field to break the symmetry and thus enable the detection of the in-phase breathing mode, see Fig.\ \ref{HDCDEP}(b). By contrast, the anti-phase mode is expected to be experimentally detectable in a more straightforward way. Furthermore, we point out that the unique dependence of the eigenfrequencies on the external dc field and the saturation magnetization will allow detailed conclusions about the skyrmion states to be drawn from the spectral analysis.
As has been shown, it may also be worthwhile to utilize higher excitation frequencies ($f\approx 100\,$GHz) and deduce further information about fundamental magnetic parameters from the interplay of various spin excitation modes. For instance, while the magnitude of the anti-phase mode remains large for different values of $M_{\mathrm{s}}$, both the line shape and the magnitude of the in-phase mode are clearly more sensitive to variations of this magnetic parameter.
\subsection{Breathing Modes in Synthetic Ferrimagnets}
\begin{figure}
\centering
\includegraphics[width=8.6 cm]{Ferrimagnet_ModeAnalysis.pdf}
\caption{Static skyrmion core diameters for varying $M_{\mathrm{s}}$ in the top layer and constant $\mu_{0}M_{\mathrm{s}}=1.0\,$T in the bottom layer are presented in the case of (a) $\sigma=-2\times 10^{-5}\,$J/m$^2$ and (b) $\sigma=-3\times 10^{-4}\,$J/m$^2$. Panels (c) and (d) show the resonance frequency for the two lowest-lying modes as a function of $M_{\mathrm{s}}$ in the top layer for both coupling strengths. Snapshots (calculated in analogy to Fig.\ \ref{SNAPSHOT}) of a newly emerging, non-radial dynamic mode at higher $M_{\mathrm{s}}$ are displayed in (e) and (f).}
\label{FERRIMAG}%
\end{figure}
In the following, we will discuss how the coupled skyrmion breathing dynamics is altered in the case of a synthetic ferrimagnet, \textit{i.e.}, the same trilayer system as depicted in Fig.\ \ref{MODEL}, but now containing unbalanced antiparallel moments in the two ferromagnetic layers. For the sake of simplicity, we assume varying values of the saturation magnetization $M_{\mathrm{s}}$ only in the top layer while keeping the bottom-layer magnetization constant at $\mu_{0}M_{\mathrm{s}}=1.0\,$T. The calculated static skyrmion core diameters for the two magnetic layers are depicted in Fig.\ \ref{FERRIMAG}(a) and (b) for two different coupling strengths $\sigma$. While for the stronger interlayer exchange coupling the skyrmion size is nearly identical in both layers, in the case of a weak coupling the individual skyrmion core diameters differ significantly over a broad range of top-layer $M_{\mathrm{s}}$ values.
Figures \ref{FERRIMAG}(c) and (d) display the evolution of the resonance frequency for the two lowest-lying eigenmodes with varying top-layer $M_{\mathrm{s}}$ in the case of weak and strong antiferromagnetic coupling, respectively. Both the in-phase and anti-phase mode resonance frequencies decrease monotonically as a function of the top layer $M_{\mathrm{s}}$. While the separation between the two modes remains large throughout the entire range of $M_{\mathrm{s}}$ values in the case of strong interlayer exchange coupling ($\sigma=-3\times 10^{-4}\,$J/m$^2$), for $\sigma=-2\times 10^{-5}\,$J/m$^2$ the anti-phase resonance mode closely approaches the in-phase resonance frequency until it vanishes at $\mu_{0}M_{\mathrm{s}}=1.1\,$T, where only the in-phase mode can be observed. Towards even higher values of $M_{\mathrm{s}}$, a second resonance mode reappears in the power spectrum. However, as can be seen in Fig.\ \ref{FERRIMAG}(e), exemplary snapshots of this mode for $\mu_{0}M_{\mathrm{s}}=1.25\,$T demonstrate that this is not a straightforward continuation of the anti-phase breathing excitation, but instead a newly emerging non-radial mode. In detail, the top layer exhibits a mode that is reminiscent of the quadrupolar distortion discussed in Ref.\ \cite{Lin2014}, while the bottom layer still shows a pure breathing mode. Interestingly, a similar behavior is observed for high $M_{\mathrm{s}}$ values in the case of stronger interlayer exchange coupling. As shown in Fig.\ \ref{FERRIMAG}(f), the higher coupling strength implies that the bottom layer also exhibits deviations from a radially symmetric breathing mode. Comparable nonradial skyrmion eigenmodes have also been predicted by Kravchuk \textit{et al.}, albeit for the case of a single (compensated) antiferromagnetic film \cite{Kravchuk2019}, while in the present work we only observe such excitations for a sufficiently uncompensated, synthetic ferrimagnet trilayer that hosts skyrmions with relatively large diameters.
In addition, our calculations indicate that such non-radial excitations do not occur in single ferromagnetic layers for which we have assumed the same simulation parameters as for the trilayer scenario. Therefore, the occurrence of these modes is characteristic of interlayer exchange-coupled skyrmions in uncompensated synthetic ferrimagnets. Also, it should be noted that the lowest-lying (in-phase) mode does not exhibit any deviations from the radial breathing dynamics at any of the considered values of $M_{\mathrm{s}}$.
Furthermore, while Fig.\ \ref{FERRIMAG} only includes the case of the ac magnetic field being applied across the top layer, the same modes can be excited by exposing only the bottom layer or even the entire trilayer structure to the oscillating external field.
Lastly, in contrast to synthetic AFMs (see Fig.\ \ref{DIFFLAYERS}), the non-vanishing total magnetization of synthetic ferrimagnets implies that the in-phase breathing mode also leads to a strong signature in the power spectrum even when the ac magnetic field is applied across all layers. In the case of synthetic AFMs, a symmetry-breaking and sufficiently large dc magnetic field along the $z$-axis is required to make the in-phase breathing mode experimentally accessible.
In conclusion, a variety of resonance peaks related to coupled breathing modes can be expected to be detected in microwave impedance spectroscopy experiments for both synthetic ferri- and antiferromagnets. In such experiments, it will be of major importance to utilize materials with damping parameters that are as low as possible in order to detect signatures of coupled breathing modes \cite{Back2020}. Ultimately, as the simulations show, the dynamic fingerprint of the coupled breathing modes, that is, the presence or absence, position, shape and number of resonances, will allow conclusions to be drawn about the underlying magnetic interactions and parameters. So far, the numerical calculations have assumed the excitation of breathing modes by means of time-varying magnetic fields. From the experimental point of view, spin torques constitute an intriguing alternative to excite such magnetization dynamics. In the last part of this work, we will demonstrate that the excitation of coupled breathing modes in synthetic AFMs can also be realized with different types of spin torques.
\subsection{Excitation of Breathing Modes with Spin Torques}
\begin{figure}
\centering
\includegraphics[width=7.6 cm]{BrMode047_HAC_vs_STT_SimpleOverview.pdf}
\caption{Comparison of power spectra obtained for magnetization dynamics excited by an ac magnetic field with $f=100\,$GHz across the top layer and by purely damping-like spin-transfer torques for an antiferromagnetic coupling strength of $\sigma = -3\times 10^{-4}\,$J/m$^{2}$. Spin currents are assumed to exhibit a spin polarization along the $(0,0,1)$ or $(0,1,1)$ direction, and to be present in either the top layer or in all layers. Inset shows an enlarged view of the pink curve from the main panel.}
\label{STTHAC}%
\end{figure}
In this part, we will show that the results obtained for the application of ac magnetic fields can be qualitatively reproduced when utilizing spin-transfer torques (STTs) for the excitation of the skyrmion breathing modes. In analogy to the excitation of resonance modes with ac magnetic fields, providing torque to only one ferromagnetic layer makes it possible to drive the system in an unbalanced manner. Moreover, it will be demonstrated that torques present in all layers can also lead to clear signatures of breathing modes in the power spectra.
In order to model STTs in our micromagnetic simulations with \textsc{oommf}, the following term is added to the right-hand side of the LLG equation which is given in Eq.\ (\ref{eq:LLG}) \cite{Slonczewski1996, Berger1996, Xiao2004, Donahue1999}:
\begin{equation} \label{eq:STT}
\mathrm{STT}=|\gamma_{0}|\,\beta \left[\epsilon \left( \textbf{m}\times \textbf{m}_{\mathrm{p}}\times \textbf{m}\right)-\epsilon^{\prime} \left(\textbf{m}\times \textbf{m}_{\mathrm{p}}\right) \right].
\end{equation}
For the considered synthetic AFM, we set the electron polarization direction as $\textbf{m}_{\mathrm{p}}=(0,0,1)$, that is, perpendicular to the layers, or as $\textbf{m}_{\mathrm{p}}=(0,1,1)$ in order to consider an additional in-plane component. The spin current is assumed to be injected from an additional fixed magnetic layer beneath (or on top of) the synthetic AFM structure. Furthermore, we assume that spin torques are exerted in either one or both ferromagnetic layers.
$\epsilon$ and $\epsilon^{\prime}$ correspond to the effective spin polarization efficiency factors for the damping- and field-like torques, respectively. More specifically, $\epsilon$ can be written as
\begin{equation}
\epsilon=\frac{P\Lambda^{2}}{(\Lambda^{2}+1)+(\Lambda^{2}-1)(\textbf{m}\cdot \textbf{m}_{\mathrm{p}})}.
\end{equation}
$P$ denotes the spin polarization and $\Lambda$ is a dimensionless parameter of the model \cite{Xiao2004}.
Finally, the parameter $\beta$ explicitly included in Eq.\ (\ref{eq:STT}) is given by
\begin{equation}
\beta=\left|\frac{\hbar}{\mu_{0}\,e}\right| \frac{J}{t_{\mathrm{FL}} M_{\mathrm{s}}},
\end{equation}
where $\hbar$ is the reduced Planck constant, $e$ the elementary charge, $\mu_{0}$ the vacuum permeability, $J$ the current density that exerts the spin torque, and $t_{\mathrm{FL}}$ the thickness of the (free) layer that is subject to the STT.
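As a hedged illustration (our own Python sketch with placeholder values for $P$, $\Lambda$, $J$, $t_{\mathrm{FL}}$ and $M_{\mathrm{s}}$, not the actual \textsc{oommf} implementation), the torque of Eq.\ (\ref{eq:STT}) can be assembled as:
\begin{verbatim}
import numpy as np

HBAR = 1.054571817e-34      # J s
MU0 = 4e-7 * np.pi          # T m / A
E_CHARGE = 1.602176634e-19  # C
GAMMA0 = 2.211e5            # m/(A s)

def stt_term(m, m_p, J, P=0.4, Lam=1.0, eps_prime=0.0,
             t_fl=1e-9, m_s=8e5):
    """Damping- and field-like spin-transfer torque added to dm/dt."""
    eps = P * Lam**2 / ((Lam**2 + 1.0)
                        + (Lam**2 - 1.0) * np.dot(m, m_p))
    beta = abs(HBAR / (MU0 * E_CHARGE)) * J / (t_fl * m_s)
    damping_like = eps * np.cross(m, np.cross(m_p, m))
    field_like = eps_prime * np.cross(m, m_p)
    return GAMMA0 * beta * (damping_like - field_like)
\end{verbatim}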
Figure \ref{STTHAC} depicts an exemplary comparison of power spectra obtained for magnetization dynamics excited by an ac magnetic field applied across the top layer (violet curve) and by a purely damping-like STT for the case of strong antiferromagnetic coupling with $\sigma = -3\times 10^{-4}\,$J/m$^{2}$. We consider the following three scenarios for the modeled spin currents: (i) spin polarization along $(0,0,1)$ and STT only modeled in the top layer (orange curve), (ii) the extended case with STT present in both ferromagnetic layers (pink curve), and (iii) electron polarization direction along $(0,1,1)$ and STT only in the top layer (green curve).
While some previous studies suggest that the spin polarization is strongly reduced by the spacer layer of the synthetic AFM and thus only scenarios (i) and (iii) could be regarded as realistic for the excitation of coupled breathing modes by means of STTs \cite{Zhou2020}, other works demonstrate relatively high spin-diffusion lengths $l$ for materials typically used as nonmagnetic spacers, for example $l_{\mathrm{Ru}}\approx 14\,$nm for ruthenium \cite{Eid2002}. Due to the low thickness $t_{\mathrm{NM}}=1\,$nm of the spacer layer considered in the present work, scenario (ii) may also be relevant for future experiments. In addition, we note that in our model we neglect contributions arising from interfacial effects such as the possible reflection of spin currents. Finally, as will be discussed further below, spin-orbit torques (SOTs) may constitute a promising alternative for the excitation of resonant skyrmion breathing dynamics in synthetic AFMs. In this case, assuming the presence of spin torques only in one of the ferromagnetic layers is more appropriate than the situation in scenario (ii).
For the three given scenarios, all eight resonance modes occur at identical frequencies in the spectra for the simulated damping-like STT ($\epsilon \neq 0$, $\epsilon^{\prime}= 0$) with only minor differences in their magnitude compared to the modes excited by a magnetic field. While the magnitude can be controlled by the variation of parameters like $P$ or $J$, we note that the observed systematics is universal and, importantly, independent of the nature of the STT. In other words, purely field-like STTs ($\epsilon=0$, $\epsilon^{\prime}\neq 0$) or mixtures of the two STT types ($\epsilon \neq 0$, $\epsilon^{\prime}\neq 0$) lead to qualitatively similar power spectra.
Considering the exemplary graphs in Fig.\ \ref{STTHAC}, the results for $\textbf{m}_{\mathrm{p}}=(0,0,1)$ and STTs present in one ferromagnetic layer (orange curve) show the strongest similarities with the spectrum that is related to magnetization dynamics excited by an ac magnetic field across the top layer (violet curve). However, this scenario is assessed to be challenging for current experimental realization. By contrast, the case of STTs in both magnetic layers (pink curve) can be implemented by passing a spin-polarized current through the entire synthetic AFM structure. Similar to the case of a magnetic ac field applied across all layers as shown in Fig.\ \ref{DIFFLAYERS}, the in-phase breathing mode is strongly suppressed, but still present (see inset of Fig.\ \ref{STTHAC}).
Lastly, we will discuss the possibility to drive skyrmion breathing dynamics in synthetic AFMs by means of SOTs. In a previous work, it has been experimentally demonstrated for ferromagnetic multilayers that breathing-like excitations of skyrmions can be induced by spin-orbit torques \cite{Woo2017}.
For the case of synthetic AFMs with perpendicular anisotropy, the exploitation of novel types of SOTs originating from materials with reduced crystalline symmetry \cite{MacNeill2016, Baek2018, Safranski2018} such as non-collinear AFMs \cite{Holanda2020, Liu2019} is desirable, since a spin polarization component along the $z$-axis is required to excite breathing modes in these systems. By contrast, a comparably high crystalline symmetry of regular spin-source materials that provide current-induced SOTs usually restricts applications to magnetic devices with in-plane anisotropy \cite{MacNeill2016}.
For the case of spin currents generated from materials with reduced crystalline symmetries, the spin polarization can also have other contributions than solely the $z$-component. Here, by proving that breathing modes can also be excited by spin currents with $\textbf{m}_{\mathrm{p}}=(0,1,1)$ (green curve in Fig.\ \ref{STTHAC}) we conclude that experiments with novel SOTs can be expected to provide new results and possibilities with regard to dynamic excitations of skyrmions in synthetic AFMs. Note that for the example of $\textbf{m}_{\mathrm{p}}=(0,1,1)$, an additional feature in the spectrum (green curve) occurs at $f=4\,$GHz due to the simultaneous excitation of skyrmion gyration modes in this scenario.
\section{Summary and Conclusion}
In this work, we have numerically studied the breathing dynamics of skyrmions in synthetic AFM structures composed of two ferromagnetic layers that are separated by a nonmagnetic spacer. It was shown that varying the strength of the RKKY-like coupling through the metallic spacer layer allows for tuning the dynamic properties of in-phase and anti-phase breathing oscillations in a well-controlled way.
In addition to that, the different response of the two major types of coupled breathing modes to alterations of magnetic parameters, such as the saturation magnetization, was presented in detail.
Moreover, the systematics of in-phase and anti-phase breathing modes was discussed for the case of synthetic ferrimagnets. Aside from the characteristic dependence of resonance frequencies on the varying saturation magnetization of the individual magnetic layers, it was demonstrated that novel, non-radial dynamic modes can emerge for a sufficiently high degree of imbalanced moments in the two ferromagnetic layers.
Furthermore, both field- and damping-like STTs have been shown to represent an alternative means to excite skyrmion breathing dynamics in magnetic multilayers with antiferromagnetic interlayer exchange coupling.
In conclusion, we have shown that the spectral analysis of coupled breathing modes in synthetic AFMs offers a promising approach for the detection and detailed characterization of skyrmions. In particular, measurements of magnetoresistive signals modulated by the in-phase or anti-phase resonant breathing oscillations are expected to allow for electrical detection of magnetic skyrmions in synthetic AFMs.
\section*{Acknowledgements}
M.\ L.\ acknowledges the financial support by the German Science Foundation (Deutsche Forschungsgemeinschaft, DFG) through the research fellowship LO 2584/1-1. This research was partially supported by the NSF through the University of Illinois at Urbana-Champaign Materials Research Science and Engineering Center DMR-1720633 and was carried out in part in the Materials Research Laboratory Central Research Facilities, University of Illinois.
\section{Introduction}
\noindent
Type Ia supernovae (SNe Ia) are the calibrated
standard candles used in the discovery of the accelerated expansion
of the Universe (Riess et al. 1998; Perlmutter et al. 1999) and they
remain a powerful tool in exploring the nature of dark energy. Although
a lot of progress has been made in disentangling the nature of the
explosions, there are still many points to be addressed concerning
the progenitors (see reviews by Wang \& Han 2012; Maoz et al. 2014, and
Ruiz--Lapuente 2014, for instance). They appear to be thermonuclear
explosions of white dwarfs (WDs) made of C+O, and accretion of material
by the WD from a companion in a close binary system should be the basic
mechanism to induce the explosion, but here the consensus stops.
The companion could either be a still thermonuclearly active star
in any stage of its evolution (the single--degenerate, SD channel) or
another WD (the double--degenerate, DD channel). The explosion could also
result from the merging of a WD with the electron--degenerate core of
an asymptotic giant branch (AGB) star. The mode of the accretion could range
from steady accretion to violent merger, and the explosion could either arise from
central ignition of C, when the WD grows close to the Chandrasekhar mass, or
be induced by detonation of a He layer near
the surface, the mass of the WD being smaller in this case. Different
observed SNe Ia may have different origins.
\noindent
No binary system has ever been discovered in which a SN Ia has
later taken place, but some binary systems are however considered
to be excellent candidates for SN Ia progenitors, such as U Sco,
which contains a WD already close to the Chandrasekhar mass. A
general prediction for the SD channel is that the companion star
of the WD should survive the explosion and present revealing
characteristics.
\noindent
There are remnants (SNRs) of the explosions of SNe Ia, close and recent
enough that their exploration can either detect the presence of a surviving
companion or confirm its absence (Ruiz--Lapuente 1997).
This has been done for several SNRs of the Ia type, in our own Galaxy and in
the LMC (Ruiz--Lapuente et al. 2004; Gonz\'alez Hern\'andez et al. 2009;
Kerzendorf et al. 2009; Schaefer \& Pagnotta 2012; Edwards et al. 2012;
Gonz\'alez Hern\'andez et al. 2012; Kerzendorf et al. 2012, 2013, 2014,
2018a,b; Bedin et al. 2014; Pagnotta \& Schaefer 2015; Ruiz--Lapuente et al.
2018).
\noindent
The remnant of SN 1572 (Tycho Brahe's SN) was the first to be explored
(Ruiz--Lapuente et al. 2004, RL04 hereafter), and the findings there have
later been the subject of several studies (Gonz\'alez Hern\'andez et al. 2009,
GH09 henceforth; Kerzendorf et al. 2009; Kerzendorf et al. 2013,
hereafter K13; Bedin et al.
2014, B14 hereafter).
\noindent
Now the {\it Gaia} Data Release 2 is providing an unprecedented view
of the kinematics of the Galactic disk (Brown et al. 2018). It not
only gives the 3D location of a very large sample of stars in the
Galaxy, but also full velocity information (proper motion and
radial velocity) for 7.2 million stars brighter than
$G_{\rm RVS}$ = 12 mag, and transverse velocity for an unprecedentedly
large number of stars.
{\it Gaia} DR2 provides astrometric parameters (positions, parallaxes and
proper motions) for 1.3 billion sources. The median uncertainty for the
sources brighter than $G$ = 14 mag is 0.03 mas for the parallax and 0.07
mas yr$^{-1}$ for the proper motions. The reference frame is aligned
with the International Celestial Reference System (ICRS) and non--rotating
with respect to the quasars to within 0.1 mas yr$^{-1}$. The systematics
are below 0.1 mas and the parallax zeropoint uncertainty is small, about
0.03 mas (Brown et al. 2018).
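\noindent
As a general reminder (a standard caveat, not specific to the analysis
in this paper), the naive distance estimate is the inverse of the
parallax, $d\,[\mathrm{kpc}] \simeq 1/\varpi\,[\mathrm{mas}]$, which is
adequate when the fractional parallax error is small; for more distant
and fainter stars a prior-based inference is preferable (e.g.,
Bailer--Jones 2015).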
\noindent
Previously, the distances to the stars could only be estimated from
comparison of the absolute magnitudes deduced from their spectral
types and luminosity classes with their photometry, assuming
some interstellar extinction in the direction of the SNR. That
left considerable uncertainty in many cases (see RL04 and B14).
It is here that {\it Gaia} DR2 is most useful.
\noindent
The situation was better concerning proper motions, where
{\it HST} astrometry, based on images taken at different epochs,
had allowed high precision (see B14). {\it HST} proper motions are
always relative to a local frame, whereas {\it Gaia} DR2
proper motions are absolute, referred to the ICRS. Moreover, {\it Gaia} DR2
makes it possible to calculate the Galactic orbits of the stars.
In addition, without a precise knowledge
of the distances, the conversion of proper motions into
tangential velocities remained uncertain and so was the reconstruction
of the total velocities.
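\noindent
For reference, the conversion combines the proper motion $\mu$ and the
distance $d$ as
$v_{\rm T}\,[{\rm km\,s^{-1}}] = 4.74\,\mu\,[{\rm mas\,yr^{-1}}]\,d\,[{\rm kpc}]$,
so any uncertainty in $d$ propagates directly into the tangential
velocity.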
\noindent
The paper is organized as follows. First we describe the characteristics
of Tycho's SNR. In Section 3, we examine the distances given by the
parallaxes from {\it Gaia} for the surveyed stars, and we compare them
with previous estimates. In Section 4, the proper motions from {\it Gaia}
are compared with the {\it HST} ones.
Section 5 discusses the position in the Toomre diagram of possible companion
stars to SN 1572, as compared with a
large sample. In Section 6, we calculate the Galactic orbits of 4
representative stars and we discuss their characteristics. In Section 7 our
whole sample is discussed. Section 8 compares the observations with the
predictions of models of the evolution of SN Ia companions. Finally,
Section 9 gives a summary and the conclusions.
\section{Tycho SN remnant}
Tycho's SNR lies close to the Galactic plane ($b$ = 1.4 degrees,
which corresponds to 59--78 pc above the Galactic plane). The remnant
has an angular radius of 4 arcmin. In RL04
a search was performed
covering the innermost 0.65 arcmin radius centered on the Chandra
X--ray observatory center of the SN, down to an apparent visual magnitude of
22. Here we discuss more stars, roughly doubling the radius of the searched
area (see Figure 1). The coordinates of the Chandra geometrical center of the
remnant are RA = 00$^{h}$ 25$^{m}$ 19.9$^{s}$,
DEC = +64$^{\circ}$ 08$'$ 18.2$''$ (J2000).
This is the preferred center; it
practically coincides with that of ROSAT (Hughes 2000),
which differs by only 6.5 arcsec.
The centroid in radio, from the VLA (Reynoso et al. 1997), is also nearby.
The stars closest to the center are A, B, C, D, E, F and G, and for that
reason they are the preferred candidates.
\noindent
The distance to SN 1572 has been studied using different methods, and the
estimates converge toward the middle of the range from 2 to 4 kpc.
Chevalier, Kirshner \& Raymond (1980),
using the expansion
of the filaments in the remnant and the shock velocity, obtained a distance
of 2.3$\pm$0.5 kpc. A similar distance was obtained by Albinson et al.
(1986) through the observation of neutral hydrogen towards the supernova;
they place the distance in the range of 1.7--3.7 kpc. Just one year later,
Kirshner, Winkler \& Chevalier (1987) revisited the distance through the
expansion of the filaments of the remnant and found it to be between 2.0 and
2.8 kpc.
\bigskip
\noindent
Ruiz--Lapuente (2004) attempted a different approach. By assembling
the records of the historical observations of this supernova in 1572--1574
and evaluating the uncertainties, it was possible to reconstruct the light
curve and colour of the SN Ia. After applying stretch--factor
fitting of SN light curves, it was possible to classify this SN within
the family of SNe Ia. The derived absolute magnitude was found
to be consistent with a distance of 2.8 $\pm$ 0.4 kpc for the scale
of $H_{0}$ $\sim$ 65 km s$^{-1}$ Mpc$^{-1}$. In this determination,
the extinction towards the supernova was derived from the
reddening shown in the colour curve of the SN.
Given that present estimates of H$_{0}$ are 67
km s$^{-1}$ Mpc$^{-1}$, this translates into a somewhat smaller value,
around 2.7 kpc.
\bigskip
\noindent
Acknowledging those uncertainties, in this paper we take a range of
possible distances between 1.7 and 3.7 kpc (2.7 $\pm$ 1 kpc)
and we study all the stars placed within this range by
{\it Gaia} as potential candidates. We discuss the distances to the stars
in the next section and come back to them when discussing the candidate
stars. We now find differences in the distances to some stars, and we
compare them with those published before.
\bigskip
\begin{figure*}
\centering
\includegraphics[width=0.7\columnwidth]{f1.pdf}
\caption{$B$--band image, taken with the 4.2m William Herschel Telescope,
showing all the stars referred to in this paper.}
\label{Figure 1}
\end{figure*}
\section{Parallaxes and {\it Gaia} distances}
\begin{table*}
\scriptsize
\begin{center}
\caption{\scriptsize {\it Gaia} IDs, parallaxes, proper motions and $G$
magnitudes of the sample of stars in Figure 1, from the {\it Gaia} DR2}
\begin{tabular}{lccccc}
\\
\hline
\hline
Star & {\it Gaia} ID & $\varpi$ & $\mu_{\alpha}$ cos $\delta$ & $\mu_{\delta}$ & $G$ \\
& & [mas] & [mas/yr] & [mas/yr] & [mag] \\
(1) & (2) & (3) & (4) & (5) & (6) \\
\hline
A & 431160565571641856 & 1.031$\pm$0.052 & -5.321$\pm$0.076 & -3.517$\pm$0.065 & 12.404$\pm$0.001 \\
B & 431160569875463936 & 0.491$\pm$0.051 & -4.505$\pm$0.063 & -0.507$\pm$0.049 & 15.113$\pm$0.001 \\
C1 & 431160359417132800 & 5.310$\pm$0.483 & -2.415$\pm$0.735 & -0.206$\pm$0.576 & 18.027$\pm$0.006 \\
D & 431160363709280768 & 1.623$\pm$0.318 & -4.566$\pm$0.636 & -2.248$\pm$0.376 & 19.371$\pm$0.003 \\
E & 431160565573859584 & 0.138$\pm$0.220 & 0.232$\pm$0.377 & -0.699$\pm$0.265 & 18.970$\pm$0.002 \\
F & 431160569875460096 & 0.466$\pm$0.079 & -5.739$\pm$0.130 & -0.292$\pm$0.097 & 17.036$\pm$0.001 \\
G & 431160359413315328 & 0.512$\pm$0.021 & -4.417$\pm$0.191 & -4.064$\pm$0.143 & 17.988$\pm$0.001 \\
H & 431160599931508480 & 0.620$\pm$0.203 & -4.839$\pm$0.341 & -0.577$\pm$0.248 & 18.895$\pm$0.002 \\
I & 431160569867713152 & -0.014$\pm$0.566 & -1.479$\pm$0.970 & -0.855$\pm$0.761 & 20.351$\pm$0.006 \\
J & 431160565571749760 & 0.134$\pm$0.240 & -3.900$\pm$0.373 & -1.054$\pm$0.292 & 18.965$\pm$0.002 \\
K & 431160393780294144 & -0.266$\pm$0.290 & -1.735$\pm$0.601 & -0.815$\pm$0.350 & 19.313$\pm$0.003 \\
L & 431160398076768896 & 0.689$\pm$0.457 & -2.471$\pm$0.876 & 0.514$\pm$0.578 & 20.072$\pm$0.005 \\
M & 431160604230502400 & -2.282$\pm$1.316 & 3.472$\pm$1.943 & -1.624$\pm$2.070 & 20.900$\pm$0.011 \\
N & 431160565571767552 & 0.246$\pm$0.096 & 0.092$\pm$0.148 & 0.134$\pm$0.121 & 17.612$\pm$0.001 \\
O & 431160569875457792 & 1.169$\pm$0.063 & 2.607$\pm$0.098 & 2.108$\pm$0.076 & 16.542$\pm$0.001 \\
P & 431160565571767424 & 0.168$\pm$0.092 & -0.889$\pm$0.139 & -0.389$\pm$0.106 & 16.998$\pm$0.001 \\
Q & 431160565575562240 & 0.663$\pm$0.334 & 0.438$\pm$0.643 & 1.409$\pm$0.404 & 19.496$\pm$0.004 \\
S & 431160565573859840 & 1.235$\pm$0.417 & 2.091$\pm$0.771 & -0.437$\pm$0.491 & 19.568$\pm$0.003 \\
T & 431159088102994432 & 0.565$\pm$0.289 & -5.177$\pm$0.563 & 0.004$\pm$0.353 & 19.320$\pm$0.003 \\
U & 431159092406721280 & 0.504$\pm$0.070 & -1.877$\pm$0.113 & -5.096$\pm$0.083 & 17.064$\pm$0.001 \\
V & 431160359413311616 & 0.059$\pm$1.023 & -2.201$\pm$1.184 & 1.645$\pm$1.279 & 20.235$\pm$0.007 \\
W & 431160393773079808 & 0.193$\pm$0.283 & -2.760$\pm$0.600 & 0.163$\pm$0.343 & 19.312$\pm$0.003 \\
X & 431159092398964992 & 0.192$\pm$0.427 & -1.187$\pm$0.812 & -0.836$\pm$0.511 & 19.812$\pm$0.004 \\
Y & 431159092406717568 & 0.631$\pm$0.223 & 0.144$\pm$0.347 & -2.261$\pm$0.290 & 18.923$\pm$0.002 \\
Z & 431159092398966400 & 0.176$\pm$0.146 & -1.498$\pm$0.233 & -0.294$\pm$0.193 & 18.082$\pm$0.002 \\
AA & 431159088102995968 & 0.957$\pm$0.467 & -2.277$\pm$0.977 & -1.184$\pm$0.595 & 19.973$\pm$0.005 \\
AB & 431159088102989056 & -0.090$\pm$0.267 & -2.011$\pm$0.445 & -1.600$\pm$0.316 & 19.046$\pm$0.002 \\
AC & 431159088103003520 & 0.490$\pm$0.160 & -2.376$\pm$0.249 & -1.445$\pm$0.195 & 18.399$\pm$0.001 \\
AE & 431159088102986880 & 0.279$\pm$0.173 & -0.907$\pm$0.268 & -0.241$\pm$0.227 & 18.559$\pm$0.002 \\
AF & 431158881944551424 & 1.323$\pm$0.324 & -2.381$\pm$0.644 & 0.489$\pm$0.400 & 19.399$\pm$0.003 \\
AG & 431158881944550272 & 0.703$\pm$0.382 & -2.412$\pm$0.779 & 0.626$\pm$0.453 & 19.768$\pm$0.004 \\
AH & 431158881944553216 & 0.206$\pm$0.087 & -0.704$\pm$0.139 & -0.579$\pm$0.107 & 17.486$\pm$0.001 \\
AI1/HP1 & 431160359417132928 & 2.831$\pm$0.273 & 71.558$\pm$0.530 & -3.030$\pm$0.322 & 19.159$\pm$0.003 \\
AJ & 431160359413306368 & 0.187$\pm$0.206 & -1.102$\pm$0.333 & 0.222$\pm$0.249 & 18.883$\pm$0.002 \\
AK & 431160393773068032 & -0.476$\pm$0.447 & -1.306$\pm$0.938 & -0.142$\pm$0.518 & 20.008$\pm$0.005 \\
AL & 431160398072078592 & 0.383$\pm$0.618 & -2.827$\pm$1.199 & 0.727$\pm$0.789 & 20.461$\pm$0.006 \\
AM & 431160398073281792 & 0.752$\pm$0.825 & 0.303$\pm$1.636 & -2.940$\pm$1.114 & 20.605$\pm$0.008 \\
AN & 431160599931516288 & 0.605$\pm$0.138 & -4.560$\pm$0.221 & -1.330$\pm$0.168 & 18.284$\pm$0.001 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}
\scriptsize
\begin{center}
\caption{ {\it BVR} photometry, distances and proper motions of
stars A--W from B14, compared with the distances and proper motions from
{\it Gaia} DR2 (here the {\it Gaia} proper motions have been transformed to
the system used in B14; see text and, for the {\it Gaia} original values,
see Table 1). Stars with no upper limit for the {\it Gaia} distance,
marked with xxx, are those whose 1 $\sigma$ lower bound on the parallax
is negative. Other stars (I, K, and AB in Table 3) have a negative central
value of the parallax but a positive value once the error is added; this
yields only a lower limit on the distance, and they show a $\geq$ sign.}
\begin{tabular}{cccccccccc}
\\
\hline
\hline
Star & {\it B} & {\it V} & {\it R} & {\it d} (B14) &
{\it d} & $\mu_{\alpha}$ cos $\delta$ & $\mu_{\alpha}$ cos $\delta$ (B14)& $\mu_{\delta}$&$\mu_{\delta}$ (B14)\\
& [mag] & [mag] & [mag] & [kpc] &
[kpc] & [mas/yr] & [mas/yr] & [mas/yr] & [mas/yr]\\
(1) & (2) & (3) & (4) & (5) &
(6) & (7) & (8) & (9) & (10) \\
\hline
A & 14.82$\pm$0.03 & 13.29$\pm$0.03 & 12.24$\pm$0.03 & 1.1$\pm$0.3 &
0.97$^{+0.05}_{-0.04}$ & -3.63$\pm$0.08 & --- & -3.06$\pm$0.07 & --- \\
B & 16.35$\pm$0.03 & 15.41$\pm$0.03 & --- & 2.6$\pm$0.5&
2.03$^{+0.19}_{-0.15}$ & -2.90$\pm$0.06 & -1.67$\pm$0.06 &-0.09$\pm$0.05 & 0.59$\pm$0.08\\
C1 & 21.06$\pm$0.12 & 19.06$\pm$0.05 & 17.77$\pm$0.03& 0.75$\pm$0.5 &
0.18$^{+0.03}_{-0.01}$ & -0.82$\pm$0.73 & -1.98$\pm$0.07 & 0.39$\pm$0.58 &-1.09$\pm$0.06\\
C2 & 22.91$\pm$0.20 & 20.53$\pm$0.15 & --- & $\sim$40 &
--- & --- & -1.75$\pm$0.07 & --- &-1.07$\pm$0.07\\
C3 & --- & --- & --- & --- &
--- & --- & 0.08$\pm$0.11 & --- & -0.14$\pm$0.10\\
D & 22.97$\pm$0.28 & 20.70$\pm$0.10 & 19.38$\pm$0.06& 0.8$\pm$0.2 &
0.62$^{+0.15}_{-0.11}$& -2.97$\pm$0.64 & -2.03$\pm$0.09 &-1.65$\pm$0.38 &-1.28$\pm$0.07\\
E & 21.24$\pm$0.13 & 19.79$\pm$0.07 & 18.84$\pm$0.05& $>$20 &
7.22$^{+xxx}_{-4.43}$& 1.83$\pm$0.38 & 1.74$\pm$0.05 &-0.10$\pm$0.26 & 0.28$\pm$0.05\\
F & 19.02$\pm$0.05 & 17.73$\pm$0.03 & 16.94$\pm$0.03 & 1.5$\pm$0.5 &
2.15$^{+0.44}_{-0.32}$& -4.14$\pm$0.13 & -3.31$\pm$0.15 & 0.31$\pm$0.10 & 0.25$\pm$0.07\\
G & 20.09$\pm$0.08 & 18.71$\pm$0.04 & 17.83$\pm$0.03 & 2.5-5.0 &
1.95$^{+0.60}_{-0.35}$ & -2.82$\pm$0.19 & -2.63$\pm$0.06 &-3.46$\pm$0.14 &-3.98$\pm$0.04\\
H & 21.39$\pm$0.14 & 19.80$\pm$0.07 & 18.78$\pm$0.05 & $\simeq$1.8/$\sim$24 &
1.61$^{+0.79}_{-0.40}$ & -3.24$\pm$0.34 & -3.13$\pm$0.07 &-0.02$\pm$0.25 &-0.84$\pm$0.03\\
I & --- & 21.75$\pm$0.16 & 20.36$\pm$0.09 & $\simeq$4 &
$\geq$ 1.81 & 0.12$\pm$0.97 & 0.69$\pm$0.06 &-0.25$\pm$0.76 &-0.20$\pm$0.06\\
J & 21.15$\pm$0.12 & 19.74$\pm$0.07 & 18.84$\pm$0.05 & $\simeq$9 &
7.46$^{+xxx}_{-4.77}$ & -2.30$\pm$0.37 & -2.35$\pm$0.06 &-0.45$\pm$0.29 &-0.28$\pm$0.03\\
K & 21.64$\pm$0.15 & 20.11$\pm$0.08 & 19.15$\pm$0.05 & $\simeq$2.4/$\sim$27 &
$\geq$ 41.67 & -0.14$\pm$0.60 & 0.24$\pm$0.12 &-0.21$\pm$0.35 & 0.03$\pm$0.07\\
L & 22.77$\pm$0.26 & 21.08$\pm$0.12 & 20.00$\pm$0.07 & $\simeq$4 &
1.45$^{+2.87}_{-0.58}$ & -0.87$\pm$0.88 & 0.36$\pm$0.12 &1.11$\pm$0.58 &-0.08$\pm$0.04\\
M & 23.49$\pm$0.36 & 21.82$\pm$0.16 & 20.72$\pm$0.10 & $\simeq$4 &
--- & 5.07$\pm$1.94 & -0.61$\pm$0.12 &-1.02$\pm$2.07 & 0.44$\pm$0.08\\
N & 19.59$\pm$0.06 & 18.29$\pm$0.04 & 17.47$\pm$0.03 & $\simeq$1.5-2 &
4.06$^{+2.57}_{-1.14}$ & 1.69$\pm$0.15 & 2.64$\pm$0.13 &0.74$\pm$0.12 & 0.96$\pm$0.04\\
O & 18.62$\pm$0.04 & 17.23$\pm$0.03 & 16.37$\pm$0.03 & $<$1 &
0.85$^{+0.54}_{-0.22}$ & 4.21$\pm$0.10 & 5.13$\pm$0.20 &2.71$\pm$0.08 & 2.85$\pm$0.14\\
P1 & --- & 17.61$\pm$0.03 & 16.78$\pm$0.03 & $\simeq$1 &
5.96$^{+7.23}_{-2.11}$ & --- & 1.39$\pm$0.36 & --- & 0.20$\pm$0.09\\
P2 & --- & --- & --- & --- &
--- & --- & -0.27$\pm$0.20 & --- &-1.64$\pm$0.21\\
Q & 22.35$\pm$0.21 & 20.59$\pm$0.09 & 19.41$\pm$0.06 & $\simeq$2 &
1.51$^{+1.53}_{-0.51}$ & 2.04$\pm$0.64 & 1.34$\pm$0.09 &2.71$\pm$0.40 & 2.38$\pm$0.04\\
R & 22.91$\pm$0.28 & 21.38$\pm$0.13 & 20.26$\pm$0.08 & 3.3$\pm$0.2 &
--- & --- & -0.18$\pm$0.10 & --- & 0.25$\pm$0.05\\
S & --- & 21.30$\pm$0.13 & 19.74$\pm$0.07 & 1.3$\pm$0.1 &
0.81$^{+0.41}_{-0.20}$ & 3.69$\pm$0.77 & 3.68$\pm$0.09 & 0.16$\pm$0.49 & 0.93$\pm$0.05\\
T & 21.82$\pm$0.17 & 20.23$\pm$0.08 & 19.20$\pm$0.05 & $\simeq$2/$\sim$30 &
1.77$^{+1.86}_{-0.60}$ &-3.58$\pm$0.56 & -2.96$\pm$0.04 & 0.61$\pm$0.35 &-0.53$\pm$0.05\\
U & 19.03$\pm$0.05 & 17.73$\pm$0.03 & 16.95$\pm$0.03 & $\simeq$1 &
1.98$^{+0.32}_{-0.24}$ & -0.28$\pm$0.11 & 0.39$\pm$0.10 &-4.49$\pm$0.08 &-4.31$\pm$0.07\\
V & 23.32$\pm$0.33 & 21.41$\pm$0.13 & 20.20$\pm$0.08 & $\simeq$3 &
16.81$^{+xxx}_{-15.89}$& -1.16$\pm$1.18 & -0.67$\pm$0.08 &2.25$\pm$1.28 & 0.49$\pm$0.08\\
W & 22.13$\pm$0.19 & 20.44$\pm$0.09 & 19.27$\pm$0.05 & $\simeq$2 &
5.17$^{+xxx}_{-3.07}$ & -1.16$\pm$0.60 & -0.31$\pm$0.09 &0.76$\pm$0.34 & 0.09$\pm$0.04\\
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}
\scriptsize
\begin{center}
\caption{ $G$ magnitudes, distances and proper motions of stars X--AN from B14,
with comparison of the proper motions from B14 and from {\it Gaia} DR2 (here
the {\it Gaia} proper motions have been transformed to the system used in B14;
for the {\it Gaia} original values, see Table 1). These stars had not
been assigned a distance in B14 nor in any other paper. As in Table 2,
the sign xxx for the upper limit in distance marks stars whose 1 $\sigma$
lower bound on the parallax is negative.}
\begin{tabular}{ccccccc}
\\
\hline
\hline
Star & {\it G} & {\it d} & $\mu_{\alpha}$ cos $\delta$ & $\mu_{\alpha}$ cos $\delta$ (B14) & $\mu_{\delta}$ & $\mu_{\delta}$ (B14) \\
& [mag] & [kpc] & [mas/yr] & [mas/yr] & [mas/yr] & [mas/yr] \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) \\
\hline
X & 19.81 & 5.20$^{+xxx}_{-3.59}$ & 0.41$\pm$0.81 & 1.32$\pm$0.06 & -0.24$\pm$0.51 & -0.26$\pm$0.06 \\
Y & 18.92 & 1.58$^{+0.87}_{-0.41}$ & 1.74$\pm$0.35 & 2.52$\pm$0.03 & -1.66$\pm$0.29 & -1.75$\pm$0.07 \\
Z & 18.08 & 5.68$^{+2.72}_{-2.57}$ & 0.10$\pm$0.23 & 0.78$\pm$0.04 & 0.31$\pm$0.19 & 0.06$\pm$0.07 \\
AA & 19.97 & 1.04$^{+1.00}_{-0.33}$ & -0.68$\pm$1.00 &-3.19$\pm$0.07 & -0.58$\pm$0.60 & -1.42$\pm$0.04 \\
AB & 19.05 & $\geq$ 5.65 & -0.41$\pm$0.45 &-0.37$\pm$0.03 & -1.00$\pm$0.32 & -1.01$\pm$0.06 \\
AC & 18.40 & 2.04$^{+0.99}_{-0.50}$ & -0.78$\pm$0.25 &-1.09$\pm$0.07 & -0.84$\pm$0.20 & -0.87$\pm$0.04 \\
AD1 & 17.22 & 1.04$^{+0.84}_{-0.32}$ & 0.85$\pm$0.59 &-1.24$\pm$0.14 & --- & -1.58$\pm$0.16 \\
AD2 & --- & --- & --- &-1.12$\pm$0.04 & --- & -2.25$\pm$0.07 \\
AE & 19.05 & 3.58$^{+5.84}_{-1.37}$ & 0.69$\pm$0.27 & 1.07$\pm$0.06 & 0.36$\pm$0.23 & -0.05$\pm$0.06 \\
AF & 19.40 & 0.76$^{+0.24}_{-0.15}$ &-0.78$\pm$0.64 &-0.38$\pm$0.04 & 1.09$\pm$0.40 & 0.01$\pm$0.07 \\
AG & 19.77 & 1.42$^{+1.69}_{-0.50}$ & -0.81$\pm$0.78 &-1.11$\pm$0.06 & 1.23$\pm$0.45 & 0.93$\pm$0.08 \\
AH & 17.49 & 4.85$^{+3.55}_{-1.44}$ & 0.89$\pm$0.14 & 1.26$\pm$0.20 & 0.02$\pm$0.11 & -0.20$\pm$0.26 \\
AI1/HP-1& 19.16 & 0.35$^{+0.04}_{-0.03}$ & 73.16$\pm$0.53 &73.07$\pm$0.09 & -2.43$\pm$0.32 & -2.82$\pm$0.07 \\
AI2 & --- & --- & --- & 1.76$\pm$0.28 & --- & 0.16$\pm$0.21 \\
AJ & 18.88 & 5.35$^{+xxx}_{-2.81}$ & 0.50$\pm$0.33 & 0.18$\pm$0.05 & 0.82$\pm$0.25 & 0.73$\pm$0.07 \\
AK & 20.01 & --- & 0.29$\pm$0.94 &-0.25$\pm$0.09 & 0.46$\pm$0.52 & 0.95$\pm$0.08 \\
AL & 20.46 & 2.61$^{+xxx}_{-1.61}$ & -1.23$\pm$1.20 &-0.14$\pm$0.09 & 1.33$\pm$0.73 & -0.35$\pm$0.09 \\
AM & 20.61 & 1.33$^{+xxx}_{-0.70}$ & 1.90$\pm$1.64 &-1.36$\pm$0.10 & -2.34$\pm$1.11 & -0.60$\pm$0.10 \\
AN & 18.28 & 1.65$^{+0.49}_{-0.30}$ & -2.96$\pm$0.22 &-2.83$\pm$0.11 & -0.73$\pm$0.17 & -0.96$\pm$0.05 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\noindent
Distances to the stars targeted as possible surviving companions of
SN 1572 had first been estimated by RL04 (see their Table 1), for
13 of them.
Those estimates were made by fitting synthetic spectra (under the
assumption of local thermodynamic equilibrium, LTE) to the observed
ones. The grids of model atmospheres and the atomic data from
Kurucz (1993), in combination with the Uppsala Synthetic Spectrum
Package (1975), were used in the spectrum synthesis. The atmospheric
parameters effective temperature $T_{\rm eff}$ and surface gravity
$g$ were thus determined. Intrinsic colours and absolute visual
magnitudes were then deduced from the relationships between spectral
type and colour and spectral type and absolute magnitude for the
different luminosity classes (Schmidt--Kahler 1982). Comparison with
$BVR$ photometry obtained with the 2.5m Isaac Newton Telescope, in
La Palma, yielded the reddening $E(B - V)$, from which the visual
extinction $A_{V}$ and the corrected apparent visual magnitude
$V_{0}$ were calculated. The high--resolution spectra had been
obtained with the UES and ISIS spectrographs, in the 4.2m William
Herschel Telescope, in La Palma. Low--resolution spectra came, in
addition, from the LRIS imaging spectrograph in the 10m Keck
Telescopes, in Hawaii. They were compared, after dereddening, with
template spectra from Lejeune et al. (1997), and that supplemented
the information obtained from the high--resolution spectra.
\noindent
The detailed characterization of Tycho G (singled out as a likely
SN companion in RL04) was done by GH09 using a high--resolution
HIRES spectrum obtained at the Keck I telescope. The stellar
parameters, effective temperature and gravity, were derived using
the excitation and ionization equilibria of Fe, together with fits
of the wings of the H$\alpha$ line to computed synthetic
spectra. The result pointed to a G2 IV star with metallicity
slightly below solar. The individual magnitudes in the
different filters were used to
estimate a range of possible distances of star G. In addition,
low--resolution LIRIS spectra of the stars E, F, G, and D were obtained,
which confirmed the spectral types of these stars. The best fit
for Tycho G gave T$_{\rm eff}$ = 5900 $\pm$ 150\ K, log $g$ = 3.85
$\pm$ 0.35\ dex, and [Fe/H] = -0.05 $\pm$ 0.09 (see GH09). This result
is consistent with that obtained by K13.
\noindent
K13 recalculated spectrophotometric distances to 5 of the stars
(A, B, C, E and G). Most of them had large error bars and are compatible
with the distance estimates in B14 and the distance values
implied by the {\it Gaia} parallaxes, except for
the value given for their star C (star C is in fact three stars: star C1 has
a measured {\it Gaia} parallax corresponding to a distance of
d $=$ 0.18$^{+0.03}_{-0.01}$ kpc; the estimate in B14 is compatible with that
value, whereas K13 found a much larger distance of 5.5$\pm$3.5 kpc).
Finally,
in B14 there is a list of distances to 23 stars (A to W) (their Table 3),
completing the work of RL04. The distances in B14 are in reasonable
agreement (within 1 $\sigma$) with the {\it Gaia} parallaxes (see Table 2).
\noindent
Now the {\it Gaia} DR2 has provided us with precise parallaxes
for almost all the stars included in those previous studies. The
corresponding
distances and their errors are given in column 6 of Table 2 for
the stars for which we already had distance estimates
(column 5), and in column 3 of Table 3 for those for which there
were none. {\it Gaia} DR2 distances are estimated here as the inverse of the
parallax.
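\smallskip
\noindent
As a sketch of how the numbers in column 6 of Table 2 (and column 3 of
Table 3) follow from the parallaxes of Table 1, including the cases with
undefined limits, one may write:
\begin{verbatim}
def distance_range(plx, err):
    """Distance bounds in kpc from a parallax and its error in mas.

    The central value is 1/parallax; the bounds come from
    parallax +/- error.  A negative lower bound on the parallax
    leaves the distance without an upper limit (the 'xxx' entries);
    a negative central value with a positive upper bound yields
    only a lower limit on the distance (the '>=' entries).
    """
    d = 1.0/plx if plx > 0 else None
    d_min = 1.0/(plx + err) if plx + err > 0 else None
    d_max = 1.0/(plx - err) if plx - err > 0 else None
    return d_min, d, d_max

print(distance_range(1.031, 0.052))   # star A: ~(0.92, 0.97, 1.02) kpc
print(distance_range(0.138, 0.220))   # star E: no upper limit
print(distance_range(-0.014, 0.566))  # star I: lower limit ~1.81 kpc
\end{verbatim}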
\begin{figure*}
\centering
\includegraphics[width=0.9\columnwidth]{f2.pdf}
\caption{Distances and distance ranges inferred from the
parallaxes in the {\it Gaia} DR2 and their uncertainties, together with the
proper motions in declination. The dashed vertical lines mark the conservative
limits
of 2.7 $\pm$ 1 kpc on the distance to Tycho's SNR. Solid (blue)
error bars correspond to stars that, within reasonable uncertainties, might be
inside the SNR; dashed lines to stars whose formal 1 $\sigma$ error bars
reach the distance of the remnant but are so large as to make the association
implausible; and dotted lines to stars that are beyond the
limits for the SNR distance or have
a parallax relative error higher than 100$\%$.}
\label{Figure 2}
\end{figure*}
\noindent
In Figure 2 the {\it Gaia} DR2 distances and their error bars are shown
and compared with the estimated distance to
Tycho's SNR (blue vertical line) and its error bars (black dashed
vertical lines). Proper motions in declination are on the vertical
axis. Solid (blue) error bars mark the stars that,
within reasonable uncertainties, might be inside Tycho's SNR.
Dashed (black) error bars correspond to stars whose distance error bars
reach the range of distances to the SNR but are so large as to make their
association with the SNR very unlikely. Finally, dotted (black) lines
correspond to stars with distances incompatible with the SNR or with
parallax errors larger than 100$\%$.
\noindent
There are 15 stars within the range of possible distances
to the SNR (1.7 $<$ d $<$ 3.7 kpc). These are stars B, F,
G, H, L, N, Q, T, U, Y, AA, AC, AE, AG and AN.
From this list of stars, stars G and U have significant
proper motions in declination. Proper motions
of all the targeted stars will be discussed in the next Section.
\noindent
In Table 1, the parallaxes, proper motions, and {\it G} magnitudes are
given as they appear in the {\it Gaia} DR2.
The proper motions are absolute, referred to the ICRS (as mentioned above).
\noindent
In Table 2, the {\it Gaia} DR2 distances, as noted above,
are compared with the
distances deduced in B14 from the determination of the
stellar atmospheric parameters and the comparison of the resulting
luminosities with the available photometry. We see that there
is reasonable agreement in most cases, with a tendency in B14 to place the
stars at larger distances than {\it Gaia},
which can be attributed to an underestimate, in B14, of the
extinction in the direction of Tycho's SNR. Based only on the
stars with distance errors $\leq$ 0.5 kpc in both sets,
the underestimate would be $\Delta A_{V} \simeq 0.5$ mag.
Exceptions are stars N, P1, U, V and W, although in the last two
cases the {\it Gaia} error bars are so large that the
comparison is not really meaningful.
\section{Proper motions from {\it Gaia} compared with the {\it HST}}
\noindent
High velocities, mainly due to their orbital motion at the time of explosion,
must be a salient characteristic of SNe Ia companions. Unless they were mostly
moving along the line of sight when the binary system was disrupted, the
components of the velocity on the plane of the sky should be observed as high
proper motions relative to the stars around the location of the SN.
It might happen that the star was moving at some angle to the line
of sight, in which case the components of the velocity would be distributed
accordingly.
\noindent
The
location of the supernova explosion, in the case of SNe whose remnants still exist, is given in a first,
rough approximation, by the centroid of the remnant. High--precision
astrometric measurements of the proper motions of the stars within some
angular distance from the centroid are the tool needed to detect or discard
the presence of possible companions. Until the advent of {\it Gaia}, this was
only possible with {\it HST} astrometry. Radial velocities obtained from
high resolution spectra give the complementary information on the velocity
along the line of sight.
\noindent
A first set of measurements of the proper motions of the stars around the
centroid of Tycho's SNR was made in RL04. It included 26 stars, labelled from
A to W (see their Fig. 1). Images from the WFPC2 aboard the {\it HST}, taken
two months apart (within Cycle 12), were used. It was found that star G
was at a distance compatible with that of the supernova (the distance
estimate for Tycho G, most widely named star G,
was 3.0 $^{+1}_{-0.5}$ kpc), and that its motion was mostly
perpendicular to the Galactic plane,
with $\mu_{b}$ = 6.11$\pm$1.34 mas/yr ($\mu_{l}$ = -2.6$\pm$1.34 mas/yr only).
That meant a heliocentric
tangential velocity $v_{t} \sim$94 km/s which, combined with the high
measured radial velocity of -87.4 $\pm$ 0.5 km/s, gave a total heliocentric
velocity
$v_{tot}$ $\sim$128 $\pm$ 9 km/s,
making star G a likely candidate to have been the companion
star of SN 1572.
\noindent
In B14, proper motions $\mu_{\alpha}$ cos $\delta$ = -2.63$\pm$0.18 mas/yr and
$\mu_{\delta}$ = -3.98$\pm$0.10 mas/yr were measured. For a distance of
2.83$\pm$0.79 kpc, taken as the SN distance, this gives a heliocentric
tangential velocity
$v_{t}$ = 64$\pm$11 km/s. Given the heliocentric radial
velocity, $v_{r}$ = -87.4$\pm$0.5 km/s,
the total velocity is $v_{tot}$ = 108$\pm$9 km/s.
\noindent
Now, from the {\it Gaia} DR2, $\mu_{\alpha}$ cos $\delta$ = -4.417$\pm$0.191
mas/yr and $\mu_{\delta}$ = -4.064$\pm$0.143 mas/yr. The parallax being
$\varpi$ = 0.512$\pm$0.021 mas, we have
$v_{\alpha}$ cos $\delta$ = -40.88$\pm$2.44 km/s,
$v_{\delta}$ = -37.61$\pm$2.03 km/s, and $v_{t}$ = 55.55$\pm$2.26 km/s
(heliocentric).
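\smallskip
\noindent
These values follow from the standard conversion
$v\,[{\rm km/s}] = 4.74\,\mu\,[{\rm mas/yr}]/\varpi\,[{\rm mas}]$. A minimal
sketch of the computation (the radial velocity used below is the B14 value):
\begin{verbatim}
import numpy as np

K = 4.74047  # km/s per (mas/yr)/mas; 1 AU/yr expressed in km/s

pmra, pmdec, plx = -4.417, -4.064, 0.512   # star G (Gaia DR2)
v_ra, v_dec = K*pmra/plx, K*pmdec/plx      # -40.9, -37.6 km/s
v_t = np.hypot(v_ra, v_dec)                # ~55.5 km/s (heliocentric)
v_r = -87.4                                # km/s, heliocentric (B14)
v_tot = np.hypot(v_t, v_r)                 # ~103.7 km/s
print(f"v_t = {v_t:.2f} km/s, v_tot = {v_tot:.2f} km/s")
\end{verbatim}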
\smallskip
\noindent
With the corresponding $v_{r}$, this results in $v_{tot}$ = 103.69 $\pm$ 7.52 km/s.
We thus see that, as compared with B14, there is no significant change with the
new results. As in B14, they can be interpreted in the framework of a
binary model similar to that for U Sco. The excess velocity of star G with
respect to the average of the stars at the same location in the Galaxy could come
from the orbital velocity it had when the binary was disrupted by the SN
explosion.
\smallskip
\noindent
Taking as a reference the Besan\c con model of the Galaxy (Robin et al. 2003),
the average heliocentric tangential velocity of disc stars at the position and
distance
of star G is almost negligible, while the average radial velocity
is $\langle v_{r}\rangle \approx -31
\pm 28$ km/s. Then, attributing the excess over the average radial velocity
($\approx$ -56 $\pm$ 28 km/s), together with the full tangential
velocity ($\approx$ 55 $\pm$ 2 km/s), to orbital motion, we obtain
$v_{orb} \approx$ 78
$\pm$ 20 km/s (the inclination of the plane of the orbit with respect to the
line of sight would thus be $i$ = 44$^{\circ}$).
\smallskip
\noindent
The evolutionary path giving rise to SN 1572 might have started from a WD
with a mass $\sim$ 0.8 M$_{\odot}$ plus a somewhat evolved companion of
$\sim$ 2.0-2.5 M$_{\odot}$ filling its Roche lobe (RL04),
the system ending up as a WD with the Chandrasekhar mass ($\sim$ 1.4
M$_{\odot}$), plus a companion of $\sim$ 1 M$_{\odot}$. Using Kepler's law,
that $P^{2} = a^{3}/(M_{1}+M_{2})$ (with $P$ in years, $a$ in astronomical
units, $M_{1}$ and $M_{2}$ in solar masses), and since $P = 2\pi a/v$,
where $v$ is the relative orbital velocity of the two components
($v = v_{orb}(M_{1}+M_{2})/M_{1} \approx$ 134 km/s), we find
a separation $a \approx\ 25 R_{\odot}$, the period being $P \approx 9$ days.
Using Eggleton's (1983) formula
$$ R_{L} = a \left[{0.49\over 0.6 + q^{-2/3} {\rm ln}(1 + q^{1/3})}\right]$$
\noindent
($q$ being the mass ratio $M_{2}/M_{1}$) for the effective Roche lobe radius of
the companion just before the explosion, it would thus have been $\approx$
9 R$_{\odot}$. At present, the radius of the star is only 1-2 R$_{\odot}$
(GH09), which would have resulted from the
combination of mass stripping and shock heating by the impact of the SN
ejecta, plus subsequent fast cooling of the outer layers up to the present
time.
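\smallskip
\noindent
These estimates can be checked numerically; the following sketch assumes the
pre--explosion masses quoted above and takes $v$ in Kepler's relation as the
relative orbital velocity corresponding to $v_{orb} \approx$ 78 km/s:
\begin{verbatim}
import numpy as np

M1, M2 = 1.4, 1.0            # WD and companion masses (Msun)
v_orb = 78.0                 # companion orbital speed (km/s)
v_rel = v_orb*(M1 + M2)/M1   # relative orbital velocity, ~134 km/s

# Kepler's third law, P^2 = a^3/(M1+M2) (P in yr, a in AU),
# combined with P = 2*pi*a/v (v in AU/yr; 1 AU/yr = 4.74 km/s)
v = v_rel/4.74047
a = (M1 + M2)*(2*np.pi/v)**2       # AU
P = np.sqrt(a**3/(M1 + M2))        # yr
print(f"a = {a*215.03:.0f} Rsun, P = {P*365.25:.1f} d")  # ~26 Rsun, ~9.7 d

# Eggleton (1983) effective Roche-lobe radius of the companion
q = M2/M1
RL = a*215.03*0.49*q**(2/3)/(0.6*q**(2/3) + np.log(1 + q**(1/3)))
print(f"R_L = {RL:.1f} Rsun")      # ~9 Rsun
\end{verbatim}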
\smallskip
\noindent
However, an alternative to this hypothesis is discussed later in the paper.
\noindent
In B14, the proper motions of 872 stars were measured from
$HST$ astrometry, using images taken in up to four different epochs
and spanning a total of 8 yr. Much higher precision than in RL04 was achieved.
The results for 45 of them (all the stars with names in Figure 1) are given
in Table 2 and Table 3 of B14. The full version was provided as
supplementary electronic
material.
\noindent
When comparing the proper motions given by the {\it Gaia} DR2 with those
obtained by B14 from the astrometry done with the {\it HST}, one must take
into account that the former are {\it absolute} measurements, in the ICRS
system, while the latter are {\it relative} measurements. This means that the
local frame used for the {\it HST} astrometry should, in general, move with
respect to the ICRS frame. Such a systematic effect is actually seen when we
make the comparison. In Tables 2 and 3, the {\it Gaia} proper motions have
been transformed to the {\it HST} frame.
\noindent
Including only the stars with proper motion errors smaller than 0.25
mas/yr in B14, we find that, on average,
$\mu_{\alpha}\
{\rm cos}\ {\delta}\ (Gaia) = \mu_{\alpha} {\rm cos}\ {\delta}\ (B14)
- 1.599\pm0.729\ {\rm mas/yr}$
and
$\mu_{\delta}\ (Gaia) = \mu_{\delta}\ (B14) - 0.601\pm0.585\ {\rm mas/yr}$.
\noindent
In Table 2 (columns 7 and 9) and Table 3 (columns 4 and 6), we have transformed
the {\it Gaia} proper motions to the B14 {\it HST} frame according to these
relations. For our purposes, the local, relative proper motions are most
meaningful, since we are interested in the motions of the stars with respect to
the average motions of those around their positions. After applying these
zero--point shifts, there still are residual differences between the two proper
motion sets. On average,
$\Delta\ \mu_{\alpha}\ {\rm cos}\ \delta = -0.017\pm0.788\ {\rm mas/yr}$
and
$\Delta\ \mu_{\delta} = 0.005\pm0.630\ {\rm mas/yr}$.
The whole set is included here, the dispersion being mainly due to stars
which have substantial errors in their {\it Gaia} proper motions (see
columns 7 and 9 in Table 2 and columns 4 and 6 in Table 3, as well as
columns 5 and 6 in Table 4).
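\smallskip
\noindent
The zero--point shift is simply the mean of the {\it Gaia} minus {\it HST}
differences over the selected stars. As an illustration only, over the small
subset of stars B, F and G (not the actual selection used above, so the
resulting numbers differ from those quoted):
\begin{verbatim}
import numpy as np

# Gaia DR2 vs B14 proper motions in RA (mas/yr) for stars B, F, G
pmra_gaia = np.array([-4.505, -5.739, -4.417])
pmra_b14  = np.array([-1.670, -3.310, -2.630])
dpm = pmra_gaia - pmra_b14
print(f"{dpm.mean():.3f} +/- {dpm.std(ddof=1):.3f} mas/yr")
\end{verbatim}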
\section{Toomre diagram}
\noindent
{\it Gaia} provides a five--parameter astrometric
solution ($\alpha$, $\delta$, $\varpi$,
$\mu^{*}_{\alpha}$, $\mu_{\delta}$) and, for some stars, the
line--of--sight velocity V$_{r}$, together with the associated
uncertainties and the correlations between the astrometric quantities.
For the 13 stars for which we also know their radial velocities, the
total space velocities can be derived. It is most useful to see their
components in the Galactic coordinate system: U (positive in the
direction of the Galactic center), V (positive in the direction of
Galactic rotation) and W (positive in the direction of the North Galactic
Pole) in the LSR. In Table 4 we give the U, V and W components of the space velocities,
as well as the total velocities on the Galactic meridian plane, in the Local
Standard of Rest, of these 13 stars, based on the {\it Gaia} DR2 parallaxes
and proper motions and on the radial velocities from B14 (save for star A,
which has a quite precise radial velocity from {\it Gaia}). For the
transformation of the motions from heliocentric to the LSR, we
have adopted, as the peculiar velocity of the Sun with respect to the
LSR, (U$_{\odot}$, V$_{\odot}$, W$_{\odot}$) = (11.1, 12.24, 7.25) km s$^{-1}$
(Sch\"onrich et al. 2010).
\begin{figure*}
\centering
\includegraphics[width=1.0\columnwidth]{f3.pdf}
\caption{
Left upper panel: Toomre diagram for a sample of thin
disk, thick disk, and transition thin--thick disk stars, covering a wide range
of metallicities, with our stars (from Table 4)
superimposed (red dots correspond
to thin disk stars, green to transition, and blue to thick disk
stars). Left lower panel: same as upper panel, keeping only stars with
metallicities equal to or higher than that of star G. Right panel: detail
of the lower left panel, leaving out star J. The sample is taken from
Adibekyan et al. (2012) (see text).}
\label{Figure 3}
\end{figure*}
\noindent
The Toomre diagram shows the distribution of the U, V, W
velocities in the LSR. It plots the quadratic combination of U and W
versus V, and makes it possible to distinguish
between stars belonging to different Galactic stellar components (thin disk,
transition thin--thick disk, thick disk, and halo).
The sample of Adibekyan et al. (2012), with very high quality
spectroscopic data from the HARPS exoplanet search program, is used as a
reference
in Figure 3. The data have
very precise radial velocities, stellar parameters and metallicities.
Therefore, the Toomre diagram obtained from this
sample combines kinematic information, such as orbital motion,
with the element abundances derived from the spectroscopic analysis.
The sample covers a wide range of metallicities,
[Fe/H] from -1.2 to 0.4.
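\smallskip
\noindent
The quantity on the vertical axis of the diagram is $(U^{2}+W^{2})^{1/2}$,
and circles centered on the origin correspond to constant total LSR velocity.
A minimal sketch with the star G values of Table 4:
\begin{verbatim}
import numpy as np

U, V, W = 93.01, -40.33, -28.20        # star G, LSR (Table 4)
toomre = np.hypot(U, W)                # vertical axis: ~97.2 km/s
v_tot = np.sqrt(U**2 + V**2 + W**2)    # radius of its circle
print(f"(U^2+W^2)^(1/2) = {toomre:.1f} km/s, |v| = {v_tot:.1f} km/s")
\end{verbatim}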
\noindent
The sample in the upper left panel of Figure 3
has no imposed boundaries on
metallicity, while those in the lower left and the right panels
include only stars with metallicities equal to or higher than that
of star G minus its 1\ $\sigma$ uncertainty, i.e. [Fe/H] $>$
-0.14. One sees there that, with the exception of star J,
whose kinematics is very unreliable, with large errors in the
{\it Gaia} DR2 data, no other star in our sample moves as fast
as star G.
\smallskip
\noindent
The {\it Gaia} DR2 data place star G above the region where most thin disk
stars are. The kinematics of star G would locate it among the thick
disk stars, but its metallicity is that of a thin disk star,
while at its location, only 48 pc above the Galactic plane, the
density of thick disk stars is very low. Using the Adibekyan
et al. (2012) sample, the probability that star G belongs to the
thick disk, given its metallicity, is only 2\%.
\smallskip
\noindent
There are some thin disk stars, however, that move fast on the
Galactic meridian plane, and thus star G might belong to this
group, although, as we will see, it includes only a small
fraction of the thin disk stars.
\noindent
Quantitatively, in the sample from Adibekyan et al. (2012), of
1111 FGK dwarf stars, there are 601 thin disk stars with
metallicities [Fe/H] $>$ -0.14. Of these, 446 (74.2\%) are inside
the circle $(U^{2} + V^{2} + W^{2})^{1/2} <$ 50 km/s, and 596
are inside the $<$ 100 km/s circle. Only 5 (0.8\%) have
velocities higher than 100 km/s. That, therefore (0.8\%), is the
probability, from kinematics alone, that star G is just a
fast--moving thin disk star.
\noindent
In Section 6 the orbits of the stars are discussed and
the question of the detailed chemical abundances of star G
will be addressed.
\begin{table*}
\scriptsize
\setlength{\tabcolsep}{4pt}
\begin{center}
\caption{Parallaxes, heliocentric radial velocities, proper motions, Galactic U, V, W velocity components, and
total velocities on the Galactic meridian plane (referred to the LSR) of the stars with both radial velocities
from B14 and parallaxes from {\it Gaia} DR2 (the proper motions, here, are in the {\it Gaia} system).}
\begin{tabular}{llcccccccc}
\\
\hline
\hline
Star & DR2 number & $\varpi$ & $v_{\rm r}$ &$\mu_{\alpha}\ {\rm cos}\ \delta$&$\mu_ {\delta}$& U & V & W &$(U^{2} + W^{2})^{1/2}$ \\
& (4311 ...) & (mas)& (km/s) & (mas/yr) & (mas/yr) & (km/s) & (km/s) & (km/s) & (km/s) \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) \\
\hline
A & 60565571641856 & 1.03$\pm$0.05 & -30$\pm$1 & -5.321$\pm$0.076 & -3.518$\pm$0.065 & 48.65$\pm$1.12 & -0.74$\pm$0.74 &-7.12$\pm$0.82 & 49.17$\pm$1.11 \\
B & 60569875463936 & 0.49$\pm$0.04 & -45$\pm$8 & -4.505$\pm$0.063 & -0.507$\pm$0.049 & 71.69$\pm$5.08 & -5.10$\pm$7.15 & 5.72$\pm$0.53 & 71.91$\pm$5.07 \\
C1& 60359417132800 & 5.31$\pm$0.48 & -40$\pm$6 & -2.415$\pm$0.735 & -0.206$\pm$0.576 & 33.22$\pm$3.06 &-23.00$\pm$5.20 & 6.29$\pm$0.54 & 33.81$\pm$3.05 \\
D & 60363709280768 & 1.62$\pm$0.32 & -58$\pm$ 0.8&-4.566$\pm$0.636 & -2.248$\pm$0.376 & 52.16$\pm$2.73 &-30.83$\pm$1.61 & 0.64$\pm$1.68 & 52.16$\pm$2.73 \\
E & 60565573859584 & 0.14$\pm$0.22& -33$\pm$18 & 0.232$\pm$0.377 & -0.699$\pm$0.265 & 22.80$\pm$17.45 &-18.87$\pm$17.43 &-18.20$\pm$42.45 & 29.18$\pm$28.82 \\
F & 60569875460096 & 0.47$\pm$0.08 & -41$\pm$11 &-5.739$\pm$0.130 & -0.292$\pm$0.097 & 82.40$\pm$10.16 & 5.63$\pm$10.72 & 9.19$\pm$1.02 & 82.91$\pm$10.10 \\
G & 60359413315328 & 0.51$\pm$0.12 & -87$\pm$0.5&-4.417$\pm$0.191 & -4.064$\pm$0.143 & 93.01$\pm$8.85 & -40.33$\pm$5.37 &-28.20$\pm$8.28 & 97.19$\pm$8.80 \\
H & 60599931508480 & 0.62$\pm$0.20 & -78$\pm$10& -4.839$\pm$0.341 & -0.577$\pm$0.248 & 82.61$\pm$11.38& -36.89$\pm$10.49& 4.67$\pm$2.02 & 82.74$\pm$11.36 \\
J & 60565571749760 & 0.13$\pm$0.24& -52$\pm$6 & -3.900$\pm$0.373 & -1.054$\pm$0.292 &159.03$\pm$214.99& 38.04$\pm$125.88 &-17.10$\pm$45.78 &159.94$\pm$213.81 \\
N & 60565571767552 & 0.25$\pm$0.10 & -37$\pm$6 & 0.092$\pm$0.148 & 0.134$\pm$0.121 & 28.12$\pm$4.02 & -21.18$\pm$5.41 & 8.71$\pm$2.25 & 29.44$\pm$3.90 \\
O & 60569875457792 & 1.17$\pm$0.05 & -22$\pm$7 & 2.607$\pm$0.098 & 2.108$\pm$0.076 & 12.57$\pm$3.57 & -13.00$\pm$6.07 & 14.12$\pm$0.47 & 18.90$\pm$2.40 \\
P1& 60565571767424 & 0.17$\pm$0.09 & -43$\pm$10& -0.889$\pm$0.139 & -0.389$\pm$0.106 & 55.13$\pm$13.04& -11.71$\pm$11.20& -2.17$\pm$6.32 & 55.17$\pm$13.03 \\
U & 59092406721280 & 0.50$\pm$0.07 & -45$\pm$4 & -1.877$\pm$0.113 & -5.096$\pm$0.083 & 52.70$\pm$3.32 & -14.82$\pm$3.86 &-39.77$\pm$6.62 & 66.02$\pm$4.78 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\section{Stellar orbits}
\noindent
Using the {\it Gaia} DR2 data, we can calculate
the orbits of the stars in the Galaxy.
With known distances, proper motions can be translated into tangential
velocities. From these and from the radial velocities already obtained,
the total velocities of the targeted stars are reconstructed, and their
orbits as they move across the Galaxy can then be calculated.
\noindent
The stellar orbits are obtained by integration of the equations of
motion. A 3D potential of the Galaxy is required for that. Here we
use an axisymmetric potential
consisting of a spherical central bulge, a disk and a massive
spherical halo, developed by Allen \& Santill\'an (1991). It is an
analytic potential that allows an efficient and accurate numerical
computation of the orbits. In the present case, the total mass of
the Galaxy is assumed to be 9$\times10^{11}$ M$_{\odot}$. We take the Sun
to be at 8.5 kpc from the Galactic center, moving circularly
with a frequency $\omega_{\odot}$ = 25.88 km s$^{-1}$ kpc$^{-1}$.
We consider neither a Galactic bar nor a spiral--arm potential.
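\smallskip
\noindent
The integration itself is standard. The sketch below illustrates the
procedure with a simple stand--in potential (a Miyamoto--Nagai disk, a
Plummer bulge and a logarithmic halo), not the actual Allen \& Santill\'an
(1991) model, and with illustrative initial conditions loosely resembling
star G:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

G = 4.301e-6  # kpc (km/s)^2 / Msun

def acceleration(p):
    x, y, z = p
    R2, r2 = x*x + y*y, x*x + y*y + z*z
    # Miyamoto-Nagai disk
    Md, a, b = 8.5e10, 5.3, 0.25
    zb = np.sqrt(z*z + b*b)
    S = a + zb
    D = (R2 + S*S)**1.5
    acc = -G*Md*np.array([x/D, y/D, z*S/(D*zb)])
    # Plummer bulge
    Mb, c = 1.4e10, 0.39
    acc -= G*Mb/(r2 + c*c)**1.5 * np.array([x, y, z])
    # Logarithmic halo with asymptotic circular speed v0
    v0, rh = 170.0, 12.0
    acc -= v0*v0/(r2 + rh*rh) * np.array([x, y, z])
    return acc

def rhs(t, w):
    # w = [x, y, z, vx, vy, vz]; time unit kpc/(km/s) ~ 0.978 Gyr
    return np.concatenate([w[3:], acceleration(w[:3])])

# Start near the Sun's position, ~60 pc above the plane; vx = U,
# vy = v_circ - |V|, vz = W (km/s), illustrative numbers only
w0 = [-8.5, 0.0, 0.06, 93.0, 165.0, -28.0]
t_end = 0.5/0.978                    # 500 Myr in code units
sol = solve_ivp(rhs, (0.0, t_end), w0, rtol=1e-9, atol=1e-9)
print(f"max |z| = {1e3*np.abs(sol.y[2]).max():.0f} pc")
\end{verbatim}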
\begin{figure*}
\centering
\includegraphics[width=0.90\columnwidth]{f4a.pdf}
\includegraphics[width=0.90\columnwidth]{f4b.pdf}
\caption{The orbits of stars B (green), G (red), F (blue), and
U (gray), projected on the Galactic meridian plane (left) and on the Galactic
plane (right), computed forward in time for the next 500 Myr.
The common starting point is marked with
a blue square. In the left panel we see that star U reaches the largest
distance from the Galactic plane, followed by star G (corresponding to the
respective values of the W component of their velocities in Table 4), while
stars B and F scarcely depart from the plane. The behaviour of the latter
stars is typical of the rest of the sample considered here. In the right panel
we see that the orbit of star G on the Galactic plane is highly eccentric
(corresponding to the high value of the U component of its velocity in
Table 4), while the other stars (including star U) have orbits close to
circular. Also here, the behaviour of stars B and F is representative of the
whole sample.}
\label{Figure 4}
\end{figure*}
\noindent
In Figure 4 we show the orbits of stars B, G, F and U. We see that only
stars G and U reach large distances above and below the Galactic
plane, while the other two stars, in contrast, do not appreciably
leave it. Also noticeable, in the motion parallel to the Galactic
plane, is the large eccentricity of the orbit of star G.
\noindent
The Figure is meant to show how far from the Galactic plane the stars
with significant proper motions in $\mu_{\alpha}$ and $\mu_{\delta}$ reach.
We take four stars with distances compatible with that of the SN.
We see that star G will reach up to 500 pc and star U almost 700 pc above
the Galactic plane within the next 500 Myr. In contrast, Tycho B and Tycho F
(which have an insignificant $\mu_{\delta}$) do not reach 200 pc in their
orbits in
any lapse of time. We have shown Tycho B and Tycho F because they have
$\mu_{\alpha} \sim$ 4 mas yr$^{-1}$ but negligible $\mu_{\delta}$. Their orbits
do not look peculiar. They are examples of the many stars in a similar
situation, which can be seen in our Tables; they are thin disk stars (as
seen in the Toomre diagram). The case of star U is unique, in the sense that
it has a slightly larger $\mu_{\delta}$ than Tycho G but a negligible
$\mu_{\alpha}$, which makes its orbit very circular. Tycho G has about the same
proper motion in $\mu_{\alpha}$ as in $\mu_{\delta}$. This is why it
reaches 500 pc above the Galactic plane while, at the same time, unlike star U,
its orbit is eccentric.
\noindent
The total velocity of star G is larger than that of star
U. This can already be seen from the orbits and, more explicitly, in the
Toomre
diagram. When we add the radial velocity vector to obtain the total velocity
for star G, we have $v_{r}$ = -87.40 km s$^{-1}$ (heliocentric), which is
larger in modulus than that of star U (-45.40 km s$^{-1}$). Thus the total
velocity for star G
is 103.69 km s$^{-1}$, while for U it is only 68.63 km s$^{-1}$.
\begin{figure*}
\centering
\includegraphics[width=0.90\columnwidth]{f5.pdf}
\caption{
Histogram of the distribution in $\mu_{\delta}$ (in mas yr$^{-1}$) of
the stars within 1 degree of the geometrical center of Tycho's SNR in the
range of distance compatible with SN 1572 (1.7 $<$ d $<$ 3.7 kpc). The data
are
obtained from
{\it Gaia} DR2. The red vertical line shows the $\mu_{\delta}$
of star G.}
\label{Figure 5}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\columnwidth]{f6.pdf}
\caption{
Proper motions of the candidate companion stars of SN 1572
(see the first column of Table 5),
plotted over the distribution of all the stars with
distances compatible with that of the SNR (1.7 $<$ d $<$ 3.7 kpc) and
included
within a radius of 1 degree of
the remnant.
\label{Figure 6}
\end{figure*}
\noindent
In Figure 5, we show a histogram of the proper motions in declination,
$\mu_{\delta}$, of the stars at distances 1.7 $<$ d $<$ 3.7 kpc
within 1 degree of the geometrical center of the SN.
We see there that the position of star G is anomalous, and that around
the SN position the proper motions in declination are generally small.
\noindent
In Figure 6 we show the positions of the stars compatible
with the distance of SN 1572 in proper motion in right ascension
and declination.
All the stars shown are those from
the {\it Gaia} DR2 within 1 degree of the SN 1572 center and with
1.7 $<$ d $<$ 3.7 kpc.
The logarithmic scale is worth noting: within 1 degree of the center
of SN 1572 and at those distances,
the large majority of stars have very low proper motions.
\noindent
Readers might ask what happens with the stars, in the full sample of Tables 2
and 3, that do not have measured heliocentric radial velocities because they
lie beyond the innermost 15\% of the radius of the remnant explored in RL04.
The {\it Gaia} DR2 data show no significant proper motions for any of them.
\section{Candidate stars}
\noindent
In order to evaluate the likelihood that a given star was the companion of
the SN, we look at the distances provided by the {\it Gaia} parallaxes and at
the proper motions. For some of the stars, we also have radial velocities,
obtained from high resolution spectra. Parallaxes and proper motions make it
possible
to discern whether a given star has received extra momentum from the disruption
of a binary system to which it belonged.
Such extra momentum (relative to the motion of the center of mass of the
binary) mostly comes from the orbital motion of the star when the system
was disrupted by the SN explosion. There is also some kick due to the
impact of the SN ejecta on the companion, but it is comparatively small
and depends on orbital separation and on how compact the companion is.
In the hydrodynamic simulations of Marietta, Burrows \& Fryxell (2000), the
momentum gained from the kick ranges from 12\% to 50\% of the momentum the star
had before explosion, for main-sequence and subgiant companions, the kick
being much smaller in the case of red giants. Similar values are found by
Pan, Ricker \& Taam (2014).
\noindent
We examine first the stars closest to the geometrical
center of the remnant.
\bigskip
\noindent{\bf Stars A to G}
\noindent
Star A is the closest star to the geometrical center of the SN remnant.
From its stellar atmosphere parameters, this star is in the foreground of
the SN. The {\it Gaia} parallax places it at 0.97 $^{+0.05}_{-0.04}$ kpc.
By comparison of the absolute magnitude corresponding to the stellar
parameters with the photometric data, we had derived $d$ = 1.1$\pm$ 0.3 kpc
(RL04, B14).
\noindent
Star B has recently been suggested to be a foreground star
(Kerzendorf et al. 2018). It is a hot star close
to the geometrical center of the remnant.
{\it Gaia} DR2, however, places this star at
$d$ = 2.03$^{+0.19}_{-0.15}$ kpc, compatible with the SN distance.
We already pointed out, in RL04 and B14, that the star likely was at a
distance compatible with that of the SNR.
We can discard star B, however, on the basis that it has neither peculiar
proper motions nor a peculiar radial velocity. We have reconstructed the
orbit of star B: it moves on the Galactic plane without any disturbance
towards higher or lower Galactic latitudes.
\noindent
Stars C1, C2 and C3 have been observed by the {\it HST}. {\it Gaia} could
only observe C1, and has determined it to be a very nearby star, at a
distance of 0.18$^{+0.03}_{-0.01}$ kpc. From the stellar parameters,
a distance of 0.75$\pm$0.5 kpc was estimated. For C2 and C3, RL04 and B14
could not estimate
any distance, due to their faint magnitudes. The proper motion values
for these three stars obtained from {\it HST} images in B14 show that they are moving close to the Galactic plane.
\noindent
Star D is also very nearby. It is at a distance of 0.62$^{+0.15}_{-0.11}$ kpc
according to {\it Gaia}. We had calculated a distance of $d$ = 0.8$\pm$0.2 kpc.
Most of the targeted stars close to the geometrical center of the SNR
are at distances below 1.5 kpc.
\noindent
Star E, though, is at a very large distance. {\it Gaia} indicates $d$ =
7.22$^{+xxx}_{-4.43}$ kpc (there is no upper limit, the 1 $\sigma$ lower
bound on the parallax being negative).
Star E was suggested as the SN companion by Ihara et al. (2007).
The authors detected
absorption lines on the blue side of the spectrum, at 3720 \AA, consistent
with Fe absorption from Tycho's SNR. They concluded that this might
be due either to Fe I in the SN ejecta or to a peculiarity of the star.
In GH09 it was pointed out that it is likely a very distant star.
In fact, both its distance and its kinematics exclude it as a
possible companion of the SN.
\noindent
Star F is compatible with the distance to Tycho's SN. {\it Gaia} measures
a distance of $d$ = 2.15$^{+0.44}_{-0.32}$ kpc. We had calculated a distance
$d$ = 1.5$\pm$0.5 kpc. It is not moving at high radial velocity nor
does it have a high proper motion perpendicular to the Galactic plane.
Its orbit does not depart from the Galactic plane.
\noindent
Star G can also be considered among those close to the center of the
SNR. {\it Gaia} has measured a distance $d$ = 1.95$^{+0.6}_{-0.35}$ kpc,
which is within the range of distances suggested for the Tycho SN. This
star was proposed as a companion in RL04, GH09, and B14. Its
kinematics corresponds to that of a thick disk star,
but it
has a thin disk composition. An enhanced [Ni/Fe] was found in GH09 and
questioned in K13. A new calculation was done in B14, which still shows a
[Ni/Fe] value larger than solar. We leave this point aside and refer
only to the agreed solar metallicity of the star.
\noindent
All the previous stars have measured v$_{r}$. They have been placed in the
Toomre diagram (Figure 3 and Table 4).
\bigskip
\noindent{\bf Proposed stars at the NW of the geometrical center}
\noindent
Xue and Schaefer (2015) place the site of the explosion of Tycho's SN to
the NW of the geometrical center of the SNR. They base their claim, in
part, on a reconstruction of the historical center using the observations of
astronomers who recorded SN 1572 in the year of its discovery.
Their position is at odds with a previous historically based
reconstruction of the location of SN 1572 in the sky by Stephenson \& Clark
(1977). On the other hand, Xue and Schaefer (2015)
use a substitute for a 2D
hydrodynamical simulation that, as noted by Williams et al. (2016),
would only be valid for perfectly spherical remnants.
\noindent
Xue \& Schaefer (2015)
give as the position of the explosion site R.A = 00$^{h}$
25$^{m}$ 15.36$^{s}$ and Dec= 64$^{o}$ 08' 40.2''.
From it, they suggest that the companion star should be in a small circle
around stars O and R. In their Figure 2 they point to stars O, Q, S and R.
From the {\it Gaia} DR2 data, we see that stars O, Q and S are at too short
distances, incompatible with the distance to Tycho's SN. Star O is at a
distance of 0.85$^{+0.54}_{-0.22}$ kpc. Star Q is at a distance of
1.51$^{+1.53}_{-0.51}$ kpc, only the upper limit being compatible with the
distance to the SNR, but it has no high proper motions. Star S is also at a
small distance of 0.81 $^{+0.41}_{-0.20}$ kpc. Star R has no distance
measurement nor proper motions in the {\it Gaia} DR2. However, we have
proper motions measured with the {\it HST} (B14) and they are
small.
Star P is, as seen in the {\it HST} images,
actually two stars, P1 and P2. In
the {\it Gaia} DR2 we have only the parallax of P1, and it is not very well
determined; no proper motions are given. In B14, though, we have the proper
motions
of both stars, and they are relatively small. P1 has a v$_{r}$ of -43 $\pm$ 10
km/s, and it has been possible to place it in the Toomre diagram (Figure 3 and
Table 4), where it shows no kinematic peculiarity. Star N is in the same
area of the sky as stars O, P1 and P2, Q, R and S. It
has small proper motions and it is possible to place it in the Toomre diagram,
where it lies in the region of low velocities.
In the corner of the sky suggested by Xue \& Schaefer (2015),
there is no star that looks in any way like a companion of Tycho's SN.
\bigskip
\noindent{\bf The NE proposed center}
\noindent
In a recent paper by Williams et al. (2016), the expansion center of
the remnant is suggested to be to the NE of the geometrical center. These
authors use an extrapolation of the trajectories of different regions
of the SNR, but also a
2D hydrodynamical simulation of the expansion of the ejecta in an inhomogeneous
medium. They also assume cylindrical symmetry in the initial ejection of the
supernova material. However, Krause et al. (2008), from the spectrum of the
light echo of SN 1572, suggest that the explosion was aspherical and thus
not cylindrically symmetric.
\noindent
Their suggested center is at R.A. = 00$^{h}$ 25$^{m}$ 22.6$^{s}$ and
Dec = +64$^{\circ}$ 08$'$ 32.7$''$. This would be close to stars L and K.
L is a star at a distance of $d$ = 1.45$^{+2.87}_{-0.58}$ kpc, but with
small proper motions. K has no distance determined by {\it Gaia};
in B14 we suggested it to be around 4 kpc. The kinematics of the star, with
small proper motions, makes it an unsuitable companion of Tycho's SN.
Close to the NE center there are other stars, such as W, AK, AL, and AM.
These stars
do not have accurate parallaxes in the {\it Gaia} DR2 release: their
distances have no upper limits, the 1 $\sigma$ lower bounds on their
parallaxes being negative. We have looked at their kinematics,
and they have moderate to low proper motions.
\noindent
Therefore, given the various candidates proposed, the best approach is to
look for those that are within the range of possible distances to
Tycho's SN, show peculiar kinematics, and are within the region of the
sky already explored.
In Table 5 we list the different stars and evaluate their
viability as possible companions.
\smallskip
\begin{table*}
\scriptsize
\begin{center}
\caption{Criteria satisfied by the stars in Table 1. $v$ refers to the
total velocity, $v_{t}$ to the tangential velocity, and $v_{b}$ to the
velocity perpendicular to the Galactic plane.}
\begin{tabular}{llll}
\hline
\hline
Star & 1.7 kpc $\leq d \leq$ 3.7 kpc & High Velocity & High $v_{b}$ \\
(1) & (2) & (3) & (4) \\
\hline
A & No ($d$ = 0.97$^{+0.05}_{-0.04}$ kpc) & --- &
--- \\
B & Yes ($d$ = 2.03$^{+0.19}_{-0.15}$ kpc) & No ($v$ = 72$\pm$5 km/s) &
No ($v_{b}$ = -0.5 $\pm$ 0.6 km/s) \\
C1 & No ($d$ = 0.18$^{+0.03}_{-0.01}$ kpc) & --- &
--- \\
D & No ($d$ = 0.62$^{+0.15}_{-0.11}$ kpc) & --- &
--- \\
E & No ($d$ = 7.22$^{+xx}_{-4.43}$ kpc) & --- &
--- \\
F & Yes ($d$ = 2.15$^{+0.44}_{-0.32}$ kpc) & Yes ($v$ = 83$\pm$10 km/s)&
No ($v_{b}$ = 3 $\pm$ 1 km/s) \\
G & Yes ($d$ = 1.95$^{+0.60}_{-0.35}$ kpc) & Yes ($v$ = 103$\pm$7 km/s)&
Yes ($v_{b}$ = -33 $\pm$ 9 km/s) \\
H & Yes ($d$ = 1.61$^{+0.79}_{-0.40}$ kpc) & Yes ($v$ = 91$\pm$11 km/s)&
No ($v_{b}$ = -0.5 $\pm$ 2.5 km/s) \\
I & --- & --- &
--- \\
J & No ($d$ = 7.46$^{+xx}_{-4.77}$ kpc) & --- &
--- \\
K & --- & --- &
--- \\
L & Yes ($d$ = 1.45$^{+2.87}_{-0.58}$ kpc) & No ($v_{t}$ = 10$^{+20}_{-6}$
km/s)& No ($v_{b}$ = -2 $\pm$ 8 km/s)\\
M & --- & --- &
--- \\
N & Yes ($d$ = 4.06$^{+2.57}_{-1.14}$ kpc) & No ($v$ = 36$\pm$4 km/s) &
No ($v_{b}$ = 2 $\pm$ 3 km/s) \\
O & No ($d$ = 0.85$^{+0.54}_{-0.22}$ kpc) & --- &
--- \\
P1 & No ($d$ = 5.96$^{+7.23}_{-2.11}$ kpc) & --- &
--- \\
Q & Yes ($d$ = 1.51$^{+1.53}_{-0.51}$ kpc) & No ($v_{t}$ = 24$^{+24}_{-8}$
km/s) & No ($v_{b}$ = 10 $\pm$ 10 km/s)\\
R & --- & --- &
--- \\
S & No ($d$ = 0.81$^{+0.41}_{-0.20}$ kpc) & ---
& --- \\
T & Yes ($d$ = 1.77$^{+1.86}_{-0.60}$ kpc) & No ($v_{t}$ = 30$^{+32}_{-10}$
km/s) & No ($v_{b}$ = 4 $\pm$ 3 km/s)\\
U & Yes ($d$ = 1.98$^{+0.32}_{-0.24}$ kpc) & No ($v$ = 68$\pm$5 km/s) &
Yes ($v_{b}$ = -46 $\pm$ 8 km/s)\\
V & No ($d$ = 16.81$^{+xx}_{-15.89}$ kpc)& ---
& --- \\
W & No ($d$ = 5.17$^{+xx}_{-3.07}$ kpc) & ---
& --- \\
X & No ($d$ = 5.20$^{+xx}_{-3.59}$ kpc) & ---
& --- \\
Y & Yes ($d$ = 1.58$^{+0.87}_{-0.41}$ kpc) & No ($v_{t}$ = 18$^{+10}_{-5}$
km/s) & No ($v_{b}$ = -17 $\pm$ 9 km/s)\\
Z & No ($d$ = 5.68$^{+2.72}_{-2.57}$ kpc) & ---
& --- \\
AA & Yes ($d$ = 1.04$^{+1.00}_{-0.33}$ kpc) & No ($v_{t}$ = 4$^{+8}_{-7}$ km/s)&
No ($v_{b}$ = -5 $\pm$ 6 km/s)\\
AB & --- & --- &
--- \\
AC & Yes ($d$ = 2.04$^{+0.99}_{-0.50}$ kpc) & No ($v_{t}$ = 11$^{+6}_{-4}$ km/s)&
No ($v_{b}$ = -12 $\pm$ 7 km/s) \\
AE & Yes ($d$ = 3.58$^{+5.84}_{-1.37}$ kpc) & No ($v_{t}$ = 16$^{+13}_{-5}$ km/s)
& No $(v_{b}$ = 6$^{+20}_{-1}$ km/s) \\
AF & No ($d$ = 0.76$^{+0.24}_{-0.15}$ kpc) & ---
& --- \\
AG & Yes ($d$ = 1.42$^{+1.69}_{-0.50}$ kpc) & No ($v_{t}$ = 10$^{+12}_{-5}$
km/s) & No ($v_{b}$ = -6 $\pm$ 6 km/s)\\
AH & No ($d$ = 4.85$^{+3.55}_{-1.44}$ kpc) & ---
& --- \\
AI1/HP-1 & No ($d$ = 0.35$^{+0.04}_{-0.03}$ kpc) & ---
& --- \\
AI2 & --- & --- &
--- \\
AJ & No ($d$ = 5.35$^{+xx}_{-2.81}$ kpc) & ---
& --- \\
AK & --- & --- &
--- \\
AL & No ($d$ = 2.61$^{+xx}_{-1.61}$ kpc) & ---
& --- \\
AM & No ($d$ = 1.33$^{+xx}_{-0.70}$ kpc) & ---
& --- \\
AN & Yes ($d$ = 1.65$^{+0.49}_{-0.30}$ kpc) & No ($v_{t}$ = 24$^{+7}_{-4}$
km/s) & No ($v_{b}$ = -7 $\pm$ 3 km/s)\\
\hline
\end{tabular}
\end{center}
\end{table*}
\smallskip
\section{Luminosities and models}
\noindent
There are significant differences in the predictions of the characteristics of
the surviving companions of the supernova explosion. Podsiadlowski (2003) found
that, for a subgiant companion, the object $\sim$ 400 years after the explosion
might be either significantly overluminous or underluminous, relative to
its pre--SN luminosity, depending on the amount of heating and the amount
of mass stripped by the impact of the SN ejecta. More recently, Shappee,
Kochanek \& Stanek (2013) have also followed the evolution of the luminosity
for years after the impact of the ejecta on a main--sequence companion. Their
models first rise in temperature and luminosity, peaking at
10$^{4}$ L$_{\odot}$, and then cool and dim down to 10 L$_{\odot}$ some
10$^{4}$ yr after
the explosion. Around 500 days after the explosion, the companion
luminosity would be 10$^{3}$ L$_{\odot}$. Pan, Ricker \& Taam (2012, 2013,
2014) criticize the two preceding approaches for the arbitrariness of the
initial
models. Starting from their 3D hydrodynamic models, they find lower
luminosities for the companions than the previous authors: of the order of
only 10 L$_{\odot}$, several
hundred days after the explosion.
\noindent
Now, knowing the distances from {\it Gaia}, we can derive the luminosities of
the stars compatible with being inside the SNR. We adopt for the SNR the
distance range supported by the various reliable approaches discussed above,
1.7 $<$ d $<$ 3.7 kpc. We have 15 candidates
compatible with that distance. We already had {\it UBV} photometry for some of
them, and now {\it Gaia} photometry for all. From that, we find that there
is no clearly overluminous candidate.
\noindent
It has been suggested that, within the double--degenerate channel to produce
SNe Ia, the explosion can be triggered just at the beginning of the coalescence
process of the two WDs, by detonation of a thin helium layer coming from the
surface of the less massive one. That would induce a second detonation in the
core of the more massive WD. This hypothetical process has been dubbed the
``dynamically driven double--degenerate double--detonation scenario'' (see
Shen et al. 2018 and references therein). In this case, the less massive
WD would survive the SN explosion and be ejected at the very high orbital
velocity ($>$ 1000 km s$^{-1}$) it had at the moment of the explosion. Those
would be seen as ``hypervelocity WDs'' (Shen et al. 2018). The number of
hypervelocity WDs detectable by {\it Gaia} depends on the assumed luminosity
of these objects. Shen et al. (2018) conclude that, taking into account tidal
heating undergone by the WD before the explosion, a typical object would have,
after subsequently cooling for $\sim$ 10$^{6}$ yr, a luminosity $\geq$ 0.1
$L_{\odot}$, and thus be detectable by {\it Gaia} up to a distance of 1 kpc.
Based on that, they predict that $\sim$ 30 potentially detectable hypervelocity
WDs should be found within 1 kpc from the Sun. From {\it Gaia} DR2, they
have actually found three objects that, after follow--up with ground--based
telescopes, do not look like typical WDs but might be the result of the
heating and bloating of a WD companion of a SN Ia.
\noindent
In the case of Tycho's SN,
the cooling time of a possible surviving WD companion is only $\sim$ 450 yr,
and thus the luminosity should be significantly higher than the
0.1 $L_{\odot}$ adopted by Shen et al. (2018) for a typical companion having
cooled for $\sim$ 10$^{6}$ yr.
\noindent
In order to look for a possible hypervelocity WD companion to Tycho, we
must considerably enlarge the search area around the center of the SNR.
Taking as an upper limit a velocity perpendicular to the line of sight of
4000 km s$^{-1}$, the maximum distance traveled in 450 yr, 5.7$\times 10^{13}$
km, translates, at the distance of the SNR, into an angular displacement of
2.1 arcmin (slightly more than 50\% of the average
radius of the SNR, which is about 4 arcmin).
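The arithmetic, assuming a representative SNR distance $d \simeq 3$ kpc
($\simeq 9.3\times 10^{16}$ km, within the adopted range), is simply
\[
\theta \simeq \frac{v_{t}\,\Delta t}{d} =
\frac{5.7\times 10^{13}~{\rm km}}{9.3\times 10^{16}~{\rm km}}
\simeq 6.2\times 10^{-4}~{\rm rad} \simeq 2.1~{\rm arcmin}.
\]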
\noindent
We have checked that there is no object with unusually high proper motion
in {\it Gaia} DR2, within the searched area and down to a
$G$--magnitude of 20.7 (V $\sim$ 22).
For an extinction $A_{V}$ = 2.4 mag (GH09), and at
the distance of Tycho, that means a luminosity $L \sim 0.3 L_{\odot}$,
similar to the lower limit adopted by Shen et al. (2018). That does not take
into account the capture of radioactive material by the companion WD predicted
by Shen \& Schwab (2017). Objects such as the three candidates to hypervelocity
former SN Ia companions found by Shen et al. (2018), with G--magnitudes
$\sim$ 17--18 mag, would be clearly seen.
\section{Summary and conclusions}
\noindent
We have reexamined the distances and proper motions of the stars close
to the center of Tycho's SNR, using the data provided by the {\it Gaia}
DR2. Previously, the distances were only known from determinations of
the stellar atmospheric parameters and comparison of the corresponding
luminosities with the observed apparent magnitudes, with only
approximate knowledge of the extinction and uncertainty about the
luminosity classes in a number of cases. The proper motions, coming
from astrometry made with the {\it HST}, were more accurate, but
DR2 has allowed a cross--check here. Besides, only a precise knowledge
of the distances allows one to convert proper motions into tangential
velocities reliably.
\noindent
{\it Gaia} now provides the last word about the distances and kinematics
of the previously proposed companions of Tycho's SN.
\noindent
A good agreement between the distances from {\it Gaia} DR2 and those
reported in B14 has been found in many cases, but with
a general trend toward shorter {\it Gaia} distances compared with
B14, which can be attributed to an underestimate of the extinction in
the direction of the remnant in B14. In a few cases, however,
the discrepancies are large.
\noindent
Concerning proper motions, the agreement is very good once due
account is made of the systematic effect of the motion of the
local frame to which the {\it HST} measurements are referred
with respect to {\it Gaia}'s absolute frame.
\noindent
We find that, within the remaining uncertainties, up to 15
stars are at distances compatible with that of the SNR. The case for
Tycho G is that, in samples such as the one shown in Figure 3, this star
has thick--disk kinematics but thin--disk metallicity; only 0.8\% of
stars have similar characteristics. We have inspected the proper motions
of all the stars visible down to the magnitude limit of {\it Gaia} DR2,
and we have found none with the same peculiar total velocity.
We have presented a scenario in which Tycho G could be the companion of
SN 1572. There is, however, the possibility that, after performing
several orbits around the Galactic center and encountering globular
clusters and spiral arms, the orbit of the star became eccentric and
migrated towards higher Galactic latitudes; this has been suggested as
an explanation for the characteristics of Tycho G. A counterargument is
that the other stars at nearby locations, which could likewise have
performed several Galactic orbits, have not migrated.
\noindent
We agree
with Kerzendorf et al. (2018) that Tycho B is not a good candidate for
the companion of the explosion. We can also exclude, in view of the
{\it Gaia} DR2 data, that star E could be the companion, since it lies
very far away.
\noindent
If Tycho G were not the companion star, the double--degenerate
scenario or the core--degenerate scenario would be favored, since our
search has gone well below solar luminosities.
\noindent
With {\it Gaia} DR2, we have also looked for the hypervelocity stars
predicted by some scenarios, but within the magnitudes reached by
{\it Gaia} we have found none.
\bigskip
\noindent
\section{Acknowledgements}
This work has made use of data from the European Space Agency
(ESA) mission {\it Gaia} (https://www.cosmos.esa.int/gaia), processed by the
{\it Gaia} Data Processing and Analysis Consortium (DPAC,
https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has
been provided by national institutions, in particular the institutions
participating in the {\it Gaia} Multilateral Agreement.
P.R.--L. is supported by
AYA2015--67854--P from the Ministry of Industry, Science and Innovation of
Spain and the FEDER funds. J.I.G.H. acknowledges financial support from the
Spanish MINECO (Ministry of Economy of Spain)
under the 2013 Ram\'on y Cajal program MINECO RyC--2013--14875,
also from the MINECO AYA2014--56359--P and AYA2017-86389-P.
This
work was supported as well by the MINECO through grant ESP2016--80079--C2--1--R
(MINECO/FEDER, UE) and ESP2014--55996--C2--1--R (MINECO/FEDER, UE) and
MDM--2014--0369 of ICCUB (Unidad de Excelencia `Mar\'ia de Maeztu').
\newpage
\section{Introduction}
We interact with computer algorithms throughout the day. Algorithms guide us to our destinations, finish our sentences in emails, automate business processes and decisions in healthcare, and recommend movies and music for us to enjoy. On social media platforms, algorithms select messages for our news feed and find new accounts for us to follow. Despite algorithms' ubiquity and broad impact, the interaction between algorithms and people within socio-technical systems is still poorly understood, especially when algorithms learn from data based on past predictions \cite{Sinha2016}.
We have some evidence, however, that this interaction can have unexpectedly negative consequences, such as pushing people into filter bubbles \cite{Ge2020,Sirbu2019} or reducing the diversity of recommended content in online platforms \cite{Chaney2018,Mansoury2020}. Something observed in crowdsourcing systems, but under-explored in other systems, is how the interplay between people and algorithms can stochastically make some options very popular or unpopular (known as \textit{instability}), and the popularity of items has little relation to the items people prefer~\cite{Salganik2006,Burghardt2020}. Mitigating this problem in crowdsourcing systems is an ongoing struggle but, until now, it was unknown if this instability extends to other socio-technical systems, such as personalized recommendation. This feedback loop could make many items or videos more popular than better ones and generate more unpredictable trends. These emergent problems are bad for recommendation systems because they will recommend items or content users are less likely to want, reducing revenue. The unpredictability is a new challenge for businesses or content creators, who would be less certain about what items or content will be the next big thing. These problems lead us to explore two research questions in this paper:
\begin{enumerate}
\item[\textbf{RQ1}] How can we measure the stability of non-crowdsourced socio-technical systems?
\item[\textbf{RQ2}] Can we improve the stability and performance of a socio-technical system?
\end{enumerate}
We address these gaps in knowledge by systematically studying the complex dynamics of an algorithmically-driven socio-technical system, and focus on recommender systems. There are many systems we could explore, such as predictive policing \cite{Ensign2018}, or bank loans \cite{DAmour2020}, in which a feedback loop between people interacting with algorithms and the algorithms trained on these interactions could unfairly benefit some people over others. Recommender systems are chosen because they are a key component of social media platforms, e-commerce systems \cite{Ge2020}, and are used by many music and video streaming services \cite{Bell2007,Schafer2007}. In order to better understand the feedback loop, known as algorithmic confounding, in recommendation systems, we create a teacher-student framework that approximates a continuously trained recommendation system \cite{Lampinen2018}, as shown in Fig.~\ref{fig:teaser}. The left panel of Fig.~\ref{fig:teaser} shows the ground truth \emph{teacher model}, which models human choices within recommendation systems using agents. These agents stochastically choose items according to intrinsic features of items and preferences of users (modeled as a factorized matrix, which fits empirical data well \cite{funk2006netflix,Bell2007,Koren2009,Portugal2018}), as well as human biases (such as choosing items at random, a heuristic seen in crowdsourcing \cite{Burghardt2020}). This model gives each user-item pair a unique probability to be chosen by a simulated agent when the item is personally recommended by the algorithm. We can alternatively interpret ``choosing'' items as agents rating something as good or bad (a goal of some recommendation systems \cite{funk2006netflix}), but we will stick to choosing items for consistency.
We train a separate \textit{student model} from agents' binary decisions (right panel of Fig.~\ref{fig:teaser}) and use it to make new recommendations to agents. The student model is a separate factorized matrix that estimates what items each agent is likely to choose. The system records whether agents do or do not choose recommended items, and the data is used to further fine-tune the student model. We have agents interact with items exactly once in order to model agents preferring new content over old \cite{Sinha2016}, although this assumption can be easily modified in the code we provide, and should not affect our results.
Our results show that recommending items that agents are predicted to like most leads to item popularity instability, where the same item can be very popular or unpopular in different simulation realizations. We use the simulation to test an alternative recommendation strategy in which random items are sometimes recommended. This new strategy improves stability and model accuracy as well as mean item popularity (a proxy for purchases or view counts of videos) at a given time. Moreover, a side-benefit of this strategy is that it forces the algorithm to recommend diverse content, which could reduce recommendation filter bubbles.
To summarize, our contributions are the following:
\begin{enumerate}
\item We develop a novel framework to evaluate the stability of model training\footnote{The simulation code can be found here: https://github.com/KeithBurghardt/RecSim}.
\item We quantify the stability of different recommendation algorithms to algorithmic confounding.
\item We provide a simple recommendation algorithm strategy that improves accuracy and stability.
\end{enumerate}
These results demonstrate that personalized recommendation systems can produce exceedingly unstable recommendations. While the simulation is an idealized system, it gives new insight into why the systems work, why they sometimes fail, and algorithm strategies to mitigate their shortcomings.
\begin{table*}[tbh!]
\centering
\begin{tabular}{|p{3.5cm}|p{13cm}|}
\hline
{\bf Terminology} & {\bf Definition} \\\hline\hline
\textit{Personalized recommendations} & Recommendations unique to each person, in contrast to crowdsourced ranking, where the same recommendations are seen by everyone\\\hline
\textit{Algorithmic confounding} & The positive feedback loop between model and people in recommendation systems\\\hline
\textit{Simulation} & The evolution of the recommendation system from the initial condition until agents choose all items\\\hline
\textit{Agent} & The simulated user in our model\\\hline
\textit{Items} & What the recommendation algorithm recommends and agents choose\\\hline
\textit{User-item matrix} & The Boolean matrix recording, for each item an agent has been recommended, whether it was (1) or was not (0) chosen (these could alternatively be interpreted as binary ratings) \\\hline
\textit{Matrix factorization} & Algorithms that approximate a matrix as the product of two lower-rank matrices\\\hline
\textit{Recommendation system} & The recommendation algorithm along with the agents who interact with it\\\hline
\textit{Recommendation algorithm} & The matrix factorization-based student model and algorithm to recommend items to agents\\\hline
\textit{Teacher model} & The model approximating how agents interact with items. Namely the likelihood any item is selected\\\hline
\textit{Student model} & The recommendation model that approximates the available user-item matrix with matrix factorization in order to approximate how agents choose unseen items\\\hline
\textit{Teacher-student framework} & The framework in which a student model tries to reconstruct a teacher model based on the teacher model’s output\\\hline
\textit{Instability} & Sensitivity of recommended items to initial conditions\\\hline
\textit{Popularity} & How many agents have chosen an item at a given timepoint\\\hline
\textit{Quality} & The overall probability an item is selected. These are the elements of the teacher matrix\\\hline
\end{tabular}
\caption{Terminology definitions.}
\label{tab:terminology}
\end{table*}
\section{Related Work}
\paragraph{Recommendation systems}
There has been significant research into improving recommendation systems. Many methods exist to recommend everything from game characters \cite{conley2013does} to educative content \cite{Tan2008} to movies \cite{biancalana2011context}, and are often based on collaborative filtering (recommending based on the behavior of similar people), content filtering (recommending similar content), or a combination of both \cite{Balabanovic1997,Schafer2007,Portugal2018}. Collaborative filtering, which the present paper simulates, can use a number of different models, from K-means and ensemble-based methods to neural networks \cite{He2017,Bell2007,kim2016convolutional}.
A popular and accurate recommendation model is matrix factorization, in which the sparse matrix of user-item pairs, $\mathbf{R}^\text{data}$, is approximated as the product of two lower-rank matrices $\approx \mathbf{P'} \mathbf{Q'}^T$ \cite{Koren2009}. Throughout the paper, matrices are $\mathbf{bold}$ while elements within a matrix are italicized. The intuition behind matrix factorization is that users and items may individually have latent features that make users more likely to pick one item (such as watching an action movie) over another (such as watching a romantic comedy). There has been significant interest in matrix factorization both due to its performance \cite{Bell2007,kim2016convolutional} and its relative ease to analyze theoretically \cite{Lesieur2017}. This method is often used in conjunction with other models, but for simplicity we model matrix factorization alone in the present paper.
\paragraph{Algorithm biases}
Training on biased data, a common practice in recommendation systems, can enhance biases, leading to greater unfairness and more mistakes \cite{Ensign2018,DAmour2020,Jabbari2017,Joseph2016,angwin2016machine}. This is known as algorithmic bias or algorithmic confounding in recommendation systems \cite{Mansoury2020,Chaney2018,Sinha2016}. This bias might create filter bubbles that enhance polarization \cite{Sirbu2019,Bessi2016}.
\paragraph{Ranking instability in crowdsourcing} A large body of literature has explored the behavior of crowdsourcing systems. In contrast to recommendation systems that personalize content, in crowdsourcing systems all users see the same content. These systems aggregate decisions of many people to find the best items, typically by ranking them. Examples include StackExchange, where users choose the best answers to questions, and Reddit, where users choose the most interesting stories for the front page. Past work has shown that ranking, especially by popularity, creates feedback loops that amplify human biases affecting item choices, such as choosing popular items or those they see first, rather than high-quality items
\cite{lerman14as,MyopiaCrowd}. Recent literature has also identified instabilities in crowdsourced ranking \cite{Burghardt2020,Salganik2006}, in which the crowdsourced ranks of items are strongly influenced by position and social influence biases. As a result, the emergent popularity of mid-range quality content is both unpredictable and highly uneven, although the best (worst) items usually end up becoming most (least) popular~\cite{Salganik2006,Burghardt2018}. Along these lines, Burghardt et al. (\citeyear{Burghardt2020}) developed a model to explain how the better item in a two-item list was not guaranteed to become highest ranked, which implies good content is often harder to spot unless the ranking algorithm controls for these biases. Finally, content recommended by Reddit had a poor correlation with user preferences \cite{Glenski2018}, suggesting factors including algorithmic confounding have produced poor crowdsourced recommendations.
\paragraph{Reinforcement Learning} Reinforcement learning is the algorithmic technique of interacting with an environment with a set of actions and learning what actions maximize cumulative utility \cite{Kaelbling1996}. A number of reinforcement learning methods exist, from genetic algorithms and dynamic programming \cite{Kaelbling1996} to deep learning-based algorithms \cite{Arulkumaran2017}. The typical goal is to initially explore the space of actions and then exploit actions learned that can optimize cumulative utility. Personalized recommendations are a natural fit for reinforcement learning because users choose items sequentially, and the metrics to optimize, e.g., money spent or videos watched, can easily be interpreted as a cumulative utility to optimize. A growing body of literature has shown how reinforcement learning algorithms can learn a sequence of recommendations to increase the number of items bought or content viewed \cite{Taghipour2007,Lu2016,Arulkumaran2017,Afsar2021}. These algorithms are often trained on data of past users interacting with the system, which could lead to algorithmic confounding. The framework within the present paper can be extended in the future to measure algorithmic confounding within reinforcement learning-based recommendation systems and help companies develop new techniques to better train these models in online settings.
\paragraph{Our novelty} The present paper contrasts with previous work by developing a teacher-learner framework to model and better understand the interaction between recommendation algorithms and users. Furthermore, we use these findings to demonstrate how item instability can be a vexing feature of recommendation systems, in which popular or highly recommended items may not strongly correlate with items agents prefer. Finally, we provide a novel reinforcement learning-inspired approach to better train collaborative filtering models, which can improve stability and recommendation accuracy.
\begin{table}[tbh!]
\centering
\begin{tabular}{|p{1.1cm}|p{6cm}|}
\hline
{\bf Symbol} & {\bf Definition} \\\hline\hline
$\mathbf{R}^{\text{data}}$ & User-item matrix\\\hline
$\mathbf{R}^{\text{teacher}}$ & Teacher model\\\hline
$\mathbf{R}^{\text{student}}$ & Student model\\\hline
$\boldsymbol{\beta}$ & The teacher model probability to choose items independent of their features \\\hline
$\beta$ & Scalar value of all elements in $\boldsymbol{\beta}$ (free parameter between 0 and 1)\\\hline
$\boldsymbol{J}$ & All-ones matrix\\\hline
$\circ$ & Hadamard product (element-by-element multiplication between matrices)\\\hline
$\mathbf{P}$, $\mathbf{Q}$ & The latent features of users ($\mathbf{P}$) and items ($\mathbf{Q}$) in the teacher model \\\hline
$\mathbf{P'}$, $\mathbf{Q'}$ & The estimated latent features of users ($\mathbf{P'}$) and items ($\mathbf{Q'}$) in the student model \\ \hline
$k$ & Number of latent features in the teacher model ($k=4$)\\\hline
$k'$ & Number of latent features in the student model ($k'=5$)\\\hline
$n$ & Number of agents ($n=4000$) \\\hline
$m$ & Number of items ($m=200$) \\\hline
$A_{ij}$ & $ij^{th}$ element in a matrix $\mathbf{A}$\\\hline
$T$ & Timestep (value from $1$ to $m$)\\\hline
\end{tabular}
\caption{Symbol definitions.}
\label{tab:symbol}
\end{table}
\section{Methods}
We introduce the teacher-student modeling framework, simulation assumptions, and the recommendation algorithm strategies for student model training. Terminology referenced in this section is available in Table~\ref{tab:terminology}, and symbols can be referenced in Table~\ref{tab:symbol}.
\subsection{Outline of our approach}
We analyze recommendation systems using simulations that capture the essence of recommendation algorithms while being simple enough to make generalizable conclusions. Model simplifications include:
\begin{enumerate}
\item A recommendation algorithm recommends $r=1$ item to each agent before retraining on past data.
\item We assume agents are roughly the same age within the system.
\item Agents make binary choices, alternatively interpreted as ratings (upvote or downvote) \cite{Trnecka2021}.
\item Agent decisions follow the teacher model seen in the left panel of Fig.~\ref{fig:teaser}.
\item Agent choices are the same regardless of the order items are offered.
\item Regardless of agent choice, the item is not recommended again to that agent. This captures how old items may have a lower utility to users than novel items \cite{Chaney2018}.
\end{enumerate}
Additional realism can be built into the simulation in the future; the present work can be viewed as a proof-of-concept.
\subsection{Teacher-student framework}
The recommendation system simulation has two components: a \textit{teacher model}, $\mathbf{R}^{\text{teacher}}$, which models agent decisions, and a \textit{student model}, $\mathbf{R}^{\text{student}}$, which models the recommendation engine. The student model is trained on a matrix of recommended items agents have or have not chosen, $\mathbf{R}^{\text{data}}$. The joint teacher-student framework has the following benefits: (1) the teacher model encodes agent preferences and can be made arbitrarily complex to improve realism, (2) the student model prediction can be directly compared to the teacher model ground truth, and (3) we can explore counterfactual conditions of how the recommendation system would behave if agents chose a different set of items.
The left panel of Fig.~\ref{fig:teaser} shows the teacher model, which assumes that agents choose items stochastically according to a probability matrix that models both human biases and intrinsic preferences. The teacher model is
\begin{equation}
\mathbf{R}^{\text{teacher}} = \boldsymbol{\beta} + (\boldsymbol{J}-\boldsymbol{\beta}) \circ \mathbf{P}\mathbf{Q}^T
\end{equation}
where $\boldsymbol{\beta}$ is a matrix representing the probability a user will pick a given item regardless of its intrinsic qualities, $\boldsymbol{J}$ is an all-ones matrix, $\circ$ is the Hadamard product, and $\mathbf{P}$ and $\mathbf{Q}$ are both low-rank matrices. This last term approximates how agents choose items due to their intrinsic preferences for items with particular latent features. The teacher model is similar to previous models of human decisions in crowdsourced systems \cite{Burghardt2020}, where agents stochastically choose items due to intrinsic qualities or at random due to human biases. The biases in real systems could vary in intensity for different scenarios, so we keep $\boldsymbol{\beta}$ as a set of free parameters. For simplicity, we initially set the matrix $\boldsymbol{\beta} = \beta \mathbf{J}$, where $\beta$ is a scalar. For robustness, we compare our results to the case where the elements of $\boldsymbol{\beta}$ are probabilities distributed uniformly at random between 0 and 1 (random $\boldsymbol{\beta}$ condition).
To further simplify the simulation, we let $\mathbf{P}$ and $\mathbf{Q}$ be rank-$k$ matrices whose elements are uniformly distributed between 0 and $1/k$. This ensures that, after the rank-$k$ matrices are multiplied, the probabilities are always strictly positive and less than 1 (on average 0.25, which avoids highly imbalanced datasets), while otherwise making minimal assumptions about the matrix element values. In this model, we arbitrarily choose $k=4$ to ensure that $k\ll n,~m$, as is typical in other recommendation systems \cite{Bell2007,funk2006netflix}.
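As a concrete illustration, a minimal sketch of this construction (in Python; the parameter values follow Table~\ref{tab:symbol}, and the variable names are ours, not from the released code):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 4000, 200, 4   # agents, items, latent features
beta = 0.4               # scalar human-bias parameter

# Low-rank factors with elements uniform in [0, 1/k]
P = rng.uniform(0, 1/k, size=(n, k))
Q = rng.uniform(0, 1/k, size=(m, k))

# Elementwise: beta*J + (J - beta*J) o (P Q^T)
R_teacher = beta + (1 - beta) * (P @ Q.T)
\end{verbatim}
Each entry $R^{\text{teacher}}_{ij}$ is then the probability that agent $i$ chooses item $j$ when it is recommended.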
The student model approximates the user-item matrix, $\mathbf{R}^{\text{data}}$, with matrix factorization
\begin{equation}
\mathbf{R}^{\text{student}}=\mathbf{P'}\mathbf{Q'}^T
\end{equation}
shown in the right panel of Fig.~\ref{fig:teaser}. Matrix factorization assumes that agent choices are best explained by a smaller number of latent factors (agent preferences) that can be learned from their observed choices, as we assume in the teacher model. The model's output is the expected probability a user will choose a particular item. After agents decide which of the recommended items they will choose, their data are fed into the student model, whose matrices have an arbitrary low rank $k'$ (not necessarily equal to $k=4$). We choose $k'=5$ in our simulations; results are similar if we choose $k'=2$, although the fit to the data is worse. While some matrix factorization models train with a ridge regression loss term \cite{Bell2007,funk2006netflix}, we choose a slightly easier approach: stochastic gradient descent (SGD) with early stopping. We split the available data at random, with 80\% for training and 20\% for validation, and apply SGD until the validation set's Brier score is minimized (where the initial conditions are the matrix weights from the previous timepoint). While this differs from some recommendation algorithms, previous work has shown that matrix factorization is a variant of dense neural networks that commonly implement this method \cite{Lesieur2017}, and there is a close quantitative connection between ridge regression and early stopping \cite{Gunasekar2018}. Finally, this method allows us to stop training early, which makes simulations run faster. The recommendation algorithm uses this model to recommend $r=1$ item to each agent following a strategy outlined below.
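To make the training step concrete, the following sketch (continuing the Python sketch above) factorizes the observed entries by SGD with early stopping on a held-out Brier score. The function and variable names, the fixed learning rate, and the cold restart of the factors are our own simplifications; in the simulation the factors are warm-started from the previous timestep.
\begin{verbatim}
def train_student(R_data, mask, kp=5, lr=0.05,
                  max_epochs=500, seed=0):
    # mask marks user-item pairs already recommended;
    # R_data holds the 0/1 outcomes for those pairs.
    rng = np.random.default_rng(seed)
    n, m = R_data.shape
    Pp = rng.uniform(0, 1/kp, (n, kp))
    Qp = rng.uniform(0, 1/kp, (m, kp))
    pairs = np.argwhere(mask)
    rng.shuffle(pairs)
    cut = int(0.8 * len(pairs))        # 80/20 split
    train, val = pairs[:cut], pairs[cut:]
    best = np.inf
    for _ in range(max_epochs):
        for i, j in train:             # one SGD pass
            err = R_data[i, j] - Pp[i] @ Qp[j]
            gp, gq = err * Qp[j], err * Pp[i]
            Pp[i] += lr * gp
            Qp[j] += lr * gq
        brier = np.mean([(R_data[i, j] - Pp[i] @ Qp[j]) ** 2
                         for i, j in val])
        if brier >= best:              # early stopping
            break
        best = brier
    return Pp, Qp
\end{verbatim}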
\subsection{Recommendation Algorithm Strategies}
We propose a number of realistic strategies to recommend items, including a greedy strategy (recommending the content the student model predicts will most likely be chosen), an $\epsilon$-greedy strategy, in which random unchosen content is recommended with probability $\epsilon$, and a random strategy, in which random unchosen content is recommended with equal probability. We compare these against the best case scenario, the oracle strategy, in which we unrealistically set the student model equal to the teacher model. The $\epsilon$-greedy strategy is inspired by reinforcement learning systems \cite{Kaelbling1996,Arulkumaran2017,Afsar2021}, where recommendations are usually built up from interactions between individuals and the system. In contrast to previous work \cite{Afsar2021}, however, we incorporate reinforcement learning strategies into training a collaborative filtering model.
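A sketch of these strategies, parameterized by $\epsilon$ (greedy is $\epsilon = 0$, random is $\epsilon = 1$; the default value below is only illustrative):
\begin{verbatim}
def recommend(R_student, seen, epsilon=0.1, rng=None):
    # With prob. epsilon pick a random unseen item for
    # each agent, otherwise the unseen item the student
    # model scores highest. Agents who have seen every
    # item get -1 (no recommendation).
    rng = rng or np.random.default_rng()
    recs = np.full(seen.shape[0], -1)
    for i in range(seen.shape[0]):
        unseen = np.flatnonzero(~seen[i])
        if unseen.size == 0:
            continue
        if rng.random() < epsilon:
            recs[i] = rng.choice(unseen)
        else:
            recs[i] = unseen[np.argmax(R_student[i, unseen])]
    return recs
\end{verbatim}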
\subsection{Simulation Parameters}
These recommendations are then stochastically chosen by agents following the teacher model probabilities, and the simulation repeats until all items have been recommended. The student model is first trained on a sparse initial set of data. More specifically, 0.1\% of data was initially sampled uniformly at random, with values $R^{\text{data}}_{ij}=$ 0 or 1 drawn from a Bernoulli distribution with probability ${R}^{\text{teacher}}_{ij}$. Many datasets, such as the Netflix Prize dataset, have a greater proportion of user-item pairs (roughly 1\% of all possible pairs \cite{Bell2007}). However, those user-item pairs were themselves recommended with Netflix's in-house algorithm, which was trained on an even smaller set of data; we assume this is 0.1\% of all possible pairs.
We run this model for $n=4000$ agents and $m=200$ simulated items, with ten realizations for each value of $\beta$ (or five realizations for the random $\boldsymbol{\beta}$ teacher model). In each realization we retrain the student model from scratch, starting with a random 0.1\% of all pairs. We also generate a new teacher model for each realization, but keeping the same teacher model does not significantly affect results.
The ratio of agents to items was chosen to approximately correspond to that of the Netflix prize data \cite{Bell2007}, roughly 20 agents per item. Largely because of the number of times we fit the student model over the course of each simulation realization ($m=200$ times), and because matrix factorization takes up to $O(n\times m)$, the simulation takes $O(n\times m^2)$. This time complexity means that modeling our comparatively modest set of agents and items would take several computing days for one simulation realization if it were not run in parallel. We are able to finish all the simulations in this paper in roughly 1-2 weeks on three 48-logical-core servers using Python (see link to code in the Introduction).
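Putting the pieces together, one realization of the simulation then looks roughly as follows (a sketch under the naming assumptions above, not the exact released code):
\begin{verbatim}
# 0.1% of pairs observed initially, Bernoulli outcomes
seen = rng.random((n, m)) < 0.001
R_data = np.zeros((n, m))
R_data[seen] = rng.random(seen.sum()) < R_teacher[seen]

while not seen.all():
    Pp, Qp = train_student(R_data, seen)
    recs = recommend(Pp @ Qp.T, seen, epsilon=0.1, rng=rng)
    idx = np.flatnonzero(recs >= 0)   # still-active agents
    picked = rng.random(idx.size) < R_teacher[idx, recs[idx]]
    R_data[idx, recs[idx]] = picked   # record 0/1 outcomes
    seen[idx, recs[idx]] = True
\end{verbatim}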
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.7\linewidth]{figures/Instability.pdf}
\caption{Student model instability. (a) The correlation between item popularity at timestep 100 and the teacher model probability an item would be chosen as a function of the human bias parameter, $\beta$. An alternative random $\boldsymbol{\beta}$ model, where the bias is uniformly distributed between 0 and 1 for each user-item pair, shows similar results. (b) The item popularity correlation between different realizations of the model after 100 timesteps. }
\label{fig:Correl}
\end{figure*}
\section{Results}
We test whether the recommendation algorithm provides the agents with the items they want, whether items are ranked consistently, and finally how to improve recommendation algorithm stability.
We can gain intuition about how the greedy strategy affects the system in the limit that the student model's low-rank matrices are rank-one matrices ($k'=1$), which is even simpler than the simulation discussed in the rest of the paper ($k'=5$). If we want to recommend the top items to each agent $i$, then we want to find the item $j$ with the largest value in the student model, ${R}^{\text{student}}_{ij}= P'_i \times Q'_j$. However, in this case, the system recommends the same item to each agent because the relative ranking of items only depends on $Q'_j$. This common ranking also implies that the recommendation system will only recommend popular content to agents rather than niche items agents may prefer. The homogeneity of recommendations and the relationship between recommendations and popularity are seen in previous work on more realistic systems \cite{Chaney2018,Mansoury2020}; therefore, even this simplified version of the simulation captures the behavior of more sophisticated systems. What is not captured in previous work, however, is that $\mathbf{Q'}$ would vary dramatically depending on the initial conditions, which implies item popularity instability: the same item could be very popular or unpopular in different simulation realizations by chance. The $\epsilon$-greedy strategy, in contrast to the greedy strategy, promotes random items, which we will show reduces the inequality of the system and helps the recommendation algorithm quickly find the most preferred content.
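Formally, for $k'=1$ with positive factors, the agent index factors out of the ranking:
\begin{equation*}
\arg\max_{j} R^{\text{student}}_{ij} = \arg\max_{j} P'_{i} Q'_{j} = \arg\max_{j} Q'_{j},
\end{equation*}
independent of $i$, so every agent receives the same recommendation regardless of its actual preferences.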
\subsection{Instability of Recommendation Systems}
Figure~\ref{fig:Correl} shows the stability and accuracy of the model. Figure~\ref{fig:Correl}a compares the popularity of items after 100 timesteps, when half of all recommendations are made, to the ground truth (popularity if all user-item pairs were fully sampled). We find that increasing the bias $\beta$ decreases the correlation between algorithm and ground truth popularity; therefore items that should be popular are not. Figure~\ref{fig:Correl}b, in contrast, shows that larger $\beta$ decreases the item popularity correlation between simulation realizations, implying greater item popularity instability. This is akin to previous work on crowdsourcing systems \cite{Burghardt2020,Salganik2006}, in which ranking items by a simple heuristic can drive some items to become popular by chance. Despite this finding, the $\epsilon$-greedy strategy creates much higher correlations between item popularity and the ground truth (Fig.~\ref{fig:Correl}a) and between item popularities in different simulation realizations (Fig.~\ref{fig:Correl}b). The new strategy, in other words, improves recommendation accuracy and reduces item popularity instability.
\begin{figure}[tb!]
\centering
\includegraphics[width=0.9\linewidth]{figures/ModelAccuracy.pdf}
\caption{Model error for different algorithm strategies. We find that the Brier score (mean squared error between model prediction and agent decision) drops with time and is highest when the algorithm uses the greedy strategy for $\beta=0.0$, $0.4$. On the other hand, when the algorithm follows the $\epsilon$-greedy strategy, the error drops dramatically, and is nearly as low as for the random strategy (offering random items to agents).}
\label{fig:Error}
\end{figure}
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.7\linewidth]{figures/InequalityViews.pdf}
\caption{The evolution of item popularity. (a) Gini coefficient and (b) mean item popularity over time for $\beta=0.0$, $0.4$. Four different strategies are used for recommendation: oracle, greedy, $\epsilon$-greedy and random. Gini coefficient is generally lower for $\epsilon$-greedy and random strategies, and the $\epsilon$-greedy strategy makes more ideal recommendations, allowing mean item popularity to be higher than all but the oracle strategy.}
\label{fig:Views}
\end{figure*}
The reason the greedy strategy performs poorly can be better understood when we plot the student model error over time in Fig.~\ref{fig:Error}. We show that model error generally decreases with time, as expected, but under the greedy strategy the error decreases only slowly. The $\epsilon$-greedy strategy enhances the student model through a representative sample of the user-item matrix. The error for this strategy therefore drops to a small fraction of that of the greedy strategy and is nearly as small as the error for the random strategy.
\subsection{Comparing Recommendation Quality}
Next, we compare the quality of recommendations by observing the items chosen over time in Fig.~\ref{fig:Views}. We show that the $\epsilon$-greedy strategy makes more diverse recommendations that are of higher quality on average than the random or greedy strategies. In Fig.~\ref{fig:Views}a, we show the Gini coefficient of item popularity, a proxy for item popularity inequality. If items were fully sampled, their Gini coefficient would be less than 0.2 ($T=200$ values on the right side of the figure). Under the idealized oracle strategy, the Gini coefficient is initially high (only a few of the best items are recommended) and steadily drops. The lower Gini coefficient for the $\epsilon$-greedy strategy is a product of more equal sampling. The greedy strategy's Gini coefficient is, however, often higher than that of all alternative strategies, meaning some items are recommended far more frequently than they should be.
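For completeness, one common estimator of the Gini coefficient of an item-popularity vector (a minimal sketch; implementations differ in details):
\begin{verbatim}
def gini(pop):
    # 0 = perfectly equal popularity, near 1 = maximally
    # unequal; pop holds nonnegative item counts.
    x = np.sort(np.asarray(pop, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * cum.sum() / cum[-1]) / n
\end{verbatim}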
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.95\linewidth]{figures/UniformBetaResults.pdf}
\caption{Robustness of results. (a) Brier score, (b) item popularity Gini coefficient, (c) mean item popularity over time for random $\boldsymbol{\beta}$ matrix. As in the previous experiments, the $\epsilon$-greedy strategy has lower Brier score and item popularity Gini coefficient than the greedy strategy, and slightly (but statistically significantly) higher mean item popularity. Difference in item popularity between $\epsilon$-greedy and greedy strategy over time is shown in the inset.}
\label{fig:Robust}
\end{figure*}
Next, we plot the mean popularity over time in Fig.~\ref{fig:Views}b. If the mean popularity of items is high early on, then agents are recommended items they really like. The mean popularity of items is highest in the oracle strategy because we know exactly what items users are most likely to choose and recommend those first. We find agents choose more items at any timepoint with the $\epsilon$-greedy strategy than with the alternatives, implying it is the best non-oracle strategy. The random recommendations within the $\epsilon$-greedy strategy train the student model better, and this in turn creates better recommendations on average.
\subsection{Robustness of Results}
The simulations shown in Figs.~\ref{fig:Error} \&~\ref{fig:Views} make several simplifying assumptions, including a $\boldsymbol{\beta}$ matrix whose elements are all the same. Instead, the $\boldsymbol{\beta}$ elements could be very different, which we model as values that are independent and uniformly distributed between 0 and 1. We show in Fig.~\ref{fig:Correl} that this does not change our finding that the greedy strategy is less stable than the $\epsilon$-greedy strategy. We similarly show that the model error and item popularity are qualitatively similar in Fig.~\ref{fig:Robust}. Namely, in Fig.~\ref{fig:Robust}a, we show that the greedy method has consistently higher model error than the $\epsilon$-greedy strategy. Similarly, the Gini coefficient (Fig.~\ref{fig:Robust}b) is higher, and the mean popularity over time is very slightly, but statistically significantly, lower in Fig.~\ref{fig:Robust}c (shown more clearly in the inset, which plots the difference in item popularity over time). To test the statistical significance, we take the mean popularity difference across all timesteps between the greedy and $\epsilon$-greedy strategies and compute the z-score of this difference (which is greater than $15$), making the $p$-value $<10^{-6}$. This result is approximately the same whether we compare the mean difference for individual realizations or the mean item popularity across five realizations; results stated above are for mean popularity across five realizations. Unlike in earlier results, however, the teacher model is now poorly approximated as a product of two lower-rank matrices, so the error is typically higher. The oracle strategy in turn has a lower Gini coefficient and higher mean popularity than the alternative strategies.
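A minimal sketch of this test (variable names are illustrative, and this is our reading of the procedure):
\begin{verbatim}
# pop_eps and pop_greedy: mean item popularity per
# timestep, averaged over realizations, per strategy.
diff = pop_eps - pop_greedy
z = diff.mean() / (diff.std(ddof=1) / np.sqrt(diff.size))
\end{verbatim}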
\section{Discussion \& Conclusions}
In conclusion, we develop a teacher-student framework to understand the accuracy and stability of collaborative filtering models as they interact with agents. The simulations demonstrate that the greedy strategy produces unstable and inaccurate predictions over time. Namely, the recommendation algorithm recommends a small set of items (leading to a high popularity Gini coefficient) and this leads to higher error. In contrast, the $\epsilon$-greedy strategy follows a less intuitive training regime in which random items are recommended to agents. This leads to better sampling (lower Gini coefficient), lower error, and \emph{more} items picked at any given time because the items recommended are what agents prefer to choose. Finally, the $\epsilon$-greedy strategy might force users out of filter bubbles by exposing them to a more diverse set of items. This potential should be explored in future work. This paper adds to growing literature on the instability of ranking systems \cite{Salganik2006, Burghardt2020}, but also gives greater insight into personalized ranking, and emergent properties, both desired and unintended, of these systems. For example, the sensitivity of the recommendation algorithm to initial conditions is reminiscent of chaotic systems, but future work is needed to test the relationship between these findings and non-linear dynamics or chaos theory.
\subsection{Limitations}
There are a number of potential limitations with the current method. First, our work must rely on synthetic data because we cannot know whether a user would choose an item they were not recommended in empirical data. Furthermore, assumptions built into the present simulation may not reflect the true human dynamics. For example, agents are the same age and equally active in the system. In reality, some agents may be much older and have much more data than others. In addition, the order in which items are recommended could affect agent decisions. For example, users might not buy a vinyl record unless they are first recommended a record player. Moreover, we use a simple student model to recommend items, but newer and more sophisticated collaborative filtering methods could be employed.
\subsection{Ethical Considerations}
The present simulations offer policy suggestions to improve recommendations by combining ideas from collaborative filtering with reinforcement learning. This current work does not, however, explore the potential adverse effects of recommendation systems, such as filter bubbles \cite{Bessi2016}. Items recommended by the $\epsilon$-greedy strategy could be of low quality or promote harm, whereas we would expect such items to be screened out when recommendation algorithms promote what agents should like. That said, this method actively fights filter bubbles by offering items outside of the user's expected preferences, and a variation of this strategy, such as recommending among a cleaned or popular set of items, could provide users with better and more diverse items.
\subsection{Future Work}
Additional realism, such as agents arriving or leaving the system, could easily be incorporated into the simulation. The present work is a baseline that researchers can modify for specific systems. Additional features should be explored, however, including polarization, in which people preferentially pick one type of content over another. While there has been a growing interest in algorithmic polarization \cite{Sirbu2019}, the dynamic interaction between agents and trained models should be explored in greater depth, especially if it can drive people away from echo chambers. Moreover, the teacher-student methodology in this paper can be extended to audit other socio-technical systems such as predictive policing \cite{Ensign2018}, bail \cite{kleinberg2016inherent}, or banking loans, whose data is known to be intrinsically biased \cite{DAmour2020}. The instability we could measure is, for example, who gets loans or goes to jail. If this varies due to the simulation realization and not the intrinsic features of the people, we could quantify, and find ways to address, the algorithm's instability.
\section*{Acknowledgements}
Research was supported by DARPA under award \# HR001121C0169.
\section*{Conflicts of Interest}
The authors declare no conflicts of interest.
\section{Introduction}
We interact with computer algorithms throughout the day. Algorithms guide us to our destinations, finish our sentences in emails, automate business processes and decisions in healthcare, and recommend movies and music for us to enjoy. On social media platforms, algorithms select messages for our news feed and find new accounts for us to follow. Despite algorithms' ubiquity and broad impact, the interaction between algorithms and people within socio-technical systems is still poorly understood, especially when algorithms learn from data based on past predictions \cite{Sinha2016}.
We have some evidence, however, that this interaction can have unexpectedly negative consequences, such as pushing people into filter bubbles \cite{Ge2020,Sirbu2019} or reducing the diversity of recommended content in online platforms \cite{Chaney2018,Mansoury2020}. Something observed in crowdsourcing systems, but under-explored in other systems, is how the interplay between people and algorithms can stochastically make some options very popular or unpopular (known as \textit{instability}), and the popularity of items has little relation to the items people prefer~\cite{Salganik2006,Burghardt2020}. Mitigating this problem in crowdsourcing systems is an ongoing struggle but, until now, it was unknown if this instability extends to other socio-technical systems, such as personalized recommendation. This feedback loop could make many items or videos more popular than better ones and generate more unpredictable trends. These emergent problems are bad for recommendation systems because they will recommend items or content users are less likely to want, reducing revenue. The unpredictability is a new challenge for businesses or content creators, who would be less certain about what items or content will be the next big thing. These problems lead us to explore two research questions in this paper:
\begin{enumerate}
\item[\textbf{RQ1}] How can we measure the stability of non-crowdsourced socio-technical systems?
\item[\textbf{RQ2}] Can we improve the stability and performance of a socio-technical system?
\end{enumerate}
We address these gaps in knowledge by systematically studying the complex dynamics of an algorithmically-driven socio-technical system, and focus on recommender systems. There are many systems we could explore, such as predictive policing \cite{Ensign2018}, or bank loans \cite{DAmour2020}, in which a feedback loop between people interacting with algorithms and the algorithms trained on these interactions could unfairly benefit some people over others. Recommender systems are chosen because they are a key component of social media platforms, e-commerce systems \cite{Ge2020}, and are used by many music and video streaming services \cite{Bell2007,Schafer2007}. In order to better understand the feedback loop, known as algorithmic confounding, in recommendation systems, we create a teacher-student framework that approximates a continuously trained recommendation system \cite{Lampinen2018}, as shown in Fig.~\ref{fig:teaser}. The left panel of Fig.~\ref{fig:teaser} shows the ground truth \emph{teacher model}, which models human choices within recommendation systems using agents. These agents stochastically choose items according to intrinsic features of items and preferences of users (modeled as a factorized matrix, which fits empirical data well \cite{funk2006netflix,Bell2007,Koren2009,Portugal2018}), as well as human biases (such as choosing items at random, a heuristic seen in crowdsourcing \cite{Burghardt2020}). This model gives each user-item pair a unique probability to be chosen by a simulated agent when the item is personally recommended by the algorithm. We can alternatively interpret ``choosing'' items as agents rating something as good or bad (a goal of some recommendation systems \cite{funk2006netflix}), but we will stick to choosing items for consistency.
We train a separate \textit{student model} from agent's binary decisions (right panel of Fig.~\ref{fig:teaser}) and use it to make new recommendations to agents. The student model is a separate factorized matrix that estimates what items each agent is likely to choose. The system records whether agents do or do not choose recommended items and the data is used to further fine-tune the student model. We have agents interact with items exactly once in order to model agents preferring new content over old \cite{Sinha2016} although this assumption can be easily modified in the code we provide, and should not affect our results.
Our results show that recommending items that agents are predicted to like most leads to item popularity instability, where the same item can be very popular or unpopular in different simulation realizations. We use the simulation to test an alternative recommendation strategy in which random items are sometimes recommended. This new strategy improves stability and model accuracy as well as mean item popularity (a proxy for purchases or view counts of videos) at a given time. Moreover, a side-benefit of this strategy is that it forces the algorithm to recommend diverse content, which could reduce recommendation filter bubbles.
To summarize, our contributions are the following:
\begin{enumerate}
\item We develop a novel framework to evaluate the stability of model training \footnote{The simulation code can be found here: https://github.com/KeithBurghardt/RecSim}.
\item We quantify the stability of different recommendation algorithms to algorithmic confounding.
\item We provide a simple recommendation algorithm strategy that improves accuracy and stability.
\end{enumerate}
These results demonstrate that personalized recommendation systems can produce exceedingly unstable recommendations. While the simulation is an idealized system, it gives new insight into why the systems work, why they sometimes fail, and algorithm strategies to mitigate their shortcomings.
\begin{table*}[tbh!]
\centering
\begin{tabular}{|p{3.5cm}|p{13cm}|}
\hline
{\bf Terminology} & {\bf Definition} \\\hline\hline
\textit{Personalized recommendations} & Recommendations unique to each person, in contrast to crowdsourced ranking, where the same recommendations are seen by everyone\\\hline
\textit{Algorithmic confounding} & The positive feedback loop between model and people in recommendation systems\\\hline
\textit{Simulation} & The evolution of the recommendation system from the initial condition until agents choose all items\\\hline
\textit{Agent} & The simulated user in our model\\\hline
\textit{Items} & What the recommendation algorithm recommends and agents choose\\\hline
\textit{User-item matrix} & The Boolean matrix of items each agent has been recommended, that are (1) or are not (0) chosen (these could be alternatively interpreted as binary ratings) \\\hline
\textit{Matrix factorization} & Algorithms that approximate a matrix as the product of two lower-rank matrices\\\hline
\textit{Recommendation system} & The recommendation algorithm along with the agents who interact with it\\\hline
\textit{Recommendation algorithm} & The matrix factorization-based student model and algorithm to recommend items to agents\\\hline
\textit{Teacher model} & The model approximating how agents interact with items. Namely the likelihood any item is selected\\\hline
\textit{Student model} & The recommendation model that approximates the available user-item matrix with matrix factorization in order to approximate how agents choose unseen items\\\hline
\textit{Teacher-student framework} & The framework in which a student model tries to reconstruct a teacher model based on the teacher model’s output\\\hline
\textit{Instability} & Sensitivity of recommended items to initial conditions\\\hline
\textit{Popularity} & How many agents have chosen an item at a given timepoint\\\hline
\textit{Quality} & The overall probability an item is select. These are elements of the teacher matrix\\\hline
\end{tabular}
\caption{Terminology definitions.}
\label{tab:terminology}
\end{table*}
\section{Related Work}
\paragraph{Recommendation systems}
There has been significant research into improving recommendation systems. Many methods exist to recommend everything from game characters \cite{conley2013does} to educative content \cite{Tan2008} to movies \cite{biancalana2011context}, and are often based on collaborative filtering (recommending based on the behavior of similar people), content filtering (recommending similar content), or a combination of both \cite{Balabanovic1997,Schafer2007,Portugal2018}. Collaborative filtering, which the present paper simulates, can use a number of different models from more K-means and ensemble-based methods to neural networks \cite{He2017,Bell2007,kim2016convolutional}.
A popular and accurate recommendation model is matrix factorization, in which the sparse matrix of users-items pairs, $\mathbf{R}^\text{data}$, is approximated as the product of two lower-rank matrices $\approx \mathbf{P'} \mathbf{Q'}^T$ \cite{Koren2009}. Throughout the paper, matrices are $\mathbf{bold}$ while elements within a matrix are italicized. The intuition behind matrix factorization is that users and items may individually have latent features that make users more likely to pick one item (such as watching an action movie) over another (such as watching a romantic comedy). There has been significant interest in matrix factorization both due to its performance \cite{Bell2007,kim2016convolutional}, and relative ease to analyze theoretically \cite{Lesieur2017}. This method is often used in conjunction with other models, but for simplicity we model matrix factorization alone in the present paper.
\paragraph{Algorithm biases}
Training on biased data, a common practice in recommendation systems, can enhance biases, leading to greater unfairness and more mistakes \cite{Ensign2018,DAmour2020,Jabbari2017,Joseph2016,angwin2016machine}. This is known as algorithmic bias or algorithmic confounding in recommendation systems \cite{Mansoury2020,Chaney2018,Sinha2016}. This bias might create filter bubbles that enhance polarization \cite{Sirbu2019,Bessi2016}.
\paragraph{Ranking instability in crowdsourcing} A large body of literature has explored the behavior of crowdsourcing systems. In contrast to recommendation systems that personalize content, in crowdsourcing systems all users see the same content. These systems aggregate decisions of many people to find the best items, typically by ranking them. Examples include StackExchange, where users choose the best answers to questions, and Reddit, where users choose the most interesting stories for the front page. Past work has shown that ranking, especially by popularity, creates feedback loops that amplify human biases affecting item choices, such as choosing popular items or those they see first, rather than high-quality items
\cite{lerman14as,MyopiaCrowd}. Recent literature has also identified instabilities in crowdsourced ranking \cite{Burghardt2020,Salganik2006}, in which the crowdsourced rank of items are strongly influenced by position and social influence biases. As a result, the emergent popularity of mid-range quality content is both unpredictable and highly uneven, although the best (worst) items usually end up becoming most (least) popular~\cite{Salganik2006,Burghardt2018}. Along these lines, Burghardt et al. (\citeyear{Burghardt2020}) developed a model to explain how the better item in a two-item list was not guaranteed to become highest ranked, which implies good content is often harder to spot unless the ranking algorithm controls for these biases. Finally, content recommended by Reddit had a poor correlation with user preferences \cite{Glenski2018}, suggesting factors including algorithmic confounding have produced poor crowdsourced recommendations.
\paragraph{Reinforcement Learning} Reinforcement learning is the algorithmic technique of interacting with an environment with a set of actions and learning what actions maximize cumulative utility \cite{Kaelbling1996}. A number of reinforcement learning methods exist, from genetic algorithms and dynamic programming \cite{Kaelbling1996} to deep learning-based algorithms \cite{Arulkumaran2017}. The typical goal is to initially explore the space of actions and then exploit actions learned that can optimize cumulative utility. Personalized recommendations are a natural fit for reinforcement learning because users choose items sequentially, and the metrics to optimize, e.g., money spent or videos watched, can easily be interpreted as a cumulative utility to optimize. A growing body of literature has shown how reinforcement learning algorithms can learn a sequence of recommendations to increase the number of items bought or content viewed \cite{Taghipour2007,Lu2016,Arulkumaran2017,Afsar2021}. These algorithms are often trained on data of past users interacting with the system, which could lead to algorithmic confounding. The framework within the present paper can be extended in the future to measure algorithmic confounding within reinforcement learning-based recommendation systems and help companies develop new techniques to better train these models in online settings.
\paragraph{Our novelty} The present paper contrasts with previous work by developing a teacher-learner framework to model and better understand the interaction between recommendation algorithms and users. Furthermore, we use these findings to demonstrate how item instability can be a vexing feature of recommendation systems, in which popular or highly recommended items may not strongly correlate with items agents prefer. Finally, we provide a novel reinforcement learning-inspired approach to better train collaborative filtering models, which can improve stability and recommendation accuracy.
\begin{table}[tbh!]
\centering
\begin{tabular}{|p{1.1cm}|p{6cm}|}
\hline
{\bf Symbol} & {\bf Definition} \\\hline\hline
$\mathbf{R}^{\text{data}}$ & User-item matrix\\\hline
$\mathbf{R}^{\text{teacher}}$ & Teacher model\\\hline
$\mathbf{R}^{\text{student}}$ & Student model\\\hline
$\boldsymbol{\beta}$ & Probability, under the teacher model, that agents choose items independently of their features \\\hline
$\beta$ & Scalar value of all elements in $\boldsymbol{\beta}$ (free parameter between 0 and 1)\\\hline
$\boldsymbol{J}$ & All-ones matrix\\\hline
$\circ$ & Hadamard product (element-by-element multiplication between matrices)\\\hline
$\mathbf{P}$, $\mathbf{Q}$ & The latent features of users ($\mathbf{P}$) and items ($\mathbf{Q}$) in the teacher model \\\hline
$\mathbf{P'}$, $\mathbf{Q'}$ & The estimated latent features of users ($\mathbf{P'}$) and items ($\mathbf{Q'}$) in the student model \\ \hline
$k$ & Number of latent features in the teacher model ($k=4$)\\\hline
$k'$ & Number of latent features in the student model ($k'=5$)\\\hline
$n$ & Number of agents ($n=4000$) \\\hline
$m$ & Number of items ($m=200$) \\\hline
$A_{ij}$ & $ij^{\text{th}}$ element of a matrix $\mathbf{A}$\\\hline
$T$ & Timestep (value from $1$ to $m$)\\\hline
\end{tabular}
\caption{Symbol definitions.}
\label{tab:symbol}
\end{table}
\section{Methods}
We introduce the teacher-student modeling framework, simulation assumptions, and the recommendation algorithm strategies for student model training. Terminology referenced in this section is available in Table~\ref{tab:terminology}, and symbols can be referenced in Table~\ref{tab:symbol}.
\subsection{Outline of our approach}
We analyze recommendation systems using simulations that capture the essence of recommendation algorithms while being simple enough to make generalizable conclusions. Model simplifications include:
\begin{enumerate}
\item A recommendation algorithm recommends $r=1$ item to each agent before retraining on past data.
\item We assume agents are roughly the same age within the system.
\item Agents make binary choices, alternatively interpreted as ratings (upvote or downvote) \cite{Trnecka2021}.
\item Agent decisions follow the teacher model seen in the left panel of Fig.~\ref{fig:teaser}.
\item Agent choices are the same regardless of the order items are offered.
\item Regardless of agent choice, the item is not recommended again to that agent. This captures how old items may have a lower utility to users than novel items \cite{Chaney2018}.
\end{enumerate}
Additional realism can be built into the simulation in the future; the present work can be viewed as a proof-of-concept.
\subsection{Teacher-student framework}
The recommendation system simulation has two components: a \textit{teacher model}, $\mathbf{R}^{\text{teacher}}$, which models agent decisions, and a \textit{student model}, $\mathbf{R}^{\text{student}}$, which models the recommendation engine. The student model is trained on a matrix of recommended items agents have or have not chosen, $\mathbf{R}^{\text{data}}$. The joint teacher-student framework has the following benefits: (1) the teacher model encodes agent preferences and can be made arbitrarily complex to improve realism, (2) the student model prediction can be directly compared to the teacher model ground truth, and (3) we can explore counterfactual conditions of how the recommendation system would behave if agents chose a different set of items.
The left panel of Fig.~\ref{fig:teaser} shows the teacher model, which assumes that agents choose items stochastically according to a probability matrix that models both human biases and intrinsic preferences. The teacher model is
\begin{equation}
\mathbf{R}^{\text{teacher}} = \boldsymbol{\beta} + (\boldsymbol{J}-\boldsymbol{\beta}) \circ \mathbf{P}\mathbf{Q}^T
\end{equation}
where $\boldsymbol{\beta}$ is a matrix representing the probability that a user will pick a given item regardless of its intrinsic qualities, $\boldsymbol{J}$ is an all-ones matrix, $\circ$ is the Hadamard product, and $\mathbf{P}$ and $\mathbf{Q}$ are both low-rank matrices. This last term approximates how agents choose items due to their intrinsic preferences for items with particular latent features. The teacher model is similar to previous models of human decisions in crowdsourced systems \cite{Burghardt2020}, where agents stochastically choose items due to intrinsic qualities or at random due to human biases. The biases in real systems could vary in intensity for different scenarios, so we keep $\boldsymbol{\beta}$ as a set of free parameters. For simplicity, we initially set the matrix $\boldsymbol{\beta} = \beta \mathbf{J}$, where $\beta$ is a scalar. For robustness, we compare our results to the case in which the elements of $\boldsymbol{\beta}$ are probabilities distributed uniformly at random between 0 and 1 (the random $\boldsymbol{\beta}$ condition).
To further simplify the simulation, we let $\mathbf{P}$ and $\mathbf{Q}$ be rank-$k$ matrices whose elements are uniformly distributed between 0 and $1/k$. This ensures that, after the rank-$k$ matrices are multiplied, the resulting probabilities are always strictly positive and less than 1 (on average 0.25, which avoids highly imbalanced datasets), while otherwise making minimal assumptions about the matrix element values. In this model, we arbitrarily choose $k=4$ to ensure that $k\ll n,~m$, as is typical in other recommendation systems \cite{Bell2007,funk2006netflix}.
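A sketch of how one instance of the teacher model can be generated under these assumptions (Python/NumPy, our own illustration; the seed and $\beta$ value are arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m, k, beta = 4000, 200, 4, 0.4

# Latent features with elements uniform on (0, 1/k), so every
# entry of P @ Q.T is strictly positive and below 1.
P = rng.uniform(0, 1 / k, size=(n, k))
Q = rng.uniform(0, 1 / k, size=(m, k))

# Teacher probabilities; the Hadamard product reduces to scalar
# multiplication because all bias-matrix elements equal beta.
R_teacher = beta + (1 - beta) * (P @ Q.T)
\end{verbatim}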
The student model approximates the user-item matrix, $\mathbf{R}^{\text{data}}$, with matrix factorization
\begin{equation}
\mathbf{R}^{\text{student}}=\mathbf{P'}\mathbf{Q'}^T
\end{equation}
shown in the right panel of Fig.~\ref{fig:teaser}. Matrix factorization assumes that agent choices are best explained by a smaller number of latent factors (agent preferences) that can be learned from their observed choices, as we assume in the teacher model. The model's output is the expected probability that a user will choose a particular item. After agents decide which of the recommended items they will choose, their data are fed into the student model, whose matrices have an arbitrary low rank $k'$ (not necessarily equal to $k=4$). We chose $k'=5$ in our simulations; results are similar for $k'=2$, although the fit to the data is worse. While some matrix factorization models train with a ridge regression loss term \cite{Bell2007,funk2006netflix}, we choose a slightly easier approach: stochastic gradient descent (SGD) with early stopping. We split the available data at random, with 80\% for training and 20\% for validation, and apply SGD until the validation set's Brier score is minimized (where the initial conditions are the matrix weights from the previous timepoint). While this differs from some recommendation algorithms, previous work has shown that matrix factorization is a variant of the dense neural networks that commonly implement this method \cite{Lesieur2017}, and there is a close quantitative connection between ridge regression and early stopping \cite{Gunasekar2018}. Finally, this method allows us to stop training early, which makes simulations run faster. The recommendation algorithm uses this model to recommend $r=1$ item to each agent following a strategy outlined below.
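The early-stopping fit can be sketched as follows (our illustration; the learning rate and patience are assumptions, and \texttt{P} and \texttt{Q} are warm-started from the previous timestep as described above):
\begin{verbatim}
import numpy as np

def brier(P, Q, R, mask):
    # mean squared error between predictions and 0/1 choices
    return np.mean(((P @ Q.T)[mask] - R[mask]) ** 2)

def fit_student(R_data, P, Q, lr=0.05, patience=10, seed=0):
    rng = np.random.default_rng(seed)
    obs = ~np.isnan(R_data)
    val = obs & (rng.random(R_data.shape) < 0.2)  # 20% validation
    train = obs & ~val
    best, best_PQ, stall = np.inf, (P, Q), 0
    while stall < patience:
        err = np.where(train, P @ Q.T - R_data, 0.0)
        P, Q = P - lr * err @ Q, Q - lr * err.T @ P
        score = brier(P, Q, R_data, val)
        if score < best:  # keep the best validation-score weights
            best, best_PQ, stall = score, (P.copy(), Q.copy()), 0
        else:
            stall += 1
    return best_PQ
\end{verbatim}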
\subsection{Recommendation Algorithm Strategies}
We propose a number of realistic strategies to recommend items, including a greedy strategy (recommending the content the student model predicts will most likely be chosen), an $\epsilon$-greedy strategy, in which random unchosen content is recommended with probability $\epsilon$, and a random strategy, in which random unchosen content is recommended with equal probability. We compare these against the best case scenario, the oracle strategy, in which we unrealistically set the student model equal to the teacher model. The $\epsilon$-greedy strategy is inspired by reinforcement learning systems \cite{Kaelbling1996,Arulkumaran2017,Afsar2021}, where recommendations are usually built up from interactions between individuals and the system. In contrast to previous work \cite{Afsar2021}, however, we incorporate reinforcement learning strategies into training a collaborative filtering model.
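A sketch of the greedy, $\epsilon$-greedy, and random strategies (our illustration; \texttt{R\_student} is the student model's predicted probability matrix, \texttt{offered} is a boolean matrix of past recommendations, and the value of $\epsilon$ is an assumption):
\begin{verbatim}
import numpy as np

def recommend(R_student, offered, strategy, eps=0.1, seed=0):
    # Return one not-yet-offered item index per agent.
    rng = np.random.default_rng(seed)
    n = R_student.shape[0]
    scores = np.where(offered, -np.inf, R_student)
    picks = scores.argmax(axis=1)  # greedy choice per agent
    for i in range(n):
        explore = (strategy == "random"
                   or (strategy == "eps-greedy"
                       and rng.random() < eps))
        if explore:  # replace with a random unseen item
            picks[i] = rng.choice(np.where(~offered[i])[0])
    return picks
\end{verbatim}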
\subsection{Simulation Parameters}
These recommendations are then stochastically chosen by agents following the teacher model probabilities, and the simulation repeats until all items have been recommended. The student model is first trained on a sparse initial set of data. More specifically, 0.1\% of the user-item pairs are initially sampled uniformly at random, with values $R^{\text{data}}_{ij}\in\{0,1\}$ drawn from a Bernoulli distribution with success probability ${R}^{\text{teacher}}_{ij}$. Many datasets, such as the Netflix Prize dataset, have a greater proportion of observed user-item pairs (roughly 1\% of all possible pairs \cite{Bell2007}). However, those user-item pairs were themselves recommended with Netflix's in-house algorithm that was trained on an even smaller set of data, which we assume is 0.1\% of all possible pairs.
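Concretely, the initialization can be sketched as follows (continuing the notation of the teacher-model snippet above; our illustration):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
observed = rng.random(R_teacher.shape) < 0.001  # 0.1% of pairs
R_data = np.full(R_teacher.shape, np.nan)
# Bernoulli draws with the teacher probabilities as success rates
R_data[observed] = (rng.random(R_teacher.shape) < R_teacher
                    ).astype(float)[observed]
\end{verbatim}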
We run this model for $n=4000$ agents and $m=200$ simulated items with ten realizations for each value of $\beta$ (or five realizations for the random $\boldsymbol{\beta}$ teacher model). In each realization, we retrain the student model from scratch, starting with a random 0.1\% of all pairs. We also generate a new teacher model for each realization, although keeping the same teacher model does not significantly affect results.
The ratio of agents to items was chosen to approximately correspond to that of the Netflix Prize data \cite{Bell2007}, roughly 20 agents per item. Largely because of the number of times we fit the student model over the course of each simulation realization ($m=200$ times), and because each matrix factorization fit takes up to $O(n\times m)$ time, the simulation takes $O(n\times m^2)$ time. This time complexity means that modeling our comparatively modest set of agents and items would take several computing days for one simulation realization if it were not run in parallel. We are able to finish all the simulations in this paper in roughly 1--2 weeks on three 48-logical-core servers using Python (see the link to code in the Introduction).
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.7\linewidth]{figures/Instability.pdf}
\caption{Student model instability. (a) The correlation between item popularity at timestep 100 and the teacher model probability an item would be chosen as a function of the human bias parameter, $\beta$. An alternative random $\boldsymbol{\beta}$ model, where the bias is uniformly distributed between 0 and 1 for each user-item pair, shows similar results. (b) The item popularity correlation between different realizations of the model after 100 timesteps. }
\label{fig:Correl}
\end{figure*}
\section{Results}
We test whether the recommendation algorithm provides the agents with the items they want, whether items are ranked consistently, and finally how to improve recommendation algorithm stability.
We can gain intuition about how the greedy strategy affects the system in the limit where the student model's low-rank matrices are rank-one ($k'=1$), which is even simpler than the simulation discussed in the rest of the paper ($k'=5$). If we want to recommend the top item to each agent $i$, then we want to find the item $j$ with the largest value in the student model, ${R}^{\text{student}}_{ij}= P'_i \times Q'_j$. However, in this case, the system recommends the same item to every agent because the relative ranking of items only depends on $Q'_j$. This common ranking also implies that the recommendation system will only recommend popular content to agents rather than niche items agents may prefer. The homogeneity in recommendations and the relationship between recommendations and popularity have been observed in previous work on more realistic systems \cite{Chaney2018,Mansoury2020}; therefore, even this simplified version of the simulation captures the behavior of more sophisticated systems. What is not captured in previous work, however, is that $\mathbf{Q'}$ would vary dramatically depending on the initial conditions, which implies item popularity instability: the same item could be very popular or unpopular in different simulation realizations by chance. The $\epsilon$-greedy strategy, in contrast to the greedy strategy, promotes random items, which we will show reduces the inequality of the system and helps the recommendation algorithm quickly find the most preferred content.
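To state the rank-one argument precisely: assuming positive agent weights $P'_i$,
\begin{displaymath}
\arg\max_j {R}^{\text{student}}_{ij} = \arg\max_j \big(P'_i \, Q'_j\big) = \arg\max_j Q'_j,
\end{displaymath}
which does not depend on $i$, so every agent is recommended the same item.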
\subsection{Instability of Recommendation Systems}
Figure~\ref{fig:Correl} shows the stability and accuracy of the model. Figure~\ref{fig:Correl}a compares the popularity of items after 100 timesteps, when half of all recommendations have been made, to the ground truth (the popularity if all user-item pairs were fully sampled). We find that increasing the bias $\beta$ decreases the correlation between algorithm and ground truth popularity; therefore, items that should be popular are not. Figure~\ref{fig:Correl}b, in contrast, shows that larger $\beta$ decreases the item popularity correlation between simulation realizations, implying greater item popularity instability. This is akin to previous work on crowdsourced systems \cite{Burghardt2020,Salganik2006}, in which ranking items by a simple heuristic can drive some items to become popular by chance. Despite this finding, the $\epsilon$-greedy strategy creates much higher correlations between item popularity and the ground truth (Fig.~\ref{fig:Correl}a) and between item popularities across simulation realizations (Fig.~\ref{fig:Correl}b). The new strategy, in other words, improves recommendation accuracy and reduces item popularity instability.
\begin{figure}[tb!]
\centering
\includegraphics[width=0.9\linewidth]{figures/ModelAccuracy.pdf}
\caption{Model error for different algorithm strategies. We find that the Brier score (mean squared error between model prediction and agent decision) drops with time and is highest when the algorithm uses the greedy strategy for $\beta=0.0$, $0.4$. On the other hand, when the algorithm follows the $\epsilon$-greedy strategy, the error drops dramatically and is nearly as low as under the random strategy (offering random items to agents).}
\label{fig:Error}
\end{figure}
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.7\linewidth]{figures/InequalityViews.pdf}
\caption{The evolution of item popularity. (a) Gini coefficient and (b) mean item popularity over time for $\beta=0.0$, $0.4$. Four different strategies are used for recommendation: oracle, greedy, $\epsilon$-greedy, and random. The Gini coefficient is generally lower for the $\epsilon$-greedy and random strategies, and the $\epsilon$-greedy strategy makes more ideal recommendations, allowing mean item popularity to be higher than under all but the oracle strategy.}
\label{fig:Views}
\end{figure*}
The reason the greedy strategy performs poorly can be better understood when we plot the student model error over time in Fig.~\ref{fig:Error}. We show that model error generally decreases with time, as expected, but error under the greedy strategy decreases only slowly. The $\epsilon$-greedy strategy enhances the student model because it samples the user-item matrix more representatively. Error for this strategy therefore drops to a small fraction of that of the greedy strategy and is nearly as small as the error for the random strategy.
\subsection{Comparing Recommendation Quality}
Next, we compare the quality of recommendations by observing the items chosen over time in Fig.~\ref{fig:Views}. We show that the $\epsilon$-greedy strategy makes more diverse recommendations that are of higher quality on average than those of the random or greedy strategies. In Fig.~\ref{fig:Views}a, we show the Gini coefficient of item popularity, a proxy for item popularity inequality. If items were fully sampled, their Gini coefficient would be less than 0.2 ($T=200$ values on the right side of the figure). Under the idealized oracle strategy, the Gini coefficient is initially high (only a few of the best items are recommended) and steadily drops. The lower Gini coefficient for the $\epsilon$-greedy strategy is a product of more equal sampling. The Gini coefficient under the greedy strategy is, however, often higher than under all alternative strategies, meaning some items are recommended far more frequently than they should be.
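The Gini coefficient of item popularity can be computed with the standard sorted-rank formula (our sketch; the paper does not prescribe a particular estimator):
\begin{verbatim}
import numpy as np

def gini(popularity):
    # 0 = all items equally popular; values near 1 = a few
    # items account for almost all choices.
    x = np.sort(np.asarray(popularity, dtype=float))
    n = len(x)
    ranks = np.arange(1, n + 1)
    return 2 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1) / n
\end{verbatim}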
\begin{figure*}[tbh!]
\centering
\includegraphics[width=0.95\linewidth]{figures/UniformBetaResults.pdf}
\caption{Robustness of results. (a) Brier score, (b) item popularity Gini coefficient, and (c) mean item popularity over time for the random $\boldsymbol{\beta}$ matrix. As in the previous experiments, the $\epsilon$-greedy strategy has a lower Brier score and item popularity Gini coefficient than the greedy strategy, and slightly (but statistically significantly) higher mean item popularity. The difference in item popularity between the $\epsilon$-greedy and greedy strategies over time is shown in the inset.}
\label{fig:Robust}
\end{figure*}
Next, we plot the mean popularity over time in Fig.~\ref{fig:Views}b. If the mean popularity of items is high early on, then agents are recommended items they really like. The mean popularity of items is highest under the oracle strategy because we know exactly which items users are most likely to choose and recommend those first. We find that agents choose more items at any timepoint with the $\epsilon$-greedy strategy than with the alternatives, implying it is the best non-oracle strategy. The random recommendations within the $\epsilon$-greedy strategy train the student model better, and this in turn creates better recommendations on average.
\subsection{Robustness of Results}
The simulations shown in Figs.~\ref{fig:Error} \&~\ref{fig:Views} make several simplifying assumptions, including a $\boldsymbol{\beta}$ matrix whose elements are all the same. Instead, the elements of $\boldsymbol{\beta}$ could be very different, which we model as values that are independent and uniformly distributed between 0 and 1. We show in Fig.~\ref{fig:Correl} that this does not change our finding that the greedy strategy is less stable than the $\epsilon$-greedy strategy. We similarly show that the model error and item popularity are qualitatively similar in Fig.~\ref{fig:Robust}. Namely, in Fig.~\ref{fig:Robust}a, we show that the greedy strategy has consistently higher model error than the $\epsilon$-greedy strategy. Similarly, under the greedy strategy the Gini coefficient (Fig.~\ref{fig:Robust}b) is higher, and the mean popularity over time (Fig.~\ref{fig:Robust}c) is very slightly, but statistically significantly, lower (shown more clearly in the inset, which plots the difference in item popularity over time). To test the statistical significance, we take the mean popularity difference across all timesteps between the greedy and $\epsilon$-greedy strategies and compute the z-score of this difference (which is greater than $15$), making the $p$-value $<10^{-6}$. This result is approximately the same whether we compare the mean difference for individual realizations or the mean item popularity across five realizations; the results stated above are for mean popularity across five realizations. Unlike in earlier results, however, the teacher model is now poorly approximated as a product of two lower-rank matrices, therefore error is typically higher. The oracle strategy in turn has a lower Gini coefficient and higher mean popularity than the alternative strategies.
\section{Discussion \& Conclusions}
In conclusion, we develop a teacher-student framework to understand the accuracy and stability of collaborative filtering models as they interact with agents. The simulations demonstrate that the greedy strategy produces unstable and inaccurate predictions over time. Namely, the recommendation algorithm recommends a small set of items (leading to a high popularity Gini coefficient), and this leads to higher error. In contrast, the $\epsilon$-greedy strategy follows a less intuitive training regime in which random items are recommended to agents. This leads to better sampling (lower Gini coefficient), lower error, and \emph{more} items picked at any given time because the items recommended are those agents prefer to choose. Finally, the $\epsilon$-greedy strategy might force users out of filter bubbles by exposing them to a more diverse set of items; this potential should be explored in future work. This paper adds to the growing literature on the instability of ranking systems \cite{Salganik2006, Burghardt2020}, but also gives greater insight into personalized ranking and into the emergent properties, both desired and unintended, of these systems. For example, the sensitivity of the recommendation algorithm to initial conditions is reminiscent of chaotic systems, but future work is needed to test the relationship between these findings and non-linear dynamics or chaos theory.
\subsection{Limitations}
There are a number of potential limitations of the current method. First, our work must rely on synthetic data because we cannot know from empirical data whether a user would choose an item they were not recommended. Furthermore, assumptions built into the present simulation may not reflect true human dynamics. For example, agents are the same age and equally active in the system; in reality, some agents may be much older and have much more data than others. In addition, the order in which items are recommended could affect agent decisions. For example, users might not buy a vinyl record unless they are first recommended a record player. Moreover, we use a simple student model to recommend items, but newer and more sophisticated collaborative filtering methods could be used instead.
\subsection{Ethical Considerations}
The present simulations offer policy suggestions to improve recommendations by combining ideas from collaborative filtering with reinforcement learning. This current work does not, however, explore the potential adverse effects of recommendation systems, such as filter bubbles \cite{Bessi2016}. Items recommended by the $\epsilon$-greedy strategy could be of low quality or promote harm, whereas we would expect such items to be screened out when recommendation algorithms promote what agents should like. That said, this method actively fights filter bubbles by offering items outside of the user's expected preferences, and a variation of this strategy, such as recommending among a cleaned or popular set of items, could provide users with better and more diverse items.
\subsection{Future Work}
Additional realism, such as agents arriving or leaving the system, could easily be incorporated into the simulation. The present work is a baseline that researchers can modify for specific systems. Additional features should be explored, however, including polarization, in which people preferentially pick one type of content over another. While there has been a growing interest in algorithmic polarization \cite{Sirbu2019}, the dynamic interaction between agents and trained models should be explored in greater depth, especially if it can drive people away from echo chambers. Moreover, the teacher-student methodology in this paper can be extended to audit other socio-technical systems such as predictive policing \cite{Ensign2018}, bail \cite{kleinberg2016inherent}, or banking loans, whose data is known to be intrinsically biased \cite{DAmour2020}. The instability we could measure is, for example, who gets loans or goes to jail. If this varies due to the simulation realization and not the intrinsic features of the people, we could quantify, and find ways to address, the algorithm's instability.
\section*{Acknowledgements}
Research was supported by DARPA under award \# HR001121C0169.
\section*{Conflicts of Interest}
The authors declare no conflicts of interest.
\section{Introduction}
Repeated measurements data consists of observations from several experimental units, subjects, or groups under different conditions.
This grouping or clustering of the individual responses into experimental units
typically introduces dependencies:
the different units are assumed to be independent,
but there may be heterogeneity between units
and correlation within units.
Mixed-effects models
provide a powerful and flexible tool to analyze grouped data by incorporating fixed and random effects. Fixed effects are associated with the entire population, and random effects are associated with individual
groups
and model the heterogeneity across them and the dependence structure within them~\citep{Pinheiro2000}.
Linear mixed-effects models~\citep{Laird-Ware1982, Pinheiro2000, Verbeke-Molenberghs2009, Demidenko2004}
impose a linear relationship between all covariates and the response.
Partially linear mixed-effects models~\citep{Zeger-Diggle1994} extend the linear ones.
We consider the partially linear mixed-effects model
\begin{equation}\label{eq:initPLMM}
\mathbf{Y}_i = \Xi\beta_0 + g(\mathbf{W}_i) + \mathbf{Z}_i\mathbf{b}_i+ \boldsymbol{\varepsilon}_i
\end{equation}
for groups $i\in\{1,\ldots,N\}$. There are $n_i$ observations per group $i$.
The unobserved random variable $\mathbf{b}_i$, called random effect, introduces
correlation
within its group $i$ because all $n_i$ observations within this group share $\mathbf{b}_i$.
We make the standard assumption that both the random effect $\mathbf{b}_i$ and the error term $\boldsymbol{\varepsilon}_i$
follow a Gaussian distribution~\citep{Pinheiro2000}.
The matrices $\mathbf{Z}_i$ assigning the random effects to group-level observations are fixed.
The linear covariables $\Xi$ and the nonparametric and potentially high-dimensional covariables $\mathbf{W}_i$ are observed and random, and they may be dependent. Furthermore, the nonparametric covariables may contain nonlinear transformations and interaction terms of the linear ones.
Please see Assumption~\ref{assumpt:Distribution} in Section~\ref{sect:ModelAndDML} for further details.
Our aim is to estimate and make inference for the so-called fixed effect $\beta_0$ in~\eqref{eq:initPLMM} in the presence of a highly complex $g$ using general machine learning
algorithms.
The parametric component $\beta_0$
provides a simple summary of the covariate effects that are of main scientific interest. The nonparametric component $g$ enhances model flexibility because time trends and further covariates with possibly nonlinear and interaction effects can be modeled nonparametrically.
\\
Repeated measurements, or longitudinal, data is omnipresent
in empirical research.
For example, assume we want to study the effect of a treatment over time. Observing the same subjects repeatedly presents three main advantages over having cross-sectional data.
First, subjects can serve as their own controls.
Second, the between-subject variability is explicitly modeled and can be excluded from the experimental error. This yields more efficient estimators of the relevant model parameters.
Third, data can be collected more reliably~\citep{Davies2002, Fitzmaurice2011}.
\\
Various approaches have been considered in the literature to estimate the nonparametric component
$g$ in~\eqref{eq:initPLMM}:
kernel methods~\citep{Hart-Wehrly1986, Zeger-Diggle1994, Taavoni2019b, Chen2017},
backfitting~\citep{Zeger-Diggle1994, Taavoni2019b},
spline methods~\citep{Rice-Silverman1991, Zhang2004, Qin-Zhu2007, Qin-Zhu2009, Li-Zhu2010, Kim2017, Aniley2019},
and local linear regression~\citep{Taavoni2019b, Liang2009}.
Our aim is to make inference for $\beta_0$ in the presence of potentially highly complex effects of $\mathbf{W}_i$ on $\Xi$ and $\mathbf{Y}_i$. First, we adjust $\Xi$ and $\mathbf{Y}_i$ for $\mathbf{W}_i$ by regressing $\mathbf{W}_i$ out of them using machine learning algorithms. These machine learning algorithms may yield biased results, especially if regularization methods are used, like for instance with the lasso~\citep{Tibshirani1996}.
Second, we fit a linear mixed-effects model to these regression residuals to estimate $\beta_0$. Our estimator of $\beta_0$ converges at the parametric rate, follows a Gaussian distribution asymptotically, and is semiparametrically efficient.
We rely on the double machine learning
framework of~\citet{Chernozhukov2018} to estimate $\beta_0$ using general machine learning algorithms.
To the best of our knowledge, this is the first work to allow the nonparametric nuisance components of a
partially linear mixed-effects model to be estimated with arbitrary machine learners like random forests~\citep{Breiman2001} or the lasso~\citep{Tibshirani1996, Buehlmann2011}.
In contrast to the setting and proofs of~\citet{Chernozhukov2018}, we have dependent data and need to account for this dependence.
\citet{Chernozhukov2018} introduce double machine learning to estimate a low-dimensional parameter in the presence of nonparametric nuisance components using machine learning
algorithms.
This estimator converges at the parametric rate and is asymptotically Gaussian due to Neyman orthogonality and sample splitting with cross-fitting.
We would like to remark that nonparametric components can be estimated without sample splitting and cross-fitting if the underlying function class satisfies some entropy conditions;
see for instance~\citet{Geer-Mammen1997}. However, these regularity conditions limit the complexity of the function class, and
machine learning algorithms usually do not satisfy them. Particularly, these conditions fail to hold if the dimension of the nonparametric variables increases with the sample size~\citep{Chernozhukov2018}.
\subsection{Additional Literature}
Expositions and overviews of mixed-effects modeling techniques can be found in~\citet{Pinheiro1994, Davidian-Giltinan1995, Vonesh-Chinchilli1997, Pinheiro2000, Davidian-Giltinan2003}.
\\
\citet{Zhang1998} consider partially linear
mixed-effects models and estimate the nonparametric component with natural cubic splines. They treat the smoothing parameter as an extra variance component that is jointly estimated with the other variance components of the model.
\citet{Masci2019} consider partially linear
mixed-effects models for unsupervised classification with discrete random effects.
\citet{Schelldorfer2011} consider high-dimensional linear mixed-effects models where the number of fixed effects coefficients may be much larger than the overall sample size.
\citet{Taavoni2019a} employ a regularization approach in generalized
partially linear mixed-effects models using regression splines to approximate the nonparametric component.
\citet{gamm4} use penalized regression splines where the penalized components are treated as random effects.
\\
The unobserved random variables in the partially linear mixed-effects model~\eqref{eq:initPLMM}
are assumed to follow a Gaussian distribution.
\citet{Taavoni2021} introduce multivariate $t$
partially linear
mixed-effects models for longitudinal data. They consider $t$-distributed random effects to account for outliers in the data.
\citet[Chapter 4]{Fahrmeir2011} relax the assumption of Gaussian random effects in generalized linear mixed models. They consider nonparametric Dirichlet processes and Dirichlet process mixture priors for the random effects.
\citet[Chapter 3]{Ohinata2012} consider partially linear mixed-effects models and make no distributional assumptions for the random terms,
and the nonparametric component is estimated with kernel methods.
\citet{Lu2016} consider a partially linear mixed-effects model that is nonparametric in time and that features asymmetrically distributed errors and missing data.
Furthermore, methods have been developed to analyze repeated measurements data that are robust to outliers.
\citet{Guoyou2008} consider robust estimating equations and estimate the nonparametric component with a regression spline.
\citet{Tang2015} consider median-based regression methods in a partially linear model with longitudinal data to account for highly skewed responses.
\citet{Lin2018} present an estimation technique in partially linear models for longitudinal data that is doubly robust in the sense that it simultaneously accounts for missing responses and mismeasured covariates.
\\
It is prespecified in
the partially linear mixed-effects model~\eqref{eq:initPLMM} which covariates are modeled with random effects.
Simultaneous variable selection for fixed effects variables and random effects has been developed by~\citet{Bondell2010, Ibrahim2011}. They use penalized likelihood approaches.
\citet{Li-Zhu2010} use a nonparametric test to test the existence of random effects in partially linear mixed-effects models.
\citet{Zhang2020} propose a variable selection procedure for the linear covariates of a generalized partially linear model with longitudinal data.
\\
\textit{Outline of the Paper.}
Section~\ref{sect:ModelAndDML} presents our double machine learning estimator of the linear coefficient in a partially linear mixed-effects model. Section~\ref{sect:numerical-experiments} presents our numerical results.
\\
\textit{Notation.}
We denote by $\indset{N}$ the set $\{1,2,\ldots,N\}$. We add the probability law as a subscript to the probability operator $\Prob$ and the expectation operator $\E$ whenever we want to emphasize the corresponding dependence.
We denote the $L^p(P)$ norm by $\normP{\cdot}{p}$ and the Euclidean or operator
norm by $\norm{\cdot}$, depending on the context.
We implicitly assume that given expectations and conditional expectations exist. We denote by $\stackrel{d}{\rightarrow}$ convergence in distribution.
The symbol $\independent$ denotes independence of random variables.
We denote by $\mathds{1}_{n}$ the $n\times n$ identity matrix and omit the subscript $n$ if we do not want to emphasize the dimension.
We denote the $d$-variate Gaussian distribution by $\mathcal{N}_d$.
\section{Model Formulation and the Double Machine Learning Estimator}\label{sect:ModelAndDML}
We consider repeated measurements data that is grouped according to experimental units or subjects.
This grouping structure introduces dependency in the data. The individual experimental units or groups are assumed to be independent, but there may be some
between-group heterogeneity and within-group correlation.
We consider the partially linear mixed-effects model
\begin{equation}\label{eq:PLMM}
\mathbf{Y}_i = \Xi\beta_0 + g(\mathbf{W}_i) + \mathbf{Z}_i\mathbf{b}_i+ \boldsymbol{\varepsilon}_i, \quad i\in\indset{N}
\end{equation}
for groups $i$ as in~\eqref{eq:initPLMM} to model the
between-group heterogeneity and within-group correlation with random effects.
We have $n_i$ observations per group that are concatenated row-wise into $\mathbf{Y}_i\in\R^{n_i}$, $\Xi\in\R^{n_i\times d}$, and $\mathbf{W}_i\in\R^{n_i\times v}$.
The nonparametric variables may be high-dimensional, but $d$ is fixed.
Both $\Xi$ and $\mathbf{W}_i$ are random.
The $\Xi$ and $\mathbf{W}_i$ belonging to the same group $i$ may be dependent.
For groups $i\neq j$, we assume $\Xi\independent\mathbf{X}_j$, $\mathbf{W}_i\independent\mathbf{W}_j$, and $\Xi\independent\mathbf{W}_j$.
We assume that $\mathbf{Z}_i\in\R^{n_i\times q}$ is fixed.
The random variable $\mathbf{b}_i\in\R^q$ denotes a group-specific vector of random regression coefficients that is assumed to follow a Gaussian distribution.
The dimension $q$ of the random effects model is fixed.
The error terms are also assumed to follow a Gaussian distribution, as is common in the mixed-effects models framework~\citep{Pinheiro2000}.
All groups $i$ share the common linear coefficient $\beta_0$ and the potentially complex function $g\colon\R^{v}\rightarrow\R$. The function $g$ is applied row-wise to $\mathbf{W}_i$, denoted by $g(\mathbf{W}_i)$.
We denote the total number of observations by $\NN_T:=\sum_{i=1}^{N}n_i$.
We assume that the numbers $n_i$ of within-group observations are uniformly upper bounded by $n_{\mathrm{max}}<\infty$.
Asymptotically, the number of groups, $N$, goes to infinity.
Our distributional and independency assumptions are summarized as follows:
\begin{assumptions}\label{assumpt:Distribution} Consider the partially linear mixed-effects model~\eqref{eq:PLMM}. We assume that there is some $\sigma_0>0$ and some symmetric positive definite matrix
$\Gamma_{0}\in\R^{q\times q}$
such that the following conditions hold.
\begin{enumerate}[label={\theassumptions.\arabic*}]
\item \label{assumpt:D1}
The random effects $\mathbf{b}_1,\ldots,\mathbf{b}_{\NN}$ are independent and identically distributed\ $\mathcal{N}_q(\boldsymbol{0}, \Gamma_{0})$.
\item \label{assumpt:D2}
The error terms $\boldsymbol{\varepsilon}_1,\ldots,\boldsymbol{\varepsilon}_{\NN}$ are independent
and follow a Gaussian distribution, $\boldsymbol{\varepsilon}_i\sim\mathcal{N}_{n_i}(\boldsymbol{0}, \sigma_0^2\mathds{1}_{n_i})$ for $i\in\indset{N}$, with the common variance component $\sigma_0^2$.
\item \label{assumpt:D3}
The variables $\mathbf{b}_1,\ldots,\mathbf{b}_{\NN}, \boldsymbol{\varepsilon}_1,\ldots,\boldsymbol{\varepsilon}_{\NN}$ are independent.
\item \label{assumpt:D4-2}
For all $i, j \in\indset{N}$, $i\neq j$, we have $(\mathbf{b}_i,\boldsymbol{\varepsilon}_i)\independent(\mathbf{W}_i, \Xi)$ and $(\mathbf{b}_i,\boldsymbol{\varepsilon}_i)\independent(\mathbf{W}_j, \mathbf{X}_j)$.
\item \label{assumpt:D6}
For all $i, j \in\indset{N}$, $i\neq j$, we have $\Xi\independent\mathbf{X}_j$, $\mathbf{W}_i\independent\mathbf{W}_j$, and $\Xi\independent\mathbf{W}_j$.
\end{enumerate}
\end{assumptions}
We would like to remark that the distribution of the error terms $\boldsymbol{\varepsilon}_i$ in Assumption~\ref{assumpt:D2} can be generalized to $\boldsymbol{\varepsilon}_i\sim\mathcal{N}_{n_i}(\boldsymbol{0},\sigma_0^2\Lambda_i(\boldsymbol{\lambda}))$, where $\Lambda_i(\boldsymbol{\lambda})\in\R^{n_i\times n_i}$ is a symmetric positive definite matrix parametrized by some finite-dimensional parameter vector $\boldsymbol{\lambda}$ that all groups have in common.
For the sake of notational simplicity, we restrict ourselves to Assumption~\ref{assumpt:D2}.
Moreover, we may consider stochastic random effects matrices $\mathbf{Z}_i$. Alternatively, the nonparametric variables $\mathbf{W}_i$ may be part of the random effects matrix. In this case,
we consider
the random effects matrix $\widetilde{\mathbf{Z}}_i = \zeta(\mathbf{Z}_i,\mathbf{W}_i)$ for some known function $\zeta$ in~\eqref{eq:PLMM} instead of $\mathbf{Z}_i$.
Please see Section~\ref{sec:nonfixedZi} in the appendix for further details.
For simplicity, we restrict ourselves to fixed random effects matrices $\mathbf{Z}_i$ that are disjoint from $\mathbf{W}_i$.
\\
The unknown parameters in our model are $\beta_0$, $\Gamma_{0}$, and $\sigma_0$.
Our aim is to estimate
$\beta_0$ and
make inference for it.
Although the variance parameters $\Gamma_{0}$ and $\sigma_0$ need to be estimated consistently to construct an estimator of $\beta_0$, it is not our goal to perform inference for them.
\subsection{The Double Machine Learning Fixed-Effects Estimator}
Subsequently, we describe our estimator of $\beta_0$ in~\eqref{eq:PLMM}.
To motivate our procedure, we first consider the population version with the residual terms
\begin{displaymath}
\mathbf{R}_{\Xi} := \Xi - \E[\Xi | \mathbf{W}_i] \quad\textrm{and}\quad
\mathbf{R}_{\Yi} := \mathbf{Y}_i - \E[\mathbf{Y}_i | \mathbf{W}_i] \quad\textrm{for}\quad
i\in\indset{N}
\end{displaymath}
that adjust $\Xi$ and $\mathbf{Y}_i$ for $\mathbf{W}_i$. On this adjusted level, we have the linear mixed-effects model
\begin{equation}\label{eq:LMM}
\mathbf{R}_{\Yi} = \mathbf{R}_{\Xi} \beta_0 + \mathbf{Z}_i\mathbf{b}_i + \boldsymbol{\varepsilon}_i, \quad i\in\indset{N}
\end{equation}
due to~\eqref{eq:PLMM} and Assumption~\ref{assumpt:D4-2}.
In particular, the adjusted and grouped responses in this model are independent in the sense that we have $\mathbf{R}_{\Yi} \independent \mathbf{R}_{\Yj}$ for $i\neq j$.
The strategy now is to first estimate the residuals with machine learning algorithms and then use
linear mixed model techniques to infer $\beta_0$. This is done with sample splitting and cross-fitting, and the details are described next.
\\
Let us define $\Sigma_{0} := \sigma_0^{-2}\Gamma_{0}$
and $\mathbf{V}_{0, i}:=(\mathbf{Z}_i\Sigma_{0}\mathbf{Z}_i^T + \mathds{1}_{n_i})$ so that we have
\begin{equation}\label{eq:resNormal}
(\mathbf{R}_{\Yi} | \mathbf{W}_i,\Xi) \sim \mathcal{N}_{n_i}\big(\mathbf{R}_{\Xi}\beta_0, \sigma_0^2\mathbf{V}_{0, i}\big).
\end{equation}
We assume that there exist functions $m_X^0\colon\R^{v}\rightarrow\R^d$ and $m_Y^0\colon\R^{v}\rightarrow\R$ that we can apply row-wise to $\mathbf{W}_i$ to have
$\E[\Xi | \mathbf{W}_i] = m_X^0(\mathbf{W}_i)$ and $\E[\mathbf{Y}_i | \mathbf{W}_i] = m_Y^0(\mathbf{W}_i)$.
In particular, $m_X^0$ and $m_Y^0$ do not depend on the grouping index $i$.
Let $\eta^0:= (m_X^0,m_Y^0)$ denote the true unknown nuisance parameter.
Let us denote by $\theta_0 := (\beta_0, \sigma_0^2,\Sigma_{0})$ the complete true unknown parameter vector and by $\theta := (\beta,\sigma^2,\Sigma)$ and $\mathbf{V}_{i}:=\mathbf{Z}_i\Sigma\mathbf{Z}_i^T+\mathds{1}_{n_i}$ respective general parameters.
The log-likelihood of group $i$ is given by
\begin{equation}\label{eq:log-likelihood}
\begin{array}{rcl}
\libig{\theta, \eta^0} &=& -\frac{n_i}{2} \log(2\pi) - \frac{n_i}{2}\log(\sigma^2) - \frac{1}{2}\log\big( \det(\mathbf{V}_{i})\big) \\
&&\quad- \frac{1}{2\sigma^2} (\mathbf{R}_{\Yi}-\mathbf{R}_{\Xi}\beta)^T\mathbf{V}_{i}^{-1}(\mathbf{R}_{\Yi}-\mathbf{R}_{\Xi}\beta) - \log \big(p(\mathbf{W}_i,\Xi)\big),
\end{array}
\end{equation}
where $p(\mathbf{W}_i,\Xi)$ denotes the joint density of $\mathbf{W}_i$ and $\Xi$. We assume that $p(\mathbf{W}_i,\Xi)$ does not depend on $\theta$.
The true nuisance parameter $\eta^0$ in the log-likelihood~\eqref{eq:log-likelihood} is unknown and estimated with machine learning algorithms (see below).
Denote by $\eta := (m_X,m_Y)$ some general nuisance parameter. The terms that adjust $\Xi$ and $\mathbf{Y}_i$ for $\mathbf{W}_i$ with this general nuisance parameter are given by $\Xi-m_X(\mathbf{W}_i)$ and $\mathbf{Y}_i-m_Y(\mathbf{W}_i)$. Up to additive constants that do not depend on $\theta$ and $\eta$, we thus consider maximum likelihood estimation with the likelihood
\begin{displaymath}
\begin{array}{rl}
&\ell_i(\theta,\eta)
= -\frac{n_i}{2}\log(\sigma^2) - \frac{1}{2}\log\big(\det(\mathbf{V}_{i}) \big)\\
&\quad- \frac{1}{2\sigma^2} \Big(\mathbf{Y}_i-m_Y(\mathbf{W}_i)-\big(\Xi-m_X(\mathbf{W}_i)\big)\beta\Big)^T\mathbf{V}_{i}^{-1}\Big(\mathbf{Y}_i-m_Y(\mathbf{W}_i)-\big(\Xi-m_X(\mathbf{W}_i)\big)\beta\Big),
\end{array}
\end{displaymath}
which is a function of both the finite-dimensional parameter $\theta$ and the infinite-dimensional
nuisance parameter $\eta$.
\\
Our estimator of $\beta_0$ is constructed as follows using double machine learning.
First, we estimate $\eta^0$ with machine learning
algorithms and plug these estimators into the estimating equations for $\theta_0$, equation~\eqref{eq:setToZero} below,
to obtain an estimator for $\beta_0$. This procedure is done with sample splitting and cross-fitting as explained next.
Consider repeated measurements from $N$ experimental units, subjects, or groups as in~\eqref{eq:PLMM}. Denote by $\mathbf{S}_i :=(\mathbf{W}_i,\Xi,\mathbf{Z}_i,\mathbf{Y}_i)$ the observations of group $i$.
First, we split the group indices $\indset{N}$ into $K\ge 2$ disjoint sets $I_1, \ldots, I_{K}$ of approximately equal size;
please see Section~\ref{sect:AssumptionsDefinitions} in the appendix for further details.
For each $k\in\indset{K}$, we estimate the conditional expectations $m_X^0(W)$ and $m_Y^0(W)$ with data from $I_{\kk}^c$. We call the resulting estimators $\hat m_X^{I_{\kk}^c}$ and $\hat m_Y^{I_{\kk}^c}$, respectively.
Then, the adjustments $\widehat{\mathbf{R}}_{\Xi}^{I_{\kk}}:=\Xi-\hat m_X^{I_{\kk}^c}(\mathbf{W}_i)$, and $\widehat{\mathbf{R}}_{\mathbf{Y}_i}^{I_{\kk}}:=\mathbf{Y}_i-\hat m_Y^{I_{\kk}^c}(\mathbf{W}_i)$ for $i\inI_{\kk}$ are evaluated on $I_{\kk}$, the complement of $I_{\kk}^c$.
Let $\hat\eta^{I_{\kk}^c} := (\hat m_X^{I_{\kk}^c}, \hat m_Y^{I_{\kk}^c})$ denote the estimated nuisance parameter.
Consider the score function
$\psi(\mathbf{S}_i; \theta,\eta) := \nabla_{\theta}\ell_i(\theta,\eta)$,
where $\nabla_{\theta}$ denotes the gradient with respect to $\theta$ interpreted as a vector.
On each set $I_{\kk}$, we consider an estimator $\hat\theta_{\kk} = (\hat\beta_{\kk},\hat\sigma_{\kk}^2, \hat{\Sigma}_{\kk})$ of $\theta_0$ that, approximately, in the sense of Assumption~\ref{assumpt:Theta2} in the appendix, solves
\begin{equation}\label{eq:setToZero}
\frac{1}{\nn_{T, k}}\sum_{i\inI_{\kk}} \psibig{\mathbf{S}_i;\hat\theta_{\kk},\hat\eta^{I_{\kk}^c}}
= \frac{1}{\nn_{T, k}}\sum_{i\inI_{\kk}} \nabla_{\theta}\ell_i\big(\theta,\hat\eta^{I_{\kk}^c}\big)\Big|_{\theta=\hat\theta_{\kk}}
\stackrel{!}{=} \boldsymbol{0},
\end{equation}
where $\nn_{T, k} := \sum_{i\inI_{\kk}}n_i$ denotes the total number of observations from experimental units that belong to the set $I_{\kk}$.
These $K$ estimators $\hat\theta_{\kk}$ for $k\in\indset{K}$ are assembled to form the final cross-fitting estimator
\begin{equation}\label{eq:betahat}
\hat\beta := \frac{1}{K} \sum_{k=1}^{K} \hat\beta_{\kk}
\end{equation}
of $\beta_0$.
We remark that one can simply use linear mixed model computation and software to compute $\hat\beta_{\kk}$ based on the estimated residuals $\widehat{\mathbf{R}}^{I_{\kk}}$.
The estimator $\hat\beta$ fundamentally depends on the particular sample split. To alleviate this effect, the overall procedure may be repeated $\mathcal{S}$ times~\citep{Chernozhukov2018}. The $\mathcal{S}$ point estimators are aggregated by the median, and an additional term accounting for the random splits is added to the variance estimator of $\hat\beta$; please see
Algorithm~\ref{algo:Summary} that presents the complete procedure.
\begin{algorithm}[h!]
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{$N$ independent and identically distributed\ observations $\{\mathbf{S}_i=(\mathbf{W}_i,\Xi, \mathbf{Z}_i, \mathbf{Y}_i)\}_{i\in\indset{N}}$ from the model~\eqref{eq:PLMM} satisfying Assumption~\ref{assumpt:Distribution}, a natural number $K$, a natural number $\mathcal{S}$.}
\Output{An estimator of $\beta_0$ in~\eqref{eq:PLMM} together with its estimated asymptotic variance.}
\For{$s\in\indset{\mathcal{S}}$}
{
Split the grouped observation index set $\indset{N}$ into $K$ sets $I_1, \ldots, I_{K}$ of approximately equal size.
\For{$k\in\indset{K}$}
{
Compute the conditional expectation estimators $\hat m_X^{I_{\kk}^c}$ and $\hat m_Y^{I_{\kk}^c}$ with some
machine learning algorithm and data from $I_{\kk}^c$.
Evaluate the adjustments $\widehat{\mathbf{R}}_{\Xi}^{I_{\kk}}=\Xi-\hat m_X^{I_{\kk}^c}(\mathbf{W}_i)$ and $\widehat{\mathbf{R}}_{\mathbf{Y}_i}^{I_{\kk}}=\mathbf{Y}_i-\hat m_Y^{I_{\kk}^c}(\mathbf{W}_i)$ for $i\inI_{\kk}$.
Compute $\hat\theta_{\kk, s} = (\hat\beta_{\kk, s}, \hat\sigma_{\kk, s}^2, \hat\Sigma_{\kk, s})$ using, for instance, linear mixed model techniques.
}
Compute $\hat\beta_{s} = \frac{1}{K}\sum_{k=1}^{K}\hat\beta_{\kk, s}$ as an approximate solution to~\eqref{eq:setToZero}.
Compute an estimate $\hat T_{0,s}$ of the asymptotic variance-covariance matrix $T_0$ in Theorem~\ref{thm:asymptoticGaussian}.
}
Compute $\hat\beta = \mathrm{median}_{s\in\indset{\mathcal{S}}}(\hat\beta_{s})$.
Estimate $T_0$ by $\hat T_0 = \mathrm{median}_{s\in\indset{\mathcal{S}}}(\hat T_{0,s} + (\hat\beta-\hat\beta_{s})(\hat\beta-\hat\beta_{s})^T)$.\label{algo:varianceCorrection}
\caption{Double machine learning in a partially linear mixed-effects model with repeated measurements.}\label{algo:Summary}
\end{algorithm}
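To make the procedure concrete, the following sketches one repetition $s$ of Algorithm~\ref{algo:Summary} for the random-intercept case $\mathbf{Z}_i = \boldsymbol{1}$, written in Python with \texttt{scikit-learn} random forests and a \texttt{statsmodels} linear mixed model (our illustration only; our \texttt{dmlalg} implementation is in \textsf{R}, and its interface differs):
\begin{verbatim}
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor

def dml_plmm(W, X, y, groups, K=2, seed=0):
    # Cross-fitting: learn m_X, m_Y on I_k^c, evaluate residuals
    # on I_k, fit a linear mixed model on the residuals, and
    # average the K fixed-effects estimates.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(np.unique(groups)), K)
    betas = []
    for fold in folds:
        test = np.isin(groups, fold)  # grouped observations in I_k
        train = ~test                 # I_k^c
        rf = lambda t: RandomForestRegressor(
            n_estimators=500, min_samples_leaf=5,
            random_state=seed).fit(W[train], t[train])
        R_y = y[test] - rf(y).predict(W[test])
        R_X = np.column_stack(
            [X[test, j] - rf(X[:, j]).predict(W[test])
             for j in range(X.shape[1])])
        # random intercept per group on the adjusted level
        fit = sm.MixedLM(R_y, R_X, groups=groups[test]).fit()
        betas.append(fit.fe_params)
    return np.mean(betas, axis=0)  # cross-fitting estimator
\end{verbatim}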
\subsection{Theoretical Properties of the Fixed-Effects Estimator}
The estimator $\hat\beta$ as in~\eqref{eq:betahat} converges at the parametric rate, $N^{-1/2}$, and is asymptotically Gaussian distributed and semiparametrically efficient.
\begin{theorem}\label{thm:asymptoticGaussian}
Consider grouped observations $\{\mathbf{S}_i=(\mathbf{W}_i,\Xi,\mathbf{Y}_i)\}_{i\in\indset{N}}$ from the partially linear mixed-effects model~\eqref{eq:PLMM} that satisfy Assumption~\ref{assumpt:Distribution} such that $p(\mathbf{W}_i,\Xi)$ does not depend on $\theta$.
Let $\NN_T:=\sum_{i=1}^{N}n_i$ denote the total number of unit-level observations.
Furthermore, suppose the assumptions in Section~\ref{sect:AssumptionsDefinitions} in the appendix hold,
and consider the symmetric positive-definite matrix $T_0$ given in Assumption~\ref{assumpt:regularity7} in the appendix.
Then, $\hat\beta$ as in~\eqref{eq:betahat} concentrates in a $1/\surd{\NN_{T}}$ neighborhood of $\beta_0$ and is centered Gaussian, namely
\begin{equation}\label{eq:thmEquation}
\surd{\NN_{T}}T_0^{\frac{1}{2}}(\hat\beta-\beta_0)
\stackrel{d}{\rightarrow}\mathcal{N}_d(\boldsymbol{0}, \mathds{1}_{d}) \quad (N\rightarrow\infty).
\end{equation}
Moreover, $\hat\beta$ is semiparametrically efficient.
The convergence in~\eqref{eq:thmEquation} in fact holds uniformly over the law $P$ of $\{\mathbf{S}_i=(\mathbf{W}_i,\Xi,\mathbf{Y}_i)\}_{i\in\indset{N}}$.
\end{theorem}
Please see Section~\ref{sect:asymptoticDistribution} in the appendix for a proof of Theorem~\ref{thm:asymptoticGaussian}.
Our proof builds on~\citet{Chernozhukov2018}, but we have to take into account the correlation within units that is introduced by the random effects.
The inverse asymptotic variance-covariance matrix $T_0$ can be consistently estimated;
see Lemma~\ref{lem:multiplierMatrix} in the appendix.
The estimator $\hat\beta$ is semiparametrically efficient because the score function comes from the log-likelihood of our data and because $\eta^0$ solves a concentrating-out equation for fixed $\theta$; see~\citet{Chernozhukov2018, Newey1994b}.
The assumptions in Section~\ref{sect:AssumptionsDefinitions} of the appendix specify regularity conditions and required
convergence rates of the machine learning estimators. The
machine learning errors need to satisfy the product relationship
\begin{displaymath}
\normP{m_X^0(W)-\hat m_X^{I_{\kk}^c}(W)}{2}\big(\normP{m_Y^0(W)-\hat m_Y^{I_{\kk}^c}(W)}{2}+ \normP{m_X^0(W)-\hat m_X^{I_{\kk}^c}(W)}{2} \big)\ll N^{-\frac{1}{2}}.
\end{displaymath}
This bound requires that only the product of the machine learning estimation errors
$\normP{m_X^0(W)-\hat m_X^{I_{\kk}^c}(W)}{2}$ and $\normP{m_Y^0(W)-\hat m_Y^{I_{\kk}^c}(W)}{2}$,
but not the individual errors, needs to vanish at a rate faster than
$N^{-1/2}$. In particular, it suffices that the individual estimation errors vanish at a rate
faster than $N^{-1/4}$. This is achieved by many machine learning
methods (cf.
\citet{Chernozhukov2018}):
$\ell_1$-penalized and related methods in a variety of sparse models
\citep{Bickel2009, Buehlmann2011, Belloni2011, Belloni-Chernozhukov2011, Belloni2012, Belloni-Chernozhukov2013}, forward selection in sparse models
\citep{Kozbur2020}, $L_2$-boosting in sparse linear models
\citep{Luo2016}, a class of regression trees and random forests
\citep{Wager2016}, and neural networks \citep{Chen1999}.
We note that so-called Neyman orthogonality makes score functions insensitive to inserting potentially biased machine learning
estimators of the nuisance parameters. A score function is Neyman orthogonal if its Gateaux derivative vanishes at the true $\theta_0$ and the true $\eta^0$. In particular, Neyman orthogonality is a first-order property. The product relationship of the machine learning
estimation errors described above is used to bound second-order terms. We refer to Section~\ref{sect:asymptoticDistribution} in the appendix for more details.
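Written out for our score, the orthogonality condition is the vanishing of the Gateaux derivative in any admissible nuisance direction $\eta$:
\begin{displaymath}
\partial_r \, \E\Big[\psi\big(\mathbf{S}_i;\theta_0, \eta^0 + r(\eta-\eta^0)\big)\Big]\Big|_{r=0} = \boldsymbol{0}.
\end{displaymath}
This holds in our setting because the score is derived from the log-likelihood and $\eta^0$ solves the corresponding concentrating-out equation~\citep{Chernozhukov2018}.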
\section{Numerical Experiments}\label{sect:numerical-experiments}
We apply our method to an empirical dataset and to a pseudorandom dataset, and we evaluate it in a simulation study.
Our implementation is available in the \textsf{R}-package \texttt{dmlalg}~\citep{dmlalg}.
\subsection{Empirical Analysis: CD4 Cell Count Data}
Subsequently, we apply our method to longitudinal CD4 cell counts data collected from human immunodeficiency virus (HIV) seroconverters. This data has previously been analyzed by~\citet{Zeger-Diggle1994} and is available in the \textsf{R}-package \texttt{jmcm}~\citep{jmcm} as \texttt{aids}.
It contains $2376$ observations of CD4 cell counts measured on $369$ subjects. The data was collected during a period ranging from $3$ years before to $6$ years after seroconversion. The number of observations per subject ranges from $1$ to $12$, but for most subjects, $4$ to $10$ observations are available.
Please see~\citet{Zeger-Diggle1994} for more details on this dataset.
Apart from time, five other covariates are measured: the age at seroconversion in years (age), the smoking status measured by the number of cigarette packs consumed per day (smoking), a binary variable indicating drug use (drugs), the number of sex partners (sex), and the depression status measured on the Center for Epidemiologic Studies Depression (CESD) scale (cesd), where higher CESD values indicate the presence of more depression symptoms.
We incorporate a random intercept per person.
Furthermore, we consider a square-root transformation of the CD4 cell counts to reduce the skewness of this variable as proposed by~\citet{Zeger-Diggle1994}.
The CD4 counts are our response. The covariates that are of scientific interest are considered as $X$'s,
and the remaining covariates are considered as $W$'s in the partially linear mixed-effects model~\eqref{eq:PLMM}.
The effect of time is modeled nonparametrically, but there are several options to model the other covariates.
Models other than the partially linear mixed-effects model have also been considered in the literature to analyze this dataset.
For instance, \citet{Fan2000} consider a functional linear model where the linear coefficients are a function of the time.
\\
We consider two partially linear mixed-effects models for this dataset.
First, we incorporate all covariates except time linearly.
Most approaches in the literature that consider a partially linear mixed-effects model for this data and model time nonparametrically report that sex and cesd are significant and that either smoking or drugs is significant as well; see for instance~\citet{Zeger-Diggle1994, Taavoni2019b, Wang2011}.
\citet{Guoyou2008} develop a robust estimation method
for longitudinal data and estimate
nonlinear effects from time with regression splines.
With the CD4 dataset, they find that smoking and cesd are significant.
We apply our method with $K=2$ sample splits, $\mathcal{S}=100$ repetitions of splitting the data, and learn the conditional expectations with random forests that consist of $500$ trees whose minimal node size is $5$.
Like~\citet{Guoyou2008},
we conclude that smoking and cesd are significant; please see the first row of Table~\ref{tab:empirical} for a more precise account of our findings.
Therefore, our method appears to implicitly perform robust estimation. Apart from sex, our point estimators are larger in absolute value than, or of about the same size as, those obtained by~\citet{Guoyou2008}, which suggests that our method incorporates potentially less bias. However, apart from age, the standard deviations are slightly larger with our method.
This can be expected because random forests are more complex than the regression splines~\citet{Guoyou2008} employ.
\\
\begingroup
\begin{table}
\footnotesize
\centering
\begin{tabular}{| L{2.5cm} | C{1.9cm} | C{1.9cm} | C{1.9cm} | C{1.9cm} | C{2.2cm} |}
\hline
& \textbf{age} & \textbf{smoking} & \textbf{drugs} & \textbf{sex} & \textbf{cesd}\\
\hline
\hline
$W = (\mathrm{time})$ & $0.004$ ($0.027$) & $0.752$ ($0.123$) & $0.704$ ($0.360$) & $0.001$ ($0.043$) & $-0.042$ ($0.015$)\\
\hline
$W = (\mathrm{time}, \mathrm{age}, \mathrm{sex})$ & - & $0.620$ ($0.126$) & $0.602$ ($0.335$) &- & $-0.047$ ($0.015$) \\
\hline
\hline
\citet{Zeger-Diggle1994}
& $0.037$ ($0.18$) & $0.27$ ($0.15$) & $0.37$ ($0.31$) & $0.10$ ($0.038$) & $-0.058$ ($0.015$) \\
\hline
\citet{Taavoni2019b}
& $1.5\cdot 10^{-17}$ ($3.5\cdot 10^{-17}$)& $0.152$ ($0.208$) & $0.130$ ($0.071$) & $0.0184$ ($0.0039$) & $-0.0141$ ($0.0061$) \\
\hline
\citet{Wang2011}
& $0.010$ ($0.033$) & $0.549$ ($0.144$) & $0.584$ ($0.331$) & $0.080$ ($0.038$) & $-0.045$ ($0.013$)\\
\hline
\citet{Guoyou2008}
& $0.006$ ($0.038$) & $0.538$ ($0.136$) & $0.637$ ($0.350$)& $0.066$ ($0.040$) & $-0.042$ ($0.015$)\\
\hline
\end{tabular}
\caption{\label{tab:empirical}Estimates of the linear coefficient and its standard deviation in parentheses with our method
for nonparametrically adjusting for time (first row) and for time, age, and sex (second row).
The remaining rows display the results from~\citet[Section 5]{Zeger-Diggle1994}, \citet[Table 1, ``Kernel'']{Taavoni2019b}, \citet[Table 2, ``Semiparametric efficient scenario I'']{Wang2011}, and \citet[Table 5, ``Robust'']{Guoyou2008}, respectively.
}
\end{table}
\endgroup
We consider a second estimation approach where we model the variables time, age, and sex nonparametrically and allow them to interact.
It is conceivable that these variables are not (causally) influenced by smoking, drugs, and cesd and that they are therefore exogenous.
The variables smoking, drugs, and cesd are modeled linearly,
and they are considered as treatment variables. Some direct causal effect interpretations are possible if one is willing to assume, for instance, that the nonparametric adjustment variables are causal parents of the linear variables or the response.
However, we do not pursue this line of thought further.
We estimate the conditional expectations given the three nonparametric variables time, age, and sex again with random forests that consist of $500$ trees whose minimal node size is $5$ and use $K=2$ and $\mathcal{S}=100$ in Algorithm~\ref{algo:Summary}.
We again find that smoking and cesd are significant; please see the second row of Table~\ref{tab:empirical}.
This cannot be expected a priori because this second model incorporates more complex adjustments, which can render fewer variables significant.
\subsection{Pseudorandom Simulation Study: CD4 Cell Count Data}\label{sect:pseudorandom}
Second, we consider the CD4 cell count data from the previous subsection and perform a pseudorandom simulation study. The variables smoking, drugs, and cesd are modeled linearly and the variables time, age, and sex nonparametrically.
We condition on these six variables in our simulation. That is, they are the same in all repetitions.
The function $g$ in~\eqref{eq:PLMM} is chosen as a regression tree that we built beforehand.
We let $\beta_0=(0.62, 0.6, -0.05)^T$, where the first component corresponds to smoking, the second one to drugs, and the last one to cesd, consider a standard deviation of the random intercept per subject of $4.36$, and a standard deviation of the error term of $4.35$. These are the point estimates of the respective quantities obtained in the previous subsection.
Our fitting procedure uses random forests consisting of $500$ trees whose minimal node size is $5$ to estimate the conditional expectations, and we use $K=2$ and $\mathcal{S}=10$ in Algorithm~\ref{algo:Summary}. We perform $5000$ simulation runs.
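For concreteness, the data-generating step just described may be sketched as follows, where \texttt{g\_tree} stands for the pre-built regression tree, \texttt{cd4} for the fixed design, and the column names are again assumptions on our part.
\begin{verbatim}
## Pseudorandom responses: fixed covariates, tree-based g, random
## intercept per subject; g_tree and the column names are placeholders.
beta0   <- c(smoking = 0.62, drugs = 0.6, cesd = -0.05)
sigma_b <- 4.36   # sd of the per-subject random intercept
sigma_e <- 4.35   # sd of the error term

ids <- unique(cd4$id)
simulate_y <- function() {
  b <- rnorm(length(ids), sd = sigma_b)             # random intercepts
  drop(as.matrix(cd4[, names(beta0)]) %*% beta0) +  # linear part
    predict(g_tree, newdata = cd4) +                # nonparametric part
    b[match(cd4$id, ids)] +                         # subject's intercept
    rnorm(nrow(cd4), sd = sigma_e)                  # error term
}
y_sim <- simulate_y()   # one of the 5000 simulation runs
\end{verbatim}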
We compare the performance of our method with that of the spline-based function \texttt{gamm4} from the package~\texttt{gamm4}~\citep{gamm4} for the statistical software \textsf{R}~\citep{R}.
This method represents the nonlinear part of the model by smooth additive functions and estimates them by penalized regression splines. The penalized components are treated as random effects and the unpenalized components
as fixed.
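On one pseudorandom response, a call of this baseline could look as follows; the formula and column names are our assumptions, with one additive smooth per nonparametric variable.
\begin{verbatim}
library(gamm4)  # smooth additive terms with lme4-style random effects

## Linear terms for smoking, drugs, cesd; penalized-spline smooths
## for time, age, and sex; random intercept per subject.
dat <- data.frame(cd4, y_sim = y_sim)
fit <- gamm4(y_sim ~ smoking + drugs + cesd + s(time) + s(age) + s(sex),
             random = ~ (1 | id), data = dat)
summary(fit$gam)  # linear coefficients and smooth-term summaries
\end{verbatim}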
The results are displayed in Figure~\ref{fig:pseudorandom}.
With our method, \texttt{mmdml}, the two-sided confidence intervals for $\beta_0$ are of about the same length as with \texttt{gamm4} but achieve a coverage that is closer to the nominal $95\%$ level. The \texttt{gamm4} method substantially undercovers the smoking (packs per day) component of $\beta_0$, which can be explained by its bias.
\begin{figure}[]
\centering
\caption[]{\label{fig:pseudorandom}
Coverage and length of two-sided confidence intervals at significance level 5\% and bias for our method, \texttt{mmdml}, and
\texttt{gamm4}.
In the coverage plot, solid dots represent point estimators, and circles represent $95\%$ confidence bands with respect to the $5000$ simulation runs.
The confidence interval length and bias are displayed with boxplots without outliers.}
\includegraphics[width=0.75\textwidth]{pics/pseudoRand_paper_pics__25_January_2022__17_35_42__pseudorandom__Xforest_Yforest_S10_reps5000_FINAL.pdf}
\end{figure}
\subsection{Simulation Study}\label{sect:simul-study}
We consider a partially linear mixed-effects model with $q=3$
random effects and where $\beta_0$ is one-dimensional.
Every subject has their own random intercept term and a nested random effect with two levels.
Thus, the random effects structure is more complex than in the previous two subsections because these models only used a random intercept.
We compare three data generating mechanisms:
One where the function $g$ is nonsmooth and the number of observations per group is balanced,
one where the function $g$ is smooth and the number of observations per group is balanced,
and one where the function $g$ is nonsmooth and the number of observations per group is unbalanced;
please see Section~\ref{sect:dataSimulation} in the appendix for more details.
We estimate the nonparametric nuisance components, that is, the conditional expectations, with
random forests consisting of $500$ trees whose minimal node size is $5$.
Furthermore, we use $K=2$ and $\mathcal{S}=10$ in Algorithm~\ref{algo:Summary}.
We perform $1000$ simulation runs and consider different numbers of groups $N$. As in the previous subsection, we compare the performance of our method with
\texttt{gamm4}.
The results are displayed in Figure~\ref{fig:simulation}. Our method, \texttt{mmdml}, substantially outperforms \texttt{gamm4} in terms of coverage for nonsmooth $g$: there, the coverage of \texttt{gamm4} equals $0$ due to its substantial bias. Our method overcovers slightly due to the correction factor that results from the $\mathcal{S}$ repetitions; nevertheless, this correction factor is highly recommended in practice.
With smooth $g$, \texttt{gamm4} is closer to the nominal coverage and has shorter confidence intervals than our method: because the underlying model is smooth and additive, a spline-based estimator is better suited. In all scenarios, our method outputs longer confidence intervals than \texttt{gamm4} because we use random forests; consistent with theory, however, the difference decreases in absolute value as $N$ increases.
\begin{figure}[]
\centering
\caption[]{\label{fig:simulation}
Coverage and median length of two-sided confidence intervals for $\beta_0$ at significance level 5\% (true $\beta_0 = 0.5$) and median bias
for three data generating scenarios for our method, \texttt{mmdml}, and
\texttt{gamm4}.
The shaded regions in the coverage plot represent $95\%$ confidence bands with respect to the $1000$ simulation runs.
The dots in the coverage and bias plots are jittered; their interconnecting lines and confidence bands are not.}
\includegraphics[width=\textwidth]{pics/coverage_paper_pics__26_January_2022__99_99_99__simulation__Xforest_Yforest_S10_reps1000_FINAL_merged_1.pdf}
\end{figure}
\section{Conclusion}\label{sect:conclusion}
Our aim was to develop inference for the linear coefficient $\beta_0$ of a partially linear mixed-effects model
that includes a linear term and potentially complex nonparametric terms.
Such models can be used to describe heterogeneous and correlated data that feature
some grouping structure, which may result from taking repeated measurements.
Traditionally, spline or kernel approaches are used to cope with the nonparametric part of such a model. We presented a scheme that uses the double machine learning
framework of~\citet{Chernozhukov2018} to estimate any
nonparametric components with arbitrary machine learning algorithms. This allowed us to consider complex nonparametric components with interaction structures and high-dimensional variables.
Our proposed method is as follows. First, the nonparametric variables are regressed out from the response and the linear variables. This step adjusts the response and the linear variables for the nonparametric variables and may be performed with any machine learning algorithm.
The adjusted variables satisfy a linear mixed-effects model, where the linear coefficient $\beta_0$ can be estimated with standard linear
mixed-effects
techniques. We showed that the estimator of $\beta_0$
asymptotically follows a Gaussian distribution, converges at the parametric rate, and is semiparametrically efficient.
This asymptotic result
allows us to perform inference for $\beta_0$.
Empirical experiments demonstrated the performance of our proposed method.
We conducted an empirical and pseudorandom data analysis and a simulation study.
The simulation study and the pseudorandom experiment confirmed the effectiveness of our method in terms of coverage, length of confidence intervals, and estimation bias
compared to a penalized regression spline approach relying on additive models.
In the empirical experiment, we analyzed longitudinal CD4 cell count data collected from
HIV-infected individuals.
In the literature, most methods only incorporate the time component nonparametrically to analyze this dataset.
Because we estimate nonparametric components with machine learning algorithms, we can allow several variables to enter the model nonlinearly, and we can allow these variables to interact.
A comparison of our results with the literature suggests that our method
may perform robust estimation.
Implementations of our method are available in the \textsf{R}-package \texttt{dmlalg}~\citep{dmlalg}.
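For instance, assuming the estimator is exported under the name \texttt{mmdml} used throughout this paper (please consult the package documentation for the exact interface):
\begin{verbatim}
install.packages("dmlalg")  # CRAN release of the package
library(dmlalg)
?mmdml  # double machine learning for partially linear mixed models
\end{verbatim}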
\section*{Acknowledgements}
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 786461).
\phantomsection
\addcontentsline{toc}{section}{References}
\section{Magnetic blackbody shift}
The total magnetic blackbody shift of a level $\left| a\,m_a \right>$ is given by equation~\eref{eq:deltaE}. For electrons $\vect{\mu} = -\mu_B (\vect{J} + \vect{S})$ if we neglect the anomalous magnetic moment (i.e. $g_s=2$). Then
\begin{align}
\begin{split}
\Delta E_a &= \frac{1}{4}\sum_{b\,m_b}
\left|\left< b\,m_b\left| \mu_B (\vect{J} + \vect{S})\cdot\vect{B} \right| a\,m_a\right>\right|^2 \\
&\qquad\qquad \cdot \left( \frac{1}{E_a-E_b+\omega} + \frac{1}{E_a-E_b-\omega} \right) \nonumber
\end{split}\\
&= \frac{\mu_B^2 B^2(\omega)}{6} \sum_b C_{ba}\, \frac{E_a - E_b}{(E_a-E_b)^2 - \omega^2}
\end{align}
Consider the interaction of levels in an atom with nuclear spin $I$.
If we denote the angular quantum numbers of $\left| a \right>$ with $L$, $J$ and $F$, and those of $\left| b \right>$ with $L'$, $J'$ and $F'$, then
\begin{widetext}
\begin{equation}
C_{ba} = [F'] \sixj{F'}{F}{1}{J}{J'}{I}^2
\left( \delta_{J,J'}\sqrt{J(J+1)(2J+1)}\ +
\delta_{L,L'}(-1)^{P}[J,J']^{{\nicefrac{1}{2}}}\sixj{J'}{J}{1}{{\nicefrac{1}{2}}}{{\nicefrac{1}{2}}}{L}
\sqrt{\frac{3}{2}} \right)^2
\end{equation}
where $P=J+L+{\nicefrac{1}{2}}+F+F'+2I+1$ and square brackets denote $[J]=(2J+1)$.
\end{widetext}
\section{Introduction}
\quad We consider the T-congruence Sylvester equation of the form
\begin{equation}
\label{T-con}
AX+X^{\rm T}B=C,
\end{equation}
where $A\in \mathbb R^{m\times n}$, $B\in \mathbb R^{n\times m}$ and $C\in \mathbb R^{m\times m}$ are given, and $X\in \mathbb R^{n\times m}$ is to be determined.
If $X=X^{\rm T}$, then the equation \eqref{T-con} reduces to the Sylvester equation that is widely known in control theory and numerical linear algebra.\\
\quad The T-congruence Sylvester equation \eqref{T-con} has recently attracted attention because of its relationship with palindromic eigenvalue problems \cite{unique2,palindromic}. Known results include necessary and sufficient conditions for the existence of a unique solution for every right-hand side \cite{unique1,unique2}, an algorithm to compute the unique solution of \eqref{T-con} in the case $n=m$ \cite{general}, and a method to find the general solutions of $AX+X^\star B=O$ \cite{AX+X^TB=0}, where
$X^\star$ denotes either the transpose or the conjugate transpose of $X$. \\
\quad The standard technique to solve or to analyze the T-congruence Sylvester equation \eqref{T-con} is to find nonsingular matrices $T_1$, $T_2$, $T_3$ and $T_4$ of the form
\begin{equation}\label{T}
T_1AT_2(T_2^{-1}XT_3)+(T_1X^{\rm T}T_4^{-1})(T_4BT_3)=T_1CT_3
\end{equation}
such that the equation \eqref{T-con} is simplified. Furthermore, if the above equation \eqref{T} can be transformed into a well-known matrix equation, then one can use the studies of the corresponding matrix equation in order to solve or to analyze the original equation \eqref{T-con}. However, there seems to be no such transformation due to the existence of $X^{\rm T}$.\\
\quad In this paper, we show that, if $A$ or $B$ is nonsingular, the T-congruence Sylvester equation \eqref{T-con} can be transformed into the Lyapunov equation that is also known in control theory \cite{L_unique} or the Sylvester equation by using the tensor product, the vec operator and a permutation matrix.
This result will provide an approach to finding mathematical properties and efficient numerical solvers for the T-congruence Sylvester equation \eqref{T-con}.\\
\quad The paper is organized as follows. Section 2 reviews the definition of the tensor product and the exchange of tensor factors by means of a permutation matrix.
Section 3 presents our main results that the T-congruence Sylvester equation can be transformed into the Lyapunov equation or the Sylvester equation.
Finally, Section 4 gives concluding remarks and future work.
\section{Preliminaries}
\quad In this section, the tensor product (also referred to as the Kronecker product) and its properties are briefly reviewed.
Let $A=[a_{ij}]\in \mathbb R^{m\times n}$ and $B\in \mathbb R^{p\times q}$, then the tensor product is defined by
\begin{equation}\nonumber
A\otimes B:=
\begin{bmatrix}
a_{11}B &a_{12}B &\cdots &a_{1n}B\\
a_{21}B &a_{22}B &\cdots &a_{2n}B\\
\vdots &\vdots & &\vdots\\
a_{m1}B &a_{m2}B &\cdots &a_{mn}B
\end{bmatrix}
\in \mathbb R^{mp\times nq}.
\end{equation}
In addition, let $C\in \mathbb R^{n\times l}$ and $D\in \mathbb R^{q\times r}$, then it follows that
\begin{equation}\nonumber
(A\otimes B)(C\otimes D)=(AC)\otimes(BD).
\end{equation}
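This identity is easy to check numerically; for instance in \textsf{R}, whose base function \texttt{kronecker} implements $\otimes$:
\begin{verbatim}
## Check (A x B)(C x D) = (AC) x (BD) on random matrices.
set.seed(1)
m <- 2; n <- 3; p <- 4; q <- 2; l <- 3; r <- 5
A <- matrix(rnorm(m * n), m); B <- matrix(rnorm(p * q), p)
C <- matrix(rnorm(n * l), n); D <- matrix(rnorm(q * r), q)
max(abs(kronecker(A, B) %*% kronecker(C, D) -
        kronecker(A %*% C, B %*% D)))   # ~ 1e-15: zero up to rounding
\end{verbatim}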
\quad For $A=[\bm{a}_1,\bm{a}_2,\dots,\bm{a}_n]\in \mathbb R^{m\times n}$, the vec operator, vec:$\mathbb R^{m\times n}\to \mathbb R^{mn},$ is defined by
\begin{equation}\nonumber
{\rm vec} (A):=
\begin{bmatrix}
\bm{a}_1\\
\bm{a}_2\\
\vdots \\
\bm{a}_n
\end{bmatrix},
\end{equation}
and vec$^{-1}$:$\mathbb R^{mn}\to \mathbb R^{m\times n}$ is the inverse vec operator such that
\begin{equation}\nonumber
{\rm vec}^{-1} ({\rm vec} (A))=A.
\end{equation}
We shall use the following two lemmas to prove the main results in the next section.
\begin{lemma}$\!\!(${\rm See, e.g., \cite[p.275]{demmel})}
Let $A\in \mathbb R^{m\times n}, B\in \mathbb R^{n\times p}$. Then it follows that
\begin{align}\nonumber
{\rm vec} (AB)=(I_p\otimes A){\rm vec}(B)
=(B^{{\rm T}}\otimes I_m){\rm vec}(A),
\end{align}
where $I_n$ denotes the $n \times n$ identity matrix.
\end{lemma}
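Lemma 2.1 admits the same kind of numeric verification; note that \texttt{as.vector} applied to a matrix stacks its columns, which is exactly the vec operator.
\begin{verbatim}
## vec(AB) = (I_p x A) vec(B) = (B^T x I_m) vec(A)
set.seed(2)
m <- 3; n <- 4; p <- 2
A <- matrix(rnorm(m * n), m); B <- matrix(rnorm(n * p), n)
v <- as.vector(A %*% B)
max(abs(v - kronecker(diag(p), A) %*% as.vector(B)))     # ~ 0
max(abs(v - kronecker(t(B), diag(m)) %*% as.vector(A)))  # ~ 0
\end{verbatim}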
\begin{lemma}$\!\!(${\rm \cite{tensor})}\label{per}
Let $\bm{e}_{in}$ be an $n$-dimensional column vector that has 1 in the $i$th position and 0's elsewhere, i.e.,
\begin{equation}\nonumber
\bm{e}_{in}:=[0,0,\dots,0,1,0,\dots,0]^{{\rm T}}\in \mathbb R^n.
\end{equation}
Then for the permutation matrix
\begin{equation}\nonumber
P_{mn}:=
\begin{bmatrix}
I_m\otimes \bm{e}_{1n}^{\scalebox{0.5}{\rm T}}\\
I_m\otimes \bm{e}_{2n}^{\scalebox{0.5}{\rm T}}\\
\vdots\\
I_m\otimes \bm{e}_{nn}^{\scalebox{0.5}{\rm T}}
\end{bmatrix}
\in \mathbb R^{mn\times mn},
\end{equation}
the following properties hold {\rm:}
\begin{align}
&P_{mn}^{{\rm T}}=P_{nm},\label{first}\\
&P_{mn}^{{\rm T}}P_{mn}=P_{mn}P_{mn}^{{\rm T}}=I_{mn},\label{second}\\
&{\rm vec}(A)=P_{mn}{\rm vec}(A^{{\rm T}}), A\in \mathbb R^{m\times n},\label{third}\\
&P_{mr}(A\otimes I_r)P_{nr}^{{\rm T}}=I_r\otimes A.\label{forth}
\end{align}
\end{lemma}
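A small \textsf{R} construction of $P_{mn}$ following the definition makes the properties of Lemma \ref{per} easy to test; the entry placement below encodes that position $(i-1)n+j$ of ${\rm vec}(A^{{\rm T}})$ is sent to position $(j-1)m+i$ of ${\rm vec}(A)$.
\begin{verbatim}
## Permutation (commutation) matrix P_mn with P_mn vec(A^T) = vec(A).
Pmat <- function(m, n) {
  P <- matrix(0, m * n, m * n)
  for (i in 1:m) for (j in 1:n)
    P[(j - 1) * m + i, (i - 1) * n + j] <- 1
  P
}
set.seed(3)
m <- 3; n <- 2
A <- matrix(rnorm(m * n), m)
P <- Pmat(m, n)
max(abs(as.vector(A) - P %*% as.vector(t(A))))  # vec(A) = P vec(A^T): ~ 0
max(abs(P %*% t(P) - diag(m * n)))              # P is orthogonal:     ~ 0
\end{verbatim}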
\section{Main results}
In this section, we consider the T-congruence Sylvester equation \eqref{T-con} for the case $A$, $B$, $C\in \mathbb R^{n\times n}$.
\begin{theorem}\label{the1}
Let $A$, $B$, $C$, $X\in \mathbb R^{n\times n}$. Then, if $A$ is nonsingular, the T-congruence Sylvester equation \eqref{T-con} can be transformed into the Lyapunov equation of the form
\begin{equation}\label{theorem_A}
\tilde{X}-M\tilde{X}M^{\rm T}=Q,
\end{equation}
where $\tilde{X}:=AX$, $M:=B^{{\rm T}}A^{-1}$, $Q:=C-{\rm vec}^{-1}(P_{nn}{\rm vec}(MC)).$
\end{theorem}
{\bf Proof}
\quad Applying the vec operator to \eqref{T-con} and using Lemma 2.1 yield
\begin{equation}
\label{vecT}
(I_n\otimes A){\rm vec}(X)+(B^{\rm T}\otimes I_n){\rm vec}(X^{\rm T})={\rm vec}(C).
\end{equation}
From \eqref{first}, \eqref{second} and \eqref{forth} of Lemma \ref{per}, it follows that
\begin{equation}\label{perm_exchange}
P_{nn}(B^{\rm T}\otimes I_n)P_{nn}=I_n\otimes B^{\rm T}
\Leftrightarrow (B^{\rm T}\otimes I_n)P_{nn}=P_{nn}(I_n\otimes B^{\rm T}).
\end{equation}
By using \eqref{perm_exchange} and \eqref{third} of Lemma \ref{per}, the second term of the left-hand side in \eqref{vecT} is calculated to be
\begin{align}
(B^{\rm T}\otimes I_n){\rm vec}(X^{\rm T})=(B^{\rm T}\otimes I_n)P_{nn}{\rm vec}(X)\nonumber
=P_{nn}(I_n\otimes B^{\rm T}){\rm vec}(X).\nonumber
\end{align}
Thus \eqref{T-con} is rewritten as
\begin{equation}
\{(I_n\otimes A)+P_{nn}(I_n\otimes B^{\rm T})\}\bm{x}=\bm{c},\nonumber
\end{equation}
where $\bm{x}:={\rm vec}(X)$ and $\bm{c}:={\rm vec}(C)$.
Since $A$ is assumed to be nonsingular, it follows that $I_{n^2}=(I_n\otimes A^{-1})(I_n\otimes A)$, and thus
\begin{align}
&\{(I_n\otimes A)+P_{nn}(I_n\otimes B^{\rm T})\}(I_n\otimes A^{-1})(I_n\otimes A)\bm{x}=\bm{c}\nonumber\\
&\quad \Leftrightarrow \{I_n\otimes I_n+P_{nn}(I_n\otimes M)\}\bm{\tilde{x}}=\bm{c},\label{vecT2}
\end{align}
where $\bm{\tilde{x}}:=(I_n\otimes A)\bm{x}$.
Multiplying on the left of \eqref{vecT2} by $\{I_n\otimes I_n-P_{nn}(I_n\otimes M)\}$ and using \eqref{forth} of Lemma \ref{per} yield
\begin{align}
&\{I_n\otimes I_n-P_{nn}(I_n\otimes M)\}\{I_n\otimes I_n+P_{nn}(I_n\otimes M)\}\bm{\tilde{x}}=\bm{c}'\nonumber\\
&\quad \Leftrightarrow\{I_n\otimes I_n-P_{nn}(I_n\otimes M)P_{nn}(I_n\otimes M)\}\bm{\tilde{x}}=\bm{c}'\nonumber\\
&\quad \Leftrightarrow\{I_n\otimes I_n-(M\otimes I_n)(I_n\otimes M)\}\bm{\tilde{x}}=\bm{c}'\nonumber\\
&\quad \Leftrightarrow\{I_n\otimes I_n-(M\otimes M)\}\bm{\tilde{x}}=\bm{c}',\label{theorem1}
\end{align}
where $\bm{c}':=\{I_n\otimes I_n-P_{nn}(I_n\otimes M)\}\bm{c}$.
Applying the inverse vec operator to \eqref{theorem1}, we obtain
\begin{equation}
{\rm vec}^{-1}\{\{I_n\otimes I_n-(M\otimes M)\}\bm{\tilde{x}}\}={\rm vec}^{-1}(\bm{c}')\Leftrightarrow \tilde{X}-M\tilde{X}M^{\rm T}=Q.\nonumber
\end{equation}
Thus when $A$ is nonsingular, the T-congruence Sylvester equation \eqref{T-con} can be transformed into the Lyapunov equation.\hfill \ $\Box$
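Theorem \ref{the1} can be validated numerically: starting from a known $X$, form $C$, build $M$ and $Q$, and confirm that $\tilde{X}=AX$ solves \eqref{theorem_A}. The sketch below uses that $P_{nn}{\rm vec}(Y) = {\rm vec}(Y^{{\rm T}})$ by \eqref{second} and \eqref{third}, so that ${\rm vec}^{-1}(P_{nn}{\rm vec}(MC)) = (MC)^{{\rm T}}$.
\begin{verbatim}
## Verify Theorem 3.1: Xt = A X satisfies Xt - M Xt M^T = Q.
set.seed(4)
n <- 4
A <- matrix(rnorm(n * n), n) + n * diag(n)  # safely nonsingular
B <- matrix(rnorm(n * n), n)
X <- matrix(rnorm(n * n), n)
C <- A %*% X + t(X) %*% B          # T-congruence Sylvester equation
M <- t(B) %*% solve(A)
Q <- C - t(M %*% C)                # = C - vec^{-1}(P_nn vec(M C))
Xt <- A %*% X
max(abs(Xt - M %*% Xt %*% t(M) - Q))  # ~ 0
## step 2 of the application below: recover X from Xt = A X
max(abs(solve(A, Xt) - X))            # ~ 0
\end{verbatim}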
\begin{corollary}
Let $A$, $B$, $C$, $X\in \mathbb R^{n\times n}$. Then, if $B$ is nonsingular, the T-congruence Sylvester equation \eqref{T-con} can be transformed into the Lyapunov equation of the form
\begin{equation}\label{theorem_B}
\hat{X}-\hat{M}\hat{X}\hat{M}^{\rm T}=\hat{Q},
\end{equation}
where $\hat{X}:=X^{\rm T}B$, $\hat{M}:=A(B^{{\rm T}})^{-1}$, $\hat{Q}:=C-{\rm vec}^{-1}(P_{nn}{\rm vec}(C\hat{M}^{\rm T}))$.
\end{corollary}
{\bf Proof}
Transposing the equation (\ref{T-con}) yields
\begin{equation}\label{the_2}
B^{\rm T}X+X^{\rm T}A^{\rm T}=C^{\rm T}.
\end{equation}
By replacing $B^{\rm T}$, $A^{\rm T}$, and $C^{\rm T}$ with $A$, $B$, and $C$, respectively, the equation \eqref{the_2} takes the form of \eqref{T-con}. Thus, by Theorem \ref{the1}, when $B$ is nonsingular the T-congruence Sylvester equation \eqref{T-con} can be transformed into the Lyapunov equation. \hfill \ $\Box$
\par
\quad As an application, if the matrix $A$ or $B$ is nonsingular, one may obtain the solution of \eqref{T-con} as follows:\par
1. solve the Lyapunov equation \eqref{theorem_A} (or \eqref{theorem_B});\par
2. solve $AX=\tilde{X}$ (or $B^{\rm T}X=\hat{X}^{\rm T}$).\par
\quad A slightly stronger condition gives a close relationship between the T-congruence Sylvester equation and the Sylvester equation as shown below.
\begin{corollary}
Let $A$, $B$, $C$, $X\in \mathbb R^{n\times n}$. Then, if $A$ and $B$ are nonsingular, the T-congruence Sylvester equation can be transformed into the Sylvester equation{\rm:}
\begin{equation}
-M\tilde{X}+\tilde{X}(M^{-1})^{\rm T}=Q',
\end{equation}
where $\tilde{X}:=AX$, $M:=B^{{\rm T}}A^{-1}$, $Q':={\rm vec}^{-1}\{(M^{-1}\otimes I_n)\bm{c'}\}$.
\end{corollary}
{\bf Proof}\quad
Multiplying on the left of \eqref{theorem1} by $(M^{-1}\otimes I_n)$ gives
\begin{equation}
\{(M^{-1}\otimes I_n)+(I_n\otimes (-M))\}\bm{\tilde{x}}=\bm{c}'',\label{corollary1}
\end{equation}
where $\bm{c}'':=(M^{-1}\otimes I_n)\bm{c}'$.
Applying the inverse vec operator to (\ref{corollary1}), we have
\begin{align}
&{\rm vec}^{-1}\{\{(M^{-1}\otimes I_n)+(I_n\otimes (-M))\}\bm{\tilde{x}}\}={\rm vec}^{-1}\{\bm{c}''\}\nonumber\\
&\quad \Leftrightarrow -M\tilde{X}+\tilde{X}(M^{-1})^{\rm T}=Q'.\nonumber
\end{align}
This completes the proof. \hfill \ $\Box$
\par
\quad
\section{Concluding remarks}
\quad In this paper, we showed that the T-congruence Sylvester equation can be transformed into the Lyapunov equation if the matrix $A$ or $B$ is nonsingular, and can be further transformed into the Sylvester equation if the matrices $A$ and $B$ are both nonsingular. \\
\quad As applications, these results will lead to the following potential advantages: (1) simplification of the conditions for the unique solution of the T-congruence Sylvester equation by using the condition for the Lyapunov equation, see, e.g., \cite{L_unique,S_unique}; (2) useful tools to find necessary and sufficient conditions for consistency of the T-congruence Sylvester equation; (3) efficient numerical solvers for the equation by using the numerical solvers of the Lyapunov equation (the Sylvester equation), see, e.g., \cite{S_unique,Sylvester,Lyapunov}.\\
\quad In future work we will investigate whether there exists a relationship between the T-congruence Sylvester equation and the Lyapunov equation for the case where the matrices $A$ and $B$ are singular or rectangular.
\begin{ack}
This work has been supported in part by JSPS KAKENHI Grant No. 26286088.
\end{ack}
\section{Background} The starting point for the development of algebraic invariants in topological data analysis is the classification of finite persistence modules over a field $k$: that any such module decomposes into a direct sum of indecomposable interval modules; moreover, the decomposition is unique up to reordering. The barcodes associated to the original module correspond to these interval submodules, which are indexed by the set of connected subgraphs of the finite directed graph associated to the finite, totally ordered indexing set of the module. Modules that decompose in such a fashion are conventionally referred to as {\it tame}.
\vskip.2in
A central problem in the subject has been to determine what, if anything, holds true for more complex types of poset modules (ps-modules) - those indexed on finite partially ordered sets; most prominent among these being $n$-dimensional persistence modules \cite{cs, cz}. Gabriel's theorem \cite{pg, gr} implies that the only types for which the module is {\it always} tame are those whose underlying graph corresponds to a simply laced Dynkin diagram of type $A_n$, $D_n, n\ge 4$ or one of the exceptional graphs $E_6, E_7$ or $E_8$; a result indicating there is no simple way to generalize the 1-dimensional case.
\vskip.2in
However it is natural to ask whether the existence of some (naturally occurring) additional structure for such modules might lead to an appropriate generalization that is nevertheless consistent with Gabriel's theorem. This turns out to be the case. Before stating our main results, we will need to briefly discuss the framework in which we will be working. We consider finite ps-modules (referred to as $\cal C$-modules in this paper) equipped with i) no additional structure, ii) a {\it weak inner product} ($\cal WIPC$-module), iii) an {\it inner product} ($\cal IPC$-module). The ``structure theorem'' - in reality a sequence of theorems and lemmas - is based on the fundamental notion of a {\it multi-flag} of a vector space $V$, referring to a collection of subspaces of $V$ closed under intersections, and the equally important notion of {\it general position} for such an array. Using terminology made precise below, our results may be summarized as follows:
\begin{itemize}
\item Any $\cal C$-module admits a (non-unique) weak inner product structure (can be realized as a $\cal WIPC$-module). However, the obstruction to further refining this to an $\cal IPC$-structure is in general non-trivial, and we give an explicit example of a $\cal C$-module which cannot admit an inner product structure.
\item Associated to any finite $\cal WIPC$-module $M$ is a functor ${\cal F}:{\cal C}\to (multi\mhyphen flags/k)$ which associates to each $x\in obj({\cal C})$ a multi-flag ${\cal F}(M)(x)$ of the vector space $M(x)$, referred to as the {\it local structure} of $M$ at $x$.
\item This local strucure is naturally the direct limit of a directed system of recursively defined multi-flags $\{{\cal F}_n(M), \iota_n\}$, and is called {\it stable} when this directed system stabilizes at a finite stage.
\item In the case $M$ is an $\cal IPC$-module with stable local structure
\begin{itemize}
\item it determines a {\it tame covering} of $M$ - a surjection of $\cal C$-modules $p_M:T(M)\surj M$ with $T(M)$ weakly tame, and with $p_M$ inducing an isomorphism of associated graded local structures. The projection $p_M$ is an isomorphism iff $M$ itself is weakly tame, which happens exactly when the multi-flag ${\cal F}(M)(x)$ is in general position for each object $x$. In this way $T(M)$ is the closest weakly tame approximation to $M$.
\item If, in addition, the category $\cal C$ is {\it holonomy free} (h-free), then each block of $T(M)$ may be non-canonically written as a finite direct sum of GBCs (generalized bar codes); in this case $T(M)$ is tame and $M$ is tame iff it is weakly tame.
\end{itemize}
\item In the case $M$ is equipped only with a $\cal WIPC$-structure, the tame cover may not exist, but one can still define the {\it generalized bar code vector} of $M$ which, in the case $M$ is an $\cal IPC$-module, measures the dimensions of the blocks of $M$. This vector does not depend on the choice of $\cal WIPC$-structure, and therefore is defined for all $\cal C$-modules $M$ with stable local structure.
\item All finite $n$-dimensional zig-zag modules have strongly stable local structure for all $n\ge 1$ (this includes all finite $n$-dimensional persistence modules, and strongly stable implies stable).
\item All finite $n$-dimensional persistence modules, in addition, admit a (non-unique) inner product structure.
\end{itemize}
A distinct advantage to the above approach is that the decomposition into blocks, although dependent on the choice of inner product, is {\it basis-free}; moreover the local structure is derived solely from the underlying structure of $M$ via the iterated computation of successively refined functors ${\cal F}_n(M)$ determined by images, kernels and intersections. For modules with stable local structure, the total dimension of the kernel of $p_M$ - referred to as the {\it excess} of $M$ - is an isomorphism invariant that provides a complete numerical obstruction to an $\cal IPC$-module $M$ being weakly tame. Moreover, the block diagram of $M$, codified by its tame cover $T(M)$, always exists for $\cal IPC$-modules with stable local structure, even when $M$ itself is not weakly tame. It would seem that the computation of the local structure of $M$ in this case should be amenable to algorithmic implementation. And although there are obstructions (such as holonomy) to stability for $\cal WIPC$-modules indexed on arbitrary finite ps-categories, these obstructions vanish for finite zig-zag modules in all dimensions (as indicated by the last bullet point). Additionally, although arbitrary $\cal C$-modules may not admit an inner product, all finite $n$-dimensional persistence modules do (for all dimensions $n\ge 1$), which is our main case of interest.
\vskip.2in
A brief organizational description: in section 2 we make precise the notion of multi-flags, general position, and the local structure of a $\cal WIPC$-module. The {\it excess} of the local structure - a whole number which measures the failure of general position - is defined. In section 3 we show that when $M$ is an $\cal IPC$-module, the associated graded local structure ${\cal F}(M)_*$ defines the blocks of $M$, which in turn can be used to create the tame cover via direct sum. Moreover, this tame cover is isomorphic to $M$ iff the excess is zero. We define the holonomy of the indexing category; for holonomy-free (h-free) categories, we show that this block sum may be further decomposed into a direct sum of generalized bar codes, yielding the desired generalization of the classical case mentioned above. As an illustration of the efficacy of this approach, we use it at the conclusion of section 3.2 to give a 2-sentence proof of the structure theorem for finite 1-dimensional persistence modules. In section 3.3 we show that the dimension vector associated to the tame cover can still be defined in the absence of an inner product structure, yielding an isomorphism invariant for arbitrary $\cal C$ modules. Section 3.4 investigates the obstruction to equipping a $\cal C$-module with an inner product, the main results being that i) it is in general non-trivial, and ii) the obstruction vanishes for all finite $n$-dimensional persistence modules. In section 3.5 we consider the related obstruction to being h-free; using the introduced notion of an elementary homotopy we show all finite $n$-dimensional persistence modules are strongly h-free (implying h-free). We also show how the existence of holonomy can prevent stability of the local structure. Finally section 3.6 considers the stability question; although (from the previous section) the local structure can fail to be stable in general, it is always so for i) finite $n$-dimensional zig-zag modules (which includes persistence modules as a special case) over an arbitrary field, and ii) any $\cal C$-module over a finite field.
\vskip.2in
In section 4 we introduce the notion of geometrically based $\cal C$-modules; those which arise via application of homology to a $\cal C$-diagram of simplicial sets or complexes with finite skeleta. We show that the example of section 3.4 can be geometrically realized, implying that geometrically based $\cal C$-modules need not admit an inner product. However, by a cofibrant replacement argument we show that any geometrically based $\cal C$-module admits a presentation by $\cal IPC$-modules, a result which is presently unknown for general $\cal C$-modules.
\vskip.2in
I would like to thank Dan Burghelea and Fedor Manin for their helpful comments on earlier drafts of this work, and Bill Dwyer for his contribution to the proof of the cofibrancy replacement result presented in section 4.1.
\vskip.5in
\section{$\cal C$-modules}
\subsection{Preliminaries} Throughout we work over a fixed field $k$. Let $(vect/k)$ denote the category of finite dimensional vector spaces over $k$, and linear homomorphisms between such. Given a category $\cal C$, a {\it $\cal C$-module over $k$} is a covariant functor $M:{\cal C}\to (vect/k)$. The category $({\cal C}\mhyphen mod)$ of $\cal C$-modules then has these functors as objects, with morphisms represented in the obvious way by natural transformations. All functorial constructions on vector spaces extend to the objects of $({\cal C}\mhyphen mod)$ by objectwise application. In particular, one has the appropriate notions of
\begin{itemize}
\item monomorphisms, epimorphisms, short and long-exact sequences;
\item kernel and cokernel;
\item direct sums, Hom-spaces, tensor products;
\item linear combinations of morphisms.
\end{itemize}
With these constructs $({\cal C}\mhyphen mod)$ is an abelian category, without restriction on $\cal C$. By a {\it ps-category} we will mean the categorical representation of a poset $(S,\le)$, where the objects identify with the elements of $S$, while $Hom(x,y)$ contains a unique morphism iff $x\le y$ in $S$. A ps-category is {\it finite} iff it has a finite set of objects, and is {\it connected} if its nerve $N({\cal C})$ is connected. A ps-module is then a functor $F:{\cal C}\to (vect/k)$ from a ps-category $\cal C$ to $(vect/k)$. A morphism $\phi_{xy}:x\to y$ in $\cal C$ is {\it atomic} if it does not admit a non-trivial factorization (in terms of the partial ordering, this is equivalent to saying that if $x\le z\le y$ then either $z=x$ or $z=y$). Any morphism in $\cal C$ can be expressed (non-uniquely) as a composition of atomic morphisms. The {\it minimal graph} of $\cal C$ is then defined as the (oriented) subgraph of the 1-skeleton of $N({\cal C})$ with the same vertices, but whose edges are represented by atomic morphisms (not compositions of such). The minimal graph of $\cal C$ is denoted by $\Gamma({\cal C})$ and will be referred to simply as the graph of $\cal C$. We observe that $\cal C$ is connected iff $\Gamma({\cal C})$ is connected.
\vskip.2in
In all that follows we will assume $\cal C$ to be a {\it connected, finite ps-category}, so that all $\cal C$-modules are finite ps-modules. If $M$ is a $\cal C$-module and $\phi_{xy}\in Hom_{\cal C}(x,y)$, we will usually denote the linear map $M(\phi_{xy}): M(x)\to M(y)$ simply as $\phi_{xy}$ unless more precise notation is needed. A very special type of ps-category occurs when the partial ordering on the finite set is a total ordering. In this case the resulting categorical representation $\cal C$ is isomorphic to $\underline{n}$, which denotes the category corresponding to $\{1 < 2 < 3\dots < n\}$. A finite persistence module is, by definition, an $\underline{n}$-module for some natural number $n$. So the $\cal C$-modules we consider in this paper occur as natural generalizations of finite persistence modules.
\vskip.3in
\subsection{Inner product structures} It will be useful to consider two refinements of the category $(vect/k)$.
\begin{itemize}
\item $(WIP/k)$, the category whose objects are inner product (IP)-spaces $V = (V,<\ ,\ >_V)$ and whose morphisms are linear transformations (no compatibility required with respect to the inner product structures on the domain and range);
\item $(IP/k)$, the wide subcategory of $(WIP/k)$ whose morphisms $L:(V,<\ ,\ >_V)\to (W,<\ ,\ >_W)$ satisfy the property that $\wt{L}: ker(L)^\perp\to W$ is an isometric embedding, where $ker(L)^\perp\subset V$ denotes the orthogonal complement of $ker(L)\subset V$ in $V$ with respect to the inner product $<\ ,\ >_V$, and $\wt{L}$ is the restriction of $L$ to $ker(L)^\perp$.
\end{itemize}
There are obvious transformations
\[
(IP/k)\xrightarrow{\iota_{ip}} (WIP/k)\xrightarrow{p_{wip}} (vect/k)
\]
where the first map is the inclusion which is the identity on objects, while the second map forgets the inner product on objects and is the identity on transformations between two fixed objects.
\vskip.2in
Given a $\cal C$-module $M:{\cal C}\to (vect/k)$ a {\it weak inner product} on $M$ is a factorization
\[
M: {\cal C}\to (WIP/k)\xrightarrow{p_{wip}} (vect/k)
\]
while an {\it inner product} on $M$ is a further factorization through $(IP/k)$:
\[
M: {\cal C}\to (IP/k)\xrightarrow{\iota_{ip}}(WIP/k)\xrightarrow{p_{wip}} (vect/k)
\]
A $\cal WIPC$-module will refer to a $\cal C$-module $M$ equipped with a weak inner product, while an $\cal IPC$-module is a $\cal C$-module that is equipped with an actual inner product, in the above sense. As any vector space admits a (non-unique) inner product, we see that
\begin{proposition} Any $\cal C$-module $M$ admits a non-canonical representation as a $\cal WIPC$-module.
\end{proposition}
The question as to whether a $\cal C$-module $M$ can be represented as an $\cal IPC$-module, however, is much more delicate, and discussed in some detail below.
\vskip.2in
Given a $\cal C$-module $M$ and a morphism $\phi_{xy}\in Hom_{\cal C}(x,y)$, we set $KM_{xy} := \ker(\phi_{xy} : M(x)\to M(y)).$ We note that a $\cal C$-module $M$ is an $\cal IPC$-module iff
\begin{itemize}
\item for all $x\in obj({\cal C})$, $M(x)$ comes equipped with an inner product $< , >_x$;
\item for all $\phi_{xy}\in Hom_{\cal C}(x,y)$, the map $\wt{\phi}_{xy} : KM_{xy}^\perp\to M(y)$ is an isometry, where $\wt{\phi}_{xy}$ denotes the restriction of $\phi_{xy}$ to $KM_{xy}^\perp = $ the orthogonal complement of $KM_{xy}\subset M(x)$ with respect to the inner product $< , >_x$. In other words,
\[
<\phi({\bf v}), \phi({\bf w})>_y = <{\bf v}, {\bf w}>_x,\qquad \forall\, {\bf v}, {\bf w}\in KM_{xy}^\perp
\]
\end{itemize}
\begin{definition} Let $V = (V, < , >)$ be an inner product (IP) space. If $W_1\subseteq W_2\subset V$, we write $(W_1\subset W_2)^\perp$ for the relative orthogonal complement of $W_1$ viewed as a subspace of $W_2$ equipped with the induced inner product, so that $W_2\cong W_1\oplus (W_1\subset W_2)^\perp$.
\end{definition}
Note that $(W_1\subset W_2)^\perp = W_1^\perp\cap W_2$ when $W_1\subseteq W_2$ and $W_2$ is equipped with the induced inner product.
\vskip.3in
\subsection{Multi-flags and general position} Recall that a {\it flag} in a vector space $V$ consists of a finite sequence of proper inclusions beginning at $\{0\}$ and ending at $V$:
\[
\underline{W} := \{W_i\}_{0\le i\le n} = \left\{\{0\} = W_0\subset W_1\subset W_2\subset\dots\subset W_m = V\right\}
\]
If $\underline{m}$ denotes the totally ordered set $0 < 1 < 2 <\dots < m$ viewed as a category, $Sub(V)$ the category of subspaces of $V$ and inclusions of such, with $PSub(V)\subset Sub(V)$ the wide subcategory whose morphisms are proper inclusions, then there is an evident bijection
\[
\{\text{flags in } V\}\Leftrightarrow \underset{m\ge 1}{\coprod} Funct(\underline{m}, PSub(V))
\]
We wish to relax this structure in two different ways. First, one may consider a sequence as above where not all of the inclusions are proper; we will refer to such an object as a {\it semi-flag}. Thus a semi-flag is represented by (and corresponds to) a functor $F:\underline{m}\to Sub(V)$ for some $m$. More generally, we define a {\it multi-flag} in $V$ to be a collection ${\cal F} = \{W_\alpha\subset V\}$ of subspaces of $V$ containing $\{0\}, V$, partially ordered by inclusion, and closed under intersection. It need not be finite.
\vskip.2in
Assume now that $V$ is equipped with an inner product. Given an element $W\subseteq V$ of a multi-flag $\cal F$ associated to $V$, let $S(W) := \{U\in {\cal F}\ |\ U\subsetneq W\}$ be the elements of $\cal F$ that are proper subsets of $W$, and set
\begin{equation}\label{eqn:one}
W_{\cal F} := \left(\left(\displaystyle\sum_{U\in S(W)} U\right) \subset W\right)^\perp
\end{equation}
\begin{definition}\label{def:genpos} For an IP-space $V$ and multi-flag $\cal F$ in $V$, the associated graded of $\cal F$ is the set of subspaces ${\cal F}_* := \{W_{\cal F}\ |\ W\in{\cal F}\}$. We say that $\cal F$ is in \underbar{general position} iff $V$ can be written as a direct sum of the elements of ${\cal F}_*$: $V\cong \displaystyle\bigoplus_{W\in{\cal F}} W_{\cal F}$.
\end{definition}
Note that, as $V\in{\cal F}$, it will always be the case that $V$ can be expressed as a sum of the subspaces in ${\cal F}_*$. The issue is whether that sum is a direct sum, and whether that happens is completely determined by the sum of the dimensions.
\begin{proposition} For any multi-flag $\cal F$ of an IP-space $V$, $\displaystyle\sum_{W\in{\cal F}} dim(W_{\cal F}) \ge dim(V)$. Moreover the two are equal iff $\cal F$ is in general position.
\end{proposition}
\begin{proof} The first claim follows from the fact that $\displaystyle\sum_{W\in{\cal F}} W = V$. Hence the sum of the dimensions on the left must be at least $dim(V)$, and equals $dim(V)$ precisely when the sum is a direct sum.
\end{proof}
\begin{definition} The excess of a multi-flag $\cal F$ of an IP-space $V$ is $e({\cal F}) := \left[\displaystyle\sum_{W\in{\cal F}} dim(W_{\cal F})\right] - dim(V)$.
\end{definition}
\begin{corollary} For any multi-flag $\cal F$, $e({\cal F})\ge 0$ and $e({\cal F}) = 0$ iff $\cal F$ is in general position.
\end{corollary}
Any semi-flag $\cal F$ of $V$ is in general position; this is a direct consequence of the total ordering. Also the multi-flag $\cal G$ formed by a pair of subspaces $W_1, W_2\subset V$ and their common intersection (together with $\{0\}$ and $V$) is always in general position. More generally, we have
\begin{lemma}\label{lemma:2} If ${\cal G}_i$, $i = 1,2$ are two semi-flags in the inner product space $V$ and $\cal F$ is the smallest multi-flag containing ${\cal G}_1$ and ${\cal G}_2$ (in other words, it is the multi-flag generated by these two semi-flags), then $\cal F$ is in general position.
\end{lemma}
\vskip.1in
Let ${\cal G}_i = \{W_{i,j}\}_{0\le j\le m_i}, i = 1,2$. Set $W^{j,k} := W_{1,j}\cap W_{2,k}$. Note that for each $i$, $\{W^{i,k}\}_{0\le k\le m_2}$ is a semi-flag in $W_{1,i}$, with the inclusion maps $W_{1,i}\hookrightarrow W_{1,i+1}$ inducing an inclusion of semi-flags $\{W^{i,k}\}_{0\le k\le m_2}\hookrightarrow \{W^{i+1,k}\}_{0\le k\le m_2}$. By induction on length in the first coordinate we may assume that the multi-flag of $W := W_{1,m_1-1}$ generated by $\wt{\cal G}_1 := \{W_{1,j}\}_{0\le j\le m_1-1}$ and $\wt{\cal G}_2 := \{W\cap W_{2,k}\}_{0\le k\le m_2}$ is in general position.
\begin{claim} Given $W\subseteq V$, viewed as a semi-flag ${\cal G}'$ of $V$ of length 3, and the semi-flag ${\cal G}_2 = \{W_{2,j}\}_{0\le j\le m_2}$ as above, the multi-flag of $V$ generated by ${\cal G}'$ and ${\cal G}_2$ is in general position.
\end{claim}
\begin{proof} The multi-flag $\cal F$ in question is constructed by intersecting $W$ with the elements of ${\cal G}_2$, producing the semi-flag ${\cal G}_2^W := W\cap {\cal G}_2 = \{W\cap W_{2,j}\}_{0\le j\le m_2}$ of $W$, which in turn includes into the semi-flag ${\cal G}_2$ of $V$. Constructed this way, the direct-sum splittings of $W$ induced by the semi-flag $W\cap {\cal G}_2$ and of $V$ induced by the semi-flag ${\cal G}_2$ are compatible: if we write $W_{2,j}$ as $(W\cap W_{2,j})\oplus (W\cap W_{2,j}\subset W_{2,j})^\perp$ for each $j$, then the orthogonal complement of $W_{2,k}$ in $W_{2,k+1}$ is the direct sum of the orthogonal complement of $(W\cap W_{2,k})$ in $(W\cap W_{2,k+1})$ and the orthogonal complement of $(W\cap W_{2,k}\subset W_{2,k})^\perp$ in $(W\cap W_{2,k+1}\subset W_{2,k+1})^\perp$. This yields a direct-sum decomposition of $V$ in terms of the associated graded terms of $\cal F$, completing the proof of both the claim and the lemma.
\end{proof}
On the other hand, one can construct simple examples of multi-flags which are not - in fact cannot be - in general position, as the following illustrates.
\begin{example} Let $\mathbb R\cong W_i\subset\mathbb R^2$ be three 1-dimensional subspaces of $\mathbb R^2$ intersecting in the origin, and let $\cal F$ be the multi-flag generated by this data. Then $\cal F$ is not in general position.
\end{example}
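The failure is easy to see numerically: encoding subspaces by basis matrices and computing ranks via QR, the associated graded pieces of this multi-flag have total dimension $3 > 2 = \dim V$, so the excess is $1$. A short sketch in \textsf{R} (the three lines are chosen arbitrarily):
\begin{verbatim}
## Excess of the multi-flag {0, W1, W2, W3, R^2} generated by three
## distinct lines through the origin in R^2.
rk <- function(M) qr(M)$rank
W1 <- c(1, 0); W2 <- c(0, 1); W3 <- c(1, 1)  # basis vectors of the lines
V  <- diag(2)
## dim W_F = dim W - dim(sum of the proper subspaces in S(W)):
dims <- c(rk(W1), rk(W2), rk(W3),        # S(Wi) = {0}, so (Wi)_F = Wi
          rk(V) - rk(cbind(W1, W2, W3))) # S(V) already sums to R^2
sum(dims) - rk(V)  # excess = 3 - 2 = 1: not in general position
\end{verbatim}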
\vskip.2in
Given an arbitrary collection of subspaces $T = \{W_\alpha\}$ of an IP-space $V$, the multi-flag generated by $T$ is the smallest multi-flag containing each element of $T$. It can be constructed as the closure of $T$ under the operations i) inclusion of $\{0\}, V$ and ii) taking finite intersections.
\vskip.2in
[Note: Example 1 also illustrates the important distinction between a configuration of subspaces being of {\it finite type} (having finitely many isomorphism classes of configurations), and the stronger property of {\it tameness} (the multi-flag generated by the subspaces is in general position).]
\vskip.2in
A multi-flag $\cal F$ of $V$ is a poset in a natural way; if $V_1,V_2\in {\cal F}$, then $V_1\le V_2$ as elements in $\cal F$ iff $V_1\subseteq V_2$ as subspaces of $V$. If $\cal F$ is a multi-flag of $V$, $\cal G$ a multi-flag of $W$, a {\it morphism} of multi-flags $(L,f):{\cal F}\to {\cal G}$ consists of
\begin{itemize}
\item a linear map from $L:V\to W$ and
\item a map of posets $f:{\cal F}\to {\cal G}$ such that
\item for each $U\in {\cal F}$, $L(U)\subseteq f(U)$.
\end{itemize}
Then $\{multi\mhyphen flags\}$ will denote the category of multi-flags and morphisms of such.
\vskip.2in
If $L:V\to W$ is a linear map of vector spaces and $\cal F$ is a multi-flag of $V$, the multi-flag generated by $\{L(U)\ |\ U\in {\cal F}\}\cup \{W\}$ is a multi-flag of $W$ which we denote by $L({\cal F})$ (or $\cal F$ pushed forward by $L$). In the other direction, if $\cal G$ is a multi-flag of $W$, we write $L^{-1}[{\cal G}]$ for the multi-flag $\{L^{-1}[U]\ |\ U\in {\cal G}\}\cup \{\{0\}\}$ of $V$ (i.e., $\cal G$ pulled back by $L$; as intersections are preserved under taking inverse images, this will be a multi-flag once we include - if needed - $\{0\}$). Obviously $L$ defines morphisms of multi-flags ${\cal F}\xrightarrow{(L,\iota)} L({\cal F})$, $L^{-1}[{\cal G}]\xrightarrow{(L,\iota')} {\cal G}$.
\vskip.3in
\subsection{The local structure of $\cal C$-modules}
Assume first that $M$ is a $\cal WIPC$-module. A {\it multi-flag of $M$} or {\it $M$-multi-flag} is a functor $F:{\cal C}\to \{multi\mhyphen flags\}$ which assigns
\begin{itemize}
\item to each $x\in obj({\cal C})$ a multi-flag $F(x)$ of $M(x)$;
\item to each $\phi_{xy}:M(x)\to M(y)$ a morphism of multi-flags $F(x)\to F(y)$
\end{itemize}
To any $\cal WIPC$-module $M$ we may associate the multi-flag $F_0$ which assigns to each $x\in obj({\cal C})$ the multi-flag $\{\{0\}, M(x)\}$ of $M(x)$. This is referred to as the {\it trivial} multi-flag of $M$.
\vskip.2in
A $\cal WIPC$-module $M$ determines a multi-flag on $M$. Precisely, the {\it local structure} ${\cal F}(M)$ of $M$ is defined recursively at each $x\in obj({\cal C})$ as follows: let $S_1(x)$ denote the set of morphisms of $\cal C$ originating at $x$, and $S_2(x)$ the set of morphisms terminating at $x$, $x\in obj({\cal C})$ (note that both sets contain $Id_x:x\to x$). Then
\vskip.05in
\begin{enumerate}
\item[\underbar{LS1}] ${\cal F}_0(M)(x) =$ the multi-flag of $M(x)$ generated by
\[
\{\ker(\phi_{xy}:M(x)\to M(y))\}_{\phi_{xy}\in S_1(x)}\cup \{im(\phi_{zx} : M(z)\to M(x))\}_{\phi_{zx}\in S_2(x)};
\]
\item[\underbar{LS2}] For $n\ge 0$, ${\cal F}_{n+1}(M)(x) =$ the multi-flag of $M(x)$ generated by
\begin{itemize}
\item[{LS2.1}] $\phi_{xy}^{-1}[W]\subset M(x)$, where $W\in{\cal F}_n(M)(y)$ and $\phi_{xy}\in S_1(x)$;
\item [{LS2.2}] $\phi_{zx}[W]\subset M(x)$, where $W\in{\cal F}_n(M)(z)$ and $\phi_{zx}\in S_2(x)$;
\end{itemize}
\item [\underbar{LS3}]${\cal F}(M)(x) = \varinjlim {\cal F}_n(M)(x)$.
\end{enumerate}
More generally, starting with a multi-flag $F$ on $M$, the local structure of $M$ relative to $F$ is arrived at in exactly the same fashion, but starting in LS1 with the multi-flag generated (at each object $x$) by ${\cal F}_0(M)(x)$ and $F(x)$. The resulting direct limit is denoted ${\cal F}^F(M)$. Thus the local structure of $M$ (without superscript) is the local structure of $M$ relative to the trivial multi-flag on $M$. In almost all cases we will only be concerned with the local structure relative to the trivial multi-flag on $M$.
\begin{proposition}\label{prop:invimage} For all $k\ge 1$, $W\in {\cal F}_k(M)(x)$, and $\phi_{zx}:M(z)\to M(x)$, there is a unique maximal element $W'\in {\cal F}_{k+1}(M)(z)$ with $\phi_{zx}(W') = W$.
\end{proposition}
\begin{proof} This is an immediate consequence of property (LS2.1).
\end{proof}
\begin{definition} The local structure of a $\cal WIPC$-module $M$ is the functor ${\cal F}(M)$, which associates to each vertex $x\in obj({\cal C})$ the multi-flag ${\cal F}(M)(x)$.
\end{definition}
A key question arises as to whether the direct limit used in defining ${\cal F}(M)(x)$ stabilizes at a finite stage. For infinite fields $k$ it turns out that this property is related to the existence of {\it holonomy}, as we will see below. For now, we include it as a definition.
\begin{definition} The local structure on $M$ is \underbar{locally stable} at $x\in obj({\cal C})$ iff there exists $N = N_x$ such that ${\cal F}_n(M)(x)\inj {\cal F}_{n+1}(M)(x)$ is the identity map whenever $n\ge N$. It is \underbar{stable} if it is locally stable at each object. It is \underbar{strongly stable} if for all \underbar{finite} multi-flags $F$ on $M$ there exists $N = N(F)$ such that ${\cal F}^F(M)(x) = {\cal F}^F_N(M)(x)$ for all $x\in obj({\cal C})$.
\end{definition}
In almost all applications of this definition we will only be concerned with stability, not the related notion of strong stability. The one exception occurs in the statement and proof of Theorem \ref{thm:6} below.
\vskip.2in
For each $0\le k\le \infty$ and at each object $x$ we may consider the associated graded ${\cal F}_k(M)_*(x)$ of ${\cal F}_k(M)(x)$. Stabilization via direct limit in the construction of ${\cal F}(M)$ yields a multi-flag structure that is preserved under the morphisms of the $\cal C$-module $M$. The following result identifies the effect of a morphism on the associated graded limit ${\cal F}(M)_*$, under the more restrictive hypothesis that $M$ is equipped with an inner product structure (which guarantees that the relative orthogonal complements coming from the associated graded are compatible under the morphisms of $M$).
\begin{theorem}\label{thm:1} Let $M$ be an $\cal IPC$-module with stable local structure. Then for all $k\ge 0$, $x,y,z\in obj({\cal C})$, $W\in {\cal F}(M)(x)$, $\phi_{zx}:M(z)\to M(x)$, and $\phi_{xy}:M(x)\to M(y)$
\begin{enumerate}
\item The morphisms of $M$ and their inverses induce well-defined maps of associated graded sets
\begin{gather*}
\phi_{xy}:{\cal F}(M)_*(x)\to {\cal F}(M)_*(y)\\
\phi_{zx}^{-1}: {\cal F}(M)_*(x)\to {\cal F}(M)_*(z)
\end{gather*}
\item $\phi_{xy}(W)\in {\cal F}(M)(y)$, and either $\phi_{xy}(W_{\cal F}) = \{0\}$, or $\phi_{xy}:W_{\cal F}\xrightarrow{\cong}\phi_{xy}(W_{\cal F}) = \left(\phi_{xy}(W_{\cal F})\right)_{\cal F}$ where $\phi_{xy}(W_{\cal F})$ denotes the element in the associated graded ${\cal F}(M)_*(y)$ induced by $\phi_{xy}(W)$;
\item either $im(\phi_{zx})\cap W_{\cal F} = \{0\}$, or there is a canonically defined element $U_{\cal F} = \left(\phi_{zx}^{-1}[W_{\cal F}]\right)_{\cal F} = \left(\phi_{zx}^{-1}[W]\right)_{\cal F}\in {\cal F}(M)_*(z)$ with $\phi_{zx}:U_{\cal F}\xrightarrow{\cong} W_{\cal F}$.
\end{enumerate}
\end{theorem}
\begin{proof} Stabilization with respect to the operations (LS1) and (LS2), as given in (LS3), implies that for any object $x$, morphisms $\phi_{xy},\phi_{zx}$, and $W\in {\cal F}(M)(x)$, we have $\phi_{xy}(W)\in {\cal F}(M)(y)$ and $\phi_{zx}^{-1}[W]\in {\cal F}(M)(z)$, verifying the first statement. Let $K = \ker(\phi_{xy})\cap W$. Then either $K=W$ or $K$ is a proper subset of $W$. In the former case, $\phi_{xy}(W_{\cal F}) = \{0\}$, while in the latter we see (again, by stabilization) that $K\in S(W)$ and so $K\cap W_{\cal F} = \{0\}$, implying that $W_{\cal F}$ maps isomorphically to its image under $\phi_{xy}$. Moreover, in this last case $\phi_{xy}$ will map $S(W)$ surjectively to $S(\phi_{xy}(W))$, implying the equality $\phi_{xy}(W_{\cal F}) = \left(\phi_{xy}(W_{\cal F})\right)_{\cal F}$.
\vskip.2in
Now given $\phi_{zx}:M(z)\to M(x)$, let $U = \phi_{zx}^{-1}[W]\in {\cal F}(M)(z)$. As before, the two possibilities are that $\phi_{zx}(U) = W$ or that $T := \phi_{zx}(U)\subsetneq W$. In the first case, $\phi_{zx}$ induces a surjective map of sets $S(U)\surj S(W)$, and so will map $U_{\cal F}$ surjectively to $W_{\cal F}$. By statement 2. of the theorem, this surjection must be an isomorphism. In the second case we see that the intersection $im(\phi_{zx})\cap W$ is an element of $S(W)$ (as ${\cal F}(M)(x)$ is closed under intersections), and so $W_{\cal F}\cap im(\phi_{zx}) = \{0\}$ by the definition of $W_{\cal F}$.
\end{proof}
\vskip.1in
Using the local structure of $M$, we define the {\it excess} of a $\cal WIPC$-module $M$ as
\[
e(M) = \sum_{x\in obj({\cal C})} e({\cal F}(M)(x))
\]
We say ${\cal F}(M)$ is {\it in general position} at the vertex $x$ iff ${\cal F}(M)(x)$ is in general position as defined above; in other words, if $e({\cal F}(M)(x)) = 0$. Thus ${\cal F}(M)$ is {\it in general position} (without restriction) iff $e(M) = 0$. The previous theorem implies
\begin{corollary}\label{cor:2} ${\cal F}(M)$ is {\it in general position} at the vertex $x$ if and only if $e({\cal F}(M)(x)) = 0$. It is in general position (without restriction) iff $e(M) = 0$.
\end{corollary}
Note that as $M(x)$ is finite-dimensional for each $x\in obj({\cal C})$, ${\cal F}(M)(x)$ must be locally stable at $x$ if it is in general position (in fact, general position is a much stronger requirement).
\vskip.2in
Now assume given a $\cal C$-module $M$ without any additional structure. A multi-flag on $M$ is then defined to be a multi-flag on the $\cal WIPC$-module obtained by equipping $M$ with an arbitrary weak inner product structure. Differing choices of weak inner product on $M$ affect the choice of relative orthogonal complements appearing in the associated graded at each object via equation (\ref{eqn:one}). However the constructions in LS1, LS2, and LS3 are independent of the choice of inner product, as are the definitions of excess and stability at an object and also for the module as a whole. So the results stated above for $\cal WIPC$-modules may be extended to $\cal C$-modules. The only result requiring an actual $\cal IPC$-structure is Theorem \ref{thm:1}.
\vskip.5in
\section{Statement and proof of the main results} In discussing our structural results, we first restrict to the case $M$ is an $\cal IPC$-module, and then investigate what properties still hold for more general $\cal WIPC$-modules.
\subsection{Blocks, generalized barcodes, and tame $\cal C$-modules} To understand how blocks and generalized barcodes arise, we first need to identify the type of subcategory on which they are supported. For a connected poset category $\cal C$, its oriented (minimal) graph $\Gamma = \Gamma({\cal C})$ was defined above. A subgraph $\Gamma'\subset\Gamma$ will be called {\it admissible} if
\begin{itemize}
\item it is connected;
\item it is pathwise full: if $v_1e_1v_2e_2\dots v_{k-1}e_{k-1}v_k$ is an oriented path in $\Gamma'$ connecting $v_1$ and $v_k$, and $(v_1=w_1)e'_1w_2e'_2\dots w_{l-1}e'_{l-1}(w_l = v_k)$ is any other path in $\Gamma$ connecting $v_1$ and $v_k$ then the path $v_1=w_1e'_1w_2e'_2\dots w_{l-1}e'_{l-1}w_l$ is also in $\Gamma'$.
\end{itemize}
Any admissible subgraph $\Gamma'$ of $\Gamma$ determines a unique subcategory ${\cal C}'\subset {\cal C}$ for which $\Gamma({\cal C}') = \Gamma'$, and we will call a subcategory ${\cal C}'\subset {\cal C}$ admissible if $\Gamma({\cal C}')$ is an admissible subgraph of $\Gamma({\cal C})$. If $M'\subset M$ is a sub-$\cal C$-module of the $\cal C$-module $M$, its {\it support} will refer to the full subcategory ${\cal C}(M')\subset {\cal C}$ generated by $\{x\in obj({\cal C})\ |\ M'(x)\ne \{0\} \}$. It is easily seen that being a submodule of $M$ (rather than just a collection of subspaces indexed on the objects of $\cal C$) implies that the support of $M'$, if connected, is an admissible subcategory of $\cal C$ in the above sense. A {\it block} will refer to a sub-$\cal C$-module $M'$ of $M$ for which $\phi_{xy}:M'(x)\xrightarrow{\cong} M'(y)$ whenever $x,y\in obj({\cal C}(M'))$ (any morphism between non-zero vertex spaces of $M'$ is an isomorphism). Finally, $M'$ is a {\it generalized barcode} (GBC) for $M$ if it is a block where $dim(M'(x)) = 1$ for all $x\in obj({\cal C}(M'))$.
\vskip.2in
It is evident that if $M'\subset M$ is a GBC, it is an indecomposable $\cal C$-submodule of $M$. If $\Gamma$ represents an oriented graph, we write $\ov{\Gamma}$ for the underlying unoriented graph. Unlike the particular case of persistence (or more generally zig-zag) modules, blocks occurring as $\cal C$-modules for an arbitrary ps-category may not decompose into a direct sum of GBCs. The following two simple oriented graphs illustrate the obstruction.
\vskip.5in
({\rm D1})\vskip-.4in
\centerline{
\xymatrix{
\bullet\ar[rr]\ar[dd] && \bullet\ar[dd] &&&& \bullet && \bullet\ar[dd]\ar[ll]\\
& \Gamma_1 & &&&& & \Gamma_2 &\\
\bullet\ar[rr] && \bullet &&&& \bullet\ar[uu]\ar[rr] && \bullet
}}
\vskip.2in
For a block represented by the graph $\Gamma_1$ on the left, the fact that $\cal C$ is a poset category implies that, even though the underlying unoriented graph is a closed loop, going once around the loop yields a composition of isomorphisms which is the identity. As a consequence, it is easily seen that a block whose support is an admissible category ${\cal C}'$ with graph $\Gamma({\cal C}') = \Gamma_1$ can be written as a direct sum of GBCs indexed on ${\cal C}'$ (see the lemma below). However, if the graph of the supporting subcategory is $\Gamma_2$ as shown on the right, then the partial ordering imposes no restrictions on the composition of isomorphisms and their inverses, starting and ending at the same vertex. For such a block with base field $\mathbb R$ or $\mathbb C$, the moduli space of isomorphism types of blocks of a given vertex dimension $n$ is non-discrete for all $n>1$ and can be identified with the space of $n\times n$ Jordan normal forms. The essential difference between these two graphs lies in the fact that the category on the left exhibits one initial and one terminal object, while the category on the right exhibits two of each. Said another way, the zig-zag length of the simple closed loop on the left is two, while on the right it is four. We remark that the obstruction here is not simply a function of the underlying unoriented graph, as $\ov{\Gamma}_1 = \ov{\Gamma}_2$ in the above example. A closed loop in $\Gamma({\cal C})$ is an {\it h-loop} if it is able to support a sequence of isomorphisms whose composition going once around, starting and ending at the same vertex, is other than the identity map (``h'' for holonomy). Thus $\Gamma_2$ above exhibits an h-loop. Note that the existence of an h-loop implies the existence of a simple h-loop.
\vskip.2in
We want explicit criteria that identify precisely when this can happen. One might think that the zig-zag length of a simple closed loop is enough, but this turns out not to be the case. The following illustrates what can happen.
\vskip.5in
({\rm D2})\vskip-.4in
\centerline{
\xymatrix{
& \bullet\ar[rr]\ar[dd] && \bullet\ar[rr]\ar@{-->}[dd] && \bullet\ar[dd]\\
\\
\Gamma({\cal C}'): & \bullet\ar[dd]\ar@{-->}[rr] && \bullet\ar[rr]\ar[dd] && \bullet\\
\\
&\bullet\ar[rr] && \bullet &&
}}
\vskip.2in
Suppose $\cal C$ indexes $3\times 3$ two-dimensional persistence modules (so that $\Gamma({\cal C})$ looks like an oriented two-dimensional $3\times 3$ lattice, with arrows pointing down and also to the right). Suppose ${\cal C}'\subset {\cal C}$ is an admissible subcategory of $\cal C$ with $\Gamma({\cal C}')$ containing the above simple closed curve indicated by the solid arrows. The zig-zag length of the curve is four, suggesting that it might support holonomy and so be a potential h-loop. However, the admissibility condition forces ${\cal C}'$ to also contain the morphisms represented by the dotted arrows, resulting in three copies of the graph $\Gamma_1$ above. Including these morphisms one sees that holonomy in this case is not possible.
\vskip.2in
Given an admissible subcategory ${\cal C}'$ of $\cal C$, we will call ${\cal C}'$ {\it h-free} if $\Gamma({\cal C}')$ does not contain any simple closed h-loops (and therefore no closed h-loops).
\begin{lemma}\label{lemma:3} Any block $M'$ of $M$ whose support ${\cal C}'$ is h-free can be written (non-uniquely) as a finite direct sum of GBCs all having the same support as $M'$.
\end{lemma}
\begin{proof} Fix $x\in obj(supp(M'))$ and a basis $\{{\bf v}_1,\dots,{\bf v}_n\}$ for $M'(x)$. Let $y\in obj(supp(M'))$, and choose a path $xe_1x_1e_2x_2\dots x_{k-1}e_ky$ from $x_0 = x$ to $x_k = y$ in $\ov{\Gamma}(M')$. Each edge $e_j$ is represented by an invertible linear map $\lambda_j = (\phi_{x_{j-1}x_j})^{\pm 1}$, with
\[
\lambda := \lambda_k\circ\lambda_{k-1}\circ\dots\circ\lambda_1:M'(x)\xrightarrow{\cong} M'(y)
\]
As ${\cal C}' = supp(M')$ is h-free, the isomorphism between $M'(x)$ and $M'(y)$ resulting from the above construction is independent of the choice of path in $\ov{\Gamma}(M')$ from $x$ to $y$, and is uniquely determined by the ${\cal C}'$-module $M'$. Hence the basis $\{{\bf v}_1,\dots,{\bf v}_n\}$ for $M'(x)$ determines one for $M'(y)$ given as $\{\lambda({\bf v}_1),\dots,\lambda({\bf v}_n)\}$, which is independent of the choice of path connecting these two vertices. In this way the basis at $M'(x)$ may be compatibly extended to all other vertices of ${\cal C}'$, due to the connectivity hypothesis. The result is a system of {\it compatible bases} for the ${\cal C}'$-module $M'$, from which the splitting of $M'$ into a direct sum of GBCs each supported by ${\cal C}'$ follows.
\end{proof}
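The proof is algorithmic: transport a fixed basis of $M'(x)$ along a spanning tree of $\ov{\Gamma}(M')$. The following minimal computational sketch (in which the graph encoding and all names are our own, with matrices standing in for the maps $\lambda_j$) makes this explicit.
\begin{verbatim}
# Sketch: propagating a compatible basis over an h-free block.
# adj[v] lists pairs (w, A) with A an invertible matrix representing
# the edge v -> w (for a reversed structure map, pass the inverse).
# h-freeness makes the result independent of the spanning tree used.
import numpy as np
from collections import deque

def propagate_bases(adj, root, basis):
    bases = {root: basis}          # basis vectors are matrix columns
    queue = deque([root])
    while queue:
        v = queue.popleft()
        for w, A in adj[v]:
            if w not in bases:
                bases[w] = A @ bases[v]   # transport along the edge
                queue.append(w)
    return bases

# Toy block on a path x -> y -> z with two-dimensional vertex spaces:
A = np.array([[1.0, 1.0], [0.0, 1.0]])
Ainv = np.linalg.inv(A)
adj = {0: [(1, A)], 1: [(0, Ainv), (2, A)], 2: [(1, Ainv)]}
bases = propagate_bases(adj, 0, np.eye(2))
\end{verbatim}
By h-freeness the output does not depend on the tree traversed, which is exactly the content of the lemma.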
A $\cal C$-module $M$ is said to be {\it weakly tame} iff it can be expressed as a direct sum of blocks. It is {\it strongly tame} or simply {\it tame} if, in addition, each of those blocks may be further decomposed as a direct sum of GBCs.
\vskip.3in
\subsection{The main results} We first establish the relation between non-zero elements of the associated graded at an object of $\cal C$ and their corresponding categorical support. We assume throughout this section that $M$ is an $\cal IPC$-module with stable local structure.
\vskip.2in
Suppose $W\in {\cal F}(M)(x)$ with $0\ne W_{\cal F}\in {\cal F}(M)_*(x)$. Then $W_{\cal F}$ uniquely determines a subcategory ${\cal C}(W_{\cal F})\subset {\cal C}$ satisfying the following three properties:
\begin{enumerate}
\item $x\in obj({\cal C}(W_{\cal F}))$;
\item For each path $xe_1x_1e_2\dots x_{k-1}e_ky$ in $\Gamma({\cal C}(W_{\cal F}))$ beginning at $x$, with each edge $e_j$ represented by $\lambda_j = (\phi_{x_{j-1}x_j})^{\pm 1}$ ($\phi_{x_{j-1}x_j}$ a morphism in $\cal C$), $W_{\cal F}$ maps isomorphically under the composition $\lambda = \lambda_k\circ\lambda_{k-1}\circ\dots\circ\lambda_1$ to $0\ne \lambda(W_{\cal F})\in {\cal F}(M)_*(y)$;
\item ${\cal C}(W_{\cal F})$ is the largest subcategory of $\cal C$ satisfying properties 1. and 2.
\end{enumerate}
We refer to ${\cal C}(W_{\cal F})$ as the {\it block category} associated to $W_{\cal F}$. It is easy to see that $\varnothing\ne {\cal C}(W_{\cal F})$, and moreover that ${\cal C}(W_{\cal F})$ is admissible as defined above. Now let ${\cal S}({\cal C})$ denote the set of admissible subcategories of $\cal C$. If $x\in obj({\cal C})$ we write ${\cal S}_x{\cal C}$ for the subset of ${\cal S}({\cal C})$ consisting of those admissible ${\cal C}'\subset {\cal C}$ with $x\in obj({\cal C}')$.
\begin{lemma}\label{lemma:4} For each $x\in obj({\cal C})$ and $\cal IPC$-module $M$, the assignment
\begin{gather*}
{\cal A}_x: {\cal F}(M)_*(x)\backslash\{0\}\longrightarrow {\cal S}_x{\cal C}\\
0\ne W_{\cal F}\mapsto {\cal C}(W_{\cal F})
\end{gather*}
defines an injection from ${\cal F}(M)_*(x)\backslash\{0\}$ to the set of admissible subcategories of $\cal C$ which occur as the block category of a non-zero element of ${\cal F}(M)_*(x)$.
\end{lemma}
\begin{proof} The fact that ${\cal C}(W_{\cal F})$ is uniquely determined by $W_{\cal F}$ ensures the map is well-defined. To see that the map is 1-1, we observe that corresponding to each ${\cal C}'\in {\cal S}_x{\cal C}$ is a unique maximal $W\in {\cal F}(M)(x)$ with image-kernel-intersection data determined by the manner in which each vertex $y\in obj({\cal C}')$ connects back to $x$. More precisely, the subspace $W$ is the largest element of ${\cal F}(M)(x)$ satisfying the property that for every
\begin{itemize}
\item $y\in obj({\cal C}')$;
\item zig-zag sequence $p$ of morphisms in ${\cal C}'$ connecting $x$ and $y$;
\item morphism $\phi_{yz}$ in $\cal C$ from $y$ to $z\in obj({\cal C})\backslash obj({\cal C}')$;
\end{itemize}
the pull-back and push-forward of $ker(\phi_{yz})$ along the zig-zag path back from $M(y)$ to $M(x)$ yields a subspace of $M(x)$ containing $W$. This clearly determines $W$ uniquely; note that the conditions may result in $W = \{0\}$. Restricting to the image of ${\cal A}_x$ we arrive at the desired result.
\end{proof}
Write ${\cal AS}({\cal C})$ for the subset of ${\cal S}({\cal C})$ consisting of those admissible subcategories which lie in the image of ${\cal A}_x$ for some $x\in obj({\cal C})$. This lemma, in conjunction with Theorem \ref{thm:1}, implies
\begin{theorem}\label{thm:2} Let $M$ be an $\cal IPC$-module. Each ${\cal C}'\in {\cal AS}({\cal C})$ uniquely determines a block $\cal C$-submodule $M({\cal C}')$ of $M$, where $M({\cal C}')(x) = $ the unique non-zero element $W_{\cal F}$ of ${\cal F}(M)_*(x)$ for which ${\cal C}(W_{\cal F}) = {\cal C}'$.
\end{theorem}
\begin{proof} Fix ${\cal C}'\in {\cal AS}({\cal C})$ and $x\in obj({\cal C}')$. By Theorem \ref{thm:1} and Lemma \ref{lemma:4}, for any $\phi_{xy'}\in Hom({\cal C}')$, $W_{\cal F} := {\cal A}_x^{-1}({\cal C}')$ maps isomorphically under $\phi_{xy'}$ to $\phi_{xy'}(W_{\cal F})\in {\cal F}(M)_*(y')$.
\vskip.1in
Now any other vertex $y\in\Gamma({\cal C}')$ is connected to $x$ by a zig-zag path of oriented edges. Let $\lambda_{xy}$ represent such a path, corresponding to a composition sequence of morphisms and their inverses. As ${\cal C}'$ is not required to be h-free, the resulting isomorphism between $W_{\cal F}$ and $\lambda_{xy}(W_{\cal F})$ is potentially dependent on the choice of path $\lambda_{xy}$ in $\Gamma({\cal C}')$. However the space itself is not. Moreover the same lemma and theorem also imply that for any $\phi_{xz}\in Hom({\cal C})$ with $z\in obj({\cal C})\backslash obj({\cal C}')$, $W_{\cal F}$ maps to $0$ under $\phi_{xz}$. This is all that is needed to identify an actual submodule of $M$ via the assignments
\begin{itemize}
\item $M({\cal C}')(y) = \lambda_{xy}(W_{\cal F})$ for $\lambda_{xy}$ a zig-zag path between $x$ and $y$ in $\Gamma({\cal C}')$;
\item $M({\cal C}')(z) = 0$ for $z\in obj({\cal C})\backslash obj({\cal C}')$
\end{itemize}
As defined, $M({\cal C}')$ is a block, completing the proof.
\end{proof}
We define the {\it (weakly) tame cover} of $M$ as
\begin{equation}
T(M) = \underset{{\cal C}'\in {\cal AS}({\cal C})}{\bigoplus} M({\cal C}')
\end{equation}
with the projection $p_M:T(M)\surj M$ given on each summand $M({\cal C}')$ by the inclusion provided by the previous theorem. We are now in a position to state the main result.
\begin{theorem}\label{thm:3} An $\cal IPC$-module $M$ is weakly tame iff its excess $e(M) = 0$. In this case the decomposition into a direct sum of blocks is basis-free, depending only on the underlying $\cal IPC$-module $M$, and is unique up to reordering. If in addition $\cal C$ is h-free then $M$ is tame, as each block decomposes as a direct sum of GBCs, uniquely up to reordering after fixing a choice of basis at a single vertex.
\end{theorem}
\begin{proof} The excess at a given vertex $x$ is zero iff the projection map at that vertex is an isomorphism, as the excess is equal to the dimension of the kernel of $p_M$ at $x$. Moreover, if $\cal C$ is h-free then each block further decomposes in the manner described by Lemma \ref{lemma:3}; the precise way in which this decomposition occurs will depend on a choice of basis at a vertex in the support of that block, but once that has been chosen, the basis at each other vertex is uniquely determined. All that remains is to decide the order in which to write the direct sum.
\end{proof}
Note that the excess of $M$ need not be finite. If $\cal C$ is not h-free and $M$ exhibits holonomy at a vertex $x$, then the tame cover of $M$ might be infinite dimensional at $x$, which will make the overall excess infinite. Nevertheless, $T(M)$ in all cases should be viewed as the ``closest'' weakly tame approximation to $M$, which equals $M$ if and only if $M$ itself is weakly tame. Another way to view this proximity is to observe that $T(M)$ and the projection to $M$ are constructed in such a way that $p_M$ induces a global isomorphism of associated graded objects
\[
p_M: {\cal F}(T(M))_*\xrightarrow{\cong} {\cal F}(M)_*
\]
so that $T(M)$ is uniquely characterized up to isomorphism as the weakly tame $\cal C$-module which maps to $M$ by a map which induces an isomorphism of the associated graded local structure.
\vskip.2in
To conclude this subsection we illustrate the efficiency of this approach by giving a geodesic proof of the classical structure theorem for finite 1-dimensional persistence modules. Let us first observe that such a module $M$ may be equipped with an inner product structure; the proof follows easily by induction on the length of $M$. So for the following theorem we may assume such an IP-structure has been given.
\begin{theorem}\label{thm:4} If ${\cal C} \cong \underline{n}$ is the categorical representation of a finite totally ordered set, then any $\cal C$-module $M$ is tame.
\end{theorem}
\begin{proof} By Lemma \ref{lemma:2}, the multi-flag ${\cal F}(M)(x)$ is in general position for each object $x$, implying the excess $e(M) = 0$, so $M$ is weakly tame by the previous theorem. But there are no non-trivial closed zig-zag loops in $\Gamma({\cal C})$, so $\cal C$ is h-free and $M$ is tame.
\end{proof}
\vskip.3in
\subsection{The GBC vector for $\cal C$-modules} In the absence of an $\cal IP$-structure on the $\cal C$-module $M$, assuming only that $M$ is a $\cal WIP$-module, we may not necessarily be able to construct a weakly tame cover of $M$, but we can still extract useful numerical information. By the results of Proposition \ref{prop:invimage} and the proof of Theorem \ref{thm:1}, we see that the results of that theorem still hold for the associated graded ${\cal F}(M)_*$, assuming only a $\cal WIPC$-module structure. Moreover, a slightly weaker version of the results of Theorem \ref{thm:2} still applies for this weaker $\cal WIPC$-structure. Summarizing,
\begin{theorem}\label{thm:wipc} Let $M$ be a $\cal WIPC$-module with stable local structure. Then for all $k\ge 0$, $x,y,z\in obj({\cal C})$, $\phi_{zx}:M(z)\to M(x)$, and $\phi_{xy}:M(x)\to M(y)$, the morphisms of $M$ and their inverses induce well-defined maps of associated graded sets
\begin{gather*}
\phi_{xy}:{\cal F}(M)_*(x)\to {\cal F}(M)_*(y)\\
\phi_{zx}^{-1}: {\cal F}(M)_*(x)\to {\cal F}(M)_*(z)
\end{gather*}
Moreover, if $W\in {\cal F}(M)_*(x)$, viewed as a subquotient space of $M(x)$, then either $dim(\phi_{xy}(W)) = dim(W)$ or $dim(\phi_{xy}(W)) = 0$. Similarly, either $dim(\phi_{zx}^{-1}(W)) = dim(W)$ or $dim(\phi_{zx}^{-1}(W)) = 0$. In this way we may, as before, define the support ${\cal C}(W)$ of $W$, which will be an admissible subcategory of $\cal C$. Each ${\cal C}'\in {\cal AS}({\cal C})$ uniquely determines a block $\cal C$-module $M({\cal C}')$, where $M({\cal C}')(x) = $ the unique non-zero element $W$ of ${\cal F}(M)_*(x)$ for which ${\cal C}(W) = {\cal C}'$.
\end{theorem}
The lack of IP-structure means that, unlike the statement of Theorem \ref{thm:2}, we cannot identify the $\cal C$-module $M({\cal C}')$ as an actual submodule of $M$, or even construct a map of $\cal C$-modules $M({\cal C}')\to M$, as $M({\cal C}')$ is derived purely from the associated graded local structure ${\cal F}(M)_*$.
\vskip.2in
Nevertheless, Theorem \ref{thm:wipc} implies the dimension of each of these blocks (given as the dimension at any element in the support) is well-defined, as $dim(M({\cal C}')(x)) = dim(M({\cal C}')(y))$ for any pair $x,y\in obj({\cal C}')$ by the theorem above. The {\it generalized bar code dimension} of $M$ is the vector ${\cal S}({\cal C})\to \mathbb W$ given by
\[
GBCD(M)({\cal C}') =
\begin{cases}
dim(M({\cal C}')) := dim\big(M({\cal C}')(x)\big), x\in obj({\cal C}')\qquad\text{if }{\cal C}'\in {\cal AS}({\cal C})\\
0 \hskip2.68in\text{if }{\cal C}'\notin {\cal AS}({\cal C})
\end{cases}
\]
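For example, when ${\cal C} = \underline{n}$ and $M$ is an ordinary one-dimensional persistence module, the admissible subcategories are the intervals of $\{1,\dots,n\}$, and $GBCD(M)$ assigns to each interval the number of bars of the barcode of $M$ with exactly that support; the GBC vector may thus be viewed as the natural extension of the barcode to arbitrary finite ps-categories.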
Finally if $M$ is simply a $\cal C$-module, let $M'$ denote $M$ with a fixed weak inner product structure. Setting
\[
GBCD(M) := GBCD(M')
\]
yields a well-defined function $GBCD$ on $\cal C$-modules, as one easily sees that $GBCD(M')$ is independent of the choice of lift of $M$ to a $\cal WIPC$-module; moreover $GBCD(M)$ is an isomorphism invariant of $M$.
\vskip.3in
\subsection{Obstructions to admitting an inner product} The obstruction to imposing an IP-structure on a $\cal C$-module is, in general, non-trivial.
\begin{theorem}\label{thm:obstr} Let ${\cal C} = {\cal C}_2$ be the poset category for which $\Gamma({\cal C}_2) = \Gamma_2$, as given in diagram (D1). Then there exist ${\cal C}_2$-modules $M$ which do not admit an inner product structure.
\end{theorem}
\begin{proof} Label the initial objects of $\cal C$ as $x_1, x_2$, and terminal objects as $y_1, y_2$, with morphisms $\phi_{i,j}:x_i\to y_j$, $1\le i,j\le 2$. For each $(i,j)$ fix an identification $M(x_i) = M(y_j) = \mathbb R$. In terms of this identification, let
\[
M(\phi_{i,j})({\bf v}) =
\begin{cases}
2{\bf v}\quad\text{if } (i,j) = (1,1)\\
{\bf v}\quad\ \text{otherwise}
\end{cases}
\]
The self-map $M(\phi_{1,2})^{-1}\circ M(\phi_{2,2})\circ M(\phi_{2,1})^{-1}\circ M(\phi_{1,1}): M(x_1)\to M(x_1)$ is given as scalar multiplication by $2$. There is no norm on $\mathbb R$ for which this map is norm-preserving; hence there cannot be any collection of inner products $\langle -,- \rangle_{i,j}$ on the vertex spaces $M(x_i)$, $M(y_j)$ giving $M$ the structure of an $\cal IPC$-module.
\end{proof}
More generally, we see that
\begin{theorem} If $\cal C$ admits holonomy, then there exist $\cal C$-modules which do not admit the structure of an inner product. Moreover, the obstruction to admitting an inner product is an isomorphism invariant.
\end{theorem}
However, for an important class of $\cal C$-modules the obstruction vanishes. An $n$-dimensional persistence module is defined as a $\cal C$-module where $\cal C$ is an $n$-dimensional {\it persistence category}, i.e., one isomorphic to $\underline{m_1}\times\underline{m_2}\times\dots\times\underline{m_n}$, where $\underline{m_p}$ is the categorical representation of the totally ordered set $\{1 < 2 < \dots < m_p\}$.
\begin{theorem} Any (finite) n-dimensional persistence module admits an IP-structure.
\end{theorem}
\begin{proof} It was already observed above that the statement is true for ordinary 1-dim.~persistence modules. So we may proceed by induction, assuming $n > 1$ and that the statement holds in dimensions less than $n$. Before proceeding we record the following useful lemmas. Let ${\cal C}[1]$ denote the categorical representation of the poset $\{0 < 1\}$, and let ${\cal C}[m] = \prod_{i=1}^m{\cal C}[1]$. This is a poset category with objects $m$-tuples $(\varepsilon_1,\dots,\varepsilon_m)$ and a unique morphism $(\varepsilon_1,\dots,\varepsilon_m)\to (\varepsilon'_1,\dots,\varepsilon'_m)$ iff $\varepsilon_j\le \varepsilon'_j, 1\le j\le m$. The oriented graph $\Gamma({\cal C}[m])$ may be viewed as the oriented 1-skeleton of a simplicial $m$-cube. Write $t$ for the terminal object $(1,1,\dots,1)$ in ${\cal C}[m]$, and let ${\cal C}[m,0]$ denote the full subcategory of ${\cal C}[m]$ on objects $obj({\cal C}[m])\backslash \{t\}$.
\begin{lemma} Let $M$ be a ${\cal C}[m]$-module, and let $M(0) = M|_{{\cal C}[m,0]}$ be the restriction of $M$ to the subcategory ${\cal C}[m,0]$. Then any inner product structure on $M(0)$ may be extended to one on $M$.
\end{lemma}
\begin{proof} Let $M'$ be the ${\cal C}[m]$-module defined by
\begin{align*}
M'|_{{\cal C}[m,0]} &= M|_{{\cal C}[m,0]}\\
M'(t) &= \underset{{\cal C}[m,0]}{colim}\ M'
\end{align*}
with the map $M'(\phi_{xt})$ given by the unique map to the colimit when $x\in obj({\cal C}[m,0])$. The inner product on $M(0)$ extends to a unique inner product on $M'$. We may then choose an inner product on $M(t)$ so that the unique morphism $M'(t)\to M(t)$ (determined by $M$) lies in $(IP/k)$. Fixing this inner product on $M(t)$ gives $M$ an IP-structure compatible with the given one on $M(0)$.
\end{proof}
For evident reasons we will refer to this as a {\it pushout extension} of the inner product. More generally, iterating the same line of argument yields
\begin{lemma}\label{lemma:5} Let $M$ be a ${\cal C}[m]$-module and $\wt{M} = M|_{{\cal C}'}$ where ${\cal C}'$ is an admissible subcategory of ${\cal C}[m]$ containing the initial object. Then any IP-structure on $\wt{M}$ admits a compatible extension to $M$.
\end{lemma}
Continuing with the proof of the theorem, let ${\cal C} = \underline{m_1}\times\underline{m_2}\times\dots\times\underline{m_n}$ with $\underline{m_p}$ the categorical representation of $\{1 < 2 < \dots < m_p\}$ as above. Let ${\cal C}_q = \underline{m_1}\times\underline{m_2}\times\dots\times\underline{m_{n-1}}\times \underline{q}$, viewed as a full subcategory of $\cal C$. Given a $\cal C$-module $M$, let $M_i$ be the ${\cal C}_i$-module constructed as the restriction of $M$ to ${\cal C}_i$. By induction on dimension, we may assume $M_1$ has been equipped with an IP-structure. By induction on the last index, assume that this IP-structure has been compatibly extended to $M_i$. Now $\Gamma({\cal C}_{i+1})$ can be viewed as being constructed from $\Gamma({\cal C}_i)$ via a sequence of $m = m_1m_2\dots m_{n-1}$ concatenations, where each step concatenates the previous graph with the graph $\Gamma({\cal C}[n])$ along an admissible subgraph of $\Gamma({\cal C}[n])$ containing the initial vertex. Denote this inclusive sequence of subgraphs by $\{\Gamma_\alpha\}_{1\le \alpha\le m}$; for each $\alpha$ let ${\cal C}_{\alpha}$ be the subcategory of ${\cal C}_{i+1}$ with $\Gamma({\cal C}_\alpha) = \Gamma_\alpha$. Finally, let $N_\alpha$ denote the restriction of $M$ to ${\cal C}_\alpha$, so that $N_1 = M_i$ and $N_m = M_{i+1}$. Then $N_1$ comes equipped with an IP-structure, and by Lemma \ref{lemma:5} an IP-structure on $N_j$ admits a pushout extension to one on $N_{j+1}$ for each $1\le j\le (m-1)$. Induction in this coordinate then implies the IP-structure on $M_i$ can be compatibly extended (via iterated pushouts) to one on $M_{i+1}$, completing the induction step. As $M_{m_n} = M$, this completes the proof of the theorem.
\end{proof}
\vskip.3in
\subsection{h-free modules} When is an indexing category $\cal C$ h-free? To better understand this phenomenon, we note that the graph $\Gamma_1$ in diagram (D1) - and the way it appears again in diagram (D2) - suggests it may be viewed from the perspective of homotopy theory: define an {\it elementary homotopy} of a closed zig-zag loop $\gamma$ in $\Gamma({\cal C})$ to be one which performs the following replacements in either direction
\vskip.5in
({\rm D3})\vskip-.4in
\centerline{
\xymatrix{
A\ar[rr]\ar[dd] && B && && && B\ar[dd]\\
&& & \ar@{<=>}[rr]& && &&\\
C && && && C\ar[rr] && D
}}
\vskip.2in
In other words, if $C\leftarrow A\rightarrow B$ is a segment of $\gamma$, we may replace $\gamma$ by $\gamma'$ in which the segment $C\leftarrow A\rightarrow B$ is replaced by $C\rightarrow D\leftarrow B$ with the rest of $\gamma$ remaining intact; a similar description applies in the other direction. We do not require the arrows in the above diagram to be represented by atomic morphisms, simply oriented paths between vertices.
\begin{lemma} If a zig-zag loop $\gamma$ in $\Gamma({\cal C})$ is equivalent, by a sequence of elementary homotopies, to a collection of simple closed loops of type $\Gamma_1$ as appearing in (D1), then $\gamma$ is h-free. If this is true for all zig-zag loops in $\Gamma({\cal C})$ then $\cal C$ itself is h-free.
\end{lemma}
\begin{proof} Because $\Gamma_1$ has no holonomy, replacing the connecting segment between B and C by moving in either direction in diagram (D3) does not change the holonomy of the closed path. Thus, if by a sequence of such replacements one reduces to a connected collection of closed loops of type $\Gamma_1$, the new loop - hence also the original loop - cannot have any holonomy.
\end{proof}
Call $\cal C$ {\it strongly h-free} if every zig-zag loop in $\Gamma({\cal C})$ satisfies the hypothesis of the above lemma. Given $n$ ps-categories ${\cal C}_1, {\cal C}_2,\dots,{\cal C}_n$, the graph of the $n$-fold cartesian product is given as
\[
\begin{split}
\Gamma({\cal C}_1\times{\cal C}_2\times\dots\times{\cal C}_n) = N_1({\cal C}_1\times{\cal C}_2\times\dots\times{\cal C}_n)
= diag(N({\cal C}_1)\times N({\cal C}_2)\times\dots\times N({\cal C}_n))_1\\
= diag(\Gamma({\cal C}_1)\times\Gamma({\cal C}_2)\times\dots\times\Gamma({\cal C}_n))
\end{split}
\]
the oriented 1-skeleton of the diagonal of the product of the oriented graphs of each category. Of particular interest are $n$-dimensional persistence categories, as defined above.
\begin{theorem}\label{thm:5} Finite $n$-dimensional persistence categories are strongly h-free.
\end{theorem}
\begin{proof} The statement is trivially true for $n=1$ (there are no simple closed loops), so assume $n\ge 2$. Let ${\cal C}_i = \underline{m_i}$, $1\le i\le n$.
\begin{claim} The statement is true for $n=2$.
\end{claim}
\begin{proof} Given a closed zig-zag loop $\gamma$ in $\Gamma({\cal C}_1\times {\cal C}_2)$, we may assume ${\bf a} = (a_1,a_2)$ are the coordinates of an initial vertex of the loop. We orient $\Gamma({\cal C}_1\times {\cal C}_2)$ so that it moves to the right in the first coordinate and downwards in the second coordinate, viewed as a lattice in $\mathbb R^2$. As the lattice is two-dimensional, we may assume that $\gamma$ moves away from $\bf a$ by a horizontal path to the right of length at least one, and a vertical downwards path of length also at least one. That means we may apply an elementary homotopy to the part of $\gamma$ containing $\bf a$ as indicated in diagram (D3) above, identifying $\bf a$ with the vertex ``A'' in the diagram, and replacing $C\leftarrow A\rightarrow B$ with $C\rightarrow D\leftarrow B$. If $D$ is already a vertex in $\gamma$, the result is a single simple zig-zag loop of type $\Gamma_1$, joined at $D$ with a closed zig-zag loop of total length less than that of $\gamma$. By induction on total length, both of these loops are h-free, hence so is the original $\gamma$. In the second case, $D$ was not in the original loop $\gamma$. In this case the total length doesn't change, but the total area enclosed by the curve (viewed as a closed curve in $\mathbb R^2$) decreases. By induction on total bounded area, the curve is h-free in this case as well, completing the proof of the claim.
\end{proof}
Continuing with the proof of the theorem, we assume $n > 2$, and that we are given a zig-zag loop $\gamma$ in $\Gamma({\cal C}_1\times{\cal C}_2\times\dots\times{\cal C}_n)$. From the above description we may apply a sequence of elementary homotopies in the first two coordinates to yield a zig-zag loop $\gamma'$ in $\Gamma({\cal C}_1\times{\cal C}_2\times\dots\times{\cal C}_n)$ which is h-free iff $\gamma$ is, but where the first two coordinates are constant. The theorem follows by induction on $n$.
\end{proof}
We conclude this subsection with an illustration of how holonomy can prevent stability of the local structure over an infinite field. Consider the indexing category $\cal C$ whose graph $\Gamma({\cal C})$ is
\vskip.5in
({\rm D4})\vskip-.4in
\centerline{
\xymatrix{
\bullet && y\ar[dd]\ar[ll] && x\ar[ll]\\
& \Gamma_2 & &&\\
\bullet\ar[uu]\ar[rr] && \bullet &&
}}
\vskip.2in
where the part of the graph labeled $\Gamma_2$ is as in (D1). Suppose the base field is $k = \mathbb R$, and let $M$ be the $\cal C$-module which assigns the vector space $\mathbb R^2$ to each vertex in $\Gamma_2$, and assigns $\mathbb R$ to $x$. Each arrow in the $\Gamma_2$-part of the graph is an isomorphism, chosen so that going once around the simple closed zig-zag loop is represented by an element of $SO(2)\cong S^1$ of infinite order (i.e., an irrational rotation). Let $M(x)$ map to $M(y)$ by an injection. In such an arrangement, the local structure of $M$ at the vertex $y$, or at the other three vertices of $\cal C$ lying in the graph $\Gamma_2$, never stabilizes.
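To make this concrete, one may take the composite around the loop to be rotation by one radian,
\[
R \;=\; \begin{pmatrix} \cos 1 & -\sin 1\\ \sin 1 & \cos 1\end{pmatrix}.
\]
As $1/\pi$ is irrational, the lines $\{R^m\ell\}_{m\ge 0}$, where $\ell\subset M(y)\cong\mathbb R^2$ denotes the image of $M(x)$, form an infinite (indeed dense) family of distinct one-dimensional subspaces of $\mathbb R^2$; each pass around the loop therefore refines the multi-flag at $y$ by a new subspace, and stabilization at any finite stage is impossible.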
\vskip.3in
\subsection{Modules with stable local structure} Stability of the local structure can be verified directly in certain important cases. We have given the definition of an $n$-dimensional persistence category above. This construction admits a natural zig-zag generalization. Write \underbar{zm} for any poset of the form $\{1\ R_1\ 2\ R_2\ 3\dots (m-1)\ R_{m-1}\ m\}$ where $R_i = $ ``$\le$'' or ``$\ge$'' for each $i$. A zig-zag module of length $m$, as defined in \cite{cd}, is a functor $M:$ \underbar{zm}$\to (vect/k)$ for some choice of zig-zag structure on the underlying set of integers $\{1,2,\dots,m\}$. More generally, an $n$-dimensional zig-zag category $\cal C$ is one isomorphic to ${\rm \underline{zm_1}}\times{\rm \underline{zm_2}}\times\dots\times{\rm \underline{zm_n}}$ for some choice of $\rm \underline{zm_i}$, $1\le i\le n$, and a finite $n$-dimensional zig-zag module is defined to be a functor
\[
M : {\rm \underline{zm_1}}\times{\rm \underline{zm_2}}\times\dots\times{\rm \underline{zm_n}}\to (vect/k)
\]
for some sequence of positive integers $m_1,m_2,\dots,m_n$ and choice of zig-zag structure on each corresponding underlying set. As with $n$-dimensional persistence modules, $n$-dimensional zig-zag modules may be viewed as a zig-zag diagram of $(n-1)$-dimensional zig-zag modules in essentially $n$ different ways. The proof of the next theorem illustrates the usefulness of strong stability.
\begin{theorem}\label{thm:6} Finite $n$-dimensional zig-zag modules have strongly stable local structure for all $n\ge 0$.
\end{theorem}
\begin{proof} We will first consider the case of $n$-dimensional persistence modules. We say an $n$-dimensional persistence category $\cal C$ has multi-dimension $(m_1,m_2,\dots,m_n)$ if $\cal C$ is isomorphic to $\underline{m_1}\times\underline{m_2}\times\dots\times\underline{m_n}$; note that this $n$-tuple is a well-defined invariant of the isomorphism class of $\cal C$, up to reordering. We may therefore assume the dimensions $m_i$ have been arranged in non-increasing order. We assume the vertices of $\Gamma({\cal C})$ have been labeled with multi-indices $(i_1,i_2,\dots,i_n), 1\le i_j\le m_j$, so that an oriented path in $\Gamma({\cal C})$ from $(i_1,i_2,\dots,i_n)$ to $(j_1,j_2,\dots,j_n)$ (corresponding to a morphism in $\cal C$) exists iff $i_k\le j_k, 1\le k\le n$. We will reference the objects of $\cal C$ by their multi-indices. The proof is by induction on dimension; the base case $n=0$ is trivially true as there is nothing to prove.
\vskip.2in
Assume then that $n\ge 1$. For $1\le i\le j\le m_n$, let ${\cal C}[i,j]$ denote the full subcategory of $\cal C$ on objects $(k_1,k_2,\dots,k_n)$ with $i\le k_n\le j$, and let $M[i,j]$ denote the restriction of $M$ to ${\cal C}[i,j]$. Let ${\cal F}_1$ resp.~${\cal F}_2$ denote the local structures on $M[1,m_n-1]$ and $M[m_n]$ respectively; by induction on $m_n$ we may assume these local structures are stable with stabilization indices $N_1,N_2$. Let $\phi_i:M[i]\to M[i+1]$ be the structure map from level $i$ to level $(i+1)$ in the $n$th coordinate. Then define $\phi_\bullet : M[1,m_n-1]\to M[m_n]$ to be the morphism of $n$-dimensional persistence modules which on $M[i]$ is given by the composition
\[
M[i]\xrightarrow{\phi_i} M[i+1]\xrightarrow{\phi_{i+1}}\dots M[m_n-1]\xrightarrow{\phi_{m_n-1}} M[m_n]
\]
Define a multi-flag on $M[1,m_n-1]$ by ${\cal F}_1^* := \phi_\bullet^{-1}[{\cal F}_2]$ and on $M[m_n]$ by ${\cal F}_2^* := \phi_\bullet ({\cal F}_1)$. By induction on length and dimension we may assume that $M[1,m_n-1]$ and $M[m_n]$ have local structures which stabilize strongly (we note that $M[m_n]$ is effectively an $(n-1)$-dimensional persistence module). As these multi-flags are finite, we have that
\begin{itemize}
\item the restricted local structures ${\cal F}_i$ are stable (noted above);
\item the local structure of $M[1,m_n-1]$ is stable relative to ${\cal F}_1^*$;
\item the local structure of $M[m_n]$ is stable relative to ${\cal F}_2^*$.
\end{itemize}
We may then choose $N$ so that in each of the three itemized cases, stabilization has been achieved by the $N^{th}$ stage. Let $\cal G$ be the multi-flag on $M$ which on $M[1,m_n-1]$ is the local structure relative to ${\cal F}_1^*$ and on $M[m_n]$ is the local structure relative to ${\cal F}_2^*$. Then $\cal G$ is the local structure on $M$, and has been achieved after at most $2N$ stages starting with the trivial semi-flag on $M$. This implies $M$ has stable local structure. To verify the induction step for the statement that $M$ has strongly stable local structure, let $F$ be a finite multi-flag on $M$. Let $F_1$ be its restriction to $M[1,m_n-1]$, and $F_2$ its restriction to $M[m_n]$. Then let ${\cal F}_i^{**}$ denote the multi-flag generated by ${\cal F}_i^*$ and $F_i$. Proceeding with the same argument as before yields a multi-flag ${\cal G}^*$ achieved at some finite stage which represents the local structure of $M$ relative to $F$, completing the induction step for persistence modules.
\vskip.2in
In the more general case that one starts with a finite, $n$-dimensional zig-zag module $M$, the argument is essentially identical but with one adjustment. Representing $M$ as
\[
M[1]\leftrightarrow M[2]\leftrightarrow \dots M[m_n-1]\leftrightarrow M[m_n]
\]
where ``$\leftrightarrow$" indicates either ``$\leftarrow$" or ``$\rightarrow$", the multi-flags ${\cal F}_i^*$ are defined on $M[1,m_n-1]$ and $M[m_n]$ respectively by starting with the stabilized local structure on the other submodule, and then extending by either pulling back or pushing forward as needed to the other. The rest of the induction step is the same, as is the basis step when $n=0$ and there are no morphisms.
\end{proof}
\vskip.2in
The above discussion applies to arbitrary fields; in this case, as we have seen, it is possible that the local structure fails to be stable. However, if the base field $k$ is finite, then the finiteness of $\cal C$ together with the finite dimensionality of a $\cal C$-module $M$ at each vertex implies that any $\cal C$-module $M$ over $k$ is a finite set. In this case, the infinite refinement of ${\cal F}(M)$ that must occur in order to prevent stabilization at some finite stage is no longer possible. Hence
\begin{theorem} Assume the base field $k$ is finite. Then for all (finite) poset categories $\cal C$ and $\cal C$-modules $M$, $M$ has stable local structure.
\end{theorem}
\vskip.5in
\section{Geometrically based $\cal C$-modules} A $\cal C$-module $M$ is said to be {\it geometrically based} if $M = H_n(F)$ for some positive integer $n$, where $F:{\cal C}\to {\cal D}$ is a functor from $\cal C$ to a category $\cal D$ equal to either
\begin{itemize}
\item {\bf f-s-sets} - the category of simplicial sets with finite skeleta and morphisms of simplicial sets, or
\item {\bf f-s-com} - the category of finite simplicial complexes and morphisms of simplicial complexes.
\end{itemize}
Almost all $\cal C$-modules that arise in applications are of this type. A central question, then, is whether or not such modules admit an inner product structure of the type needed for the above structure theorems to hold. We show that the obstruction to imposing an IP-structure on geometrically based modules is in general non-trivial, by means of an explicit example given below. On the other hand, all geometrically based $\cal C$-modules admit a presentation by $\cal IPC$-modules. In what follows we will restrict ourselves to the category {\bf f-s-sets}, as it is slightly easier to work with (although all results carry over to {\bf f-s-com}).
\subsection{Cofibrant replacement} Any $\cal C$-diagram in {\bf f-s-sets} can be cofibrantly replaced, up to weak homotopical transformation. Precisely,
\begin{theorem} If $F:{\cal C}\to$ {\bf f-s-sets}, then there is a $\cal C$-diagram $\wt{F}:{\cal C}\to$ {\bf f-s-sets} and a natural transformation $\eta:\wt{F}\xrightarrow{\simeq} F$ which is a weak equivalence at each object, where $\wt{F}(\phi_{xy})$ is a closed cofibration (inclusion of simplicial sets) for all morphisms $\phi_{xy}$\footnote{The proof following is a minor elaboration of an argument communicated to us by Bill Dwyer \cite{bd}.}.
\end{theorem}
\begin{proof} The simplicial mapping cylinder construction $Cyl(_-)$ applied to any morphism in {\bf f-s-sets} verifies the statement of the theorem in the simplest case, when $\cal C$ consists of two objects and one non-identity morphism. Suppose $\cal C$ has $n$ objects; we fix a total ordering on $obj({\cal C})$ that refines the partial ordering: $\{x_1 \prec x_2 \prec \dots \prec x_n\}$ where if $\phi_{x_i x_j}$ is a morphism in $\cal C$ then $i\le j$ (but not necessarily conversely). Let ${\cal C}(m)$ denote the full subcategory of $\cal C$ on objects $x_1,\dots,x_m$, with $F_m = F|_{{\cal C}(m)}$. By induction, we may assume the statement of the theorem for $F_m:{\cal C}(m)\to$ {\bf f-s-sets}, with cofibrant lift denoted by $\wt{F}_m$ and weak equivalence $\eta_m:\wt{F}_m\xrightarrow{\simeq} F_m$.
\vskip.2in
Now let ${\cal D}(m)$ denote the slice category ${\cal C}/x_{m+1}$; as ``$\prec$'' is a refinement of the poset ordering ``$<$'', the image of the forgetful functor $P_m:{\cal D}(m)\to {\cal C}; (y\to x_{m+1})\mapsto y$ lies in ${\cal C}(m)$. And as $\cal C$ is a poset category, the collection of morphisms $\{\phi_{y x_{m+1}}\}$ uniquely determines a map
\[
f_m : \underset{{\cal D}(m)}{colim}\ \wt{F}_m\circ P_m\xrightarrow{\eta_m} \underset{{\cal D}(m)}{colim}\ F_m\circ P_m \to F(x_{m+1})
\]
Define $\wt{F}_{m+1}:{\cal C}(m+1)\to$ {\bf f-s-sets} by
\begin{itemize}
\item $\wt{F}_{m+1}|_{{\cal C}(m)} = \wt{F}_m$;
\item $\wt{F}_{m+1}(x_{m+1}) = Cyl(f_m)$;
\item If $\phi_{x x_{m+1}}$ is a morphism from $x\in obj({\cal C}(m))$ to $x_{m+1}$, then
\[
\wt{F}_{m+1}(\phi_{x x_{m+1}}):\wt{F}_{m}(x) = \wt{F}_{m+1}(x)\to \wt{F}_{m+1}(x_{m+1})
\]
is given as the composition
\[
\wt{F}_{m}(x) = \wt{F}_m\circ P_m(x\xrightarrow{\phi_{x x_{m+1}}} x_{m+1})\hookrightarrow
\underset{{\cal D}(m)}{colim}\ \wt{F}_m\circ P_m\hookrightarrow Cyl(f_m) = \wt{F}_{m+1}(x_{m+1})
\]
\end{itemize}
where the first inclusion into the colimit over ${\cal D}(m)$ is induced by the inclusion of the object \newline
$(x\xrightarrow{\phi_{x x_{m+1}}} x_{m+1})\hookrightarrow obj({\cal D}(m))$. As all morphisms in ${\cal D}(m)$ map to simplicial inclusions under $\wt{F}_m\circ P_m$ the resulting map of $\wt{F}_m(x)$ into the colimit will also be a simplicial inclusion. Finally, the natural transformation $\eta_m:\wt{F}_m\to F_m$ is extended to $\eta_{m+1}$ on $\wt{F}_{m+1}$ by defining $\eta_{m+1}(x_{m+1}): \wt{F}_{m+1}(x_{m+1})\to F_{m+1}(x_{m+1})$ as the natural collapsing map $Cyl(f_m)\surj F(x_{m+1})$, which has the effect of making the diagram
\centerline{
\xymatrix{
\wt{F}_{m+1}(x)\ar[rr]^{\wt{F}_{m+1}(\phi_{xy})}\ar[dd]^{\eta_{m+1}(x)} && \wt{F}_{m+1}(y)\ar[dd]^{\eta_{m+1}(y)}\\
\\
F_{m+1}(x)\ar[rr]^{F_{m+1}(\phi_{xy})} && F_{m+1}(y)
}}
\vskip.2in
commute for morphisms $\phi_{xy}\in Hom({\cal C}_{m+1})$. This completes the induction step, and the proof.
\end{proof}
\begin{corollary}\label{cor:pres} Any geometrically based $\cal C$-module $M$ admits a presentation by $\cal C$-modules $N_1\inj N_2\surj M$ where $N_i$ is an $\cal IPC$-module and $N_1\inj N_2$ is an isometric inclusion of $\cal IPC$-modules.
\end{corollary}
\begin{proof} By the previous result and the homotopy invariance of homology, we may assume $M = H_n(F)$ where $F :{\cal C}\to$ {\bf i-f-s-sets}, the subcategory of {\bf f-s-sets} on the same set of objects, but where all morphisms are simplicial set injections. In this case, for each object $x$, $C_n(F(x)) = C_n(F(x);k)$ admits a canonical inner product determined by the natural basis of $n$-simplices $F(x)_n$, and each morphism $\phi_{xy}$ induces an injection of basis sets $F(x)_n\inj F(y)_n$, resulting in an isometric inclusion $C_n(F(x))\inj C_n(F(y))$. In this way the functor $C_n(F) := C_n(F;k):{\cal C}\to (vect/k)$ inherits a natural $\cal IPC$-module structure. If $Q$ is an $\cal IPC$-module where all of the morphisms are isometric injections, then any $\cal C$-submodule $Q'\subset Q$, equipped with the same inner product, is an $\cal IPC$-submodule of $Q$. Now $C_n(F)$ contains the $\cal C$-submodules $Z_n(F)$ ($n$-cycles) and $B_n(F)$ ($n$-boundaries); equipped with the induced inner product the inclusion $B_n(F)\hookrightarrow Z_n(F)$ is an isometric inclusion of $\cal IPC$-modules, for which $M$ is the cokernel $\cal C$-module.
\end{proof}
[Note: The results for this subsection have been stated for {\bf f-s-sets}; similar results can be shown for {\bf f-s-com} after fixing a systematic way of representing the mapping cylinder of a map of simplicial complexes as a simplicial complex; this typically involves barycentric subdivision.]
\vskip.3in
\subsection{Geometrically realizing an IP-obstruction} As we saw in Theorem \ref{thm:obstr}, the ${\cal C}_2$-module
\vskip.5in
({\rm D5})\vskip-.4in
\centerline{
\xymatrix{
\mathbb R && \mathbb R\ar[dd]^{1}\ar[ll]_{2}\\
\\
\mathbb R\ar[uu]^{1}\ar[rr]_{1} && \mathbb R
}}
\vskip.2in
does not admit an IP-structure. We note the same diagram can be formed with $S^1$ in place of $\mathbb R$:
\vskip.5in
({\rm D6})\vskip-.4in
\centerline{
\xymatrix{
S^1 && S^1\ar[dd]^{1}\ar[ll]_{2}\\
\\
S^1\ar[uu]^{1}\ar[rr]_{1} && S^1
}}
\vskip.2in
Here ``$2:S^1\to S^1$'' represents the usual self-map of $S^1$ of degree $2$. This diagram can be realized up to homotopy by a diagram of simplicial complexes and simplicial maps as follows: let $T_1 = \partial(\Delta^2)$ denote the standard triangulation of $S^1$, and let $T_2$ be the barycentric subdivision of $T_1$. We may form the ${\cal C}_2$-diagram in {\bf f-s-com}
\vskip.5in
({\rm D7})\vskip-.4in
\centerline{
\xymatrix{
T_1 && T_2\ar[dd]^{f_1}\ar[ll]_{f_2}\\
\\
T_1\ar[uu]^{1}\ar[rr]_{1} && T_1
}}
\vskip.2in
The map $f_2$ is the triangulation of the top map in (D6), while $f_1$ is the simplicial map which collapses every other edge to a point. The geometric realization of (D7) agrees up to homotopy with (D6). Of course this diagram of simplicial complexes can also be viewed as a diagram in {\bf f-s-sets}. Applying $H_1(_-;\mathbb Q)$ to diagram (D7) we have
\begin{theorem} There exist geometrically based $\cal C$-modules with domain category {\bf f-s-com} (and hence also {\bf f-s-sets}) which do not admit an IP-structure.
\end{theorem}
In this way we see that the presentation result of Corollary \ref{cor:pres} is, in general, the best possible in terms of representing a geometrically based $\cal C$-module in terms of modules equipped with an $\cal IPC$-structure.
\vskip.5in
\section*{Open questions}
\subsubsection*{If $\cal C$ is h-free, does every $\cal C$-module admit an inner product structure?} More generally, what are necessary and sufficient conditions on the indexing category $\cal C$ that guarantee the existence of an inner product (IP) structure on any $\cal C$-module $M$?
\subsubsection*{If $\cal C$ is h-free, does every $\cal C$-module $M$ have stable local structure?} In other words (as with IP-structures), is holonomy the only obstruction to ${\cal F}(M)$ stabilizing at some finite stage? One is tempted to conjecture that the answer is ``yes''; however, the only evidence so far is the lack of examples of non-stable local structures occurring in the absence of holonomy, and the difficulty of even imagining how this could happen.
\vskip.3in
\subsubsection*{Does h-free imply strongly h-free?} Again, this question is based primarily on the absence, so far, of any counterexample illustrating the difference between these two properties.
\vskip.3in
\subsubsection*{If $M$ has stable local structure, does it have strongly stable local structure?} Obviously, strongly stable implies stable. The issue is whether these conditions are, for some reason, equivalent. If not, then a more refined version of the question would be: under what conditions (on either the indexing category $\cal C$ or the $\cal C$-module $M$) are they equivalent?
\vskip.5in
\section{Introduction}
We consider the problem of finding a function ${u}\in{V}$ such that
\begin{equation}\label{abs-pro}
{\mathcal{N}}({u},{v};\boldsymbol{\mu}) = {\mathcal{F}}({v};\boldsymbol{\mu})\quad\forall\, {v}\in{V},
\end{equation}
where $\boldsymbol{\mu}\in D$ denotes a point in a parameter domain $D\subset {\mathbb R}^{M}$, ${V}$ a function space,
${\mathcal{N}}(\cdot,\cdot;\boldsymbol{\mu})$ a given form that is linear in ${v}$ but generally nonlinear in ${u}$, and ${\mathcal{F}}(\cdot)$
a linear functional on ${V}$. Note that either ${\mathcal{N}}$ or ${\mathcal{F}}$ or both could depend on some or all the
components of the parameter vector $\boldsymbol{\mu}$. We view the problem \eqref{abs-pro} as a variational
formulation of a nonlinear partial differential equation (PDE) or a system of such equations
in which ${M}$ parameters appear.
We are interested in systems that undergo bifurcations, i.e.,~the solution ${u}$ of \eqref{abs-pro}
differs in character
for parameter vectors $\boldsymbol{\mu}$ belonging to different subregions of $D$.
We are particularly interested in situations that require solutions of \eqref{abs-pro} for a set of parameter vectors that span across
two or more of the subregions of the bifurcation diagram. This is the case, for example, if
one needs to trace the bifurcation diagram.
In general, one could approximate the solutions to \eqref{abs-pro}
using a Full Order Method (FOM), like for example the Finite Element Method
or the Spectral Element Method.
Let ${\VVV_\NNN}\subset{V}$ be an ${N}$-dimensional subspace.
A FOM seeks an approximation ${\uuu_\NNN}\in{\VVV_\NNN}$ such that
\begin{equation}\label{abs-dis}
{\mathcal{N}}_h({\uuu_\NNN},{v};\boldsymbol{\mu}) = {\mathcal{F}}_h({v};\boldsymbol{\mu})\quad\forall\, {v}\in{\VVV_\NNN},
\end{equation}
where ${\mathcal{N}}_h$ and ${\mathcal{F}}_h$ are the discretized forms for ${\mathcal{N}}$ and ${\mathcal{F}}$.
FOMs are often expensive, especially if multiple solutions are needed.
For this reason, one is interested in finding surrogate methods that are much less costly.
Such surrogates are constructed using a ``few'' solutions obtained with the FOM.
Here, we are interested in {\em reduced-order models} (ROMs) for which one constructs a
low-dimensional approximating subspace ${\VVV_\LLL}\subset{\VVV_\NNN}$ of dimension ${L}$
that still contains an acceptably accurate approximation ${\uuu_\LLL}$ to the \rev{FOM solution ${\uuu_\NNN}$, and thus also to the} solution $u$ of \eqref{abs-pro}.
That approximation is determined from the reduced discrete system
\begin{equation}\label{abs-romg}
{\mathcal{N}}_h({\uuu_\LLL},{\vvv_\LLL};\boldsymbol{\mu}) = {\mathcal{F}}_h({\vvv_\LLL};\boldsymbol{\mu})\quad\forall\, {\vvv_\LLL}\in{\VVV_\LLL}
\end{equation}
that, if ${L}\ll{N}$, is much cheaper to solve compared to \eqref{abs-dis}.
We refer to approach \eqref{abs-romg} as \emph{global ROM}, since
a single {\em global} basis is used to determine the ROM approximation ${\uuu_\LLL}$
at any chosen parameter point $\boldsymbol{\mu}\in D$.
Global ROMs in the setting of bifurcating solutions are considered in the early papers
\cite{NOOR1982955,Noor:1994,Noor:1983,NOOR198367} for buckling bifurcations in solid mechanics.
More recently, in \cite{Terragni:2012} it is shown that a
Proper Orthogonal Decomposition (POD) approach allows for considerable computational time
savings for the analysis of bifurcations in some nonlinear dissipative systems.
Reduced Basis (RB) methods have been used to study symmetry breaking bifurcations
\cite{Maday:RB2,PLA2015162} and Hopf bifurcations \cite{PR15} for
natural convection problems. A RB method for symmetry breaking bifurcations in contraction-expansion channels
has been proposed in \cite{PITTON2017534}.
A RB method for the stability of flows under perturbations in the forcing term or in the boundary conditions
is introduced in \cite{Yano:2013}. Furthermore, in~\cite{Yano:2013}
it is shown how a space-time inf-sup constant approaches zero as the computed solutions get close to a bifurcating value.
Recent works have proposed ROMs for bifurcating solutions in structural mechanics \cite{pichirozza}
and for a nonlinear Schr\"{o}dinger equation, called Gross--Pitaevskii equation \cite{Pichi2020}, respectively.
Finally, we would like to mention that machine learning techniques based on sparse optimization
have been applied to detect bifurcating
branches of solutions for a two-dimensional laterally heated cavity and Ginzburg-Landau model
in \cite{BTBK14,KGBNB17}, respectively.
Finding all branches after a bifurcation occurs can be done with deflation methods (see \cite{PintorePichiHessRozzaCanuto2019}), which require
introducing a pole at each known solution. In principle, the ROM approaches under investigation here can be combined with deflation techniques.
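For orientation, we recall the generic form of such a deflation: once a solution ${u}^*$ on a known branch has been computed, the residual of \eqref{abs-pro} is rescaled, a common shifted choice being
\[
{\mathcal{N}}_{\rm defl}({u},{v};\boldsymbol{\mu}) \;:=\; \Big(\frac{1}{\|{u}-{u}^*\|^{p}} + \sigma\Big)\big({\mathcal{N}}({u},{v};\boldsymbol{\mu}) - {\mathcal{F}}({v};\boldsymbol{\mu})\big),
\qquad p\ge 1,\ \sigma>0,
\]
so that Newton-type iterations are repelled from ${u}^*$ and may converge to a different branch. We record this only as the generic form; see \cite{PintorePichiHessRozzaCanuto2019} for the precise variant employed there.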
In the setting of a bifurcation problem (i.e.,~$D$ consists of subregions for which the corresponding solutions of \eqref{abs-pro}
have different character), it may be the case that ${L}$,
although small compared to ${N}$, is large enough that solving system
\eqref{abs-romg} many times becomes expensive.
To overcome this problem, in \cite{Hess2019CMAME} we proposed a
\emph{local ROM} approach. The idea is to construct several {\em local} bases
(in the sense that they use solutions for parameters that lie in subregions of the parameter domain), each of which is
used for parameters belonging to a different subregion of the bifurcation diagram.
So, we construct ${K}$ such local bases of dimension ${\LLL_\kkk}$, each spanning a local subspace ${\VVV_\LLLk}\subset{\VVV_\NNN}$.
We then construct ${K}$ {\em local reduced-order models}
\begin{equation}\label{abs-roml}
\begin{aligned}
{\mathcal{N}}_h({\uuu_\LLLk},{v};\boldsymbol{\mu}) = {\mathcal{F}}_h({v};\boldsymbol{\mu})\quad\forall\, {v}\in{\VVV_\LLLk} &\qquad \mbox{for ${k}=1,\ldots,{K}$}
\end{aligned}
\end{equation}
that provide acceptably accurate approximations ${\uuu_\LLLk}$ to the solution $u$ of \eqref{abs-pro}
for parameters $\boldsymbol{\mu}$ belonging to different parts of the bifurcation diagram.
A key ingredient in this approach is how to identify which local basis should be used in \eqref{abs-roml}
to determine the corresponding ROM approximation for any parameter point $\boldsymbol{\mu}\in D$
that was not among those used to generate the snapshots.
Several criteria are proposed and compared for a
two-parameter study in Sec.~\ref{sec:criteria}.
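As a minimal illustration of this selection step, one possible criterion (the clustering, network size, and all names below are our own illustrative assumptions, not the specific criteria of Sec.~\ref{sec:criteria}) is to cluster the snapshots and train a classification network mapping a parameter point to a local basis index:
\begin{verbatim}
# Sketch: selecting a local basis for a new parameter point.
# S: snapshot matrix (one steady-state solution per column),
# mus: array of shape (n_snapshots, 2) with the (nu, w) samples.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def build_selector(mus, S, K):
    # cluster the snapshots themselves, so solutions of similar
    # character (e.g. upper/lower wall-attached jets) group together
    labels = KMeans(n_clusters=K, n_init=10).fit_predict(S.T)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=5000)
    clf.fit(mus, labels)           # parameter point -> cluster index
    return labels, clf

# At evaluation time, k = clf.predict([[nu, w]])[0] picks the local
# space V_k, in which the local reduced system is then solved.
\end{verbatim}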
To the best of our knowledge, no work other than \cite{Hess2019CMAME} addresses
the use of local ROM bases for bifurcation problems. The present work continues previous work on model reduction with spectral element methods \cite{HessRozza2019}
and on including parametric variations of the geometry \cite{10.1007/978-3-030-39647-3_45,HessQuainiRozza2020}.
This paper aims at comparing one global ROM approach,
our local ROM approach with the ``best'' criterion to select
the local basis, and a recently proposed RB method that uses
neural networks to accurately approximate the coefficients of the reduced model
\cite{HESTHAVEN201855}. This third method is referred to as POD-NN.
The global ROM uses the most dominant POD modes of a uniform
snapshot set over the parameter domain.
The dominant modes define the projection space for every parameter evaluation of interest, and the approach is in this sense \emph{global}
with respect to the parameter space.
See, e.g., \cite{LMQR:2014} for more details. Like the global ROM, the
POD-NN employs the most dominant POD modes. However, the difference is that
the coefficients of the snapshots in the ROM expansion are
used as training data for an artificial neural network.
\rev{The local ROM first employs a classification ANN to determine the corresponding cluster of a parameter location, but this can be improved upon with a regression ANN using the relative errors of the local ROMs at the snapshot locations as training data.}
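To make the comparison concrete at the implementation level, the following minimal sketch (using illustrative array names and network sizes, not the actual settings of \cite{HESTHAVEN201855}) shows the two shared ingredients: the dominant POD modes and the POD-NN coefficient regression.
\begin{verbatim}
# Sketch: global POD basis and POD-NN coefficient map.
import numpy as np
from sklearn.neural_network import MLPRegressor

def pod_basis(S, L):
    # L most dominant left singular vectors of the snapshot matrix S
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :L]

def train_pod_nn(S, mus, L):
    V = pod_basis(S, L)
    coeffs = (V.T @ S).T        # reduced coordinates of each snapshot
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000)
    net.fit(mus, coeffs)        # parameter point -> reduced coefficients
    return V, net

# Global ROM: Galerkin-project the discrete operators onto span(V) and
# solve the reduced system for each new parameter point (not shown).
# POD-NN: skip the reduced solve and evaluate the regression instead,
#   u_L(mu) = V @ net.predict([mu])[0]
\end{verbatim}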
As a concrete setting for the comparison, we use the Navier-Stokes equations
and in particular flow through a channel with a contraction.
The outline of the paper is as follows. In Sec.~\ref{sec:NS}, we briefly
present the Navier-Stokes equations and consider a specific benchmark test.
Sec.~\ref{sec:num_res} reports the comparison of many local basis selection
criteria for the local ROM approach and the main comparison of the three
ROM approaches. Concluding remarks are provided in Sec.~\ref{sec:concl}.
\section{Application to the incompressible Navier-Stokes equations}\label{sec:NS}
The Navier-Stokes equations describe the incompressible motion of a viscous, Newtonian fluid in the spatial domain $\Omega \subset\mathbb{R}^d$, $d = 2$ or $3$, over a time interval of interest $(0, T]$.
They are given by
\begin{equation}\label{NS-1}
\begin{aligned}
\frac{\partial {\bm u}}{\partial t}+({\bm u}\cdot \nabla {\bm u})-\nu\Delta {\bm u}+\nabla p={\bm 0} &\qquad\mbox{in } \Omega\times(0,T]\\
\nabla \cdot {\bm u}=0&\qquad\mbox{in } \Omega\times (0,T],
\end{aligned}
\end{equation}
where ${\bm u}$ and $p$ denote the unknown velocity and pressure fields, respectively, and
$\nu>0$ denotes the kinematic viscosity of the fluid. Note that there is no external body force
because we will not need one for the specific benchmark test under consideration.
Problem \eqref{NS-1} needs to be endowed with initial and boundary conditions, e.g.:
\begin{eqnarray}
{\bm u}={\bm u}_0&\qquad&\mbox{in } \Omega \times \{0\} \label{IC}\\
{\bm u}={\bm u}_D&\qquad&\mbox{on } \partial\Omega_D\times (0,T] \label{BC-D} \\
-p {\bm n} + \nu \frac{\partial {\bm u}}{\partial {\bm n}}={\bm g} &\qquad&\mbox{on } \partial\Omega_N\times (0,T], \label{BC-N}
\end{eqnarray}
where $\partial\Omega_D \cap \partial\Omega_N = \emptyset$ and $\overline{\partial\Omega_D} \cup \overline{\partial\Omega_N} = \overline{\partial\Omega}$. Here, ${\bm u}_0$, ${\bm u}_D$, and ${\bm g}$ are given and ${\bm n}$ denotes the unit normal vector on the boundary $\partial\Omega_N$ directed outwards. In the rest of this section, we will explicitly denote the dependence of the solution of the problem \eqref{NS-1}-\eqref{BC-N} on the parameter vector $\boldsymbol{\mu}$.
Let $L^2(\Omega)$ denote the space of square integrable functions in $\Omega$ and $H^1(\Omega)$ the space of functions belonging to $L^2(\Omega)$ with first derivatives in $L^2(\Omega)$. Moreover, let
\begin{align}
{\bm V} &:= \left\{ {\bm v} \in [H^1(\Omega)]^d: ~ {\bm v} = {\bm u}_D \mbox{ on }\partial\Omega_D \right\}, \nonumber \\
{\bm V}_0 &:=\left\{{\bm v} \in [H^1(\Omega)]^d: ~ {\bm v} = \boldsymbol{0} \mbox{ on }\partial\Omega_D \right\}. \nonumber
\end{align}
The standard variational form corresponding to \eqref{NS-1}-\eqref{BC-N} is:
find $({\bm u}(\boldsymbol{\mu}),p(\boldsymbol{\mu}))\in {\bm V} \times L^2(\Omega)$ satisfying the initial condition \eqref{IC} and
\begin{equation}\label{eq:weakNS-1}
\begin{aligned}
&\int_{\Omega} \frac{\partial{\bm u}(\boldsymbol{\mu})}{\partial t}\cdot{\bm v}\mathop{}\!\mathrm{d}\mathbf{x}+\int_{\Omega}\left({\bm u}(\boldsymbol{\mu})\cdot\nabla {\bm u}(\boldsymbol{\mu})\right)\cdot{\bm v}\mathop{}\!\mathrm{d} \mathbf{x}+\nu\int_{\Omega}\nabla{\bm u}(\boldsymbol{\mu}):\nabla{\bm v}\mathop{}\!\mathrm{d}\mathbf{x} - \int_{\Omega}p(\boldsymbol{\mu})\nabla \cdot{\bm v}\mathop{}\!\mathrm{d}\mathbf{x}\\
&\hspace{3cm} = \int_{\partial \Omega_N}{\bm g}\cdot{\bm v}\mathop{}\!\mathrm{d}\mathbf{x}
\qquad\forall\,{\bm v} \in {\bm V}_0 \\
& \int_{\Omega}q\nabla \cdot{\bm u}(\boldsymbol{\mu})\mathop{}\!\mathrm{d}\mathbf{x} =0 \qquad\forall\, q \in L^2(\Omega).
\end{aligned}
\end{equation}
Problem \eqref{eq:weakNS-1} constitutes the particular case of the abstract problem \eqref{abs-pro} we use for the numerical illustrations.
We consider a benchmark test that has been widely studied in the literature:
channel flow through a narrowing of width $w$;
see, e.g., \cite{fearnm1,drikakis1,hawar1,mishraj1} and the references cited therein.
The 2D geometry under consideration is depicted in Fig.~\ref{fig:channel_solution}.
A parabolic horizontal velocity component with maximum $\frac{9}{4}$ and
zero vertical component is prescribed at the inlet on the left side.
At the top and bottom of the channel, as well as on the narrowing boundaries, zero-velocity (no-slip) walls are assumed.
The right end of the channel is an outlet, where a zero Neumann boundary condition \rev{(i.e., ${\bm g} = {\bm 0}$)} is assumed.
We will let both the narrowing width and the viscosity vary in given ranges
that include a bifurcation.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=4in]{figures/field_max_para_upper.jpg}
\includegraphics[width=.7in]{figures/legend_max_para.png} \\
\includegraphics[width=4in]{figures/lower_max_para.jpg}
\includegraphics[width=.7in]{figures/legend_max_para.png}
\caption{Steady state solutions for kinematic viscosity $\nu = 0.1$ and orifice width $w = 0.5$.
Shown is the horizontal component of the velocity.
The two stable solutions can be characterized by an attachment to the upper and lower wall, respectively.}
\label{fig:channel_solution}
\end{center}
\end{figure}
The Reynolds number $\text{Re}$ can be used to characterize the flow regime.
For the chosen data, we have Re $= 9/ (4\nu)$\rev{, since the characteristic length has been set to one.}
As the Reynolds number $\text{Re}$ increases from zero, we first observe a steady {\em symmetric} jet with
two recirculation regions downstream of the narrowing that are symmetric about the centerline.
As $\text{Re}$ increases, the recirculation length progressively increases.
At a certain critical value $\text{Re}_{\text{crit}}$, one recirculation zone expands whereas the other shrinks,
giving rise to a steady {\em asymmetric} jet.
This asymmetric solution remains stable as $\text{Re}$ increases further, and the asymmetry becomes more pronounced.
The configuration with a symmetric jet is still a solution, but is unstable \cite{sobeyd1}.
Snapshots of the stable solutions for kinematic viscosity $\nu = 0.1$ and orifice width $w = 0.5$
are illustrated in Fig.~\ref{fig:channel_solution}.
This loss of symmetry in the steady solution as $\text{Re}$ changes is a supercritical pitchfork bifurcation \cite{Prodi}.
Because we are interested in studying a flow problem close to a steady bifurcation point,
our snapshot sets include only steady-state solutions \cite{Hess2019CMAME}.
To obtain the snapshots, we approximate the solution of problem \eqref{eq:weakNS-1}
by a time-marching scheme that we stop when sufficiently close to the steady state, e.g., when the stopping condition
\begin{equation}
\frac{\|{\bm u}_N^n-{\bm u}_N^{n-1}\|_{L^2(\Omega)}}{\|{\bm u}_N^n\|_{L^2(\Omega)}}<\mathtt{tol}
\label{eq:def_increment}
\end{equation}
\noindent is satisfied for a prescribed tolerance $\mathtt{tol}>0$, where $n$ denotes the time-step index.
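For concreteness, the time-marching loop with the stopping test \eqref{eq:def_increment} can be sketched as follows; the Python/NumPy form is our own illustrative choice, the function \texttt{step} (one application of the chosen time integrator) is a placeholder, and the Euclidean norm of the coefficient vector is used as a stand-in for the $L^2(\Omega)$ norm.
\begin{verbatim}
import numpy as np

def march_to_steady_state(u0, step, tol=1e-8, max_steps=10**6):
    """Advance u_n = step(u_{n-1}) until the relative increment
    ||u_n - u_{n-1}|| / ||u_n|| drops below tol."""
    u_old = u0
    for n in range(1, max_steps + 1):
        u_new = step(u_old)
        if np.linalg.norm(u_new - u_old) < tol * np.linalg.norm(u_new):
            return u_new, n   # sufficiently close to steady state
        u_old = u_new
    raise RuntimeError("no steady state reached within max_steps")
\end{verbatim}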
\section{Numerical results}\label{sec:num_res}
We conduct a parametric study for the channel flow
where we let the viscosity $\nu$ (physical parameter) vary in $[0.1, 0.2]$
and the narrowing width $w$ (geometric parameter) vary in $[ 0.5, 1.0 ]$.
We choose the Spectral Element Method (SEM) as FOM.
For the {\em spectral element discretization},
the SEM software framework Nektar++, version 4.4.0 (see {\tt https://www.nektar.info/}), is used.
The domain is discretized into $36$ triangular elements as shown in Fig. \ref{fig_SEM_domain_channel}.
Modal Legendre ansatz functions of order $12$ are used in every element and for every solution component.
This results in $4752$ degrees of freedom for each of the horizontal and vertical velocity components
and the pressure for the time-dependent simulations. For temporal discretization, an IMEX scheme of
order 2 is used with a time-step size of $\Delta t = 10^{-4}$; typically $10^5$ time steps are needed to reach a steady state.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=.95\textwidth]{figures/domain.png}
\caption{The $36$ triangular elements used for spatial approximation.}
\label{fig_SEM_domain_channel}
\end{center}
\end{figure}
Fig.~\ref{fig:bifurcation_diagram_channel_example_model_2p} shows
a bifurcation diagram: the vertical component of the velocity at the point $(3.0, 1.5) $
computed by SEM is plotted over the parameter domain.
The reference snapshots have been computed on a uniform $ 40 \times 41 $ grid.
Notice that Fig.~\ref{fig:bifurcation_diagram_channel_example_model_2p} reports only
the lower branch of asymmetric solutions.
\begin{figure}
\begin{center}
\includegraphics[width=1.0\textwidth]{figures/lower_FOM_VV_dof.pdf}
\caption{Bifurcation diagram of the channel with variable width and viscosity:
vertical component of the velocity computed by SEM close to steady state and evaluated at the point $(3.0, 1.5)$.
Color encodes the vertical velocity component.}
\label{fig:bifurcation_diagram_channel_example_model_2p}
\end{center}
\end{figure}
In \cite{Hess2019CMAME}, we presented preliminary results obtained with our local ROM approach
for a two-parameter study related to the channel flow.
We showed that two criteria to select the local ROM basis that work well
for one-parameter studies fail for the two-parameter case, in the sense
that they provide a poor reconstruction of the bifurcation diagram.
Sec.~\ref{sec:criteria} addresses the need to find an accurate and inexpensive criterion to assign the
local ROM basis for a given parameter $\boldsymbol{\mu}$ in a multi-parameter context.
The comparison for global ROM, local ROM, and POD-NN is reported in Sec.~\ref{sec:comparison}.
\subsection{Comparing criteria to select the local basis in the local ROM approach}\label{sec:criteria}
For our local ROM approach, we sample $72$ snapshots and divide them into
$8$ clusters using k-means clustering. The number of clusters is chosen according to
the minimal k-means energy. For more details, we refer to \cite{Hess2019CMAME}.
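A minimal sketch of this clustering step, together with the construction of one POD basis per cluster, is given below. Python with NumPy/scikit-learn is our own illustrative choice, and the energy-based truncation rule controlled by \texttt{pod\_tol} is an assumption, not necessarily the exact criterion used in \cite{Hess2019CMAME}.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

# S: snapshot matrix, one steady-state solution per row (72 x n_dof)
def cluster_snapshots(S, n_clusters=8, seed=0):
    km = KMeans(n_clusters=n_clusters, n_init=20, random_state=seed).fit(S)
    return km.labels_, km.inertia_   # cluster ids and k-means energy

def pod_basis(S_cols, pod_tol=1e-4):
    """POD of snapshots stored column-wise, truncated so that the
    retained singular values carry a fraction 1 - pod_tol of the energy."""
    U, s, _ = np.linalg.svd(S_cols, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    return U[:, :np.searchsorted(energy, 1.0 - pod_tol) + 1]

def local_bases(S, labels, pod_tol=1e-4):
    return [pod_basis(S[labels == c].T, pod_tol) for c in np.unique(labels)]
\end{verbatim}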
First, we consider the two criteria presented in \cite{Hess2019CMAME}, namely
the distance to parameter centroid and the distance to the closest snapshot location.
The first criterion entails finding the closest parameter centroid and
using the corresponding local ROM basis.
The second criterion finds
the snapshot location closest to the given parameter vector $\boldsymbol{\mu}$ and
uses the local ROM basis of the cluster that contains this snapshot.
The bifurcation diagrams reconstructed by the local ROM approach with
these two criteria are compared in Fig.~\ref{fig:compare_cluster_selection_cmame}.
The distance to parameter centroid criterion does not manage to recover the bifurcation diagram well
and this seems largely due to the local ROM assignment scheme. See Fig.~\ref{fig:compare_cluster_selection_cmame} (top).
In the bifurcation diagram corresponding to the distance to the closest snapshot location,
we observe several jumps when moving from one cluster to the next.
See Fig.~\ref{fig:compare_cluster_selection_cmame} (bottom).
These jumps, which correspond to large approximation errors, can perhaps
be better appreciated from another view of the same bifurcation diagram shown
in Fig.~\ref{fig:compare_cluster_no_overlap}.
Next, we will try to reduce the jumps.
\begin{figure}
\begin{center}
\includegraphics[width=.9\textwidth]{figures/grid_8_9_with_transitions_pCent.pdf}
\includegraphics[width=\textwidth]{figures/grid_8_9_no_transitions_m.pdf}
\caption{Local ROM: bifurcation diagram reconstructed with
basis selection criterion distance to parameter centroid (top) and
distance to the closest snapshot location (bottom).
Different colors are used for different clusters.}
\label{fig:compare_cluster_selection_cmame}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{figures/grid_8_9_no_transitions.pdf}
\caption{Different view of the bifurcation diagram shown in
Fig.~\ref{fig:compare_cluster_selection_cmame} (bottom).}
\label{fig:compare_cluster_no_overlap}
\end{center}
\end{figure}
To alleviate the poor approximation in the transition regions between two clusters (i.e., local ROMs),
we introduce overlapping clusters.
In particular, the collected snapshots of a cluster from the k-means algorithm undergo a first POD and
are then enriched with the orthogonal complement of neighboring snapshots
according to the sampling grid. Then, a second POD with a lower POD tolerance
defines the ROM ansatz space.
The corresponding bifurcation diagram is shown in Fig.~\ref{fig:compare_cluster_overlap}.
We observe that several jumps have been smoothed out.
Compare Fig.~\ref{fig:compare_cluster_overlap} (top) with Fig.~\ref{fig:compare_cluster_no_overlap}.
The mean approximation error reduces by about an order of magnitude thanks to the overlap.
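The enrichment procedure just described can be summarized in a short sketch, reusing the hypothetical \texttt{pod\_basis} helper from the clustering sketch above; the tolerances are again illustrative assumptions.
\begin{verbatim}
import numpy as np

def overlapping_basis(S_cluster, S_neighbors,
                      tol_first=1e-3, tol_second=1e-4):
    """First POD on the cluster snapshots, enrichment with the component
    of neighboring snapshots orthogonal to it, then a second POD with a
    lower tolerance defines the local ROM ansatz space."""
    V = pod_basis(S_cluster, tol_first)
    # part of the neighboring snapshots not captured by span(V)
    R = S_neighbors - V @ (V.T @ S_neighbors)
    return pod_basis(np.hstack([V, R]), tol_second)
\end{verbatim}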
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{figures/grid_8_9_with_transitions.pdf}
\includegraphics[width=\textwidth]{figures/grid_8_9_with_transitions_m.pdf}
\caption{Local ROM with basis selection criterion distance to the closest snapshot location and overlapping clusters:
two views of the reconstructed bifurcation diagram. Different colors are used for different clusters.}
\label{fig:compare_cluster_overlap}
\end{center}
\end{figure}
Although the overlapping clusters lead to a better reconstruction of the bifurcation diagram,
the result is still not satisfactory. Thus, we propose
an alternative selection criterion that uses an artificial neural network (ANN). The ANN is trained using the k-means clustering as training information, enforcing a perfect match between each snapshot location and its corresponding
cluster. This is a classification problem, implemented in Keras/TensorFlow.
\rev{The ANN is designed as a multilayer perceptron with $4$ layers, the first layer having just two nodes (or ``neurons'') that take
the two-dimensional parameter values as input. The two inner layers are large ($2048$ and $1024$ nodes, respectively),
while the size of the last layer corresponds to the number of clusters, so $8$ in this case.
For the first three layers a ReLU activation function is used, while for the last layer the activation function is a softmax.
The perfect match can be enforced either by running the training until the training data are matched exactly, or by using Keras early stopping with an outer loop checking for a match.
}
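A sketch of this classifier in Keras is given below; the training hyperparameters (optimizer, epochs per round, number of outer rounds) are illustrative assumptions, not the exact settings used for the results reported here.
\begin{verbatim}
import numpy as np
import tensorflow as tf

def build_classifier(n_clusters=8):
    """MLP cluster classifier: 2 -> 2048 -> 1024 -> n_clusters."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(2,)),          # two-dimensional parameter
        tf.keras.layers.Dense(2048, activation="relu"),
        tf.keras.layers.Dense(1024, activation="relu"),
        tf.keras.layers.Dense(n_clusters, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy")
    return model

# mu: sampled parameter locations (n_samples x 2)
# labels: integer cluster ids from the k-means clustering
def train_until_match(mu, labels, max_rounds=50):
    model = build_classifier(int(labels.max()) + 1)
    for _ in range(max_rounds):              # outer loop checking for a match
        model.fit(mu, labels, epochs=200, verbose=0)
        pred = np.argmax(model.predict(mu, verbose=0), axis=1)
        if np.all(pred == labels):           # perfect match at snapshots
            return model
    return model
\end{verbatim}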
Fig.~\ref{fig:compare_cluster_selection_naive} shows the bifurcation diagram reconstructed with the
ANN selection criterion. We observe a better reconstruction
of the bifurcation diagram: compare Fig.~\ref{fig:compare_cluster_selection_naive}
with Fig.~\ref{fig:compare_cluster_selection_cmame} and
\ref{fig:compare_cluster_overlap}.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{figures/grid_8_9_with_transitions_naiveANN.pdf}
\caption{Local ROM with the ANN-based basis selection criterion:
reconstruction of the bifurcation diagram.}
\label{fig:compare_cluster_selection_naive}
\end{center}
\end{figure}
For a more quantitative comparison, we run 4 tests. The tests differ in the number of samples and
whether cluster overlapping is used. The specifications of each test are reported in Tables
\ref{table:errors_naive_criterion1} and \ref{table:errors_naive_criterion2}, which
list the relative errors evaluated over the $40 \times 41$ reference grid
in the $L^2$ and $L^\infty$ norms, respectively. Three local basis selection
criteria are considered: distance to parameter centroid, distance to the closest snapshot location,
and ANN.
Tables \ref{table:errors_naive_criterion1} and \ref{table:errors_naive_criterion2} confirm that overlapping
clusters represent an improvement over non-overlapping clusters. This is true for all three criteria, but in particular
for the distance to parameter centroid criterion. Thus, for tests 3 and 4 we only used overlapping clusters.
We notice that the ANN criterion outperforms the other two criteria in all the tests, both in the
$L^2$ and $L^\infty$ norms. However,
the margin of improvement becomes smaller as the number of samples increases.
Tables \ref{table:errors_naive_criterion1} and \ref{table:errors_naive_criterion2} also report
the mean relative error for an optimal cluster selection, which is explained next.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ | l | l | l | l | l |}
\hline
& Test 1 & Test 2 & Test 3 & Test 4 \\ \hline
samples & 72 & 72 & 110 & 240 \\ \hline
uniform grid & $8 \times 9$ & $8 \times 9$ & $10 \times 11$ & $15 \times 16$ \\ \hline
overlapping clusters & yes & no & yes & yes \\ \hline
parameter centroid mean & 0.0294 & 0.0632 & 0.1085 & 0.0559 \\ \hline
distance snapshot mean & 0.0241 & 0.0295 & 0.1011 & 0.0227 \\ \hline
ANN mean & 0.0238 & 0.0273 & 0.1002 & 0.0223 \\ \hline
optimum & 0.0046 & 0.0106 & 0.0048 & 0.0092 \\ \hline
\end{tabular}
\caption{Local ROM with three different basis selection criteria: mean
relative $L^2$ errors for the velocity over all reference parameter points ($1640$ on a uniform $40 \times 41$ grid).}
\label{table:errors_naive_criterion1}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\begin{tabular}{ | l | l | l | l | l |}
\hline
& Test 1 & Test 2 & Test 3 & Test 4 \\ \hline
samples & 72 & 72 & 110 & 240 \\ \hline
uniform grid & $8 \times 9$ & $8 \times 9$ & $10 \times 11$ & $15 \times 16$ \\ \hline
overlapping clusters & yes & no & yes & yes \\ \hline
parameter centroid mean & 0.0275 & 0.0600 & 0.0970 & 0.0538 \\ \hline
distance snapshot mean & 0.0230 & 0.0284 & 0.0908 & 0.0217 \\ \hline
ANN mean & 0.0227 & 0.0263 & 0.0899 & 0.0214 \\ \hline
optimum & 0.0044 & 0.0101 & 0.0045 & 0.0088 \\ \hline
\end{tabular}
\caption{Local ROM with three different basis selection criteria: mean relative $L^\infty$ errors for the velocity
over all reference parameter points ($1640$ on a uniform $40 \times 41$ grid).}
\label{table:errors_naive_criterion2}
\end{center}
\end{table}
\rev{By "optimal cluster selection" we address the question of how parameter points are optimally associated with an already given clustering.}
Fig.~\ref{fig:cluster_selection_optimal} shows the optimal cluster selection.
This optimal selection can only be obtained from a fine grid of reference solutions. Thus it is usually not available.
The reason why we report it is because some interesting conclusions can be drawn from it.
First, the best possible clustering does not have contiguous clusters.
This is in contrast to the clusters created by the k-means algorithm, which are contiguous in the parameter space in all of our tests.
Second, the best possible approximation at a snapshot location is not necessarily given by the local cluster
to which that snapshot is assigned.
Third, there is still a reduction factor of $5$--$20$ in the relative $L^2$ velocity error that
could be gained, as one can see when comparing the respective errors
for the optimal cluster and the ANN selection criteria
in Tables~\ref{table:errors_naive_criterion1} and \ref{table:errors_naive_criterion2}.
\begin{figure}
\begin{center}
\includegraphics[width=.75\textwidth]{figures/grid_8_9_with_transitions_optimal.pdf}
\caption{Optimal cluster selection. Once again, different colors are used for different clusters.}
\label{fig:cluster_selection_optimal}
\end{center}
\end{figure}
To get close to the optimum, we adopt the following strategy.
We compute relative errors of all local ROMs at all snapshot locations.
This operation is performed offline and is not expensive since the exact
solution at the snapshot locations is available.
The relative errors can be used as training data for an ANN, which means that
the ANN training is treated as a regression and not a classification. Thus,
the ANN will approximate relative errors of each local ROM over the parameter domain.
This approximation is used as cluster selection criterion.
In this procedure, it is important to normalize the data.
Here, we use the inverse relative error and normalize the error vectors at each snapshot location.
We note that the training time for the ANN increases, but is still well below the ROM offline time, i.e.~30 minutes vs. several hours.
\rev{The ROM offline time is dominated by computing the affine expansion of the trilinear form. The computational cost grows
with the cube of the reduced order model dimension. This makes the localized ROM much faster than the global ROM since,
for example, the global ROM might have a dimension of $40$ while each of the
$8$ local ROMs has a dimension of about $10$. We found it impossible to reliably quantify
the training cost of an ANN. The performance of the stochastic gradient descent employed in the ANN training
varied significantly over multiple runs and required outer loops to check the accuracy. Thus, the given
measure of $\sim$30 minutes vs.\ several hours reflects only our experience with this particular model and probably cannot be generalized.}
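The assembly of the regression training data and the resulting selection criterion can be sketched as follows; the network architecture and training settings mirror the classifier sketch above and are, again, illustrative assumptions.
\begin{verbatim}
import numpy as np
import tensorflow as tf

# E: offline error table, E[i, c] = relative error of local ROM c
#    at snapshot location i (n_samples x n_clusters)
def regression_targets(E, eps=1e-12):
    """Inverse relative errors, normalized at each snapshot location."""
    T = 1.0 / (E + eps)
    return T / np.linalg.norm(T, axis=1, keepdims=True)

def train_regression_ann(mu, T):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(mu.shape[1],)),
        tf.keras.layers.Dense(2048, activation="relu"),
        tf.keras.layers.Dense(1024, activation="relu"),
        tf.keras.layers.Dense(T.shape[1]),   # one output per cluster
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(mu, T, epochs=500, verbose=0)
    return model

def select_cluster(model, mu_new):
    """Largest predicted inverse error = smallest expected error."""
    scores = model.predict(np.atleast_2d(mu_new), verbose=0)
    return int(np.argmax(scores, axis=1)[0])
\end{verbatim}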
We consider test 3 ($10 \times 11$ sampling grid, overlapping clusters) to assess two variants of the regression ANN,
which differ in how the training is done.
One variant takes all clusters into account simultaneously and is called ``regression ANN''.
It generates a mapping from $\mathbb{R}^2 \to \mathbb{R}^8$, i.e.~from the two-dimensional
parameter domain to the expected errors of the clusters.
The second variant treats each cluster independently and is called ``regression ANN, independent local ROMs''.
It generates eight mappings from $\mathbb{R}^2 \to \mathbb{R}$, i.e.~one for each cluster.
Table~\ref{table:errors_smart_crit} reports the mean relative $L^2$ and $L^\infty$ errors for the velocity.
Interestingly, taking all clusters into account simultaneously (``regression ANN'') is about $10\%$
more accurate than considering each cluster separately (``regression ANN, independent local ROMs'').
Moreover, we observe that the mapping from snapshot locations to the errors of the local ROMs
holds useful information, and the ANN consistently gets closest to the optimum.
Table~\ref{table:errors_smart_crit} also reports the errors obtained with the
Kriging DACE software\footnote{Kriging DACE - Design and Analysis of Computer Experiments, \texttt{http://www.omicron.dk/dace.html}.}
and with simply taking the cluster that best approximates the closest snapshot.
The closest snapshot is determined in the parameter domain with the Euclidean norm.
This is an easily implementable tool, which still performs better than the distance to the parameter centroid;
see \cite{Hess2019CMAME}.
From Table~\ref{table:errors_smart_crit}, we see that this simple criterion performs
only $15\% - 20\%$ worse and is very cheap to evaluate.
\begin{table}[h!]
\begin{center}
\begin{tabular}{ | l | l | l | }
\hline
& mean $L^2$ error & mean $L^\infty$ error \\ \hline
optimum & 0.0048 & 0.0045 \\ \hline
regression ANN & 0.0068 & 0.0064 \\ \hline
regression ANN, independent local ROMs & 0.0076 & 0.0071 \\ \hline
Kriging DACE & 0.0077 & 0.0073 \\ \hline
distance to next best-approx. snapshot & 0.0081 & 0.0077 \\ \hline
\end{tabular}
\caption{Local ROM with different basis selection criteria: mean relative $L^2$ and $L^\infty$
errors for the velocity over all reference parameter points ($1640$ on a uniform $40 \times 41$ grid).}
\label{table:errors_smart_crit}
\end{center}
\end{table}
\subsection{Comparing the global ROM, local ROM, and POD-NN approaches}\label{sec:comparison}
In the previous section, we learned that the regression ANN criterion outperforms all other local basis
selection criteria. In this section, we compare the local ROM approach with the regression ANN criterion
to our global ROM approach and to POD-NN in the reconstruction of the bifurcation diagram.
Table \ref{table:errors_compare_to_pod_nn} reports the mean relative $L^2$ and $L^\infty$ errors
for the velocity for the three approaches under consideration. We consider four different
numbers of samples: 42, 72, 110, and 240.
It can be observed that the global ROM shows slow convergence: not even $7\%$ accuracy is reached with
the finest sampling grid.
The local ROM with regression ANN cluster selection shows no distinctive convergence behavior,
which might indicate that the accuracy already saturates at smaller snapshot grid sizes.
The POD-NN shows the fastest convergence, reaching about $0.3\%$ error with the finest sampling grid.
We note that the POD-NN training did not take overfitting into account.
\rev{Overfitting occurs when the training data are approximated more accurately than the actual data of interest.
It can be detected by using a validation set, whose accuracy is measured independently and which is not included in the training.
Training can then be stopped as soon as the training data are approximated more accurately than the validation set.}
\begin{table}[h!]
\begin{center}
\begin{tabular}{ | l | l | l | }
\hline
42 snapshots & mean $L^2$ error & mean $L^\infty$ error \\ \hline
global ROM & 3.7022 & 3.1120 \\ \hline
local ROM + regression ANN & 0.0510 & 0.0486 \\ \hline
POD-NN & 0.0108 & 0.0104 \\ \hline \hline
72 snapshots & mean $L^2$ error & mean $L^\infty$ error \\ \hline
global ROM& 0.6970 & 0.5831 \\ \hline
local ROM + regression ANN & 0.0103 & 0.0098 \\ \hline
POD-NN & 0.0080 & 0.0075 \\ \hline \hline
110 snapshots & mean $L^2$ error & mean $L^\infty$ error \\ \hline
global ROM & 0.1044 & 0.0948 \\ \hline
local ROM + regression ANN & 0.0068 & 0.0064 \\ \hline
POD-NN & 0.0059 & 0.0053 \\ \hline \hline
240 snapshots & mean $L^2$ error & mean $L^\infty$ error \\ \hline
global ROM & 0.0762 & 0.0734 \\ \hline
local ROM + regression ANN & 0.0101 & 0.0096 \\ \hline
POD-NN & 0.0032 & 0.0027 \\ \hline
\end{tabular}
\end{center}
\caption{Comparison of global ROM, local ROM and POD-NN for four snapshot grids.}
\label{table:errors_compare_to_pod_nn}
\end{table}
Several remarks are in order.
\begin{remark}
The data reported in Table \ref{table:errors_compare_to_pod_nn} concerning the local ROM with regression ANN cluster selection
and the POD-NN are sensitive to the neural network training. In addition, the data for the local ROM approach
are sensitive to changes in the POD tolerances. Nonetheless, Table \ref{table:errors_compare_to_pod_nn}
provides a general indication of the performance of each method in relation to the other two.
\end{remark}
\begin{remark}
A rigorous comparison in terms of computational time is not possible because the different
methods are implemented on different platforms. However, we can make some general comments.
The POD-NN method has a significant advantage in terms of computational time:
one does not need to assemble the reduced trilinear form associated with the convective term, which
in our simulations takes about 1--3 hours. The time required for the
POD-NN evaluation in the online phase is virtually zero, while
projection methods need to perform a few iterations of the reduced fixed-point scheme, which takes about 10--30 s.
On the other hand, the POD-NN requires the training of the ANN. However, in our simulations
this takes only about 20 minutes.
\end{remark}
\begin{remark}
We also investigated a local POD-NN, i.e. we combined a k-means based localization approach with the POD-NN.
However, this led to significantly larger errors than the global POD-NN method.
In general, the error of a local POD-NN approach was in the range of the error of the ``local ROM + regression ANN''. Thus,
we did not pursue this approach any further.
\end{remark}
\section{Concluding remarks}\label{sec:concl}
We focused on reduced-order models (ROMs) for PDE
problems that exhibit a bifurcation when more than one parameter is varied.
For a particular fluid problem that features a supercritical pitchfork bifurcation
under variation of Reynolds number and geometry,
we investigated projection-based local ROMs and compared them
to an established global projection-based ROM as well as an emerging artificial neural network (ANN) based method
called POD-NN.
We showed that k-means based clustering, transition regions, cluster-selection criteria based on best-approximating clusters and
ANNs gain more than an order of magnitude in accuracy over the global projection-based ROM.
Upon examining the accuracy of POD-NN, we found that it consistently provides
more accurate approximations than the local projection-based ROM.
Nevertheless, the local projection-based ROM might be more amenable to the use of reduced-basis error estimators than the POD-NN.
This could be the object of future work.
\bibliographystyle{plain}
\section{Introduction}\label{sec:intro}
In recent years, the development of cosmic microwave background observations, led by surveys such as the Wilkinson Microwave Anisotropy Probe (WMAP) \citep{2013ApJS..208...19H}, {\it Planck} \citep{2020A&A...641A...1P,2020A&A...641A...6P}, the South Pole Telescope (SPT) \citep{Carlstrom:2009um} and the Atacama Cosmology Telescope (ACT) \citep{Aiola:2020azj}, has brought cosmology into the precision era. The new frontier for cosmological observations is to now reach a similar precision in surveys of the cosmic large-scale structure. Observations of the large-scale structure can provide information on the matter distribution in the Universe and on the growth of primordial perturbations with time. This is achieved, for example, by observing the lensing effect of intervening matter on background galaxies (cosmic shear) or by measuring the correlation function of the positions of galaxies (galaxy clustering). The former has been the main focus of the Kilo-Degree Survey (KiDS) collaboration which has provided constraints on cosmological parameters both for the standard $\Lambda$CDM model and for some extensions \citep{Kohlinger:2017sxk}. The latter has been explored to exquisite precision by several observational collaborations such as the two-degree Field Galaxy Redshift Survey \citep{Cole:2005sx}, the six-degree Field Galaxy Survey \citep{2011MNRAS.416.3017B}, WiggleZ \citep{2011MNRAS.418.1707B,2012PhRvD..86j3518P} and the Sloan Digital Sky Survey (SDSS) \citep{Eisenstein:2005su,2010MNRAS.401.2148P,2012MNRAS.427.3435A,Alam:2016hwk}. Experiments like the Dark Energy Survey (DES) have recently provided state-of-the-art measurements of cosmological parameters using both shear and clustering from photometric measurements \citep{Abbott:2021bzy}.
In the near future, observations of the large-scale structure will be further improved by new missions, either space-borne such as {\it Euclid} \citep{2011arXiv1110.3193L,2013LRR....16....6A,2018LRR....21....2A,2020A&A...642A.191E}, the Roman Space Telescope \citep{2015arXiv150303757S} and the Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx) \citep{2014arXiv1412.4872D,2018arXiv180505489D}, or ground-based such as the Dark Energy Spectroscopic Instrument (DESI) \citep{2016arXiv161100036D,2016arXiv161100037D}, the Rubin Observatory Legacy Survey of Space and Time (LSST) \citep{2009arXiv0912.0201L,2018arXiv180901669T,Ivezic:2008fe} and the SKA Observatory (SKAO) \citep{2015aska.confE..17A,2015aska.confE..19S,2015aska.confE..23B,2015aska.confE..24B,2015aska.confE..25C,2015aska.confE..31R,2020PASA...37....7S}. These future surveys will indeed improve the sensitivity of the measurements, and, in addition, will make it possible to perform observations on large volumes of the sky. With such observations, it will be possible to access, for the first time, ultra-large scales when measuring the correlation function of galaxy positions and shear. While this ability to access such large scales will allow us to better constrain cosmological models and test fundamental theories such as general relativity \citep{2015ApJ...811..116B,2021arXiv210512582S}, it will also pose new challenges to our ability to theoretically model the observables involved.
In particular, the galaxy correlation function at very large scales receives contributions from lensing, redshift-space distortions (RSD) and relativistic effects \citep{Yoo:2010ni,Bonvin:2011bg,Challinor:2011bk,2014JCAP...09..037B}, which are mostly negligible for the scales probed by current surveys \citep[see, e.g.,][]{2015MNRAS.447.1789Y,Fonseca:2015laa,2015ApJ...814..145A}. The modelling problem presented by such contributions is not as severe as the one of modelling nonlinear effects at small scales, where one needs to rely on model-dependent numerical simulations \citep[see, e.g.,][]{Martinelli:2020yto,Safi:2020ugb,Bose:2021mkz,2021MNRAS.503.1897C,2021arXiv210611718C}. However, in order to simplify the modelling of large-scale effects, several approximations are commonly made in computing theoretical predictions for galaxy number counts, such as the Limber \citep{LoVerde:2008re} and the flat-sky \citep{Matthewson:2020rdt} approximations.
Calculations that include large-scale effects and do not rely on approximations are feasible, and codes commonly used to compute theoretical predictions, such as \texttt{CAMB} \citep{Lewis:1999bs,Howlett:2012mh} and \texttt{CLASS} \citep{2011JCAP...07..034B}, allow us to obtain ``exact'' galaxy clustering power spectra. However, the computational time required for such exact calculations is significantly longer, causing parameter estimation pipelines to become unfeasible, as they require calculating tens of thousands of spectra to reconstruct posterior probability distributions for cosmological parameters.
Several attempts have been made to overcome this problem. For instance, fast Fourier transform (FFT) or logarithmic FFT (FFTLog) methods can be exploited to accelerate the computation of the theoretical predictions \citep{2017JCAP...11..054A,2017A&A...602A..72C,2018PhRvD..97b3504G}. Alternatively, approximations can be made to reduce the dimensionality of the integration, namely either assuming that the observed patch of sky is flat, and thus performing a two-dimensional Fourier transform on the sky \citep{Datta:2006vh,White:2017ufc,Jalilvand:2019brk,Matthewson:2020rdt}, or exploiting the behaviour of spherical Bessel functions at large angular multipoles \citep{1953ApJ...117..134L,1954ApJ...119..655L,1992ApJ...388..272K}.
In this work, we investigate how applying these commonly used approximations and neglecting lensing, RSD and relativistic contributions at large scales can bias the estimation of cosmological parameters, and possibly lead to false detections of non-standard cosmological models. Such an analysis has been of interest for some time \citep[see e.g.][]{Camera:2014dia,Camera:2014sba,Thiele:2019fcu,Villa:2017yfg}, but we investigate it here considering all the large-scale effects and approximations at the same time, while relying on a full Markov chain Monte Carlo (MCMC) pipeline for parameter estimation, rather than using Fisher matrices. Note that other studies \citep[e.g.][]{cardona2016,Tanidis:2019teo,Tanidis:2019fdh} did approach the problem from the MCMC point of view, but they all, in one way or another, had to simplify the problem in a way that either made them differ from a benchmark analysis, or assumed some of the aforementioned approximations.
Additionally, we propose a simple debiasing method to recover the true values of cosmological parameters without the need for exact calculations of the power spectra. Such a method will allow us to analyse future data sets in a manner that avoids computational problems, but ensures that we accurately obtain the correct best-fit values of cosmological parameters and estimates of their posterior distributions.
The paper is structured as follows. We review in \autoref{sec:GC} the theoretical modelling of galaxy number count correlations, presenting both the exact computation and the approximated one. In \autoref{sec:survey}, the experimental setup used throughout the paper is presented, while in \autoref{sec:cases} we describe the cosmological models considered in this paper and their impacts on galaxy number counts. In \autoref{sec:methods}, we present our analysis pipeline and introduce a debiasing method able to significantly reduce the bias on cosmological parameters introduced by incorrect modelling of the observables. We present our results in \autoref{sec:results} and draw our conclusions in \autoref{sec:conclusions}.
\section{Galaxy number counts and harmonic-space correlation functions}\label{sec:GC}
Observed fluctuations in galaxy number counts are primarily caused by underlying inhomogeneities in the matter density field on cosmological scales and, for galaxies, are a biased tracer of the cosmic large-scale structure. However, there is a score of secondary effects that also contribute to the observed signal \citep{Yoo:2010ni,Challinor:2011bk,Bonvin:2011bg}. The most important of them are the well-known redshift-space distortions, which represent the dominant term on sub-Hubble scales, and weak lensing magnification, important for deep surveys and wide redshift bins. Additionally, there is a more complicated set of relativistic terms that arise from radial and transverse perturbations along the photon path from the source to the observer.
Thus, we can write the observed galaxy number count fluctuation field in configuration space and up to first order in cosmological perturbation theory as
\begin{equation}
\Delta_{\rm g}=
b_{\rm lin}\,\delta_{\rm cs}
+\frac{1}{\mathcal{H}}\partial_\parallel^2V
+
b_{\rm mag}\,\kappa
+\Delta_{\rm loc}+\Delta_{\rm int}\;.\label{eq:Delta_g}
\end{equation}
To understand better what the expression above means, we shall now break it down into its individual terms:
\begin{enumerate}
\item The first term in \autoref{eq:Delta_g} sees the linear galaxy bias, \(b_{\rm lin}\), multiplying matter density fluctuations in the comoving-synchronous gauge, \(\delta_{\rm cs}=\delta_{\rm l}+3\,\mathcal{H}\,V\), with \(\delta_{\rm l}\) the density contrast of the matter field in the longitudinal gauge, \(\mathcal{H}\) the conformal Hubble factor, and \(V\) the peculiar velocity potential.
\item The second term is linear RSD, with
\(\partial_\parallel\) the spatial derivative along the line-of-sight direction \(\hat{\bm r}\).
\item The third term is the lensing magnification contribution, sourced by the integrated matter density along the line of sight, i.e.\ the weak lensing convergence \(\kappa\), modulated by the so-called magnification bias, \(b_{\rm mag}\),\footnote{\label{Footnote1}Note that several different symbols are used in the literature to denote the magnification bias and---as we shall see later on---the evolution bias, e.g.\ \(\alpha\), \(\mathcal Q\), and \(s\) for the former, and \(b_{\rm e}\) and \(f_{\rm evo}\) for the latter. Here, however, we adopt a more uniform notation, with \(b_{\rm lin}\), \(b_{\rm mag}\), and \(b_{\rm evo}\) respectively denoting the linear galaxy bias, the magnification bias, and the evolution bias. For the first two, the rationale behind our notation is that they respectively are what modulates the matter density fluctuations and lensing convergence.} which respectively take the forms
\begin{align}
\kappa(\bm r)&=\frac{1}{2}\int_0^r{\rm d}\tilde r\;( r-\tilde r)\,\frac{\tilde r}{ r}\,\nabla^2_\perp\Upsilon(\hat{\bm r},\tilde r)\;,\label{eq:kappa}\\
b_{\rm mag}(z)&=2\left[1-\left.\frac{\partial\ln\bar n_{\rm g}(z;L>L_\star)}{\partial\ln L}\right|_{L_\star}\right]\;,\label{eq:bmag}
\end{align}
with \( r(z)\) the radial comoving distance to redshift \(z\), such that \({\rm d} r={\rm d} z/H(z)\) and \(H(z)=(1+z)\mathcal{H}(z)\), \(\nabla^2_\perp\) the Laplacian on the transverse screen space, \(\Upsilon\) the Weyl potential, and \(\bar n_{\rm g}\) the mean redshift-space comoving number density of galaxies, which is a function of redshift and luminosity \(L\) (equivalently flux, or magnitude). \(L_\star\) represents the threshold luminosity value that a galaxy should have in order to be detected by the adopted instrument.
\item The penultimate term in \autoref{eq:Delta_g} gathers all the local contributions at the source, such as Sachs-Wolfe and Doppler terms, and reads
\begin{equation}
\Delta_{\rm loc}=(3-b_{\rm evo})\mathcal{H}\,V+A\,\partial_\parallel V+(
b_{\rm mag}+1
-A)\Phi+\frac{\Phi^\prime}{\mathcal{H}}\;,\label{eq:Delta_loc}
\end{equation}
with
\begin{align}
b_{\rm evo}(z)&=-\frac{\partial\ln\bar n_{\rm g}(z)}{\partial\ln(1+z)}\label{eq:bevo}
\end{align}
usually referred to as the evolution bias,\textsuperscript{\ref{Footnote1}}
\begin{equation}
A\equiv b_{\rm evo}
-b_{\rm mag}-2
-\frac{\mathcal{H}^\prime}{\mathcal{H}^2}+
\frac{b_{\rm mag}}{\mathcal{H} r}\;,
\end{equation}
\(\Phi\) being one of the two Bardeen potentials (the other being \(\Psi\)), and a prime denoting differentiation with respect to conformal time.
\item The last term, on the other hand, collects all non-local contributions, such as time delay and integrated Sachs-Wolfe type terms, and reads
\begin{equation}
\Delta_{\rm int}=
2\frac{b_{\rm mag}}{r}\int_0^r{\rm d}\tilde r\;\Phi-\int_0^r{\rm d}\tilde r\;\Phi^\prime\;.
\end{equation}
\end{enumerate}
\subsection{The exact expression}\label{sec:std}
The exact harmonic-space angular power spectrum of the observed galaxy number count fluctuations between two (infinitesimally thin) redshift slices at \(z\) and \(z^\prime\), \(C^{\rm Ex}_\ell(z,z^\prime)\), is then obtained by expanding \autoref{eq:Delta_g} in spherical harmonics, and taking the ensemble average
\begin{equation}
\left\langle\Delta_{{\rm g},\ell m}(z)\Delta^*_{{\rm g},\ell^\prime m^\prime}(z^\prime)\right\rangle\equiv\delta^{\rm K}_{\ell\ell^\prime}\delta^{\rm K}_{mm^\prime}C^{\rm Ex}_\ell(z,z^\prime),
\end{equation}
with $\delta^{\rm K}$ the Kronecker delta symbol. This leads to the expression
\begin{equation}
C^{\rm Ex}_\ell(z,z^\prime)=4\pi\int{\rm d}\ln k\;\mathcal{W}_\ell^{\rm g}(k;z)\,\mathcal{W}_\ell^{\rm g}(k;z^\prime)\,\mathcal{P}_\zeta(k)\;,\label{eq:Cl_exact}
\end{equation}
with \(\mathcal{W}_\ell^{\rm g}\) the kernel of galaxy clustering, encompassing contributions from all terms present in \autoref{eq:Delta_g}, and \(\mathcal{P}_\zeta(k)\propto A_{\rm s}\,k^{n_{\rm s}-1}\) the power spectrum of primordial curvature perturbations, \(A_{\rm s}\) and \(n_{\rm s}\) respectively being its amplitude and spectral index.
For a full expression for \(\mathcal{W}_\ell^{\rm g}\), we can write
\begin{equation}
\mathcal{W}_\ell^{\rm g}=\mathcal{W}_\ell^{\rm g,den}+\mathcal{W}_\ell^{\rm g,vel}+\mathcal{W}_\ell^{\rm g,len}+\mathcal{W}_\ell^{\rm g,rel}\;,\label{eq:W_full}
\end{equation}
with \(\mathcal{W}_\ell^{\rm g,vel}=\mathcal{W}_\ell^{\rm g,RSD}+\mathcal{W}_\ell^{\rm g,Dop}\), where \citep[see, e.g.,][]{DiDio:2013bqa}
\begin{equation}
\mathcal{W}_\ell^{\rm g,den}(k;z)=b_{\rm lin}(k,z)\delta(k,z)j_\ell\left[k r(z)\right]\;,\label{eq:W_den}
\end{equation}
\begin{equation}
\mathcal{W}_\ell^{\rm g,RSD}(k;z)=\frac{k}{\mathcal{H}(z)}V(k,z)j^{\prime\prime}_\ell\left[k r(z)\right]\;,\label{eq:W_RSD}
\end{equation}
\begin{align}
\mathcal{W}_\ell^{\rm g,Dop}(k;z)&=\bigg\{\left[b_{\rm evo}(z)-3\right]\frac{\mathcal{H}(z)}{k}j_\ell\left[k r(z)\right]\nonumber\\
&\phantom{=\bigg\{}-A(z)j^{\prime}_\ell\left[k r(z)\right]\bigg\}V(k,z)\;,\label{eq:W_Dop}
\end{align}
\begin{align}
\mathcal{W}_\ell^{\rm g,len}(k;z)&=\ell(\ell+1)b_{\rm mag}(z)\int_0^{ r(z)}{\rm d}\tilde r\;\frac{ r(z)-\tilde r}{r(z)\tilde r}\,\Upsilon(k,\tilde r)j_\ell\left(k\tilde r\right)\;,\label{eq:W_len}
\end{align}
\begin{align}
\mathcal{W}_\ell^{\rm g,rel}(k;z)&=\big\{\left[1-A(z)\right]\Psi(k,z)+2\left[b_{\rm mag}(z)-1\right]\Phi(k,z)\nonumber\\
&\phantom{=\big\{}+\mathcal{H}^{-1}(z)\Phi^\prime(k,z)\big\}j_\ell\left[k r(z)\right]\nonumber\\
&\phantom{=}-2\frac{b_{\rm mag}(z)}{ r(z)}\int_0^{ r(z)}{\rm d}\tilde r\;\Upsilon(k,\tilde r)j_\ell\left(k\tilde r\right)\;.\label{eq:W_rel}
\end{align}
In harmonic-space analyses, it is customary to subdivide the observed source population into redshift bins. This is done, for instance, to reduce the dimensionality of the data vector---and consequently the covariance matrix---with the aim of reducing in turn the computational complexity of the problem. Otherwise, redshift information for the observed galaxies might be too poor to allow us to pin them down in the radial direction, as is the case with photometric redshift estimation. In this case, galaxies are usually binned into $\mathcal O(1)-\mathcal O(10)$ bins spanning the observed redshift range. Whatever the reason, in practice this corresponds to having
\begin{equation}
C^{\rm Ex}_{ij\ell}=4\pi\int{\rm d}\ln k\;\mathcal{W}_{i\ell}^{\rm g}(k)\,\mathcal{W}_{j\ell}^{\rm g}(k)\,\mathcal{P}_\zeta(k)\;,\label{eq:Cijl_exact}
\end{equation}
where
\begin{equation}
\mathcal{W}_{i\ell}^{\rm g}(k)=\int{\rm d} z\;\frac{c}{H(z)}\mathcal{W}_\ell^{\rm g}(k;z)n^i(z)\;,
\end{equation}
with $n^i(z)$ the galaxy redshift distribution in the $i$th redshift bin, normalised to unit area.
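For reference, exact number-count spectra of this form can be obtained with public Einstein--Boltzmann solvers. A minimal sketch using the Python wrapper of \texttt{CLASS} is given below; the bin definitions, multipole range and cosmological parameters are illustrative, and the setting names should be checked against the \texttt{CLASS} documentation:
\begin{verbatim}
from classy import Class

cosmo = Class()
cosmo.set({
    'output': 'nCl',
    # density, RSD, lensing and relativistic (GR) contributions
    'number count contributions': 'density, rsd, lensing, gr',
    'selection': 'gaussian',                 # illustrative bin windows
    'selection_mean': '0.3, 0.5, 0.95',
    'selection_width': '0.02, 0.025, 0.09',
    'l_max_lss': 400,
    'h': 0.67, 'omega_b': 0.02245, 'omega_cdm': 0.12056,
    'A_s': 2.126e-9, 'n_s': 0.96,
})
cosmo.compute()
cls = cosmo.density_cl(400)  # dict whose entry 'dd' holds the C_l^{ij}
\end{verbatim}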
\subsection{Widely used approximations}\label{sec:approx}
The computation of harmonic-space power spectra requires evaluating the triple integral of \autoref{eq:Cijl_exact}, with the kernel $\mathcal{W}_{i\ell}^{\rm g}$ given by the equations above. However, such an integration is numerically cumbersome, especially because of the presence of spherical Bessel functions---highly oscillatory functions whose amplitude and period vary significantly with their argument. As a consequence, the numerical integration has to be performed with highly adaptive methods, at the cost of computation speed.
Over the years, various algorithms have been proposed with the aim of speeding up the computation of harmonic-space power spectra. Mostly, they rely on FFT/FFTLog methods \citep[see, e.g.,][]{2017JCAP...11..054A,2017A&A...602A..72C,2018PhRvD..97b3504G}.
On the other hand, the full computation is not always necessary, and approximations can be made to speed up the computation, e.g.\ by applying the Limber or the flat-sky approximation (often erroneously thought to be the same). Here, we shall focus on the former, which is by far the most widely employed. It relies on the following property of spherical Bessel functions,
\begin{equation}
j_\ell(x)\underset{\ell\gg1}{\longrightarrow}\sqrt{\frac{\upi}{2\ell+1}}\delta_{\rm D}\left(\ell+\frac{1}{2}-x\right)\;,\label{eq:Limber}
\end{equation}
where $\delta_{\rm D}$ is a Dirac delta.\footnote{Note that the $+1/2$ term comes from the relation between a spherical Bessel function of order $\ell$, $j_\ell$, and the ordinary Bessel function of order $L=\ell+1/2$, $J_L$.} By performing the substitution of \autoref{eq:Limber} into \autoref{eq:Cijl_exact}, we can effectively get rid of two integrations, thus boosting significantly the speed of the computation.
Moreover, the relative importance of the different terms in \autoref{eq:W_full} depends on various survey-dependent factors. For instance, RSD are mostly washed out for broad redshift bins, whereas lensing magnification is instead favoured by them. Similarly, the Doppler contribution decays quickly as the redshift of the shell grows, whilst integrated terms like lensing gain in weight. Lastly, the importance of the various effects also varies with the scales of interest, as can be seen from the $\mathcal{H}/k$ factors in \autoref{eq:W_den} to \autoref{eq:W_rel}. Note also that, at first order in cosmological perturbation theory, the Einstein equations fix $V\sim\delta/k$ and $\Phi\sim\Psi\sim\delta/k^2$. All combined, this makes $\mathcal{W}^{\rm g,rel}_\ell$ important only on very large scales.
For the reasons explained above, galaxy clustering in harmonic space is customarily restricted to Newtonian density fluctuations alone, leading to the well-known expression
\begin{equation}
C^{\rm Ap}_{ij\ell}=\int{\rm d} z\;\frac{\left[H(z)\,b_{\rm lin}(z)\right]^2\,n^i(z)\,n^j(z)}{r^2(z)}\,P_{\rm lin}\!\left[\frac{\ell+1/2}{r(z)},z\right]\;,\label{eq:Cijl_approx}
\end{equation}
where $P_{\rm lin}(k,z)$ is the linear matter power spectrum, and we have assumed that linear galaxy bias is only redshift-dependent.
The actual accuracy of such an approximation, however, cannot be estimated a priori, since it strongly depends on the integrand of \autoref{eq:Cijl_approx}. In particular, \autoref{eq:Cijl_approx} is known to agree well with the exact expression of \autoref{eq:Cijl_exact} if the kernel of the integral is broad in redshift, and it works better at low redshift than at high redshift.
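As an illustration of the gain in simplicity, the Limber spectrum of \autoref{eq:Cijl_approx} reduces to a one-dimensional quadrature. A minimal sketch, assuming precomputed callables for $H(z)$, $r(z)$, $b_{\rm lin}(z)$, the normalised $n^i(z)$ and $P_{\rm lin}(k,z)$, and leaving unit conventions (factors of $c$) to be made consistent with \autoref{eq:Cijl_approx}, is:
\begin{verbatim}
from scipy.integrate import quad

def cl_limber(ell, i, j, H, r, b_lin, n_bin, P_lin,
              z_min=1e-3, z_max=1.1):
    """Limber approximation for C_l^{ij} between bins i and j."""
    def integrand(z):
        k = (ell + 0.5) / r(z)               # Limber wavenumber
        return (H(z) * b_lin(z))**2 \
               * n_bin(i, z) * n_bin(j, z) / r(z)**2 \
               * P_lin(k, z)
    val, _ = quad(integrand, z_min, z_max, limit=200)
    return val
\end{verbatim}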
\section{Experimental setup}\label{sec:survey}
In the coming decade, several planned surveys of the cosmic large-scale structure will provide us with observations of the galaxy distribution with unprecedented sensitivity at very large scales. It is therefore crucial to assess how the common approximations described in \autoref{sec:GC} will impact the accuracy of the results we will be able to obtain. Therefore, in this paper we adopt the specifications of a very deep and wide galaxy clustering survey with high redshift accuracy. We emphasise that we are not interested in forecasts for a specific experiment, but rather in assessing whether and how much various approximations affect the final science output. For this reason, we shall focus on an idealised survey, loosely inspired by the envisaged future construction phase of the SKAO. Specifically, we consider an HI-galaxy redshift survey, assuming that the instrument will be able to provide us with spectroscopic measurements of the galaxies' redshifts through the detection of the HI emission line in the galaxy spectra. Therefore, for the purposes of the harmonic-space tomographic studies we focus on in this paper, we shall consider the error on such redshift measurements to be negligible.
Here, we follow the prescription and fitting functions of \citet{2015MNRAS.450.2251Y} to characterise the source galaxy distribution as a function of both redshift and flux limit. The latter will be particularly important in determining the magnification bias of the sample. Calculations in \citet{2015MNRAS.450.2251Y} were based on the $S^3$-SAX simulations by \citet{Obreschkow:2009ha} and assumed that any galaxy with an integrated line flux above a given signal-to-noise ratio threshold would be detected. The fitting formulae adopted here are
\begin{align}
\frac{{\rm d} N_{\rm gal}}{{\rm d} z} &= 10^{c_1}z^{c_2} \exp({-c_3 z})\,\mathrm{deg^{-2}}\;,\label{eq:hi_dndz}\\
b_{\rm lin}(z) &= c_4 \exp(c_5\,z)\;,\label{eq:linbias}
\end{align}
where $N_{\rm gal}$ is the total number of galaxies in the entire redshift range of the survey, and parameters \(c_i\) can be found in \citet[][]{2015MNRAS.450.2251Y} for a wide range of flux thresholds, from \(0\) to \(200\,\mathrm{\mu Jy}\). We show in \autoref{tab:survey} values of \(c_i\) used in the present work, corresponding to those used in \citet{Sprenger:2018tdb}.
Given the galaxy distribution of \autoref{eq:hi_dndz}, we focus on the redshift range $0.001<z<1.1$ with $N_{\rm gal}$ given in \autoref{tab:survey}, and divide it into $N_{\rm bin}=15$ redshift bins assuming that each one contains the same number of galaxies (see the upper panel of \autoref{fig:specs}). In the lower panel of \autoref{fig:specs}, we show the redshift evolution of the linear galaxy bias, the magnification bias and the evolution bias given, respectively, by \autoref{eq:linbias}, \autoref{eq:bmag} and \autoref{eq:bevo}, for the survey under consideration.
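The edges of such equipopulated bins follow from inverting the cumulative redshift distribution; a short sketch (the fine interpolation grid is an illustrative choice) is:
\begin{verbatim}
import numpy as np

def dndz(z, c1=6.32, c2=1.74, c3=5.42):
    """dN/dz fitting formula, in galaxies per deg^2 per unit redshift."""
    return 10.0**c1 * z**c2 * np.exp(-c3 * z)

def equipopulated_edges(n_bins=15, z_min=0.001, z_max=1.1, n_grid=10**5):
    z = np.linspace(z_min, z_max, n_grid)
    cdf = np.cumsum(dndz(z))
    cdf /= cdf[-1]                  # normalised cumulative distribution
    # invert the CDF at equally spaced quantiles
    inner = np.interp(np.arange(1, n_bins) / n_bins, cdf, z)
    return np.concatenate(([z_min], inner, [z_max]))
\end{verbatim}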
Using these survey specifications, we create a mock data set for galaxy clustering observations by calculating the exact angular power spectra $C^{\rm Ex}_\ell$ described in \autoref{sec:GC}. We assume a $\Lambda$CDM cosmology with fiducial values of parameters given in \autoref{tab:survey}, where $\omega_{\rm b}$ and $\omega_{\rm c}$ are the baryon and cold dark matter energy densities, respectively, $h$ is the reduced present-day Hubble expansion rate, $A_{\rm s}$ and $n_{\rm s}$ are, respectively, the amplitude and spectral index of the primordial curvature power spectrum, and $\sum{m_\nu}$ is the sum of the neutrino masses.
\begin{table}
\LARGE
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{9}{c}{Survey specifications}\\
\hline
$N_{\rm gal}$ & $f_{\rm sky}$ & $z_{\rm min}$ & $z_{\rm max}$ & $c_1$ & $c_2$ & $c_3$ & $c_4$ & $c_5$\\
$9.4\times10^8$ & $0.7$ & $0.001$ & $1.1$ & $6.32$ & $1.74$ & $5.42$ & $0.55$ & $0.78$\\
\hline
\hline
\multicolumn{9}{c}{Fiducial cosmology}\\
\hline
$\omega_{\rm b}$ & $\omega_{\rm c}$ & $h$ & $A_{\rm s}\times 10^9$ & $n_{\rm s}$ & $\sum{m_\nu}$ [eV] & $w$ & $f_{\rm NL}$ & \\
$0.02245$ & $0.12056$ & $0.67$ & $2.126$ & $0.96$ & $0.06$ & $-1$ & $0$ & \\
\hline
\end{tabular}}
\caption{Survey specifications and fiducial cosmology used in the present work to obtain the mock data set and experimental noise.}
\label{tab:survey}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{plots/galdist.pdf}\\
\includegraphics[width=0.48\textwidth]{plots/biases.pdf}
\caption{{\it Upper panel:} Galaxy distribution as described by \autoref{eq:hi_dndz} (in black) with the limits of the equipopulated redshift bins considered in the present paper (in colour). {\it Lower panel:} Trends in redshift for the linear galaxy bias of \autoref{eq:linbias} (blue curve), the magnification bias of \autoref{eq:bmag} (orange curve) and the evolution bias of \autoref{eq:bevo} (green curve). The intersection between the horizontal dashed and vertical dotted black lines shows where the linear galaxy bias crosses unity.}
\label{fig:specs}
\end{figure}
\section{Case studies}\label{sec:cases}
We study four representative cosmological models in order to demonstrate how the approximations of \autoref{sec:approx} can bias the estimation of cosmological parameters using a next-generation survey able to access ultra-large scales, as described in \autoref{sec:survey}, and how the method we present in this paper debiases the constraints while keeping the computational cost of the parameter estimation procedure significantly lower than that of an exact analysis. These four models are the standard $\Lambda$CDM model and three of its minimal extensions, where either the dark energy equation of state $w$ or the sum of the neutrino masses $\sum{m_\nu}$ or the local primordial non-Gaussianity (PNG) parameter $f_\mathrm{NL}$ is allowed to vary as an additional free parameter. We denote these extensions by $w$CDM, $\Lambda$CDM+$m_\nu$ and $\Lambda$CDM+$f_\mathrm{NL}$, respectively.
\subsection{Standard model and its simple extensions}\label{sec:lcdm}
We specify the standard $\Lambda$CDM model by the five free parameters $\omega_{\rm b}$, $\omega_{\rm c}$, $h$, $A_{\rm s}$ and $n_{\rm s}$.\footnote{Note that $\Lambda$CDM also requires the reionization optical depth $\tau$ as a free parameter. However, we do not vary $\tau$ in our analysis as we do not expect the large-scale observables to constrain this quantity.} Following \citet{2020A&A...641A...6P}, we fix the value of $\sum{m_\nu}$ to $0.06~\mathrm{eV}$ for $\Lambda$CDM. The parameters $\{\omega_{\rm b},\omega_{\rm c},h,A_{\rm s},n_{\rm s}\}$ affect the angular power spectra of the observed galaxy number count fluctuations differently and on different angular scales. Here we are interested in ultra-large scales, which are expected to be particularly sensitive to the parameters that quantify cosmic initial conditions, i.e.\ $A_{\rm s}$ and $n_{\rm s}$.
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{plots/ns_exact_ratio_final.pdf}
\includegraphics[width=0.45\textwidth]{plots/w_exact_ratio_final.pdf}
\includegraphics[width=0.45\textwidth]{plots/mnu_exact_ratio_final.pdf}
\includegraphics[width=0.45\textwidth]{plots/fNL_exact_ratio_final.pdf}
\caption{Effects of cosmological parameters on the angular power spectrum of observed galaxy number count fluctuations, $C_\ell$, on large scales. The four panels depict the effects of: the primordial scalar spectral index $n_{\rm s}$ in $\Lambda$CDM (upper left panel); the dark energy equation of state $w$ in $w$CDM (upper right panel); the sum of the neutrino masses $\sum{m_\nu}$ in $\Lambda$CDM+$m_\nu$, with values in $\mathrm{eV}$ (lower left panel); and the local primordial non-Gaussianity $f_\mathrm{NL}$ in $\Lambda$CDM+$f_\mathrm{NL}$ (lower right panel). All the power spectra are exact, i.e.\ no approximations are made in their computations, and they are shown in comparison with the fiducial $\Lambda$CDM spectra with $n_{\rm s}=0.96$, $w=-1$, $\sum{m_\nu}=0.06~\mathrm{eV}$ and $f_\mathrm{NL}=0$. Each panel contains three sets of spectra computed for the three redshift bins $5$, $10$ and $15$, corresponding to low, medium and high redshifts (from top to bottom in each panel). The redshift range of each bin is indicated in the respective plot in the upper left panel.}
\label{fig:theory}
\end{figure*}
In order to illustrate the large-scale effects of the parameters, we show, as an example, in the upper left panel of \autoref{fig:theory} the impact of varying the scalar spectral index $n_{\rm s}$ on the power spectrum at multipoles $\ell\leq400$, computed at redshift bins $5$, $10$ and $15$ (corresponding to redshift ranges $0.28<z<0.32$, $0.49<z<0.54$ and $0.86<z<1.04$, respectively) as given in the upper panel of \autoref{fig:specs}. For each redshift bin, the corresponding galaxy number count power spectra for the three values $n_{\rm s}=0.92$, $n_{\rm s}=0.96$ and $n_{\rm s}=1$ are shown relative to the spectrum for $n_{\rm s}=0.96$, which we use as our fiducial value in the rest of this paper. Note that these spectra are all exact, i.e.\ they are computed without making any approximations. As the figure shows, in all the redshift bins, the lower the value of $n_{\rm s}$, the more enhanced the power spectra at ultra-large scales, namely scales with $\ell\lesssim \mathcal{O}(100)$, while we see the opposite effect at smaller scales. This is because the smaller the value of $n_{\rm s}$, the steeper (or more ``red-tilted'') the primordial power spectrum, resulting in larger amplitudes of fluctuations at extremely large scales. This steeper spectrum then leads to a suppression of amplitudes at scales smaller than some ``pivot'' scale. Note, however, that the enhancement/suppression is not physical, as it depends on the scale used as a pivot---namely, on fixing either $A_{\rm s}$ or $\sigma_8$ (the amplitude of the linear power spectrum on the scale of $8 h^{-1} \mathrm{Mpc}$) as a fundamental parameter.
The figure for $n_{\rm s}$ already shows the importance of correctly computing the angular power spectra for accurately estimating the cosmological parameters using ultra-large-scale information. As the figure shows, even changing $n_{\rm s}$ to the extreme values of $0.92$ and $1$, both of which have already been ruled out by the current constraint $n_{\rm s}\approx0.965\pm0.0042$ \citep{2020A&A...641A...6P}, changes the power spectra by $\mathcal{O}(10\%)$. On the other hand, as we will see in \autoref{sec:results}, the approximations of \autoref{sec:approx} may easily result in $>\mathcal{O}(10\%)$ errors in the computation of the spectra on large scales, which will then lead to inaccurate, or biased, estimates of parameters like $n_{\rm s}$.
An inaccurate estimation of a cosmological parameter can also result in a false detection of new physics when there is none, or in no detection when there is. In order to demonstrate this problem, we present in the upper right and lower left panels of \autoref{fig:theory} the effects of the two important non-standard cosmological parameters, $w$ and $\sum{m_\nu}$, on the power spectrum at large scales for $w$CDM and $\Lambda$CDM+$m_\nu$, the two simple extensions of $\Lambda$CDM that we introduced earlier. The panels again depict the spectra for the three redshift bins $5$, $10$ and $15$, with the additional parameters $w$ and $\sum{m_\nu}$ of the two extensions set to $\{-1.2,-1,-0.8\}$ and $\{0.003,0.06,0.3\}~\mathrm{eV}$, respectively. Note that throughout this paper, we always use $w=-1$ and $\sum{m_\nu}=0.06~\mathrm{eV}$ as the fiducial values for these parameters.
We notice that changing the value of $w$ has a few large-scale effects. First of all, setting $w$ to a value smaller or larger than $-1$ does not affect the spectra similarly in different redshift bins. Focusing first on the $w=-1.2$ case, which corresponds to a phantom dark energy, we see that the spectra are all suppressed at ultra-large scales compared to the standard $w=-1$ case, and by increasing the bin's redshift, not only does the range of the suppressed power extend to smaller scales, but also the higher the redshift, the more suppressed the spectra (on all scales). The effect is the opposite for the $w=-0.8$ case, and increasing the bin's redshift results in more enhanced spectra compared to the baseline $w=-1$. The $w\neq-1$ enhancement/suppression of power and its redshift dependence can be explained for smaller scales by the fact that the linear growth rate of the large-scale structure, $f$, is significantly affected by $w$, especially at low redshifts, where dark energy becomes more important \citep[see, e.g.,][]{2010deto.book.....A}. At any given redshift $z$, a larger $w$ makes the dark energy component more important compared to the matter component, and since the growth rate $f(z)$ increases with the matter component, it decreases as $w$ increases. This is exactly what we see in \autoref{fig:theory} for the three values of $w=-1.2$, $w=-1$ and $w=-0.8$. We also see that, as expected, the differences between the three spectra at smaller scales are significantly reduced when we increase the bin's redshift. The dependence of the power spectrum on the value of $w$ is, however, much more involved for very large scales, as the spectrum on those scales is determined by a combination of different $w$-dependent effects, such as the integrated Sachs--Wolfe effect. Finally, in all three bins of the upper right panel of \autoref{fig:theory}, the oscillatory features in the ratios $C_\ell/C_\ell^\mathrm{fid}$ show that the baryon acoustic oscillations shift towards smaller scales with increasing redshift for both $w=-1.2$ and $w=-0.8$.
When considering the sum of the neutrino masses, we see that increasing $\sum{m_\nu}$ results in the suppression of power on all scales and in all redshift bins, although this suppression is significantly stronger at smaller scales (or higher multipoles). There are several reasons for the small-scale reduction of the power spectra in the presence of massive neutrinos~\citep[see, e.g.,][]{Lesgourgues:2012uu}, the most important of which are the absence of neutrino perturbations in the total power spectrum and the slower growth rate of matter perturbations at late times. On extremely large scales, however, neutrino free-streaming can be ignored~\citep[see, e.g.,][]{Lesgourgues:2012uu} and neutrino perturbations are therefore indistinguishable from matter perturbations. The power spectra then depend only on the total matter+neutrino density fraction today and on the primordial power spectrum. The small suppression of the angular power spectra at ultra-large scales, as seen in \autoref{fig:theory}, is therefore due to the contribution of massive neutrinos to the total density parameter $\Omega_\mathrm{m}$.
\subsection{Primordial non-Gaussianity and scale-dependent bias}\label{sec:fnl}
An important extension of the standard $\Lambda$CDM model for our studies of ultra-large scales is $\Lambda$CDM+$f_\mathrm{NL}$, where the parameter $f_\mathrm{NL}$ is added to the model in order to capture the effects of a non-zero local primordial non-Gaussianity (PNG). \citet{Dalal:2007cu,Matarrese:2008nc,Slosar:2008hx} showed that local PNG modifies the Gaussian bias by contributing a scale-dependent piece of the form
\begin{equation}
\Delta b(z,k)=3[b_\mathrm{lin}(z)-1]\frac{\delta_{\rm c}\,\Omega_\mathrm{m}\,H_0^2}{k^2\,T(k)\,D(z)}f_\mathrm{NL}\,,\label{eq:scaledepbias}
\end{equation}
where $\Omega_\mathrm{m}$ is the present-day matter density parameter, $H_0$ is the value of the Hubble expansion rate today, $T(k)$ is the matter transfer function (with $T\to 1$ as $k\to 0$), $D(z)$ is the linear growth factor normalised to $(1+z)^{-1}$ in the matter-dominated Universe, and $\delta_{\rm c}\sim1.68$ is the (linear) critical matter density threshold for spherical collapse. The appearance of the $k^2$ factor in the denominator of \autoref{eq:scaledepbias} immediately tells us that ultra-large scales are the natural choice for placing constraints on $f_\mathrm{NL}$ using this scale-dependent bias, as the signal becomes stronger when $k\to 0$.
The lower right panel of \autoref{fig:theory} shows the effects of non-zero values of $f_\mathrm{NL}$ on the power spectrum at large scales---note that similar to the previous cases, the spectra are exact, i.e.\ no approximations are made in their computations. We first notice that, as expected, a non-zero $f_\mathrm{NL}$ only affects the ultra-large scales substantially, by enhancing or suppressing the power spectra, and that this happens in all the redshift bins shown in the figure. This again emphasises the importance of accurately and precisely measuring the power spectra at ultra-large scales, as even the unrealistically large values of $f_\mathrm{NL}=\pm20$ (see \citet{2020A&A...641A...9P} for the current observational constraints on $f_\mathrm{NL}$) shown in the figure affect the spectra by only $<5\%$.
The figure also shows that a negative (positive) $f_\mathrm{NL}$ enhances (suppresses) the spectra for the two low-redshift bins $5$ and $10$, while the effect is the opposite for the high-redshift bin $15$. Here we explain the reason for this surprising but important feature. For that, let us investigate the redshift dependence of \autoref{eq:scaledepbias} for the full bias. The quantity $b_\mathrm{lin}$ is redshift-dependent and given by \autoref{eq:linbias} for the survey we consider in this paper. As can be seen in the lower panel of \autoref{fig:specs}, the quantity $b_\mathrm{lin}-1$ is negative for $z\lesssim0.75$ and positive for $z\gtrsim0.75$, which means that a negative (positive) $f_\mathrm{NL}$ enhances the bias at low (high) redshifts and suppresses it at high (low) redshifts. Now looking at the upper panel of \autoref{fig:specs}, we see that the two upper bins of the lower right panel of \autoref{fig:theory} (bins 5 and 10) contain redshifts that are lower than $0.75$, while the lower bin (bin 15) includes redshifts higher than $0.75$.
It is, however, important to note that this is the case only if one assumes a $b_\mathrm{lin}-1$ factor in \autoref{eq:scaledepbias}, whose validity has been questioned in the literature (see, e.g., \citet{Barreira:2020ekm} and references therein). For this reason, we modify \autoref{eq:scaledepbias} as
\begin{equation}
\Delta b(z,k)=3f_\mathrm{NL}[b_\mathrm{lin}(z)-p]\delta_{\rm c}\frac{\Omega_\mathrm{m}H_0^2}{k^2T(k)D(z)}\,,\label{eq:scaledepbias-p}
\end{equation}
where $p$ is now a free parameter to be determined by cosmological simulations. It is argued by~\citet{Barreira:2020ekm} that $p=1$ holds for gravity-only dynamics when universality of the halo mass function is assumed, while other values of $p$ provide better descriptions of observed galaxies. Depending on the specific analysis and modelling, different values of $p$ have been obtained: e.g., \citet{Slosar:2008hx} and \citet{Pillepich:2008ka} showed that $p=1.6$ provides a better description of host halo mergers, while \citet{Barreira:2020kvh} showed that $p=0.55$ better describes stellar-mass-selected galaxies in the IllustrisTNG simulation.
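For illustration, the following minimal Python sketch evaluates the scale-dependent bias of \autoref{eq:scaledepbias-p} (setting $p=1$ recovers \autoref{eq:scaledepbias}); the callables \texttt{b\_lin}, \texttt{T} and \texttt{D} are assumed to be supplied externally, e.g.\ from a Boltzmann code, and $k$ is assumed to be in units of $\mathrm{Mpc}^{-1}$:
\begin{verbatim}
import numpy as np

def delta_b(k, z, b_lin, T, D, f_NL, p=1.0,
            Omega_m=0.31, H0=67.0, delta_c=1.686):
    """Scale-dependent bias of Eq. (eq:scaledepbias-p).

    b_lin : callable, linear bias b_lin(z)
    T     : callable, matter transfer function, T -> 1 as k -> 0
    D     : callable, growth factor, D -> 1/(1+z) in the matter era
    p     : p = 1 recovers the universal-mass-function result
    """
    c = 299792.458  # km/s; converts H0 [km/s/Mpc] to Mpc^-1
    pref = 3.0 * f_NL * (b_lin(z) - p) * delta_c \
           * Omega_m * (H0 / c)**2
    return pref / (k**2 * T(k) * D(z))
\end{verbatim}
The $1/k^2$ prefactor makes the correction negligible on small scales and dominant as $k\to0$, which is why ultra-large scales drive the constraints on $f_\mathrm{NL}$.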
\section{Methodology}\label{sec:methods}
In this paper, we are interested in estimating the impact of large-scale effects and approximations on the estimation of cosmological parameters. In order to do so, we fit the mock data set generated with the exact $C^{\rm Ex}_\ell$ spectra, as described in \autoref{sec:survey}, using the $C^{\rm Ap}_\ell$ spectra, which make use of the several common approximations discussed in \autoref{sec:approx}.
We implement in the public code \texttt{Cobaya} \citep{Torrado:2020dgo} a new likelihood module which enables us to obtain from \texttt{CAMB} \citep{Lewis:1999bs,Howlett:2012mh} the approximated spectra $C^{\rm Ap}_\ell$ and compare them with the mock data set. Such an analysis matches the approach commonly used for parameter estimation with galaxy number count data, where $C^{\rm Ap}_\ell$ is computed at each step in the MCMC rather than $C^{\rm Ex}_\ell$, as the computation of the latter is extremely time consuming and therefore unfeasible to repeat $\mathcal O(10^4)$ times.
For each point $\bm\Theta$ in the sampled parameter space, we compute the $\chi^2$ using the approach presented in \citet{2013JCAP...01..026A}, i.e.,
\begin{equation}\label{eq:like}
\chi^2(\bm\Theta)=\sum_\ell{(2\ell+1)f_{\rm sky}\left(\frac{d_\ell^{\rm mix}(\bm\Theta)}{d_\ell^{\rm th}(\bm\Theta)}+\ln{\frac{d_\ell^{\rm th}(\bm\Theta)}{d_\ell^{\rm obs}}}-N_{\rm bin}\right)}\, ,
\end{equation}
where $N_{\rm bin}$ is the number of bins, and
\begin{align}
d_\ell^{\rm th}(\bm\Theta) &= {\rm det}\left[\tilde{C}_{ij\ell}^{{\rm Ap}}(\bm\Theta)\right]\, , \\
d_\ell^{\rm obs} &= {\rm det}\left[\tilde{C}_{ij\ell}^{{\rm Ex}}(\bm\Theta^{\rm fid})\right]\,.
\end{align}
The tilde indicates that the used spectra contain an observational noise $N_{ij\ell}=\delta^{\rm K}_{ij}/n_i$, with $n_i$ the number of galaxies in the $i$th bin and $\delta^{\rm K}_{ij}$ the Kronecker delta, i.e.,\ $\tilde{C}_{ij\ell}=C_{ij\ell}+N_{ij\ell}$.
The quantity $d_\ell^{\rm mix}(\bm\Theta)$ in \autoref{eq:like} is constructed from $d_\ell^{\rm th}(\bm\Theta)$ by replacing, one after each other, the theoretical spectra with the corresponding observational ones \citep[for details, see][]{2013JCAP...01..026A}.
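As a concrete sketch, assuming the noise-included spectra $\tilde{C}_{ij\ell}$ are stored as arrays of shape $(n_\ell, N_{\rm bin}, N_{\rm bin})$, \autoref{eq:like} can be evaluated as follows; the row-by-row replacement implements the construction of $d_\ell^{\rm mix}$ described above:
\begin{verbatim}
import numpy as np

def chi2(C_th, C_obs, ells, f_sky):
    """Gaussian determinant chi^2 of Eq. (eq:like).

    C_th, C_obs : noise-included spectra, shape (n_ell, N_bin, N_bin)
    """
    N_bin = C_th.shape[-1]
    total = 0.0
    for i, ell in enumerate(ells):
        d_th = np.linalg.det(C_th[i])
        d_obs = np.linalg.det(C_obs[i])
        d_mix = 0.0
        for a in range(N_bin):        # replace one bin at a time
            M = C_th[i].copy()
            M[a, :] = C_obs[i][a, :]  # spectra matrices are symmetric,
            d_mix += np.linalg.det(M) # so one row per term suffices
        total += (2.0*ell + 1.0) * f_sky * (
            d_mix/d_th + np.log(d_th/d_obs) - N_bin)
    return total
\end{verbatim}
Note that at the fiducial point $C^{\rm th}=C^{\rm obs}$, so $d^{\rm mix}/d^{\rm th}=N_{\rm bin}$ and the logarithm vanishes, giving $\chi^2=0$, as exploited below.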
For currently available observations, which are not able to survey extremely large volumes of the Universe and therefore do not explore the ultra-large-scale regime, the approximated spectra generally mimic the true power spectrum. Thus, the approximations made do not significantly affect the results. However, we expect future surveys, such as the HI-galaxy redshift survey for which we generated the mock data set in \autoref{sec:survey}, to provide data at scales where lensing, RSD, relativistic effects, and the Limber approximation significantly impact the power spectra. Consequently, using the different approximations presented in \autoref{sec:approx} in fitting the models to the data will likely lead to shifts in the inferred cosmological parameters with respect to the fiducial values used to generate the data set.
In this paper, we quantify the significance of these shifts, in units of $\sigma$, as
\begin{equation}\label{eq:shift}
S(\theta) = \frac{|\theta-\theta^{\rm fid}|}{\sigma_\theta},
\end{equation}
where $\theta$ is a generic parameter estimated in our analysis, $\sigma_\theta$ is the Gaussian error we obtain on $\theta$, and $\theta^{\rm fid}$ is the fiducial value of $\theta$ used to generate the mock data set.
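In code, the estimator is a one-liner; as a check, the $n_{\rm s}$ entry of the $\Lambda$CDM column of \autoref{tab:extshift} below reproduces $S\approx4.9$:
\begin{verbatim}
def shift_significance(theta, theta_fid, sigma_theta):
    """Shift estimator S(theta) of Eq. (eq:shift), in units of sigma."""
    return abs(theta - theta_fid) / sigma_theta

# e.g. shift_significance(0.9948, 0.96, 0.0071) -> 4.9
\end{verbatim}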
We apply this pipeline to the models described in \autoref{sec:cases}, with the baseline $\Lambda$CDM model described by the set of five free parameters $\bm\Theta=\{\omega_{\rm b}, \omega_{\rm c}, h, A_{\rm s}, n_{\rm s}\}$. When analysing an extended model, we add one extra free parameter to this set: the dark energy equation of state $w$, the sum of the neutrino masses $\sum{m_\nu}$, or the local primordial non-Gaussianity parameter $f_{\rm NL}$. We adopt flat priors on all these parameters.
Note that here we consider an optimistic setting in which the linear galaxy bias $b_{\rm lin}(z)$ is perfectly known. Adding nuisance parameters accounting for the uncertainty on this function and marginalising over them would enlarge the errors on cosmological parameters and reduce the statistical significance of the shifts we find, but would not qualitatively change the effects we are interested in. Moreover, as we are interested in the largest scales, in our analysis we only consider the data up to the multipole $\ell=400$. Adding smaller scales to the analysis could reduce the significance of the shifts, but would not change our results qualitatively.
Throughout this work we rely on \texttt{CAMB} to compute the exact and approximated power spectra. We use a modified version of the code, following \citet{Camera:2014bwa}, when we consider the primordial non-Gaussianity parameter, $f_{\rm NL}$.
\subsection{Debiasing constraints on cosmological parameters}\label{sec:debiasing}
Noting that using the approximated spectra, $C^{\rm Ap}_\ell$, in the MCMC analysis results in significant shifts in the cosmological parameters, we propose a method for debiasing the parameter estimates while still allowing for the use of the quickly computed $C^{\rm Ap}_\ell$. This method is based on adding a correction to the $C_\ell$'s used in the likelihood evaluation as
\begin{equation}\label{eq:corrected_cl}
C^{\rm Ap}_\ell(\bm\Theta) \rightarrow C^{\rm Ap}_\ell(\bm\Theta) + \left[C^{\rm Ex}_\ell(\bm\Theta^0)-C^{\rm Ap}_\ell(\bm\Theta^0)\right],
\end{equation}
where $\bm\Theta^0$ refers to a specific set of the cosmological parameters. We define the debiasing term $\alpha(\bm\Theta^0)$ as
\begin{equation}\label{eq:correction_factor_alpha}
\alpha(\bm\Theta^0) \equiv C^{\rm Ex}_\ell(\bm\Theta^0)-C^{\rm Ap}_\ell(\bm\Theta^0)\,.
\end{equation}
We use $\bm\Theta^0 = \bm\Theta^{\rm fid}$ for most of the results shown below, but discuss in \autoref{sec:mle_finding} how we can use a maximum likelihood estimate of $\bm\Theta^0$ when working with actual data for which $\bm\Theta^{\rm fid}$ does not exist.
In \autoref{fig:corr_ratio} we show the dependence of this debiasing method on the choice of $\bm\Theta^0$; we compute the debiasing term at $\bm\Theta^{\rm fid}$ and at $500$ other points of the parameter space, randomly sampled from a Gaussian distribution centred at $\bm\Theta^{\rm fid}$ with a variance on each parameter corresponding to $10\%$ of its fiducial value. These debiasing terms are then applied to $C^{\rm Ap}_\ell(\bm\Theta^{\rm fid})$. Assuming that the resulting spectra also follow a Gaussian distribution around the $C^{\rm Ap}_\ell+\alpha(\bm\Theta^{\rm fid})$ spectra, we show the corresponding $1\sigma$ and $2\sigma$ uncertainty regions. The figure shows that although the results we present below are based on computing $\alpha(\bm\Theta^0)$ using $\bm\Theta^{\rm fid}$, which would not be known in the case of actual data, our results would also hold for other choices of $\bm\Theta^0$ if they were reasonably close to $\bm\Theta^{\rm fid}$. This method of debiasing cosmological parameter estimates works precisely because the debiasing term $\alpha(\bm\Theta^0)$ is not strongly dependent on the choice of $\bm\Theta^0$ and can therefore account for the differences between the exact and approximated spectra over the full range of parameter space that the MCMC explores. Since $\alpha(\bm\Theta^0)$ only needs to be computed at a single set of parameter values, rather than at each step in the MCMC, it allows one to obtain unbiased results without being computationally expensive, unlike using $C^{\rm Ex}_\ell$, which would make the analysis unfeasible.
We therefore use, at each sampled point $\bm\Theta$, the $\chi^2$ expression of \autoref{eq:like}, but with the substitution
\begin{equation}
\tilde{C}_{ij\ell}^{\rm Ap}(\bm\Theta)\rightarrow\tilde{C}_{ij\ell}^{\rm Ap}(\bm\Theta)+\alpha(\bm\Theta^0)\,.
\end{equation}
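Schematically, reusing the \texttt{chi2} helper sketched above, and with \texttt{C\_ap\_of} an assumed fast routine returning the approximated spectra at a given parameter point, the debiased likelihood evaluation reads:
\begin{verbatim}
# alpha requires a single pair of exact/approximated evaluations,
# performed once at Theta^0 (the fiducial or best-fit point)
alpha = C_ex_theta0 - C_ap_theta0   # shape (n_ell, N_bin, N_bin)

def chi2_debiased(theta, C_ap_of, C_obs, ells, f_sky):
    """chi^2 of Eq. (eq:like) after the substitution
    C^Ap(Theta) -> C^Ap(Theta) + alpha(Theta^0)."""
    C_th = C_ap_of(theta) + alpha   # cheap per-MCMC-step evaluation
    return chi2(C_th, C_obs, ells, f_sky)
\end{verbatim}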
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{plots/additive_effect_W8xW8.pdf}
\caption{Effect of debiasing when different $\bm\Theta^0$ points in the parameter space are used to compute the debiasing term $\alpha(\bm\Theta^0)$. Here $\alpha(\bm\Theta^0)$ is computed at the fiducial set of parameters, $\bm\Theta^{\rm fid}$, and at $500$ other points in the parameter space, randomly sampled from a Gaussian distribution centred at $\bm\Theta^{\rm fid}$ with a variance of $10\%$ of the fiducial value for each parameter. The black solid curve shows the fiducial $C^{\rm Ex}_\ell(\bm\Theta^{\rm fid})$ spectrum, while the grey band shows the errors corresponding to the experimental setup considered in the paper. For each of the $500$ computed $\alpha(\bm\Theta^0)$, the debiasing term is applied to the $C^{\rm Ap}_\ell (\bm\Theta^{\rm fid})$ spectrum. Assuming that the resulting spectra also follow a Gaussian distribution around the $C^{\rm Ap}_\ell (\bm\Theta^{\rm fid})+\alpha(\bm\Theta^{\rm fid})$ spectra, the orange and red areas show the $1\sigma$ and $2\sigma$ uncertainty regions, respectively. Here the auto-correlation in the eighth bin is shown as a typical example.}
\label{fig:corr_ratio}
\end{figure}
\subsection{Debiasing with maximum likelihood}\label{sec:mle_finding}
While in this paper we work with mock data sets, and therefore $\bm\Theta^{\rm fid}$ is known, this will not be the case when analysing real data. In order to use our approach in a realistic setting, we need to find a point in the parameter space that approximates the fiducial cosmology, which corresponds to the peak of the multivariate posterior probability distribution for the parameters. This can be achieved by analysing the mock data set built with $C^{\rm Ex}_\ell$ using the correct theoretical predictions, but without attempting to reconstruct the full shape of the posterior distribution. One can use maximisation methods to find the peak of the distribution, and since these methods only aim to find the maximum likelihood (or best-fit) point in the parameter space, they require a significantly smaller number of iterations with respect to MCMC methods.
Here, we use the maximisation pipeline of \texttt{Cobaya}, which relies on the \texttt{BOBYQA} algorithm \citep{2018arXiv180400154C,2018arXiv181211343C}, to fit the $C^{\rm Ex}_\ell$ spectra to our mock data set, and we find the maximum likelihood parameter set presented in \autoref{tab:mle_results}. The maximum likelihood point ($\bm\Theta^{\rm peak}$) found with this method is very close to the actual fiducial point used to generate the data set and would therefore be suitable for computing the debiasing term $\alpha$ (see \autoref{sec:debiasing}). Although we use the fiducial parameter set $\bm\Theta^{\rm fid}$ to compute the debiasing term in the rest of this paper, we have verified that there would be no significant changes in our results if $\bm\Theta^{\rm peak}$ were chosen instead (see \autoref{sec:resLCDM}).
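A schematic version of this step, using \texttt{scipy}'s derivative-free Nelder-Mead minimiser as a simple stand-in for the \texttt{BOBYQA} algorithm used in \texttt{Cobaya}, and with \texttt{C\_ex\_of} an assumed (slow) wrapper around the exact-spectra computation, is:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# parameters: (omega_b, omega_c, h, 1e9*A_s, n_s); example start point
theta_start = np.array([0.022, 0.12, 0.68, 2.1, 0.97])

res = minimize(lambda th: chi2(C_ex_of(th), C_obs, ells, f_sky),
               x0=theta_start, method="Nelder-Mead")
theta_peak = res.x  # used as Theta^0 in alpha(Theta^0)
\end{verbatim}
Since only the location of the peak is needed, such a search requires far fewer evaluations of the exact spectra than a full MCMC would.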
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline
$\theta$ & Fiducial value & ML (or peak) value\\
\hline
$\omega_{\rm b}$ & $0.022445$ & $0.022485$ \\
$\omega_{\rm c}$ & $0.1206$ & $0.1209$ \\
$h$ & $0.67$ & $0.67$ \\
$A_{\rm s}\times10^{9}$ & $2.12605$ & $2.11$ \\
$n_{\rm s}$ & $0.96$ & $0.96$ \\
\hline
\end{tabular}
\caption{Maximum likelihood (ML) parameter set obtained by minimising the $\chi^2$ when $C^{\rm Ex}_\ell$ is used. The values are obtained through the \texttt{BOBYQA} minimisation algorithm implemented in \texttt{Cobaya}.}\label{tab:mle_results}
\end{table}
\section{Results and discussion}\label{sec:results}
In this section, we present the results of our analysis, highlighting how neglecting effects that are relevant at very large scales can result in significant biases in the estimation of cosmological parameters, potentially leading to false detections of non-standard physics. We split our results into two subsections, the first focusing on $\Lambda$CDM and its simple extensions $\Lambda$CDM+$m_\nu$ and $w$CDM, and the second discussing the results obtained when a scale-dependent bias generated by primordial non-Gaussianity is included in the analysis.
\subsection{$\Lambda$CDM and its simple extensions}\label{sec:resLCDM}
\begin{table*}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
& & \multicolumn{6}{c}{Assumed cosmology}\\
\hline
& & \multicolumn{2}{c}{$\Lambda$CDM} & \multicolumn{2}{c}{$\Lambda$CDM+$m_\nu$} & \multicolumn{2}{c}{$w$CDM} \\
\hline
& Fiducial value & $\theta$ & $S(\theta)\ [\sigma]$ & $\theta$ & $S(\theta)\ [\sigma]$ & $\theta$ & $S(\theta)\ [\sigma]$ \\
\hline
\hline
$\omega_{\rm b}$ & $0.022445$ & $0.0163^{+0.0016}_{-0.0018}$ & $3.6$ & $0.0160^{+0.0015}_{-0.0018}$ & $3.8$& $0.0156^{+0.0013}_{-0.0016}$& $4.7$ \\
$\omega_{\rm c}$ & $0.1206$ & $0.1098\pm 0.0046$ & $2.3$ & $0.1143^{+0.0044}_{-0.0052}$ & $1.3$& $0.1014\pm 0.0039$ & $4.9$ \\
$h$ & $0.67$ & $0.616\pm 0.018$ & $3.0$ & $0.615^{+0.017}_{-0.019}$ & $3.0$& $0.600\pm 0.015$ & $4.7$ \\
$A_{\rm s}\times10^{9}$ & $2.12605$& $2.176^{+0.068}_{-0.081}$ & $0.7$ & $2.290\pm 0.082$ & $2.0$& $2.398^{+0.079}_{-0.088}$ & $3.2$ \\
$n_{\rm s}$ & $0.96$ & $0.9948\pm 0.0071$ & $4.9$ & $0.9872\pm 0.0074$ & $3.7$& $1.0494\pm 0.0096$ & $9.3$ \\
$\sum{m_\nu}$ [eV] & $0.06$ & $-$ & $-$ & $0.327\pm 0.043$ & $6.2$& $-$ & $-$ \\
$w$ & $-1$ & $-$ & $-$ & $-$ & $-$ & $-0.886\pm 0.013$ & $8.7$ \\
\hline
$\chi^2_{\rm min}$ & & \multicolumn{2}{c}{$4972$} & \multicolumn{2}{c}{$4925$} & \multicolumn{2}{c}{$4912$} \\
\hline
\end{tabular}
\caption{Marginalised constraints on the sampled parameters $\theta$ and values of the shift estimator $S(\theta)$ obtained by analysing the mock data set with the approximated $C^{\rm Ap}_\ell$ spectra for the standard $\Lambda$CDM model and its simple extensions, $\Lambda$CDM+$m_\nu$ and $w$CDM, considered in the present work.}\label{tab:extshift}
\end{table*}
\begin{figure*}
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.3\textwidth]{plots/ns_h.pdf} &
\includegraphics[width=0.3\textwidth]{plots/ns_mnu.pdf} &
\includegraphics[width=0.3\textwidth]{plots/ns_w.pdf} \\
\end{tabular}
\caption{$68\%$ and $95\%$ confidence level contours obtained by fitting the approximated $C^{\rm Ap}_\ell$ spectra to the data set built using the exact $C^{\rm Ex}_\ell$ spectra (colour-filled contours). The violet contours show the result when a $\Lambda$CDM model is assumed, while the green and orange contours correspond instead to $\Lambda$CDM+$m_\nu$ and $w$CDM cosmologies, respectively. The empty contours show the results of the analysis when the debiasing term described in \autoref{sec:debiasing} is included. The black dashed lines show the fiducial values of the cosmological parameters.}
\label{fig:biasedres}
\end{figure*}
In \autoref{tab:extshift}, we present the results obtained by analysing our mock data set, generated with $C^{\rm Ex}_\ell$ spectra for a $\Lambda$CDM fiducial cosmology, using $C^{\rm Ap}_\ell$ spectra for the three assumed cosmologies $\Lambda$CDM, $\Lambda$CDM+$m_\nu$ and $w$CDM. In the first case, we find that the obtained constraints on cosmological parameters are significantly shifted with respect to their fiducial values, despite using for the analysis the same cosmological model as the one assumed in generating the mock data set. With the exception of $A_{\rm s}$, which affects the amplitude of the spectra, the other parameters are all shifted by more than $2\sigma$, with $n_{\rm s}$ being the most affected parameter ($S=4.9\sigma$), as a result of using approximations to achieve a reasonable computation time for the MCMC analysis. When we allow for simple extensions of $\Lambda$CDM, we see that such an effect leads to significant false detections of departures from the standard model. With the sum of the neutrino masses $\sum{m_\nu}$ added as an extra free parameter, we indeed find a significant detection of a non-vanishing value, where $\sum{m_\nu}=0$~eV is excluded with more than $6\sigma$ significance and the estimated value is shifted from the fiducial minimal value $\sum{m_\nu}=0.06$~eV by $S=6.2\sigma$; this implies that an analysis of data sensitive to large-scale effects would provide a false detection of the neutrino masses if one used the approximations considered here. The same effect can be seen if one allows for dark energy with an equation of state parameter that deviates from the cosmological constant value ($w=-1$). In this case, the free parameter $w$ is shifted from the fiducial value by $S=8.7\sigma$, resulting in a significant detection of a non-standard behaviour, which is driven only by the use of the approximated $C^{\rm Ap}_\ell$ in the parameter estimation pipeline. Also in these extended cases, the estimated values of the standard cosmological parameters are shifted with respect to the fiducial ones. This highlights how these simple extensions alone are not able to mimic the $C^{\rm Ex}_\ell$ spectra, as shifts in the values of the standard parameters are also necessary for fitting the $C^{\rm Ex}_\ell$ to the data when $C^{\rm Ap}_\ell$ are being used. The new degeneracies introduced by extensions of the $\Lambda$CDM model explain the changes in values of $S$ with respect to the standard model.
The decrease in the values of the minimum $\chi^2$ for the extended models with respect to $\Lambda$CDM shows that a false detection of the extensions also leads to a better fit to the data. However, given that our likelihood calculation of \autoref{eq:like} would lead to $\chi^2=0$ for the fiducial values, the $\chi^2_{\rm min}$ values shown in \autoref{tab:extshift} highlight how, even with these significant shifts, the $C^{\rm Ap}_\ell$ spectra are not able to reproduce the cosmology used to generate the data.
In \autoref{fig:biasedres}, we show the $68\%$ and $95\%$ confidence level contours on a few representative parameters for the cases described above. The colour-filled contours show the results of the analysis performed with $C^{\rm Ap}_\ell$, highlighting the deviation of the estimated values of the parameters from the fiducial values (shown with black dashed lines). The empty contours instead show the results obtained when the debiasing term described in \autoref{sec:debiasing} is added to the spectra, which are then compared to the mock data set. These results show how the method we propose is able to debias the results and how it allows us to recover the correct values for the parameters, for both the standard $\Lambda$CDM cosmology and its extensions, thus avoiding false detections of non-standard cosmologies and improving the goodness of fit with a $\chi^2$ now of $\mathcal{O}(1)$.
In order to see in more detail the biasing effect of the approximations included in the $C^{\rm Ap}_\ell$, we show in \autoref{fig:biased_spectra} the impact of the biases on the angular power spectra for a representative redshift bin auto-correlation, highlighting how the approximated $C^{\rm Ap}_\ell$ spectrum (green) significantly departs from the expected $C^{\rm Ex}_\ell$ spectrum (black) when the fiducial values of the cosmological parameters are used to obtain both. We also include, with a red dashed curve, the $C^{\rm Ap}_\ell$ spectrum obtained using the biased values of the cosmological parameters reported in \autoref{tab:extshift}, showing how in this case the $C^{\rm Ap}_\ell$ at the shifted best-fit cosmology are better able to reproduce the fiducial $C^{\rm Ex}_\ell$, thus producing a better fit to the data.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{plots/fiducials_W8xW8.pdf}
\caption{Angular power spectra for the eighth redshift bin auto-correlation in a $\Lambda$CDM cosmology using the exact $C^{\rm Ex}_\ell$ (black solid curve) and the approximated $C^{\rm Ap}_\ell$ (green solid curve) obtained assuming the fiducial values for the cosmological parameters. The red dashed curve shows the $C^{\rm Ap}_\ell$ obtained for the biased parameter estimation of \autoref{tab:extshift}. The grey area shows the errors corresponding to the experimental setup used throughout the paper.}
\label{fig:biased_spectra}
\end{figure}
While in \autoref{fig:biasedres} we only show a subset of the free parameters of our models, the debiasing procedure is effective for all cosmological parameters. In \autoref{fig:triangle_LCDM}, we show the constraints obtained on all the free parameters of our $\Lambda$CDM analysis, obtained by both comparing the $C^{\rm Ap}_\ell$ to the mock data set (red, filled contours) and applying the debiasing method of \autoref{sec:debiasing}, using the debiasing term computed at both the fiducial values, $\alpha(\bm\Theta^{\rm fid})$ (yellow, filled contours), and the peak values found in \autoref{sec:mle_finding}, $\alpha(\bm\Theta^{\rm peak})$ (purple, empty contours). We notice how in the first case all the parameters are shifted with respect to their expected values, with the most significant shifts on $n_{\rm s}$, $\omega_{\rm b}$ and $h$, while when we apply the debiasing approach the fiducial values are recovered for all the parameters, with no significant differences between the two cases of $\alpha(\bm\Theta^{\rm fid})$ and $\alpha(\bm\Theta^{\rm peak})$. Even though the results shown in \autoref{fig:triangle_LCDM} correspond to the $\Lambda$CDM model, they are qualitatively similar for all the considered cosmologies.
The posterior probability distributions of the parameters recovered after debiasing the MCMC results do not necessarily coincide with those that would be obtained by a full analysis. We can, however, consider these as reasonable estimates, as \autoref{fig:corr_ratio} shows that the debiasing term does not depend strongly on the $\bm\Theta^0$ point at which it is computed, as long as $\bm\Theta^0$ is sufficiently close to $\bm\Theta^{\rm fid}$. Thus, rather than computing $\alpha(\bm\Theta^0)$ at each point in the parameter space, we can approximate $\alpha(\bm\Theta^0)$ with $\alpha(\bm\Theta^{\rm peak})$ (or $\alpha(\bm\Theta^{\rm fid})$ in the case of the results shown here). This applies only in the vicinity of the peak of the distribution, and the estimation of the tails suffers from an error that propagates into the confidence intervals shown in \autoref{fig:biasedres} and \autoref{fig:triangle_LCDM}. We leave a quantification of this error for future work.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{plots/SKA2_LCDM.pdf}
\caption{$68\%$ and $95\%$ confidence level contours, as well as one-dimensional marginalised posterior probability distribution functions, obtained by fitting the $\Lambda$CDM model to the mock data set. The red, filled contours correspond to the analysis where the theoretically predicted approximated spectra $C^{\rm Ap}_\ell$ are used to fit the model to the mock data set. The yellow, filled contours show the results obtained when the debiasing term $\alpha(\bm\Theta^{\rm fid})$ is included, and the purple, empty contours correspond to the debiasing term $\alpha(\bm\Theta^{\rm peak})$ computed at the estimated maximum likelihood point $\bm\Theta^{\rm peak}$.}
\label{fig:triangle_LCDM}
\end{figure}
\subsection{Primordial non-Gaussianity}\label{sec:resfNL}
In this subsection, we focus on the results when $f_{\rm NL}$ is included as a free parameter, thus allowing for a non-vanishing local primordial non-Gaussianity; this affects the galaxy clustering spectra through the scale-dependent bias as described in \autoref{sec:fnl}. As a first case, we adopt the same experimental setup as in \autoref{sec:resLCDM} and use the standard expression of \autoref{eq:scaledepbias} for our theoretical predictions for the scale-dependent bias. In this case, which we refer to as ``baseline'', when we analyse the mock data set using the approximated $C^{\rm Ap}_\ell$ we find results that are similar to the $\Lambda$CDM case of \autoref{sec:resLCDM}, with approximately the same shifts for the standard parameters and no bias for $f_{\rm NL}$ (see \autoref{tab:fNLshift}). This may seem to be a surprising result, as the impact of $f_{\rm NL}$ on the theoretical predictions is significant at very large scales (see \autoref{fig:theory}), where the approximations included in $C^{\rm Ap}_\ell$ fail. One would therefore expect that a biased value for this parameter would help with fitting the $C^{\rm Ex}_\ell$ spectra of the mock data set, and that a false non-vanishing $f_{\rm NL}$ would be detected. However, given \autoref{eq:scaledepbias} that we rely upon, the scale-dependent bias depends not only on $f_{\rm NL}$, but also on the $b_{\rm lin}-1$ factor. As shown in \autoref{fig:specs} and discussed in \autoref{sec:fnl}, our choice of the linear galaxy bias implies that $b_{\rm lin}-1$ changes sign at $z\approx0.75$; the impact of $f_{\rm NL}$ on the $C^{\rm Ap}_\ell$ spectra is therefore the opposite for the redshift bins beyond this redshift threshold with respect to the lower-redshift ones. Such an effect leads to a cancellation of the impact of the primordial non-Gaussianity on the goodness of fit, and therefore the standard case of $f_{\rm NL}=0$ is still preferred.
\begin{table*}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
& & \multicolumn{6}{c}{Settings for $f_{\rm NL}$ analysis}\\
\hline
& & \multicolumn{2}{c}{baseline} & \multicolumn{2}{c}{$z$ cut} & \multicolumn{2}{c}{$p=0.5$} \\
\hline
& Fiducial value & $\theta$ & $S(\theta)\ [\sigma]$ & $\theta$ & $S(\theta)\ [\sigma]$ & $\theta$ & $S(\theta)\ [\sigma]$ \\
\hline
\hline
$\omega_{\rm b}$ & $0.022445$ & $0.0164^{+0.0016}_{-0.0019}$ & $3.4$ & $0.0219^{+0.0024}_{-0.0035}$ & $0.2$ & $0.0169^{+0.0017}_{-0.0021}$ & $2.9$ \\
$\omega_{\rm c}$ & $0.1206$ & $0.1101^{+0.0044}_{-0.0050}$ & $2.2$ & $0.1300^{+0.0064}_{-0.0089}$ & $1.2$ & $0.1143^{+0.0048}_{-0.0056}$ & $1.2$ \\
$h$ & $0.67$ & $0.617\pm 0.018$ & $2.9$ & $0.679^{+0.023}_{-0.031}$ & $0.3$ & $0.620\pm 0.020$ & $2.6$ \\
$A_{\rm s}\times10^{9}$ & $2.12605$& $2.170\pm 0.077$ & $0.6$ & $1.839\pm 0.084$ & $3.4$ & $2.069\pm 0.077$ & $0.7$ \\
$n_{\rm s}$ & $0.96$ & $0.9945\pm 0.0070$ & $4.9$ & $0.9955^{+0.0093}_{-0.0081}$ & $4.0$ & $1.0047\pm 0.0075$ & $6.0$ \\
$f_{\rm NL}$ & $0$ & $-0.8\pm 3.9$ & $0.2$ & $-85^{+13}_{-12}$ & $6.9$ & $66.5\pm 7.2$ & $9.2$ \\
\hline
\end{tabular}
\caption{Marginalised constraints on the sampled parameters $\theta$ and values of the shift estimator $S(\theta)$ obtained by analysing the mock data set with the approximated $C^{\rm Ap}_\ell$ spectra for the three cases of $\Lambda$CDM$+f_{\rm NL}$ considered in the present work.}\label{tab:fNLshift}
\end{table*}
In order to ensure that this indeed is the reason for the lack of shift in the recovered $f_{\rm NL}$ value, we run our parameter estimation pipeline after removing the redshift bins above $z\approx0.75$. We refer to this case as ``$z$ cut''. The results are shown in \autoref{tab:fNLshift}, where it can be seen how removing the higher-redshift bins eliminates the cancellation effect described above; now we find significant biases on $f_{\rm NL}$ and $A_{\rm s}$, with shifts of $S(f_{\rm NL})=6.9\sigma$ and $S(A_{\rm s})=3.4\sigma$ with respect to the fiducial values. The shifts on the other free parameters are reduced with respect to the baseline case. The combined effect of $f_{\rm NL}$ and $A_{\rm s}$ allows the $C^{\rm Ap}_\ell$ to fit the mock data set, as their net effect is to boost the power spectra at large scales.
On the other hand, as we discussed in \autoref{sec:fnl}, the modulating factor $b_{\rm lin}(z)-1$ in \autoref{eq:scaledepbias} is not the only possibility for describing the scale-dependent bias. We have repeated our analysis, following the more general \autoref{eq:scaledepbias-p}, by setting $p=0.5$, which ensures that the $b_{\rm lin}(z)-p$ factor does not change sign in our redshift range, given our choice of the linear galaxy bias. In the last two columns of \autoref{tab:fNLshift} we report the results we find in this case, where we see again a significant false detection of a non-vanishing $f_{\rm NL}$, with $S(f_{\rm NL})=9.2\sigma$, while the other parameters are less shifted from their fiducial values compared to the baseline case, with the exception of $n_{\rm s}$. In \autoref{fig:fNLbiasedres}, we also notice how the shift on $f_{\rm NL}$ has an opposite sign in this $p=0.5$ case with respect to the $z$ cut case, where the analysis prefers a negative value of $f_\mathrm{NL}$. This is due to the fact that the $b_{\rm lin}-p$ factor is now always positive, and one needs an $f_{\rm NL}>0$ in order to achieve the boost in the $C^{\rm Ap}_\ell$ needed to fit the model to the mock data set.
Finally, we apply the debiasing procedure of \autoref{sec:debiasing} to the three cases described and show the results in \autoref{fig:fNLbiasedres}. As the figure shows, applying the debiasing correction allows us to recover a vanishing $f_{\rm NL}$. The debiased contours are different from each other here, which was not the case in \autoref{sec:resLCDM}; this is due to the different strategies applied to account for the effects of $f_{\rm NL}$ in our analysis.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{plots/ns_fNL.pdf}
\caption{$68\%$ and $95\%$ confidence level contours obtained by fitting the approximated $C^{\rm Ap}_\ell$ spectra, with a free $f_{\rm NL}$ parameter, to the data set built using the exact $C^{\rm Ex}_\ell$ spectra (colour-filled contours). The red and yellow contours show the results obtained with the scale-dependent bias of \autoref{eq:scaledepbias}, with our baseline settings and with removing the last two redshift bins, respectively.
The violet contour shows instead the case where the scale-dependent bias is computed following \autoref{eq:scaledepbias-p} with $p=0.5$. The empty contours show the results of the analysis when the debiasing term described in \autoref{sec:debiasing} is included. The black dashed lines show the values of the fiducial cosmological parameters. }
\label{fig:fNLbiasedres}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
The constant improvement in galaxy surveys will soon unlock the largest scales in the sky for cosmological studies. While the expected angular correlations at smaller scales are well understood and efficiently modelled (up to the nonlinear regime), calculations of power spectra commonly make use of approximations aimed at reducing the computational effort needed to obtain theoretical predictions of the spectra. This is a necessary requirement for such calculations if one wants to exploit MCMC methods for performing parameter estimation analyses. Such approximations, however, break down at very large scales, where effects including lensing, galaxy velocities and relativistic corrections become relevant.
In this paper, we have investigated the impact of approximations that neglect such large-scale effects on a parameter estimation analysis. We have produced a mock data set for a next-generation survey that will be able to explore the angular correlation of galaxies at very large scales through the full treatment described in \autoref{sec:std}. We have then analysed this data set by applying the commonly used approximations described in \autoref{sec:approx}, where the large-scale corrections due to lensing, velocities and relativistic effects have been neglected, and the Limber approximation has been employed. We have found that this analysis produces significantly biased results, with parameter estimates being shifted up to $\sim 5\sigma$ when assuming a minimal 5-parameter $\Lambda$CDM cosmology, and with false detections of non-standard cosmologies when simple extensions of the standard model are considered.
We have also explored the impact of the approximations on a more complex extension of the $\Lambda$CDM model, where we have allowed for a non-vanishing local primordial non-Gaussianity by including $f_{\rm NL}$ as a free parameter in our analysis. This contributes a scale-dependent term to the galaxy bias which is relevant at large scales. We expected estimates of this parameter to be significantly biased, as a non-zero $f_{\rm NL}$ would help the approximated spectra to mimic those used in creating the data set. However, we have found that in our baseline setting, such an effect cannot be seen due to a cancellation between the low- and high-redshift bins. Given our choice of the linear galaxy bias (see \autoref{sec:survey}), the commonly used scale-dependent term changes sign at $z\approx0.75$ and, therefore, the effect of a non-vanishing $f_{\rm NL}$ on the overall goodness of fit cancels out between low- and high-redshift bins. We have confirmed this explanation by cutting out all bins at $z>0.75$, and we have found, with this setting, a significant false detection of a non-vanishing and negative $f_{\rm NL}$. We have also performed our analysis for a case where the scale-dependent piece of the bias depends differently on the linear bias term (\autoref{eq:scaledepbias-p}). We have found in this case a $9.2\sigma$ shift in the estimated value of $f_{\rm NL}$, opposite in sign with respect to the previous case, highlighting how different modellings of the scale-dependent term can affect the final results.
In this work, not only have we assessed the impact of the approximations on the estimation of cosmological parameters, but we have also proposed a simple method to obtain debiased results that can approximate those that one would obtain by taking into account all the effects. We have described this method in \autoref{sec:debiasing} and pointed out how the computation of the debiasing term $\alpha({\bm\Theta^0})$ does not depend strongly on the choice of the parameter set $\bm\Theta^0$ where the computation is performed, as long as it is close to the true cosmology. Indeed, in our forecasts the method benefits from the fact that the fiducial cosmology is known. However, we have pointed out that in a realistic setting, with an unknown fiducial cosmology, one could rely on minimisation algorithms to identify the best-fit point in the parameter space. Such a minimisation would be significantly less computationally expensive than a full parameter estimation pipeline and could therefore be performed using the exact spectra. We have tested the feasibility of such an approach, and we found in \autoref{sec:mle_finding} that the debiased cosmological parameter constraints found using an estimate for the peak of the multivariate distribution are almost exactly the same as those found using the fiducial point. Thus, this method can be applied to real data, where the fiducial point is unknown.
We have applied the debiasing method to all the cases we have investigated, and we have found that it indeed allows us to recover the expected values for the free parameters of our analyses. This method could therefore be used in real data analysis when unexpected detections of non-standard behaviour are seen. Additionally, while not providing a fully correct parameter estimation, our method allows one to obtain accurate values for cosmological parameters and estimates of their corresponding posterior probability distributions. While the recovered distributions are reasonable estimates of the ones obtained through a full analysis, we leave a quantitative assessment of the errors on their shapes for future work.
\section*{Acknowledgements}
M.M. has received the support of a fellowship from ``la Caixa'' Foundation (ID 100010434), with fellowship code LCF/BQ/PI19/11690015, and the support of the Spanish Agencia Estatal de Investigacion through the grant ``IFT Centro de Excelencia Severo Ochoa SEV-2016-0599''. R.D. acknowledges support from the Fulbright U.S.\ Student Program and the NSF Graduate Research Fellowship Program under Grant No.\ DGE-2039656. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. Y.A. is supported by LabEx ENS-ICFP: ANR-10-LABX-0010/ANR-10-IDEX-0001-02 PSL*. S.C. acknowledges support from the `Departments of Excellence 2018-2022' Grant (L.\ 232/2016) awarded by the Italian Ministry of University and Research (\textsc{mur}). S.C. also acknowledges support by \textsc{mur} Rita Levi Montalcini project `\textsc{prometheus} -- Probing and Relating Observables with Multi-wavelength Experiments To Help Enlightening the Universe's Structure', for the early stages of this project, and from the `Ministero degli Affari Esteri della Cooperazione Internazionale -- Direzione Generale per la Promozione del Sistema Paese Progetto di Grande Rilevanza ZA18GR02'.
\section*{Data Availability}
The data underlying this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
\section{Introduction}
\label{sec:Introduction}
The discovery of the Higgs particle, announced on
4 July 2012 by the LHC experiments ATLAS \cite{Aad:2012tfa} and CMS
\cite{Chatrchyan:2012xdj} marked a milestone for
particle physics. It structurally completed the Standard Model (SM)
providing us with a theory that remains weakly interacting all the
way up to the Planck scale. While the SM can successfully describe
numerous particle physics phenomena at the quantum level at highest
precision, it leaves open several questions. Among these are {\it
e.g.}~the one for the nature of Dark Matter (DM), the baryon
asymmetry of the universe or the hierarchy problem. This calls for
physics beyond the SM (BSM). Models beyond the SM usually entail
enlarged Higgs sectors that can provide candidates for Dark Matter or
guarantee successful baryogenesis. Since the discovered Higgs boson
with a mass of 125.09~GeV \cite{Aad:2015zhl} behaves SM-like, any BSM extension
has to ensure that it contains a Higgs boson in its spectrum that is in
accordance with the LHC Higgs data. Moreover, the models have to be
tested against theoretical and further experimental
constraints from electroweak precision tests, $B$-physics, low-energy
observables and the negative searches for new particles that may be
predicted by some of the BSM theories.
The lack of any direct sign of
new physics renders the investigation of the Higgs sector more and more
important. The precise investigation of the discovered Higgs boson may reveal
indirect signs of new physics through mixing with other Higgs bosons
in the spectrum, loop effects due to the additional Higgs bosons
and/or further new states predicted by the model, or decays into non-SM
states or Higgs bosons, including the possibility of invisible
decays. Due to the SM-like nature of the 125~GeV Higgs boson indirect
new physics effects on its properties are expected to be small. Moreover, different BSM
theories can lead to similar effects in the Higgs sector. In order not
to miss any indirect sign of new physics and to be able to identify the
underlying theory in case of discovery, highest precision in the prediction of the
observables and sophisticated experimental techniques are therefore
indispensable. The former calls for the inclusion of higher-order
corrections at highest possible level, and theorists all over the
world have spent enormous efforts to improve the predictions
for Higgs observables \cite{deFlorian:2016spz}.
Among the new physics models supersymmetric (SUSY) extensions
\cite{Golfand:1971iw,Volkov:1973ix,Wess:1974tw,Fayet:1974pd,Fayet:1976cr,Fayet:1976et,Fayet:1977yc,Nilles:1983ge,Haber:1984rc,Sohnius:1985qm,Gunion:1984yn,Gunion:1986nh,Gunion:1989we}
certainly belong to the best motivated and most thoroughly
investigated models beyond the SM, and numerous higher-order
predictions exist for the production and decay cross sections as well
as the Higgs potential parameters, {\it i.e.}~the masses and Higgs
self-couplings \cite{deFlorian:2016spz}. The Higgs sector of the
minimal supersymmetric extension (MSSM) \cite{Gunion:1989we,Martin:1997ns,Dawson:1997tz,Djouadi:2005gj} is a 2-Higgs doublet model
(2HDM) of type II \cite{Lee:1973iz,Branco:2011iw}. While due to
supersymmetry the MSSM Higgs potential parameters are given in terms
of gauge couplings this is not the
case for general 2HDMs so that the 2HDM entails an interesting and
more diverse Higgs phenomenology and is also
affected differently by
higher-order electroweak (EW) corrections. Moreover, 2HDMs allow for
successful baryogenesis
\cite{McLerran:1990zh,Turok:1990zg,Cohen:1991iu,Turok:1991uc,Funakubo:1993jg,Davies:1994id,Cline:1995dg,Cline:1996mga,Fromme:2006cm,Cline:2011mm,Dorsch:2013wja,Fuyuto:2015vna,Dorsch:2014qja,Dorsch:2016tab,Dorsch:2016nrg,Basler:2016obg,Dorsch:2017nza,Basler:2017uxn,Basler:2018cwe} and in their inert version provide a Dark
Matter candidate
\cite{Deshpande:1977rw,Barbieri:2006dq,LopezHonorez:2006gr,Dolle:2009fn,Honorez:2010re,Gustafsson:2012aj,Swiezewska:2012ej,Swiezewska:2012eh,Arhrib:2013ela,Klasen:2013btp,Abe:2014gua,Krawczyk:2013jta,Goudelis:2013uca,Chakrabarty:2015yia,Ilnicka:2015jba}.
The situation with respect to EW corrections in non-SUSY models is not
as advanced as for SUSY extensions.
While the QCD corrections can be taken over to those
models with a minimum effort from the SM and the MSSM
by applying appropriate changes, this is not the case
for the EW corrections. Moreover, some issues arise with
respect to renormalization. Thus, only recently a renormalization
procedure has been proposed by the authors of this paper for the
mixing angles of the 2HDM that ensures
explicitly gauge-independent decay amplitudes
\cite{Krause:2016gkg,Krause:2016oke}. Subsequent groups have confirmed
this in different Higgs channels
\cite{Denner:2016etu,Altenkamp:2017ldc,Altenkamp:2017kxk,Denner2018,Fox:2017hbw}. Moreover, in Ref.~\cite{Denner2018} four schemes have been proposed based on on-shell and symmetry-inspired renormalization conditions for the mixing angles (and by applying the background field method \cite{Zuber:1975sa,Zuber:1975gh,Boulware:1981,Abbott:1981,Abbott:1982,Hart:1983,Denner:1994xt}) and on $\overline{\mbox{MS}}$ prescriptions for the remaining new quartic Higgs couplings, and their features have been
investigated in detail. The authors of Ref.~\cite{Kanemura:2017wtm} use an improved on-shell
scheme that is essentially equivalent to the mixing angle
renormalization scheme presented by our group in
\cite{Krause:2016gkg,Krause:2016oke}. It has been applied to compute
the renormalized one-loop Higgs boson couplings in the Higgs singlet
model and the 2HDM and to implement these in the program
{\texttt{H-COUP}} \cite{Kanemura:2017gbi}. In
\cite{Kanemura:2018yai} the authors present, for these models, the
one-loop electroweak and QCD corrections to the Higgs decays into
fermion and gauge boson pairs. The complete
phenomenological analysis, however, requires the corrections to all
Higgs decays, as we present them here for the first time in the
computer tool {\texttt{2HDECAY}}.
In \cite{Krause:2016xku} we completed the renormalization of the 2HDM and calculated
the higher-order corrections to Higgs-to-Higgs decays. We have applied
and extended this renormalization procedure in \cite{Krause:2017mal} to the
next-to-2HDM (N2HDM) which includes an additional real singlet. The
computation of the (N)2HDM EW corrections has shown that the corrections
can become very large for certain areas of the parameters space. There can be
several reasons for this. The corrections can be parametrically enhanced
due to involved couplings that can be large
\cite{Kanemura:2002vm,Kanemura:2004mg,Krause:2016xku,Krause:2017mal}. This
is in particular the case for the trilinear Higgs self-couplings that in
contrast to SUSY are not given in terms of the
gauge couplings of the theory and that are so far only weakly
constrained by the LHC Higgs data. The corrections can be large
due to light particles in the loop in combination with not too small
couplings, {\it e.g.}~light Higgs states of the extended Higgs
sector. Also an inapt choice of the renormalization
scheme can artificially enhance loop corrections. Thus we found for our investigated
processes that process-dependent renormalization schemes or
$\overline{\mbox{MS}}$ renormalization of the scalar
mixing angles can blow up the one-loop
corrections due to an insufficient cancellation of the large finite
contributions from wave function renormalization
constants \cite{Krause:2016oke,Krause:2016xku}. Moreover,
counterterms can blow up in certain parameter
regions because of small leading-order couplings, {\it e.g.}~in the 2HDM the coupling of
the heavy non-SM-like CP-even Higgs boson to gauge bosons, which in
the limit of a light SM-like CP-even Higgs boson is almost zero. The
same effects are observed for supersymmetric theories where a
badly chosen parameter set for the renormalization
can lead to very large counterterms and hence enhanced loop
corrections, {\it cf.}~Ref.~\cite{Belanger:2017rgu} for a recent analysis.
This discussion shows that the renormalization of the EW corrections to BSM Higgs
observables is a highly non-trivial task. In addition, there may be no unique
best renormalization scheme for the whole parameter space of a
specific model, and the user has to decide which scheme to choose to
obtain trustworthy predictions. With the publication of the new tool {\tt
2HDECAY} we aim to give an answer to this problematic task.
The program {\tt 2HDECAY} computes, for 17 different renormalization
schemes, the EW corrections to the Higgs decays of the 2HDM Higgs bosons
into all possible on-shell two-particle final states of the model that
are not loop-induced. It is
combined with the widely used Fortran code {\tt HDECAY} version 6.52
\cite{DJOUADI199856,Djouadi:2018xqq} which provides the loop-corrected
decay widths and branching ratios for the
SM, the MSSM and 2HDM incorporating the state-of-the-art higher-order QCD
corrections including also loop-induced and off-shell decays. Through the combination
of these corrections with the
2HDM EW corrections {\tt 2HDECAY} becomes a code for the prediction of
the 2HDM Higgs boson decay widths at the presently highest possible level of
precision. Additionally, the separate output of the leading order (LO)
and next-to-leading order (NLO) EW corrected decay widths allows one to
perform studies on the importance of the relative EW corrections (as a function
of the parameter choices), comparisons with the relative EW corrections
within the MSSM or investigations on the most suitable renormalization
scheme for specific parameter regions. The comparison
of the results for different renormalization schemes moreover
permits to estimate the remaining theoretical error due to missing
higher-order corrections. To that end, {\texttt{2HDECAY}} includes a parameter conversion routine which performs the automatic conversion of input parameters from one renormalization scheme to another for all 17 renormalization schemes that are implemented. With this tool we contribute to
the effort of improving the theory predictions for BSM Higgs physics
observables so that in combination with sophisticated experimental
techniques Higgs precision physics becomes possible and the gained
insights may advance us in our understanding of the mechanism of
electroweak symmetry breaking (EWSB) and the true underlying theory.
The program package was developed and tested under
{\texttt{Windows 10}}, {\texttt{openSUSE Leap 15.0}} and
{\texttt{macOS Sierra 10.12}}. It requires an up-to-date version of
{\texttt{Python 2}} or {\texttt{Python 3}} (tested with versions
{\texttt{2.7.14}} and {\texttt{3.5.0}}), the {\texttt{FORTRAN}}
compiler {\texttt{gfortran}} and the {\texttt{GNU C}} compilers
{\texttt{gcc}} (tested for compatibility with versions
{\texttt{6.4.0}} and {\texttt{7.3.1}}) and {\texttt{g++}}. The latest
version of the package can be downloaded from
\begin{center}
\href{https://github.com/marcel-krause/2HDECAY}{https://github.com/marcel-krause/2HDECAY} ~.
\end{center}
The paper is organized as follows. The subsequent
Sec.\,\ref{sec:EWQCD2HDMMain} forms the theoretical background for our
work. We briefly introduce the 2HDM, all
relevant parameters and particles and set our notation.
We give a summary of all counterterms that are needed for the computation
of the EW corrections and state them explicitly. The relevant formulae
for the computation of the partial decay widths at one-loop level are
presented and the combination of the electroweak corrections with the
QCD corrections already implemented in {\texttt{HDECAY}} is described. In
Sec.\,\ref{sec:programDescriptionMain}, we introduce
{\texttt{2HDECAY}} in detail, describe the structure of the program
package and the input and output file formats. Additionally, we
provide installation and usage manuals. We conclude with a short
summary of our work in Sec.\,\ref{sec:summary}. As reference for the
user, we list exemplary input and output files in Appendices
\ref{sec:AppendixInputFile} and \ref{sec:AppendixOutputFile},
respectively.
\section{One-Loop Electroweak and QCD Corrections in the 2HDM}
\label{sec:EWQCD2HDMMain}
In the following, we briefly set up our notation and introduce the
2HDM along with the input parameters used in our
parametrization. We give details on the EW one-loop
renormalization of the 2HDM. We discuss how the calculation of the
one-loop partial decay widths is performed. At the end of the section,
we explain how the EW corrections are combined with the existing
state-of-the-art QCD corrections already implemented in {\texttt{HDECAY}} and present the automatic parameter conversion routine that is implemented in {\texttt{2HDECAY}}.
\subsection{Introduction of the 2HDM}
\label{sec:setupOfModel}
For our work, we consider a general CP-conserving 2HDM
\cite{Lee:1973iz,Branco:2011iw} with a global discrete $\mathbb{Z}_2$
symmetry that is softly broken. The model consists of two complex
$SU(2)_L$ doublets $\Phi _1$ and $\Phi _2$, both with hypercharge
$Y=+1$. The electroweak part of the 2HDM can be described by the
Lagrangian
\begin{equation}
\mathcal{L} ^\text{EW}_\text{2HDM} = \mathcal{L} _\text{YM} + \mathcal{L} _\text{F} + \mathcal{L} _\text{S} + \mathcal{L} _\text{Yuk} + {\mathcal{L}} _\text{GF} + \mathcal{L} _\text{FP}
\label{eq:electroweakLagrangian}
\end{equation}
in terms of the Yang-Mills Lagrangian $\mathcal{L} _\text{YM}$ and the fermion
Lagrangian $\mathcal{L} _\text{F}$ containing the kinetic terms of the
gauge bosons and fermions and their interactions, the Higgs Lagrangian
$\mathcal{L} _\text{S}$, the Yukawa
Lagrangian $\mathcal{L}_{\text{yuk}}$ with the Higgs-fermion
interactions, the gauge-fixing and the Faddeev-Popov Lagrangian,
${\mathcal{L}} _\text{GF}$ and $\mathcal{L}_\text{FP}$,
respectively. Explicit forms of $\mathcal{L} _\text{YM}$ and
$\mathcal{L} _\text{F}$ can be found {\it
e.g.}~in~\cite{Peskin:1995ev, Denner:1991kt} and of the general 2HDM
Yukawa Lagrangian {\it e.g.}~in \cite{Aoki:2009ha, Branco:2011iw}. We
do not give them explicitly here.
For the renormalization of the 2HDM, we follow the approach of
Ref.~\cite{Santos:1996vt} and apply the gauge-fixing only \textit{after}
the renormalization of the theory, {\it i.e.}~$\mathcal{L} _\text{GF}$
contains only fields that are already renormalized. For the purpose of
our work we do not present ${\mathcal{L}} _\text{GF}$ nor $\mathcal{L}_\text{FP}$ since
their explicit forms are not needed in the following.
The scalar Lagrangian $\mathcal{L} _\text{S}$ introduces the kinetic
terms of the Higgs doublets and their scalar potential. With the
covariant derivative
\begin{equation}
D_\mu = \partial _\mu + \frac{i}{2} g \sum _{a=1}^3 \sigma ^a W_\mu ^a + \frac{i}{2} g{'} B_\mu
\end{equation}
where $W_\mu ^a$ and $B_\mu$ are the gauge bosons of $SU(2)_L$ and
$U(1)_Y$, respectively, $g$ and $g{'}$ are the corresponding coupling
constants of the gauge groups, and $\sigma ^a$ are the Pauli matrices,
the scalar Lagrangian is given by
\begin{equation}
\mathcal{L} _S = \sum _{i=1}^2 (D_\mu \Phi _i)^\dagger (D^\mu \Phi _i) - V_\text{2HDM} ~.
\label{eq:scalarLagrangian}
\end{equation}
The scalar potential of the CP-conserving 2HDM reads
\cite{Branco:2011iw}
\begin{equation}
\begin{split}
V_\text{2HDM} =&~ m_{11}^2 \left| \Phi _1 \right| ^2 + m_{22}^2 \left|
\Phi _2 \right| ^2 - m_{12}^2 \left( \Phi _1 ^\dagger \Phi _2 +
\textit{h.c.} \right) + \frac{\lambda _1}{2} \left( \Phi _1^\dagger
\Phi _1 \right) ^2 + \frac{\lambda _2}{2} \left( \Phi _2^\dagger
\Phi _2 \right) ^2 \\
&+ \lambda _3 \left( \Phi _1^\dagger \Phi _1 \right) \left( \Phi
_2^\dagger \Phi _2 \right) + \lambda _4 \left( \Phi _1^\dagger \Phi
_2 \right) \left( \Phi _2^\dagger \Phi _1 \right) + \frac{\lambda
_5}{2} \left[ \left( \Phi _1^\dagger \Phi _2 \right) ^2 +
\textit{h.c.} \right] ~.
\end{split}
\label{eq:scalarPotential}
\end{equation}
Since we consider a CP-conserving model, the 2HDM potential can be
parametrized by three real-valued mass parameters $m_{11}$, $m_{22}$
and $m_{12}$ as well as five real-valued dimensionless coupling
constants $\lambda _i$ ($i=1,...,5$). For later convenience, we define
the frequently appearing combination of three of these coupling
constants as
\begin{equation}
\lambda _{345} \equiv \lambda _3 + \lambda _4 + \lambda _5 ~.
\end{equation}
For $m_{12}^2=0$, the potential $V_\text{2HDM}$ exhibits a discrete $\mathbb{Z}_2$
symmetry under the simultaneous field transformations $\Phi _1
\rightarrow - \Phi _1$ and $\Phi _2 \rightarrow \Phi
_2$. This
symmetry, implemented in the scalar potential in order to avoid
flavour-changing neutral currents (FCNC) at tree level, is softly
broken by a non-vanishing mass parameter $m_{12}^2$.
After electroweak symmetry breaking (EWSB), the neutral components of
the Higgs doublets develop vacuum expectation values (VEVs) which are
real in the CP-conserving case. After expanding about the real VEVs $v_1$
and $v_2$, the Higgs doublets $\Phi_i$ ($i=1,2$) can be expressed in
terms of the charged complex fields $\omega_i^+$ and the real neutral
CP-even and CP-odd fields $\rho_i$ and $\eta_i$, respectively, as
\begin{equation}
\Phi _1 = \begin{pmatrix} \omega ^+ _1 \\ \frac{v_1 + \rho _1 + i \eta
_1 }{\sqrt{2}} \end{pmatrix} ~~~\text{and}~~~~ \Phi _2 = \begin{pmatrix}
\omega ^+ _2 \\ \frac{v_2 + \rho _2 + i \eta
_2}{\sqrt{2}} \end{pmatrix}
\label{eq:vevexpansion}
\end{equation}
where
\begin{equation}
v^2 = v_1^2 + v_2^2 \approx (246.22~\text{GeV})^2
\label{eq:vevRelations}
\end{equation}
is the SM VEV obtained from the Fermi constant
$G_F$ and we define the ratio of the VEVs through the mixing angle $\beta$
as
\beq
\tan \beta = \frac{v_2}{v_1}
\eeq
so that
\beq
v_1 = v c_\beta \quad \mbox{and} \quad v_2 = v s_\beta~.
\eeq
Insertion of Eq.~(\ref{eq:vevexpansion}) in the kinetic part of the scalar
Lagrangian in Eq.~(\ref{eq:scalarLagrangian}) yields after rotation to
the mass eigenstates the tree-level relations for the masses of the
electroweak gauge bosons
\begin{align}
m_W^2 &= \frac{g^2v^2}{4} \\
m_Z^2 &= \frac{(g^2 + g{'}^2)v^2}{4} \\
m_\gamma ^2 &= 0 ~.
\end{align}
The electromagnetic coupling constant $e$ is connected to the
fine-structure constant $\alpha _\text{em}$ and to the gauge boson
coupling constants through the tree-level relation
\begin{equation}
e = \sqrt{4\pi \alpha _\text{em} } = \frac{g g{'}}{\sqrt{g^2 + g{'} ^2}}
\label{eq:electromagneticCouplingDefinition}
\end{equation}
which allows us to replace $g{'}$ in favor of $e$ or
$\alpha_\text{em}$. In our work, we use the fine-structure constant $\alpha
_\text{em}$ as an independent input. Alternatively, one could use the
tree-level relation to the Fermi constant
\begin{equation}
G_F \equiv \frac{\sqrt{2} g^2}{8m_W^2} = \frac{\alpha _\text{em} \pi
}{\sqrt{2} m_W^2 \left( 1 - \frac{m_W^2}{m_Z^2} \right)}
\label{eq:definitionFermiConstant}
\end{equation}
to replace one of the parameters of the electroweak sector in favor of
$G_F$. Since $G_F$ is used as an input value for {\texttt{HDECAY}}, we
present the formula here explicitly and explain the conversion between
the different parametrizations in Sec.\,\ref{sec:connectionHDECAY}.
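For orientation, this tree-level relation is easily inverted; the
following minimal Python sketch (an illustrative helper of ours, not
the conversion routine of {\texttt{2HDECAY}} described in
Sec.\,\ref{sec:connectionHDECAY}) obtains $\alpha_\text{em}$ from $G_F$
and the gauge boson masses:
\begin{verbatim}
import math

def alpha_em_from_GF(GF, mW, mZ):
    """Tree-level alpha_em from the Fermi constant, inverting
    G_F = alpha_em*pi / (sqrt(2)*mW^2*(1 - mW^2/mZ^2))."""
    return math.sqrt(2.0) * GF * mW**2 * (1.0 - mW**2 / mZ**2) / math.pi

# Indicative input values (masses in GeV, GF in GeV^-2):
print(alpha_em_from_GF(1.1663787e-5, 80.379, 91.1876))  # ~ 1/132
\end{verbatim}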
Inserting Eq.~(\ref{eq:vevexpansion}) in the scalar potential in
\eqref{eq:scalarPotential} leads to
\begin{equation}
\begin{split}
V_\text{2HDM} =&~ \frac{1}{2} \left( \rho _1 ~~ \rho _2 \right) M_\rho ^2 \begin{pmatrix} \rho _1 \\ \rho _2 \end{pmatrix} + \frac{1}{2} \left( \eta _1 ~~ \eta _2 \right) M_\eta ^2 \begin{pmatrix} \eta _1 \\ \eta _2 \end{pmatrix} + \frac{1}{2} \left( \omega ^\pm _1 ~~ \omega ^\pm _2 \right) M_\omega ^2 \begin{pmatrix} \omega ^\pm _1 \\ \omega ^\pm _2 \end{pmatrix} \\
& + T_1 \rho _1 + T_2 \rho _2 + ~~ \cdots
\end{split}
\label{eq:scalarPotentialMultilinearFields}
\end{equation}
where the terms $T_1$ and $T_2$ and the matrices $M_\omega ^2$,
$M_\rho ^2$ and $M_\eta ^2$ are defined below. By requiring the
VEVs of \eqref{eq:vevexpansion} to
represent the minimum of the potential, the minimum conditions for the
potential can be expressed as
\begin{equation}
\frac{\partial V_\text{2HDM}}{\partial \Phi _i} \Bigg| _{\left\langle \Phi _j \right\rangle} = 0~.
\end{equation}
This is equivalent to the statement that the two terms linear in the
CP-even fields $\rho _1$ and $\rho _2$, the tadpole terms,
\begin{align}
\frac{T_1}{v_1} &\equiv m_{11}^2 - m_{12}^2 \frac{v_2}{v_1} + \frac{v_1^2 \lambda _1}{2} + \frac{v_2 ^2 \lambda _{345}}{2} \label{eq:tadpoleCondition1} \\
\frac{T_2}{v_2} &\equiv m_{22}^2 - m_{12}^2 \frac{v_1}{v_2} + \frac{v_2^2 \lambda _2}{2} + \frac{v_1 ^2 \lambda _{345}}{2} \label{eq:tadpoleCondition2}
\end{align}
have to vanish at tree level:
\begin{equation}
T_1 = T_2 = 0 ~~~ (\text{at tree level}) ~.
\label{eq:tadpoleVanishAtTreelevel}
\end{equation}
The tadpole equations can be solved for $m_{11}^2$ and $m_{22}^2$ in
order to replace these two parameters by the tadpole parameters $T_1$
and $T_2$.
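As a minimal illustration of this replacement (a hypothetical helper,
not code from the program package), solving
Eqs.~(\ref{eq:tadpoleCondition1}) and (\ref{eq:tadpoleCondition2})
with $T_1 = T_2 = 0$ gives
\begin{verbatim}
def m11sq_m22sq(m12sq, v1, v2, lam1, lam2, lam345):
    """Tree-level m_11^2 and m_22^2 from the minimum
    conditions T_1 = T_2 = 0."""
    m11sq = m12sq * v2 / v1 - 0.5 * (v1**2 * lam1 + v2**2 * lam345)
    m22sq = m12sq * v1 / v2 - 0.5 * (v2**2 * lam2 + v1**2 * lam345)
    return m11sq, m22sq
\end{verbatim}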
The terms bilinear in the fields given in
\eqref{eq:scalarPotentialMultilinearFields} define the scalar mass
matrices
\begin{align}
M_\rho ^2 &\equiv \begin{pmatrix}
m_{12}^2 \frac{v_2}{v_1} + \lambda _1 v_1^2 & -m_{12}^2 + \lambda _{345} v_1 v_2 \\ -m_{12}^2 + \lambda _{345} v_1 v_2 & m_{12}^2 \frac{v_1}{v_2} + \lambda _2 v_2^2
\end{pmatrix} + \begin{pmatrix}
\frac{T_1}{v_1} & 0 \\ 0 & \frac{T_2}{v_2}
\end{pmatrix} \label{eq:massMatrices1} \\
M_\eta ^2 &\equiv \left( \frac{m_{12}^2}{v_1v_2} - \lambda _5 \right) \begin{pmatrix}
v_2^2 & -v_1 v_2 \\ -v_1 v_2 & v_1 ^2
\end{pmatrix} + \begin{pmatrix}
\frac{T_1}{v_1} & 0 \\ 0 & \frac{T_2}{v_2}
\end{pmatrix} \\
M_\omega ^2 &\equiv \left( \frac{m_{12}^2}{v_1v_2} - \frac{\lambda _4 + \lambda _5}{2} \right) \begin{pmatrix}
v_2^2 & -v_1 v_2 \\ -v_1 v_2 & v_1 ^2
\end{pmatrix} + \begin{pmatrix}
\frac{T_1}{v_1} & 0 \\ 0 & \frac{T_2}{v_2}
\end{pmatrix} \label{eq:massMatrices3}
\end{align}
where Eqs.\,(\ref{eq:tadpoleCondition1}) and
(\ref{eq:tadpoleCondition2}) have already been applied to replace the
parameters $m_{11}^2$ and $m_{22}^2$ in favor of $T_1$ and
$T_2$. Keeping the latter explicitly in the expressions of the
mass matrices is crucial for the correct renormalization of the scalar
sector, as explained in Sec.\,\ref{sec:renormalization2HDM}. By means
of two mixing angles $\alpha$ and $\beta$ which define the rotation
matrices\footnote{Here and in the following, we use the short-hand
notation $s_x \equiv \sin (x)$, $c_x \equiv \cos (x)$, $t_x \equiv
\tan (x)$ for convenience.}
\begin{equation}
R (x) \equiv \begin{pmatrix} c_x & - s_x \\ s_x & c_x \end{pmatrix}
\end{equation}
the fields $\omega ^+ _i$, $\rho _i$ and $\eta _i$ are rotated to the mass basis according to
\begin{align}
\begin{pmatrix} \rho _1 \\ \rho _2 \end{pmatrix} &= R(\alpha ) \begin{pmatrix} H \\ h \end{pmatrix} \label{eq:rotationCPEven} \\
\begin{pmatrix} \eta _1 \\ \eta _2 \end{pmatrix} &= R(\beta ) \begin{pmatrix} G^0 \\ A \end{pmatrix} \\
\begin{pmatrix} \omega ^\pm _1 \\ \omega ^\pm _2 \end{pmatrix} &= R(\beta ) \begin{pmatrix} G^\pm \\ H^\pm \label{eq:rotationCharged} \end{pmatrix}
\end{align}
with the two CP-even Higgs bosons $h$ and $H$, the CP-odd Higgs boson
$A$, the CP-odd Goldstone boson $G^0$ and the charged Higgs bosons
$H^\pm$ as well as the charged Goldstone bosons $G^\pm$. In the mass
basis, the diagonal mass matrices are given by
\begin{align}
D_\rho ^2 &\equiv \begin{pmatrix} m_H^2 & 0 \\ 0 & m_h^2 \end{pmatrix} \\
D_\eta ^2 &\equiv \begin{pmatrix} m_{G^0}^2 & 0 \\ 0 & m_A^2 \end{pmatrix} \\
D_\omega ^2 &\equiv \begin{pmatrix} m_{G^\pm}^2 & 0 \\ 0 & m_{H^\pm}^2 \end{pmatrix}
\end{align}
with the diagonal entries representing the squared masses of the
respective particles. The Goldstone bosons are massless,
\begin{equation}
m_{G^0}^2 = m_{G^\pm}^2 = 0~.
\end{equation}
The squared masses expressed in terms of the potential parameters and
the mixing angle $\alpha$ can be cast into the form~\cite{Kanemura:2004mg}
\begin{align}
m_H^2 &= c_{\alpha - \beta}^2 M_{11}^2 + s_{2(\alpha - \beta )} M_{12}^2 + s_{\alpha - \beta } ^2 M_{22}^2 \label{eq:parameterTransformationInteractionToMass1} \\
m_h^2 &= s_{\alpha - \beta}^2 M_{11}^2 - s_{2(\alpha - \beta )} M_{12}^2 + c_{\alpha - \beta } ^2 M_{22}^2 \\
m_A^2 &= \frac{m_{12}^2}{s_\beta c_\beta} - v^2 \lambda _5 \\
m_{H^\pm } ^2 &= \frac{m_{12}^2}{s_\beta c_\beta} - \frac{v^2}{2}
\left( \lambda _4 + \lambda _5 \right) \\
t_{2(\alpha - \beta ) } &= \frac{2M_{12}^2}{M_{11}^2 - M_{22}^2} \label{eq:parameterTransformationInteractionToMass5}
\end{align}
where we have introduced
\begin{align}
M_{11}^2 &\equiv v^2 \left( c_\beta ^4 \lambda _1 + s_\beta ^4 \lambda _2 + 2 s_\beta ^2 c_\beta ^2 \lambda _{345} \right) \\
M_{12}^2 &\equiv s_\beta c_\beta v^2 \left( - c_\beta ^2 \lambda _1 + s_\beta ^2 \lambda _2 + c_{2\beta } \lambda _{345} \right) \\
M_{22}^2 &\equiv \frac{m_{12}^2}{s_\beta c_\beta } + \frac{v^2}{8} \left( 1 - c_{4\beta } \right) \left( \lambda _1 + \lambda _2 - 2\lambda _{345} \right) ~.
\end{align}
Inverting
Eqs.~(\ref{eq:parameterTransformationInteractionToMass1})-(\ref{eq:parameterTransformationInteractionToMass5}),
the quartic couplings $\lambda _i$
($i=1,...,5$) can be expressed in terms of the mass
parameters $m_h^2$, $m_H^2$, $m_A^2$, $m_{H^\pm}^2$ and the CP-even
mixing angle $\alpha$ as \cite{Kanemura:2004mg}
\begin{align}
\lambda _1 &= \frac{1}{v^2 c_\beta ^2} \left( c_\alpha ^2 m_H^2 + s_\alpha ^2 m_h^2 - \frac{s_\beta}{c_\beta} m_{12}^2 \right) \label{eq:parameterTransformationMassToInteraction1} \\
\lambda _2 &= \frac{1}{v^2 s_\beta ^2} \left( s_\alpha ^2 m_H^2 + c_\alpha ^2 m_h^2 - \frac{c_\beta}{s_\beta} m_{12}^2 \right) \\
\lambda _3 &= \frac{2m_{H^\pm}^2}{v^2} + \frac{s_{2\alpha}}{s_{2\beta} v^2} \left( m_H^2 - m_h^2\right) - \frac{m_{12}^2}{s_\beta c_\beta v^2} \\
\lambda _4 &= \frac{1}{v^2} \left( m_A^2 - 2m_{H^\pm}^2 + \frac{m_{12}^2}{s_\beta c_\beta} \right) \\
\lambda _5 &= \frac{1}{v^2} \left( \frac{m_{12}^2}{s_\beta c_\beta} - m_A^2 \right) ~. \label{eq:parameterTransformationMassToInteraction5}
\end{align}
In order to avoid FCNCs at tree level, as introduced by the most
general 2HDM Yukawa Lagrangian, each type of fermion is allowed to
couple to only one Higgs doublet by imposing a global $\mathbb{Z}_2$
symmetry under which $\Phi_{1,2} \to \mp \Phi_{1,2}$. Depending on the
$\mathbb{Z}_2$ charge assignments, there are four phenomenologically
different types of 2HDMs summarized in Tab.\,\ref{tab:yukawaDefinitions}.
\begin{table}[tb]
\centering
\begin{tabular}{ c c c c }
\hline
& $u$-type & $d$-type & leptons \\ \hline
I & $\Phi _2$ & $\Phi _2$ & $\Phi _2$ \\
II & $\Phi _2$ & $\Phi _1$ & $\Phi _1$ \\
lepton-specific & $\Phi _2$ & $\Phi _2$ & $\Phi _1$ \\
flipped & $\Phi _2$ & $\Phi _1$ & $\Phi _2$ \\
\hline
\end{tabular}
\caption{The four Yukawa types of the $\mathbb{Z}_2$-symmetric
2HDM, defined by the Higgs doublet that couples to each kind of fermion.}
\label{tab:yukawaDefinitions}
\end{table}
For the four 2HDM types considered in this work, all Yukawa couplings can be
parametrized through six different Yukawa coupling parameters $Y_i$
($i=1,...,6$) whose values for the different types are presented
in Tab.\,\ref{tab:yukawaCouplings}. They are introduced here for later
convenience.
\begin{table}[tb]
\centering
\begin{tabular}{ c c c c c c c }
\hline
2HDM type & $Y_1$ & $Y_2$ & $Y_3$ & $Y_4$ & $Y_5$ & $Y_6$ \\ \hline
I & $\frac{c_\alpha }{s_\beta }$ & $\frac{s_\alpha }{s_\beta }$ & $-\frac{1}{t_\beta}$ & $\frac{c_\alpha }{s_\beta }$ & $\frac{s_\alpha }{s_\beta }$ & $-\frac{1}{t_\beta}$ \\
II & $-\frac{s_\alpha }{c_\beta }$ & $\frac{c_\alpha }{c_\beta }$ & $t_\beta $ & $-\frac{s_\alpha }{c_\beta }$ & $\frac{c_\alpha }{c_\beta }$ & $t_\beta $ \\
lepton-specific & $\frac{c_\alpha }{s_\beta }$ & $\frac{s_\alpha }{s_\beta }$ & $-\frac{1}{t_\beta}$ & $-\frac{s_\alpha }{c_\beta }$ & $\frac{c_\alpha }{c_\beta }$ & $t_\beta $ \\
flipped & $-\frac{s_\alpha }{c_\beta }$ & $\frac{c_\alpha }{c_\beta }$ & $t_\beta $ & $\frac{c_\alpha }{s_\beta }$ & $\frac{s_\alpha }{s_\beta }$ & $-\frac{1}{t_\beta}$ \\
\hline
\end{tabular}
\caption{Parametrization of the Yukawa coupling parameters in
terms of six parameters $Y_i$ ($i=1,...,6$)
for each 2HDM type.}
\label{tab:yukawaCouplings}
\end{table}
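Tab.\,\ref{tab:yukawaCouplings} amounts to a simple lookup; a minimal
sketch (the type labels are ours and do not follow the input-file
conventions of {\texttt{2HDECAY}}):
\begin{verbatim}
import math

def yukawa_parameters(thdm_type, alpha, beta):
    """(Y_1, ..., Y_6) for the four 2HDM types."""
    sa, ca = math.sin(alpha), math.cos(alpha)
    sb, cb, tbeta = math.sin(beta), math.cos(beta), math.tan(beta)
    via_phi2 = (ca / sb, sa / sb, -1.0 / tbeta)  # coupling via Phi_2
    via_phi1 = (-sa / cb, ca / cb, tbeta)        # coupling via Phi_1
    table = {
        "I":               via_phi2 + via_phi2,
        "II":              via_phi1 + via_phi1,
        "lepton-specific": via_phi2 + via_phi1,
        "flipped":         via_phi1 + via_phi2,
    }
    return table[thdm_type]
\end{verbatim}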
We conclude this section with an overview of the full set of
independent parameters that is used as input for the computations in
{\texttt{2HDECAY}}. In addition to the parameters defined by
${\mathcal L}_{\text{2HDM}}^{\text{EW}}$, {\texttt{HDECAY}} requires
the electromagnetic coupling constant
$\alpha_{\text{em}}$ in the Thomson limit for the calculation of
the loop-induced decays into a photon pair and into $Z\gamma$, the
strong coupling constant
$\alpha _s$ for the loop-induced decay into gluons
and the QCD corrections as well as the total decay widths of
the $W$ and $Z$ bosons, $\Gamma _W$ and $\Gamma _Z$, for the
computation of the off-shell decays into
massive gauge boson final states. In the mass basis of the
scalar sector, the set of independent parameters is given by
\begin{equation}
\{ G_F, \alpha _s , \Gamma _W , \Gamma _Z , \alpha _\text{em} , m_W ,
m_Z , m_f, V_{ij} , t_\beta , m_{12}^2 , \alpha , m_h , m_H , m_A ,
m_{H^\pm} \} ~.
\label{eq:inputSetMassBase}
\end{equation}
Here $m_f$ denote the fermion masses of the strange, charm, bottom and
top quarks and of the $\mu$ and $\tau$ leptons
($f=s,c,b,t,\mu,\tau$). All other fermion
masses are assumed to be zero in {\texttt{HDECAY}} and will also be
assumed to be zero in our computation of the EW corrections to the
decay widths. The fermion and gauge boson masses are defined in
accordance with the recommendations of the LHC Higgs cross section
working group \cite{Denner:2047636}. The $V_{ij}$ denote the CKM
mixing matrix elements. All {\texttt{HDECAY}} decay widths are
computed in terms of the Fermi constant $G_F$
except for processes involving on-shell external photon vertices that are
expressed by $\alpha_\text{em}$ in the Thomson limit. In the
computation of the EW corrections, however, we require the on-shell
masses $m_W$ and $m_Z$ and the electromagnetic coupling at
the $Z$ boson mass scale,
$\alpha_{\text{em}}(m_Z^2)$ (not to be confused with the mixing angle
$\alpha$ in the Higgs sector), as input parameters for our
renormalization conditions. We will come back to this
point later.
Alternatively, the original parametrization of the scalar
potential in the interaction basis can be
used\footnote{{\texttt{HDECAY}} internally translates the parameters
from the interaction
to the mass basis, in terms of which the decay widths are
implemented.}. In this case, the set of
independent parameters is given by
\begin{equation}
\{ G_F, \alpha _s , \Gamma _W , \Gamma _Z , \alpha _\text{em} , m_W , m_Z ,
m_f, V_{ij} , t_\beta , m_{12}^2 , \lambda _1 , \lambda _2 , \lambda _3
, \lambda _4 , \lambda _5 \} ~.
\label{eq:inputSetInteractionBase}
\end{equation}
However, we emphasize that the automatic parameter conversion in {\texttt{2HDECAY}} is performed only when the parameters are given in the mass basis of \eqref{eq:inputSetMassBase}.
Strictly speaking, the tadpole parameters $T_1$ and $T_2$ should also
be included in the two sets as independent parameters of the Higgs
potential. However, as described in
Sec.\,\ref{sec:renormalization2HDM}, the treatment of the minimum of
the Higgs potential at higher orders requires special care, and in an
alternative treatment of the minimum conditions, the tadpole
parameters disappear as independent parameters. In any case, after the
renormalization procedure is completely performed, the tadpole
parameters vanish again and hence do not count as input parameters
for {\texttt{2HDECAY}}.
\subsection{Renormalization}
\label{sec:renormalization2HDM}
We focus on the calculation of EW one-loop corrections to decay widths
of Higgs particles in the 2HDM. Since the higher-order (HO)
corrections of these decay widths are in general ultraviolet (UV)-divergent, a
proper regularization and renormalization of the UV divergences is
required. In the following, we briefly present the definition of the
counterterms (CTs) needed for the calculation of the EW one-loop
corrections. For a thorough derivation and presentation of the
gauge-independent renormalization of the 2HDM, we refer the reader to
\cite{Krause:2016gkg, Krause:2016oke, Krause:2016xku}.\footnote{See
also Refs.~\cite{Denner:2016etu,Altenkamp:2017ldc,Denner2018}
for a discussion of the renormalization of the 2HDM. For recent
works discussing gauge-independent renormalization within multi-Higgs
models, see \cite{Fox:2017hbw,Grimus:2018rte}.}
All input parameters that are renormalized for the calculation of the
EW corrections (apart from the mixing angles $\alpha $ and
$\beta$ and the soft-$\mathbb{Z}_2$-breaking mass parameter $m_{12}^2$) are
renormalized in the on-shell (OS) scheme. For the physical fields,
we employ the conditions that any mixing of fields with the same
quantum numbers shall vanish on the mass shell of the respective
particles and that the fields are normalized by fixing the
residue of their corresponding propagators at their poles to unity. Mass
CTs are fixed through the condition that the masses are defined as the
real parts of the poles of the renormalized propagators. These OS
conditions suffice to renormalize most of the parameters of the 2HDM
necessary for our work. The renormalization of the mixing angles
$\alpha$ and $\beta$ follows an OS-motivated approach, as discussed in
Sec.\,\ref{sec:renormalizationMixingAngles}, while $m_{12}^2$ is
renormalized via an $\overline{\text{MS}}$ condition as discussed in
Sec.\,\ref{sec:renormalizationSoftm12Squared}.
\subsubsection{Renormalization of the Tadpoles}
\label{sec:renormalizationTadpoles}
As shown for the 2HDM for the first time in \cite{Krause:2016gkg,
Krause:2016oke}, the proper treatment of the
tadpole terms at one-loop order is crucial for the
gauge-independent definition of the CTs of the mixing angles $\alpha$
and $\beta$. This allows for the calculation of one-loop partial decay widths with
a manifestly gauge-independent relation between input variables and
the physical observable.
In the following, we briefly repeat the different renormalization
conditions for the tadpoles that can be employed in the 2HDM.
The \textit{standard tadpole scheme} is a commonly used
renormalization scheme for the tadpoles ({\it cf.}~{\it e.g.}~\cite{Denner:1991kt}
for the SM or \cite{Kanemura:2004mg, Kanemura:2015mxa} for the
2HDM). While the tadpole parameters vanish at tree level, as stated in
\eqref{eq:tadpoleVanishAtTreelevel}, they are in general non-vanishing
at higher orders in perturbation theory. Since the tadpole terms,
being the terms linear in the Higgs potential, define the minimum
of the potential, it is necessary to employ a renormalization of the
tadpoles in such a way that the ground state of the potential still
represents the minimum at higher orders. In the standard tadpole
scheme, this condition is imposed on the loop-corrected potential. By
replacing the tree-level tadpole terms at one-loop order with the
physical ({\it i.e.}~renormalized) tadpole terms and the tadpole CTs $\delta
T _i$,
\begin{equation}
T_i ~\rightarrow ~ T_i + \delta T_i ~~~ (i = 1,2)
\end{equation}
the correct minimum of the loop-corrected potential
is obtained by demanding the renormalized tadpole
terms $T_i$ to vanish. This directly connects the tadpole CTs $\delta
T_i$ with the corresponding one-loop tadpole diagrams,
\begin{equation}
i\delta T_{H/h} = \mathord{ \left(\rule{0cm}{30px}\right. \vcenter{
\hbox{ \includegraphics[height=57px , trim = 16.6mm 12mm 16.6mm
10mm, clip]{TadpoleDiagramHiggsBasis.pdf} } }
\left.\rule{0cm}{30px}\right) }
\label{eq:tadpoleCountertermDefinition}
\end{equation}
where we switched the tadpole terms from the interaction basis to the
mass basis by means of the rotation matrix $R(\alpha )$, as indicated
in \eqref{eq:rotationCPEven}. Since the tadpole terms explicitly
appear in the mass matrices in
Eqs.\,(\ref{eq:massMatrices1})-(\ref{eq:massMatrices3}), their CTs
explicitly appear in the mass matrices at one-loop
order. The rotation from the interaction to the mass basis yields
nine tadpole CTs in total which depend
on the two tadpole CTs $\delta T_{H/h}$ defined by the one-loop
tadpole diagrams in \eqref{eq:tadpoleCountertermDefinition}:
\begin{mdframed}[frametitle={Renormalization of the tadpoles (standard scheme)},frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt]\begin{align}
\delta T_{HH} &= \frac{c _\alpha ^3 s _\beta + s _\alpha ^3 c _\beta }{vs _\beta c _\beta } \delta T_{H} - \frac{s_{2\alpha} s_{\beta - \alpha} }{vs_{2\beta} } \delta T_{h} \label{eq:RenormalizationRadpolesTadpoleCountertermDeltaTH0H0ExplicitForm} \\
\delta T_{Hh} &= -\frac{s _{2\alpha} s _{\beta - \alpha} }{vs_{2\beta}} \delta T_{H} + \frac{s _{2\alpha} c _{\beta - \alpha} }{vs_{2\beta}} \delta T_{h} \\
\delta T_{hh} &= \frac{s_{2\alpha} c_{\beta - \alpha} }{vs_{2\beta}} \delta T_{H} - \frac{s_\alpha ^3 s _\beta - c _\alpha ^3 c _\beta }{vs_\beta c_\beta } \delta T_{h} \\
\delta T_{G^0G^0} &= \frac{c_{\beta -\alpha} }{v} \delta T_{H} + \frac{s _{\beta - \alpha} }{v} \delta T_{h} \label{eq:RenormalizationRadpolesTadpoleCountertermDeltaTG0G0ExplicitForm} \\
\delta T_{G^0A} &= -\frac{s_{\beta - \alpha} }{v} \delta T_{H} + \frac{c _{\beta - \alpha} }{v} \delta T_{h} \\
\delta T_{AA} &= \frac{c_\alpha s_\beta ^3 + s _\alpha c_\beta ^3 }{vs_\beta c_\beta } \delta T_{H} - \frac{s_\alpha s_\beta ^3 - c_\alpha c_\beta ^3 }{vs_\beta c_\beta } \delta T_{h} \label{eq:RenormalizationRadpolesTadpoleCountertermDeltaTA0A0ExplicitForm} \\
\delta T_{G^\pm G^\pm } &= \frac{c_{\beta -\alpha} }{v}
\delta T_{H} + \frac{s _{\beta -
\alpha} }{v} \delta T_{h} \\
\delta T_{G^\pm H^\pm } &= -\frac{s_{\beta - \alpha} }{v}
\delta T_{H} + \frac{c _{\beta -
\alpha} }{v} \delta T_{h} \\
\delta T_{H^\pm H^\pm } &= \frac{c_\alpha s_\beta ^3
+ s _\alpha c_\beta ^3
}{vs_\beta c_\beta }
\delta T_{H} - \frac{s_\alpha
s_\beta ^3 - c_\alpha
c_\beta ^3 }{vs_\beta
c_\beta } \delta
T_{h}
\label{eq:RenormalizationRadpolesTadpoleCountertermDeltaTHpHpExplicitForm}
\end{align}\end{mdframed}
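These relations are linear in $\delta T_{H}$ and $\delta T_{h}$ and can
be evaluated mechanically; a compact illustrative sketch (not taken
from the program source):
\begin{verbatim}
import math

def tadpole_cts(dTH, dTh, alpha, beta, v):
    """The nine tadpole CTs of the standard scheme as linear
    combinations of delta T_H and delta T_h."""
    sa, ca = math.sin(alpha), math.cos(alpha)
    sb, cb = math.sin(beta), math.cos(beta)
    s2a, s2b = math.sin(2 * alpha), math.sin(2 * beta)
    sba, cba = math.sin(beta - alpha), math.cos(beta - alpha)
    dT = {}
    dT["HH"] = ((ca**3 * sb + sa**3 * cb) / (v * sb * cb) * dTH
                - s2a * sba / (v * s2b) * dTh)
    dT["Hh"] = (-s2a * sba / (v * s2b) * dTH
                + s2a * cba / (v * s2b) * dTh)
    dT["hh"] = (s2a * cba / (v * s2b) * dTH
                - (sa**3 * sb - ca**3 * cb) / (v * sb * cb) * dTh)
    dT["G0G0"] = dT["GpGp"] = cba / v * dTH + sba / v * dTh
    dT["G0A"] = dT["GpHp"] = -sba / v * dTH + cba / v * dTh
    dT["AA"] = dT["HpHp"] = ((ca * sb**3 + sa * cb**3) * dTH
                             - (sa * sb**3 - ca * cb**3) * dTh) \
                            / (v * sb * cb)
    return dT
\end{verbatim}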
Since the minimum of the potential is defined through the
loop-corrected scalar potential, which in general is a gauge-dependent
quantity, the CTs defined through this minimum ({\it e.g.}~the CTs of
the scalar or gauge boson masses) become manifestly gauge-dependent
themselves. This is no problem as long as all gauge dependences
arising in a fixed-order calculation cancel against each other. In the
2HDM, however, an improper renormalization condition for the mixing
angle CTs within the standard tadpole scheme can lead to uncanceled
gauge dependences in the calculation of partial decay widths. This is
discussed in more detail in
Sec.\,\ref{sec:renormalizationMixingAngles}. Apart from the appearance
of the tadpole diagrams in
Eqs.\,(\ref{eq:RenormalizationRadpolesTadpoleCountertermDeltaTH0H0ExplicitForm})-(\ref{eq:RenormalizationRadpolesTadpoleCountertermDeltaTHpHpExplicitForm}),
and subsequently in the CTs and the wave function renormalization
constants (WFRCs) defined through these, the renormalization condition
in \eqref{eq:tadpoleCountertermDefinition} ensures that all other
appearances of tadpoles are canceled in the one-loop calculation,
{\it i.e.}~tadpole diagrams in the self-energies or vertex corrections
do not have to be taken into account.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth, trim=2.2cm 2.2cm 0cm 2.2cm, clip]{Propagators.pdf}
\caption{Generic definition of the self-energies $\Sigma$ and $\Sigma
^\text{tad}$ as a function of the external momentum $p^2$
used in our CT definitions of the 2HDM. While $\Sigma$
is the textbook definition of the one-particle irreducible
self-energy, the self-energy $\Sigma ^\text{tad}$ additionally
contains tadpole diagrams, indicated by the gray
blob. For the actual calculation, the full particle content of the 2HDM has to be
inserted into the self-energy topologies depicted here.}
\label{fig:definitionOfSelfenergies}
\end{figure}
An alternative treatment of the tadpole renormalization was proposed
for the SM by J.~Fleischer and F.~Jegerlehner
\cite{PhysRevD.23.2001}. It was applied to the extended scalar sector
of the 2HDM for the first time in \cite{Krause:2016gkg,
Krause:2016oke} and is called \textit{alternative (FJ) tadpole
scheme} in the following. In this alternative approach, the VEVs
$v_{1,2}$ are considered as the fundamental quantities instead of the
tadpole terms. The \textit{proper} VEVs are the renormalized all-order
VEVs of the Higgs fields which represent the true ground state of the
theory and which are connected to the particle masses and the
couplings of the electroweak sector. Since the alternative approach
relies on the minimization of the gauge-independent tree-level scalar
potential, the mass CTs defined in this framework become manifestly
gauge-independent quantities by themselves. Moreover, the alternative
tadpole scheme connects the all-order renormalized VEVs directly to
the corresponding tree-level VEVs. Since the tadpoles are not the
fundamental quantities of the Higgs minimum in this framework, they do
not receive CTs. Instead, CTs for the VEVs are introduced by replacing
the VEVs with the renormalized VEVs and their CTs,
\begin{equation}
v_i ~ \rightarrow ~ v_i + \delta v_i
\end{equation}
and by fixing the latter such that the renormalized VEVs represent
the proper tree-level minima to all orders. At one-loop level, this
leads to the following connection
between the VEV CTs in the interaction basis and the one-loop tadpole
diagrams in the mass basis,
\begin{equation}
\delta v_1 = \frac{-i c_\alpha }{m_{H}^2} \mathord{
\left(\rule{0cm}{30px}\right. \vcenter{
\hbox{ \includegraphics[height=57px , trim = 18mm 12mm 17.4mm
10mm, clip]{TadpoleDiagramHiggsBasisHH.pdf} } }
\left.\rule{0cm}{30px}\right) } - \frac{-i s_\alpha }{m_{h}^2}
\mathord{ \left(\rule{0cm}{30px}\right. \vcenter{
\hbox{ \includegraphics[height=57px , trim = 18mm 12mm 17.4mm
10mm, clip]{TadpoleDiagramHiggsBasish0.pdf} } }
\left.\rule{0cm}{30px}\right) } ~~~\text{and} ~~~ \delta v_2 = \frac{-i
s_\alpha }{m_{H}^2} \mathord{
\left(\rule{0cm}{30px}\right. \vcenter{
\hbox{ \includegraphics[height=57px , trim = 18mm 12mm 17.4mm
10mm, clip]{TadpoleDiagramHiggsBasisHH.pdf} } }
\left.\rule{0cm}{30px}\right) } + \frac{-i c_\alpha }{m_{h}^2}
\mathord{ \left(\rule{0cm}{30px}\right. \vcenter{
\hbox{ \includegraphics[height=57px , trim = 18mm 12mm 17.4mm
10mm, clip]{TadpoleDiagramHiggsBasish0.pdf} } }
\left.\rule{0cm}{30px}\right) } ~.
\label{eq:vevCountertermDefinition}
\end{equation}
The renormalization of the VEVs in the alternative tadpole scheme
effectively shifts the VEVs by tadpole contributions. As a
consequence, tadpole diagrams have to be considered wherever they can
appear in the 2HDM. For the self-energies, this means that the
fundamental self-energies used to define the CTs are the ones defined
as $\Sigma ^\text{tad}$ in \figref{fig:definitionOfSelfenergies}
instead of the usual one-particle irreducible self-energies
$\Sigma$. Additionally, tadpole diagrams have to be considered in the
calculation of the one-loop vertex corrections to the Higgs decays. In
summary, the renormalization of the tadpoles in the alternative scheme
leads to the following conditions:
\begin{mdframed}[frametitle={Renormalization of the tadpoles (alternative FJ scheme)},frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt]\begin{align}
\delta T_{ij} &= 0 \\
\Sigma (p^2) ~ &\rightarrow ~\Sigma ^\text{tad} (p^2) \\
\text{Tadpole diagrams have to be }&\text{considered in the vertex
corrections.}
\nonumber
\end{align}\end{mdframed}
\subsubsection{Renormalization of the Gauge Sector}
\label{sec:renormalizationGaugeSector}
For the renormalization of the gauge sector, we introduce CTs and
WFRCs for all parameters and fields of the electroweak sector of the
2HDM by applying the shifts
\begin{align}
m_W^2 ~ &\rightarrow ~ m_W^2 + \delta m_W^2 \\
m_Z^2 ~ &\rightarrow ~ m_Z^2 + \delta m_Z^2 \\
\alpha _\text{em} ~ &\rightarrow ~ \alpha _\text{em} + \delta \alpha _\text{em} \equiv \alpha _\text{em} + 2 \alpha _\text{em} \delta Z_e \\
W^\pm _\mu ~ &\rightarrow ~ \left( 1 + \frac{\delta Z_{WW} }{2} \right) W^\pm _\mu \\
\begin{pmatrix} Z \\ \gamma \end{pmatrix} ~ &\rightarrow ~ \begin{pmatrix} 1 + \frac{\delta Z_{ZZ} }{2} & \frac{\delta Z_{Z \gamma }}{2} \\ \frac{\delta Z_{\gamma Z }}{2} & 1 + \frac{\delta Z_{\gamma \gamma }}{2} \end{pmatrix} \begin{pmatrix} Z \\ \gamma \end{pmatrix}
\;
\end{align}
where for convenience, we additionally introduced the shift
\begin{equation}
e ~ \rightarrow ~ e\,( 1 + \delta Z_e )
\end{equation}
for the electromagnetic coupling constant by using
\eqref{eq:electromagneticCouplingDefinition}. Applying OS
conditions to the gauge sector of the 2HDM leads to equivalent
expressions for the CTs as derived in Ref.~\cite{Denner:1991kt} for the
SM\footnote{In contrast to Ref.~\cite{Denner:1991kt}, however, we choose a
different sign for the $SU(2)_L$ term of the covariant derivative,
which subsequently leads to a different sign in front of the second
term of \eqref{eq:RenormalizationGaugeSectorExplicitFormDeltaZe}.},
for the standard and alternative tadpole scheme, respectively,
\begin{mdframed}[frametitle={Renormalization of the gauge sector (standard scheme)},frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt]\begin{align}
\delta m_W^2 &= \textrm{Re} \left[ \Sigma _{WW} ^{T} \left( m_W ^2 \right) \right] \\
\delta m_Z^2 &= \textrm{Re} \left[ \Sigma _{ZZ} ^{T} \left( m_{Z} ^2 \right) \right]
\end{align}\end{mdframed}
\begin{mdframed}[frametitle={Renormalization of the gauge sector (alternative FJ scheme)},frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt]\begin{align}
\delta m_W^2 &= \textrm{Re} \left[ \Sigma _{WW} ^{\textrm{tad},T} \left( m_W ^2 \right) \right] \\
\delta m_Z^2 &= \textrm{Re} \left[ \Sigma _{ZZ} ^{\textrm{tad},T} \left( m_{Z} ^2 \right) \right]
\end{align}\end{mdframed}
The WFRCs are the same in both tadpole schemes,
\begin{mdframed}[frametitle={Renormalization of the gauge sector
(standard and alternative FJ scheme)},frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]\begin{align}
\delta Z_e (m_Z^2) &= \frac{1}{2} \left. \frac{\partial \Sigma ^T _{\gamma \gamma } \left( p^2 \right) }{\partial p^2 } \right| _{p^2 = 0} + \frac{s_W }{c_W } \frac{\Sigma ^T _{\gamma Z} \left( 0\right) }{m_Z ^2 } - \frac{1}{2} \Delta \alpha (m_Z^2) \label{eq:RenormalizationGaugeSectorExplicitFormDeltaZe} \\
\delta Z_{WW} &= - \textrm{Re} \left[ \frac{\partial \Sigma ^T _{WW} \left( p^2 \right) }{\partial p^2 } \right] _{p^2 = m_W^2 } \label{eq:RenormalizationGaugeSectorExplicitFormDeltaWW} \\
\begin{pmatrix} \delta Z_{ZZ} & \delta Z _{Z \gamma } \\ \delta Z _{\gamma Z } & \delta Z_{ \gamma \gamma } \end{pmatrix} &= \begin{pmatrix} - \textrm{Re} \left[ \frac{\partial \Sigma ^T _{ZZ} \left( p^2 \right) }{\partial p^2 } \right] _{p^2 = m_Z^2 } & \frac{2}{m_Z ^2 } \Sigma ^{T} _{Z \gamma } \left( 0 \right) \\ -\frac{2}{m_Z ^2 } \textrm{Re} \left[ \Sigma ^{T} _{Z \gamma } \left( m_Z ^2 \right) \right] & - \textrm{Re} \left[ \frac{\partial \Sigma ^T _{\gamma \gamma } \left( p^2 \right) }{\partial p^2 } \right] _{p^2 = 0} \end{pmatrix} \label{eq:RenormalizationGaugeSectorExplicitFormDeltaZZ}
\end{align}\end{mdframed}
The superscript $T$ indicates that only the transverse parts of the
self-energies are taken into account. The CT for the
electromagnetic coupling $\delta Z_e (m_Z^2)$ is defined at the scale
of the $Z$ boson mass instead of the Thomson limit. For this, the additional term
\begin{equation}
\Delta \alpha (m_Z^2) = \frac{\partial \Sigma _{\gamma \gamma} ^{\text{light},T} (p^2) }{\partial p^2} \Bigg| _{p^2 = 0} - \frac{\Sigma ^T _{\gamma \gamma } (m_Z^2) }{m_Z^2}
\label{eq:lightContributions}
\end{equation}
is required, where the transverse photon self-energy $\Sigma _{\gamma \gamma}
^{\text{light},T} (p^2)$ in \eqref{eq:lightContributions} contains
solely light fermion contributions ({\it i.e.}~contributions from all
fermions apart from the $t$ quark). This ensures that the results of
our EW one-loop computations are independent of large
logarithms due to light fermion contributions \cite{Denner:1991kt}.
For later convenience, we additionally introduce the shift of the weak coupling constant
\begin{equation}
g ~ \rightarrow ~ g + \delta g ~.
\end{equation}
Since $g$ is not an independent parameter in our approach, {\it
cf.}~\eqref{eq:electromagneticCouplingDefinition}, the CT $\delta g$
is not independent either and can be expressed through the other CTs
derived in this subsection as
\begin{equation}
\frac{\delta g}{g} = \delta Z_e (m_Z^2) + \frac{1}{2( m_Z^2 - m_W^2)} \left( \delta m_W^2 - \frac{m_W^2}{m_Z^2} \delta m_Z^2 \right) ~.
\end{equation}
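For illustration, assembling the dependent CT from the independent ones
is a one-liner (hypothetical helper names):
\begin{verbatim}
def delta_g_over_g(dZe, dmWsq, dmZsq, mWsq, mZsq):
    """Dependent weak-coupling CT delta g / g from delta Z_e(mZ^2)
    and the gauge boson mass CTs."""
    return dZe + (dmWsq - mWsq / mZsq * dmZsq) / (2.0 * (mZsq - mWsq))
\end{verbatim}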
\subsubsection{Renormalization of the Scalar Sector}
\label{sec:renormalizationScalarSector}
In the scalar sector of the 2HDM, the masses and fields of the scalar particles are shifted as
\begin{align}
m_{H}^2 ~ &\rightarrow ~ m_{H}^2 + \delta m_{H}^2 \\
m_{h}^2 ~ &\rightarrow ~ m_{h}^2 + \delta m_{h}^2 \\
m_{A}^2 ~ &\rightarrow ~ m_{A}^2 + \delta m_{A}^2 \\
m_{H^\pm }^2 ~ &\rightarrow ~ m_{H^\pm }^2 + \delta m_{H^\pm }^2 \\
\begin{pmatrix} H \\ h \end{pmatrix} ~ &\rightarrow ~ \begin{pmatrix} 1 + \frac{\delta Z _{{H} {H}}}{2} & \frac{\delta Z _{{H} {h}}}{2} \\ \frac{\delta Z _{{h} {H}}}{2} & 1 + \frac{\delta Z _{{h} {h}}}{2} \end{pmatrix} \renewcommand*{\arraystretch}{1.6} \begin{pmatrix} H \\ h \end{pmatrix} \label{RenormalizationOnShellLabelSectionScalarSectorFieldRenormalizationConstantsCPEvenHiggses} \\
\begin{pmatrix} G^0 \\ A \end{pmatrix} ~ &\rightarrow ~ \begin{pmatrix} 1 + \frac{\delta Z _{{G}^0 {G}^0}}{2} & \frac{\delta Z _{{G}^0 {A}}}{2} \\ \frac{\delta Z _{{A} {G}^0}}{2} & 1 + \frac{\delta Z _{{A} {A}}}{2} \renewcommand*{\arraystretch}{1.6} \end{pmatrix} \begin{pmatrix} G^0 \\ A \end{pmatrix} \label{RenormalizationOnShellLabelSectionScalarSectorFieldRenormalizationConstantsCPOddHiggses} \\
\begin{pmatrix} G^\pm \\ H^\pm \end{pmatrix} ~ &\rightarrow ~ \begin{pmatrix} 1 + \frac{\delta Z _{{G}^\pm {G}^\pm }}{2} & \frac{\delta Z _{{G}^\pm {H}^\pm }}{2} \\ \frac{\delta Z _{{H}^\pm {G}^\pm }}{2} & 1 + \frac{\delta Z _{{H}^\pm {H}^\pm }}{2} \renewcommand*{\arraystretch}{1.6} \end{pmatrix} \begin{pmatrix} G^\pm \\ H^\pm \end{pmatrix} \label{RenormalizationOnShellLabelSectionScalarSectorFieldRenormalizationConstantsChargedHiggses} ~.
\end{align}
Applying OS renormalization conditions leads to the following CT
definitions \cite{Krause:2016gkg},
\begin{mdframed}[frametitle={Renormalization of the scalar sector (standard scheme)},frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]\begin{align}
\delta Z_{Hh} &= \frac{2}{m_{H}^2 - m_{h}^2} \textrm{Re} \Big[ \Sigma _{Hh} (m_{h}^2) - \delta T_{Hh} \Big] \label{RenormalizationScalarFieldsMassesExplicitFormWaveFunctionRenormalizationConstantH0h0} \\
\delta Z_{hH} &= -\frac{2}{m_{H}^2 - m_{h}^2} \textrm{Re} \Big[ \Sigma _{Hh} (m_{H}^2) - \delta T_{Hh} \Big] \label{RenormalizationScalarFieldsMassesExplicitFormWaveFunctionRenormalizationConstanth0H0} \\
\delta Z_{G^0A} &= -\frac{2}{m_{A}^2} \textrm{Re} \Big[ \Sigma _{G^0A} (m_{A}^2) - \delta T_{G^0A} \Big] \\
\delta Z_{AG^0} &= \frac{2}{m_{A}^2} \textrm{Re} \Big[ \Sigma _{G^0A} (0) - \delta T_{G^0A} \Big] \\
\delta Z_{G^\pm H^\pm} &= -\frac{2}{m_{H^\pm}^2} \textrm{Re} \Big[ \Sigma _{G^\pm H^\pm} (m_{H^\pm}^2) - \delta T_{G^\pm H^\pm} \Big] \\
\delta Z_{H^\pm G^\pm} &= \frac{2}{m_{H^\pm}^2} \textrm{Re} \Big[ \Sigma _{G^\pm H^\pm} (0) - \delta T_{G^\pm H^\pm} \Big] \\
\delta m_{H}^2 &= \textrm{Re} \Big[ \Sigma _{HH} (m_{H}^2) - \delta T_{HH} \Big] \\
\delta m_{h}^2 &= \textrm{Re} \Big[ \Sigma _{hh} (m_{h}^2) - \delta T_{hh} \Big] \\
\delta m_{A}^2 &= \textrm{Re} \Big[ \Sigma _{AA} (m_{A}^2) - \delta T_{AA} \Big] \\
\delta m_{H^\pm }^2 &= \textrm{Re} \Big[ \Sigma _{H^\pm H^\pm } (m_{H^\pm }^2) - \delta T_{H^\pm H^\pm } \Big]
\end{align}\end{mdframed}
\begin{mdframed}[frametitle={Renormalization of the scalar sector (alternative FJ scheme)},frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]\begin{align}
\delta Z_{Hh} &= \frac{2}{m_{H}^2 - m_{h}^2} \textrm{Re} \Big[ \Sigma ^\textrm{tad} _{Hh} (m_{h}^2) \Big] \label{RenormalizationScalarFieldsMassesExplicitFormWaveFunctionRenormalizationConstantH0h0Alt} \\
\delta Z_{hH} &= -\frac{2}{m_{H}^2 - m_{h}^2} \textrm{Re} \Big[ \Sigma ^\textrm{tad} _{Hh} (m_{H}^2) \Big] \label{RenormalizationScalarFieldsMassesExplicitFormWaveFunctionRenormalizationConstanth0H0Alt} \\
\delta Z_{G^0A} &= -\frac{2}{m_{A}^2} \textrm{Re} \Big[ \Sigma ^\textrm{tad} _{G^0A} (m_{A}^2) \Big] \\
\delta Z_{AG^0} &= \frac{2}{m_{A}^2} \textrm{Re} \Big[ \Sigma ^\textrm{tad} _{G^0A} (0) \Big] \\
\delta Z_{G^\pm H^\pm} &= -\frac{2}{m_{H^\pm}^2} \textrm{Re} \Big[ \Sigma ^\textrm{tad} _{G^\pm H^\pm} (m_{H^\pm}^2) \Big] \\
\delta Z_{H^\pm G^\pm} &= \frac{2}{m_{H^\pm}^2} \textrm{Re} \Big[ \Sigma ^\textrm{tad} _{G^\pm H^\pm} (0) \Big] \\
\delta m_{H}^2 &= \textrm{Re} \Big[ \Sigma ^\textrm{tad} _{HH} (m_{H}^2) \Big] \label{RenormalizationScalarFieldsMassesExplicitFormMassCountertermH0} \\
\delta m_{h}^2 &= \textrm{Re} \Big[ \Sigma ^\textrm{tad} _{hh} (m_{h}^2) \Big] \label{RenormalizationScalarFieldsMassesExplicitFormMassCountertermh0} \\
\delta m_{A}^2 &= \textrm{Re} \Big[ \Sigma ^\textrm{tad} _{AA} (m_{A}^2) \Big] \\
\delta m_{H^\pm }^2 &= \textrm{Re} \Big[ \Sigma ^\textrm{tad} _{H^\pm H^\pm } (m_{H^\pm }^2) \Big] \label{RenormalizationScalarFieldsMassesExplicitFormMassCountertermHp}
\end{align}\end{mdframed}
\begin{mdframed}[frametitle={Renormalization of the scalar sector
(standard and alternative FJ scheme)},frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]\begin{align}
\delta Z_{HH} &= - \textrm{Re} \left[ \frac{\partial \Sigma _{H H } \left( p^2 \right) }{\partial p^2 } \right] _{p^2 = m_{H} ^2} \label{RenormalizationScalarFieldsMassesExplicitFormWaveFunctionRenormalizationConstantH0H0} \\
\delta Z_{hh} &= - \textrm{Re} \left[ \frac{\partial \Sigma _{h h } \left( p^2 \right) }{\partial p^2 } \right] _{p^2 = m_{h} ^2} \\
\delta Z_{G^0G^0} &= - \textrm{Re} \left[ \frac{\partial \Sigma _{G^0 G^0 } \left( p^2 \right) }{\partial p^2 } \right] _{p^2 = 0} \\
\delta Z_{AA} &= - \textrm{Re} \left[ \frac{\partial \Sigma _{A A } \left( p^2 \right) }{\partial p^2 } \right] _{p^2 = m_{A} ^2} \\
\delta Z_{G^\pm G^\pm } &= - \textrm{Re} \left[ \frac{\partial \Sigma _{G^\pm G^\pm } \left( p^2 \right) }{\partial p^2 } \right] _{p^2 = 0} \\
\delta Z_{H^\pm H^\pm } &= - \textrm{Re} \left[ \frac{\partial \Sigma _{H^\pm H^\pm } \left( p^2 \right) }{\partial p^2 } \right] _{p^2 = m_{H^\pm} ^2} \label{RenormalizationScalarFieldsMassesExplicitFormWaveFunctionRenormalizationConstantHpHp}
\end{align}\end{mdframed}
with the tadpole CTs in the standard scheme defined in Eqs.\,(\ref{eq:RenormalizationRadpolesTadpoleCountertermDeltaTH0H0ExplicitForm})-(\ref{eq:RenormalizationRadpolesTadpoleCountertermDeltaTHpHpExplicitForm}).
\subsubsection{Renormalization of the Scalar Mixing Angles}
\label{sec:renormalizationMixingAngles}
In the following, we describe the renormalization of the scalar mixing
angles $\alpha$ and $\beta$ in the 2HDM. In our approach, we perform
the rotation from the interaction to the mass basis,
{\it cf.}~Eqs.\,(\ref{eq:rotationCPEven})-(\ref{eq:rotationCharged}),
before renormalization, so that the mixing angles appear as parameters
of the Lagrangian and need to be renormalized.
At one-loop level, the bare mixing angles are replaced by their
renormalized values and counterterms as
\begin{align}
\alpha ~ &\rightarrow ~ \alpha + \delta \alpha \\
\beta ~ &\rightarrow ~ \beta + \delta \beta ~.
\end{align}
The renormalization of the mixing angles in the 2HDM is a non-trivial
task and several different schemes have been proposed in the
literature. In the following, we only briefly present the definition
of the mixing angle CTs in all different schemes that are implemented
in {\texttt{2HDECAY}} and refer to \cite{Krause:2016gkg,
Krause:2016oke} for details on the derivation of these
schemes. \\
\textbf{$\overline{\text{MS}}$ scheme.} It was shown in \cite{Lorenz:2015, Krause:2016gkg} that an
$\overline{\text{MS}}$ condition for $\delta \alpha$ and $\delta
\beta$ can lead to one-loop corrections that
are orders of magnitude larger than the LO result\footnote{In \cite{Denner:2016etu},
an $\overline{\text{MS}}$ condition for the scalar mixing angles
in certain processes led to corrections that are numerically
well-behaved due to a partial cancellation of large contributions
from tadpoles. In the decays considered in our work, an
$\overline{\text{MS}}$ condition of $\delta \alpha$ and $\delta
\beta$ in general leads to very large corrections, however.}. We implemented this scheme in {\texttt{2HDECAY}} for reference, as the $\overline{\text{MS}}$ CTs contain only the UV-divergent parts and no finite parts $\left. \delta \alpha \right| _\text{fin}$ and $\left. \delta \beta \right| _\text{fin}$. After having checked the UV finiteness of the full decay width, the CTs of the mixing angles $\alpha$ and $\beta$ are effectively set to zero in {\texttt{2HDECAY}} in this scheme for the numerical evaluation of the partial decay widths.
\begin{mdframed}[frametitle={ Renormalization of $\delta \alpha$ and $\delta \beta$: $\overline{\text{MS}}$ scheme (both schemes) },frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]\begin{align}
\left. \delta \alpha \right| _\text{fin} &= 0 \\
\left. \delta \beta \right| _\text{fin} &= 0
\end{align}\end{mdframed}
The $\overline{\text{MS}}$ CTs of $\alpha$ and
$\beta$ depend on the renormalization scale $\mu_R$. The user has to
specify in the input file the scale at which $\alpha$ and $\beta$
are understood to be given when the $\overline{\text{MS}}$ renormalization scheme is
chosen. The one-loop corrected decay widths that contain these CTs
then additionally depend on the renormalization scale of $\alpha$ and
$\beta$. The scale at which the decays are evaluated is also defined
by the user in the input file and should be chosen appropriately in
order to avoid the appearance of large logarithms in the EW one-loop
corrections. In case this scale differs from the scale of the
$\overline{\text{MS}}$ mixing angles
$\alpha$ and $\beta$, the automatic parameter conversion
routine converts $\alpha$ and $\beta$ to the scale of the
loop-corrected decay widths, as further described in
Sec.\,\ref{sec:ParameterConversion}. For the
conversion, the UV-divergent terms for the CTs $\delta \alpha$ and
$\delta \beta$ are needed, \textit{i.e.}\,the terms proportional to
$1/\varepsilon$. These UV-divergent terms are presented analytically
for both the standard and alternative tadpole scheme in
Ref.\,\cite{Altenkamp:2017ldc}. We cross-checked these terms
analytically in an independent calculation. \\
\textbf{KOSY scheme.} The KOSY scheme (denoted by the authors' initials)
was suggested in \cite{Kanemura:2004mg}. It combines the standard
tadpole scheme with the definition of the counterterms
through off-diagonal wave function renormalization constants.
As shown in \cite{Krause:2016gkg,Krause:2016oke}, the KOSY scheme not
only implies a gauge-dependent definition of the mixing angle CTs but also
leads to explicitly gauge-dependent decay amplitudes. The CTs are
derived by temporarily switching from the mass to the gauge
basis. Since $\beta$ diagonalizes both the charged and the CP-odd
sector, not all scalar fields can be defined OS at the same time,
unless a systematic modification of the $SU(2)$
relations is performed, which we do not do here.
We implemented two different CT definitions where $\delta \beta$ is defined
through the CP-odd or the charged sectors, indicated by superscripts $o$
and $c$, respectively. The KOSY scheme is implemented in
{\texttt{2HDECAY}} both in the standard and in the alternative FJ
scheme as a benchmark for comparison with other schemes, but
for actual computations, we do not recommend using it due to the
explicit gauge dependence of the decay amplitudes. In the KOSY
scheme, the mixing angle CTs are defined as
\begin{mdframed}[frametitle={Renormalization of $\delta \alpha$ and $\delta \beta$: KOSY scheme (standard scheme)},frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]\begin{align}
\delta \alpha &= \frac{1}{2(m_H^2 - m_h^2)} \text{Re} \left[ \Sigma _{Hh} (m_H^2) + \Sigma _{Hh} (m_h^2) - 2\delta T_{Hh} \right] \\
\delta \beta ^o &= -\frac{1}{2m_A^2} \text{Re} \left[ \Sigma _{G^0A} (m_A^2) + \Sigma _{G^0A} (0) - 2\delta T_{G^0A} \right] \\
\delta \beta ^c &= -\frac{1}{2m_{H^\pm}^2} \text{Re} \left[ \Sigma _{G^\pm H^\pm} (m_{H^\pm}^2) + \Sigma _{G^\pm H^\pm} (0) - 2\delta T_{G^\pm H^\pm} \right]
\end{align}\end{mdframed}
\begin{mdframed}[frametitle={Renormalization of $\delta \alpha$ and $\delta \beta$: KOSY scheme (alternative FJ scheme)},frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]\begin{align}
\delta \alpha &= \frac{1}{2(m_H^2 - m_h^2)} \text{Re} \left[ \Sigma ^\text{tad} _{Hh} (m_H^2) + \Sigma ^\text{tad} _{Hh} (m_h^2) \right] \\
\delta \beta ^o &= -\frac{1}{2m_A^2} \text{Re} \left[ \Sigma ^\text{tad} _{G^0A} (m_A^2) + \Sigma ^\text{tad} _{G^0A} (0) \right] \\
\delta \beta ^c &= -\frac{1}{2m_{H^\pm}^2} \text{Re} \left[ \Sigma ^\text{tad} _{G^\pm H^\pm} (m_{H^\pm}^2) + \Sigma ^\text{tad} _{G^\pm H^\pm} (0) \right]
\end{align}\end{mdframed}
\vspace*{0.14cm}
\textbf{$p_{*}$-pinched scheme.} One possibility to avoid
gauge-parameter-dependent mixing angle CTs was
suggested in \cite{Krause:2016gkg, Krause:2016oke}. The main idea is
to maintain the OS-based definition of $\delta \alpha$ and $\delta
\beta$ of the KOSY scheme, but instead of using the usual
gauge-dependent off-diagonal WFRCs, the WFRCs are defined through
pinched self-energies in the alternative FJ scheme by applying the pinch
technique (PT) \cite{Binosi:2004qe, Binosi:2009qm, Cornwall:1989gv,
Papavassiliou:1989zd, Degrassi:1992ue, Papavassiliou:1994pr,
Watson:1994tn, Papavassiliou:1995fq}. As worked out for the
2HDM for the first time in \cite{Krause:2016gkg, Krause:2016oke}, the
pinched scalar self-energies are equivalent to the usual scalar
self-energies in the alternative FJ scheme, evaluated in Feynman-'t
Hooft gauge ($\xi =1$), up to additional UV-finite self-energy contributions
$\Sigma ^\text{add} _{ij} (p^2)$. The mixing angle
CTs depend on the scale where the pinched self-energies are
evaluated. In the $p_{*}$-pinched scheme, we follow the approach of
\cite{Espinosa:2002cd} in the MSSM, where the self-energies $\Sigma
^\text{tad} _{ij} (p^2)$ are evaluated at the scale
\begin{equation}
p_{*}^2 \equiv \frac{m_i^2 + m_j^2}{2} ~.
\end{equation}
At this scale, the additional contributions $\Sigma ^\text{add} _{ij}
(p^2)$ vanish. Using the $p_{*}$-pinched scheme at
one-loop level yields explicitly gauge-parameter-independent partial
decay widths. The mixing angle CTs are defined as
\begin{mdframed}[frametitle={Renormalization of $\delta \alpha$ and $\delta \beta$: $p_{*}$-pinched scheme (alternative FJ scheme)},frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]\begin{align}
\delta \alpha &= \frac{1}{m_H^2 - m_h^2} \text{Re} \left[ \Sigma ^\text{tad} _{Hh} \left( \frac{m_H^2 + m_h^2}{2} \right) \right] _{\xi = 1} \\
\delta \beta ^o &= -\frac{1}{m_A^2} \text{Re} \left[ \Sigma ^\text{tad} _{G^0A} \left( \frac{m_A^2 }{2} \right) \right] _{\xi = 1} \\
\delta \beta ^c &= -\frac{1}{m_{H^\pm}^2} \text{Re} \left[ \Sigma ^\text{tad} _{G^\pm H^\pm} \left( \frac{m_{H^\pm}^2 }{2} \right) \right] _{\xi = 1}
\end{align}\end{mdframed}
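Given numerical routines for the tadpole self-energies, these CTs are
one-line evaluations; a hedged sketch with a user-supplied callable (a
hypothetical interface, not the {\texttt{2HDECAY}} internals):
\begin{verbatim}
def delta_alpha_pstar(sigma_tad_Hh, mHsq, mhsq):
    """p*-pinched delta alpha; sigma_tad_Hh(p2) must return
    Sigma^tad_Hh(p2) evaluated in Feynman-'t Hooft gauge."""
    pstar2 = 0.5 * (mHsq + mhsq)
    return sigma_tad_Hh(pstar2).real / (mHsq - mhsq)
\end{verbatim}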
\textbf{OS-pinched scheme.} In order to allow for the analysis of the effects
of different scale choices of the mixing angle CTs, we implemented
another OS-motivated scale choice, which is called the OS-pinched
scheme. Here, the additional terms do not vanish and are given by
\cite{Krause:2016gkg}
\begin{align}
\Sigma ^\textrm{add} _{H h } (p^2) &= \frac{\alpha _\text{em} m_Z^2 s_{\beta - \alpha} c_{\beta - \alpha} }{8 \pi m_W^2 \left( 1 - \frac{m_W^2}{m_Z^2} \right) } \left( p^2 - \frac{ m_{H}^2 + m_{h}^2}{2} \right) \bigg\{ \left[ B_0( p^2; m_Z^2, m_{A}^2) - B_0( p^2; m_Z^2, m_{Z}^2) \right] \nonumber \\
&\hspace*{0.4cm} + 2\frac{m_W^2}{m_Z^2} \left[ B_0( p^2; m_W^2, m_{H^\pm }^2) - B_0( p^2; m_W^2, m_{W}^2) \right] \bigg\} \label{eq:RenormalizationScalarAnglesAdditionalTermCPEven} \\
\Sigma ^\textrm{add} _{G^0 A } (p^2) &= \frac{\alpha _\text{em} m_Z^2 s_{\beta - \alpha} c_{\beta - \alpha}}{8 \pi m_W^2 \left( 1 - \frac{m_W^2}{m_Z^2} \right) } \left( p^2 - \frac{m_{A}^2}{2} \right) \left[ B_0( p^2; m_Z^2, m_{H}^2) - B_0( p^2; m_Z^2, m_{h}^2) \right] \label{eq:RenormalizationScalarAnglesAdditionalTermCPOdd} \\
\Sigma ^\textrm{add} _{G^\pm H^\pm } (p^2) &= \frac{\alpha _\text{em} s_{\beta - \alpha} c_{\beta - \alpha}}{4 \pi \left( 1 - \frac{m_W^2}{m_Z^2} \right) } \left( p^2 - \frac{m_{H^\pm }^2}{2} \right) \left[ B_0( p^2; m_W^2, m_{H}^2) - B_0( p^2; m_W^2, m_{h}^2) \right] ~. \label{eq:RenormalizationScalarAnglesAdditionalTermCharged}
\end{align}
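For reference, the CP-odd term of
Eq.~(\ref{eq:RenormalizationScalarAnglesAdditionalTermCPOdd}) in an
illustrative sketch; the scalar two-point function $B_0$ has to be
supplied by a loop library (hypothetical interface):
\begin{verbatim}
import math

def sigma_add_G0A(p2, alpha_em, mWsq, mZsq, mHsq, mhsq, mAsq,
                  sba, cba, B0):
    """Additional UV-finite contribution to the G0-A self-energy;
    B0(p2, m1sq, m2sq) is the scalar Passarino-Veltman function,
    sba/cba are sin/cos(beta - alpha)."""
    pref = (alpha_em * mZsq * sba * cba
            / (8 * math.pi * mWsq * (1 - mWsq / mZsq)))
    return (pref * (p2 - 0.5 * mAsq)
            * (B0(p2, mZsq, mHsq) - B0(p2, mZsq, mhsq)))
\end{verbatim}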
The mixing angle CTs in the OS-pinched scheme are then defined as
\begin{mdframed}[frametitle={Renormalization of $\delta \alpha$ and $\delta \beta$: OS-pinched scheme (alternative FJ scheme)},frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]\begin{align}
\delta \alpha &= \frac{\textrm{Re} \Big[ \left[ \Sigma ^\textrm{tad} _{Hh} (m_{H}^2) + \Sigma ^\textrm{tad} _{Hh} (m_{h}^2) \right] _{\xi = 1} + \Sigma ^\textrm{add} _{Hh} (m_{H}^2) + \Sigma ^\textrm{add} _{Hh} (m_{h}^2) \Big]}{2\left( m_{H}^2 - m_{h}^2\right) } \label{eq:RenormalizationScalarAnglesDeltaAlphaOSPinchedResult} \\
\delta \beta ^o &= -\frac{\textrm{Re} \Big[ \left[ \Sigma ^\textrm{tad} _{G^0A} (m_{A}^2) + \Sigma ^\textrm{tad} _{G^0A} (0) \right] _{\xi = 1} + \Sigma ^\textrm{add} _{G^0A} (m_{A}^2) + \Sigma ^\textrm{add} _{G^0A} (0) \Big]}{2 m_{A}^2} \label{eq:RenormalizationScalarAnglesDeltaBeta1OSPinchedResult} \\
\delta \beta ^c &= -\frac{\textrm{Re} \Big[ \left[ \Sigma ^\textrm{tad} _{G^\pm H^\pm } (m_{H^\pm }^2) + \Sigma ^\textrm{tad} _{G^\pm H^\pm } (0) \right] _{\xi = 1} + \Sigma ^\textrm{add} _{G^\pm H^\pm } (m_{H^\pm }^2) + \Sigma ^\textrm{add} _{G^\pm H^\pm } (0) \Big]}{2 m_{H^\pm }^2} \label{eq:RenormalizationScalarAnglesDeltaBeta2OSPinchedResult}
\end{align}\end{mdframed}
\textbf{Process-dependent schemes.} The definition of the mixing angle
CTs through observables, {\it e.g.}~partial decay widths of Higgs
bosons, was proposed for the MSSM in \cite{Coarasa:1996qa,
Freitas:2002um} and for the 2HDM in \cite{Santos:1996hs}. This
scheme leads to explicitly gauge-independent partial
decay widths by construction. Moreover, the connection of the mixing angle CTs with
physical observables allows for a more physical interpretation of the
unphysical mixing angles $\alpha $ and $\beta$. However, as
it was shown in \cite{Krause:2016gkg, Krause:2016oke},
process-dependent schemes can in general lead to very large one-loop
corrections. We implemented three different process-dependent schemes
for $\delta \alpha$ and $\delta \beta$ in
{\texttt{2HDECAY}}. The schemes differ in the processes that are used
for the definition of the CTs. In all cases we have chosen leptonic
Higgs boson decays. For these, the QED corrections can be
separated in a UV-finite way from the rest of the EW corrections and therefore be
excluded from the counterterm
definition. This is necessary to avoid
the appearance of infrared (IR) divergences in the CTs \cite{Freitas:2002um}. The
NLO partial decay width of the leptonic decay
of a Higgs particle $\phi _i$ into a pair of leptons $f_j$, $f_k$ can
then be cast into the form
\begin{equation}
\Gamma _{\phi _i f_j f_k}^{\text{NLO,weak}} = \Gamma _{\phi _i f_j f_k}^{\text{LO}} \left( 1 + 2\text{Re} \left[ \mathcal{F}_{\phi _i f_j f_k}^\text{VC} + \mathcal{F}_{\phi _i f_j f_k}^\text{CT} \right] \right)
\end{equation}
where $\mathcal{F}_{\phi _i f_j f_k}^\text{VC}$ and $\mathcal{F}_{\phi
_i f_j f_k}^\text{CT}$ are the form factors of the vertex
corrections and the CT, respectively, and the superscript weak
indicates that in the vertex corrections IR-divergent QED
contributions are excluded. The form factor $\mathcal{F}_{\phi _i f_j
f_k}^\text{CT}$ contains either $\delta \alpha$ or $\delta \beta$ or
both simultaneously as well as other CTs that are fixed as described
in the other subsections of
Sec.\,\ref{sec:renormalization2HDM}. Employing the renormalization
condition
\begin{equation}
\Gamma ^\text{LO} _{\phi _i f_j f_k} \equiv \Gamma ^\text{NLO,weak} _{\phi _i f_j f_k}
\end{equation}
for two different decays then allows for a process-dependent
definition of the mixing angle CTs. For more details on the
calculation of the CTs in process-dependent schemes in the 2HDM, we
refer to \cite{Krause:2016gkg, Krause:2016oke}. In {\texttt{2HDECAY}},
we have chosen the following three different combinations of processes
as definition for the CTs,
\begin{enumerate}
\item $\delta \beta$ is first defined by $A \rightarrow \tau ^+ \tau ^-$ and $\delta \alpha$ is subsequently defined by $H \rightarrow \tau ^+ \tau ^-$.
\item $\delta \beta$ is first defined by $A \rightarrow \tau ^+ \tau ^-$ and $\delta \alpha$ is subsequently defined by $h \rightarrow \tau ^+ \tau ^-$.
\item $\delta \beta$ and $\delta \alpha$ are simultaneously defined by $H \rightarrow \tau ^+ \tau ^-$ and $h \rightarrow \tau ^+ \tau ^-$.
\end{enumerate}
Employing these renormalization conditions yields the following
definitions of the mixing angle CTs\footnote{While the definition of
the CTs is generically the same for both tadpole schemes, their actual analytic
forms differ in both schemes since some of the CTs used in the
definition differ in the two schemes as well. However, when choosing a process-dependent scheme for the mixing angle CTs, the full partial decay width is independent of the chosen tadpole scheme, which we have checked explicitly. Therefore, in {\texttt{2HDECAY}} we implemented the process-dependent schemes only in the alternative tadpole scheme.}:
\begin{mdframed}[frametitle={Renormalization of $\delta \alpha$ and $\delta \beta$: process-dependent scheme 1 (both schemes)},frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]\begin{align}
\delta \alpha &= \frac{- Y_5}{Y_4} \bigg[ \mathcal{F}^\textrm{VC}_{H \tau \tau } + \frac{\delta g}{g} + \frac{\delta m_\tau }{m_\tau } - \frac{\delta m_W^2}{2m_W^2} + Y_6 \delta \beta + \frac{\delta Z_{HH}}{2} + \frac{Y_4}{Y_5} \frac{\delta Z_{hH}}{2} + \frac{\delta Z^\textrm{L} _{\tau \tau}}{2} \\
&\hspace*{1.4cm} + \frac{\delta Z^\textrm{R} _{\tau \tau}}{2} \bigg] \nonumber \\
\delta \beta &= \frac{- Y_6}{1+Y_6^2} \bigg[ \mathcal{F}^\textrm{VC}_{A \tau \tau } + \frac{\delta g}{g} + \frac{\delta m_\tau }{m_\tau } - \frac{\delta m_W^2}{2m_W^2} + \frac{\delta Z_{AA}}{2} - \frac{1}{Y_6} \frac{\delta Z_{G^0A}}{2} + \frac{\delta Z^\textrm{L} _{\tau \tau}}{2} + \frac{\delta Z^\textrm{R} _{\tau \tau}}{2} \bigg]
\end{align}\end{mdframed}
\begin{mdframed}[frametitle={Renormalization of $\delta \alpha$ and $\delta \beta$: process-dependent scheme 2 (both schemes)},frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]\begin{align}
\delta \alpha &= \frac{Y_4}{Y_5} \bigg[ \mathcal{F}^\textrm{VC}_{h \tau \tau } + \frac{\delta g}{g} + \frac{\delta m_\tau }{m_\tau } - \frac{\delta m_W^2}{2m_W^2} + Y_6 \delta \beta + \frac{\delta Z_{hh}}{2} + \frac{Y_5}{Y_4} \frac{\delta Z_{Hh}}{2} + \frac{\delta Z^\textrm{L} _{\tau \tau}}{2} \\
&\hspace*{1.1cm} + \frac{\delta Z^\textrm{R} _{\tau \tau}}{2} \bigg] \nonumber \\
\delta \beta &= \frac{- Y_6}{1+Y_6^2} \bigg[ \mathcal{F}^\textrm{VC}_{A \tau \tau } + \frac{\delta g}{g} + \frac{\delta m_\tau }{m_\tau } - \frac{\delta m_W^2}{2m_W^2} + \frac{\delta Z_{AA}}{2} - \frac{1}{Y_6} \frac{\delta Z_{G^0A}}{2} + \frac{\delta Z^\textrm{L} _{\tau \tau}}{2} + \frac{\delta Z^\textrm{R} _{\tau \tau}}{2} \bigg]
\end{align}\end{mdframed}
\begin{mdframed}[frametitle={Renormalization of $\delta \alpha$ and $\delta \beta$: process-dependent scheme 3 (both schemes)},frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]\begin{align}
\delta \alpha &= \frac{Y_4 Y_5}{Y_4^2 + Y_5^2} \bigg[ \mathcal{F}^\textrm{VC}_{h \tau \tau } - \mathcal{F}^\textrm{VC}_{H \tau \tau } + \frac{\delta Z_{hh}}{2} - \frac{\delta Z_{HH}}{2} + \frac{Y_5}{Y_4} \frac{\delta Z_{Hh}}{2} - \frac{Y_4}{Y_5} \frac{\delta Z_{hH}}{2} \bigg] \\
\delta \beta &= \frac{- 1}{Y_6(Y_4^2+Y_5^2)} \bigg[ (Y_4^2 + Y_5^2) \left( \frac{\delta g}{g} + \frac{\delta m_\tau }{m_\tau } - \frac{\delta m_W^2}{2m_W^2} + \frac{\delta Z^\textrm{L} _{\tau \tau}}{2} + \frac{\delta Z^\textrm{R} _{\tau \tau}}{2} \right) \\
&\hspace*{0.4cm}+ Y_4Y_5 \left( \frac{\delta Z_{Hh}}{2} + \frac{\delta Z_{hH}}{2} \right) + Y_4^2 \left( \frac{\delta Z_{hh}}{2} + \mathcal{F}^\textrm{VC}_{h \tau \tau } \right) + Y_5^2 \left( \frac{\delta Z_{HH}}{2} + \mathcal{F}^\textrm{VC}_{H \tau \tau } \right) \bigg] \nonumber
\end{align}\end{mdframed}
Note that the process-dependent schemes require decays that are
experimentally accessible. For certain parameter configurations this
may not be the case, and the user then has to choose, if possible, a
decay combination that leads to decay widths large enough to be
measurable. \\
\textbf{Physical (on-shell) schemes.} In order to exploit the advantages of process-dependent schemes, \textit{i.e.}\,gauge independence of the mixing angle CTs that are defined within these schemes, while simultaneously avoiding possible drawbacks, \textit{e.g.}\,potentially large NLO corrections, the mixing angle CTs can be defined through certain observables or combinations of $S$ matrix elements in such a way that the CTs of all other parameters of the theory do not contribute to the mixing angle CTs. Such a scheme was proposed for the quark mixing within the SM in \cite{Denner:2004bm} and for the mixing angle CTs in the 2HDM in \cite{Denner2018}, where the derivation of the scheme is presented in detail. Here, we only recapitulate the key ideas and state the relevant formulae. For the sole purpose of renormalizing the mixing angles, two right-handed fermion singlets $\nu _{1\text{R}}$ and $\nu _{2\text{R}}$ are added to the 2HDM Lagrangian. An additional discrete $\mathbb{Z}_2$ symmetry is imposed under which the singlets transform as
\begin{align}
\nu _{1\text{R}} &\longrightarrow - \nu _{1\text{R}} \\
\nu _{2\text{R}} &\longrightarrow \nu _{2\text{R}}
\end{align}
which prevents lepton generation mixing. The two singlets are coupled via Yukawa couplings $y_{\nu _1}$ and $y_{\nu _2}$ to two arbitrary left-handed lepton doublets of the 2HDM, giving rise to two massive Dirac neutrinos $\nu _1$ and $\nu _2$. The CT of the mixing angle $\alpha$ can then be defined by demanding that the ratio of the decay amplitudes of the decays $H \rightarrow \nu _i \bar{\nu} _i$ and $h \rightarrow \nu _i \bar{\nu} _i$ (for either $i=1$ or $i=2$) is the same at tree level and at NLO. Taking the ratio of the decay amplitudes has the advantage that other CTs apart from some WFRCs and the mixing angle CTs cancel against each other. For the CT of the mixing angle $\beta$, analogous conditions are imposed, involving additionally the decay of the pseudoscalar Higgs boson $A$ into the pair of massive neutrinos in the ratios of the LO and NLO decay amplitudes. In all cases, the mixing angle CTs are then given as functions of the scalar WFRCs as well as the genuine one-loop vertex corrections to the decays of the scalar particles into the pair of massive neutrinos, namely $\delta _{H\nu _i \bar{\nu} _i}$, $\delta _{h\nu _i \bar{\nu} _i}$ and $\delta _{A\nu _i \bar{\nu} _i}$, as given in Ref.\,\cite{Denner2018}. In this reference, three combinations of ratios of decay amplitudes were chosen to define three different renormalization schemes for the mixing angle CTs in the physical (on-shell) scheme:
\begin{itemize}
\item ``OS1'' scheme: $\mathcal{A} _{H_1 \rightarrow \nu _1 \bar{\nu }_1} / \mathcal{A} _{H_2 \rightarrow \nu _1 \bar{\nu }_1} $ for $\delta \alpha$ and $\mathcal{A} _{A \rightarrow \nu _1 \bar{\nu }_1} / \mathcal{A} _{H_1 \rightarrow \nu _1 \bar{\nu }_1} $ for $\delta \beta$
\item ``OS2'' scheme: $\mathcal{A} _{H_1 \rightarrow \nu _2 \bar{\nu }_2} / \mathcal{A} _{H_2 \rightarrow \nu _2 \bar{\nu }_2} $ for $\delta \alpha$ and $\mathcal{A} _{A \rightarrow \nu _2 \bar{\nu }_2} / \mathcal{A} _{H_1 \rightarrow \nu _2 \bar{\nu }_2} $ for $\delta \beta$
\item ``OS12'' scheme: $\mathcal{A} _{H_1 \rightarrow \nu _2 \bar{\nu }_2} / \mathcal{A} _{H_2 \rightarrow \nu _2 \bar{\nu }_2} $ for $\delta \alpha$ and a specific combination of all possible decay amplitudes $\mathcal{A} _{H_i \rightarrow \nu _j \bar{\nu }_j}$ and $\mathcal{A} _{A \rightarrow \nu _j \bar{\nu }_j}$ ($i,j=1,2$) for $\delta \beta$ .
\end{itemize}
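Written out, each of these ratio conditions demands that the NLO ratio reproduce the LO one; schematically, for the condition of the ``OS1'' scheme fixing $\delta \alpha$,
\begin{equation}
\frac{\mathcal{A}^\text{LO} _{H_1 \rightarrow \nu _1 \bar{\nu }_1}}{\mathcal{A}^\text{LO} _{H_2 \rightarrow \nu _1 \bar{\nu }_1}} \overset{!}{=} \frac{\mathcal{A}^\text{NLO} _{H_1 \rightarrow \nu _1 \bar{\nu }_1}}{\mathcal{A}^\text{NLO} _{H_2 \rightarrow \nu _1 \bar{\nu }_1}} ~,
\end{equation}
and analogously for the remaining ratios. Solving these conditions for the mixing angle CTs leads to the expressions given below.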
All three of these schemes were implemented in
{\texttt{2HDECAY}}\footnote{As for the process-dependent schemes
before, the generic form of the CTs is valid for both the standard
and alternative tadpole scheme, while the actual analytic
expressions differ between the schemes. Since the full partial decay
width is again independent of the tadpole scheme when using the
physical (on-shell) scheme, we implemented these schemes in the
alternative tadpole scheme only.}\textsuperscript{,}\footnote{Note
that the CTs of the physical on-shell schemes are defined in
\cite{Denner2018} in the framework of the complex mass scheme
\cite{Denner:2005fg, Denner:2006ic} while in {\texttt{2HDECAY}},
we take the real parts of the self-energies through which these
CTs are defined. These different definitions can lead to
different finite parts in the one-loop partial decay
widths. These differences are formally of
next-to-next-to-leading order.} according to the following definitions of the mixing angle CTs:
\begin{mdframed}[frametitle={ Renormalization of $\delta \alpha$ and $\delta \beta$: physical (on-shell) scheme OS1 (both schemes) },frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]\begin{align}
\delta \alpha &= s_\alpha c_\alpha \left( \delta _{H \nu _1 \bar{\nu} _1} - \delta _{h \nu _1 \bar{\nu} _1} \right) + s_\alpha c_\alpha \frac{\delta Z _{HH} - \delta Z_{hh}}{2} + \frac{c_\alpha ^2 \delta Z_{Hh} - s_\alpha ^2 \delta Z_{hH}}{2} \\
\delta \beta &= t_\beta \Bigg[ c_\alpha ^2 \delta _{H\nu _1 \bar{\nu} _1} + s_\alpha ^2 \delta _{h\nu _1 \bar{\nu} _1} - \delta _{A \nu _1 \bar{\nu} _1} + \frac{ c_\alpha ^2 \delta Z_{HH} + s_\alpha ^2 \delta Z_{hh} -\delta Z_{AA}}{2} \\
&\hspace*{1.24cm} - s_\alpha c_\alpha \frac{\delta Z_{Hh} + \delta Z_{hH}}{2} \Bigg] + \frac{\delta Z_{G^0A}}{2} \nonumber
\end{align}\end{mdframed}
\begin{mdframed}[frametitle={ Renormalization of $\delta \alpha$ and $\delta \beta$: physical (on-shell) scheme OS2 (both schemes) },frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]\begin{align}
\delta \alpha &= s_\alpha c_\alpha \left( \delta _{h \nu _2 \bar{\nu} _2} - \delta _{H \nu _2 \bar{\nu} _2} \right) + s_\alpha c_\alpha \frac{\delta Z _{hh} - \delta Z_{HH}}{2} + \frac{s_\alpha ^2 \delta Z_{Hh} - c_\alpha ^2 \delta Z_{hH}}{2} \\
\delta \beta &= \frac{1}{t_\beta } \Bigg[ \delta _{A \nu _2 \bar{\nu} _2} - s_\alpha ^2 \delta _{H\nu _2 \bar{\nu} _2} - c_\alpha ^2 \delta _{h\nu _2 \bar{\nu} _2} + \frac{ \delta Z_{AA} - s_\alpha ^2 \delta Z_{HH} - c_\alpha ^2 \delta Z_{hh} }{2} \\
&\hspace*{1.24cm} - s_\alpha c_\alpha \frac{\delta Z_{Hh} + \delta Z_{hH}}{2} \Bigg] + \frac{\delta Z_{G^0A}}{2} \nonumber
\end{align}\end{mdframed}
\begin{mdframed}[frametitle={ Renormalization of $\delta \alpha$ and $\delta \beta$: physical (on-shell) scheme OS12 (both schemes) },frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]\begin{align}
\delta \alpha &= s_\alpha c_\alpha \left( \delta _{h \nu _2 \bar{\nu} _2} - \delta _{H \nu _2 \bar{\nu} _2} \right) + s_\alpha c_\alpha \frac{\delta Z _{hh} - \delta Z_{HH}}{2} + \frac{s_\alpha ^2 \delta Z_{Hh} - c_\alpha ^2 \delta Z_{hH}}{2} \\
\delta \beta &= s_\beta c_\beta \left[ c_{2\alpha} \frac{\delta Z_{HH} - \delta Z_{hh}}{2} - s_{2\alpha} \frac{\delta Z_{Hh} + \delta Z_{hH}}{2} \right] + \frac{\delta Z_{G^0A}}{2} \\
&\hspace*{0.4cm} + s_\beta c_\beta \left[ \delta _{A\nu _2 \bar{\nu } _2} - \delta _{A\nu _1 \bar{\nu } _1} + c_\alpha ^2 \delta _{H\nu _1 \bar{\nu } _1} - s_\alpha ^2 \delta _{H\nu _2 \bar{\nu } _2} + s_\alpha ^2 \delta _{h\nu _1 \bar{\nu } _1} - c_\alpha ^2 \delta _{h\nu _2 \bar{\nu } _2} \right] \nonumber
\end{align}\end{mdframed}
For $y _{\nu _i} \rightarrow 0$ ($i=1,2$), the two Dirac neutrinos become massless again, the right-handed neutrino singlets decouple and the original 2HDM Lagrangian is recovered. The vertex corrections $\delta _{H\nu _i \bar{\nu} _i}$, $\delta _{h\nu _i \bar{\nu} _i}$ and $\delta _{A\nu _i \bar{\nu} _i}$ are non-vanishing in this limit, however, so that the mixing angle CTs can still be defined through these processes. The mixing angle CTs defined in these physical (on-shell) schemes are manifestly gauge-independent. \\
\textbf{Rigid symmetry scheme.} The renormalization of mixing matrix elements, \textit{e.g.}\,of $\alpha$ and $\beta$ in the scalar sector of the 2HDM, can be connected to the renormalization of the WFRCs by using the rigid symmetry of the Lagrangian. More specifically, it is possible to renormalize the fields and dimensionless parameters of the unbroken gauge theory and to transfer the renormalization of the mixing matrix elements of \textit{e.g.}\,the scalar sector through a field rotation from the symmetric to the broken phase of the theory. Such a scheme was applied to the renormalization of the SM in \cite{Bohm:1986rj}. In \cite{Denner2018}, the scheme was applied to the scalar mixing angles of the 2HDM within the framework of the background field method (BFM) \cite{Zuber:1975sa,Zuber:1975gh,Boulware:1981,Abbott:1981,Abbott:1982,Hart:1983,Denner:1994xt}, which allows one to express the mixing angle CTs as functions of the WFRCs $\delta Z_{\hat{H} \hat{h}}$ and $\delta Z_{\hat{h} \hat{H}}$ in the alternative tadpole scheme, where the hat denotes fields in the BFM framework. These WFRCs differ from the ones used in the non-BFM framework, \textit{i.e.}\,$\delta Z_{Hh}$ and $\delta Z_{hH}$ as given by Eqs.\,(\ref{RenormalizationScalarFieldsMassesExplicitFormWaveFunctionRenormalizationConstantH0h0Alt}) and (\ref{RenormalizationScalarFieldsMassesExplicitFormWaveFunctionRenormalizationConstanth0H0Alt}), by an additional term, presented in App.\,B of \cite{Denner2018}, which coincides with the additional term given in \eqref{eq:RenormalizationScalarAnglesAdditionalTermCPEven} derived by means of the PT. The scalar self-energies entering the definition of the mixing angle CTs are evaluated in a specific gauge, namely the Feynman-'t Hooft gauge ($\xi = 1$). This leads to the following definition of the CTs according to \cite{Denner2018}, which is implemented in {\texttt{2HDECAY}},
\begin{mdframed}[frametitle={ Renormalization of $\delta \alpha$ and $\delta \beta$: BFMS scheme (alternative FJ scheme) },frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]\begin{align}
\delta \alpha &= \frac{\textrm{Re} \Big[ \left[ \Sigma ^\textrm{tad} _{Hh} (m_{H}^2) + \Sigma ^\textrm{tad} _{Hh} (m_{h}^2) \right] _{\xi = 1} + \Sigma ^\textrm{add} _{Hh} (m_{H}^2) + \Sigma ^\textrm{add} _{Hh} (m_{h}^2) \Big]}{2\left( m_{H}^2 - m_{h}^2\right) } \\
\delta \beta &= \frac{s_{2\beta}}{s_{2\alpha}} \frac{\textrm{Re} \Big[ \left[ \Sigma ^\textrm{tad} _{Hh} (m_{h}^2) - \Sigma ^\textrm{tad} _{Hh} (m_{H}^2) \right] _{\xi = 1} + \Sigma ^\textrm{add} _{Hh} (m_{h}^2) - \Sigma ^\textrm{add} _{Hh} (m_{H}^2) \Big]}{2\left( m_{H}^2 - m_{h}^2\right) } \\
&\hspace*{0.4cm} + \frac{e}{2m_W \sqrt{ 1 - \frac{m_W^2}{m_Z^2} } } \left[ s_{\beta - \alpha} \frac{\delta T_H}{m_H^2} - c_{\beta - \alpha} \frac{\delta T_h}{m_h^2} \right] \nonumber
\end{align}\end{mdframed}
where we replaced the BFM WFRCs by the corresponding self-energies and additional terms. As noted in \cite{Denner2018}, the definition of $\delta \alpha$ in the BFMS scheme coincides with the one in the OS-pinched scheme of \eqref{eq:RenormalizationScalarAnglesDeltaAlphaOSPinchedResult}, while the definition of $\delta \beta$ in the BFMS scheme differs from the one in the OS-pinched scheme.
\subsubsection{Renormalization of the Fermion Sector}
\label{sec:renormalizationFermionSector}
The masses $m_f$, where $f$ generically stands for any fermion of the
2HDM, the CKM matrix elements $V_{ij}$ ($i,j=1,2,3$), the Yukawa coupling parameters
$Y_k$ ($k=1,...,6)$ and the fields of the fermion sector are replaced by the
renormalized quantities and the respective CTs and WFRCs as
\begin{align}
m_f ~ &\rightarrow ~ m_f + \delta m_f \\
V_{ij} ~&\rightarrow ~ V_{ij} + \delta V_{ij} \\
Y_k ~& \rightarrow ~ Y_k + \delta Y_k \\
f_i^L ~& \rightarrow ~ \left(\delta _{ij} + \frac{\delta Z_{ij}^{f,L} }{2} \right) f_j^L \\
f_i^R ~& \rightarrow ~ \left(\delta _{ij} + \frac{\delta Z_{ij}^{f,R} }{2} \right) f_j^R
\end{align}
where we use Einstein's summation convention in the last two lines. The
superscripts $L$ and $R$ denote the left- and right-chiral components
of the fermion fields, respectively. The Yukawa coupling parameters
$Y_i$ are not independent input parameters, but functions of $\alpha$ and
$\beta$, {\it cf.}~Tab.~\ref{tab:yukawaCouplings}. Their one-loop
counterterms are therefore given in terms of $\delta \alpha$
and $\delta \beta$ defined in
Sec.\,\ref{sec:renormalizationMixingAngles} by the following formulae
which are independent of the 2HDM type,
\begin{align}
\delta Y_1 &= Y_1 \left( -\frac{Y_2}{Y_1} \delta \alpha + Y_3 \delta \beta \right) \\
\delta Y_2 &= Y_2 \left( \frac{Y_1}{Y_2} \delta \alpha + Y_3 \delta \beta \right) \\
\delta Y_3 &= \left( 1+ Y_3 ^2 \right) \delta \beta \\
\delta Y_4 &= Y_4 \left( -\frac{Y_5}{Y_4} \delta \alpha + Y_6 \delta \beta \right) \\
\delta Y_5 &= Y_5 \left( \frac{Y_4}{Y_5} \delta \alpha + Y_6 \delta \beta \right) \\
\delta Y_6 &= \left( 1+ Y_6 ^2 \right) \delta \beta ~.
\end{align}
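These expressions are simply the chain rule,
\begin{equation}
\delta Y_k = \frac{\partial Y_k}{\partial \alpha} \, \delta \alpha + \frac{\partial Y_k}{\partial \beta} \, \delta \beta ~,
\end{equation}
evaluated for the explicit functional forms $Y_k (\alpha , \beta)$ of Tab.~\ref{tab:yukawaCouplings}.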
Before presenting the renormalization conditions of the
mass CTs and WFRCs, we briefly discuss the
renormalization of the CKM matrix. In \cite{Denner:1991kt} the
renormalization of the CKM matrix is connected to the renormalization
of the fields, which in turn are renormalized in an OS approach,
leading to the definition ($i,j,k=1,2,3$)
\begin{equation}
\delta V_{ij} = \frac{1}{4} \left[ \left( \delta Z ^{u,L} _{ik} - \delta Z ^{u,L \dagger} _{ik} \right) V_{kj} - V_{ik} \left( \delta Z ^{d,L} _{kj} - \delta Z ^{d,L \dagger} _{kj} \right) \right]
\label{eq:CKMCTdefinition}
\end{equation}
where the superscripts $u$ and $d$ denote up-type and down-type
quarks, respectively. However, when used in the calculation of EW
one-loop corrections, this definition of the CKM matrix CTs leads to
uncanceled explicit gauge dependences \cite{Gambino:1998ec,
Barroso:2000is, Kniehl:2000rb, Pilaftsis:2002nc, Yamada:2001px,
Diener:2001qt}. Since the CKM matrix is approximately the unit matrix
\cite{1674-1137-38-9-090001}, the numerical effect of this gauge
dependence on the partial decay widths is typically very small, but it
should nevertheless be avoided. In our work, we follow the approach of
Ref.~\cite{Yamada:2001px} and use pinched fermion self-energies for the
definition of the CKM matrix CT. An analytic analysis shows that this
is equivalent to defining the CTs in \eqref{eq:CKMCTdefinition} in
the Feynman-'t Hooft gauge.
Apart from the CKM matrix CT, all other CTs of the fermion sector are
defined through OS conditions. The resulting forms of the CTs are
analogous to the ones presented in \cite{Denner:1991kt} and given by
\begin{mdframed}[frametitle={Renormalization of the fermion sector (standard scheme)},frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]\begin{align}
\delta m_{f, i} &= \frac{m_{f,i}}{2} \text{Re} \left( \Sigma _{ii}^{f,L} (m_{f,i}^2) + \Sigma _{ii}^{f,R} (m_{f,i}^2) + 2\Sigma _{ii}^{f,S} (m_{f,i}^2) \right) \\
\delta Z^{f,L}_{ij} &= \frac{2}{m_{f,i}^2 - m_{f,j}^2} \text{Re} \bigg[ m_{f,j}^2 \Sigma _{ij}^{f,L} (m_{f,j}^2) + m_{f,i} m_{f,j} \Sigma _{ij} ^{f,R} (m_{f,j}^2) \\
&\hspace*{3.2cm} + (m_{f,i}^2 + m_{f,j}^2) \Sigma _{ij}^{f,S} (m_{f,j}^2) \bigg] ~~~~~~ (i\neq j) \nonumber \\
\delta Z^{f,R}_{ij} &= \frac{2}{m_{f,i}^2 - m_{f,j}^2} \text{Re} \bigg[ m_{f,j}^2 \Sigma _{ij}^{f,R} (m_{f,j}^2) + m_{f,i} m_{f,j} \Sigma _{ij} ^{f,L} (m_{f,j}^2) \\
&\hspace*{3.2cm} + 2 m_{f,i}m_{f,j} \Sigma _{ij}^{f,S} (m_{f,j}^2) \bigg] ~~~~~~ (i\neq j) \nonumber
\end{align}\end{mdframed}
\begin{mdframed}[frametitle={Renormalization of the fermion sector (alternative FJ scheme)},frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]\begin{align}
\delta m_{f, i} &= \frac{m_{f,i}}{2} \text{Re} \left( \Sigma _{ii}^{f,L} (m_{f,i}^2) + \Sigma _{ii}^{f,R} (m_{f,i}^2) + 2\Sigma _{ii}^{\text{tad},f,S} (m_{f,i}^2) \right) \\
\delta Z^{f,L}_{ij} &= \frac{2}{m_{f,i}^2 - m_{f,j}^2} \text{Re} \bigg[ m_{f,j}^2 \Sigma _{ij}^{f,L} (m_{f,j}^2) + m_{f,i} m_{f,j} \Sigma _{ij} ^{f,R} (m_{f,j}^2) \\
&\hspace*{3.2cm} + (m_{f,i}^2 + m_{f,j}^2) \Sigma _{ij}^{\text{tad},f,S} (m_{f,j}^2) \bigg] ~~~~~~ (i\neq j) \nonumber \\
\delta Z^{f,R}_{ij} &= \frac{2}{m_{f,i}^2 - m_{f,j}^2} \text{Re} \bigg[ m_{f,j}^2 \Sigma _{ij}^{f,R} (m_{f,j}^2) + m_{f,i} m_{f,j} \Sigma _{ij} ^{f,L} (m_{f,j}^2) \\
&\hspace*{3.2cm} + 2 m_{f,i}m_{f,j} \Sigma _{ij}^{\text{tad},f,S} (m_{f,j}^2) \bigg] ~~~~~~ (i\neq j) \nonumber \label{RenormalizationFermionSectorExplicitFormMassCountertermTauAlternativeTadpoleScheme}
\end{align}\end{mdframed}
\begin{mdframed}[frametitle={Renormalization of the fermion sector
(standard and alternative FJ scheme)},frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]\begin{align}
\delta V_{ij} &= \frac{1}{4} \left[ \left( \delta Z ^{u,L} _{ik} - \delta Z ^{u,L \dagger} _{ik} \right) V_{kj} - V_{ik} \left( \delta Z ^{d,L} _{kj} - \delta Z ^{d,L \dagger} _{kj} \right) \right] _{\xi = 1} \\
\delta Z^{f,L} _{ii} &= - \textrm{Re} \Big[ \Sigma ^{f,L} _{ii} (m_{f,i} ^2) \Big] - m_{f,i} ^2 \textrm{Re} \left[ \frac{\partial \Sigma ^{f,L} _{ii} (p ^2)}{\partial p^2} + \frac{\partial \Sigma ^{f,R} _{ii} (p ^2)}{\partial p^2} + 2\frac{\partial \Sigma ^{f,S} _{ii} (p ^2)}{\partial p^2} \right] _{p^2 = m_{f,i} ^2} \raisetag{2.2\baselineskip} \\
\delta Z^{f,R} _{ii} &= - \textrm{Re} \Big[ \Sigma ^{f,R} _{ii} (m_{f,i} ^2) \Big] - m_{f,i} ^2 \textrm{Re} \left[ \frac{\partial \Sigma ^{f,L} _{ii} (p ^2)}{\partial p^2} + \frac{\partial \Sigma ^{f,R} _{ii} (p ^2)}{\partial p^2} + 2\frac{\partial \Sigma ^{f,S} _{ii} (p ^2)}{\partial p^2} \right] _{p^2 = m_{f,i} ^2} \raisetag{2.2\baselineskip}
\end{align}\end{mdframed}
where as before, the superscripts $L$ and $R$ denote the left- and
right-chiral parts of the self-energies, while the superscript $S$
denotes the scalar part.
\subsubsection{Renormalization of the Soft-$\mathbb{Z}_2$-Breaking Parameter $m_{12}^2$}
\label{sec:renormalizationSoftm12Squared}
The last remaining parameter of the 2HDM that needs to be renormalized
is the soft-$\mathbb{Z}_2$-breaking parameter $m_{12}^2$. As before,
we replace the bare parameter by the renormalized one and its
corresponding CT,
\begin{equation}
m_{12}^2 ~\rightarrow ~ m_{12}^2 + \delta m_{12}^2 ~.
\end{equation}
In order to fix $\delta m_{12}^2$ in a physical way, one could use a
process-dependent scheme analogous to
Sec.\,\ref{sec:renormalizationMixingAngles} for the scalar mixing
angles. Since $m_{12}^2$ only appears in trilinear Higgs couplings, a
Higgs-to-Higgs decay width would have to be chosen as observable that
fixes the CT. However, as discussed in \cite{Krause:2016xku}, a
process-dependent definition of $\delta m_{12}^2$ can lead to very
large one-loop corrections in Higgs-to-Higgs decays. We therefore
employ an $\overline{\text{MS}}$ condition in {\texttt{2HDECAY}} to
fix the CT. This is done by calculating the
off-shell decay process $h \rightarrow hh$ at one-loop order and by
extracting all UV-divergent terms. This fixes the CT of $m_{12}^2$ to
\begin{mdframed}[frametitle={Renormalization of $m_{12}^2$ (standard
and alternative FJ scheme)},frametitlerule=true,frametitlebackgroundcolor=black!14,frametitlerulewidth=0.6pt,nobreak=true]
\begin{align}
\delta m_{12}^2 &= \frac{\alpha _\text{em} m_{12}^2 }{16\pi m_W^2 \left( 1 - \frac{m_W^2}{m_Z^2} \right)} \Big[ \frac{8m_{12}^2}{s_{2\beta }} - 2m_{H^\pm }^2 - m_A^2 + \frac{s_{2\alpha }}{s_{2\beta }} (m_H^2 - m_h^2) - 3(2m_W^2 + m_Z^2) \nonumber \\
&\hspace*{0.3cm} + \sum _u 3 m_u^2 \frac{1}{s_\beta ^2} - \sum _d 6 m_d^2 Y_3 \left( -Y_3 - \frac{1}{t_{2\beta}} \right) - \sum _l 2 m_l^2 Y_6 \left( -Y_6 - \frac{1}{t_{2\beta}} \right) \Big] \Delta \label{eq:renormalizationConditionm12Sq}
\end{align}\end{mdframed}
where the sum indices $u$, $d$ and $l$ indicate a summation over all
up-type and down-type quarks and charged leptons,
respectively, and
\begin{equation}
\Delta \equiv \frac{1}{\varepsilon } - \gamma _E + \ln (4\pi ) + \ln \left( \frac{\mu ^2}{\mu _R^2} \right) ~.
\end{equation}
Here, $\gamma _E$ is the Euler-Mascheroni
constant, $\varepsilon$ parametrizes the shift
from 4 physical to $D=4-2\varepsilon$ space-time dimensions in the framework of
dimensional regularization
\cite{Wilson1971,Wilson1972,Ashmore1972,Bollini:1972ui,THOOFT1972189}, and $\mu$ is the
't Hooft mass scale, which cancels in the calculation of
the decay amplitudes.
The result in \eqref{eq:renormalizationConditionm12Sq} is in
agreement with the formula presented in \cite{Kanemura:2015mxa}.
Since $m_{12}^2$ is $\overline{\text{MS}}$
renormalized, the user has to specify in the input file the scale at
which the parameter is understood to be
given.\footnote{All $\overline{\text{MS}}$ input
parameters are understood to be given at the same scale, so that
the input file contains only a single entry for the specification of
this scale.} Just as for the
$\overline{\text{MS}}$ renormalized mixing angles, the automatic
parameter conversion routine adapts $m_{12}^2$ to the scale at which
the EW one-loop corrected decay widths are evaluated in case the two
scales differ.
\subsection{Electroweak Decay Processes at LO and NLO}
\label{sec:decayProcessesAtLOandNLO}
Figure~\ref{fig:decayHiggsParticles} shows the topologies that
contribute to the tree-level and one-loop corrected decay of a scalar
particle $\phi$ with four-momentum $p_1$ into two other particles
$X _1$ and $X _2$ with four-momenta $p_2$ and $p_3$,
respectively. We emphasize that for the EW corrections, we restrict ourselves to
OS decays, {\it i.e.}~we demand
\begin{equation}
m_1 \ge m_2 + m_3
\end{equation}
where $m_i$ with $p_i^2 = m_i^2$ ($i=1,2,3$) denote the masses of the
three particles. Moreover, we do not calculate EW corrections to
loop-induced Higgs decays, which are of two-loop order. In particular,
we do not provide EW corrections to Higgs boson decays into two-gluon,
two-photon or $Z\gamma$ final states.
Note, however, that the decay widths implemented in
{\texttt{HDECAY}} include also loop-induced decay widths as well as
off-shell decays into heavy-quark,
massive gauge boson, neutral Higgs pair as well as Higgs and gauge boson final
states. We come back to this point in Sec.\,\ref{sec:connectionHDECAY}.
\begin{figure}[tb]
\centering
\includegraphics[width=12.5cm, trim=0cm 0cm 0cm 0.8cm, clip]{DecayAmplitudesLOandNLO.pdf}
\caption{Decay amplitudes at LO and NLO. The LO decay amplitude
$\mathcal{A}^\text{LO} _{\phi X _1 X _2}$ simply
consists of the trilinear coupling of the three particles $\phi$,
$X _1$ and $X _2$, while the one-loop amplitude is
given by the sum of the genuine vertex corrections
$\mathcal{A}^\text{VC} _{\phi X _1 X _2}$, indicated by
a grey blob, and the vertex counterterm $\mathcal{A}^\text{CT}
_{\phi X _1 X _2}$ which also includes all WFRCs
necessary to render the NLO amplitude UV-finite. We do not show
corrections on the external legs since in the decays we
consider, they vanish either due to OS renormalization
conditions or due to Slavnov-Taylor identities. In the case of
the alternative tadpole scheme, the vertex corrections
$\mathcal{A}^\text{VC} _{\phi X _1 X _2}$ also in
general contain tadpole diagrams.}
\label{fig:decayHiggsParticles}
\end{figure}
The LO and NLO partial decay widths were calculated by first
generating all Feynman diagrams and the corresponding amplitudes for
all decay modes that exist for the 2HDM, shown topologically in
\figref{fig:decayHiggsParticles}, with the help of the tool
{\texttt{FeynArts 3.9}} \cite{Hahn:2000kx}. To that end, we used the
2HDM model file that is implemented in {\texttt{FeynArts}}, but
modified the Yukawa couplings to implement all four 2HDM
types. Diagrams that account for NLO corrections on the external legs
were not calculated since for all decay modes that we considered, they
either vanish due to OS renormalization conditions or due to
Slavnov-Taylor identities. All amplitudes were then calculated
analytically with {\texttt{FeynCalc 8.2.0}} \cite{MERTIG1991345,
Shtabovenko:2016sxi}, together with all self-energy amplitudes
needed for the CTs. For the numerical evaluation of all loop integrals involved in the analytic expression of the one-loop amplitudes, {\texttt{2HDECAY}} links {\texttt{LoopTools 2.14}} \cite{HAHN1999153}.
The LO partial decay width is obtained from the LO amplitude
$\mathcal{A}^\text{LO} _{\phi X _1 X _2}$, while the NLO
amplitude is given by the sum of all amplitudes stemming from
the vertex correction and the necessary CTs as defined in
Sec.\,\ref{sec:renormalization2HDM},
\begin{equation}
\mathcal{A}^\text{1loop} _{\phi X _1 X _2} \equiv \mathcal{A}^\text{VC} _{\phi X _1 X _2} + \mathcal{A}^\text{CT} _{\phi X _1 X _2} ~.
\end{equation}
By introducing the K\"{a}ll\'en phase space function
\begin{equation}
\lambda (x,y,z) \equiv \sqrt{x^2 + y^2 + z^2 - 2xy - 2xz - 2yz}
\end{equation}
the LO and NLO partial decay widths can be cast into the form
\begin{align}
\Gamma ^\text{LO} _{\phi X _1 X _2} &= S \frac{\lambda (m_1^2 , m_2^2 , m_3^2 )}{16\pi m_1^3} \sum _\text{d.o.f.} \left| \mathcal{A} _{\phi X _1 X _2}^\text{LO} \right| ^2 \label{eq:decayWidthLO} \\
\Gamma ^\text{NLO} _{\phi X _1 X _2} &= \Gamma ^\text{LO} _{\phi X _1 X _2} + S \frac{\lambda (m_1^2 , m_2^2 , m_3^2 )}{8\pi m_1^3} \sum _\text{d.o.f.} \text{Re} \left[ \left( \mathcal{A} _{\phi X _1 X _2}^\text{LO} \right) ^{*} \mathcal{A} _{\phi X _1 X _2}^\text{1loop} \right] + \Gamma _{\phi X _1 X _2 + \gamma} \label{eq:decayWidthNLO}
\end{align}
where the symmetry factor $S$ accounts for identical particles in the
final state and the sum extends over all degrees of freedom of the
final-state particles, {\it i.e.}~over spins or polarizations. The partial
decay width $\Gamma _{\phi X _1 X _2 + \gamma}$ accounts for
real corrections that are necessary for removing IR divergences in all
decays that involve charged particles in the initial or final
state. For this, we implemented the results given in
\cite{Goodsell:2017pdq} for generic one-loop two-body partial decay
widths. Since the involved integrals are analytically
solvable for two-body decays \cite{Denner:1991kt},
the IR corrections that are implemented in {\texttt{2HDECAY}} are
given in analytic form as well and do not require numerical
integration. Additionally, since the implemented integrals account for
the full phase-space of the radiated photon, {\it i.e.} both the ``hard''
and ``soft'' parts, our results do not depend on arbitrary cuts in the
photon phase-space.
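For illustration, the following minimal {\texttt{Python}} sketch (not part of the {\texttt{2HDECAY}} code; the function names are ours) evaluates the LO width of \eqref{eq:decayWidthLO} for a given squared LO amplitude, already summed over the final-state degrees of freedom:
\begin{lstlisting}[language=Python,frame=single,backgroundcolor=\color{mygray},numbers=none]
import math

def kallen(x, y, z):
    # Kaellen phase-space function as defined above,
    # i.e. including the square root
    return math.sqrt(x**2 + y**2 + z**2 - 2.0*(x*y + x*z + y*z))

def gamma_lo(m1, m2, m3, amp2_summed, sym_factor=1.0):
    # LO two-body width S * lambda(m1^2,m2^2,m3^2)/(16 pi m1^3) * |A_LO|^2,
    # with |A_LO|^2 already summed over final-state spins/polarizations
    if m1 < m2 + m3:
        # decay kinematically forbidden: off-shell regime, handled by HDECAY
        return 0.0
    return (sym_factor * kallen(m1**2, m2**2, m3**2)
            / (16.0 * math.pi * m1**3) * amp2_summed)
\end{lstlisting}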
In the following, we present all decay channels for which the
EW corrections were calculated at one-loop order:
\begin{itemize}
\item $h/H/A \to f\bar{f}$ ~ ($f=u,d,c,s,t,b,e, \mu ,\tau $)
\item $h/H \to VV$ ~ ($V=W^\pm ,Z$)
\item $h/H \to VS$ ~ ($V=Z, W^\pm$, $S=A, H^\pm$)
\item $h/H \to SS$ ~ ($S = A, H^\pm$)
\item $H \to hh$
\item $H^\pm \to VS$ ~ ($V=W^\pm$, $S=h,H,A$)
\item $H^+ \to f\bar{f}$ ~ ($f=u,c,t, \nu _e , \nu _\mu ,
\nu _\tau $ , $\bar{f} = \bar{d}, \bar{s}, \bar{b}, e^+ ,
\mu ^+ , \tau ^+ $)
\item $A \to VS$ ~ ($V=Z,W^\pm$, $S=h,H,H^\pm$)
\end{itemize}
All analytic results of these decay processes are stored in
subdirectories of {\texttt{2HDECAY}}. For a consistent connection with
{\texttt{HDECAY}}, {\it cf.}\,also Sec.\,\ref{sec:connectionHDECAY}, not all
of these decay processes are used for the calculation of the decay
widths and branching ratios, however. Decays containing pairs of
first-generation fermions are neglected, {\it i.e.}\,in {\texttt{2HDECAY}},
the EW corrections of the following processes are not used
for the calculation of the partial decay widths and branching ratios:
$h/H/A \to f\bar{f}$ ($f=u,d,e $) and $H^+ \to f\bar{f}$
($f\bar{f}=u\bar{d}, \nu _e e^+ $). The reason is
that they are overwhelmed by the Dalitz decays $\Phi \to f\bar{f}^{(')}
\gamma$ ($\Phi=h,H,A,H^\pm$) that are induced {\it
e.g.}~by off-shell $\gamma^* \to f\bar{f}$ splitting.
\subsection{Link to HDECAY, Calculated Higher-Order Corrections and Caveats}
\label{sec:connectionHDECAY}
The EW one-loop corrections to the Higgs decays in the 2HDM derived in
this work are combined with
{\texttt{HDECAY}} version 6.52 \cite{DJOUADI199856,
Djouadi:2018xqq}\footnote{The program code for
{\texttt{HDECAY}} can be downloaded from the URL
\href{http://tiger.web.psi.ch/hdecay/}{http://tiger.web.psi.ch/hdecay/}.}
in the form of the new tool {\texttt{2HDECAY}}. The Fortran code {\texttt{HDECAY}}
provides the LO and QCD corrected decay widths.
As outlined in
Sec.\,\ref{sec:renormalizationGaugeSector} the EW corrections use
$\alpha _\text{em}$ at the $Z$ boson mass scale as input parameter
instead of $G_F$ as used in {\tt HDECAY}. For a
consistent combination of the EW corrected decay widths with the {\tt HDECAY}
implementation in the $G_F$ scheme we would have to convert between
the $\{\alpha_{\text{em}}, m_W, m_Z\}$ and the $\{ G_F, m_W, m_Z\}$
scheme including 2HDM higher-order corrections in the conversion
formulae. Since these conversion
formulae are not implemented yet, we chose a
pragmatic approximate solution:
In the configuration of {\texttt{2HDECAY}} with {\texttt{OMIT ELW2=0}}
being set ({\it cf.}~the input file format described in
Sec.\,\ref{sec:InputFileFormat}), the EW corrections to the decay
widths are calculated automatically. This setting
also overwrites the value that the user chooses for the input
{\texttt{2HDM}}. If {\it e.g.}~the user does not choose the 2HDM by
setting {\texttt{2HDM=0}} but at the same time chooses {\texttt{OMIT
ELW2=0}} in order
to calculate the EW corrections, then a warning is printed and
{\texttt{2HDM=1}} is automatically set internally. In
this configuration, the value of
$G_F$ given in the input file of {\texttt{2HDECAY}} is ignored by the
part of the program that calculates the EW
corrections. Instead, $\alpha _\text{em} (m_Z^2)$, given in line 26 of
the input file, is taken as independent input. This $\alpha _\text{em}
(m_Z^2)$ is used for the calculation of all electroweak
corrections. Subsequently, for the consistent combination with the
decay widths of {\texttt{HDECAY}} computed in terms of the Fermi
constant $G_F$, the latter decay widths are adapted to the input
scheme of the EW corrections by rescaling the {\texttt{HDECAY}} decay
widths with $G_F^\text{calc}/G_F$, where $G_F^\text{calc}$
is calculated by means of the tree-level relation
\eqref{eq:definitionFermiConstant} as a function of $\alpha_\text{em}
(m_Z^2)$. We expect the differences between the observables within
these two schemes to be small.
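To illustrate the rescaling, the following {\texttt{Python}} sketch computes $G_F^\text{calc}$, assuming that \eqref{eq:definitionFermiConstant} is the usual tree-level relation between the Fermi constant, $\alpha _\text{em}$ and the gauge boson masses; the function names are ours and not part of the code:
\begin{lstlisting}[language=Python,frame=single,backgroundcolor=\color{mygray},numbers=none]
import math

def gf_calc(alpha_em_mz, mw, mz):
    # tree-level relation G_F = pi*alpha_em / (sqrt(2)*mW^2*(1 - mW^2/mZ^2))
    sw2 = 1.0 - mw**2 / mz**2   # on-shell weak mixing angle
    return math.pi * alpha_em_mz / (math.sqrt(2.0) * mw**2 * sw2)

def rescale_width(gamma_hdecay, alpha_em_mz, mw, mz, gf_input):
    # adapt an HDECAY width, computed with G_F, to the
    # {alpha_em, mW, mZ} input scheme of the EW corrections
    return gamma_hdecay * gf_calc(alpha_em_mz, mw, mz) / gf_input
\end{lstlisting}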
On the other hand, if {\texttt{OMIT ELW2=1}} is set, no
EW corrections are computed and {\texttt{2HDECAY}}
reduces to the original program code {\texttt{HDECAY}}, including
(where applicable)
the QCD corrections in the decay widths, the off-shell decays and the
loop-induced decays. In this case, the value of
$G_F$ given in line 27 of the input file is used as input parameter
instead of being calculated through the input value of $\alpha
_\text{em} (m_Z^2)$, and no rescaling with $G_F^\text{calc}$ is
performed. We note in particular that the QCD-corrected
decay widths, printed out separately by {\texttt{2HDECAY}}, will
therefore differ between the two input options {\texttt{OMIT ELW2=0}} and {\texttt{OMIT ELW2=1}}.
Another comment is in order in view of the fact that we
implemented EW corrections to OS decays only, while
{\texttt{HDECAY}} also features the computation of
off-shell decays.
More specifically, {\texttt{HDECAY}} includes off-shell decays into
final states with an off-shell top-quark $t^*$, {\it i.e.}~$\phi \to t^*
\bar{t}$ ($\phi=h,H,A$), $H^+ \to t^* + \bar{d},\bar{s},\bar{b}$, into gauge and Higgs
boson final states with an off-shell gauge boson, $h/H \to Z^* A, A
\to Z^* h/H, \phi \to H^- W^{+*}, H^+ \to \phi W^{+*}$, and into
neutral Higgs pairs with one off-shell Higgs boson that is assumed to
predominantly decay into the $b\bar{b}$ final state, $h/H \to AA^*$,
$H \to hh^*$. The top quark total width within the 2HDM, required for the off-shell
decays with top final states, is calculated internally in {\tt HDECAY}.
In {\texttt{2HDECAY}}, we combine the EW
and QCD corrections in such a way that {\texttt{HDECAY}} still
computes the decay widths of off-shell decays, while the electroweak corrections
are added only to OS decay channels. It
is important to keep this restriction in mind when performing the
calculation for large varieties of input data. If {\it e.g.}~the lighter
Higgs boson $h$ is chosen to be the SM-like Higgs boson, then the OS
decay $h \rightarrow
W^+ W^-$ would be kinematically forbidden while the heavier Higgs boson
decay $H \rightarrow W^+ W^-$ might be OS. In such
cases, {\texttt{2HDECAY}} calculates the EW NLO corrections only for the latter decay
channel, while the LO (and, where applicable, QCD-corrected) decay widths are calculated
for both. The same is true for any other decay channel for which we
implemented EW corrections but which are off-shell in certain
input scenarios. Note that the NLO EW corrections for the off-shell decays
into the massive gauge boson final states have been provided for the
2HDM in \cite{Altenkamp:2017ldc,Denner2018,Altenkamp:2017kxk}. For
the SM, the combination of {\texttt{HDECAY}} and {\texttt{Prophecy4f}}
\cite{Bredenstein:2006rh,Bredenstein:2006nk,Bredenstein:2006ha}
provides the decay widths including EW corrected off-shell decays into
these final states. In a similar way, a combination of {\texttt{2HDECAY}} and
{\texttt{Prophecy4f}} with the 2HDM decays may be envisaged in future.
Finally, for the combination of the QCD and EW corrections, we assume
that these corrections factorize. We denote by $\delta^{\text{QCD}}$
and $\delta^{\text{EW}}$ the relative QCD and EW corrections,
respectively. Here $\delta^{\text{QCD}}$ is normalized to the LO width
$\Gamma^{\text{HD,LO}}$,
calculated internally by {\tt HDECAY}. This means for example in the
case of quark pair final states that the LO width includes the running
quark mass in order to improve the perturbative
behaviour.
The relative EW corrections $\delta^{\text{EW}}$ on the other hand are obtained by
normalization to the LO width with on-shell particle masses. With these
definitions the QCD and EW corrected decay width into a specific final
state, $\Gamma^{\text{QCD\&EW}}$, is given by
\beq
\Gamma^{\text{QCD\&EW}} = \frac{G_F^{\text{calc}}}{G_F} \Gamma^{\text{HD,LO}}
[1+\delta^{\text{QCD}}] [1+\delta^{\text{EW}}]
\equiv \frac{G_F^\text{calc}}{G_F}
\Gamma^{\text{HD,QCD}}
[1 + \delta^{\text{EW}}] \;.
\eeq
We have included the rescaling factor $G_F^\text{calc}/G_F$ which is
necessary for the consistent connection of our EW corrections with the
decay widths obtained from {\tt HDECAY}, as outlined above.
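Schematically, and with hypothetical function names, this combination reads in {\texttt{Python}}:
\begin{lstlisting}[language=Python,frame=single,backgroundcolor=\color{mygray},numbers=none]
def gamma_qcd_ew(gamma_hd_qcd, delta_ew, gf_calc_val, gf_input):
    # Gamma^{QCD&EW} = (GF_calc/GF) * Gamma^{HD,QCD} * (1 + delta_EW),
    # where delta_EW is normalized to the LO width with on-shell masses
    return (gf_calc_val / gf_input) * gamma_hd_qcd * (1.0 + delta_ew)
\end{lstlisting}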
\underline{QCD\&EW-corrected branching ratios:}
The program code will provide the branching ratios calculated
originally by {\tt HDECAY}, which, however, for {\tt OMIT ELW2=0} are
rescaled by $G_F^\text{calc}/G_F$. They include all loop decays,
off-shell decays and QCD corrections where applicable. We summarize
these branching ratios under the name 'QCD-corrected' branching ratios
and call their associated decay widths $\Gamma^{\text{HD,QCD}}$,
keeping in mind that the QCD corrections are included only where
applicable.
Furthermore, the EW and QCD corrected branching ratios will be given
out. Here, we add the EW corrections to the decay widths calculated
internally by {\tt HDECAY} where possible, {\it i.e.}~for non-loop
induced and OS decay widths. We summarize these branching
ratios under the name 'QCD\&EW-corrected' branching ratios and call
their associated decay widths $\Gamma^{\text{QCD\&EW}}$. In
Table~\ref{tab:brs} we summarize all details and caveats on their
calculation that we described above. All these branching ratios
are written to the output file whose filename carries the suffix
'\_BR'; see also the end of Sec.\,\ref{sec:InputFileFormat} for details.
\begin{table}[tb]
\centering
\begin{tabular}{ c c c }
\hline
{\tt IELW2=0} & QCD-corrected & QCD\&EW-corrected \\ \hline
on-shell and & $\Gamma^{\text{HD,QCD}} \frac{G_F^\text{calc}}{G_F}$ &
$\Gamma^{\text{HD,QCD}} [1+\delta^{\text{EW}}] \frac{G_F^\text{calc}}{G_F}$ \\
non-loop induced & & \\ \hline
off-shell or & $\Gamma^{\text{HD,QCD}}
\frac{G_F^\text{calc}}{G_F}$ & $\Gamma^{\text{HD,QCD}} \frac{G_F^\text{calc}}{G_F}$ \\
loop-induced & & \\ \hline
\end{tabular}
\caption{The QCD-corrected and the QCD\&EW-corrected decay widths
as calculated by {\tt 2HDECAY} for {\tt IELW2=0}. The label QCD
is to be understood in the sense that QCD corrections are included where applicable.}
\label{tab:brs}
\end{table}
\underline{NLO EW-corrected decay widths:} For ${\tt IELW2=0}$,
we additionally output the LO and the EW-corrected NLO decay widths
as calculated by the new addition to {\tt HDECAY}. Here the LO widths
do not include any running of the quark masses in the case of quark
final states, but are obtained for OS masses. They can hence
differ quite substantially from the LO widths as calculated in the
original {\tt HDECAY} version. These LO and EW-corrected NLO widths are
computed in the $\{\alpha_{\text{em}}, m_W, m_Z\}$ scheme and therefore
obviously do not need the inclusion of the rescaling factor
$G_F^\text{calc}/G_F$. The
decay widths are written to the output file whose filename carries the
suffix '\_EW'. While the widths output here are not meant
to be used in Higgs observables, as they do not include the
important QCD corrections, the study of the NLO EW-corrected decay
widths for various renormalization schemes, as provided by {\tt 2HDECAY}, allows one to
analyze the importance of the EW corrections and to estimate the
remaining theoretical error due to missing higher-order EW
corrections. The decay widths can also be used for phenomenological
studies such as the comparison with the EW-corrected decay widths in the
MSSM in the limit of large supersymmetric particle masses, or the
investigation of specific 2HDM parameter regions at LO and NLO, {\it e.g.}~the
alignment limit, the non-decoupling limit or the wrong-sign limit.
\underline{Caveats:} We would like to point out that the EW-corrected
decay widths can become negative if the negative EW corrections are
too large compared to the LO width. There can be several reasons for
this: $(i)$ The LO width may be very small in parts of the parameter
space due to suppressed couplings. For example, the decay width of the
heavy Higgs boson $H$ into massive vector bosons is very small in the
region where the lighter $h$ becomes SM-like and takes over almost the
entire coupling to massive gauge bosons. If the NLO EW contribution is
not suppressed by the same power of the relevant coupling, or if at
NLO there are cancellations between the various terms that remove the
suppression, the NLO contribution can greatly exceed the LO width. $(ii)$
The EW corrections are artificially enhanced due to a badly chosen renormalization
scheme, {\it
cf.}~Refs.~\cite{Krause:2016oke,Krause:2016xku,Krause:2017mal} for
investigations on this
subject. The choice of a different renormalization scheme may cure
this problem, but of course also raises the question of the remaining
theoretical error due to missing higher-order corrections.
$(iii)$ The EW corrections are parametrically enhanced because the
couplings involved are large, because of small coupling
parameters in the denominator, or because of light particles in the loop; see also
Refs.~\cite{Krause:2016oke,Krause:2016xku,Krause:2017mal} for discussions. This
would call for a resummation of the EW corrections
beyond NLO to improve the behaviour. Obviously, the EW
corrections should not be trusted in the case of extremely large positive
or negative corrections and should rather be discarded, in particular in the
comparison with experimental observables, unless some of the suggested
measures are taken to improve the behaviour.
\subsection{Parameter Conversion}
\label{sec:ParameterConversion}
Through the higher-order corrections the decay widths depend on the
renormalization scale. In {\texttt{2HDECAY}} the user can choose this
scale, called $\mu_{\text{out}}$ in the following, in the input file. It can either
be chosen as a fixed scale or as the mass of the decaying Higgs
boson. Input parameters in the $\overline{\mbox{MS}}$ scheme explicitly depend
on a renormalization scale, which also has to be
given by the user in the input file and which is called $\mu_R$ in the
following. The distinction becomes particularly important when the values of
$\mu_R$ and $\mu_{\text{out}}$ differ. In this case the $\overline{\mbox{MS}}$
parameters have to be evolved from the scale $\mu_R$ to the scale $\mu_{\text{out}}$. This
applies to $m_{12}^2$, which is always understood to be an
$\overline{\mbox{MS}}$ parameter, and to $\alpha$ and $\beta$ in case
they are chosen to be $\overline{\mbox{MS}}$ renormalized.
{\texttt{2HDECAY}} internally converts the $\overline{\mbox{MS}}$
parameters from $\mu_R$ to $\mu_{\text{out}}$ by means of a linear
approximation, applying the formula
\begin{equation}
\varphi \left( \{ \mu _\text{out} \} \right) \approx \varphi
\left( \{ \mu _R \} \right) + \ln \left( \frac{\mu _\text{out}
^2}{\mu _R ^2} \right) \delta \varphi^\text{div} \left( \{
\varphi \} \right) \label{eq:scalechange}
\end{equation}
where $\varphi$ and $\delta \varphi$ denote the $\overline{\mbox{MS}}$
parameters ($\alpha$ and $\beta$, if chosen as such, $m_{12}^2$) and
their respective counterterms. The superscript 'div' means that only the
divergent part of the counterterm, {\it i.e.}~the terms proportional
to $1/\varepsilon$ (or equivalently $\Delta$), is taken. \newline \vspace*{-3.5mm}
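As an illustration of \eqref{eq:scalechange}, a one-line {\texttt{Python}} sketch of the linearized evolution (the function name is ours) reads:
\begin{lstlisting}[language=Python,frame=single,backgroundcolor=\color{mygray},numbers=none]
import math

def run_msbar(phi_at_mu_r, delta_phi_div, mu_r, mu_out):
    # linearized evolution of an MS-bar parameter from mu_R to mu_out;
    # delta_phi_div is the divergent part of the CT as defined in the text
    return phi_at_mu_r + math.log(mu_out**2 / mu_r**2) * delta_phi_div
\end{lstlisting}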
In addition, a parameter conversion has to be performed, when the
chosen renormalization scheme of the input parameter differs from the
renormalization scheme at which the EW corrected decay widths are
chosen to be evaluated. {\texttt{2HDECAY}} performs this conversion
automatically, which is necessary for a consistent interpretation of
the results. The renormalization schemes implemented in {\texttt{2HDECAY}} differ solely in their definition of the scalar mixing angle CTs, while the definition of all
other CTs is fixed. Therefore, the values of $\alpha$ and $\beta$ must
be converted when switching from one renormalization scheme to
another. For this conversion, we follow the linearized approach described in Ref.\,\cite{Altenkamp:2017ldc}. Since the bare mixing angles are independent of the
renormalization scheme, their values $\varphi_i$ in a different renormalization scheme
are given by the values $\varphi_{\text{ref}}$ in the input scheme (called reference scheme in the following) and the corresponding counterterms $\delta \varphi_{\text{ref}}$ and $\delta \varphi_i$ in the reference and the other renormalization scheme, respectively, as
\begin{equation}
\varphi _i \left( \{ \mu _\text{out} \} \right) \approx \varphi _\text{ref} \left( \{ \mu _R \} \right) + \delta \varphi _\text{ref} \left( \{ \varphi _\text{ref} , \mu _R \} \right) - \delta \varphi _i \left( \{ \varphi _\text{ref} , \mu _\text{out} \} \right) ~. \label{eq:convertedParameterValues}
\end{equation}
Note that Eq.~(\ref{eq:convertedParameterValues}) also contains the
dependence on the scales $\mu_R$ and $\mu_{\text{out}}$ introduced
above. They are relevant in case $\alpha$ and $\beta$ are
understood as $\overline{\mbox{MS}}$ parameters and hence additionally
depend on the renormalization scale at which they are defined.
The relation Eq.~(\ref{eq:convertedParameterValues}) holds
up to higher-order terms, as the CTs
involved in this equation are all evaluated with the mixing angles
given in the reference scheme.
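The scheme conversion of \eqref{eq:convertedParameterValues} can be sketched analogously (the function name is again ours):
\begin{lstlisting}[language=Python,frame=single,backgroundcolor=\color{mygray},numbers=none]
def convert_scheme(phi_ref, delta_phi_ref, delta_phi_target):
    # phi_i = phi_ref + delta(phi_ref) - delta(phi_i); both CTs are
    # evaluated with the reference-scheme mixing angles, so the relation
    # holds up to higher-order terms
    return phi_ref + delta_phi_ref - delta_phi_target
\end{lstlisting}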
\section{Program Description}
\label{sec:programDescriptionMain}
In the following, we describe the system requirements needed for
compiling and running {\texttt{2HDECAY}}, the installation procedure
and the usage of the program. Additionally, we describe the input and
output file formats in detail.
\subsection{System Requirements}
The \texttt{Python/FORTRAN} program code {\texttt{2HDECAY}} was
developed under {\texttt{Windows 10}} and {\texttt{openSUSE Leap
15.0}}. The supported operating systems are:
\begin{itemize}
\item {\texttt{Windows 7}} and {\texttt{Windows 10}} (tested
with {\texttt{Cygwin 2.10.0}})
\item {\texttt{Linux}} (tested with {\texttt{openSUSE Leap 15.0}})
\item {\texttt{macOS}} (tested with {\texttt{macOS Sierra 10.12}})
\end{itemize}
In order to compile and run {\texttt{2HDECAY}} on {\texttt{Windows}},
{\texttt{Cygwin}} has to be installed first (together with the packages {\texttt{cURL}},
{\texttt{find}}, {\texttt{gcc}}, {\texttt{g++}} and
{\texttt{gfortran}}, which are also required on {\texttt{Linux}}
and {\texttt{macOS}}). For the compilation,
the {\texttt{GNU C}} compilers {\texttt{gcc}} (tested with versions
{\texttt{6.4.0}} and {\texttt{7.3.1}}), {\texttt{g++}} and the {\texttt{FORTRAN}}
compiler {\texttt{gfortran}} are required. Additionally, an up-to-date
version of {\texttt{Python 2}} or {\texttt{Python 3}} is required
(tested with versions {\texttt{2.7.14}} and {\texttt{3.5.0}}). For an
optimal performance of {\texttt{2HDECAY}}, we recommend that the
program is installed on a solid state drive (SSD) with high reading
and writing speeds.
\subsection{License}
{\texttt{2HDECAY}} is released under the GNU General Public License
(GPL) ({\texttt{GNU GPL-3.0-or-later}}). {\texttt{2HDECAY}} is free
software, which means that anyone can redistribute it and/or modify it
under the terms of the GNU GPL as published by the Free Software
Foundation, either version 3 of the License, or any later
version. {\texttt{2HDECAY}} is distributed without any warranty. A
copy of the GNU GPL is included in the {\texttt{LICENSE.md}} file in
the root directory of {\texttt{2HDECAY}}.
\subsection{Download}
\label{sec:Download}
The latest version of the program as well as a short quick-start
documentation is given at
\href{https://github.com/marcel-krause/2HDECAY}{https://github.com/marcel-krause/2HDECAY}. To
obtain the code either the repository is cloned or the zip archive is
downloaded and unzipped to a directory of the user's choice, which
here and in the following will be referred to as
{\texttt{\$2HDECAY}}. The main folder of {\texttt{2HDECAY}} consists
of several subfolders:
\begin{description}
\item[{\texttt{BuildingBlocks}}] Contains the analytic electroweak
one-loop corrections for all decays considered, as well as the
real corrections and CTs needed to render the decay widths UV- and
IR-finite.
\item[{\texttt{Documentation}}] Contains this documentation.
\item[{\texttt{HDECAY}}] This subfolder contains a modified version
of
{\texttt{HDECAY}} 6.52~\cite{DJOUADI199856,Djouadi:2018xqq},
needed for the computation of the LO and (where applicable) QCD
corrected decay widths.
{\tt HDECAY} also provides off-shell
decay widths and the loop-induced decay widths into gluon and
photon pair final states and into $Z\gamma$. {\tt HDECAY} is
furthermore used for the computation of the branching ratios.
\item[{\texttt{Input}}] This subfolder stores one or more input
files to be used for the computation. The
format of the input file is explained in
Sec.\,\ref{sec:InputFileFormat}. In the Github repository, the
{\texttt{Input}} folder contains an exemplary input file which is
printed in App.\,\ref{sec:AppendixInputFile}.
\item[{\texttt{Results}}] All results of a successful run of
{\texttt{2HDECAY}} are stored as output files in this subfolder
under the same name as the corresponding input files in the
{\texttt{Input}} folder, but with the file extension {\texttt{.in}}
replaced by {\texttt{.out}} and a suffix ``\_BR'' and ``\_EW'' for
the branching ratios and electroweak partial decay widths,
respectively. In the Github repository, the
{\texttt{Results}} folder contains two exemplary output files which
are given in App.\,\ref{sec:AppendixOutputFile}.
\end{description}
The main folder {\texttt{\$2HDECAY}} itself also contains several
files:
\begin{description}
\item[{\texttt{2HDECAY.py}}] Main program file of
{\texttt{2HDECAY}}. It serves as a wrapper that calls
{\texttt{HDECAY}} in order to convert the charm and bottom quark
masses from the $\overline{\text{MS}}$ input values to the
corresponding OS values and to calculate the LO widths, QCD
corrections, off-shell and loop-induced decays and the branching
ratios, as well as {\texttt{electroweakCorrections}} for the
calculation of the EW one-loop corrections.
\item[{\texttt{Changelog.md}}] Documentation of all changes made in
the program since version \newline {\texttt{2HDECAY\,1.0.0}}.
\item[{\texttt{CommonFunctions.py}}] Function library of
{\texttt{2HDECAY}}, providing functions frequently used in the
different files of the program.
\item[{\texttt{Config.py}}] Main configuration file. If
{\texttt{LoopTools}} is not installed automatically by the installer
of {\texttt{2HDECAY}}, the paths to the {\texttt{LoopTools}}
executables and libraries have to be set manually in this file.
\item[{\texttt{constants.F90}}] Library for all constants
used in {\texttt{2HDECAY}}.
\item[{\texttt{counterterms.F90}}] Definition of all fundamental CTs
necessary for the EW one-loop renormalization of the Higgs boson
decays. The CTs defined in this file require the analytic results
saved in the {\texttt{BuildingBlocks}} subfolder.
\item[{\texttt{electroweakCorrections.F90}}] Main file for the
calculation of the EW one-loop corrections to the Higgs boson
decays. It combines the EW one-loop corrections to the decay
widths with the necessary CTs and IR corrections and calculates
the EW contributions to the tree-level decay widths that
are then combined with the QCD corrections in {\texttt{HDECAY}}.
\item[{\texttt{getParameters.F90}}] Routine to
read in the input values
given by the user in the input files that are needed by
{\texttt{2HDECAY}}.
\item[{\texttt{LICENSE.md}}] Contains the full GNU General Public
License ({\texttt{GNU GPL-3.0-or-later}}) agreement under which
{\texttt{2HDECAY}} is published.
\item[{\texttt{README.md}}] Provides an overview over basic
information about the program as well as a quick-start guide.
\item[{\texttt{setup.py}}] Main setup and installation file of
{\texttt{2HDECAY}}. For a guided installation, this file should be
called after downloading the program.
\end{description}
\subsection{Installation}
\label{sec:Installation}
We highly recommend using the automatic installation script
{\texttt{setup.py}} that is part of the {\texttt{2HDECAY}}
download. The script guides the user through the installation and asks
which components should be installed. For an installation under
{\texttt{Windows}}, the user should open the configuration file
{\texttt{\$2HDECAY/Config.py}} and check that the path to the
{\texttt{Cygwin}} executable in line 36 is set correctly before
starting the installation. In order to initiate the installation, the
user navigates to the {\texttt{\$2HDECAY}} folder and executes the
following in the command-line shell:
\begin{lstlisting}[numbers=none,language=bash,frame=single,backgroundcolor=\color{mygray}]
python setup.py
\end{lstlisting}
The script first asks the user if {\texttt{LoopTools}} should be
downloaded and installed. By entering {\texttt{y}}, the installer
downloads the {\texttt{LoopTools}} version that is specified in the
{\texttt{\$2HDECAY/Config.py}} file in line 37 and starts the
installation automatically. {\texttt{LoopTools}} is then installed in
a subdirectory of {\texttt{2HDECAY}}. Further information about the
installation of the program can be found in \cite{HAHN1999153}.
If the user already has a working version of {\texttt{LoopTools}} on the system,
this step of the installation can be skipped. In this case, the user
has to open the file {\texttt{\$2HDECAY/Config.py}} in an editor and
change the lines 33-35 to the absolute path of the
{\texttt{LoopTools}} root directory and to the {\texttt{LoopTools}}
executables and libraries on the system. Additionally, line 32 has
to be changed to
\begin{lstlisting}[language=Python,frame=single,backgroundcolor=\color{mygray},numbers=none]
useRelativeLoopToolsPath = False
\end{lstlisting}
This step is important if {\texttt{LoopTools}} is not installed
automatically with the install script, since otherwise,
{\texttt{2HDECAY}} will not be able to find the necessary executables
and libraries for the calculation of the EW one-loop
corrections.
As soon as {\texttt{LoopTools}} is installed (or alternatively, as
soon as paths to the {\texttt{LoopTools}} libraries and executables on
the user's system are being set manually in {\texttt{\$2HDECAY/Config.py}}),
the installation script asks whether it should automatically create
the makefile and the main EW corrections file
{\texttt{electroweakCorrections.F90}} and whether the program shall be
compiled. For an automatic installation, the user should type
{\texttt{y}} for all these requests in order to compile the main program as
well as the modified version of {\texttt{HDECAY}} that
is included in {\texttt{2HDECAY}}. The compilation may take several
minutes to finish. At the end of the installation, the
user is given the optional choice to run {\texttt{make clean}}.
In order to test if the installation was successful, the user can type
\begin{lstlisting}[numbers=none,language=bash,frame=single,backgroundcolor=\color{mygray}]
python 2HDECAY.py
\end{lstlisting}
in the command-line shell, which runs the main program. The exemplary
input file provided by the default {\texttt{2HDECAY}} version is used for the
calculation. In the command window, the output of several steps of the
computation should be printed, but no errors. If the installation was
successful, {\texttt{2HDECAY}} terminates without errors, and the
existing output files in {\texttt{\$2HDECAY/Results}} are overwritten by
the newly created ones, which are equivalent to the exemplary
output files provided with the program.
\subsection{Input File Format}
\label{sec:InputFileFormat}
\begin{table}[tb]
\centering
\begin{tabular}{ c c c }
\hline
Line & Input name & Allowed values and meaning \\ \hline
\makecell[tc]{6} & \makecell[tc]{{\texttt{OMIT ELW2}}} &
\makecell[tl]{0: electroweak corrections (2HDM) are calculated \\ 1: electroweak
corrections (2HDM) are neglected} \\
\makecell[tc]{9} & \makecell[tc]{{\texttt{2HDM}}} &
\makecell[tl]{0: considered model is not the 2HDM \\ 1: considered
model is the 2HDM } \\
\makecell[tc]{56} & \makecell[tc]{{\texttt{PARAM}}} & \makecell[tl]{1: 2HDM Higgs masses and $\alpha$ (lines 66-70) are given as input \\ 2: 2HDM potential parameters (lines 72-76) are given as input} \\
\makecell[tc]{57} & \makecell[tc]{{\texttt{TYPE}}} &
\makecell[tl]{1: 2HDM type I \\ 2: 2HDM type II \\
3: 2HDM lepton-specific \\ 4: 2HDM flipped} \\
\makecell[tc]{58} & \makecell[tc]{{\texttt{RENSCHEM}}} &
\makecell[tl]{0: all renormalization schemes are calculated \\ 1-17: only the chosen scheme ({\it cf.}~Tab.\,\ref{tab:2HDECAYImplementedSchemes}) is calculated} \\
\makecell[tc]{59} & \makecell[tc]{ {\texttt{REFSCHEM}} } &
\makecell[tl]{1-17: the input values of
$\alpha$, $\beta$ and $m_{12}^2$
(\textit{cf.}\,Tab.\,\ref{tab:2HDECAYInputValues}) are given in
the \\ \hspace*{0.35cm} chosen reference
scheme and at the scale $\mu _R$ given by {\texttt{INSCALE}} in
\\ \hspace*{0.35cm} case of $\overline{\mbox{MS}}$
parameters; the values of $\alpha$, $\beta$ and $m_{12}^2$ in all other
\\ \hspace*{0.35cm} schemes and
at the scale $\mu_{\text{out}}$ at which the decays are calculated,
\\ \hspace*{0.35cm} are evaluated using
Eqs.~(\ref{eq:scalechange}) and (\ref{eq:convertedParameterValues})} \\
\hline
\end{tabular}
\caption{Input parameters for the basic control of
{\texttt{2HDECAY}}. The line number corresponds to the line of
the input file where the input value can be found. In order to
calculate the EW corrections for the 2HDM, the input parameter
{\texttt{OMIT ELW2}} has to be set to 0. In this case, the given
input value of {\texttt{2HDM}} is ignored and {\texttt{2HDM=1}}
is set automatically, independent of the chosen input value. All
input values presented in this table have to be entered as integer values.}
\label{tab:2HDECAYControlInputs}
\end{table}
The format of the input file is adopted from {\texttt{HDECAY}}
\cite{DJOUADI199856, Djouadi:2018xqq}, with minor modifications to
account for the EW corrections that are implemented. The
file has to be stored as a text-only file in UTF-8
format. Since {\texttt{2HDECAY}} is a program designed for the calculation of
higher-order corrections solely for the 2HDM, only a subset of input
parameters in comparison to the original {\texttt{HDECAY}} input file
is actually used ({\it e.g.}~SUSY-related input parameters are not needed
for {\texttt{2HDECAY}}). The input file nevertheless contains the full
set of input parameters from {\texttt{HDECAY}} to make
{\texttt{2HDECAY}} fully backwards-compatible,
{\it i.e.}\,{\texttt{HDECAY\,6.52}} is fully contained in
{\texttt{2HDECAY}}. The input file contains two classes of
input parameters. The first class are input values that control the
main flow of the program ({\it e.g.\,}whether corrections for the SM or the
2HDM are calculated). The control parameters relevant for
{\texttt{2HDECAY}} are shown in Tab.\,\ref{tab:2HDECAYControlInputs},
together with their line numbers in the input file, their allowed
values and the meaning of the input values. In order to choose the
2HDM as the model that is considered, the input value {\texttt{2HDM =
1}} has to be chosen. By setting {\texttt{OMIT ELW2 = 0}}, the
EW and QCD corrections are calculated for the 2HDM, whereas
for {\texttt{OMIT ELW2 = 1}}, only the QCD corrections are
calculated. The latter choice corresponds to the corrections for the
2HDM that are already implemented in
{\texttt{HDECAY\,6.52}}. If the user sets {\texttt{OMIT ELW2 = 0}} in
the input file, then {\texttt{2HDM = 1}} is automatically set
internally, independent of the input value of {\texttt{2HDM}} that the
user provides. The input value {\texttt{PARAM}}
determines which parametrization of the Higgs sector shall be
used. For {\texttt{PARAM = 1}}, the Higgs boson masses and mixing angle
$\alpha$ are chosen as input, while for {\texttt{PARAM =
2}}, the Higgs potential parameters $\lambda _i$ are used as
input. As described at the end of Sec.\,\ref{sec:setupOfModel},
however, it should be noted that the EW corrections in
{\texttt{2HDECAY}} are in both cases parametrized through the Higgs
masses and mixing angle. Hence, if {\texttt{PARAM = 2}} is chosen, the
masses and mixing angle are calculated as functions of $\lambda _i$ by
means of
Eqs.\,(\ref{eq:parameterTransformationInteractionToMass1})-(\ref{eq:parameterTransformationInteractionToMass5}). The
input value {\texttt{TYPE}} sets the type of the 2HDM, as described in
Sec.\,\ref{sec:setupOfModel}, and {\texttt{RENSCHEM}} determines the
renormalization schemes that are used for the calculation. By setting
{\texttt{RENSCHEM = 0}}, the EW corrections to the Higgs boson
decays are calculated for all 17 implemented renormalization
schemes. This allows for analyses of the renormalization scheme dependence
and for an estimate of the effects of missing higher-order EW
corrections, but this setting has the caveat of increasing
the computation time and output file size rather significantly. A
specific integer value of {\texttt{RENSCHEM}} between 1 and 17 sets
the renormalization scheme to the chosen one. An overview of all
implemented schemes and their identifier values between 1 and 17 is
presented in Tab.\,\ref{tab:2HDECAYImplementedSchemes}.
As discussed in
Sec.\,\ref{sec:ParameterConversion}, the consistent comparison of
partial decay widths calculated in different renormalization schemes
requires the conversion of the input parameters between these
schemes. By setting {\texttt{REFSCHEM}} to a value between 1 and 17,
the input parameters for $\alpha$ and $\beta$ ({\it cf.}
Tab.\,\ref{tab:2HDECAYInputValues}) are understood as input parameters in
the chosen reference scheme and the automatic parameter conversion
is activated. The input value of the $\overline{\mbox{MS}}$
parameter $m_{12}^2$ is given at the input scale $\mu _R$. The same
applies for the input values of $\alpha$ and $\beta$ when they are
chosen to be $\overline{\text{MS}}$ renormalized. The values of
$\alpha$, $\beta$ and $m_{12}^2$ in all other renormalization
schemes and at all other scales $\mu _\text{out}$ are then
calculated using Eqs.~(\ref{eq:scalechange}) and
(\ref{eq:convertedParameterValues}). The automatic parameter
conversion requires the input parameters to be given in the mass
basis of \eqref{eq:inputSetMassBase}, \textit{i.e.}\,for the
automatic parameter conversion to be active, it is necessary to set
{\texttt{PARAM = 1}}. If instead {\texttt{PARAM = 2}} is set, then
{\texttt{REFSCHEM = 0}} is set automatically internally so that the
automatic parameter conversion is deactivated. In this case, a
warning is printed in the console. All input
values of the first class must be entered as integers.
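Schematically, the control parameters of
Tab.\,\ref{tab:2HDECAYControlInputs} appear in the input file as
simple integer assignments of the following form (the values shown
here are only an illustrative choice, {\it cf.}~the exemplary input
file in App.\,\ref{sec:AppendixInputFile}):
\begin{lstlisting}[numbers=none,frame=single,backgroundcolor=\color{mygray}]
OMIT ELW2 = 0
2HDM      = 1
PARAM     = 1
TYPE      = 1
RENSCHEM  = 7
REFSCHEM  = 5
\end{lstlisting}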
\begin{table}[tb]
\centering
\begin{tabular}{ c c c c }
\hline
Line & Input name & Name in Sec.\,\ref{sec:EWQCD2HDMMain} & Allowed values and meaning \\ \hline
\makecell[tc]{18} & \makecell[tc]{{\texttt{ALS(MZ)}}} & $\alpha_s (m_Z)$ & \makecell[tl]{strong coupling constant (at $m_Z$)} \\
\makecell[tc]{19} & \makecell[tc]{{\texttt{MSBAR(2)}}} & $m_s (2\,\text{GeV})$ & \makecell[tl]{$s$-quark $\overline{\text{MS}}$ mass at 2 GeV in GeV} \\
\makecell[tc]{20} & \makecell[tc]{{\texttt{MCBAR(3)}}} & $m_c (3\,\text{GeV})$ & \makecell[tl]{$c$-quark $\overline{\text{MS}}$ mass at 3 GeV in GeV} \\
\makecell[tc]{21} & \makecell[tc]{{\texttt{MBBAR(MB)}}} & $m_b (m_b)$ & \makecell[tl]{$b$-quark $\overline{\text{MS}}$ mass at $m_b$ in GeV} \\
\makecell[tc]{22} & \makecell[tc]{{\texttt{MT}}} & $m_t$ & \makecell[tl]{$t$-quark pole mass in GeV} \\
\makecell[tc]{23} & \makecell[tc]{{\texttt{MTAU}}} & $m_\tau $ & \makecell[tl]{$\tau$-lepton pole mass in GeV} \\
\makecell[tc]{24} & \makecell[tc]{{\texttt{MMUON}}} & $m_\mu $ & \makecell[tl]{$\mu $-lepton pole mass in GeV} \\
\makecell[tc]{25} & \makecell[tc]{{\texttt{1/ALPHA}}} & $\alpha _\text{em} ^{-1} (0)$ & \makecell[tl]{inverse fine-structure constant (Thomson limit)} \\
\makecell[tc]{26} & \makecell[tc]{{\texttt{ALPHAMZ}}} & $\alpha _\text{em} (m_Z)$ & \makecell[tl]{fine-structure constant (at $m_Z$)} \\
\makecell[tc]{29} & \makecell[tc]{{\texttt{GAMW}}} & $\Gamma _W$ & \makecell[tl]{total decay width of the $W$ boson } \\
\makecell[tc]{30} & \makecell[tc]{{\texttt{GAMZ}}} & $\Gamma _Z$ & \makecell[tl]{total decay width of the $Z$ boson } \\
\makecell[tc]{31} & \makecell[tc]{{\texttt{MZ}}} & $m_Z$ &
\makecell[tl]{$Z$ boson on-shell mass in GeV} \\
\makecell[tc]{32} & \makecell[tc]{{\texttt{MW}}} & $m_W$ &
\makecell[tl]{$W$ boson on-shell mass in GeV} \\
\makecell[tc]{33-41} & \makecell[tc]{{\texttt{Vij}}} & $V_{ij}$ & \makecell[tl]{CKM matrix elements ($i\in \{ u,c,t\}$ , $j\in \{ d,s,b\} $) } \\
\makecell[tc]{61} & \makecell[tc]{{\texttt{TGBET2HDM}}} & $t_\beta $ & \makecell[tl]{ratio of the VEVs in the 2HDM} \\
\makecell[tc]{62} & \makecell[tc]{{\texttt{M\_12\textasciicircum
2}}} & $m_{12}^2 $ & \makecell[tl]{squared soft-$\mathbb{Z}_2$-breaking scale in GeV$^2$} \\
\makecell[tc]{63} & \makecell[tc]{{\texttt{INSCALE}}} & $\mu_R $
& \makecell[tl]{renormalization scale for $\overline{\text{MS}}$ inputs in GeV} \\
\makecell[tc]{64} & \makecell[tc]{ {\texttt{OUTSCALE}} } & $\mu_\text{out} $
& \makecell[tl]{ renormalization scale for the evaluation of the } \\
& & & \makecell[tl]{ partial decay widths in GeV or in terms of {\texttt{MIN}} } \\
\makecell[tc]{66} & \makecell[tc]{{\texttt{ALPHA\_H}}} & $\alpha $ & \makecell[tl]{CP-even Higgs mixing angle in radians} \\
\makecell[tc]{67} & \makecell[tc]{{\texttt{MHL}}} & $m_h $ &
\makecell[tl]{light CP-even Higgs boson mass in GeV} \\
\makecell[tc]{68} & \makecell[tc]{{\texttt{MHH}}} & $m_H $ &
\makecell[tl]{heavy CP-even Higgs boson mass in GeV} \\
\makecell[tc]{69} & \makecell[tc]{{\texttt{MHA}}} & $m_A $ &
\makecell[tl]{CP-odd Higgs boson mass in GeV} \\
\makecell[tc]{70} & \makecell[tc]{{\texttt{MH+-}}} & $m_{H^\pm } $
& \makecell[tl]{charged Higgs boson mass in GeV} \\
\makecell[tc]{72-76} & \makecell[tc]{{\texttt{LAMBDAi}}} &
$\lambda_i $ & \makecell[tl]{Higgs potential parameters
[see Eq.~(\ref{eq:scalarPotential})]} \\
\hline
\end{tabular}
\caption{Shown are all relevant physical input parameters of
{\texttt{2HDECAY}} that are necessary for the calculation of the
QCD and EW corrections. The
line number corresponds to the line of the input file where the input value
can be found. Depending on the chosen value of {\texttt{PARAM}}
({\it cf.}~Tab.\,\ref{tab:2HDECAYControlInputs}), either the Higgs
masses and mixing angle $\alpha$ (lines 66-70) or the 2HDM
potential parameters (lines 72-76) are chosen as input, but
never both simultaneously. The value {\texttt{OUTSCALE}} is
entered either as a double-precision number or as
{\texttt{MIN}}, representing the mass scale of the decaying
Higgs boson. All other input values presented in this table are
entered as double-precision numbers.}
\label{tab:2HDECAYInputValues}
\end{table}
The second class of input values in the input file are the physical
input parameters shown in Tab.\,\ref{tab:2HDECAYInputValues}, together
with their line numbers in the input file, their allowed input values
and the meaning of the input values. This is the full set of input
parameters needed for the calculation of the electroweak and QCD
corrections. All other input parameters present in the input file that
are not shown in Tab.\,\ref{tab:2HDECAYInputValues} are neglected for
the calculation of the QCD and EW corrections in the
2HDM. We want to emphasize again that depending on the
choice of {\texttt{PARAM}} ({\it cf.}~Tab.\,\ref{tab:2HDECAYControlInputs}),
either the Higgs masses and mixing angle $\alpha$ or the Higgs
potential parameters $\lambda _i$ are chosen as independent input,
but never both simultaneously, {\it i.e.}~if {\texttt{PARAM = 1}} is
chosen, then the input values for $\lambda _i$ are ignored, while
for {\texttt{PARAM = 2}}, the input values of the Higgs masses
and $\alpha$ are ignored and instead calculated by means of
Eqs.\,(\ref{eq:parameterTransformationInteractionToMass1})-(\ref{eq:parameterTransformationInteractionToMass5}).
All input values of the second class are entered in
{\texttt{FORTRAN}} double-precision format, {\it i.e.}~valid input
formats are {\it e.g.}~{\texttt{MT = 1.732e+02}} or {\texttt{MHH =
258.401D0}}. Since $m_{12}^2$ and, in case of a chosen $\overline{\text{MS}}$ scheme, $\alpha$ and $\beta$ depend on the renormalization scale $\mu _R$ at which these parameters are given, the calculation of the partial decay widths depends on this scale. Moreover, since the partial decay widths are evaluated at the (potentially different) renormalization scale $\mu _\text{out}$, the decay widths and branching ratios depend on this scale as well. In order to avoid artificially large corrections, both scales should be chosen appropriately. The input value
{\texttt{INSCALE}} of $\mu _R$, \textit{i.e.}\,the scale at which all $\overline{\text{MS}}$ parameters are defined, is entered as a double-precision number. The input value
{\texttt{OUTSCALE}} of $\mu _\text{out}$, \textit{i.e.}\,the renormalization scale at which the partial decay widths are evaluated, can be entered either as a
double-precision number or it can be expressed in terms of the mass
scale {\texttt{MIN}} of the decaying Higgs boson, {\it i.e.}\,setting {\texttt{OUTSCALE=MIN}} sets $\mu _\text{out} = m_1$ for each decay
channel, where $m_1$ is the mass of the decaying Higgs boson in the
respective channel. Note finally that the input
masses for the $W$ and $Z$ gauge bosons must be the on-shell
values for consistency with the renormalization conditions applied
in the EW corrections.
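As an illustration of the scale inputs discussed above, the two scales
could for instance be set as follows (the numerical value is chosen
for illustration only):
\begin{lstlisting}[numbers=none,frame=single,backgroundcolor=\color{mygray}]
INSCALE  = 125.09D0
OUTSCALE = MIN
\end{lstlisting}
With this choice, all $\overline{\text{MS}}$ inputs are understood at
$\mu _R = 125.09$~GeV, while each partial decay width is evaluated at
the mass of the respective decaying Higgs boson.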
\begin{table}[tb]
\centering
\begin{tabular}{ c c c c c }
\hline
Input ID & Tadpole scheme & $\delta \alpha$ & $\delta \beta$ & Gauge-par.-indep. $\Gamma$ \\ \hline
\makecell[tc]{1} & \makecell[tc]{standard} & \makecell[tc]{KOSY} & \makecell[tc]{KOSY (odd)} & \makecell[tc]{\xmark } \\
\makecell[tc]{2} & \makecell[tc]{standard} & \makecell[tc]{KOSY} & \makecell[tc]{KOSY (charged)} & \makecell[tc]{\xmark } \\
\makecell[tc]{3} & \makecell[tc]{alternative (FJ)} & \makecell[tc]{KOSY} & \makecell[tc]{KOSY (odd)} & \makecell[tc]{\xmark } \\
\makecell[tc]{4} & \makecell[tc]{alternative (FJ)} & \makecell[tc]{KOSY} & \makecell[tc]{KOSY (charged)} & \makecell[tc]{\xmark } \\
\makecell[tc]{5} & \makecell[tc]{alternative (FJ)} & \makecell[tc]{$p_{*}$-pinched} & \makecell[tc]{$p_{*}$-pinched (odd)} & \makecell[tc]{\cmark } \\
\makecell[tc]{6} & \makecell[tc]{alternative (FJ)} & \makecell[tc]{$p_{*}$-pinched} & \makecell[tc]{$p_{*}$-pinched (charged)} & \makecell[tc]{\cmark } \\
\makecell[tc]{7} & \makecell[tc]{alternative (FJ)} & \makecell[tc]{OS-pinched} & \makecell[tc]{OS-pinched (odd)} & \makecell[tc]{\cmark } \\
\makecell[tc]{8} & \makecell[tc]{alternative (FJ)} & \makecell[tc]{OS-pinched} & \makecell[tc]{OS-pinched (charged)} & \makecell[tc]{\cmark } \\
\makecell[tc]{9} & \makecell[tc]{alternative (FJ)} & \makecell[tc]{proc.-dep. 1} & \makecell[tc]{proc.-dep. 1} & \makecell[tc]{\cmark } \\
\makecell[tc]{10} & \makecell[tc]{alternative (FJ)} & \makecell[tc]{proc.-dep. 2} & \makecell[tc]{proc.-dep. 2} & \makecell[tc]{\cmark } \\
\makecell[tc]{11} & \makecell[tc]{alternative (FJ)} & \makecell[tc]{proc.-dep. 3} & \makecell[tc]{proc.-dep. 3} & \makecell[tc]{\cmark } \\
\makecell[tc]{12} & \makecell[tc]{alternative (FJ)} & \makecell[tc]{OS1} & \makecell[tc]{OS1} & \makecell[tc]{\cmark } \\
\makecell[tc]{13} & \makecell[tc]{alternative (FJ)} & \makecell[tc]{OS2} & \makecell[tc]{OS2} & \makecell[tc]{\cmark } \\
\makecell[tc]{14} & \makecell[tc]{alternative (FJ)} & \makecell[tc]{OS12} & \makecell[tc]{OS12} & \makecell[tc]{\cmark } \\
\makecell[tc]{15} & \makecell[tc]{alternative (FJ)} & \makecell[tc]{BFMS} & \makecell[tc]{BFMS} & \makecell[tc]{\cmark } \\
\makecell[tc]{16} & \makecell[tc]{standard} & \makecell[tc]{ $\overline{\text{MS}}$ } & \makecell[tc]{ $\overline{\text{MS}}$ } & \makecell[tc]{\xmark } \\
\makecell[tc]{17} & \makecell[tc]{alternative (FJ)} & \makecell[tc]{ $\overline{\text{MS}}$ } & \makecell[tc]{ $\overline{\text{MS}}$ } & \makecell[tc]{\cmark } \\
\hline
\end{tabular}
\caption{Overview of all renormalization schemes for the mixing
angles $\alpha$ and $\beta$ that are implemented in
{\texttt{2HDECAY}}. The renormalization scheme is chosen by setting
{\texttt{RENSCHEM}} in the input file,
{\it cf.}~Tab.\,\ref{tab:2HDECAYControlInputs}, equal to the
Input ID. If 0 is chosen, the results for all renormalization
schemes are given out. The
definition of the CTs $\delta \alpha$ and $\delta \beta$ in each
scheme is explained in
Sec.\,\ref{sec:renormalizationMixingAngles}. The crosses and
check marks in the column for gauge independence indicate
whether the chosen scheme in general yields explicitly gauge-independent
partial decay widths or not.}
\label{tab:2HDECAYImplementedSchemes}
\end{table}
The number of input files that can be stored in the input folder is
not limited. The input files can have arbitrary non-empty names and
filename extensions\footnote{On some systems, certain filename
extensions should be avoided when naming the input files, as they
are reserved for certain types of files ({\it e.g.}~under {\texttt{Windows}},
the {\texttt{.exe}} file extension is automatically connected to
executables by the operating system, which can under certain
circumstances lead to runtime problems when trying to read the
file). Choosing text file extensions like {\texttt{.in}},
{\texttt{.out}}, {\texttt{.dat}} or {\texttt{.txt}} should in
general be unproblematic.}. The output files are saved in the
{\texttt{\$2HDECAY/Results}} subfolder under the same name as the
corresponding input files, but with their filename extension replaced
by {\texttt{.out}}.
For each input file, two output files are generated. The output file
containing the branching ratios is indicated by the filename suffix
'\_BR', while the output file containing the electroweak
partial decay widths is indicated by the filename suffix '\_EW'.
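For example, an input file named {\texttt{2hdecay.in}} yields the two
output files {\texttt{2hdecay\_BR.out}} and {\texttt{2hdecay\_EW.out}}
in {\texttt{\$2HDECAY/Results}}, {\it cf.}~the exemplary files
discussed in App.\,\ref{sec:AppendixOutputFile}.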
\subsection{Structure of the Program}
As briefly mentioned in Sec.\,\ref{sec:Download}, the main program
{\texttt{2HDECAY}} combines the already existing QCD corrections from
{\texttt{HDECAY}} with the full EW one-loop
corrections. Depicted in \figref{fig:flowchart2HDECAY} is the
flowchart of {\texttt{2HDECAY}} which shows how the QCD and
EW corrections are combined by the main wrapper file
{\texttt{2HDECAY.py}}.
\begin{figure}[tb]
\centering
\includegraphics[width=13.8cm]{flowchart.pdf}
\caption{Flowchart of {\texttt{2HDECAY}}. The main wrapper file
{\texttt{2HDECAY.py}} generates a list of input files, provided by the
user in the subfolder {\texttt{\$2HDECAY/Input}}, and iterates over
the list. For each selected input file in the list, the wrapper
calls {\tt HDECAY} and the subprogram {\tt
electroweakCorrections}. The computed branching ratios including
the EW and QCD corrections as described in the text are written
to the output file with suffix '\_BR', the calculated LO and NLO EW-corrected
partial decay widths are given out in the output file with suffix
'\_EW'. For further details, we refer to the text.}
\label{fig:flowchart2HDECAY}
\end{figure}
First, the wrapper file generates a list of all input files that the
user provides in {\texttt{\$2HDECAY/Input}}. The user can provide an
arbitrary non-zero number of input files with arbitrary filenames, as
described in Sec.\,\ref{sec:InputFileFormat}. For each input file in
the list, the wrapper file first calls {\texttt{HDECAY}} in a
so-called minimal run, technically by calling {\texttt{HDECAY}} in the
subfolder {\texttt{\$2HDECAY/HDECAY}} with an additional flag ``1'':
\begin{lstlisting}[numbers=none,language=bash,frame=single,backgroundcolor=\color{mygray}]
run 1
\end{lstlisting}
With this flag, {\texttt{HDECAY}} reads the selected input file from the input
file list and uses the input values only to convert the
$\overline{\text{MS}}$ values of the $c$- and $b$-quark masses, as
given in the input file, to the corresponding pole masses, but no other computations are performed at
this step.
The wrapper file then calls the subprogram
{\texttt{electroweakCorrections}}, which reads the selected input file
as well as the OS values of the quark masses. With these input values,
the full EW one-loop corrections are calculated for all decays that
are kinematically allowed, as described in
Sec.\,\ref{sec:decayProcessesAtLOandNLO}, and the value of $G_F^{\text{calc}}$
at the $Z$ mass is calculated, as described in
Sec.\,\ref{sec:connectionHDECAY}. Subsequently, a temporary new input
file is created, which consists of a copy of the selected input file
with the calculated OS quark masses, the calculated value of
$G_F^{\text{calc}}$ and all EW corrections being appended.
Lastly, the wrapper file calls {\texttt{HDECAY}} without the minimal
flag. In this configuration, {\texttt{HDECAY}} reads the temporary
input file and calculates the LO widths and QCD corrections to the
decays. Moreover, the program calculates off-shell decay widths
as well as the loop-induced decays to final-state pairs
of gluons or photons and $Z \gamma$. Furthermore, the branching ratios
are calculated by {\texttt{HDECAY}}. The results of these computations
are consistently combined with the electroweak corrections, as
described in Sec.\,\ref{sec:connectionHDECAY}. The results are saved in an
output file in the {\texttt{\$2HDECAY/Results}} subfolder.
The wrapper file repeats these steps for each file in the input file
list until the end of the list is reached.
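For orientation, the control flow of the wrapper can be summarized by
the following strongly simplified {\texttt{Python}} sketch; the helper
calls and command-line arguments shown here are illustrative
assumptions and do not reproduce the actual code of
{\texttt{2HDECAY.py}}:
\begin{lstlisting}[language=Python,frame=single,backgroundcolor=\color{mygray},numbers=none]
import os
import subprocess

# List of user-provided input files in $2HDECAY/Input:
input_dir = os.path.join(os.getcwd(), "Input")
input_files = sorted(os.listdir(input_dir))

for input_file in input_files:
    print("Processing", input_file)
    # Step 1: minimal HDECAY run ("run 1"): convert the MSbar c- and
    # b-quark masses of the input file to the corresponding pole masses.
    subprocess.check_call(["./run", "1"], cwd="HDECAY")
    # Step 2: compute the EW one-loop corrections and G_F^calc; they are
    # appended to a temporary copy of the selected input file.
    subprocess.check_call(["./electroweakCorrections"])
    # Step 3: full HDECAY run on the temporary input file; QCD and EW
    # corrections are combined and the results are written to Results/.
    subprocess.check_call(["./run"], cwd="HDECAY")
\end{lstlisting}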
\subsection{Usage}
Before running the program, the user should check that all input files
for which the computation shall be performed are stored in the
subfolder {\texttt{\$2HDECAY/Input}}. The input files have to be
formatted exactly as described in Sec.\,\ref{sec:InputFileFormat} or
otherwise the input values are not read in correctly and the program
might crash with a segmentation error. The exemplary input file
printed in App.\,\ref{sec:AppendixInputFile} that is part of the
{\texttt{2HDECAY}} repository can be used as a template for generating
other input files in order to avoid formatting problems.
The user should check the output subfolder
{\texttt{\$2HDECAY/Results}} for any output files of previous runs of
{\texttt{2HDECAY}}. These previously created output files are
overwritten if in a new run input files with the same names as the
already stored output files are used. Hence, the user is advised to
create backups of the output files before starting a new run of
{\texttt{2HDECAY}}.
In order to run the program, open a terminal, navigate to the
{\texttt{\$2HDECAY}} folder and execute the following command:
\begin{lstlisting}[numbers=none,language=bash,frame=single,backgroundcolor=\color{mygray}]
python 2HDECAY.py
\end{lstlisting}
If {\texttt{2HDECAY}} was installed correctly according to
Sec.\,\ref{sec:Installation} and if the input files have the correct
format, the program should now compute the EW and/or QCD
corrections according to the flowchart shown in
\figref{fig:flowchart2HDECAY}. Several intermediate results and
information about the computation are printed in the terminal. As
soon as the computation for all input files is done,
{\texttt{2HDECAY}} is terminated and the resulting output files can be
found in the {\texttt{\$2HDECAY/Results}} subfolder.
\subsection{Output File Format}
\label{sec:OutputFileFormat}
For each input file, two output files with the suffixes '\_BR'
and '\_EW' for the branching ratios and electroweak partial decay
widths, respectively, are generated in an SLHA format, as described in
Sec.~\ref{sec:connectionHDECAY}. The SLHA output
format \cite{Skands:2003cj,Allanach:2008qq,Mahmoudi:2010iz} in its strict and original
sense has only been designed for supersymmetric models. We have
modified the format to account for the EW corrections that are implemented in
{\texttt{2HDECAY}} in the 2HDM. As a reference for the following description,
exemplary output files are given in
App.\,\ref{sec:AppendixOutputFile}. These modified SLHA output files are
only generated if {\texttt{OMIT ELW2=0}} is set in the input file,
{\it i.e.}\,only if the electroweak corrections to the 2HDM decays are
taken into account. In the following we describe the changes that we
have applied.
The first block {\texttt{BLOCK DCINFO}} contains basic information
about the program itself, while the subsequent three blocks
{\texttt{SMINPUTS}}, {\texttt{2HDMINPUTS}} and {\texttt{VCKMIN}}
contain the input parameters used for the calculation that were
already described in Sec.\,\ref{sec:InputFileFormat}. As explained in
Sec.\,\ref{sec:connectionHDECAY}, the value of $G_F$ printed in the
output file is not necessarily the same as the one given in the input
file if {\texttt{OMIT ELW2=0}} is set, since in this case, $G_F$ is
calculated from the input value $\alpha _\text{em} (m_Z^2)$
instead, and this value is then given out. These four blocks are
given out in both output files.
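For orientation, the header of each output file thus has an SLHA-like
shape of roughly the following form (comments abbreviated and purely
illustrative; the complete files are reproduced in
App.\,\ref{sec:AppendixOutputFile}):
\begin{lstlisting}[numbers=none,frame=single,backgroundcolor=\color{mygray}]
BLOCK DCINFO     # program information
BLOCK SMINPUTS   # Standard Model inputs
BLOCK 2HDMINPUTS # 2HDM inputs
BLOCK VCKMIN     # CKM matrix inputs
\end{lstlisting}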
In the output file containing the branching ratios, indicated by the
suffix '\_BR', subsequently two blocks follow for
each Higgs boson ($h,H,A$ and
$H^\pm$). They are called {\texttt{DECAY QCD}} and {\texttt{DECAY
QCD\&EW}}.
The block {\texttt{DECAY QCD}} contains the total decay
width, the mixing angles $\alpha$, $\beta$,
the $\overline{\mbox{MS}}$ parameter
$m_{12}^2$\footnote{Note that they differ from
the input values if $\mu_R \ne \mu_{\text{out}}$ or if the
reference/input scheme is different from the renormalization scheme in
which the decays are evaluated.}, and the
branching ratios of the decays of the respective Higgs boson, as
implemented in {\tt HDECAY}. These are in
particular the LO (loop-induced for the $gg$, $\gamma\gamma$ and
$Z\gamma$ final states) decay widths including the relevant
state-of-the-art QCD corrections where applicable ({\it
cf.}~\cite{DJOUADI199856,Djouadi:2018xqq} for further details).
For decays into heavy quarks, massive vector bosons, neutral Higgs
pairs, as well as gauge and Higgs boson final states,
off-shell decays are also computed if necessary. We want to emphasize
again that the partial and total decay widths differ from the ones of the original {\tt
HDECAY} version if {\texttt{OMIT ELW2=0}} is set, as for
consistency with the computed EW corrections in this case the {\tt
HDECAY} decay widths are rescaled by $G_F^\text{calc}/G_F$, as
explained in Sec.~\ref{sec:connectionHDECAY}. If {\texttt{OMIT
ELW2=1}} is set, no EW corrections are computed and the {\tt HDECAY}
decay widths are computed with $G_F$ as in the original {\tt HDECAY} version.
The block {\texttt{DECAY
QCD\&EW}} contains the total decay width, the
mixing angles $\alpha$, $\beta$, the $\overline{\mbox{MS}}$
parameter $m_{12}^2$, and the
branching ratios of the respective Higgs boson including both the QCD
corrections (provided by {\tt
HDECAY}) and the EW corrections (computed by {\tt 2HDECAY}) to the
LO decay widths. Note that the
LO decay widths are also computed by {\tt 2HDECAY}. As an additional
cross-check, we internally compare the respective {\tt HDECAY} LO decay
width (rescaled by $G_F^\text{calc}/G_F$ and calculated with OS masses
for this comparison) with the one computed by
{\tt 2HDECAY}. If they differ (which they should not), a warning is
printed on the screen. As described in
Sec.\,\ref{sec:connectionHDECAY}, we emphasize again that the EW corrections are
calculated and included only for OS decay channels that are
kinematically allowed and for non-loop-induced decays. Therefore, some
of the branching ratios given out may be QCD-, but not
EW-corrected. The total decay width given out in this block is
the sum of all accordingly computed partial decay widths.
The last block at the end of the file with the branching ratios
contains the QCD-corrected branching ratios of the top-quark calculated in the 2HDM.
It is required for the computation of the Higgs decays into final states
with an off-shell top.
In the output file with the EW corrected NLO decay widths, indicated
by the suffix '\_EW', the first four blocks
described above are instead followed by the two blocks {\texttt{LO DECAY WIDTH}} and
{\texttt{NLO DECAY WIDTH}} for each Higgs boson ($h,H,A$ and
$H^\pm$). In these blocks, the partial decay widths at
LO and including the one-loop EW corrections are given out,
respectively, together with the
mixing angles $\alpha$, $\beta$ and the $\overline{\mbox{MS}}$
parameter $m_{12}^2$. These values of the widths
are particularly useful for studies of
the relative size of the EW corrections and for studying the
renormalization scheme dependence of the EW corrections. This allows
for a rough estimate of the remaining theoretical error due to missing higher-order
EW corrections. Since the EW corrections are calculated only for OS
decays and additionally only for decays that are not loop-induced,
these two blocks do not contain all final states written out in the
blocks {\texttt{DECAY QCD}} and {\texttt{DECAY
QCD\&EW}}. Hence, depending on the input values that are
chosen, it can happen that the two blocks {\texttt{DECAY QCD}} and {\texttt{DECAY
QCD\&EW}} contain decays that are not printed out in the blocks
{\texttt{LO DECAY WIDTH}} and {\texttt{NLO DECAY WIDTH}}, since for the calculation of
the branching ratios, off-shell and loop-induced decays are considered
by {\texttt{HDECAY}} as well.
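For instance, the loop-induced decays into the $gg$, $\gamma\gamma$
and $Z\gamma$ final states appear in the blocks {\texttt{DECAY QCD}}
and {\texttt{DECAY QCD\&EW}}, but not in the blocks
{\texttt{LO DECAY WIDTH}} and {\texttt{NLO DECAY WIDTH}}.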
\section{Summary}
\label{sec:summary}
We have presented the program package {\texttt{2HDECAY}} for the
calculation of the Higgs boson decays in the 2HDM. The tool computes the
NLO EW corrections to all 2HDM Higgs boson decays into OS final states
that are not loop-induced. The user can choose among 17 different
renormalization schemes that have been specified in the manual.
They are based on different renormalization schemes for the mixing
angles $\alpha $ and $\beta$, an $\overline{\text{MS}}$ condition for
the soft-$\mathbb{Z}_2$-breaking scale $m_{12}^2$ and an OS scheme for
all other counterterms and wave function renormalization constants of
the 2HDM necessary for calculating the EW corrections.
The EW corrections are combined with the state-of-the-art QCD
corrections obtained from {\texttt{HDECAY}}. The EW\&QCD-corrected
total decay widths and branching ratios are given out in an
SLHA-inspired output file format. Moreover, the tool provides
separately an SLHA-inspired output for the LO and
EW NLO partial decay widths to all OS and non-loop-induced
decays. This separate output enables {\it e.g.}~an
efficient analysis of the size of the EW corrections in the 2HDM or
the comparison with the relative
EW corrections in the MSSM as a SUSY benchmark model. The
implementation of several different renormalization schemes
additionally allows for the investigation of the numerical effects of
the different schemes and an estimate of the residual theoretical
uncertainty due to missing higher-order EW
corrections. For a consistent estimate of this error,
an automatic parameter conversion routine is implemented,
performing the automatic conversion of the input
values of $\alpha$, $\beta$ and $m_{12}^2$ from a reference scheme to
all other renormalization schemes that are implemented, as well as
from the $\overline{\text{MS}}$ input renormalization scale $\mu _R$
to the renormalization scale $\mu _\text{out}$ at which the partial
decay widths are evaluated. Being fast, our new
tool enables efficient phenomenological studies of the 2HDM Higgs
sector at high precision. The latter is necessary to reveal indirect
new physics effects in the Higgs sector and to identify the true
underlying model in case of the discovery of additional Higgs bosons. This
brings us closer to our goal of understanding electroweak symmetry
breaking and deciphering the physics puzzle in fundamental particle
physics.
\subsection*{Acknowledgments}
The authors thank David Lopez-Val and Jonas M\"{u}ller for
independently cross-checking some of the analytic results derived for
this work. The authors express gratitude to David Lopez-Val for his
endeavors on debugging the early alpha versions of {\texttt{2HDECAY}}
and to Stefan Liebler and Florian Staub for helpful discussions concerning the real corrections to the decays. The authors thank Ansgar Denner, Stefan Dittmaier and Jean-Nicolas Lang for helpful discussions and for providing the analytic results of their mixing angle counterterms to us for the implementation in {\texttt{2HDECAY}}. MK
and MM acknowledge financial support from the DFG project ``Precision
Calculations in the Higgs Sector - Paving the Way to the New Physics
Landscape'' (ID: MU 3138/1-1).
\begin{appendix}
\section{Exemplary Input File}
\label{sec:AppendixInputFile}
In the following, we present an exemplary input file
{\texttt{2hdecay.in}} as it is included in the subfolder
{\texttt{\$2HDECAY/Input}} in the {\texttt{2HDECAY}} repository. The
first integer in each line represents the line number and is not part
of the actual input file, but printed here for convenience. The
meaning of the input parameters is specified in
Sec.\,\ref{sec:InputFileFormat}. In comparison to the input file
format of the unmodified {\texttt{HDECAY}}
program \cite{DJOUADI199856,Djouadi:2018xqq}, the lines 6, 26, 28, 58, 59, 63 and 64 are new, but the rest of the input file format is unchanged. We
want to emphasize again that the value {\texttt{GFCALC}} in the input
file is overwritten by the program and thus not an input value that
is provided by the user, but it is calculated by {\texttt{2HDECAY}}
internally. The sample 2HDM parameter point has been checked
against all relevant theoretical and experimental constraints. In
particular it features a SM-like Higgs boson with a mass of
125.09~GeV which is given by the lightest CP-even neutral Higgs
boson $h$. For details on the applied constraints, we refer to
Refs.~\cite{Basler:2016obg,Muhlleitner:2017dkd}.
\lstinputlisting{2hdecay.in}
\section{Exemplary Output Files}
\label{sec:AppendixOutputFile}
In the following, we present exemplary output files
{\texttt{2hdecay\_BR.out}} and {\texttt{2hdecay\_EW.out}} as they
are generated from the sample input file
{\texttt{2hdecay.in}} and included in the subfolder \\
{\texttt{\$2HDECAY/Results}} in the
{\texttt{2HDECAY}} repository. The suffixes ``\_BR'' and ``\_EW''
stand for the branching ratios and electroweak partial decay widths,
respectively. The first integer in each line represents the line
number and is not part of the actual output file, but printed here for
convenience. The output file format is explained in detail in
Sec.\,\ref{sec:OutputFileFormat}. The exemplary output file was
generated for a specific choice of the renormalization scheme, {\it i.e.}~we
have set {\texttt{RENSCHEM = 7}} in line 58 of the input file,
{\it cf.}~App.\,\ref{sec:AppendixInputFile}. For {\texttt{RENSCHEM = 0}},
the output file becomes considerably longer, since the electroweak
corrections are calculated for all 17 implemented renormalization
schemes. We chose {\texttt{REFSCHEM = 5}} and
{\texttt{INSCALE = 125.09D0}}. This means that the input values for
$\alpha$ and $\beta$ are understood to be given in the renormalization
scheme 5 and the scale at which $\alpha$, $\beta$ and the
$\overline{\mbox{MS}}$ parameter $m_{12}^2$ are defined is equal to 125.09~GeV.
\subsection{Exemplary Output File for the Branching Ratios}
The exemplary output file {\texttt{2hdecay\_BR.out}} contains the
branching ratios without and with the electroweak corrections. The
content of the file is presented in the following.
\lstinputlisting{2hdecay_BR.out}
In the following, we make some comments on the output
files that partly pick up hints and caveats made in the main text of
the manual.
As can be inferred from the output, we give for the decays of each
Higgs boson the values of $\alpha$, $\beta$ and $m_{12}^2$. These
values differ from the input values and vary from one Higgs boson to
another, since we have to perform the parameter conversion from the
input reference scheme 5 to the renormalization scheme 7, and since,
having set {\texttt{OUTSCALE = MIN}}, we use for the loop-corrected
widths the renormalization scale given by the mass of the decaying
Higgs boson, while the input values for these parameters are
understood to be given at the mass of the SM-like Higgs boson.
Furthermore notice that indeed the branching ratios of
the lightest CP-even Higgs boson $h$ are SM-like. All branching ratios
presented in the blocks {\texttt{DECAY QCD}} can be compared to the
ones generated by the program code {\texttt{HDECAY}} version 6.52. The
user will notice that the partial widths related to the branching
ratios generated by {\texttt{2HDECAY}} and {\texttt{HDECAY}},
respectively, differ due to
the rescaling factor $G_F^\text{calc}/G_F =
1.026327$, which is applied in {\texttt{2HDECAY}}
for the consistent
combination of the EW-corrected decay widths with the decay widths generated by
{\texttt{HDECAY}}. Be aware that the rescaling factor enters the
loop-induced decay into $Z\gamma$ and the off-shell decays
non-linearly. This is why the branching ratios given here also
differ from the ones generated by {\texttt{HDECAY\,6.52}}.
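For instance, for a decay channel in which the rescaling factor enters
linearly, a partial width of $1$~MeV as computed by the original
{\texttt{HDECAY}} corresponds to a rescaled width of approximately
$1.0263$~MeV in {\texttt{2HDECAY}} for this parameter point.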
The comparison furthermore shows an additional difference
between the decay widths for the heavy CP-even Higgs boson $H$ into
massive vector bosons, $\Gamma (H \to VV)$ ($V=W,Z$), of around
2-3\%. The reason is that {\texttt{HDECAY}} always computes these
decay widths using the double off-shell formula, while
{\texttt{2HDECAY}} uses the on-shell formula for Higgs boson masses
above the threshold. Let us also note some phenomenological features of the chosen
parameter point. The $H$ boson with a mass of 382~GeV is heavy enough
to decay on-shell into $WW$ and $ZZ$,
and also into the 2-Higgs boson final state $hh$. It decays off-shell into
$AA$ and the gauge plus Higgs boson final state $ZA$ with branching
ratios of ${\cal O}(10^{-10})$ and ${\cal O}(10^{-4})$,
respectively. The pseudoscalar with a mass of 351~GeV decays on-shell
into the gauge plus Higgs boson final state $Zh$ with a branching ratio
at the per cent level. The charged Higgs boson has a mass of 414~GeV
allowing it to decay on-shell into the gauge plus Higgs boson final
state $W^+ h$ with a branching ratio at the per cent level. It decays
off-shell into the final states $W^+ H$ and $W^+ A$ with branching
ratios of ${\cal O}(10^{-5})$ and ${\cal O}(10^{-3})$,
respectively.
\subsection{Exemplary Output File for the Electroweak Partial Decay Widths}
The exemplary output file {\texttt{2hdecay\_EW.out}} contains the LO
and electroweak NLO partial decay widths. The content of the file is
presented in the following.
\lstinputlisting{2hdecay_EW.out}
The inspection of the output file shows that the EW
corrections reduce the $h$ decay widths, and the
relative NLO EW corrections,
$\Delta^{\text{EW}} = (\Gamma^{\text{EW}} -
\Gamma^{\text{LO}})/\Gamma^{\text{LO}}$, range between -6.3 and
-2.2\% for the decays $\Gamma(h \to \mu^+\mu^-)$ and $\Gamma(h\to
s\bar{s})$, respectively. Regarding $H$, the corrections can both
enhance and reduce the decay widths. The relative corrections range
between -11.5 and 27.7\% for the decays $\Gamma(H\to \mu^+ \mu^-)$
and $\Gamma(H \to hh)$, respectively. The relative corrections to
the $A$ decay widths vary between -31.2 and 0.3\% for the decays $\Gamma(A\to Zh)$
and $\Gamma(A \to t\bar{t})$, respectively. Those for the $H^\pm$
decays range between -20.6 and 11.1\% for the decays $\Gamma(H^+ \to u\bar{b})$
and $\Gamma(H^+ \to W^+h)$, respectively. The EW corrections (for the
renormalization scheme number 7) of the
chosen parameter point can hence be sizeable. Finally, note also
that LO and NLO EW-corrected decay widths are given out for
on-shell and non-loop-induced decays only.
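As a minimal illustration of how these numbers can be processed
further, the relative correction defined above can be obtained from
any pair of corresponding LO and NLO widths read off the '\_EW' file;
the width values in the following {\texttt{Python}} snippet are made
up for illustration:
\begin{lstlisting}[language=Python,frame=single,backgroundcolor=\color{mygray},numbers=none]
def relative_ew_correction(gamma_lo, gamma_nlo):
    # Delta^EW = (Gamma^EW - Gamma^LO) / Gamma^LO
    return (gamma_nlo - gamma_lo) / gamma_lo

# Illustrative width values in GeV:
print(relative_ew_correction(1.0e-3, 0.937e-3))  # -0.063, i.e. -6.3%
\end{lstlisting}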
\end{appendix}
\section{Introduction}
Stability conditions in derived categories provide a framework for the study of moduli spaces of complexes of sheaves. They were introduced in \cite{Bri07:stability_conditions}, with inspiration from work in string theory \cite{Dou02:mirror_symmetry}. It turns out that the theory has a far wider reach. A non-exhaustive list of influenced areas includes counting invariants, the geometry of moduli spaces of sheaves, representation theory, homological mirror symmetry, and classical algebraic geometry.
This article will focus on the basic theory of Bridgeland stability on smooth projective varieties and give some applications to the geometry of moduli spaces of sheaves. We pay particular attention to the case of complex surfaces.
\medskip
\noindent
{\bf Stability on curves.}
The theory starts with vector bundles on curves.
We give an overview of the classical theory in Section \ref{sec:curves}.
Let $C$ be a smooth projective curve.
In order to obtain a well-behaved moduli space one has to restrict oneself to so-called \emph{semistable} vector bundles. Any vector bundle $E$ has a \emph{slope} defined as $\mu(E) = \tfrac{d(E)}{r(E)}$, where $d(E)$ is the \emph{degree} and $r(E)$ is the \emph{rank}. The bundle $E$ is called semistable if the inequality $\mu(F) \leq \mu(E)$ holds for all sub-bundles $F \subset E$.
The key properties of this notion that one wants to generalize to higher dimensions are the following.
Let $\ensuremath{\mathcal A} = \mathop{\mathrm{Coh}}\nolimits(C)$ denote the category of coherent sheaves.
One can recast the information of rank and degree as an additive homomorphism
\begin{equation*}
Z: K_0(C) \to \ensuremath{\mathbb{C}}, \, \, v \mapsto -d(v) + \sqrt{-1}\, r(v),
\end{equation*}
where $K_0(C)$ denotes the Grothendieck group of $C$, generated by classes of vector bundles.
Then:
\begin{enumerate}
\item For any $E \in \ensuremath{\mathcal A}$, we have $\Im Z(E) \geq 0$.
\item\label{enum:Intro1} If $\Im Z(E) = 0$ for some non-trivial $E \in \ensuremath{\mathcal A}$, then $\Re Z(E) < 0$.
\item For any $E \in \ensuremath{\mathcal A}$ there is a filtration \[
0 = E_0 \subset E_1 \subset \ldots \subset E_{n-1} \subset E_n = E
\]
of objects $E_i \in \ensuremath{\mathcal A}$ such that $A_i = E_i/E_{i-1}$ is semistable for all $i = 1, \ldots, n$ and $\mu(A_1) > \ldots > \mu(A_n)$.
\end{enumerate}
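As a simple illustration, take $C = \ensuremath{\mathbb{P}}^1$ and $E = \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(1) \oplus \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}$. Then $d(E) = 1$ and $r(E) = 2$, so that $Z(E) = -1 + \sqrt{-1}\, 2$, and the filtration
\[
0 \subset \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(1) \subset E
\]
has semistable factors $A_1 = \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(1)$ and $A_2 = \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}$ with $\mu(A_1) = 1 > 0 = \mu(A_2)$. In particular, $E$ itself is not semistable, since the sub-bundle $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(1)$ has slope $1 > \mu(E) = \tfrac{1}{2}$.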
\medskip
\noindent
{\bf Higher dimensions.}
The first issue one has to deal with is that, if one asks for the same properties to hold for coherent sheaves on a higher-dimensional smooth projective variety $X$, it is not hard to see that property \eqref{enum:Intro1} cannot be achieved (by any possible group homomorphism $Z$).
The key idea is then to change the category in which to define stability.
The bounded derived category of coherent sheaves $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ contains many full abelian subcategories with similar properties as $\mathop{\mathrm{Coh}}\nolimits(X)$ known as \emph{hearts of bounded t-structures}.
A \emph{Bridgeland stability condition} on $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ is such a heart $\ensuremath{\mathcal A} \subset \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ together with an additive homomorphism $Z: K_0(X) \to \ensuremath{\mathbb{C}}$ satisfying the three properties above (together with a technical condition, called the \emph{support property}, which will be fundamental for the deformation properties below).
The precise definition is in Section \ref{sec:bridgeland_stability}.
Other classical generalizations of stability on curves, such as slope stability or Gieseker stability (see Section \ref{sec:generalizations}), do not directly fit into the framework of Bridgeland stability. However, for most known constructions, their moduli spaces still appear as special cases of moduli spaces of Bridgeland stable objects. We will explain this in the case of surfaces.
\medskip
\noindent
{\bf Bridgeland's deformation theorem.}
The main theorem in \cite{Bri07:stability_conditions} (see Theorem \ref{thm:BridgelandMain}) is that the set of stability conditions $\mathop{\mathrm{Stab}}\nolimits(X)$ can be given the structure of a complex manifold in such a way that the map $(\ensuremath{\mathcal A}, Z) \mapsto Z$ which forgets the heart is a local homeomorphism.
For general $X$ it is not even known whether $\mathop{\mathrm{Stab}}\nolimits(X) \neq \emptyset$. However, if $\mathop{\mathrm{dim}}\nolimits X = 2$, the situation is much better understood. In Section \ref{sec:surfaces} we construct stability conditions $\sigma_{\omega, B}$ for each choice of an ample $\ensuremath{\mathbb{R}}$-divisor class $\omega$ and an arbitrary $\ensuremath{\mathbb{R}}$-divisor class $B$. This construction originated in \cite{Bri08:stability_k3} in the case of K3 surfaces. Arcara and Bertram realized in \cite{AB13:k_trivial} that the construction can be generalized to any surface by using the Bogomolov Inequality.
The proof of the support property is in \cite{BM11:local_p2,BMT14:stability_threefolds,BMS14:abelian_threefolds}.
As a consequence, these stability conditions vary continuously in $\omega$ and $B$.
\medskip
\noindent
{\bf Moduli spaces.}
If we fix a numerical class $v$, it turns out that semistable objects with class $v$ vary nicely within $\mathop{\mathrm{Stab}}\nolimits(X)$ due to \cite{Bri08:stability_k3}. More precisely, there is a locally finite wall and chamber structure such that the set of semistable objects with class $v$ is constant within each chamber.
In the case of surfaces, as mentioned before, there will be a chamber where Bridgeland semistable objects of class $v$ are exactly (twisted) Gieseker semistable sheaves.
The precise statement is Corollary \ref{cor:LargestWallExists}.
The first easy applications of the wall and chamber structure are in Section \ref{sec:applications}.
The next question is about moduli spaces.
Stability of sheaves on curves (or more generally, Gieseker stability on higher dimensional varieties) is associated to a GIT problem.
This guarantees that moduli spaces exist as projective schemes.
For Bridgeland stability, there is no natural GIT problem associated to it.
Hence, the question of existence and the fact that moduli spaces are indeed well-behaved is not clear in general.
Again, in the surface case, for the stability conditions $\sigma_{\omega,B}$, it is now known \cite{Tod08:K3Moduli} that moduli spaces exist as Artin stacks of finite type over $\ensuremath{\mathbb{C}}$. For some particular surfaces, it is also known that coarse moduli spaces parameterizing S-equivalence classes of semistable objects exist, and that they are projective varieties.
We review this in Section \ref{subsec:ModuliSurfaces}.
\medskip
\noindent
{\bf Birational geometry of moduli spaces of sheaves on surfaces.}
The birational geometry of moduli spaces of sheaves on surfaces has been heavily studied by using wall-crossing techniques in Bridgeland's theory. Typical questions are what their nef and effective cones are and what the stable base locus decomposition of the effective cone is. The case of $\ensuremath{\mathbb{P}}^2$ was studied in many articles such as \cite{ABCH13:hilbert_schemes_p2, CHW14:effective_cones_p2, LZ13:stability_p2, LZ16:NewStabilityP2, Woo13:torsion_sheaves_p2}. The study of the abelian surfaces case started in \cite{MM13:stability_abelian_surfaces} and was completed in \cite{MYY14:stability_k_trivial_surfaces,YY14:stability_abelian_surfaces}. The case of K3 surfaces was handled in \cite{MYY14:stability_k_trivial_surfaces,BM14:projectivity, BM14:stability_k3}. Enriques surfaces were studied in \cite{Nue14:stability_enriques}. We will showcase some of the techniques in Section \ref{sec:nef} by explaining how to compute the nef cone of the Hilbert scheme of $n$ points for surfaces of Picard rank one if $n \gg 0$. In this generality it was done in \cite{BHLRSW15:nef_cones} and then later generalized to moduli of vector bundles with large discriminant in \cite{CH15:nef_cones}.
The above proofs are based on the so called \emph{Positivity Lemma} from \cite{BM14:projectivity} (see Theorem \ref{thm:positivity_lemma}).
Roughly, the idea is that to any Bridgeland stability condition $\sigma$ and to any family $\ensuremath{\mathcal E}$ of $\sigma$-semistable objects parameterized by a proper scheme $S$, there is a nef divisor class $D_{\sigma,\ensuremath{\mathcal E}}$ on $S$.
Moreover, if there exists a curve $C$ in $S$ such that $D_{\sigma,\ensuremath{\mathcal E}}.C=0$, then all objects $\ensuremath{\mathcal E}_{|X \times \{c\}}$ are S-equivalent, for all $c\in C$.
In examples, the divisor class will induce an ample divisor class on the moduli space of stable objects. Hence, we can use the Positivity Lemma in two ways: as long as we move within a chamber in the space of stability conditions, this gives us a subset of the ample cone of the moduli space.
Once we hit a wall, we can detect whether we have also reached a boundary of the nef cone by finding a curve of $\sigma$-stable objects in a chamber which becomes properly semistable on the wall.
\medskip
\noindent
{\bf Bridgeland stability for threefolds.}
As mentioned before, the original motivation for Bridgeland stability comes from string theory. In particular, it requires the construction of stability conditions on Calabi-Yau threefolds. It is an open problem to even find a single example of a stability condition on a simply connected projective Calabi-Yau threefold where skyscraper sheaves are stable (examples where they are semistable are in \cite{BMS14:abelian_threefolds}). Most successful attempts on threefolds trace back to a conjecture in \cite{BMT14:stability_threefolds}. In the surface case the construction is based on the classical Bogomolov inequality for Chern characters of semistable vector bundles. By analogy, a generalized Bogomolov inequality for threefolds involving the third Chern character, which allows one to construct Bridgeland stability conditions, was conjectured in \cite{BMT14:stability_threefolds}. In \cite{Sch16:counterexample} it was shown that this conjectural inequality needs to be modified, since it does not hold for the blow-up of $\ensuremath{\mathbb{P}}^3$ in a point.
There are though many cases in which the original inequality is true. The first case was $\ensuremath{\mathbb{P}}^3$ in \cite{BMT14:stability_threefolds,Mac14:conjecture_p3}. A similar argument was then successfully applied to the smooth quadric hypersurface in $\ensuremath{\mathbb{P}}^4$ in \cite{Sch14:conjecture_quadric}. The case of abelian threefolds was independently proved in \cite{MP15:conjecture_abelian_threefoldsI,MP16:conjecture_abelian_threefoldsII} and \cite{BMS14:abelian_threefolds}. Moreover, as pointed out in \cite{BMS14:abelian_threefolds}, this also implies the case of \'etale quotients of abelian threefolds and gives the existence of Bridgeland stability condition on orbifold quotients of abelian threefolds (this includes examples of Calabi-Yau threefolds which are simply-connected). The latest progress is the proof of the conjecture for all Fano threefolds of Picard rank one in \cite{Li15:conjecture_fano_threefold} and a proof of a modified version for all Fano threefolds independently in \cite{BMSZ16:conjecture_fano_threefolds} and \cite{Piy16:conjecture_fano_threefolds}.
Once stability conditions exist on threefolds, it is interesting to study moduli spaces therein and which geometric information one can get by varying stability.
For projective space this approach has led to first results in \cite{Sch15:stability_threefolds, Xia16:twisted_cubics, GHS16:elliptic_quartics}.
\medskip
\noindent
{\bf Structure of the notes.}
In Section \ref{sec:curves} we give a very light introduction to stability of vector bundles on curves. This section serves mainly as motivation and is logically independent of the remaining notes. Therefore, it can safely be skipped if the reader wishes to do so. In Section \ref{sec:generalizations}, we discuss classical generalizations of stability from curves to higher dimensional varieties. Moduli spaces arising from those are oftentimes of classical interest, and connecting them to Bridgeland stability is usually key. In Section \ref{sec:stability_abelian} and Section \ref{sec:bridgeland_stability} we give a full definition of what a Bridgeland stability condition is and prove or point out important basic properties. In Section \ref{sec:surfaces}, we demonstrate the construction of stability on smooth projective surfaces. Section \ref{sec:applications} and Section \ref{sec:nef} are about concrete examples. We show how to compute the nef cone for Hilbert schemes of points in some cases, how one can use Bridgeland stability to prove Kodaira vanishing on surfaces, and discuss some further questions on possible applications to projective normality for surfaces. The last Section \ref{sec:P3} is about threefolds. We explain the construction of stability conditions on $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(\ensuremath{\mathbb{P}}^3)$. As an application we point out how Castelnuovo's classical genus bound for non-degenerate curves turns out to be a simple consequence of the theory. In Appendix \ref{sec:derived_categories} we give some background on derived categories.
These notes contain plenty of exercises and we encourage the reader to do as many of them as possible. A lot of aspects of the theory that seem obscure and abstract at first turn out to be fairly simple in practice.
\medskip
\noindent
{\bf What is not in these notes.}
One of the main topics of current interest we do not cover in these notes is the ``local case'' (i.e., stability conditions on CY3 triangulated categories defined using quivers with potential; see, e.g., \cite{Bri06:non_compact, KS08:wall_crossing, BS15:quadratic_differentials}) or more generally stability conditions on Fukaya categories (see, e.g., \cite{DHKK14:dynamical_systems, HKK14:flat_surfaces, Joy15:conjectures_fukaya}).
Another fundamental topic is the connection with counting invariants;
for a survey we refer to \cite{Tod12:introduction_dt_theory, Tod12:stability_curve_counting, Tod14:icm}.
Connections to string theory are described for example in \cite{Asp05:d-branes, Bri06:icm, GMN13:hitchin}.
Connections to Representation Theory are in \cite{ABM15:stability}.
There is also a recent survey \cite{Hui16:stability_survey} focusing more on both the classical theory of semistable sheaves and concrete examples still involving Bridgeland stability. The note \cite{Bay16:BrillNoether} focuses instead on deep geometric applications of the theory (the classical Brill-Noether theorem for curves). The survey \cite{Huy11:intro_stability} focuses more on K3 surfaces and applications therein.
Finally, Bridgeland's deformation theorem is the topic of the excellent survey \cite{Bay11:lectures_notes_stability}, with a short proof recently appearing in \cite{Bay16:short_proof}.
\medskip
\noindent
{\bf Notation.}
\begin{center}
\begin{tabular}{ r l }
$G_k$ & the $k$-vector space $G\otimes k$ for a field $k$ and abelian group $G$ \\
$X$ & a smooth projective variety over $\ensuremath{\mathbb{C}}$ \\
$\ensuremath{\mathcal I}_Z$ & the ideal sheaf of a closed subscheme $Z \subset X$ \\
$\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ & the bounded derived category of coherent sheaves on $X$ \\
$\mathop{\mathrm{ch}}\nolimits(E)$ & the Chern character of an object $E \in D^b(X)$ \\
$K_0(X)$ & the Grothendieck group of $X$ \\
$K_{\mathop{\mathrm{num}}\nolimits}(X)$ & the numerical Grothendieck group of $X$ \\
$\mathop{\mathrm{NS}}\nolimits(X)$ & the N\'eron-Severi group of $X$ \\
$N^1(X)$ & $\mathop{\mathrm{NS}}\nolimits(X)_{\ensuremath{\mathbb{R}}}$ \\
$\mathop{\mathrm{Amp}}\nolimits(X)$ & the ample cone inside $N^1(X)$ \\
$\mathop{\mathrm{Pic}}\nolimits^d(C)$ & the Picard variety of lines bundles of degree $d$ on a smooth curve $C$
\end{tabular}
\end{center}
\medskip
\noindent
{\bf Acknowledgments.}
We would very much like to thank Benjamin Bakker, Arend Bayer, Aaron Bertram, Izzet Coskun, Jack Huizenga, Daniel Huybrechts, Mart\'i Lahoz, Ciaran Meachan, Paolo Stellari, Yukinobu Toda, and Xiaolei Zhao for very useful discussions and many explanations on the topics of these notes.
We are also grateful to Jack Huizenga for sharing a preliminary version of his survey article \cite{Hui16:stability_survey} with us and to the referee for very useful suggestions which improved the readability of these notes.
The first author would also like to thank very much the organizers of the two schools for the kind invitation and the excellent atmosphere, and the audience for many comments, critiques, and suggestions for improvement.
The second author would like to thank Northeastern University for the hospitality during the writing of this article. This work was partially supported by NSF grant DMS-1523496 and a Presidential Fellowship of the Ohio State University.
\section{Stability on Curves}\label{sec:curves}
The theory of Bridgeland stability conditions builds on the usual notions of stability for sheaves.
In this section we review the basic definitions and properties of stability for vector bundles on curves. Basic references for this section are \cite{New78:introduction_moduli,Ses80:vector_bundles_curves_book, Pot97:lectures_vector_bundles, HL10:moduli_sheaves}. Throughout this section $C$ will denote a smooth projective complex curve of genus $g \geq 0$.
\subsection{The projective line}
\label{subsec:P1}
The starting point is the projective line $\ensuremath{\mathbb{P}}^1$. In this case, vector bundles can be fully classified by using the following decomposition theorem, which is generally attributed to Grothendieck, who proved it in modern language. It was known much earlier, appearing in work of Dedekind-Weber, Birkhoff, Hilbert, among others (see \cite[Theorem 1.3.1]{HL10:moduli_sheaves}).
\begin{thm}
\label{thm:P1}
Let $E$ be a vector bundle on $\ensuremath{\mathbb{P}}^1$. Then there exist unique integers $a_1, \ldots, a_n \in \ensuremath{\mathbb{Z}}$ satisfying $a_1> \ldots >a_n$ and unique non-zero vector spaces $V_1, \ldots, V_n$ such that $E$ is isomorphic to
\[
E \cong \left(\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(a_1)\otimes_\ensuremath{\mathbb{C}} V_1\right) \oplus \ldots \oplus \left(\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(a_n) \otimes_\ensuremath{\mathbb{C}} V_n\right).
\]
\end{thm}
\begin{proof}
For the existence of the decomposition we proceed by induction on the rank $r(E)$. If $r(E)=1$ there is nothing to prove. Hence, we assume $r(E)>1$.
By Serre duality and Serre vanishing, for all $a \gg 0$ we have
\[
\mathop{\mathrm{Hom}}\nolimits(\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(a), E) = H^1(\ensuremath{\mathbb{P}}^1, E^\vee(a-2))^\vee = 0.
\]
We pick the largest integer $a \in \ensuremath{\mathbb{Z}}$ such that $\mathop{\mathrm{Hom}}\nolimits(\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(a), E) \neq 0$ and a non-zero morphism $\phi \in \mathop{\mathrm{Hom}}\nolimits(\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(a), E)$.
Since $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(a)$ is torsion-free of rank $1$, $\phi$ is injective.
Consider the exact sequence
\[
0 \to \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(a) \xrightarrow{\phi} E \to F=\mathop{\mathrm{Cok}}\nolimits(\phi) \to 0.
\]
We claim that $F$ is a vector bundle of rank $r(E)-1$. Indeed, by Exercise \ref{ex:TorsionExactSequence} below, if this is not the case, there is a subsheaf $T_F \ensuremath{\hookrightarrow} F$ supported in dimension $0$. In particular, we have a morphism from a skyscraper sheaf $\ensuremath{\mathbb{C}}(x) \ensuremath{\hookrightarrow} F$. Its preimage in $E$ is a rank $1$ subsheaf of a vector bundle, hence a line bundle, containing $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(a)$ with quotient $\ensuremath{\mathbb{C}}(x)$; it is therefore isomorphic to $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(a+1)$. But this gives a non-zero map $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(a+1) \to E$, contradicting the maximality of $a$.
By induction $F$ splits as a direct sum
\[
F \cong \bigoplus_j \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(b_j)^{\oplus r_j}.
\]
The second claim is that $b_j \leq a$, for all $j$.
If not, there is a non-zero morphism $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(a+1) \ensuremath{\hookrightarrow} F$.
Since $\mathop{\mathrm{Ext}}\nolimits^1(\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(a+1),\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(a)) = H^1(\ensuremath{\mathbb{P}}^1,\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(-1))=0$, this morphism lifts to a non-zero map $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(a+1)\to E$, contradicting again the maximality of $a$.
Finally, we obtain a decomposition of $E$ as direct sum of line bundles since
\[
\mathop{\mathrm{Ext}}\nolimits^1(\oplus_j \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(b_j)^{\oplus r_j},\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(a)) = \oplus_j H^0(\ensuremath{\mathbb{P}}^1,\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(b_j-a-2))^{\oplus r_j}=0.
\]
The uniqueness now follows from
\[
\mathop{\mathrm{Hom}}\nolimits(\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(a),\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(b))=0
\]
for all $a > b$.
\end{proof}
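As a quick illustration of the theorem, consider the evaluation (twisted Euler) sequence on $\ensuremath{\mathbb{P}}^1$,
\[
0 \to \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(-1) \to \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1} \otimes_\ensuremath{\mathbb{C}} H^0(\ensuremath{\mathbb{P}}^1, \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(1)) \to \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(1) \to 0.
\]
Since $\mathop{\mathrm{Ext}}\nolimits^1(\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(1), \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(-1)) = H^1(\ensuremath{\mathbb{P}}^1, \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(-2)) \cong \ensuremath{\mathbb{C}}$, this is, up to isomorphism, the unique non-split extension of $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(1)$ by $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(-1)$. Consistently with Theorem \ref{thm:P1}, the middle term $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}^{\oplus 2}$ is still a direct sum of line bundles, even though the sequence does not split.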
In higher genus, Theorem \ref{thm:P1} fails, but some of its features still hold. The direct sum decomposition will be replaced by a filtration (the \emph{Harder-Narasimhan filtration}) and the blocks $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(a)\otimes_\ensuremath{\mathbb{C}} V$ by \emph{semistable} sheaves. The ordered sequence $a_1>\ldots>a_n$ will be replaced by an analogously ordered sequence of \emph{slopes}. Semistable sheaves (of fixed rank and degree) can then be classified as points of a projective variety, the \emph{moduli space}. Finally, the uniqueness part of the proof will follow by an analogous vanishing for morphisms of semistable sheaves.
\begin{exercise}
\label{ex:TorsionExactSequence}
Show that any coherent sheaf $E$ on a smooth projective curve $C$ fits into a unique canonical exact sequence
\[
0 \to T_E \to E \to F_E \to 0,
\]
where $T_E$ is a torsion sheaf and $F_E$ is a vector bundle. Show additionally that this sequence splits (non-canonically).
\end{exercise}
\subsection{Stability}\label{subsec:StabilityCurves}
We start with the basic definition of slope stability.
\begin{defn}\label{def:SlopeCurves}
\begin{enumerate}
\item The \emph{degree} $d(E)$ of a vector bundle $E$ on $C$ is defined to be the degree of the line bundle $\bigwedge^{r(E)} E$.
\item The \emph{degree} $d(T)$ for a torsion sheaf $T$ on $C$ is defined to be the length of its scheme theoretic support.
\item The \emph{rank} of an arbitrary coherent sheaf $E$ on $C$ is defined as $r(E)=r(F_E)$, while its
\emph{degree} is defined as $d(E) = d(T_E) + d(F_E)$.
\item The \emph{slope} of a coherent sheaf $E$ on $C$ is defined as
\[
\mu(E) = \frac{d(E)}{r(E)},
\]
where dividing by $0$ is interpreted as $+\infty$.
\item A coherent sheaf $E$ on $C$ is called \emph{(semi)stable} if for any proper non-trivial subsheaf $F \subset E$ the inequality $\mu(F) < (\leq) \mu(E)$ holds.
\end{enumerate}
\end{defn}
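For example, for a point $p \in C$, the bundle $E = \ensuremath{\mathcal O}_C \oplus \ensuremath{\mathcal O}_C(p)$ has
\[
\mu(E) = \frac{d(E)}{r(E)} = \frac{1}{2}, \quad \text{while} \quad \mu(\ensuremath{\mathcal O}_C(p)) = 1,
\]
so the subsheaf $\ensuremath{\mathcal O}_C(p) \subset E$ violates the required inequality and $E$ is not semistable. On the other hand, every line bundle is stable, since any proper non-trivial subsheaf is a line bundle of strictly smaller degree.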
The terms in the previous definition are often only defined for vector bundles. The reason for this is the following exercise.
\begin{exercise}
\begin{enumerate}
\item Show that the degree is additive in short exact sequences, i.e., for any short exact sequence
\[
0 \to F \to E \to G \to 0
\]
in $\mathop{\mathrm{Coh}}\nolimits(C)$ the equality $d(E) = d(F) + d(G)$ holds.
\item If $E \in \mathop{\mathrm{Coh}}\nolimits(C)$ is semistable, then it is either a vector bundle or a torsion sheaf.
\item A vector bundle $E$ on $C$ is (semi)stable if and only if for all non-trivial subbundles $F \subset E$ with $r(F) < r(E)$ the inequality $\mu(F) < (\leq) \mu(E)$ holds.
\end{enumerate}
\end{exercise}
\begin{exercise}\label{ex:EllipticCurve}
Let $g=1$ and let $p\in C$ be a point.
Then
\[
\mathop{\mathrm{Ext}}\nolimits^1(\ensuremath{\mathcal O}_C,\ensuremath{\mathcal O}_C)=\ensuremath{\mathbb{C}} \text{ and } \mathop{\mathrm{Ext}}\nolimits^1(\ensuremath{\mathcal O}_C(p),\ensuremath{\mathcal O}_C)\cong \ensuremath{\mathbb{C}}.
\]
Hence, we have two non-trivial extensions:
\begin{align*}
& 0 \to \ensuremath{\mathcal O}_C \to V_0 \to \ensuremath{\mathcal O}_C \to 0\\
& 0 \to \ensuremath{\mathcal O}_C \to V_1 \to \ensuremath{\mathcal O}_C(p) \to 0
\end{align*}
Show that $V_0$ is semistable but not stable and that $V_1$ is stable.
\end{exercise}
A useful fact to notice is that whenever a non-zero sheaf $E \in \mathop{\mathrm{Coh}}\nolimits(C)$ has rank $0$, its degree is strictly positive. This is one of the key properties we want to generalize to higher dimensions. It also turns out to be useful for proving the following result, which generalizes Theorem \ref{thm:P1} to any curve.
\begin{thm}[\cite{HN74:hn_filtration}]
\label{thm:HNcurves}
Let $E$ be a non-zero coherent sheaf on $C$.
Then there is a unique filtration (called \emph{Harder-Narasimhan filtration})
\[
0 = E_0 \subset E_1 \subset \ldots \subset E_{n-1} \subset E_n = E
\]
of coherent sheaves such that $A_i = E_i/E_{i-1}$ is semistable for all $i = 1, \ldots, n$ and $\mu(A_1) > \ldots > \mu(A_n)$.
\end{thm}
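For instance, on $\ensuremath{\mathbb{P}}^1$ Theorem \ref{thm:P1} makes the Harder-Narasimhan filtration explicit: for
\[
E \cong \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(1) \oplus \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}^{\oplus 2} \oplus \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(-1)
\]
it is $0 \subset \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(1) \subset \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(1) \oplus \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}^{\oplus 2} \subset E$, with semistable factors $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(1)$, $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}^{\oplus 2}$, and $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(-1)$ of slopes $1 > 0 > -1$.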
We will give a proof of Theorem \ref{thm:HNcurves} in a more general setting in Proposition \ref{prop:hn_exists}. In the case of $\ensuremath{\mathbb{P}}^1$, semistable vector bundles are exactly the direct sums $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(a)^{\oplus r}$ of a single line bundle, and stable vector bundles are exactly the line bundles. Vector bundles on elliptic curves can also be classified (see \cite{Ati57:vector_bundles_elliptic_curves} for the original source and \cite{Pol03:abelian_varieties} for a modern proof):
\begin{ex}
Let $g = 1$, fix a pair $(r,d) \in \ensuremath{\mathbb{Z}}_{\geq 1} \times \ensuremath{\mathbb{Z}}$, let $E$ be a vector bundle with $r(E)=r$ and $d(E) = d$. Then
\begin{itemize}
\item $\mathop{{\mathrm{gcd}}}\nolimits(r,d)=1$: $E$ is semistable if and only if $E$ is stable if and only if $E$ is indecomposable (i.e., it cannot be decomposed as a non-trivial direct sum of vector bundles) if and only if $E$ is simple (i.e., $\mathop{\mathrm{End}}\nolimits(E)=\ensuremath{\mathbb{C}}$). Moreover, all such $E$ are of the form $L \otimes E_0$, where $L\in\mathop{\mathrm{Pic}}\nolimits^0(C)$ and $E_0$ can be constructed as an iterated extension similarly as in Exercise \ref{ex:EllipticCurve}.
\item $\mathop{{\mathrm{gcd}}}\nolimits(r,d)\neq1$: There exist no stable vector bundles. Semistable vector bundles can be classified via torsion sheaves. More precisely, there is a one-to-one correspondence between semistable vector bundles of rank $r$ and degree $d$ and torsion sheaves of length $\mathop{{\mathrm{gcd}}}\nolimits(r,d)$. Under this correspondence, indecomposable vector bundles are mapped onto torsion sheaves topologically supported on a point.
\end{itemize}
\end{ex}
\begin{rmk}\label{rmk:JHfiltrations}
Let $E$ be a semistable coherent sheaf on $C$.
Then there is a (non-unique!) filtration (called the \emph{Jordan-H\"older filtration})
\[
0 = E_0 \subset E_1 \subset \ldots \subset E_{n-1} \subset E_n = E
\]
of coherent sheaves such that $A_i = E_i/E_{i-1}$ is stable for all $i = 1, \ldots, n$ and their slopes are all equal. The factors $A_i$ are unique up to reordering. We say that two semistable sheaves are \emph{S-equivalent} if they have the same stable factors up to reordering. More abstractly, this follows from the fact that the category of semistable sheaves on $C$ with fixed slope is an abelian category of finite length (we will return to this in Exercise \ref{exercise:StableObjectsBridgelandStability}). Simple objects in that category agree with stable sheaves.
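For instance, in the situation of Exercise \ref{ex:EllipticCurve}, the bundle $V_0$ is strictly semistable with Jordan-H\"older factors $\ensuremath{\mathcal O}_C$, $\ensuremath{\mathcal O}_C$; hence $V_0$ is S-equivalent to $\ensuremath{\mathcal O}_C^{\oplus 2}$, even though $V_0 \not\cong \ensuremath{\mathcal O}_C^{\oplus 2}$.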
\end{rmk}
\subsection{Moduli spaces}
\label{subsec:ModuliSpacesCurves}
In this section, let $g \geq 2$ and denote by $C$ a curve of genus $g$. We fix integers $r \in \ensuremath{\mathbb{Z}}_{\geq 1}$, $d \in \ensuremath{\mathbb{Z}}$, and a line bundle $L \in \mathop{\mathrm{Pic}}\nolimits^d(C)$ of degree $d$.
\begin{defn}
\label{def:functors}
We define two functors $\mathop{\underline{\mathrm{Sch}}}_{\ensuremath{\mathbb{C}}} \to \mathop{\underline{\mathrm{Set}}}$ as follows:
\begin{align*}
\underline{M}_C(r,d) (B) & := \left\{\ensuremath{\mathcal E} \text{ vector bundle on } C\times B\,:\, \forall b\in B,\, \begin{array}{l} \ensuremath{\mathcal E}|_{C\times\{b\}} \text{ is semistable}\\ r(\ensuremath{\mathcal E}|_{C\times\{b\}})=r\\
d(\ensuremath{\mathcal E}|_{C\times\{b\}})=d
\end{array} \right\} \, /\, \cong, \\
\underline{M}_C(r,L) (B) & := \left\{\ensuremath{\mathcal E} \in \underline{M}_C(r,d) (B) \,:\, \det(\ensuremath{\mathcal E}|_{C\times\{b\}})\cong L \right\},
\end{align*}
where $B$ is a scheme (locally) of finite-type over $\ensuremath{\mathbb{C}}$.
\end{defn}
We will denote the (open) subfunctors consisting of stable vector bundles by $\underline{M}^s_C(r,d)$ and $\underline{M}^s_C(r,L)$. The following result summarizes the work of many people, including Dr\'ezet, Mumford, Narasimhan, Ramanan, and Seshadri (see \cite{DN89:picard_group_semistable_curves, New78:introduction_moduli, Ses80:vector_bundles_curves_book, Pot97:lectures_vector_bundles, HL10:moduli_sheaves}).
It gives a further motivation to study stable vector bundles, besides classification reasons: the moduli spaces are interesting algebraic varieties.
\begin{thm}
\label{thm:MainSemistableBundlesCurves}
(i) There exists a coarse moduli space $M_C(r,d)$ for the functor $\underline{M}_C(r,d)$ parameterizing S-equivalence classes of semistable vector bundles on $C$. It has the following properties:
\begin{itemize}
\item It is non-empty.
\item It is an integral, normal, factorial, projective variety over $\ensuremath{\mathbb{C}}$ of dimension $r^2(g-1)+1$.
\item It has a non-empty smooth open subset $M^s_C(r,d)$ parameterizing stable vector bundles.
\item Except when $g=2$, $r=2$, and $d$ is even\footnote{See Example \ref{ex:g2d0}.}, $M_C(r,d)\setminus M^s_C(r,d)$ consists of all singular points of $M_C(r,d)$, and has codimension at least $2$.
\item $\mathop{\mathrm{Pic}}\nolimits(M_C(r, d)) \cong \mathop{\mathrm{Pic}}\nolimits(\mathop{\mathrm{Pic}}\nolimits^d(C)) \times \ensuremath{\mathbb{Z}}$.
\item If $\mathop{{\mathrm{gcd}}}\nolimits(r,d)=1$, then $M_C(r,d)=M^s_C(r,d)$ is a fine moduli space\footnote{To be precise, we need to modify the equivalence relation for $\underline{M}^s_C(r,d)(B)$: $\ensuremath{\mathcal E}\sim\ensuremath{\mathcal E}'$ if and only if $\ensuremath{\mathcal E} \cong \ensuremath{\mathcal E}' \otimes p_B^*\ensuremath{\mathcal L}$, where $\ensuremath{\mathcal L}\in\mathop{\mathrm{Pic}}\nolimits(B)$.}.
\end{itemize}
(ii) A similar statement holds for $\underline{M}_C(r,L)$\footnote{By using the morphism $\det\colon M_C(r,d)\to \mathop{\mathrm{Pic}}\nolimits^d(C)$, and observing that $M_C(r,L)=\det^{-1}(L)$.}, with the following additional properties:
\begin{itemize}
\item Its dimension is $(r^2-1)(g-1)$.
\item $\mathop{\mathrm{Pic}}\nolimits(M_C(r,L))=\ensuremath{\mathbb{Z}} \cdot \theta$, where $\theta$ is ample.
\item Setting $u:=\mathop{{\mathrm{gcd}}}\nolimits(r,d)$, the canonical line bundle is $K_{M_C(r,L)}=-2u\theta$.
\end{itemize}
\end{thm}
\begin{proof}[Ideas of the proof]
$\bullet$ Boundedness: Fix a very ample line bundle $\ensuremath{\mathcal O}_C(1)$ on $C$.
Then, for any semistable vector bundle $E$ on $C$, Serre duality implies
\[
H^1(C, E\otimes \ensuremath{\mathcal O}_C(m-1)) \cong \mathop{\mathrm{Hom}}\nolimits(E, \ensuremath{\mathcal O}_C(-m+1)\otimes K_C)^\vee=0,
\]
for all $m\geq m_0$, where $m_0$ only depends on $\mu(E)$. Hence, by replacing $E$ with $E\otimes\ensuremath{\mathcal O}_C(m_0)$, we obtain a surjective map
\[
\ensuremath{\mathcal O}_C^{\oplus \chi(C,E)} \ensuremath{\twoheadrightarrow} E.
\]
$\bullet$ Quot scheme: Let $\gamma = \chi(C,E)$, where $E$ is as in the previous step.
We consider the subscheme $R$ of the Quot scheme on $C$ parameterizing quotients $\phi: \ensuremath{\mathcal O}_C^{\oplus \gamma} \ensuremath{\twoheadrightarrow} U$, where $U$ is locally-free and $\phi$ induces an isomorphism $H^0(C,\ensuremath{\mathcal O}_C^{\oplus \gamma})\cong H^0(C,U)$. Then it can be proved that $R$ is smooth and integral. Moreover, the group $\mathop{\mathrm{PGL}}(\gamma)$ acts on $R$.
$\bullet$ Geometric Invariant Theory: We consider the set of semistable (resp., stable) points $R^{ss}$ (resp., $R^s$) with respect to the group action given by $\mathop{\mathrm{PGL}}(\gamma)$ (and a certain multiple of the natural polarization on $R$). Then $M_C(r,d) = R^{ss} \sslash \mathop{\mathrm{PGL}}(\gamma)$ and $M_C^s(r,d) = R^{s} / \mathop{\mathrm{PGL}}(\gamma)$.
\end{proof}
\begin{rmk}
While the construction of the line bundle $\theta$ in Theorem \ref{thm:MainSemistableBundlesCurves} and its ampleness can be directly obtained from GIT, we can give a more explicit description by using purely homological algebra techniques; see Exercise \ref{exer:ThetaCurves} below.
In fact, the ``positivity'' of $\theta$ can be split into two statements: the first is that $\theta$ is strictly nef (i.e., strictly positive when intersecting all curves in $M_C(r,L)$); the second is semi-ampleness (or, in this case, to show that $\mathop{\mathrm{Pic}}\nolimits(M_C(r,L))\cong\ensuremath{\mathbb{Z}}$). The second part is more subtle and requires a deeper understanding of stable vector bundles or of the geometry of $M_C(r,L)$. But the first turns out to be a fairly general concept, as we will see in the Positivity Lemma in Section \ref{sec:nef}.
\end{rmk}
The following exercise foreshadows how we will construct nef divisors in the case of Bridgeland stability in Section \ref{sec:nef}.
\begin{exercise}\label{exer:ThetaCurves}
Let $Z: \ensuremath{\mathbb{Z}}^{\oplus 2} \to \ensuremath{\mathbb{C}}$ be the group homomorphism given by
\[
Z(r,d) = - d + \sqrt{-1} r.
\]
Let $r_0,d_0\in\ensuremath{\mathbb{Z}}$ be such that $r_0\geq1$ and $\mathop{{\mathrm{gcd}}}\nolimits(r_0,d_0)=1$, and let $L_0 \in \mathop{\mathrm{Pic}}\nolimits^{d_0}(C)$.
We consider the moduli space $M:=M_C(r_0,L_0)$ and we let $\ensuremath{\mathcal E}_0$ be a universal family on $C\times M$.
For an integral curve $\gamma \subset M$, we define
\[
\ell_Z . \gamma := - \Im \frac{Z(r(\Phi_{\ensuremath{\mathcal E}_0}(\ensuremath{\mathcal O}_\gamma)),d(\Phi_{\ensuremath{\mathcal E}_0}(\ensuremath{\mathcal O}_\gamma)))}{Z(r_0,d_0)} \in\ensuremath{\mathbb{R}},
\]
where $\ensuremath{\mathcal O}_\gamma\in\mathop{\mathrm{Coh}}\nolimits(M)$ is the structure sheaf of $\gamma$ in $M$, $\Phi_{\ensuremath{\mathcal E}_0}:\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(M)\to \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(C)$, $\Phi_{\ensuremath{\mathcal E}_0}(-):= (p_C)_*(\ensuremath{\mathcal E}_0\otimes p_M^*(-))$ is the Fourier-Mukai transform with kernel $\ensuremath{\mathcal E}_0$, and $\Im$ denotes the imaginary part of a complex number.
\begin{enumerate}
\item Show that by extending $\ell_Z$ by linearity we get the numerical class of a real divisor $\ell_Z\in N^1(M) = N_1(M)^\vee$.
\item Show that
\[
\Im \frac{Z(r(\Phi_{\ensuremath{\mathcal E}_0}(\ensuremath{\mathcal O}_\gamma)),d(\Phi_{\ensuremath{\mathcal E}_0}(\ensuremath{\mathcal O}_\gamma)))}{Z(r_0,d_0)} = \Im \frac{Z(r(\Phi_{\ensuremath{\mathcal E}_0}(\ensuremath{\mathcal O}_\gamma(m))),d(\Phi_{\ensuremath{\mathcal E}_0}(\ensuremath{\mathcal O}_\gamma(m))))}{Z(r_0,d_0)},
\]
for any $m\in\ensuremath{\mathbb{Z}}$ and for any ample line bundle $\ensuremath{\mathcal O}_\gamma(1)$ on $\gamma$.
\item Show that $\Phi_{\ensuremath{\mathcal E}_0}(\ensuremath{\mathcal O}_\gamma(m))$ is a sheaf, for $m\gg0$.
\item By using the existence of relative Harder-Narasimhan filtration (see, e.g., \cite[Theorem 2.3.2]{HL10:moduli_sheaves}), show that, for $m \gg 0$,
\[
\mu(\Phi_{\ensuremath{\mathcal E}_0}(\ensuremath{\mathcal O}_\gamma(m))) \leq \frac{d_0}{r_0}.
\]
\item Deduce that $\ell_Z$ is nef, namely $\ell_Z.\gamma\geq0$ for all integral curves $\gamma\subset M$.
\end{enumerate}
\end{exercise}
\subsection{Equivalent definitions for curves}\label{subsec:EquivalentDefCurves}
In this section we briefly mention other equivalent notions of stability for curves.
Let $C$ be a curve.
We have four equivalent definitions of stability for vector bundles:
\begin{enumerate}
\item Slope stability (Definition \ref{def:SlopeCurves}); this is the easiest to handle.
\item GIT stability (sketched in the proof of Theorem \ref{thm:MainSemistableBundlesCurves}); this is useful in the construction of moduli spaces as algebraic varieties.
\item Faltings-Seshadri definition (see below); this is useful to find sections of line bundles on moduli spaces, and more generally to construct moduli spaces without using GIT.
\item Differential geometry definition (see below); this is the easiest for proving results and deep properties of stable objects.
\end{enumerate}
\subsubsection*{Faltings' approach}
The main result is the following.
\begin{thm}[Faltings-Seshadri]
\label{thm:Faltings}
Let $E$ be a vector bundle on a curve $C$. Then $E$ is semistable if and only if there exists a vector bundle $F$ such that $\mathop{\mathrm{Hom}}\nolimits(F,E)=\mathop{\mathrm{Ext}}\nolimits^1(F,E)=0$.
\end{thm}
\begin{proof}
``$\Rightarrow$'': This is very hard. We refer to \cite{Ses93:vector_bundles_curves}.
``$\Leftarrow$'': Suppose that $E$ is not semistable.
Then, by definition, there exists a subbundle $G\subset E$ such that $\mu(G) > \mu(E)$. By assumption, $\chi(F,E) = 0$. Due to the Riemann-Roch Theorem, we have
\[
1-g+\mu(E) - \mu(F) =0.
\]
Hence,
\[
1-g+\mu(G) - \mu(F) > 0.
\]
By applying the Riemann-Roch Theorem again, we have $\chi(F,G)>0$, and so $\mathop{\mathrm{Hom}}\nolimits(F,G)\neq0$.
But this implies $\mathop{\mathrm{Hom}}\nolimits(F,E)\neq0$, which is a contradiction.
\end{proof}
An example for Theorem \ref{thm:Faltings} is in Section \ref{subsec:ExamplesCurves}.
\subsubsection*{Differential geometry approach}
To simplify, since this is not the main topic of this survey, we will treat only the degree $0$ case.
We refer to \cite[Theorem 2]{NS65:stable_vector_bundles} for the general result and for all details.
\begin{thm}[Narasimhan-Seshadri]
\label{thm:NarasimhanSeshadri}
There is a one-to-one correspondence
\[
\left\{\mathrm{Stable\, vector\, bundles\, on\, }C\,\mathrm{ of\, degree\, }0 \right\} \stackrel{1:1}{\longleftrightarrow} \left\{\mathrm{Irreducible\, unitary\, representations\, of\, }\pi_1(C)\right\}
\]
\end{thm}
\begin{proof}[Idea of the proof]
We only give a very brief idea on how to associate a stable vector bundle to an irreducible unitary representation.
Let $C$ be a curve of genus $g \geq 2$. Consider its universal cover $p: \ensuremath{\mathbb{H}} \to C$. The group of deck transformations is $\pi_1(C)$. Consider the trivial bundle $\ensuremath{\mathbb{H}} \times \ensuremath{\mathbb{C}}^r$ on $\ensuremath{\mathbb{H}}$ and a representation $\rho \colon \pi_1(C) \to \mathop{\mathrm{GL}}\nolimits(r,\ensuremath{\mathbb{C}})$.
Then we get a $\pi_1(C)$-bundle via the action
\begin{align*}
a_\tau \colon \ensuremath{\mathbb{H}} \times \ensuremath{\mathbb{C}}^r & \to \ensuremath{\mathbb{H}} \times \ensuremath{\mathbb{C}}^r, \\
(y,v) &\mapsto (\tau \cdot y, \rho(\tau) \cdot v),
\end{align*}
for $\tau \in \pi_1(C)$.
This induces a vector bundle $E_\rho$ of rank $r$ on $C$ with degree $0$.
If $\rho$ is a unitary representation, then the semistability of $E_\rho$ can be shown as follows. Assume there is a subbundle $W \subset E_\rho$ with $d(W) > 0$. By taking $\bigwedge^{r(W)}$ we can reduce to the case $r(W)=1$, namely $W$ is a line bundle of positive degree.
By tensoring with a line bundle of degree $0$, we can assume $W$ has a section.
But it can be shown (see \cite[Proposition 4.1]{NS64:holomorphic_vector_bundles}) that a vector bundle corresponding to a non-trivial unitary irreducible representation does not have any section. By splitting $\rho$ as a direct sum of irreducible representations, this implies that $W$ has to be a sub line bundle of the trivial bundle, which is impossible.
If the representation is irreducible, the bundle is stable.
\end{proof}
As mentioned before, an important remark about the interpretation of stability of vector bundles in terms of irreducible representations is that it is easier to prove results on stable sheaves in this language.
The first example is the preservation of stability by the tensor product (\cite[Theorem 3.1.4]{HL10:moduli_sheaves}): the tensor product of two semistable vector bundles is semistable.
Example \ref{ex:Mumford} is an application of Theorem \ref{thm:NarasimhanSeshadri} and preservation of stability by taking the symmetric product.
\subsubsection*{Gieseker's ample bundle characterization}
We also mention that for geometric applications (e.g., in the algebraic proof of preservation of stability by tensor product mentioned above), the following result by Gieseker (\cite{Gie79:bogomolov} and \cite[Theorem 3.2.7]{HL10:moduli_sheaves}) is of fundamental importance.
\begin{thm}
\label{thm:Gieseker}
Let $E$ be a semistable vector bundle on $C$. Consider the projective bundle $\pi:\ensuremath{\mathbb{P}}_C(E) \to C$ and denote the tautological line bundle by $\ensuremath{\mathcal O}_\pi(1)$.
Then:
\begin{enumerate}
\item\label{enum:nef} $d(E)\geq0$ if and only if $\ensuremath{\mathcal O}_\pi(1)$ is nef.
\item $d(E)>0$ if and only if $\ensuremath{\mathcal O}_\pi(1)$ is ample.
\end{enumerate}
\end{thm}
\begin{proof}[Idea of the proof]
We just sketch how to prove the non-trivial implication of \eqref{enum:nef}.
Assume that $d(E)\geq0$.
To prove that $\ensuremath{\mathcal O}_\pi(1)$ is nef, since it is relatively ample, we only need to prove that $\ensuremath{\mathcal O}_\pi(1) \cdot \gamma \geq0$ for a curve $\gamma \subset \ensuremath{\mathbb{P}}_C(E)$ which maps surjectively onto $C$ via $\pi$.
By taking the normalization, we can assume that $\gamma$ is smooth.
Denote by $f:\gamma \to C$ the induced finite morphism. Then we have a surjection $f^*E \to \ensuremath{\mathcal O}_\pi(1)|_\gamma$. But $f^*E$ is semistable of non-negative slope, by Exercise \ref{ex:StabilityPullback} below, and quotients of a semistable sheaf have slope at least equal to that of the sheaf. Hence, $d(\ensuremath{\mathcal O}_\pi(1)|_\gamma)\geq0$, which is what we wanted.
\end{proof}
\begin{exercise}\label{ex:StabilityPullback}
Let $f:C'\to C$ be a finite morphism between curves. Let $E$ be a vector bundle on $C$. Show that $E$ is semistable if and only if $f^*E$ is semistable.
\emph{Hint: One implication (``$\Leftarrow$'') is easy. For the other (``$\Rightarrow$''), by using the first implication, we can reduce to a Galois cover. Then use the uniqueness of Harder-Narasimhan filtrations to show that, if $f^*E$ is not semistable, all of its Harder-Narasimhan factors must be invariant under the Galois group. Then use that all factors are vector bundles to show that they descend, achieving a contradiction.}
\end{exercise}
\subsection{Applications and examples}
\label{subsec:ExamplesCurves}
In this section we present a few examples and applications of stability of vector bundles on curves. We will mostly skip proofs, but give references to the literature.
\begin{ex}[Mumford's example; {\cite[Theorem 10.5]{Har70:ample_subvarieties}}]
\label{ex:Mumford}
Let $C$ be a curve of genus $g\geq2$.
Then there exists a stable rank $2$ vector bundle $U$ on $C$ such that
\begin{equation}\label{eq:MumfordEx}
H^0(C,\mathop{\mathrm{Sym}}\nolimits^m U) = 0,
\end{equation}
for all $m>0$\footnote{If we let $\pi\colon X:=\ensuremath{\mathbb{P}}_C(U)\to C$ be the corresponding ruled surface, then \eqref{eq:MumfordEx} implies that the nef divisor $\ensuremath{\mathcal O}_\pi(1)$ is not ample, although it has the property that $\ensuremath{\mathcal O}_\pi(1) \cdot \gamma>0$, for all curves $\gamma\subset X$.}.
Indeed, by using Theorem \ref{thm:NarasimhanSeshadri}, since
\[
\pi_1(C) = \langle a_1,b_1,\ldots,a_g,b_g | a_1b_1a_1^{-1}b_1^{-1} \cdot \ldots \cdot a_g b_g a_g^{-1} b_g^{-1} \rangle,
\]
we only need to find two matrices $A,B\in\mathrm{U}(2)$ such that $\mathop{\mathrm{Sym}}\nolimits^m A$ and $\mathop{\mathrm{Sym}}\nolimits^m B$ have no common fixed subspace, for all $m>0$.
Indeed, by letting them be the matrices corresponding to $a_1$ and $b_1$, the matrices corresponding to the other $a_i$'s and $b_i$'s can be chosen to satisfy the relation.
We can choose
\[
A = \begin{pmatrix}
\lambda_1 & 0\\ 0 & \lambda_2
\end{pmatrix}
\]
where $|\lambda_i|=1$ and $\lambda_2/\lambda_1$ is not a root of unity. Then
\[
\mathop{\mathrm{Sym}}\nolimits^m A = \begin{pmatrix}
\lambda_1^m & 0 & \dots & 0\\
0 & \lambda_1^{m-1} \lambda_2 & \dots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \dots & \lambda_2^m
\end{pmatrix}
\]
has all eigenvalues distinct, and so the only fixed subspaces are subspaces generated by the standard basis vectors.
If we let
\[
B = \begin{pmatrix}
\mu_{11} & \mu_{12}\\ \mu_{21} & \mu_{22}
\end{pmatrix},
\]
then all entries of $\mathop{\mathrm{Sym}}\nolimits^m B$ are certain polynomials in the $\mu_{ij}$ not identically zero.
Since $\mathrm{U}(2) \subset \mathop{\mathrm{GL}}\nolimits(2,\ensuremath{\mathbb{C}})$ is not contained in any analytic hypersurface, we can find $\mu_{ij}$ such that all entries of $\mathop{\mathrm{Sym}}\nolimits^m B$ are non-zero, for all $m>0$.
This concludes the proof.
\end{ex}
The two classical examples of moduli spaces are the following (see \cite{NR69:moduli_vector_bundles, DR76:vector_bundles_rank2}).
\begin{ex}\label{ex:g2d0}
Let $C$ be a curve of genus $g=2$. Then $M_C(2,\ensuremath{\mathcal O}_C) \cong \ensuremath{\mathbb{P}}^3$.
Very roughly, the basic idea behind this example is the following.
We can identify $\ensuremath{\mathbb{P}}^3$ with $\ensuremath{\mathbb{P}}(H^0(\mathop{\mathrm{Pic}}\nolimits^1(C), 2 \Theta))$, where $\Theta$ denotes the theta-divisor on $\mathop{\mathrm{Pic}}\nolimits^1(C)$.
The isomorphism in the statement is then given by associating to any rank $2$ vector bundle $W$ the subset
\[
\left\{ \xi \in \mathop{\mathrm{Pic}}\nolimits^1(C)\,:\, H^0(C,W\otimes\xi)\neq 0\right\}.
\]
It is interesting to notice that the locus of strictly semistable vector bundles corresponds to the S-equivalence classes of vector bundles of the form $M\oplus M^\vee$, where $M\in\mathop{\mathrm{Pic}}\nolimits^0(C)$.
Geometrically, this is the Kummer surface associated to $\mathop{\mathrm{Pic}}\nolimits^0(C)$ embedded in $\ensuremath{\mathbb{P}}^3$.
\end{ex}
\begin{ex}
\label{ex:IntersectionQuadrics}
Let $C$ be a curve of genus $g=2$, and let $L$ be a line bundle of degree $1$.
Then $M_C(2,L) \cong Q_1 \cap Q_2 \subset \ensuremath{\mathbb{P}}^5$, where $Q_1$ and $Q_2$ are two quadric hypersurfaces.
The space $M_C(2,L)$ is one of the first instances of derived category theory, semi-orthogonal decompositions, and their connection with birational geometry.
Indeed, the result above can be enhanced by saying that there exists a \emph{semi-orthogonal decomposition} of the derived category of $M_C(2,L)$ in which the non-trivial component is equivalent to the derived category of the curve $C$.
We will not add more details here, since it is outside the scope of these lecture notes; we refer instead to \cite{BO95:semiorthogonal_decomposition, Kuz14:semiorthogonal_decompositions} and references therein.
There is also an interpretation in terms of Bridgeland stability conditions (see \cite{BLMS16:BridgelandStabilitySemiOrth}).
\end{ex}
\begin{ex}
Let $C$ be a curve of genus $2$.
The moduli space $M_C(3,\ensuremath{\mathcal O}_C)$ also has a very nice geometric interpretation, and it is very closely related to the Coble cubic hypersurface in $\ensuremath{\mathbb{P}}^8$.
We refer to \cite{Ort03:ortega_thesis} for the precise statement and for all the details.
\end{ex}
Finally, we mention another classical application of vector bundles techniques to birational geometry. In principle, by using Bridgeland stability, this could be generalized to higher dimensions. We will talk briefly about this in Section \ref{sec:applications}.
\begin{ex}
\label{ex:LazarsfledMukaiCurves}
Let $C$ be a curve of genus $g\geq1$, and let $L$ be an ample divisor on $C$ of degree $d(L) \geq 2g+1$. Then the vector bundle
\[
M_L := \mathop{\mathrm{Ker}}\nolimits\left(\mathop{\mathrm{ev}}\nolimits\colon \ensuremath{\mathcal O}_C\otimes_\ensuremath{\mathbb{C}} H^0(C,L) \ensuremath{\twoheadrightarrow} L\right)
\]
is semistable. As an application this implies that the embedding of $C$ in $\ensuremath{\mathbb{P}}(H^0(C,L)^\vee)$ induced by $L$ is projectively normal, a result by Castelnuovo, Mattuck, and Mumford. We refer to \cite{Laz16:linear_series_survey} for a proof of this result, and for more applications.
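As a sanity check in a case not covered by the statement, let $C = \ensuremath{\mathbb{P}}^1$ and $L = \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(d)$ with $d \geq 1$. Then $M_L$ has rank $d$ and degree $-d$, and $H^0(\ensuremath{\mathbb{P}}^1, M_L) = 0$ since the evaluation map is injective on global sections; writing $M_L \cong \oplus_i \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(a_i)$ via Theorem \ref{thm:P1}, all $a_i \leq -1$ and $\sum_i a_i = -d$, which forces
\[
M_L \cong \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(-1)^{\oplus d},
\]
a semistable bundle of slope $-1$.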
We will content ourselves with proving a related statement for the canonical line bundle, by following \cite[Theorem 1.6]{GP00:vanishing_theorems}, to illustrate Faltings' definition of stability: the vector bundle $M_{K_C}$ is semistable.
To prove this last statement we let $L$ be a general line bundle of degree $g+1$ on $C$.
Then $L$ is base-point free and $H^1(C,L)=0$.
Start with the exact sequence
\[
0 \to M_{K_C} \to \ensuremath{\mathcal O}_C\otimes_\ensuremath{\mathbb{C}} H^0(C,K_C) \to K_C \to 0,
\]
tensor by $L$ and take cohomology. We have a long exact sequence
\begin{equation}\label{eq:FaltingsApplication}
\begin{split}
0 \to H^0(C,M_{K_C}\otimes L) \to H^0(C,K_C) \otimes H^0(C,L) &\stackrel{\alpha}{\to} H^0(C,K_C \otimes L) \\ &\to H^1(C,M_{K_C}\otimes L) \to 0.
\end{split}
\end{equation}
By the Riemann-Roch Theorem, $h^0(C,L)=2$.
Hence, we get an exact sequence
\[
0\to L^\vee \to \ensuremath{\mathcal O}_C\otimes H^0(C,L) \to L \to 0.
\]
By tensoring by $K_C$ and taking cohomology, we get
\[
0 \to H^0(C,K_C\otimes L^\vee) \to H^0(C,K_C)\otimes H^0(C,L) \stackrel{\alpha}{\to} H^0(C,K_C \otimes L),
\]
and so $\mathop{\mathrm{Ker}}\nolimits(\alpha)\cong H^0(C,K_C\otimes L^\vee) = H^1(C,L)^\vee = 0$.
By using \eqref{eq:FaltingsApplication}, this gives $H^0(C,M_{K_C}\otimes L) = H^1(C,M_{K_C}\otimes L) = 0$, and so $M_{K_C}$ is semistable, by Theorem \ref{thm:Faltings}.
\end{ex}
\section{Generalizations to Higher Dimensions}
\label{sec:generalizations}
The definitions of stability from Section \ref{subsec:EquivalentDefCurves} generalize in different ways from curves to higher dimensional varieties. In this section we give a quick overview on this. We will not talk about how to generalize the Narasimhan-Seshadri approach. There is a beautiful theory (for which we refer for example to \cite{Kob87:differential_geometry}), but it is outside the scope of this survey.
Let $X$ be a smooth projective variety over $\ensuremath{\mathbb{C}}$ of dimension $n\geq2$.
We fix an ample divisor class $\omega\in N^1(X)$ and another divisor class $B \in N^1(X)$.
\begin{defn}\label{def:TwistedChern}
We define the twisted Chern character as
\[
\mathop{\mathrm{ch}}\nolimits^B := \mathop{\mathrm{ch}}\nolimits \cdot e^{-B}.
\]
\end{defn}
If the reader is unfamiliar with Chern characters we recommend either a quick look at Appendix A of \cite{Har77:algebraic_geometry} or a long look at \cite{Ful98:intersection_theory}.
By expanding Definition \ref{def:TwistedChern}, we have for example
\begin{align*}
\mathop{\mathrm{ch}}\nolimits^{B}_0 &= \mathop{\mathrm{ch}}\nolimits_0 = \mathop{\mathrm{rk}}, \\
\mathop{\mathrm{ch}}\nolimits^{B}_1 &= \mathop{\mathrm{ch}}\nolimits_1 - B \cdot \mathop{\mathrm{ch}}\nolimits_0 ,\\
\mathop{\mathrm{ch}}\nolimits^{B}_2 &= \mathop{\mathrm{ch}}\nolimits_2 - B \cdot \mathop{\mathrm{ch}}\nolimits_1 + \frac{B^2}{2} \cdot \mathop{\mathrm{ch}}\nolimits_0.
\end{align*}
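For example, for a line bundle $\ensuremath{\mathcal O}_X(D)$ we have $\mathop{\mathrm{ch}}\nolimits(\ensuremath{\mathcal O}_X(D)) = e^D$, and therefore
\[
\mathop{\mathrm{ch}}\nolimits^B(\ensuremath{\mathcal O}_X(D)) = e^{D-B} = \left(1,\, D-B,\, \frac{(D-B)^2}{2},\, \ldots\right).
\]
In other words, twisting by $B$ has the same effect on the Chern character as formally tensoring by a line bundle of class $-B$; the point is that this makes sense for an arbitrary real class $B \in N^1(X)$, which need not be the class of an actual line bundle.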
\subsubsection*{Gieseker-Maruyama-Simpson stability}
GIT stability does generalize to higher dimensions. The corresponding ``numerical notion'' is the one of (twisted) Gieseker-Maruyama-Simpson stability (see \cite{Gie77:vector_bundles, Mar77:stable_sheavesI, Mar78:stable_sheavesII, Sim94:moduli_representations, MW97:thaddeus_principle}).
\begin{defn}
\label{def:GiesekerStability}
Let $E\in\mathop{\mathrm{Coh}}\nolimits(X)$ be a pure sheaf of dimension $d$.
\begin{enumerate}
\item The $B$-twisted Hilbert polynomial is
\[
P(E,B,t) := \int_X \mathop{\mathrm{ch}}\nolimits^B(E) \cdot e^{t\omega} \cdot \mathop{\mathrm{td}}\nolimits_X = a_d(E,B) t^d + a_{d-1}(E,B) t^{d-1} + \ldots + a_0(E,B)
\]
\item We say that $E$ is $B$-twisted Gieseker (semi)stable if, for any proper non-trivial subsheaf $F\subset E$, the inequality
\[
\frac{P(F,B,t)}{a_d(F,B)} < (\leq) \frac{P(E,B,t)}{a_d(E,B)}
\]
holds, for $t\gg0$.
\end{enumerate}
\end{defn}
If $E$ is torsion-free, then $d = n$ and $a_n(E,B) = \mathop{\mathrm{ch}}\nolimits_0(E) \cdot \frac{\omega^n}{n!}$. Theorem \ref{thm:HNcurves} and the existence part of Theorem \ref{thm:MainSemistableBundlesCurves} generalize to (twisted) Gieseker stability (the latter when $\omega$ and $B$ are rational classes), but singularities of moduli spaces become very hard to understand.
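To make Definition \ref{def:GiesekerStability} concrete, let us expand the untwisted Hilbert polynomial for a torsion-free sheaf $E$ on a surface $X$ (so $n = 2$ and $B = 0$). Writing $\mathop{\mathrm{td}}\nolimits_X = 1 - \frac{K_X}{2} + \mathop{\mathrm{td}}\nolimits_2$ with $\int_X \mathop{\mathrm{td}}\nolimits_2 = \chi(\ensuremath{\mathcal O}_X)$, the Hirzebruch-Riemann-Roch Theorem gives
\[
P(E,0,t) = \frac{\omega^2 \mathop{\mathrm{ch}}\nolimits_0(E)}{2}\, t^2 + \omega \cdot \left(\mathop{\mathrm{ch}}\nolimits_1(E) - \frac{K_X}{2}\,\mathop{\mathrm{ch}}\nolimits_0(E)\right) t + \chi(E).
\]
Comparing the reduced polynomials $P/a_2$ therefore compares first the ratios $\frac{\omega \cdot \mathop{\mathrm{ch}}\nolimits_1(E)}{\mathop{\mathrm{ch}}\nolimits_0(E)}$ (the slopes of the next paragraph), and then, in case of equality, the ratios $\frac{\chi(E)}{\mathop{\mathrm{ch}}\nolimits_0(E)}$: in this sense Gieseker stability refines slope stability.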
\subsubsection*{Slope stability}
We can also define a twisted version of the standard slope stability function for $E \in \mathop{\mathrm{Coh}}\nolimits(X)$ by
\[
\mu_{\omega, B}(E) := \frac{\omega^{n-1} \cdot \mathop{\mathrm{ch}}\nolimits^{B}_1(E)}{\omega^n \cdot \mathop{\mathrm{ch}}\nolimits^{B}_0(E)} = \frac{\omega^{n-1} \cdot \mathop{\mathrm{ch}}\nolimits_1(E)}{\omega^n \cdot \mathop{\mathrm{ch}}\nolimits_0(E)} - \frac{\omega^{n-1} \cdot B}{\omega^n},
\]
where dividing by $0$ is interpreted as $+\infty$.
\begin{defn}\label{def:SlopeStability}
A sheaf $E \in \mathop{\mathrm{Coh}}\nolimits(X)$ is called slope (semi)stable if for all proper non-trivial subsheaves $F \subset E$ the inequality $\mu_{\omega, B}(F) < (\leq) \mu_{\omega, B}(E/F)$ holds.
\end{defn}
Notice that this definition is independent of $B$ and classically one sets $B = 0$. However, this more general notation will turn out to be useful in later parts of these notes. Also, torsion sheaves are all slope semistable of slope $+\infty$, and if a sheaf with non-zero rank is semistable, then it is torsion-free.
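For example, let $X$ be a surface and let $Z \subset X$ be a zero-dimensional subscheme with ideal sheaf $\ensuremath{\mathcal I}_Z$. Then $\mathop{\mathrm{ch}}\nolimits(\ensuremath{\mathcal I}_Z) = (1, 0, -\mathrm{length}(Z))$, and so
\[
\mu_{\omega,0}(\ensuremath{\mathcal I}_Z) = 0 = \mu_{\omega,0}(\ensuremath{\mathcal O}_X).
\]
Since $\ensuremath{\mathcal I}_Z$ is torsion-free of rank $1$, every proper non-trivial subsheaf has rank $1$ with quotient of rank $0$, so $\ensuremath{\mathcal I}_Z$ is slope stable. Note that the slope is blind to $\mathop{\mathrm{ch}}\nolimits_2$: it cannot distinguish $\ensuremath{\mathcal I}_Z$ from $\ensuremath{\mathcal O}_X$.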
This notion of stability behaves well with respect to several operations, such as restriction to (general and high degree) hypersurfaces, pull-backs by finite morphisms, and tensor products; moreover, we will see that Theorem \ref{thm:HNcurves} holds for slope stability as well.
The main problem is that moduli spaces (satisfying a universal property) are harder to construct, even in the case of surfaces.
\subsubsection*{Bridgeland stability}
We finally come to the main topic of this survey, Bridgeland stability.
This is a direct generalization of slope stability. The main difference is that we will need to change the category we are working with. Coherent sheaves will never work in dimension $\geq 2$.
Instead, we will look for other abelian categories inside the bounded derived category of coherent sheaves on $X$.
To this end, we first have to treat the general notion of slope stability for abelian categories. Then we will introduce the notion of a bounded t-structure and the key operation of tilting. Finally, we will be able to define Bridgeland stability on surfaces and sketch how to conjecturally construct Bridgeland stability conditions in higher dimensions.
The key advantage of Bridgeland stability is the very nice behavior with respect to change of stability. We will see that moduli spaces of Gieseker semistable and slope semistable sheaves are both particular cases of Bridgeland semistable objects, and one can pass from one to the other by varying stability. This will give us a technique to study those moduli spaces as well.
Finally, we remark that even Faltings' approach to stability on curves, which despite many efforts still lacks a generalization to higher dimensions as strong as for the case of curves, seems to require as well to look at stability for complexes of sheaves (see, e.g., \cite{ACK07:functorial_construction, HP07:postnikov}).
As drawbacks, moduli spaces in Bridgeland stability do not arise naturally from a GIT problem and are therefore harder to construct. Also, the categories to work with are not ``local'', in the sense that many geometric constructions for coherent sheaves will not work for these more general categories, since we do not have a good notion of ``restricting a morphism to an open subset''.
\section{Stability in Abelian Categories}
\label{sec:stability_abelian}
The goal of the theory of stability conditions is to generalize the situation from curves to higher dimensional varieties with similarly nice properties. In order to do so, it turns out that the category of coherent sheaves is not good enough. In this section, we lay out some general foundations for stability in an arbitrary abelian category. Throughout this section we let $\ensuremath{\mathcal A}$ be an abelian category with Grothendieck group $K_0(\ensuremath{\mathcal A})$.
\begin{defn}\label{def:StabilityFunction}
Let $Z: K_0(\ensuremath{\mathcal A}) \to \ensuremath{\mathbb{C}}$ be an additive homomorphism.
We say that $Z$ is a \emph{stability function} if, for all non-zero $E\in\ensuremath{\mathcal A}$, we have
\[
\Im Z(E) \geq 0, \, \, \text{ and } \, \, \Im Z(E) = 0 \, \Rightarrow \, \Re Z(E) <0.
\]
\end{defn}
Given a stability function, we will often denote $R := \Im Z$, the \emph{generalized rank}, $D := - \Re Z$, the \emph{generalized degree}, and $M := \frac{D}{R}$, the \emph{generalized slope}, where as before dividing by zero is interpreted as $+\infty$.
\begin{defn}\label{def:SlopeStabilityAbelian}
Let $Z: K_0(\ensuremath{\mathcal A}) \to \ensuremath{\mathbb{C}}$ be a stability function.
A non-zero object $E\in\ensuremath{\mathcal A}$ is called \emph{(semi)stable} if, for all proper non-trivial subobjects $F \subset E$, the inequality $M(F) < (\leq) M(E)$ holds.
\end{defn}
When we want to specify the function $Z$, we will say that objects are $Z$-semistable, and add the suffix $Z$ to all the notation.
\begin{ex}\label{ex:StabilityCurvesAgain}
(1) Let $C$ be a curve, and let $\ensuremath{\mathcal A}:=\mathop{\mathrm{Coh}}\nolimits(C)$.
Then the group homomorphism
\[
Z: K_0(C) \to \ensuremath{\mathbb{C}}, \, \, Z(E) = -d(E) + \sqrt{-1} \, r(E)
\]
is a stability function on $\ensuremath{\mathcal A}$.
Semistable objects coincide with semistable sheaves in the sense of Definition \ref{def:SlopeCurves}.
(2) Let $X$ be a smooth projective variety of dimension $n\geq2$, $\omega\in N^1(X)$ be an ample divisor class and $B\in N^1(X)$.
Let $\ensuremath{\mathcal A}:=\mathop{\mathrm{Coh}}\nolimits(X)$.
Then the group homomorphism
\[
\overline{Z}_{\omega,B}: K_0(X) \to \ensuremath{\mathbb{C}}, \, \, \overline{Z}_{\omega,B}(E) = -\omega^{n-1} \cdot \mathop{\mathrm{ch}}\nolimits_1^B(E) + \sqrt{-1} \, \omega^n \cdot \mathop{\mathrm{ch}}\nolimits_0^B(E)
\]
is {\bf not} a stability function on $\ensuremath{\mathcal A}$.
Indeed, for a torsion sheaf $T$ supported in codimension $\geq 2$, we have $\overline{Z}_{\omega,B}(T)=0$.
More generally, by \cite[Lemma 2.7]{Tod09:limit_stable_objects}, there is no stability function on $\ensuremath{\mathcal A} = \mathop{\mathrm{Coh}}\nolimits(X)$ whose central charge factors through the Chern character.
(3) Let $X$, $\omega$, and $B$ be as before.
Consider the quotient category
\[
\ensuremath{\mathcal A}:= \mathop{\mathrm{Coh}}\nolimits_{n,n-1}(X) := \mathop{\mathrm{Coh}}\nolimits(X) / \mathop{\mathrm{Coh}}\nolimits(X)_{\leq n-2}
\]
of coherent sheaves modulo the (Serre) subcategory $\mathop{\mathrm{Coh}}\nolimits(X)_{\leq n-2}$ of coherent sheaves supported in dimension $\leq n-2$ (see \cite[Definition 1.6.2]{HL10:moduli_sheaves}). Then $\overline{Z}_{\omega,B}$ is a stability function on $\ensuremath{\mathcal A}$. Semistable objects coincide with slope semistable sheaves in the sense of Definition \ref{def:SlopeStability}.
\end{ex}
\begin{exercise}\label{ex:EquivDefStabilityAbelian}
Show that an object $E \in \ensuremath{\mathcal A}$ is (semi)stable if and only if for all proper non-trivial quotients $E \ensuremath{\twoheadrightarrow} G$ the inequality $M(E) < (\leq) M(G)$ holds.
\end{exercise}
Stability for abelian categories still has Schur's property:
\begin{lem}\label{lem:Schur}
Let $Z: K_0(\ensuremath{\mathcal A}) \to \ensuremath{\mathbb{C}}$ be a stability function.
Let $A,B\in\ensuremath{\mathcal A}$ be non-zero objects which are semistable with $M(A) > M(B)$.
Then $\mathop{\mathrm{Hom}}\nolimits(A,B)=0$.
\end{lem}
\begin{proof}
Let $f:A\to B$ be a morphism, and let $Q := \mathop{\mathrm{im}}\nolimits(f) \subset B$.
If $f$ is non-zero, then $Q \neq 0$.
Since $B$ is semistable, we have $M(Q)\leq M(B)$.
Since $A$ is semistable, by Exercise \ref{ex:EquivDefStabilityAbelian}, we have $M(A) \leq M(Q)$. This is a contradiction to $M(A) > M(B)$.
\end{proof}
\begin{defn}
\label{defn:stability_abelian}
Let $Z: K_0(\ensuremath{\mathcal A}) \to \ensuremath{\mathbb{C}}$ be an additive homomorphism.
We call the pair $(\ensuremath{\mathcal A}, Z)$ a \emph{stability condition} if
\begin{itemize}
\item $Z$ is a stability function, and
\item Any non-zero $E \in \ensuremath{\mathcal A}$ has a filtration, called the \emph{Harder-Narasimhan filtration},
\[
0 = E_0 \subset E_1 \subset \ldots \subset E_{n-1} \subset E_n = E
\]
of objects $E_i \in \ensuremath{\mathcal A}$ such that $A_i = E_i/E_{i-1}$ is semistable for all $i = 1, \ldots, n$ and $M(A_1) > \ldots > M(A_n)$.
\end{itemize}
\end{defn}
\begin{exercise}\label{exercise:hn_filtration}
Show that for any stability condition $(\ensuremath{\mathcal A}, Z)$ the Harder-Narasimhan filtration for any object $E \in \ensuremath{\mathcal A}$ is unique up to isomorphism.
\end{exercise}
In concrete examples we will need a criterion for the existence of Harder-Narasimhan filtrations. This is the content of Proposition \ref{prop:hn_exists}. It requires the following notion.
\begin{defn}
An abelian category $\ensuremath{\mathcal A}$ is called \emph{noetherian}, if the \emph{ascending chain condition} holds for subobjects, i.e. for any ascending chain of subobjects of a fixed object in $\ensuremath{\mathcal A}$
\[
A_0 \subset A_1 \subset \ldots \subset A_i \subset \ldots
\]
we have $A_i = A_j$ for $i,j \gg 0$.
\end{defn}
Standard examples of noetherian abelian categories are the category of finitely generated modules over a noetherian ring or the category of coherent sheaves on a variety. Before dealing with Harder-Narasimhan filtrations, we start with a preliminary result.
\begin{lem}
\label{lem:bounded_degree}
Let $Z: K_0(\ensuremath{\mathcal A}) \to \ensuremath{\mathbb{C}}$ be a stability function.
Assume that
\begin{itemize}
\item $\ensuremath{\mathcal A}$ is noetherian, and
\item the image of $R$ is discrete in $\ensuremath{\mathbb{R}}$.
\end{itemize}
Then, for any object $E \in \ensuremath{\mathcal A}$, there is a number $D_E \in \ensuremath{\mathbb{R}}$ such that for any $F \subset E$ the inequality $D(F) \leq D_E$ holds.
\end{lem}
\begin{proof}
Since the image of $R$ is discrete in $\ensuremath{\mathbb{R}}$, we can do induction on $R(E)$. If $R(E) = 0$ holds, then $R(F) = 0$ for any subobject $F \subset E$; since $D(E/F) \geq 0$, we get $D(F) \leq D(E)$.
Let $R(E) > 0$. Assume there is a sequence $F_n \subset E$ for $n \in \ensuremath{\mathbb{N}}$ such that
\[
\lim_{n \to \infty} D(F_n) = \infty.
\]
If the equality $R(F_n) = R(E)$ holds for some $n \in \ensuremath{\mathbb{N}}$, then the quotient satisfies $R(E/F_n) = 0$. In particular, $D(E/F_n) \geq 0$ implies $D(F_n) \leq D(E)$. Therefore, we can assume $R(F_n) < R(E)$ for all $n$.
We will construct an increasing sequence of positive integers $n_k$ such that $D(F_{n_1} + \ldots + F_{n_k}) > k$ and $R(F_{n_1} + \ldots + F_{n_k}) < R(E)$. We are done after that, because this contradicts $\ensuremath{\mathcal A}$ being noetherian. By assumption, we can choose $n_1$ such that $D(F_{n_1}) \geq 1$. Assume we have constructed $n_1$, \ldots, $n_{k-1}$. There is an exact sequence
\[
0 \to (F_{n_1} + \ldots + F_{n_{k-1}}) \cap F_n \to (F_{n_1} + \ldots + F_{n_{k-1}}) \oplus F_n \to F_{n_1} + \ldots + F_{n_{k-1}} + F_n \to 0
\]
for any $n > n_{k-1}$, where intersection and sum are taken inside of $E$. Therefore,
\[
D(F_{n_1} + \ldots + F_{n_{k-1}} + F_n) = D(F_{n_1} + \ldots + F_{n_{k-1}}) + D(F_n) - D((F_{n_1} + \ldots + F_{n_{k-1}}) \cap F_n).
\]
By induction $D((F_{n_1} + \ldots + F_{n_{k-1}}) \cap F_n)$ is bounded from above and we obtain
\[
\lim_{n \to \infty} D(F_{n_1} + \ldots + F_{n_{k-1}} + F_n) = \infty.
\]
As before this is only possible if $R(F_{n_1} + \ldots + F_{n_{k-1}} + F_n) < R(E)$ for $n \gg 0$. Therefore, we can choose $n_k > n_{k-1}$ as claimed.
\end{proof}
We will follow a proof from \cite{Gra84:polygon_proof} (the main idea goes back at least to \cite{Sha76:degeneration_vector_bundles}) for the existence of Harder-Narasimhan filtrations, as reviewed in \cite{Bay11:lectures_notes_stability}.
\begin{prop}
\label{prop:hn_exists}
Let $Z: K_0(\ensuremath{\mathcal A}) \to \ensuremath{\mathbb{C}}$ be a stability function.
Assume that
\begin{itemize}
\item $\ensuremath{\mathcal A}$ is noetherian, and
\item the image of $R$ is discrete in $\ensuremath{\mathbb{R}}$.
\end{itemize}
Then Harder-Narasimhan filtrations exist in $\ensuremath{\mathcal A}$ with respect to $Z$.
\end{prop}
\begin{proof}
We will give a proof when the image of $D$ is discrete in $\ensuremath{\mathbb{R}}$ and leave the full proof as an exercise.
\begin{figure}[ht]
\centerline{
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(0,0) }*{\bullet}="v0"
!{(-2,1) }*{\bullet}="v1"
!{(-3,2) }*{\bullet}="v2"
!{(-3,4) }*{\bullet}="vn-2"
!{(-2,5) }*{\bullet}="vn-1"
!{(1,6) }*{\bullet}="vn"
!{(0.3,0) }*{v_0}="l0"
!{(-1.7,1.1) }*{v_1}="l1"
!{(-2.7,2) }*{v_2}="l2"
!{(-2.5,3.9) }*{v_{n-2}}="ln-2"
!{(-1.5,4.8) }*{v_{n-1}}="ln-1"
!{(1.3,6.1) }*{v_n}="ln"
"v0"-"v1"
"v0"-"vn"
"v1"-"v2"
"v2"-@{.}"vn-2"
"vn-2"-"vn-1"
"vn-1"-"vn"
}
}
\caption{The polygon $\ensuremath{\mathcal P}(E)$.}
\label{fig:polygon_hn}
\end{figure}
Let $E \in \ensuremath{\mathcal A}$. The goal is to construct a Harder-Narasimhan filtration for $E$. By $\ensuremath{\mathcal H}(E)$ we denote the convex hull of the set of all $Z(F)$ for $F \subset E$. By Lemma \ref{lem:bounded_degree} this set is bounded from the left. By $\ensuremath{\mathcal H}_l$ we denote the half plane to the left of the line through $Z(E)$ and $0$. If $E$ is semistable, we are done. Otherwise the set $\ensuremath{\mathcal P}(E) = \ensuremath{\mathcal H}(E) \cap \ensuremath{\mathcal H}_l$ is a convex polygon (see Figure \ref{fig:polygon_hn}).
Let $v_0 = 0, v_1, \ldots, v_n = Z(E)$ be the extremal vertices of $\ensuremath{\mathcal P}(E)$ in order of ascending imaginary part.
Choose $F_i \subset E$ with $Z(F_i) = v_i$ for $i = 1, \ldots, n-1$, and set $F_0 = 0$ and $F_n = E$. We will now prove the following three claims to conclude the proof.
\begin{enumerate}
\item The inclusion $F_{i-1} \subset F_i$ holds for all $i = 1, \ldots, n$.
\item The object $G_i = F_i / F_{i-1}$ is semistable for all $i = 1, \ldots, n$.
\item The inequalities $M(G_1) > M(G_2) > \ldots > M(G_n)$ hold.
\end{enumerate}
By definition of $\ensuremath{\mathcal H}(E)$ both $Z(F_{i-1} \cap F_i)$ and $Z(F_{i-1} + F_i)$ have to lie in $\ensuremath{\mathcal H}(E)$. Moreover, we have $R(F_{i-1} \cap F_i) \leq R(F_{i-1}) < R(F_i) \leq R(F_{i-1} + F_i)$. To see part (1) observe that
\[
Z(F_{i-1} + F_i) - Z(F_{i-1} \cap F_i) = (v_{i-1} - Z(F_{i-1} \cap F_i)) + (v_i - Z(F_{i-1} \cap F_i)).
\]
This is only possible if $Z(F_{i-1} \cap F_i) = Z(F_{i-1})$ and $Z(F_{i-1} + F_i) = Z(F_i)$ (see Figure \ref{fig:inclusion_hn}).
\begin{figure}[ht]
\centerline{
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(0,0) }*{\bullet}="intersection"
!{(-3,1) }*{\bullet}="vi-1"
!{(-4,2) }*{\bullet}="vi"
!{(-1,3) }*{\bullet}="sum"
!{(-3,3) }*{}="upper_limit"
!{(-1,0) }*{}="lower_limit"
!{(2,2) }*{}="upper_ray_right"
!{(2,1) }*{}="lower_ray_right"
!{(1.2,0) }*{Z(F_i \cap F_{i-1})}="lintersection"
!{(-3.3,0.8) }*{v_{i-1}}="lvi-1"
!{(-4.3,1.9) }*{v_i}="lvi"
!{(0.2,3) }*{Z(F_i + F_{i-1})}="lsum"
"vi-1"-"vi"
"vi-1"-@{.}"lower_limit"
"vi"-@{.}"upper_limit"
"vi"-"upper_ray_right"
"vi-1"-"lower_ray_right"
"intersection"-"vi-1"
"intersection"-"vi"
"intersection"-"sum"
}
}
\caption{Vectors adding up within a convex set.}
\label{fig:inclusion_hn}
\end{figure}
This implies $Z(F_{i-1}/(F_{i-1} \cap F_i)) = 0$ and the fact that $Z$ is a stability function shows $F_{i-1} = F_{i-1} \cap F_i$. This directly leads to (1).
Let $\overline{A} \subset G_i$ be a non-zero subobject with preimage $A \subset F_i$. Then $Z(A)$ has to lie in $\ensuremath{\mathcal H}(E)$, and $R(F_{i-1}) \leq R(A) \leq R(F_i)$. This implies that $Z(A) - Z(F_{i-1})$ has slope smaller than or equal to that of $Z(F_i) - Z(F_{i-1})$ (see Figure \ref{fig:semistability_hn}). But this is the same as $M(\overline{A}) \leq M(G_i)$, proving (2).
\begin{figure}[ht]
\centerline{
\xygraph{
!{<0cm,0cm>;<1cm,0cm>:<0cm,1cm>::}
!{(-2,1) }*{\bullet}="vi-1"
!{(-1,1.5) }*{\bullet}="ZA"
!{(-3,2) }*{\bullet}="vi"
!{(-2,3) }*{}="upper_limit"
!{(0,0) }*{}="lower_limit"
!{(1,2) }*{}="upper_ray_right"
!{(1,1) }*{}="lower_ray_right"
!{(-0.4,1.5) }*{Z(A)}="lZA"
!{(-2.3,0.8) }*{v_{i-1}}="lvi-1"
!{(-3.3,1.9) }*{v_i}="lvi"
"vi-1"-"vi"
"vi-1"-@{.}"lower_limit"
"vi"-@{.}"upper_limit"
"vi"-"upper_ray_right"
"vi-1"-"lower_ray_right"
"vi-1"-"ZA"
}
}
\caption{Subobjects of $G_i$ have smaller or equal slope.}
\label{fig:semistability_hn}
\end{figure}
The slope $M(G_i)$ is the slope of the line through $v_i$ and $v_{i-1}$. Convexity of $\ensuremath{\mathcal P}(E)$ implies (3).
\end{proof}
\begin{exercise}
The goal of this exercise is to remove the condition that the image of $D$ is discrete in the proof of Proposition \ref{prop:hn_exists}. Show that there are objects $F_i \subset E$ with $Z(F_i) = v_i$ for $i = 1, \ldots, n-1$. \emph{Hint: By definition of $\ensuremath{\mathcal P}(E)$ there is a sequence of objects $F_{j,i} \subset E$ such that
\[
\lim_{j \to \infty} Z(F_{j,i}) = Z(F_i).
\]
Since $R$ has discrete image, we can assume $R(F_{j,i}) = \Im v_i$ for all $j$. Replace the $F_{j,i}$ by an ascending chain of objects similarly to the proof of Lemma \ref{lem:bounded_degree}.}
\end{exercise}
\begin{ex}\label{ex:ExamplesStability}
For the stability functions in Example \ref{ex:StabilityCurvesAgain} (1) and (3) Harder-Narasimhan filtrations exist.
\end{ex}
We conclude the section with an example coming from representation theory.
\begin{exercise}
\label{ex:Quiver}
Let $A$ be a finite-dimensional algebra over $\ensuremath{\mathbb{C}}$ and $\ensuremath{\mathcal A}=\mathop{\mathrm{mod}}\nolimits\text{-}A$ be the category of finite-dimensional (right) $A$-modules. We denote the simple modules by $S_1, \ldots, S_m$. Pick $Z_i \in \ensuremath{\mathbb{C}}$ for $i = 1, \ldots, m$ with $\Im Z_i \geq 0$, and such that $\Im Z_i = 0$ implies $\Re Z_i < 0$.
\begin{enumerate}
\item Show that there is a unique homomorphism $Z: K_0(\ensuremath{\mathcal A}) \to \ensuremath{\mathbb{C}}$ with $Z(S_i) = Z_i$.
\item Show that $(\ensuremath{\mathcal A}, Z)$ is a stability condition.
\end{enumerate}
The corresponding projective moduli spaces of semistable $A$-modules were constructed in \cite{Kin94:moduli_quiver_reps}.
\end{exercise}
\begin{rmk}
\label{rmk:WeakStability}
There is also a weaker notion of stability on abelian categories, with similar properties.
We say that a homomorphism $Z$ is a \emph{weak stability function} if, for all non-zero $E\in\ensuremath{\mathcal A}$, we have
\[
\Im Z(E) \geq 0, \, \, \text{ and } \, \, \Im Z(E) = 0 \, \Rightarrow \, \Re Z(E) \leq 0.
\]
Semistable objects can be defined similarly, and Proposition \ref{prop:hn_exists} still holds. The proof can be extended to this more general case by demanding that the $F_i$ are maximal among those objects satisfying $Z(F_i) = v_i$. A \emph{weak stability condition} is then defined accordingly.
For example, slope stability of sheaves is coming from a weak stability condition on $\mathop{\mathrm{Coh}}\nolimits(X)$. This is going to be useful for Bridgeland stability on higher dimensional varieties (see Section \ref{sec:P3}, or \cite{BMT14:stability_threefolds, BMS14:abelian_threefolds, PT15:bridgeland_moduli_properties}).
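Concretely, in Example \ref{ex:StabilityCurvesAgain} (2), the only failure of $\overline{Z}_{\omega,B}$ as a stability function on $\mathop{\mathrm{Coh}}\nolimits(X)$ was the vanishing $\overline{Z}_{\omega,B}(T) = 0$ for torsion sheaves $T$ supported in codimension at least $2$; this is exactly what the weaker condition $\Re Z(E) \leq 0$ permits.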
\end{rmk}
\section{Bridgeland Stability}
\label{sec:bridgeland_stability}
In this section we review the basic definitions and results in the general theory of Bridgeland stability conditions. The key notions are the heart of a bounded t-structure and slicings recalled in Section \ref{subsec:heart}. We then introduce the two equivalent definitions in Section \ref{subsec:BridgelandDefinition}. Bridgeland's Deformation Theorem, together with a sketch of its proof, is in Section \ref{subsec:BridgelandDefoThm}. Finally, we discuss moduli spaces in Section \ref{subsec:moduli} and the fundamental wall and chamber structure of the space of stability conditions in Section \ref{subsec:WallChamberGeneral}. Some of the proofs we sketch here will be fully proved in the case of surfaces in the next section.
Throughout this section, we let $X$ be a smooth projective variety over $\ensuremath{\mathbb{C}}$. We also denote the bounded derived category of coherent sheaves on $X$ by $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X):=\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(\mathop{\mathrm{Coh}}\nolimits(X))$. A quick introduction to bounded derived categories can be found in Appendix \ref{sec:derived_categories}. The results in this section are still true in the more general setting of triangulated categories.
\subsection{Heart of a bounded t-structure and slicing}\label{subsec:heart}
For the general theory of t-structures we refer to \cite{BBD82:pervers_sheaves}.
In this section we content ourselves with discussing the heart of a bounded t-structure, since giving a heart is equivalent to giving the corresponding bounded t-structure.
\begin{defn}
\label{def:heart}
The \emph{heart of a bounded t-structure} on $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ is a
full additive subcategory $\ensuremath{\mathcal A} \subset \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ such that
\begin{itemize}
\item for integers $i > j$ and $A,B \in \ensuremath{\mathcal A}$, the vanishing
$\mathop{\mathrm{Hom}}\nolimits(A[i],B[j]) = \mathop{\mathrm{Hom}}\nolimits(A, B[j-i]) = 0$ holds, and
\item for all $E \in \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ there are integers $k_1 > \ldots > k_m$, objects $E_i \in \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$, $A_i \in \ensuremath{\mathcal A}$ for $i = 1, \ldots, m$ and a collection of triangles
\[
\xymatrix{
0=E_0 \ar[r] & E_1 \ar[r] \ar[d] & E_2 \ar[r] \ar[d] & \dots \ar[r] & E_{m-1}
\ar[r] \ar[d] & E_m = E \ar[d] \\
& A_1[k_1] \ar@{-->}[lu] & A_2[k_2] \ar@{-->}[lu] & & A_{m-1}[k_{m-1}]
\ar@{-->}[lu] & A_m[k_m]. \ar@{-->}[lu] }
\]
\end{itemize}
\end{defn}
\begin{lem}
The heart $\ensuremath{\mathcal A}$ of a bounded t-structure in $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ is abelian.
\end{lem}
\begin{proof}[Sketch of the proof]
Exact sequences in $\ensuremath{\mathcal A}$ are nothing but exact triangles in $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ all of whose objects are in $\ensuremath{\mathcal A}$. To define the kernel and cokernel of a morphism $f\colon A \to B$, for $A,B\in\ensuremath{\mathcal A}$, we can proceed as follows. Complete the morphism $f$ to an exact triangle
\[
A \stackrel{f}{\to} B \to C \to A[1].
\]
By the definition of a heart, we have a triangle
\[
C_{>0} \to C \to C_{\leq 0} \to C_{>0}[1],
\]
where $C_{>0}$ belongs to the category generated by extensions of $\ensuremath{\mathcal A}[i]$, with $i>0$, and $C_{\leq0}$ to the category generated by extensions of $\ensuremath{\mathcal A}[j]$, with $j\leq 0$.
Consider the composite map $B\to C\to C_{\leq 0}$.
Then an easy diagram chase shows that $C_{\leq 0}\in\ensuremath{\mathcal A}$.
Similarly, the composite map $C_{>0}\to C \to A[1]$ gives $C_{>0}\in\ensuremath{\mathcal A}[1]$.
Then $\mathop{\mathrm{Ker}}\nolimits(f)=C_{>0}[-1]$ and $\mathop{\mathrm{Cok}}\nolimits(f)=C_{\leq 0}$.
\end{proof}
The standard example of a heart of a bounded t-structure on $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ is given by $\mathop{\mathrm{Coh}}\nolimits(X)$. If $\ensuremath{\mathcal A}\subset\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ is an abelian subcategory whose inclusion extends to an equivalence $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(\ensuremath{\mathcal A}) \cong \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$, then $\ensuremath{\mathcal A}$ is the heart of a bounded t-structure. The converse does not hold (see Exercise \ref{exer:Boston1222016} below), but this is still one of the most important examples for having an intuition about this notion.
In particular, given $E\in\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$, the objects $A_1,\ldots,A_m\in\ensuremath{\mathcal A}$ in the definition of a heart are its \emph{cohomology objects} with respect to $\ensuremath{\mathcal A}$. They are denoted $A_1=:\ensuremath{\mathcal H}^{-k_1}_\ensuremath{\mathcal A}(E),\ldots,A_m=:\ensuremath{\mathcal H}^{-k_m}_\ensuremath{\mathcal A}(E)$. In the case $\ensuremath{\mathcal A}=\mathop{\mathrm{Coh}}\nolimits(X)$, these are nothing but the cohomology sheaves of a complex. These cohomology objects have long exact sequences, just as the cohomology sheaves of complexes do.
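For example, if $E\in\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ has exactly two non-vanishing cohomology objects $\ensuremath{\mathcal H}^{-1}_\ensuremath{\mathcal A}(E)$ and $\ensuremath{\mathcal H}^{0}_\ensuremath{\mathcal A}(E)$, then the filtration in Definition \ref{def:heart} reduces to the single exact triangle
\[
\ensuremath{\mathcal H}^{-1}_\ensuremath{\mathcal A}(E)[1] \to E \to \ensuremath{\mathcal H}^{0}_\ensuremath{\mathcal A}(E) \to \ensuremath{\mathcal H}^{-1}_\ensuremath{\mathcal A}(E)[2],
\]
a special case which will appear repeatedly for tilted hearts on surfaces.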
\begin{exercise}\label{exer:Boston1222016}
Let $X=\mathbb{P}^1$.
Let $\ensuremath{\mathcal A}\subset\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(\ensuremath{\mathbb{P}}^1)$ be the additive subcategory generated by extensions of $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}[2]$ and $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(1)$.
\begin{enumerate}
\item Let $A\in\ensuremath{\mathcal A}$. Show that there exist $a_0,a_1\geq0$ such that
\[
A \cong \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}^{\oplus a_0}[2] \oplus \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^1}(1)^{\oplus a_1}.
\]
\item Show that $\ensuremath{\mathcal A}$ is the heart of a bounded t-structure on $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(\ensuremath{\mathbb{P}}^1)$.
\emph{Hint: Use Theorem \ref{thm:P1}.}
\item Show that, as an abelian category, $\ensuremath{\mathcal A}$ is equivalent to the direct sum of two copies of the category of finite-dimensional vector spaces.
\item Show that $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(\ensuremath{\mathcal A})$ is not equivalent to $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(\ensuremath{\mathbb{P}}^1)$.
\end{enumerate}
\end{exercise}
\begin{exercise}
Let $\ensuremath{\mathcal A}\subset\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ be the heart of a bounded t-structure.
Show that there is a natural identification between Grothendieck groups
\[
K_0(\ensuremath{\mathcal A}) = K_0(X) = K_0(\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)).
\]
\end{exercise}
A slicing further refines the heart of a bounded t-structure.
\begin{defn}[\cite{Bri07:stability_conditions}]
A \emph{slicing} $\ensuremath{\mathcal P}$ of $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ is a collection of subcategories $\ensuremath{\mathcal P}(\phi) \subset
\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ for all $\phi \in \mathbb{R}$ such that
\begin{itemize}
\item $\ensuremath{\mathcal P}(\phi)[1] = \ensuremath{\mathcal P}(\phi + 1)$,
\item if $\phi_1 > \phi_2$ and $A \in \ensuremath{\mathcal P}(\phi_1)$, $B \in \ensuremath{\mathcal P}(\phi_2)$ then
$\mathop{\mathrm{Hom}}\nolimits(A,B) = 0$,
\item for all $E \in \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ there are real numbers $\phi_1 > \ldots > \phi_m$, objects $E_i \in \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$, $A_i \in \ensuremath{\mathcal P}(\phi_i)$ for $i = 1, \ldots, m$ and a collection of triangles
\[
\xymatrix{
0=E_0 \ar[r] & E_1 \ar[r] \ar[d] & E_2 \ar[r] \ar[d] & \ldots \ar[r] & E_{m-1}
\ar[r] \ar[d] & E_m = E \ar[d] \\
& A_1 \ar@{-->}[lu] & A_2 \ar@{-->}[lu] & & A_{m-1} \ar@{-->}[lu] & A_m
\ar@{-->}[lu] }
\]
\end{itemize}
For this filtration of an element $E \in \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ we write $\phi^-(E) := \phi_m$
and $\phi^+(E) := \phi_1$. Moreover, for $E \in \ensuremath{\mathcal P}(\phi)$ we call $\phi(E):=\phi$ the \emph{phase} of $E$.
\end{defn}
The filtration appearing in the last property is called the \emph{Harder-Narasimhan filtration} of $E$. By setting
$\ensuremath{\mathcal A} := \ensuremath{\mathcal P}((0,1])$ to be the extension closure of the subcategories $\{\ensuremath{\mathcal P}(\phi):\phi \in (0,1]\}$ one gets the heart of a bounded t-structure from a slicing\footnote{More generally, by fixing $\phi_0\in\ensuremath{\mathbb{R}}$, the category $\ensuremath{\mathcal P}((\phi_0,\phi_0+1])$ is also a heart of a bounded t-structure. A slicing is a family of hearts, parameterized by $\ensuremath{\mathbb{R}}$.}. In both cases, for a slicing and for the heart of a bounded t-structure, the Harder-Narasimhan filtration is unique, similarly to Exercise \ref{exercise:hn_filtration}.
\begin{exercise}\label{exercise:tStructuresContained}
Let $\ensuremath{\mathcal A},\ensuremath{\mathcal B}\subset \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ be hearts of bounded t-structures. Show that if $\ensuremath{\mathcal A} \subset \ensuremath{\mathcal B}$, then $\ensuremath{\mathcal A}=\ensuremath{\mathcal B}$. Similarly, if $\ensuremath{\mathcal P}$ and $\ensuremath{\mathcal P}'$ are two slicings and $\ensuremath{\mathcal P}'(\phi) \subset \ensuremath{\mathcal P}(\phi)$ for all $\phi \in \ensuremath{\mathbb{R}}$, then $\ensuremath{\mathcal P}$ and $\ensuremath{\mathcal P}'$ are identical.
\end{exercise}
\subsection{Bridgeland stability conditions: definition}\label{subsec:BridgelandDefinition}
The definition of a Bridgeland stability condition will depend on some additional data.
More precisely, we fix a finite rank lattice $\Lambda$ and a surjective group homomorphism
\[
v\colon K_0(X) \ensuremath{\twoheadrightarrow} \Lambda.
\]
We also fix a norm $\| \cdot \|$ on $\Lambda_\ensuremath{\mathbb{R}}$. Recall that any two norms on the finite-dimensional vector space $\Lambda_\ensuremath{\mathbb{R}}$ are equivalent, and the subsequent definitions will not depend on this choice.
\begin{ex}
\label{ex:NumericalGrothendieck}
Let $T$ be the set of $v \in K_0(X)$ such that $\chi(v,w) = 0$ for all $w \in K_0(X)$. Then the \emph{numerical Grothendieck group} $K_{\mathop{\mathrm{num}}\nolimits}(X)$ is defined as $K_0(X)/T$.
We have that $K_{\mathop{\mathrm{num}}\nolimits}(X)$ is a finitely generated $\ensuremath{\mathbb{Z}}$-lattice.
The choice of $\Lambda=K_{\mathop{\mathrm{num}}\nolimits}(X)$ together with the natural projection is an example of a pair $(\Lambda,v)$ as before.
If $X$ is a surface, then $K_{\mathop{\mathrm{num}}\nolimits}(X)$ is nothing but the image of the Chern character map
\[
\mathop{\mathrm{ch}}\nolimits\colon K_0(X) \to H^*(X,\ensuremath{\mathbb{Q}}),
\]
and the map $v$ is the Chern character.
For K3 or abelian surfaces, $K_{\mathop{\mathrm{num}}\nolimits}(X) = H^*_{\mathrm{alg}}(X,\ensuremath{\mathbb{Z}}) = H^0(X,\ensuremath{\mathbb{Z}}) \oplus \mathop{\mathrm{NS}}\nolimits(X) \oplus H^4(X,\ensuremath{\mathbb{Z}})$.
\end{ex}
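As a concrete instance (a standard computation, which we only sketch): for $X=\ensuremath{\mathbb{P}}^2$ the Chern character identifies $K_{\mathop{\mathrm{num}}\nolimits}(\ensuremath{\mathbb{P}}^2)$ with a rank $3$ lattice,
\[
v(E) = \left(\mathop{\mathrm{ch}}\nolimits_0(E),\mathop{\mathrm{ch}}\nolimits_1(E),\mathop{\mathrm{ch}}\nolimits_2(E)\right) \in \ensuremath{\mathbb{Z}} \oplus \ensuremath{\mathbb{Z}}\cdot H \oplus \tfrac{1}{2}\ensuremath{\mathbb{Z}}\cdot H^2,
\]
where $H$ is the class of a line; for example, $v(\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^2}(1)) = \left(1,H,\tfrac{1}{2}H^2\right)$.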
\begin{defn}[\cite{Bri07:stability_conditions}]\label{def:Bridgeland1}
A \emph{Bridgeland stability condition} on $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ is a pair $\sigma
= (\ensuremath{\mathcal P},Z)$ where:
\begin{itemize}
\item $\ensuremath{\mathcal P}$ is a slicing of $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$, and
\item $Z\colon \Lambda \to \ensuremath{\mathbb{C}}$ is an additive
homomorphism, called the \emph{central charge},
\end{itemize}
satisfying the following properties:
\begin{enumerate}
\item For any non-zero $E\in\ensuremath{\mathcal P}(\phi)$, we have
\[
Z(v(E)) \in \ensuremath{\mathbb{R}}_{>0} \cdot e^{\sqrt{-1} \pi \phi}.
\]
\item (\emph{support property})
\[
C_{\sigma}:= \mathop{\mathrm{inf}}\nolimits \left\{ \frac{|Z(v(E))|}{\| v(E) \|}\, : \, 0\neq E \in \ensuremath{\mathcal P}(\phi), \phi \in \ensuremath{\mathbb{R}} \right\} > 0.
\]
\end{enumerate}
\end{defn}
As before, the heart of a bounded t-structure can be defined by $\ensuremath{\mathcal A} := \ensuremath{\mathcal P}((0,1])$.
Objects in $\ensuremath{\mathcal P}(\phi)$ are called $\sigma$-semistable of phase $\phi$.
The \emph{mass} of an object $E \in \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ is defined as $m_\sigma(E)=\sum_j |Z(A_j)|$, where $A_1,\ldots,A_m$ are the Harder-Narasimhan factors of $E$.
\begin{exercise}\label{exercise:StableObjectsBridgelandStability}
Let $\sigma=(\ensuremath{\mathcal P},Z)$ be a stability condition.
Show that the category $\ensuremath{\mathcal P}(\phi)$ is abelian and of finite length (i.e., it is noetherian and artinian). \emph{Hint: Use the support property to show there are only finitely many classes of subobjects in $\ensuremath{\mathcal P}(\phi)$ of a given object in $\ensuremath{\mathcal P}(\phi)$.}
\end{exercise}
The simple objects in $\ensuremath{\mathcal P}(\phi)$ are called \emph{$\sigma$-stable}.
As in the case of stability of sheaves on curves (see Remark \ref{rmk:JHfiltrations}), since the category $\ensuremath{\mathcal P}(\phi)$ is of finite length, $\sigma$-semistable objects admit finite \emph{Jordan-H\"older filtrations} in $\sigma$-stable ones of the same phase.
The notion of \emph{S-equivalent} $\sigma$-semistable objects is defined analogously as well.
The support property was introduced in \cite{KS08:wall_crossing}. It is equivalent to Bridgeland's notion of a full locally-finite stability condition in \cite[Definition 4.2]{Bri08:stability_k3} (see \cite[Proposition B.4]{BM11:local_p2}).
There is an equivalent formulation: There is a symmetric bilinear form $Q$ on $\Lambda_\ensuremath{\mathbb{R}}$ such that
\begin{enumerate}
\item all semistable objects $E\in \ensuremath{\mathcal P}$ satisfy the inequality $Q(v(E),v(E))
\geq 0$ and
\item all non-zero vectors $v \in \Lambda_\ensuremath{\mathbb{R}}$ with $Z(v) = 0$ satisfy $Q(v, v) < 0$.
\end{enumerate}
The inequality $Q(v(E),v(E)) \geq 0$ can be viewed as some generalization of the classical Bogomolov inequality for vector bundles; we will see the precise relation in Section \ref{sec:surfaces}.
By abuse of notation we will often forget $v$ from the notation.
For example, we will write $Q(E,F)$ instead of $Q(v(E), v(F))$.
We will also use the notation $Q(E) = Q(E,E)$.
\begin{exercise}
Show that the previous two definitions of the support property are equivalent. \emph{Hint: Use $Q(w) = C^2 |Z(w)|^2 - ||w||^2$.}
\end{exercise}
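For one direction, here is a sketch, matching the hint with $C=1/C_\sigma$: after replacing $\|\cdot\|$ by an equivalent Euclidean norm, set
\[
Q(w) := \frac{1}{C_\sigma^2}\,|Z(w)|^2 - \|w\|^2,
\]
a quadratic form on $\Lambda_\ensuremath{\mathbb{R}}$. If $Z(w)=0$ and $w\neq0$, then $Q(w)=-\|w\|^2<0$; if $E$ is non-zero and semistable, then the definition of $C_\sigma$ gives $|Z(v(E))|\geq C_\sigma\,\|v(E)\|$, and so $Q(v(E))\geq0$.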
Definition \ref{def:Bridgeland1} is short and good for abstract argumentation, but it is not very practical for finding concrete examples. The following lemma shows that this definition of a stability condition on $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ and the one given in the previous section for an arbitrary abelian category $\ensuremath{\mathcal A}$ are closely related.
\begin{lem}[{\cite[Proposition 5.3]{Bri07:stability_conditions}}]
Giving a stability condition $(\ensuremath{\mathcal P},Z)$ on $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ is equivalent to giving a stability condition $(\ensuremath{\mathcal A}, Z)$ in the sense of Definition \ref{defn:stability_abelian}, where $\ensuremath{\mathcal A}$ is the heart of a bounded t-structure on $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$, satisfying the support property
\[
\mathop{\mathrm{inf}}\nolimits \left\{ \frac{|Z(v(E))|}{\| v(E) \|}\, : \, 0\neq E \in \ensuremath{\mathcal A}\ \mathrm{semistable}\right\} > 0.
\]
\end{lem}
\begin{proof}
Assume we have a stability condition $(\ensuremath{\mathcal P},Z)$ on $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$. Then we can define the heart $\ensuremath{\mathcal A} = \ensuremath{\mathcal P}((0,1])$, and $(\ensuremath{\mathcal A}, Z)$ is a stability condition in the sense of Definition \ref{defn:stability_abelian} satisfying the support property. Conversely, we can define $\ensuremath{\mathcal P}(\phi)$ to be the category of semistable objects of phase $\phi$ in $\ensuremath{\mathcal A}$ whenever $\phi \in (0,1]$. This definition is then extended to any $\phi \in \ensuremath{\mathbb{R}}$ via the property $\ensuremath{\mathcal P}(\phi)[1] = \ensuremath{\mathcal P}(\phi + 1)$.
\end{proof}
From here on we will interchangeably use $(\ensuremath{\mathcal P},Z)$ and $(\ensuremath{\mathcal A}, Z)$ to denote a stability condition.
\begin{ex}\label{ex:BridgelandStabilityCurves}
Let $C$ be a curve.
Then the stability condition in Example \ref{ex:StabilityCurvesAgain} (1) gives a Bridgeland stability condition on $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(C)$.
The lattice $\Lambda$ is nothing but $H^0(C,\ensuremath{\mathbb{Z}})\oplus H^2(C,\ensuremath{\mathbb{Z}})$, the map $v$ is nothing but $(r,d)$, and we can choose $Q = 0$.
\end{ex}
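Concretely, with $Z(E)=-\deg(E)+\sqrt{-1}\,\mathop{\mathrm{rk}}(E)$ as above: a skyscraper sheaf has $Z(\ensuremath{\mathbb{C}}(x))=-1$, hence phase $1$, while a line bundle of degree $d$ has $Z=-d+\sqrt{-1}$, hence phase $\frac{1}{\pi}\arg\left(-d+\sqrt{-1}\right)\in(0,1)$, an increasing function of $d$ which tends to $1$ as $d\to+\infty$.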
\begin{exercise}
Show that the stability condition in Exercise \ref{ex:Quiver} gives a Bridgeland stability condition on $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(A)=\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(\mathop{\mathrm{mod}}\nolimits\text{-}A)$. What is $Q$?
\end{exercise}
\subsection{Bridgeland's deformation theorem}\label{subsec:BridgelandDefoThm}
The main theorem in \cite{Bri07:stability_conditions} is the fact that the set of stability conditions can be given the structure of a complex manifold.
Let $\mathop{\mathrm{Stab}}\nolimits(X)$ be the set of stability conditions on $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ (with respect to $\Lambda$ and $v$). This set can be given a topology as the coarsest topology such that for any $E \in \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ the maps $(\ensuremath{\mathcal A},Z) \mapsto Z$, $(\ensuremath{\mathcal A}, Z) \mapsto \phi^+(E)$ and $(\ensuremath{\mathcal A}, Z) \mapsto \phi^-(E)$ are continuous. Equivalently, the topology is induced by the generalized (i.e., with values in $[0,+\infty]$) metric
\[
d(\sigma_1,\sigma_2) = \underset{0\neq E \in \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)}{\sup}\left\{|\phi^+_{\sigma_1}(E)-\phi^+_{\sigma_2}(E)|,\, | \phi^-_{\sigma_1}(E) - \phi^-_{\sigma_2}(E)|,\, \|Z_1-Z_2\| \right\},
\]
for $\sigma_i=(\ensuremath{\mathcal P}_i,Z_i)\in\mathop{\mathrm{Stab}}\nolimits(X)$, $i=1,2$; here, by abuse of notation, $\| \cdot \|$ also denotes the induced operator norm on $\mathop{\mathrm{Hom}}\nolimits(\Lambda,\ensuremath{\mathbb{C}})$.
\begin{rmk}\label{rmk:GroupActions}
There are two group actions on the space of stability conditions.
(1) The universal cover $\widetilde{\mathop{\mathrm{GL}}\nolimits}^+(2,\ensuremath{\mathbb{R}})$ of $\mathop{\mathrm{GL}}\nolimits^+(2,\ensuremath{\mathbb{R}})$, the $2\times 2$ matrices with real entries and positive determinant, acts on the right of $\mathop{\mathrm{Stab}}\nolimits(X)$ as follows.
We first recall the presentation
\[
\widetilde{\mathop{\mathrm{GL}}\nolimits}^+(2,\ensuremath{\mathbb{R}}) = \left\{ (T,f)\,:\, \begin{array}{l}
f\colon \ensuremath{\mathbb{R}}\to\ensuremath{\mathbb{R}}\ \mathrm{increasing}, \ f(\phi+1)=f(\phi)+1\\
T\in\mathop{\mathrm{GL}}\nolimits^+(2,\ensuremath{\mathbb{R}})\\
f|_{\ensuremath{\mathbb{R}}/2\ensuremath{\mathbb{Z}}} = T|_{(\ensuremath{\mathbb{R}}^2 \backslash \{ 0 \})/\ensuremath{\mathbb{R}}_{>0}}
\end{array} \right\}
\]
Then $(T,f)$ acts on $(\ensuremath{\mathcal P},Z)$ by $(T,f)\cdot (\ensuremath{\mathcal P},Z) = (\ensuremath{\mathcal P}',Z')$, where $Z'=T^{-1}\circ Z$ and $\ensuremath{\mathcal P}'(\phi) = \ensuremath{\mathcal P}(f(\phi))$.
(2) The group of exact autoequivalences $\mathop{\mathrm{Aut}}\nolimits_\Lambda(\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X))$ of $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$, whose action $\Phi_*$ on $K_0(X)$ is compatible with the map $v\colon K_0(X) \to \Lambda$, acts on the left of $\mathop{\mathrm{Stab}}\nolimits(X)$ by $\Phi\cdot (\ensuremath{\mathcal P},Z) = (\Phi(\ensuremath{\mathcal P}), Z \circ \Phi_*)$.
In \cite{BB13:autoequivalences_k3}, Bayer and Bridgeland use this action and a description of the geometry of the space of stability conditions on K3 surfaces of Picard rank $1$ to describe the full group of derived autoequivalences. This idea should work for all K3 surfaces, as envisioned by Bridgeland in \cite[Conjecture 1.2]{Bri08:stability_k3}.
\end{rmk}
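A simple instance relating the two actions: the shift functor $[1]$ acts on a stability condition in the same way as the element $\left(-\mathop{\mathrm{id}}\nolimits,\phi\mapsto\phi+1\right)\in\widetilde{\mathop{\mathrm{GL}}\nolimits}^+(2,\ensuremath{\mathbb{R}})$; indeed, $[1]$ acts by $-1$ on $K_0(X)$, so both send $(\ensuremath{\mathcal P},Z)$ to $\left(\ensuremath{\mathcal P}(\bullet+1),-Z\right)$.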
\begin{thm}[\cite{Bri07:stability_conditions}]\label{thm:BridgelandMain}
The map $\ensuremath{\mathcal Z}\colon \mathop{\mathrm{Stab}}\nolimits(X) \to \mathop{\mathrm{Hom}}\nolimits(\Lambda, \ensuremath{\mathbb{C}})$ given by $(\ensuremath{\mathcal A},Z) \mapsto Z$ is a local homeomorphism. In particular, $\mathop{\mathrm{Stab}}\nolimits(X)$ is a complex manifold of dimension $\mathop{\mathrm{rk}}(\Lambda)$.
\end{thm}
\begin{proof}[Ideas from the proof]
We follow the presentation in \cite[Section 5.5]{Bay11:lectures_notes_stability} and we refer to it for the complete argument.
Let $\sigma=(\ensuremath{\mathcal A},Z)\in\mathop{\mathrm{Stab}}\nolimits(X)$.
We need to prove that any group homomorphism $W$ near $Z$ extends to a stability condition near $\sigma$.
The first key property is the following (\cite[Lemma 5.5.4]{Bay11:lectures_notes_stability}): The function $C:\mathop{\mathrm{Stab}}\nolimits(X)\to\ensuremath{\mathbb{R}}_{>0}$, $\sigma\mapsto C_\sigma$ is continuous, where $C_\sigma$ is the constant appearing in the support property in Definition \ref{def:Bridgeland1}.
By using this, and the $\widetilde{\mathop{\mathrm{GL}}\nolimits}^+(2,\ensuremath{\mathbb{R}})$-action described in Remark \ref{rmk:GroupActions} (by just rotating by $\pi/2$), it is possible to reduce to the following case:
We let $W\colon \Lambda\to\ensuremath{\mathbb{C}}$ satisfy the assumptions
\begin{itemize}
\item $\Im W = \Im Z$;
\item $\| W- Z \| < \epsilon C_\sigma$, with $\epsilon<\frac{1}{8}$.
\end{itemize}
The claim is that $(\ensuremath{\mathcal A},W)\in\mathop{\mathrm{Stab}}\nolimits(X)$.
To prove this, under our assumptions, the only thing needed is to show that HN filtrations exist with respect to $W$.
We proceed as in the proof of Proposition \ref{prop:hn_exists}.
We need an analogue of Lemma \ref{lem:bounded_degree} and to show that there are only finitely many vertices in the Harder-Narasimhan polygon.
The idea for these is the following.
Let $E\in\ensuremath{\mathcal A}$.
\begin{itemize}
\item The existence of HN filtrations for $Z$ implies there exists a constant $\Gamma_E$ such that
\[
\Re Z(F) \geq \Gamma_E + m_\sigma(F),
\]
for all $F\subset E$.
\item By taking the HN filtration of $F$ with respect to $Z$, and by using the support property for $\sigma$, we get
\[
|\Re W(F_i) - \Re Z(F_i)| \leq \epsilon | Z(F_i) |,
\]
where the $F_i$'s are the HN factors of $F$ with respect to $Z$.
\item By summing up, we get
\[
\Re W(F) \geq \Re Z(F) - \epsilon m_\sigma(F) \geq \Gamma_E + (\underbrace{1-\epsilon}_{>0}) m_\sigma(F) > \Gamma_E,
\]
namely Lemma \ref{lem:bounded_degree} holds.
\item If $F\subset E$ corresponds to an extremal point of the polygon for $W$, then
\[
\max \{ 0,\Re W(E)\} > \Re W(F),
\]
and so
\[
m_\sigma(F) \leq \frac{\max \{ 0,\Re W(E)\} - \Gamma_E}{1-\epsilon}=: \Gamma_E'.
\]
\item Again, by taking the HN factors of $F$ with respect to $Z$, we get $|Z(F_i)|<\Gamma_E'$, and so by using the support property,
\[
\| v(F_i) \| < \frac{\Gamma_E'}{C_\sigma}.
\]
\item From this we deduce there are only finitely many classes, and so finitely many vertices in the polygon. The proof of Proposition \ref{prop:hn_exists} applies now and this gives the existence of Harder-Narasimhan filtrations with respect to $W$.
\item The support property follows now easily.
\end{itemize}
\end{proof}
We can think about Bridgeland's main theorem more explicitly in terms of the quadratic form $Q$ appearing in the support property. This approach actually gives an ``$\epsilon$-free proof'' of Theorem \ref{thm:BridgelandMain} and appears in \cite{Bay16:short_proof}.
The main idea is contained in the following proposition, which is \cite[Proposition A.5]{BMS14:abelian_threefolds}. We will not prove this, but we will see it explicitly in the case of surfaces.
\begin{prop}\label{prop:ExplicitBridgelandMain}
Assume that $\sigma=(\ensuremath{\mathcal P},Z)\in\mathop{\mathrm{Stab}}\nolimits(X)$ satisfies the support property with respect to a quadratic form $Q$ on $\Lambda_\ensuremath{\mathbb{R}}$.
Consider the open subset of $\mathop{\mathrm{Hom}}\nolimits(\Lambda,\ensuremath{\mathbb{C}})$ consisting of central charges on whose kernel $Q$ is negative definite, and let $U$ be the connected component containing $Z$. Let $\ensuremath{\mathcal U}\subset\mathop{\mathrm{Stab}}\nolimits(X)$ be the connected component of the preimage $\ensuremath{\mathcal Z}^{-1}(U)$ containing $\sigma$. Then:
\begin{enumerate}
\item The restriction $\ensuremath{\mathcal Z}|_{\ensuremath{\mathcal U}}\colon \ensuremath{\mathcal U} \to U$ is a covering map.
\item Any stability condition $\sigma'\in\ensuremath{\mathcal U}$ satisfies the support property with respect to the same quadratic form $Q$.
\end{enumerate}
\end{prop}
\begin{ex}
In general, spaces of stability conditions are hard to describe explicitly. For curves, however, everything is known.
As before, we keep $\Lambda$ and $v$ as in Example \ref{ex:BridgelandStabilityCurves}.
\begin{enumerate}
\item $\mathop{\mathrm{Stab}}\nolimits(\ensuremath{\mathbb{P}}^1) \cong \ensuremath{\mathbb{C}}^2$ (see \cite{Oka06:P1}).
\item Let $C$ be a curve of genus $g\geq1$. Then $\mathop{\mathrm{Stab}}\nolimits(C)\cong \ensuremath{\mathbb{H}} \times \ensuremath{\mathbb{C}}\, (= \sigma_0 \cdot \widetilde{\mathop{\mathrm{GL}}\nolimits}^+(2,\ensuremath{\mathbb{R}}))$ (see \cite{Bri07:stability_conditions, Mac07:curves}), where $\sigma_0$ is the stability condition $(\mathop{\mathrm{Coh}}\nolimits(C),-d+\sqrt{-1}\, r)$ in Example \ref{ex:BridgelandStabilityCurves}.
\end{enumerate}
Let $\sigma=(\ensuremath{\mathcal P},Z)\in\mathop{\mathrm{Stab}}\nolimits(C)$. We will examine only the case $g \geq 1$.
The key point of the proof is to show that the skyscraper sheaves $\ensuremath{\mathbb{C}}(x)$, for $x\in C$, and all line bundles lie in the category $\ensuremath{\mathcal P}((\phi_0,\phi_0+1])$, for some $\phi_0\in\ensuremath{\mathbb{R}}$, and that they are all $\sigma$-stable.
Assume first that $\ensuremath{\mathbb{C}}(x)$ is not $\sigma$-semistable, for some $x\in C$.
By taking its HN filtration, we obtain an exact triangle
\[
A \to \ensuremath{\mathbb{C}}(x) \to B \to A[1]
\]
with $\mathop{\mathrm{Hom}}\nolimits^{\leq 0}(A,B)=0$\footnote{More precisely, $A\in\ensuremath{\mathcal P}(<\phi_0)$ and $B\in\ensuremath{\mathcal P}(\geq\phi_0)$, for some $\phi_0\in\ensuremath{\mathbb{R}}$.}.
By taking cohomology sheaves, we obtain a long exact sequence of coherent sheaves on $C$
\[
0\to \ensuremath{\mathcal H}^{-1}(B) \to \ensuremath{\mathcal H}^0(A) \to \ensuremath{\mathbb{C}}(x) \stackrel{f}{\to} \ensuremath{\mathcal H}^0(B) \to \ensuremath{\mathcal H}^1(A) \to 0,
\]
and $\ensuremath{\mathcal H}^{i-1}(B)\cong \ensuremath{\mathcal H}^i(A)$, for all $i\neq 0,1$.
But, since $C$ is smooth of dimension $1$, an object $F$ in $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(C)$ is isomorphic to the direct sum of its cohomology sheaves: $F\cong \oplus_i \ensuremath{\mathcal H}^i(F)[-i]$.
Since $\mathop{\mathrm{Hom}}\nolimits^{\leq0}(A,B)=0$, this gives $\ensuremath{\mathcal H}^{i-1}(B)\cong \ensuremath{\mathcal H}^i(A)=0$, for all $i\neq 0,1$.
We look at the case $f=0$. The case in which $f\neq 0$ (and so, $f$ is injective) can be dealt with similarly.
In this case, $\ensuremath{\mathcal H}^{0}(B)\cong \ensuremath{\mathcal H}^1(A)$ and so they are both trivial as well, since otherwise $\mathop{\mathrm{Hom}}\nolimits^{-1}(A,B)\neq0$.
Therefore, $A\cong \ensuremath{\mathcal H}^0(A)$ and $B\cong\ensuremath{\mathcal H}^{-1}(B)[1]$.
But, by using Serre duality, we have
\begin{align*}
0\neq \mathop{\mathrm{Hom}}\nolimits(\ensuremath{\mathcal H}^{-1}(B),\ensuremath{\mathcal H}^0(A))\hookrightarrow \mathop{\mathrm{Hom}}\nolimits(\ensuremath{\mathcal H}^{-1}(B), \ensuremath{\mathcal H}^0(A)\otimes K_C)&\cong\mathop{\mathrm{Hom}}\nolimits(\ensuremath{\mathcal H}^0(A),\ensuremath{\mathcal H}^{-1}(B)[1])\\ &\cong\mathop{\mathrm{Hom}}\nolimits(A,B),
\end{align*}
a contradiction.
We now claim that $\ensuremath{\mathbb{C}}(x)$ is actually stable.
Indeed, a similar argument as before shows that if $\ensuremath{\mathbb{C}}(x)$ is not $\sigma$-stable, then the only possibility is that all its stable factors are isomorphic to a single object $K$.
But then, by looking at cohomology sheaves, $K$ must be a sheaf as well, so that $\ensuremath{\mathbb{C}}(x)$ would be an iterated extension of copies of the sheaf $K$; this is impossible, since $\ensuremath{\mathbb{C}}(x)$ has length one as a coherent sheaf.
Set $\phi_x$ as the phase of $\ensuremath{\mathbb{C}}(x)$ in $\sigma$.
In the same way, all line bundles are $\sigma$-stable as well.
For a line bundle $A$ on $C$, we set $\phi_A$ as the phase of $A$ in $\sigma$.
The existence of maps $A\to\ensuremath{\mathbb{C}}(x)$ and $\ensuremath{\mathbb{C}}(x)\to A[1]$ gives the inequalities
\[
\phi_x-1\leq\phi_A\leq\phi_x.
\]
Since $A$ and $\ensuremath{\mathbb{C}}(x)$ are both $\sigma$-stable, the strict inequalities hold.
Hence, $Z$ is an orientation-preserving isomorphism as a map $\ensuremath{\mathbb{R}}^2\to\ensuremath{\mathbb{R}}^2$.
By acting with an element of $\widetilde{\mathop{\mathrm{GL}}\nolimits}^+(2,\ensuremath{\mathbb{R}})$, we can therefore assume that $Z = Z_0 = -d + \sqrt{-1}\, r$ and that, for some $x\in C$, the skyscraper sheaf $\ensuremath{\mathbb{C}}(x)$ has phase $1$.
But then all line bundles on $C$ have phases in the interval $(0,1)$, and so all skyscraper sheaves have phase $1$ as well.
But this implies that $\ensuremath{\mathcal P}((0,1])=\mathop{\mathrm{Coh}}\nolimits(C)$, and so $\sigma=\sigma_0$.
\end{ex}
\begin{ex}
One of the motivations for the introduction of Bridgeland stability conditions comes from mirror symmetry (see \cite{Dou02:mirror_symmetry}).
In case of elliptic curves, the mirror symmetry picture is particularly easy to explain (see \cite[Section 9]{Bri07:stability_conditions} and \cite{Bri09:seattle_survey}).
Indeed, let $C$ be an elliptic curve.
We can look at the action of the subgroup $\ensuremath{\mathbb{C}} \subset \widetilde{\mathop{\mathrm{GL}}\nolimits}^+(2,\ensuremath{\mathbb{R}})$ and of $\mathop{\mathrm{Aut}}\nolimits(\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(C))$ on $\mathop{\mathrm{Stab}}\nolimits(C)$, and one deduces
\[
\mathop{\mathrm{Aut}}\nolimits(\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(C)) \backslash \mathop{\mathrm{Stab}}\nolimits(C) / \ensuremath{\mathbb{C}} \cong \ensuremath{\mathbb{H}} / \mathop{\mathrm{PSL}}(2,\ensuremath{\mathbb{Z}}),
\]
consistently with the idea that spaces of stability conditions should be related to variations of complex structures on the mirror variety (in this case, a torus itself).
\end{ex}
\subsection{Moduli spaces}\label{subsec:moduli}
The main motivation for stability conditions is the study of moduli spaces of semistable objects.
In this section we recall the general theory of moduli spaces of complexes, and then define in general moduli spaces of Bridgeland semistable objects.
To define the notion of a \emph{family of complexes}, we recall first the notion of a \emph{perfect complex}.
We follow \cite{Ina02:moduli_complexes,Lie06:moduli_complexes}.
Let $B$ be a scheme locally of finite type over $\ensuremath{\mathbb{C}}$. We denote the unbounded derived category of quasi-coherent sheaves by $\mathrm{D}(\mathop{\mathrm{Qcoh}}\nolimits(X\times B))$.
A complex $E\in\mathrm{D}(\mathop{\mathrm{Qcoh}}\nolimits(X\times B))$ is called \emph{$B$-perfect} if it is, locally over $B$, isomorphic to a bounded complex of flat sheaves over $B$ of finite presentation.
The triangulated subcategory of $B$-perfect complexes is denoted by $\mathrm{D}_{B\text{-perf}}(X\times B)$.
\begin{defn}
We let $\mathfrak{M}\colon \underline{\mathrm{Sch}}\to \underline{\mathrm{Grp}}$ be the $2$-functor which maps a scheme $B$ locally of finite type over $\ensuremath{\mathbb{C}}$ to the groupoid $\mathfrak{M}(B)$ given by
\[
\left\{\ensuremath{\mathcal E}\in\mathrm{D}_{B\text{-perf}}(X\times B)\,:\, \mathop{\mathrm{Ext}}\nolimits^i(\ensuremath{\mathcal E}_b,\ensuremath{\mathcal E}_b)=0, \text{ for all }i<0\text{ and all geometric points }b\in B \right\},
\]
where $\ensuremath{\mathcal E}_b$ is the restriction $\ensuremath{\mathcal E}|_{X\times\{b\}}$.
\end{defn}
\begin{thm}[{\cite[Theorem 4.2.1]{Lie06:moduli_complexes}}]
\label{thm:Lieblich}
$\mathfrak{M}$ is an Artin stack, locally of finite type, locally quasi-separated, and with separated diagonal.
\end{thm}
We will not discuss Artin stacks in detail any further in this survey; we refer to the introductory book \cite{ols16:algebraic_spaces_stacks} for the basic terminology (or \cite{stacks-project}).
We will mostly use the following special case.
Let $\mathfrak{M}_{\mathrm{Spl}}$ be the open substack of $\mathfrak{M}$ parameterizing simple objects (as in the sheaf case, these are complexes with only scalar endomorphisms). This is also an Artin stack with the same properties as $\mathfrak{M}$.
Define also a functor $\underline{M}_{\mathrm{Spl}}\colon \underline{\mathrm{Sch}}\to \underline{\mathrm{Set}}$ by forgetting the groupoid structure on $\mathfrak{M}_{\mathrm{Spl}}$ and by sheafifying it (namely, as in the stable sheaves case, through quotienting by the equivalence relation given by tensoring with line bundles from the base $B$).
\begin{thm}[{\cite[Theorem 0.2]{Ina02:moduli_complexes}}]
\label{thm:Inaba}
The functor $\underline{M}_{\mathrm{Spl}}$ is representable by an algebraic space locally of finite type over $\ensuremath{\mathbb{C}}$.
Moreover, the natural morphism $\mathfrak{M}_{\mathrm{Spl}}\to \underline{M}_{\mathrm{Spl}}$ is a $\mathbb{G}_m$-gerbe.
\end{thm}
We can now define the moduli functor for Bridgeland semistable objects.
\begin{defn}
Let $\sigma=(\ensuremath{\mathcal P},Z)\in\mathop{\mathrm{Stab}}\nolimits(X)$.
Fix a class $v\in\Lambda$ and a phase $\phi\in\ensuremath{\mathbb{R}}$ such that $Z(v)\in\ensuremath{\mathbb{R}}_{>0}\cdot e^{\sqrt{-1}\pi\phi}$.
We define $\widehat{M}_{\sigma}(v,\phi)$ to be the set of $\sigma$-semistable objects in $\ensuremath{\mathcal P}(\phi)$ of numerical class $v$, and $\mathfrak{M}_{\sigma}(v,\phi)\subset\mathfrak{M}$ the substack of objects in $\widehat{M}_{\sigma}(v,\phi)$.
Similarly, we define $\mathfrak{M}^s_{\sigma}(v,\phi)$ as the substack parameterizing stable objects.
\end{defn}
\begin{question}\label{question:OpenAndBounded}
Are the inclusions $\mathfrak{M}^s_{\sigma}(v,\phi)\subset\mathfrak{M}_{\sigma}(v,\phi)\subset\mathfrak{M}$ open?
Is the set $\widehat{M}_{\sigma}(v,\phi)$ bounded?
\end{question}
If the answers to Question \ref{question:OpenAndBounded} are both affirmative, then by Theorem \ref{thm:Lieblich} and \cite[Lemma 3.4]{Tod08:K3Moduli}, $\mathfrak{M}_{\sigma}(v,\phi)$ and $\mathfrak{M}^s_{\sigma}(v,\phi)$ are both Artin stacks of finite type over $\ensuremath{\mathbb{C}}$.
Moreover, by Theorem \ref{thm:Inaba}, the associated functor $\underline{M}^s_{\sigma}(v,\phi)$ would be represented by an algebraic space of finite type over $\ensuremath{\mathbb{C}}$.
We will see in Section \ref{subsec:ModuliSurfaces} that Question \ref{question:OpenAndBounded} has indeed an affirmative answer in the case of certain stability conditions on surfaces \cite{Tod08:K3Moduli} (or more generally, when Bridgeland stability conditions are constructed via an iterated tilting procedure \cite{PT15:bridgeland_moduli_properties}).
A second fundamental question is about the existence of a moduli space:
\begin{question}\label{question:ProjectivityModuli}
Is there a coarse moduli space $M_{\sigma}(v,\phi)$ parameterizing S-equivalence classes of $\sigma$-semistable objects? Is $M_{\sigma}(v,\phi)$ a projective scheme?
\end{question}
Question \ref{question:ProjectivityModuli} has a complete affirmative answer only for the projective plane $\ensuremath{\mathbb{P}}^2$ (see \cite{ABCH13:hilbert_schemes_p2}), $\ensuremath{\mathbb{P}}^1 \times \ensuremath{\mathbb{P}}^1$ and the blow-up of $\ensuremath{\mathbb{P}}^2$ at a point (see \cite{AM15:projectivity}), and partial answers for other surfaces, including abelian surfaces (see \cite{MYY14:stability_k_trivial_surfaces,YY14:stability_abelian_surfaces}), K3 surfaces (see \cite{BM14:projectivity}), and Enriques surfaces (see \cite{Nue14:stability_enriques, Yos16:projectivity_enriques}). In Section \ref{subsec:ModuliSurfaces} we show how projectivity is proved in the case of $\ensuremath{\mathbb{P}}^2$ via quiver representations. The other two del Pezzo cases were proved with the same method, but are technically more involved.
As remarked in Section \ref{sec:generalizations}, since Bridgeland stability is not a priori associated to a GIT problem, it is harder to answer Question \ref{question:ProjectivityModuli} in general.
The recent works \cite{AS12:good_moduli, AHR15:luna_stacks} on good quotients for Artin stacks may lead to a general answer to this question, though.
Once a coarse moduli space exists, then separatedness and properness of the moduli space is a general result by Abramovich and Polishchuk \cite{AP06:constant_t_structures}, which we will review in Section \ref{sec:nef}.
It is interesting to note that the technique in \cite{AP06:constant_t_structures} is also very useful in the study of the geometry of the moduli space itself. This will also be reviewed in Section \ref{sec:nef}.
\begin{exercise}
Show that Question \ref{question:OpenAndBounded} and Question \ref{question:ProjectivityModuli} have both affirmative answers for curves (you can assume, for simplicity, that the genus is $\geq1$). Show also that moduli spaces in this case are exactly moduli spaces of semistable sheaves as reviewed in Section \ref{sec:curves}.
\end{exercise}
\subsection{Wall and chamber structure}\label{subsec:WallChamberGeneral}
We conclude the section by explaining how stable objects change when the stability condition is varied within $\mathop{\mathrm{Stab}}\nolimits(X)$. It turns out that the topology on $\mathop{\mathrm{Stab}}\nolimits(X)$ is defined in such a way that this change happens in a controlled manner. This will be one of the key properties in studying the geometry of moduli spaces of semistable objects.
\begin{defn}\label{def:WallGeneral}
Let $v_0,w\in\Lambda\setminus\{0\}$ be two non-parallel vectors.
A \emph{numerical wall} $W_w(v_0)$ for $v_0$ with respect to $w$ is a non-empty subset of $\mathop{\mathrm{Stab}}\nolimits(X)$ given by
\[
W_w(v_0):=\left\{\sigma=(\ensuremath{\mathcal P},Z)\in\mathop{\mathrm{Stab}}\nolimits(X)\,:\, \Re Z(v_0)\cdot \Im Z(w)=\Re Z(w)\cdot\Im Z(v_0)\right\}.
\]
\end{defn}
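Equivalently, when $Z(v_0)\neq0\neq Z(w)$, the defining equation says that $Z(v_0)$ and $Z(w)$ lie on the same real line through the origin in $\ensuremath{\mathbb{C}}$; in particular, on the wall the phases of semistable objects of classes $v_0$ and $w$ (when such objects exist) agree up to an integer.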
We denote the set of numerical walls for $v_0$ by $\ensuremath{\mathcal W}(v_0)$.
By Theorem \ref{thm:BridgelandMain}, a numerical wall is a real submanifold of $\mathop{\mathrm{Stab}}\nolimits(X)$ of codimension $1$.
We will use walls only for special subsets of the space of stability conditions (the $(\alpha,\beta)$-plane in Section \ref{subsec:WallChamber}). We will give a complete proof in this case in Section \ref{sec:surfaces}.
The general behavior is explained in \cite[Section 9]{Bri08:stability_k3}.
We direct the reader to \cite[Proposition 3.3]{BM11:local_p2} for the complete proof of the following result.
\begin{prop}\label{prop:locally_finite}
Let $v_0\in\Lambda$ be a primitive class, and let $S\subset\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ be an arbitrary set of objects of class $v_0$.
Then there exists a collection of walls $W^S_w(v_0)$, $w\in\Lambda$, with the following properties:
\begin{enumerate}
\item Every wall $W^S_w(v_0)$ is a closed submanifold with boundary of real codimension
one.
\item The collection $W^S_w(v_0)$ is locally finite (i.e., every compact subset $K\subset\mathop{\mathrm{Stab}}\nolimits(X)$ intersects only a finite number of walls).
\item\label{enum:ActualWall3} For every stability condition $(\ensuremath{\mathcal P},Z)$ on a wall in $W^S_w(v_0)$, there exists a phase
$\phi\in\ensuremath{\mathbb{R}}$ and an inclusion $F_w \ensuremath{\hookrightarrow} E_{v_0}$ in $\ensuremath{\mathcal P}(\phi)$ with $v(F_w)=w$ and $E_{v_0}\in S$.
\item\label{enum:ActualWall4} If $C\subset\mathop{\mathrm{Stab}}\nolimits(X)$ is a connected component of the complement of $\cup_{w\in\Lambda} W^S_w(v_0)$ and $\sigma_1,\sigma_2\in C$, then an object $E_{v_0}\in S$ is $\sigma_1$-stable if and only if it is $\sigma_2$-stable.
\end{enumerate}
In particular, the property for an object to be stable is open in $\mathop{\mathrm{Stab}}\nolimits(X)$.
\end{prop}
The walls in Proposition \ref{prop:locally_finite} will be called \emph{actual walls}.
A \emph{chamber} is defined to be a connected component of the complement of the set of actual walls, as in \eqref{enum:ActualWall4} above.
\begin{proof}[Idea of the proof]
For a class $w\in\Lambda$, we let $V^S_w$ be the set of stability conditions for which there exists an inclusion as in \eqref{enum:ActualWall3}.
Clearly $V^S_w$ is contained in the numerical wall $W_w(v_0)$.
The first step is to show that there are only finitely many $w$ for which $V^S_w$ intersects a small neighborhood around a stability condition.
This is easy to see by using the support property: see also Lemma \ref{lem:convex_cone} below.
We then let $W^S_w(v_0)$ be the codimension one component of $V^S_w$.
It remains to show \eqref{enum:ActualWall4}.
The idea is the following: higher codimension components
of $V^S_w$ always come from objects $E_{v_0}$ that are semistable on this
component and unstable at any nearby point.
\end{proof}
\begin{rmk}\label{rmk:WallChamberGeneralSemistability}
We notice that if $v_0$ is not primitive and we ask only for semistability on chambers, then part \eqref{enum:ActualWall3} cannot be true: namely actual walls may be of higher codimension. On the other hand, the following holds (see \cite[Section 9]{Bri08:stability_k3}): Let $v_0\in\Lambda$. Then there exists a locally finite collection of numerical walls $W_{w}(v_0)$ such that on each chamber the set of semistable objects $\widehat{M}_\sigma(v_0,\phi)$ is constant.
\end{rmk}
The following lemma clarifies how walls behave with respect to the quadratic form appearing in the support property.
It was first observed in \cite[Appendix A]{BMS14:abelian_threefolds}.
\begin{lem}
\label{lem:convex_cone}
Let $Q$ be a quadratic form on a real vector space $V$ and $Z: V \to \ensuremath{\mathbb{C}}$ a linear map such that the kernel of $Z$ is negative semi-definite with respect to $Q$.
If $\rho$ is a ray in $\ensuremath{\mathbb{C}}$ starting at the origin, we define
\[ \ensuremath{\mathcal C}^+_{\rho} = Z^{-1}(\rho) \cap \{ Q \geq 0 \}.\]
\begin{enumerate}
\item If $w_1, w_2 \in \ensuremath{\mathcal C}^+_{\rho}$, then $Q(w_1, w_2) \geq 0$.
\item The set $\ensuremath{\mathcal C}^+_{\rho}$ is a convex cone.
\item Let $w, w_1, w_2 \in \ensuremath{\mathcal C}^+_{\rho}$ with $w = w_1 + w_2$. Then $0 \leq Q(w_1) + Q(w_2) \leq Q(w)$. Moreover, $Q(w_1) = Q(w)$ implies $Q(w) = Q(w_1) = Q(w_2) = Q(w_1, w_2) = 0$.
\item If the kernel of $Z$ is negative definite with respect to $Q$, then any vector $w \in \ensuremath{\mathcal C}^+_{\rho}$ with $Q(w) = 0$ generates an extremal ray of $\ensuremath{\mathcal C}^+_{\rho}$.
\end{enumerate}
\begin{proof}
For any non-trivial $w_1, w_2 \in \ensuremath{\mathcal C}^+_{\rho}$ there is $\lambda > 0$ such that $Z(w_1 - \lambda w_2) = 0$. Therefore, we get
\[
0 \geq Q(w_1 - \lambda w_2) = Q(w_1) + \lambda^2 Q(w_2) - 2\lambda Q(w_1, w_2).
\]
The inequalities $Q(w_1) \geq 0$ and $Q(w_2) \geq 0$ lead to $Q(w_1, w_2) \geq 0$. Part (2) follows directly from (1). The first claim in (3) also follows immediately from (1) by using
\[
Q(w) = Q(w_1) + Q(w_2) + 2 Q(w_1, w_2) \geq 0.
\]
Observe that $Q(w_1) = Q(w)$ implies $Q(w_2) = Q(w_1, w_2) = 0$ and therefore,
\[
0 = 2\lambda Q(w_1, w_2) \geq Q(w_1) + \lambda^2 Q(w_2) = Q(w_1) \geq 0.
\]
Let $w \in \ensuremath{\mathcal C}^+_{\rho}$ with $Q(w) = 0$. Assume that $w$ is not extremal, i.e., there are linearly independent $w_1, w_2 \in \ensuremath{\mathcal C}^+_{\rho}$ such that $w = w_1 + w_2$. By (3) we get $Q(w_1) = Q(w_2) = Q(w_1, w_2) = 0$. As before, $Z(w_1 - \lambda w_2) = 0$ for some $\lambda > 0$, but this time $w_1 - \lambda w_2 \neq 0$. This yields the contradiction
\[
0 > Q(w_1 - \lambda w_2) = Q(w_1) + \lambda^2 Q(w_2) - 2\lambda Q(w_1, w_2) = 0. \qedhere
\]
\end{proof}
\end{lem}
\begin{cor}[{\cite[Proposition A.8]{BMS14:abelian_threefolds}}]
\label{cor:Qzero}
Assume that $U$ is a path-connected open subset of $\mathop{\mathrm{Stab}}\nolimits(X)$ such that all $\sigma \in U$ satisfy the support property with respect to the quadratic form $Q$. If $E \in \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ with $Q(E) = 0$ is $\sigma$-stable for some $\sigma \in U$ then it is $\sigma'$-stable for all $\sigma' \in U$.
\begin{proof}
If there is a point in $U$ at which $E$ is unstable, then there is a point $\sigma = (\ensuremath{\mathcal A}, Z)$ at which $E$ is strictly semistable. If $\rho$ is the ray in $\ensuremath{\mathbb{C}}$ on which $Z(E)$ lies, then the previous lemma implies that $v(E)$ generates an extremal ray of $\ensuremath{\mathcal C}^+_{\rho}$. This is a contradiction to $E$ being strictly semistable.
\end{proof}
\end{cor}
\section{Stability Conditions on Surfaces}
\label{sec:surfaces}
This is one of the key sections of these notes. We introduce the fundamental operation of tilting for hearts of bounded t-structures and use it to give examples of Bridgeland stability conditions on surfaces.
Another key ingredient is the Bogomolov inequality for slope semistable sheaves, which we will recall as well, and show its close relation with the support property for Bridgeland stability conditions introduced in the previous section.
We then conclude with a few examples of stable objects, an explicit description of walls, and the behavior at the large volume limit point.
\subsection{Tilting of t-structures}
Given the heart of a bounded t-structure, the process of tilting is used to obtain a new heart. For a detailed account of the general theory of tilting we refer to \cite{HRS96:tilting} and \cite[Section 5]{BvdB03:functors}.
The idea of using this operation to construct Bridgeland stability conditions is due to Bridgeland in \cite{Bri08:stability_k3}, in the case of K3 surfaces. The extension to general smooth projective surfaces is in \cite{AB13:k_trivial}.
\begin{defn}\label{def:TorsionPair}
Let $\ensuremath{\mathcal A}$ be an abelian category.
A \emph{torsion pair} on $\ensuremath{\mathcal A}$ consists of a pair of full additive subcategories $(\ensuremath{\mathcal F},\ensuremath{\mathcal T})$ of $\ensuremath{\mathcal A}$ such that the following two properties hold:
\begin{itemize}
\item For any $F \in \ensuremath{\mathcal F}$ and $T \in \ensuremath{\mathcal T}$, $\mathop{\mathrm{Hom}}\nolimits(T,F)=0$.
\item For any $E \in \ensuremath{\mathcal A}$ there are $F \in \ensuremath{\mathcal F}$, $T \in \ensuremath{\mathcal T}$ together with an exact sequence
\[
0 \to T \to E \to F \to 0.
\]
\end{itemize}
\end{defn}
By the vanishing property, the exact sequence in the definition of a torsion pair is unique.
\begin{exercise}
Let $X$ be a smooth projective variety.
Show that the pair of subcategories
\begin{align*}
&\ensuremath{\mathcal T} := \left\{\text{Torsion sheaves on }X \right\}\\
&\ensuremath{\mathcal F} := \left\{\text{Torsion-free sheaves on }X \right\}
\end{align*}
gives a torsion pair in $\mathop{\mathrm{Coh}}\nolimits(X)$.
\end{exercise}
\begin{lem}[Happel-Reiten-Smal\o]
Let $X$ be a smooth projective variety.
Let $\ensuremath{\mathcal A}\subset\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ be the heart of a bounded t-structure, and let $(\ensuremath{\mathcal F},\ensuremath{\mathcal T})$ be a torsion pair in $\ensuremath{\mathcal A}$.
Then the category
\[
\ensuremath{\mathcal A}^\sharp := \left\{ E\in\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)\,:\, \begin{array}{l}
\ensuremath{\mathcal H}_\ensuremath{\mathcal A}^i(E)=0,\, \mathrm{for\, all\, }i\neq 0,-1\\
\ensuremath{\mathcal H}^0_\ensuremath{\mathcal A}(E)\in\ensuremath{\mathcal T}\\
\ensuremath{\mathcal H}^{-1}_\ensuremath{\mathcal A}(E)\in\ensuremath{\mathcal F}
\end{array} \right\}
\]
is the heart of a bounded t-structure on $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$, where $\ensuremath{\mathcal H}^\bullet_\ensuremath{\mathcal A}$ denotes the cohomology object with respect to the t-structure $\ensuremath{\mathcal A}$.
\end{lem}
\begin{proof}
We only need to check the conditions in Definition \ref{def:heart}.
We check the first one and leave the second as an exercise.
Given $E,E'\in\ensuremath{\mathcal A}^\sharp$, we need to show that $\mathop{\mathrm{Hom}}\nolimits^{<0}(E,E')=0$.
By definition of $\ensuremath{\mathcal A}^\sharp$, we have exact triangles
\[
F_E[1] \to E \to T_E \, \, \text{ and } \, \, F_{E'}[1]\to E'\to T_{E'},
\]
where $F_E,F_{E'}\in\ensuremath{\mathcal F}$ and $T_E,T_{E'}\in\ensuremath{\mathcal T}$.
By applying the functor $\mathop{\mathrm{Hom}}\nolimits$, looking at the induced long exact sequences, and using the fact that negative $\mathop{\mathrm{Hom}}\nolimits$'s vanish for objects in $\ensuremath{\mathcal A}$, the required vanishing amounts to showing that $\mathop{\mathrm{Hom}}\nolimits(T_E,F_{E'})=0$. This is exactly the first condition in the definition of a torsion pair.
\end{proof}
\begin{exercise}
Show that $\ensuremath{\mathcal A}^\sharp$ can also be defined as the smallest extension closed full subcategory of $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ containing both $\ensuremath{\mathcal T}$ and $\ensuremath{\mathcal F}[1]$. We will use the notation $\ensuremath{\mathcal A}^\sharp = \langle \ensuremath{\mathcal F}[1], \ensuremath{\mathcal T} \rangle$.
\end{exercise}
\begin{exercise}
\label{exercise:heartIsATilt}
Let $\ensuremath{\mathcal A}$ and $\ensuremath{\mathcal B}$ be two hearts of bounded t-structures on $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$ such that
$\ensuremath{\mathcal A} \subset \langle \ensuremath{\mathcal B}, \ensuremath{\mathcal B}[1] \rangle$. Show that $\ensuremath{\mathcal A}$ is a tilt of $\ensuremath{\mathcal B}$. \emph{Hint: Set $\ensuremath{\mathcal T} = \ensuremath{\mathcal B} \cap \ensuremath{\mathcal A}$ and $\ensuremath{\mathcal F} = \ensuremath{\mathcal B} \cap \ensuremath{\mathcal A}[-1]$ and show that $(\ensuremath{\mathcal F}, \ensuremath{\mathcal T})$ is a torsion pair on $\ensuremath{\mathcal B}$.}
\end{exercise}
\subsection{Construction of Bridgeland stability conditions on surfaces}
\label{subsec:SurfaceHeart}
Let $X$ be a smooth projective surface over $\ensuremath{\mathbb{C}}$.
As in Section \ref{sec:generalizations}, we fix an ample divisor class $\omega\in N^1(X)$ and a divisor class $B\in N^1(X)$. We will construct a family of Bridgeland stability conditions that depends on these two parameters.
As remarked in Example \ref{ex:StabilityCurvesAgain} (2), $\mathop{\mathrm{Coh}}\nolimits(X)$ will never be the heart of a Bridgeland stability condition. The idea is to use tilting: starting with coherent sheaves, we use slope stability to define a torsion pair.
Let $\ensuremath{\mathcal A}=\mathop{\mathrm{Coh}}\nolimits(X)$.
We define a pair of subcategories
\begin{align*}
\ensuremath{\mathcal T}_{\omega, B} &= \left\{E \in \mathop{\mathrm{Coh}}\nolimits(X) : \text{any semistable factor $F$ of $E$ satisfies $\mu_{\omega, B}(F) > 0$} \right\}, \\
\ensuremath{\mathcal F}_{\omega, B} &= \left\{E \in \mathop{\mathrm{Coh}}\nolimits(X) : \text{any semistable factor $F$ of $E$ satisfies $\mu_{\omega, B}(F) \leq 0$} \right\}.
\end{align*}
The existence of Harder-Narasimhan filtrations for slope stability (see Example \ref{ex:ExamplesStability}) and Lemma \ref{lem:Schur} show that this is indeed a torsion pair on $\mathop{\mathrm{Coh}}\nolimits(X)$.
\begin{defn}
We define the tilted heart
\[
\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X) := \ensuremath{\mathcal A}^\sharp = \langle\ensuremath{\mathcal F}_{\omega, B}[1], \ensuremath{\mathcal T}_{\omega, B} \rangle.
\]
\end{defn}
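For a concrete feeling for the tilted heart, here is a quick example (with the usual convention that torsion sheaves have slope $+\infty$): take $X=\ensuremath{\mathbb{P}}^2$, $\omega=H$ the class of a line, and $B=0$. Then torsion sheaves and $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^2}(1)$ lie in $\ensuremath{\mathcal T}_{\omega,B}$, while $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^2}$ and $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^2}(-1)$ lie in $\ensuremath{\mathcal F}_{\omega,B}$; hence $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^2}(1)$, $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^2}[1]$, and $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^2}(-1)[1]$ are all objects of $\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(\ensuremath{\mathbb{P}}^2)$.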
\begin{exercise}
Show that the categories $\ensuremath{\mathcal T}_{\omega,B}$ and $\ensuremath{\mathcal F}_{\omega,B}$ can also be defined as follows:
\begin{align*}
\ensuremath{\mathcal T}_{\omega, B} &= \left\{E \in \mathop{\mathrm{Coh}}\nolimits(X) : \forall\, E\ensuremath{\twoheadrightarrow} Q\neq0,\ \mu_{\omega, B}(Q) > 0 \right\},\\
\ensuremath{\mathcal F}_{\omega, B} &= \left\{E \in \mathop{\mathrm{Coh}}\nolimits(X) : \forall\, 0\neq F\subset E,\ \mu_{\omega, B}(F) \leq 0 \right\}.
\end{align*}
\end{exercise}
\begin{exercise}
Show that the category $\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X)$ depends only on $\frac{\omega}{\omega^2}$ and $\frac{\omega \cdot B}{\omega^2}$.
\end{exercise}
\begin{exercise}\label{exercise:MinimalObjects}
We say that an object $S$ in an abelian category is \emph{minimal}\footnote{This is commonly called a \emph{simple} object. Unfortunately, in the theory of semistable sheaves, the word ``simple'' is used to indicate $\mathop{\mathrm{Hom}}\nolimits(S,S)=\ensuremath{\mathbb{C}}$; this is why we use this slightly non-standard notation.} if $S$ does not have any non-trivial subobjects (or quotients).
Show that skyscraper sheaves are minimal objects in the category $\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X)$.
Let $E$ be a $\mu_{\omega,B}$-stable vector bundle on $X$ with $\mu_{\omega, B}(E) = 0$. Show that $E[1]$ is a minimal object in $\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X)$.
\end{exercise}
We now need to define a stability function on the tilted heart.
We set
\[
Z_{\omega,B} := - \int_X\, e^{-\sqrt{-1}\,\omega} \cdot \mathop{\mathrm{ch}}\nolimits^B.
\]
Explicitly, for $E\in\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$,
\[
Z_{\omega, B}(E) = \left(-\mathop{\mathrm{ch}}\nolimits^{B}_2(E) + \frac{\omega^2}{2} \cdot \mathop{\mathrm{ch}}\nolimits^{B}_0(E)\right) + \sqrt{-1} \ \omega \cdot \mathop{\mathrm{ch}}\nolimits^{B}_1(E).
\]
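Expanding the exponential gives a direct check that the two expressions agree: on a surface $e^{-\sqrt{-1}\,\omega} = 1 - \sqrt{-1}\,\omega - \frac{\omega^2}{2}$, so the degree $4$ part of $e^{-\sqrt{-1}\,\omega}\cdot\mathop{\mathrm{ch}}\nolimits^B(E)$ is
\[
\mathop{\mathrm{ch}}\nolimits^{B}_2(E) - \sqrt{-1}\,\omega\cdot\mathop{\mathrm{ch}}\nolimits^{B}_1(E) - \frac{\omega^2}{2}\cdot\mathop{\mathrm{ch}}\nolimits^{B}_0(E),
\]
and integrating over $X$ and changing sign yields the displayed formula.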
The corresponding slope function is
\[
\nu_{\omega, B}(E) = \frac{\mathop{\mathrm{ch}}\nolimits^{B}_2(E) - \frac{\omega^2}{2} \cdot \mathop{\mathrm{ch}}\nolimits^{B}_0(E)}{\omega \cdot \mathop{\mathrm{ch}}\nolimits^{B}_1(E)}.
\]
The main result is the following (see \cite{Bri08:stability_k3, AB13:k_trivial, BM11:local_p2}).
We will choose $\Lambda=K_{\mathop{\mathrm{num}}\nolimits}(X)$ and $v$ as the Chern character map as in Example \ref{ex:NumericalGrothendieck}.
\begin{thm}\label{thm:BridgelandSurface}
Let $X$ be a smooth projective surface.
The pair $\sigma_{\omega, B} = (\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X),Z_{\omega,B})$ gives a Bridgeland stability condition on $X$. Moreover, the map
\[
\mathop{\mathrm{Amp}}\nolimits(X)\times N^1(X) \to \mathop{\mathrm{Stab}}\nolimits(X), \, \, (\omega,B)\mapsto\sigma_{\omega,B}
\]
is a continuous embedding.
\end{thm}
Unfortunately the proof of this result is not so direct, even in the case of K3 surfaces.
We will give a sketch in Section \ref{subsec:ProofThmSurface} below.
The idea is to first prove the case in which $\omega$ and $B$ are rational classes.
The non-trivial ingredient in this part of the proof is the classical Bogomolov inequality for slope semistable torsion-free sheaves. Then we show we can deform by keeping $B$ fixed and letting $\omega$ vary.
This, together with the behavior for $\omega$ ``large'', will give a Bogomolov inequality for Bridgeland stable objects. This will allow us to deform $B$ as well and to show a general Bogomolov inequality for Bridgeland semistable objects, which will finally imply the support property and conclude the proof of the theorem.
The key result in the proof of Theorem \ref{thm:BridgelandSurface} can be summarized as follows.
It is one of the main theorems from \cite{BMT14:stability_threefolds} (see also \cite[Theorem 3.5]{BMS14:abelian_threefolds}).
We first need to introduce three notions of discriminant for an object in $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$.
\begin{exercise}\label{exer:ConstantCH}
Let $\omega\in N^1(X)$ be an ample real divisor class.
Show that there exists a constant $C_\omega\geq0$ such that, for every effective divisor $D\subset X$, we have
\[
C_\omega (\omega \cdot D)^2 + D^2 \geq 0.
\]
\end{exercise}
\begin{defn}\label{def:discriminant}
Let $\omega,B\in N^1(X)$ with $\omega$ ample.
We define the \emph{discriminant} function as
\[
\Delta := \left(\mathop{\mathrm{ch}}\nolimits_1^B\right)^2 - 2 \mathop{\mathrm{ch}}\nolimits_0^B \cdot \mathop{\mathrm{ch}}\nolimits_2^B = \left(\mathop{\mathrm{ch}}\nolimits_1\right)^2 - 2 \mathop{\mathrm{ch}}\nolimits_0 \cdot \mathop{\mathrm{ch}}\nolimits_2.
\]
We define the \emph{$\omega$-discriminant} as
\[
\overline{\Delta}^B_\omega := \left(\omega \cdot \mathop{\mathrm{ch}}\nolimits_1^B \right)^2 - 2 \omega^2 \cdot \mathop{\mathrm{ch}}\nolimits_0^B \cdot \mathop{\mathrm{ch}}\nolimits_2^B.
\]
Choose a rational non-negative constant $C_\omega$ as in Exercise \ref{exer:ConstantCH} above.
Then we define the \emph{$(\omega,B,C_\omega)$-discriminant} as
\[
\Delta^C_{\omega,B} := \Delta + C_\omega (\omega \cdot \mathop{\mathrm{ch}}\nolimits_1^B)^2.
\]
\end{defn}
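The second equality in the definition of $\Delta$ is a direct computation, assuming, as before, the convention $\mathop{\mathrm{ch}}\nolimits^B = e^{-B}\cdot\mathop{\mathrm{ch}}\nolimits$ for the twisted Chern character, so that $\mathop{\mathrm{ch}}\nolimits_0^B=\mathop{\mathrm{ch}}\nolimits_0$, $\mathop{\mathrm{ch}}\nolimits_1^B=\mathop{\mathrm{ch}}\nolimits_1-B\,\mathop{\mathrm{ch}}\nolimits_0$, and $\mathop{\mathrm{ch}}\nolimits_2^B=\mathop{\mathrm{ch}}\nolimits_2-B\cdot\mathop{\mathrm{ch}}\nolimits_1+\frac{B^2}{2}\,\mathop{\mathrm{ch}}\nolimits_0$: all the terms involving $B$ cancel in
\[
\left(\mathop{\mathrm{ch}}\nolimits_1^B\right)^2 - 2\,\mathop{\mathrm{ch}}\nolimits_0^B\cdot\mathop{\mathrm{ch}}\nolimits_2^B = \left(\mathop{\mathrm{ch}}\nolimits_1\right)^2 - 2B\cdot\mathop{\mathrm{ch}}\nolimits_1\,\mathop{\mathrm{ch}}\nolimits_0 + B^2\,\mathop{\mathrm{ch}}\nolimits_0^2 - 2\,\mathop{\mathrm{ch}}\nolimits_0\,\mathop{\mathrm{ch}}\nolimits_2 + 2B\cdot\mathop{\mathrm{ch}}\nolimits_1\,\mathop{\mathrm{ch}}\nolimits_0 - B^2\,\mathop{\mathrm{ch}}\nolimits_0^2.
\]
In particular, $\Delta$ is independent of $B$, while $\overline{\Delta}^B_\omega$ and $\Delta^C_{\omega,B}$ in general are not.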
\begin{thm}\label{thm:BogomolovBridgelandStability}
Let $X$ be a smooth projective surface over $\ensuremath{\mathbb{C}}$.
Let $\omega,B\in N^1(X)$ with $\omega$ ample.
Assume that $E$ is $\sigma_{\omega,B}$-semistable.
Then
\[
\Delta^C_{\omega,B}(E)\geq 0 \, \, \text{ and } \, \, \overline{\Delta}^B_\omega(E) \geq0.
\]
\end{thm}
The quadratic form $\Delta^C_{\omega,B}$ will give the support property for $\sigma_{\omega,B}$.
The quadratic form $\overline{\Delta}^B_\omega$ will give the support property on the $(\alpha,\beta)$-plane (see Section \ref{subsec:WallChamber}).
\subsection{Sketch of the proof of Theorem \ref{thm:BridgelandSurface} and Theorem \ref{thm:BogomolovBridgelandStability}}\label{subsec:ProofThmSurface}
We keep the notation as in the previous section.
\subsubsection*{Bogomolov inequality}
Our first goal is to show that $Z_{\omega,B}$ is a stability function on $\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X)$.
The key result we need is the following (\cite{Bog78:inequality, Gie79:bogomolov, Rei78:bogomolov}, \cite[Theorem 12.1.1]{Pot97:lectures_vector_bundles}, \cite[Theorem 3.4.1]{HL10:moduli_sheaves}).
\begin{thm}[Bogomolov Inequality]
Let $X$ be a smooth projective surface.
Let $\omega,B\in N^1(X)$ with $\omega$ ample, and let $E$ be a $\mu_{\omega,B}$-semistable torsion-free sheaf.
Then
\[
\Delta(E) = \mathop{\mathrm{ch}}\nolimits_1^B(E)^2 - 2 \mathop{\mathrm{ch}}\nolimits_0^B(E) \cdot \mathop{\mathrm{ch}}\nolimits_2^B(E) \geq 0.
\]
\end{thm}
\begin{proof}[Sketch of the proof]
Since neither slope stability nor $\Delta$ depends on $B$, we can assume $B=0$.
Also, by \cite[Lemma 4.C.5]{HL10:moduli_sheaves}, slope stability with respect to an ample divisor changes only at integral classes. Hence, we can assume $\omega=H\in\mathop{\mathrm{NS}}\nolimits(X)$ is the divisor class of a very ample integral divisor (by abuse of notation, we will still denote it by $H$).
Since $\Delta$ is invariant both under tensoring by line bundles and under pull-backs along finite surjective morphisms, and since these two operations preserve slope stability of torsion-free sheaves (see e.g., Exercise \ref{ex:StabilityPullback} for the case of curves, and \cite[Lemma 3.2.2]{HL10:moduli_sheaves} in general), by passing to a $\mathop{\mathrm{rk}}(E)$-cyclic cover (see, e.g., \cite[Theorem 3.2.9]{HL10:moduli_sheaves}) and tensoring by an $\mathop{\mathrm{rk}}(E)$-th root of $\mathop{\mathrm{ch}}\nolimits_1(E)$, we can assume $\mathop{\mathrm{ch}}\nolimits_1(E)=0$.
Finally, passing to the double dual can only decrease $\Delta$, so it is enough to prove the inequality for $E^{\vee\vee}$.
Hence, we can assume $E$ is actually a vector bundle on $X$.
We are thus reduced to showing $\mathop{\mathrm{ch}}\nolimits_2(E)\leq0$.
We use the restriction theorem for slope stability (see \cite[Section 7.1 or 7.2]{HL10:moduli_sheaves} or \cite{Lang04:positive_char}): up to replacing $H$ with a multiple $uH$, for $u\gg0$ fixed, there exists a smooth projective curve $C\in |H|$ such that $E|_C$ is again slope semistable.
Also, as remarked in Section \ref{subsec:EquivalentDefCurves}, the tensor product $E|_C \otimes \ldots \otimes E|_C$ is still semistable on $C$.
Now the theorem follows by estimating Euler characteristics.
Indeed, on the one hand, by the Riemann-Roch Theorem we have
\[
\chi(X,\underbrace{E\otimes\ldots\otimes E}_{m\text{-times}}) = \mathop{\mathrm{rk}}(E)^m \chi(X,\ensuremath{\mathcal O}_X) + m \mathop{\mathrm{rk}}(E)^{m-1}\mathop{\mathrm{ch}}\nolimits_2(E),
\]
for all $m>0$.
On the other hand, we can use the exact sequence
\[
0 \to \left( E\otimes\ldots\otimes E\right) \otimes \ensuremath{\mathcal O}_X(-H) \to E\otimes\ldots\otimes E \to E|_C\otimes\ldots\otimes E|_C \to 0,
\]
and, by using the remark above, deduce that
\[
h^0(X,E\otimes\ldots\otimes E) \leq h^0(C,E|_C\otimes\ldots\otimes E|_C) \leq \gamma_E \mathop{\mathrm{rk}}(E)^m,
\]
for a constant $\gamma_E>0$ which is independent of $m$.
Similarly, by Serre duality, and by using an analogous argument, we have
\[
h^2(X,E\otimes\ldots\otimes E) \leq \delta_E \mathop{\mathrm{rk}}(E)^m,
\]
for another constant $\delta_E$ which depends only on $E$.
Putting everything together, since
\[
\chi(X,E\otimes\ldots\otimes E)\leq h^0(X,E\otimes\ldots\otimes E) + h^2(X,E\otimes\ldots\otimes E) \leq (\gamma_E + \delta_E) \mathop{\mathrm{rk}}(E)^m,
\]
we deduce
\[
\chi(X,\ensuremath{\mathcal O}_X) + m \frac{\mathop{\mathrm{ch}}\nolimits_2(E)}{\mathop{\mathrm{rk}}(E)} \leq \gamma_E + \delta_E,
\]
for any $m>0$. This forces $\mathop{\mathrm{ch}}\nolimits_2(E)\leq0$, as we wanted.
\end{proof}
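Note that the Bogomolov Inequality is sharp: for a line bundle $L$, we have $\mathop{\mathrm{ch}}\nolimits^B(L) = \left(1, c_1(L)-B, \frac{(c_1(L)-B)^2}{2}\right)$, and therefore $\Delta(L)=0$.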
\begin{rmk}\label{rmk:TorsionSheavesBogomolov}
Let $E$ be a torsion sheaf.
Then
\[
\Delta^C_{\omega,B}(E) \geq 0.
\]
Indeed, since $E$ is torsion, $\mathop{\mathrm{ch}}\nolimits_1^B(E)=\mathop{\mathrm{ch}}\nolimits_1(E)$ is an effective divisor class.
The inequality then follows directly from the definition of $C_\omega$ in Exercise \ref{exer:ConstantCH}.
\end{rmk}
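The correction term in $\Delta^C_{\omega,B}$ is genuinely needed here. For instance, if $C\subset X$ is an integral curve with $C^2<0$ (e.g., the exceptional curve of a blow-up), then $\mathop{\mathrm{ch}}\nolimits(\ensuremath{\mathcal O}_C)=\left(0,C,-\tfrac{C^2}{2}\right)$ and so $\Delta(\ensuremath{\mathcal O}_C)=C^2<0$; this is exactly the kind of class that the constant $C_\omega$ from Exercise \ref{exer:ConstantCH} compensates for.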
We can now prove our result:
\begin{prop}\label{prop:StabilityFunctionSurface}
The group homomorphism $Z_{\omega,B}$ is a stability function on $\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X)$.
\end{prop}
\begin{proof}
It is immediate from the definition that $\Im Z_{\omega,B}\geq0$ on $\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X)$.
Moreover, if $E\in\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X)$ is such that $\Im Z_{\omega,B}(E)=0$, then $E$ fits in an exact triangle
\[
\ensuremath{\mathcal H}^{-1}(E)[1] \to E \to \ensuremath{\mathcal H}^0(E),
\]
where $\ensuremath{\mathcal H}^{-1}(E)$ is a $\mu_{\omega,B}$-semistable torsion-free sheaf with $\mu_{\omega,B}(\ensuremath{\mathcal H}^{-1}(E))=0$ and $\ensuremath{\mathcal H}^0(E)$ is a torsion sheaf with zero-dimensional support. Here, as usual, $\ensuremath{\mathcal H}^\bullet$ denotes the cohomology sheaves of a complex.
We need to show $\Re Z_{\omega,B}(E) <0$.
Since $\Re Z_{\omega,B}$ is additive, it suffices to prove this for each non-zero cohomology sheaf of $E$.
But, on the one hand, since $\ensuremath{\mathcal H}^0(E)$ is a torsion sheaf with support a zero-dimensional subscheme, we have
\[
\Re Z_{\omega,B}(\ensuremath{\mathcal H}^0(E)) = - \mathop{\mathrm{ch}}\nolimits_2(\ensuremath{\mathcal H}^0(E)) < 0.
\]
On the other hand, by the Hodge Index Theorem, since $\omega \cdot \mathop{\mathrm{ch}}\nolimits_1^B(\ensuremath{\mathcal H}^{-1}(E))=0$, we have $\mathop{\mathrm{ch}}\nolimits_1^B(\ensuremath{\mathcal H}^{-1}(E))^2\leq0$. Therefore, by the Bogomolov inequality, we have $\mathop{\mathrm{ch}}\nolimits_2^B(\ensuremath{\mathcal H}^{-1}(E))\leq0$.
Hence,
\[
\Re Z_{\omega,B}(\ensuremath{\mathcal H}^{-1}(E)[1]) = - \Re Z_{\omega,B}(\ensuremath{\mathcal H}^{-1}(E)) = \underbrace{\mathop{\mathrm{ch}}\nolimits_2^B(\ensuremath{\mathcal H}^{-1}(E))}_{\leq 0} - \underbrace{\frac{\omega^2}{2} \mathop{\mathrm{ch}}\nolimits_0^B(\ensuremath{\mathcal H}^{-1}(E))}_{>0} < 0,
\]
thus concluding the proof.
\end{proof}
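For instance, for a skyscraper sheaf $\ensuremath{\mathbb{C}}(x)$ we have $\mathop{\mathrm{ch}}\nolimits^B(\ensuremath{\mathbb{C}}(x))=(0,0,1)$, so $\Im Z_{\omega,B}(\ensuremath{\mathbb{C}}(x))=0$ and $Z_{\omega,B}(\ensuremath{\mathbb{C}}(x))=-1$: skyscraper sheaves are objects of $\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X)$ of phase one, consistently with the proposition.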
\subsubsection*{The rational case}
We now let $B \in \mathop{\mathrm{NS}}\nolimits(X)_\ensuremath{\mathbb{Q}}$ be a rational divisor class, and we let $\omega=\alpha H$, where $\alpha\in\ensuremath{\mathbb{R}}_{>0}$ and $H\in \mathop{\mathrm{NS}}\nolimits(X)$ is an integral ample divisor class.
We want to show that the group homomorphism $Z_{\omega,B}$ gives a stability condition in the abelian category $\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X)$, in the sense of Definition \ref{defn:stability_abelian}. By using Proposition \ref{prop:StabilityFunctionSurface} above, we only need to show that HN filtrations exist.
To this end, we use Proposition \ref{prop:hn_exists}, and since $\Im Z_{\omega,B}$ is discrete under our assumptions, we only need to show the following.
\begin{lem}\label{lem:TiltNoetherian}
Under the previous assumption, the tilted category $\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X)$ is noetherian.
\end{lem}
\begin{proof}
This is a general fact about tilted categories. It was first observed in the case of K3 surfaces in \cite[Proposition 7.1]{Bri08:stability_k3}. We refer to \cite[Lemma 2.17]{PT15:bridgeland_moduli_properties} for all details and just give a sketch of the proof.
As observed above, under our assumptions, $\Im Z_{\omega,B}$ is discrete and non-negative on $\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X)$.
This implies that it is enough to show that, for any $M\in\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X)$, there exists no infinite filtration
\[
0 = A_0 \subsetneq A_1 \subsetneq \ldots \subsetneq A_l \subsetneq \ldots \subset M
\]
with $\Im Z_{\omega,B}(A_l)=0$.
Write $Q_l:=M/A_l$.
By definition, the exact sequence
\[
0 \to A_l \to M \to Q_l \to 0
\]
in $\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X)$ induces a long exact sequence of sheaves
\[
0 \to \ensuremath{\mathcal H}^{-1}(A_l) \to \ensuremath{\mathcal H}^{-1}(M) \to \ensuremath{\mathcal H}^{-1}(Q_l) \to \ensuremath{\mathcal H}^{0}(A_l) \to \ensuremath{\mathcal H}^{0}(M) \to \ensuremath{\mathcal H}^{0}(Q_l)\to 0.
\]
Since $\mathop{\mathrm{Coh}}\nolimits(X)$ is noetherian, we have that both sequences
\begin{align*}
&\ensuremath{\mathcal H}^0(M) = \ensuremath{\mathcal H}^0(Q_0) \ensuremath{\twoheadrightarrow} \ensuremath{\mathcal H}^0(Q_1) \ensuremath{\twoheadrightarrow} \ensuremath{\mathcal H}^0(Q_2) \ensuremath{\twoheadrightarrow} \ldots \\
&0=\ensuremath{\mathcal H}^{-1}(A_0)\ensuremath{\hookrightarrow}\ensuremath{\mathcal H}^{-1}(A_1) \ensuremath{\hookrightarrow} \ensuremath{\mathcal H}^{-1}(A_2) \ensuremath{\hookrightarrow} \ldots \ensuremath{\hookrightarrow} \ensuremath{\mathcal H}^{-1}(M)
\end{align*}
stabilize.
Hence, from now on we will assume $\ensuremath{\mathcal H}^0(Q_l)=\ensuremath{\mathcal H}^0(Q_{l+1})$ and $\ensuremath{\mathcal H}^{-1}(A_l)=\ensuremath{\mathcal H}^{-1}(A_{l+1})$ for all $l$.
Both $U:=\ensuremath{\mathcal H}^{-1}(M)/\ensuremath{\mathcal H}^{-1}(A_l)$ and $V:=\mathop{\mathrm{Ker}}\nolimits(\ensuremath{\mathcal H}^0(M)\ensuremath{\twoheadrightarrow}\ensuremath{\mathcal H}^0(Q_l))$ are constant and we have an exact sequence
\begin{equation}\label{eq:noetherianity1}
0 \to U \to \ensuremath{\mathcal H}^{-1}(Q_l) \to \ensuremath{\mathcal H}^0(A_l) \to V \to 0.
\end{equation}
Moreover, by definition of $\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X)$ and since $\Im Z_{\omega,B}(A_l)=0$, the sheaf $\ensuremath{\mathcal H}^0(A_l)$ is a torsion sheaf supported in dimension $0$. Write $B_l:=A_l/A_{l-1}$.
Again, by looking at the induced long exact sequence of sheaves and the previous observation, we deduce that
\[
0 \to \ensuremath{\mathcal H}^{-1}(A_{l-1}) \xrightarrow{\cong} \ensuremath{\mathcal H}^{-1}(A_l) \xrightarrow{0} \underbrace{\ensuremath{\mathcal H}^{-1}(B_l)}_{\text{torsion-free}\Rightarrow =0} \to \underbrace{\ensuremath{\mathcal H}^{0}(A_{l-1})}_{\text{torsion}} \to \ensuremath{\mathcal H}^{0}(A_l) \to \ensuremath{\mathcal H}^{0}(B_l)\to 0.
\]
Hence, it only remains to show that $\ensuremath{\mathcal H}^0(B_l)=0$ for all $l\gg 0$, or equivalently, to bound the length of $\ensuremath{\mathcal H}^0(A_l)$. But, by letting $K_l:=\mathop{\mathrm{Ker}}\nolimits (\ensuremath{\mathcal H}^0(A_l)\ensuremath{\twoheadrightarrow} V)$, \eqref{eq:noetherianity1} gives an exact sequence
of sheaves
\[
0 \to U \to \ensuremath{\mathcal H}^{-1}(Q_l) \to K_l \to 0,
\]
where $K_l$ is a torsion sheaf supported in dimension $0$ as well and $\ensuremath{\mathcal H}^{-1}(Q_l)$ is torsion-free.
This gives a bound on the length of $K_l$ and therefore a bound on the length of $\ensuremath{\mathcal H}^0(A_l)$, as we wanted.
\end{proof}
To finish the proof that, under our rationality assumptions, $\sigma_{\omega,B}$ is a Bridgeland stability condition, we still need to prove the support property, namely Theorem \ref{thm:BogomolovBridgelandStability} in our case. The idea is to let $\alpha \to \infty$ and use the Bogomolov theorem for sheaves (together with Remark \ref{rmk:TorsionSheavesBogomolov}).
First we use the following lemma (we will give a more precise statement in Section \ref{subsec:WallChamber}).
It first appeared in the K3 surface case as a particular case of \cite[Proposition 14.2]{Bri08:stability_k3}.
\begin{lem}\label{lem:large_volume_limit_tilt}
Let $\omega,B \in N^1(X)$ with $\omega$ ample.
If $E \in \mathop{\mathrm{Coh}}\nolimits^{\omega, B}(X)$ is $\sigma_{\alpha\cdot \omega, B}$-semistable
for all $\alpha \gg 0$, then it satisfies one of the following conditions:
\begin{enumerate}
\item $\ensuremath{\mathcal H}^{-1}(E)=0$ and $\ensuremath{\mathcal H}^0(E)$ is a $\mu_{\omega,B}$-semistable torsion-free sheaf.
\item $\ensuremath{\mathcal H}^{-1}(E)=0$ and $\ensuremath{\mathcal H}^0(E)$ is a torsion sheaf.
\item $\ensuremath{\mathcal H}^{-1}(E)$ is a $\mu_{\omega,B}$-semistable torsion-free sheaf and $\ensuremath{\mathcal H}^0(E)$ is either $0$ or a torsion sheaf supported in dimension zero.
\end{enumerate}
\end{lem}
\begin{proof}
One can compute $\sigma_{\alpha \omega, B}$-stability with slope $\tfrac{2\nu_{\alpha \omega, B}}{\alpha}$ instead of $\nu_{\alpha \omega, B}$. This is convenient in the present argument because
\[
\lim_{\alpha \to \infty} \tfrac{2\nu_{\alpha \omega, B}}{\alpha}(E) = - \mu_{\omega, B}^{-1}(E).
\]
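Concretely, assuming $\nu_{\omega, B}$ is normalized as $\nu_{\omega, B} = \left(\mathop{\mathrm{ch}}\nolimits_2^B - \frac{\omega^2}{2}\mathop{\mathrm{ch}}\nolimits_0^B\right)\big/\left(\omega \cdot \mathop{\mathrm{ch}}\nolimits_1^B\right)$, one computes
\[
\frac{2\nu_{\alpha \omega, B}(E)}{\alpha} = \frac{2\mathop{\mathrm{ch}}\nolimits_2^B(E) - \alpha^2\omega^2 \mathop{\mathrm{ch}}\nolimits_0^B(E)}{\alpha^2\, \omega \cdot \mathop{\mathrm{ch}}\nolimits_1^B(E)} \xrightarrow{\ \alpha \to \infty\ } -\frac{\omega^2 \mathop{\mathrm{ch}}\nolimits_0^B(E)}{\omega \cdot \mathop{\mathrm{ch}}\nolimits_1^B(E)} = -\mu_{\omega, B}^{-1}(E).
\]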
By definition of $\mathop{\mathrm{Coh}}\nolimits^{\omega, B}(X)$ the object $E$ is an extension $0 \to F[1] \to E \to T \to 0$,
where $F \in \ensuremath{\mathcal F}_{\omega, B}$ and $T \in \ensuremath{\mathcal T}_{\omega, B}$.
Assume that $\omega \cdot \mathop{\mathrm{ch}}\nolimits_1^B(E) = 0$. Then both $\omega \cdot \mathop{\mathrm{ch}}\nolimits_1^B(F) = 0$ and $\omega \cdot \mathop{\mathrm{ch}}\nolimits_1^B(T) = 0$. By definition of $\ensuremath{\mathcal F}_{\omega, B}$ and $\ensuremath{\mathcal T}_{\omega, B}$ this means that $T$ is either $0$ or supported in dimension $0$, and $F$ is either $0$ or a $\mu_{\omega, B}$-semistable torsion-free sheaf. Therefore, for the rest of the proof we can assume $\omega \cdot \mathop{\mathrm{ch}}\nolimits_1^B(E) > 0$.
Assume that $\mathop{\mathrm{ch}}\nolimits_0^B(E) \geq 0$. By definition $-\mu_{\omega, B}^{-1}(F[1]) \geq 0$. The inequality $\omega \cdot \mathop{\mathrm{ch}}\nolimits_1^B(E) > 0$ implies $-\mu_{\omega, B}^{-1}(E) < 0$. Since $E$ is $\sigma_{\alpha \omega, B}$-semistable for $\alpha \gg 0$, we get $F = 0$ and $E \in \ensuremath{\mathcal T}_{\omega, B}$ is a sheaf. If $E$ is torsion, we are in case (2). Assume $E$ is neither torsion nor slope semistable. Then by definition of $\ensuremath{\mathcal T}_{\omega, B}$ there is an exact sequence
\[
0 \to A \to E \to B \to 0
\]
in $\ensuremath{\mathcal T}_{\omega, B} = \mathop{\mathrm{Coh}}\nolimits^{\omega, B}(X) \cap \mathop{\mathrm{Coh}}\nolimits(X)$ such that $\mu_{\omega, B}(A) > \mu_{\omega, B}(E) > 0$. But then $-\mu_{\omega, B}^{-1} (A) > -\mu_{\omega, B}^{-1} (E)$ contradicts the fact that $E$ is $\sigma_{\alpha \omega, B}$-semistable for $\alpha \gg 0$.
Assume that $\mathop{\mathrm{ch}}\nolimits_0^B(E) < 0$. If $\omega \cdot \mathop{\mathrm{ch}}\nolimits_1^B(T) = 0$, then $T \in \ensuremath{\mathcal T}_{\omega, B}$ implies $\mathop{\mathrm{ch}}\nolimits_0^B(T) = 0$ and up to semistability of $F$ we are in case (3). Let $\omega \cdot \mathop{\mathrm{ch}}\nolimits_1^B(T) > 0$. By definition $-\mu_{\omega, B}^{-1}(T) < 0$. As before we have $-\mu_{\omega, B}^{-1}(E) > 0$ and the fact that $E$ is $\sigma_{\alpha \omega, B}$-semistable for $\alpha \gg 0$ implies $T = 0$. In either case we are left to show that $F$ is $\mu_{\omega, B}$-semistable. If not, there is an exact sequence
\[
0 \to A \to F \to B \to 0
\]
in $\ensuremath{\mathcal F}_{\omega, B} = \mathop{\mathrm{Coh}}\nolimits^{\omega, B}(X)[-1] \cap \mathop{\mathrm{Coh}}\nolimits(X)$ such that $\mu_{\omega, B}(A) > \mu_{\omega, B}(F)$. Therefore, there is an injective map $A[1] \ensuremath{\hookrightarrow} E$ in $\mathop{\mathrm{Coh}}\nolimits^{\omega, B}(X)$ such that $-\mu_{\omega, B}^{-1}(A[1]) > -\mu_{\omega, B}^{-1}(E)$ in contradiction to the fact that $E$ is $\sigma_{\alpha \omega, B}$-semistable for $\alpha \gg 0$.
\end{proof}
\begin{exercise}\label{exer:SmallerImaginaryPart}
Let $B \in \mathop{\mathrm{NS}}\nolimits(X)_\ensuremath{\mathbb{Q}}$ be a rational divisor class and let $\omega = \alpha H$, where $\alpha\in\ensuremath{\mathbb{R}}_{>0}$ and $H \in \mathop{\mathrm{NS}}\nolimits(X)$ is an integral ample divisor class.
Show that
\[
c:= \mathop{\mathrm{min}}\nolimits \left\{ H \cdot \mathop{\mathrm{ch}}\nolimits_1^B(F) \, :\, \begin{array}{l} F\in\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X)\\ H \cdot \mathop{\mathrm{ch}}\nolimits_1^B(F)>0\end{array}\right\}
\]
exists and is positive.
Let $E\in\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X)$ satisfy $H \cdot \mathop{\mathrm{ch}}\nolimits_1^B(E)=c$ and $\mathop{\mathrm{Hom}}\nolimits(A,E)=0$, for all $A\in\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X)$ with $H \cdot \mathop{\mathrm{ch}}\nolimits_1^B(A)=0$. Show that $E$ is $\sigma_{\omega,B}$-stable.
\end{exercise}
We can now prove Theorem \ref{thm:BogomolovBridgelandStability} (and so Theorem \ref{thm:BridgelandSurface}) in the rational case.
\begin{proof}[Proof of Theorem \ref{thm:BogomolovBridgelandStability}, rational case]
Since $B$ is rational and $\omega = \alpha H$, with $H$ being an integral divisor, the imaginary part is a discrete function. We proceed by induction on $H \cdot \mathop{\mathrm{ch}}\nolimits_1^B$.
Let $E\in\mathop{\mathrm{Coh}}\nolimits^{H,B}(X)$ be $\sigma_{\alpha_0 H,B}$-semistable for some $\alpha_0>0$ such that $H \cdot \mathop{\mathrm{ch}}\nolimits_1^B(E)>0$ is minimal.
Then by Exercise \ref{exer:SmallerImaginaryPart}, $E$ is stable for all $\alpha\gg0$.
Therefore, by Lemma \ref{lem:large_volume_limit_tilt}, both $\Delta^C_{\omega,B}(E)\geq 0$ and $\overline{\Delta}^B_\omega(E) \geq0$ hold, since they are true for semistable sheaves.
The induction step is subtle: we would like to use the wall and chamber structure.
If an object $E$ is stable for all $\alpha\gg0$, then again by Lemma \ref{lem:large_volume_limit_tilt} we are done.
If not, it is destabilized at a certain point. The issue is that we do not know the support property yet, and therefore we do not know that walls are well behaved.
Fortunately, as $\alpha$ increases, all possible destabilizing subobjects and quotients have strictly smaller $H \cdot \mathop{\mathrm{ch}}\nolimits_1^B$, and these satisfy the desired inequalities by our induction assumption.
By following the same argument as in Proposition \ref{prop:locally_finite} (and Remark \ref{rmk:WallChamberGeneralSemistability}), such a support property type statement for all potentially destabilizing classes is enough to ensure that $E$ satisfies well-behaved wall-crossing; we can then conclude by Lemma \ref{lem:convex_cone}.
\end{proof}
\subsubsection*{The general case}
We only sketch the argument for the general case.
This is explained in detail in the case of K3 surfaces in \cite{Bri08:stability_k3}, and can be directly deduced from \cite[Sections 6 \& 7]{Bri07:stability_conditions}.
The idea is to use the support property we just proved in the rational case to deform $B$ and $\omega$ in all possible directions, by using Bridgeland's Deformation Theorem \ref{thm:BridgelandMain} (and Proposition \ref{prop:ExplicitBridgelandMain}).
The only thing we have to make sure is that once we have the correct central charge, the category is indeed $\mathop{\mathrm{Coh}}\nolimits^{\omega,B}(X)$.
The intuition behind this is in the following result.
\begin{lem}
Let $\sigma=(\ensuremath{\mathcal A}, Z_{\omega, B})$ be a stability condition satisfying the support property such that all skyscraper sheaves are stable of phase one. Then $\ensuremath{\mathcal A} = \mathop{\mathrm{Coh}}\nolimits^{\omega, B}(X)$ holds.
\end{lem}
\begin{proof}
We start by showing that all objects $E \in \ensuremath{\mathcal A}$ have $\ensuremath{\mathcal H}^i(E) = 0$ whenever $i \neq 0, -1$, and that $\ensuremath{\mathcal H}^{-1}(E)$ is torsion-free. Notice that $E$ is an iterated extension of stable objects, so we can assume that $E$ is stable. The claim is clearly true for skyscraper sheaves, so we can assume that $E$ is not a skyscraper sheaf. Then Serre duality implies that for $i \neq 0, 1$ and any $x \in X$ we have
\[
\mathop{\mathrm{Ext}}\nolimits^i(E, \ensuremath{\mathbb{C}}(x)) = \mathop{\mathrm{Ext}}\nolimits^{2-i}(\ensuremath{\mathbb{C}}(x), E) = 0.
\]
By Proposition \ref{prop:locally_free_complex} we get that $E$ is isomorphic to a two-term complex of locally free sheaves, and the statement follows. This also implies that $\ensuremath{\mathcal H}^{-1}(E)$ is torsion-free.
Therefore, the inclusion $\ensuremath{\mathcal A} \subset \langle \mathop{\mathrm{Coh}}\nolimits(X), \mathop{\mathrm{Coh}}\nolimits(X)[1] \rangle$ holds. Set $\ensuremath{\mathcal T} = \mathop{\mathrm{Coh}}\nolimits(X) \cap \ensuremath{\mathcal A}$ and $\ensuremath{\mathcal F} = \mathop{\mathrm{Coh}}\nolimits(X) \cap \ensuremath{\mathcal A}[-1]$. By Exercise \ref{exercise:heartIsATilt} we get $\ensuremath{\mathcal A} = \langle \ensuremath{\mathcal T}, \ensuremath{\mathcal F}[1] \rangle$ and $\mathop{\mathrm{Coh}}\nolimits(X) = \langle \ensuremath{\mathcal T}, \ensuremath{\mathcal F} \rangle$. We need to show $\ensuremath{\mathcal T} = \ensuremath{\mathcal T}_{\omega, B}$ and $\ensuremath{\mathcal F} = \ensuremath{\mathcal F}_{\omega, B}$. In fact, it will be enough to show $\ensuremath{\mathcal T}_{\omega, B} \subset \ensuremath{\mathcal T}$ and $\ensuremath{\mathcal F}_{\omega, B} \subset \ensuremath{\mathcal F}$.
Let $E \in \mathop{\mathrm{Coh}}\nolimits(X)$ be slope semistable. There is an exact sequence $0 \to T \to E \to F \to 0$ with $T \in \ensuremath{\mathcal T}$ and $F \in \ensuremath{\mathcal F}$. We already showed that $F = \ensuremath{\mathcal H}^{-1}(F[1])$ is torsion-free. If $E$ is torsion, this implies $F = 0$ and $E = T \in \ensuremath{\mathcal T}$, as claimed.
Assume that $E$ is torsion free. Since $F[1] \in \ensuremath{\mathcal A}$ and $T \in \ensuremath{\mathcal A}$, we get $\mu_{\omega, B}(F) \leq 0$ and $\mu_{\omega, B}(T) \geq 0$. This is a contradiction to $E$ being stable unless either $F = 0$ or $T = 0$. Therefore, we showed $E \in \ensuremath{\mathcal F}$ or $E \in \ensuremath{\mathcal T}$.
If $\omega \cdot \mathop{\mathrm{ch}}\nolimits_1^B(E) > 0$, then $E \in \ensuremath{\mathcal T}$ and if $\omega \cdot \mathop{\mathrm{ch}}\nolimits_1^B(E) < 0$, then $E \in \ensuremath{\mathcal F}$. Assume that $\omega \cdot \mathop{\mathrm{ch}}\nolimits_1^B(E) = 0$, but $E \in \ensuremath{\mathcal T}$. Then by definition of a stability condition we have $Z_{\omega, B}(E) \in \ensuremath{\mathbb{R}}_{<0}$ and $E$ is $\sigma$-semistable. Since $E$ is a sheaf, there is a skyscraper sheaf $\ensuremath{\mathbb{C}}(x)$ together with a surjective morphism of coherent sheaves $E \ensuremath{\twoheadrightarrow} \ensuremath{\mathbb{C}}(x)$. Since $\ensuremath{\mathbb{C}}(x)$ is stable of slope $\infty$ this morphism is also a surjection in $\ensuremath{\mathcal A}$. Let $F \in \ensuremath{\mathcal A} \cap \mathop{\mathrm{Coh}}\nolimits(X) = \ensuremath{\mathcal T}$ be the kernel of this map. Then $Z(F) = Z(E) + 1$. Iterating this procedure will lead to an object $F$ with $Z(F) \in \ensuremath{\mathbb{R}}_{\geq 0}$, a contradiction.
\end{proof}
\subsection{The wall and chamber structure in the $(\alpha,\beta)$-plane}\label{subsec:WallChamber}
If we consider a certain slice of the space of Bridgeland stability conditions, the structure of the walls turns out to be rather simple.
\begin{defn}\label{def:alphabetaPlane}
Let $H\in\mathop{\mathrm{NS}}\nolimits(X)$ be an ample integral divisor class and let $B_0\in \mathop{\mathrm{NS}}\nolimits(X)_\ensuremath{\mathbb{Q}}$.
We define the $(\alpha,\beta)$-plane as the set of stability conditions of the form
$\sigma_{\alpha H, B_0 + \beta H}$, for $\alpha,\beta\in\ensuremath{\mathbb{R}}$, $\alpha>0$.
\end{defn}
When it is clear from context which $(\alpha,\beta)$-plane we choose (for example, if the Picard number is one), we will abuse notation and drop $H$ and $B_0$ from the notation; for example, we denote stability conditions by $\sigma_{\alpha,\beta}$, the twisted Chern character by $\mathop{\mathrm{ch}}\nolimits^\beta$, etc.
The following proposition describes all walls in the $(\alpha,\beta)$-plane.
It was originally observed by Bertram and completely proved in \cite{Mac14:nested_wall_theorem}.
\begin{prop}[Structure Theorem for Walls on Surfaces]\label{prop:StructureThmWallsSurfaces}
Fix a class $v \in K_{\mathop{\mathrm{num}}\nolimits}(X)$.
\begin{enumerate}
\item All numerical walls are either semicircles with center on the $\beta$-axis or vertical rays.
\item \label{item:walls_dont_intersect}
Two different numerical walls for $v$ cannot intersect.
\item For a given class $v \in K_{\mathop{\mathrm{num}}\nolimits}(X)$ the hyperbola $\Re Z_{\alpha, \beta}(v) = 0$ intersects all numerical semicircular walls at their top points.
\item If $\mathop{\mathrm{ch}}\nolimits_0^{B_0}(v) \neq 0$, then there is a unique numerical vertical wall defined by the equation
\[
\beta = \frac{H \mathop{\mathrm{ch}}\nolimits_1^{B_0}(v)}{H^2 \mathop{\mathrm{ch}}\nolimits_0^{B_0}(v)}.
\]
\item If $\mathop{\mathrm{ch}}\nolimits_0^{B_0}(v) \neq 0$, then all semicircular walls to either side of the unique numerical vertical wall are strictly nested semicircles.
\item If $\mathop{\mathrm{ch}}\nolimits_0^{B_0}(v) = 0$, then there are only semicircular walls that are strictly nested.
\item \label{item:walls_dont_stop}
If a wall is an actual wall at a single point, it is an actual wall everywhere along the numerical wall.
\end{enumerate}
\end{prop}
\begin{exercise}
Prove Proposition \ref{prop:StructureThmWallsSurfaces}. \emph{(Hint: For (\ref{item:walls_dont_intersect}) ignore the slope and rephrase everything with just $Z$ using linear algebra. For (\ref{item:walls_dont_stop}) show that a destabilizing subobject or quotient would have to destabilize itself at some point of the wall.)}
\end{exercise}
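To illustrate the statement: for $v=(1,0,-n)$ with $n>0$ and $B_0=0$, we have $\mathop{\mathrm{ch}}\nolimits_0^{B_0}(v)=1\neq0$ and $H\cdot\mathop{\mathrm{ch}}\nolimits_1^{B_0}(v)=0$, so the unique vertical wall is the ray $\beta=0$, and all semicircular walls are strictly nested on either side of it. We will determine the largest one among them in Section \ref{subsec:LargestWallHilbertScheme}.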
As an application, it is easy to show that a ``largest wall'' exists, and we will also be able to prove that walls are locally finite in the surface case without using Proposition \ref{prop:locally_finite}. Both will follow directly from the following statement, taken from \cite[Lemma 5.6]{Sch15:stability_threefolds}.
\begin{lem}
Let $v \in K_{\mathop{\mathrm{num}}\nolimits}(X)$ be a non-zero class such that $\overline{\Delta}_H^{B_0}(v) \geq 0$. For fixed $\beta_0 \in \ensuremath{\mathbb{Q}}$ there are only finitely many walls intersecting the vertical line $\beta = \beta_0$.
\end{lem}
\begin{proof}
Any wall has to come from an exact sequence $0 \to F \to E \to G \to 0$ in $\mathop{\mathrm{Coh}}\nolimits^{\beta}(X)$. Let $(H^2 \cdot \mathop{\mathrm{ch}}\nolimits_0^{\beta}(E), H \cdot \mathop{\mathrm{ch}}\nolimits_1^{\beta}(E), \mathop{\mathrm{ch}}\nolimits_2^{\beta}(E)) = (R, C, D)$ and $(H^2 \cdot \mathop{\mathrm{ch}}\nolimits_0^{\beta}(F), H \cdot \mathop{\mathrm{ch}}\nolimits_1^{\beta}(F), \mathop{\mathrm{ch}}\nolimits_2^{\beta}(F)) = (r, c, d)$. Notice that due to the fact that $\beta \in \ensuremath{\mathbb{Q}}$ the possible values of $r$, $c$ and $d$ are discrete in $\ensuremath{\mathbb{R}}$. Therefore, it will be enough to bound those values to obtain finiteness.
By definition of $\mathop{\mathrm{Coh}}\nolimits^{\beta}(X)$ one has $0 \leq c \leq C$. If $C = 0$, then $c = 0$ and we are dealing with the unique vertical wall. Therefore, we may assume $C \neq 0$. Let $\Delta := C^2 - 2RD$. The Bogomolov inequality together with Lemma \ref{lem:convex_cone} implies $0 \leq c^2 - 2rd \leq \Delta$. Therefore, we get
\[
\frac{c^2}{2} \geq rd \geq \frac{c^2 - \Delta}{2}.
\]
Since the possible values of $r$ and $d$ are discrete in $\ensuremath{\mathbb{R}}$, there are finitely many possible values unless $r = 0$ or $d = 0$.
Assume $R = r = 0$. Then the equality $\nu_{\alpha, \beta}(F) = \nu_{\alpha,
\beta}(E)$ holds if and only if $Cd-Dc = 0$. In particular, it is independent of
$(\alpha, \beta)$. Therefore, the sequence does not define a wall.
If $r = 0$, $R \neq 0$, and $D - d \neq 0$, then using the same type of inequality for $G$ instead of $E$ will finish the proof. If $r = 0$ and $D - d = 0$, then $d = D$ and there are only finitely many walls like this, because we already bounded $c$.
Assume $D = d = 0$. Then the equality $\nu_{\alpha, \beta}(F) = \nu_{\alpha,
\beta}(E)$ holds if and only if $Rc - Cr = 0$. Again this cannot define a wall.
If $d = 0$, $D \neq 0$, and $R - r \neq 0$, then using the same type of inequality for $G$ instead of $E$ will finish the proof. If $d = 0$ and $R - r = 0$, then $r = R$ and there are only finitely many walls like this, because we already bounded $c$.
\end{proof}
\begin{cor}
\label{cor:LargestWallExists}
Let $v \in K_{\mathop{\mathrm{num}}\nolimits}(X)$ be a non-zero class such that $\overline{\Delta}_H^{B_0}(v)\geq 0$. Then semicircular walls in the $(\alpha, \beta)$-plane with respect to $v$ are bounded from above.
\end{cor}
\begin{cor}
\label{cor:locally_finite_surfaces}
Let $v \in K_{\mathop{\mathrm{num}}\nolimits}(X)$ be a non-zero class such that $\overline{\Delta}_H^{B_0}(v)\geq 0$. Walls in the $(\alpha, \beta)$-plane with respect to $v$ are locally finite, i.e., any compact subset of the upper half plane intersects only finitely many walls.
\end{cor}
We will compute the largest wall in examples: see Section \ref{subsec:LargestWallHilbertScheme}. An immediate consequence of Corollary \ref{cor:LargestWallExists} is the following precise version of Lemma \ref{lem:large_volume_limit_tilt}. This was proved in the case of K3 surfaces in \cite[Proposition 14.2]{Bri08:stability_k3}. The general proof is essentially the same if the statement is correctly adjusted.
\begin{exercise}
\label{exercise:BridgelandvsGieseker}
Let $v \in K_{\mathop{\mathrm{num}}\nolimits}(X)$ be a non-zero class with positive rank and let $\beta \in \ensuremath{\mathbb{R}}$ such that $H \cdot \mathop{\mathrm{ch}}\nolimits_1^{\beta}(v) > 0$. Then there exists $\alpha_0>0$ such that for any $\alpha > \alpha_0$ the set of $\sigma_{\alpha, \beta}$-semistable objects with class $v$ is the same as the set of twisted $(\omega, B_0 - \tfrac{1}{2} K_X)$-Gieseker semistable sheaves with class $v$. Moreover, $\sigma_{\alpha, \beta}$-stable objects of class $v$ are the same as twisted $(\omega, B_0 - \tfrac{1}{2} K_X)$-Gieseker stable sheaves with class $v$. \emph{Hint: Follow the proof of Lemma \ref{lem:large_volume_limit_tilt} and compare lower terms.}
\end{exercise}
\subsection{Examples of semistable objects}\label{subsec:ExampleSemistable}
We already saw a few easy examples of stable objects with respect to $\sigma_{\omega,B}$: skyscraper sheaves and objects with minimal $H \cdot \mathop{\mathrm{ch}}\nolimits_1^B$ (or with $H \cdot \mathop{\mathrm{ch}}\nolimits_1^B=0$).
See Exercise \ref{exercise:MinimalObjects} and Exercise \ref{exer:SmallerImaginaryPart}.
The key example of Bridgeland semistable objects are those with trivial discriminant.
\begin{lem}
\label{lem:parallel_line_bundle}
Let $E$ be a $\mu_{\omega,B}$-stable vector bundle.
Assume that either $\Delta_{\omega,B}^C(E)=0$, or $\overline{\Delta}_H^B(E)=0$.
Then $E$ is $\sigma_{\omega,B}$-stable.
\end{lem}
\begin{proof}
We can assume $\omega$ and $B$ to be rational.
Consider the $(\alpha,\beta)$-plane.
By Exercise \ref{exercise:BridgelandvsGieseker}, $E$ (or $E[1]$) is stable for $\alpha\gg0$.
The statement now follows directly from Corollary \ref{cor:Qzero}.
\end{proof}
In particular, all line bundles are stable everywhere if the N\'eron-Severi group is of rank one, or if the constant $C_\omega$ of Exercise \ref{exer:ConstantCH} is zero (e.g., for abelian surfaces).
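Indeed, if $\mathop{\mathrm{NS}}\nolimits(X)=\ensuremath{\mathbb{Z}}\cdot H$ and we write $c_1(L)=kH$ and $B=\beta H$, then $\mathop{\mathrm{ch}}\nolimits^B(L)=\left(1,(k-\beta)H,\frac{(k-\beta)^2H^2}{2}\right)$, so
\[
\overline{\Delta}_H^B(L) = \left((k-\beta)H^2\right)^2 - 2H^2\cdot\frac{(k-\beta)^2H^2}{2} = 0,
\]
and Lemma \ref{lem:parallel_line_bundle} applies. In higher Picard rank, $\overline{\Delta}_H^B(L)=(H\cdot\mathop{\mathrm{ch}}\nolimits_1^B(L))^2-H^2\,(\mathop{\mathrm{ch}}\nolimits_1^B(L))^2$ is non-negative by the Hodge Index Theorem, but it vanishes only when $c_1(L)-B$ is proportional to $H$; stability can indeed fail, as the following exercise shows.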
\begin{exercise}
Let $\pi\colon X\to\ensuremath{\mathbb{P}}^2$ be the blow-up of $\ensuremath{\mathbb{P}}^2$ at one point.
Let $h$ be the pull-back $\pi^*\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^2}(1)$ and let $f$ denote the fiber of the $\ensuremath{\mathbb{P}}^1$-bundle $X\to\ensuremath{\mathbb{P}}^1$.
Consider the anti-canonical line bundle $H:= - K_X = 2h+f$, and consider the $(\alpha,\beta)$-plane with respect to $\omega=\alpha H$, $B=\beta H$.
Show that $\ensuremath{\mathcal O}_X(h)$ is $\sigma_{\alpha,\beta}$-stable for all $\alpha,\beta$, while there exists $(\alpha_0,\beta_0)$ for which $\ensuremath{\mathcal O}(2h)$ is not $\sigma_{\alpha_0,\beta_0}$-semistable.
\end{exercise}
\subsection{Moduli spaces}\label{subsec:ModuliSurfaces}
We keep the notation as in the beginning of this section, namely $\omega,B\in N^1(X)$ with $\omega$ ample.
We fix $v_0\in K_{\mathop{\mathrm{num}}\nolimits}(X)$ and $\phi\in\ensuremath{\mathbb{R}}$.
Consider the stack $\mathfrak{M}_{\omega,B}(v_0,\phi):=\mathfrak{M}_{\sigma_{\omega,B}}(v_0,\phi)$ (and $\mathfrak{M}_{\omega,B}^s(v_0,\phi)$) as in Section \ref{subsec:moduli}.
\begin{thm}[Toda]
$\mathfrak{M}_{\omega,B}(v_0,\phi)$ is a universally closed Artin stack of finite type over $\ensuremath{\mathbb{C}}$.
Moreover, $\mathfrak{M}_{\omega,B}^s(v_0,\phi)$ is a $\ensuremath{\mathbb{G}}_m$-gerbe over an algebraic space $M_{\omega,B}(v_0,\phi)$. Finally, if $\mathfrak{M}_{\omega,B}(v_0,\phi)=\mathfrak{M}_{\omega,B}^s(v_0,\phi)$, then $M_{\omega,B}(v_0,\phi)$ is a proper algebraic space over $\ensuremath{\mathbb{C}}$.
\end{thm}
\begin{proof}[Ideas from the proof]
As we observed after Question \ref{question:OpenAndBounded}, we only need to show openness and boundedness.
The idea to show boundedness is to reduce to semistable sheaves and use boundedness for them.
For openness, the key technical result is a construction by Abramovich-Polishchuk \cite{AP06:constant_t_structures}, which we will recall in Theorem \ref{thm:constant_t_structure}. Then openness for Bridgeland semistable objects follows from the existence of relative Harder-Narasimhan filtrations for sheaves.
We refer to \cite{Tod08:K3Moduli} for all the details.
\end{proof}
There are only a few examples where we can be more precise about the existence of moduli spaces. We will present some of them below.
To simplify notation, we will drop from now on the phase $\phi$ from the notation for a moduli space.
\subsubsection*{The projective plane}
Let $X=\ensuremath{\mathbb{P}}^2$.
We identify the lattice $K_0(\ensuremath{\mathbb{P}}^2)=K_{\mathop{\mathrm{num}}\nolimits}(\ensuremath{\mathbb{P}}^2)$ with $\ensuremath{\mathbb{Z}}^{\oplus 2}\oplus \frac{1}{2}\ensuremath{\mathbb{Z}}$, and the Chern character with a triple $\mathop{\mathrm{ch}}\nolimits=(r,c,s)$, where $r$ is the rank, $c$ is the first Chern character, and $s$ is the second Chern character.
We fix $v_0\in K_{\mathop{\mathrm{num}}\nolimits}(\ensuremath{\mathbb{P}}^2)$.
\begin{thm}
\label{thm:P2}
For all $\alpha,\beta\in\ensuremath{\mathbb{R}}$, $\alpha>0$, there exists a coarse moduli space $M_{\alpha,\beta}(v_0)$ parameterizing S-equivalence classes of $\sigma_{\alpha,\beta}$-semistable objects.
It is a projective variety.
Moreover, if $v_0$ is primitive and $\sigma_{\alpha,\beta}$ is outside a wall for $v_0$, then $M_{\alpha,\beta}^s(v_0)=M_{\alpha,\beta}(v_0)$ is a smooth irreducible projective variety.
\end{thm}
The projectivity was first observed in \cite{ABCH13:hilbert_schemes_p2}, while generic smoothness is proved in this generality in \cite{LZ16:NewStabilityP2}. For the proof a GIT construction is used, but with a slightly different GIT problem.
First of all, an immediate consequence of Proposition \ref{prop:StructureThmWallsSurfaces} is the following (we leave the details to the reader).
\begin{exercise}\label{exercise:SmallAlphaP2}
Given $\alpha,\beta\in\ensuremath{\mathbb{R}}$, $\alpha>0$, there exist $\alpha_0,\beta_0\in\ensuremath{\mathbb{R}}$ such that $0<\alpha_0<\frac{1}{2}$ and $\mathfrak{M}_{\alpha,\beta}(v_0)=\mathfrak{M}_{\alpha_0,\beta_0}(v_0)$.
\end{exercise}
For $0<\alpha<\frac{1}{2}$, Bridgeland stability on the projective plane is related to finite dimensional algebras, where moduli spaces are easy to construct via GIT (see Exercise \ref{ex:Quiver}).
For $k\in\ensuremath{\mathbb{Z}}$, we consider the vector bundle $\ensuremath{\mathcal E}:=\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^2}(k-1)\oplus \Omega_{\ensuremath{\mathbb{P}}^2}(k+1)\oplus \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^2}(k)$.
Let $A$ be the finite-dimensional associative algebra $\mathop{\mathrm{End}}\nolimits(\ensuremath{\mathcal E})$.
Then, by the Beilinson Theorem \cite{Bei78:exceptional_collection_pn}, the functor
\[
\Phi_\ensuremath{\mathcal E}: \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(\ensuremath{\mathbb{P}}^2) \to \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(\text{mod-}A), \, \, \Phi_\ensuremath{\mathcal E}(F) := \mathbf{R}\mathop{\mathrm{Hom}}\nolimits(\ensuremath{\mathcal E},F)
\]
is an equivalence of derived categories.
The simple objects in the category $\Phi_\ensuremath{\mathcal E}^{-1}(\text{mod-}A)$ are $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^2}(k-3)[2]$, $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^2}(k-2)[1]$, and $\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^2}(k-1)$.
\begin{exercise}\label{exercise:quiverP2}
For $\alpha,\beta\in\ensuremath{\mathbb{R}}$, $0<\alpha<\frac{1}{2}$, there exist $k\in\ensuremath{\mathbb{Z}}$ and an element $A\in\widetilde{\mathop{\mathrm{GL}}\nolimits}^+(2,\ensuremath{\mathbb{R}})$ such that $A\cdot \sigma_{\alpha,\beta}$ is a stability condition as in Exercise \ref{ex:Quiver}.
\end{exercise}
It is not hard to prove that families of Bridgeland semistable objects coincide with families of modules over an algebra, as defined in \cite{Kin94:moduli_quiver_reps}.
Therefore, this proves the first part of Theorem \ref{thm:P2}. The proof of smoothness in the generic primitive case is more subtle. The complete argument is given in \cite{LZ16:NewStabilityP2}.
\subsubsection*{K3 surfaces}
Let $X$ be a K3 surface.
We consider the algebraic Mukai lattice
\[
H^*_{\mathrm{alg}}(X):=H^0(X,\ensuremath{\mathbb{Z}})\oplus \mathop{\mathrm{NS}}\nolimits(X) \oplus H^4(X,\ensuremath{\mathbb{Z}})
\]
together with the Mukai pairing
\[
\left( (r,c,s),(r',c',s') \right) = c \cdot c' - rs' - sr'.
\]
We also consider the \emph{Mukai vector}, a modified version of the Chern character by the square root of the Todd class:
\[
v := \mathop{\mathrm{ch}}\nolimits \cdot \sqrt{\mathop{\mathrm{td}}\nolimits_X} = \left(\mathop{\mathrm{ch}}\nolimits_0, \mathop{\mathrm{ch}}\nolimits_1, \mathop{\mathrm{ch}}\nolimits_2 + \mathop{\mathrm{ch}}\nolimits_0 \right).
\]
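For example, $v(\ensuremath{\mathcal O}_X)=(1,0,1)$ and $v(\ensuremath{\mathbb{C}}(x))=(0,0,1)$, so
\[
\left(v(\ensuremath{\mathcal O}_X),v(\ensuremath{\mathcal O}_X)\right) = -2, \quad \left(v(\ensuremath{\mathbb{C}}(x)),v(\ensuremath{\mathbb{C}}(x))\right)=0, \quad \left(v(\ensuremath{\mathcal O}_X),v(\ensuremath{\mathbb{C}}(x))\right)=-1;
\]
in general, the Mukai pairing is normalized so that $\left(v(E),v(F)\right)=-\chi(E,F)$ for $E,F\in\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$.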
Our lattice $\Lambda$ is $H^*_{\mathrm{alg}}(X)$ and the map $v$ is nothing but the Mukai vector.
Bridgeland stability conditions on a K3 surface can be described beyond the ones $\sigma_{\omega,B}$ we just constructed.
More precisely, the main result in \cite{Bri08:stability_k3} is that there exists a connected component $\mathop{\mathrm{Stab}}\nolimits^\dagger(X)$, containing all stability conditions $\sigma_{\omega,B}$ such that the map
\[
\eta\colon \mathop{\mathrm{Stab}}\nolimits^\dagger(X) \xrightarrow{\ensuremath{\mathcal Z}} \mathop{\mathrm{Hom}}\nolimits(H^*_{\mathrm{alg}}(X),\ensuremath{\mathbb{C}}) \xrightarrow{(-,-)} H^*_{\mathrm{alg}}(X)_\ensuremath{\mathbb{C}}
\]
is a covering onto its image, which can be described as a certain period domain.
\begin{thm}\label{thm:K3ModuliProjective}
Let $v_0\in H^*_{\mathrm{alg}}(X)$.
Then for all generic stability conditions $\sigma\in\mathop{\mathrm{Stab}}\nolimits^\dagger(X)$, there exists a coarse moduli space $M_{\sigma}(v_0)$ parameterizing S-equivalence classes of $\sigma$-semistable objects.
It is a projective variety.
Moreover, if $v_0$ is primitive, then $M_{\sigma}^s(v_0)=M_{\sigma}(v_0)$ is a smooth integral projective variety.
\end{thm}
We will not prove Theorem \ref{thm:K3ModuliProjective}; we refer to \cite{BM14:projectivity}.
The idea of the proof, based on \cite{MYY14:stability_k_trivial_surfaces}, is to reduce to the case of semistable sheaves by using a Fourier-Mukai transform.
The corresponding statement for non-generic stability conditions in $\mathop{\mathrm{Stab}}\nolimits^\dagger(X)$ is still unknown.
When the vector is primitive, the varieties appearing as moduli spaces of Bridgeland stable objects are so-called \emph{irreducible holomorphic symplectic} varieties. They are all deformation equivalent to Hilbert schemes of points on a K3 surface.
\section{Applications and Examples}
\label{sec:applications}
In this section we will give examples and applications for studying stability on surfaces.
\subsection{The largest wall for Hilbert schemes}\label{subsec:LargestWallHilbertScheme}
Let $X$ be a smooth complex projective surface of Picard rank one. We will compute the largest wall for ideal sheaves of zero-dimensional subschemes $Z \subset X$; the moduli space of these ideal sheaves turns out to be the Hilbert scheme of points on $X$. The motivation for this problem lies in understanding its nef cone. We will explain in the next section how, given a stability condition $\sigma$, one constructs nef divisors on moduli spaces of $\sigma$-semistable objects. It turns out that stability conditions on walls often induce boundary divisors of the nef cone. For a large number of points, this was carried out in \cite{BHLRSW15:nef_cones}.
\begin{prop}
\label{prop:surface_largest_wall}
Let $X$ be a smooth complex projective surface with $\mathop{\mathrm{NS}}\nolimits(X) = \ensuremath{\mathbb{Z}} \cdot H$, where $H$ is ample. Moreover, let $a > 0$ be the smallest integer such that $aH$ is effective and $n > a^2 H^2$. Then the biggest wall for the Chern character $(1,0,-n)$ to the left of the unique vertical wall is given by the equation $\nu_{\alpha, \beta}(\ensuremath{\mathcal O}(-aH)) = \nu_{\alpha, \beta}(1,0,-n)$. Moreover, the ideal sheaves $\ensuremath{\mathcal I}_Z$ that destabilize at this wall are exactly those for which $Z \subset C$ for a curve $C \in |aH|$.
\end{prop}
We need the following version of a result from \cite{CHW14:effective_cones_p2}.
\begin{lem}
\label{lem:higherRankBound}
Let $0 \to F \to E \to G \to 0$ be an exact sequence in $\mathop{\mathrm{Coh}}\nolimits^{\beta}(X)$ defining a non empty semicircular wall $W$. Assume further that $\mathop{\mathrm{ch}}\nolimits_0(F) > \mathop{\mathrm{ch}}\nolimits_0(E) \geq 0$. Then the radius $\rho_W$ satisfies the inequality
\[
\rho_W^2 \leq \frac{\overline{\Delta}_H(E)}{4 H^2 \cdot \mathop{\mathrm{ch}}\nolimits_0(F) (H^2 \cdot \mathop{\mathrm{ch}}\nolimits_0(F) - H^2 \cdot \mathop{\mathrm{ch}}\nolimits_0(E))}.
\]
\begin{proof}
Let $v,w \in K_0(X)$ be two classes such that the wall $W$ given by $\nu_{\alpha, \beta}(v) = \nu_{\alpha, \beta}(w)$ is a non empty semicircle. Then a straightforward computation shows that the radius $\rho_W$ and center $s_W$ satisfy the equation
\begin{equation}
\label{eq:radius_center}
(H^2 \cdot \mathop{\mathrm{ch}}\nolimits_0(v))^2 \rho_W^2 + \overline{\Delta}_H(v) = (H^2 \cdot \mathop{\mathrm{ch}}\nolimits_0(v) s_W - H \cdot \mathop{\mathrm{ch}}\nolimits_1(v))^2.
\end{equation}
For all $(\alpha, \beta) \in W$ we have the inequalities $H \cdot \mathop{\mathrm{ch}}\nolimits_1^{\beta}(E) \geq H \cdot \mathop{\mathrm{ch}}\nolimits_1^{\beta}(F) \geq 0$. This can be rewritten as
\[
H \cdot \mathop{\mathrm{ch}}\nolimits_1(E) + \beta (H^2 \cdot \mathop{\mathrm{ch}}\nolimits_0(F) - H^2 \cdot \mathop{\mathrm{ch}}\nolimits_0(E)) \geq H \cdot \mathop{\mathrm{ch}}\nolimits_1(F) \geq \beta H^2 \cdot \mathop{\mathrm{ch}}\nolimits_0(F).
\]
Since $H \cdot \mathop{\mathrm{ch}}\nolimits_1(F)$ is independent of $\beta$ we can maximize the right hand side and minimize the left hand side individually in the full range of $\beta$ between $s_W - \rho_W$ and $s_W + \rho_W$. By our assumptions this leads to
\[
H \cdot \mathop{\mathrm{ch}}\nolimits_1(E) + (s_W - \rho_W) (H^2 \cdot \mathop{\mathrm{ch}}\nolimits_0(F) - H^2 \cdot \mathop{\mathrm{ch}}\nolimits_0(E)) \geq (s_W + \rho_W) H^2 \cdot \mathop{\mathrm{ch}}\nolimits_0(F).
\]
By rearranging the terms and squaring we get
\[
(2H^2 \cdot \mathop{\mathrm{ch}}\nolimits_0(F) - H^2 \cdot \mathop{\mathrm{ch}}\nolimits_0(E))^2 \rho_W^2 \leq (H \cdot \mathop{\mathrm{ch}}\nolimits_1(E) - H^2 \cdot \mathop{\mathrm{ch}}\nolimits_0(E) s_W)^2 = (H^2 \cdot \mathop{\mathrm{ch}}\nolimits_0(E))^2 \rho_W^2 + \overline{\Delta}_H(E).
\]
The claim follows by simply solving for $\rho_W^2$.
\end{proof}
\end{lem}
\begin{proof}[Proof of Proposition \ref{prop:surface_largest_wall}]
We give the proof in the case where $H$ is effective, i.e., $a = 1$. The general case is longer, but not substantially harder. The full argument can be found in \cite{BHLRSW15:nef_cones}.
The equation $\nu_{\alpha, \beta}(1,0,-n) = \nu_{\alpha, \beta}(\ensuremath{\mathcal O}(-H))$ is equivalent to
\begin{equation}
\label{eq:largestwall}
\alpha^2 + \left(\beta + \frac{1}{2} + \frac{n}{H^2}\right)^2 = \left( \frac{n}{H^2} -\frac{1}{2} \right)^2.
\end{equation}
This semicircle has center $\left(-\frac{1}{2}-\frac{n}{H^2},\,0\right)$ and radius $\frac{n}{H^2}-\frac{1}{2}$, so its right endpoint is $(\beta,\alpha)=(-1,0)$; in particular, every larger semicircle intersects the line $\beta = -1$. Moreover, the object $\ensuremath{\mathcal O}(-H)$ is in the category along the wall if and only if $n > \tfrac{H^2}{2}$.
We will first show that there is no bigger semicircular wall. Assume we have an exact sequence
\[
0 \to F \to \ensuremath{\mathcal I}_Z \to G \to 0
\]
where $Z \subset X$ has dimension $0$ and length $n$. Moreover, assume the equation $\nu_{\alpha, -1}(F) = \nu_{\alpha, -1}(G)$ has a solution $\alpha > 0$. We have $\mathop{\mathrm{ch}}\nolimits^{-1}(\ensuremath{\mathcal I}_Z) = (1, H, \tfrac{H^2}{2} - n)$. By definition of $\mathop{\mathrm{Coh}}\nolimits^{-1}(X)$ we have $H \cdot \mathop{\mathrm{ch}}\nolimits^{-1}_1(F), H \cdot \mathop{\mathrm{ch}}\nolimits^{-1}_1(G) \geq 0$ and both those numbers add up to $H \cdot \mathop{\mathrm{ch}}\nolimits^{-1}_1(\ensuremath{\mathcal I}_Z) = H^2$. Since $H$ is the generator of the Picard group, this implies either $H \cdot \mathop{\mathrm{ch}}\nolimits^{-1}_1(F) = 0$ or $H \cdot \mathop{\mathrm{ch}}\nolimits^{-1}_1(G) = 0$. In particular, either $F$ or $G$ has slope infinity, and it is impossible for $F$, $\ensuremath{\mathcal I}_Z$ and $G$ to have the same slope for $\beta = -1$ and $\alpha > 0$.
Next, assume that $0 \to F \to \ensuremath{\mathcal I}_Z \to G \to 0$ induces the wall $W$. By the long exact sequence in cohomology $F$ is a torsion free sheaf. By Lemma \ref{lem:higherRankBound} the inequality $\mathop{\mathrm{ch}}\nolimits_0(F) \geq 2$ leads to
\[
\rho^2 \leq \frac{2 H^2 n }{8(H^2)^2} = \frac{n}{4H^2} < \left( \frac{n}{H^2} -\frac{1}{2} \right)^2.
\]
Therefore, any such sequence giving the wall $\nu_{\alpha, \beta}(1,0,-n) = \nu_{\alpha, \beta}(\ensuremath{\mathcal O}(-H))$ must satisfy $\mathop{\mathrm{ch}}\nolimits_0(F) = 1$. Moreover, we must also have $H \cdot \mathop{\mathrm{ch}}\nolimits^{-1}_1(F) = H \cdot \mathop{\mathrm{ch}}\nolimits_1(F) + H^2 \geq 0$ and $H \cdot \mathop{\mathrm{ch}}\nolimits^{-1}_1(G) = - H \cdot \mathop{\mathrm{ch}}\nolimits_1(F) \geq 0$. A simple calculation shows that $\mathop{\mathrm{ch}}\nolimits_1(F) = 0$ does not give the correct wall and therefore, $\mathop{\mathrm{ch}}\nolimits_1(F) = -H$. Another straightforward computation implies that only $\mathop{\mathrm{ch}}\nolimits_2(F) = \tfrac{H^2}{2}$ defines the right wall numerically. Since $F$ is a torsion free sheaf, this means $F = \ensuremath{\mathcal O}(-H)$ implying the claim.
\end{proof}
\begin{exercise}
Any subscheme $Z \subset \ensuremath{\mathbb{P}}^2$ of dimension $0$ and length $4$ is contained in a quadric. Said differently, there is a morphism $\ensuremath{\mathcal O}(-2) \ensuremath{\hookrightarrow} \ensuremath{\mathcal I}_Z$, i.e., no ideal sheaf is stable below the wall $\nu_{\alpha, \beta}(\ensuremath{\mathcal O}(-2)) = \nu_{\alpha, \beta}(\ensuremath{\mathcal I}_Z)$. Note that $\mathop{\mathrm{ch}}\nolimits(\ensuremath{\mathcal I}_Z) = (1,0,-4)$. The goal of this exercise is to compute all bigger walls for this Chern character.
\begin{enumerate}
\item Compute the equation of the wall $\nu_{\alpha, \beta}(\ensuremath{\mathcal O}(-2)) = \nu_{\alpha, \beta}(\ensuremath{\mathcal I}_Z)$. Why do all bigger walls intersect the line $\beta = -2$?
\item Show that there are two walls bigger than $\nu_{\alpha, \beta}(\ensuremath{\mathcal O}(-2)) = \nu_{\alpha, \beta}(\ensuremath{\mathcal I}_Z)$. \emph{Hint: Let $0 \to F \to \ensuremath{\mathcal I}_Z \to G \to 0$ define a wall for some $\alpha > 0$ and $\beta = -2$. Then $F$ and $G$ are semistable, i.e. they satisfy the Bogomolov inequality. Additionally show that $0 < \mathop{\mathrm{ch}}\nolimits^{\beta}_1(F) < \mathop{\mathrm{ch}}\nolimits^{\beta}_1(E)$.}
\item Determine the Jordan-H\"older filtration of any ideal sheaf $\ensuremath{\mathcal I}_Z$ that destabilizes at any of these three walls. What do these filtrations imply about the geometry of $Z$? \emph{Hint: Use the fact that a semistable sheaf on $\ensuremath{\mathbb{P}}^2$ with Chern character $n \cdot \mathop{\mathrm{ch}}\nolimits(\ensuremath{\mathcal O}(-m))$ has to be $\ensuremath{\mathcal O}(-m)^{\oplus n}$.}
\end{enumerate}
\end{exercise}
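For orientation in part (1): completing the square as in \eqref{eq:largestwall} (here $H^2=1$ and $\mathop{\mathrm{ch}}\nolimits(\ensuremath{\mathcal O}(-2))=(1,-2,2)$), the wall $\nu_{\alpha, \beta}(\ensuremath{\mathcal O}(-2)) = \nu_{\alpha, \beta}(\ensuremath{\mathcal I}_Z)$ is the semicircle
\[
\alpha^2 + (\beta+3)^2 = 1,
\]
with right endpoint $(\beta,\alpha)=(-2,0)$; since numerical walls are nested, every bigger wall must cross the line $\beta=-2$.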
\subsection{Kodaira vanishing}
As another application, we will give a proof of Kodaira vanishing for surfaces using tilt stability. The argument was first pointed out in \cite{AB13:k_trivial}.
While it is a well-known argument by Mumford that Kodaira vanishing in the surface case is a consequence of Bogomolov's inequality, this proof follows a slightly different approach.
\begin{thm}[Kodaira Vanishing for Surfaces]
Let $X$ be a smooth projective complex surface and $H$ an ample divisor on $X$. If $K_X$ is the canonical divisor class of $X$, then the vanishing
\[
H^i(\ensuremath{\mathcal O}(H + K_X)) = 0
\]
holds for all $i > 0$.
\begin{proof}
By Serre duality, $H^2(\ensuremath{\mathcal O}(H + K_X)) \cong H^0(\ensuremath{\mathcal O}(-H))^\vee$. Since anti-ample divisors are never effective, we get $H^0(\ensuremath{\mathcal O}(-H)) = 0$.
In the same way, Serre duality implies $H^1(\ensuremath{\mathcal O}(H + K_X)) \cong H^1(\ensuremath{\mathcal O}(-H))^\vee$, and $H^1(\ensuremath{\mathcal O}(-H)) = \mathop{\mathrm{Hom}}\nolimits(\ensuremath{\mathcal O}, \ensuremath{\mathcal O}(-H)[1])$. By Lemma \ref{lem:parallel_line_bundle} both $\ensuremath{\mathcal O}$ and $\ensuremath{\mathcal O}(-H)[1]$ are tilt semistable for all $\omega = \alpha H$, $B = \beta H$, where $\alpha > 0$ and $\beta \in (-1,0)$. A straightforward computation shows
\[
\nu_{\omega, B}(\ensuremath{\mathcal O}) > \nu_{\omega, B}(\ensuremath{\mathcal O}(-H)[1]) \Leftrightarrow \alpha^2 + \left(\beta + \frac{1}{2}\right)^2 < \frac{1}{4}.
\]
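For the reader's convenience, here is the computation behind this equivalence, assuming the normalization $\nu_{\omega,B} = \left(\mathop{\mathrm{ch}}\nolimits_2^B - \frac{\omega^2}{2}\mathop{\mathrm{ch}}\nolimits_0^B\right)\big/\left(\omega\cdot\mathop{\mathrm{ch}}\nolimits_1^B\right)$: we have $\mathop{\mathrm{ch}}\nolimits^{\beta H}(\ensuremath{\mathcal O})=\left(1,-\beta H,\frac{\beta^2 H^2}{2}\right)$ and $\mathop{\mathrm{ch}}\nolimits^{\beta H}(\ensuremath{\mathcal O}(-H))=\left(1,-(1+\beta)H,\frac{(1+\beta)^2 H^2}{2}\right)$, so that
\[
\nu_{\omega, B}(\ensuremath{\mathcal O}) = -\frac{\beta^2-\alpha^2}{2\alpha\beta}, \qquad \nu_{\omega, B}(\ensuremath{\mathcal O}(-H)[1]) = -\frac{(1+\beta)^2-\alpha^2}{2\alpha(1+\beta)},
\]
where we used that the tilt slope is unchanged under shift. Since $-1<\beta<0$, clearing the denominators (which have opposite signs) reduces the inequality $\nu_{\omega, B}(\ensuremath{\mathcal O}) > \nu_{\omega, B}(\ensuremath{\mathcal O}(-H)[1])$ to $\alpha^2+\beta^2+\beta<0$.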
Therefore, there is a region in which $\nu_{\omega, B}(\ensuremath{\mathcal O}) > \nu_{\omega, B}(\ensuremath{\mathcal O}(-H)[1])$ and both these objects are tilt stable, i.e., $\mathop{\mathrm{Hom}}\nolimits(\ensuremath{\mathcal O}, \ensuremath{\mathcal O}(-H)[1]) = 0$.
\end{proof}
\end{thm}
\subsection{Stability of special objects}
As in Section \ref{subsec:ExamplesCurves}, we can look at Bridgeland stability for Lazarsfeld-Mukai bundles on a surface.
The setting is the following.
Let $X$ be a smooth projective surface.
Let $L$ be an ample integral divisor on $X$.
We assume the following: $L^2\geq8$ and $L\cdot C\geq2$, for all integral curves $C\subset X$.
By Reider's Theorem \cite{Rei88:vector_bundle_linear_systems}, the divisor $L+K_X$ is globally generated.
We define the \emph{Lazarsfeld-Mukai} vector bundle as the kernel of the evaluation map:
\[
M_{L+K_X} := \mathop{\mathrm{Ker}}\nolimits \left(\ensuremath{\mathcal O}_X\otimes H^0(X,\ensuremath{\mathcal O}_X(L+K_X)) \ensuremath{\twoheadrightarrow} \ensuremath{\mathcal O}_X(L+K_X) \right).
\]
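From the defining sequence one reads off the Chern character: writing $D:=L+K_X$ and $h^0:=h^0(X,\ensuremath{\mathcal O}_X(D))$, we have
\[
\mathop{\mathrm{ch}}\nolimits(M_{D}) = h^0\cdot(1,0,0) - \left(1,D,\frac{D^2}{2}\right) = \left(h^0-1,\,-D,\,-\frac{D^2}{2}\right).
\]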
We consider the ample divisor $H:=2L+K_X$ and $B:=\frac{K_X}{2}$, and consider the $(\alpha,\beta)$-plane with respect to $H$ and $B$.
The following question would have interesting geometric applications:
\begin{question}\label{question:ProjectiveNormality}
For which $(\alpha,\beta)$ is $M_{L+K_X}$ tilt stable?
\end{question}
\begin{exercise}\label{exercise:StabilityLazarsfeldMukai}
Assume that $M_{L+K_X}$ is $\nu_{\alpha,\beta}$-semistable for $(\alpha,\beta)$ inside the wall defined by $\nu_{\alpha,\beta}(M_{L+K_X})=\nu_{\alpha,\beta}(\ensuremath{\mathcal O}_X(-L)[1])$.
Show that $H^1(X,M_{L+K_X}\otimes\ensuremath{\mathcal O}_X(K_X+L))=0$.
\end{exercise}
If the assumption in Exercise \ref{exercise:StabilityLazarsfeldMukai} is satisfied, then the multiplication map $H^0(X,\ensuremath{\mathcal O}_X(L+K_X)) \otimes H^0(X,\ensuremath{\mathcal O}_X(L+K_X)) \to H^0(X,\ensuremath{\mathcal O}_X(2L+2K_X))$ is surjective; this is the first step toward projective normality for the embedding of $X$ given by $L+K_X$.
\begin{ex}
\label{ex:LazarsfeldMukaiK3}
The assumption in Exercise \ref{exercise:StabilityLazarsfeldMukai} is satisfied when $X$ is a K3 surface.
Indeed, this is an explicit computation using the fact that, in that case, the vector bundle $M_L$ is a \emph{spherical} object (namely, $\mathop{\mathrm{End}}\nolimits(M_L)=\ensuremath{\mathbb{C}}$ and $\mathop{\mathrm{Ext}}\nolimits^1(M_L,M_L)=0$).
\end{ex}
\section{Nef Divisors on Moduli Spaces of Bridgeland Stable Objects}
\label{sec:nef}
So far we neglected another important aspect of moduli spaces of Bridgeland stable objects, namely the construction of divisors on them. Let $X$ be an arbitrary smooth projective variety. Given a stability condition $\sigma$ and a fixed set of invariants $v \in \Lambda$ we will demonstrate how to construct a nef divisor on moduli spaces of $\sigma$-semistable objects with class $v$. This was originally described in \cite{BM14:projectivity}.
Assume that $\sigma = (Z,\ensuremath{\mathcal A})$ is a stability condition on $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$, $S$ is a proper algebraic space of finite type over $\ensuremath{\mathbb{C}}$ and $v \in \Lambda$ a fixed set of invariants. Moreover, let $\ensuremath{\mathcal E} \in \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X \times S)$ be a flat family of $\sigma$-semistable objects of class $v$, i.e., $\ensuremath{\mathcal E}$ is $S$-perfect (see Subsection \ref{subsec:moduli} for a definition) and for every $\ensuremath{\mathbb{C}}$-point $P \in S$, the derived restriction $\ensuremath{\mathcal E}_{|X \times \{P\}}$ is $\sigma$-semistable of class $v$. The purpose of making the complex $S$-perfect is to make the derived restriction well defined. A divisor class $D_{\sigma, \ensuremath{\mathcal E}}$ can be defined on $S$ by its intersection number with any projective integral curve $C \subset S$:
\[
D_{\sigma, \ensuremath{\mathcal E}} \cdot C = \Im \left( -\frac{Z((p_X)_* \ensuremath{\mathcal E}_{|X \times C})}{Z(v)} \right).
\]
We skipped over a detail in the definition that is handled in Section 4 of \cite{BM14:projectivity}. It is necessary to show that the number only depends on the numerical class of the curve $C$.
The motivation for this definition is the following Theorem due to \cite{BM14:projectivity}.
\begin{thm}[Positivity Lemma]
\label{thm:positivity_lemma}
\begin{enumerate}
\item The divisor $D_{\sigma, \ensuremath{\mathcal E}}$ is nef on $S$.
\item A projective integral curve $C \subset S$ satisfies $D_{\sigma, \ensuremath{\mathcal E}} \cdot C = 0$ if and only if for two general elements $c,c' \in C$ the restrictions $\ensuremath{\mathcal E}_{|X \times \{c\}}$ and $\ensuremath{\mathcal E}_{|X \times \{c'\}}$ are $S$-equivalent, i.e., their Jordan-H\"older filtrations have the same stable factors up to order.
\end{enumerate}
\end{thm}
We will give a proof that the divisor is nef and refer to \cite{BM14:projectivity} for the whole statement. Further background material is necessary. We will mostly follow the proof in \cite{BM14:projectivity}. Without loss of generality we can assume that $Z(v) = -1$ by scaling and rotating the stability condition. Indeed, assume that $Z(v) = r_0 e^{\sqrt{-1} \phi_0 \pi}$ and let $\ensuremath{\mathcal P}$ be the slicing of our stability condition. We can then define a new stability condition by setting $\ensuremath{\mathcal P}'(\phi) := \ensuremath{\mathcal P}(\phi + 1 - \phi_0)$ and $Z' = Z \cdot \tfrac{1}{r_0} e^{\sqrt{-1} (1-\phi_0) \pi}$. Clearly, $Z'(v) = -1$. The definition of $D_{\sigma, \ensuremath{\mathcal E}}$ simplifies to
\[
D_{\sigma, \ensuremath{\mathcal E}} \cdot C = \Im \left( Z((p_X)_* \ensuremath{\mathcal E}_{|X \times C}) \right)
\]
for any projective integral curve $C \subset S$. From this formula the motivation for the definition becomes more clear. If $(p_X)_* \ensuremath{\mathcal E}_{|X \times C} \in \ensuremath{\mathcal A}$ holds, the fact that the divisor is nef follows directly from the definition of a stability function. As always, things are more complicated.
We can use Bridgeland's Deformation Theorem and assume further, without changing semistable objects (and the previous assumption), that the heart $\ensuremath{\mathcal A}$ is noetherian.
One of the key properties in the proof is an extension of a result by Abramovich and Polishchuk from \cite{AP06:constant_t_structures} to the following statement by Polishchuk. We will not give a proof.
\begin{thm}[{\cite[Theorem 3.3.6]{Pol07:constant_t_structures}}]
\label{thm:constant_t_structure}
Let $\ensuremath{\mathcal A}$ be the heart of a noetherian bounded t-structure on $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$. The category $\ensuremath{\mathcal A}^{qc} \subset D_{qc}(X)$ is the closure of $\ensuremath{\mathcal A}$ under infinite coproducts in the (unbounded) derived category of quasi-coherent sheaves on $X$. There is a noetherian heart $\ensuremath{\mathcal A}_S$ on $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X \times S)$ for any finite type scheme $S$ over the complex numbers satisfying the following three properties.
\begin{enumerate}
\item The heart $\ensuremath{\mathcal A}_S$ is characterized by the property
\[
\ensuremath{\mathcal E} \in \ensuremath{\mathcal A}_S \Leftrightarrow (p_X)_* \ensuremath{\mathcal E}_{|X \times U} \in \ensuremath{\mathcal A}^{qc} \text{ for every open affine } U \subset S.
\]
\item If $S = \bigcup_i U_i$ is an open covering of $S$, then
\[
\ensuremath{\mathcal E} \in \ensuremath{\mathcal A}_S \Leftrightarrow \ensuremath{\mathcal E}_{|X \times U_i} \in \ensuremath{\mathcal A}_{U_i} \text{ for all } i.
\]
\item If $S$ is projective and $\ensuremath{\mathcal O}_S(1)$ is ample, then
\[
\ensuremath{\mathcal E} \in \ensuremath{\mathcal A}_S \Leftrightarrow (p_X)_* (\ensuremath{\mathcal E} \otimes p_S^* \ensuremath{\mathcal O}_S(n)) \in \ensuremath{\mathcal A} \text{ for all } n \gg 0.
\]
\end{enumerate}
\end{thm}
In order to apply this theorem to our problem we need the family $\ensuremath{\mathcal E}$ to be in $\ensuremath{\mathcal A}_S$. The proof of the following statement is somewhat technical and we refer to \cite[Lemma 3.5]{BM14:projectivity}.
\begin{lem}
Let $\ensuremath{\mathcal E} \in \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X \times S)$ be a flat family of $\sigma$-semistable objects of class $v$. Then $\ensuremath{\mathcal E} \in \ensuremath{\mathcal A}_S$.
\end{lem}
\begin{proof}[Proof of Theorem \ref{thm:positivity_lemma} (1)]
Let $\ensuremath{\mathcal O}_C(1)$ be an ample line bundle on $C$. If we can show
\[
D_{\sigma, \ensuremath{\mathcal E}} \cdot C = \Im \left( Z((p_X)_* (\ensuremath{\mathcal E} \otimes p_S^* \ensuremath{\mathcal O}_C(n))) \right)
\]
for $n \gg 0$, then we are done. Indeed, Theorem \ref{thm:constant_t_structure} part (3) implies $(p_X)_* (\ensuremath{\mathcal E} \otimes p_S^* \ensuremath{\mathcal O}_C(n)) \in \ensuremath{\mathcal A}$ for $n \gg 0$ and the proof is concluded by the positivity properties of a stability function.
Choose $n \gg 0$ large enough such that $H^0(\ensuremath{\mathcal O}_C(n)) \neq 0$. Then there is a torsion sheaf $T$ together with a short exact sequence
\[
0 \to \ensuremath{\mathcal O}_C \to \ensuremath{\mathcal O}_C(n) \to T \to 0.
\]
Since $T$ has zero dimensional support and $\ensuremath{\mathcal E}$ is a family of objects with class $v$, we can show $Z((p_X)_* (\ensuremath{\mathcal E} \otimes p_S^* T)) \in \ensuremath{\mathbb{R}}$ by induction on the length of the support of $T$. But that shows
\[
\Im \left( Z((p_X)_* (\ensuremath{\mathcal E} \otimes p_S^* \ensuremath{\mathcal O}_C(n))) \right) = \Im \left( Z((p_X)_* (\ensuremath{\mathcal E} \otimes p_S^* \ensuremath{\mathcal O}_C)) \right). \qedhere
\]
\end{proof}
\subsection{The Donaldson morphism}
The divisor is defined so as to make the proof of the Positivity Lemma as transparent as possible. However, it is hard to compute explicitly in examples directly from the definition; in practice, the computation is usually done via the Donaldson morphism. This was originally explained in Section 4 of \cite{BM14:projectivity}.
Recall that, for a proper scheme $S$, the Euler characteristic gives a well-defined pairing
\[
\chi\colon K_0(\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(S)) \times K_0(\mathrm{D}_{\mathrm{perf}}(S)) \to \ensuremath{\mathbb{Z}}
\]
between the Grothendieck groups of the bounded derived categories of coherent sheaves $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(S)$ and of perfect complexes $\mathrm{D}_{\mathrm{perf}}(S)$.
Taking the quotient with respect to the kernel of $\chi$ on each
side we obtain numerical Grothendieck groups $K_{\mathop{\mathrm{num}}\nolimits}(S)$ and $K_{\mathop{\mathrm{num}}\nolimits}^{\mathop{\mathrm{perf}}\nolimits}(S)$, respectively, with an induced perfect pairing
\[
\chi: K_{\mathop{\mathrm{num}}\nolimits}(S) \otimes K_{\mathop{\mathrm{num}}\nolimits}^{\mathop{\mathrm{perf}}\nolimits}(S) \to \ensuremath{\mathbb{Z}}.
\]
\begin{defn}
We define the additive \emph{Donaldson morphism} $\lambda_{\ensuremath{\mathcal E}}: v^{\#} \to N^1(S)$ by
\[
w \mapsto \det((p_S)_*(p_X^* w \cdot [\ensuremath{\mathcal E}])).
\]
Here,
\[
v^{\#} = \{ w \in K_{\mathop{\mathrm{num}}\nolimits}(X)_{\ensuremath{\mathbb{R}}} : \chi(v \cdot w) = 0 \}.
\]
\end{defn}
Let $w_{\sigma} \in v^{\#}$ be the unique vector such that
\[
\chi(w_{\sigma} \cdot w') = \Im \left(-\frac{Z(w')}{Z(v)} \right)
\]
for all $w' \in K_{\mathop{\mathrm{num}}\nolimits}(X)_{\ensuremath{\mathbb{R}}}$.
\begin{prop}[{\cite[Theorem 4.4]{BM14:projectivity}}]
\label{prop:donaldson_computation}
We have $\lambda_{\ensuremath{\mathcal E}}(w_{\sigma}) = D_{\sigma, \ensuremath{\mathcal E}}$.
\begin{proof}
Let $\ensuremath{\mathcal L}_{\sigma} = (p_S)_*(p_X^* w_{\sigma} \cdot [\ensuremath{\mathcal E}])$. For any $s \in S$ we can compute the rank
\[
r(\ensuremath{\mathcal L}_{\sigma}) = \chi(S, [\ensuremath{\mathcal O}_s] \cdot \ensuremath{\mathcal L}_{\sigma}) = \chi(S \times X, [\ensuremath{\mathcal O}_{\{s\}\times X}] \cdot [\ensuremath{\mathcal E}] \cdot p_X^* w_{\sigma}) = \chi(w_{\sigma} \cdot v) = 0.
\]
Therefore, $\ensuremath{\mathcal L}_{\sigma}$ has rank zero. This implies
\[
\lambda_{\ensuremath{\mathcal E}}(w_{\sigma}) \cdot C = \chi (\ensuremath{\mathcal L}_{\sigma | C})
\]
for any projective integral curve $C \subset S$. Let $i_C: C \ensuremath{\hookrightarrow} S$ be the embedding of $C$ into $S$. Cohomology and base change implies
\begin{align*}
\ensuremath{\mathcal L}_{\sigma | C} &= i_C^* (p_S)_*(p_X^* w_{\sigma} \cdot [\ensuremath{\mathcal E}]) \\
&= (p_C)_* (i_C \times \mathop{\mathrm{id}}\nolimits_X)^* (p_X^* w_{\sigma} \cdot [\ensuremath{\mathcal E}]) \\
&= (p_C)_* (p_X^* w_{\sigma} \cdot [\ensuremath{\mathcal E}_{|C \times X}]).
\end{align*}
The proof can be finished by using the projection formula as follows
\begin{align*}
\chi (C, \ensuremath{\mathcal L}_{\sigma | C}) & = \chi (C, (p_C)_* (p_X^* w_{\sigma} \cdot [\ensuremath{\mathcal E}_{|C \times X}])) \\
&= \chi(X, w_{\sigma} \cdot (p_X)_* [\ensuremath{\mathcal E}_{|C \times X}]) \\
&= \Im \left( -\frac{Z((p_X)_* \ensuremath{\mathcal E}_{|X \times C})}{Z(v)} \right). \qedhere
\end{align*}
\end{proof}
\end{prop}
\subsection{Applications to Hilbert schemes of points}
Recall that for any positive integer $n \in \ensuremath{\mathbb{N}}$ the Hilbert scheme of $n$ points $X^{[n]}$ parameterizes subschemes $Z \subset X$ of dimension zero and length $n$. This scheme is closely connected to the symmetric product $X^{(n)}$, defined as the quotient $X^n/S_n$, where $S_n$ acts on $X^n$ by permutation of the factors. By work of Fogarty in \cite{Fog68:hilbert_schemeI} the natural map $X^{[n]} \to X^{(n)}$ is a birational morphism that resolves the singularities of $X^{(n)}$.
We recall the description of $\mathop{\mathrm{Pic}}\nolimits(X^{[n]})$ in case the surface $X$ has irregularity zero, i.e., $H^1(\ensuremath{\mathcal O}_X) = 0$; this is a further result of Fogarty \cite{Fog73:hilbert_schemeII}. If $D$ is any divisor on $X$, then there is an $S_n$-invariant divisor $D^{\boxtimes n}$ on $X^n$. This induces a divisor $D^{(n)}$ on $X^{(n)}$, which we pull back to a divisor $D^{[n]}$ on $X^{[n]}$. If $D$ is a prime divisor, then $D^{[n]}$ parameterizes those $Z \subset X$ that intersect $D$ non-trivially. Then
\[
\mathop{\mathrm{Pic}}\nolimits(X^{[n]}) \cong \mathop{\mathrm{Pic}}\nolimits(X) \oplus \ensuremath{\mathbb{Z}} \cdot \frac{E}{2},
\]
where $E$ parameterizes the locus of non-reduced subschemes $Z$. Moreover, the restriction of this isomorphism to $\mathop{\mathrm{Pic}}\nolimits(X)$ is the embedding given by $D \mapsto D^{[n]}$. A direct consequence of this result is the description of the N\'eron-Severi group as
\[
\mathop{\mathrm{NS}}\nolimits(X^{[n]}) \cong \mathop{\mathrm{NS}}\nolimits(X) \oplus \ensuremath{\mathbb{Z}} \cdot \frac{E}{2}.
\]
The divisor $\frac{E}{2}$ is integral because it is given by $\det((p_{X^{[n]}})_* \ensuremath{\mathcal U}_n)$, where $\ensuremath{\mathcal U}_n \in \mathop{\mathrm{Coh}}\nolimits(X \times X^{[n]})$ is the \emph{universal ideal sheaf} of $X^{[n]}$.
\begin{thm}
\label{thm:divisor_hilbert_scheme}
Let $X$ be a smooth complex projective surface with $\mathop{\mathrm{Pic}}\nolimits(X) = \ensuremath{\mathbb{Z}} \cdot H$, where $H$ is ample. Moreover, let $a > 0$ be the smallest integer such that $aH$ is effective. If $n \geq a^2 H^2$, then the divisor
\[
D = \frac{1}{2} K_X^{[n]} + \left(\frac{a}{2} + \frac{n}{aH^2}\right) H^{[n]} - \frac{1}{2} E
\]
is nef. If $g$ is the arithmetic genus of a curve $C \in |aH|$ and $n \geq g+1$, then $D$ is extremal. In particular, the nef cone of $X^{[n]}$ is spanned by this divisor and $H^{[n]}$.
\end{thm}
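Before turning to the proof, it may help to spell the statement out in the simplest example $X = \ensuremath{\mathbb{P}}^2$ (with $n \geq 2$, so that $E \neq 0$). Here $H$ is the class of a line, $H^2 = 1$, $a = 1$, $K_X = -3H$, and a curve $C \in |H|$ has arithmetic genus $g = 0$, so both hypotheses hold for all such $n$. The theorem then yields the nef divisor
\[
D = -\frac{3}{2} H^{[n]} + \left(\frac{1}{2} + n\right) H^{[n]} - \frac{E}{2} = (n-1) H^{[n]} - \frac{E}{2},
\]
consistent with the known description of the nef cone of $(\ensuremath{\mathbb{P}}^2)^{[n]}$ in \cite{CH15:nef_cones}.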
For the proof of this statement, we need to describe the image of the Donaldson morphism more precisely. In this case, the vector $v$ is given by $(1,0,-n)$ and $\ensuremath{\mathcal E} = \ensuremath{\mathcal U}_n \in \mathop{\mathrm{Coh}}\nolimits(X \times X^{[n]})$ is the universal ideal sheaf for $X^{[n]}$.
\begin{prop}
\label{prop:donaldson_morphism_hilb}
Choose $m$ such that $(1, 0, m) \in v^{\#}$ and for any divisor $D$ on $X$ choose $m_D$ such that $(0, D, m_D) \in v^{\#}$. Then
\begin{align*}
\lambda_{\ensuremath{\mathcal U}_n}(1, 0, m) &= \frac{E}{2}, \\
\lambda_{\ensuremath{\mathcal U}_n}(0, D, m_D) &= -D^{[n]}.
\end{align*}
\begin{proof}[Sketch of the proof]
Let $x \in X$ be an arbitrary point and $\ensuremath{\mathbb{C}}(x)$ the corresponding skyscraper sheaf. The Grothendieck-Riemann-Roch Theorem implies
\[
\mathop{\mathrm{ch}}\nolimits((p_{X^{[n]}})_*(p_X^*\ensuremath{\mathbb{C}}(x) \otimes \ensuremath{\mathcal U}_n)) = (p_{X^{[n]}})_* (\mathop{\mathrm{ch}}\nolimits(p_X^*\ensuremath{\mathbb{C}}(x) \otimes \ensuremath{\mathcal U}_n) \cdot \mathop{\mathrm{td}}\nolimits(T_{p_{X^{[n]}}})).
\]
As a consequence
\[
\lambda_{\ensuremath{\mathcal U}_n}(0,0,1) = p_X^*\left(- [x] \cdot \frac{K_X}{2} \right) = 0
\]
holds. Therefore, the values of $m$ and $m_D$ are irrelevant for the remaining computation. Similarly, we can show that
\[
-\lambda_{\ensuremath{\mathcal U}_n}\left(0,D,-\frac{D^2}{2}\right) = (p_{X^{[n]}})_* (p_X^*(-D) \cdot \mathop{\mathrm{ch}}\nolimits_2(\ensuremath{\mathcal U}_n)).
\]
Intuitively, this divisor consists of those schemes $Z$ parameterized in $X^{[n]}$ that intersect the divisor $D$, i.e., it is $D^{[n]}$ (see \cite{CG90:d_very_ample}).
Finally, we have to compute $\lambda_{\ensuremath{\mathcal U}_n}(1,0,0)$. By definition it is given as $\det((p_{X^{[n]}})_* \ensuremath{\mathcal U}_n) = \tfrac{E}{2}$.
\end{proof}
\end{prop}
\begin{cor}
\label{cor:explicit_divisor}
Let $W$ be a numerical semicircular wall for $v = (1, 0, -n)$ with center $s_W$. Assume that all ideal sheaves $\ensuremath{\mathcal I}_Z$ on $X$ with $\mathop{\mathrm{ch}}\nolimits(\ensuremath{\mathcal I}_Z) = (1, 0, -n)$ are semistable along $W$. Then
\[
D_{\alpha, \beta, \ensuremath{\mathcal U}_n} \in \ensuremath{\mathbb{R}}_{> 0} \left( \frac{K_X^{[n]}}{2} - s_W H^{[n]} - \frac{E}{2} \right)
\]
for all $\sigma \in W$.
\begin{proof}
We fix $\sigma = (\alpha, \beta) \in W$. By Proposition \ref{prop:donaldson_computation} there is a class $w_{\sigma} \in v^{\#}$ such that the divisor $D_{\sigma, \ensuremath{\mathcal U}_n}$ is given by $\lambda_{\ensuremath{\mathcal U}_n}(w_{\sigma})$. This class is characterized by the property
\[
\chi(w_{\sigma} \cdot w') = \Im \left(-\frac{Z_{\alpha, \beta}(w')}{Z_{\alpha, \beta}(v)} \right)
\]
for all $w' \in K_{\mathop{\mathrm{num}}\nolimits}(X)_{\ensuremath{\mathbb{R}}}$. We define $x,y \in \ensuremath{\mathbb{R}}$ by
\[
-\frac{1}{Z_{\alpha, \beta}(v)} = x + \sqrt{-1} y.
\]
The fact that $Z_{\alpha, \beta}$ is a stability function implies $y \geq 0$. We even have $y>0$, because $y=0$ holds if and only if $(\alpha, \beta)$ lies on the unique numerical vertical wall, contrary to our assumption. The strategy of the proof is to determine $w_{\sigma}$ by pairing it with some easy-to-calculate classes and then using the previous proposition.
Let $w_{\sigma} = (r,C,d)$, where $r, d \in \ensuremath{\mathbb{R}}$ and $C$ is a real curve class. We have
\[
r = \chi(w_{\sigma} \cdot (0,0,1)) = \Im ((x+\sqrt{-1}y)Z_{\alpha, \beta}(0,0,1)) = -y.
\]
A similar computation together with Riemann-Roch shows
\begin{align*}
(x + \beta y)H^2 &= \Im ((x+\sqrt{-1}y) \cdot Z_{\alpha, \beta} (0, H, 0)) \\
&= \chi(w_{\sigma} \cdot (0, H, 0))\\
&= \chi(0, -yH, H \cdot C) \\
&= \int_X (0, -yH, H \cdot C) \cdot \left(1, -\frac{K_X}{2}, \chi(\ensuremath{\mathcal O}_X) \right) \\
&= \frac{y}{2} H \cdot K_X + H \cdot C.
\end{align*}
Since $\mathop{\mathrm{Pic}}\nolimits(X) = \ensuremath{\mathbb{Z}} \cdot H$, we obtain
\[
C = (x + \beta y) H - \frac{y}{2} K_X = ys_W H - \frac{y}{2} K_X.
\]
The last step used $x + \beta y = ys_W$, which is a straightforward computation. We get
\[
D_{\alpha, \beta, \ensuremath{\mathcal U}_n} \in \ensuremath{\mathbb{R}}_{> 0} \left( \lambda_{\ensuremath{\mathcal U}_n}(-1, s_W H - \frac{K_X}{2}, m) \right),
\]
where $m$ is uniquely determined by $w_{\sigma} \in v^{\#}$. The statement follows now from a direct application of Proposition \ref{prop:donaldson_morphism_hilb}.
\end{proof}
\end{cor}
\begin{proof}[Proof of Theorem \ref{thm:divisor_hilbert_scheme}]
By Proposition \ref{prop:surface_largest_wall} and the assumption $n \geq a^2H^2$ we know that the largest wall destabilizes those ideal sheaves $\ensuremath{\mathcal I}_Z$ that fit into an exact sequence
\[
0 \to \ensuremath{\mathcal O}(-aH) \to \ensuremath{\mathcal I}_Z \to \ensuremath{\mathcal I}_{Z/C} \to 0,
\]
where $C \in |aH|$. This wall has center
\[
s = -\frac{a}{2} - \frac{n}{aH^2}.
\]
By Corollary \ref{cor:explicit_divisor} we get that
\[
D = \frac{1}{2} K_X^{[n]} + \left(\frac{a}{2} + \frac{n}{aH^2}\right) H^{[n]} - \frac{1}{2} E
\]
is nef. It remains to show extremality of the divisor in case $n \geq g+1$. By part (2) of the Positivity Lemma, we have to construct a one-dimensional family of $S$-equivalent objects. Such a family exists if and only if $\mathop{\mathrm{ext}}\nolimits^1(\ensuremath{\mathcal I}_{Z/C}, \ensuremath{\mathcal O}(-aH)) \geq 2$. A Riemann-Roch calculation shows
\begin{align*}
1 - g = \chi(\ensuremath{\mathcal O}_C) &= \int_X \left(0, aH, -\frac{a^2H^2}{2} \right) \cdot \left(1, -\frac{K_X}{2}, \chi(\ensuremath{\mathcal O}_X) \right) \\
&= - \frac{a}{2} H \cdot K_X -\frac{a^2H^2}{2}.
\end{align*}
Another application of Riemann-Roch for surfaces shows
\begin{align*}
\mathop{\mathrm{ext}}\nolimits^1(\ensuremath{\mathcal I}_{Z/C}, \ensuremath{\mathcal O}(-aH)) &\geq -\chi(\ensuremath{\mathcal I}_{Z/C}, \ensuremath{\mathcal O}(-aH)) \\
&= -\int_X \left(1, -aH, \frac{a^2H^2}{2} \right) \left(0, -aH, -\frac{a^2H^2}{2} - n \right) \left(1, -\frac{K_X}{2}, \chi(\ensuremath{\mathcal O}_X) \right) \\
&= n - \frac{a}{2} H \cdot K_X - \frac{a^2H^2}{2} \\
&= n + 1 - g.
\end{align*}
Therefore, $n \geq g+1$ implies that $D$ is extremal.
\end{proof}
\begin{rmk}
In particular cases, Theorem \ref{thm:divisor_hilbert_scheme} can be made more precise and general.
In the case of the projective plane \cite{CHW14:effective_cones_p2,CH15:nef_cones, LZ16:NewStabilityP2} and of K3 surfaces \cite{BM14:projectivity,BM14:stability_k3}, given any primitive vector, varying stability conditions corresponds to a directed Minimal Model Program for the corresponding moduli space. This allows one to completely describe the nef cone, the movable cone, and the pseudo-effective cone of these moduli spaces. Moreover, all corresponding birational models appear as moduli spaces of Bridgeland stable objects. This also has deep geometric applications; for example, see \cite{Bay16:BrillNoether}.
\end{rmk}
\section{Stability Conditions on Threefolds}
\label{sec:P3}
In this section we give a brief, informal introduction to the higher-dimensional case.
The main result is the construction of stability conditions on $X = \ensuremath{\mathbb{P}}^3$.
By abuse of notation, we will identify $\mathop{\mathrm{ch}}\nolimits^{\beta}_i(E) \in H^{2i}(\ensuremath{\mathbb{P}}^3, \ensuremath{\mathbb{Q}}) \cong \ensuremath{\mathbb{Q}}$ with a rational number, for any $E \in \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(\ensuremath{\mathbb{P}}^3)$.
\subsection{Tilt stability and the second tilt}
The construction of $\sigma_{\alpha, \beta} = (\mathop{\mathrm{Coh}}\nolimits^{\beta}(X), Z_{\alpha, \beta})$ for $\alpha > 0$ and $\beta \in \ensuremath{\mathbb{R}}$ can be carried out as before. However, this will not be a stability condition, because $Z_{\alpha, \beta}$ maps skyscraper sheaves to the origin. In \cite{BMT14:stability_threefolds}, this (weak) stability condition is called \emph{tilt stability}. The idea is that repeating the previous tilting process once more, with $\sigma_{\alpha, \beta}$ in place of slope stability, should allow one to construct a Bridgeland stability condition on $\mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(X)$. Let
\begin{align*}
\ensuremath{\mathcal T}'_{\alpha, \beta} &= \{E \in \mathop{\mathrm{Coh}}\nolimits^{\beta}(\ensuremath{\mathbb{P}}^3) : \text{any quotient $E
\ensuremath{\twoheadrightarrow} G$ satisfies $\nu_{\alpha, \beta}(G) > 0$} \}, \\
\ensuremath{\mathcal F}'_{\alpha, \beta} &= \{E \in \mathop{\mathrm{Coh}}\nolimits^{\beta}(\ensuremath{\mathbb{P}}^3) : \text{any subobject $F
\ensuremath{\hookrightarrow} E$ satisfies $\nu_{\alpha, \beta}(F) \leq 0$} \}
\end{align*}
and set $\ensuremath{\mathcal A}^{\alpha, \beta}(\ensuremath{\mathbb{P}}^3) = \langle \ensuremath{\mathcal F}'_{\alpha, \beta}[1],
\ensuremath{\mathcal T}'_{\alpha, \beta} \rangle $. For any $s>0$, we define
\[
Z_{\alpha,\beta,s} = -\mathop{\mathrm{ch}}\nolimits^{\beta}_3 + (s+\tfrac{1}{6})\alpha^2 \mathop{\mathrm{ch}}\nolimits^{\beta}_1 + \sqrt{-1} (\mathop{\mathrm{ch}}\nolimits^{\beta}_2 - \frac{\alpha^2}{2} \mathop{\mathrm{ch}}\nolimits^{\beta}_0)
\]
and the corresponding slope
\[
\lambda_{\alpha,\beta,s} = \frac{\mathop{\mathrm{ch}}\nolimits^{\beta}_3 - (s+\tfrac{1}{6})\alpha^2 \mathop{\mathrm{ch}}\nolimits^{\beta}_1}{\mathop{\mathrm{ch}}\nolimits^{\beta}_2 - \frac{\alpha^2}{2} \cdot \mathop{\mathrm{ch}}\nolimits^{\beta}_0}.
\]
Recall that the key point in the construction of stability conditions on surfaces was the classical Bogomolov inequality for slope stable sheaves. A similar inequality is the subject of the next theorem.
\begin{thm}[\cite{BMT14:stability_threefolds, Mac14:conjecture_p3, BMS14:abelian_threefolds}]
\label{thm:construction_threefold}
The pair $(\ensuremath{\mathcal A}^{\alpha, \beta}(\ensuremath{\mathbb{P}}^3), \lambda_{\alpha,\beta,s})$ is a Bridgeland stability condition for all $s > 0$ if and only if for all tilt stable objects $E \in \mathop{\mathrm{Coh}}\nolimits^{\beta}(\ensuremath{\mathbb{P}}^3)$ we have
\[Q_{\alpha, \beta}(E) = \alpha^2 \Delta(E) + 4(\mathop{\mathrm{ch}}\nolimits_2^{\beta}(E))^2 - 6\mathop{\mathrm{ch}}\nolimits_1^{\beta}(E) \mathop{\mathrm{ch}}\nolimits_3^{\beta}(E) \geq 0.\]
\end{thm}
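As a quick consistency check, consider a line bundle $\ensuremath{\mathcal O}(n)$: with the identification of Chern characters with rational numbers, $\mathop{\mathrm{ch}}\nolimits^{\beta}(\ensuremath{\mathcal O}(n)) = (1, n-\beta, \tfrac{(n-\beta)^2}{2}, \tfrac{(n-\beta)^3}{6})$ and $\Delta(\ensuremath{\mathcal O}(n)) = 0$, so
\[
Q_{\alpha, \beta}(\ensuremath{\mathcal O}(n)) = 4 \left( \frac{(n-\beta)^2}{2} \right)^2 - 6 (n-\beta) \cdot \frac{(n-\beta)^3}{6} = 0,
\]
i.e., line bundles satisfy the inequality with equality for all $(\alpha, \beta)$.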
It was more generally conjectured that similar inequalities hold on all smooth projective threefolds. This turned out to be true in various cases. The first proof was in the case of $\ensuremath{\mathbb{P}}^3$ in \cite{Mac14:conjecture_p3}, and a very similar proof worked for the smooth quadric hypersurface in $\ensuremath{\mathbb{P}}^4$ in \cite{Sch14:conjecture_quadric}. These results were generalized, with a fundamentally different proof, to all Fano threefolds of Picard rank one in \cite{Li15:conjecture_fano_threefold}. The conjecture is also known to be true for all abelian threefolds, with two independent proofs in \cite{MP16:conjecture_abelian_threefoldsII} and \cite{BMS14:abelian_threefolds}. However, the conjectured inequality fails for the blow-up of $\ensuremath{\mathbb{P}}^3$ at a point, as shown in \cite{Sch16:counterexample}.
\begin{rmk}\label{rmk:ExtensionToTiltStability3folds}
Many statements about Bridgeland stability on surfaces work almost verbatim in tilt stability. Analogously to Theorem \ref{prop:locally_finite}, walls in tilt stability are locally finite and stability is an open property. The structure of walls from Proposition \ref{prop:StructureThmWallsSurfaces} is the same in tilt stability. The Bogomolov inequality also holds, i.e., any tilt semistable object $E$ satisfies $\Delta(E) \geq 0$. The bound for higher rank walls in Lemma \ref{lem:higherRankBound} carries over as well.
Finally, slope stable sheaves $E$ are $\nu_{\alpha, \beta}$-stable for all $\alpha \gg 0$ and $\beta < \mu(E)$.
\end{rmk}
In these notes we will present Chunyi Li's proof in the special case of $\ensuremath{\mathbb{P}}^3$.
We first recall the Hirzebruch-Riemann-Roch Theorem for $\ensuremath{\mathbb{P}}^3$.
\begin{thm}
Let $E \in \mathop{\mathrm{D}^{\mathrm{b}}}\nolimits(\ensuremath{\mathbb{P}}^3)$. Then
\[
\chi(\ensuremath{\mathbb{P}}^3, E) = \mathop{\mathrm{ch}}\nolimits_3(E) + 2 \mathop{\mathrm{ch}}\nolimits_2(E) + \frac{11}{6} \mathop{\mathrm{ch}}\nolimits_1(E) + \mathop{\mathrm{ch}}\nolimits_0(E).
\]
\end{thm}
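For example, for $E = \ensuremath{\mathcal O}(n)$ we have $\mathop{\mathrm{ch}}\nolimits(\ensuremath{\mathcal O}(n)) = (1, n, \tfrac{n^2}{2}, \tfrac{n^3}{6})$, and the formula gives
\[
\chi(\ensuremath{\mathbb{P}}^3, \ensuremath{\mathcal O}(n)) = \frac{n^3}{6} + n^2 + \frac{11n}{6} + 1 = \frac{(n+1)(n+2)(n+3)}{6} = \binom{n+3}{3},
\]
as expected.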
In order to prove the inequality in Theorem \ref{thm:construction_threefold}, we want to reduce the problem to a simpler case.
\begin{defn}
For any object $E \in \mathop{\mathrm{Coh}}\nolimits^{\beta}(\ensuremath{\mathbb{P}}^3)$, we define
\[
\overline{\beta}(E) = \begin{cases}
\frac{\mathop{\mathrm{ch}}\nolimits_1(E) - \sqrt{\Delta_H(E)}}{\mathop{\mathrm{ch}}\nolimits_0(E)} & \mathop{\mathrm{ch}}\nolimits_0(E) \neq 0, \\
\frac{\mathop{\mathrm{ch}}\nolimits_2(E)}{\mathop{\mathrm{ch}}\nolimits_1(E)} & \mathop{\mathrm{ch}}\nolimits_0(E) = 0.
\end{cases}
\]
The object $E$ is called \emph{$\overline{\beta}$-(semi)stable} if $E$ is (semi)stable in a neighborhood of $(0, \overline{\beta}(E))$.
\end{defn}
A straightforward computation shows that $\mathop{\mathrm{ch}}\nolimits_2^{\overline{\beta}}(E) = 0$.
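Indeed, if $\mathop{\mathrm{ch}}\nolimits_0(E) \neq 0$, then substituting $\overline{\beta}(E)$ and writing $\Delta = \mathop{\mathrm{ch}}\nolimits_1^2 - 2 \mathop{\mathrm{ch}}\nolimits_0 \mathop{\mathrm{ch}}\nolimits_2$ yields
\[
\mathop{\mathrm{ch}}\nolimits_2^{\overline{\beta}}(E) = \mathop{\mathrm{ch}}\nolimits_2 - \overline{\beta} \mathop{\mathrm{ch}}\nolimits_1 + \frac{\overline{\beta}^2}{2} \mathop{\mathrm{ch}}\nolimits_0 = \frac{2 \mathop{\mathrm{ch}}\nolimits_0 \mathop{\mathrm{ch}}\nolimits_2 - \mathop{\mathrm{ch}}\nolimits_1^2 + \Delta}{2 \mathop{\mathrm{ch}}\nolimits_0} = 0,
\]
while for $\mathop{\mathrm{ch}}\nolimits_0(E) = 0$ the claim is immediate from the definition.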
\begin{lem}[\cite{BMS14:abelian_threefolds}]
Proving the inequality in Theorem \ref{thm:construction_threefold} can be reduced to $\overline{\beta}$-stable objects. Moreover, in that case the inequality reduces to $\mathop{\mathrm{ch}}\nolimits_3^{\overline{\beta}}(E) \leq 0$.
\begin{proof}
Let $E \in \mathop{\mathrm{Coh}}\nolimits^{\beta_0}(\ensuremath{\mathbb{P}}^3)$ be a $\nu_{\alpha_0, \beta_0}$-stable object with $\mathop{\mathrm{ch}}\nolimits(E) = v$. If $(\alpha_0, \beta_0)$ is on the unique numerical vertical wall for $v$, then $\mathop{\mathrm{ch}}\nolimits_1^{\beta_0}(E) = 0$ and therefore,
\[
Q_{\alpha_0, \beta_0}(E) = \alpha_0^2 \Delta(E) + 4(\mathop{\mathrm{ch}}\nolimits_2^{\beta_0}(E))^2 \geq 0.
\]
Hence, we may assume that $(\alpha_0, \beta_0)$ lies on a unique numerical semicircular wall $W$ with respect to $v$. One computes that there are $x,y \in \ensuremath{\mathbb{R}}$ such that
\begin{align*}
Q_{\alpha, \beta}(E) \geq 0 &\Leftrightarrow \Delta(E) \alpha^2 + \Delta(E) \beta^2 + x \beta + y \alpha \geq 0, \\
Q_{\alpha, \beta}(E) = 0 &\Leftrightarrow \nu_{\alpha, \beta}(E) = \nu_{\alpha, \beta}\left(\mathop{\mathrm{ch}}\nolimits_1(E), 2\mathop{\mathrm{ch}}\nolimits_2(E), 3\mathop{\mathrm{ch}}\nolimits_3(E), 0 \right).
\end{align*}
In particular, the equation $Q_{\alpha, \beta}(E) \geq 0$ defines the complement of a semi-disc with center on the $\beta$-axis or a quadrant to one side of a vertical line in case $\Delta(E) = 0$. Moreover, $Q_{\alpha, \beta}(E) = 0$ is a numerical wall with respect to $v$.
We will proceed by induction on $\Delta(E)$ to show that it is enough to prove the inequality for $\overline{\beta}$-stable objects. Assume $\Delta(E) = 0$. Then $Q_{\alpha_0, \beta_0}(E) \geq 0$ is equivalent to $Q_{0, \overline{\beta}}(E) \geq 0$. If $E$ were not $\overline{\beta}$-stable, it would have to destabilize along a wall between $W$ and $(0, \overline{\beta}(E))$. By part (3) of Lemma \ref{lem:convex_cone} with $Q = \Delta$, all stable factors of $E$ along that wall have $\Delta = 0$. By part (4) of the same lemma this can only happen if at least one of the stable factors satisfies $\mathop{\mathrm{ch}}\nolimits_{\leq 2} = (0,0,0)$, which only happens at the numerical vertical wall.
Assume $\Delta(E) > 0$. If $E$ is $\overline{\beta}$-stable, then it is enough to show $Q_{0, \overline{\beta}(E)}(E) \geq 0$. Assume $E$ is destabilized along a wall between $W$ and $(0, \overline{\beta}(E))$ and let $F_1, \ldots, F_n$ be the stable factors of $E$ along this wall. By Lemma \ref{lem:convex_cone} (3) we have $\Delta(F_i) < \Delta(E)$ for all $i = 1, \ldots, n$. We can then use Lemma \ref{lem:convex_cone} (3) with $Q = Q_{\alpha, \beta}$ to finish the proof by induction.
\end{proof}
\end{lem}
The idea for the following proof is due to \cite{Li15:conjecture_fano_threefold}.
\begin{thm}
\label{thm:conj_p3}
For all tilt stable objects $E \in \mathop{\mathrm{Coh}}\nolimits^{\beta}(\ensuremath{\mathbb{P}}^3)$ we have
\[Q_{\alpha, \beta}(E) = \alpha^2 \Delta(E) + 4(\mathop{\mathrm{ch}}\nolimits_2^{\beta}(E))^2 - 6 \mathop{\mathrm{ch}}\nolimits_1^{\beta}(E) \mathop{\mathrm{ch}}\nolimits_3^{\beta}(E) \geq 0.\]
\begin{proof}
As observed in Remark \ref{rmk:ExtensionToTiltStability3folds}, tilt semistable objects satisfy $\Delta \geq 0$.
Hence, since $\Delta(\ensuremath{\mathcal O}(n)) = 0$, line bundles are stable everywhere in tilt stability in the case $X = \ensuremath{\mathbb{P}}^3$.
Let $E \in \mathop{\mathrm{Coh}}\nolimits^{\overline{\beta}}(X)$ be a $\overline{\beta}$-stable object. By tensoring with line bundles, we can assume $\overline{\beta} \in [-1,0)$.
By assumption we have $\nu_{0, \overline{\beta}}(E) = 0 < \nu_{0, \overline{\beta}}(\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^3})$, which implies $\mathop{\mathrm{Hom}}\nolimits(\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^3}, E) = 0$. Moreover, the same argument together with Serre duality shows
\[
\mathop{\mathrm{Ext}}\nolimits^2(\ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^3}, E) = \mathop{\mathrm{Ext}}\nolimits^1(E, \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^3}(-4)) = \mathop{\mathrm{Hom}}\nolimits(E, \ensuremath{\mathcal O}_{\ensuremath{\mathbb{P}}^3}(-4)[1]) = 0.
\]
Therefore,
\begin{align*}
0 \geq \chi(\ensuremath{\mathcal O}, E) &= \mathop{\mathrm{ch}}\nolimits_3(E) + 2 \mathop{\mathrm{ch}}\nolimits_2(E) + \frac{11}{6} \mathop{\mathrm{ch}}\nolimits_1(E) + \mathop{\mathrm{ch}}\nolimits_0(E) \\
&= \mathop{\mathrm{ch}}\nolimits_3^{\overline{\beta}}(E) + (\overline{\beta} + 2) \mathop{\mathrm{ch}}\nolimits_2^{\overline{\beta}}(E) + \frac{1}{6}(3\overline{\beta}^2 + 12 \overline{\beta} + 11) \mathop{\mathrm{ch}}\nolimits_1^{\overline{\beta}}(E) \\
& \ \ \ + \frac{1}{6}(\overline{\beta}^3 + 6\overline{\beta}^2 + 11\overline{\beta} + 6) \mathop{\mathrm{ch}}\nolimits_0^{\overline{\beta}}(E) \\
& = \mathop{\mathrm{ch}}\nolimits_3^{\overline{\beta}}(E) + \frac{1}{6}(3\overline{\beta}^2 + 12 \overline{\beta} + 11) \mathop{\mathrm{ch}}\nolimits_1^{\overline{\beta}}(E) + \frac{1}{6}(\overline{\beta}^3 + 6\overline{\beta}^2 + 11\overline{\beta} + 6) \mathop{\mathrm{ch}}\nolimits_0^{\overline{\beta}}(E).
\end{align*}
By construction of $\mathop{\mathrm{Coh}}\nolimits^{\overline{\beta}}(X)$, we have $\mathop{\mathrm{ch}}\nolimits_1^{\overline{\beta}}(E) \geq 0$. Due to $\overline{\beta} \in [-1,0)$ we also have $3\overline{\beta}^2 + 12 \overline{\beta} + 11 \geq 0$ and $\overline{\beta}^3 + 6\overline{\beta}^2 + 11\overline{\beta} + 6 = (\overline{\beta}+1)(\overline{\beta}+2)(\overline{\beta}+3) \geq 0$. If $\mathop{\mathrm{ch}}\nolimits_0^{\overline{\beta}}(E) \geq 0$ or $\overline{\beta} = -1$, this finishes the proof. If $\mathop{\mathrm{ch}}\nolimits_0^{\overline{\beta}}(E) < 0$ and $\overline{\beta} \neq -1$, the same type of argument works with $\chi(\ensuremath{\mathcal O}(3), E) \leq 0$.
\end{proof}
\end{thm}
\subsection{Castelnuovo's genus bound}
The next exercise outlines an application of this inequality to a proof of Castelnuovo's classical theorem on the genus of non-degenerate curves in $\ensuremath{\mathbb{P}}^3$.
\begin{thm}[\cite{Cas37:inequality}]
Let $C \subset \ensuremath{\mathbb{P}}^3$ be an integral curve of degree $d \geq 3$ and genus $g$. If $C$ is not contained in a plane, then
\[
g \leq \frac{1}{4}d^2 - d + 1.
\]
\end{thm}
\begin{exercise}
Assume there exists an integral curve $C \subset \ensuremath{\mathbb{P}}^3$ of degree $d \geq 3$ and arithmetic genus $g > \tfrac{1}{4}d^2 - d + 1$ which is not contained in a plane.
\begin{enumerate}
\item Compute that the ideal sheaf $\ensuremath{\mathcal I}_C$ satisfies
\[
\mathop{\mathrm{ch}}\nolimits(\ensuremath{\mathcal I}_C) = (1,0,-d,2d + g - 1).
\]
\emph{Hint: Grothendieck-Riemann-Roch!}
\item Show that $\ensuremath{\mathcal I}_C$ has to be destabilized by an exact sequence
\[
0 \to F \to \ensuremath{\mathcal I}_C \to G \to 0,
\]
where $\mathop{\mathrm{ch}}\nolimits_0(F) = 1$. \emph{Hint: Use Lemma \ref{lem:higherRankBound} to show that any wall of higher rank has to be contained inside $Q_{\alpha, \beta}(\ensuremath{\mathcal I}_C) < 0$.}
\item Show that $\ensuremath{\mathcal I}_C$ has to be destabilized by a sequence of the form
\[
0 \to \ensuremath{\mathcal O}(-a) \to \ensuremath{\mathcal I}_C \to G \to 0,
\]
where $a > 0$ is a positive integer. \emph{Hint: Show that if the subobject is an actual ideal sheaf and not a line bundle, then integrality of $C$ implies that $\ensuremath{\mathcal I}_C$ is destabilized by a line bundle giving a bigger or equal wall.}
\item Show that the wall is inside $Q_{\alpha, \beta}(\ensuremath{\mathcal I}_C) < 0$ if $a > 2$. Moreover, if $d = 3$ or $d = 4$, then the same holds additionally for $a = 2$.
\item Derive a contradiction in the cases $d = 3$ and $d = 4$.
\item Let $E \in \mathop{\mathrm{Coh}}\nolimits^{\beta}(\ensuremath{\mathbb{P}}^3)$ be a tilt semistable object for some $\alpha > 0$ and $\beta \in \ensuremath{\mathbb{R}}$ such that $\mathop{\mathrm{ch}}\nolimits(E) = (0,2,d,e)$ for some $d$, $e$. Show that the inequality
\[
e \leq \frac{d^2}{4} + \frac{1}{3}
\]
holds. \emph{Hint: Use Theorem \ref{thm:conj_p3} and Lemma \ref{lem:higherRankBound}.}
\item Obtain a contradiction via the quotient $G$ in $0 \to \ensuremath{\mathcal O}(-2) \to \ensuremath{\mathcal I}_C \to G \to 0$.
\end{enumerate}
\end{exercise}
\section{Introduction}
\label{sec:intro}
The problem of 2D DOA estimation, as an instantiation of multivariate spectral analysis, arises in many applications such as azimuth-elevation angle estimation using 2D arrays and transceiver design in MIMO communications. Despite the large body of literature \cite{hlv}, existing techniques can be quite complex to implement in emerging large-scale antenna systems such as massive MIMO, where super-resolution 2D angle estimation needs to be performed with low computing time from a small number of measurements.
Subspace methods such as MUSIC and ESPRIT are popular for super-resolution 2D DOA estimation \cite{haardt19952d, roy1989esprit, hua1993pencil}. However, they all hinge on the sample covariance, which requires the number of snapshots to be larger than the antenna size. Moreover, they are sensitive to signal correlation and noise, and may fail for coherent sources \cite{hlv}.
Advances in compressed sensing (CS) suggest exploiting source sparsity for frequency or angular estimation \cite{donoho2006, candes2006stable}. CS enables DOA estimation even from a single snapshot of measurements, regardless of source correlation. However, the angular estimates are confined to a finite-resolution grid, and the accuracy is sensitive to off-grid source mismatch \cite{chi2011sensitivity}.
Recently, a new line of gridless CS for spectral analysis is developed via atomic norm minimization (ANM) in the form of semi-definite programming (SDP) \cite{tang2013compressed, candes2014towards}. It is a structure-based optimization approach where the Vandermonde structure of the array manifold is captured in the SDP via a Toeplitz matrix. It has been extended to the 2D case by exploiting a two-level Toeplitz structure \cite{chi2015compressive, yang2016vandermonde}, which enjoys the benefits of the ANM approach in terms of super-resolution from single-snapshot measurements, and resilience to source correlation.
However, the computational load is heavy, which becomes near intractable for large-scale antenna systems.
This paper presents a new formulation of ANM by introducing a new atom set that naturally decouples a two-level Toeplitz matrix into two Toeplitz matrices in one dimension. Accordingly, a new SDP formulation is developed for the decoupled ANM (D-ANM), which has a much reduced problem size and hence markedly improved computational efficiency. The time complexity is several orders of magnitude lower than that based on two-level Toeplitz, while other benefits of ANM are preserved in terms of accuracy, resolution and use of a small number of snapshots. Analytic proof and simulations are presented in the paper to validate the proposed D-ANM for low-complexity 2D DOA estimation.
\section{RELATION TO PRIOR WORK}
\label{sec:rel}
\vspace{-0.05in}
This work belongs to the ANM-based optimization approach to spectral estimation \cite{candes2014towards, tang2013compressed, bhaskar2013atomic, yang2015gridless}, which has emerged as an effective alternative to traditional statistics-based approaches when the number of measurements is small. While SDP formulations for ANM are mostly done for 1D spectral estimation, this work is closely related to the recent 2D results in \cite{chi2015compressive, yang2016vandermonde}. Therein, the 2D array manifold matrix is vectorized into a long vector, whose structure is expressed in an SDP formula through a two-level Toeplitz matrix, followed by two-level Vandermonde decomposition. For an $N\times M$ rectangular antenna array, the size of the SDP is $(NM+1)$ \cite{chi2015compressive, yang2016vandermonde}. This work presents a new SDP formula for decoupled ANM (D-ANM) with a much smaller size of $(N+M)$, which drastically reduces the run time. The D-ANM not only avoids the cumbersome vectorization step in \cite{chi2015compressive, yang2016vandermonde}, but also enables the use of simple one-level Vandermonde decomposition and automatic angle pairing. Hence, both the SDP formulation for ANM and the ensuing Vandermonde decomposition for 2D DOA estimation are different from those in \cite{chi2015compressive, yang2016vandermonde}.
\section{Signal Model}
\label{sec:bak}
Consider $K$ far-field narrowband waveforms $\mathbf{s}^\star=(s_1^\star, \dots, s_K^\star)^\mathrm{T}$ that impinge on an $N\times M$ uniform rectangular array with $N$ and $M$ elements along the $x$- and $y$-directions, respectively. The corresponding DOAs are denoted by $\boldsymbol{\theta}_x^\star=(\theta_{x, 1}^\star, \dots, \theta_{x, K}^\star)$ and $\boldsymbol{\theta}_y^\star=(\theta_{y, 1}^\star, \dots, \theta_{y, K}^\star)$, respectively. The noise-free baseband model for the array output matrix is
\begin{equation}
\label{eq:2.1}
\mathbf{X}(t) = \sum_{k=1}^{K} s_k^\star(t) \mathbf{a}_N(\theta_{x, k}^\star)\mathbf{a}_M^\mathrm{H}(\theta_{y, k}^\star)
\end{equation}
where $\mathbf{a}_N(\theta_x)$ of length $N$ is the 1D array response vector in the $x$-direction with a Vandermonde structure in $\theta_x$ \cite{hlv}, and $\mathbf{a}_M(\theta_y)$ of length $M$ is similarly defined.
For clear exposition, we consider a single snapshot, thus dropping time $t$. Letting $\mathbf{A}_N(\boldsymbol{\theta}_x^\star)=[\mathbf{a}_N(\theta_{x, 1}^\star), \dots, \mathbf{a}_N(\theta_{x, K}^\star)]$, $\mathbf{A}_M(\boldsymbol{\theta}_y^\star)=[\mathbf{a}_M(\theta_{y, 1}^\star), \dots, \mathbf{a}_M(\theta_{y, K}^\star)]$, and $\mathbf{S}^\star=\mathrm{diag}(\mathbf{s}^\star)$, (\ref{eq:2.1}) can be concisely written as
\begin{equation}
\label{eq:2.4}
\mathbf{X}=\mathbf{A}_N(\boldsymbol{\theta}_x^\star)\mathbf{S}^\star\mathbf{A}_M^\mathrm{H}(\boldsymbol{\theta}_y^\star).
\end{equation}
The goal of 2D DOA estimation is to recover $\boldsymbol{\theta}_x^\star$ and $\boldsymbol{\theta}_y^\star$ from observations of $\mathbf{X}$. Here we focus on the noise-free signal structure to formulate an optimization approach.
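For concreteness, the following Python sketch generates a synthetic data matrix according to the model (\ref{eq:2.4}); the half-wavelength phase convention in {\tt steer} is our assumption, since the text does not fix one explicitly.
\begin{verbatim}
import numpy as np

def steer(n, theta):
    # 1D array response vector; assumed ULA with half-wavelength spacing
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

def data_matrix(N, M, theta_x, theta_y, s):
    # noise-free single-snapshot model X = A_N(theta_x) diag(s) A_M(theta_y)^H
    A_N = np.stack([steer(N, t) for t in theta_x], axis=1)
    A_M = np.stack([steer(M, t) for t in theta_y], axis=1)
    return A_N @ np.diag(s) @ A_M.conj().T
\end{verbatim}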
\section{The Atomic Norm Approach}
This section reviews the atomic norm minimization (ANM) approach to 2D DOA estimation in \cite{chi2015compressive, yang2016vandermonde}.
Using the Kronecker product $\otimes$, the signal $\mathbf{X}$ can be vectorized as \cite{chi2015compressive}
\begin{equation}
\label{eq:2.5}
\mathbf{x}=\mathrm{vec}(\mathbf{X})=\sum_{k=1}^{K} s_k^\star \mathbf{a}_M^\ast({\theta}_{y, k}^\star)\otimes\mathbf{a}_N({\theta}_{x, k}^\star)=\sum_{k=1}^{K} s_k \mathbf{a}(\boldsymbol{\theta}_k^\star)
\end{equation}
where $\boldsymbol{\theta}=(\theta_{x}, \theta_{y})$, and $\mathbf{a}(\boldsymbol{\theta})=\mathbf{a}_M^\ast({\theta}_{y})\otimes\mathbf{a}_N(\theta_{x})$ is an extended array response vector of length $NM$.
The atom set $\mathcal{A}_V$ is defined as
\begin{equation}
\label{eq:2.atom}
\mathcal{A}_V=\{\mathbf{a}(\boldsymbol{\theta}), \quad \boldsymbol{\theta}\in[0, 2\pi]\times[0, 2\pi]\}.
\end{equation}
Let $\mathbf{T}_\mathrm{2D}(\mathbf{u})$, defined by its first row $\mathbf{u}$ of length $NM$, denote a two-level Hermitian Toeplitz matrix constructed from the two-level Vandermonde structure of $\mathbf{a}(\boldsymbol{\theta})$ \cite{chi2015compressive}.
Then the atomic norm of $\mathbf{x}$ can be calculated via SDP:
\begin{eqnarray}
\|\mathbf{x}\|_{\mathcal{A}_V}&\hspace{-0.08in}=&\hspace{-0.08in}\inf \left\{\sum_k |s_k| \left| \mathbf{x}=\sum_k s_k \mathbf{a}(\boldsymbol{\theta}_k), \ \ \mathbf{a}(\boldsymbol{\theta})\in\mathcal{A}_V \!\! \right. \right\} \nonumber \\
&\hspace{-0.08in}=&\hspace{-0.08in}\min_{\mathbf{u}, v} \ \left\{\frac{1}{2}\left(v+\mathrm{trace}\big(\mathbf{T}_\mathrm{2D}(\mathbf{u})\big)\right)\right\} \nonumber \\
&& \mathrm{s.t.}\quad \left(\begin{array}{cc}
v & \mathbf{x}^\mathrm{H} \\
\mathbf{x} & \mathbf{T}_\mathrm{2D}(\mathbf{u})
\end{array}\right)\succeq \mathbf{0}. \label{eq:2.6}
\end{eqnarray}
It has been shown that if $\mathbf{x}$ is composed of only a few adequately-separated sources, then $(\boldsymbol{\theta}_x^\star, \boldsymbol{\theta}_y^\star)$ can be recovered by computing $\|\mathbf{x}\|_{\mathcal{A}_V}$ from (noiseless, noisy or partial) observations of $\mathbf{x}$ \cite{chi2015compressive}. The SDP in (\ref{eq:2.6}) yields the two-level Toeplitz matrix $\mathbf{T}_\mathrm{2D}(\mathbf{u})$, which contains the angular information from both dimensions and can be processed via two-level Vandermonde decomposition to yield $(\boldsymbol{\theta}_x^\star, \boldsymbol{\theta}_y^\star)$ \cite{yang2016vandermonde}.
The main issue of the vectorization-based ANM in (\ref{eq:2.6}) is its high complexity. Due to the vectorization in (\ref{eq:2.5}), the matrix size in the SDP constraint is $(NM+1) \times (NM+1)$, which incurs high complexity in both computation and memory as $N$ and $M$ become large. We tried simulations on a PC for $N=M=32$, in which case the SDP calculation could not finish within two days. For large-scale antenna systems, an efficient implementation of the ANM principle is therefore well motivated.
\section{2D DOA estimation via Decoupled ANM}
\label{sec:main}
This section presents the main results, namely a decoupled ANM formulation for efficient 2D DOA estimation.
\subsection{Decoupled ANM and its SDP reformulation}
Recall the signal model in (\ref{eq:2.1}) and (\ref{eq:2.4}). As an alternative to the vectorized atom set in (\ref{eq:2.atom}), we adopt a new atom set $\mathcal{A}_M$ as
\begin{equation}
\label{eq:3.1}
\begin{split}
\mathcal{A}_M&=\left\{\mathbf{a}_N(\theta_x)\mathbf{a}_M^\mathrm{H}(\theta_y), \quad \theta_x\in[0, 2\pi], \theta_y\in[0, 2\pi]\right\} \\
&=\left\{\mathbf{A}(\boldsymbol{\theta}), \quad \boldsymbol{\theta}\in[0, 2\pi]\times[0, 2\pi]\right\}.
\end{split}
\end{equation}
Our approach to find $(\boldsymbol{\theta}_x^\star, \boldsymbol{\theta}_y^\star)$ from $\mathbf{X}$ is to find the following atomic norm:
\begin{equation}
\label{eq:3.2}
\|\mathbf{X}\|_{\mathcal{A}_M}\!=\inf \left\{\!\sum_k |s_k| \left| \mathbf{X}=\!\sum_k s_k \mathbf{A}(\boldsymbol{\theta}_k), \ \ \mathbf{A}(\boldsymbol{\theta})\in\mathcal{A}_M \!\! \right. \right\}.
\end{equation}
This is an infinite-dimensional optimization problem over all feasible $\boldsymbol{\theta}$.
Reformulating (\ref{eq:3.2}) as an SDP leads to our main results.
\begin{theorem}
\label{th:1}
Consider an $N\times M$ data matrix $\mathbf{X}$ given by
\begin{equation}
\label{eq:u2}
\mathbf{X}=\textstyle \sum_{k=1}^{K} s_k^\star \mathbf{A}(\boldsymbol{\theta}_k^\star).
\end{equation}
Define the minimal angle distances as $\Delta_{\min, x}=\min_{i\neq j} |\sin \theta_{x, i}^\star-\sin \theta_{x, j}^\star|$ and $\Delta_{\min, y}=\min_{i\neq j} |\sin \theta_{y, i}^\star-\sin \theta_{y, j}^\star|$,
which are wrapped distances on the unit circle. If they satisfy
\begin{equation}
\Delta_{\min, x}\geq\frac{1.19}{\lfloor(N-1)/4\rfloor}, \mathrm{~~~}\Delta_{\min, y}\geq\frac{1.19}{\lfloor(M-1)/4\rfloor},
\end{equation}
then it is guaranteed that (\ref{eq:u2}) is the optimal solution to (\ref{eq:3.2}). Further, it can be efficiently computed via the following SDP:
\begin{eqnarray}
\|\mathbf{X}\|_{\mathcal{A}_M}&\hspace{-0.15in}=&\hspace{-0.15in}\min_{\mathbf{u}_x, \mathbf{u}_y}\! \!\left\{\! \frac{1}{2\sqrt{NM}}\bigg(\!\mathrm{trace}\big(\mathbf{T}(\mathbf{u}_x)\big)\!+\!\mathrm{trace}\big(\mathbf{T}(\mathbf{u}_y)\big)\!\!\bigg)\!\right\} \nonumber \\
&&\mathrm{s.t.}\quad \left(\begin{array}{cc}
\mathbf{T}(\mathbf{u}_y) & \mathbf{X}^\mathrm{H} \\
\mathbf{X} & \mathbf{T}(\mathbf{u}_x)
\end{array}\right)\succeq \mathbf{0}\label{eq:3.3}
\end{eqnarray}
where $\mathbf{T}(\mathbf{u}_x)$ and $\mathbf{T}(\mathbf{u}_y)$ are one-level Hermitian Toeplitz matrices defined by the first rows $\mathbf{u}_x$ and $\mathbf{u}_y$ respectively.
\end{theorem}
\noindent {\bf \em Remark 1: Angular information.} At the optimal $\hat{\mathbf{u}}_x$, the $N\times N$ Toeplitz matrix $\mathbf{T}(\hat{\mathbf{u}}_x)$ reveals $\boldsymbol{\theta}_x^\star$ via
\begin{equation} \label{eq:Tx}
\mathbf{T}(\hat{\mathbf{u}}_x)=\mathbf{A}_N(\boldsymbol{\theta}_x^\star) \mathbf{D}_x \mathbf{A}_N^{\mathrm{H}}(\boldsymbol{\theta}_x^\star), \quad \mathbf{D}_x \succeq \mathbf{0} \mathrm{~is~diagonal}.
\end{equation}
Similarly, $\boldsymbol{\theta}_y^\star$ is coded in the $M\times M$ matrix $\mathbf{T}(\hat{\mathbf{u}}_y)$.
Hence, after solving the SDP in (\ref{eq:3.3}), the 2D DOA information can be acquired via Vandermonde decomposition of these one-level Toeplitz matrices, which is much simpler than the two-level Vandermonde decomposition needed in \cite{chi2015compressive, yang2016vandermonde}. Many mature techniques are available for this task, such as subspace methods, the matrix pencil method \cite{yang2016vandermonde}, and Prony's method \cite{yang2015gridless}.
\noindent{\bf \em Remark 2: Decoupling.} The main benefit of the new result (\ref{eq:3.3}) is its low complexity via a decoupled formulation for ANM. Instead of coupling the 2D DOA via vectorization to form a constraint of size $(NM+1)\times (NM+1)$ in (\ref{eq:2.6}), (\ref{eq:3.3}) decouples the angular information into two one-level Toeplitz matrices, which markedly reduces the constraint size in SDP to $(N+M)\times(N+M)$.
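The decoupled SDP (\ref{eq:3.3}) is straightforward to prototype with an off-the-shelf convex solver. The following Python sketch uses the {\tt cvxpy} package; the helper names and the entry-wise construction of the Hermitian Toeplitz matrices are ours, and the sketch is meant to illustrate the problem structure rather than an optimized implementation.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def herm_toeplitz(u):
    # Hermitian Toeplitz matrix whose first row is the vector u
    n = u.shape[0]
    rows = [[u[j - i] if j >= i else cp.conj(u[i - j])
             for j in range(n)] for i in range(n)]
    return cp.bmat(rows)

def atomic_norm_decoupled(X):
    # computes ||X||_{A_M} via the decoupled SDP; X is N x M complex
    N, M = X.shape
    ux = cp.Variable(N, complex=True)    # first row of T(u_x)
    uy = cp.Variable(M, complex=True)    # first row of T(u_y)
    Tx, Ty = herm_toeplitz(ux), herm_toeplitz(uy)
    psd = cp.bmat([[Ty, X.conj().T], [X, Tx]])
    psd = (psd + psd.H) / 2              # enforce Hermitian symmetry
    obj = cp.real(cp.trace(Tx) + cp.trace(Ty)) / (2 * np.sqrt(N * M))
    prob = cp.Problem(cp.Minimize(obj), [psd >> 0])
    prob.solve(solver=cp.SCS)
    return prob.value, Tx.value, Ty.value
\end{verbatim}
The recovered $\mathbf{T}(\hat{\mathbf{u}}_x)$ and $\mathbf{T}(\hat{\mathbf{u}}_y)$ can then be passed to any standard 1D Vandermonde decomposition routine.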
\subsection{Sketch of proof}
\label{sec:prf}
The proof of Theorem \ref{th:1} will be detailed in a journal version; due to the page limit, we only outline it here.
\subsubsection{Uniqueness of atomic norm via dual polynomial}
Define the dual norm of $\|\cdot\|_{\mathcal{A}_M}$ for $\forall\mathbf{Q}\in\mathbb{C}^{N\times M}$ as
\begin{equation}
\|\mathbf{Q}\|_{\mathcal{A}_M}^\ast=\sup_{\|\mathbf{X}\|_{\mathcal{A}_M}\leq 1} \langle\mathbf{Q}, \mathbf{X}\rangle_\mathcal{R}
\end{equation}
where $\langle\cdot, \cdot\rangle$ denotes the Frobenius inner product, and $\langle\cdot, \cdot\rangle_\mathcal{R}=\Re\left\{\langle\cdot, \cdot\rangle\right\}$ keeps the real part.
Following standard Lagrangian analysis as in \cite{tang2013compressed, boyd2004convex},
one can show that if there exists a dual polynomial
\begin{equation}
Q(\boldsymbol{\theta}) =\langle \mathbf{Q}, \mathbf{A}(\boldsymbol{\theta})\rangle, \quad \mathbf{Q}\in\mathbb{C}^{N\times M}, \ \mathbf{A}(\boldsymbol{\theta})\in\mathcal{A}_M
\end{equation}
satisfying the conditions (bounded interpolation property)
\begin{equation}
\label{eq:c1+2}
\begin{split}
Q(\boldsymbol{\theta}_k^\star)&=\mathrm{sign}(s_k^\star), \quad \forall \boldsymbol{\theta}_k^\star\in\Omega;
\\
|Q(\boldsymbol{\theta})|&<1, \quad \quad \quad \quad \forall \boldsymbol{\theta}\notin\Omega,
\end{split}
\end{equation}
then it is guaranteed that the optimal solution to (\ref{eq:3.2}) is unique, where $\Omega=\{\boldsymbol{\theta}_1^\star, \dots, \boldsymbol{\theta}_K^\star\}$ collects all supports of $\mathbf{X}$.
The rest of the proof consists of finding a dual polynomial that satisfies the above conditions, following a similar procedure as in \cite{chi2015compressive}.
\subsubsection{Equivalence of atomic norm to SDP with decoupling}
Denote the optimal value of (\ref{eq:3.3}) as $\mathrm{SDP}(\mathbf{X})$. On one hand, for an arbitrary atomic decomposition $\mathbf{X}=\sum_k s_k \mathbf{A}(\boldsymbol{\theta}_k)$, it is easy to verify the semi-definite constraint in (\ref{eq:3.3}) by letting $\mathbf{T}(\mathbf{u}_x)=\sum_k |s_k| \sqrt{\frac{M}{N}} \mathbf{a}_N(\theta_{x, k}) \mathbf{a}_N^\mathrm{H}(\theta_{x, k})$ and $\mathbf{T}(\mathbf{u}_y)=\sum_k |s_k| \sqrt{\frac{N}{M}} \mathbf{a}_M(\theta_{y, k}) \mathbf{a}_M^\mathrm{H}(\theta_{y, k})$; hence, $\mathrm{SDP}(\mathbf{X})\leq\|\mathbf{X}\|_{\mathcal{A}_M}$.
On the other hand, the semi-definite condition implies that $\mathbf{X}$ lies in the column space of $\mathbf{T}(\mathbf{u}_x)$ and the row space of $\mathbf{T}(\mathbf{u}_y)$. Similar to \cite{yang2014exact}, using the Schur complement lemma and the arithmetic-geometric mean inequality, one can verify $\mathrm{SDP}(\mathbf{X})\geq\|\mathbf{X}\|_{\mathcal{A}_M}$.
\subsection{2D DOA Estimation based on decoupled ANM}
In practice, $\mathbf{X}$ is often observed indirectly. We consider a linear observation model in the presence of an additive noise $\mathbf{W}$:
\begin{equation}
\label{eq:5.3.1}
\mathbf{Y}= \mathcal{L}(\mathbf{X}) + \mathbf{W}
\end{equation}
where $\mathcal{L}(\cdot)$ represents linear mixing, with possible missing entries and down-sampling. Given $\mathbf{Y}$, and exploiting the signal structure (\ref{eq:3.3}), $\mathbf{X}$ can be estimated via regularization
\begin{equation}
\min_\mathbf{X} \quad \lambda\|\mathbf{X}\|_{\mathcal{A}_M}+\|\mathbf{Y}-\mathcal{L}(\mathbf{X})\|_\mathrm{F}^2
\end{equation}
where $\lambda\geq 0$ is a weighting scalar. To find the 2D DOA, it is adequate to find the two desired Toeplitz matrices via
\begin{equation}
\label{eq:5.3.3}
\begin{split}
\min_{\mathbf{u}_x, \mathbf{u}_y, \mathbf{X}} &\quad \frac{\lambda}{2\sqrt{NM}}\bigg(\!\mathrm{trace}\big(\mathbf{T}(\mathbf{u}_x)\big)+\mathrm{trace}\big(\mathbf{T}(\mathbf{u}_y)\big)\!\!\bigg) \\
& \quad +\|\mathbf{Y}-\mathcal{L}(\mathbf{X})\|_\mathrm{F}^2\\
\mathrm{s.t.} & \quad \left(\begin{array}{cc}
\mathbf{T}(\mathbf{u}_y) & \mathbf{X}^\mathrm{H} \\
\mathbf{X} & \mathbf{T}(\mathbf{u}_x)
\end{array}\right)\succeq \mathbf{0}.
\end{split}
\end{equation}
As mentioned in {\bf \em Remark 1}, mature 1D DOA estimators can be employed to obtain the estimates $\hat{\boldsymbol{\theta}}_{x}$ and $\hat{\boldsymbol{\theta}}_y$ from the optimal $\mathbf{T}(\hat{\mathbf{u}}_x)$ and $\mathbf{T}(\hat{\mathbf{u}}_y)$, in a decoupled manner.
As with all 2D DOA estimators, a pairing step is critical to form the $(\hat{\theta}_{x, k}, \hat{\theta}_{y, k})$ pairs, $\forall k$. Since we have $\hat{\mathbf{X}}$ at hand via (\ref{eq:5.3.3}), we develop a simple angle pairing technique as follows (a code sketch is given after the list).
\begin{enumerate}
\itemsep -0.03in
\item Construct the array response matrix $\mathbf{A}_N(\hat{\boldsymbol{\theta}}_x)$ from $\hat{\boldsymbol{\theta}}_x$.
\item Compute $\mathbf{V}_M=\mathbf{D}_x^{-1}\mathbf{A}_N^\dagger(\hat{\boldsymbol{\theta}}_x)\hat{\mathbf{X}}$, where $(\cdot)^\dagger$ denotes pseudo-inverse and $\mathbf{D}_x$ is the diagonal matrix in (\ref{eq:Tx}) obtained from Vandermonde decomposition of $\mathbf{T}(\hat{\mathbf{u}}_x)$. In the noise-free case, $\mathbf{V}_M^\mathrm{H} = [\mathbf{v}_1, \ldots, \mathbf{v}_K]$ is the same as $\mathbf{A}_M(\boldsymbol{\theta}_y^\star)$ up to phase ambiguity and global scaling.
\item Pair up $\hat{\theta}_{y,k}$ with $\hat{\theta}_{x,k_y}$ via maximum correlation:
\[ k_y = \arg \max_{j\in [1, K]} \left| \left\langle \mathbf{a}_M(\hat{\theta}_{y,k}), \mathbf{v}_j \right \rangle \right|, \quad k = 1, \ldots, K. \]
\end{enumerate}
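A direct {\tt numpy} transcription of the three steps above is sketched below; it assumes that the angle estimates $\hat{\boldsymbol{\theta}}_x$, $\hat{\boldsymbol{\theta}}_y$ and the diagonal matrix $\mathbf{D}_x$ have already been obtained (e.g., via the matrix pencil method), and reuses the hypothetical {\tt steer} convention from the model sketch.
\begin{verbatim}
import numpy as np

def steer(n, theta):
    # must match the array response convention of the data model
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

def pair_angles(theta_x, theta_y, Dx, X_hat):
    N, M = X_hat.shape
    # Step 1: array response matrix from the x-angle estimates
    A_N = np.stack([steer(N, t) for t in theta_x], axis=1)
    # Step 2: V_M^H matches A_M(theta_y) up to phase and scaling
    V_M = np.linalg.inv(Dx) @ np.linalg.pinv(A_N) @ X_hat
    V = V_M.conj().T                  # columns v_1, ..., v_K
    # Step 3: pair each y-angle with the best-correlated x-angle
    pairs = []
    for t_y in theta_y:
        k_y = int(np.argmax(np.abs(steer(M, t_y).conj() @ V)))
        pairs.append((theta_x[k_y], t_y))
    return pairs
\end{verbatim}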
\section{Performance Evaluation}
\label{sec:sim}
\subsection{Complexity analysis}
As mentioned in {\bf Remark 2},
in the {\em vectorized ANM} method \cite{chi2015compressive}, the semi-definite constraint in (\ref{eq:2.6}) is of size $(NM+1)\times(NM+1)$. As a result, the SDP step for recovering the two-level Toeplitz has time complexity $\mathcal{O}(N^{3.5}M^{3.5}\log(1/\epsilon))$, where $\epsilon$ is the desired recovery precision \cite{krishnan2005interior}; the ensuing step of two-level Vandermonde decomposition for angle estimation and pairing has time complexity $\mathcal{O}(N^2 M^2 K)$ \cite{yang2016vandermonde}.
In contrast, our {\em decoupled ANM} formulation in (\ref{eq:3.3}) and (\ref{eq:5.3.3}) has a constraint of smaller size $(N+M)\times(N+M)$. The time complexity of the SDP step is $\mathcal{O}((N+M)^{3.5}\log(1/\epsilon))$, and that of the one-level Vandermonde decomposition and pairing step is $\mathcal{O}((N^2 +M^2) K)$. The overall complexity is reduced by an order of $\mathcal{O}(N^{3.5})$ for $N=M$.
Simulations of the run time are performed for a square array with $M=N$ and $K=4$ sources. The run times of the two methods are plotted on a logarithmic scale versus $N$ in Figure \ref{fig:3}. Our method exhibits a substantial advantage in computational efficiency for large-scale arrays: when $M=N=22$, the running time of the vectorized ANM is 733.1842s, while that of the decoupled ANM is only 1.4997s.
\begin{figure}[t]
\centering
\includegraphics[width=.93\columnwidth]{2.eps}
\vspace{-0.1in}
\caption{Computing complexity: run time versus $N$ ($N=M$).}
\label{fig:3}
\end{figure}
\subsection{Recovery accuracy and noise performance}
Monte Carlo simulations are carried out to evaluate the DOA estimation performance of both ANM-based 2D DOA methods, with $M=N=16$ and $K=4$. Following the SDP, the matrix pencil method is used to carry out both the one- and two-level Vandermonde decompositions for DOA estimation \cite{yang2016vandermonde, hua1993pencil}.
Figure \ref{fig:1} depicts the estimated 2D angles at a high signal-to-noise ratio (SNR) of 20 dB. All source angles are accurately recovered, which confirms that both ANM methods correctly capture the inherent signal structures in their formulations.
\begin{figure}[t]
\centering
\includegraphics[width=.93\columnwidth]{1.eps}
\vspace{-0.1in}
\caption{2D DOA (SNR = 20dB).}
\label{fig:1}
\end{figure}
Figure \ref{fig:4} compares the average mean square error (MSE) of the estimates $\sin\boldsymbol{\hat{\theta}}$ versus SNR, with reference to the Cram\'er-Rao bound (CRB) \cite{liu2007multidimensional}. The MSE performance of the proposed decoupled ANM is quite close to that of the vectorized ANM, with both approaching the CRB at high SNR.
\begin{figure}[t]
\centering
\includegraphics[width=.93\columnwidth]{3.eps}
\vspace{-0.1in}
\caption{Noise performance: MSE vs. SNR ($N=M=16$).}
\label{fig:4}
\end{figure}
\section{Conclusion}
\label{sec:con}
This work presents a novel decoupled ANM approach to 2D DOA estimation. By introducing a new atom set and decoupling the angular information onto lower dimension, we have reduced the computational load by several orders of magnitude in array size, while retaining the benefits of ANM in terms of gridless estimation, light measurement requirements and robustness to signal correlation. Automatic angle pairing is also developed. The proposed low-complexity optimization technique can be extended to other high-dimensional spectral estimation problems as well. Future work includes performance analysis in the presence of data compression and noise.
\bibliographystyle{IEEEbib}
\section{Conclusion}
To identify the root causes of performance anomalies of kernel IO, \sys provides
fine-grained performance profiling of kernel IO stacks. \sys tags each IO request
with a {\tt request ID} and traces the {\tt request ID} through IO layers, measuring
the precise latency of each IO layer the request travels through. \sys leverages an existing
kernel instrumentation framework to dynamically install trace points and collects
latency information with only 3--10\% overhead.
\section{Design and Implementation}
\begin{figure}
\center{\includegraphics[width=0.4\textwidth]
{outline2.pdf}}
\caption{\label{fig:outline} \textbf{\sys} Architecture.}
\end{figure}
The goal of \sys is to provide precise and fine-grained latency information of kernel IO subsystems
with minimal performance overhead.
\sys starts tracing on each system call and generates trace data profiling the timing of
kernel IO layers. With the trace data, \sys computes a fine-grained latency breakdown for each IO layer.
To minimize runtime overhead of a target system, \sys leverages an existing lightweight
instrumentation framework~\cite{kprobe,uprobe, tracepoints, ftrace,perf, ebpf, lttng,stap,dtrace}.
The number of trace points directly affects the runtime
overhead of a target system. Kernel instrumentation frameworks have the flexibility
to control the number of trace points, enabling \sys to manage the runtime overhead.
The remainder of this section describes the design of \sys by giving a case study
of the read system call.
\subsection{Architecture}
\sys comprises a front end and a back end system. Figure~\ref{fig:outline} shows
the overall architecture of \sys. The front end is deployed on a target system
and collects tracing data; the back end processes the data obtained from the front end and visualizes it.
\begin{figure}
\center{\includegraphics[width=0.4\textwidth]
{frontend.pdf}}
\caption{\label{fig:frontend} The profile script describes probe points and probe handlers. A probe
handler generally consists of obtaining the {\it rid}, collecting trace data into a log, and writing the log
to the ring buffer.}
\end{figure}
\textbf{Leveraging instrumentation.} Modern operating systems support kernel dynamic instrumentation~\cite{lttng,ebpf,stap,dtrace,ftrace}.
Kernel dynamic instrumentation allows user-defined code to be executed within in-memory kernel code,
either at function boundaries (i.e., entering and exiting a function) or at arbitrary lines (each such location is called a {\it probe point}).
When the kernel execution hits an installed probe point, the instrumentation framework calls
a procedure (called a {\it probe handler}) registered with the framework.
The front end takes advantage of existing kernel dynamic instrumentation frameworks to
install arbitrary probe points into a running kernel~\cite{kprobe, uprobe, tracepoints, ftrace, perf,stap}.
The front end takes a profile script to install probe points.
In Figure~\ref{fig:frontend}, we show an example of the profile script.
The profile script is written in a domain-specific language
describing probe points of each kernel layer as a list of kernel functions, entry and exit
actions defined as a probe handler, and hardware events~\cite{perf} to monitor.
The tracing framework compiles the profile script to a binary and inserts it
into kernel memory using a kernel dynamic instrumentation framework.
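Since the concrete syntax of the profile script is not spelled out in the text, the following purely illustrative Python dictionary models the kind of information such a script carries: per-layer probe points given as kernel function names, entry/exit actions, and hardware events to monitor.
\begin{verbatim}
# Illustrative profile description; function and event names are
# examples only and do not reflect the actual DSL syntax.
READ_PROFILE = {
    "layers": {
        "vfs":    ["vfs_read"],
        "fs":     ["ext4_file_read_iter"],
        "block":  ["submit_bio"],
        "driver": ["scsi_queue_rq"],
    },
    "actions": {"entry": "log(rid, ts, cpu)",
                "exit":  "log(rid, ts, cpu)"},
    "hw_events": ["cycles", "instructions"],   # perf-style counters
}
\end{verbatim}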
\subsection{Front end}
The primary goal of the front end is to install probe points and probe handlers
that collect timing information for each IO layer of each IO request.
The front end installs probe points in execution paths of the system call,
off-CPU events, and interrupt service routines.
\begin{figure}
\center{\includegraphics[width=0.45\textwidth]
{RidCallChain.pdf}}
\caption{\label{fig:RidCallChain} Request ID propagation through function calls.
After the request is queued at the request layer, the SCSI driver keeps checking the queue and handles the queued requests.}
\end{figure}
\noindent \textbf{Tracing system calls.}
Because system calls are the entry points into kernel space,
we first focus on how to trace system calls to obtain the {\it per-request} latency distribution.
An IO system call extends across multiple layers of kernel components.
For example, in Linux, a read system call travels through
VFS and page cache layer, file system layer, block IO layer, and device driver layer.
Each layer has different performance characteristics such as memory copy bandwidth or
slowdown by lock contentions. In addition, concurrent system calls make the analysis more complicated:
without complex analysis of function parameters and matching of thread ids,
it is hard to distinguish which system call is executing, and such analysis incurs
non-negligible overhead in IO-intensive workloads.
To provide lightweight and fine-grained tracing, \sys associates a {\it request id}
(or {\it rid}) with each IO system call to trace. A {\it rid} is a monotonically
increasing timestamp that serves as a unique identifier of an IO system call. To track
individual IO system calls across the layers, \sys propagates the assigned {\it rid} for each read system call
to lower layers of the kernel as depicted in Figure~\ref{fig:RidCallChain}. The value of {\it rid} is initially stored
in {\tt struct file} (VFS layer), transferred to {\tt struct kiocb} (memory management layer)
and the subsequent layers. To implement the {\it rid} propagation, we slightly modify kernel data structures
to hold the {\it rid} and add code to maintain the {\it rid} value when a layer is switched.
We use a kernel instrumentation framework to change 29 lines of code in the Linux kernel for tracing read IO
(7 lines for {\it rid} in struct and 22 lines for {\it rid} propagation).
This minimizes performance and space overhead when tracing a large volume of IO requests.
\sys can work with any instrumentation framework that supports on-the-fly
modification of kernel code.
\noindent \textbf{Tracing off-CPU events.} In addition to system calls, \sys supports profiling off-CPU events.
Some IO system calls execute part of their code paths on a different CPU,
which makes it hard to identify the origin of an IO request.
For example, read or write system calls make a block IO request
to a kernel IO thread and wait until the kernel IO thread completes the IO request.
The off-CPU execution path must be added to the total latency of the system calls.
To associate a system call and off-CPU execution, \sys transfers the {\it rid} to
off-CPU kernel functions.
Generally, there is a shared data structure for the off-CPU handler (e.g., {\tt struct bio}
and {\tt struct request} in Figure~\ref{fig:RidCallChain}).
\sys installs probe points on off-CPU kernel events to profile and
delivers {\it rid} to an off-CPU probe point via a data structure used for a parameter.
\noindent \textbf{Tracing kernel activities.} Tracing only the execution paths of system calls
leads to inaccurate profile results because kernel execution can be interrupted by
various external events. For example, interrupt handling during a read
system call delays a layer. If probe points were installed only at the function boundaries
of a system call execution path, this delay would not be accounted for.
To sort out the delay made by the kernel activities, \sys also installs probe points
on interrupt handling routines, schedule in and out handlers, and the wakeup points
of background kernel threads, recording occurences of the event and its timing
information.
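The following sketch shows the accounting idea on concrete numbers: an interrupt that lands inside a traced function is timed by its own probe pair and subtracted from the function's raw latency. The timestamp values are made up for illustration.
\begin{verbatim}
/* Numeric sketch of interference accounting: an interrupt that
 * lands inside a traced function is timed by its own probe pair
 * and subtracted from the function's raw latency. Timestamps are
 * made up for illustration. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Timestamps (ns) from four probe points on one CPU. */
    uint64_t fn_entry  = 100, fn_exit  = 900;  /* traced function   */
    uint64_t irq_entry = 400, irq_exit = 600;  /* interrupt handler */

    uint64_t raw      = fn_exit - fn_entry;    /* naive latency     */
    uint64_t irq_time = irq_exit - irq_entry;  /* external delay    */

    printf("raw=%lluns irq=%lluns net=%lluns\n",
           (unsigned long long)raw,
           (unsigned long long)irq_time,
           (unsigned long long)(raw - irq_time));
    return 0;
}
\end{verbatim}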
\sys installs a probe handler, described in Figure~\ref{fig:frontend}, on each probe point.
A probe point can be the entry of a function, its exit, or both, depending on the
profile description in Figure~\ref{fig:frontend}.
A probe handler creates a trace log entry consisting of {\tt <function name, process id, cpu id,
rid, timestamp, hardware events>} and records it to a ring buffer, as sketched below. The ring buffer
is a protected region of memory shared between the kernel and user-level applications.
The ring buffer size is 4MB (1024 $\times$ 4KB pages) by default. \sys's
user-level agent on the target system periodically ships trace logs to the back end
over the network to avoid buffer overruns.
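The sketch below shows one possible log record layout and the append path; the exact field layout is an assumption (sized so that 65{,}536 64-byte records fill the default 4MB buffer), and the real buffer is shared with user space rather than a process-local array.
\begin{verbatim}
/* Sketch of the trace record and ring buffer append. The field
 * layout is an assumption sized so that 65,536 64-byte records
 * fill the default 4MB buffer; the real buffer is shared with
 * user space rather than process-local. */
#include <stdint.h>
#include <string.h>

#define RING_SLOTS 65536            /* 65,536 x 64B = 4MB */

struct trace_rec {
    char     func[32];              /* function name            */
    uint32_t pid, cpu;              /* process id, cpu id       */
    uint64_t rid, ts, hw_event;     /* rid, timestamp, hw event */
};

static struct trace_rec ring[RING_SLOTS];
static uint64_t head;               /* monotonically increasing */

static void log_event(const char *fn, uint32_t pid, uint32_t cpu,
                      uint64_t rid, uint64_t ts, uint64_t hw)
{
    struct trace_rec *r = &ring[head++ % RING_SLOTS];
    strncpy(r->func, fn, sizeof(r->func) - 1);
    r->func[sizeof(r->func) - 1] = '\0';
    r->pid = pid; r->cpu = cpu;
    r->rid = rid; r->ts = ts; r->hw_event = hw;
}

int main(void)
{
    log_event("vfs_read", 1234, 0, 42, 100, 7);
    return 0;
}
\end{verbatim}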
\subsection{Back end}
The goal of the back end is to compute {\it per-layer} latency breakdowns from
the raw log records traced by the front end.
The back end aggregates the log records by {\it request id} and then
invokes an analyzer for each {\it request id}.
Along with a profile description, the analyzer takes a layer description, which
specifies the layer types and how to separate layers given function names.
With the layer description, the analyzer computes the {\it per-layer} latency of each
system call request that shares the same {\it rid} in the log records. If a traced function, $f_1$,
calls a function, $f_2$, belonging to a lower layer, the analyzer subtracts the
time taken by $f_2$ when computing the latency of $f_1$ (see the sketch below). The analyzer also checks
whether an off-CPU event happens in the middle of the execution of a traced function.
If so, the analyzer accounts for the latency caused by
the off-CPU event separately when computing the latency of the traced function. If
a hardware event (e.g., CPI) is recorded in a traced function, the analyzer correlates the hardware event with
the traced function and records the hardware event data alongside the computed latency.
Finally, the analyzer stores the processed data in an intermediate format, and
the visualizer draws figures that can be rendered in web browsers.
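The subtraction rule can be illustrated with a compilable sketch that replays entry/exit events for one {\it rid} using a call stack; the record layout and layer ids are assumptions for illustration, not \sys's actual data format.
\begin{verbatim}
/* Compilable sketch of the per-layer subtraction rule: time spent
 * in a callee that belongs to a lower layer is removed from the
 * caller's latency. Record layout and layer ids are illustrative. */
#include <stdio.h>
#include <stdint.h>

struct ev { int layer; int is_entry; uint64_t ts; };

int main(void)
{
    /* One rid: f1 (layer 0) enters, calls f2 (layer 1), returns. */
    struct ev evlog[] = {
        {0, 1, 100}, {1, 1, 200}, {1, 0, 500}, {0, 0, 700},
    };
    long long lat[2] = {0, 0};   /* per-layer latency (ns)      */
    uint64_t  entry_ts[2];       /* entry time per active layer */
    int stack[8], top = -1;      /* call stack of layer ids     */

    for (int i = 0; i < 4; i++) {
        if (evlog[i].is_entry) {
            stack[++top] = evlog[i].layer;
            entry_ts[evlog[i].layer] = evlog[i].ts;
        } else {
            long long d =
                (long long)(evlog[i].ts - entry_ts[evlog[i].layer]);
            lat[evlog[i].layer] += d;
            top--;
            if (top >= 0)   /* charge only the caller's own time */
                lat[stack[top]] -= d;
        }
    }
    /* Expect L0 = (700-100) - 300 = 300 and L1 = 300. */
    printf("L0=%lld L1=%lld\n", lat[0], lat[1]);
    return 0;
}
\end{verbatim}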
\section{Evaluation}
In this section, we explore possible application scenarios of \sys for understanding kernel behaviors at a fine-grained level.
Our prototype leverages the {\tt ebpf} framework supported by standard Linux as its tracing facility~\cite{ebpf}.
The target system has an Intel Xeon E5-2695 v4 CPU, 128GB of DDR4 memory, and a 256GB SATA SSD.
It runs Ubuntu 16.04 with a modified Linux 4.15 kernel and the ext4 file system.
We run various workloads of synchronous read IO operations generated by the FIO benchmark~\cite{fio} on the target system
to evaluate the effectiveness of \sys's {\it per-request} and {\it per-layer} profiling facilities.
We first examine single-threaded sequential and random read access patterns, and then move on to a multi-threaded benchmark study.
\subsection{Read System Calls and Page Cache}
In general, when the Linux kernel receives a read request from a user, it first looks up
the requested data in the in-memory cache, called the \textit{page cache}.
Based on this mechanism, we categorize read system call requests into
two classes: page cache hits and page cache misses.
In the case of a page cache hit, the read request is served rapidly without
traversing the lower kernel layers.
On the other hand, read system calls that experience page cache misses are more time-consuming because
they must go down to the lower kernel layers to access the storage medium and
bring the requested page into memory.
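A plausible classification rule, which we sketch below purely for illustration (the text does not spell out how \sys labels requests), is to mark a request as a miss if any block or device layer record appears for its {\it rid}.
\begin{verbatim}
/* Illustrative hit/miss classifier: a request whose trace contains
 * no block or device layer record never left the page cache. This
 * rule is an assumption for illustration only. */
#include <stdio.h>

enum layer { VFS, MM, CPY, IRQ, BLK, IO };

static int is_cache_miss(const enum layer *trace, int n)
{
    for (int i = 0; i < n; i++)
        if (trace[i] == BLK || trace[i] == IO)
            return 1;             /* reached the block/device path */
    return 0;                     /* served entirely from memory   */
}

int main(void)
{
    enum layer hit[]  = { VFS, MM, CPY };
    enum layer miss[] = { VFS, MM, BLK, IO, CPY };
    printf("hit=%d miss=%d\n",
           is_cache_miss(hit, 3), is_cache_miss(miss, 5));
    return 0;
}
\end{verbatim}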
\begin{figure}[]
\centering
\includegraphics[width=0.5\textwidth]{irq.pdf}
\caption{{\it Per-request} latency (solid) and CPI (dotted) for single-threaded sequential read requests in chronological order.
Only page cache hit cases are shown. Each latency is broken down into four kernel layers: {\tt vfs}, {\tt mm}, {\tt cpy}, and {\tt irq}.}
\label{fig:latencyandCPI}
\end{figure}
\noindent {\bf Tail latency within page cache hits.}
One would expect little latency variation among read requests whose data reside in the page cache.
Interestingly, however, \sys reports that latency still varies widely even among page cache hits.
Figure~\ref{fig:latencyandCPI} shows the {\it per-request} read system call latency (for page cache hits) when a file is read sequentially by a single thread.
The x-axis represents individual requests in chronological order.
For most requests, the measured latency is around 18,000ns, but several noticeable spikes rise above 25,000ns.
To identify the cause of these tail latencies, we additionally analyze the CPI
(cycles per instruction) provided by the \sys front end for each request, depicted by the dotted line in the same figure.
As the latency fluctuates, the CPI follows the same trend.
Using the {\it per-layer} facility of \sys, we dissect the latency into four layers:
virtual file system ({\tt vfs}), memory management ({\tt mm}), page copy ({\tt cpy}), and interrupt request handling ({\tt irq}).
The data visualization output by our tracer shows that the fluctuation does not occur in one particular layer;
instead, a different component is responsible for the slowdown in each request.
The {\tt vfs} layer turns out to be the cause of tails \ding{202} and \ding{205}.
Similarly, the {\tt cpy} layer is the bottleneck in tails \ding{203} and \ding{207}.
The layer breakdown shows that tail \ding{204} additionally contains the {\tt irq} layer.
In fact, the high latency of this particular request was due to a hardware interrupt for IO completion that occurred in the {\tt mm} layer.
The {\tt irq} handling not only adds an extra layer to the request
but also introduces substantial context switch overhead in the interrupted layer.
\sys thus allows in-depth analysis to locate unusual latency spikes even within page cache hits
and to diagnose the root cause of each at both the hardware and software levels.
\begin{figure}[]
\centering
\includegraphics[width=0.4\textwidth]{LayerBreakdownSeparate.pdf}
\caption{{\it Per-layer} breakdown of latency for single-threaded random read requests in chronological order.}
\label{fig:per_layer}
\end{figure}
\noindent {\bf Tail latency within page cache misses.}
We also examine a scenario where read system calls do not benefit from the page cache due to a random access pattern.
Figure~\ref{fig:per_layer} shows the {\it per-layer} breakdown of request latency for single-threaded random reads on a file.
When read requests target random positions in a file, most of them experience page cache misses
and suffer much higher latency because they traverse the deeper layers down to storage to bring data into memory.
A large portion of the time is spent performing disk IO, as depicted by the
thick green layer ({\tt io}) in Figure~\ref{fig:lat_breakdown}.
Figure~\ref{fig:per_layer} shows the layer decomposition of the requests in the spiking region (requests 3560--3600) of Figure~\ref{fig:lat_breakdown}.
By breaking each request down into per-layer units,
we observe that each burst is caused by different layers,
even within a small interval of 40 requests.
The three largest spikes (\textcircled{\raisebox{-0.8pt}{A}}) are due to delays in the disk IO layer,
but the smaller spikes that appear immediately before and after them (\textcircled{\raisebox{-0.8pt}{B}}) are caused by
a proportional slowdown across all six other layers.
Since random read requests have no access locality, one might anticipate no significant latency difference among them.
However, \sys reveals that multiple tail latencies can occur in a batched manner within a small time interval
and that different layers contribute to each tail.
\subsection{Profiling Multi-threaded Behaviors}
\begin{figure}[]
\centering
\includegraphics[width=0.5\textwidth]{FairnessBreakdown.pdf}
\caption{99th-percentile latency within each time interval for threads T1, T2, T3, and T4
simultaneously performing random read operations on a single file.
A total of 90,000 requests is divided into 15 intervals, each containing 6,000
requests.}
\label{fig:FairnessBreakdown}
\end{figure}
In this subsection, we examine the {\it fairness} of random read operations issued by multiple threads.
As each thread makes its read requests, the requests end up interleaved across the underlying kernel layers.
Because it measures latency on a {\it per-request} basis, however, \sys can recover the owner thread of each request.
Figure~\ref{fig:FairnessBreakdown} shows the 99th-percentile latency of each thread over time, grouping every 6,000 requests into one time interval ({\it window}), with
each bar representing an individual thread.
From windows 5 to 13, we observe that T1 and T3 experience much longer tail latency.
In fact, we identify that the burden of IRQ handling was unfairly distributed
across the threads. Certain threads served many more IRQs than the others and had to spend up to
60$\times$ more time on IRQ handling, as shown in Table~\ref{tab:irq_time}.
\begin{table}[]
\resizebox{\columnwidth}{!}{%
\begin{tabular}{c|r|r}
\toprule
Thread & \# of IRQ handled & Time spent on IRQ (ns) \\
\midrule
T1 & 7,276 & 47,849,914 \\
T2 & 128 & 816,828 \\
T3 & 2,953 & 19,398,496 \\
T4 & 121 & 787,582 \\
\bottomrule
\end{tabular}
}
\caption{Number of IRQ events handled by each thread and the total time spent on handling IRQ during execution.}
\label{tab:irq_time}
\end{table}
\subsection{Overhead}
We end this section by evaluating the performance overhead of \sys.
First, we measure the read throughput (bandwidth) for a sequential read access pattern with 8 threads.
In this case, the threads experience no throughput degradation when running with \sys.
Since most of the read operations are served by the page cache, the IO path of most requests is shallow,
so the overhead introduced by the tracer's additional CPU cycles is negligible.
Next, we measure the overhead of \sys for random read accesses by 8 threads.
Table~\ref{tab:overhead} shows the throughput overhead with increasing probe-layer depth.
As the probe depth increases, the number of probe points also increases.
For random accesses, the page cache does not help, so all tracing points across the 8 layers are reached frequently,
resulting in non-negligible overhead.
At the minimum depth ({\tt L1}), the bandwidth degradation is only 3\%, but it grows up to 10.7\% at the maximum depth ({\tt L8}).
This result illustrates the tradeoff between profiling depth and
overhead.
In response, \sys provides a control knob that lets users adjust the profiling granularity.
\begin{table}[]
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l|cccccccc}
\toprule
Probe depth & L1 & L2 & L3 & L4 & L5 & L6 & L7 & L8 \\ \hline
Overhead(\%) & 3 & 3.3 & 6 & 6.1 & 8.3 & 8.4 & 9.5 & 10.7 \\ \hline
\# Probe points & 2 & 6 & 8 & 9 & 12 & 13 & 14 & 15 \\
\bottomrule
\end{tabular}%
}
\caption{Bandwidth degradation with increasing probe depth when tracing
8 threads simultaneously performing random reads over a total of 2GB of data.}
\label{tab:overhead}
\end{table}
\section{Introduction}
Understanding IO performance problems is challenging. The performance of kernel IO stacks
is affected by underlying hardware behaviors such as CPU cache locality~\cite{densefs,flexsc,Yang:2012}.
These hardware behaviors add unexpected delays to kernel IO executions, causing
high performance variation and tail latency. In addition, kernel IO executions are often
interrupted by internal kernel activities such as interrupts, exceptions, or
scheduler preemptions, making it hard for developers to pinpoint the root cause
of an unexpected performance anomaly.
What makes these cases harder is that developers require fine-grained profiling information
from the complex kernel IO stack. The modern IO stack is built from a set of abstraction layers,
each of which has different performance characteristics.
Developers want to profile the latency breakdown of each layer to identify performance bottlenecks.
Furthermore, they would like to know the latency distribution of each IO request
to understand which request causes a tail latency and which layer causes its slowdown
compared to other requests.
In response to these requirements, many research and
practical tools strive to provide useful profiling information.
However, they provide latency breakdowns of the entire kernel IO stack or the block layer but
do not give detailed latency distributions of individual requests~\cite{Joukov:2006, Traeger:2008,
Shin:2014, Xu:2015}, or they require massive kernel changes to collect trace data~\cite{Joukov:2006, Doray:2017, Ruan:2004}.
\begin{figure}
\center
\includegraphics[width=0.45\textwidth]
{LayerBreakdown.pdf}
\caption{Read latency breakdown of each layer. The x-axis represents read requests.}
\label{fig:lat_breakdown}
\end{figure}
In this paper, we introduce \sys ({\tt per-\underline{Re}quest per-\underline{Lay}er} tracer),
a profiling tool for analyzing kernel IO performance in detail.
\sys provides the latency distribution of each kernel request along with
fine-grained information such as a {\it per-abstraction-layer} latency breakdown.
To that end, \sys maps each kernel request (e.g., a system call)
to a {\it request ID} and tracks the {\it request ID} across the abstraction layers.
By tagging each IO request with a {\it request ID} that propagates across IO layers, \sys can
report a latency breakdown of each layer for an individual IO request.
Figure~\ref{fig:lat_breakdown} shows the latency breakdown across IO layers profiled by \sys.
Peaks indicate tail latency, and the figure shows where the latency peaks occur among
the seven layers. In Linux, a background kernel thread performs device IO asynchronously;
\sys traces the {\it off-CPU event} made by an IO request using the {\it request ID}.
To provide a precise latency breakdown, \sys analyzes the unexpected
delays caused by internal kernel activities and accounts for them separately.
\sys also monitors hardware performance behavior (e.g., IPC) along with the latency profile of each layer,
supporting reasoning about tail latency.
\sys adopts a split architecture consisting of a front end and a back end. The front end of \sys
runs on the target system being profiled, collecting data with minimal runtime overhead.
The back end of \sys, running as separate processes, processes the data collected by the front end and visualizes the
results as graphs. The front end of \sys leverages a dynamic instrumentation
framework~\cite{lttng,ebpf,dtrace,stap,ftrace}, supported in most modern operating systems, to
minimize profiling overheads ({\it low overhead}) by tracing only the execution points of interest.
With dynamic instrumentation, \sys can
profile any kernel subsystem ({\it versatility}) and easily adapt to kernel code changes ({\it portability}).
\sys provides fine-grained performance profiling with 3--10\% runtime overhead for random reads and 0.1\% for sequential reads.
This paper makes the following contributions:
\begin{compactitem}
\item By tracing each IO request with a {\it request ID}, \sys can trace per-layer latency along the execution path
of an individual IO request.
\item By monitoring internal kernel activities, \sys can separately account for the interference caused by
kernel activities.
\item By combining software and hardware performance information, \sys can precisely analyze Linux
IO performance.
\end{compactitem}
The current scope of this work focuses on the read path of IO. Adding support for
other IO system calls is future work.